W. Q. Meeker, L. A. Escobar, and J. K. Freels
07 May 2017
Basic ideas for planning a life test or field tracking study
The use of simulation to indicate how the results of a life test might look, to see how the data might be analyzed, and to get an idea of the expected precision for a proposed test plan
The use of large-sample approximate methods to assess the precision of the results obtained from a future reliability study
How to determine an approximate sample size that provides a specified precision
How to assess the trade-offs involving sample size and study length
The use of simulation to check and "calibrate" large-sample approximate methods
Methods for assessing sensitivity of test planning conclusions to unknown inputs that must be provided
Example test goals:
The probability of failure after 1000 hours is less than \(0.02\)
The expected fraction of units returned under warranty within \(1\) year is less than \(5\%\)
Life testing can be expensive and is nearly always time-constrained
Simulation is an important tool for checking that a proposed test is likely to produce results that support the desired conclusion, and for addressing other test-planning concerns
It's intuitively understood that
Increasing the number of test samples will generate more information
Increasing the amount of test time will generate more information
More information allows for more precise estimates
But, it's also known that
Generating more information always requires more investment
At some point, the investment required for greater precision becomes disproportionate
Often, information exists about the system to be tested or the data to be gathered
In the text, these prior data are referred to as "planning values" and are denoted by a \(\Box\), e.g. \(\mu^{\Box},\sigma^{\Box}\)
Simulation uses "planning values" to ensure that a proposed life-test can achieve results to support the desired conclusion.
\[ \begin{aligned} t_{0.12}^{\Box}&=500\;\;\text{hours}\\\\ t_{0.20}^{\Box}&=1000\;\text{hours}\\\\ p_{c}^{\Box}&=0.2 \end{aligned} \]
For the initial evaluation, the test planners assume that the insulation lifetimes can be modeled with either a Weibull or lognormal distribution
Using the given planning values and appropriate probability paper, the test planners can graphically estimate the distribution parameters (Figure 10.1)
Figure 10.1 shows the resulting probability plots for the Weibull and lognormal distributions.
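As an aside, this graphical step can be replicated numerically. The sketch below (our own illustration; `planning_values` and `qsev` are hypothetical helper names) solves the two quantile equations \(\log(t_p)=\mu+\Phi^{-1}(p)\,\sigma\) for the planning parameters:

```r
# Sketch: recover location-scale planning values (mu, sigma) on the log
# scale from the two planning quantiles t_0.12 = 500 and t_0.20 = 1000.
planning_values <- function(p1, t1, p2, t2, qf) {
  sigma <- (log(t2) - log(t1)) / (qf(p2) - qf(p1))
  mu    <- log(t2) - qf(p2) * sigma
  c(mu = mu, sigma = sigma)
}

qsev <- function(p) log(-log(1 - p))  # smallest-extreme-value quantiles (Weibull)

planning_values(0.12, 500, 0.20, 1000, qsev)   # mu ~ 8.774, sigma ~ 1.244
planning_values(0.12, 500, 0.20, 1000, qnorm)  # lognormal planning values
```

The Weibull call reproduces the \(\mu^{\Box}=8.774\) and \(\sigma^{\Box}=1.244\) used later in Example 10.1.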
Simulation is a powerful tool for estimating test plan properties, but the results are often open to interpretation
Difficult to distinguish between distributions
Hard to gauge estimation precision from simulation plots
In contrast to simulation, large sample approximations have some advantages
Can directly approximate estimation precision as a function of sample size
Estimate the required sample size for a desired precision
Variance factors allow trade-offs in test planning decisions
\[ \mathbf{\widehat{\underline{\theta}}}_{_{MLE}}\sim MVN(\mathbf{\underline{\theta}}, \Sigma_{\mathbf{\widehat{\underline{\theta}}}}) \]
and
\[ \Sigma_{\mathbf{\widehat{\underline{\theta}}}}=\mathcal{I}^{-1}_{\underline{\theta}}=E\left[-\frac{\partial^2\mathcal{L}(\underline{\theta}|\underline{t})}{\partial\underline{\theta}\partial\underline{\theta}^T}\right]^{-1}=\left[\sum_{i=1}^n E\left(-\frac{\partial^2\mathcal{L}_i(\underline{\theta}|t_i)}{\partial\underline{\theta}\partial\underline{\theta}^T}\right)\right]^{-1} \]
In many cases we're interested in performing a test to gain information about some function of \(\mathbf{\underline{\widehat{\theta}}}\)
For large samples \(g(\mathbf{\underline{\widehat{\theta}}})\sim NOR\left(g(\mathbf{\underline{\theta}}),\widehat{se}_{g(\mathbf{\underline{\widehat{\theta}}})}\right)\)
where \(\widehat{se}_{g(\mathbf{\underline{\widehat{\theta}}})}=\sqrt{\widehat{Var}\left[\widehat{g}(\mathbf{\underline{\widehat{\theta}}})\right]}\)
\[ \widehat{Var}\left[\widehat{g}(\mathbf{\underline{\widehat{\theta}}})\right]=\left[\frac{\partial g(\mathbf{\underline{\theta}})}{\partial \mathbf{\underline{\theta}}}\right]^T\Sigma_{\mathbf{\underline{\widehat{\theta}}}}\left[\frac{\partial g(\mathbf{\underline{\theta}})}{\partial \mathbf{\underline{\theta}}}\right] \]
\[ \log[g(\mathbf{\underline{\widehat{\theta}}})]\sim NOR\left(\log[g(\mathbf{\underline{\theta}})],\widehat{se}_{\log[\widehat{g}(\mathbf{\underline{\theta}})]}\right) \]
and
\[ \widehat{Var}\left[\log(\widehat{g}(\mathbf{\underline{\theta}}))\right]=\left(\frac{1}{g(\mathbf{\underline{\theta}})}\right)^2\widehat{Var}[\widehat{g}(\mathbf{\underline{\theta}})] \]
This is the same procedure presented in previous chapters for strictly positive quantities, but generalized for vector-valued functions
The approximate standard errors for \(\widehat{g}(\mathbf{\underline{\theta}})\) and \(\log[\widehat{g}(\mathbf{\underline{\theta}})]\) may be represented as
\[ \widehat{se}_{\widehat{g}}=\frac{1}{\sqrt{n}}\sqrt{V_{\widehat{g}}}\;\;\text{and}\;\;\widehat{se}_{\log[\widehat{g}]}=\frac{1}{\sqrt{n}}\sqrt{V_{\log[\widehat{g}]}} \]
where the variance factors are
\[ V_{\widehat{g}}=n\widehat{Var}\left[\widehat{g}(\mathbf{\underline{\theta}})\right]\;\;\text{and}\;\;V_{\log[\widehat{g}]}=n\widehat{Var}\left[\log[\widehat{g}(\mathbf{\underline{\theta}})]\right] \]
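As a concrete sketch of this machinery (our own illustration; the function name and the numerical values are hypothetical), the delta-method standard error can be computed in R as:

```r
# Sketch: delta-method standard error for a scalar function g(theta),
# given its gradient and an estimated covariance matrix for theta-hat.
delta_method_se <- function(grad_g, Sigma) {
  # grad_g: gradient of g(theta) evaluated at theta-hat
  # Sigma : estimated variance-covariance matrix of theta-hat
  sqrt(as.numeric(t(grad_g) %*% Sigma %*% grad_g))
}

# Hypothetical example: g(mu, sigma) = mu + z_p * sigma (a log-quantile)
zp    <- qnorm(0.10)                   # 0.10 quantile, standard normal
grad  <- c(1, zp)                      # (dg/dmu, dg/dsigma)
Sigma <- matrix(c(0.5, 0.1,
                  0.1, 0.3), nrow = 2) # made-up covariance matrix
delta_method_se(grad, Sigma)
```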
If the goal for a test event is to make conclusions regarding the value of a function of the parameters \(g(\mathbf{\underline{\theta}})\in \mathbb{R}\)
An approximate \(100(1-\alpha)\%\) CI for \(g(\mathbf{\underline{\theta}})\) based on the large-sample approximation would be expressed as Equation 10.3
\[ \begin{aligned} \left[\underline{g},\overline{g}\right]&=\widehat{g}\pm z_{(1-\alpha/2)}(1/\sqrt{n})\sqrt{\widehat{V}_{\widehat{g}}}\\\\ &=\widehat{g}\pm D \end{aligned} \]
where \(D\) denotes the half-width (precision) of the confidence interval
Rearranging Equation 10.3 to solve for \(n\) results in Equation 10.4
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\widehat{g}}}{D^2_T} \]
where \(D_T\) is the target half-width and \(V^{\Box}_{\widehat{g}}\) is the variance factor computed from the planning values
The purpose of an upcoming test is to create a \(95\%\) CI for the mean life of light-bulbs
Engineers provide the following planning information
\(\widehat{\mu}_{_{MLE}}=\bar{t}\)
\(Var[\bar{t}]=\sigma^2/n \rightarrow V_{\widehat{\mu}}=nVar[\bar{t}]=\sigma^2\)
\(V^{\Box}_{\widehat{\mu}}=(\sigma^{\Box})^2=200^2\)
Target half-width: \(D_T=30\) hours
Substituting these values into Equation 10.4 gives an estimate of the required sample size
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\widehat{\mu}}}{D_T^2}=\frac{(1.96)^2(200)^2}{30^2}\approx 171 \mbox{ samples} \]
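A quick R check of this calculation (qnorm(0.975) supplies \(z_{0.975}\approx 1.96\)):

```r
# Equation 10.4 applied to the light-bulb example
sigma_plan <- 200   # planning value sigma-box, hours
D_T        <- 30    # target CI half-width, hours
n <- qnorm(0.975)^2 * sigma_plan^2 / D_T^2
ceiling(n)          # ~171 samples
```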
If the goal for a test event is to make conclusions regarding the value of a function of the parameters \(g(\mathbf{\underline{\theta}})\in \mathbb{R}^+\)
An approximate \(100(1-\alpha)\%\) CI for \(\log[g(\mathbf{\underline{\theta}})]\) based on the large-sample approximation would be expressed as Equation 10.5
\[ \begin{aligned} \left[\underline{\log[g]},\overline{\log[g]}\right]&=\log[\widehat{g}]\pm z_{(1-\alpha/2)}(1/\sqrt{n})\sqrt{\widehat{V}_{\log[\widehat{g}]}}\\\\ &=\log[\widehat{g}]\pm \log[R] \end{aligned} \]
where \(R\) is the precision factor; exponentiating gives the interval \([\widehat{g}/R,\;R\,\widehat{g}]\) for \(g\)
Rearranging Equation 10.5 to solve for \(n\) results in Equation 10.6
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[\widehat{g}]}}{(\log[R_T])^2} \]
where \(R_T>1\) is the target precision factor
The purpose of a test is to compute a \(95\%\) CI for the mean life of a new electrical insulation
Engineers provide the following planning information
Assume that the lifetime of the insulation \(T\sim EXP(\theta)\)
Since \(\theta \in (0, \infty)\), Equation 10.6 is used to compute \(n\)
\(\theta^{\Box}=1000\;\text{hours}\)
\(t_c=500\;\text{hours}\) (Type I censoring time)
\(R_T=1.5\) (precision factors are unitless)
From Section 7.6.3
\(\widehat{\theta}_{_{MLE}}=\frac{TTT}{r}\)
\(V_{\widehat{\theta}}=n\widehat{Var}[\widehat{\theta}]=\frac{n}{E\left[-\frac{\partial^2 \mathcal{L}(\theta)}{\partial\theta^2}\right]}=\frac{\theta^2}{1-\exp\left(-\frac{t_c}{\theta}\right)}\)
\(V^{\Box}_{\log[\widehat{\theta}]}=\frac{V^{\Box}_{\widehat{\theta}}}{(\theta^{\Box})^2}=\frac{1}{1-\exp\left(-\frac{t_c}{\theta^{\Box}}\right)}=\frac{1}{1-\exp\left(-\frac{500}{1000}\right)}=2.5415\)
Substituting these values into Equation 10.6 gives an estimate of the required sample size
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[\widehat{\theta}]}}{(\log[R_T])^2}=\frac{(1.96)^2(2.5415)}{(\log[1.5])^2}\approx 60 \mbox{ samples} \]
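The same calculation in R:

```r
# Variance factor for log(theta-hat) under Type I censoring, then Eq. 10.6
theta_plan <- 1000  # planning value theta-box, hours
t_c        <- 500   # censoring time, hours
R_T        <- 1.5   # target precision factor
V_log <- 1 / (1 - exp(-t_c / theta_plan))   # ~2.5415
n <- qnorm(0.975)^2 * V_log / log(R_T)^2
ceiling(n)                                  # ~60 samples
```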
We now derive the expected information used above
\[E\left[-\frac{\partial^2\mathscr{L}(\theta)}{\partial\theta^2}\right] = \frac{n\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right)}{\theta^2}\]
Starting from the likelihood
\[ L(\theta)=\prod_{i=1}^n f(t_i|\theta)^{\delta_i}\times S(t_i|\theta)^{1-\delta_i} \]
where
\[ \delta_i= \begin{cases} 1 \mbox{ if $t_i$ is a failure time}\\ 0 \mbox{ if $t_i$ is a right-censored observation} \end{cases} \]
Further, Example 10.4 states that the assumed underlying distribution is \(EXP(\theta)\), therefore we can substitute
\[ f(t|\theta)=\frac{1}{\theta}\exp\left[-\frac{t}{\theta}\right]\;\;\text{and}\;\;S(t|\theta)=\exp\left[-\frac{t}{\theta}\right] \]
Including these substitutions results in this updated likelihood function
\[ L(\theta)=\prod_{i=1}^n \left(\frac{1}{\theta}\exp\left[-\frac{t_i}{\theta}\right]\right)^{\delta_i}\times\left(\exp\left[-\frac{t_i}{\theta}\right]\right)^{1-\delta_i} \]
Taking the log gives the log-likelihood \(\mathscr{L}(\theta)=\log L(\theta)\)
\[ \begin{aligned} \mathscr{L}(\theta)=&\sum_{i=1}^n {\delta_i}\left(\log\left[\frac{1}{\theta}\right]-\frac{t_i}{\theta}\right) + (1-\delta_i)\left(-\frac{t_i}{\theta}\right)\\\\ =&\sum_{i=1}^n -\delta_i\log[\theta]-\delta_i\frac{t_i}{\theta} -\frac{t_i}{\theta} +\delta_i\frac{t_i}{\theta}\\\\ =&\sum_{i=1}^n -\delta_i\log[\theta]-\frac{t_i}{\theta}\\\\ =& -\log[\theta]\sum_{i=1}^n \delta_i- \frac{1}{\theta}\sum_{i=1}^nt_i\\\\ =& -r\log[\theta] - \frac{TTT}{\theta} \end{aligned} \]
Observing the last expression
Recall that \(\delta_i = 1\) if the event at time \(t_i\) is a failure and \(\delta_i = 0\) otherwise
Thus, \(\sum_{i=1}^n \delta_i\) is just the number of failures \(r\)
Further, \(\sum_{i=1}^n t_i\) can easily be seen as the total time on test \(TTT\)
Our goal is to find \(E\left[-\frac{\partial^2\mathscr{L}}{\partial\theta^2}\right]\), therefore our next steps are to:
Find the first and second derivatives of \(\mathscr{L}\) wrt \(\theta\)
Find the expected value for the negative of the resulting expression
The first and second derivatives are expressed as
\[ \begin{aligned} \mathscr{L} &=-r\log[\theta] - \frac{TTT}{\theta}\\\\ \frac{\partial\mathscr{L}}{\partial\theta} &=-\frac{r}{\theta}+\frac{TTT}{\theta^2}\\\\ \frac{\partial^2\mathscr{L}}{\partial\theta^2} &= \frac{r}{\theta^2}-\frac{2\,TTT}{\theta^3}\\\\ -\frac{\partial^2\mathscr{L}}{\partial\theta^2} &= -\frac{r}{\theta^2}+\frac{2\,TTT}{\theta^3} \end{aligned} \]
Finally, we want to find the expected value of this expression
Note that the expectation operator is linear
\[ \begin{aligned} E\left[-\frac{\partial^2\mathscr{L}}{\partial\theta^2}\right] =\;& E\left[-\frac{r}{\theta^2}+\frac{2\,TTT}{\theta^3}\right]\\\\ =\;& E\left[-\frac{r}{\theta^2}\right]+E\left[\frac{2\,TTT}{\theta^3}\right]\\\\ =\;& -\frac{E[r]}{\theta^2}+\frac{2\,E[TTT]}{\theta^3} \end{aligned} \]
First, let's look at \(E[r]\), the expected number of failures
\[ \begin{aligned} E[r] &= n \times F(t_c)\\\\ &= n\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right) \end{aligned} \]
Now, let's look at \(E[TTT]\), the expected total time on test
We know that \(TTT = \sum_{i=1}^r t_i + (n-r)t_c = \sum_{i=1}^n \min(t_i,t_c)\); applying the expectation operator gives
\[ E[TTT] = nE[\min(T,t_c)] = n\int_0^{t_c}S(t)\,dt = n\theta\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right) \]
Combining the two expectations recovers the expected information stated earlier
\[ E\left[-\frac{\partial^2\mathscr{L}}{\partial\theta^2}\right] = -\frac{n\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right)}{\theta^2}+\frac{2\,n\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right)}{\theta^2} = \frac{n\left(1-\exp\left[-\frac{t_c}{\theta}\right]\right)}{\theta^2} \]
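These expectations are easy to verify by simulation; the sketch below (our own, using the planning values \(\theta=1000\), \(t_c=500\) and a hypothetical \(n=100\)) compares Monte Carlo averages to the closed forms:

```r
# Monte Carlo check of E[r] and E[TTT] for exponential lifetimes with
# Type I censoring at t_c
set.seed(1)
theta <- 1000; t_c <- 500; n <- 100; B <- 10000
r_sim <- ttt_sim <- numeric(B)
for (b in 1:B) {
  t <- pmin(rexp(n, rate = 1 / theta), t_c)   # censor at t_c
  r_sim[b]   <- sum(t < t_c)                  # observed failures
  ttt_sim[b] <- sum(t)                        # total time on test
}
c(sim = mean(r_sim),   exact = n * (1 - exp(-t_c / theta)))          # E[r]
c(sim = mean(ttt_sim), exact = n * theta * (1 - exp(-t_c / theta)))  # E[TTT]
```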
Apply the methods previously discussed to compute sample sizes under the special case when
The lifetimes follow a location-scale distribution \(T\sim\Phi_{_{(\cdot)}}(t|\mu,\sigma)\)
The data are singly right censored (Type I) at time \(t_c\)
For these data, the large-sample variance-covariance matrix can be computed as
\[ \Sigma_{(\widehat{\mu},\widehat{\sigma})}=\left[\begin{array}{cc} \widehat{Var}[\widehat{\mu}] & \widehat{Cov}[\widehat{\mu},\widehat{\sigma}]\\ \widehat{Cov}[\widehat{\mu},\widehat{\sigma}] & \widehat{Var}[\widehat{\sigma}]\end{array}\right]=\frac{1}{n}\left[\begin{array}{cc} V_{\widehat{\mu}} & V_{(\widehat{\mu},\widehat{\sigma})}\\ V_{(\widehat{\mu},\widehat{\sigma})} & V_{\widehat{\sigma}} \end{array}\right]=\mathcal{I}^{-1}_{(\widehat{\mu},\widehat{\sigma})} \]
Table C.20 lists the large-sample approximate values for the variance-covariance matrix elements assuming a normal distribution
The values in the table are defined with respect to the standardized censoring time (the number of standard deviations \(t_c\) is from \(\mu\))
\[ \zeta_c=\frac{t_c-\mu}{\sigma}\quad\text{(location-scale distributions)} \]
\[ \zeta_c=\frac{\log[t_c]-\mu}{\sigma}\quad\text{(log-location-scale distributions)} \]
First, we use existing prior information or "expert" knowledge to compute the planning values \(\mu^{\Box}, \sigma^{\Box}\)
The censoring time, \(t_c\), will be determined before testing begins
Thus, we can compute \(\zeta_c^{\Box}\) - assuming a lognormal distribution
\[ \zeta_c^{\Box}=\frac{\log[t_c]-\mu^{\Box}}{\sigma^{\Box}} \]
\(100\Phi(\zeta_c)\equiv\) the population percentage failing by time \(t_c\) (given \(\mu^{\Box}\) and \(\sigma^{\Box}\))
\((1/\sigma^2)V_{\widehat{\mu}}\equiv\) the scaled large-sample variance factor for \(\mu\)
\((1/\sigma^2)V_{\widehat{\sigma}}\equiv\) the scaled large-sample variance factor for \(\sigma\)
\((1/\sigma^2)V_{(\widehat{\mu},\widehat{\sigma})}\equiv\) the scaled large-sample covariance factor for \(\mu\) and \(\sigma\)
\(\rho(\widehat{\mu},\widehat{\sigma})\equiv\) the large-sample correlation between \(\widehat{\mu}\) and \(\widehat{\sigma}\)
\(f_{11}, f_{22}, f_{12}\equiv\) the elements of the scaled Fisher information matrix for a single observation from the corresponding location-scale distribution
In Example 10.1 we used the available planning information to find \(\mu^{\Box}=8.774\) and \(\sigma^{\Box}=1.244\)
Now, we're interested in computing a \(95\%\) CI for \(\beta=1/\sigma\) such that the endpoints are \(50\%\) away from \(\widehat{\beta}_{_{MLE}}\)
Since \(\beta\) is a strictly positive quantity, our confidence interval precision factor is expressed as
\[ R_{_{T}}=\frac{\overline{g}}{\widehat{g}}=\frac{\widehat{g}}{\underline{g}}=\sqrt{\frac{\overline{g}}{\underline{g}}}\;;\quad\text{with}\;\overline{g}=1.5\,\widehat{g}\;\text{and}\;\underline{g}=\widehat{g}/1.5,\quad R_{_{T}}=\sqrt{\frac{1.5\,\widehat{g}}{\widehat{g}/1.5}}=\sqrt{(1.5)^2}=1.5 \]
Under these assumptions, what is the required sample size?
Since \(\beta=1/\sigma\) is strictly positive, Equation 10.6 should be used
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[\widehat{\beta}]}}{(\log[R_T])^2}=\frac{(1.96)^2 V^{\Box}_{\log[\widehat{\beta}]}}{(\log[1.5])^2} \]
\[ V^{\Box}_{\log[\widehat{\beta}]}=V^{\Box}_{\log[\widehat{\sigma}]}=\frac{1}{(\sigma^{\Box})^2}V^{\Box}_{\widehat{\sigma}} \]
Note that this example assumes the test data are best modeled using a Weibull distribution; therefore Table C.20 should not be used.
The SMRD package provides the table.lines and lsinf functions to compute these quantities for other location-scale distributions
The lsinf function has three required arguments
z - the value of \(\zeta^{\Box}_c\)
censor.type - the type of censoring
distribution - the assumed underlying distribution
For this example, the value of \(\zeta^{\Box}_c\) is
\[ \zeta^{\Box}_c = \frac{\log[t_c] - \mu^{\Box}}{\sigma^{\Box}} =\frac{\log[1000]-8.774}{1.244} = -1.5 \]
Calling lsinf, assuming a Weibull distribution with right-censored observations and \(\zeta^{\Box}_c=-1.5\), gives

```
$f11
[1] 0.1999893

$f12
[1] -0.3112703

$f22
[1] 0.6954854

$matrix
           f_i1       f_i2
f_1j  0.1999893 -0.3112703
f_2j -0.3112703  0.6954854
```
However, note that lsinf returns the scaled Fisher information matrix, while we want \(V_{\log[\widehat{\sigma}]} =\frac{1}{\sigma^2}V_{\widehat{\sigma}}\)
We can easily get the desired result by transforming the matrix returned by lsinf
To find \(V_{\widehat{\sigma}}\), note that
\[ \begin{aligned} \frac{\sigma^2}{n}\mathcal{I}_{(\mu,\sigma)}&=\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]\\\\ \mathcal{I}_{(\mu,\sigma)}&=\frac{n}{\sigma^2}\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]\\\\ \mathcal{I}^{-1}_{(\mu,\sigma)}&=\frac{\sigma^2}{n}\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]^{-1} \end{aligned} \]
\[ \begin{aligned} \mathcal{I}^{-1}_{(\widehat{\mu},\widehat{\sigma})}=\frac{1}{n}\left[\begin{array}{cc}V_{\widehat{\mu}}&V_{\widehat{\mu},\widehat{\sigma}}\\V_{\widehat{\mu},\widehat{\sigma}}&V_{\widehat{\sigma}}\end{array}\right]&=\frac{\sigma^2}{n}\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]^{-1}\\\\ \frac{1}{\sigma^2}\left[\begin{array}{cc}V_{\widehat{\mu}}&V_{\widehat{\mu},\widehat{\sigma}}\\V_{\widehat{\mu},\widehat{\sigma}}&V_{\widehat{\sigma}}\end{array}\right]&=\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]^{-1} \end{aligned} \]
Inverting the matrix returned by lsinf gives

```
          f_1j     f_2j
f_i1 16.480523 7.375995
f_i2  7.375995 4.739033
```
We then find \(1/(\sigma^{\Box})^2 V^{\Box}_{\widehat{\sigma}}=\) f_inv[2,2] \(\approx 4.74\)
And can finally compute the number of samples that should be tested
\[ n=\frac{(1.96)^2(4.74)}{[\log(1.5)]^2}\approx 111\;\text{samples} \]
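The full chain of this calculation in R, starting from the lsinf matrix shown above:

```r
# Invert the scaled Fisher information matrix, pull off the sigma variance
# factor, and apply Equation 10.6
f_mat <- matrix(c( 0.1999893, -0.3112703,
                  -0.3112703,  0.6954854), nrow = 2)
f_inv <- solve(f_mat)        # scaled variance-covariance factors
V_log_sigma <- f_inv[2, 2]   # (1/sigma^2) V_sigma ~ 4.74
n <- qnorm(0.975)^2 * V_log_sigma / log(1.5)^2
ceiling(n)                   # ~111 samples
```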
Related to lsinf is the function table.lines, which returns the columns of Table C.20 for a given value of \(\zeta_c\). For the (log)normal distribution with \(\zeta_c=-1.5\) it returns

```
$z
[1] -1.5

$phib
[1] 0.0668072

$f11
[1] 0.2790593

$f22
[1] 0.805458

$f12
[1] -0.4478958

$v11
[1] 33.33856

$v22
[1] 11.55049

$v12
[1] 18.53877

$rho
[1] 0.944729

$vmugsigma
[1] 3.583468

$vsigmagmu
[1] 1.24153
```
Repeating the call for the Weibull (smallest extreme value) distribution gives

```
$z
[1] -1.5

$phib
[1] 0.1999893

$f11
[1] 0.1999893

$f22
[1] 0.6954854

$f12
[1] -0.3112703

$v11
[1] 16.48052

$v22
[1] 4.739033

$v12
[1] 7.375995

$rho
[1] 0.8346229

$vmugsigma
[1] 5.000268

$vsigmagmu
[1] 1.437845
```
\[ \begin{aligned} V_{\widehat{g}}&=\left(\frac{\partial g}{\partial\mu}\right)^2 V_{\widehat{\mu}}+\left(\frac{\partial g} {\partial\sigma}\right)^2 V_{\widehat{\sigma}}+2\left(\frac{\partial g}{\partial\mu}\right)\left(\frac{\partial g}{\partial\sigma}\right)V_{(\widehat{\mu},\widehat{\sigma})}\\\\ V_{\log[\widehat{g}]}&=\left(\frac{1}{g}\right)^2 V_{\widehat{g}}, \;\;\;g>0\\\\ V_{\exp[\widehat{g}]}&=\left(\exp[g]\right)^2V_{\widehat{g}} \end{aligned} \]
where these variance factors depend on the quantity of interest, the assumed distribution, and the standardized censoring time \(\zeta_c\)
For example, the log of the \(p\) quantile of a log-location-scale distribution is
\[ \log(t_p)=\mu+\Phi^{-1}_{(\cdot)}(p)\,\sigma \]
and its variance factor is
\[ V_{\log[t_p]}=V_{\widehat{\mu}}+\left[\Phi^{-1}_{(\cdot)}(p)\right]^2 V_{\widehat{\sigma}}+2\Phi^{-1}_{(\cdot)}(p) V_{(\widehat{\mu},\widehat{\sigma})} \]
so the required sample size is
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[t_p]}}{(\log[R_T])^2} \]
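As an illustrative sketch (the choice \(p=0.2\) is ours, not the text's), the Weibull variance factors from the table.lines output above give:

```r
# Quantile variance factor and sample size, Weibull factors for zeta_c = -1.5
v11 <- 16.48052; v22 <- 4.739033; v12 <- 7.375995
zp  <- log(-log(1 - 0.2))                  # SEV quantile, ~ -1.5
V_log_tp <- v11 + zp^2 * v22 + 2 * zp * v12
n <- qnorm(0.975)^2 * V_log_tp / log(1.5)^2
c(V_log_tp = V_log_tp, n = ceiling(n))     # V ~ 5.02, n ~ 118 samples
```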
Studies have been performed to determine the value of the quantile variance factor as a function of the quantile of interest \(p\) and the probability of failure by the censoring time \(p_c\)
For log-location-scale distributions, the hazard function at time \(t_e\) is
\[ h(t_e|\mu,\sigma)=\frac{\phi(\zeta_e)}{t_e\sigma[1-\Phi(\zeta_e)]}\;,\quad \zeta_e=\frac{\log[t_e]-\mu}{\sigma} \]
After taking derivatives, we have
\[ V_{\log[\widehat{h}]}=\frac{1}{h^2}V_{\widehat{h}}=\frac{1}{h^2}\left[\left(\frac{\partial h}{\partial\mu} \right)^2 V_{\widehat{\mu}}+\left(\frac{\partial h}{\partial\sigma}\right)^2 V_{\widehat{\sigma}} +2\left(\frac{\partial h}{\partial\mu}\right)\left(\frac{\partial h}{\partial\sigma}\right) V_{(\widehat{\mu},\widehat{\sigma})}\right] \]
and
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[\widehat{h}]}}{(\log[R_T])^2} \]
Recall the electrical insulation test discussed in Examples 10.1, 10.5, and 10.7
Suppose that we now wish to plan a life test to compute a \(95\%\) CI for \(h(1000)\) wherein the endpoints are approximately \(50\%\) away from \(\widehat{h}_{_{MLE}}(1000)\)
In Example 10.1 we noted
Since \(t_e=t_c=1000\;\text{hours}\rightarrow p_e=p^{\Box}_c=0.2\)
Observing where the \(p_e=0.2\) and \(p^{\Box}_c=0.2\) lines intersect in Figure 10.9, we see that
\[ V_{\log[\widehat{h}(1000)]}\approx 8.2 \]
and
\[ n=\frac{z^2_{(1-\alpha/2)}V^{\Box}_{\log[\widehat{h}(1000)]}}{(\log[R_T])^2}=\frac{(1.96)^2(8.2)}{(\log[1.5])^2}\approx 191 \]
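A one-line check of the arithmetic:

```r
# n for the hazard CI, using V ~ 8.2 read from Figure 10.9
qnorm(0.975)^2 * 8.2 / log(1.5)^2   # ~191.6, matching the ~191 samples above
```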
It's often necessary to plan tests to demonstrate the reliability performance of a product
Objective: Given a performance specification and a desired confidence level, specify a test plan (e.g., the number of units and the test length) that demonstrates, with \(100(1-\alpha)\%\) confidence, that the measure of interest meets the specification
Example: For \(t_e=8760\;\text{hours}\) (1 year), we want to demonstrate with \(100(1-\alpha)\%\) confidence that
\[ \underline{t}_{0.01}>t_e\iff \overline{F}(t_e)<0.01 \]
Often a reliability test will conclude without observing any failures
What does this mean?
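One common answer is the classical zero-failure ("success run") demonstration. A minimal sketch, under a simple binomial model (our addition, using the \(\overline{F}(t_e)<0.01\), \(95\%\) confidence goal above): the demonstration succeeds only if the sample is large enough that observing no failures would be improbable were \(F(t_e)\) actually \(0.01\).

```r
# Zero-failure demonstration: claim F(t_e) < p_spec with 100(1 - alpha)%
# confidence iff (1 - p_spec)^n <= alpha when all n units survive to t_e
p_spec <- 0.01
alpha  <- 0.05
ceiling(log(alpha) / log(1 - p_spec))   # ~299 units, all surviving to t_e
```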
The \(f_{ij}\) elements in Appendix C.20 are the elements of the scaled Fisher information matrix, that is
\[ \left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]=\frac{\sigma^2}{n}\mathcal{I}_{(\mu,\sigma)} \]
The variance terms in Table C.20 are then found by inverting this matrix
\[ \frac{1}{\sigma^2}\left[\begin{array}{cc}V_{\widehat{\mu}}&V_{(\widehat{\mu},\widehat{\sigma})}\\V_{(\widehat{\mu},\widehat{\sigma})}&V_{\widehat{\sigma}}\end{array}\right]=\left[\begin{array}{cc}f_{11}&f_{12}\\f_{12}&f_{22}\end{array}\right]^{-1} \]
The asymptotic correlation is then computed as
\[ \rho(\widehat{\mu},\widehat{\sigma}) =\frac{V_{(\widehat{\mu},\widehat{\sigma})}}{\sqrt{V_{\widehat{\mu}}V_{\widehat{\sigma}}}} =\frac{0.07634}{\sqrt{1.04168\times 0.64344}}=0.09325 \]
Similarly, the scaled asymptotic variances when the other parameter is known are
\[ \begin{aligned} \frac{n}{\sigma^{2}}Avar(\widehat{\mu}|\sigma)&=\frac{1}{\sigma^{2}}V_{\widehat{\mu}|\sigma}=\left[f_{11}\right]^{-1}=\left[0.96841\right]^{-1}=1.03262\\ \frac{n}{\sigma^{2}}Avar(\widehat{\sigma}|\mu)&=\frac{1}{\sigma^{2}}V_{\widehat{\sigma}|\mu}=\left[f_{22}\right]^{-1}=\left[1.56779\right]^{-1}=0.63784 \end{aligned} \]
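A short R verification of these numbers (the value \(f_{12}=-0.11489\) is inferred from the \(V\) values shown above; it is not given in the text):

```r
# Verify the Table C.20 arithmetic: invert the scaled Fisher information
# matrix to recover the variance factors and correlation
f_mat <- matrix(c( 0.96841, -0.11489,
                  -0.11489,  1.56779), nrow = 2)
V <- solve(f_mat)
V[1, 1]                            # V_mu factor         ~ 1.04168
V[2, 2]                            # V_sigma factor      ~ 0.64344
V[1, 2]                            # covariance factor   ~ 0.07634
V[1, 2] / sqrt(V[1, 1] * V[2, 2])  # rho                 ~ 0.09325
1 / f_mat[1, 1]                    # V_mu given sigma    ~ 1.03262
1 / f_mat[2, 2]                    # V_sigma given mu    ~ 0.63784
```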