Multiply type-II censoring
A generalization of the type-II censoring scheme is known as the multiply type-II censoring scheme. The following two types of multiply type-II censoring schemes are available:

Under this scheme, from the n items on test we observe only the r_{1}th, r_{2}th, ⋯, r_{k}th failure times \(X_{r_{1}}\), \(X_{r_{2}}\), …, \(X_{r_{k}}\), where 1 ≤ r_{1} < r_{2} < ⋯ < r_{k} ≤ n; the rest of the data are not available.

In life testing experiments the test is terminated either at a predetermined time (type-I censoring) or once a predetermined number of failures has been observed (type-II censoring). Such censoring may occur from the left or from the right. Sometimes left and right censoring appear together; this is known as double censoring. Furthermore, if mid censoring arises in addition to double censoring in the type-II censoring scheme, the scheme is also known as multiply type-II censoring.
Applications of the multiply type-II censoring scheme are available in the literature. Some references related to the multiply type-II censoring scheme described in (a) are Balakrishnan [12], Balakrishnan et al. [13], Shah and Patel [14] and Kim and Han [15]; and in (b) are Upadhyay et al. [16], Kong [17], Shah and Patel [18], Kang et al. [19], Patel and Patel [20] and Shafay and Sultan [21].
Joint multiply type-II censoring
Suppose there are two lines of similar products and our aim is to study their relative merits. A sample of size m is drawn from one product line, called type-A, and another sample of size n is drawn from the other product line, called type-B.
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{m}\), the lifetimes of the m specimens of product type-A, are independent identically distributed random variables with distribution function F(y) and density function f(y), and that \(W_{1}, W_{2}, \ldots, W_{n}\), the lifetimes of the n specimens of product type-B, are independent identically distributed random variables with distribution function G(w) and density function g(w). Further, let \(X_{1} < X_{2} < \cdots < X_{N}\) denote the order statistics of the N = m + n random variables \(\{ Y_{1}, Y_{2}, \ldots, Y_{m};\; W_{1}, W_{2}, \ldots, W_{n} \}\). Since the probability distributions of Y and W are assumed continuous, theoretical ties do not exist and a unique ordering is always possible. Even so, if an observation from Y happens to equal one from W, the tie can be broken by ordering the two at random.
Suppose that only k ordered failure times \(X_{r_{1}}, X_{r_{2}}, \ldots, X_{r_{k}}\) are observed out of the \(X_{1}, X_{2}, \ldots, X_{N}\) ordered failure times. The experimenter fixes the values of \(r_{1}, r_{2}, \ldots, r_{k}\) before conducting the life testing experiment. In this scheme the initial \(r_{1} - 1\) failures, some intermediate failures and the last \(N - r_{k}\) failures are not observed, the aim being to save time and cost of the experiment. The joint multiply type-II censoring scheme can be visualized graphically as follows,
where a_{i} = number of type-A failures with \(X_{r_{i-1}} < Y < X_{r_{i}}\), i = 1, 2, …, k, with r_{0} = 0 and X_{0} = 0; b_{i} = number of type-B failures with \(X_{r_{i-1}} < W < X_{r_{i}}\), i = 1, 2, …, k; m_{k} = number of type-A failures among the k observed failures = \(\sum_{i=1}^{k} Z_{i}\); n_{k} = number of type-B failures among the k observed failures = \(\sum_{i=1}^{k} (1 - Z_{i}) = k - m_{k}\); a_{k+1} = number of type-A failures with \(Y > X_{r_{k}}\) = \(m - m_{k} - \sum_{i=1}^{k} a_{i}\); b_{k+1} = number of type-B failures with \(W > X_{r_{k}}\) = \(n - n_{k} - \sum_{i=1}^{k} b_{i} = R_{k+1} - a_{k+1}\). Note that m_{k} and n_{k} cannot both be zero simultaneously; if either of them is zero, the problem reduces to estimation based on a single sample only.
$${\text{Denote}}\;R_{1} = r_{1} - 1,\;R_{i} = r_{i} - r_{i-1} - 1,\;i = 1,2, \ldots ,k,\;{\text{and}}\;R_{k + 1} = a_{k + 1} + b_{k + 1}$$
Then, under the joint multiply typeII censoring scheme, the observable data consist of (Z, X), where
$$X = \{ X_{{r_{1} }} ,X_{{r_{2} }} , \ldots ,X_{{r_{k} }} \} ,\;{\text{with}}\;1 < r_{1} < r_{2} < \cdots < r_{k} < N,\;{\text{and}}$$
\(Z = \{ Z_{1}, Z_{2}, \ldots, Z_{k} \}\) with \(Z_{i} = 1\;{\text{or}}\;0\) according as \(X_{r_{i}}\) is a Y (type-A) or a W (type-B) failure.
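To fix ideas, the bookkeeping above can be sketched in code. The sketch below (function and variable names are our own, purely illustrative) recovers \(z_{i}\), the gap counts \(a_{i}, b_{i}\) and the totals \(m_{k}, n_{k}\) from a fully labelled pooled sample; in an actual experiment the labels between observed ranks are known only through the counts themselves.

```python
def censoring_counts(labels, r):
    """Recover z_i, a_i, b_i, m_k, n_k from the pooled ordered sample.
    labels: 'A'/'B' label of each of the N pooled ordered failures.
    r: observed ranks r_1 < ... < r_k (1-based)."""
    k = len(r)
    # z_i = 1 if the ith observed failure is a type-A failure
    z = [1 if labels[ri - 1] == 'A' else 0 for ri in r]
    bounds = [0] + list(r)
    # a_i, b_i: unobserved type-A / type-B failures strictly between
    # X_{r_{i-1}} and X_{r_i} (ranks r_{i-1}+1 ... r_i - 1)
    a = [sum(labels[j] == 'A' for j in range(bounds[i - 1], bounds[i] - 1))
         for i in range(1, k + 1)]
    b = [(bounds[i] - bounds[i - 1] - 1) - a[i - 1] for i in range(1, k + 1)]
    N = len(labels)
    a.append(sum(labels[j] == 'A' for j in range(r[-1], N)))  # a_{k+1}
    b.append((N - r[-1]) - a[-1])                             # b_{k+1}
    m_k = sum(z)
    return z, a, b, m_k, k - m_k
```

For example, with the pooled labels B, A, A, B, A, B and observed ranks r = (2, 4), the first and third gaps each hide one type-A failure and one type-B failure.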
Then the likelihood function of (Z, X) will be given by
$$\begin{aligned} L & = C\left\{ F\left( x_{r_{1}} \right) \right\}^{a_{1}} \left\{ G\left( x_{r_{1}} \right) \right\}^{b_{1}} \prod_{i = 1}^{k} \left\{ f\left( x_{r_{i}} \right) \right\}^{z_{i}} \prod_{i = 1}^{k} \left\{ g\left( x_{r_{i}} \right) \right\}^{1 - z_{i}} \prod_{i = 2}^{k} \left[ F\left( x_{r_{i}} \right) - F\left( x_{r_{i-1}} \right) \right]^{a_{i}} \\ & \quad \times \prod_{i = 2}^{k} \left[ G\left( x_{r_{i}} \right) - G\left( x_{r_{i-1}} \right) \right]^{b_{i}} \left\{ \overline{F}\left( x_{r_{k}} \right) \right\}^{a_{k + 1}} \left\{ \overline{G}\left( x_{r_{k}} \right) \right\}^{b_{k + 1}} \end{aligned}$$
(1)
where \(C = \frac{m!n!}{{\mathop \prod \nolimits_{i = 1}^{k + 1} a_{i} !\mathop \prod \nolimits_{i = 1}^{k + 1} b_{i} ! }}\).
Consider exponential life time models for life time Y and W as
$$F(y) = 1 - e^{-y/\theta_{1}} \;{\text{and}}\;G(w) = 1 - e^{-w/\theta_{2}},\; y > 0,\;w > 0;\;\theta_{i} > 0, \;i = 1, 2.$$
(2)
Denoting \(T_{i} = x_{r_{i}} - x_{r_{i-1}}\), i = 2, …, k, and substituting Eq. (2) in Eq. (1), the likelihood function becomes
$$\begin{aligned} L = L(\theta \mid x) & = \frac{C}{\theta_{1}^{m_{k}} \theta_{2}^{n_{k}}}\left( 1 - e^{-x_{r_{1}}/\theta_{1}} \right)^{a_{1}} \exp\left( -\frac{u_{k}}{\theta_{1}} \right) \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{1}} \right]^{a_{i}} \left( 1 - e^{-x_{r_{1}}/\theta_{2}} \right)^{b_{1}} \exp\left( -\frac{v_{k}}{\theta_{2}} \right) \\ & \quad \times \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{2}} \right]^{b_{i}} \end{aligned}$$
(3)
where
$$\theta = (\theta_{1}, \theta_{2}),\;u_{k} = \sum_{i = 2}^{k} a_{i} x_{r_{i-1}} + \sum_{i = 1}^{k} z_{i} x_{r_{i}} + a_{k + 1} x_{r_{k}}$$
and
$$v_{k} = \sum_{i = 2}^{k} b_{i} x_{r_{i-1}} + \sum_{i = 1}^{k} (1 - z_{i}) x_{r_{i}} + b_{k + 1} x_{r_{k}}$$
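As a quick illustration, \(u_{k}\) and \(v_{k}\) are simply weighted sums of the observed failure times; a minimal sketch (array names are ours):

```python
import numpy as np

def uk_vk(x, z, a, a_last, b, b_last):
    """Compute u_k and v_k as defined above.
    x: observed failure times x_{r_1}, ..., x_{r_k}
    z: indicators Z_1, ..., Z_k
    a, b: unobserved gap counts a_2, ..., a_k and b_2, ..., b_k
    a_last, b_last: a_{k+1} and b_{k+1}."""
    # sum_{i=2}^{k} a_i x_{r_{i-1}} pairs a_i with the previous failure time
    u = np.sum(a * x[:-1]) + np.sum(z * x) + a_last * x[-1]
    v = np.sum(b * x[:-1]) + np.sum((1 - z) * x) + b_last * x[-1]
    return u, v
```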
Maximum likelihood estimate (MLE) and asymptotic standard error
The MLEs \(\hat{\theta }_{1}\) and \(\hat{\theta }_{2}\) of the parameters \(\theta_{1}\) and \(\theta_{2}\) are obtained by maximizing Eq. (3). To maximize the likelihood function in Eq. (3) we derive the likelihood equations
$$\frac{\partial \log L}{{\partial \theta_{1} }} = 0 \;{\text{and}}\;\frac{\partial \log L}{{\partial \theta_{2} }} = 0.$$
Solving these equations gives the MLEs of the mean lifetimes \(\theta_{1}\) and \(\theta_{2}\). The likelihood equations do not admit explicit solutions, but they can be rearranged as the two equations
$$m_{k} \theta_{1} = u_{k} - \frac{a_{1} x_{r_{1}} e^{-x_{r_{1}}/\theta_{1}}}{1 - e^{-x_{r_{1}}/\theta_{1}}} - \sum_{i = 2}^{k} \left\{ \frac{a_{i} T_{i} e^{-T_{i}/\theta_{1}}}{1 - e^{-T_{i}/\theta_{1}}} \right\}$$
(4)
and
$$n_{k} \theta_{2} = v_{k} - \frac{b_{1} x_{r_{1}} e^{-x_{r_{1}}/\theta_{2}}}{1 - e^{-x_{r_{1}}/\theta_{2}}} - \sum_{i = 2}^{k} \left\{ \frac{b_{i} T_{i} e^{-T_{i}/\theta_{2}}}{1 - e^{-T_{i}/\theta_{2}}} \right\}$$
(5)
Equations (4) and (5) can be solved for \(\theta_{1}\) and \(\theta_{2}\) by any iterative method; their solutions are the MLEs \(\hat{\theta }_{1}\) and \(\hat{\theta }_{2}\) of the parameters \(\theta_{1}\) and \(\theta_{2}\), respectively.
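Equation (4) has the fixed-point form \(\theta_{1} = g(\theta_{1})\), which suggests simple functional iteration started from the naive estimate \(u_{k}/m_{k}\). The sketch below is our own illustration (not the authors' code); Eq. (5) is solved identically with \(b_{i}\), \(v_{k}\), \(n_{k}\) and \(\theta_{2}\).

```python
import numpy as np

def mle_theta(u, m_k, x_r1, a1, T, a, tol=1e-10, max_iter=2000):
    """Solve Eq. (4) for theta_1 by fixed-point iteration:
    m_k * theta = u - a1*x_r1*e^{-x_r1/th}/(1-e^{-x_r1/th})
                    - sum_i a_i*T_i*e^{-T_i/th}/(1-e^{-T_i/th}).
    T, a: arrays holding T_i and a_i for i = 2, ..., k."""
    theta = u / m_k  # starting value
    for _ in range(max_iter):
        term1 = a1 * x_r1 * np.exp(-x_r1 / theta) / (1.0 - np.exp(-x_r1 / theta))
        term2 = np.sum(a * T * np.exp(-T / theta) / (1.0 - np.exp(-T / theta)))
        theta_new = (u - term1 - term2) / m_k
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

The iteration map is decreasing in \(\theta\), so the iterates oscillate around the unique root but contract toward it whenever the censored-count total is small relative to \(m_{k}\); a Newton–Raphson step could be substituted for faster convergence.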
Let \(I\left( \theta_{1}, \theta_{2} \right) = \left( I_{i,j} \left( \theta_{i}, \theta_{j} \right) \right)\), i, j = 1, 2, denote the Fisher information matrix of the parameters \(\theta_{1}\) and \(\theta_{2}\), where
$$I_{i,j} \left( \theta_{i}, \theta_{j} \right) = - E\left( \frac{\partial^{2} \log L}{\partial \theta_{i} \partial \theta_{j}} \right)$$
and consequently the observed Fisher information matrix is given by
$$\hat{I}\left( \theta_{1}, \theta_{2} \right) = \left( \begin{array}{cc} - \frac{\partial^{2} \log L}{\partial \theta_{1}^{2}} & 0 \\ 0 & - \frac{\partial^{2} \log L}{\partial \theta_{2}^{2}} \end{array} \right)_{(\theta_{1}, \theta_{2}) = (\hat{\theta }_{1}, \hat{\theta }_{2})}$$
(6)
where
$$\begin{aligned} \frac{\partial^{2} \log L}{\partial \theta_{1}^{2}} & = \frac{2}{\theta_{1}^{3}}\left\{ \frac{a_{1} x_{r_{1}} e^{-x_{r_{1}}/\theta_{1}}}{1 - e^{-x_{r_{1}}/\theta_{1}}} \right\} - \frac{a_{1} x_{r_{1}}^{2}}{\theta_{1}^{4}} \frac{e^{-x_{r_{1}}/\theta_{1}}}{\left( 1 - e^{-x_{r_{1}}/\theta_{1}} \right)^{2}} - \frac{2 u_{k}}{\theta_{1}^{3}} + \frac{m_{k}}{\theta_{1}^{2}} + \frac{2}{\theta_{1}^{3}} \sum_{i = 2}^{k} \frac{a_{i} T_{i} e^{-T_{i}/\theta_{1}}}{1 - e^{-T_{i}/\theta_{1}}} \\ & \quad - \frac{1}{\theta_{1}^{4}} \sum_{i = 2}^{k} \frac{a_{i} T_{i}^{2} e^{-T_{i}/\theta_{1}}}{\left( 1 - e^{-T_{i}/\theta_{1}} \right)^{2}} \end{aligned}$$
(7)
and
$$\begin{aligned} \frac{\partial^{2} \log L}{\partial \theta_{2}^{2}} & = \frac{2}{\theta_{2}^{3}}\left\{ \frac{b_{1} x_{r_{1}} e^{-x_{r_{1}}/\theta_{2}}}{1 - e^{-x_{r_{1}}/\theta_{2}}} \right\} - \frac{b_{1} x_{r_{1}}^{2}}{\theta_{2}^{4}} \frac{e^{-x_{r_{1}}/\theta_{2}}}{\left( 1 - e^{-x_{r_{1}}/\theta_{2}} \right)^{2}} - \frac{2 v_{k}}{\theta_{2}^{3}} + \frac{n_{k}}{\theta_{2}^{2}} + \frac{2}{\theta_{2}^{3}} \sum_{i = 2}^{k} \frac{b_{i} T_{i} e^{-T_{i}/\theta_{2}}}{1 - e^{-T_{i}/\theta_{2}}} \\ & \quad - \frac{1}{\theta_{2}^{4}} \sum_{i = 2}^{k} \frac{b_{i} T_{i}^{2} e^{-T_{i}/\theta_{2}}}{\left( 1 - e^{-T_{i}/\theta_{2}} \right)^{2}} \end{aligned}$$
(8)
Hence the asymptotic standard errors (ASEs) of the MLEs are obtained as
$${\text{ASE}}(\hat{\theta }_{1} ) = \sqrt{\frac{-1}{E\left( \frac{\partial^{2} \log L}{\partial \theta_{1}^{2}} \right)}} \;{\text{and}}\;{\text{ASE}}(\hat{\theta }_{2} ) = \sqrt{\frac{-1}{E\left( \frac{\partial^{2} \log L}{\partial \theta_{2}^{2}} \right)}}$$
(9)
Then, by using the asymptotic normality of the MLEs, we can express the asymptotic (1 − α)100% confidence intervals for \(\theta_{1}\) and \(\theta_{2}\) as
$$\hat{\theta }_{1} \pm Z_{\alpha /2} {\text{ASE(}}\hat{\theta }_{1} {)}\;{\text{and}}\;\hat{\theta }_{2} \pm Z_{\alpha /2} {\text{ASE(}}\hat{\theta }_{2} {)}$$
(10)
where \(Z_{\alpha /2}\) denotes the upper α/2 percentage point of the standard normal distribution.
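Since the observed information matrix in Eq. (6) is diagonal, each ASE requires only the corresponding second derivative from Eq. (7) or (8) evaluated at the MLE. A minimal sketch of Eqs. (9)–(10) in code (function name ours):

```python
from statistics import NormalDist

def wald_ci(theta_hat, d2_loglik, alpha=0.05):
    """ASE and asymptotic (1 - alpha)100% confidence interval for theta.
    d2_loglik: second derivative of log L (Eq. (7) or (8)) at the MLE;
    it is negative at a maximum, so -1/d2_loglik > 0."""
    ase = (-1.0 / d2_loglik) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)  # upper alpha/2 normal point
    return ase, (theta_hat - z * ase, theta_hat + z * ase)
```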
The MLEs of the reliabilities at time t of products of type-A and type-B are respectively given by
$$\widehat{{R_{{\text{A}}} }}(t) = e^{{  t/\hat{\theta }_{1} }} \;{\text{and}}\;\widehat{{R_{{\text{B}}} }}(t) = e^{{  t/\hat{\theta }_{2} }}$$
(11)
The ASE of the MLE of reliability at time t for product type-A is calculated as
$${\text{ASE}}_{{\text{A}}} (t) = \left. \sqrt{\left( \frac{{\text{d}}R_{{\text{A}}} \left( t \right)}{{\text{d}}\theta_{1}} \right)^{2} V(\hat{\theta }_{1} )} \; \right|_{\theta_{1} = \hat{\theta }_{1}}$$
(12)
Similarly, the asymptotic standard error of the MLE of reliability at time t for product type-B can be calculated.
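Since \(R(t) = e^{-t/\theta}\) gives \({\text{d}}R/{\text{d}}\theta = (t/\theta^{2}) e^{-t/\theta}\), the delta-method calculation in Eq. (12) can be sketched as (function name ours):

```python
import math

def reliability_ase(theta_hat, var_theta_hat, t):
    """Delta-method ASE of the reliability MLE R(t) = exp(-t/theta):
    sqrt((dR/dtheta)^2 * Var(theta_hat)) evaluated at theta_hat."""
    dR = (t / theta_hat ** 2) * math.exp(-t / theta_hat)
    return abs(dR) * math.sqrt(var_theta_hat)
```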
Influence measure
In this section we consider the influence of individual observations on the maximum likelihood estimates, using the approach of Poon and Tang [22].
Let L(θ) be the likelihood function of θ given in Eq. (1). Define the case-weight perturbation \(w = \left( w_{1}, w_{2}, \ldots, w_{N} \right)^{\prime}\); the corresponding perturbed log-likelihood function is
$$\begin{aligned} \log L\left( \theta \mid w \right) & = \sum_{i = 1}^{a_{1}} w_{i} \log \left( F\left( x_{r_{1}}, \theta_{1} \right) \right) + \sum_{i = a_{1} + 1}^{r_{1} - 1} w_{i} \log \left( G\left( x_{r_{1}}, \theta_{2} \right) \right) + \sum_{i = 1}^{k} w_{r_{i}} z_{i} \log \left( f\left( x_{r_{i}}, \theta_{1} \right) \right) \\ & \quad + \sum_{i = 1}^{k} w_{r_{i}} (1 - z_{i}) \log \left( g\left( x_{r_{i}}, \theta_{2} \right) \right) + \sum_{j = 2}^{k} \sum_{i = r_{j-1} + 1}^{a_{j} + r_{j-1}} w_{i} \log \left[ F\left( x_{r_{j}}, \theta_{1} \right) - F\left( x_{r_{j-1}}, \theta_{1} \right) \right] \\ & \quad + \sum_{j = 2}^{k} \sum_{i = a_{j} + r_{j-1} + 1}^{r_{j} - 1} w_{i} \log \left[ G\left( x_{r_{j}}, \theta_{2} \right) - G\left( x_{r_{j-1}}, \theta_{2} \right) \right] \\ & \quad + \sum_{i = r_{k} + 1}^{a_{k + 1} + r_{k}} w_{i} \log \left[ 1 - F\left( x_{r_{k}}, \theta_{1} \right) \right] + \sum_{i = a_{k + 1} + r_{k} + 1}^{N} w_{i} \log \left[ 1 - G\left( x_{r_{k}}, \theta_{2} \right) \right] \end{aligned}$$
(13)
Using Eq. (2) in Eq. (13) it can be further simplified as
$$\begin{aligned} \log L\left( \theta \mid w \right) & = \sum_{i = 1}^{a_{1}} w_{i} \log \left( 1 - \exp \left( - \frac{x_{r_{1}}}{\theta_{1}} \right) \right) + \sum_{i = a_{1} + 1}^{r_{1} - 1} w_{i} \log \left( 1 - \exp \left( - \frac{x_{r_{1}}}{\theta_{2}} \right) \right) \\ & \quad - \left( \frac{1}{\theta_{1}} \right)\left\{ \sum_{i = 1}^{k} w_{r_{i}} z_{i} x_{r_{i}} + \sum_{j = 2}^{k} \sum_{i = r_{j-1} + 1}^{a_{j} + r_{j-1}} w_{i} x_{r_{j-1}} + \sum_{i = r_{k} + 1}^{a_{k + 1} + r_{k}} w_{i} x_{r_{k}} \right\} \\ & \quad - \left( \frac{1}{\theta_{2}} \right)\left\{ \sum_{i = 1}^{k} w_{r_{i}} (1 - z_{i}) x_{r_{i}} + \sum_{j = 2}^{k} \sum_{i = a_{j} + r_{j-1} + 1}^{r_{j} - 1} w_{i} x_{r_{j-1}} + \sum_{i = a_{k + 1} + r_{k} + 1}^{N} w_{i} x_{r_{k}} \right\} \\ & \quad - \sum_{i = 1}^{k} w_{r_{i}} z_{i} \log \theta_{1} + \sum_{j = 2}^{k} \sum_{i = r_{j-1} + 1}^{a_{j} + r_{j-1}} w_{i} \log \left( 1 - \exp \left( - \frac{T_{j}}{\theta_{1}} \right) \right) \\ & \quad - \sum_{i = 1}^{k} w_{r_{i}} (1 - z_{i}) \log \theta_{2} + \sum_{j = 2}^{k} \sum_{i = a_{j} + r_{j-1} + 1}^{r_{j} - 1} w_{i} \log \left( 1 - \exp \left( - \frac{T_{j}}{\theta_{2}} \right) \right) \end{aligned}$$
(14)
If \(w = w_{0} = \left( 1, 1, \ldots, 1 \right)^{\prime}\), from Eq. (14) we see that \(\log L\left( \theta \mid w_{0} \right) = \log L\left( \theta \right)\). If deleting the ith observation (i.e. setting w_{i} = 0) leads to a very different MLE of θ = (θ_{1}, θ_{2}), this indicates that the ith observation is influential. Similarly, if a small perturbation of w_{i} away from w_{i} = 1 leads to a very different MLE of θ = (θ_{1}, θ_{2}), this too is evidence of the influence of the ith observation. Thus, if \(\log L\left( \theta \mid w \right)\) attains its maximum at \(\hat{\theta }_{w}\), then the change of \(\hat{\theta }_{w}\) as a function of w reveals how influential an individual observation is. Cook [23] proposed the displacement D(w) as
$$D\left( w \right) = \log L\left( \hat{\theta } \mid w_{0} \right) - \log L\left( \hat{\theta }_{w} \mid w_{0} \right)$$
(15)
The directions giving a large change of D(w) at w_{0} are of interest. Cook [23] proposed a straightforward method of computing such a direction, as described below:
$${\text{Let}}\;H = \left. \left( \frac{\partial^{2} \log L\left( \theta \right)}{\partial \theta_{i} \partial \theta_{j}} \right) \right|_{\left( \theta_{1}, \theta_{2} \right) = \left( \hat{\theta }_{1}, \hat{\theta }_{2} \right)}, \quad i, j = 1, 2,$$
(16)
which is a 2 × 2 matrix.
Define
$$A = \left( \frac{\partial^{2} \log L\left( \theta \mid w \right)}{\partial \left( \theta_{1}, \theta_{2} \right) \partial w} \right)_{(\theta_{1}, \theta_{2}) = (\hat{\theta }_{1}, \hat{\theta }_{2}),\, w = w_{0}}$$
(17)
which is a 2 × N matrix, where N = m + n is the total sample size.
The (1, j)th element of the matrix A is given by
$$\left. \frac{\partial^{2} \log L\left( \theta \mid w \right)}{\partial \theta_{1} \partial w_{j}} \right|_{\theta_{1} = \hat{\theta }_{1},\, w = w_{0}}$$
(18)
and (2, j)th element as
$$\left. \frac{\partial^{2} \log L\left( \theta \mid w \right)}{\partial \theta_{2} \partial w_{j}} \right|_{\theta_{2} = \hat{\theta }_{2},\, w = w_{0}}$$
(19)
Using Eqs. (7) and (8), the matrix H can be computed at the MLEs of the parameters.
The matrix \({\Lambda }\) is then defined as
$${\Lambda } =  A^{\prime}(H)^{  1} A$$
(20)
Poon and Poon [24] introduced the basic perturbation direction (pd_{i}) in terms of the diagonal elements and the trace of the matrix \({\Lambda }\):
$${\text{pd}}_{i} = \frac{{\Lambda_{ii} }}{{\sqrt {{\text{trace}}\left( {\Lambda^{2} } \right)} }},\quad i = 1,2, \ldots N$$
(21)
and suggested that observations with large pd_{i} values be regarded as influential. Using Eqs. (16) and (17), the influence measures can be computed for a given density function f(x_{i}, θ). To identify large values of pd_{i}, Poon and Poon [24] introduced the reference constant c as
$$c = \frac{{2{\text{trace}}\left( \Lambda \right)}}{{N\sqrt {{\text{trace}}\left( {\Lambda^{2} } \right)} }}$$
(22)
which can be used to identify the observations having large pd_{i} values.
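The computations in Eqs. (20)–(22) amount to a few lines of linear algebra. A minimal sketch (our own naming), assuming the 2 × 2 Hessian H and the 2 × N matrix A have already been evaluated at the MLEs:

```python
import numpy as np

def influence_measures(H, A):
    """Basic perturbation directions pd_i (Eq. (21)) and reference
    constant c (Eq. (22)) from Lambda = -A' H^{-1} A (Eq. (20)).
    H: 2x2 Hessian at the MLEs; A: 2xN cross-derivative matrix."""
    Lam = -A.T @ np.linalg.inv(H) @ A
    denom = np.sqrt(np.trace(Lam @ Lam))      # sqrt(trace(Lambda^2))
    pd = np.diag(Lam) / denom                 # Eq. (21)
    c = 2.0 * np.trace(Lam) / (A.shape[1] * denom)  # Eq. (22)
    return pd, c, np.where(pd > c)[0]         # indices flagged influential
```

Note that \(\sum_{i} {\text{pd}}_{i} = {\text{trace}}(\Lambda)/\sqrt{{\text{trace}}(\Lambda^{2})} = Nc/2\), so c is simply twice the average basic perturbation direction.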
Bayes estimation
In this section we consider the Bayes estimates of the mean lifetime and reliability of the products of type-A and type-B. To obtain the Bayes estimate of a parameter θ of the distribution, we must specify a prior distribution for θ. The prior distribution reflects the analyst's pre-data understanding, knowledge or belief about θ. Usually the parametric form of the prior is chosen so that the posterior distribution of θ is of the same form, i.e. belongs to the same family as the prior; such a prior is used mostly for mathematical and computational convenience in practice. The Bayes estimate of the parameter θ usually falls somewhere between the prior estimate and the likelihood-based estimate, and thus depends on the initial beliefs about the parameter θ.
Unfortunately, when both the parameters \(\theta_{1}\) and \(\theta_{2}\) are unknown, no natural conjugate prior exists. In this article, as in Kundu and Pradhan [25], we use inverse gamma priors for the parameters \(\theta_{1}\) and \(\theta_{2}\).
The inverse gamma priors \({\text{IG}}(c_{1}, d_{1})\) and \({\text{IG}}(c_{2}, d_{2})\) for \(\theta_{1}\) and \(\theta_{2}\), respectively, are defined as
$$\pi_{1} \left( {\theta_{1} } \right) = \frac{{e^{{  d_{1} /\theta_{1} }} d_{1}^{{c_{1} }} }}{{\theta_{1}^{{c_{1} + 1}} \Gamma c_{1} }} = {\text{IG}}\left( {c_{1} ,d_{1} } \right)\;{\text{and}}\;\pi_{2} \left( {\theta_{2} } \right) = \frac{{e^{{  d_{2} /\theta_{2} }} d_{2}^{{c_{2} }} }}{{\theta_{2}^{{c_{2} + 1}} \Gamma c_{2} }} = {\text{IG}}\left( {c_{2} ,d_{2} } \right),$$
(23)
$$\theta_{i} > 0, \;c_{i} > 0,\;d_{i} > 0, \;i = 1,2.$$
On the basis of the likelihood function in Eq. (3) and the above independent inverse gamma prior distributions, the joint posterior density function of \(\theta_{1}\) and \(\theta_{2}\) can be constructed as
$$\begin{aligned} h\left( \theta_{1}, \theta_{2} \mid \underline{x} \right) & \propto L\, \pi_{1}\left( \theta_{1} \right) \pi_{2}\left( \theta_{2} \right) \\ & \propto \frac{e^{-\left( u_{k} + d_{1} \right)/\theta_{1}} \left( u_{k} + d_{1} \right)^{m_{k} + c_{1}}}{\theta_{1}^{m_{k} + c_{1} + 1} \Gamma\left( m_{k} + c_{1} \right)} \frac{e^{-\left( v_{k} + d_{2} \right)/\theta_{2}} \left( v_{k} + d_{2} \right)^{n_{k} + c_{2}}}{\theta_{2}^{n_{k} + c_{2} + 1} \Gamma\left( n_{k} + c_{2} \right)} \frac{\left( 1 - e^{-x_{r_{1}}/\theta_{1}} \right)^{a_{1}} \left( 1 - e^{-x_{r_{1}}/\theta_{2}} \right)^{b_{1}}}{\left( u_{k} + d_{1} \right)^{m_{k} + c_{1}} \left( v_{k} + d_{2} \right)^{n_{k} + c_{2}}} \\ & \quad \times \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{1}} \right]^{a_{i}} \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{2}} \right]^{b_{i}} \end{aligned}$$
(24)
$$\propto {\text{IG}}\left( m_{k} + c_{1}, u_{k} + d_{1} \right)\,{\text{IG}}\left( n_{k} + c_{2}, v_{k} + d_{2} \right)\, h_{3}\left( \theta_{1}, \theta_{2} \mid \underline{x} \right)$$
(25)
where
$$h_{3}\left( \theta_{1}, \theta_{2} \mid \underline{x} \right) = \frac{\left( 1 - e^{-x_{r_{1}}/\theta_{1}} \right)^{a_{1}} \left( 1 - e^{-x_{r_{1}}/\theta_{2}} \right)^{b_{1}}}{\left( u_{k} + d_{1} \right)^{m_{k} + c_{1}} \left( v_{k} + d_{2} \right)^{n_{k} + c_{2}}} \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{1}} \right]^{a_{i}} \prod_{i = 2}^{k} \left[ 1 - e^{-T_{i}/\theta_{2}} \right]^{b_{i}}$$
(26)
From the expression of the posterior distribution given in Eq. (24) it is quite difficult to obtain the Bayes estimates of the parameters in closed form, so we use an approximation method to evaluate them. Several approximation methods are available; here we use the importance sampling method proposed by Kundu and Pradhan [25].
Using importance sampling approach Bayes estimates of \(\theta_{1} \;{\text{and}}\;\theta_{2}\) can be obtained as follows:

Step 1: Generate \(\theta_{1}\) from \({\text{IG}}\left( m_{k} + c_{1}, u_{k} + d_{1} \right)\).

Step 2: Generate \(\theta_{2}\) from \({\text{IG}}\left( n_{k} + c_{2}, v_{k} + d_{2} \right)\).

Step 3: Repeat Steps 1 and 2 N times to obtain (\(\theta_{11} , \theta_{21}\)), …, (\(\theta_{1N} , \theta_{2N}\))

Step 4: The Bayes estimate \(\hat{\varepsilon }_{{\text{B}}}\) of \(\varepsilon \left( \theta_{1}, \theta_{2} \right)\), any function of \(\theta_{1}\) and \(\theta_{2}\), under the squared-error loss function can then be approximated as
$$\hat{\varepsilon }_{{\text{B}}} = \frac{\sum_{i = 1}^{N} \varepsilon \left( \theta_{1i}, \theta_{2i} \right) h_{3}\left( \theta_{1i}, \theta_{2i} \mid \underline{x} \right)}{\sum_{i = 1}^{N} h_{3}\left( \theta_{1i}, \theta_{2i} \mid \underline{x} \right)}$$
(27)
The Bayes estimate of \(\theta_{1}\) is obtained by considering \(\varepsilon \left( {\theta_{1} , \theta_{2} } \right) = \theta_{1}\) in the above computation.
Similarly, the Bayes estimate of \(\theta_{2}\) can be computed. The Bayes estimates of the reliabilities at time t of the two types of products can be obtained by replacing the function \(\varepsilon \left( \theta_{1}, \theta_{2} \right)\) by the expression of the reliability function given in Eq. (11). Further applications of this method can be found in Kundu and Pradhan [25] and Rastogi and Tripathi [26].
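Steps 1–4 can be sketched as follows (a minimal illustration with our own function names; the constant denominator of h_{3} in Eq. (26) cancels in the ratio of Eq. (27) and is therefore dropped, and an inverse-gamma draw is obtained as the reciprocal of a gamma variate):

```python
import numpy as np

def bayes_estimate(eps, n_draws, m_k, n_k, u_k, v_k,
                   c1, d1, c2, d2, x_r1, a1, b1, T, a, b, seed=0):
    """Importance-sampling approximation (Eq. (27)) of the Bayes
    estimate of eps(theta1, theta2) under squared-error loss.
    T, a, b: arrays of T_i, a_i, b_i for i = 2, ..., k."""
    rng = np.random.default_rng(seed)
    # Steps 1-3: draws from the inverse-gamma factors of Eq. (25)
    th1 = 1.0 / rng.gamma(m_k + c1, 1.0 / (u_k + d1), size=n_draws)
    th2 = 1.0 / rng.gamma(n_k + c2, 1.0 / (v_k + d2), size=n_draws)
    # Step 4: log-weights from the numerator of h3 (Eq. (26))
    logw = (a1 * np.log1p(-np.exp(-x_r1 / th1))
            + b1 * np.log1p(-np.exp(-x_r1 / th2))
            + np.log1p(-np.exp(-np.outer(1.0 / th1, T))) @ a
            + np.log1p(-np.exp(-np.outer(1.0 / th2, T))) @ b)
    w = np.exp(logw - logw.max())  # normalise for numerical stability
    return np.sum(eps(th1, th2) * w) / np.sum(w)
```

Taking `eps = lambda t1, t2: t1` approximates the Bayes estimate of \(\theta_{1}\), while `eps = lambda t1, t2: np.exp(-t / t1)` approximates the Bayes estimate of \(R_{\text{A}}(t)\).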