It follows from (1) and (3) that, based on a given type-II censored sample x drawn from the GB distribution, the likelihood function of the population parameters β and λ is given by:
$$ L(\beta,\,\lambda|{\mathbf{x}}) \ \propto\ {\beta}^{r}{\lambda}^{r}\, {e^{-2\,\beta\, T_{1}+T_{2}}}, $$
(6)
where
$$\begin{array}{*{20}l} {}T_{1}\!&=\!{(n\,-\,r)\, x^{\lambda}_{r}\,+\,\sum_{j=1}^{r} x^{\lambda}_{j} },\\ {}T_{2}\!&=\!(n\,-\,r)\!\ln\! \left(3\,-\,2\,{e^{-\beta\,x^{\lambda}_{r}}} \right)\,+\,{\lambda}\!\sum_{j=1}^{r}{{\ln(x_{j})}}\,+\,\!\sum_{j=1}^{r} \!\ln\left(\!1\,-\,{e^{-\beta\, x^{\lambda}_{j}}}\!\right)\!. \end{array} $$
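Before specializing to the one-parameter cases, it may help to see (6) in computable form. The following is a minimal Python sketch of the log of (6), up to an additive constant; the function name and the convention that x holds the r smallest order statistics out of n are illustrative assumptions.

```python
import numpy as np

def gb_log_likelihood(beta, lam, x, n):
    """Type-II censored GB log-likelihood (up to an additive constant),
    following (6); x holds the r smallest order statistics out of n."""
    x = np.sort(np.asarray(x, dtype=float))
    r = x.size
    xr_lam = x[-1] ** lam
    T1 = (n - r) * xr_lam + np.sum(x ** lam)
    T2 = ((n - r) * np.log(3.0 - 2.0 * np.exp(-beta * xr_lam))
          + lam * np.sum(np.log(x))
          + np.sum(np.log(1.0 - np.exp(-beta * x ** lam))))
    return r * np.log(beta) + r * np.log(lam) - 2.0 * beta * T1 + T2
```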
When λ is known
In this case, for fixed λ, say λ = λ^(0), let θ=1/β and \(y_{i}=x_{i}^{\lambda ^{(0)}}\), i=1, 2, ⋯, r. Then, y1,⋯,yr is a type-II censored sample from the Bilal(θ) distribution. Abd-Elrahman and Niazi [7] established an existence and uniqueness theorem for the maximum likelihood estimate (MLE) of the parameter θ, say \(\hat \theta _{M}\). The MLE for the parameter β is then given by \(\hat \beta _{M}\left (\lambda ^{(0)}\right)=1/\hat \theta _{M}\). Clearly, \(\hat \beta _{M}\left (\lambda ^{(0)}\right)\) exists and is unique.
Now, we provide an iterative technique for finding \(\hat \beta _{M}\left (\lambda ^{(0)}\right)\) as follows. Let,
$$ {}\begin{aligned} W_{1}&={\frac{\beta\,{x^{{\lambda}^{(0)}}_{r}}{e^{-\beta\,{x^{{\lambda}^{(0)}}_{r}} }}}{3-2\,{e^{-\beta\,{x^{{\lambda}^{(0)}}_{r}}}}}},\qquad W_{2j}={\frac{\beta\,{x^{{\lambda}^{(0)}}_{j}}{e^{-\beta\,{x^{{\lambda}^{(0)}}_{j}} }}}{1-{e^{-\beta\,{x^{{\lambda}^{(0)}}_{j}}}}}},\\ j&=1, \, 2,\, \cdots,\, r. \end{aligned} $$
(7)
In view of (6) and (7), the likelihood equation of β is then given by:
$$\begin{array}{@{}rcl@{}} \frac{\partial\,{\ln L(\beta,\,\lambda^{(0)}|{\mathbf{x}})}}{\partial\,{\beta}}&\,=\,&\frac{r+\,2\, (n\,-\,r) \,W_{1}+\sum_{j=1}^{r}W_{2j}}{\beta}\\&&-{2\left((n\,-\,r)\, {x^{{\lambda}^{(0)}}_{r}}\,+\,\sum_{j=1}^{r}{x^{{\lambda}^{(0)}}_{j}}\right)}. \end{array} $$
For ν=0,1,2,⋯, we calculate \(\hat \beta _{M}({\lambda }^{(0)})\) by using the following formula:
$$ {}\begin{aligned} \hat\beta^{(\nu+1)}_{M}&\left(\lambda^{(0)}\right)\\ &=\left.\frac{r+2\, \left(n\,-\,r \right) W_{1}+\sum_{j=1}^{r} W_{2j}}{2\,\left(\left(n\,-\,r \right) {x^{\lambda}_{r}}+\sum_{j=1}^{r}{x^{\lambda}_{j}}\right)} \right|_{\beta=\hat\beta^{(\nu)}_{M}(\lambda^{(0)}),\,\lambda=\lambda^{(0)}}, \end{aligned} $$
(8)
iteratively until the desired level of accuracy is reached.
Remark 1
Note that the functions W1 and W2j, j=1,2, ⋯, r, which appear in (8), require an initial value for β, say \(\hat \beta ^{(0)}\). This initial value can be obtained by treating the available type-II censored sample as if it were complete; see Ng et al. [10]. We use the moment estimator of β as the starting point for the iterations in (8). That is, in view of (3), \(\hat \beta ^{(0)}\) is given by
$$ \hat{\beta}^{(0)}= \frac{5\,r\,}{6\,{\sum_{i=1}^{r} x_{i}^{\lambda^{(0)}}}}. $$
(9)
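Putting (7)–(9) together, a minimal Python sketch of this scheme is given below; the function name, tolerance, and iteration cap are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def mle_beta_fixed_lambda(x, n, lam0, tol=1.2e-7, max_iter=1000):
    """Fixed-point iteration (8) for the MLE of beta when lambda is known,
    started at the moment-type estimator (9)."""
    x = np.sort(np.asarray(x, dtype=float))
    r = x.size
    y = x ** lam0                 # transformed sample, y_j = x_j^lambda0
    yr = y[-1]
    denom = 2.0 * ((n - r) * yr + y.sum())
    beta = 5.0 * r / (6.0 * y.sum())          # starting value (9)
    for _ in range(max_iter):
        W1 = beta * yr * np.exp(-beta * yr) / (3.0 - 2.0 * np.exp(-beta * yr))
        W2 = beta * y * np.exp(-beta * y) / (1.0 - np.exp(-beta * y))
        beta_new = (r + 2.0 * (n - r) * W1 + W2.sum()) / denom   # update (8)
        if abs(beta_new - beta) / beta < tol:
            return beta_new
        beta = beta_new
    return beta
```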
When β is known
When β is assumed to be known, say β = β^(0), it follows from (6) that the likelihood equation for λ is given by
$$ {}\begin{aligned} \frac{\partial\,{\ln L(\beta^{(0)},\,\lambda|{\mathbf{x}})}}{\partial\,{\lambda}}\!&=\! {\frac{r}{\lambda}}\,-\,2\, (n\,-\,r) \ln (x_{r}) \left(\beta^{(0)}{x^{\lambda}_{r}}\,-\,W_{1}\right)\\ &\quad+\sum_{j=1}^{r}\ln (x_{j}) \left(1\,-\,2\!\,\beta^{(0)}{x^{\lambda}_{j}}\,+\,W_{2j} \right), \end{aligned} $$
(10)
where W1 and W2j, j = 1,2,⋯,r, are as given by (7) after replacing β and λ^(0) by β^(0) and λ, respectively. In order to establish the existence and uniqueness of the MLE for λ, the following theorem is needed.
Theorem 1
For a given fixed value of the parameter β = β(0), the MLE for the parameter λ, \(\hat \lambda _{M}\left (\beta ^{(0)}\right)\), exists and it is unique.
Proof
See Appendix. □
The MLE \(\hat \lambda _{M}\left (\beta ^{(0)}\right)\) can be iteratively obtained by using Newton’s method, i.e.,
$$ \begin{aligned} \hat\lambda^{(\nu+1)}_{M}\left(\beta^{(0)}\right)&= \hat\lambda^{(\nu)}_{M}\left(\beta^{(0)}\right)\\ &\quad-\left.\left\{ \frac{\lambda\,{\mathcal{G}}_{1} (\beta^{(0)},\,\lambda|{ \mathbf{x}})} {\lambda\,{\mathcal{G}}_{2} (\beta^{(0)},\,\lambda|{\mathbf{x}})+{\mathcal{G}}_{1} \left(\beta^{(0)},\,\lambda|{\mathbf{x}}\right)} \right\} \right|_{\lambda=\hat\lambda^{(\nu)}_{M}\left(\beta^{(0)}\right)}\, {,} \end{aligned} $$
(11)
for ν=0,1,2,⋯, where \({\mathcal {G}}_{1}(\cdot,\,\lambda |{\mathbf {x}})\) is as given by (10) and \({\mathcal {G}}_{2}(\cdot,\,\lambda |{\mathbf {x}})\) is the second derivative of lnL(·, λ|x) with respect to (w.r.t.) λ, which is given in the “Appendix” section.
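A sketch of iteration (11) follows. Since \({\mathcal {G}}_{2}\) is given only in the Appendix, this sketch substitutes a central-difference approximation to the derivative of \({\mathcal {G}}_{1}\); that substitution is an assumption of the sketch, not the method of the paper.

```python
import numpy as np

def g1(lam, x, n, beta0):
    """Score (10): derivative of the log-likelihood w.r.t. lambda at fixed beta0."""
    x = np.sort(np.asarray(x, dtype=float))
    r = x.size
    y = x ** lam
    yr = y[-1]
    W1 = beta0 * yr * np.exp(-beta0 * yr) / (3.0 - 2.0 * np.exp(-beta0 * yr))
    W2 = beta0 * y * np.exp(-beta0 * y) / (1.0 - np.exp(-beta0 * y))
    return (r / lam
            - 2.0 * (n - r) * np.log(x[-1]) * (beta0 * yr - W1)
            + np.sum(np.log(x) * (1.0 - 2.0 * beta0 * y + W2)))

def mle_lambda_fixed_beta(x, n, beta0, lam_init, tol=1.2e-7, max_iter=1000):
    """Newton-type iteration (11); G2 (given in the Appendix) is replaced
    here by a central-difference approximation to G1' -- an assumption."""
    lam = lam_init
    for _ in range(max_iter):
        h = 1e-6 * max(lam, 1.0)
        G1 = g1(lam, x, n, beta0)
        G2 = (g1(lam + h, x, n, beta0) - g1(lam - h, x, n, beta0)) / (2.0 * h)
        lam_new = lam - lam * G1 / (lam * G2 + G1)   # update (11)
        if abs(lam_new - lam) / lam < tol:
            return lam_new
        lam = lam_new
    return lam
```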
Remark 2
An initial value for λ, \(\hat \lambda ^{(0)}_{M}\), can be obtained as follows: (1) Calculate the sample coefficient of variation (CV) from the given type-II censored data as if the sample were complete. (2) Equate the sample CV to its population counterpart; this results in an equation in λ only. (3) Take \(\hat \lambda ^{(0)}_{M}\) to be the solution of this equation, which provides a good starting point for (11). This technique has been used by, e.g., Kundu and Howlader [11] and Abd-Elrahman [1].
Here, the population CV of the GB distribution is given by
$$ {}\begin{aligned} {\mathcal{C}}(\lambda)&= \sqrt {{\frac{ \left({3}^{m_{2}}-{2}^{m_{2}} \right) \Gamma \left(m_{2} \right) }{ \left({3}^{m_{1}}-{2}^{m_{1}} \right)^{2} \left(\Gamma \left(m_{1} \right) \right)^{2}}}-1},\\ m_{1}&=1+\frac{1}{\lambda},\quad m_{2}=1+\frac{2}{\lambda}. \end{aligned} $$
(12)
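The CV-matching step can be sketched as follows, using (12) and a bracketing root finder; the bracket [0.05, 50] is an illustrative assumption intended to cover typical shape values.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def pop_cv(lam):
    """Population coefficient of variation (12) of the GB distribution."""
    m1, m2 = 1.0 + 1.0 / lam, 1.0 + 2.0 / lam
    ratio = ((3.0**m2 - 2.0**m2) * gamma(m2)
             / ((3.0**m1 - 2.0**m1)**2 * gamma(m1)**2))
    return np.sqrt(ratio - 1.0)

def lambda_start(x):
    """Initial value for lambda: match the sample CV of the censored data
    (treated as if complete) to the population CV (12)."""
    x = np.asarray(x, dtype=float)
    cv = x.std(ddof=1) / x.mean()
    # the bracket is assumed wide enough for typical data; a sketch only
    return brentq(lambda lam: pop_cv(lam) - cv, 0.05, 50.0)
```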
When both β and λ are unknown
In this case, an initial value for λ, \(\hat \lambda ^{(0)}\), is first obtained as described in the “When β is known” section. Once \(\hat \lambda ^{(0)}\) is available, an initial value for the parameter β, \(\hat \beta ^{(0)}\), is calculated from the right-hand side of (9) with λ^(0) replaced by \(\hat \lambda ^{(0)}\).
Based on the initial values \(\hat \beta ^{(0)}\) and \(\hat \lambda ^{(0)}\), an updated value for β, \(\hat \beta ^{(1)}\), is obtained from (8). Similarly, based on the pair (\(\hat \beta ^{(1)},\hat \lambda ^{(0)}\)), an updated value for λ, \(\hat \lambda ^{(1)}\), is obtained from (11), and so on. As a stopping rule, the iterations are terminated at some step s<1000 once the accuracy level ε≤1.2×10−7 is reached, where ε is defined as
$$\epsilon\,=\, \left\vert\frac{\hat\beta^{(s+1)}-\hat\beta^{(s)}} {\hat\beta^{(s)}}\right\vert +\left\vert\frac{\hat\lambda^{(s+1)}-\hat\lambda^{(s)}}{\hat\lambda^{(s)}}\right\vert. $$
Hence, the limiting pair of estimates \(\left (\hat \beta ^{(s)}, \hat \lambda ^{(s)}\right)\) exists and is unique, and it maximizes the likelihood function (6) w.r.t. the unknown population parameters β and λ. That is, \(\hat \beta _{M}\,=\,\hat \beta ^{(s)}\) and \(\hat \lambda _{M}\,=\,\hat \lambda ^{(s)}\).
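Combining the sketches above, the alternating scheme with the stopping rule ε reads:

```python
import numpy as np

def mle_both(x, n, tol=1.2e-7, max_s=1000):
    """Alternate (8) and (11) until the combined relative change eps <= tol;
    lambda_start, mle_beta_fixed_lambda and mle_lambda_fixed_beta are the
    sketches given earlier."""
    x = np.asarray(x, dtype=float)
    lam = lambda_start(x)                            # CV-based initial value
    beta = 5.0 * x.size / (6.0 * np.sum(x ** lam))   # (9) with lambda^(0) -> lam
    for _ in range(max_s):
        beta_new = mle_beta_fixed_lambda(x, n, lam)           # update beta via (8)
        lam_new = mle_lambda_fixed_beta(x, n, beta_new, lam)  # update lambda via (11)
        eps = abs((beta_new - beta) / beta) + abs((lam_new - lam) / lam)
        beta, lam = beta_new, lam_new
        if eps <= tol:
            break
    return beta, lam
```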
Substituting the MLEs of β and λ into (4), the MLE of the reliability function s(t) at some value t = t0 can then be obtained.
Fisher information matrix (FIM)
In this section, the Fisher information matrix (FIM) about the underlying population parameters under type-II censoring is derived by using the missing-information principle. Suppose that x = (x1, x2, …, xr)′ and Y = (Xr+1, Xr+2, …, Xn)′ denote the observed ordered (censored) data and the unobserved ordered data, respectively. The vector Y can be thought of as the missing data. Combining x and Y forms the complete data set W. It is easy to show that the amount of information about the unknown parameters β and λ provided by W is given by:
$$\begin{array}{@{}rcl@{}} I_{\mathbf{W}}\left({\beta}, \, \lambda\right) \,=\, n\left[\begin{array}{cc} \frac{c_{1}}{\beta^{2}}&{\frac{c_{2}\,-\,c_{1}\,\ln \left(\beta \right)}{\beta\,\lambda}} \\ {\frac {c_{2}\,-\,c_{1}\,\ln \left(\beta \right)}{\beta\,\lambda}}&{\frac {c_{3}\,+\,\ln \left(\beta \right) \left\{ c_{1}\,\ln \left(\beta \right) - c_{4} \right\}}{{\lambda}^{2}}}\end{array} \right] \end{array} $$
(13)
with c1=1.92468,c2=0.05606,c3=1.79061, and c4=0.11211.
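As code, (13) is a direct matrix fill; a minimal sketch (the leading factor n reflects that (13) is the information in the complete sample W of size n):

```python
import numpy as np

def fim_complete(beta, lam, n):
    """Complete-data FIM (13) with the numerical constants c1..c4."""
    c1, c2, c3, c4 = 1.92468, 0.05606, 1.79061, 0.11211
    lb = np.log(beta)
    off = (c2 - c1 * lb) / (beta * lam)           # common off-diagonal entry
    return n * np.array([[c1 / beta**2, off],
                         [off, (c3 + lb * (c1 * lb - c4)) / lam**2]])
```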
For s = r + 1,r + 2,…,n, the conditional distribution of each Xs∈Y, given Xs > xr, is the underlying distribution left-truncated at xr; see Ng et al. [10]. Therefore, in view of (1) and (3), the PDF of Xs∈Y given Xs > xr is given by
$$ \begin{aligned} f (x|X_{s}>x_{r};\,\beta,\:\lambda) \!&=\!\frac{6\,\beta\,\lambda\,x^{\lambda-1}\,e^{-2\,\beta\, \left(x^{\lambda}\,-\,x^{\lambda}_{r}\right)}\, \left(1\,-\,e^{-\beta\,x^{\lambda}}\right)} {3\,-\,2\,e^{-\beta\,{x^{\lambda}_{r}}}},\\ &\quad x>x_{r}, \: (\beta,\:\lambda>0). \end{aligned} $$
(14)
Hence, the expected missing-information matrix IY|x(β, λ), which is related to the vector Y, is then given by
$$ \begin{aligned} I_{\mathbf{Y}|\mathbf{x}}(\beta,\,\lambda)\,=\,-(n\,-\,r)\,{{{\mathrm{I}\!\mathrm{E}}}} \left[ \begin{array}{cc} \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\beta^{2}} &\ \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\beta\,\partial\,\lambda} \\ \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\lambda\,\partial\,\beta} &\frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\lambda^{2}} \end{array} \right]. \end{aligned} $$
(15)
In order to evaluate the expectations involved in (15), the following expressions are required.
1) Part 1
$$ {}I^{(k)}(y)=\int_{y}^{\infty}{\left\{\ln (t)\right\}}^{k}\, G_{1}(t)\,{\mathrm{d}\!\,} t,\qquad y>0,\quad k=0,1,2, $$
(16)
where
$${G_{1}(t)=\frac{t\,{e^{-2\,t}}\, \left[ {t\,{e^{-t}\,+\,\left(1-\,{e^{-t}} \right)\,\left(2-3\,{e^{-t}} \right)} } \right] }{1-{e^{-t}}}}. $$
Denote \( I_{0} = {{{\lim }_{y\,\to \,0^{+}}}} I^{(0)}(y) = 0.32078\), \( I_{1} = {{{\lim }_{y\,\to \,0^{+}}}} I^{(1)}(y) = 0.00934\) and \( I_{2} ={{{\lim }_{y\,\to \,0^{+}}}} I^{(2)}(y) = 0.13177\). Then, (16) can be rewritten as
$$ {}I^{(k)}(y)=I_{k}- \int_{\,0}^{y}{\left\{\ln (t)\right\}}^{k}\, G_{1}(t)\,{\mathrm{d}\!\,} t,\qquad y>0,\quad k=0,1,2. $$
(17)
The integrals involved in (17) can be calculated by using a simple numerical integration tool, e.g., Simpson’s rule.
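A sketch of (17) on a Simpson-rule grid follows; the grid size and the small lower cutoff, which sidesteps the removable 0/0 in G1(t) at t = 0, are illustrative choices.

```python
import numpy as np
from scipy.integrate import simpson

I_LIMITS = {0: 0.32078, 1: 0.00934, 2: 0.13177}   # I_0, I_1, I_2

def G1(t):
    """Integrand factor in (16)."""
    e = np.exp(-t)
    return t * np.exp(-2.0 * t) * (t * e + (1.0 - e) * (2.0 - 3.0 * e)) / (1.0 - e)

def I_k(y, k, npts=2001):
    """I^(k)(y) via (17): subtract a Simpson-rule integral over (0, y)
    from the tabulated limit I_k."""
    t = np.linspace(1e-10, y, npts)    # avoid the removable 0/0 at t = 0
    return I_LIMITS[k] - simpson(np.log(t)**k * G1(t), x=t)
```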
2) Part 2
$$\begin{array}{@{}rcl@{}} I^{(3)}(y)\!&=&\!\int_{y}^{\infty} {\frac{t^{2}\,e^{-3\,t}}{1-{e^{-t}}}} \, \mathrm{d}\, t \! =\! I_{3 }- \int_{\,0}^{y} {\frac{t^{2}\,e^{-3\,t}}{1-{e^{-t}}}} \, \mathrm{d}\, t,\quad y>0, \\ &=&I_{3 }-\,\sum_{j=0}^{\infty}\left\{\int_{\,0}^{y} { {t^{2}\,e^{-(j+3)\,t}}} \, \mathrm{d}\, t\right\},\\ &=&{e^{-3\,y}}\sum_{j=0}^{\infty }{\frac{\left(1+ \left(1+ \left(3+j \right) y \right)^{2} \right) {e^{-j\,y}}}{ \left(3+j \right)^{3}}}, \end{array} $$
(18)
where \(I_{3 }\,=\,{\lim }_{y\,\to \, 0^{+}} I^{(3)}(y)\,=\,-\frac {9}{4}\,+\,2\, \sum _{i=1}^{\infty }\,i^{-3}\,=\,0.154114\,\).
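The series in (18) converges quickly for moderate y; a sketch with an illustrative truncation length (very small y needs more terms, or the limit I3 directly):

```python
import numpy as np

def I3(y, nterms=200):
    """Truncated series (18) for I^(3)(y); the tail decays like
    e^{-j y}/j^3, so nterms = 200 is ample for moderate y."""
    j = np.arange(nterms, dtype=float)
    a = 3.0 + j
    return np.exp(-3.0 * y) * np.sum((1.0 + (1.0 + a * y)**2)
                                     * np.exp(-j * y) / a**3)
```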
Now, in view of (17) and (18), it is easy to show that the elements Ii j, i, j = 1,2, of IY|x(β, λ), after division by (n − r), are given by
$$\begin{array}{*{20}l} {}I_{11} &= \frac{1}{\beta^{2}}\left\{1 +6\,\left({\frac{e^{-y}I^{(3)}(y)}{3 - 2\,e^{-y}}} {- \frac{y^{2} e^{-y}}{{\left(3 - 2\, e^{-y} \right)}^{2}}}\right) \right\}, \, y\,=\,\beta\,x^{\lambda}_{r}, \end{array} $$
(19)
$$\begin{array}{*{20}l} {}I_{12} &= -\,\frac{6}{\beta\,\lambda} \,\left\{\!\frac{t_{1}(x_{r}) + \left[I^{(0)}{(y)} - \ln \left(\beta \right)I^{(1)}{(y)}\right] \,e^{2\,y}}{\left(3 - 2\,{e^{-y}} \right) }\!\right\} \,=\,I_{21}, \end{array} $$
(20)
$$\begin{array}{*{20}l} I_{22} &= \frac{1}{\lambda^{2}}\left\{1 + \frac{6\left[e^{2\,y}\left[\left(\ln(\beta)\right)^{2} I^{(0)}{(y)} - 2\,\ln(\beta)\,I^{(1)}(y) + I^{(2)}(y)\right] - t_{2}(x_{r})\right]}{3 - 2\,e^{-y}}\right\}, \end{array} $$
(21)
where
$${}t_{1}(x_{r})\,=\,{\frac {{\beta\,x^{\lambda}_{r}}\ln\! \left(x^{\lambda}_{r} \right)\! \left[\!\left(\! 1\,-\,{e^{-\beta\,{x^{\lambda}_{r}}}} \!\right)\! \left(\! 3\,-\,2\,{e^{- \beta\,{x^{\lambda}_{r}}}} \!\right)\! +\!\beta\, {x^{\lambda}_{r}}{e^{-\beta\,{x^{\lambda}_{r}}}} \right] }{ \left(3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}} \right)}} $$
and
$${}t_{2}(x_{r})\,=\,{\frac {\beta\,{x^{\lambda}_{r}}\! \left(\ln\! \left({x^{\lambda}_{r}} \right) \right)^{2} \!\left[ \!\beta\,{x^{\lambda}_{r}} {e^{-\beta\,{x^{\lambda}_{r}}}}\,+\, \left(\! \!1\,-\,{e^{-\beta\,{x^{\lambda}_{r }}}}\! \right) \!\left(\! 3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}}\! \right) \!\right] }{ \left(3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}} \right)}}. $$
Note that the elements Ii j, i,j = 1,2, constitute the Fisher information related to each Xs, s = r+1,r+2,⋯,n, where Xs is distributed as in (14). Therefore, in view of (19)–(21), the elements of the FIM about the parameters β and λ related to the complete data set W can be obtained as \(n\, {\lim }_{y\,\to \, 0^{+}}\, I_{i\,j},\,i,\,j=1,2\), which gives the same results as in (13).
Therefore, the FIM gained about the two unknown parameters β and λ from a given type-II censored sample, (x1,x2,⋯,xr)′, is given by
$$I_{\mathbf{x}}(\beta,\,\lambda)= I_{\mathbf{W}}(\beta,\,\lambda) - I_{{\mathbf{Y}}|\mathbf{x}}(\beta,\,\lambda).$$
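Assembling (13) and (19)–(21), a sketch of Ix follows; it reuses fim_complete, I_k, and I3 from the sketches above.

```python
import numpy as np

def fim_censored(beta, lam, xr, n, r):
    """Observed-sample FIM I_x = I_W - I_Y|x, assembling (13) and (19)-(21);
    xr is the largest observed order statistic."""
    y = beta * xr**lam
    ey = np.exp(-y)
    d = 3.0 - 2.0 * ey
    lnb, lnxl = np.log(beta), lam * np.log(xr)      # ln(beta), ln(x_r^lambda)
    bracket = (1.0 - ey) * d + y * ey
    t1 = y * lnxl * bracket / d                     # t1(x_r)
    t2 = y * lnxl**2 * bracket / d                  # t2(x_r)
    I11 = (1.0 + 6.0 * (ey * I3(y) / d - y**2 * ey / d**2)) / beta**2     # (19)
    I12 = (-6.0 / (beta * lam)
           * (t1 + (I_k(y, 0) - lnb * I_k(y, 1)) * np.exp(2.0 * y)) / d)  # (20)
    I22 = (1.0 + 6.0 * (np.exp(2.0 * y) * (lnb**2 * I_k(y, 0)
           - 2.0 * lnb * I_k(y, 1) + I_k(y, 2)) - t2) / d) / lam**2       # (21)
    I_miss = (n - r) * np.array([[I11, I12], [I12, I22]])   # missing info (15)
    return fim_complete(beta, lam, n) - I_miss
```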
Asymptotic variances and covariance
Once Ix(β, λ) has been evaluated at \(\beta \,=\,\hat \beta _{M}\) and \(\lambda \,=\,\hat \lambda _{M}\), the asymptotic variance-covariance matrix of the MLEs of the two unknown parameters β and λ is given by
$${}{\mathbf{Var-Cov}}\left(\hat\beta_{M},\,\hat\lambda_{M}\right)= {I^{-1}_{\mathbf{x}}\left(\hat\beta_{M},\,\hat\lambda_{M}\right)}= \left[ \begin{array}{cc} {\hat\sigma_{1}^{2}}&\hat\sigma_{12}\\\noalign{\medskip}\hat\sigma_{21}&{\hat\sigma_{2}^{2}}\end{array} \right]. $$
Again, once \({I^{-1}_{\mathbf {x}}\left (\hat \beta _{M},\,\hat \lambda _{M}\right)}\) is obtained, the asymptotic variance of the reliability function s(t0) can be calculated as the Cramér–Rao lower bound on the variance of any unbiased estimator of s(t0). That is,
$$ {} \begin{aligned} \text{Var}[\widehat{s(t_{\,0})}]&= 36\,{t^{2\,\hat\lambda_{M}}_{0}}\,{e^{-4\,\hat\beta_{M}{t^{\hat\lambda_{M}}_{0}}}}\left[{\hat\sigma_{2}^{2}}\,{\hat\beta_{M}^{2}} \left[\ln({t_{\,0}})\right]^{2}\right.\\ &\quad\left.+\,2\,\hat\beta_{M}\,\ln({t_{\,0}})\,{\hat\sigma_{12}}\,+\,{\hat\sigma_{1}^{2}} \right] {\left[1\,-\,{e^{-\hat\beta_{M} t^{\hat\lambda_{M}}_{0}}}\right]}^{2}. \end{aligned} $$
(22)
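A sketch of (22) follows. The closed form of s(t) used here, s(t) = e^{−2βt^λ}(3 − 2e^{−βt^λ}), is inferred from (6) and (22), since (4) is not reproduced in this section; the cross term carries the delta-method factor 2.

```python
import numpy as np

def s_hat_and_var(beta, lam, cov, t0):
    """MLE of s(t0) and its delta-method variance (22); cov is the 2x2
    inverse FIM. Assumes s(t) = exp(-2 b t^l) (3 - 2 exp(-b t^l)),
    inferred from (6) and (22)."""
    u = beta * t0**lam
    s = np.exp(-2.0 * u) * (3.0 - 2.0 * np.exp(-u))
    common = 36.0 * t0**(2.0 * lam) * np.exp(-4.0 * u) * (1.0 - np.exp(-u))**2
    var = common * (cov[0, 0] + 2.0 * beta * np.log(t0) * cov[0, 1]
                    + beta**2 * np.log(t0)**2 * cov[1, 1])
    return s, var
```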
Consequently, the asymptotic (1 − α)100% confidence intervals (ACIs) for \(\hat {\beta }_{M}\), \(\hat {\lambda }_{M}\), and \(\widehat {s(t_{\,0})}_{M}\) are given by
$$ \begin{aligned} {}&\left[\hat{\beta}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,{\hat\sigma_{1}}\right],\, \left[\hat{\lambda}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,{\hat\sigma_{2}}\right] \, \text{and}\\ &\qquad\left[\widehat{s(t_{\,0})}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,\sqrt{\text{Var}[\widehat{s(t_{\,0})}]}\right], \end{aligned} $$
(23)
respectively, where \(Z_{\frac {\alpha }{2}}\) is the \((1\,-\,{\frac {\alpha }{2}})\) percentile of the standard normal distribution.
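Finally, a sketch of the intervals in (23), reusing the variance sketch above:

```python
import numpy as np
from scipy.stats import norm

def acis(beta, lam, cov, t0, alpha=0.05):
    """Asymptotic (1 - alpha)100% CIs (23) for beta, lambda and s(t0);
    cov is the inverse FIM evaluated at the MLEs."""
    z = norm.ppf(1.0 - alpha / 2.0)
    s, var_s = s_hat_and_var(beta, lam, cov, t0)   # sketched above
    return ((beta - z * np.sqrt(cov[0, 0]), beta + z * np.sqrt(cov[0, 0])),
            (lam - z * np.sqrt(cov[1, 1]), lam + z * np.sqrt(cov[1, 1])),
            (s - z * np.sqrt(var_s), s + z * np.sqrt(var_s)))
```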