
Bayesian estimation of the reliability characteristic of Shanker distribution

Abstract

In this study, we discuss Bayesian estimation of the unknown parameter and the reliability characteristics of the Shanker distribution. The maximum likelihood estimate is calculated, and an approximate confidence interval for the unknown parameter is constructed from the asymptotic normality of the maximum likelihood estimator. Two bootstrap confidence intervals for the unknown parameter are also computed. Bayes estimates of the parameter and the reliability characteristics are obtained under the squared error loss function, using Lindley’s approximation and the Metropolis-Hastings algorithm; the latter is also used to construct highest posterior density intervals. The different methods are compared through a Monte Carlo simulation study. Finally, two real data sets are analyzed using the proposed methods.

Introduction

The Shanker distribution, a continuous one-parameter distribution, originates in the paper by Shanker [1] and has been found useful for modeling lifetime data from engineering and medical science. The author studied its mathematical and statistical properties, including its shape, moments, skewness, kurtosis, and reliability characteristics, and also discussed estimation of the unknown parameter together with applications to lifetime data from engineering and biomedical science.

The Shanker distribution with parameter θ has the probability density function (PDF) and cumulative distribution function (CDF), respectively,

$$ f_{X}(x) = \frac{\theta^{2}}{(\theta^{2} +1)}\, (\theta+x)\,e^{-\theta\,x},~~ x>0,~\theta>0, $$
(1)
$$ F_{X}(x) =1- \frac{(\theta^{2}+\theta\,x+1)}{(\theta^{2} +1)}\,e^{-\theta\,x},~~x>0. $$
(2)

This distribution is a mixture of the exponential(θ) and gamma(2,θ) distributions with mixing proportions \(\frac {\theta ^{2}}{\theta ^{2}+1}\) and \(\frac {1}{\theta ^{2}+1}\), respectively.

Then, the corresponding reliability function, hazard function, and mean residual life function of X are given, respectively, by

$$\begin{array}{@{}rcl@{}} R(t)=\frac{\left(\theta^{2}+\theta\,t+1\right)}{(\theta^{2} +1)}\,e^{-\theta\,t},~~t>0, \end{array} $$
(3)
$$\begin{array}{@{}rcl@{}} h(t)= \frac{\theta^{2} (\theta +t)}{\left(\theta^{2}+\theta\,t+1\right)},~~t>0. \end{array} $$
(4)
$$\begin{array}{@{}rcl@{}} m(t)=\frac{\left(\theta^{2}+\theta\,t+2\right)}{\theta\left(\theta^{2}+\theta\,t+1\right)},~~t>0. \end{array} $$
(5)

In recent years, several researchers have investigated inference problems for the Shanker distribution. Shanker [1] discussed its statistical properties together with some inferential issues, and provided three real-life data sets to illustrate its flexibility and potential over the Lindley and exponential distributions. Shanker and Fesshay [2] studied the modeling of lifetime data using the one-parameter Akash, Shanker, Lindley, and exponential distributions.

Recently, many authors have considered Bayesian estimation for univariate distributions. Rastogi and Merovci [3] presented a detailed study of Bayesian estimation for the parameters and reliability characteristics of the Weibull Rayleigh distribution. Chandrakant et al. [4] discussed various inference properties of a Weibull inverse exponential distribution, estimating the unknown parameters using classical and Bayesian techniques and analyzing two real data sets in support of the proposed estimation.

We obtain classical and Bayesian point estimators of the unknown parameter using the maximum likelihood and Bayesian methods of estimation; the Bayes estimates are derived under the squared error loss function. We also consider interval estimation: the approximate and two bootstrap confidence intervals (CIs) are derived, and the highest posterior density (HPD) interval is considered as well. Such point and interval estimators are important in many practical applications, including financial, industrial, agricultural, and reliability experiments.

The layout of the paper is as follows: In the “The maximum likelihood estimation” section, the maximum likelihood estimate (MLE) of the unknown parameter is obtained. The approximate and two bootstrap CIs are derived in the “Confidence intervals” section. In the “The Bayesian estimation” section, the Bayes estimates under the squared error loss function and the HPD interval are considered. The Monte Carlo simulation results are presented in the “Numerical comparison” section. The “Data analysis” section illustrates the proposed procedures using two real-life data sets. Finally, conclusions are given in the “Conclusions” section.

The maximum likelihood estimation

Suppose that X1,X2,…,Xn is a random sample of n independent units from the Shanker distribution defined in (1). The likelihood of θ under model (1) can be written as

$$\begin{array}{@{}rcl@{}} L(\theta)\propto \frac{\theta^{2\,n}\,e^{-\theta\, s}}{(\theta^{2} +1)^{n}}~\prod_{i=1}^{n} (\theta+ x_{i})\, \end{array} $$
(6)

where \(s=\sum _{i=1}^{n} x_{i}\). The logarithm of the likelihood (6) is

$$\begin{array}{@{}rcl@{}} \log L(\theta)\propto 2\,n\log \theta-n\log\left(\theta^{2} +1\right)-\theta\,s +\sum_{i=1}^{n} \log (\theta+x_{i}). \end{array} $$
(7)

Differentiating Eq. (7) with respect to θ and equating the derivative to zero, we obtain

$$\begin{array}{@{}rcl@{}} \frac{d \log L}{d \theta}=\frac{2\,n}{\theta}-\frac{2\,n\theta}{\theta^{2} +1}-s+\sum_{i=1}^{n}\frac{1}{(\theta+x_{i})}=0. \end{array} $$
(8)

The MLE \(\hat {\theta } \,\) of θ is a solution of Eq. (8). We observe that \(\hat {\theta }\) cannot be obtained in closed form, so Eq. (8) must be solved numerically to obtain the desired estimate. Some numerical technique, for instance the Newton-Raphson or Broyden method, may be used; we used the package “nleqslv” in the R software to solve for the unknown parameter θ. A sketch of this computation is given below.
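For illustration, a minimal R sketch of this step follows; it assumes the “nleqslv” package is installed, and the starting value 1/mean(x) is only a rough heuristic, not a prescription from the paper.

```r
## Minimal sketch: MLE of theta by solving the score equation (8).
## Assumes the "nleqslv" package is installed; the starting value
## 1/mean(x) is a heuristic initial guess.
shanker_mle <- function(x, start = 1 / mean(x)) {
  n <- length(x); s <- sum(x)
  score <- function(theta)   # left-hand side of Eq. (8)
    2 * n / theta - 2 * n * theta / (theta^2 + 1) - s + sum(1 / (theta + x))
  nleqslv::nleqslv(start, score)$x
}
```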

Note that (8) can be written in the form:

$$\begin{array}{@{}rcl@{}} \theta= h(\theta) \end{array} $$
(9)

where

$$\begin{array}{@{}rcl@{}} h(\theta)= 2\,n\,\left[\frac{2\,n\theta}{\theta^{2} +1}+s-\sum_{i=1}^{n}\frac{1}{(\theta+x_{i})}\right]^{-1} \end{array} $$

We design a simple iterative scheme to solve Eq. (9) for θ. Start with an initial guess θ^{(0)}, compute θ^{(1)}=h(θ^{(0)}), and, proceeding in this way, obtain θ^{(k)}=h(θ^{(k−1)}). Stop the iterative procedure when |θ^{(k)}−θ^{(k−1)}|<η, where η is some pre-assigned tolerance limit. A sketch of this scheme is given below.
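The following R sketch implements this fixed-point scheme; the default starting value, tolerance, and iteration cap are illustrative assumptions.

```r
## Sketch of the fixed-point iteration theta = h(theta) from Eq. (9).
shanker_mle_fp <- function(x, start = 1, eta = 1e-8, maxit = 1000) {
  n <- length(x); s <- sum(x)
  h <- function(theta)
    2 * n / (2 * n * theta / (theta^2 + 1) + s - sum(1 / (theta + x)))
  theta <- start
  for (k in seq_len(maxit)) {
    theta_new <- h(theta)
    if (abs(theta_new - theta) < eta) break   # stop at tolerance eta
    theta <- theta_new
  }
  theta_new
}
```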

Finally, using the invariance property of the MLE, the MLEs of R(t),h(t), and m(t), respectively, defined as \(\hat {R}(t), \hat {h}(t)\), and \(\hat {m}(t)\), are obtained as

$$\begin{array}{@{}rcl@{}} \hat{R}(t)= \frac{(\hat{\theta}^{2}+\hat{\theta}\,t+1)}{(\hat{\theta}^{2} +1)}\,e^{-\hat{\theta}\,t},~~ \hat{h}(t)=\frac{\hat{\theta}^{2} (\hat{\theta} +t)}{(\hat{\theta}^{2}+\hat{\theta}\,t+1)},~~~\text{and}~~~\hat{m}(t)=\frac{(\hat{\theta}^{2}+\hat{\theta}\,t+2)} {\hat{\theta}(\hat{\theta}^{2}+\hat{\theta}\,t+1)},~~t>0. \end{array} $$

In the next section, we obtain asymptotic intervals for θ using the asymptotic normality of the MLE.

Confidence intervals

Approximate CI

The asymptotic variance of \(\hat {\theta }\) for the Shanker distribution is given by \(\text{Var}(\hat {\theta })= [I(\hat {\theta })]^{-1}\), where \(I(\hat {\theta })=- \frac {d^{2} \log L}{d \theta ^{2}}\Bigg {|}_{\theta =\hat {\theta }}\) is the observed Fisher information.

Based on the asymptotic normality of the MLE, the sampling distribution of \(\frac {(\hat {\theta }-\theta)}{\sqrt {\text {Var}(\hat {\theta })}}\) can be approximated by a standard normal distribution. The symmetric 100(1−ξ)% approximate CI for the parameter θ is then \(\hat {\theta }\pm z_{\frac {\xi }{2}}\sqrt {\text {Var}(\hat {\theta })},\) where 0<ξ<1 and \(z_{\frac {\xi }{2}}\) denotes the upper \(\frac {\xi }{2}\)th percentile of the standard normal distribution. Using simulation, we can estimate the coverage probability

$$P\left[\left|\frac{(\hat{\theta}-\theta)}{\sqrt{\text{Var}(\hat{\theta})}}\right|\leq z_{\frac{\xi}{2}}\right].$$
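A short R sketch of this interval follows; it reuses shanker_mle() from the earlier sketch, and obs_info() encodes the observed information \(-\hat{l}_{\theta\theta}\) derived in the “Lindley’s approximation” section.

```r
## Observed Fisher information, -d^2 logL / d theta^2 (see the
## derivatives in the "Lindley's approximation" section).
obs_info <- function(x, theta) {
  n <- length(x)
  2 * n / theta^2 - 2 * n * (theta^2 - 1) / (theta^2 + 1)^2 +
    sum(1 / (theta + x)^2)
}

## Symmetric 100(1 - xi)% approximate (Wald) CI for theta.
shanker_wald_ci <- function(x, xi = 0.05) {
  theta_hat <- shanker_mle(x)
  se <- sqrt(1 / obs_info(x, theta_hat))
  theta_hat + c(-1, 1) * qnorm(1 - xi / 2) * se
}
```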

We construct some more CIs for the unknown parameter in the next subsection.

Bootstrap CIs

We propose CIs based on parametric bootstrap methods, since CIs based on asymptotic results are known not to perform well for small samples. There are three types of resampling plans: non-parametric, semi-parametric, and parametric; the bootstrap techniques depend on these resampling plans, see Efron [5]. We used the parametric bootstrap methods, where the parametric model for the data \(f\left (\underline {x};\cdot\right) \) is known up to the unknown parameter θ, so that the bootstrap data are sampled from \(f\left (\underline {x};\hat {\theta }\right),\) where \(\hat {\theta }\) is the MLE from the original data. Many studies dealt with the percentile bootstrap method (Boot-p), based on the idea of Efron and Tibshirani [6], and the bootstrap-t method (Boot-t), based on the idea of Hall [7, 8]; see, among others, Kundu and Joarder [9], who proposed two parametric bootstrap confidence intervals for the unknown parameter θ. The following procedures are used to obtain bootstrap samples for the two methods.

The following steps are required to construct a CI using the Boot-p method (an R sketch follows the steps):

  1. Draw a sample X1,X2,…,Xn from (1) and calculate the estimate \(\hat {\theta }\).

  2. Next, draw a bootstrap sample \(\left (X_{1}^{*},X_{2}^{*}, \ldots, X_{n}^{*}\right)\) using \(\hat {\theta }\). Derive the updated bootstrap estimate of θ, say \(\hat {\theta }^{*}\), from this sample.

  3. Repeat Step 2 B times.

  4. Let \(\widehat {F}(x)=P(\hat {\theta }^{*} \leq x)\) be the cumulative distribution function of \(\hat {\theta }^{*}\). Then, define \(\hat {\theta }_{Boot-p}(x)=\widehat {F}^{-1}(x)\) for a given x. The approximate 100(1−ξ)% CI for θ is given by

    \(\left (\hat {\theta }_{Boot-p}\left (\frac {\xi }{2}\right), ~\hat {\theta }_{Boot-p}\left (1-\frac {\xi }{2}\right)\right).\)
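A compact R sketch of the Boot-p procedure is given below; the sampler uses the exponential/gamma mixture representation noted after Eq. (2), and shanker_mle() is the fitting routine sketched earlier.

```r
## Sampler for the Shanker distribution via its mixture representation:
## exp(theta) w.p. theta^2/(theta^2+1), gamma(2, theta) otherwise.
r_shanker <- function(n, theta) {
  from_exp <- runif(n) < theta^2 / (theta^2 + 1)
  ifelse(from_exp, rexp(n, rate = theta), rgamma(n, shape = 2, rate = theta))
}

## Percentile bootstrap (Boot-p) CI for theta.
boot_p_ci <- function(x, B = 10000, xi = 0.05) {
  theta_hat <- shanker_mle(x)
  boot <- replicate(B, shanker_mle(r_shanker(length(x), theta_hat)))
  quantile(boot, c(xi / 2, 1 - xi / 2))  # empirical F^{-1} at xi/2, 1-xi/2
}
```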

The following steps are required to construct a CI using the Boot-t method (an R sketch follows the steps):

  1. Draw a sample X1,X2,…,Xn from (1) and obtain the estimate \(\hat {\theta }\).

  2. Next, draw a bootstrap sample \(\left (X_{1}^{*},X_{2}^{*}, \ldots, X_{n}^{*}\right)\) using \(\hat {\theta }\). Then, derive the estimates \(\hat {\theta }^{*}\) and \(\hat {V}(\hat {\theta }^{*})\).

  3. Compute the statistic

     \(T^{*}=\frac {\hat {\theta }^{*}-\hat {\theta }}{\sqrt {\hat {V}(\hat {\theta }^{*})}}.\)

  4. Repeat Steps 2 and 3 B times.

  5. Let \(\widehat {F}(x)=P(T^{*} \leq x)\) be the cumulative distribution function of \(T^{*}\). Define \(\hat {\theta }_{\mathrm {Boot-t}}(x)=\hat {\theta }+\sqrt {\hat {V}(\hat {\theta })}\widehat {F}^{-1}(x)\) for a given x. The approximate 100(1−ξ)% CI for θ is given by

    \(\left (\hat {\theta }_{\mathrm {Boot-t}}\left (\frac {\xi }{2}\right), ~\hat {\theta }_{\mathrm {Boot-t}}\left (1-\frac {\xi }{2}\right)\right)\).
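Analogously, a sketch of the Boot-t interval, reusing r_shanker(), shanker_mle(), and obs_info() from the earlier sketches, might be:

```r
## Bootstrap-t (Boot-t) CI for theta; 1/obs_info() estimates the variance.
boot_t_ci <- function(x, B = 10000, xi = 0.05) {
  n <- length(x)
  theta_hat <- shanker_mle(x)
  t_star <- replicate(B, {
    xb <- r_shanker(n, theta_hat)
    tb <- shanker_mle(xb)
    (tb - theta_hat) * sqrt(obs_info(xb, tb))   # studentized statistic T*
  })
  theta_hat + sqrt(1 / obs_info(x, theta_hat)) *
    quantile(t_star, c(xi / 2, 1 - xi / 2))
}
```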

The Bayes estimators of unknown parameter and reliability characteristics are obtained in the next section.

The Bayesian estimation

The Bayesian inference procedures have been developed under the usual squared error loss function (quadratic loss), which is symmetric and assigns equal importance to the losses due to overestimation and underestimation of equal magnitude. One may refer to Canfield [10] for a detailed exposition in this direction. The squared error loss function may simply be expressed as:

$$\begin{array}{@{}rcl@{}} \mathrm{Squared~ error~ loss}:~ L_{s}(\upsilon,\eta)&=&(\eta - \upsilon)^{2}. \end{array} $$

Suppose that X1,X2,…,Xn is a complete sample drawn from the model (1). We assume that θ has a gamma prior distribution, denoted G(a,b) with a>0 and b>0, whose probability density function is

$$\begin{array}{@{}rcl@{}} \pi(\theta)\propto \,\theta^{a-1}\,e^{-b\,\theta}~~\theta>0,\, ~a>0,\,b>0. \end{array} $$
(10)

After a simple calculation, the posterior distribution of θ is obtained as

$$\begin{array}{@{}rcl@{}} \pi(\theta\, |\,\underbar{x})\propto \frac{\theta^{2\,n+a-1}\,\,e^{-\theta\, (b+s)}}{(\theta^{2} +1)^{n}}\,\prod_{i=1}^{n} (\theta+ x_{i}) \,, \end{array} $$
(11)

where x̲=(x1,x2,…,xn).

Now, the corresponding Bayes estimate of θ against the loss function Ls is obtained as

$$\begin{array}{@{}rcl@{}} \tilde{\theta}_{s}=E[\theta\,|\,\underbar{x}\,] =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a}}{(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
$$\begin{array}{@{}rcl@{}} k=\int_{0}^{\infty}\frac{\theta^{2\,n+a-1}}{(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta. \end{array} $$

Next, the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function can be written as

$$\begin{array}{@{}rcl@{}} \tilde{R}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a-1}\,(\theta^{2}+\theta\,t+1)}{(\theta^{2} +1)^{(n+1)}}\,e^{-\theta\,(b+s+t)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
$$\begin{array}{@{}rcl@{}} \tilde{h}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a+1}(\theta+t)}{(\theta^{2}+\theta\,t+1) (\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
$$\begin{array}{@{}rcl@{}} \tilde{m}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a-2}\,(\theta^{2}+\theta\,t+2)}{(\theta^{2}+\theta\,t+1)\,(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta. \end{array} $$

It is clear that none of the above Bayes estimators has a simple closed form. Therefore, in the next subsections, we employ two popular approximation procedures to calculate approximate Bayes estimates of the parameter and the reliability characteristics. Although not used in the sequel, these one-dimensional posterior integrals can also be evaluated by direct numerical quadrature, as sketched below.
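As a supplementary check (not one of the paper’s two approximation methods), the posterior mean of θ can be computed by one-dimensional quadrature in R; working on the log scale keeps the product \(\prod_{i}(\theta+x_{i})\) numerically stable. This reuses shanker_mle() from the earlier sketch.

```r
## Supplementary sketch: posterior mean of theta by direct quadrature.
posterior_mean_theta <- function(x, a = 0, b = 0) {
  n <- length(x); s <- sum(x)
  log_kernel <- function(theta)            # log of the integrand in k
    (2 * n + a - 1) * log(theta) - theta * (b + s) - n * log(theta^2 + 1) +
      vapply(theta, function(t) sum(log(t + x)), numeric(1))
  c0 <- log_kernel(shanker_mle(x))         # rescale to avoid underflow
  num <- integrate(function(t) t * exp(log_kernel(t) - c0), 0, Inf)$value
  den <- integrate(function(t) exp(log_kernel(t) - c0), 0, Inf)$value
  num / den
}
```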

Lindley’s approximation

In the previous subsection, we obtained the Bayes estimates of θ under the squared error loss function. These estimates are in the form of a ratio of two integrals. Lindley [11] developed a procedure to approximate such ratios, and in this subsection we use this technique to obtain the approximate Bayes estimates. For illustration, consider the ratio of integrals \(I(\underbar{x})\), where

$$\begin{array}{@{}rcl@{}} I(\underbar{x})=\frac{\int_{\theta}u(\theta) ~e^{l(\theta)+\rho(\theta)}~ d\theta}{ \int_{\theta}~e^{l(\theta)+\rho(\theta)}~ d\theta}\,, \end{array} $$
(12)

where u(θ) is a function of θ only, l(θ) is the log-likelihood, and ρ(θ)= logπ(θ). Let \(\hat {\theta }\) denote the MLE of θ. Applying Lindley’s approximation procedure, I(x̲) can be written as

$$\begin{array}{@{}rcl@{}} I(\underbar{x})= u(\hat{\theta})+ 0.5\left[\left(\hat{u}_{\theta \theta}+2\,\hat{u}_{\theta}~\hat{\rho_{\theta}}\right)~\hat{\sigma}_{\theta \theta}+\hat{u}_{\theta}~\hat{\sigma}^{2}_{\theta \theta}~\hat{l}_{\theta\theta\theta}\right], \end{array} $$

where \(u_{\theta\theta}\) denotes the second derivative of the function u(θ) with respect to θ, and \(\hat {u}_{\theta \theta }\) represents this derivative evaluated at \(\theta =\hat {\theta }\). All other quantities appearing in the above expression of I(x̲) are interpreted as follows:

$$\begin{array}{@{}rcl@{}} \hat{l}_{\theta \theta}&=&\frac{\partial^{2}l}{\partial {\theta}^{2} }\Bigg{|}_{\theta=\hat{\theta}}=-\,\frac{2\,n}{\hat{\theta}^{2}}+\frac{2\,n\,(\hat{\theta}^{2}-1)}{(\hat{\theta}^{2}+1)^{2}}-\sum_{i=1}^{n} \frac{1}{(\hat{\theta}+x_{i})^{2}},~~~~~~\hat{\sigma}_{\theta \theta}=-\frac{1}{\hat{l}_{\theta \theta}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{l}_{\theta \theta \theta}&=&\frac{\partial^{3}l}{\partial {\theta}^{3} }\Bigg{|}_{\theta=\hat{\theta}}= \,\frac{4\,n}{\hat{\theta}^{3}}-\frac{4\,n\,\hat{\theta}\,(\hat{\theta}^{2}-3)}{(\hat{\theta}^{2}+1)^{3}}+2\sum_{i=1}^{n} \frac{1}{(\hat{\theta}+x_{i})^{3}},~~~~~\hat{\rho}_{\theta}=\frac{(a-1)}{\hat{\theta}}-b. \end{array} $$

Now, to obtain the Bayes estimate of θ under the loss function Ls, we have

$$\begin{array}{@{}rcl@{}} u(\theta)=\theta,~~u_{\theta}=1,~~u_{\theta \theta}=0,~~ \tilde{\theta}_{s}=\hat{\theta}+ 0.5\left[\left(\hat{u}_{\theta \theta}+2\,\hat{u}_{\theta}~\hat{\rho_{\theta}}\right)~\hat{\sigma}_{\theta \theta}+\hat{u}_{\theta}~\hat{\sigma}^{2}_{\theta \theta}~\hat{l}_{\theta\theta\theta}\right]{.} \end{array} $$
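A short R sketch of this computation, reusing shanker_mle() and the derivatives above, might be:

```r
## Sketch: Lindley approximation to the Bayes estimate of theta under
## squared error loss (u = theta, so u' = 1 and u'' = 0).
lindley_theta <- function(x, a = 0, b = 0) {
  n <- length(x)
  th <- shanker_mle(x)
  l2 <- -2 * n / th^2 + 2 * n * (th^2 - 1) / (th^2 + 1)^2 -
    sum(1 / (th + x)^2)
  l3 <- 4 * n / th^3 - 4 * n * th * (th^2 - 3) / (th^2 + 1)^3 +
    2 * sum(1 / (th + x)^3)
  sigma <- -1 / l2                  # sigma_hat_{theta theta}
  rho <- (a - 1) / th - b           # rho_hat_theta for the gamma prior
  th + 0.5 * (2 * rho * sigma + sigma^2 * l3)
}
```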

In a similar manner, we can derive the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function.

In the next subsection, we use the Metropolis-Hastings (MH) algorithm to compute further estimates of the unknown parameter. One may refer to Metropolis et al. [12] and Hastings [13] for various applications of this method.

Metropolis-Hastings algorithm

We generate samples from the posterior distribution (11) using a normal proposal distribution for θ. The following steps generate posterior samples using this algorithm.

Step 1: Choose an initial guess of θ and call it θ0.

Step 2: Generate a candidate θ′ from the proposal \(N(\theta_{n-1},\sigma^{2})\) distribution.

Step 3: Compute \(h = \frac {\pi (\theta '\vert x)}{\pi (\theta _{n-1}\vert x)}\).

Step 4: Then, generate a sample u from the uniform U(0,1) distribution.

Step 5: If u≤h, then set \(\theta_{n}=\theta'\); otherwise, set \(\theta_{n}=\theta_{n-1}\).

Step 6: Repeat Steps 2–5 Q times to collect an adequate number of replicates.

To avoid dependence of the sample on the initial value, the first few samples are discarded (burn-in), and to reduce the correlation between subsequent samples, thinning (lagging) may be used; the resulting sample is then approximately independent. In this way, we generate a sample from the posterior distribution of θ. Let Q denote the total number of generated samples and Q0 the initial burn-in sample size. A sketch of the sampler is given below.
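The following R sketch implements these steps; the proposal standard deviation sd and the MLE starting value are tuning assumptions, not prescriptions from the paper, and shanker_mle() is the routine sketched earlier.

```r
## Sketch: random-walk Metropolis-Hastings sampler for the posterior (11).
log_post <- function(theta, x, a = 0, b = 0) {
  if (theta <= 0) return(-Inf)        # posterior support is theta > 0
  n <- length(x); s <- sum(x)
  (2 * n + a - 1) * log(theta) - theta * (b + s) -
    n * log(theta^2 + 1) + sum(log(theta + x))
}

mh_sample <- function(x, Q = 10000, Q0 = 2000, sd = 0.1, a = 0, b = 0) {
  theta <- numeric(Q)
  theta[1] <- shanker_mle(x)          # start the chain at the MLE
  for (i in 2:Q) {
    prop <- rnorm(1, theta[i - 1], sd)
    ## log acceptance ratio; the normal proposal is symmetric and cancels
    logh <- log_post(prop, x, a, b) - log_post(theta[i - 1], x, a, b)
    theta[i] <- if (log(runif(1)) <= logh) prop else theta[i - 1]
  }
  theta[-(1:Q0)]                      # discard the burn-in samples
}
```

The mean of the returned draws gives the estimate \(\hat{\theta}_{MH,s}\) displayed below, and averaging R(t) evaluated at each draw gives \(\hat{R}_{MH,s}(t)\).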

Finally, we observe that the associated Bayes estimate of θ under the squared error loss function is given by

$$\hat{\theta}_{MH,s} = \frac{1}{Q - Q_{0}} \sum_{i = Q_{0} + 1}^{Q} \theta_{i}.$$

Similarly, the associated Bayes estimate of R(t) under the squared error loss function is given by

$$\hat{R}_{MH,s}(t) = \frac{1}{Q - Q_{0}} \sum_{i = Q_{0} + 1}^{Q} \frac{\left(\theta_{i}^{2}+\theta_{i}\,t+1\right)}{\left(\theta_{i}^{2} +1\right)}\,e^{-\theta_{i}\,t}.$$

In a similar manner, we can derive the Bayes estimates of h(t) and m(t) with respect to the squared error loss function.

It should be noticed that the 100(1−ξ)% HPD interval for the unknown parameter θ can easily be constructed from the MH samples, following the idea of Chen and Shao [14]. First, arrange the sample in increasing order as \(\theta_{(1)} \leq \theta_{(2)} \leq \cdots \leq \theta_{(Q)}\). The 100(1−ξ)% credible intervals for θ are then \(\left(\theta_{(1)},\theta_{(\lfloor(1-\xi)Q\rfloor+1)}\right),\ldots,\left(\theta_{(\lfloor\xi Q\rfloor)},\theta_{(Q)}\right)\), where ⌊z⌋ denotes the greatest integer less than or equal to z. Among all such credible intervals, the shortest one is the HPD interval; a sketch of this search is given below.
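A minimal R sketch of this search, applied to the draws returned by mh_sample() above, might be:

```r
## Sketch: Chen-Shao style HPD interval from posterior draws.
hpd_interval <- function(draws, xi = 0.05) {
  srt <- sort(draws)
  Q <- length(srt)
  m <- floor((1 - xi) * Q)               # gap spanning ~(1 - xi)Q draws
  widths <- srt[(m + 1):Q] - srt[1:(Q - m)]
  j <- which.min(widths)                 # shortest candidate interval
  c(lower = srt[j], upper = srt[j + m])
}

## Example usage (hypothetical data vector x):
## draws <- mh_sample(x); hpd_interval(draws)
```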

Numerical comparison

In order to evaluate the performance of the point estimates and the different methods of constructing CIs and the HPD interval discussed in the preceding sections, a Monte Carlo simulation study was conducted; the results are presented in this section. For the simulation study, we took θ=0.5 and n=30,50,70,90,110; the different choices of n allow us to see how the MLE and Bayes estimates perform as the sample size changes. Informative and non-informative priors were used to obtain the Bayes estimates and HPD intervals, with hyper-parameters a=4, b=8 for the informative prior and a=0, b=0 for the non-informative prior; the informative values were chosen so that the prior expectation matches the true parameter value. To obtain the Bayes estimates and HPD intervals using the MH algorithm, we set Q=10000 replications and Q0=2000 as the initial burn-in sample. In all cases, Bayes estimates are computed under the squared error loss function. Under these settings, the average estimates and the estimated mean squared errors (MSEs) of the different estimates, based on 10000 simulated complete samples from the Shanker distribution, are listed in Tables 1, 2, 3, and 4. For comparison purposes, the coverage probabilities and average lengths of the various CIs and HPD intervals were also computed from the simulated samples at the 95% nominal level. To obtain the bootstrap CIs, we set B=10000 replications. The average lengths and the corresponding coverage probabilities are reported in Table 5.

Table 1 Average and MSEs values of all estimates of θ for different choices of n
Table 2 Average and MSEs values of all estimates of R(t) for different choices of n and T
Table 3 Average and MSEs values of all estimates of h(t) for different choices of n and T
Table 4 Average and MSEs values of all estimates of m(t) for different choices of n and T
Table 5 Estimated coverage probabilities (in%) and average lengths of interval estimates of θ for different choices of n

All computations were conducted in R software (R i386 3.2.2), and the R codes can be obtained from the author upon request. Based on the tabulated average estimates, estimated MSEs, coverage probabilities, and average lengths, the following conclusions can be drawn from Tables 1, 2, 3, 4, and 5.

  1. The average values of the MLE and Bayes estimates are all close to the true parameter for the different choices of n. The performance of the MLE and Bayes estimates of the unknown parameter is also quite satisfactory in terms of their MSEs, with the Bayes estimators having smaller MSE values than the MLE of θ.

  2. The performance of the Lindley estimates is very similar to that of the corresponding Bayes estimates obtained using the MH algorithm.

  3. As expected, the MSE values of all estimates decrease as the sample size n grows.

  4. The coverage probabilities of the approximate, Boot-p, and Boot-t CIs are better than that of the HPD interval and are close to the nominal level. However, the HPD interval provides a good balance between coverage probability and average length; therefore, in general, we recommend the HPD interval. If one wants to guarantee that the coverage probability is close to the nominal level, and the interval length is not the major concern, then the approximate CI and Boot-t CI are recommended in most cases.

  5. Comparing the Boot-p and Boot-t CIs, the Boot-t CI performs marginally better in terms of average lengths and coverage probabilities.

Data analysis

For illustrative purposes, we analyze two real data sets that have recently been considered by Ghitany et al. [15]. They fitted these real data sets to the Shanker distribution, found that it fits both data sets reasonably well, and obtained useful inference for the prescribed model.

Data set 1: The first data set represents the waiting times (in minutes) before service of 100 bank customers, examined and analyzed by Ghitany et al. [15] for fitting the Lindley distribution. The data are as follows:

Based on the original data, the MLE of the unknown parameter θ was evaluated from Eq. (8). We also computed the different Bayes estimates under the squared error loss function using Lindley’s approximation and the MH algorithm. Since we did not have any prior information on a and b, we assumed the non-informative prior, i.e., a=b=0. The MLE and all Bayes estimates of θ are displayed in Table 6, together with the approximate CI, bootstrap CIs, and HPD interval. From Table 6, it is observed that all the estimates are close to each other, and the HPD interval performs best among all the intervals in terms of length. The MLEs and Bayes estimates of the reliability characteristics are given in Table 7.

Table 6 Point and interval estimates of θ from data set 1
Table 7 Point estimates of R(t),h(t) and m(t) for different choices of T from data set 1

Data set 2: This data set consists of the strength data of glass for aircraft windows reported by Fuller et al. [16]. The data are as follows:

Table 8 shows the MLE and Bayes estimates of the unknown parameter θ, together with the 95% approximate CI, bootstrap CIs, and HPD interval. All the Bayes estimates and HPD intervals were evaluated under the non-informative prior distribution. Similar to the results from data set 1, we observe that the ML and Bayes procedures give similar values for θ, and the HPD interval has the shortest length among all the interval estimates. In Table 9, the classical and Bayes estimates of the reliability characteristics are calculated.

Table 8 Point and interval estimates of θ from data set 2
Table 9 Point estimates of R(t),h(t), and m(t) for different choices of T from data set 2

Conclusions

In this paper, we have discussed classical and Bayesian inference for the unknown parameter and reliability characteristics of the Shanker distribution. We have provided the MLE and Bayes estimates together with the corresponding CIs and HPD interval, and a Monte Carlo simulation study has been conducted to compare the performance of the different methods; its results have been reported comprehensively. The methods can be extended to progressive type-I hybrid censoring and other censoring schemes, and we believe that more work is needed along these directions.

Abbreviations

BE: Bayes estimate

CDF: Cumulative distribution function

CIs: Confidence intervals

HPD: Highest posterior density

ML: Maximum likelihood

MLE: ML estimate

MSE: Mean squared error

PDF: Probability density function

References

  1. Shanker, R.: Shanker distribution and its applications. Int. J. Stat. Appl. 5(6), 338–348 (2015).


  2. Shanker, R., Fesshay, H.: On modeling of lifetime data using one parameter Akash, Shanker, Lindley and exponential distributions. Biom. Biostat. Int. J. 3(6), 00084 (2016).


  3. Rastogi, M. K., Merovci, F.: Bayesian estimation for parameters and reliability characteristic of the Weibull Rayleigh distribution. J. King Saud Univ. Sci. 30(4), 472–478 (2018).


  4. Chandrakant, Rastogi, M. K., Tripathi, Y. M.: On a Weibull-inverse exponential distribution. Ann. Data Sci. 5(2), 209–234 (2018).


  5. Efron, B.: The Jackknife, the Bootstrap and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia (1982).

  6. Efron, B., Tibshirani, R.: Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1, 54–77 (1986).


  7. Hall, P.: Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 16, 927–953 (1988).


  8. Hall, P.: The Bootstrap and Edgeworth Expansion. Springer-Verlag, New York (1992).


  9. Kundu, D., Joarder, A.: Analysis of type-II progressively hybrid censored data. Comput. Stat. Data Anal. 50, 2509–2528 (2006).


  10. Canfield, R. V.: A Bayesian approach to reliability estimation using a loss function. IEEE Trans. Reliab. 19(1), 13–16 (1970).


  11. Lindley, D. V.: Approximate Bayesian methods. Trabajos de Estadistica. 31, 223–237 (1980).


  12. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E.: Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953).


  13. Hastings, W. K.: Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 57, 97–109 (1970).


  14. Chen, M. H., Shao, Q. M.: Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 8, 69–92 (1999).


  15. Ghitany, M. E., Atieh, B., Nadarajah, S.: Lindley distribution and its application. Math. Comput. Simul. 78, 493–506 (2008).


  16. Fuller, E. J., Frieman, S., Quinn, J., Quinn, G., Carter, W.: Fracture mechanics approach to the design of glass aircraft windows: A case study. SPIE Proc. 2286, 419–430 (1994).



Acknowledgments

Not applicable.

Funding

There are no sources of funding for the research.

Availability of data and materials

Not applicable.

Author information


Contributions

The author read and approved the final manuscript.

Corresponding author

Correspondence to Tahani A. Abushal.

Ethics declarations

Competing interests

The author declares no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Abushal, T.A. Bayesian estimation of the reliability characteristic of Shanker distribution. J Egypt Math Soc 27, 30 (2019). https://doi.org/10.1186/s42787-019-0033-x

