 Original research
 Open Access
Bayesian estimation of the reliability characteristic of Shanker distribution
Journal of the Egyptian Mathematical Society volume 27, Article number: 30 (2019)
Abstract
In this study, we discuss Bayesian estimation of the unknown parameter and reliability characteristics of the Shanker distribution. The maximum likelihood estimate is calculated. The approximate confidence interval of the unknown parameter is constructed based on the asymptotic normality of the maximum likelihood estimator. Two bootstrap confidence intervals for the unknown parameter are also computed. Bayesian estimates of the parameter and reliability characteristics under the squared error loss function are obtained. Lindley's approximation and the Metropolis-Hastings algorithm are applied to obtain the Bayes estimates. In consequence, we also construct the highest posterior density intervals. A numerical comparison of the different methods is made through a Monte Carlo simulation study. Finally, two real data sets are analyzed using the proposed methods.
Introduction
In the literature, a continuous one-parameter distribution named the "Shanker distribution" has its origin in the paper by Shanker [1]. The Shanker distribution has been found useful for modeling lifetime data from engineering and medical science. The author studied its mathematical and statistical properties, discussing its shape, moments, skewness, kurtosis, and reliability characteristics, and considered estimation of the unknown parameter together with applications to lifetime data from engineering and biomedical science.
The Shanker distribution with parameter θ has the probability density function (PDF) and cumulative distribution function (CDF), respectively,
This distribution is a mixture of the exponential (θ) and gamma (2,θ) distributions with mixing proportions \(\frac {\theta ^{2}}{\theta ^{2}+1}\) and \(\frac {1}{\theta ^{2}+1}\), respectively.
Then, the corresponding reliability function, hazard function, and mean residual life function of X are given, respectively, by
In recent years, several researchers have investigated inference problems for the Shanker distribution. Shanker [1] discussed its statistical properties as well as some inferential issues, and provided three real-life data sets to illustrate its flexibility and potential over the Lindley and exponential distributions. Shanker and Fesshay [2] studied the modeling of lifetime data using the one-parameter Akash, Shanker, Lindley, and exponential distributions.
Recently, many authors have considered Bayesian estimation for univariate distributions. Rastogi and Merovci [3] presented a detailed study of Bayesian estimation for the parameters and reliability characteristics of the Weibull Rayleigh distribution. Chandrakant et al. [4] discussed various inferential properties of a Weibull inverse exponential distribution; the authors estimated the unknown parameters using classical and Bayesian techniques, and two real data sets were analyzed in support of the proposed estimation.
We obtain different classical and Bayesian point estimators of the unknown parameter using the maximum likelihood and Bayesian methods of estimation. The Bayes estimates of the unknown parameter are derived under the squared error loss function. For interval estimation, the approximate and two bootstrap confidence intervals (CIs) are derived, and the highest posterior density (HPD) interval is considered as well. Such point and interval estimators are important in many practical applications, including financial, industrial, agricultural, and reliability experiments.
The layout of the paper is as follows: In "The maximum likelihood estimation" section, the maximum likelihood estimate (MLE) of the unknown parameter is obtained. The approximate and two bootstrap CIs are derived in the "Confidence intervals" section. In "The Bayesian estimation" section, the Bayes estimates under the squared error loss function and the HPD interval are considered. The Monte Carlo simulation results are presented in the "Numerical comparison" section. The "Data analysis" section illustrates the proposed procedures using real-life data. Finally, the conclusions are given in the "Conclusions" section.
The maximum likelihood estimation
Suppose that X_{1},X_{2},…,X_{n} is a random sample of n independent units obtained from a Shanker distribution as defined in (1). The likelihood of θ for the model (1) can be described as
where \(s=\sum _{i=1}^{n} x_{i}\). The logarithm of the likelihood (6) is
Differentiating Eq. (7) with respect to θ and equating the derivative to zero, we have
The MLE \(\hat {\theta }\) of θ is a solution of Eq. (8). We observe that \(\hat {\theta }\) cannot be obtained in closed form, and Eq. (8) has to be solved numerically to obtain the desired estimate. Some numerical technique, for instance, the Newton-Raphson or Broyden method, may be used. We used the package "nleqslv" in the R software to find the solution for the unknown parameter θ.
Note that (8) can be written in the form:
where
We can design a simple iterative scheme to solve Eq. (9) for θ. Start with an initial guess of θ, say θ^{(0)}, then find θ^{(1)}=h(θ^{(0)}) and, proceeding in this way, obtain θ^{(k)}=h(θ^{(k−1)}). Stop the iterative procedure when |θ^{(k)}−θ^{(k−1)}|<η, where η is a preassigned tolerance limit.
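For illustration, the likelihood equation (8) can also be solved by bisection on the score function implied by the pdf in (1), \(f(x;\theta)=\frac{\theta^2}{\theta^2+1}(\theta+x)e^{-\theta x}\); the score is strictly decreasing in θ, so the root is unique. A minimal Python sketch, with an illustrative data vector that is not from the paper:

```python
def score(theta, x):
    """d/d(theta) of the Shanker log-likelihood (7); strictly decreasing in theta."""
    n, s = len(x), sum(x)
    return (2 * n / theta - 2 * n * theta / (theta**2 + 1)
            + sum(1.0 / (theta + xi) for xi in x) - s)

def shanker_mle(x, lo=1e-6, hi=100.0, tol=1e-10):
    """Solve score(theta) = 0 by bisection; score > 0 near 0 and < 0 for large theta."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid, x) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x = [0.5, 1.2, 2.3, 0.8, 1.5, 3.1, 0.9, 2.0]   # illustrative data, not from the paper
theta_hat = shanker_mle(x)
```

The tolerance on the bracketing interval plays the role of the preassigned limit η above.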
Finally, using the invariance property of the MLE, the MLEs of R(t),h(t), and m(t), respectively, defined as \(\hat {R}(t), \hat {h}(t)\), and \(\hat {m}(t)\), are obtained as
In the next section, we obtain asymptotic intervals for θ using the asymptotic normality of the MLE.
Confidence intervals
Approximate CI
The asymptotic variance of \(\hat {\theta }\) for the Shanker distribution is given by \(Var(\hat {\theta })= [I_{X}(\hat {\theta })]^{-1}\), where \(I_{X}(\hat {\theta })\) is the observed Fisher information, given by \(I_{X}(\hat {\theta })= -\frac {d^{2} \log L}{d \theta ^{2}}\Big \vert _{\theta =\hat {\theta }}\).
Since the Shanker distribution belongs to the one-parameter exponential family of distributions, the sampling distribution of \(\frac {\hat {\theta }-\theta }{\sqrt {\text {Var}(\hat {\theta })}}\) can be approximated by a standard normal distribution. The symmetric 100(1−ξ)% approximate CI for the parameter θ is then given by \(\hat {\theta }\pm z_{\frac {\xi }{2}}\sqrt {\text {Var}(\hat {\theta })},\) where 0<ξ<1 and \(z_{\frac {\xi }{2}}\) denotes the upper \(\frac {\xi }{2}\)th percentile of the standard normal distribution. Using simulation, we can estimate the coverage probability of this interval.
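A minimal sketch of this interval, assuming the second derivative of the log-likelihood implied by (1); the data vector is illustrative, and z = 1.959964 is the upper 2.5% point of N(0,1) used for a 95% CI:

```python
import math

def score(theta, x):
    """First derivative of the Shanker log-likelihood with respect to theta."""
    n, s = len(x), sum(x)
    return (2 * n / theta - 2 * n * theta / (theta**2 + 1)
            + sum(1.0 / (theta + xi) for xi in x) - s)

def loglik_dd(theta, x):
    """Second derivative of the Shanker log-likelihood."""
    n = len(x)
    return (-2 * n / theta**2
            - 2 * n * (1 - theta**2) / (theta**2 + 1)**2
            - sum(1.0 / (theta + xi)**2 for xi in x))

def shanker_mle(x, lo=1e-6, hi=100.0):
    # Bisection: the score decreases from +infinity to -sum(x)
    while hi - lo > 1e-10:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid, x) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x = [0.5, 1.2, 2.3, 0.8, 1.5, 3.1, 0.9, 2.0]   # illustrative data
theta_hat = shanker_mle(x)
var_hat = -1.0 / loglik_dd(theta_hat, x)        # [I_X(theta_hat)]^{-1}
z = 1.959964                                    # upper 2.5% point of N(0,1)
ci = (theta_hat - z * math.sqrt(var_hat), theta_hat + z * math.sqrt(var_hat))
```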
We construct some more CIs for the unknown parameter in the next subsection.
Bootstrap CIs
We propose to use CIs based on the parametric bootstrap methods, since it is known that CIs based on asymptotic results do not perform well for small samples. There are three types of resampling plans: nonparametric, semiparametric, and parametric; the bootstrap techniques depend on these three resampling plans, see Efron [5]. We use the parametric bootstrap, where the parametric model for the data \(f\left (\underline {x};\cdot \right)\) is known up to the unknown parameter θ, so that the bootstrap data are sampled from \(f\left (\underline {x};\hat {\theta }\right),\) where \(\hat {\theta }\) is the MLE from the original data. Many studies have dealt with the percentile bootstrap method (Boot-p), based on the idea of Efron and Tibshirani [6], and the bootstrap-t method (Boot-t), based on the ideas of Hall [7, 8]; see, for example, Kundu and Joarder [9], who proposed two parametric bootstrap confidence intervals for the unknown parameter θ. The following procedures are used to obtain bootstrap samples for the two methods.
The following steps are required to construct a CI using the Boot-p method:

1. Draw a sample X_{1},X_{2},…,X_{n} from (1) and calculate the estimate \(\hat {\theta }\).

2. Next, draw a bootstrap sample \(\left (X_{1}^{*},X_{2}^{*}, \ldots, X_{n}^{*}\right)\) using \(\hat {\theta }\). Derive the updated bootstrap estimate of θ, say \(\hat {\theta }^{*}\), using this sample.

3. Repeat Step 2 B times.

4. Let \(\widehat {F}(x)=P(\hat {\theta }^{*} \leq x)\) be the cumulative distribution function of \(\hat {\theta }^{*}\). Then, define \(\hat {\theta }_{\text{Boot-p}}(x)=\widehat {F}^{-1}(x)\) for a given x. The approximate 100(1−ξ)% CI for θ is given by

\(\left (\hat {\theta }_{\text{Boot-p}}\left (\frac {\xi }{2}\right), ~\hat {\theta }_{\text{Boot-p}}\left (1-\frac {\xi }{2}\right)\right).\)
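The Boot-p steps above can be sketched as follows, drawing parametric bootstrap samples through the exponential/gamma mixture representation of (1); the sample size, B, and the seed are illustrative choices:

```python
import random

def score(theta, x):
    n, s = len(x), sum(x)
    return (2 * n / theta - 2 * n * theta / (theta**2 + 1)
            + sum(1.0 / (theta + xi) for xi in x) - s)

def shanker_mle(x, lo=1e-6, hi=100.0):
    # Bisection on the strictly decreasing score function
    while hi - lo > 1e-8:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid, x) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def rshanker(theta, n, rng):
    """Sample via the stated mixture: Exp(theta) w.p. theta^2/(theta^2+1), else Gamma(2, theta)."""
    p = theta**2 / (theta**2 + 1)
    return [rng.expovariate(theta) if rng.random() < p
            else rng.expovariate(theta) + rng.expovariate(theta)
            for _ in range(n)]

rng = random.Random(2019)
x = rshanker(0.5, 50, rng)               # Step 1: original sample, true theta = 0.5
theta_hat = shanker_mle(x)
B = 500
boot = sorted(shanker_mle(rshanker(theta_hat, len(x), rng))   # Steps 2-3
              for _ in range(B))
xi_level = 0.05
lo_ci = boot[int(B * xi_level / 2)]      # Step 4: percentile endpoints
hi_ci = boot[int(B * (1 - xi_level / 2)) - 1]
```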
The following steps are required to construct a CI using the Boot-t method:

1. Draw a sample X_{1},X_{2},…,X_{n} from (1) and obtain the estimate \(\hat {\theta }\).

2. Next, draw a bootstrap sample \(\left (X_{1}^{*},X_{2}^{*}, \ldots, X_{n}^{*}\right)\) using \(\hat {\theta }\). Then, derive the estimates \(\hat {\theta }^{*}\) and \(\hat {V}(\hat {\theta }^{*})\).

3. Obtain the statistic T^{∗} defined as

\(T^{*}=\frac {\hat {\theta }^{*}-\hat {\theta }}{\sqrt {\hat {V}(\hat {\theta }^{*})}}.\)

4. Repeat Steps 2–3 B times.

5. Let \(\widehat {F}(x)=P(T^{*} \leq x)\) be the cumulative distribution function of T^{∗}. Define \(\hat {\theta }_{\text{Boot-t}}(x)=\hat {\theta }+\sqrt {\hat {V}(\hat {\theta }^{*})}\,\widehat {F}^{-1}(x)\) for a given x. The approximate 100(1−ξ)% CI for θ is given by

\(\left (\hat {\theta }_{\text{Boot-t}}\left (\frac {\xi }{2}\right), ~\hat {\theta }_{\text{Boot-t}}\left (1-\frac {\xi }{2}\right)\right)\).
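A sketch of the Boot-t steps, with the variance estimate taken from the observed information \(\hat V = [-l''(\hat\theta)]^{-1}\); following a common variant, the final endpoints use \(\hat V\) evaluated on the original sample. All numerical settings below are illustrative:

```python
import math, random

def score(theta, x):
    n, s = len(x), sum(x)
    return (2 * n / theta - 2 * n * theta / (theta**2 + 1)
            + sum(1.0 / (theta + xi) for xi in x) - s)

def loglik_dd(theta, x):
    n = len(x)
    return (-2 * n / theta**2 - 2 * n * (1 - theta**2) / (theta**2 + 1)**2
            - sum(1.0 / (theta + xi)**2 for xi in x))

def shanker_mle(x, lo=1e-6, hi=100.0):
    while hi - lo > 1e-8:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid, x) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def rshanker(theta, n, rng):
    # Mixture sampler: Exp(theta) w.p. theta^2/(theta^2+1), else Gamma(2, theta)
    p = theta**2 / (theta**2 + 1)
    return [rng.expovariate(theta) if rng.random() < p
            else rng.expovariate(theta) + rng.expovariate(theta)
            for _ in range(n)]

rng = random.Random(2019)
x = rshanker(0.5, 50, rng)                # Step 1
theta_hat = shanker_mle(x)
v_hat = -1.0 / loglik_dd(theta_hat, x)    # observed-information variance estimate
t_stats = []
for _ in range(500):                      # Steps 2-4: studentized bootstrap statistics
    xb = rshanker(theta_hat, len(x), rng)
    tb = shanker_mle(xb)
    vb = -1.0 / loglik_dd(tb, xb)
    t_stats.append((tb - theta_hat) / math.sqrt(vb))
t_stats.sort()
lo_ci = theta_hat + math.sqrt(v_hat) * t_stats[int(500 * 0.025)]      # Step 5
hi_ci = theta_hat + math.sqrt(v_hat) * t_stats[int(500 * 0.975) - 1]
```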
The Bayes estimators of unknown parameter and reliability characteristics are obtained in the next section.
The Bayesian estimation
The Bayesian inference procedures have been developed under the usual squared error loss function (quadratic loss), which is symmetric and assigns equal importance to losses due to overestimation and underestimation of equal magnitude. One may refer to Canfield [10] for a detailed exposition in this direction. The mathematical form of the squared error loss function may be expressed as:
Suppose that X_{1},X_{2},…,X_{n} is a complete sample drawn from the model (1). We assume that θ has a gamma prior distribution, denoted G(a,b) with a>0 and b>0, whose probability density function is written as
After a simple calculation, the posterior distribution of θ is obtained as
where x̲=(x_{1},x_{2},…,x_{n}).
Now, the corresponding Bayes estimate of θ against the loss function L_{s} is obtained as
Next, the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function can be written as
It is clear that the above Bayes estimators do not have simple closed forms. Therefore, in the following subsections, we employ two popular approximation procedures to compute the approximate Bayes estimates of the parameter and reliability characteristics.
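As a sanity check on the approximations that follow (not a method used in the paper), the ratio of integrals defining the posterior mean can be evaluated by direct numerical quadrature of the unnormalized posterior \(\pi(\theta \mid \underline{x}) \propto \theta^{2n+a-1}(\theta^2+1)^{-n}\prod_i(\theta+x_i)\,e^{-(b+s)\theta}\). The data and hyperparameters below are illustrative:

```python
import math

def log_post(theta, x, a, b):
    """Log of the unnormalized posterior under the G(a, b) prior."""
    n, s = len(x), sum(x)
    return ((2 * n + a - 1) * math.log(theta) - n * math.log(theta**2 + 1)
            + sum(math.log(theta + xi) for xi in x) - (b + s) * theta)

def bayes_estimate(u, x, a, b, grid=2000, upper=10.0):
    """Posterior mean of u(theta) as a ratio of two integrals (midpoint rule)."""
    thetas = [upper * (i + 0.5) / grid for i in range(grid)]
    lp = [log_post(t, x, a, b) for t in thetas]
    m = max(lp)                                   # subtract max for numerical stability
    w = [math.exp(v - m) for v in lp]
    return sum(u(t) * wi for t, wi in zip(thetas, w)) / sum(w)

x = [0.5, 1.2, 2.3, 0.8, 1.5, 3.1, 0.9, 2.0]      # illustrative data
est_theta = bayes_estimate(lambda t: t, x, a=4, b=8)
# For R(t0), replace u by: lambda th: (1 + th*t0/(th**2+1)) * math.exp(-th*t0)
```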
Lindley’s approximation
In the previous subsection, we obtained the Bayes estimates of θ under the squared error loss function. These estimates take the form of a ratio of two integrals. Lindley [11] developed a procedure to approximate such a ratio of integrals. In this subsection, using this technique, we obtain the approximate Bayes estimates of θ under the stated loss function. For illustration, consider the ratio of integrals I(x̲), where
where u(θ) is a function of θ only, l(θ) is the log-likelihood, and ρ(θ)= log π(θ). Let \(\hat {\theta }\) denote the MLE of θ. Applying Lindley's approximation procedure, I(x̲) can be written as
where u_{θθ} denotes the second derivative of the function u(θ) with respect to θ, and \(\hat {u}_{\theta \theta }\) represents the same expression evaluated at \(\theta =\hat {\theta }\). All other quantities appearing in the above expression of I(x̲) are interpreted as follows
Now, to obtain the Bayes estimate of θ under the loss function L_{s}, we have
In a similar manner, we can derive the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function.
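The one-parameter form of Lindley's approximation, \(E[u(\theta)\mid \underline{x}] \approx u + \tfrac12\left(u_{\theta\theta} + 2u_{\theta}\rho_{\theta}\right)\sigma^2 + \tfrac12 u_{\theta}\, l_{\theta\theta\theta}\, \sigma^4\) with \(\sigma^2 = [-l''(\hat\theta)]^{-1}\), can be sketched with numerical derivatives as below; replacing u(θ) by R(t;θ), h(t;θ), or m(t;θ) gives the other estimates. The data and hyperparameters are illustrative:

```python
import math

def loglik(theta, x):
    """Shanker log-likelihood (7)."""
    n, s = len(x), sum(x)
    return (2 * n * math.log(theta) - n * math.log(theta**2 + 1)
            + sum(math.log(theta + xi) for xi in x) - theta * s)

def deriv(f, t, k, h=1e-3):
    """Central-difference derivative of order k (k = 1, 2, or 3)."""
    if k == 1:
        return (f(t + h) - f(t - h)) / (2 * h)
    if k == 2:
        return (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    return (f(t + 2*h) - 2*f(t + h) + 2*f(t - h) - f(t - 2*h)) / (2 * h**3)

def lindley_estimate(u, x, a, b):
    l = lambda t: loglik(t, x)
    lo, hi = 1e-6, 100.0                    # bisection for the MLE on the score
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if deriv(l, mid, 1) > 0 else (lo, mid)
    th = 0.5 * (lo + hi)
    sigma2 = -1.0 / deriv(l, th, 2)
    rho1 = (a - 1) / th - b                 # derivative of the log G(a, b) prior
    u1, u2 = deriv(u, th, 1), deriv(u, th, 2)
    return (u(th) + 0.5 * (u2 + 2 * u1 * rho1) * sigma2
            + 0.5 * u1 * deriv(l, th, 3) * sigma2**2)

x = [0.5, 1.2, 2.3, 0.8, 1.5, 3.1, 0.9, 2.0]   # illustrative data
est_theta = lindley_estimate(lambda t: t, x, a=4, b=8)
```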
In the next subsection, we use the Metropolis-Hastings (MH) algorithm to compute some more estimates of the unknown parameter. One may refer to Metropolis et al. [12] and Hastings [13] for various applications of this method.
Metropolis-Hastings algorithm
We generate samples from the posterior distribution using a normal proposal distribution for θ. The following steps generate posterior samples using the proposed algorithm.
Step 1: Choose an initial guess of θ and call it θ_{0}.
Step 2: Generate θ^{′} using the proposal N(θ_{n−1},σ^{2}) distribution.
Step 3: Compute \(h = \frac {\pi (\theta '\vert x)}{\pi (\theta _{n-1}\vert x)}\).
Step 4: Then, generate a sample u from the uniform U(0,1) distribution.
Step 5: If u≤h, then set θ_{n}=θ^{′}; otherwise, set θ_{n}=θ_{n−1}.
Step 6: Repeat steps (2–5) Q times and collect adequate number of replicates.
In order to avoid dependence of the sample on the initial value, the first few samples are discarded, and to reduce the correlation between subsequent samples, thinning (lagging) is used. The resulting sample is then approximately independent. In this way, we are able to generate samples from the posterior distribution of θ. Suppose that Q denotes the total number of generated samples and Q_{0} denotes the initial burn-in sample size.
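Steps 1–6 can be sketched as below, working on the log scale for numerical stability; the initial value, proposal standard deviation, Q, and Q_{0} are illustrative choices, and the G(a,b) prior from the previous section is assumed:

```python
import math, random

def log_post(theta, x, a, b):
    """Log of the (unnormalized) posterior: Shanker likelihood times G(a, b) prior."""
    if theta <= 0:
        return float('-inf')
    n, s = len(x), sum(x)
    return ((2 * n + a - 1) * math.log(theta) - n * math.log(theta**2 + 1)
            + sum(math.log(theta + xi) for xi in x) - (b + s) * theta)

def mh_sample(x, a, b, Q=5000, Q0=1000, sigma=0.3, seed=1):
    rng = random.Random(seed)
    theta = 1.0                                   # Step 1: initial guess
    chain = []
    for _ in range(Q):
        prop = rng.gauss(theta, sigma)            # Step 2: N(theta_{n-1}, sigma^2)
        logh = log_post(prop, x, a, b) - log_post(theta, x, a, b)   # Step 3
        if logh >= 0 or rng.random() < math.exp(logh):              # Steps 4-5
            theta = prop
        chain.append(theta)
    return chain[Q0:]                             # Step 6: discard burn-in

x = [0.5, 1.2, 2.3, 0.8, 1.5, 3.1, 0.9, 2.0]      # illustrative data
sample = mh_sample(x, a=4, b=8)
bayes_est = sum(sample) / len(sample)             # posterior mean under squared error loss
```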
We observe that the associated Bayes estimate of θ under the squared error loss function is given by
Similarly, the associated Bayes estimate of R(t) under the squared error loss function is given by
In a similar manner, we can derive the Bayes estimates of h(t) and m(t) with respect to the squared error loss function.
It should be noted that the 100(1−ξ)% HPD interval for the unknown parameter θ can easily be constructed using the MH samples; the idea was developed by Chen and Shao [14]. First, arrange the samples θ_{1},θ_{2},...,θ_{Q} in increasing order. Then, the 100(1−ξ)% credible intervals for θ are given by (θ_{1},θ_{⌊(1−ξ)Q⌋+1}), ..., (θ_{⌊ξQ⌋},θ_{Q}), where ⌊z⌋ denotes the greatest integer less than or equal to z. Among all such credible intervals, the shortest one is the HPD interval.
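The Chen-Shao construction can be sketched as a search for the shortest interval among the candidate credible intervals; the equally spaced toy draws below stand in for an MH sample:

```python
import math

def hpd_interval(sample, xi=0.05):
    """Shortest 100(1-xi)% interval among (s[j], s[j+m]) per Chen and Shao [14]."""
    s = sorted(sample)
    Q = len(s)
    m = int(math.floor((1 - xi) * Q + 1e-9))   # floor of (1-xi)Q, guarding float error
    j = min(range(Q - m), key=lambda i: s[i + m] - s[i])
    return s[j], s[j + m]

# Toy "posterior draws": equally spaced, so all candidate intervals tie in length
sample = list(range(1, 101))
lo, hi = hpd_interval(sample)
```

With real MH output, `sample` would be the post-burn-in chain; for a skewed posterior the HPD interval is shorter than the equal-tail credible interval.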
Numerical comparison
In order to evaluate the performance of all the point estimates and the different methods of constructing CIs and HPD intervals discussed in the preceding sections, a Monte Carlo simulation study was conducted; the results are presented in this section. For the simulation study, we took θ=0.5 and n=30,50,70,90,110; the different choices of n show how the MLE and Bayes estimates perform as the sample size varies. Informative and noninformative priors were used to obtain the Bayes estimates and HPD intervals. The hyperparameters were taken as a=4, b=8 for the informative prior, chosen so that the prior expectation matches the true parameter value, and a=0, b=0 for the noninformative prior. To obtain the Bayes estimates and HPD intervals using the MH algorithm, we set Q=10000 replications with an initial burn-in sample of Q_{0}=2000. In all cases, Bayes estimates are computed under the squared error loss function. Under these settings, the average estimates and estimated mean squared errors (MSEs) of the different estimates, based on 10000 simulated complete samples from the Shanker distribution, are listed in Tables 1, 2, 3, and 4. For comparison purposes, the coverage probabilities and average lengths of the various CIs and HPD intervals were computed from the simulated samples at the 95% nominal level. To obtain the bootstrap CIs, we set B=10000 replications. The average lengths and corresponding coverage probabilities are reported in Table 5.
All the computations were conducted in R software (R i386 3.2.2), and the R codes can be obtained from the author upon request. Based on the tabulated average estimates, estimated MSEs, coverage probabilities, and average lengths, the following conclusions can be drawn from Tables 1, 2, 3, 4, and 5.
1. It is observed that the average estimates of the MLE and Bayes estimates are all close to the true parameter for different combinations of n. The performance of the MLE and Bayes estimates of the unknown parameter is quite satisfactory in terms of their MSEs. We found that the Bayes estimators have smaller MSE values than the MLE of θ.

2. The performance of the Lindley estimates is very similar to that of the corresponding Bayes estimates obtained using the MH algorithm.

3. As expected, the MSE values of all estimates decrease as the sample size n grows.

4. The coverage probabilities of the approximate, Boot-p, and Boot-t CIs are better than that of the HPD interval, and they are close to the nominal level. However, the HPD interval provides a good balance between coverage probability and average length. Therefore, in general, we recommend using the HPD interval. If one wants to guarantee that the coverage probability is close to the nominal level and the interval length is not a major concern, then the approximate CI and the Boot-t CI are preferable in most cases.

5. Comparing the Boot-p and Boot-t CIs, the Boot-t CI performs marginally better in terms of both average lengths and coverage probabilities.
Data analysis
For illustrative purposes, we have analyzed two real data sets which have been recently considered by Ghitany et al. [15]. They fitted these real data sets to the Shanker distribution and found that it fits both reasonably well. They also obtained useful inferences for the prescribed model.
Data set 1: The first data set represents the waiting times (in minutes) before service of 100 bank customers; it was examined and analyzed by Ghitany et al. [15] for fitting the Lindley distribution. The data are as follows:
Based on the original data, the MLE of the unknown parameter θ was evaluated from Eq. (8). We also computed different Bayes estimates under the squared error loss function using Lindley's approximation and the MH algorithm. Since we did not have any prior information on a and b, we assumed the noninformative prior, i.e., a=b=0. The MLE and all Bayes estimates of the unknown parameter θ are displayed in Table 6, together with the approximate CI, bootstrap CIs, and HPD interval. From Table 6, it is observed that all the estimates are close to each other. The HPD interval performs best among all the intervals with respect to length. The MLE and Bayes estimates of the reliability characteristics are reported in Table 7.
Data set 2: This data set consists of the strengths of glass for aircraft windows reported by Fuller et al. [16]. The data are as follows:
Table 8 shows the MLE and Bayes estimates of the unknown parameter θ. The 95% approximate CI, bootstrap CIs, and HPD interval for θ are also presented in Table 8. All the Bayes estimates and HPD intervals were evaluated under the noninformative prior distribution. Similar to the results from data set 1, we observe that the ML and Bayes procedures give similar values for θ. Also, the HPD intervals have the shortest lengths among all the interval estimates. In Table 9, the classical and Bayes estimates of the reliability characteristics are reported.
Conclusions
In this paper, we have discussed classical and Bayesian inference for the unknown parameter and reliability characteristics of the Shanker distribution. We have provided the MLE and Bayes estimates together with the corresponding CIs and HPD intervals. A Monte Carlo simulation has been conducted to compare the performance of the different methods, and the results of the simulation study have been reported comprehensively. The methods can be extended to the progressive type-I hybrid censoring scheme and other censoring schemes as well. We believe that more work is needed along these directions.
Abbreviations
BE: Bayes estimate
CDF: Cumulative distribution function
CIs: Confidence intervals
HPD: Highest posterior density
ML: Maximum likelihood
MLE: ML estimate
MSE: Mean squared error
PDF: Probability density function
References
1. Shanker, R.: Shanker distribution and its applications. Int. J. Stat. Appl. 5(6), 338–348 (2015).
2. Shanker, R., Fesshay, H.: On modeling of lifetime data using one-parameter Akash, Shanker, Lindley and exponential distributions. Biom. Biostat. Int. J. 3(6), 00084 (2016).
3. Rastogi, M. K., Merovci, F.: Bayesian estimation for parameters and reliability characteristic of the Weibull Rayleigh distribution. J. King Saud Univ. Sci. 30(4), 472–478 (2018).
4. Chandrakant, Rastogi, M. K., Tripathi, Y. M.: On a Weibull-Inverse Exponential Distribution. Ann. Data. Sci. 5(2), 209–234 (2018).
5. Efron, B.: The jackknife, the bootstrap, and other resampling plans. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (1982).
6. Efron, B., Tibshirani, R.: Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1, 54–77 (1986).
7. Hall, P.: Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 16, 927–953 (1988).
8. Hall, P.: The bootstrap and Edgeworth expansion. Springer-Verlag, New York (1992).
9. Kundu, D., Joarder, A.: Analysis of type-II progressively hybrid censored data. Comput. Stat. Data Anal. 50, 2509–2528 (2006).
10. Canfield, R. V.: A Bayesian approach to reliability estimation using a loss function. IEEE Trans. Reliab. 19(1), 13–16 (1970).
11. Lindley, D. V.: Approximate Bayesian method. Trabajos de Estadistica. 31, 223–237 (1980).
12. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E.: Equations of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953).
13. Hastings, W. K.: Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 57, 97–109 (1970).
14. Chen, M. H., Shao, Q. M.: Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 8, 69–92 (1999).
15. Ghitany, M. E., Atieh, B., Nadarajah, S.: Lindley distribution and its application. Math. Comput. Simul. 78, 493–506 (2008).
16. Fuller, E. J., Frieman, S., Quinn, J., Quinn, G., Carter, W.: Fracture mechanics approach to the design of glass aircraft windows: A case study. SPIE Proc. 2286, 419–430 (1994).
Acknowledgments
Not applicable.
Funding
There are no sources of funding for the research.
Availability of data and materials
Not applicable.
Author information
Contributions
The author read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares that he/she has no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Abushal, T.A. Bayesian estimation of the reliability characteristic of Shanker distribution. J Egypt Math Soc 27, 30 (2019). https://doi.org/10.1186/s42787-019-0033-x
Keywords
 Shanker distribution
 Maximum likelihood estimate
 Bootstrap technique
 Metropolis-Hastings algorithm
Mathematics Subject Classification
 62F10