The Bayesian inference procedures are developed under the usual squared error (quadratic) loss function, which is symmetric and attaches equal importance to losses arising from overestimation and underestimation of the same magnitude. One may refer to the paper by Canfield [10] for a detailed exposition in this direction. The squared error loss function may be expressed as:
$$\begin{array}{@{}rcl@{}} \mathrm{Squared~ error~ loss}:~ L_{s}(\upsilon,\eta)&=&(\eta - \upsilon)^{2}. \end{array} $$
Suppose that X_1, X_2, …, X_n is a complete sample drawn from the model (1). We assume that θ has a gamma prior distribution, denoted G(a,b) with a>0 and b>0, whose probability density function is
$$\begin{array}{@{}rcl@{}} \pi(\theta)\propto \,\theta^{a-1}\,e^{-b\,\theta}~~\theta>0,\, ~a>0,\,b>0. \end{array} $$
(10)
After a simple calculation, the posterior distribution of θ is obtained as
$$\begin{array}{@{}rcl@{}} \pi(\theta\, |\,\underbar{x})\propto \frac{\theta^{2\,n+a-1}\,\,e^{-\theta\, (b+s)}}{(\theta^{2} +1)^{n}}\,\prod_{i=1}^{n} (\theta+ x_{i}) \,, \end{array} $$
(11)
where x̲ = (x_1, x_2, …, x_n).
Now, the corresponding Bayes estimate of θ under the loss function L_s is obtained as
$$\begin{array}{@{}rcl@{}} \tilde{\theta}_{s}=E[\theta\,|\,\underbar{x}\,] =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a}}{(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
where
$$\begin{array}{@{}rcl@{}} k=\int_{0}^{\infty}\frac{\theta^{2\,n+a-1}}{(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta. \end{array} $$
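Since each of these expressions involves only a one-dimensional integral, it can be evaluated directly by numerical quadrature. The following is a minimal sketch (not part of the original derivation) using SciPy; the data vector x and the hyperparameter values a and b are purely illustrative.

```python
# Minimal numerical sketch: Bayes estimate of theta under squared error loss,
# obtained as a ratio of two one-dimensional integrals (numerator / k).
import numpy as np
from scipy.integrate import quad

x = np.array([0.8, 1.2, 2.5, 0.4, 1.9])   # hypothetical complete sample
n, s = len(x), x.sum()
a, b = 2.0, 1.0                            # hypothetical gamma hyperparameters

def kernel(theta):
    """Unnormalised posterior kernel pi(theta | x)."""
    return (theta**(2*n + a - 1) * np.exp(-theta*(b + s))
            * np.prod(theta + x) / (theta**2 + 1)**n)

k = quad(kernel, 0, np.inf)[0]                          # normalising constant
num = quad(lambda th: th * kernel(th), 0, np.inf)[0]    # numerator of E[theta | x]
theta_tilde_s = num / k
print(theta_tilde_s)
```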
Next, the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function can be written as
$$\begin{array}{@{}rcl@{}} \tilde{R}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a-1}\,(\theta^{2}+\theta\,t+1)}{(\theta^{2} +1)^{(n+1)}}\,e^{-\theta\,(b+s+t)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
$$\begin{array}{@{}rcl@{}} \tilde{h}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a+1}(\theta+t)}{(\theta^{2}+\theta\,t+1) (\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta, \end{array} $$
$$\begin{array}{@{}rcl@{}} \tilde{m}_{s}(t) =\frac{1}{k}\int_{0}^{\infty}\frac{\theta^{2\,n+a-2}\,(\theta^{2}+\theta\,t+2)}{(\theta^{2}+\theta\,t+1)\,(\theta^{2} +1)^{n}}\,e^{-\theta\,(b+s)}\,\prod_{i=1}^{n} (\theta+ x_{i}) ~d\theta. \end{array} $$
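Continuing the sketch above, the same quadrature pattern yields these three estimates; the functions R, h, and m below are read off from the integrands above, and the mission time t0 is an illustrative value.

```python
# Posterior expectation of any g(theta) is quad(g * kernel) / k, so only the
# corresponding g changes for R(t), h(t) and m(t).
t0 = 1.0   # illustrative mission time

def R(theta, t):   # reliability function
    return (theta**2 + theta*t + 1) / (theta**2 + 1) * np.exp(-theta*t)

def h(theta, t):   # hazard rate
    return theta**2 * (theta + t) / (theta**2 + theta*t + 1)

def m(theta, t):   # mean residual life
    return (theta**2 + theta*t + 2) / (theta * (theta**2 + theta*t + 1))

R_tilde_s = quad(lambda th: R(th, t0) * kernel(th), 0, np.inf)[0] / k
h_tilde_s = quad(lambda th: h(th, t0) * kernel(th), 0, np.inf)[0] / k
m_tilde_s = quad(lambda th: m(th, t0) * kernel(th), 0, np.inf)[0] / k
```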
It is clear that none of the above Bayes estimators has a simple closed form. Therefore, in the next subsections, we employ two popular approximation procedures to compute approximate Bayes estimates of the parameter and the reliability characteristics.
Lindley’s approximation
In the previous subsection, we obtained the Bayes estimates of θ under the squared error loss function. These estimates take the form of a ratio of two integrals. Lindley [11] developed a procedure to approximate such ratios of integrals. In this subsection, we use this technique to obtain approximate Bayes estimates of θ under the stated loss function. For illustration, consider the ratio of integrals I(x̲), where
$$\begin{array}{@{}rcl@{}} I(\underbar{x})=\frac{\int_{\theta}u(\theta) ~e^{l(\theta)+\rho(\theta)}~ d\theta}{ \int_{\theta}~e^{l(\theta)+\rho(\theta)}~ d\theta}\,, \end{array} $$
(12)
where u(θ) is a function of θ only, l(θ) is the log-likelihood, and ρ(θ) = log π(θ). Let \(\hat {\theta }\) denote the MLE of θ. Applying Lindley’s approximation procedure, I(x̲) can be written as
$$\begin{array}{@{}rcl@{}} I(\underbar{x})= u(\hat{\theta})+ 0.5\left[\left(\hat{u}_{\theta \theta}+2\,\hat{u}_{\theta}~\hat{\rho_{\theta}}\right)~\hat{\sigma}_{\theta \theta}+\hat{u}_{\theta}~\hat{\sigma}^{2}_{\theta \theta}~\hat{l}_{\theta\theta\theta}\right], \end{array} $$
where u_{θθ} denotes the second derivative of the function u(θ) with respect to θ, and \(\hat {u}_{\theta \theta }\) represents the same expression evaluated at \(\theta =\hat {\theta }\). All other quantities appearing in the above expression of I(x̲) are given by
$$\begin{array}{@{}rcl@{}} \hat{l}_{\theta \theta}&=&\frac{\partial^{2}l}{\partial {\theta}^{2} }\Bigg{|}_{\theta=\hat{\theta}}=-\,\frac{2\,n}{\hat{\theta}^{2}}+\frac{2\,n\,(\hat{\theta}^{2}-1)}{(\hat{\theta}^{2}+1)^{2}}-\sum_{i=1}^{n} \frac{1}{(\hat{\theta}+x_{i})^{2}},~~~~~~\hat{\sigma}_{\theta \theta}=-\frac{1}{\hat{l}_{\theta \theta}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{l}_{\theta \theta \theta}&=&\frac{\partial^{3}l}{\partial {\theta}^{3} }\Bigg{|}_{\theta=\hat{\theta}}= \,\frac{4\,n}{\hat{\theta}^{3}}-\frac{4\,n\,\hat{\theta}\,(\hat{\theta}^{2}-3)}{(\hat{\theta}^{2}+1)^{3}}+\sum_{i=1}^{n} \frac{2}{(\hat{\theta}+x_{i})^{3}},~~~~~\hat{\rho}_{\theta}=\frac{(a-1)}{\hat{\theta}}-b. \end{array} $$
Now, to obtain the Bayes estimate of θ under the loss function L_s, we have
$$\begin{array}{@{}rcl@{}} u(\theta)=\theta,~~u_{\theta}=1,~~u_{\theta \theta}=0,~~ \tilde{\theta}_{s}=\hat{\theta}+ 0.5\left[\left(\hat{u}_{\theta \theta}+2\,\hat{u}_{\theta}~\hat{\rho_{\theta}}\right)~\hat{\sigma}_{\theta \theta}+\hat{u}_{\theta}~\hat{\sigma}^{2}_{\theta \theta}~\hat{l}_{\theta\theta\theta}\right]{.} \end{array} $$
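The one-parameter Lindley approximation is straightforward to implement. The sketch below continues the earlier numerical example (x, n, s, a, b as defined there); the MLE is obtained by numerically maximising the log-likelihood implied by the posterior kernel, and the optimisation bounds are illustrative.

```python
# Lindley's approximation for the Bayes estimate of theta under squared error loss.
from scipy.optimize import minimize_scalar

def loglik(theta):
    """Log-likelihood implied by the posterior kernel (up to an additive constant)."""
    return (2*n*np.log(theta) - theta*s + np.log(theta + x).sum()
            - n*np.log(theta**2 + 1))

theta_hat = minimize_scalar(lambda t: -loglik(t),
                            bounds=(1e-6, 50.0), method="bounded").x

# Second and third derivatives of the log-likelihood evaluated at the MLE
l2 = (-2*n/theta_hat**2 + 2*n*(theta_hat**2 - 1)/(theta_hat**2 + 1)**2
      - np.sum(1.0/(theta_hat + x)**2))
l3 = (4*n/theta_hat**3 - 4*n*theta_hat*(theta_hat**2 - 3)/(theta_hat**2 + 1)**3
      + np.sum(2.0/(theta_hat + x)**3))
sigma = -1.0/l2                        # sigma_{theta theta}
rho1 = (a - 1)/theta_hat - b           # derivative of the log-prior

# u(theta) = theta, so u' = 1 and u'' = 0
theta_tilde_lindley = theta_hat + 0.5*(2*rho1*sigma + sigma**2 * l3)
print(theta_hat, theta_tilde_lindley)
```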
In a similar manner, we can derive the Bayes estimates of R(t), h(t), and m(t) with respect to the squared error loss function.
In the next subsection, we use the Metropolis-Hastings (MH) algorithm to compute some further estimates of the unknown parameter. One may refer to Metropolis et al. [12] and Hastings [13] for various applications of this method.
Metropolis-Hastings algorithm
We generate samples from the posterior distribution using a normal proposal distribution for θ. The following steps are used to generate posterior samples with this algorithm.
Step 1: Choose an initial guess of θ and call it θ_0.
Step 2: Generate θ′ from the proposal distribution N(θ_{n−1}, σ²).
Step 3: Compute \(h = \frac {\pi (\theta '\vert x)}{\pi (\theta _{n-1}\vert x)}\).
Step 4: Then, generate a sample u from the uniform U(0,1) distribution.
Step 5: If u ≤ h, then set θ_n = θ′; otherwise, set θ_n = θ_{n−1}.
Step 6: Repeat steps 2–5 Q times to collect an adequate number of replicates.
In order to avoid dependence of the sample on the initial value, the first few samples are discarded (burn-in), and to reduce the correlation between successive samples, the chain is thinned (lagged). The resulting sample is then approximately independent. In this way, we are able to generate a sample from the posterior distribution of θ. Suppose that Q denotes the total number of generated samples and Q_0 denotes the size of the initial burn-in sample.
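A minimal sketch of this sampler, continuing the earlier example, is given below; the proposal standard deviation, the chain length Q, and the burn-in size Q0 are illustrative choices, and the acceptance ratio h is computed on the log scale for numerical stability.

```python
# Random-walk Metropolis-Hastings sampler for the posterior of theta.
rng = np.random.default_rng(seed=1)

def log_kernel(theta):
    """Log of the unnormalised posterior pi(theta | x)."""
    if theta <= 0:
        return -np.inf
    return ((2*n + a - 1)*np.log(theta) - theta*(b + s)
            + np.log(theta + x).sum() - n*np.log(theta**2 + 1))

Q, Q0, sd = 11000, 1000, 0.5            # illustrative chain length, burn-in, proposal sd
chain = np.empty(Q)
chain[0] = 1.0                          # Step 1: initial guess theta_0
for i in range(1, Q):
    prop = rng.normal(chain[i-1], sd)                     # Step 2: proposal theta'
    log_h = log_kernel(prop) - log_kernel(chain[i-1])     # Step 3 (log scale)
    u = rng.uniform()                                     # Step 4
    chain[i] = prop if np.log(u) <= log_h else chain[i-1] # Step 5: accept/reject
samples = chain[Q0:]                    # discard the burn-in draws
```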
Finally, we observe that the associated Bayes estimate of θ under the squared error loss function is given by
$$\hat{\theta}_{MH,s} = \frac{1}{Q - Q_{0}} \sum_{i = Q_{0} + 1}^{Q} \theta_{i}.$$
Similarly, the associated Bayes estimate of R(t) under the squared error loss function is given by
$$\hat{R}_{MH,s}(t) = \frac{1}{Q - Q_{0}} \sum_{i = Q_{0} + 1}^{Q} \frac{\left(\theta_{i}^{2}+\theta_{i}\,t+1\right)}{\left(\theta_{i}^{2} +1\right)}\,e^{-\theta_{i}\,t}.$$
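With the retained draws in hand, both estimates reduce to sample averages; a short continuation of the sketch (with an illustrative mission time t0) is given below.

```python
# Posterior means from the MH draws: estimates of theta and R(t) under squared error loss.
theta_hat_MH = samples.mean()
t0 = 1.0   # illustrative mission time
R_hat_MH = np.mean((samples**2 + samples*t0 + 1)/(samples**2 + 1) * np.exp(-samples*t0))
```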
In a similar manner, we can derive the Bayes estimates of h(t) and m(t) with respect to the squared error loss function.
Note that a 100(1−ξ)% HPD interval for the unknown parameter θ can easily be constructed from the MH samples; the idea was developed by Chen and Shao [14]. First, arrange the samples θ_1, θ_2, ..., θ_Q in increasing order to obtain the ordered values θ_(1) ≤ θ_(2) ≤ ... ≤ θ_(Q). Then, the 100(1−ξ)% credible intervals for θ are given by (θ_(1), θ_(⌊(1−ξ)Q+1⌋)), ..., (θ_(⌊ξQ⌋), θ_(Q)), where ⌊z⌋ denotes the greatest integer less than or equal to z. Among all such credible intervals, the shortest one is the HPD interval.
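A sketch of this construction from the retained draws is given below; the credible level 1−ξ = 0.95 is an illustrative choice.

```python
# Chen-Shao HPD interval: among all 100(1-xi)% credible intervals formed from
# consecutive order statistics of the draws, return the shortest one.
def hpd_interval(draws, xi=0.05):
    sorted_draws = np.sort(draws)
    Q = len(sorted_draws)
    m = int(np.floor((1 - xi)*Q))          # draws covered by each candidate interval
    lower = sorted_draws[:Q - m]           # candidate lower endpoints
    upper = sorted_draws[m:]               # corresponding upper endpoints
    j = np.argmin(upper - lower)           # index of the shortest interval
    return lower[j], upper[j]

print(hpd_interval(samples))
```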