 Original research
 Open Access
Bivariate general exponential models with stress-strength reliability application
Journal of the Egyptian Mathematical Society volume 28, Article number: 9 (2020)
Abstract
In this paper, we introduce two families of general bivariate distributions, which we refer to as the general bivariate exponential family and the general bivariate inverse exponential family. Many bivariate distributions in the literature are members of the proposed families. Some properties of the proposed families are discussed, and a characterization associated with the stress-strength reliability parameter, R, is presented. Concerning R, the maximum likelihood estimators and a simple estimator with an explicit form depending on some marginal distributions are obtained in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R is also obtained. The results can be applied to a variety of bivariate distributions in the literature. A numerical illustration is applied to some well-known distributions. Finally, a real-data example is presented to fit one of the proposed models.
Introduction
Mokhlis et al. [1] presented two forms of survival functions, given by
where g_{1}(u; c) does not contain θ ∈ Θ, and g_{2}(u; c) does not contain β ∈ B, with c ∈ C, where Θ, B, and C are the parameter spaces. Here, g_{1}(u; c) is a continuous, monotone increasing, and differentiable function such that g_{1}(u; c) → 0 as u → 0 and g_{1}(u; c) → ∞ as u → ∞, while g_{2}(u; c) is a continuous, monotone decreasing, and differentiable function such that g_{2}(u; c) → 0 as u → ∞ and g_{2}(u; c) → ∞ as u → 0. With appropriate choices of g_{i}(u; c), i = 1, 2, in (1) and (2), many distributions in the literature can be obtained: for example, the exponential, Weibull, Rayleigh, Pareto, and Lomax distributions, among others, from the first form (1), and the inverse exponential, inverse Weibull, inverse Rayleigh, and Burr type III distributions, among others, from the second form (2); see Mokhlis et al. [1]. For brevity, we denote the forms (1) and (2) by EF(θ; c) and IEF(β; c) and denote their survival functions and probability density functions by \( {\overline{F}}_{\mathrm{EF}}\left(u;\theta, c\right) \), f_{EF}(u; θ, c) and \( {\overline{F}}_{\mathrm{IEF}}\left(u;\beta, c\right) \), f_{IEF}(u; β, c), respectively.
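As a concrete illustration of how particular choices of g_{1} and g_{2} recover named members of the two families, the following Python sketch evaluates the two survival forms. The helper names (`S_EF`, `S_IEF`) and the specific g's are our own illustrative choices, not part of the original paper.

```python
import math

def S_EF(u, theta, g1):
    """Survival function of form (1): exp(-theta * g1(u; c)), g1 increasing, g1(0)=0."""
    return math.exp(-theta * g1(u))

def S_IEF(u, beta, g2):
    """Survival function of form (2): 1 - exp(-beta * g2(u; c)), g2 decreasing."""
    return 1.0 - math.exp(-beta * g2(u))

# Particular members (the parameter c is fixed inside each g):
g1_exponential = lambda u: u          # g1(u) = u      -> exponential distribution
g1_rayleigh    = lambda u: u ** 2     # g1(u) = u^c, c=2 -> Rayleigh (a Weibull case)
g2_inv_exp     = lambda u: 1.0 / u    # g2(u) = 1/u    -> inverse exponential
```

For instance, `S_EF(u, theta, g1_exponential)` equals the familiar exponential survival function exp(−θu), while `S_IEF(u, beta, g2_inv_exp)` equals 1 − exp(−β/u).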
In the area of stress-strength models, there has been a large amount of work on the estimation of the reliability parameter, R = P(Y < X), when X and Y are independent random variables belonging to the same univariate family; see, for example, Mokhlis [2], Kundu and Gupta [3], Singh et al. [4], and others. Recently, Mokhlis et al. [1] discussed R when the variables are independent with survival functions having forms (1) and (2), respectively. However, many real situations entail that X and Y are related in some way. Accordingly, some authors have studied the stress-strength reliability parameter, R, for some specified bivariate distributions; see, for example, Kotz et al. [5], Mokhlis [6], Nadarajah and Kotz [7], Nguimkeu et al. [8], Pak et al. [9], and Abdel-Hamid [10].
There are many methods in the literature for obtaining bivariate distributions. Some of the popular methods are the copula type, the bivariate pseudo type, and the Marshall-Olkin type. Recently, many attempts at obtaining generalized bivariate distributions using these types have been presented in the literature, among them Kolesárová et al. [11], Arnold and Arvanitis [12], El-Bassiouny et al. [13], and Sarhan [14].
In the present paper, we introduce two bivariate models of distributions of the Marshall-Olkin type. We call these models the bivariate exponential and bivariate inverse exponential models. Some properties of the proposed models are discussed. Many bivariate distributions in the literature can be considered as special cases or members of our models, for example, the Marshall-Olkin (MO) bivariate exponential and MO bivariate Weibull distributions introduced by Marshall and Olkin [15] and the bivariate Rayleigh distribution introduced by Pak et al. [9].
An explicit expression for the stress-strength parameter R is obtained, showing that it is not a function of the parameter c (which could be a vector parameter). The maximum likelihood estimator of R is obtained, as well as simple closed-form estimators of R depending either on the marginal distribution of X and the distribution of min{X, Y} or on the marginal distribution of Y and the distribution of max{X, Y}. Since many bivariate distributions in the literature belong to the proposed families, the results obtained are applicable to a variety of bivariate distributions.
The remaining part of the paper is organized as follows. In the “Proposed families of bivariate distributions” section, we introduce two new families (models) of bivariate distributions. Some characterizations of the proposed models, such as the marginals and the distributions of min{X, Y} and max{X, Y}, are also discussed. The stress-strength reliability parameter, R, for the new models is considered in the “Stress-strength reliability” section. In the “Point estimation of R” section, we obtain maximum likelihood estimators of R as well as simple estimators of R depending on some marginal distributions in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R is also obtained. Some bivariate members of the proposed families are presented in the “Special cases” section. In the “Numerical illustrations” section, a numerical illustration using some well-known distributions is performed to highlight the theoretical results, and an application to a real-data example is introduced. Finally, the conclusions are presented in the “Conclusions” section.
Proposed families of bivariate distributions
In this section, we introduce two new families of bivariate distributions with marginals having distributions of the forms (1) or (2). We apply a technique similar to that proposed by Marshall and Olkin [15] to obtain these families.
The construction of the families (models)
Lifetime model
Suppose that a system consists of two subsystems, say A and B. Subsystem A contains two components, say A_{1} and C, connected in series (parallel), with lifetimes U_{1} and U_{0}, respectively. Subsystem B contains two components, say B_{1} and C, connected in series (parallel), where the lifetime of component B_{1} is U_{2}.
Suppose that U_{i}, i = 0, 1, 2, are independent random variables following EF(θ_{i}; c), i = 0, 1, 2, for the series case and IEF(β_{i}; c), i = 0, 1, 2, for the parallel case, i.e.,
If X and Y are the lifetimes of the two subsystems A and B, respectively, then X = min {U_{0}, U_{1}} and Y = min {U_{0}, U_{2}} for the series case, while X = max {U_{0}, U_{1}} and Y = max {U_{0}, U_{2}} for the parallel case.
Stress model
Consider a two-component system subject to three independent stresses, say U_{0}, U_{1}, and U_{2}. Each component is subject to an individual stress, say U_{1} and U_{2}, respectively, while U_{0} is an overall stress transmitted to both components equally. Then,
 1.
The observed stresses on the two components are X = max {U_{0}, U_{1}} and Y = max {U_{0}, U_{2}}, respectively.
 2.
If the stresses are always fatal, then the lifetimes of the two components are X = min {U_{0}, U_{1}} and Y = min {U_{0}, U_{2}}.
We can observe that in the two models there is the possibility of having X = Y; thus, the two models have both an absolutely continuous part and a singular part, similar to the MO bivariate exponential model.
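To make the singular part concrete, here is a minimal Python simulation of the series (lifetime) construction for the MO bivariate exponential member, where g_{1}(u) = u so that each U_{i} is simply exponential. The parameter values are arbitrary illustrative choices. Ties X = Y occur with positive probability because both minima can be achieved by the shared component lifetime U_{0}.

```python
import random

random.seed(7)
th0, th1, th2 = 0.15, 0.2, 0.5    # arbitrary illustrative rates

pairs = []
for _ in range(20000):
    # U_i ~ EF(theta_i; c) with g1(u) = u is just Exponential(theta_i)
    u0, u1, u2 = (random.expovariate(t) for t in (th0, th1, th2))
    pairs.append((min(u0, u1), min(u0, u2)))

# X = Y exactly when U0 is smaller than both U1 and U2
tie_fraction = sum(x == y for x, y in pairs) / len(pairs)
```

The empirical tie fraction comes out near θ_{0}/θ = 0.15/0.85 ≈ 0.176, the weight of the singular part in the decomposition of Theorem 4.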
Theorems 1–3 present the survival functions and the probability density functions of the proposed bivariate families.
Theorem 1 Suppose U_{i}, i = 0, 1, 2, are independent random variables following EF(θ_{i}; c), i = 0, 1, 2, and let X = min {U_{0}, U_{1}} and Y = min {U_{0}, U_{2}}; then, the bivariate vector (X, Y) has the survival function
Proof Obviously, from \( {\overline{F}}_{X,Y}\left(x,y\right)=P\left(X>x,Y>y\right) \), we can write \( {\overline{F}}_{\mathrm{BEF}}\left(X,Y\right) \) as
Since the U_{i} are independent random variables following EF(θ_{i}; c), i = 0, 1, 2, (4) holds. ∎
We denote the bivariate distribution with survival function of the form (4) by BEF(θ_{0}, θ_{1}, θ_{2}; c). Clearly, X and Y are independent if and only if (iff) θ_{0} = 0. The joint survival function can also be written as
Theorem 2 Suppose U_{i}, i = 0, 1, 2, are independent random variables following IEF(β_{i}; c), i = 0, 1, 2, and let X = max {U_{0}, U_{1}} and Y = max {U_{0}, U_{2}}; then, the bivariate vector (X, Y) has the cumulative distribution function
Proof Similarly as in Theorem 1, using F_{X, Y}(x, y) = P(X < x, Y < y), we can show that (5) holds. ∎
We denote the bivariate distribution with cumulative distribution function of the form (5) by BIEF(β_{0}, β_{1}, β_{2}; c). Clearly, X and Y are independent iff β_{0} = 0. The joint cumulative distribution function can also be written as
Theorem 3 If the vector (X, Y) has either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), then their joint pdf is given by
where \( {\displaystyle \begin{array}{c}{f}_1\left(x,y\right)=\left\{\begin{array}{c}{\theta}_2\left({\theta}_0+{\theta}_1\right){g}_1^{\prime}\left(x;c\right){g}_1^{\prime}\left(y;c\right)\,{\mathrm{e}}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}\ {\beta}_1\left({\beta}_0+{\beta}_2\right){g}_2^{\prime}\left(x;c\right){g}_2^{\prime}\left(y;c\right){e}^{-{\beta}_1{g}_2\left(x;c\right)-\left({\beta}_0+{\beta}_2\right){g}_2\left(y;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\upbeta}_0,{\upbeta}_1,{\upbeta}_2;\mathrm{c}\right)\end{array}\right.\\ {}{f}_2\left(x,y\right)=\left\{\begin{array}{c}{\theta}_1\left({\theta}_0+{\theta}_2\right){g}_1^{\prime}\left(x;c\right){g}_1^{\prime}\left(y;c\right)\,{\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)-\left({\theta}_0+{\theta}_2\right){g}_1\left(y;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}\ {\beta}_2\left({\beta}_0+{\beta}_1\right){g}_2^{\prime}\left(x;c\right){g}_2^{\prime}\left(y;c\right){e}^{-\left({\beta}_0+{\beta}_1\right){g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\upbeta}_0,{\upbeta}_1,{\upbeta}_2;\mathrm{c}\right)\end{array}\right.\end{array}} \)
and
with θ = θ_{0} + θ_{1} + θ_{2}, β = β_{0} + β_{1} + β_{2}, and \( {g}_i^{\prime}\left(t;c\right),i=1,2, \) denoting the first derivative of g_{i}(t; c) with respect to t.
Proof Clearly, for the two models, f_{1}(x, y) and f_{2}(x, y) can be obtained by computing \( \frac{\partial^2{\overline{F}}_{X,Y}\left(x,y\right)}{\partial x\partial y} \) or \( \frac{\partial^2{F}_{X,Y}\left(x,y\right)}{\partial x\partial y} \) for x > y and y > x, respectively. For f_{0}(x), we use the relation
\( {\int}_0^{\infty }{\int}_0^x{f}_1\left(x,y\right) dydx+{\int}_0^{\infty }{\int}_0^y{f}_2\left(x,y\right) dx dy+{\int}_0^{\infty }{f}_0(x) dx=1 \). So, for the BEF, we have
and
Thus,
Similarly, for the BIEF, we have \( {\int}_0^{\infty }{f}_0(x) dx=1+\left({\beta}_1+{\beta}_2\right){\int}_0^{\infty }{g}_2^{\prime}\left(t;c\right){\mathrm{e}}^{-\beta {g}_2\left(t;c\right)} dt=-{\beta}_0{\int}_0^{\infty }{g}_2^{\prime}\left(t;c\right){\mathrm{e}}^{-\beta {g}_2\left(t;c\right)} dt \).
Hence, the proof is complete. ∎
Notice that both the distributions BEF(θ_{0}, θ_{1}, θ_{2}; c) and BIEF(β_{0}, β_{1}, β_{2}; c) are singular on the line X = Y, since P(X = Y) ≠ 0. Thus, the two models have a singular part and an absolutely continuous part, similar to Marshall and Olkin’s model. The following theorem provides explicitly the absolutely continuous part and the singular part of the BEF and BIEF.
Theorem 4 If the vector (X, Y) has BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), then
 (i)
The survival function for the BEF is
$$ {\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\frac{\theta_1+{\theta}_2}{\theta }{\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)+\frac{\theta_0}{\theta }{\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right), $$
(7)
where θ = θ_{0} + θ_{1} + θ_{2}, \( {\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right)={\mathrm{e}}^{-\theta {g}_1\left(\max \left\{x,y\right\};c\right)} \) is the singular part, and \( {\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)=\frac{\theta }{\theta_1+{\theta}_2}{\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)-{\theta}_0{g}_1\left(\max \left\{x,y\right\};c\right)}-\frac{\theta_0}{\theta_1+{\theta}_2}{\mathrm{e}}^{-\theta {g}_1\left(\max \left\{x,y\right\};c\right)} \) is the absolutely continuous part.
 (ii)
The cumulative function for the BIEF is
$$ {F}_{\mathrm{BIEF}}\left(x,y\right)=\frac{\beta_1+{\beta}_2}{\beta }{F}_{\mathrm{BIEF}(a)}\left(x,y\right)+\frac{\beta_0}{\beta }{F}_{\mathrm{BIEF}(s)}\left(x,y\right), $$
(8)
where β = β_{0} + β_{1} + β_{2}, \( {F}_{\mathrm{BIEF}(s)}\left(x,y\right)={\mathrm{e}}^{-\beta {g}_2\left(\min \left\{x,y\right\};c\right)} \) is the singular part, and \( {F}_{\mathrm{BIEF}(a)}\left(x,y\right)=\frac{\beta }{\beta_1+{\beta}_2}{\mathrm{e}}^{-{\beta}_1{g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)-{\beta}_0{g}_2\left(\min \left\{x,y\right\};c\right)}-\frac{\beta_0}{\beta_1+{\beta}_2}{\mathrm{e}}^{-\beta {g}_2\left(\min \left\{x,y\right\};c\right)} \) is the absolutely continuous part.
Proof (i) For the BEF, using the fact that \( {\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\alpha {\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)+\left(1-\alpha \right){\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right) \), 0 ≤ α ≤ 1,
Hence α may be obtained as
and \( {\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)=\underset{y}{\overset{\infty }{\int }}\underset{x}{\overset{\infty }{\int }}{f}_{\mathrm{BEF}(a)}\left(u,v\right) dudv \); hence, with α and \( {\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right) \) known, the singular part \( {\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right) \) can be obtained by subtraction.
(ii) Similarly, for the BIEF, F_{BIEF(a)}(x, y) is computed by using F_{BIEF}(x, y) = γF_{BIEF(a)}(x, y) + (1 − γ)F_{BIEF(s)}(x, y), 0 ≤ γ ≤ 1. Proceeding as in part (i), we can show that (8) holds. ∎
The marginal distributions of X and Y and the conditional distributions are given by Theorems 5 and 6, while the distributions of min{X, Y}, for the BEF, and max{X, Y}, for the BIEF, are given by Theorem 7.
Theorem 5 If the vector (X, Y) has either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), then the marginal distributions of X and Y are either EF(θ_{0} + θ_{i}; c) or IEF(β_{0} + β_{i}; c), i = 1, 2, respectively.
Proof If (X, Y) has BEF(θ_{0}, θ_{1}, θ_{2}; c), then from (6) we have
Similarly, we can derive f_{Y}(y). In a similar manner, X and Y can be shown to follow IEF(β_{0} + β_{i}; c), i = 1, 2, respectively, for the BIEF. ∎
Notice that the marginal distributions of X and Y can also be obtained using the next lemma.
Lemma 1
(i) Let X = min {U_{0}, U_{1}}, then X ∼ EF(θ_{0} + θ_{1}; c) iff U_{0} and U_{1} are independent and U_{0} ∼ EF(θ_{0}; c), U_{1} ∼ EF(θ_{1}; c).
(ii) Let X = max {U_{0}, U_{1}}, then X ∼ IEF(β_{0} + β_{1}; c) iff U_{0} and U_{1} are independent and U_{0} ∼ IEF(β_{0}; c), U_{1} ∼ IEF(β_{1}; c).
Here “∼” means follows or has the distribution.
Proof (i) for X = min {U_{0}, U_{1}}, we have
If U_{0} and U_{1} are independent and U_{0} ∼ EF(θ_{0}; c) and U_{1} ∼ EF(θ_{1}; c), then
Conversely, if X ∼ EF(θ_{0} + θ_{1}; c), then
Then, U_{0} and U_{1} are independent with \( {\overline{F}}_{U_0}(x)={e}^{-{\theta}_0{g}_1\left(x;c\right)} \) and \( {\overline{F}}_{U_1}(x)={e}^{-{\theta}_1{g}_1\left(x;c\right)}, \) i.e., U_{0} ∼ EF(θ_{0}; c) and U_{1} ∼ EF(θ_{1}; c).
 (ii)
Similarly for the BIEF. ∎
Consequently, from Theorems 1 and 2 and Lemma 1, we have the following lemma.
Lemma 2
(i) (X, Y) ∼ BEF(θ_{0}, θ_{1}, θ_{2}; c) iff there exist independent EF random variables U_{i}, i = 0, 1, 2, such that X = min {U_{0}, U_{1}} and Y = min {U_{0}, U_{2}}.
(ii) (X, Y) ∼ BIEF(β_{0}, β_{1}, β_{2}; c) if and only if there exist independent IEF random variables U_{i}, i = 0, 1, 2, such that X = max {U_{0}, U_{1}} and Y = max {U_{0}, U_{2}}.
Theorem 6 The conditional distribution of X given Y = y, is given by
for the BEF, while for the BIEF is given by
Proof The proof is straightforward and is therefore omitted. ∎
Theorem 7 If (X, Y) is a bivariate vector of continuous random variables, then
 (i)
min{X, Y} ∼ EF(θ; c), if (X, Y) ∼ BEF(θ_{0}, θ_{1}, θ_{2}; c),
 (ii)
max{X, Y} ∼ IEF(β; c), if (X, Y) ∼ BIEF(β_{0}, β_{1}, β_{2}; c).
Proof (i) if (X, Y) ∼ BEF(θ_{0}, θ_{1}, θ_{2}; c), then using (4), we have
Similarly by using (5) for the BIEF, we can show that max{X, Y} ∼ IEF(β; c). ∎
Stress-strength reliability
In this section, we present the stress-strength reliability of the two bivariate models. Many bivariate distributions in the literature have the forms of the proposed models, for example, the MO bivariate exponential distribution, Marshall and Olkin [15], and the bivariate Rayleigh distribution introduced by Pak et al. [9] for the BEF, and the bivariate inverse Weibull and bivariate Burr type III distributions for the BIEF. So the following theorem can be applied to many distributions possessing the BEF or BIEF form.
Theorem 8 Let (X, Y) be a bivariate vector. Then, the stress-strength reliability parameter, R, is given by
 (i)$$ R=P\left(Y<X\right)=\frac{\theta_2}{\theta }, $$(11)
iff (X, Y) ∼ BEF(θ_{0}, θ_{1}, θ_{2}; c), where θ = θ_{0} + θ_{1} + θ_{2}.
 (ii)$$ R=P\left(Y<X\right)=\frac{\beta_1}{\beta }, $$(12)
iff (X, Y) ∼ BIEF(β_{0}, β_{1}, β_{2}; c), where β = β_{0} + β_{1} + β_{2}.
Proof (i) First, suppose that (X, Y) ∼ BEF(θ_{0}, θ_{1}, θ_{2}; c), then using (6),
Conversely, suppose that Eq. (11) holds. From Mokhlis et al. ([1], Theorem 1), since \( R=\frac{\theta_2}{\theta_0+{\theta}_1+{\theta}_2} \), we have two independent random variables, say, X and U_{2} where X ∼ EF(θ_{0} + θ_{1}; c) and U_{2} ∼ EF(θ_{2}; c).
From Lemma 1, since X ∼ EF(θ_{0} + θ_{1}; c), then X = min {U_{0}, U_{1}}, where U_{0} ∼ EF(θ_{0}; c) and
U_{1} ∼ EF(θ_{1}; c). Then,
Let Y = min {U_{0}, U_{2}}. Thus, using Lemma 2, the proof is completed.
(ii) Similarly, suppose that (X, Y) ∼ BIEF(β_{0}, β_{1}, β_{2}; c), then using (6),
Conversely, suppose that Eq. (12) holds. From Mokhlis et al. ([1], Theorem 2), since \( R=\frac{\beta_1}{\beta_0+{\beta}_1+{\beta}_2} \), we have two independent random variables, say U_{1} and Y, where U_{1} must be distributed as IEF(β_{1}; c) and Y must be distributed as IEF(β_{0} + β_{2}; c). From Lemma 1, since Y ∼ IEF(β_{0} + β_{2}; c), we can write Y = max {U_{0}, U_{2}}, where U_{0} ∼ IEF(β_{0}; c) and U_{2} ∼ IEF(β_{2}; c). Thus, we have
Let X = max {U_{0}, U_{1}}. Using Lemma 2, the proof is completed. ∎
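Theorem 8(i) can also be checked by Monte Carlo using the construction of Lemma 2: Y < X happens exactly when U_{2} is the smallest of the three latent variables, and since g_{1} is monotone this event does not depend on g_{1} at all, which is why R is free of c. A quick sketch with arbitrary parameter values:

```python
import random

random.seed(11)
th0, th1, th2 = 0.15, 0.2, 0.5    # arbitrary illustrative rates
theta = th0 + th1 + th2

n, wins = 50000, 0
for _ in range(n):
    u0, u1, u2 = (random.expovariate(t) for t in (th0, th1, th2))
    # Y = min(u0, u2) < X = min(u0, u1) iff u2 < min(u0, u1)
    wins += min(u0, u2) < min(u0, u1)

R_empirical = wins / n    # should be close to theta2/theta = 0.5/0.85
```

The empirical frequency settles near θ_{2}/θ ≈ 0.588, in agreement with Eq. (11).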
Point estimation of R
Let (X_{1}, Y_{1}), …, (X_{n}, Y_{n}) be a random sample of size n from either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), assuming c is known. Let n_{1} be the number of observations with y_{i} > x_{i}, n_{2} the number with y_{i} < x_{i}, and n_{0} the number with y_{i} = x_{i} in the sample of size n, where n = n_{0} + n_{1} + n_{2}. Then, the nonparametric estimator of R is given by \( \check{R}=\frac{n_2}{n} \), where n_{2} is binomial (n, R). Thus, \( E\left(\check{R}\right)=R \) and \( V\left(\check{R}\right)=\frac{R\left(1-R\right)}{n}. \)
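In code, the nonparametric estimator is just the fraction of pairs with y_{i} < x_{i}; a sketch with a small made-up sample:

```python
def nonparametric_R(pairs):
    """R-check = n2 / n, the fraction of observations with y_i < x_i.
    Since n2 ~ Binomial(n, R), E(R-check) = R and V(R-check) = R(1 - R)/n."""
    n2 = sum(y < x for x, y in pairs)
    return n2 / len(pairs)

# Hypothetical sample of five (x, y) pairs; three of them have y < x.
sample = [(2.0, 1.0), (3.0, 1.5), (1.0, 2.0), (4.0, 2.5), (1.0, 1.0)]
# nonparametric_R(sample) -> 0.6
```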
Maximum likelihood estimators of R
Let (X_{1}, Y_{1}), …, (X_{n}, Y_{n}) be a random sample of size n from either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), then the maximum likelihood estimator (MLE), \( \hat{R}, \) of R is given by
where \( {\hat{\theta}}_i \), \( {\hat{\beta}}_i \) are the maximum likelihood estimators of θ_{i}, β_{i}, i = 0, 1, 2, respectively.
First, suppose that (X_{1}, Y_{1}), …, (X_{n}, Y_{n}) is a random sample of size n from BEF(θ_{0}, θ_{1}, θ_{2}; c); then, the MLEs \( {\hat{\theta}}_i \) of θ_{i}, i = 0, 1, 2, can be obtained by writing the log-likelihood function \( {\displaystyle \begin{array}{c}\log L={\sum}_{i=0}^2{n}_i\log {\theta}_i+{\sum}_{i=1}^2{n}_i\log \left({\theta}_0+{\theta}_{3-i}\right)+{\sum}_{i=1}^n\log {g}_1^{\prime}\left({x}_i;c\right)+{\sum}_{i=1,{x}_i\ne {y}_i}^n\log {g}_1^{\prime}\left({y}_i;c\right)\\ {}-{\theta}_1\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)-{\theta}_2\sum \limits_{i=1}^n{g}_1\left({y}_i;c\right)-{\theta}_0\sum \limits_{i=1}^n{g}_1\left(\max \left\{{x}_i,{y}_i\right\};c\right)\end{array}} \) and solving the likelihood system of equations w.r.t. θ_{i}, i = 0, 1, 2.
Similarly, for the BIEF(β_{0}, β_{1}, β_{2}; c), the MLEs \( {\hat{\beta}}_i \) of β_{i}, i = 0, 1, 2, can be obtained by writing the log-likelihood function \( {\displaystyle \begin{array}{c}\log L={n}_0\log {\beta}_0+{\sum}_{i=1}^2{n}_i\log \left({\beta}_{3-i}\right)+{\sum}_{i=1}^2{n}_i\log \left({\beta}_0+{\beta}_i\right)+{\sum}_{i=1}^n\log \left({g}_2^{\prime}\left({x}_i;c\right)\right)\\ {}+{\sum}_{i=1,{x}_i\ne {y}_i}^n\log \left({g}_2^{\prime}\left({y}_i;c\right)\right)-{\beta}_1{\sum}_{i=1}^n{g}_2\left({x}_i;c\right)-{\beta}_2{\sum}_{i=1}^n{g}_2\left({y}_i;c\right)-{\beta}_0{\sum}_{i=1}^n{g}_2\left(\min \left\{{x}_i,{y}_i\right\};c\right),\end{array}} \)
and solving the likelihood system of equations w.r.t. β_{i}, i = 0, 1, 2.
However, the previous likelihood systems of equations generated by either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c) are computationally inconvenient, and can be solved numerically using a Newton-Raphson procedure or Fisher’s method of scoring.
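As an illustration, for the MO bivariate exponential member (g_{1}(u) = u) the BEF likelihood equations can be solved without a library optimizer: the log-likelihood above is concave, and each likelihood equation is strictly decreasing in its own parameter, so cyclic coordinate-wise root-finding by bisection converges to the MLE. The simulated data and starting values below are arbitrary; this is a sketch, not the authors' code.

```python
import random

random.seed(3)
th_true = (0.15, 0.2, 0.5)             # hypothetical (theta0, theta1, theta2)

# Simulate pairs from the MO bivariate exponential construction (g1(u) = u)
data = []
for _ in range(400):
    u0, u1, u2 = (random.expovariate(t) for t in th_true)
    data.append((min(u0, u1), min(u0, u2)))

n1 = sum(y > x for x, y in data)        # observations with y_i > x_i
n2 = sum(y < x for x, y in data)        # observations with y_i < x_i
n0 = len(data) - n1 - n2                # ties x_i = y_i (singular part)
Sx = sum(x for x, _ in data)            # sum of g1(x_i) with g1(u) = u
Sy = sum(y for _, y in data)
Sm = sum(max(x, y) for x, y in data)

def root(score):
    """Root of a strictly decreasing score function on (0, inf), by bisection."""
    lo, hi = 1e-12, 1.0
    while score(hi) > 0:
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Cyclic coordinate ascent: each step solves one likelihood equation exactly
t0, t1, t2 = 0.3, 0.3, 0.3
for _ in range(100):
    t0 = root(lambda t: n0 / t + n1 / (t + t2) + n2 / (t + t1) - Sm)
    t1 = root(lambda t: n1 / t + n2 / (t0 + t) - Sx)
    t2 = root(lambda t: n2 / t + n1 / (t0 + t) - Sy)

R_mle = t2 / (t0 + t1 + t2)             # MLE of R by invariance, cf. Eq. (11)
```

The three score functions are the partial derivatives of the log-likelihood displayed above with g_{1}(u) = u, so g_{1}′ ≡ 1 and the Σ log g_{1}′ terms vanish.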
Now, we introduce a simple estimator of R, depending on the marginal distributions of X and min{X, Y} for the BEF and depending on the marginal distributions of Y and max{X, Y} for the BIEF.
Let (X_{1}, Y_{1}), …, (X_{n}, Y_{n}) be a random sample of size n from either BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c); then, a simple estimator, \( \overset{\sim }{R} \), of R is given by
For the BEF, we have X ∼ EF(θ_{0} + θ_{1}; c); thus, the MLE of (θ_{0} + θ_{1}) is given by (see Mokhlis et al. [1])
Similarly, since min{X, Y} ∼ EF(θ; c), the MLE of θ is given by
Thus,
Replacing the parameters in (11) by their estimators in (18) and (19), we get the simple estimator of R for the BEF.
Similarly for the BIEF model, Y ∼ IEF(β_{0} + β_{2}; c); thus, the MLE of (β_{0} + β_{2}) is given by (see Mokhlis et al. [1])
and since max{X, Y} ∼ IEF(β; c), then the MLE of β is given by
Thus,
Again, replacing the parameters in (12) by their estimators given by (21) and (22), we obtain the simple estimator of R for the BIEF.
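Both simple estimators reduce to ratios of sums: R = θ_{2}/θ = 1 − (θ_{0} + θ_{1})/θ for the BEF and R = β_{1}/β = 1 − (β_{0} + β_{2})/β for the BIEF, so substituting the closed-form marginal MLEs of the forms (18), (19), (21), and (22) gives the sketch below (the function names are our own):

```python
def simple_R_bef(pairs, g1):
    """R-tilde for the BEF: the MLE of theta0 + theta1 is n / sum g1(x_i) and
    the MLE of theta is n / sum g1(min(x_i, y_i)), so
    R-tilde = 1 - sum g1(min(x_i, y_i)) / sum g1(x_i)."""
    sum_x = sum(g1(x) for x, _ in pairs)
    sum_min = sum(g1(min(x, y)) for x, y in pairs)
    return 1.0 - sum_min / sum_x

def simple_R_bief(pairs, g2):
    """R-tilde for the BIEF, from the marginals of Y and of max{X, Y}:
    R-tilde = 1 - sum g2(max(x_i, y_i)) / sum g2(y_i)."""
    sum_y = sum(g2(y) for _, y in pairs)
    sum_max = sum(g2(max(x, y)) for x, y in pairs)
    return 1.0 - sum_max / sum_y
```

Since g_{1}(min{x, y}) ≤ g_{1}(x) and g_{2}(max{x, y}) ≤ g_{2}(y), both estimators automatically lie in [0, 1).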
Estimation of R when the stress is censored at the strength
Sometimes, obtaining the estimate of R based on a complete sample is neither possible nor desirable, on account of lack of time or of minimization of the experimental cost. Thus, there are situations where the stress is censored at the strength (see Hanagal [16]).
Let (X_{1}, Y_{1}), …, (X_{n}, Y_{n}) be a random sample of size n from BEF(θ_{0}, θ_{1}, θ_{2}; c); then, the strength and stress associated with the ith pair of the sample are
and the likelihood function can be written as
The likelihood equations are easily obtained as
Thus, the MLEs \( \overline{\theta_0+{\theta}_1} \) and \( \overline{\theta_2} \) of θ_{0} + θ_{1} and θ_{2}, respectively, are
\( \overline{\theta_0+{\theta}_1}=\frac{n}{\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)}, \) and \( \overline{\theta_2}=\frac{n_2}{\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)} \),
then, the MLE, \( \overline{R} \), of R when the stress is censored at the strength is given by
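Plugging the censored-sample estimators above into (11), a sketch of \( \overline{R} \) (assuming, as the substitution suggests, that R is estimated by \( \overline{\theta_2}/\left(\overline{\theta_0+{\theta}_1}+\overline{\theta_2}\right) \); the function name is ours):

```python
def censored_R_bef(pairs, g1):
    """R-bar for the BEF when the stress is censored at the strength:
    (theta0 + theta1)-bar = n / sum g1(x_i),
    theta2-bar = n2 / sum g1(min(x_i, y_i)),
    R-bar = theta2-bar / ((theta0 + theta1)-bar + theta2-bar)."""
    n = len(pairs)
    n2 = sum(y < x for x, y in pairs)              # uncensored stresses (y_i < x_i)
    est_t01 = n / sum(g1(x) for x, _ in pairs)     # estimate of theta0 + theta1
    est_t2 = n2 / sum(g1(min(x, y)) for x, y in pairs)
    return est_t2 / (est_t01 + est_t2)
```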
Special cases
Table 1 presents some well-known bivariate distributions as members of BEF(θ_{0}, θ_{1}, θ_{2}; c) or BIEF(β_{0}, β_{1}, β_{2}; c), together with some other distributions, for particular choices of θ_{i}, β_{i}, i = 0, 1, 2, g_{1}(x; c), and g_{2}(x; c).
Clearly, putting c = 1 and c = 2 in the bivariate inverse Weibull distribution, we get the bivariate inverse exponential and bivariate inverse Rayleigh distributions, respectively.
Notice that the bivariate modified Weibull distribution proposed by El-Bassiouny et al. [13] is a special case of BEF(θ_{0}, θ_{1}, θ_{2}; c) with θ_{1} = α_{1}, θ_{2} = α_{2}, θ_{0} = α_{3}, and c = (β, λ). Also, the bivariate generalized Rayleigh distribution introduced by Sarhan [14] with shape parameter equal to 1 is a special case of BIEF(β_{0}, β_{1}, β_{2}; c), where β_{0} = β_{1} = β_{2} = λ^{2} and c = 1.
Numerical illustrations
To illustrate numerically the results obtained in the previous sections, a simulation study is performed: 1000 samples, each of sizes 10, 20, 30, and 50, are generated from some BEF and BIEF distributions. The reliability, R, is computed for the following cases.
Case 1 (X, Y) has MO bivariate exponential distribution with parameters θ_{0} = 0.15, θ_{1} = 0.2 and θ_{2} = 0.5.
Case 2 (X, Y) has bivariate Rayleigh distribution with parameters θ_{0} = 0.2, θ_{1} = 0.25 and θ_{2} = 0.8.
Case 3 (X, Y) has bivariate inverse exponential distribution with parameters β_{0} = 0.25, β_{1} = 1, and β_{2} = 0.35.
Case 4 (X, Y) has bivariate inverse Rayleigh distribution with parameters β_{0} = 0.2, β_{1} = 1.2, and β_{2} = 0.1.
It is to be noted that these values are chosen arbitrarily, just to illustrate the results obtained.
Tables 2 and 3 show the true value of R and its corresponding estimates obtained by the maximum likelihood method (\( {R}^{(M)}=\hat{R} \)), our simple method of estimation (\( {R}^{(S)}=\overset{\sim }{R} \)), the method for stress censored at the strength (\( {R}^{(C)}=\overline{R} \)), and the nonparametric method (\( {R}^{(N)}=\check{R} \)). The values R^{(M)}, R^{(S)}, R^{(C)}, and R^{(N)} appearing in Tables 2 and 3 are the means of the 1000 replicates of the corresponding estimates. For comparison, we calculate the bias (b) and mean square error (MSE) of the different estimates in each case considered, where the bias (b) is the difference between the mean of the 1000 replicate estimates and the true value of R, and the MSE is the mean of the squared differences of the 1000 replicate estimates from the true value of R. The calculations are performed using the Maple program.
From Tables 2 and 3, we see that all estimates converge to R as n increases, and the MSE decreases. In Table 2, we see MSE^{(M)} < MSE^{(C)} < MSE^{(S)} < MSE^{(N)}, while in Table 3, MSE^{(M)} < MSE^{(S)} < MSE^{(N)}. However, R^{(C)} (when the stress is censored at the strength, for the BEF) and R^{(S)} (for the BEF and BIEF) are simple estimators that are easier to compute and give satisfactory results in terms of bias and mean square error.
Real data example
In real life, there are many situations where we have X < Y, Y < X, or X = Y, such as nuclear reactor safety and competing risks (see Kotz et al. [17]). In the medical field, X and Y can represent the blood pressure or white blood cell counts of patients before and after a certain operation.
The following data set is from American Football (National Football League) matches for three consecutive weekends in 1986. The data were first published in the Washington Post and are available in Csörgő and Welsh [18]; they are reproduced in Table 4.
The bivariate variables X and Y are as follows: X represents the game time to the first points scored by kicking the ball between the goal posts, and Y represents the game time to the first points scored by moving the ball into the end zone. These data were first analyzed by Csörgő and Welsh [18], who converted the seconds to decimal minutes. Kundu and Gupta [19] and Jamalizadeh and Kundu [20] also analyzed these data.
We consider the BEF and BIEF for fitting this data set. First, we fit the EF and IEF to X and Y separately. The data fit two cases, namely the exponential, which is a special case of the EF, and the inverse exponential, which is a special case of the IEF. In the case of the exponential distribution, the MLEs of the scale parameters of X and Y are 0.1102 and 0.0745, respectively, while for the inverse exponential the MLEs of the scale parameters are 4.4000 and 5.0214, respectively.
The Kolmogorov-Smirnov distances between the fitted and empirical distribution functions for X and Y are 0.14997 and 0.1182, respectively, for the exponential case, while those for the inverse exponential case are 0.1530 and 0.1955. These values are less than the critical value D_{0.05} ≅ 0.2099 for n = 42, so each of the exponential and inverse exponential distributions is an appropriate fit for the given data. This means that there may exist three independent random variables, say U_{i}, i = 0, 1, 2, with EF or IEF distributions, such that X = min {U_{0}, U_{1}} or max {U_{0}, U_{1}} and Y = min {U_{0}, U_{2}} or max {U_{0}, U_{2}}.
Now, we test whether the MO bivariate exponential distribution or the bivariate inverse exponential distribution provides the better fit to the above data set, using the Akaike information criterion (AIC) to check model validity. Based on the above data, the MLEs of the parameters of the MO bivariate exponential distribution are \( {\hat{\theta}}_0 \) = 0.0715, \( {\hat{\theta}}_1 \) = 0.0456, and \( {\hat{\theta}}_2 \) = 0.0030, and the MLEs of the parameters of the bivariate inverse exponential distribution are \( {\hat{\beta}}_0 \) = 4.2769, \( {\hat{\beta}}_1 \) = 0.1746, and \( {\hat{\beta}}_2 \) = 2.0715. For the MO bivariate exponential case, the log-likelihood value is − 227.9347 and the corresponding AIC is 461.8694, while for the bivariate inverse exponential distribution the log-likelihood value is − 249.6874 and the AIC is 505.3748. Therefore, the MO bivariate exponential provides a better fit than the bivariate inverse exponential distribution. Estimating the reliability parameter R using the MLEs \( {\hat{\theta}}_i \) of θ_{i}, i = 0, 1, 2, for the MO bivariate exponential distribution gives \( \hat{R} \) = 0.0248, while the proposed simple estimators give R^{(S)} = 0.0235 and R^{(C)} = 0.0238, and the nonparametric estimator gives R^{(N)} = 0.0238.
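The AIC comparison above is just AIC = 2k − 2 log L with k = 3 parameters per model; a one-line check reproduces the values reported for the football data:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 log L."""
    return 2 * k - 2 * log_likelihood

# Reproducing the reported values (k = 3 parameters for each model):
aic_mo_exp = aic(-227.9347, 3)     # 461.8694 for the MO bivariate exponential
aic_inv_exp = aic(-249.6874, 3)    # 505.3748 for the bivariate inverse exponential
# The smaller AIC (MO bivariate exponential) indicates the better fit.
```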
Conclusions
In this paper, we have suggested two families of bivariate distributions, the BEF and BIEF, with marginal distributions having a general exponential form or inverse exponential form. Some distributions in the literature belong to these families, such as the MO bivariate exponential distribution, Marshall and Olkin [15], and the bivariate Rayleigh distribution, Pak et al. [9]. Other bivariate distributions, such as the bivariate Weibull and bivariate Burr type III, can belong to these families according to the form of g_{1}(x; c) or g_{2}(x; c). We discussed some properties of the proposed families and studied the stress-strength reliability parameter, R = P(Y < X). The MLEs of the distribution parameters were derived, and simple estimators of R based on some marginal distributions were introduced in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R was also obtained for the BEF distribution. Some bivariate members of the proposed families were presented. A simulation study was performed, showing that the proposed simple estimators of R are easier to compute and give satisfactory results with respect to bias and mean square error. A real-data example of bivariate variables (X, Y) belonging to the proposed family was also introduced.
Availability of data and materials
The data used in the simulation study were generated by the Maple program, while the real-data example is available in Csörgő and Welsh [18].
Abbreviations
 AIC:

Akaike information criterion
 b :

Bias
 BEF:

General bivariate exponential distribution
 BIEF:

General bivariate inverse exponential distribution
 EF:

Distribution of general exponential form
 IEF:

Distribution of general inverse exponential form
 Iff:

If and only if
 MLE:

The maximum likelihood estimate
 MO:

MarshallOlkin
 MSE:

Mean square error
 R :

Reliability parameter
References
 1.
Mokhlis, N.A., Ibrahim, E.J., Gharieb, M.D.: Stress-strength reliability with general form distributions. Commun. Stat. Theory Methods 46(3), 1230–1246 (2017)
 2.
Mokhlis, N.A.: Reliability of a stress-strength model with Burr type III distributions. Commun. Stat. Theory Methods 34(7), 1643–1657 (2005)
 3.
Kundu, D., Gupta, R.D.: Estimation of P(Y < X) for Weibull distribution. IEEE Trans. Reliab. 55, 270–280 (2006)
 4.
Singh, S.K., Singh, U., Singh Yadav, A., Vishwkarma, P.K.: On the estimation of stress-strength reliability parameter of inverted exponential distribution. Int. J. Sci. World 3, 98–112 (2015)
 5.
Kotz, S., Lumelskii, Y., Pensky, M.: The Stress-Strength Model and Its Generalizations: Theory and Applications. World Scientific, Singapore (2003)
 6.
Mokhlis, N.M.: Reliability of strength model with a bivariate exponential distribution. J. Egypt. Math. Soc. 14, 69–78 (2006)
 7.
Nadarajah, S., Kotz, S.: Reliability for some bivariate exponential distributions. Math. Probl. Eng. 2006, 1–14 (2006)
 8.
Nguimkeu, P., Rekkas, M., Wong, A.: Interval estimation for the stress-strength reliability with bivariate normal variables. Open J. Stat. 4, 630–640 (2014)
 9.
Pak, A., Khoolenjani, N.B., Jafari, A.A.: Inference on P(Y < X) in bivariate Rayleigh distribution. Commun. Stat. Theory Methods 43(22), 4881–4892 (2014)
 10.
Abdel-Hamid, A.H.: Stress-strength reliability for general bivariate distributions. J. Egypt. Math. Soc. 5, 617–621 (2016)
 11.
Kolesárová, A., Mesiar, R., Saminger-Platz, S.: Generalized Farlie-Gumbel-Morgenstern Copulas, International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 244–252. Springer, Cham (2018)
 12.
Arnold, B.C., Arvanitis, M.A.: On bivariate pseudo-exponential distributions. J. Appl. Stat. 1–13 (2019). https://doi.org/10.1080/02664763.2019.1686132
 13.
El-Bassiouny, A.H., Shahen, H.S., Abouhawwash, M.: A new bivariate modified Weibull distribution and its extended distribution. J. Stat. Appl. Probab. 7(2), 217–231 (2018)
 14.
Sarhan, A.: The bivariate generalized Rayleigh distribution. J. Math. Sci. Mod. 2(2), 99–111 (2019)
 15.
Marshall, A.W., Olkin, I.: A multivariate exponential distribution. J. Am. Stat. Assoc. 62, 30–44 (1967)
 16.
Hanagal, D.: Estimation of reliability when stress is censored at strength. Commun. Stat. Theory Methods 26(4), 911–919 (1997)
 17.
Kotz, S., Balakrishnan, N., Johnson, N.L.: Continuous Multivariate Distributions, vol. 1: Models and Applications, 2nd edn. John Wiley & Sons, New York (2004)
 18.
Csörgő, S., Welsh, A.H.: Testing for exponential and Marshall–Olkin distributions. J. Stat. Plann. Inference. 23(3), 287–300 (1989)
 19.
Kundu, D., Gupta, R.D.: Modified Sarhan-Balakrishnan singular bivariate distribution. J. Stat. Plann. Inference 140(2), 526–538 (2010)
 20.
Jamalizadeh, A., Kundu, D.: Weighted Marshall–Olkin bivariate exponential distribution. Statistics. 47(5), 917–928 (2013)
Acknowledgements
The authors thank the editor and the anonymous referees for their valuable comments.
Funding
Not applicable
Author information
Contributions
Both authors have jointly worked to the manuscript with an equal contribution. Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Khames, S.K., Mokhlis, N.A.: Bivariate general exponential models with stress-strength reliability application. J. Egypt. Math. Soc. 28, 9 (2020). https://doi.org/10.1186/s42787-020-0069-y
Keywords
 Stress-strength reliability
 Exponential distribution model
 Inverse exponential distribution model
 Maximum likelihood estimator
Mathematics Subject Classifications
 62N05
 62E10
 62F10
 62G05
 62N02