# Bivariate general exponential models with stress-strength reliability application

## Abstract

In this paper, we introduce two families of general bivariate distributions, which we refer to as the general bivariate exponential family and the general bivariate inverse exponential family. Many bivariate distributions in the literature are members of the proposed families. Some properties of the proposed families are discussed, and a characterization associated with the stress-strength reliability parameter, R, is presented. Concerning R, the maximum likelihood estimators and a simple estimator with an explicit form, depending on some marginal distributions, are obtained in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R is also obtained. The results obtained can be applied to a variety of bivariate distributions in the literature. A numerical illustration is applied to some well-known distributions. Finally, a real data example is presented to fit one of the proposed models.

## Introduction

Mokhlis et al.  presented two forms of survival functions, given by

$$\overline{F}\left(u;\theta, c\right)={e}^{-\theta {g}_1\left(u;c\right)},$$
(1)
$$\overline{F}\left(u;\beta, c\right)=1-{e}^{-\beta {g}_2\left(u;c\right)},$$
(2)

where θ ∈ Θ and g1(u; c) does not contain θ, while β ∈ B, c ∈ C, and g2(u; c) does not contain β; here Θ, B, and C are the parameter spaces. The function g1(u; c) is continuous, monotone increasing, and differentiable, with g1(u; c) → 0 as u → 0 and g1(u; c) → ∞ as u → ∞, while g2(u; c) is continuous, monotone decreasing, and differentiable, with g2(u; c) → 0 as u → ∞ and g2(u; c) → ∞ as u → 0. With appropriate choices of gi(u; c), i = 1, 2, in (1) and (2), many distributions in the literature can be obtained: the exponential, Weibull, Rayleigh, Pareto, Lomax, and other distributions from the first form (1), and the inverse exponential, inverse Weibull, inverse Rayleigh, Burr type III, and other distributions from the second form (2); see Mokhlis et al. For convenience, we will denote the forms (1) and (2) by EF(θ; c) and IEF(β; c) and denote their survival functions and probability density functions by FEF(u; θ, c), fEF(u; θ, c) and FIEF(u; β, c), fIEF(u; β, c), respectively.
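To make the two forms concrete, the following Python sketch evaluates (1) and (2) for a few illustrative choices of g1 and g2; the particular choices (exponential, Rayleigh-type, and inverse exponential members) are assumptions for illustration, not prescriptions of the text.

```python
import math

# Forms (1) and (2) as higher-order functions; any g satisfying the
# stated monotonicity and limit conditions can be passed in.
def sf_EF(u, theta, g1):
    """Form (1): survival function exp(-theta * g1(u))."""
    return math.exp(-theta * g1(u))

def sf_IEF(u, beta, g2):
    """Form (2): survival function 1 - exp(-beta * g2(u))."""
    return 1.0 - math.exp(-beta * g2(u))

# g1(u) = u gives the exponential; g1(u) = u**2 a Rayleigh-type member;
# g2(u) = 1/u gives the inverse exponential member.
print(sf_EF(1.0, 2.0, lambda u: u))          # exponential member
print(sf_EF(1.5, 2.0, lambda u: u ** 2))     # Rayleigh-type member
print(sf_IEF(2.0, 3.0, lambda u: 1.0 / u))   # inverse exponential member
```

The same two functions cover every member of the families; only the plug-in g changes.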

In the area of stress-strength models, there has been a large amount of work on the estimation of the reliability parameter, R = P(Y < X), when X and Y are independent random variables belonging to the same univariate family; see, for example, Mokhlis, Kundu and Gupta, Singh et al., and others. Recently, Mokhlis et al. discussed R when the variables are independent with survival functions of forms (1) and (2), respectively. Many real situations, however, entail that X and Y are related in some way. Accordingly, some authors have studied the stress-strength reliability parameter, R, for some specified bivariate distributions; see, for example, Kotz et al., Mokhlis, Nadarajah and Kotz, Nguimkeu et al., Pak et al., and Abdel-Hamid.

There are many methods in the literature for obtaining bivariate distributions. Some of the popular ones are the copula type, the bivariate pseudo type, and the Marshall-Olkin type. Recently, many attempts at obtaining generalized bivariate distributions using these types have been presented in the literature, among them Kolesarova et al., Arnold and Arvanitis, El-Bassiouny et al., and Sarhan.

In the present paper, we introduce two bivariate models of distributions of the Marshall-Olkin type. We call these models the bivariate exponential and bivariate inverse exponential models. Some properties of the proposed models are discussed. Many bivariate distributions in the literature can be considered as special cases or members of our models, for example, the Marshall and Olkin (M-O) bivariate exponential distribution and the M-O bivariate Weibull distribution introduced by Marshall and Olkin, and the bivariate Rayleigh distribution introduced by Pak et al.

An explicit expression of the stress-strength parameter R is obtained, showing that it is not a function of the parameter c (which could be a vector). The maximum likelihood estimator of R is obtained, as well as simple closed-form estimators depending on the marginal distribution of X and the distribution of min{X, Y}, or on the marginal distribution of Y and the distribution of max{X, Y}. Since many bivariate distributions in the literature belong to the proposed families, the results obtained are applicable to a variety of bivariate distributions.

The remaining part of the paper is organized as follows: In the “Proposed families of bivariate distributions” section, we introduce two new families (models) of bivariate distributions. Some characterizations of the proposed models, such as the marginals and the distributions of min{X, Y} and max{X, Y}, are also discussed. The stress-strength reliability parameter, R, concerning the new models is considered in the “Stress-strength reliability” section. In the “Point estimation of R” section, we obtain maximum likelihood estimators of R as well as simple estimators of R depending on some marginal distributions in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R is also obtained. Some bivariate members of the proposed families are presented in the “Special cases” section. In the “Numerical illustrations” section, a numerical illustration using some well-known distributions is performed to highlight the theoretical results, and an application to a real data example is introduced. Finally, conclusions are presented in the “Conclusions” section.

## Proposed families of bivariate distributions

In this section, we introduce two new families of bivariate distributions whose marginals have distributions of forms (1) or (2). We apply a technique similar to that proposed by Marshall and Olkin for obtaining these families.

### The construction of the families (models)

Suppose that a system consists of two subsystems, say A and B. Subsystem A contains two components, say A1 and C, connected in series (parallel), with lifetimes U1 and U0, respectively. Subsystem B contains two components, say B1 and C, connected in series (parallel), where the lifetime of component B1 is U2.

Suppose that Ui, i = 0, 1, 2, are independent random variables following EF(θi; c), i = 0, 1, 2, for the series case and IEF(βi; c), i = 0, 1, 2, for the parallel case, i.e.,

$${\overline{F}}_{U_i}(u)=\left\{\begin{array}{c}{\overline{F}}_{\mathrm{EF}}\left(u;{\theta}_i,c\right)={e}^{-{\theta}_i{g}_1\left(u;c\right)},i=0,1,2,\mathrm{for}\ \mathrm{the}\ \mathrm{series}\ \mathrm{case},\\ {}{\overline{F}}_{\mathrm{IEF}}\left(u;{\beta}_i,c\right)=1-{e}^{-{\beta}_i{g}_2\left(u;c\right)},i=0,1,2,\mathrm{for}\ \mathrm{the}\ \mathrm{parallel}\ \mathrm{case}.\end{array}\right.$$
(3)

If X and Y are the lifetimes of the two subsystems A and B, respectively, then X = min {U0, U1} and Y = min {U0, U2} for the series case, while X = max {U0, U1} and Y = max {U0, U2} for the parallel case.

#### Stress model

Consider a two-component system subject to three independent stresses, say U0, U1, and U2. Each component is subject to an individual stress, U1 and U2, respectively, while U0 is an overall stress transmitted to both components equally. Then,

1. The observed stresses on the two components are X = max {U0, U1} and Y = max {U0, U2}, respectively.

2. If the stresses are always fatal, then the lifetimes of the two components are X = min {U0, U1} and Y = min {U0, U2}.

Observe that in both models there is the possibility of having X = Y; thus, each model has both an absolutely continuous part and a singular part, similar to the M-O bivariate exponential model.
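The min/max construction above translates directly into a sampler. A minimal sketch for the series case, assuming the exponential member g1(u; c) = u so that EF(θ; c) is the exponential distribution with rate θ (the parameter values are arbitrary illustrations):

```python
import random

def rbef(theta0, theta1, theta2, rng):
    """One draw of (X, Y) from the series (BEF) construction, g1(u) = u."""
    u0 = rng.expovariate(theta0)   # lifetime of the shared component C
    u1 = rng.expovariate(theta1)
    u2 = rng.expovariate(theta2)
    return min(u0, u1), min(u0, u2)

rng = random.Random(7)
pairs = [rbef(0.5, 1.0, 1.5, rng) for _ in range(1000)]
ties = sum(1 for x, y in pairs if x == y)
print(ties)   # X equals Y exactly whenever U0 is the smallest of the three
```

Replacing min with max (and the EF draws with IEF draws) gives the parallel construction.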

Theorems 1–3 present the survival functions and the probability density functions of the proposed bivariate families.

Theorem 1 Suppose Ui, i = 0, 1, 2, are independent random variables following EF(θi; c), i = 0, 1, 2, and let X = min {U0, U1} and Y = min {U0, U2}; then, the bivariate vector (X, Y) has the survival function

$${\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\exp \left\{-{\theta}_1{g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)-{\theta}_0{g}_1\left(\max \left\{x,y\right\};c\right)\right\}.$$
(4)

Proof Obviously, from $${\overline{F}}_{X,Y}\left(x,y\right)=P\left(X>x,Y>y\right)$$, we can write $${\overline{F}}_{\mathrm{BEF}}\left(x,y\right)$$ as

$$P\left(\min \left\{{U}_0,{U}_1\right\}>x,\min \left\{{U}_0,{U}_2\right\}>y\right)=P\left({U}_1>x,{U}_2>y,{U}_0>\min \left(x,y\right)\right).$$

Since the Ui are independent random variables following EF(θi; c), i = 0, 1, 2, (4) holds.

We will denote the bivariate distribution with survival function of the form (4) by BEF(θ0, θ1, θ2; c). Clearly, X and Y are independent if and only if (iff) θ0 = 0. The joint survival function can also be written as

$${\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\left\{\begin{array}{c}\exp \left\{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)\right\},\kern0.75em \mathrm{if}\ x\ge y\\ {}\exp \left\{-{\theta}_1{g}_1\left(x;c\right)-\left({\theta}_0+{\theta}_2\right){g}_1\left(y;c\right)\right\},\kern0.5em \mathrm{if}\ y>x\ \end{array}\right.$$

Theorem 2 Suppose Ui, i = 0, 1, 2, are independent random variables following IEF(βi; c), i = 0, 1, 2, and let X = max {U0, U1} and Y = max {U0, U2}; then, the bivariate vector (X, Y) has the cumulative distribution function

$${F}_{\mathrm{BIEF}}\left(x,y\right)=\exp \left\{-{\beta}_1{g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)-{\beta}_0{g}_2\left(\min \left\{x,y\right\};c\right)\right\}.$$
(5)

Proof Similarly to Theorem 1, using $${F}_{X,Y}\left(x,y\right)=P\left(X\le x,Y\le y\right)$$, we can show that (5) holds.

We will denote the bivariate distribution with cumulative distribution function of the form (5) by BIEF(β0, β1, β2; c). Clearly, X and Y are independent iff β0 = 0. The joint cumulative distribution function can also be written as

$${F}_{\mathrm{BIEF}}\left(x,y\right)=\left\{\begin{array}{c}\exp \left\{-{\beta}_1{g}_2\left(x;c\right)-\left({\beta}_0+{\beta}_2\right){g}_2\left(y;c\right)\right\},\mathrm{if}\ x\ge y\\ {}\exp \left\{-\left({\beta}_0+{\beta}_1\right){g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)\right\},\mathrm{if}\ y>x\ \end{array}\right.$$

Theorem 3 If the vector (X, Y) has either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), then the joint pdf is given by

$${f}_{X,Y}\left(x,y\right)=\left\{\begin{array}{c}{f}_1\left(x,y\right),\mathrm{if}\ x>y\\ {}{f}_2\left(x,y\right),\mathrm{if}\ x<y\\ {}{f}_0(x),\mathrm{if}\ x=y\end{array}\right.$$
(6)

where $$\begin{array}{c}{f}_1\left(x,y\right)=\left\{\begin{array}{c}{\theta}_2\left({\theta}_0+{\theta}_1\right){g}_1^{\prime}\left(x;c\right){g}_1^{\prime}\left(y;c\right)\ {\mathrm{e}}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}{\beta}_1\left({\beta}_0+{\beta}_2\right){g}_2^{\prime}\left(x;c\right){g}_2^{\prime}\left(y;c\right){e}^{-{\beta}_1{g}_2\left(x;c\right)-\left({\beta}_0+{\beta}_2\right){g}_2\left(y;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\beta}_0,{\beta}_1,{\beta}_2;c\right)\end{array}\right.\\ {}{f}_2\left(x,y\right)=\left\{\begin{array}{c}{\theta}_1\left({\theta}_0+{\theta}_2\right){g}_1^{\prime}\left(x;c\right){g}_1^{\prime}\left(y;c\right)\ {\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)-\left({\theta}_0+{\theta}_2\right){g}_1\left(y;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}{\beta}_2\left({\beta}_0+{\beta}_1\right){g}_2^{\prime}\left(x;c\right){g}_2^{\prime}\left(y;c\right){e}^{-\left({\beta}_0+{\beta}_1\right){g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\beta}_0,{\beta}_1,{\beta}_2;c\right)\end{array}\right.\end{array}$$

and

$${f}_0(x)=\left\{\begin{array}{c}{\theta}_0{g}_1^{\prime}\left(x;c\right){\mathrm{e}}^{-\theta {g}_1\left(x;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}-{\beta}_0{g}_2^{\prime}\left(x;c\right){e}^{-\beta {g}_2\left(x;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\beta}_0,{\beta}_1,{\beta}_2;c\right)\end{array}\right.$$

where θ = θ0 + θ1 + θ2, β = β0 + β1 + β2, and $${g}_i^{\prime}\left(t;c\right),i=1,2,$$ denotes the first derivative of gi(t; c) with respect to t.

Proof Clearly, for the two models, f1(x, y) and f2(x, y) can be easily obtained by using $$\frac{\partial^2{\overline{F}}_{X,Y}\left(x,y\right)}{\partial x\partial y}$$ or $$\frac{\partial^2{F}_{X,Y}\left(x,y\right)}{\partial x\partial y}$$ for x > y and y > x respectively. For f0(x), we use the relation

$${\int}_0^{\infty }{\int}_0^x{f}_1\left(x,y\right) dydx+{\int}_0^{\infty }{\int}_0^y{f}_2\left(x,y\right) dx dy+{\int}_0^{\infty }{f}_0(x) dx=1$$. So, for the BEF, we have

$${\int}_0^{\infty }{\int}_0^x{f}_1\left(x,y\right) dydx=1-\left({\theta}_0+{\theta}_1\right){\int}_0^{\infty }{g}_1^{\prime}\left(t;c\right){\mathrm{e}}^{-\theta {g}_1\left(t;c\right)} dt$$

and

$${\int}_0^{\infty }{\int}_0^y{f}_2\left(x,y\right) dxdy=1-\left({\theta}_0+{\theta}_2\right){\int}_0^{\infty }{g}_1^{\prime}\left(t;c\right){\mathrm{e}}^{-\theta {g}_1\left(t;c\right)} dt,$$

Thus,

$${\int}_0^{\infty }{f}_0(x) dx=1-\left[2-\left({\theta}_0+\theta \right){\int}_0^{\infty }{g}_1^{\prime}\left(t;c\right){\mathrm{e}}^{-\theta {g}_1\left(t;c\right)} dt\right]={\theta}_0{\int}_0^{\infty }{g}_1^{\prime}\left(t;c\right){\mathrm{e}}^{-\theta {g}_1\left(t;c\right)} dt,$$ since $${\int}_0^{\infty }{g}_1^{\prime}\left(t;c\right){\mathrm{e}}^{-\theta {g}_1\left(t;c\right)} dt=\frac{1}{\theta }$$ by the limit conditions on g1.

Similarly, for the BIEF, we have $${\int}_0^{\infty }{f}_0(x) dx=1+\left({\beta}_1+{\beta}_2\right){\int}_0^{\infty }{g}_2^{\prime}\left(t;c\right){\mathrm{e}}^{-\beta {g}_2\left(t;c\right)} dt=-{\beta}_0{\int}_0^{\infty }{g}_2^{\prime}\left(t;c\right){\mathrm{e}}^{-\beta {g}_2\left(t;c\right)} dt$$, since $${\int}_0^{\infty }{g}_2^{\prime}\left(t;c\right){\mathrm{e}}^{-\beta {g}_2\left(t;c\right)} dt=-\frac{1}{\beta }$$.

Hence, the proof is complete.

Notice that both distributions BEF(θ0, θ1, θ2; c) and BIEF(β0, β1, β2; c) are singular on the line X = Y, since P(X = Y) ≠ 0. Thus, the two models have a singular part and an absolutely continuous part, similar to Marshall and Olkin’s model. The following theorem provides explicitly the absolutely continuous part and the singular part of the BEF and BIEF.
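The mass on the singular line can be checked by simulation. A sketch for the exponential member g1(u; c) = u, for which the decomposition in the theorem below assigns the singular part the weight θ0/θ, so P(X = Y) = θ0/θ (the θ values are arbitrary illustrations):

```python
import random

# Illustrative check that P(X = Y) = theta0/theta for g1(u) = u.
theta0, theta1, theta2 = 0.15, 0.2, 0.5
theta = theta0 + theta1 + theta2
rng = random.Random(42)
n, ties = 100_000, 0
for _ in range(n):
    u0 = rng.expovariate(theta0)
    u1 = rng.expovariate(theta1)
    u2 = rng.expovariate(theta2)
    if min(u0, u1) == min(u0, u2):   # both minima equal the common shock u0
        ties += 1
print(ties / n, theta0 / theta)      # empirical vs. theoretical mass
```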

Theorem 4 If the vector (X, Y) has BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), then

(i) The survival function for the BEF is

$${\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\frac{\theta_1+{\theta}_2}{\theta }{\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)+\frac{\theta_0}{\theta }{\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right),$$
(7)

where θ = θ0 + θ1 + θ2, $${\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right)={\mathrm{e}}^{-\theta {g}_1\left(\max \left\{x,y\right\};c\right)}$$ is the singular part, and $${\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)=\frac{\theta }{\theta_1+{\theta}_2}{\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)-{\theta}_2{g}_1\left(y;c\right)-{\theta}_0{g}_1\left(\max \left\{x,y\right\};c\right)}-\frac{\theta_0}{\theta_1+{\theta}_2}{\mathrm{e}}^{-\theta {g}_1\left(\max \left\{x,y\right\};c\right)}$$ is the absolutely continuous part.

(ii) The cumulative distribution function for the BIEF is

$${F}_{\mathrm{BIEF}}\left(x,y\right)=\frac{\beta_1+{\beta}_2}{\beta }{F}_{\mathrm{BIEF}(a)}\left(x,y\right)+\frac{\beta_0}{\beta }{F}_{\mathrm{BIEF}(s)}\left(x,y\right),$$
(8)

where β = β0 + β1 + β2, $${F}_{\mathrm{BIEF}(s)}\left(x,y\right)={\mathrm{e}}^{-\beta {g}_2\left(\min \left\{x,y\right\};c\right)}$$ is the singular part, and $${F}_{\mathrm{BIEF}(a)}\left(x,y\right)=\frac{\beta }{\beta_1+{\beta}_2}{\mathrm{e}}^{-{\beta}_1{g}_2\left(x;c\right)-{\beta}_2{g}_2\left(y;c\right)-{\beta}_0{g}_2\left(\min \left\{x,y\right\};c\right)}-\frac{\beta_0}{\beta_1+{\beta}_2}{\mathrm{e}}^{-\beta {g}_2\left(\min \left\{x,y\right\};c\right)}$$ is the absolutely continuous part.

Proof (i) For the BEF, using the fact that $${\overline{F}}_{\mathrm{BEF}}\left(x,y\right)=\alpha {\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)+\left(1-\alpha \right){\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right)$$, 0 ≤ α ≤ 1, we have

$$\frac{\partial^2{\overline{F}}_{\mathrm{BEF}}\left(x,y\right)}{\partial x\partial y}=\alpha {f}_{\mathrm{BEF}(a)}\left(x,y\right)=\left\{\begin{array}{c}{f}_{\mathrm{EF}}\left(x;{\theta}_0+{\theta}_1,c\right){f}_{\mathrm{EF}}\left(y;{\theta}_2,c\right),\mathrm{if}\ x>y\\ {}{f}_{\mathrm{EF}}\left(x;{\theta}_1,c\right){f}_{\mathrm{EF}}\left(y;{\theta}_0+{\theta}_2,c\right),\mathrm{if}\ x<y\end{array}\right.$$

Hence α may be obtained as

$$\alpha =\underset{0}{\overset{\infty }{\int }}\underset{0}{\overset{x}{\int }}{f}_{\mathrm{EF}}\left(x;{\theta}_0+{\theta}_1,c\right){f}_{\mathrm{EF}}\left(y;{\theta}_2,c\right) dydx+\underset{0}{\overset{\infty }{\int }}\underset{0}{\overset{y}{\int }}{f}_{\mathrm{EF}}\left(x;{\theta}_1,c\right){f}_{\mathrm{EF}}\left(y;{\theta}_0+{\theta}_2,c\right) dxdy=\frac{\theta_1+{\theta}_2}{\theta },$$

and $${\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)=\underset{y}{\overset{\infty }{\int }}\underset{x}{\overset{\infty }{\int }}{f}_{\mathrm{BEF}(a)}\left(u,v\right) dudv$$; hence, with α and $${\overline{F}}_{\mathrm{BEF}(a)}\left(x,y\right)$$ known, the singular part $${\overline{F}}_{\mathrm{BEF}(s)}\left(x,y\right)$$ can be obtained by subtraction.

(ii) Similarly, for the BIEF, FBIEF(a)(x, y) is computed by using FBIEF(x, y) = γFBIEF(a)(x, y) + (1 − γ)FBIEF(s)(x, y), 0 ≤ γ ≤ 1. Proceeding as in part (i), we can show that (8) holds.

The marginal distributions of X and Y and the conditional distributions are given by Theorems 5 and 6, while the distributions of min{X, Y}, for the BEF, and max{X, Y}, for the BIEF, are given by Theorem 7.

Theorem 5 If the vector (X, Y) has either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), then the marginal distributions of X and Y are EF(θ0 + θi; c) or IEF(β0 + βi; c), i = 1, 2, respectively.

Proof If (X, Y) has BEF(θ0, θ1, θ2; c), then from (6) we have

$${f}_X(x)=\underset{0}{\overset{x}{\int }}{f}_1\left(x,y\right) dy+\underset{x}{\overset{\infty }{\int }}{f}_2\left(x,y\right) dy+{f}_0(x)=\left({\theta}_0+{\theta}_1\right){g}_1^{\prime}\left(x;c\right)\ {\mathrm{e}}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)}.$$

Similarly, we can derive fY(y). In a similar manner, for the BIEF, fX(x) and fY(y) can be shown to follow IEF(β0 + βi; c), i = 1, 2, respectively.

Notice that the marginal distributions of X and Y can also be obtained using the next lemma.

Lemma 1

(i) Let X = min {U0, U1}; then X ∼ EF(θ0 + θ1; c) iff U0 and U1 are independent with U0 ∼ EF(θ0; c) and U1 ∼ EF(θ1; c).

(ii) Let X = max {U0, U1}; then X ∼ IEF(β0 + β1; c) iff U0 and U1 are independent with U0 ∼ IEF(β0; c) and U1 ∼ IEF(β1; c).

Here “∼” means “follows” or “has the distribution.”

Proof (i) For X = min {U0, U1}, we have

$$P\left(X>x\right)=P\left(\min \left\{{U}_0,{U}_1\right\}>x\right)=P\left({U}_0>x,{U}_1>x\right).$$

If U0 and U1 are independent with U0 ∼ EF(θ0; c) and U1 ∼ EF(θ1; c), then

$$P\left(X>x\right)=P\left({U}_0>x\right)P\left({U}_1>x\right)={e}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)}.$$

Conversely, if X ∼ EF(θ0 + θ1; c), then

$$P\left(X>x\right)={e}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)}={e}^{-{\theta}_0{g}_1\left(x;c\right)}{e}^{-{\theta}_1{g}_1\left(x;c\right)}.$$

Then, U0 and U1 are independent with $${\overline{F}}_{U_0}(x)={e}^{-{\theta}_0{g}_1\left(x;c\right)}$$ and $${\overline{F}}_{U_1}(x)={e}^{-{\theta}_1{g}_1\left(x;c\right)},$$ i.e., U0 ∼ EF(θ0; c) and U1 ∼ EF(θ1; c).

(ii) Similarly for the BIEF.

Consequently, from Theorems 1 and 2 and Lemma 1, we have the following lemma.

Lemma 2

(i) (X, Y) ∼ BEF(θ0, θ1, θ2; c) iff there exist independent EF random variables Ui, i = 0, 1, 2, such that X = min {U0, U1} and Y = min {U0, U2}.

(ii) (X, Y) ∼ BIEF(β0, β1, β2; c) iff there exist independent IEF random variables Ui, i = 0, 1, 2, such that X = max {U0, U1} and Y = max {U0, U2}.

Theorem 6 The conditional distribution of X given Y = y is given by

$${f}_{X\mid Y}\left(x|y\right)=\left\{\begin{array}{c}\frac{\theta_2\left({\theta}_0+{\theta}_1\right)}{\theta_0+{\theta}_2}\ {g}_1^{\prime}\left(x;c\right)\ {\mathrm{e}}^{-\left({\theta}_0+{\theta}_1\right){g}_1\left(x;c\right)+{\theta}_0{g}_1\left(y;c\right)},\mathrm{if}\ x>y\\ {}{\theta}_1{g}_1^{\prime}\left(x;c\right)\ {\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)},\mathrm{if}\ x<y\\ {}\frac{\theta_0}{\theta_0+{\theta}_2}\ {\mathrm{e}}^{-{\theta}_1{g}_1\left(x;c\right)},\mathrm{if}\ x=y\end{array}\right.$$
(9)

for the BEF, while for the BIEF it is given by

$${f}_{X\mid Y}\left(x|y\right)=\left\{\begin{array}{c}-{\beta}_1{g}_2^{\prime}\left(x;c\right){e}^{-{\beta}_1{g}_2\left(x;c\right)},\mathrm{if}\ x>y\\ {}\frac{-{\beta}_2\left({\beta}_0+{\beta}_1\right)}{\ \left({\beta}_0+{\beta}_2\right)}{g}_2^{\prime}\left(x;c\right){e}^{-\left({\beta}_0+{\beta}_1\right){g}_2\left(x;c\right)+{\beta}_0{g}_2\left(y;c\right)},\mathrm{if}\ x<y\\ {}\frac{\beta_0}{\beta_0+{\beta}_2}\ {\mathrm{e}}^{-{\beta}_1{g}_2\left(x;c\right)},\mathrm{if}\ x=y\end{array}\right.$$
(10)

Proof The proof is straightforward and is therefore omitted.

Theorem 7 If (X, Y) is a bivariate vector of continuous random variables, then

(i) min{X, Y} ∼ EF(θ; c), if (X, Y) ∼ BEF(θ0, θ1, θ2; c),

(ii)
$$\max \left\{X,Y\right\}\sim \mathrm{IEF}\left(\beta; c\right),\mathrm{if}\ \left(X,Y\right)\sim \mathrm{BIEF}\left({\beta}_0,{\beta}_1,{\beta}_2;c\right).$$

Proof (i) If (X, Y) ∼ BEF(θ0, θ1, θ2; c), then using (4), we have

$$P\left(\min \left\{X,Y\right\}>t\right)=P\left(X>t,Y>t\right)={e}^{-{\theta}_1{g}_1\left(t;c\right)-{\theta}_2{g}_1\left(t;c\right)-{\theta}_0{g}_1\left(t;c\right)}={e}^{-\theta {g}_1\left(t;c\right)}.$$

Similarly, by using (5) for the BIEF, we can show that max{X, Y} ∼ IEF(β; c).

## Stress-strength reliability

In this section, we present the stress-strength reliability of the two bivariate models. Many bivariate distributions in the literature are members of the proposed models, for example, the M-O bivariate exponential distribution of Marshall and Olkin and the bivariate Rayleigh distribution introduced by Pak et al. for the BEF, and the bivariate inverse Weibull and bivariate Burr type III distributions for the BIEF. So the following theorem can be applied to many distributions possessing the BEF or BIEF form.

Theorem 8 Let (X, Y) be a bivariate vector. Then, the stress-strength reliability function, R, is given by

(i)
$$R=P\left(Y<X\right)=\frac{\theta_2}{\theta },$$
(11)

iff (X, Y) ∼ BEF(θ0, θ1, θ2; c), where θ = θ0 + θ1 + θ2.

(ii)
$$R=P\left(Y<X\right)=\frac{\beta_1}{\beta },$$
(12)

iff (X, Y) ∼ BIEF(β0, β1, β2; c), where β = β0 + β1 + β2.

Proof (i) First, suppose that (X, Y) ∼ BEF(θ0, θ1, θ2; c); then, using (6),

$$R={\int}_0^{\infty }{\int}_0^x{f}_1\left(x,y\right) dydx=\frac{\theta_2}{\theta }.$$

Conversely, suppose that Eq. (11) holds. From Mokhlis et al. (Theorem 1), since $$R=\frac{\theta_2}{\theta_0+{\theta}_1+{\theta}_2}$$, we have two independent random variables, say X and U2, where X ∼ EF(θ0 + θ1; c) and U2 ∼ EF(θ2; c).

From Lemma 1, since X ∼ EF(θ0 + θ1; c), then X = min {U0, U1}, where U0 ∼ EF(θ0; c) and U1 ∼ EF(θ1; c). Then,

$$P\left({U}_2<x\right)=P\left({U}_2<\min \left\{{U}_0,{U}_1\right\}\right)\equiv P\left(\min \left\{{U}_0,{U}_2\right\}<\min \left\{{U}_0,{U}_1\right\}\right).$$

Let Y = min {U0, U2}. Thus, using Lemma 2, the proof is complete.

(ii) Similarly, suppose that (X, Y) ∼ BIEF(β0, β1, β2; c); then, using (6),

$$R=P\left(Y<X\right)=\frac{\beta_1}{\beta }.$$

Conversely, suppose that Eq. (12) holds. From Mokhlis et al. (Theorem 2), since $$R=\frac{\beta_1}{\beta_0+{\beta}_1+{\beta}_2}$$, we have two independent random variables, say U1 and Y, where U1 must be distributed as IEF(β1; c) and Y must be distributed as IEF(β0 + β2; c). From Lemma 1, since Y = max {U0, U2}, then U0 ∼ IEF(β0; c) and U2 ∼ IEF(β2; c). Thus, we have

$$R=P\left(\max \left\{{U}_0,{U}_2\right\}<{U}_1\right)\equiv P\left(\max \left\{{U}_0,{U}_2\right\}<\max \left\{{U}_0,{U}_1\right\}\right).$$

Let X = max {U0, U1}. Using Lemma 2, the proof is complete.
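Both parts of Theorem 8 are easy to probe by Monte Carlo. A sketch of part (ii), assuming the inverse-exponential member g2(u; c) = 1/u, so that IEF(β; c) has cdf e^(−β/u) and can be sampled by inverse transform (the β values are arbitrary illustrations):

```python
import math
import random

def r_ief(beta, rng):
    """Inverse-transform draw from F(u) = exp(-beta/u), u > 0."""
    return -beta / math.log(rng.random())

beta0, beta1, beta2 = 0.25, 1.0, 0.35   # illustrative values
beta = beta0 + beta1 + beta2
rng = random.Random(3)
n, wins = 200_000, 0
for _ in range(n):
    u0 = r_ief(beta0, rng)
    u1 = r_ief(beta1, rng)
    u2 = r_ief(beta2, rng)
    if max(u0, u2) < max(u0, u1):       # the event {Y < X}
        wins += 1
print(wins / n, beta1 / beta)           # empirical vs. theoretical R
```

An analogous check with min in place of max and exponential draws verifies part (i).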

## Point estimation of R

Let (X1, Y1), …, (Xn, Yn) be a random sample of size n from either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), assuming c is known. Let n1 be the number of observations with yi > xi, n2 the number with yi < xi, and n0 the number with yi = xi, so that n = n0 + n1 + n2. Then, the non-parametric estimator of R is given by $$\check{R}=\frac{n_2}{n}$$, where n2 follows a binomial(n, R) distribution. Thus, $$E\left(\check{R}\right)=R$$ and $$V\left(\check{R}\right)=\frac{R\left(1-R\right)}{n}.$$

### Maximum likelihood estimators of R

Let (X1, Y1), …, (Xn, Yn) be a random sample of size n from either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), then the maximum likelihood estimator (MLE), $$\hat{R},$$ of R is given by

$$\hat{R}=\left\{\begin{array}{c}\frac{{\hat{\theta}}_2}{{\hat{\theta}}_0+{\hat{\theta}}_1+{\hat{\theta}}_2},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}\frac{{\hat{\beta}}_1}{{\hat{\beta}}_0+{\hat{\beta}}_1+{\hat{\beta}}_2},\mathrm{for}\ \mathrm{BIEF}\left({\upbeta}_0,{\upbeta}_1,{\upbeta}_2;\mathrm{c}\right)\ \end{array}\right.$$
(13)

where $${\hat{\theta}}_i$$, $${\hat{\beta}}_i$$ are the maximum likelihood estimators of θi, βi, i = 0, 1, 2, respectively.

First, suppose that (X1, Y1), …, (Xn, Yn) is a random sample of size n from BEF(θ0, θ1, θ2; c); then, the MLE’s $${\hat{\theta}}_i$$ of θi, i = 0, 1, 2, can be obtained by writing the log-likelihood function $$\begin{array}{c}\log L={\sum}_{i=0}^2{n}_i\log {\theta}_i+{\sum}_{i=1}^2{n}_i\log \left({\theta}_0+{\theta}_{3-i}\right)+{\sum}_{i=1}^n\log {g}_1^{\prime}\left({x}_i;c\right)+{\sum}_{i=1,{x}_i\ne {y}_i}^n\log {g}_1^{\prime}\left({y}_i;c\right)\\ {}-{\theta}_1\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)-{\theta}_2\sum \limits_{i=1}^n{g}_1\left({y}_i;c\right)-{\theta}_0\sum \limits_{i=1}^n{g}_1\left(\max \left\{{x}_i,{y}_i\right\};c\right)\end{array}$$ and solving the likelihood system of equations w.r.t. θi, i = 0, 1, 2:

$$\begin{array}{c}\frac{n_1}{\theta_1}+\frac{n_2}{\theta_0+{\theta}_1}-{\sum}_{i=1}^n{g}_1\left({x}_i;c\right)=0,\\ {}\frac{n_2}{\theta_2}+\frac{n_1}{\theta_0+{\theta}_2}-{\sum}_{i=1}^n{g}_1\left({y}_i;c\right)=0,\\ {}\frac{n_0}{\theta_0}+\frac{n_1}{\theta_0+{\theta}_2}+\frac{n_2}{\theta_0+{\theta}_1}-{\sum}_{i=1}^n{g}_1\left(\max \left\{{x}_i,{y}_i\right\};c\right)=0.\end{array}$$
(14)

Similarly, for the BIEF(β0, β1, β2; c), the MLE’s $${\hat{\beta}}_i$$ of βi, i = 0, 1, 2, can be obtained by writing the log-likelihood function $$\begin{array}{c}\log L={n}_0\log {\beta}_0+{\sum}_{i=1}^2{n}_i\log \left({\beta}_{3-i}\right)+{\sum}_{i=1}^2{n}_i\log \left({\beta}_0+{\beta}_i\right)+{\sum}_{i=1}^n\log \left(-{g}_2^{\prime}\left({x}_i;c\right)\right)\\ {}+{\sum}_{i=1,{x}_i\ne {y}_i}^n\log \left(-{g}_2^{\prime}\left({y}_i;c\right)\right)-{\beta}_1{\sum}_{i=1}^n{g}_2\left({x}_i;c\right)-{\beta}_2{\sum}_{i=1}^n{g}_2\left({y}_i;c\right)-{\beta}_0{\sum}_{i=1}^n{g}_2\left(\min \left\{{x}_i,{y}_i\right\};c\right),\end{array}$$

and solving the likelihood system of equations w.r.t. βi, i = 0, 1, 2.

$$\begin{array}{c}\frac{n_2}{\beta_1}+\frac{n_1}{\beta_0+{\beta}_1}-\sum \limits_{i=1}^n{g}_2\left({x}_i;c\right)=0,\\ {}\frac{n_1}{\beta_2}+\frac{n_2}{\beta_0+{\beta}_2}-\sum \limits_{i=1}^n{g}_2\left({y}_i;c\right)=0,\\ {}\frac{n_0}{\beta_0}+\frac{n_1}{\beta_0+{\beta}_1}+\frac{n_2}{\beta_0+{\beta}_2}-\sum \limits_{i=1}^n{g}_2\left(\min \left\{{x}_i,{y}_i\right\};c\right)=0.\end{array}$$
(15)

However, the preceding likelihood systems of equations generated by either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c) are computationally inconvenient and can be solved numerically using a Newton-Raphson procedure or Fisher’s method of scoring.
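As one concrete route, the system (14) can be solved with a small hand-rolled Newton-Raphson iteration. The sketch below is an illustration under assumptions: it fixes the exponential member g1(u; c) = u, simulates its own data, and uses closed-form moment-style starting values; none of this is prescribed by the text.

```python
import random

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def mle_bef(data, iters=50):
    """Newton-Raphson on the likelihood system (14) with g1(u) = u."""
    n = len(data)
    n1 = sum(1 for x, y in data if y > x)
    n2 = sum(1 for x, y in data if y < x)
    n0 = n - n1 - n2
    Sx = sum(x for x, _ in data)
    Sy = sum(y for _, y in data)
    Sm = sum(max(x, y) for x, y in data)
    th = n / sum(min(x, y) for x, y in data)   # estimate of theta0+theta1+theta2
    t1 = max(th - n / Sy, 1e-3)                # closed-form starting values
    t2 = max(th - n / Sx, 1e-3)
    t0 = max(th - t1 - t2, 1e-3)
    for _ in range(iters):
        F = [n1 / t1 + n2 / (t0 + t1) - Sx,    # the three equations of (14)
             n2 / t2 + n1 / (t0 + t2) - Sy,
             n0 / t0 + n1 / (t0 + t2) + n2 / (t0 + t1) - Sm]
        a = n2 / (t0 + t1) ** 2                # repeated Hessian entries
        b = n1 / (t0 + t2) ** 2
        J = [[-a, -n1 / t1 ** 2 - a, 0.0],     # Jacobian w.r.t. (t0, t1, t2)
             [-b, 0.0, -n2 / t2 ** 2 - b],
             [-n0 / t0 ** 2 - a - b, -a, -b]]
        s = solve3(J, F)
        t0, t1, t2 = (max(t - d, 1e-8) for t, d in zip((t0, t1, t2), s))
    return t0, t1, t2

rng = random.Random(5)
data = []
for _ in range(2000):                          # simulate BEF(0.15, 0.2, 0.5)
    u0, u1, u2 = (rng.expovariate(t) for t in (0.15, 0.2, 0.5))
    data.append((min(u0, u1), min(u0, u2)))
t0, t1, t2 = mle_bef(data)
print(t0, t1, t2, t2 / (t0 + t1 + t2))
```

Since the log-likelihood is concave in (θ0, θ1, θ2), Newton's method converges quickly from any reasonable starting point.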

Now, we introduce a simple estimator of R, depending on the marginal distributions of X and min{X, Y} for the BEF and depending on the marginal distributions of Y and max{X, Y} for the BIEF.

Let (X1, Y1), …, (Xn, Yn) be a random sample of size n from either BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c) then a simple estimator, $$\overset{\sim }{R}$$, of R is given by

$$\overset{\sim }{R}=\left\{\begin{array}{c}1-\frac{\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}{\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)},\mathrm{for}\ \mathrm{BEF}\left({\theta}_0,{\theta}_1,{\theta}_2;c\right)\\ {}1-\frac{\sum \limits_{i=1}^n{\mathrm{g}}_2\left(\max \left\{{x}_i,{y}_i\right\};c\right)}{\sum \limits_{i=1}^n{g}_2\left({y}_i;c\right)},\mathrm{for}\ \mathrm{BIEF}\left({\beta}_0,{\beta}_1,{\beta}_2;c\right)\end{array}\right.$$
(16)

For the BEF, we have X ∼ EF(θ0 + θ1; c); thus, the MLE of (θ0 + θ1) is given by (see Mokhlis et al.)

$$\hat{\left({\theta}_0+{\theta}_1\right)}=\frac{n}{\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)}.$$
(17)

Similarly, since min{X, Y} ∼ EF(θ; c), the MLE of θ is given by

$$\hat{\theta}=\frac{n}{\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}.$$
(18)

Thus,

$${\hat{\theta}}_2=\frac{n}{\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}-\frac{n}{\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)}.$$
(19)

Replacing the parameters in (11) by their estimators in (18) and (19), we get the simple estimator of R for the BEF.

Similarly, for the BIEF model, Y ∼ IEF(β0 + β2; c); thus, the MLE of (β0 + β2) is given by (see Mokhlis et al.)

$$\left(\hat{\beta_0+{\beta}_2}\right)=\frac{n}{\sum \limits_{i=1}^n{g}_2\left({y}_i;c\right)},$$
(20)

and since max{X, Y} ∼ IEF(β; c), the MLE of β is given by

$$\hat{\beta}=\frac{n}{\sum \limits_{i=1}^n{g}_2\left(\max \left\{{x}_i,{y}_i\right\};c\right)}$$
(21)

Thus,

$${\hat{\beta}}_1=\frac{n}{\sum \limits_{i=1}^n{g}_2\left(\max \left\{{x}_i,{y}_i\right\};c\right)}-\frac{n}{\sum \limits_{i=1}^n{g}_2\left({y}_i;c\right)}.$$
(22)

Again, replacing the parameters in (12) by their estimators given by (21) and (22), we obtain the simple estimator of R for the BIEF.
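The closed-form estimator (16) is a one-liner per family. A sketch assuming the exponential member g1(u; c) = u for the BEF and the inverse-exponential member g2(u; c) = 1/u for the BIEF (both illustrative choices, with data simulated from the M-O exponential model):

```python
import random

def r_tilde_bef(data):
    """Estimator (16), BEF branch, with g1(u) = u."""
    return 1.0 - sum(min(x, y) for x, y in data) / sum(x for x, _ in data)

def r_tilde_bief(data):
    """Estimator (16), BIEF branch, with g2(u) = 1/u."""
    return 1.0 - sum(1.0 / max(x, y) for x, y in data) / sum(1.0 / y for _, y in data)

# Check on simulated M-O data with (theta0, theta1, theta2) = (0.15, 0.2, 0.5),
# for which R = theta2/theta.
rng = random.Random(9)
data = []
for _ in range(20_000):
    u0, u1, u2 = (rng.expovariate(t) for t in (0.15, 0.2, 0.5))
    data.append((min(u0, u1), min(u0, u2)))
print(r_tilde_bef(data))
```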

### Estimation of R when the stress is censored at the strength

Sometimes, obtaining an estimate of R based on a complete sample is neither possible nor desirable, on account of lack of time or minimization of the experimental cost. Thus, there are some situations where the stress is censored at the strength (see Hanagal).

Let (X1, Y1), …, (Xn, Yn) be a random sample of size n from BEF(θ0, θ1, θ2; c); then, the strength and stress associated with the ith pair of the sample are

$$\left({X}_i,{Y}_i\right)=\left\{\begin{array}{c}\left({x}_i,{x}_i\right),\kern0.5em \mathrm{if}\ {x}_i\le {y}_i,\\ {}\left({x}_i,{y}_i\right),\kern0.5em \mathrm{if}\ {x}_i>{y}_i,\end{array}\right.$$

and the likelihood function can be written as

$$L={\left({\theta}_0+{\theta}_1\right)}^n{\theta}_2^{n_2}\prod \limits_{i=1}^n{g}_1^{\prime}\left({x}_i;c\right)\prod \limits_{i=1}^{n_2}{g}_1^{\prime}\left({y}_i;c\right){e}^{-\left({\theta}_0+{\theta}_1\right)\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)-{\theta}_2\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}$$

The likelihood equations are then

$$\begin{array}{c}\frac{n}{\theta_0+{\theta}_1}-\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)=0,\\ {}\frac{n_2}{\theta_2}-\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)=0.\end{array}$$

Thus, the MLE’s $$\overline{\theta_0+{\theta}_1},\overline{\theta_2}$$ of θ0 + θ1 and θ2, respectively, are

$$\overline{\theta_0+{\theta}_1}=\frac{n}{\sum \limits_{i=1}^n{g}_1\left({x}_i;c\right)},$$ and $$\overline{\theta_2}=\frac{n_2}{\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}$$,

then, the MLE, $$\overline{R}$$, of R when the stress is censored at the strength is given by

$$\overline{R}={\left(1+\frac{n\sum \limits_{i=1}^n{g}_1\left(\min \left\{{x}_i,{y}_i\right\};c\right)}{n_2\sum \limits_{i=1}^n{\mathrm{g}}_1\left({x}_i;c\right)}\right)}^{-1}.$$
(23)
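For illustration, eq. (23) translates into a few lines of code; the function g1 below is a user-supplied choice with c already fixed (the identity, g1(u) = u, corresponds to the M-O bivariate exponential member of the BEF). The function name is ours:

```python
import numpy as np

def r_bar_censored(x, y, g1):
    """Estimator of R = P(Y < X) from eq. (23) when the stress Y is
    censored at the strength X; g1 has the parameter c already fixed."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    n2 = int(np.sum(x > y))  # pairs where the stress is fully observed
    num = n * g1(np.minimum(x, y)).sum()
    den = n2 * g1(x).sum()
    return 1.0 / (1.0 + num / den)
```

Note that the estimator only requires the minima min{xi, yi} and the strengths xi, so it is usable exactly in the censored setting described above.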

## Special cases

Table 1 presents some well-known bivariate distributions that are members of BEF(θ0, θ1, θ2; c) or BIEF(β0, β1, β2; c), together with some other distributions, for appropriate choices of θi, βi, i = 0, 1, 2, g1(x; c), and g2(x; c).

Clearly, putting c = 1 and c = 2 in the bivariate inverse Weibull distribution, we get the bivariate inverse exponential and bivariate inverse Rayleigh distributions, respectively.

Notice that the bivariate modified Weibull distribution proposed by El-Bassiouny et al.  is a special case of BEF(θ0, θ1, θ2; c), where θ1 = α1, θ2 = α2, θ0 = α3, and c = (β, λ). Also, the bivariate generalized Rayleigh distribution introduced by Sarhan  with shape parameter equal to 1 is a special case of BIEF(β0, β1, β2; c), where β0 = β1 = β2 = λ2 and c = 1.

## Numerical illustrations

To illustrate the results of the previous sections numerically, a simulation study is performed; 1000 samples, each of size 10, 20, 30, and 50, are generated from some BEF and BIEF distributions. The reliability, R, is computed for the following cases.

Case 1: (X, Y) has the M-O bivariate exponential distribution with parameters θ0 = 0.15, θ1 = 0.2, and θ2 = 0.5.

Case 2: (X, Y) has the bivariate Rayleigh distribution with parameters θ0 = 0.2, θ1 = 0.25, and θ2 = 0.8.

Case 3: (X, Y) has the bivariate inverse exponential distribution with parameters β0 = 0.25, β1 = 1, and β2 = 0.35.

Case 4: (X, Y) has the bivariate inverse Rayleigh distribution with parameters β0 = 0.2, β1 = 1.2, and β2 = 0.1.

It is to be noted that these values are chosen arbitrarily, just to illustrate the results obtained.

Tables 2 and 3 show the true value of R and its estimates obtained by the maximum likelihood method ($${R}^M=\hat{R}$$), by our simple method of estimation ($${R}^S=\overset{\sim }{R}$$), by the estimator used when the stress is censored at the strength ($${R}^C=\overline{R}$$), and by a non-parametric estimator $$\left({R}^N=\overset{\smile }{R}\right)$$. The values R(M), R(S), R(C), and R(N) appearing in Tables 2 and 3 are the means of the 1000 replicates of the corresponding estimates. For comparison, we calculate the bias (b) and mean square error (MSE) of the different estimates in each case considered: the bias is the difference between the mean of the 1000 replicate estimates and the true value of R, and the MSE is the mean of the squared differences between the 1000 replicate estimates and the true value of R. The calculations are performed with the Maple program.

From Tables 2 and 3, we see that all estimates converge to R as n increases, and the MSE decreases. In Table 2, MSE(M) < MSE(C) < MSE(S) < MSE(N), while in Table 3, MSE(M) < MSE(S) < MSE(N). Nevertheless, R(C) (when the stress is censored at the strength, for the BEF) and R(S) (for the BEF and BIEF) are simple estimators that are easier to compute and give satisfactory results in terms of bias and mean square error.
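A minimal sketch of the Case 1 simulation, assuming the M-O construction X = min{U0, U1}, Y = min{U0, U2} with independent Ui having exponential rates θi, and summarizing the censored estimator of eq. (23) with g1(u) = u. The sampling details of the paper's Maple implementation may differ; the function name and seed are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sim_case1(n, theta0=0.15, theta1=0.2, theta2=0.5, reps=1000):
    """Generate `reps` samples of size n from the M-O bivariate exponential
    (Case 1) and summarize the censored estimator R_bar of eq. (23)."""
    r_true = theta2 / (theta0 + theta1 + theta2)  # R = P(Y < X)
    est = []
    for _ in range(reps):
        # numpy's exponential() takes the scale 1/theta, not the rate
        u0 = rng.exponential(1.0 / theta0, n)
        u1 = rng.exponential(1.0 / theta1, n)
        u2 = rng.exponential(1.0 / theta2, n)
        x, y = np.minimum(u0, u1), np.minimum(u0, u2)
        n2 = np.sum(x > y)
        if n2 == 0:  # degenerate replicate: no fully observed stress
            continue
        est.append(1.0 / (1.0 + n * np.minimum(x, y).sum() / (n2 * x.sum())))
    est = np.asarray(est)
    bias, mse = est.mean() - r_true, ((est - r_true) ** 2).mean()
    return r_true, est.mean(), bias, mse
```

Averaging the replicate estimates and reporting the bias and MSE mirrors the summaries of Tables 2 and 3.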

### Real data example

In real life, there are many situations in which X < Y, Y < X, or X = Y, such as nuclear reactor safety and competing risks (see Kotz et al. ). In the medical field, X and Y can represent the blood pressure or white blood cell count of patients before and after a certain operation.

The following data set is from American Football (National Football League) matches played on three consecutive weekends in 1986. The data was first published in the Washington Post and is available in Csörgő and Welsh  (Table 4).

The bivariate variables X and Y are as follows: X represents the game time to the first points scored by kicking the ball between the goal posts, and Y represents the game time to the first points scored by moving the ball into the end zone. These data were first analyzed by Csörgő and Welsh , who converted the seconds to decimal minutes. Kundu and Gupta  and Jamalizadeh and Kundu  also analyzed these data.

We consider the BEF and BIEF for fitting this data set. First, we fit the EF and IEF to X and Y separately. The data fit two cases, namely the exponential distribution, a special case of the EF, and the inverse exponential distribution, a special case of the IEF. For the exponential distribution, the MLEs of the scale parameters of X and Y are 0.1102 and 0.0745, respectively, while for the inverse exponential distribution the MLEs of the scale parameters are 4.4000 and 5.0214, respectively.

The Kolmogorov-Smirnov distances between the fitted and empirical distribution functions for X and Y are 0.14997 and 0.1182, respectively, for the exponential case, and 0.1530 and 0.1955 for the inverse exponential case. These values are less than the critical value D0.05 = 0.2099 for n = 42, so each of the exponential and inverse exponential distributions is an appropriate fit for the given data. This means that there may exist three independent random variables, say Ui, i = 0, 1, 2, with EF or IEF, such that X = min {U0, U1} or max{U0, U1} and Y = min {U0, U2} or max{U0, U2}.

Now, we test whether the M-O bivariate exponential distribution or the bivariate inverse exponential distribution provides the better fit to the above data set, using the Akaike information criterion (AIC) to check model validity. Based on the above data, the MLEs of the parameters of the M-O bivariate exponential distribution are θ0 = 0.0715, θ1 = 0.0456, and θ2 = 0.0030, and the MLEs of the parameters of the bivariate inverse exponential distribution are β0 = 4.2769, β1 = 0.1746, and β2 = 2.0715. For the M-O bivariate exponential distribution, the log-likelihood value is − 227.9347 and the corresponding AIC is 461.8694, while for the bivariate inverse exponential distribution the log-likelihood value is − 249.6874 and the AIC is 505.3748. Therefore, the M-O bivariate exponential distribution provides the better fit. Estimating the reliability parameter R using the corresponding MLEs $${\hat{\theta}}_i$$, i = 0, 1, 2, of the M-O bivariate exponential distribution gives R(M) = 0.0248, while the proposed simple estimators give R(S) = 0.0235 and R(C) = 0.0238, and the non-parametric estimator gives R(N) = 0.0238.
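As a quick check, AIC = 2k − 2ℓ with k = 3 free parameters per model reproduces the AIC values above from the reported log-likelihoods:

```python
def aic(loglik, k=3):
    """Akaike information criterion: AIC = 2k - 2*loglik, where k is the
    number of free parameters (three for each bivariate model here)."""
    return 2 * k - 2 * loglik

print(aic(-227.9347))  # M-O bivariate exponential: 461.8694
print(aic(-249.6874))  # bivariate inverse exponential: 505.3748
```

The model with the smaller AIC (here the M-O bivariate exponential) is preferred.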

## Conclusions

In this paper, we have suggested two forms of bivariate distributions, BEF and BIEF, with marginal distributions having a general exponential form or inverse exponential form. Some distributions in the literature belong to these families, such as the M-O bivariate exponential distribution, Marshall and Olkin , and the bivariate Rayleigh distribution, Pak et al. . Other bivariate distributions, such as the bivariate Weibull and bivariate Burr type III, could also belong to these families, depending on the form of g1(x; c) or g2(x; c). We discussed some properties of the proposed families and studied the stress-strength reliability parameter, R = P(Y < X). The MLEs of the distribution parameters are derived, and simple estimators of R based on some marginal distributions are introduced in the case of complete sampling. When the stress is censored at the strength, an explicit estimator of R is also obtained for the BEF distribution. Some bivariate members of the proposed families are presented. A simulation study shows that the proposed simple estimators of R are easier to compute and give satisfactory results in terms of bias and mean square error. A real data example with bivariate variables (X, Y) belonging to one of the proposed families is also presented.

## Availability of data and materials

The data used in the simulation study was generated by Maple program, while the real data example is available in Csörgő and Welsh .

## Abbreviations

AIC:

Akaike information criterion

b :

Bias

BEF:

General bivariate exponential distribution

BIEF:

General bivariate inverse exponential distribution

EF:

Distribution of general exponential form

IEF:

Distribution of general inverse exponential form

Iff:

If and only if

MLE:

The maximum likelihood estimate

M-O:

Marshall-Olkin

MSE:

Mean square error

R :

Reliability parameter

## References

1. Mokhlis, N.A., Ibrahim, E.J., Gharieb, M.D.: Stress-strength reliability with general form distributions. Commun. Stat. Theory. Methods. 46(3), 1230–1246 (2017)

2. Mokhlis, N.A.: Reliability of a stress-strength model with Burr type III distributions. Commun. Stat. Theory. Methods. 34(7), 1643–1657 (2005)

3. Kundu, D., Gupta, R.D.: Estimation of P(Y<X) for Weibull distribution. IEEE. Trans. Reliability. 55, 270–280 (2006)

4. Singh, S.K., Singh, U., Singh Yadav, A., Vishwkarma, P.K.: On the estimation of stress-strength reliability parameter of inverted exponential distribution. Int. J. Sci. World. 3, 98–112 (2015)

5. Kotz, S., Lumelskii, Y., Pensky, M.: The Stress-Strength Model and Its Generalizations: Theory and Applications. World Scientific, Singapore (2003)

6. Mokhlis, N.M.: Reliability of strength model with a bivariate exponential distribution. J. Egypt. Math. Soc. 14, 69–78 (2006)

7. Nadarajah, S., Kotz, S.: Reliability for some bivariate exponential distributions. Math. Probl. Eng. 2006, 1–14 (2006)

8. Nguimkeu, P., Rekkas, M., Wong, A.: Interval estimation for the stress-strength reliability with bivariate normal variables. Open J. Stat. 4, 630–640 (2014)

9. Pak, A., Khoolenjani, N.B., Jafari, A.A.: Inference on P (Y< X) in bivariate Rayleigh distribution. Commun. Stat. Theory. Methods. 43(22), 4881–4892 (2014)

10. Abdel-Hamid, A.H.: Stress-strength reliability for general bivariate distributions. J. Egypt. Math. Soc. 5, 617–621 (2016)

11. Kolesárová, A., Mesiar, R., Saminger-Platz, S.: Generalized Farlie-Gumbel-Morgenstern Copulas, International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 244–252. Springer, Cham (2018)

12. Arnold, B.C., Arvanitis, M.A.: On bivariate pseudo-exponential distributions. J. Appl. Stat. 1–13 (2019). https://doi.org/10.1080/02664763.2019.1686132

13. El-Bassiouny, A.H., Shahen, H.S., Abouhawwash, M.: A new bivariate modified Weibull distribution and its extended distribution. J. Stat. Appl. Probability. 7(2), 217–231 (2018)

14. Sarhan, A.: The bivariate generalized Rayleigh distribution. J. Math. Sci. Mod. 2(2), 99–111 (2019)

15. Marshall, A.W., Olkin, I.: A multivariate exponential distribution. J. Am. Stat. Assoc. 62, 30–44 (1967)

16. Hanagel, D.: Estimation of reliability when stress is censored at strength. Commun. Stat. Theory. Methods. 26(4), 911–919 (1997)

17. Kotz, S., Balakrishnan, N., Johnson, N.L.: Continuous Multivariate Distributions, vol. 1: Models and Applications, 2nd ed. John Wiley & Sons, New York (2004)

18. Csörgő, S., Welsh, A.H.: Testing for exponential and Marshall–Olkin distributions. J. Stat. Plann. Inference. 23(3), 287–300 (1989)

19. Kundu, D., Gupta, R.D.: Modified Sarhan-Balakrishnan singular bivariate distribution. J. Stat. Plann. Inference. 140(2), 526–538 (2010)

20. Jamalizadeh, A., Kundu, D.: Weighted Marshall–Olkin bivariate exponential distribution. Statistics. 47(5), 917–928 (2013)

## Acknowledgements

The authors thank the editor and the anonymous referees for their valuable comments.


## Author information


### Contributions

Both authors have jointly worked to the manuscript with an equal contribution. Both authors read and approved the final manuscript.

### Corresponding author

Correspondence to S. K. Khames.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 