The first derivative of the log-density (2) is
$$\begin{aligned} \eta (x)=\frac{\textrm{d}}{\textrm{d}x}\log f(x;a,b)=-a+\frac{(b-1)\textrm{e}^{-x}}{1-\textrm{e}^{-x}}. \end{aligned}$$
For \(b\le 1\), we have \(\eta (x)<0\) for all \(x>0\), so the density is strictly decreasing; otherwise, the mode is obtained by solving \(\eta (x)=0\). Hence, the mode of X is
$$\begin{aligned} \text {mode}(X)={\left\{ \begin{array}{ll} 0,&{} b\le 1, \\ -\log \left( \frac{a}{a+b-1}\right) ,&{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
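As an illustrative numerical check (not part of the derivation), the closed-form mode can be compared with a direct maximization of the log-density; the parameter values below are arbitrary and the SciPy routines are only one possible choice.

```python
# Numerical check of the mode formula (illustrative sketch, arbitrary parameters).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import betaln

a, b = 2.0, 3.0  # hypothetical parameter values with b > 1

def log_density(x):
    # log f(x; a, b) = -a*x + (b - 1)*log(1 - exp(-x)) - log B(a, b)
    return -a * x + (b - 1) * np.log1p(-np.exp(-x)) - betaln(a, b)

# Closed-form mode for b > 1: -log(a / (a + b - 1))
mode_closed = -np.log(a / (a + b - 1))

# Numerical mode obtained by maximizing the log-density on a bounded interval
res = minimize_scalar(lambda x: -log_density(x), bounds=(1e-8, 20.0), method="bounded")

print(mode_closed, res.x)  # the two values should agree closely
```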
By inverting \(F(x;a,b)=u\), the quantile function of X is
$$\begin{aligned} F^{-1}(u;a,b)=-\log Q_1(1-u;a,b), \quad u\in (0,1), \end{aligned}$$
where \(Q_1(\cdot ;a,b)\) denotes the inverse of the function given in Equation (1). Using the quantile function, the random variable
$$\begin{aligned} X=-\log Q_1(1-V;a,b)\quad \text {or}\quad X=-\log Q_1(V;a,b) \end{aligned}$$
(3)
has density function (2), where V is a uniform random variable over the interval (0, 1).
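The representation (3) yields a simple inverse-transform sampler. The following minimal sketch assumes that Equation (1) is the regularized incomplete beta function, so that \(Q_1\) corresponds to scipy.special.betaincinv; equivalently, \(X=-\log Y\) with \(Y\sim \text {Beta}(a,b)\), which follows from (2) by the change of variable \(y=\textrm{e}^{-x}\). The parameter values are arbitrary.

```python
# Inverse-transform sampling via (3): minimal sketch assuming Q_1 in (1) is the
# inverse regularized incomplete beta function (scipy.special.betaincinv).
import numpy as np
from scipy.special import betaincinv
from scipy.stats import beta as beta_dist

a, b = 2.0, 3.0                     # arbitrary (hypothetical) parameter values
rng = np.random.default_rng(0)
v = rng.uniform(size=100_000)

# X = -log Q_1(V; a, b); since Q_1(V; a, b) is Beta(a, b)-distributed,
# this is equivalent to transforming a Beta(a, b) sample.
x_inverse = -np.log(betaincinv(a, b, v))
x_beta = -np.log(beta_dist.rvs(a, b, size=100_000, random_state=rng))

# The two sample means should agree closely.
print(x_inverse.mean(), x_beta.mean())
```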
The rth moment of X is obtained as
$$\begin{aligned} \mu ^r=\mathbb {E}[X^r]=\frac{1}{B(a,b)}\int _0^\infty x^r\textrm{e}^{-ax}(1-\textrm{e}^{-x})^{b-1}\textrm{d}x. \end{aligned}$$
Consider the following power-series expansion, which converges for \(x>0\):
$$\begin{aligned} (1-\textrm{e}^{-x})^{b-1}=\sum _{k=0}^\infty (-1)^k\left( {\begin{array}{c}b-1\\ k\end{array}}\right) \textrm{e}^{-kx}. \end{aligned}$$
Using the expansion above, the rth moment of X can be written as
$$\begin{aligned} \mu ^r=\frac{1}{B(a,b)}\sum _{k=0}^\infty (-1)^k\left( {\begin{array}{c}b-1\\ k\end{array}}\right) \int _0^\infty x^r\textrm{e}^{-(a+k)x}\textrm{d}x. \end{aligned}$$
Taking \(w=(a+k)x\), we have
$$\begin{aligned} I=&\int _0^\infty x^r\textrm{e}^{-(a+k)x}\textrm{d}x\\ =&\frac{1}{(a+k)^{r+1}}\int _0^\infty w^r\textrm{e}^{-w}\textrm{d}w\\ =&\frac{\Gamma (r+1)}{(a+k)^{r+1}}. \end{aligned}$$
So, the rth moment of X is given by
$$\begin{aligned} \mu ^r=\frac{1}{B(a,b)}\sum _{k=0}^\infty (-1)^k\left( {\begin{array}{c}b-1\\ k\end{array}}\right) \frac{\Gamma (r+1)}{(a+k)^{r+1}}. \end{aligned}$$
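As a quick numerical illustration (with arbitrary parameter values and a finite truncation of the infinite series), the series expression can be checked against direct numerical integration:

```python
# Series formula for the r-th moment versus direct numerical integration
# (illustrative sketch; the infinite series is truncated at K terms).
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, binom, gamma

a, b, r = 2.0, 3.0, 2    # hypothetical parameters and moment order
K = 200                  # truncation point for the series

k = np.arange(K + 1)
series = np.sum((-1.0) ** k * binom(b - 1, k) * gamma(r + 1) / (a + k) ** (r + 1)) / beta(a, b)

integral, _ = quad(lambda x: x**r * np.exp(-a * x) * (1 - np.exp(-x)) ** (b - 1) / beta(a, b),
                   0, np.inf)

print(series, integral)  # the truncated series should match the numerical integral
```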
For \(s>0\), the rth incomplete moment of X is obtained as
$$\begin{aligned} m_r(s)=&\frac{1}{B(a,b)}\int _0^s x^r\textrm{e}^{-ax}(1-\textrm{e}^{-x})^{b-1}\textrm{d}x\\=&\frac{1}{B(a,b)}\sum _{k=0}^\infty (-1)^k\left( {\begin{array}{c}b-1\\ k\end{array}}\right) \int _0^s x^r\textrm{e}^{-(a+k)x}\textrm{d}x. \end{aligned}$$
Taking \(t=(a+k)x\), we have
$$\begin{aligned} J=&\int _0^s x^r\textrm{e}^{-(a+k)x}\textrm{d}x\\ =&\frac{1}{(a+k)^{r+1}}\int _0^{(a+k)s} t^r\textrm{e}^{-t}\textrm{d}t\\ =&\frac{\gamma (r+1,(a+k)s)}{(a+k)^{r+1}}, \end{aligned}$$
where \(\gamma (p,x)=\int _0^x z^{p-1}\textrm{e}^{-z}\textrm{d}z\) denotes the lower incomplete gamma function.
Then, the rth incomplete moment of X is given by
$$\begin{aligned} m_r(s)=&\frac{1}{B(a,b)}\sum _{k=0}^\infty (-1)^k\left( {\begin{array}{c}b-1\\ k\end{array}}\right) \frac{\gamma (r+1,(a+k)s)}{(a+k)^{r+1}}. \end{aligned}$$
(4)
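A similar illustrative sketch checks the incomplete-moment series (4) against numerical integration; note that SciPy's gammainc is the regularized lower incomplete gamma function, so it is rescaled by \(\Gamma (r+1)\). Parameter values are arbitrary.

```python
# Incomplete moment m_r(s) from (4) versus direct numerical integration
# (illustrative sketch with arbitrary parameters).
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, binom, gamma, gammainc

a, b = 2.0, 3.0      # hypothetical parameters
r, s, K = 1, 1.5, 200

k = np.arange(K + 1)
# gamma(r+1, (a+k)*s): SciPy's gammainc is regularized, so multiply by Gamma(r+1)
lower_inc_gamma = gamma(r + 1) * gammainc(r + 1, (a + k) * s)
m_series = np.sum((-1.0) ** k * binom(b - 1, k) * lower_inc_gamma / (a + k) ** (r + 1)) / beta(a, b)

m_quad, _ = quad(lambda x: x**r * np.exp(-a * x) * (1 - np.exp(-x)) ** (b - 1) / beta(a, b), 0, s)

print(m_series, m_quad)  # should agree
```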
Entropy measures the variation or uncertainty of a random variable. Two popular entropy measures are the Rényi and Shannon entropies. For \(\rho >0\) and \(\rho \ne 1\), the Rényi entropy of a random variable having pdf \(f(\cdot )\) is given by
$$\begin{aligned} \mathcal {I}_R(\rho )=\frac{1}{1-\rho }\log \left( \int f(x)^\rho \textrm{d}x\right) , \end{aligned}$$
where the integral is taken over the support of \(f\).
For the KB distribution, the Rényi entropy is
$$\begin{aligned} \mathcal {I}_R(\rho )=\frac{1}{1-\rho }\log \left( \frac{1}{B(a,b)^\rho }\int _0^\infty \textrm{e}^{-a\rho x}(1-\textrm{e}^{-x})^{(b-1)\rho } \textrm{d}x\right) . \end{aligned}$$
Setting \(v=\textrm{e}^{-x}\), we have
$$\begin{aligned} L=\int _0^\infty \textrm{e}^{-a\rho x}(1-\textrm{e}^{-x})^{(b-1)\rho }\textrm{d}x=\int _0^1 v^{a\rho -1}(1-v)^{(b-1)\rho }\textrm{d}v=B(a\rho ,(b-1)\rho +1). \end{aligned}$$
Thus, the Rényi entropy of X becomes
$$\begin{aligned} \mathcal {I}_R(\rho )=\frac{1}{1-\rho }\log \left( \frac{B(a\rho ,(b-1)\rho +1)}{B(a,b)^\rho }\right) . \end{aligned}$$
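This closed form can be verified numerically, for example as in the following sketch (arbitrary parameter values; \(\rho\) must also satisfy \((b-1)\rho +1>0\) so that the beta function is finite):

```python
# Closed-form Renyi entropy versus direct numerical integration
# (illustrative sketch with arbitrary parameters).
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

a, b, rho = 2.0, 3.0, 0.5   # hypothetical parameters, rho > 0 and rho != 1

closed_form = np.log(beta(a * rho, (b - 1) * rho + 1) / beta(a, b) ** rho) / (1 - rho)

def pdf(x):
    return np.exp(-a * x) * (1 - np.exp(-x)) ** (b - 1) / beta(a, b)

integral, _ = quad(lambda x: pdf(x) ** rho, 0, np.inf)
numerical = np.log(integral) / (1 - rho)

print(closed_form, numerical)  # should agree
```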
The Shannon entropy is given by \(\mathcal {I}_S=\mathbb {E}[-\log f(X)]\). So, for the KB distribution, the Shannon entropy is
$$\begin{aligned} \mathcal {I}_{S}=\log B(a,b)+a\mathbb {E}[X]-(b-1)\mathbb {E}[\log (1-\textrm{e}^{-X})]. \end{aligned}$$
Since the expected score function is zero under the standard regularity conditions underlying the maximum likelihood method, we can show that \(\mathbb {E}[X]=\psi (a+b)-\psi (a)\) and \(\mathbb {E}[\log (1-\textrm{e}^{-X})]=\psi (b)-\psi (a+b)\), where \(\psi (p)=\textrm{d}\log \Gamma (p)/\textrm{d}p\) is the digamma function.
Thus, the Shannon entropy of X is
$$\begin{aligned} \mathcal {I}_{S}=\log B(a,b)+a\psi (a+b)-a\psi (a)-(b-1)[\psi (b)-\psi (a+b)]. \end{aligned}$$
Thus, we see that for the KB distribution, the Rényi and Shannon entropies can be easily computed.
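As an illustration, the digamma expression for the Shannon entropy can be compared with a Monte Carlo estimate of \(\mathbb {E}[-\log f(X)]\); the sketch below uses arbitrary parameter values and the representation \(X=-\log Y\) with \(Y\sim \text {Beta}(a,b)\).

```python
# Shannon entropy from the digamma expression versus a Monte Carlo estimate
# of E[-log f(X)] (illustrative sketch with arbitrary parameters).
import numpy as np
from scipy.special import betaln, digamma
from scipy.stats import beta as beta_dist

a, b = 2.0, 3.0   # hypothetical parameters

closed_form = (betaln(a, b) + a * (digamma(a + b) - digamma(a))
               - (b - 1) * (digamma(b) - digamma(a + b)))

# Monte Carlo check: X = -log Y with Y ~ Beta(a, b) has density (2).
rng = np.random.default_rng(1)
y = beta_dist.rvs(a, b, size=200_000, random_state=rng)
x = -np.log(y)
neg_log_f = betaln(a, b) + a * x - (b - 1) * np.log1p(-np.exp(-x))
print(closed_form, neg_log_f.mean())  # should agree closely
```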
The mean deviations of X about the mean and about the median are given as
$$\begin{aligned} \varphi _1(\mu ^1)=\int _0^\infty |x-\mu ^1|f(x;a,b)\textrm{d}x=2\mu ^1 F(\mu ^1;a,b)-2m_1(\mu ^1) \end{aligned}$$
and
$$\begin{aligned} \varphi _2(\omega )=\;&\int _0^\infty |x-\omega |f(x;a,b)\textrm{d}x\\ =\;&\mu ^1 - 2m_1(\omega ), \end{aligned}$$
respectively, where \(\mu ^1=\mathbb {E}[X]\), \(\omega =F^{-1}(0.5;a,b)\) is the median of X, and \(m_1(\cdot )\) is the first incomplete moment defined in (4).
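A final illustrative sketch evaluates both mean deviations, assuming (consistently with the quantile function above) that the cdf has the form \(F(x;a,b)=1-I_{\textrm{e}^{-x}}(a,b)\), where \(I\) denotes the regularized incomplete beta function; the parameter values are arbitrary and the first incomplete moment is computed here by direct numerical integration rather than the series (4).

```python
# Mean deviations about the mean and about the median (illustrative sketch,
# assuming F(x; a, b) = 1 - I_{exp(-x)}(a, b) with I the regularized incomplete beta).
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc, betaincinv, digamma

a, b = 2.0, 3.0   # hypothetical parameters

def pdf(x):
    return np.exp(-a * x) * (1 - np.exp(-x)) ** (b - 1) / beta(a, b)

def cdf(x):
    return 1.0 - betainc(a, b, np.exp(-x))

def m1(s):
    # first incomplete moment, computed by direct numerical integration
    return quad(lambda x: x * pdf(x), 0, s)[0]

mean = digamma(a + b) - digamma(a)          # E[X], as given in the text
median = -np.log(betaincinv(a, b, 0.5))     # F^{-1}(0.5; a, b)

phi1 = 2 * mean * cdf(mean) - 2 * m1(mean)  # deviation about the mean
phi2 = mean - 2 * m1(median)                # deviation about the median

# Direct check by integrating |x - c| f(x) over (0, infinity)
phi1_direct = quad(lambda x: abs(x - mean) * pdf(x), 0, np.inf)[0]
phi2_direct = quad(lambda x: abs(x - median) * pdf(x), 0, np.inf)[0]
print(phi1, phi1_direct, phi2, phi2_direct)
```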