# On the perturbation analysis of the maximal solution for the matrix equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast}\kern0.1em {X}^{-1}\kern0.1em {A}_i+\sum \limits_{j=1}^n{B}_j^{\ast}\kern0.1em {X}^{-1}\kern0.1em {B}_j=I$$

## Abstract

In this paper, we study the perturbation of the maximal solution for the matrix equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast}\kern0.1em {X}^{-1}\kern0.1em {A}_i+\sum \limits_{j=1}^n{B}_j^{\ast}\kern0.1em {X}^{-1}\kern0.1em {B}_j=I$$ using matrix differentiation. We derive a differential bound for this maximal solution. Moreover, we present a perturbation estimate and an error bound for this maximal solution. Finally, a numerical example is given to demonstrate the reliability of the obtained results.

## Introduction

We consider the nonlinear matrix equation

$$X-\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{-1}{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j=I$$
(1)

where Ai, i = 1, 2, …, m, and Bj, j = 1, 2, …, n, are M × M nonsingular complex matrices, m and n are nonnegative integers, and I is the M × M identity matrix. The conjugate transposes of Ai and Bj are $${A}_i^{\ast }$$ and $${B}_j^{\ast }$$, respectively. Equations of this type arise in several areas of application, such as ladder networks [1, 2], dynamic programming [3, 4], and control theory [5, 6]. Several authors [7,8,9,10,11,12,13,14,15] have studied the existence of positive definite solutions of comparable types of matrix equations. Perturbation analysis for various forms of matrix equations is studied in [16,17,18,19,20]. Ramadan and El-Shazly established the existence of the maximal solution of Eq. (1). Perturbation bounds for the Hermitian positive definite solutions to the matrix equations $$X\pm {A}^{\ast }{X}^n A=Q$$ are derived in [22,23,24]. Chen and Li obtained the perturbation bound of the maximal solution for the equation $$X+{A}^{\ast }{X}^{-1}A=P$$ using differential methods. Ran and Reurings [26, 27] studied the existence of a unique positive definite solution of the equations $$X-\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{-1}{A}_i=I$$ and $$X+\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j=I$$ using a fixed point theorem. Sun presented the perturbation bound for the maximal solution of the equation $$X=Q+{A}^H{\left(X-C\right)}^{-1}A$$. Li and Zhang evaluated a perturbation bound for the unique solution of the equation $$X-{A}^{\ast }{X}^pA=Q$$. Chen and Li derived perturbation bounds for the Hermitian solutions to the equations $$X\pm {A}^{\ast }{X}^{-1}A=I$$. Duan and Wang considered the perturbation estimate for the equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast }X{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }X{B}_j=I$$, based on matrix differentiation.

Liu studied the perturbation bound of the M-matrix solution for the equation $$Q(X)={X}^2- EX-F=0$$. In this paper, we denote the maximal solution of Eq. (1) by XL and write H(M) for the set of M × M Hermitian matrices. We organize this paper as follows: first, in the “Preliminaries” section, we present some notation, a lemma, and the theorems needed to develop our work. In the “Perturbation analysis for the matrix equation (1)” section, the differential bound for the maximal solution of the matrix equation (1) is derived; moreover, a perturbation estimate and an error bound for this solution are given. In the “Numerical test problem” section, a numerical test is given to illustrate the sharpness of the perturbation bound and the reliability of the obtained results.

## Preliminaries

For square nonsingular matrices P and Q, the following conditions hold:

1. i.

If P ≥ Q > 0, then $${P}^{-1}\le {Q}^{-1}$$.

2. ii.

The spectral norm is a monotone norm, i.e., if 0 < P ≤ Q, then ‖P‖ ≤ ‖Q‖.

3. iii.

The expression P ≥ Q (P > Q) means that P − Q is a Hermitian positive semi-definite (definite) matrix, and [P, Q] = { Y : P ≤ Y ≤ Q }.

Definition 2.1:  Let H = (hij)r × s; then the differential of the matrix H is defined by dH = (dhij)r × s. For example,

let $$H=\left(\begin{array}{ccc}{v}^2& 3v-1& w+2v\\ {}{w}^2-3v& 2v& w+2\\ {}w-v& -5w& {w}^3+{v}^3\end{array}\right)$$
(2)
Then $$dH=\left(\begin{array}{ccc}2v\; dv& 3\; dv& dw+2\; dv\\ {}2w\; dw-3\; dv& 2\; dv& dw\\ {} dw- dv& -5\; dw& 3{w}^2\; dw+3{v}^2\; dv\end{array}\right)$$
(3)
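As a quick numerical check of this definition (an illustrative sketch, not part of the paper; the function names `H` and `dH` are our own), the finite difference H(v + dv, w + dw) − H(v, w) should agree with the entry-wise differential dH above up to second-order terms in (dv, dw):

```python
import numpy as np

def H(v, w):
    # The example matrix from Definition 2.1
    return np.array([
        [v**2,       3*v - 1, w + 2*v],
        [w**2 - 3*v, 2*v,     w + 2],
        [w - v,      -5*w,    w**3 + v**3],
    ])

def dH(v, w, dv, dw):
    # Its entry-wise differential, Eq. (3)
    return np.array([
        [2*v*dv,        3*dv,  dw + 2*dv],
        [2*w*dw - 3*dv, 2*dv,  dw],
        [dw - dv,       -5*dw, 3*w**2*dw + 3*v**2*dv],
    ])

v, w, dv, dw = 1.3, 0.7, 1e-6, 2e-6
finite_diff = H(v + dv, w + dw) - H(v, w)
# Agreement up to second-order terms in (dv, dw)
print(np.allclose(finite_diff, dH(v, w, dv, dw), atol=1e-10))
```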

### Lemma 2.1: Matrix differentiation has the following properties:

1. 1)

d (H1 ± H2) = d H1 ± d H2 ;

2. 2)

d (c H) = c (d H), where c is a complex number;

3. 3)

d (H∗) = (d H)∗;

4. 4)

d (H1H2H3) = (d H1)H2H3 + H1(d H2)H3 + H1H2(dH3);

5. 5)

d (H−1) =  − H−1(d H) H−1;

6. 6)

d H = 0, where H is a constant matrix and 0 is the zero matrix of the same dimension of H.
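Property 5), the differential of the inverse, can likewise be checked numerically: for a small perturbation dH, the finite difference inv(H + dH) − inv(H) should agree with −H⁻¹(dH)H⁻¹ up to second-order terms in dH. A minimal sketch (our illustration, with an arbitrary well-conditioned test matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
H = 5 * np.eye(4) + rng.standard_normal((4, 4))  # arbitrary well-conditioned test matrix
dH = 1e-7 * rng.standard_normal((4, 4))          # small perturbation

# Finite-difference approximation of d(H^{-1}) versus property 5) of Lemma 2.1
lhs = np.linalg.inv(H + dH) - np.linalg.inv(H)
rhs = -np.linalg.inv(H) @ dH @ np.linalg.inv(H)
print(np.linalg.norm(lhs - rhs))  # residual is second order in dH, hence tiny
```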

### Theorem 2.1: 

Let (Y, ≤) be a partially ordered set endowed with a metric d such that (Y, d) is complete. Let G : Y × Y → Y be a continuous mapping with the mixed monotone property on Y. Suppose there exists ε ∈ [0, 1) such that $$d\;\left(\;G\left(y,z\right),G\left(v,w\right)\;\right)\le \frac{\varepsilon }{2}\;\left[\;d\;\left(y,v\right)+d\;\left(z,w\right)\;\right]$$ for all (y, z), (v, w) ∈ Y × Y with y ≥ v and z ≤ w. Moreover, suppose there exist y0, z0 ∈ Y such that y0 ≤ G (y0, z0) and z0 ≥ G (z0, y0). Then,

1. (a)

G has a coupled fixed point $$\left(\tilde{y},\tilde{z}\right)\in Y\times Y$$;

2. (b)

The sequences { yk} and { zk} defined by yk + 1 = G ( yk, zk) and zk + 1 = G  ( zk, yk) converge to $$\tilde{y}$$ and $$\tilde{z}$$, respectively ;

In addition, suppose that every pair of elements has a lower bound and an upper bound, then

1. (c)

G has a unique coupled fixed point $$\left(\tilde{y},\tilde{z}\right)\in Y\times Y$$;

2. (d)

$$\tilde{y}=\tilde{z}$$; and

3. (e)

We have the following estimate:

$$\max\;\left\{\;d\;\left({y}_k,\tilde{y}\right),d\;\left({z}_k,\tilde{y}\right)\;\right\}\le \frac{\varepsilon^k}{2\;\left(1-\varepsilon \right)}\;\left[\;d\;\left(G\left({y}_0,{z}_0\right),{y}_0\right)+d\;\left(\;G\left({z}_0,{y}_0\right),{z}_0\;\right)\;\right].$$

Theorem 2.2 (Schauder fixed point theorem) 

Let T be a nonempty compact convex subset of a normed vector space. Then every continuous function g : T → T mapping T into itself has a fixed point.

Define the set of matrices Ψ by $$\varPsi =\left\{\;X\in H(M):X\ge \frac{1}{2}I\;\right\}$$,

and let the mapping G : Ψ × Ψ → Ψ associated with Eq. (1) be defined by

$$G\left(X,Y\right)=I-\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{Y}^{-1}{A}_i$$
(4)
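For concreteness, the mapping (4) can be written as a short Python/NumPy function (an illustrative sketch; NumPy and the function signature are our own choices, not the paper's):

```python
import numpy as np

def G(X, Y, A_list, B_list):
    # G(X, Y) = I - sum_j B_j^* X^{-1} B_j + sum_i A_i^* Y^{-1} A_i, as in Eq. (4)
    M = X.shape[0]
    Xinv, Yinv = np.linalg.inv(X), np.linalg.inv(Y)
    out = np.eye(M, dtype=complex)
    for B in B_list:
        out -= B.conj().T @ Xinv @ B
    for A in A_list:
        out += A.conj().T @ Yinv @ A
    return out

# Sanity check: with A_1 = B_1 = 0.1*I the two sums cancel, so G(I, I) = I
A_list = [0.1 * np.eye(3)]
B_list = [0.1 * np.eye(3)]
print(np.allclose(G(np.eye(3), np.eye(3), A_list, B_list), np.eye(3)))
```

A solution of Eq. (1) is precisely a matrix X with G(X, X) = X.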

Theorem 2.3:  Suppose that the following assumptions hold:

$$\sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2<\frac{1}{4^2},\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2<\frac{1}{4^2}$$
(5)
$$6\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j-2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\le \frac{3}{2}I$$
(6)
$$6\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i-2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j\le \frac{3}{2}I$$
(7)

Then Eq. (1) has a unique maximal solution XL with

$${X}_L\in \left[I-2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+\frac{2}{3}\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i,I-\frac{2}{3}\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\right].$$

Proof:  We claim that there exists (X, Y ) ∈ H(M) × H(M) solving the system

$$X=I-\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{Y}^{-1}{A}_i,$$
$$Y=I-\sum \limits_{j=1}^n{B}_j^{\ast }{Y}^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{-1}{A}_i$$
(8)

Now, take $${X}_0=\frac{1}{2}I$$ and $${Y}_0=\frac{3}{2}I$$.

From condition (6) we have

$$6\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j-2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\le \frac{3}{2}I,$$ so $$2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j-\frac{2}{3}\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\le \frac{1}{2}I,$$ and hence $$G\left(\frac{1}{2}I,\frac{3}{2}I\right)=I-2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+\frac{2}{3}\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\ge \frac{1}{2}I$$, that is, $${X}_0=\frac{1}{2}I\le G\;\left(\frac{1}{2}I,\frac{3}{2}I\right)$$.

Moreover from condition (7) we get

$$6\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i-2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j\le \frac{3}{2}I,$$ so $$2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i-\frac{2}{3}\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j\le \frac{1}{2}I,$$ and hence

$$G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)=I-\frac{2}{3}\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\le \frac{3}{2}I$$ and $${Y}_0=\frac{3}{2}I\ge G\;\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right).$$

From Theorem 2.1 (a), there exists (X, Y ) ∈ H(M) × H(M) with G(X, Y) = X and G(Y, X) = Y; that is, (X, Y) is a solution to (8). On the other hand, every pair X, Y ∈ H(M) has a greatest lower bound and a least upper bound. Since G is also continuous with respect to the partial order, Theorem 2.1 implies that (X, Y) is the unique coupled fixed point of G with X = Y = XL.

Thus, the unique solution of Eq. (1) is XL.

Now, using the Schauder fixed point theorem, we define the mapping $$F:\left[G\;\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\;\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]\to \varPsi$$ by

$$F\left({X}_L\right)=G\left({X}_L,{X}_L\right)=I-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}{A}_i,$$

for all $${X}_L\in \left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right].$$

We want to prove that

$$F\kern0.36em \left(\left[G\;\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\;\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]\right)\subseteq \left[G\;\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\;\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right].$$

Let $${X}_L\in \left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]$$ , that is $$G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right)\le {X}_L\le G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)$$.

Applying the mixed monotone property of G yields

$$G\;\left(G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right)\le F\left({X}_L\right)=G\left({X}_L,{X}_L\right)\le G\;\left(G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right),G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right)\right),$$

since $$G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right)\ge \frac{1}{2}I$$ and $$G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\le \frac{3}{2}I$$.

Applying the mixed monotone property of G again implies that

$$G\;\left(G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right)\ge G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),$$
(9)
$$G\;\left(G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right),G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right)\right)\le G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right).$$
(10)

From (9) and (10), it follows that

$$G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right)\le F\left({X}_L\right)\le G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right).$$

Thus, our claim that

$$F\kern0.36em \left(\left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]\right)\subseteq \left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]$$ holds.

Now, we have a continuous mapping F that maps the compact convex set $$\left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]$$ into itself, so by the Schauder fixed point theorem, F has at least one fixed point in this set. A fixed point of F is a solution of Eq. (1), and we have already proved that Eq. (1) has a unique solution in Ψ. Thus, this solution must lie in the set $$\left[G\left(\frac{1}{2}I,\kern0.36em \frac{3}{2}I\right),G\left(\frac{3}{2}I,\kern0.36em \frac{1}{2}I\right)\right]$$. That is,

$${X}_L\in \left[I-2\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+\frac{2}{3}\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i,I-\frac{2}{3}\sum \limits_{j=1}^n{B}_j^{\ast }{B}_j+2\sum \limits_{i=1}^m{A}_i^{\ast }{A}_i\right].$$

This completes the proof of the theorem.

## Perturbation analysis for the matrix equation (1)

Theorem 3.1

$$\mathrm{If}\sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2<\frac{1}{4^2}\ \mathrm{and}\ \sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2<\frac{1}{4^2}$$
(11)

then the maximal solution XL of Eq. (1) exists and satisfies

$$\left\Vert\;d\;{X}_L\right\Vert \le \frac{4\;\left[\sum \limits_{i=1}^m\left\Vert\;{A}_i\right\Vert\;\left\Vert\;d\;{A}_i\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j\right\Vert\;\left\Vert\;d\;{B}_j\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\;\right]}$$
(12)

Proof: Differentiating Eq. (1) yields

$$\begin{array}{l}d\;{X}_L-\sum \limits_{i=1}^m\left(d\;{A}_i^{\ast}\right)\;{X}_L^{-1}{A}_i-\sum \limits_{i=1}^m\;{A}_i^{\ast}\left(d\;{X}_L^{-1}\right)\;{A}_i-\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}\left(d\;{A}_i\right)\\ {}+\sum \limits_{j=1}^n\left(d\;{B}_j^{\ast}\right)\;{X}_L^{-1}{B}_j+\sum \limits_{j=1}^n{B}_j^{\ast}\left(d\;{X}_L^{-1}\right)\;{B}_j+\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\;\left(d\;{B}_j\right)=\mathbf{0}.\end{array}$$
(13)

Applying Lemma 2.1 to Eq. (13) we get that

$$\begin{array}{l}d\;{X}_L-\sum \limits_{i=1}^m\left(d\;{A}_i^{\ast}\right)\;{X}_L^{-1}{A}_i+\sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i-\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}\left(d\;{A}_i\right)\\ {}+\sum \limits_{j=1}^n\left(d\;{B}_j^{\ast}\right)\;{X}_L^{-1}{B}_j-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j+\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\;\left(d\;{B}_j\right)=\mathbf{0}.\end{array}$$
(14)

Eq. (14) can be rewritten as:

$$\begin{array}{l}d\;{X}_L+\sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\\ {}=\sum \limits_{i=1}^m\left(d\;{A}_i^{\ast}\right)\;{X}_L^{-1}{A}_i+\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}\left(d\;{A}_i\right)-\sum \limits_{j=1}^n\left(d\;{B}_j^{\ast}\right)\;{X}_L^{-1}{B}_j-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\;\left(d\;{B}_j\right)\end{array}$$
(15)

Taking the spectral norm of Eq. (15), we have

$$\begin{array}{l}\left\Vert\;d\;{X}_L+\sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\right\Vert \\ {}=\left\Vert \sum \limits_{i=1}^m\left(d\;{A}_i^{\ast}\right)\;{X}_L^{-1}{A}_i+\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}\left(d\;{A}_i\right)-\sum \limits_{j=1}^n\left(d\;{B}_j^{\ast}\right)\;{X}_L^{-1}{B}_j-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\;\left(d\;{B}_j\right)\;\right\Vert \\ {}\le \sum \limits_{i=1}^m\left\Vert\;\left(d\;{A}_i^{\ast}\right){X}_L^{-1}{A}_i\right\Vert +\sum \limits_{i=1}^m\left\Vert {A}_i^{\ast }{X}_L^{-1}\left(d\;{A}_i\right)\;\right\Vert +\sum \limits_{j=1}^n\left\Vert\;\left(d\;{B}_j^{\ast}\right){X}_L^{-1}{B}_j\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j^{\ast }{X}_L^{-1}\;\left(d\;{B}_j\right)\;\right\Vert \\ {}\le \sum \limits_{i=1}^m\left\Vert\;d\;{A}_i^{\ast}\right\Vert\;\left\Vert\;{X}_L^{-1}\right\Vert\;\left\Vert {A}_i\right\Vert +\sum \limits_{i=1}^m\left\Vert {A}_i^{\ast}\right\Vert\;\left\Vert {X}_L^{-1}\right\Vert\;\left\Vert\;d\;{A}_i\right\Vert +\sum \limits_{j=1}^n\left\Vert\;d\;{B}_j^{\ast}\right\Vert\;\left\Vert\;{X}_L^{-1}\right\Vert\;\left\Vert {B}_j\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j^{\ast}\right\Vert\;\left\Vert\;{X}_L^{-1}\right\Vert\;\left\Vert\;d\;{B}_j\right\Vert \end{array}$$
(16)

By Theorem 2.3, a unique maximal solution XL to Eq. (1) exists with $${X}_L\in \left[\frac{1}{2}I,\frac{3}{2}I\right]$$; that is, $$\left\Vert\;{X}_L^{-1}\right\Vert \le 2$$. Substituting this value into (16), we get

$$\begin{array}{l}\left\Vert\;d\;{X}_L+\sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\right\Vert \\ {}\le 4\kern0.24em \left[\sum \limits_{i=1}^m\kern0.24em \left\Vert\;{A}_i\right\Vert \kern0.24em \left\Vert d\;{A}_i\right\Vert +\sum \limits_{j=1}^n\;\left\Vert\;{B}_j\right\Vert \kern0.24em \left\Vert\;d\;{B}_j\right\Vert \kern0.24em \right]\end{array}$$
(17)

Also, we have

$$\begin{array}{l}\left\Vert\;d\;{X}_L+\sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\right\Vert \\ {}\ge \left\Vert d\;{X}_L\right\Vert -\left\Vert \sum \limits_{i=1}^m\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i\right\Vert -\left\Vert \sum \limits_{j=1}^n\;{B}_j^{\ast}\;{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\right\Vert \\ {}\ge \left\Vert d\;{X}_L\right\Vert -\sum \limits_{i=1}^m\left\Vert\;{A}_i^{\ast }{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{A}_i\right\Vert -\sum \limits_{j=1}^n\left\Vert\;{B}_j^{\ast}\;{X}_L^{-1}\left(d\;{X}_L\right)\;{X}_L^{-1}{B}_j\right\Vert \\ {}\ge \left\Vert d\;{X}_L\right\Vert -\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2{\left\Vert {X}_L^{-1}\right\Vert}^2\left\Vert d\;{X}_L\right\Vert -\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2{\left\Vert\;{X}_L^{-1}\right\Vert}^2\left\Vert\;d\;{X}_L\right\Vert \\ {}=\left[1-\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2\;{\left\Vert {X}_L^{-1}\right\Vert}^2-\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\;{\left\Vert\;{X}_L^{-1}\right\Vert}^2\right]\left\Vert\;d\;{X}_L\right\Vert \\ {}\ge \left[1-4\left(\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\right)\right]\left\Vert\;d\;{X}_L\right\Vert \end{array}$$
(18)

Combining (17) and (18), we have

$$\kern0.24em \left[\;1-4\kern0.24em \left(\sum \limits_{i=1}^m\kern0.24em {\left\Vert {A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\;\right)\right]\;\left\Vert\;d\;{X}_L\right\Vert \le 4\kern0.24em \left[\sum \limits_{i=1}^m\kern0.24em \left\Vert\;{A}_i\right\Vert \kern0.24em \left\Vert d\;{A}_i\right\Vert +\sum \limits_{j=1}^n\;\left\Vert\;{B}_j\right\Vert \kern0.24em \left\Vert\;d\;{B}_j\right\Vert \kern0.24em \right].$$

From (11), we get that

$$\kern0.24em \left[\;1-4\kern0.24em \left(\sum \limits_{i=1}^m\kern0.24em {\left\Vert {A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\;\right)\right]\kern0.58em >\frac{1}{2}.$$

Then,

$$\left\Vert\;d\;{X}_L\right\Vert \le \frac{4\;\left[\sum \limits_{i=1}^m\left\Vert\;{A}_i\right\Vert\;\left\Vert\;d\;{A}_i\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j\right\Vert\;\left\Vert\;d\;{B}_j\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\;\right]}.$$

Thus the proof of the theorem is completed.
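The bound (12) is straightforward to evaluate numerically. Below is a minimal Python/NumPy sketch (the helper name `differential_bound` and the sample data are our own illustration, not from the paper) that computes the right-hand side of (12) from the coefficient matrices and their differentials:

```python
import numpy as np

def differential_bound(A_list, B_list, dA_list, dB_list):
    # Right-hand side of (12): an upper bound on ||d X_L|| in the spectral norm
    nrm = lambda M: np.linalg.norm(M, 2)  # spectral (2-)norm
    num = 4 * (sum(nrm(A) * nrm(dA) for A, dA in zip(A_list, dA_list))
               + sum(nrm(B) * nrm(dB) for B, dB in zip(B_list, dB_list)))
    den = 1 - 4 * (sum(nrm(A) ** 2 for A in A_list)
                   + sum(nrm(B) ** 2 for B in B_list))
    # Under condition (11) the denominator exceeds 1/2, so the bound is well defined
    assert den > 0
    return num / den

# Illustrative data: one A-term and one B-term with ||A_1|| = ||B_1|| = 0.1
A = [0.1 * np.eye(2)]; dA = [0.01 * np.eye(2)]
B = [0.1 * np.eye(2)]; dB = [0.01 * np.eye(2)]
print(differential_bound(A, B, dA, dB))  # 0.008 / 0.92
```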

Theorem 3.2: Let $${\tilde{A}}_i$$ and $${\tilde{B}}_j$$ be M × M perturbed matrices of Ai, i = 1, 2, …, m, and Bj, j = 1, 2, …, n, respectively, and set $${E}_i={\tilde{A}}_i-{A}_i,\ i=1,2,\dots, m$$, and $${E}_j={\tilde{B}}_j-{B}_j,\ j=1,2,\dots, n$$.

$$\mathrm{If}\ \sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2<\frac{1}{4^2},\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2<\frac{1}{4^2}$$
(19)
$$\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2+\sum \limits_{i=1}^m{\left\Vert {E}_i\right\Vert}^2<\frac{1}{4^2}-2\sum \limits_{i=1}^m\left(\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert\;\right)$$
(20)
$$\sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert {E}_j\right\Vert}^2<\frac{1}{4^2}-2\sum \limits_{j=1}^n\left(\left\Vert {B}_j\right\Vert\;\left\Vert {E}_j\right\Vert\;\right)$$
(21)

then the maximal solutions XL and $${\tilde{X}}_L$$ of the equations

$${X}_L-\sum \limits_{i=1}^m{A}_i^{\ast }{X}_L^{-1}{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }{X}_L^{-1}{B}_j=I\ \mathrm{and}\ {\tilde{X}}_L-\sum \limits_{i=1}^m{\tilde{A}}_i^{\ast }{\tilde{X}}_L^{-1}{\tilde{A}}_i+\sum \limits_{j=1}^n{\tilde{B}}_j^{\ast }{\tilde{X}}_L^{-1}{\tilde{B}}_j=I$$
(22)

exist and satisfy

$$\left\Vert\;{\tilde{X}}_L-{X}_L\right\Vert \le \frac{1}{2}\kern0.24em \ln \kern0.24em \left[\frac{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\right]}{1-4\;\left[\sum \limits_{i=1}^m{\left(\;\left\Vert\;{A}_i\right\Vert +\left\Vert {E}_i\right\Vert\;\right)}^2+\sum \limits_{j=1}^n{\left(\;\left\Vert\;{B}_j\right\Vert +\left\Vert {E}_j\right\Vert\;\right)}^2\;\right]}\right]={S}_{err}$$
(23)

Proof: Using the hypotheses (19), (20), (21) together with Corollary 3.2 in , the maximal solutions of the equations (22) exist.

Now, by the condition (20), we get

$$\begin{array}{l}\sum \limits_{i=1}^m{\left\Vert {\tilde{A}}_i\right\Vert}^2=\sum \limits_{i=1}^m{\left\Vert {A}_i+{E}_i\right\Vert}^2\\ {}\le \sum \limits_{i=1}^m{\left(\;\left\Vert {A}_i\right\Vert +\left\Vert {E}_i\right\Vert \right)}^2\\ {}=\sum \limits_{i=1}^m\left(\;{\left\Vert {A}_i\right\Vert}^2+2\;\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert +{\left\Vert {E}_i\right\Vert}^2\right)\\ {}=\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2+\sum \limits_{i=1}^m{\left\Vert {E}_i\right\Vert}^2+2\sum \limits_{i=1}^m\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert \\ {}<\frac{1}{4^2}-2\sum \limits_{i=1}^m\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert +2\sum \limits_{i=1}^m\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert =\frac{1}{4^2}\end{array}$$
(24)

In the same way we can prove

$$\sum \limits_{j=1}^n{\left\Vert {\tilde{B}}_j\right\Vert}^2<\frac{1}{4^2}$$
(25)

By (24), (25) and Theorem 3.1, we get that the equation $${\tilde{X}}_L-\sum \limits_{i=1}^m{\tilde{A}}_i^{\ast }{\tilde{X}}_L^{-1}{\tilde{A}}_i+\sum \limits_{j=1}^n{\tilde{B}}_j^{\ast }{\tilde{X}}_L^{-1}{\tilde{B}}_j=I$$ has a unique maximal solution $${\tilde{X}}_L$$.

Let $${A}_i(t)={A}_i+t\;{E}_i,\kern0.36em {B}_j(t)={B}_j+t\;{E}_j,\kern0.36em t\in \left[0,1\right]$$
(26)

Using (20), we get

$$\begin{array}{l}\sum \limits_{i=1}^m{\left\Vert {A}_i(t)\;\right\Vert}^2=\sum \limits_{i=1}^m{\left\Vert {A}_i+t\;{E}_i\right\Vert}^2\\ {}\le \sum \limits_{i=1}^m{\left(\;\left\Vert {A}_i\right\Vert +t\;\left\Vert {E}_i\right\Vert \right)}^2\\ {}=\sum \limits_{i=1}^m\left(\;{\left\Vert {A}_i\right\Vert}^2+2t\kern0.24em \left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert +{t}^2{\left\Vert {E}_i\right\Vert}^2\right)\\ {}\le \sum \limits_{i=1}^m\left(\;{\left\Vert {A}_i\right\Vert}^2+2\kern0.24em \left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert +{\left\Vert {E}_i\right\Vert}^2\right)\\ {}=\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2+\sum \limits_{i=1}^m{\left\Vert {E}_i\right\Vert}^2+2\sum \limits_{i=1}^m\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert <\frac{1}{4^2}\end{array}$$
(27)

Similarly we can prove that

$$\sum \limits_{j=1}^n{\left\Vert {B}_j(t)\;\right\Vert}^2<\frac{1}{4^2}$$
(28)

Therefore, by (27), (28) and Theorem 2.3, we see that for every t ∈ [0, 1], the equation

$$X-\sum \limits_{i=1}^m{A}_i^{\ast }(t)\;{X}^{-1}{A}_i(t)+\sum \limits_{j=1}^n{B}_j^{\ast }(t){X}^{-1}{B}_j(t)=I$$
(29)

has a maximal solution XL(t); in particular, we have

$${X}_L(0)={X}_L,{X}_L(1)={\tilde{X}}_L$$
(30)

By Theorem 3.1, we get

$$\begin{array}{l}\left\Vert\;{\tilde{X}}_L-{X}_L\right\Vert =\left\Vert {X}_L(1)-{X}_L(0)\right\Vert \\ {}=\left\Vert \kern0.24em \underset{0}{\overset{1}{\int }}d\;{X}_L(t)\;\right\Vert \\ {}\le \underset{0}{\overset{1}{\int }}\left\Vert\;d\;{X}_L(t)\;\right\Vert \\ {}\le \underset{0}{\overset{1}{\int }}\kern0.36em \frac{4\;\left[\sum \limits_{i=1}^m\left\Vert\;{A}_i(t)\;\right\Vert\;\left\Vert\;d\;{A}_i(t)\;\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j(t)\;\right\Vert\;\left\Vert\;d\;{B}_j(t)\;\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i(t)\;\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j(t)\;\right\Vert}^2\;\right]}\\ {}\le \underset{0}{\overset{1}{\int }}\frac{4\;\left[\sum \limits_{i=1}^m\left\Vert\;{A}_i(t)\;\right\Vert\;\left\Vert\;{E}_i\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j(t)\;\right\Vert\;\left\Vert\;{E}_j\;\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i(t)\;\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j(t)\;\right\Vert}^2\;\right]}\kern0.36em dt.\end{array}$$
(31)

Noting that

$$\begin{array}{l}\left\Vert {A}_i(t)\;\right\Vert =\left\Vert {A}_i+t\;{E}_i\right\Vert \le \left\Vert {A}_i\right\Vert +t\;\left\Vert {E}_i\right\Vert, i=1,2,\dots, m,\\ {}\left\Vert {B}_j(t)\;\right\Vert =\left\Vert {B}_j+t\;{E}_j\right\Vert \le \left\Vert {B}_j\right\Vert +t\;\left\Vert {E}_j\right\Vert, j=1,2,\dots, n.\end{array}$$
(32)

Substituting (32) into (31), we have

$$\begin{array}{l}\left\Vert\;{\tilde{X}}_L-{X}_L\right\Vert \le \underset{0}{\overset{1}{\int }}\frac{4\;\left[\sum \limits_{i=1}^m\left\Vert\;{A}_i(t)\;\right\Vert\;\left\Vert\;d\;{A}_i(t)\;\right\Vert +\sum \limits_{j=1}^n\left\Vert\;{B}_j(t)\;\right\Vert\;\left\Vert\;d\;{B}_j(t)\;\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i(t)\;\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j(t)\;\right\Vert}^2\;\right]}\\ {}\le \underset{0}{\overset{1}{\int }}\frac{4\;\left[\sum \limits_{i=1}^m\left(\;\left\Vert\;{A}_i\;\right\Vert +t\;\left\Vert\;{E}_i\right\Vert\;\right)\;\left\Vert\;{E}_i\right\Vert +\sum \limits_{j=1}^n\;\left(\left\Vert\;{B}_j\right\Vert +t\;\left\Vert\;{E}_j\;\right\Vert\;\right)\kern0.24em \left\Vert\;{E}_j\;\right\Vert \right]}{1-4\;\left[\sum \limits_{i=1}^m{\left(\;\left\Vert\;{A}_i\;\right\Vert +t\;\left\Vert\;{E}_i\right\Vert\;\right)}^2+\sum \limits_{j=1}^n{\left(\;\left\Vert\;{B}_j\;\right\Vert +t\;\left\Vert\;{E}_j\;\right\Vert\;\right)}^2\;\right]}\kern0.36em dt\\ {}=\frac{1}{2}\kern0.24em \ln \kern0.24em \left[\frac{1-4\;\left[\sum \limits_{i=1}^m{\left\Vert\;{A}_i\right\Vert}^2+\sum \limits_{j=1}^n{\left\Vert\;{B}_j\right\Vert}^2\right]}{1-4\;\left[\sum \limits_{i=1}^m{\left(\;\left\Vert\;{A}_i\right\Vert +\left\Vert {E}_i\right\Vert\;\right)}^2+\sum \limits_{j=1}^n{\left(\;\left\Vert\;{B}_j\right\Vert +\left\Vert {E}_j\right\Vert\;\right)}^2\;\right]}\right]={S}_{err}\end{array}$$
(33)

This completes the proof of the theorem.
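As a sanity check on the closed form S_err in (23) and on the integral evaluation in (33), the scalar case m = n = 1 can be verified numerically: quadrature of the integrand in (31) should match ½ ln(D(0)/D(1)). The following sketch uses made-up scalar stand-ins for the norms, not data from the paper:

```python
import numpy as np

# Illustrative scalar stand-ins: a = ||A_1||, b = ||B_1||,
# ea = ||E_1|| (perturbation of A_1), eb = ||E_1|| (perturbation of B_1); m = n = 1.
a, b, ea, eb = 0.1, 0.12, 0.02, 0.015

def D(t):
    # Denominator in (31): 1 - 4[(a + t*ea)^2 + (b + t*eb)^2]
    return 1 - 4 * ((a + t * ea) ** 2 + (b + t * eb) ** 2)

def integrand(t):
    # 4[(a + t*ea)*ea + (b + t*eb)*eb] / D(t); note this equals -D'(t) / (2 D(t))
    return 4 * ((a + t * ea) * ea + (b + t * eb) * eb) / D(t)

# Midpoint-rule quadrature of (31) over [0, 1]
h = 1e-5
mid = np.arange(0.5 * h, 1.0, h)
numeric = h * integrand(mid).sum()

# Closed form S_err from (23)
S_err = 0.5 * np.log(D(0) / D(1))
print(abs(numeric - S_err) < 1e-9)
```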

## Numerical test problem

In this section, a numerical test is presented to illustrate the sharpness of the perturbation bound for the unique maximal solution XL of Eq. (1) and to confirm the correctness of Theorem 3.2. We implemented the algorithms in MATLAB and ran the programs on a Pentium IV PC.

Example 4.1: Consider the nonlinear matrix equation

$${X}_L-{A}_1^{\ast }{X}_L^{-1}{A}_1-{A}_2^{\ast }{X}_L^{-1}{A}_2+{B}_1^{\ast }{X}_L^{-1}{B}_1+{B}_2^{\ast }{X}_L^{-1}{B}_2=I$$
(34)

and its perturbed equation

$${\tilde{X}}_L-{\tilde{A}}_1^{\ast }{\tilde{X}}_L^{-1}{\tilde{A}}_1-{\tilde{A}}_2^{\ast }{\tilde{X}}_L^{-1}{\tilde{A}}_2+{\tilde{B}}_1^{\ast }{\tilde{X}}_L^{-1}{\tilde{B}}_1+{\tilde{B}}_2^{\ast }{\tilde{X}}_L^{-1}{\tilde{B}}_2=I$$
(35)

where

$${A}_1=\left(\begin{array}{ccccc}-0.02& -0.01& 0.01& -0.03& -0.02\\ {}-0.01& -0.02& 0.01& -0.04& -0.03\\ {}0.03& -0.01& -0.02& -0.01& -0.04\\ {}-0.05& 0.03& -0.01& -0.02& 0.01\\ {}-0.02& 0.03& -0.04& -0.01& -0.02\end{array}\right),\kern0.36em {A}_2=\left(\begin{array}{ccccc}-0.03& -0.01& -0.02& -0.01& -0.04\\ {}-0.01& -0.03& -0.01& 0.02& -0.03\\ {}-0.04& -0.01& -0.03& -0.01& 0.02\\ {}0.02& -0.03& -0.01& -0.03& -0.01\\ {}-0.04& -0.02& 0.01& -0.01& -0.03\end{array}\right),$$
$${B}_1=\left(\begin{array}{ccccc}0.04& 0.01& 0.02& 0.03& -0.02\\ {}0.01& 0.04& 0.01& 0.02& -0.04\\ {}0.03& 0.01& 0.04& 0.01& -0.02\\ {}0.03& -0.02& 0.01& 0.04& 0.01\\ {}0.02& 0.02& -0.03& 0.01& 0.04\end{array}\right),\kern0.36em {B}_2=\left(\begin{array}{ccccc}0.05& 0.01& 0.01& 0.03& -0.02\\ {}0.01& 0.05& 0.01& 0.02& 0.03\\ {}0.02& 0.01& 0.05& 0.01& 0.03\\ {}0.03& 0.04& 0.01& 0.05& 0.01\\ {}0.03& 0.02& 0.04& 0.01& 0.05\end{array}\right),$$
$${\tilde{A}}_1={A}_1+{10}^{-t}\left(\begin{array}{ccccc}1.5& 0.3& 0.3& 0.9& -0.6\\ {}0.3& 1.5& 0.3& 0.6& 0.9\\ {}0.6& 0.3& 1.5& 0.3& 0.9\\ {}0.9& 1.2& 0.3& 1.5& 0.3\\ {}0.9& 0.6& 1.2& 0.3& 1.5\end{array}\right),\kern0.36em {\tilde{A}}_2={A}_2+{10}^{-t}\left(\begin{array}{ccccc}0.8& 0.2& 0.4& 0.6& -0.4\\ {}0.2& 0.8& 0.2& 0.4& -0.8\\ {}0.6& 0.2& 0.8& 0.2& -0.4\\ {}0.6& -0.4& 0.2& 0.8& 0.2\\ {}0.4& 0.4& -0.6& 0.2& 0.8\end{array}\right),$$
$${\tilde{B}}_1={B}_1+{10}^{-t}\left(\begin{array}{ccccc}0.9& 0.3& 0.6& 0.3& 1.2\\ {}0.3& 0.9& 0.3& -0.6& 0.9\\ {}1.2& 0.3& 0.9& 0.3& -0.6\\ {}-0.6& 0.9& 0.3& 0.9& 0.3\\ {}1.2& 0.6& -0.3& 0.3& 0.9\end{array}\right)\kern0.36em \mathrm{and}\kern0.36em {\tilde{B}}_2={B}_2+{10}^{-t}\left(\begin{array}{ccccc}0.2& 0.1& -0.1& 0.3& 0.2\\ {}0.1& 0.2& -0.1& 0.4& 0.3\\ {}-0.3& 0.1& 0.2& 0.1& 0.4\\ {}0.5& -0.3& 0.1& 0.2& -0.1\\ {}0.2& -0.3& 0.4& 0.1& 0.2\end{array}\right),\kern0.36em t\in \mathbb{N}$$
(36)

It is clear that all the conditions of Theorem 3.2 hold, that is,

$$\sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2=0.02466<\frac{1}{4^2}\kern0.36em \mathrm{and}\kern0.36em \sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2=0.014083<\frac{1}{4^2},$$

that is, inequality (19) of Theorem 3.2 is satisfied.

$$\kern0.24em \left(\frac{1}{4^2}-2\sum \limits_{i=1}^m\left(\left\Vert {A}_i\right\Vert\;\left\Vert {E}_i\right\Vert\;\right)-\sum \limits_{i=1}^m{\left\Vert {A}_i\right\Vert}^2-\sum \limits_{i=1}^m{\left\Vert {E}_i\right\Vert}^2\right)=0.03728\kern0.48em >0,$$

that is, inequality (20) of Theorem 3.2 is satisfied.

$$\left(\frac{1}{4^2}-2\sum \limits_{j=1}^n\left(\left\Vert {B}_j\right\Vert\;\left\Vert {E}_j\right\Vert\;\right)-\sum \limits_{j=1}^n{\left\Vert {B}_j\right\Vert}^2-\sum \limits_{j=1}^n{\left\Vert {E}_j\right\Vert}^2\right)=0.03011\kern0.36em >0,$$

that is, inequality (21) of Theorem 3.2 is satisfied. Thus, the matrix equation (34) and its perturbed equation (35) have unique maximal positive definite solutions XL and $${\tilde{X}}_L$$, respectively. Now, we consider the sequences { Xk} and { Yk} generated by the following iterative process

$$\begin{array}{l}{X}_0=\frac{1}{2}I\ \mathrm{and}\ {Y}_0=\frac{3}{2}I,\\ {}{X}_{k+1}=I-\sum \limits_{j=1}^n{B}_j^{\ast }{X}_k^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{Y}_k^{-1}{A}_i,\\ {}{Y}_{k+1}=I-\sum \limits_{j=1}^n{B}_j^{\ast }{Y}_k^{-1}{B}_j+\sum \limits_{i=1}^m{A}_i^{\ast }{X}_k^{-1}{A}_i,\qquad k=0,1,2,\dots \end{array}$$
(37)
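The coupled iteration (37) and the residual R(·) used below are straightforward to implement. The following is a minimal NumPy sketch; the 2 × 2 coefficient matrices in the usage example are illustrative only (the actual Ai, Bj of the example are those of Eq. (34)), and the helper names `maximal_solution` and `residual` are ours:

```python
import numpy as np

def maximal_solution(As, Bs, tol=1e-12, max_iter=200):
    """Approximate the maximal solution X_L of
    X - sum_i A_i^* X^{-1} A_i + sum_j B_j^* X^{-1} B_j = I
    via the coupled iteration (37), starting from X_0 = I/2, Y_0 = 3I/2."""
    M = As[0].shape[0]
    I = np.eye(M)
    X, Y = 0.5 * I, 1.5 * I
    for _ in range(max_iter):
        Xinv, Yinv = np.linalg.inv(X), np.linalg.inv(Y)
        # X_{k+1} uses X_k in the B-terms and Y_k in the A-terms; Y_{k+1} swaps them.
        Xn = I - sum(B.conj().T @ Xinv @ B for B in Bs) \
               + sum(A.conj().T @ Yinv @ A for A in As)
        Yn = I - sum(B.conj().T @ Yinv @ B for B in Bs) \
               + sum(A.conj().T @ Xinv @ A for A in As)
        if max(np.linalg.norm(Xn - X, 2), np.linalg.norm(Yn - Y, 2)) < tol:
            return Xn
        X, Y = Xn, Yn
    return X

def residual(X, As, Bs):
    """R(X): the norm of the defect of X in the matrix equation."""
    I = np.eye(X.shape[0])
    Xinv = np.linalg.inv(X)
    return np.linalg.norm(
        X - I + sum(B.conj().T @ Xinv @ B for B in Bs)
              - sum(A.conj().T @ Xinv @ A for A in As), 2)
```

For instance, with the single (hypothetical) coefficients A1 = 0.1 I and B1 = 0.05 I, the computed solution is a scaled identity and its residual is at the level of the stopping tolerance.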

For each iteration k, we define the residuals

$$R\left({X}_k\right)=\left\Vert {X}_k-I+\sum \limits_{j=1}^n{B}_j^{\ast }{X}_k^{-1}{B}_j-\sum \limits_{i=1}^m{A}_i^{\ast }{X}_k^{-1}{A}_i\right\Vert, R\left({Y}_k\right)=\left\Vert {Y}_k-I+\sum \limits_{j=1}^n{B}_j^{\ast }{Y}_k^{-1}{B}_j-\sum \limits_{i=1}^m{A}_i^{\ast }{Y}_k^{-1}{A}_i\right\Vert$$

After 8 iterations, we obtain

$${X}_L\approx {X}_8={Y}_8=\left(\begin{array}{rrrrr} 1.0002 & -0.0047332 & -0.0032458 & -0.004648 & 0.00012669\\ -0.0047332 & 0.99748 & -0.0032128 & -0.0032367 & 0.0013173\\ -0.0032458 & -0.0032128 & 0.99637 & -0.0026448 & 0.00032692\\ -0.004648 & -0.0032367 & -0.0026448 & 0.99756 & 0.0019438\\ 0.00012669 & 0.0013173 & 0.00032692 & 0.0019438 & 0.99841\end{array}\right),$$
$$\operatorname{eig}\left({X}_L\right)=\left(0.98671,\ 1.0047,\ 0.99793,\ 0.99975,\ 1.0009\right),$$

with R(X8) = 2.318617e−16. In the same way, we obtain the unique maximal positive definite solution $${\tilde{X}}_L$$ of the perturbed equation (35) as follows:

$${\overset{\sim }{X}}_L\approx \left(\begin{array}{rrrrr} 0.99537 & -0.0076648 & -0.0050541 & -0.0081866 & -0.0019838\\ -0.0076648 & 0.99702 & -0.0037008 & -0.0047039 & 0.00028184\\ -0.0050541 & -0.0037008 & 0.99257 & -0.0043999 & -0.00071791\\ -0.0081866 & -0.0047039 & -0.0043999 & 0.99415 & -0.0009023\\ -0.0019838 & 0.00028184 & -0.00071791 & -0.0009023 & 0.99686\end{array}\right),$$
$$\operatorname{eig}\left({\overset{\sim }{X}}_L\right)=\left(0.9775,\ 1.0049,\ 1.0004,\ 0.99646,\ 0.99675\right).$$

The numerical results listed in Table 1 confirm that inequality (23) of Theorem 3.2 holds.
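The scalar condition checks (19)–(21) carried out in this example can be reproduced for any coefficient matrices and perturbations. A sketch, assuming the norm in Theorem 3.2 is the spectral norm (the helper name `check_conditions` and its argument names are ours; EAs and EBs denote the perturbations Ãi − Ai and B̃j − Bj):

```python
import numpy as np

def check_conditions(As, Bs, EAs, EBs):
    """Evaluate inequalities (19)-(21) of Theorem 3.2.
    As, Bs: coefficient matrices; EAs, EBs: their perturbations."""
    nrm = lambda M: np.linalg.norm(M, 2)  # spectral norm
    # (19): both coefficient norm sums must stay below 1/4^2
    c19 = (sum(nrm(B)**2 for B in Bs) < 1 / 16
           and sum(nrm(A)**2 for A in As) < 1 / 16)
    # (20): perturbed A-condition must remain positive
    c20 = (1 / 16 - 2 * sum(nrm(A) * nrm(E) for A, E in zip(As, EAs))
           - sum(nrm(A)**2 for A in As)
           - sum(nrm(E)**2 for E in EAs)) > 0
    # (21): perturbed B-condition must remain positive
    c21 = (1 / 16 - 2 * sum(nrm(B) * nrm(E) for B, E in zip(Bs, EBs))
           - sum(nrm(B)**2 for B in Bs)
           - sum(nrm(E)**2 for E in EBs)) > 0
    return c19, c20, c21
```

With sufficiently small coefficients and perturbations (as in the example, where the perturbations carry a factor 10^−t), all three conditions evaluate to true.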

## Conclusion

In this paper, we presented a perturbation estimate of the maximal solution for the equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{-1}{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j=I$$ using the differentiation of matrices. We derived the differential bound for this maximal solution. Moreover, a perturbation estimate and an error bound for this maximal solution were obtained. Finally, a numerical example was given to confirm the reliability of the obtained results.

## Availability of data and materials

All data generated or analyzed during this study are included in this article.

## References

1. Ando, T.: Limit of cascade iteration of matrices. Numer. Funct. Anal. Optim. 21, 579–589 (1980)

2. Anderson, W.N., Morley, T.D., Trapp, G.E.: Ladder networks, fixed points and the geometric mean. Circuits Systems Signal Process. 3, 259–268 (1983)

3. Engwerda, J.C.: On the existence of a positive definite solution of the matrix equation X + A^TX^{−1}A = I. Linear Algebra Appl. 194, 91–108 (1993)

4. Pusz, W., Woronowicz, S.L.: Functional calculus for sesquilinear forms and the purification map. Rep. Math. Phys. 8, 159–170 (1975)

5. Buzbee, B.L., Golub, G.H., Nielson, C.W.: On direct methods for solving Poisson’s equations. SIAM J. Numer. Anal. 7, 627–656 (1970)

6. Green, W.L., Kamen, E.: Stabilization of linear systems over a commutative normed algebra with applications to spatially distributed parameter dependent systems. SIAM J. Control Optim. 23, 1–18 (1985)

7. Zhan, X.: On the matrix equation X + A^TX^{−1}A = I. Linear Algebra Appl. 247, 337–345 (1996)

8. Xu, S.F.: On the maximal solution of the matrix equation X + A^TX^{−1}A = I. Acta Sci. Natur. Univ. Pekinensis 36, 29–38 (2000)

9. El-Sayed, S.M., Ramadan, M.A.: On the existence of a positive definite solution of the matrix equation $$X-{A}^{\ast}\;\sqrt[{2}^m]{X^{-1}}A=I$$. Intern. J. Computer Math. 76, 331–338 (2001)

10. Ran, A.C.M., Reurings, M.C.B.: On the nonlinear matrix equation X + A^∗F(X)A = Q: solution and perturbation theory. Linear Algebra Appl. 346, 15–26 (2002)

11. Ramadan, M.A.: On the existence of extremal positive definite solutions of a kind of matrix equation. Int. J. Nonlinear Sci. Numer. Simul. 6, 115–126 (2005)

12. Ramadan, M.A., El-Shazly, N.M.: On the matrix equation $$X+{A}^{\ast}\sqrt[{2}^m]{X^{-1}}A=I$$. Appl. Math. Comput. 173, 992–1013 (2006)

13. Yueting, Y.: The iterative method for solving nonlinear matrix equation X^s + A^∗X^tA = Q. Appl. Math. Comput. 188, 46–53 (2007)

14. Sarhan, A.M., El-Shazly, N.M., Shehata, E.M.: On the existence of extremal positive definite solutions of the nonlinear matrix equation $${X}^r+\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{\delta_i}{A}_i=I$$. Math. Comput. Modelling 51, 1107–1117 (2010)

15. Berzig, M.: Solving a class of matrix equations via the Bhaskar–Lakshmikantham coupled fixed point theorem. Appl. Math. Lett. 25, 1638–1643 (2012)

16. Stewart, G.W., Sun, J.G.: Matrix Perturbation Theory. Academic Press, Boston (1990)

17. Xu, S.F.: Perturbation analysis of the maximal solution of the matrix equation X + A^∗X^{−1}A = P. Linear Algebra Appl. 336, 61–70 (2001)

18. Hasanov, V.I.: Notes on two perturbation estimates of the extreme solutions to the equations X ± A^∗X^{−1}A = Q. Appl. Math. Comput. 216, 1355–1362 (2010)

19. El-Shazly, N.M.: On the perturbation estimates of the maximal solution for the matrix equation $$X+{A}^T\sqrt{X^{-1}}A=P$$. J. Egypt. Math. Soc. 24, 644–649 (2016)

20. Lee, H.: Perturbation analysis for the matrix equation X = I − A^∗X^{−1}A + B^∗X^{−1}B. Korean J. Math. 22, 123–131 (2014)

21. Ramadan, M.A., El-Shazly, N.M.: On the maximal positive definite solution of the nonlinear matrix equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast }{X}^{-1}{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }{X}^{-1}{B}_j=I$$. Appl. Math. Inf. Sci. (accepted for publication, 18 Aug. 2019)

22. Sun, J.G., Xu, S.F.: Perturbation analysis of the maximal solution of the matrix equation X + A^∗X^{−1}A = P. Linear Algebra Appl. 362, 211–228 (2003)

23. Hasanov, V.I., Ivanov, I.G.: Solutions and perturbation estimates for the matrix equations X ± A^∗X^nA = Q. Appl. Math. Comput. 156, 513–525 (2004)

24. Hasanov, V.I., Ivanov, I.G.: On two perturbation estimates of the extreme solutions to the equations X ± A^∗X^{−1}A = Q. Linear Algebra Appl. 413, 81–92 (2006)

25. Chen, X.S., Li, W.: On the matrix equation X + A^∗X^{−1}A = P: solution and perturbation analysis. Chinese J. Numer. Math. Appl. 4, 102–109 (2005)

26. Ran, A.C.M., Reurings, M.C.B.: The symmetric linear matrix equation. Electron. J. Linear Algebra 9, 93–107 (2002)

27. Ran, A.C.M., Reurings, M.C.B.: A fixed point theorem in partially ordered sets and some applications to matrix equations. Proc. Amer. Math. Soc. 132(5), 1435–1443 (2004)

28. Sun, J.G.: Perturbation analysis of the matrix equation X = Q + A^H(X − C)^{−1}A. Linear Algebra Appl. 372, 33–51 (2003)

29. Li, J., Zhang, Y.: Perturbation analysis of the matrix equation X − A^∗X^pA = Q. Linear Algebra Appl. 431, 1489–1501 (2009)

30. Chen, X.S., Li, W.: Perturbation analysis for the matrix equations X ± A^∗X^{−1}A = I. Taiwanese J. Math. 13(3), 913–922 (2009)

31. Duan, X.F., Wang, Q.W.: Perturbation analysis for the matrix equation $$X-\sum \limits_{i=1}^m{A}_i^{\ast }X{A}_i+\sum \limits_{j=1}^n{B}_j^{\ast }X{B}_j=I$$. J. Appl. Math. (2012)

32. Liu, L.D.: Perturbation analysis of the quadratic matrix equation associated with an M-matrix. J. Comput. Appl. Math. 260, 410–419 (2014)

33. Fang, B.R., Zhou, J.D., Li, Y.M.: Matrix Theory. Tsinghua University Press, Beijing (2006)

34. Bhaskar, T.G., Lakshmikantham, V.: Fixed point theory in partially ordered metric spaces and applications. Nonlinear Anal. 65, 1379–1393 (2006)

35. Choudhary, B., Nanda, S.: Functional Analysis with Applications. Wiley, New Delhi (1989)

## Acknowledgements

The authors are grateful to the referees for their valuable suggestions and comments for the improvement of the paper.


## Author information


### Contributions

Prof. Dr. MAR proposed the main idea of this paper. Prof. Dr. MAR and Dr. NME prepared the manuscript, performed all the steps of the proofs in this research and made substantial contributions to the conception, and designed the numerical method. All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

### Authors’ information

Mohamed A. Ramadan is Professor of Pure Mathematics (Numerical Analysis) at the Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebein El-Koom, Egypt. His areas of expertise and interest include eigenvalue assignment problems; the solution of matrix equations, focusing on investigating, theoretically and numerically, positive definite solutions of classes of nonlinear matrix equations; and numerical techniques for solving partial, ordinary, delay, and fractional differential equations using different types of spline functions. Naglaa M. El-Shazly is Assistant Professor of Pure Mathematics (Numerical Analysis) at the same department. Her areas of expertise and interest include eigenvalue assignment problems and the theoretical and numerical study of iterative positive definite solutions of different forms of nonlinear matrix equations.

### Corresponding author

Correspondence to Naglaa M. El-Shazly.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 