
Iterative algorithm for the reflexive solutions of the generalized Sylvester matrix equation

Abstract

In this paper, the generalized Sylvester matrix equation AV + BW = EVF + C over reflexive matrices is considered. An iterative algorithm for obtaining reflexive solutions of this matrix equation is introduced. When this matrix equation is consistent over reflexive matrices, then for any initial reflexive matrix, a solution can be obtained within finitely many iteration steps. Furthermore, the complexity and the convergence analysis of the proposed algorithm are given. The least Frobenius norm reflexive solutions can also be obtained when special initial reflexive matrices are chosen. Finally, numerical examples are given to illustrate the effectiveness of the proposed algorithm.

Introduction and preliminaries

Consider the generalized Sylvester matrix equation

$$ AV+ BW= EVF+C, $$
(1.1)

where A, E ∈ R^{m × p}, B ∈ R^{m × q}, F ∈ R^{n × n}, and C ∈ R^{m × n}, while V ∈ R^{p × n} and W ∈ R^{q × n} are the matrices to be determined. An n × n real matrix P ∈ R^{n × n} is called a generalized reflection matrix if P^T = P and P² = I. An n × n matrix A is said to be a reflexive matrix with respect to the generalized reflection matrix P if A = PAP; for more details, see [1, 2]. The symbol A ⊗ B stands for the Kronecker product of the matrices A and B. The vectorization of an m × n matrix A, denoted by vec(A), is the mn × 1 column vector obtained by stacking the columns of A on top of one another: \( vec(A)={\left({a}_1^T\kern0.5em {a}_2^T\dots {a}_n^T\right)}^T \). We use tr(A) and A^T to denote the trace and the transpose of the matrix A, respectively. In addition, we define the inner product of two matrices A, B as 〈A, B〉 = tr(B^TA). The matrix norm of A induced by this inner product is then the Frobenius norm, denoted by ‖A‖, where 〈A, A〉 = ‖A‖².
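
As a quick numerical illustration of these definitions, the following MATLAB sketch (MATLAB is also the language used for the experiments in the “Numerical examples” section; the variable names here are ours) checks the vec, inner product, and Frobenius norm identities on a small matrix:

```matlab
% vec(A) stacks the columns of A on top of one another; in MATLAB this is A(:).
A = [1 2; 3 4];  B = [5 6; 7 8];
vecA = A(:);                               % = (1 3 2 4)^T
ip   = trace(B'*A);                        % inner product <A,B> = tr(B^T A)
nrmA = sqrt(trace(A'*A));                  % Frobenius norm: <A,A> = ||A||^2
abs(nrmA - norm(A, 'fro'))                 % ~ 0: agrees with the built-in norm
```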

The reflexive matrices with respect to the generalized reflection matrix P ∈ R^{n × n} have many special properties and are widely used in engineering and scientific computations [2, 3]. Several authors have studied the reflexive solutions of different forms of linear matrix equations; see, for example, [4,5,6,7]. Ramadan et al. [8] considered explicit and iterative methods for solving the generalized Sylvester matrix equation. Dehghan and Hajarian [9] constructed an iterative algorithm to solve the generalized coupled Sylvester matrix equations (AY − ZB, CY − ZD) = (E, F) over reflexive matrices. Also, Dehghan and Hajarian [10] proposed three iterative algorithms for solving the linear matrix equation A1X1B1 + A2X2B2 = C over reflexive (anti-reflexive) matrices. Yin et al. [11] presented an iterative algorithm to solve the general coupled matrix equations \( \sum \limits_{j=1}^q{A}_{ij}{X}_j{B}_{ij}={M}_i\left(i=1,2,\cdots, p\right) \) and their optimal approximation problem over generalized reflexive matrices. Li [12] presented an iterative algorithm for obtaining the generalized (P, Q)-reflexive solution of a quaternion matrix equation \( \sum \limits_{l=1}^u{A}_l{XB}_l+\sum \limits_{s=1}^v{C}_s\tilde{X}{D}_s=F \). In [13], Dong and Wang presented necessary and sufficient conditions for the existence of the {P, Q, k + 1}-reflexive (anti-reflexive) solution to the system of matrix equations AX = C, XB = D. In [14], Nacevska found necessary and sufficient conditions for the generalized reflexive and anti-reflexive solutions of a system of equations ax = b and xc = d in a ring with involution. Moreover, Hajarian [15] established the matrix form of the biconjugate residual (BCR) algorithm for computing the generalized reflexive (anti-reflexive) solutions of the generalized Sylvester matrix equation \( \sum \limits_{i=1}^s{A}_i{XB}_i+\sum \limits_{j=1}^t{C}_j{YD}_j=M \). Liu [16] established some conditions for the existence and representations of the Hermitian reflexive, anti-reflexive, and non-negative definite reflexive solutions to the matrix equation AX = B with respect to a generalized reflection matrix P by using the Moore-Penrose inverse. Dehghan and Shirilord [17] presented a generalized MHSS approach, based on the MHSS method, for solving large sparse Sylvester equations with non-Hermitian and complex symmetric positive definite/semi-definite matrices. Dehghan and Hajarian [18] proposed two algorithms for solving the generalized coupled Sylvester matrix equations over reflexive and anti-reflexive matrices. Dehghan and Hajarian [19] established two iterative algorithms for solving the system of generalized Sylvester matrix equations over generalized bisymmetric and skew-symmetric matrices. Hajarian and Dehghan [20] established two gradient iterative methods, extending the Jacobi and Gauss–Seidel iterations, for solving the generalized Sylvester-conjugate matrix equation A1XB1 + A2XB2 + C1YD1 + C2YD2 = E over reflexive and Hermitian reflexive matrices. Dehghan and Hajarian [21] proposed two iterative algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of the Sylvester matrix equation AX + XB = C. Hajarian [22] obtained an iterative algorithm for solving the coupled Sylvester-like matrix equations. El–Shazly [23] studied the perturbation estimates of the maximal solution for the matrix equation \( X+{A}^T\sqrt{X^{-1}}A=P \). Khader [24] presented a numerical method for solving the fractional Riccati differential equation (FRDE).
Balaji [25] presented a Legendre wavelet operational matrix method for solving the nonlinear fractional order Riccati differential equation. The generalized Sylvester matrix equation has numerous applications in control theory, signal processing, filtering, model reduction, and decoupling techniques for ordinary and partial differential equations (see [26,27,28,29]).

In this paper, we investigate the reflexive solutions of the generalized Sylvester matrix equation AV + BW = EVF + C. The paper is organized as follows: First, in the “Iterative algorithm for solving AV + BW = EVF + C” section, an iterative algorithm for obtaining reflexive solutions of this problem is derived, and the complexity of the proposed algorithm is presented. In the “Convergence analysis for the proposed algorithm” section, the convergence analysis for the proposed algorithm is given; also, the least Frobenius norm reflexive solutions can be obtained when special initial reflexive matrices are chosen. Finally, in the “Numerical examples” section, four numerical examples are considered to demonstrate the performance of the proposed algorithm.

Iterative algorithm for solving AV + BW = EVF + C

In this part, we consider the following problem:

Problem 2.1. For given matrices A, B, E, C ∈ R^{m × n}, F ∈ R^{n × n}, and two generalized reflection matrices P, S of size n, find the matrices \( V\in {R}_r^{n\times n}(P) \) and \( W\in {R}_r^{n\times n}(S) \) such that

$$ AV+ BW= EVF+C $$
(2.1)

where the subspace \( {R}_r^{n\times n}(P) \) is defined by \( {R}_r^{n\times n}(P)=\left\{Q\in {R}^{n\times n}:Q= PQP\right\} \), and P is a generalized reflection matrix: P² = I, P^T = P.
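
For intuition, any Q ∈ R^{n × n} splits into a P-reflexive part (Q + PQP)/2, which lies in \( {R}_r^{n\times n}(P) \), and an anti-reflexive part (Q − PQP)/2; the direction updates of the algorithm below use exactly this symmetrization. A small MATLAB check (our own illustration):

```matlab
P  = diag([1 1 -1 -1]);          % generalized reflection matrix: P' = P, P^2 = I
Q  = randn(4);                   % an arbitrary 4 x 4 matrix
Qr = (Q + P*Q*P)/2;              % P-reflexive part of Q
norm(P*Qr*P - Qr, 'fro')         % ~ 1e-16: Qr = P*Qr*P, so Qr is reflexive
```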

An iterative algorithm for solving the consistent Problem 2.1

In this subsection, an iterative algorithm is proposed for solving Problem 2.1, assuming that the problem is consistent.

Algorithm 2.1 (presented as a figure in the original article).
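
Since the algorithm is only available as a figure here, the following MATLAB sketch reconstructs Algorithm 2.1 from the update formulas that appear in the proofs below (Theorem 2.1 and Lemma 3.1); the function name, argument order, and the stopping rule based on a tolerance tol are our own choices, not the authors':

```matlab
function [V, W, k] = reflexive_sylvester(A, B, E, F, C, P, S, V, W, tol, maxit)
% Conjugate gradient-type iteration for AV + BW = EVF + C over
% P-reflexive V and S-reflexive W (sketch of Algorithm 2.1).
% On input, V and W hold the initial reflexive guesses V_1 and W_1.
R  = C - A*V - B*W + E*V*F;                            % residual R_1
Pk = (A'*R + P*(A'*R)*P - E'*R*F' - P*(E'*R*F')*P)/2;  % direction P_1
Qk = (B'*R + S*(B'*R)*S)/2;                            % direction Q_1
for k = 1:maxit
    nR = norm(R, 'fro')^2;
    if sqrt(nR) < tol, break; end
    alpha = nR / (norm(Pk, 'fro')^2 + norm(Qk, 'fro')^2);
    V = V + alpha*Pk;                                  % V_{k+1}
    W = W + alpha*Qk;                                  % W_{k+1}
    Rnew = C - A*V - B*W + E*V*F;                      % R_{k+1}
    beta = norm(Rnew, 'fro')^2 / nR;                   % ||R_{k+1}||^2 / ||R_k||^2
    Pk = (A'*Rnew + P*(A'*Rnew)*P - E'*Rnew*F' - P*(E'*Rnew*F')*P)/2 + beta*Pk;
    Qk = (B'*Rnew + S*(B'*Rnew)*S)/2 + beta*Qk;
    R = Rnew;
end
end
```

By construction, Pk and Qk are the symmetrized (reflexive) parts of the gradient directions, which is precisely what keeps every iterate Vk, Wk reflexive, as proved in Theorem 2.1 below.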

In the next theorem, we prove that the iterates {Vk + 1} and {Wk + 1} generated by Algorithm 2.1 remain reflexive, so that the solutions they deliver are reflexive solutions of the matrix Eq. (2.1).

Theorem 2.1 The sequences {Vk + 1} and {Wk + 1} generated by Algorithm 2.1 are reflexive with respect to the generalized reflection matrices P and S, respectively, and hence yield reflexive solutions of the matrix Eq. ( 2.1 ).

Proof We prove this theorem by induction as follows:

For k = 1,

$$ {\displaystyle \begin{array}{l}{PV}_2P={PV}_1P+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{PP}_1P\\ {}={V}_1+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\frac{1}{2}\left[{PA}^T{R}_1P+{P}^2{A}^T{R}_1{P}^2-{PE}^T{R}_1{F}^TP-{P}^2{E}^T{R}_1{F}^T{P}^2\right]\\ {}={V}_1+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\frac{1}{2}\left[{PA}^T{R}_1P+{A}^T{R}_1-{PE}^T{R}_1{F}^TP-{E}^T{R}_1{F}^T\right]={V}_2\end{array}} $$

Assume that the result holds for step k − 1, i.e., PVk − 1P = Vk − 1 and PPk − 1P = Pk − 1; then

$$ {PV}_kP={PV}_{k-1}P+\frac{{\left\Vert {R}_{k-1}\right\Vert}^2}{{\left\Vert {P}_{k-1}\right\Vert}^2+{\left\Vert {Q}_{k-1}\right\Vert}^2}{PP}_{k-1}P={V}_{k-1}+\frac{{\left\Vert {R}_{k-1}\right\Vert}^2}{{\left\Vert {P}_{k-1}\right\Vert}^2+{\left\Vert {Q}_{k-1}\right\Vert}^2}{P}_{k-1}={V}_k $$

Now, \( {\displaystyle \begin{array}{l}{PV}_{k+1}P={PV}_kP+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}{PP}_kP\\ {}={V}_k+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}\left(\frac{1}{2}\left[{PA}^T{R}_kP+{P}^2{A}^T{R}_k{P}^2-{PE}^T{R}_k{F}^TP-{P}^2{E}^T{R}_k{F}^T{P}^2\right]+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {R}_{k-1}\right\Vert}^2}{PP}_{k-1}P\right)\\ {}={V}_k+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}\left(\frac{1}{2}\left[{PA}^T{R}_kP+{A}^T{R}_k-{PE}^T{R}_k{F}^TP-{E}^T{R}_k{F}^T\right]+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {R}_{k-1}\right\Vert}^2}{P}_{k-1}\right)={V}_{k+1}\cdot \end{array}} \)

Similarly, we can prove that Wk + 1 is reflexive with respect to the generalized reflection matrix S, which completes the proof for the matrix Eq. (2.1).

The complexity of the proposed iterative algorithm

Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We define complexity as a numerical function T(n), time versus the input size n. The complexity of an algorithm signifies the total time required by the program to run to completion. The time complexity of an algorithm is most commonly expressed using big O notation, an asymptotic notation for the time complexity. A theoretical and admittedly crude measure of efficiency is the number of floating point operations (flops) needed to implement the algorithm. A “flop” is an arithmetic operation: +, −, ×, or /. In this subsection, we count the flops of the proposed Algorithm 2.1 for the Sylvester matrix equation AV + BW = EVF + C.

The flop counts for step 3:

Computing the residual R1 requires 4mn(2n − 1) + 3mn flops, computing the matrix P1 requires 4n²(2m − 1) + 4n²(2n − 1) + 2mn(2n − 1) + 4n² flops, and computing the matrix Q1 requires 2n²(2m − 1) + n²(2n − 1) + mn(2n − 1) + 2n² flops.

The flop counts for step 5:

Computing Vk + 1 requires [6n² + 2mn + 5] flops, Wk + 1 requires [6n² + 2mn + 5] flops, Rk + 1 requires [4mn(2n − 1) + 4n² + 6mn + 5] flops, Qk + 1 requires [2n²(2m − 1) + n²(2n − 1) + mn(2n − 1) + 4n² + 4mn + 3] flops, and Pk + 1 requires [4n²(2m − 1) + 3n²(2n − 1) + 2mn(2n − 1) + 6n² + 4mn + 3] flops.

Thus, the total flop count of Algorithm 2.1 is:

$$ {\displaystyle \begin{array}{l}k\left[6{n}^2\left(2m-1\right)+4{n}^2\left(2n-1\right)+7 mn\left(2n-1\right)+26{n}^2+18 mn+21\right]\\ {}+6{n}^2\left(2m-1\right)+5{n}^2\left(2n-1\right)+7 mn\left(2n-1\right)+6{n}^2+3 mn\approx k\left[12{n}^2m+8{n}^3+14{mn}^2\right]\\ {}+12{n}^2m+10{n}^3+14{mn}^2\end{array}} $$

where k represents the number of iterations needed to find the reflexive solutions of Eq. (2.1). We can conclude that the flop count of Algorithm 2.1 is O(n³) per iteration.
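
For instance, when m = n, the leading terms above give a setup cost of roughly 12n³ + 10n³ + 14n³ = 36n³ flops and a per-iteration cost of roughly 12n³ + 8n³ + 14n³ = 34n³ flops, which makes the O(n³)-per-iteration behavior explicit.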

Convergence analysis for the proposed algorithm

In this section, we first present two lemmas, which are key tools in the convergence analysis of Algorithm 2.1.

Lemma 3.1 Assume that the sequences {Ri}, {Pi}, and {Qi} are obtained by Algorithm 2.1. If there exists an integer s > 1 such that \( {R}_i\ne \mathbf{0} \) for all i = 1, 2, …, s, then we have

$$ tr\left({R}_j^T{R}_i\right)=0\kern0.50em and\ tr\left({P}_j^T{P}_i+{Q}_j^T{Q}_i\right)=0,i,j=1,2,\dots, s,i\ne j $$
(3.1)

Proof In view of the fact that tr(Y) = tr(Y^T) for an arbitrary matrix Y, we only need to prove that

$$ tr\left({R}_j^T{R}_i\right)=0, tr\left({P}_j^T{P}_i+{Q}_j^T{Q}_i\right)=0,\mathrm{for}\ 1\le i<j\le s $$
(3.2)

We prove the conclusion (3.2) by induction through the following two steps.

Step 1: First, we show that

$$ tr\left({R}_{i+1}^T{R}_i\right)=0\ \mathrm{and}\ tr\left({P}_{i+1}^T{P}_i+{Q}_{i+1}^T{Q}_i\right)=0,i=1,2,\dots, s $$
(3.3)

To prove (3.3), we also use induction.

For i = 1, noting that P1 = PP1P and Q1 = SQ1S, from Algorithm 2.1 we can write \( tr\left({R}_2^T{R}_1\right)= tr\left({\left[{R}_1-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left({AP}_1-{EP}_1F+{BQ}_1\right)\right]}^T{R}_1\right) \)

$$ {\displaystyle \begin{array}{l}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2} tr\left({P}_1^T{A}^T{R}_1-{F}^T{P}_1^T{E}^T{R}_1+{Q}_1^T{B}^T{R}_1\right)\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2} tr\left({P}_1^T{A}^T{R}_1-{P}_1^T{E}^T{R}_1{F}^T+{Q}_1^T{B}^T{R}_1\right)\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]\right.\right.\\ {}+{Q}_1^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]+{Q}_1^T\left[\frac{B^T{R}_1-{SB}^T{R}_1S}{2}\right]\\ {}\left.\left.+{P}_1^T\left[\frac{A^T{R}_1-{PA}^T{R}_1P-{E}^T{R}_1{F}^T+{PE}^T{R}_1{F}^TP}{2}\right]\right)\right]\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]\right.\right.\\ {}\left.\left.+{Q}_1^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\right]\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T{P}_1+{Q}_1^T{Q}_1\Big)\right)\right]=0.\end{array}} $$
(3.4)

Similarly, we can write

$$ {\displaystyle \begin{array}{c}\begin{array}{l} tr\left({P}_2^T{P}_1+{Q}_2^T{Q}_1\right)= tr\left({\left[\frac{A^T{R}_2+{PA}^T{R}_2P-{E}^T{R}_2{F}^T-{PE}^T{R}_2{F}^TP}{2}+\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}{P}_1\right]}^T{P}_1\right.\\ {}\left.+{\left[\frac{B^T{R}_2+{SB}^T{R}_2S}{2}+\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}{Q}_1\right]}^T{Q}_1\right)\end{array}\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({P}_1^T{P}_1+{Q}_1^T{Q}_1\right)+ tr\left(\begin{array}{l}{P}_1^T\left[\frac{A^T{R}_2+{PA}^T{R}_2P-{E}^T{R}_2{F}^T-{PE}^T{R}_2{F}^TP}{2}\right]\\ {}+{Q}_1^T\left[\frac{B^T{R}_2+{SB}^T{R}_2S}{2}\right]\end{array}\right)\\ {}\begin{array}{l}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({P}_1^T\left[\frac{A^T{R}_2+{A}^T{R}_2-{E}^T{R}_2{F}^T-{E}^T{R}_2{F}^T}{2}\right]\right.\\ {}\left.+{Q}_1^T\left[\frac{B^T{R}_2+{B}^T{R}_2}{2}\right]\right)\end{array}\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({R}_2^T\left[{AP}_1-{EP}_1F+{BQ}_1\right]\right)\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({R}_2^T\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({R}_1-{R}_2\right)\right)\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({R}_2^T{R}_1\right)-{\left\Vert {R}_2\right\Vert}^2=0.\end{array}} $$
(3.5)

Now, assume that (3.3) holds for 1 < i ≤ t − 1 < s. Noting that Pt = PPtP and Qt = SQtS, we then have for i = t

$$ {\displaystyle \begin{array}{c} tr\left({R}_{t+1}^T{R}_t\right)= tr\left({\left[{R}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left\{{AP}_t-{EP}_tF+{BQ}_t\right\}\right]}^T{R}_t\right)\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2} tr\left({P}_t{A}^T{R}_t-{F}^T{P}_t{E}^T{R}_t+{Q}_t{B}^T{R}_t\right)\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2} tr\left({P}_t^T{A}^T{R}_t-{P}_t^T{E}^T{R}_t{F}^T+{Q}_t^T{B}^T{R}_t\right)\\ {}\begin{array}{l}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[ tr\left({P}_t^T\left[\frac{A^T{R}_t+{PA}^T{R}_tP-{E}^T{R}_t{F}^T-{PE}^T{R}_t{F}^TP}{2}\right]\right.\right.\\ {}\left.\left.+{Q}_t^T\left[\frac{B^T{R}_t+{SB}^T{R}_tS}{2}\right]\right)\right]\end{array}\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[ tr\left({P}_t^T\left({P}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2}{P}_{t-1}\right)+{Q}_t^T\left({Q}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2}{Q}_{t-1}\right)\right)\right]\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2} tr\left({P}_t^T{P}_{t-1}+{Q}_t^T{Q}_{t-1}\right)\right]=0.\end{array}} $$
(3.6)

Also, we have

$$ {\displaystyle \begin{array}{c} tr\left({P}_{t+1}^T{P}_t+{Q}_{t+1}^T{Q}_t\right)= tr\left({\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{P}_t\right]}^T{P}_t\right.\\ {}\left.+{\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{Q}_t\right]}^T{Q}_t\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left(\begin{array}{l}{P}_t^T\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}\right]\\ {}+{Q}_t^T\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}\right]\end{array}\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left({R}_{t+1}^T\left[{AP}_t-{EP}_tF+{BQ}_t\right]\right)\\ {}\begin{array}{l}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left({R}_{t+1}^T\frac{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}\left({R}_t-{R}_{t+1}\right)\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+\frac{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({R}_{t+1}^T{R}_t\right)-{\left\Vert {R}_{t+1}\right\Vert}^2=0.\end{array}\end{array}} $$
(3.7)

Therefore, the conclusion (3.3) holds for i = t. Hence, (3.3) holds by the principle of induction.

Step 2: In this step, we show for 1 ≤ i ≤ s − 1

$$ tr\left({R}_{i+l}^T{R}_i\right)=0\ \mathrm{and}\ tr\left({P}_{i+l}^T{P}_i+{Q}_{i+l}^T{Q}_i\right)=0 $$
(3.8)

for l = 1, 2, …, s. The case of l = 1 has been proven in step 1. Assume that (3.8) holds for l ≤ ν.

Now, we prove that \( tr\left({R}_{i+\nu +1}^T{R}_i\right)=0 \) and \( tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)=0 \) through the following two substeps.

Substep 2.1: In this substep, we show that

$$ tr\left({R}_{\nu +2}^T{R}_1\right)=0 $$
(3.9)
$$ tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right)=0 $$
(3.10)

By Algorithm 2.1 and the induction assumptions, we have

$$ {\displaystyle \begin{array}{c} tr\left({R}_{\nu +2}^T{R}_1\right)= tr\left({\left[{R}_{\nu +1}-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2}\left({AP}_{\nu +1}-{EP}_{\nu +1}F+{BQ}_{\nu +1}\right)\right]}^T{R}_1\right)\\ {}= tr\left({R}_{\nu +1}^T{R}_1\right)-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T\left[{A}^T{R}_1-{E}^T{R}_1{F}^T\right]+{Q}_{\nu +1}^T{B}^T{R}_1\right)\\ {}\begin{array}{l}=-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T\left[\frac{A^T{R}_1-{E}^T{R}_1{F}^T+{PA}^T{R}_1P-{PE}^T{R}_1{F}^TP}{2}\right]\right.\\ {}\left.+{Q}_{\nu +1}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\end{array}\\ {}=-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T{P}_1+{Q}_{\nu +1}^T{Q}_1\right)=0,\end{array}} $$

and \( tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \)

$$ {\displaystyle \begin{array}{l}= tr\left({\left[\frac{A^T{R}_{\nu +2}+{PA}^T{R}_{\nu +2}P-{E}^T{R}_{\nu +2}{F}^T-{PE}^T{R}_{\nu +2}{F}^TP}{2}+\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{P}_{\nu +1}\right]}^T{P}_1\right.\\ {}\left.+{\left[\frac{B^T{R}_{\nu +2}+{SB}^T{R}_{\nu +2}S}{2}+\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{Q}_{\nu +1}\right]}^T{Q}_1\right)\\ {}=\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T{P}_1+{Q}_{\nu +1}^T{Q}_1\right)+ tr\left({R}_{\nu +2}^T\left[{AP}_1-{EP}_1F+{BQ}_1\right]\right)\\ {}=\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({R}_{\nu +2}^T\left({R}_1-{R}_2\right)\right)=0\end{array}} $$

Thus, (3.9) and (3.10) hold.

Substep 2.2: By Algorithm 2.1 and the induction assumptions, we can write

$$ {\displaystyle \begin{array}{c} tr\left({R}_{i+\nu +1}^T{R}_i\right)= tr\left({\left[{R}_{i+\nu }-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2}\left({AP}_{i+\nu }-{EP}_{i+\nu }F+{BQ}_{i+\nu}\right)\right]}^T{R}_i\right)\\ {}= tr\left({R}_{\nu +1}^T{R}_i\right)-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[{A}^T{R}_i-{E}^T{R}_i{F}^T\right]+{Q}_{i+\nu}^T{B}^T{R}_i\right)\\ {}\begin{array}{l}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[\frac{A^T{R}_i-{E}^T{R}_i{F}^T+{PA}^T{R}_iP-{PE}^T{R}_i{F}^TP}{2}\right]\right.\\ {}\left.+{Q}_{i+\nu}^T\left[\frac{B^T{R}_i+{SB}^T{R}_iS}{2}\right]\right)\end{array}\\ {}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[{P}_i-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2}{P}_{i-1}\right]+{Q}_{i+\nu}^T\left[{Q}_i-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2}{Q}_{i-1}\right]\right)\\ {}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2}\left[ tr\left({P}_{i+\nu}^T{P}_i+{Q}_{i+v}^T{Q}_i\right)-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\right]\\ {}=\frac{{\left\Vert {R}_i\right\Vert}^2{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2\left({\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2\right)} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\end{array}} $$
(3.11)

Also, we have

$$ {\displaystyle \begin{array}{c} tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)\\ {}= tr\left({\left[\frac{A^T{R}_{i+\nu +1}+{PA}^T{R}_{i+\nu +1}P-{E}^T{R}_{i+\nu +1}{F}^T-{PE}^T{R}_{i+\nu +1}{F}^TP}{2}+\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{P}_{i+\nu}\right]}^T{P}_i\right.\\ {}\left.+{\left[\frac{B^T{R}_{i+\nu +1}+{SB}^T{R}_{i+\nu +1}S}{2}+\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{Q}_{i+\nu}\right]}^T{Q}_i\right)\\ {}=\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T{P}_i+{Q}_{i+\nu}^T{Q}_i\right)+ tr\left({R}_{i+\nu +1}^T\left[{AP}_i-{EP}_iF+{BQ}_i\right]\right)\\ {}=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2} tr\left({R}_{i+\nu +1}^T\left[{R}_i-{R}_{i+1}\right]\right)=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2} tr\left({R}_{i+\nu +1}^T{R}_i\right)\\ {}=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2}\frac{{\left\Vert {R}_i\right\Vert}^2{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2\left({\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2\right)} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\end{array}} $$
(3.12)

Repeating the above process (3.11) and (3.12), we can obtain, for certain scalars α and β,

\( tr\left({R}_{i+\nu +1}^T{R}_i\right)=\alpha tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \), and \( tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)=\beta tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \).

Combining these two relations with (3.10) implies that (3.8) holds for l = ν + 1.

From steps 1 and 2, the conclusion (3.1) holds by the principle of induction.

Lemma 3.2 Let Problem 2.1 be consistent over reflexive matrices, and let V* and W* be arbitrary reflexive solutions of Problem 2.1. Then, for any initial reflexive matrices V1 and W1, we have

$$ tr\left({\left({V}^{\ast }-{V}_i\right)}^T{P}_i+{\left({W}^{\ast }-{W}_i\right)}^T{Q}_i\right)={\left\Vert {R}_i\right\Vert}^2\ for\ i=1,2,\dots $$
(3.13)

where the sequences {Ri}, {Pi}, {Qi}, {Vi}, and {Wi} are generated by Algorithm 2.1.

Proof We prove the conclusion (3.13) by induction as follows.

For i = 1, noting that V* − V1 = P(V* − V1)P and W* − W1 = S(W* − W1)S, we have

$$ {\displaystyle \begin{array}{c} tr\left({\left({V}^{\ast }-{V}_1\right)}^T{P}_1+{\left({W}^{\ast }-{W}_1\right)}^T{Q}_1\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]+{\left({W}^{\ast }-{W}_1\right)}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\\ {}\begin{array}{l}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP+{A}^T{R}_1-{PA}^T{R}_1P-{E}^T{R}_1{F}^T}{2}\right.\right.\\ {}\left.+\frac{PE^T{R}_1{F}^TP}{2}\right]\left.+{\left({W}^{\ast }-{W}_1\right)}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S+{B}^T{R}_1-{SB}^T{R}_1S}{2}\right]\right)\end{array}\\ {}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[{A}^T{R}_1-{E}^T{R}_1{F}^T\right]+{\left({W}^{\ast }-{W}_1\right)}^T\left[{B}^T{R}_1\right]\right)\\ {}\begin{array}{l}= tr\left({R}_1^T\left[A\left({V}^{\ast }-{V}_1\right)-E\left({V}^{\ast }-{V}_1\right)F+B\left({W}^{\ast }-{W}_1\right)\right]\right)\\ {}= tr\left({R}_1^T\left[C-{AV}_1+{EV}_1F-{BW}_1\right]\right)={\left\Vert {R}_1\right\Vert}^2.\end{array}\end{array}} $$
(3.14)

Assume that the conclusion (3.13) holds for i = t. Now, for i = t + 1, we have

$$ {\displaystyle \begin{array}{l} tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T{P}_{t+1}+{\left({W}^{\ast }-{W}_{t+1}\right)}^T{Q}_{t+1}\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{P}_t\right]\right.\\ {}\left.+{\left({W}^{\ast }-{W}_{t+1}\right)}^T\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{Q}_t\right]\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T\left[{A}^T{R}_{t+1}-{E}^T{R}_{t+1}{F}^T\right]+{\left({W}^{\ast }-{W}_{t+1}\right)}^T\left[{B}^T{R}_{t+1}\right]\right.\\ {}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}\left[{\left({V}^{\ast }-{V}_{t+1}\right)}^T{P}_t+{\left({W}^{\ast }-{W}_{t+1}\right)}^T{Q}_t\right]\\ {}= tr\left({R}_{t+1}^T\left[C-{AV}_{t+1}+{EV}_{t+1}F-{BW}_{t+1}\right]\right)={\left\Vert {R}_{t+1}\right\Vert}^2\end{array}} $$
(3.15)

Hence, Lemma 3.2 holds for all i = 1, 2, … by the principle of induction.

Theorem 3.1 Assume that Problem 2.1 is consistent over reflexive matrices. Then, for any arbitrary initial reflexive matrices \( {V}_1\in {R}_r^{n\times n}(P) \) and \( {W}_1\in {R}_r^{n\times n}(S) \) , reflexive solutions of Problem 2.1 can be obtained by Algorithm 2.1 within finitely many iteration steps in the absence of roundoff errors.

Proof Assume that \( {R}_i\ne \mathbf{0} \) for i = 1, 2, …, mn. From Lemma 3.2, we get \( {P}_i\ne \mathbf{0} \) or \( {Q}_i\ne \mathbf{0} \) for i = 1, 2, …, mn. Therefore, we can compute Rmn + 1, Vmn + 1 and Wmn + 1 by Algorithm 2.1. Also from Lemma 3.1, we have

$$ tr\left({R}_{mn+1}^T{R}_i\right)=0\ \mathrm{for}\ i=1,2,\dots, mn, $$
(3.16)

and

$$ tr\left({R}_i^T{R}_j\right)=0\ \mathrm{for}\ i,j=1,2,\dots, mn,\left(i\ne j\right). $$
(3.17)

Therefore, the set {R1, R2, …, Rmn} is an orthogonal basis of the matrix space R^{m × n}, which implies that \( {R}_{mn+1}=\mathbf{0} \), i.e., Vmn + 1 and Wmn + 1 are reflexive solutions of Problem 2.1. Hence, the proof is completed.
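
For instance, in Example 4.1 below, m = 5 and n = 4, so in exact arithmetic the residual must vanish after at most mn = 20 iteration steps; the 28 iterations actually reported there reflect the influence of roundoff errors.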

To obtain the least Frobenius norm reflexive solution pair of Problem 2.1, we first present the following lemma.

Lemma 3.3 [4] Assume that the consistent system of linear equations Ax = b has a solution x* ∈ R(A^T); then, x* is the unique least Frobenius norm solution of the system of linear equations.

Theorem 3.2 Suppose that Problem 2.1 is consistent over reflexive matrices. Let the initial iteration matrices be \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), where G and \( \tilde{G} \) are arbitrary, or, in particular, \( {V}_1=\mathbf{0} \) and \( {W}_1=\mathbf{0} \). Then the reflexive solutions V* and W* obtained by Algorithm 2.1 are the least Frobenius norm reflexive solutions of Eq. ( 2.1 ).

Proof The solvability of the matrix Eq. (2.1) over reflexive matrices is equivalent to the solvability of the system of equations

$$ \left\{\begin{array}{l} AV- EVF+ BW=C,\\ {} APVP- EPVPF+ BSWS=C.\end{array}\right. $$
(3.18)

The system of equations (3.18) is equivalent to

$$ \left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)\left(\begin{array}{c} vec(V)\\ {} vec(W)\end{array}\right)=\left(\begin{array}{c} vec(C)\\ {} vec(C)\end{array}\right) $$
(3.19)

Now, assuming that G and \( \tilde{G} \) are arbitrary matrices, we can write

$$ {\displaystyle \begin{array}{c}\left(\begin{array}{c} vec\left({A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP\right)\\ {} vec\left({B}^TG+{SB}^T\tilde{G}S\right)\end{array}\right)\\ {}=\left(\begin{array}{cc}\left(I\otimes {A}^T\right)-\left(F\otimes {E}^T\right)& \left(P\otimes {PA}^T\right)-\left( PF\otimes {PE}^T\right)\\ {}\left(I\otimes {B}^T\right)& \left(S\otimes {SB}^T\right)\end{array}\right)\ \left(\begin{array}{c} vec(G)\\ {} vec\left(\tilde{G}\right)\end{array}\right)\\ {}={\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\left(\begin{array}{c} vec(G)\\ {} vec\left(\tilde{G}\right)\end{array}\right)\\ {}\in R\left({\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\right)\end{array}} $$

If we consider \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), then all Vk and Wk generated by Algorithm 2.1 satisfy

$$ \left(\begin{array}{c} vec\left({V}_k\right)\\ {} vec\left({W}_k\right)\end{array}\right)\in R\left({\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\right) $$

By applying Lemma 3.3 with the initial iteration matrices \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), where G and \( \tilde{G} \) are arbitrary, or, in particular, \( {V}_1=\mathbf{0} \) and \( {W}_1=\mathbf{0} \), the reflexive solutions V* and W* obtained by Algorithm 2.1 are the least Frobenius norm reflexive solutions of Eq. (2.1).
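
On small problems, this Kronecker-product characterization can be checked directly: forming the coefficient matrix of (3.19) and applying the Moore-Penrose inverse yields the least Frobenius norm pair. A MATLAB verification sketch (the variable names are ours; pinv computes the Moore-Penrose inverse):

```matlab
% Build the block coefficient matrix of (3.19); A, B, E, C are m x n,
% F, P, S are n x n, and V, W are the unknown n x n matrices.
n = size(F, 1);
M = [kron(eye(n), A) - kron(F', E),      kron(eye(n), B);
     kron(P, A*P)    - kron(F'*P, E*P),  kron(S, B*S)];
x = pinv(M) * [C(:); C(:)];            % minimum norm solution [vec(V); vec(W)]
V = reshape(x(1:n^2),     n, n);       % least Frobenius norm reflexive V
W = reshape(x(n^2+1:end), n, n);       % least Frobenius norm reflexive W
```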

Numerical examples

In this section, four numerical examples are presented to illustrate the performance and effectiveness of the proposed algorithm. We implemented the algorithm in MATLAB (writing our own programs) and ran the programs on a Pentium IV PC.

Example 4.1

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccc}3& 2& 4& 1\\ {}0& -2& 1& 3\\ {}5& 2& 3& 2\\ {}2& 1& 3& 4\\ {}2& 0& 2& 0\end{array}\right),B=\left(\begin{array}{cccc}5& 0& 2& 3\\ {}\hbox{-} 5& 0& 4& 1\\ {}3& 4& 5& 2\\ {}3& 2& 2& 3\\ {}0& 3& 4& 6\end{array}\right),E=\left(\begin{array}{cccc}-3& 2& 4& 0\\ {}2& 0& -3& 2\\ {}3& 2& 3& 0\\ {}3& 4& 3& 0\\ {}3& 0& 3& 2\end{array}\right)\\ {}F=\left(\begin{array}{cccc}3& -4& 5& 1\\ {}2& -4& 1& 3\\ {}-4& 2& 2& 1\\ {}-3& 0& -2& -12\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccc}84& -46& 49& 81\\ {}-13& 19& 11& 8\\ {}29& 70& 18& 15\\ {}26& 53& 29& 8\\ {}61& 35& -24& 68\end{array}\right)\end{array}} $$

Choosing the initial matrices V1 = W1 = 0 and applying Algorithm 2.1, we get the reflexive solutions of the matrix Eq. (2.1) as follows:

$$ {\displaystyle \begin{array}{l}{V}_{28}=\left(\begin{array}{cccc}1.0000& 3.0000& 0.0000& 0.0000\\ {}-2.0000& 2.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 2.0000& 1.0000\\ {}0.0000& 0.0000& 4.0000& 2.0000\end{array}\right)\in {R}_r^{4\times 4}(P)\\ {}\mathrm{and}\ {W}_{28}=\left(\begin{array}{cccc}2.0000& 1.0000& 0.0000& 0.0000\\ {}3.0000& 3.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 4.0000& 2.0000\\ {}0.0000& 0.0000& -1.0000& 3.0000\end{array}\right)\in {R}_r^{4\times 4}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -1& 0\\ {}0& 0& 0& -1\end{array}\right) \), with the corresponding residual

‖R28‖ = ‖C − (AV28 + BW28 − EV28F)‖ = 6.8125 × 10⁻¹⁰. Moreover, it can be verified that PV28P = V28 and SW28S = W28. Table 1 indicates the number of iterations k and the norm of the corresponding residual:

Table 1 The number of iterations and the norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation in Example 4.1
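
For reference, this run can be reproduced with the sketch of Algorithm 2.1 given earlier; the driver below is hypothetical and assumes A, B, E, F, and C have been entered as printed above, with the tolerance being our choice:

```matlab
P  = diag([1 1 -1 -1]);  S = P;          % the generalized reflection matrices
V1 = zeros(4);  W1 = zeros(4);           % initial reflexive guesses
[V, W, k] = reflexive_sylvester(A, B, E, F, C, P, S, V1, W1, 1e-9, 500);
norm(C - (A*V + B*W - E*V*F), 'fro')     % ~ 1e-10 after about 28 iterations
```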

Now let \( \hat{V}=\left(\begin{array}{l}\kern0.5em 1\kern1.25em 1\kern1.25em 0\kern1.25em 0\\ {}-1\kern0.5em -1\kern1.25em 0\kern1.25em 0\\ {}\kern0.5em 0\kern1.25em 0\kern0.5em -2\kern1.25em 1\\ {}\kern0.5em 0\kern1.25em 0\kern1.25em 3\kern0.5em -1\end{array}\right),\hat{W}=\left(\begin{array}{l}\ 1\kern0.75em -1\kern1em 0\kern1.25em 0\\ {}\ 1\kern0.75em -1\kern1em 0\kern1.25em 0\\ {}\ 0\kern1.25em 0\kern1.25em 1\kern1.25em 2\\ {}\ 0\kern1.25em 0\kern0.75em -2\kern1.25em 1\end{array}\right) \).

By applying Algorithm 2.1 to the generalized Sylvester matrix equation \( A\overline{V}+B\overline{W}=E\overline{V}F+\overline{C} \) and letting the initial pair \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \), we can obtain the least Frobenius norm reflexive solution \( {\overline{V}}^{\ast },{\overline{W}}^{\ast } \) of the generalized Sylvester matrix Eq. (2.1) as follows

$$ {\overline{V}}^{\ast }={\overline{V}}_{29}=\left(\begin{array}{l}-0.0000\kern1em 2.0000\kern1.75em 0.0000\kern0.75em 0.0000\\ {}-1.0000\kern1em 3.0000\kern1.75em 0.0000\kern0.75em 0.0000\\ {}\kern0.5em 0.0000\kern0.75em 0.0000\kern1.5em 4.0000\kern1em 0.0000\\ {}\kern0.5em 0.0000\kern0.75em 0.0000\kern1.5em 1.0000\kern1em 3.0000\end{array}\right), $$

\( {\overline{W}}^{\ast }={\overline{W}}_{29}=\left(\begin{array}{l}1.0000\kern1em 2.0000\kern2em 0.0000\kern1em 0.0000\\ {}2.0000\kern1em 4.0000\kern2em 0.0000\kern1em 0.0000\\ {}0.0000\kern1em 0.0000\kern1.75em 3.0000\kern0.5em -0.0000\\ {}0.0000\kern1em 0.0000\kern1.75em 1.0000\kern1em 2.0000\end{array}\right) \), with the corresponding residual

$$ \left\Vert {R}_{29}\right\Vert =\left\Vert C-\left({AV}_{29}+{BW}_{29}-{EV}_{29}F\right)\right\Vert =\kern0.5em 5.0896\times {10}^{-11}. $$

Table 2 indicates the number of iterations k and norm of the corresponding residual with \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \).

Table 2 The number of iterations and norm of the corresponding residual for Example 4.1 with \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \)

Example 4.2

(Special case) Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccc}-0.2& 1& 0& 0\\ {}1& -0.1& 0& 0\\ {}0& 0& -0.3& 0\\ {}0& 0& 0& 0.4\end{array}\right),E=\left(\begin{array}{cccc}2& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& 3& 0\\ {}0& 0& 0& 2\end{array}\right),B=\left(\begin{array}{cccc}-2& 1& 0& 0\\ {}-1& 4& 0& 0\\ {}0& 0& -3& 0\\ {}0& 0& 0& 3\end{array}\right)\\ {}F=\left(\begin{array}{cccc}2& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -0.2& 0\\ {}0& 0& 0& 4\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& -0.2& 0& 0\\ {}0& 0& 1& 0\\ {}0& 0& -0.1& 3\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we get the reflexive solutions of the matrix Eq. (2.1) after 7 iterations with ε = 10⁻¹⁰ as follows:

$$ {\displaystyle \begin{array}{l}{V}_7=\left(\begin{array}{l}\hbox{-} 0.177130\kern1.25em \hbox{-} 0.016697\kern2em 0.000000\kern1.25em 0.000000\\ {}\kern0.5em 0.041119\kern1.25em 0.014553\kern2em 0.000000\kern1.25em 0.000000\\ {}\kern0.5em 0.000000\kern1.25em 0.000000\kern2em 0.033003\kern1.25em 0.000000\\ {}\kern0.5em 0.000000\kern1.25em 0.000000\kern2em \hbox{-} 0.008300\kern0.75em \hbox{-} 0.341520\end{array}\right)\in {R}_r^{4\times 4}(P)\ \mathrm{and}\\ {}{W}_7=\left(\begin{array}{l}\hbox{-} 0.085183\kern1em 0.005417\kern1.25em 0.000000\kern1.75em 0.000000\\ {}\ 0.044574\kern1em \hbox{-} 0.040468\kern1em 0.000000\kern1.75em 0.000000\\ {}\ 0.000000\kern1.25em 0.000000\kern1em \hbox{-} 0.330030\kern1.5em 0.000000\\ {}\ 0.000000\kern1.5em 0.000000\kern0.75em \hbox{-} 0.031125\kern1.5em 0.134810\end{array}\right)\in {R}_r^{4\times 4}(S)\\ {}\end{array}} $$

where \( P=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -1& 0\\ {}0& 0& 0& -1\end{array}\right) \) and S = P.

It can be verified that PV7P = V7 and SW7S = W7. Moreover, the corresponding residual is ‖R7‖ = ‖C − AV7 + EV7F − BW7‖ = 8.1907 × 10⁻¹⁰.

Example 4.3

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{l}2\kern1em \hbox{-} 1\kern1em \hbox{-} 3\kern1.25em 3\kern1em \hbox{-} 1\kern1.25em 0\\ {}3\kern1em \hbox{-} 1\kern1.25em 1\kern1em \hbox{-} 2\kern1.25em 0\kern1.25em 2\\ {}3\kern1.25em 2\kern1.25em 0\kern1.25em 1\kern1.25em 4\kern1em \hbox{-} 3\\ {}2\kern1em \hbox{-} 1\kern1.25em 3\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 3\\ {}4\kern1em \hbox{-} 1\kern1.25em 2\kern1em \hbox{-} 2\kern1.25em 3\kern1.25em 1\\ {}2\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 3\kern1em \hbox{-} 2\kern1.25em 1\end{array}\right),E=\left(\begin{array}{l}\ 4\kern1em \hbox{-} 3\kern1.25em 1\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 2\\ {}\ 0\kern1.25em 2\kern1.25em 4\kern1.25em 1\kern1em \hbox{-} 2\kern1.25em 3\\ {}\hbox{-} 2\kern1.25em 0\kern1.25em 4\kern1.25em 3\kern1.25em 2\kern1em \hbox{-} 3\\ {}\ 1\kern1em \hbox{-} 2\kern1.25em 4\kern1.25em 2\kern1.25em 0\kern1.25em 3\\ {}\ 3\kern1em \hbox{-} 2\kern1.25em 1\kern1.25em 4\kern1.25em 1\kern1.25em 2\\ {}1\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 2\kern1em \hbox{-} 3\kern1.25em 4\end{array}\right),B=\left(\begin{array}{l}1\kern1.25em 0\kern1em \hbox{-} 2\kern1em \hbox{-} 1\kern1.25em 3\kern1.25em 4\\ {}2\kern1em \hbox{-} 3\kern1.25em 3\kern1.25em 4\kern1.25em 0\kern1.25em 2\\ {}1\kern1.25em 0\kern1.25em 3\kern1em \hbox{-} 2\kern1.25em 2\kern1em \hbox{-} 3\\ {}2\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 0\kern1em \hbox{-} 3\kern1.25em 1\\ {}0\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 2\kern1.25em 3\kern1.25em 1\\ {}2\kern1em \hbox{-} 1\kern1.25em 3\kern1em \hbox{-} 3\kern1.25em 4\kern1.25em 1\end{array}\right)\\ {}F=\left(\begin{array}{l}3\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 1\kern1em \hbox{-} 2\kern1.25em 1\\ {}1\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 2\kern1.25em 1\kern1em \hbox{-} 3\\ {}\hbox{-} 3\kern1.25em 2\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 2\kern1.25em 1\\ {}\ 0\kern1em \hbox{-} 1\kern1.25em 2\kern1em \hbox{-} 3\kern1.25em 1\kern1.25em 4\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 0\kern1.25em 2\kern1.25em 3\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 3\kern1.25em 2\kern1em \hbox{-} 3\kern1.25em 1\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{l}\kern1.5em 0\kern0.75em \hbox{-} 11\kern1.25em 1\kern1.48em 22\kern0.75em \hbox{-} 30\kern1.5em 14\\ {}\kern0.75em \hbox{-} 37\kern1em 19\kern0.5em \hbox{-} 122\kern0.75em \hbox{-} 22\kern1em \hbox{-} 4\kern1.5em 28\\ {}\kern1em 65\kern0.75em \hbox{-} 21\kern1.25em 4\kern1em 32\kern0.75em \hbox{-} 77\kern1em \hbox{-} 9\\ {}\kern1em 12\kern1.25em 7\kern0.75em \hbox{-} 77\kern1.75em 5\kern1em \hbox{-} 36\kern1em \hbox{-} 65\\ {}\kern0.5em \hbox{-} 11\kern1em 33\kern0.75em \hbox{-} 67\kern1.5em 60\kern0.75em \hbox{-} 52\kern1em \hbox{-} 43\\ {}\kern0.5em \hbox{-} 37\kern1em 19\kern0.75em \hbox{-} 68\kern1em \hbox{-} 39\kern1.25em 61\kern1.5em 3\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we get the reflexive solutions of the matrix Eq. (2.1) after 108 iterations with ε = 10⁻¹⁰ as follows:

$$ {\displaystyle \begin{array}{l}V=\left(\begin{array}{l}1\kern3em 3\kern2.75em \hbox{-} 2\kern3em 0\kern3em 0\kern3em 0\\ {}1\kern3em 3\kern2.75em \hbox{-} 2\kern3em 0\kern3em 0\kern3em 0\\ {}1\kern3em 3\kern3em 2\kern3em 0\kern3em 0\kern3em 0\\ {}0\kern2.75em 0\kern3em 0\kern3em 2\kern3em 1\kern2.75em \hbox{-} 2\\ {}0\kern3em 0\kern3em 0\kern3em 2\kern3em 1\kern2.75em \hbox{-} 2\\ {}0\kern3em 0\kern3em 0\kern3em 2\kern3em 1\kern3em 2\end{array}\right)\in {R}_r^{6\times 6}(P)\ \mathrm{and}\\ {}W=\left(\begin{array}{l}2\kern2.75em \hbox{-} 1\kern3em 3\kern3em 0\kern3em 0\kern3em 0\\ {}2\kern2.75em \hbox{-} 1\kern3em 3\kern3em 0\kern3em 0\kern3em 0\\ {}2\kern2.75em \hbox{-} 1\kern2.75em \hbox{-} 3\kern3em 0\kern3em 0\kern3em 0\\ {}0\kern3em 0\kern3em 0\kern3em 2\kern2.75em \hbox{-} 1\kern3em 3\\ {}0\kern3em 0\kern3em 0\kern3em 2\kern2.75em \hbox{-} 1\kern3em 3\\ {}0\kern3em 0\kern3em 0\kern3em 2\kern2.75em \hbox{-} 1\kern2.75em \hbox{-} 3\end{array}\right)\in {R}_r^{6\times 6}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{l}1\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 1\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 1\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\end{array}\right) \), with the corresponding residual

‖R108‖ = ‖C − (AV108 + BW108 − EV108F)‖ = 3.0452 × 10⁻¹⁰. Moreover, it can be verified that PV108P = V108 and SW108S = W108. Table 3 indicates the number of iterations k and the norm of the corresponding residual:

Table 3 The number of iterations and the norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation in Example 4.3

Example 4.4

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{l}2\kern1em \hbox{-} 1\kern1em \hbox{-} 3\kern1.25em 3\kern1em \hbox{-} 1\kern1.25em 0\\ {}5\kern1em \hbox{-} 2\kern1.25em 0\kern1.25em 5\kern1.25em 1\kern1em \hbox{-} 3\\ {}3\kern1em \hbox{-} 1\kern1.25em 1\kern1em \hbox{-} 2\kern1.25em 0\kern1.25em 6\\ {}1\kern1.25em 5\kern1em \hbox{-} 3\kern1em \hbox{-} 2\kern1.25em 0\kern1.25em 3\\ {}3\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 5\kern1.25em 4\kern1em \hbox{-} 3\\ {}2\kern1em \hbox{-} 1\kern1.25em 3\kern1em \hbox{-} 5\kern1.25em 0\kern1em \hbox{-} 3\\ {}4\kern1em \hbox{-} 1\kern1.25em 2\kern1em \hbox{-} 2\kern1.25em 3\kern1.25em 1\\ {}2\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 3\kern1em \hbox{-} 2\kern1.25em 1\end{array}\right),E=\left(\begin{array}{l}4\kern1em \hbox{-} 3\kern1.25em 1\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 2\\ {}\kern0.5em 0\kern1.25em 2\kern1.25em 4\kern1.25em 1\kern1em \hbox{-} 2\kern1.25em 3\\ {}\hbox{-} 2\kern1.25em 0\kern1.25em 4\kern1.25em 3\kern1.25em 2\kern1em \hbox{-} 3\\ {}\kern0.75em 3\kern1em \hbox{-} 2\kern1.25em 5\kern1.25em 4\kern1.25em 3\kern1.25em 0\\ {}\kern0.75em 1\kern1em \hbox{-} 2\kern1.25em 4\kern1.25em 2\kern1.25em 0\kern1.25em 3\\ {}\kern0.75em 1\kern1em \hbox{-} 3\kern1.25em 2\kern1.25em 4\kern1.25em 0\kern1.25em 2\\ {}\kern0.75em 3\kern1em \hbox{-} 2\kern1.25em 1\kern1.25em 4\kern1.25em 1\kern1.25em 2\\ {}\hbox{-} 1\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 2\kern1em \hbox{-} 3\kern1.25em 4\end{array}\right)\ B=\left(\begin{array}{l}1\kern1.25em 0\kern1em \hbox{-} 2\kern1em \hbox{-} 1\kern1.25em 3\kern1.25em 4\\ {}\ 3\kern1em \hbox{-} 5\kern1em \hbox{-} 3\kern1.25em 2\kern1.25em 4\kern1.25em 1\\ {}\ 2\kern1em \hbox{-} 3\kern1.25em 3\kern1.25em 4\kern1.25em 0\kern1.25em 2\\ {}\ 1\kern1.25em 0\kern1.25em 3\kern1em \hbox{-} 2\kern1.25em 2\kern1em \hbox{-} 3\\ {}\hbox{-} 4\kern1.25em 5\kern1em \hbox{-} 3\kern1.25em 4\kern1em \hbox{-} 1\kern1.25em 0\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 0\kern1em \hbox{-} 3\kern1.25em 1\\ {}\ 0\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 2\kern1.25em 3\kern1.25em 1\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 3\kern1em \hbox{-} 3\kern1.25em 4\kern1.25em 1\end{array}\right),\\ {}F=\left(\begin{array}{l}\kern0.24em 3\kern1.25em 2\kern1.25em 0\kern1em \hbox{-} 1\kern1em \hbox{-} 2\kern1.25em 1\\ {}\kern0.24em 1\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 2\kern1.25em 1\kern1em \hbox{-} 3\\ {}\hbox{-} 3\kern1.25em 2\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 2\kern1.25em 1\\ {}\ 0\kern1em \hbox{-} 1\kern1.25em 2\kern1em \hbox{-} 3\kern1.25em 1\kern1.25em 4\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 4\kern1.25em 0\kern1.25em 2\kern1.25em 3\\ {}\ 2\kern1em \hbox{-} 1\kern1.25em 3\kern1.25em 2\kern1em \hbox{-} 3\kern1.25em 1\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{l}\hbox{-} 11\kern0.75em \hbox{-} 51\kern0.75em \hbox{-} 58\kern1em 29\kern0.75em \hbox{-} 15\kern1em 68\\ {}\hbox{-} 27\kern1em 85\kern0.5em \hbox{-} 124\kern1em 52\kern0.75em \hbox{-} 26\kern0.75em \hbox{-} 82\\ {}\kern0.5em 59\kern0.75em \hbox{-} 93\kern0.75em 109\kern1em 39\kern1em \hbox{-} 6\kern1em 28\\ {}\hbox{-} 18\kern1em 22\kern0.75em \hbox{-} 24\kern0.75em \hbox{-} 14\kern1em \hbox{-} 5\kern1em 86\\ {}\hbox{-} 59\kern0.75em 125\kern0.5em \hbox{-} 144\kern0.75em \hbox{-} 41\kern0.75em \hbox{-} 68\kern0.75em \hbox{-} 11\\ {}\hbox{-} 40\kern1em 56\kern0.5em \hbox{-} 105\kern1em \hbox{-} 5\kern0.75em \hbox{-} 89\kern0.75em \hbox{-} 27\\ {}\hbox{-} 43\kern1em 43\kern0.5em \hbox{-} 110\kern1em 15\kern0.75em \hbox{-} 53\kern0.75em 
\hbox{-} 10\\ {}\hbox{-} 54\kern0.75em 144\kern0.5em \hbox{-} 109\kern1em 14\kern1em 20\kern0.75em \hbox{-} 75\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we get the reflexive solutions of the matrix Eq. (2.1) after 96 iterations with ε = 10⁻¹² as follows:

$$ {\displaystyle \begin{array}{l}V=\left(\begin{array}{l}\kern0.75em 1\kern2.25em 3\kern2em \hbox{-} 2\kern2em 0\kern2.5em 0\kern2.5em 0\\ {}2\kern2em \hbox{-} 2\kern2.25em 1\kern2.5em 0\kern2.5em 0\kern2.5em 0\\ {}\hbox{-} 2\kern2.5em 1\kern1.75em \hbox{-} 3\kern2.25em 0\kern2.5em 0\kern2.5em 0\\ {}0\kern2.5em 0\kern2.25em 0\kern2.25em 4\kern2.5em 1\kern2.25em \hbox{-} 2\\ {}0\kern2.75em 0\kern2.25em 0\kern2em \hbox{-} 3\kern2em \hbox{-} 2\kern2.25em \hbox{-} 1\\ {}0\kern2.75em 0\kern2.25em 0\kern2em \hbox{-} 1\kern2.5em 4\kern2.5em 1\end{array}\right)\in {R}_r^{6\times 6}(P)\ \mathrm{and}\ \\ {}W=\left(\begin{array}{l}2\kern2.75em \hbox{-} 1\kern2.75em 3\kern3em 0\kern3em 0\kern3em 0\\ {}\hbox{-} 3\kern2.5em 2\kern3em 1\kern3em 0\kern3em 0\kern3em 0\\ {}\hbox{-} 3\kern2.5em 4\kern3em 1\kern3em 0\kern3em 0\kern3em 0\\ {}\kern0.5em 0\kern2.25em 0\kern3em 0\kern3em 2\kern2.75em \hbox{-} 1\kern2.75em \hbox{-} 3\\ {}\kern0.5em 0\kern2.5em 0\kern2.75em 0\kern3em 4\kern3em 2\kern3em 1\\ {}\kern0.5em 0\kern2.5em 0\kern2.74em 0\kern3em 1\kern2.75em \hbox{-} 3\kern3em 2\end{array}\right)\in {R}_r^{6\times 6}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{l}1\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 1\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 1\kern1.25em 0\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\kern1.25em 0\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\kern1.25em 0\\ {}0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1.25em 0\kern1em \hbox{-} 1\end{array}\right) \), with the corresponding residual ‖R96‖ = ‖C − (AV96 + BW96 − EV96F)‖ = 3.0152 × 10⁻¹².

Moreover, it can be verified that PV96P = V96 and SW96S = W96. Table 4 indicates the number of iterations k and the norm of the corresponding residual.

Table 4 The number of iterations and the norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation in Example 4.4

Conclusions

In this paper, an iterative method for solving the generalized Sylvester matrix equation over reflexive matrices is derived. With this iterative method, the solvability of the generalized Sylvester matrix equation can be determined automatically. Also, when this matrix equation is consistent, for any initial reflexive matrices, one can obtain reflexive solutions within finitely many iteration steps. In addition, both the complexity and the convergence analysis of our proposed algorithm are presented. Furthermore, we obtained the least Frobenius norm reflexive solutions when special initial reflexive matrices are chosen. Finally, four numerical examples were presented to support the theoretical results and illustrate the effectiveness of the proposed method.

Availability of data and materials

All data generated or analyzed during this study are included in this article.

References

  1. Horn, R.A., Johnson, C.R.: Topics in matrix analysis. Cambridge University Press, England (1991)

  2. Golub, G.H., Van Loan, C.F.: Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore and London (1996)

  3. Chen, H.C.: Generalized reflexive matrices: special properties and applications. SIAM J. Matrix Anal. Appl. 19, 140–153 (1998)

  4. Peng, X.Y., Hu, X.Y., Zhang, L.: An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C. Appl. Math. Comput. 160, 763–777 (2005)

5. Peng, X.Y., Hu, X.Y., Zhang, L.: The reflexive and anti-reflexive solutions of the matrix equation A^HXB = C. Appl. Math. Comput. 186, 638–645 (2007)

  6. Wang, Q.W., Zhang, F.: The reflexive re-nonnegative definite solution to a quaternion matrix equation. Electron. J. Linear Algebra. 17, 88–101 (2008)

  7. Zhan, J.C., Zhou, S.Z., Hu, X.Y.: The (P, Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B. Appl. Math. Comput. 209, 254–258 (2009)

  8. Ramadan, M.A., Abdel Naby, M.A., Bayoumi, A.M.: On the explicit and iterative solutions of the matrix equation AV + BW = EVF + C. Math. Comput. Model. 50, 1400–1408 (2009)

  9. Dehghan, M., Hajarian, M.: An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation. Appl. Math. Comput. 202, 571–588 (2008)

10. Dehghan, M., Hajarian, M.: Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A1X1B1 + A2X2B2 = C. Math. Comput. Model. 49, 1937–1959 (2009)

  11. Yin, F., Guo, K., Huang, G.X.: An iterative algorithm for the generalized reflexive solutions of the general coupled matrix equations. Journal of Inequalities and Applications. 2013, 280 (2013)

12. Li, N.: Iterative algorithm for the generalized (P, Q)-reflexive solution of a quaternion matrix equation with j-conjugate of the unknowns. Bulletin of the Iranian Mathematical Society. 41, 1–22 (2015)

13. Dong, C.Z., Wang, Q.W.: The {P, Q, k + 1}-reflexive solution to the system of matrix equations AX = C, XB = D. Mathematical Problems in Engineering. 2015, 9 (2015)

  14. Nacevska, B.: Generalized reflexive and anti-reflexive solution for a system of equations. Filomat. 30, 55–64 (2016)

  15. Hajarian, M.: Convergence properties of BCR method for generalized Sylvester matrix equation over generalized reflexive and anti-reflexive matrices. Linear and Multilinear Algebra. 66(10), 1–16 (2018)

  16. Liu, X.: Hermitian and non-negative definite reflexive and anti-reflexive solutions to AX = B. International Journal of Computer Mathematics. 95(8), 1666–1671 (2018)

  17. Dehghan, M., Shirilord, A.: A generalized modified Hermitian and skew–Hermitian splitting (GMHSS) method for solving complex Sylvester matrix equation. Applied Mathematics and Computation. 348, 632–651 (2019)

  18. Dehghan, M., Hajarian, M.: On the reflexive and anti–reflexive solutions of the generalized coupled Sylvester matrix equations. International Journal of Systems Science. 41(6), 607–625 (2010)

  19. Dehghan, M., Hajarian, M.: On the generalized bisymmetric and skew–symmetric solutions of the system of generalized Sylvester matrix equations. Linear and Multilinear Algebra. 59(11), 1281–1309 (2011)

  20. Hajarian, M., Dehghan, M.: The reflexive and Hermitian reflexive solutions of the generalized Sylvester–conjugate matrix equation. The Bulletin of the Belgian Mathematical Society. 20(4), 639–653 (2013)

  21. Dehghan, M., Hajarian, M.: Two algorithms for finding the Hermitian reflexive and skew–Hermitian solutions of Sylvester matrix equations. Applied Mathematics Letters. 24(4), 444–449 (2011)

22. Hajarian, M.: Solving the coupled Sylvester-like matrix equations via a new finite iterative algorithm. Engineering Computations. 34(5), 1446–1467 (2017)

  23. El-Shazly, N.M.: On the perturbation estimates of the maximal solution for the matrix equation \( X+{A}^T\sqrt{X^{-1}}A=P \). Journal of the Egyptian Mathematical Society. 24, 644–649 (2016)

  24. Khader, M.M.: Numerical treatment for solving fractional Riccati differential equation. Journal of the Egyptian Mathematical Society. 21, 32–37 (2013)

  25. Balaji, S.: Legendre wavelet operational matrix method for solution of fractional order Riccati differential equation. Journal of the Egyptian Mathematical Society. 23, 263–270 (2015)

26. Moore, B.: Principal component analysis in linear systems: controllability, observability and model reduction. IEEE Transactions on Automatic Control. 26, 17–31 (1981)

27. Kenney, C.S., Laub, A.J.: Controllability and stability radii for companion form systems. Mathematics of Control, Signals and Systems. 1, 239–256 (1988)

  28. Lam, J., Yan, W., Hu, T.: Pole assignment with eigenvalue and stability robustness. International Journal of Control. 72, 1165–1174 (1999)

29. Avrachenkov, K.E., Lasserre, J.B.: Analytic perturbation of Sylvester matrix equation. IEEE Transactions on Automatic Control. 47, 1116–1119 (2002)

Acknowledgements

The authors are grateful to the referees for their valuable suggestions.

Funding

Not applicable.

Author information

Contributions

MAR proposed the main idea of this paper. MAR and NME-S prepared the manuscript and performed all the steps of the proofs in this research. BIS has made substantial contributions to conception and designed the numerical methods. All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Authors’ information

Mohamed A. Ramadan works in Egypt as Professor of Pure Mathematics (Numerical Analysis) at the Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebein El-Koom, Egypt. His areas of expertise and interest are: eigenvalue assignment problems, solution of matrix equations, focusing on investigating, theoretically and numerically, the positive definite solution for class of nonlinear matrix equations, introducing numerical techniques for solving different classes of partial, ordering and delay differential equations, as well as fractional differential equations using different types of spline functions. Naglaa M. El-Shazly works in Egypt as Assistant Professor of Pure Mathematics (Numerical Analysis) at the Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebein El-Koom, Egypt. Her areas of expertise and interest are: eigenvalue assignment problems, solution of matrix equations, theoretically and numerically, iterative positive definite solutions of different forms of nonlinear matrix equations. Basem I. Selim works in Egypt as Assistant Lecturer of Pure Mathematics at the Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebein El-Koom, Egypt. His research interests include Finite Element Analysis, Numerical Analysis, Differential Equations and Engineering, Applied and Computational Mathematics.

Corresponding author

Correspondence to Naglaa M. El–shazly.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ramadan, M.A., El–shazly, N.M. & Selim, B.I. Iterative algorithm for the reflexive solutions of the generalized Sylvester matrix equation. J Egypt Math Soc 27, 27 (2019). https://doi.org/10.1186/s42787-019-0030-0

