# General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators

## Abstract

The main purpose of this paper is to introduce a new general-type proximal point algorithm for finding a common element of the set of solutions of a monotone inclusion problem, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators: the composition of a quasi-nonexpansive and a firmly nonexpansive mapping in real Hilbert spaces. We prove that the sequence {xn} generated by the proposed iterative algorithm converges strongly to a common element of the three sets above, without any commuting assumption on the mappings. Finally, applications of our theorems to finding common solutions of some nonlinear problems, namely composite minimization problems, convex optimization problems, and fixed point problems, are given to validate our new findings.

## Introduction

Throughout this paper, we assume that H is a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥, and that K is a nonempty closed convex subset of H. For a set-valued operator A:H→2^H, the domain of A, D(A), the image of a subset S of H, A(S), the range of A, R(A), and the graph of A, G(A), are defined as follows:

$$D(A):= \{x\in\,H\,:\,Ax\neq\emptyset\},\,\,A(S):= \cup \{Ax\,:\,x\in\,S\},$$
$$R(A):= A(H),\,\,G(A):=\{ (x,u):\,\,x\in\,D(A),\,\,u\in\,Ax\}.$$

An operator A:K→H is called monotone if

$$\begin{array}{@{}rcl@{}} \langle Ax-Ay,\,x-y\rangle \geq0,~~\forall~x,y\in K. \end{array}$$

An operator A:K→H is said to be strongly monotone if there exists a constant k∈(0,1) such that

$$\langle Ax-Ay, x-y\rangle \geq k\Vert x-y\Vert^{2},\,\,\, \forall x,y\in K.$$

It is said to be α-inverse strongly monotone if there exists a constant α>0 such that

$$\begin{array}{@{}rcl@{}} \langle Ax-Ay,\,\, x-y\rangle\geq \alpha \|Ax-Ay \|^{2},~~\forall~x,y\in K. \end{array}$$

It is immediate that if A is α-inverse strongly monotone, then A is monotone and Lipschitz continuous with constant 1/α.
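This implication is easy to test numerically. The sketch below (our own illustration, not part of the paper) takes A to be the logistic sigmoid, the gradient of the convex function x ↦ log(1+e^x); since this gradient is 1/4-Lipschitz, the Baillon-Haddad theorem makes it 4-inverse strongly monotone, and sampling confirms both monotonicity with α = 4 and the Lipschitz constant 1/α = 1/4:

```python
import math
import random

def A(x):
    """Logistic sigmoid: the gradient of the convex function x -> log(1 + e^x)."""
    return 1.0 / (1.0 + math.exp(-x))

def check_inverse_strongly_monotone(alpha, pairs):
    # <Ax - Ay, x - y> >= alpha * |Ax - Ay|^2 on every sampled pair
    return all((A(x) - A(y)) * (x - y) >= alpha * (A(x) - A(y)) ** 2 - 1e-12
               for x, y in pairs)

def check_lipschitz(L, pairs):
    # |Ax - Ay| <= L * |x - y|: the constant implied by alpha-ism is L = 1/alpha
    return all(abs(A(x) - A(y)) <= L * abs(x - y) + 1e-12 for x, y in pairs)

random.seed(0)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
print(check_inverse_strongly_monotone(4.0, samples))  # alpha = 4
print(check_lipschitz(0.25, samples))                 # 1/alpha = 1/4
```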

Let A:H→H be a single-valued nonlinear mapping and B:H→2^H be a set-valued mapping. The variational inclusion problem is as follows: find x∈H such that

$$0\in B(x)+A(x).$$
(1)

We denote the set of solutions of this problem by (A+B)^{-1}(0). If A=0, then problem (1) becomes the inclusion problem introduced by Rockafellar [1]. Inclusions of the form specified by (1) arise in numerous problems of fundamental importance in mathematical optimization, either directly or through an appropriate reformulation. In what follows, we provide some motivating examples.

### General monotone inclusions

Consider the inclusion problem

$$\text{find}\,\, x\in H_{1}\,\,\text{such that}\,\, 0 \in (A + K^{*} BK)(x),$$
(2)

where $$A : H_{1}\to 2^{H_{1}}$$ and $$B : H_{2}\to 2^{H_{2}}$$ are maximally monotone operators and K:H1→H2 is a bounded linear operator with adjoint K*. As was observed in [2], solving (2) can be equivalently cast as the following monotone inclusion posed in the product space:

$$\text{find}\,\, \left(\begin{array}{c} x\\ y \end{array}\right) \in H_{1}\times H_{2}\,\,\text{such that}\,\, \left(\begin{array}{c} 0\\ 0 \end{array}\right) \in \left(\begin{array}{cc} A& 0 \\ 0&B^{-1} \end{array}\right) \left(\begin{array}{c} x\\ y \end{array}\right) + \left(\begin{array}{cc} 0& K^{*} \\ -K& 0 \end{array}\right) \left(\begin{array}{c} x\\ y \end{array}\right),$$
(3)

Notice that the first operator in (3) is maximally monotone whereas the second is bounded and linear (in particular, it is Lipschitz continuous with full domain). Consequently, (3) is also of the form specified by (1).
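The monotonicity of the second operator in (3) comes from its skewness: for S(x,y) = (K*y, −Kx) one has ⟨S(x,y),(x,y)⟩ = ⟨K*y,x⟩ − ⟨Kx,y⟩ = 0. A small numerical sketch (the matrix K and the vectors below are chosen arbitrarily by us):

```python
import random

def apply_skew(K, x, y):
    """S(x, y) = (K^T y, -K x) for a real matrix K (so the adjoint K^* is K^T)."""
    m, n = len(K), len(K[0])
    Kt_y = [sum(K[i][j] * y[i] for i in range(m)) for j in range(n)]
    K_x = [sum(K[i][j] * x[j] for j in range(n)) for i in range(m)]
    return Kt_y, [-v for v in K_x]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
K = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # K: R^3 -> R^2
x = [random.uniform(-1, 1) for _ in range(3)]
y = [random.uniform(-1, 1) for _ in range(2)]
sx, sy = apply_skew(K, x, y)
# <S(x, y), (x, y)> = <K^T y, x> - <K x, y> = 0, hence S is monotone
val = inner(sx, x) + inner(sy, y)
print(abs(val) < 1e-12)
```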

Many convex optimization problems can be formulated as the saddle point problem:

$$\min_{x\in H}\max_{y\in H} \left(g(x)+\Phi(x,y)-f(y)\right),$$
(4)

where f,g:H→(−∞, +∞] are proper, lower semi-continuous, convex functions and $$\Phi : H \times H \to \mathbb {R}$$ is a smooth convex-concave function. Problems of this form arise naturally in machine learning, statistics, and elsewhere, where the dual (maximization) problem comes either from dualizing the constraints in the primal problem or from using the Fenchel-Legendre transform to leverage a nonsmooth composite part. Through its first-order optimality conditions, the saddle point problem (4) can be expressed as the monotone inclusion

$$\text{find}\,\, \left(\begin{array}{c} x\\ y \end{array}\right) \in H\times H\,\,\text{such that}\,\, \left(\begin{array}{c} 0\\ 0 \end{array}\right) \in \left(\begin{array}{c} \partial g(x) \\ \partial f(y) \end{array}\right) + \left(\begin{array}{c} \nabla_{x}\Phi(x,y) \\ -\nabla_{y}\Phi(x,y) \end{array}\right),$$
(5)

which is of the form specified by (1). In general, monotone inclusions of type (1) are nonlinear, and there is no known method for finding closed-form solutions of them. Consequently, methods for approximating solutions of such inclusions are of interest.

The best-known splitting method for solving the inclusion (1) when A is single-valued is the forward-backward method, called so because each iteration combines one forward evaluation of A with one backward evaluation of B. It was introduced by Passty [3] and Lions and Mercier [4]. More precisely, the method generates a sequence according to

$$x_{n+1}= (I+\lambda_{n} B)^{-1} (I-\lambda_{n} A)x_{n},\,\,\lambda_{n}>0,$$
(6)

under the condition that D(B)⊂D(A). It was shown, see for example [5], that weak convergence of (6) requires quite restrictive assumptions on A and B, such as the inverse of A being strongly monotone, or A being Lipschitz continuous and monotone with the operator A+B strongly monotone on D(B). Hence, modifications are necessary in order to guarantee strong convergence of the forward-backward splitting method (see, for example, [5–12] and the references contained in them).
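As a concrete toy instance of the forward-backward scheme (our choice, not from the paper), take H = ℝ, A x = x − 3 (which is 1-inverse strongly monotone) and B = ∂|·|, whose resolvent (I + λB)^{-1} is the soft-thresholding map. The unique solution of 0 ∈ Ax + Bx is x = 2, and the iteration recovers it:

```python
def soft_threshold(x, lam):
    """Resolvent (I + lam * d|.|)^{-1} of the subdifferential of the absolute value."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def A(x):
    return x - 3.0  # 1-inverse strongly monotone on R

def forward_backward(x0, lam=0.5, n_iter=200):
    # one forward step on A, then one backward (resolvent) step on B = d|.|
    x = x0
    for _ in range(n_iter):
        x = soft_threshold(x - lam * A(x), lam)
    return x

print(forward_backward(10.0))  # approaches the solution x = 2 of 0 in Ax + Bx
```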

A map T:K→K is said to be Lipschitz if there exists an L≥0 such that

$$\|Tx-Ty\|\le L\|x-y\|,\,\, ~~~\forall x,y\in K,$$
(7)

if L<1, T is called a contraction, and if L=1, T is called nonexpansive. We denote by Fix(T) the set of fixed points of the mapping T, that is, Fix(T):={x∈D(T):x=Tx}; we assume that Fix(T) is nonempty. A map T is called quasi-nonexpansive if ∥Tx−p∥≤∥x−p∥ holds for all x∈K and p∈Fix(T). The mapping T:K→K is said to be firmly nonexpansive if

$$\|Tx-Ty\|^{2}\leq \|x-y\|^{2} -\|(x - y)-(Tx-Ty) \|^{2},\,\forall x,y\in K.$$

We remark here that a nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive; however, the converse need not be true, as the following example [13] shows.

### Example 1

[13] Let $$H = \mathbb {R}$$ and define a mapping T:H→H by

$$\begin{array}{@{}rcl@{}} Tx= \left \{ \begin{array}{ll} \frac{x}{2}\sin\left(\frac{1}{x}\right),\,\,\, x\neq 0\\\\ 0,\,\,\,\,\,\,\, x=0. \end{array} \right. \end{array}$$
(8)

Then, T is quasi-nonexpansive but not nonexpansive.
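Both properties of Example 1 can be probed numerically (our sketch): Fix(T) = {0} and |Tx| ≤ |x|/2 gives quasi-nonexpansiveness, while near x = 1/(2π) the slope of T is about −π, so the Lipschitz condition with L = 1 fails:

```python
import math

def T(x):
    return 0.0 if x == 0 else 0.5 * x * math.sin(1.0 / x)

# Quasi-nonexpansiveness: Fix(T) = {0} and |Tx - 0| <= |x|/2 <= |x - 0|.
samples = [k / 100.0 for k in range(-500, 501)]
quasi = all(abs(T(x)) <= abs(x) for x in samples)

# Failure of nonexpansiveness: near x = 1/(2*pi) the slope of T is about -pi.
x, y = 1.0 / (2.0 * math.pi), 1.0 / (2.0 * math.pi) + 1e-4
ratio = abs(T(x) - T(y)) / abs(x - y)
print(quasi, ratio > 1.0)
```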

Fixed point theory has been revealed as a very powerful and effective tool for solving a large number of problems which emerge from real-world applications and can be translated into equivalent fixed point problems. In order to obtain approximate solutions of fixed point problems, various iterative methods have been proposed (see, e.g., [14–19] and the references therein).

In 2013, Yuan [20], motivated by the fact that the forward-backward method is remarkably useful for finding fixed points of nonlinear mappings, proved the following theorem.

### Theorem 1

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let A:C→H be an α-inverse strongly monotone operator and S:C→C be a quasi-nonexpansive mapping. Let B be a maximal monotone operator from H into 2^H such that the domain of B is included in C, F:=Fix(S)∩(A+B)^{-1}(0) is nonempty, and I−S is demiclosed. Let {αn} be a real number sequence in [0,1] and {λn} be a positive real number sequence. Let {xn} be a sequence in C generated by the following iterative process:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{1}\in C,\\ C_{1}=C\\ y_{n}= \alpha_{n} x_{n} +(1-\alpha_{n}) SJ_{\lambda_{n}}(x_{n}- \lambda_{n} Ax_{n}),\\ C_{n+1}=\lbrace z\in C_{n}:\,\, \Vert y_{n}-z \Vert\leq \Vert x_{n}-z \Vert \rbrace,\\ x_{n+1}=P_{C_{n+1}}x_{1},\,\,\,\,n\geq 1, \end{array} \right. \end{array}$$
(9)

where $$J_{\lambda _{n}} = (I + \lambda _{n} B)^{-1}.$$ Suppose that the sequences {αn} and {λn} satisfy the following restrictions:

(i) 0≤αn≤a<1;

(ii) 0<b≤λn≤c<2α.

Then, the sequence {xn} converges strongly to P_F x_1.

However, we observe that the recursion formula studied in Theorem 1 is not simple: each step requires constructing the set C_{n+1} and computing the metric projection onto it.

Recently, iterative methods for nonexpansive mappings have been applied to solve convex minimization problems; see, e.g., [14, 16] and the references therein. A typical problem is to minimize a quadratic function over the set of the fixed points of a nonexpansive mapping on a real Hilbert space H:

$$\min_{x\in Fix(T)}\Big(\frac{1}{2}\langle Ax, x \rangle-\langle b, x\rangle \Big).$$
(10)

In [14], Xu proved that the sequence {xn} defined by the iterative method below, with initial guess x0∈H chosen arbitrarily:

$$x_{n+1}= \alpha_{n} b + (I- \alpha_{n}A)Tx_{n},\,\, \,\, n\geq 0,$$
(11)

converges strongly to the unique solution of the minimization problem (10), where T is a nonexpansive mapping on H and A is a strongly positive bounded linear operator. In 2006, Marino and Xu [16] extended Moudafi's results [15] and Xu's results [14] via the following general iteration: x0∈H and

$$x_{n+1}= \alpha_{n} \gamma f(x_{n}) + (I- \alpha_{n}A)Tx_{n}, \,\, n\geq 0,$$
(12)

where $$\{\alpha _{n}\}_{n \in \mathbb {N}} \subset (0,1),$$ A is a strongly positive bounded linear operator on H, and T is a nonexpansive mapping. Under suitable conditions, they proved that the sequence {xn} defined by (12) converges strongly to a fixed point of T which is the unique solution of the following variational inequality:

$$\langle Ax^{*}-\gamma f(x^{*}), x^{*}-p\rangle\leq 0,\,\,\,\, \forall p\in Fix(T).$$
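A scalar sketch of iteration (12) (all concrete choices below are ours, purely for illustration): take T = the metric projection onto [1,2] (nonexpansive, Fix(T) = [1,2]), A = I (strongly positive with constant 1), γ = 1, and f ≡ 0, which is a contraction. The variational inequality then selects the minimum-norm fixed point x* = 1, and the iterates drift there:

```python
def T(x):
    """Metric projection onto [1, 2]: nonexpansive with Fix(T) = [1, 2]."""
    return min(max(x, 1.0), 2.0)

def marino_xu(x0, n_iter=20000):
    # x_{n+1} = alpha_n * gamma * f(x_n) + (I - alpha_n * A) T x_n
    # with A = I, gamma = 1, f = 0 and alpha_n = 1/(n + 2)
    x = x0
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        x = (1.0 - alpha) * T(x)
    return x

print(marino_xu(5.0))  # drifts toward the minimum-norm fixed point 1
```

Convergence is slow (the error behaves like αn), which is typical of Halpern-type schemes with αn = 1/(n+2).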

If T1 and T2 are self-mappings on K, a point x∈K is called a common fixed point of Ti (i=1,2) if x∈Fix(T1)∩Fix(T2). To find solutions of common fixed point problems, several iterative approximation methods have been introduced and studied; such problems can be applied to solving various problems in science and applied science, see [21, 22] for instance. We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2) and that, in almost all results on common fixed points of nonlinear mappings in Hilbert spaces, commuting assumptions on Ti (i=1,2) are needed.

One of the major problems in optimization is to find:

$$x^{*} \in H\,\,\text{such that}\,\, g(x^{*})= \min_{y\in H} g(y).$$
(13)

The set of all minimizers of g on H is denoted by argmin_{y∈H} g(y). A successful and powerful tool for solving this problem is the well-known proximal point algorithm (shortly, the PPA), which was initiated by Martinet [23] in 1970 and later studied by Rockafellar [1] in 1976. Let H be a real Hilbert space and g:H→(−∞, +∞] be a proper, lower semi-continuous, convex function. The PPA is defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{ll} x_{1}\in H,\\ x_{n+1}=\text{argmin}_{y\in H} \left[ g(y)+\frac{1}{2\lambda_{n}}\Vert x_{n}-y\Vert^{2}\right], \end{array} \right. \end{array}$$
(14)

where λn>0 for all n≥1. In [1], Rockafellar proved that the sequence {xn} given by (14) converges weakly to a minimizer of g. He then posed the following question:

Q1 Does the sequence {xn} converge strongly? This question was resolved in the negative by Güler [24], who produced a proper lower semi-continuous convex function g in l2 for which the PPA converges weakly but not strongly. This leads naturally to the following question:

Q2 Can the PPA be modified to guarantee strong convergence? In response to Q2, several works have been done (see, e.g., Güler [24], Solodov and Svaiter [25], Kamimura and Takahashi [26], Lehdili and Moudafi [27], Reich [28], and the references therein).
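When the proximal step in (14) has a closed form, the PPA is one line per iteration. A sketch (our choice of g, not from the paper): for g(x) = |x|, the proximal map argmin_y [|y| + (1/(2λ))(x−y)²] is the soft-threshold operator, and the iterates reach the minimizer 0 after finitely many steps:

```python
def prox_abs(x, lam):
    """argmin_y ( |y| + (1/(2*lam)) * (x - y)**2 ): the soft-threshold map."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ppa(x0, lam=0.3, n_iter=50):
    # x_{n+1} = argmin_y [ g(y) + (1/(2*lam)) * |x_n - y|^2 ] with g = |.|
    x = x0
    for _ in range(n_iter):
        x = prox_abs(x, lam)
    return x

print(ppa(7.0))  # the minimizer of |.| is 0, reached after finitely many steps
```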

Motivated by the fixed point techniques of Yuan [20], the fact that the class of quasi-nonexpansive mappings properly includes that of nonexpansive mappings, and improvements of the proximal point algorithm, we propose a new iterative scheme for finding a common element of the set of solutions of an inclusion problem with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators in a real Hilbert space. We show that the proposed iterative scheme converges strongly to a common element of the three sets, and new strong convergence theorems are deduced. Our proposed algorithm does not involve commuting assumptions on Ti (i=1,2). Our technique of proof is of independent interest.

## Preliminaries

The demiclosedness of a nonlinear operator T usually plays an important role in dealing with the convergence of fixed point iterative algorithms.

### Definition 1

Let H be a real Hilbert space and T:D(T)⊂H→H be a mapping. I−T is said to be demiclosed at 0 if, for any sequence {xn}⊂D(T) such that {xn} converges weakly to p and ∥xn−Txn∥ converges to zero, we have p∈Fix(T).

### Lemma 1

(Demiclosedness Principle, [29]) Let H be a real Hilbert space and K be a nonempty closed and convex subset of H. Let T:K→K be a nonexpansive mapping. Then, I−T is demiclosed at zero.

### Lemma 2

([30]) Let H be a real Hilbert space. Then, for any x,y∈H, the following hold:

$$\lVert x+ y\rVert^{2} \leq \lVert x\rVert^{2}+ 2\langle y, x+y \rangle.$$
$$\Vert \lambda x+(1-\lambda)y \Vert^{2}=\lambda \Vert x \Vert^{2}+(1-\lambda) \Vert y \Vert^{2}-(1-\lambda)\lambda \Vert x-y \Vert^{2},\,\,\, \lambda\in (0,1).$$

Let B:H→2^H be a maximal monotone set-valued mapping. For λ>0, we define the resolvent operator $$J_{\lambda }^{B}$$ generated by B and λ as follows:

$$J_{\lambda}^{B}x=(I+\lambda B)^{-1}x,\,\, \forall x\in H.$$

It is easy to see that the resolvent operator $$J_{\lambda }^{B}$$ is single-valued, nonexpansive, and 1-inverse strongly monotone; moreover, a solution of problem (1) is a fixed point of the operator $$J_{\lambda }^{B} (I-\lambda A)$$ for all λ>0.
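The fixed-point characterization can be verified on a toy instance (ours): with A x = x − 3 and B = ∂|·|, the unique solution of 0 ∈ Ax + Bx is x = 2 (since A(2) = −1 and ∂|2| = {1}), and it is indeed a fixed point of $$J_{\lambda }^{B}(I-\lambda A)$$ for every λ > 0 tried:

```python
def resolvent_abs(x, lam):
    """J_lam^B = (I + lam * B)^{-1} for B = d|.|: the soft-threshold map."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def A(x):
    return x - 3.0  # single-valued and 1-inverse strongly monotone

sol = 2.0  # 0 in A(sol) + B(sol): A(2) = -1 and d|2| = {1}
checks = [abs(resolvent_abs(sol - lam * A(sol), lam) - sol) < 1e-12
          for lam in (0.1, 0.5, 1.0, 2.0)]
print(all(checks))  # sol is a fixed point of J_lam^B (I - lam A) for each lam
```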

### Lemma 3

[4] Let B:H→2^H be a maximal monotone mapping and A:H→H be a Lipschitz continuous monotone mapping. Then, the mapping B+A:H→2^H is maximal monotone.

### Lemma 4

([31]) Assume that {an} is a sequence of nonnegative real numbers such that an+1≤(1−αn)an+αnσn for all n≥0, where {αn} is a sequence in (0,1) and {σn} is a sequence in $$\mathbb {R}$$ such that

(a) $$\sum _{n=0}^{\infty }\alpha _{n} = \infty$$; (b) $$\limsup _{n\rightarrow \infty }\,\sigma _{n}\leq 0$$ or $$\sum _{n=0}^{\infty }\vert \sigma _{n} \alpha _{n}\vert <\infty$$. Then, $${\lim }_{n\rightarrow \infty }a_{n}=0$$.

### Lemma 5

[32] Let K be a nonempty closed convex subset of a real Hilbert space H and A:K→H be a k-strongly monotone and L-Lipschitzian operator with k>0, L>0. Assume that $$0<\eta <\frac {2k}{ L^{2}}$$ and $$\tau =\eta \Big (k-\frac { L^{2}\eta }{2}\Big).$$ Then, for each $$t\in \Big (0, \min \{1,\,\, \frac {1}{\tau }\}\Big),$$ we have

$$\| (I-t\eta A)x-(I-t\eta A)y\| \leq (1- t\tau) \| x-y\|,\,\, \forall\, x,y\in K.$$

### Lemma 6

[33] Let {tn} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $$\{t_{n_{i}}\}$$ of {tn} such that $$t_{n_{i}} \leq t_{n_{i}+1}$$ for all i≥0. For sufficiently large numbers $$n\in \mathbb {N},$$ an integer sequence {τ(n)} is defined as follows:

$$\tau(n) = \max\lbrace k\leq n:\,\,t_{k} \leq t_{k+1} \rbrace.$$

Then, τ(n)→∞ as n→∞ and

$$\max\lbrace t_{\tau(n)},\,\,\, t_{n} \rbrace\leq t_{\tau(n)+1}.$$

### Lemma 7

Let H be a real Hilbert space and A:H→H be an α-inverse strongly monotone mapping. Then, I−θA is a nonexpansive mapping for all θ∈[0,2α].

### Proof

For all x,y∈H, we have

$$\begin{array}{@{}rcl@{}} \Vert (I-\theta A)x-(I-\theta A)y\Vert^{2} &=& \Vert (x - y)- \theta (Ax- Ay)\Vert^{2}\\ &=& \Vert x-y \Vert^{2}- 2\theta\langle Ax-Ay, x-y \rangle+\theta^{2}\Vert Ax-Ay \Vert^{2}\\ &\leq & \Vert x-y \Vert^{2}+\theta(\theta - 2\alpha)\Vert Ax-Ay \Vert^{2}. \end{array}$$

Since θ(θ−2α)≤0 for θ∈[0,2α], we obtain the desired result. □
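For instance (our illustration, not from the paper), with A x = x − 3 we have α = 1, and (I − θA)x = (1−θ)x + 3θ is an affine map with Lipschitz constant |1−θ|, which is at most 1 exactly when θ ∈ [0, 2] = [0, 2α]:

```python
def slope_I_minus_thetaA(theta):
    # For A x = x - 3 (alpha = 1), (I - theta A) x = (1 - theta) x + 3 theta,
    # an affine map whose Lipschitz constant is |1 - theta|.
    return abs(1.0 - theta)

inside = all(slope_I_minus_thetaA(t / 10.0) <= 1.0 for t in range(0, 21))  # theta in [0, 2]
outside = slope_I_minus_thetaA(2.5) > 1.0  # a theta outside [0, 2*alpha] fails
print(inside, outside)
```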

Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. For every λ>0, the Moreau-Yosida resolvent $$J_{\lambda }^{F}$$ of F is defined by:

$${J_{\lambda}^{F}}x= \text{argmin}_{u\in H} \left[ F(u)+\frac{1}{2\lambda}\Vert x-u\Vert^{2}\right],$$

for all x∈H. It was shown in [24] that the set of fixed points of the resolvent associated with F coincides with the set of minimizers of F. Also, the resolvent $${J_{\lambda }^{F}}$$ of F is nonexpansive for all λ>0.

### Lemma 8

(Miyadera [34]) Let H be a real Hilbert space and F:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. For every r>0 and μ>0, the following identity holds:

$$J^{F}_{r}x= J^{F}_{\mu}\left(\frac{\mu}{r}x +\left(1-\frac{\mu}{r}\right)J^{F}_{r}x \right).$$
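Lemma 8 can be checked numerically for F = |·|, whose resolvent is the soft-threshold map (our illustration; the identity should hold for all r, μ > 0):

```python
def J(x, lam):
    """Moreau-Yosida resolvent of F = |.|: the soft-threshold map."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

ok = True
for x in (-3.0, -0.4, 0.0, 0.7, 2.5, 10.0):
    for r in (0.5, 1.0, 2.0):
        for mu in (0.25, 1.0, 3.0):
            lhs = J(x, r)
            rhs = J((mu / r) * x + (1.0 - mu / r) * J(x, r), mu)
            ok = ok and abs(lhs - rhs) < 1e-12
print(ok)
```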

### Lemma 9

Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Then, for every x,y∈H and λ>0, the following subdifferential inequality holds:

$$\frac{1}{2\lambda}\Vert J_{\lambda}^{F} x-y \Vert^{2}-\frac{1}{2\lambda}\Vert x-y \Vert^{2}+ \frac{1}{2\lambda}\Vert x-J_{\lambda}^{F} x\Vert^{2}+ F\left(J_{\lambda}^{F} x\right)\leq F(y).$$
(15)

## Main results

We begin with the following result.

### Lemma 10

Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping such that Fix(T1)∩Fix(T2) is nonempty. Then, Fix(T1)∩Fix(T2)=Fix(T1∘T2) and T1∘T2 is a quasi-nonexpansive mapping on K.

### Proof

We split the proof into two steps.

Step 1: First, we show that Fix(T1)∩Fix(T2)=Fix(T1∘T2). We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2). Thus, we only need to show that Fix(T1∘T2)⊆Fix(T1)∩Fix(T2). Let p∈Fix(T1)∩Fix(T2) and q∈Fix(T1∘T2). By using the properties of T1 and T2, we have

$$\begin{array}{@{}rcl@{}} \ \Vert q-p \Vert^{2} & = & \Vert T_{1}\circ T_{2} q- T_{1}p \Vert^{2} \ \\ & \leq & \Vert T_{2}q-p \Vert^{2}. \end{array}$$
(16)

Using the fact that T2 is firmly nonexpansive, we have

$$\begin{array}{@{}rcl@{}} \| T_{2}q-p\|^{2}&\leq& \langle T_{2}q-p, q-p\rangle\\ &=& \frac{1}{2}(\| T_{2}q -p\|^{2}+ \|q-p\|^{2}- \| T_{2}q-q\|^{2}), \end{array}$$

which yields

$$\| T_{2}q -p\|^{2}\leq \|q-p\|^{2}- \| T_{2}q-q\|^{2}.$$
(17)

Combining (16) and (17), we obtain

$$\begin{array}{@{}rcl@{}} \| T_{2}q-p\|^{2} &\leq& \|q-p\|^{2}- \| T_{2}q-q\|^{2}\\ & \leq & \Vert T_{2}q-p \Vert^{2}- \| T_{2}q-q\|^{2}. \end{array}$$

Clearly, ∥T2q−q∥=0, which implies that

$$q= T_{2}q.$$

Keeping in mind that T1∘T2q=q, we have

$$q=T_{1}\circ T_{2}q= T_{1}q.$$

Step 2: We show that T1∘T2 is a quasi-nonexpansive mapping on K. Let x∈K and p∈Fix(T1∘T2). Then, p∈Fix(T1)∩Fix(T2) by Step 1. We observe that

$$\begin{array}{@{}rcl@{}} \|T_{1}\circ T_{2}x-p \| &=&\|T_{1}\circ T_{2}x-T_{1}p \| \\ &\leq & \| T_{2}x-p \|\\ &\leq& \|x-p\|. \end{array}$$

This completes the proof. □
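Lemma 10 can be illustrated numerically (all choices ours): take T2 = the projection onto [−1,1], which is firmly nonexpansive with Fix(T2) = [−1,1], and T1 = the quasi-nonexpansive map of Example 1, with Fix(T1) = {0}. Then Fix(T1)∩Fix(T2) = {0}, and sampling suggests that T1∘T2 fixes only 0 and is quasi-nonexpansive:

```python
import math

def T1(x):
    """Quasi-nonexpansive map of Example 1; Fix(T1) = {0}."""
    return 0.0 if x == 0 else 0.5 * x * math.sin(1.0 / x)

def T2(x):
    """Projection onto [-1, 1]: firmly nonexpansive; Fix(T2) = [-1, 1]."""
    return min(max(x, -1.0), 1.0)

def comp(x):
    return T1(T2(x))

samples = [k / 50.0 for k in range(-250, 251)]
no_other_fixed = all(x == 0.0 or abs(comp(x) - x) > 1e-9 for x in samples)
quasi = all(abs(comp(x)) <= abs(x) for x in samples)  # w.r.t. the common fixed point 0
print(no_other_fixed and comp(0.0) == 0.0 and quasi)
```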

We now prove the following theorem.

### Theorem 2

Let H be a real Hilbert space and K be a nonempty closed convex subset of H. Let A:K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator from H into 2^H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping, and let M:K→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)^{-1}(0)∩argmin_{u∈K} g(u) is nonempty. Let {xn} be a sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \Big), \end{array} \right. \end{array}$$
(18)

where {λn}, {θn}, and {αn} are sequences in (0,1) satisfying the following conditions:

(i) $${\lim }_{n\rightarrow \infty }\alpha _{n}=0$$, $$\sum _{n=0}^{\infty } \alpha _{n}=\infty$$, and λn∈(λ, d)⊂(0, min{1, 2α});

(ii) $$\liminf _{n\rightarrow \infty } (1-\theta _{n})\theta _{n}> 0$$;

(iii) I−T1∘T2 is demiclosed at the origin;

(iv) $$0<\eta <\frac {2\mu }{ L^{2}}$$ and 0<γb<τ, where $$\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).$$

Then, the sequence {xn} generated by (18) converges strongly to x*∈Γ, which solves the following variational inequality:

$$\langle \eta Mx^{*} -\gamma f(x^{*}), x^{*}-p \rangle\leq 0,\,\,\,\, \forall p\in \Gamma.$$
(19)
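Before turning to the proof, here is a numerical sketch of scheme (18) on a deliberately simple scalar instance (every concrete choice below is ours, made only so that the hypotheses are easy to satisfy): K = H = ℝ (so P_K = I), g = |·| and B = ∂|·| (so both the proximal step and the resolvent are soft-thresholding), A = I (1-inverse strongly monotone), T1 = the map of Example 1, T2 = the projection onto [−1,1], M = I (μ = L = 1), f(x) = x/2 (b = 1/2), η = 1 (so τ = 1/2), γ = 1/2 (so γb = 1/4 < τ), λn ≡ 1/2, θn ≡ 1/2, αn = 1/(n+2). Here Γ = {0}, and the iterates approach it:

```python
import math

def soft(x, lam):
    """Soft threshold: prox of lam*|.| and also the resolvent J_lam^B for B = d|.|."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def T1(x):  # quasi-nonexpansive (Example 1)
    return 0.0 if x == 0 else 0.5 * x * math.sin(1.0 / x)

def T2(x):  # firmly nonexpansive projection onto [-1, 1]
    return min(max(x, -1.0), 1.0)

def scheme18(x0, n_iter=2000, lam=0.5, theta=0.5, eta=1.0, gamma=0.5):
    # A x = x, M x = x, f(x) = x/2, P_K = identity since K = H = R
    x = x0
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        u = soft(x, lam)                          # u_n: proximal step on g = |.|
        v = theta * u + (1 - theta) * T1(T2(u))   # v_n: averaged composite step
        z = soft(v - lam * v, lam)                # z_n = J_lam^B (v_n - lam * A v_n)
        x = alpha * gamma * (x / 2.0) + (1 - alpha * eta) * z
    return x

print(abs(scheme18(5.0)) < 1e-3)  # the common element is Gamma = {0}
```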

### Proof

By the choice of η and γ, the operator ηM−γf is strongly monotone, so the variational inequality (19) has a unique solution in Γ; in what follows, we denote this solution by x*. Without loss of generality, we may assume $$\alpha _{n}\in \Big (0, \min \{1, \frac {1}{\tau }\}\Big).$$ First, we prove that the sequence {xn} is bounded. Let p∈Γ. Then, g(p)≤g(u) for all u∈K. This implies that

$$g(p)+\frac{1}{2\lambda_{n}}\Vert p-p\Vert^{2} \leq g(u)+\frac{1}{2\lambda_{n}}\Vert u-p\Vert^{2}$$

and hence $${J_{\lambda _{n}}^{g}}p=p$$ for all n≥1, where $${J_{\lambda _{n}}^{g}}$$ is the Moreau-Yosida resolvent of g in K. Since $$u_{n}={J_{\lambda _{n}}^{g}}x_{n}$$ and the resolvent is nonexpansive, we obtain

$$\|u_{n} -p\|\leq \| x_{n}-p\|,\,\,\, \forall n\geq 0.$$

By using (18), Lemma 2, and Lemma 10, we have

$$\begin{array}{@{}rcl@{}} \| v_{n}-p\|^{2}&=&\Big\|\theta_{n}(u_{n}-p) + (1-\theta_{n})(T_{1}\circ T_{2} u_{n}-p)\Big\|^{2}\\ &=&\theta_{n} \| u_{n}-p\|^{2} + (1-\theta_{n}) \| T_{1}\circ T_{2} u_{n} -p\|^{2} -(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n} -u_{n}\|^{2}\\ &\leq &\theta_{n} \|u_{n}-p\|^{2} + (1-\theta_{n})\|u_{n}-p\|^{2}-(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n} -u_{n}\|^{2}. \end{array}$$

Hence,

$$\|v_{n}-p\|^{2} \leq \| u_{n}-p\|^{2}-(1-\theta_{n})\theta_{n} \|T_{1}\circ T_{2} u_{n}-u_{n}\|^{2}.$$
(20)

Since θn∈(0,1), we obtain

$$\| v_{n}-p \|\leq \|u_{n}-p \|.$$
(21)

For each n≥0, we put $$z_{n} : = J_{\lambda _{n}}^{B}(I-\lambda _{n} A)v_{n}.$$ Then, since $$J_{\lambda _{n}}^{B}$$ is nonexpansive and, by Lemma 7, so is I−λnA, we have

$$\|z_{n}-p\| = \|J_{\lambda_{n}}^{B} (I-\lambda_{n} A)v_{n}- p\|\leq \| v_{n}-p\|,\,\,\, \forall n\geq 0.$$

Therefore,

$$\|z_{n}-p\|\leq \|v_{n}-p \| \leq \|u_{n}-p\| \leq \| x_{n}-p\|.$$
(22)

Hence, by using Lemma 5, inequality (22), and (18), we have

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-p\rVert & = & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M)z_{n} \Big)- p\rVert\\ &\leq& \alpha_{n} \gamma \lVert f(x_{n}) -f(p)\rVert + (1-\tau\alpha_{n}) \lVert z_{n} -p\rVert + \alpha_{n}\lVert \gamma f(p)- \eta Mp\rVert \\ &\leq&(1-\alpha_{n}(\tau- b\gamma))\lVert x_{n}-p\rVert+\alpha_{n}\lVert \gamma f(p)- \eta Mp\rVert\\ &\leq& \max{\{\lVert x_{n} - p\|,\frac{\lVert \gamma f(p)- \eta M p \|}{\tau- b\gamma}\}}. \end{array}$$

By induction, we can conclude that

$$\lVert x_{n}-p\rVert \leq \max{\{\lVert x_{0} - p\|,\frac{\lVert \gamma f(p)- \eta M p \|}{\tau- b\gamma}\}}, \,\,\,\, n \geq 1.$$

Hence, {xn} is bounded. By using Lemma 5 and inequality (20), we obtain

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-p\rVert^{2} &\leq & \lVert \alpha_{n}(\gamma f(x_{n})-\eta Mp)+ (I-\eta \alpha_{n} M)(z_{n} -p)\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} -p\rVert^{2}\\&& +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert z_{n} -p\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert v_{n} -p\rVert^{2}+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert z_{n} -p\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2}\lVert x_{n}-p \rVert^{2}-(1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n}-u_{n}\|^{2}\\ &&+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert x_{n} -p\rVert. \end{array}$$

Hence,

$$\begin{array}{@{}rcl@{}} (1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} \leq \|x_{n}-p\|^{2} -\lVert x_{n+1}-p\rVert^{2} + \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2}\\+ 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert x_{n} -p\rVert. \end{array}$$

Since {xn} is bounded, there exists a constant C>0 such that

$$(1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} \leq \|x_{n}-p \|^{2} -\lVert x_{n+1}-p\rVert^{2} + \alpha_{n} C.$$
(23)

Next, we prove that xn→x*. To see this, let us consider two possible cases.

Case 1. Assume that the sequence {∥xn−x*∥} is monotonically decreasing. Then, {∥xn−x*∥} must be convergent. Clearly, we have

$${\lim}_{n\rightarrow \infty} \Big[\Vert x_{n}-x^{*} \Vert^{2}-\Vert x_{n+1}-x^{*}\Vert^{2}\Big]= 0.$$
(24)

It then follows from (23) that

$${\lim}_{n\rightarrow \infty}(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} =0.$$
(25)

Since $$\liminf _{n\rightarrow \infty } (1-\theta _{n}) \theta _{n}> 0,$$ we have

$${\lim}_{n\rightarrow \infty}\Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert =0.$$
(26)

We observe that

$$\begin{array}{@{}rcl@{}} \Vert v_{n}-u_{n}\Vert &= & \Vert \theta_{n} u_{n} + (1-\theta_{n}) T_{1}\circ T_{2} u_{n}-u_{n}\Vert\\ &= & \Vert \theta_{n} u_{n} + (1-\theta_{n}) T_{1}\circ T_{2} u_{n}- \theta_{n} u_{n}- (1-\theta_{n}) u_{n}\Vert\\ & =& (1-\theta_{n})\Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert\\ &\leq& \Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert. \end{array}$$

Therefore, from (26), we get that

$${\lim}_{n\rightarrow \infty} \| v_{n}-u_{n} \|=0.$$
(27)

By using Lemma 9 with y=x* and the fact that g(x*)≤g(un), we get

$$\|x_{n}-u_{n}\|^{2} \leq \|x_{n}-x^{*}\|^{2}-\|u_{n}-x^{*}\|^{2}.$$
(28)

From (18) and Lemma 5, we obtain

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &= & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) z_{n} \Big) -x^{*}\rVert^{2}\\ &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\eta \alpha_{n} M)(z_{n}-x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} - x^{*}\rVert^{2} +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \Big[ \Vert x_{n}-x^{*} \Vert^{2}- \Vert x_{n}-u_{n} \Vert^{2} \Big] \\ &&+\, 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &=& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \Vert x_{n}-x^{*} \Vert^{2}- (1- \tau \alpha_{n})^{2} \Vert x_{n}-u_{n} \Vert^{2} \\ &&+\, 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n}-x^{*}\rVert. \end{array}$$

Since {xn} is bounded, there exists a constant C1>0 such that

$$(1- \tau \alpha_{n})^{2} \|x_{n}-u_{n}\|^{2}\leq \|x_{n}-x^{*}\|^{2}-\|x_{n+1}-x^{*}\|^{2}+\alpha_{n} C_{1}.$$
(29)

It then follows from (24) and αn→0 that

$${\lim}_{n\rightarrow \infty}\| x_{n}-u_{n}\|=0.$$
(30)

On the other hand, using Lemma 7, we have

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &= & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) z_{n} \Big)-x^{*}\rVert^{2}\\ &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\eta \alpha_{n} M)(z_{n} -x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} -x^{*}\rVert^{2} +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \Big[ \Vert v_{n}-x^{*} \Vert^{2}+ \lambda (d-2\alpha) \Vert Av_{n}-Ax^{*} \Vert^{2} \Big] \\ &&+\, 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mx^{*}\rVert\lVert z_{n}-x^{*}\rVert. \end{array}$$

Therefore, we have

$$\begin{array}{@{}rcl@{}} (1-\tau\alpha_{n})^{2}\lambda(2\alpha-d) \Vert Av_{n}-Ax^{*} \Vert^{2} & \leq & \lVert x_{n}-x^{*}\rVert^{2}- \lVert x_{n+1}-x^{*}\rVert^{2}+\alpha_{n} C_{2}, \end{array}$$

where C2 is a positive constant. Since αn→0 as n→∞, using inequality (24) and the boundedness of {xn}, we obtain

$${\lim}_{n\rightarrow \infty}\Vert Av_{n}-Ax^{*} \Vert=0.$$
(31)

Since $$J_{\lambda _{n}}^{B}$$ is 1-inverse strongly monotone, using (18) we have

$$\begin{array}{@{}rcl@{}} \lVert z_{n}-x^{*}\rVert^{2} &=& \lVert J_{\lambda_{n}}^{B}(I-\lambda_{n} A) v_{n}- J_{\lambda_{n}}^{B}(I-\lambda_{n} A)x^{*}\rVert^{2}\\ &\leq & \langle z_{n}-x^{*}, (I-\lambda_{n} A)v_{n}-(I-\lambda_{n} A)x^{*} \rangle\\ &=& \frac{1}{2}\Big[ \lVert (I-\lambda_{n} A) v_{n}-(I-\lambda_{n} A)x^{*}\rVert^{2} +\lVert z_{n}-x^{*}\rVert^{2}\\&&-\lVert (I-\lambda_{n} A) v_{n}-(I-\lambda_{n} A)x^{*}-(z_{n}-x^{*})\rVert^{2} \Big]\\ &\leq & \frac{1}{2}\Big[ \lVert v_{n}-x^{*}\rVert^{2}+\lVert z_{n}-x^{*}\rVert^{2}-\lVert v_{n}- z_{n}\rVert^{2}+2\lambda_{n}\langle z_{n}-x^{*}, A v_{n}-Ax^{*}\rangle-{\lambda_{n} }^{2}\Vert Av_{n}-Ax^{*} \Vert^{2} \Big]. \end{array}$$

So, we obtain

$$\lVert z_{n}-x^{*}\rVert^{2} \leq \lVert v_{n}-x^{*}\rVert^{2} -\lVert v_{n}- z_{n}\rVert^{2}+2\lambda_{n} \langle z_{n}-x^{*}, A v_{n}-Ax^{*}\rangle-{\lambda_{n} }^{2}\Vert A v_{n}-Ax^{*} \Vert^{2},$$

and thus

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\eta \alpha_{n} M)(z_{n} -x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n}-x^{*}\rVert^{2}+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n}-x^{*}\rVert\\ &\leq & \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} +\lVert v_{n}-x^{*}\rVert^{2}- (1-\tau\alpha_{n})^{2}\lVert v_{n}- z_{n}\rVert^{2}- (1-\tau\alpha_{n})^{2} {\lambda_{n}}^{2}\Vert A v_{n}-Ax^{*} \Vert^{2}\\&&+\, 2\lambda_{n} (1-\tau\alpha_{n})^{2}\langle z_{n}-x^{*}, A v_{n}-Ax^{*} \rangle + 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq & \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} +\lVert x_{n}-x^{*}\rVert^{2}- (1-\tau\alpha_{n})^{2}\lVert v_{n}- z_{n}\rVert^{2} - (1-\tau\alpha_{n})^{2} {\lambda_{n}}^{2}\Vert A v_{n}-Ax^{*} \Vert^{2}\\&&+\, 2\lambda_{n} (1-\tau\alpha_{n})^{2}\langle z_{n}-x^{*}, A v_{n}-Ax^{*} \rangle + 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert. \end{array}$$

Since αn→0 as n→∞, using inequalities (24) and (31), we obtain

$${\lim}_{n\rightarrow \infty}\Vert v_{n}- z_{n} \Vert^{2}=0.$$
(32)

Next, we prove that $$\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle \leq 0.$$ Since H is reflexive and {xn} is bounded, there exists a subsequence $$\{x_{n_{k}}\}$$ of {xn} such that $$x_{n_{k}}$$ converges weakly to some x** in K and

$$\limsup_{n\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle={\lim}_{k\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n_{k}}\rangle.$$

From (26), (30), and the demiclosedness of I−T1∘T2 at the origin, we obtain x**∈Fix(T1∘T2). Using Lemma 10, we have x**∈Fix(T1)∩Fix(T2). Using (18) and Lemma 8, we arrive at

$$\begin{array}{@{}rcl@{}} \lVert x_{n}-J^{g}_{\lambda}x_{n}\rVert &\leq & \lVert u_{n}-J^{g}_{\lambda}x_{n}\rVert+ \lVert u_{n} -x_{n}\rVert\\ &\leq & \lVert J^{g}_{\lambda_{n}}x_{n} -J^{g}_{\lambda}x_{n}\rVert+ \lVert u_{n} -x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert + \lVert J^{g}_{\lambda}\Big(\frac{\lambda_{n}-\lambda}{\lambda_{n}}J^{g}_{\lambda_{n}}x_{n}+ \frac{\lambda}{\lambda_{n}}x_{n} \Big)- J^{g}_{\lambda}x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert + \lVert \frac{\lambda_{n}-\lambda}{\lambda_{n}}J^{g}_{\lambda_{n}}x_{n}+ \frac{\lambda}{\lambda_{n}}x_{n} -x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert +\Big(1-\frac{\lambda}{\lambda_{n}}\Big)\lVert u_{n} -x_{n}\rVert\\ &\leq & \Big(2-\frac{\lambda}{\lambda_{n}}\Big)\lVert u_{n} -x_{n}\rVert. \end{array}$$

Hence,

$${\lim}_{n\rightarrow \infty}\lVert x_{n}-J^{g}_{\lambda}x_{n}\rVert=0.$$
(33)

Since $$J^{g}_{\lambda }$$ is single-valued and nonexpansive, (33) and Lemma 1 yield $$x^{**} \in Fix (J^{g}_{\lambda })=\text {argmin}_{u\in K}\, g(u).$$ Let us show that x∗∗∈(A+B)−1(0). Since A is α2-inverse strongly monotone, A is a Lipschitz continuous monotone mapping. It follows from Lemma 3 that A+B is maximal monotone. Let (v,u)∈G(A+B), i.e., u−Av∈B(v). Since $$z_{n_{k}} = J_{\lambda _{n_{k}}}^{B}(v_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}}),$$ we have $$v_{n_{k}}- \lambda _{n_{k}} A v_{n_{k}}\in (I +\lambda _{n_{k}} B)z_{n_{k}},$$ i.e., $$\frac {1}{\lambda _{n_{k}}}(v_{n_{k}}-z_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}})\in B(z_{n_{k}}).$$ By the monotonicity of B, we have

$$\langle v-z_{n_{k}}, u-Av- \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}} -\lambda_{n_{k}} A v_{n_{k}}) \rangle \geq 0$$

and so

$$\begin{array}{@{}rcl@{}} \langle v-z_{n_{k}}, u \rangle &\geq & \langle v-z_{n_{k}}, Av+ \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}} -\lambda_{n_{k}} A v_{n_{k}}) \rangle\\ &= & \langle v-z_{n_{k}}, Av-Az_{n_{k}}\rangle+ \langle v-z_{n_{k}}, Az_{n_{k}}- Av_{n_{k}}\rangle+ \langle v-z_{n_{k}}, \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}}) \rangle\\ &\geq& \langle v-z_{n_{k}}, Az_{n_{k}}-Av_{n_{k}}\rangle+ \langle v-z_{n_{k}}, \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}}) \rangle, \end{array}$$

where the last inequality uses the monotonicity of A.

Since ∥vn−zn∥→0, ∥Avn−Azn∥→0 (A being Lipschitz continuous), and $$z_{n_{k}} \rightharpoonup x^{**},$$ letting k→+∞ we get

$${\lim}_{k\to +\infty}\langle v-z_{n_{k}}, u \rangle = \langle v-x^{**}, u\rangle\geq 0$$

and hence x∗∗∈(A+B)−1(0). Therefore, x∗∗∈Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminy∈Kg(y). On the other hand, since x∗ solves (19), we then have

$$\begin{array}{@{}rcl@{}} \limsup_{n\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle &=&{\lim}_{k\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n_{k}}\rangle\\ &=&\langle \eta Mx^{*}-\gamma f(x^{*}),x^{*}- x^{**}\rangle\leq 0. \end{array}$$

Finally, we show that xn→x∗. From (18) and the properties of the metric projection, we get

$$\begin{array}{@{}rcl@{}} \| x_{n+1}-x^{*}\|^{2} &=& \| P_{K}(\alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}) -x^{*} \|^{2}\\ & \leq& \langle \alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}-x^{*}, x_{n+1}-x^{*} \rangle\\ &=& \langle \alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}-x^{*} -\alpha_{n} \gamma f(x^{*})+\alpha_{n} \gamma f(x^{*})\\&&-\alpha_{n}\eta Mx^{*}+\alpha_{n} \eta Mx^{*}, x_{n+1}-x^{*} \rangle\\ & \leq& \Big(\alpha_{n} \gamma \| f(x_{n})-f(x^{*})\| + \| (I-\alpha_{n} \eta M)(z_{n}-x^{*})\|\Big) \Vert x_{n+1}-x^{*} \Vert\\&&+\alpha_{n} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n+1}\rangle\\ & \leq& (1-\alpha_{n}(\tau- b\gamma)) \| x_{n}-x^{*}\| \Vert x_{n+1}-x^{*} \Vert + \alpha_{n} \langle \eta Mx^{*}\!-\gamma f(x^{*}), x^{*}-x_{n+1} \rangle\\ & \leq& (1-\alpha_{n}(\tau- b\gamma))\| x_{n}-x^{*}\|^{2}+ 2\alpha_{n} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n+1}\rangle. \end{array}$$

Hence, by Lemma 4, we conclude that the sequence {xn} converges strongly to x∗∈Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminy∈Kg(y).

Case 2. Assume that the sequence {∥xn−x∗∥} is not monotonically decreasing. Set Bn=∥xn−x∗∥ and let $$\pi : \mathbb {N}\to \mathbb {N}$$ be the mapping defined for all n≥n0 (for some n0 large enough) by $$\pi (n)= \max \lbrace k\in \mathbb {N} : k\leq n,\,\,\, B_{k}\leq B_{k+1}\rbrace.$$ Clearly, {π(n)} is a non-decreasing sequence such that π(n)→∞ as n→∞ and Bπ(n)≤Bπ(n)+1 for all n≥n0. From (23), we have

$$(1-\tau\alpha_{\pi(n)})^{2} (1-\theta_{\pi(n)})\theta_{\pi(n)}\| u_{\pi(n)}- T_{1}\circ T_{2} u_{\pi(n)}\|^{2} \leq \alpha_{\pi(n)} C.$$

Hence,

$${\lim}_{n\rightarrow \infty}\lVert u_{\pi(n)}- T_{1}\circ T_{2}u_{\pi(n)}\rVert =0.$$

By an argument similar to that in Case 1, we can show that $$\{x_{\pi(n)}\}$$ is bounded in H, $${\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-x_{\pi (n)} \Vert =0, {\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-v_{\pi (n)} \Vert =0, {\lim }_{n\rightarrow \infty }\Vert v_{\pi (n)}-z_{\pi (n)} \Vert =0$$, and $$\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{\pi (n)}\rangle \leq 0.$$ For all n≥n0, we have

$$0\leq \lVert x_{\pi(n)+1}-x^{*} \rVert^{2}- \lVert x_{\pi(n)}-x^{*} \rVert^{2}\leq \alpha_{\pi(n)}[- (\tau- b\gamma) \lVert x_{\pi(n)}-x^{*} \rVert^{2} +2\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{\pi(n)+1} \rangle],$$

which implies that

$$\lVert x_{\pi(n)}-x^{*} \rVert^{2} \leq \frac{ 2}{\tau- b\gamma} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}- x_{\pi(n)+1}\rangle.$$

Then, we have

$${\lim}_{n\rightarrow \infty}\lVert x_{\pi(n)}-x^{*} \rVert^{2} =0.$$

Therefore,

$${\lim}_{n\rightarrow \infty} B_{\pi(n)}={\lim}_{n\rightarrow \infty} B_{\pi(n)+1}=0.$$

Thus, by Lemma 6, we conclude that

$$0\leq B_{n}\leq \max\lbrace B_{\pi(n)},\,\,B_{\pi(n)+1}\rbrace=B_{\pi(n)+1}.$$

Hence, $${\lim }_{n\rightarrow \infty }B_{n}=0,$$ that is, {xn} converges strongly to x∗. This completes the proof. □

We now apply Theorem 2 when T1 is a nonexpansive mapping. In this case, the demiclosedness assumption (I−T1∘T2 is demiclosed at the origin) is not needed.

### Theorem 3

Let A: K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator from H into 2^H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping, and let M:K→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u) is nonempty. Let {xn} be a sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \Big), \end{array} \right. \end{array}$$
(34)

where {λn}, {θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) $${\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty$$, and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) $$\liminf_{n\rightarrow \infty} (1-\theta _{n})\theta _{n}> 0$$, $$0<\eta <\frac {2\mu }{ L^{2}},$$ and 0<γb<τ, where $$\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).$$ Then, the sequence {xn} generated by (34) converges strongly to x∗∈Γ, which solves the variational inequality:

$$\langle \eta Mx^{*} -\gamma f(x^{*}), x^{*}-p \rangle\leq 0,\,\,\,\, \forall p\in \Gamma.$$
(35)

### Proof

Since T1∘T2 is a nonexpansive mapping, the proof follows from Lemma 1 and Theorem 2. □
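To make iteration (34) concrete, here is a minimal one-dimensional sketch. All operator and parameter choices below (quadratic g, B(x)=x, A(x)=x, T1(x)=x/2, T2(x)=x/3, f(x)=0.1x, M=I, λn=0.5, θn=0.5, αn=1/(n+2)) are our own illustrative assumptions satisfying the hypotheses of Theorem 3; they are not prescribed by the paper. With these choices, Γ={0} and τ=0.5>γb=0.1.

```python
# Toy 1-D instance of iteration (34); all operator choices are illustrative
# assumptions, not taken from the paper.
def prox_g(x, lam):        # g(u) = u^2/2, so prox_{lam g}(x) = x/(1+lam)
    return x / (1.0 + lam)

def resolvent_B(x, lam):   # B(x) = x (maximal monotone), J_lam^B(x) = x/(1+lam)
    return x / (1.0 + lam)

A  = lambda x: x           # 1-inverse strongly monotone (alpha = 1)
T1 = lambda x: x / 2.0     # nonexpansive, Fix(T1) = {0}
T2 = lambda x: x / 3.0     # firmly nonexpansive, Fix(T2) = {0}
f  = lambda x: 0.1 * x     # b-Lipschitzian with b = 0.1
gamma, eta = 1.0, 1.0      # M = I gives mu = L = 1, so tau = 0.5 > gamma*b

def iterate(x0, n_iter=100):
    x = x0
    for n in range(n_iter):
        lam, theta, alpha = 0.5, 0.5, 1.0 / (n + 2)   # admissible parameters
        u = prox_g(x, lam)
        v = theta * u + (1 - theta) * T1(T2(u))
        z = resolvent_B(v - lam * A(v), lam)
        x = alpha * gamma * f(x) + (1 - alpha * eta) * z  # K = H, M = I: no projection
    return x

print(iterate(5.0))  # converges to the common solution x* = 0
```

Running `iterate(5.0)` drives the iterate to 0, the unique element of Γ, consistent with the theorem.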

Now, we consider the following quadratic optimization problem:

$$\min_{x\in \Gamma}\Big(\frac{\eta}{2}\langle Mx, x \rangle- h(x)\Big),$$
(36)

where M:K→H is a strongly positive bounded linear operator, Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u), and h is a potential function for γf (i.e., h′(x)=γf(x) on K).
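As a sanity check on (36): when every constraint set is trivial (so Γ is the whole space), the minimizer is characterized by ηMx∗−γf(x∗)=0. The following toy instance (our own illustrative data: a 2×2 symmetric positive definite M and a constant f, so that h is linear) verifies this first-order condition numerically:

```python
import numpy as np

# Illustrative instance of (36) with Gamma = R^2: minimize
# (eta/2)<Mx, x> - h(x), where h(x) = gamma <c, x>, i.e. f(x) = c constant.
eta, gamma = 0.5, 1.0
M = np.array([[2.0, 0.5], [0.5, 1.0]])   # strongly positive (symmetric PD)
c = np.array([1.0, -1.0])

# First-order condition: eta*M x* - gamma*c = 0  =>  x* = M^{-1} (gamma/eta) c
x_star = np.linalg.solve(eta * M, gamma * c)

# The variational inequality (35) degenerates to an equality here
grad = eta * M @ x_star - gamma * c
print(np.allclose(grad, 0.0))  # True
```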

Hence, one has the following result.

### Theorem 4

Let A: K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator from H into 2^H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping, and let M:K→H be a strongly positive bounded linear operator with coefficient μ>0 such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u) is nonempty. Let {xn} be a sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \left(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \right), \end{array} \right. \end{array}$$
(37)

where {λn}, {θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) $${\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty$$, and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) $$\liminf_{n\rightarrow \infty} (1-\theta _{n})\theta _{n}> 0,$$ I−T1∘T2 is demiclosed at the origin, $$0<\eta <\frac {2\mu }{ \Vert M\Vert ^{2}},$$ and 0<γb<τ, where $$\tau =\eta \Big (\mu -\frac { \Vert M\Vert ^{2} \eta }{2}\Big).$$ Then, the sequence {xn} generated by (37) converges strongly to the unique solution of problem (36).

### Proof

We note that a strongly positive bounded linear operator M is a ∥M∥-Lipschitzian and μ-strongly monotone operator; the proof then follows from Theorem 2. □

## Application to some nonlinear problems

In this section, we apply our main results to finding a common solution of a composite convex minimization problem, a convex optimization problem, and a fixed point problem involving composite operators.

### Problem 1

Let H be a real Hilbert space. We consider the minimization of composite objective function of the type

$$\min_{x\in H}\, \Big(\Psi(x)+\Phi(x)\Big),$$
(38)

where $$\Psi : H \to \mathbb {R} \cup \{+\infty \}$$ is a proper, convex, and lower semi-continuous functional and $$\Phi : H \to \mathbb {R}$$ is a convex functional.

Many optimization problems from image processing [7], statistical regression, and machine learning (see, e.g., [36] and the references therein) can be cast in the form of (38). Observe that problem (38) is equivalent to finding x∗∈H such that

$$0\in \partial \Psi(x^{*})+ \nabla \Phi(x^{*}).$$
(39)

It is well known that ∂Ψ is maximal monotone (see, e.g., Minty [37]).
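Concretely, the resolvent (I+λ∂Ψ)−1 appearing in forward-backward schemes is the proximal mapping of Ψ. For the illustrative choice Ψ(u)=|u| (our example, not one taken from the paper), it reduces to soft-thresholding; a brute-force grid minimization confirms this:

```python
import numpy as np

# The resolvent of the subdifferential is the proximal map:
#   J_lam^{dPsi}(x) = argmin_u [ Psi(u) + |u - x|^2 / (2*lam) ].
# For the illustrative choice Psi(u) = |u|, this is soft-thresholding.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_by_grid(x, lam, grid=np.linspace(-5, 5, 200001)):
    # direct minimization of the proximal objective over a fine grid
    vals = np.abs(grid) + (grid - x) ** 2 / (2 * lam)
    return grid[np.argmin(vals)]

lam = 0.7
for x in [-2.3, -0.4, 0.0, 1.1, 3.0]:
    assert abs(soft_threshold(x, lam) - prox_by_grid(x, lam)) < 1e-3
print("resolvent of the subdifferential matches soft-thresholding")
```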

### Lemma 11

(Baillon and Haddad [38]) Let H be a real Hilbert space, Φ a continuously Fréchet differentiable, convex functional on H, and ∇Φ the gradient of Φ. If ∇Φ is $$\frac {1}{\alpha }$$-Lipschitz continuous, then ∇Φ is α-inverse strongly monotone.
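A quick numerical illustration of Lemma 11, with the assumed quadratic Φ(x)=½⟨Qx,x⟩ for a symmetric positive semidefinite Q, so that ∇Φ(x)=Qx is ∥Q∥-Lipschitz and should be 1/∥Q∥-inverse strongly monotone:

```python
import numpy as np

# Check inverse strong monotonicity of the gradient of the convex quadratic
# Phi(x) = (1/2) x^T Q x on random sample pairs (illustrative example only).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
Q = B @ B.T                      # symmetric positive semidefinite => Phi convex
L = np.linalg.norm(Q, 2)         # Lipschitz constant of grad Phi (operator norm)
alpha = 1.0 / L                  # Lemma 11 predicts alpha-inverse strong monotonicity

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    gx, gy = Q @ x, Q @ y
    assert (gx - gy) @ (x - y) >= alpha * np.linalg.norm(gx - gy) ** 2 - 1e-9
print("inverse strong monotonicity verified on random samples")
```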

Hence, from Theorem 2, we have the following result.

### Theorem 5

Let H be a real Hilbert space and g:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let $$\Phi : H \to \mathbb {R}$$ be a continuously Fréchet differentiable, convex functional on H whose gradient ∇Φ is $$\frac {1}{\alpha }$$-Lipschitz continuous. Let $$\Psi : H \to \mathbb {R} \cup \{+\infty \}$$ be a proper, convex, and lower semi-continuous functional and f:H→H be a b-Lipschitzian mapping. Let T1:H→H be a quasi-nonexpansive mapping, T2:H→H be a firmly nonexpansive mapping, and M:H→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(∂Ψ+∇Φ)−1(0)∩argminu∈Hg(u) is nonempty. Let {xn} be a sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in H,\\\\ u_{n}=\text{argmin}_{u\in H} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= \alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{\partial \Psi}(v_{n}- \lambda_{n} \nabla \Phi v_{n}), \end{array} \right. \end{array}$$
(40)

where {λn}, {θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) $${\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty$$, and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) $$\liminf_{n\rightarrow \infty} (1-\theta _{n})\theta _{n}> 0,$$ I−T1∘T2 is demiclosed at the origin, $$0<\eta <\frac {2\mu }{ L^{2}},$$ and 0<γb<τ, where $$\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).$$ Then, the sequence {xn} generated by (40) converges strongly to a point x∗∈argminu∈Hg(u) which is a minimizer of Ψ+Φ on H and a common fixed point of T1 and T2.

### Proof

Setting B=∂Ψ, A=∇Φ, and K=H in Theorem 2, the proof follows from Theorem 2. □
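A one-dimensional sketch of iteration (40) for problem (38), under our own illustrative choices Ψ(x)=|x|, Φ(x)=(x−3)²/2, g=0, T1=T2=I, f=0, and M=I (none prescribed by the paper). Here the resolvent of ∂Ψ is soft-thresholding, ∇Φ is 1-Lipschitz (so α=1), and the minimizer of Ψ+Φ is x∗=2:

```python
# Sketch of iteration (40) for min |x| + (x-3)^2/2; illustrative choices:
# Psi(x) = |x|, Phi(x) = (x-3)^2/2, g = 0, T1 = T2 = I, f = 0, M = I.
def soft_threshold(x, lam):           # resolvent J_lam^{dPsi} for Psi = |.|
    return (abs(x) - lam if abs(x) > lam else 0.0) * (1 if x >= 0 else -1)

grad_phi = lambda x: x - 3.0          # grad Phi, 1-Lipschitz, so alpha = 1

eta = 1.0                             # tau = eta*(mu - L^2*eta/2) = 0.5 > gamma*b = 0
x = 10.0
for n in range(5000):
    lam, theta, alpha_n = 0.5, 0.5, 1.0 / (n + 2)   # admissible parameters
    u = x                             # prox of g = 0 is the identity
    v = theta * u + (1 - theta) * u   # T1 o T2 = I
    x = (1 - alpha_n * eta) * soft_threshold(v - lam * grad_phi(v), lam)

print(x)   # approaches the minimizer of |x| + (x-3)^2/2, namely x* = 2
```

The Halpern-type damping (1−αn) vanishes as n grows, so the iterates settle at the unique zero of ∂Ψ+∇Φ.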

### Remark 1

Many already studied problems in the literature can be considered as special cases of this paper; see, for example, [1, 3, 4, 14, 16, 17, 23, 26, 28, 39] and the references therein.


## References

1. Rockafellar, R. T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976).

2. Boţ, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).

3. Passty, G. B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383â€“390 (1979).

4. Lions, P. L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964â€“979 (1979).

5. Chen, G. H. -G., Rockafellar, R. T.: Convergence rates in forward-backward splitting. SIAM J. Optim. 7(2), 421â€“444 (1997).

6. Genaro, L., Victoria, M. M., Fenghui, W., Xu, H. K.: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal., 1â€“25 (2012). doi:10.1155/2012/109236.

7. Bredies, K.: A forward-backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Probl. 25(1), 1â€“20 (2009).

8. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431â€“446 (2000).

9. Dadashi, V., Postolache, M.: Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math., 1â€“11 (2019). https://doi.org/10.1007/s40065-018-0236-2.

10. Adly, S.: Perturbed algorithms and sensitivity analysis for a general class of variational inclusions. J. Math. Anal. Appl. 201(3), 609â€“630 (1996).

11. Boţ, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).

12. Shimoji, K., Takahashi, W.: Strong convergence theorems of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Amer. Math. Soc. 125, 3641â€“3645 (1997).

13. Dotson, Jr., W. G.: Fixed points of quasi-nonexpansive mappings. J. Austral. Math. Soc. 13, 167–170 (1972).

14. Xu, H. K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659â€“678 (2003).

15. Moudafi, A.: Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 241, 46â€“55 (2000).

16. Marino, G., Xu, H. K.: A general iterative method for nonexpansive mappings in Hibert spaces. J. Math. Anal. Appl. 318, 43â€“52 (2006).

17. Marino, G., Xu, H. K.: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336–346 (2007).

18. Sow, T. M. M.: A modified generalized viscosity explicit methods for quasi-nonexpansive mappings in Banach spaces. Funct. Anal. Approx. Comput. 11(2), 37â€“49 (2019).

19. Yao, Y., Zhou, H., Liou, Y. C.: Strong convergence of modified Krasnoselskii-Mann iterative algorithm for nonexpansive mappings. J. Math. Anal. Appl. Comput. 29, 383â€“389 (2009).

20. Yuan, H.: On solutions of inclusion problems and fixed point problems. Fixed Point Theory Appl. 2013, 11 (2013).

21. Gunduz, B., Akbulut, S.: Common fixed points of a finite family of I-asymptotically nonexpansive mappings by S-iteration process in Banach spaces. Thai J. Math. 15(3), 673–687 (2017).

22. Halpern, B.: Fixed points of nonexpanding maps. Bull. Amer. Math. Soc. 73, 957–961 (1967).

23. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. (French) Rev. Française Informat. Recherche Opérationnelle. 4, 154–158 (1970).

24. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991).

25. Solodov, M. V., Svaiter, B. F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. Ser. A. 87, 189–202 (2000).

26. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938â€“945 (2003).

27. Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization. 37, 239â€“252 (1996).

28. Reich, S.: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 183, 118â€“120 (1994).

29. Browder, F. E.: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 100, 201–225 (1967). https://doi.org/10.1007/BF01109805.

30. Chidume, C. E.: Geometric properties of Banach spaces and nonlinear iterations. Springer Verlag Ser. Lect. Notes Math. 1965 (2009). ISBN 978-1-84882-189-7.

31. Xu, H. K.: Iterative algorithms for nonlinear operators. J. London Math. Soc. 66(2), 240â€“256 (2002).

32. Wang, S.: A general iterative method for an infinite family of strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Lett. 24, 901â€“907 (2011).

33. Mainge, P. E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899â€“912 (2008).

34. Miyadera, I.: Nonlinear Semigroups. Translations of Mathematical Monographs, American Mathematical Society, Providence (1992).

35. Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures, Second Edition. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel (2008).

36. Wang, Y., Xu, H. K.: Strong convergence for the proximal-gradient method. J. Nonlinear Convex Anal. 15(3), 581â€“593 (2014).

37. Minty, G. J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29, 341–346 (1962).

38. Baillon, J. B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977).

39. Khatibzadeh, H., Mohebbi, V.: On the iterations of a sequence of strongly quasi-nonexpansive mappings with applications. Numer. Funct. Anal. Optim. 41(3), 231â€“256 (2020).



## Author information


### Contributions

The authors read and approved the final manuscript.

### Corresponding author

Correspondence to T. M. M. Sow.

## Ethics declarations

### Competing interests

The author declares that there are no competing interests.

### Publisherâ€™s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Sow, T. General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators. J Egypt Math Soc 28, 20 (2020). https://doi.org/10.1186/s42787-020-00080-w