
General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators

Abstract

The main purpose of this paper is to introduce a new general-type proximal point algorithm for finding a common element of the set of solutions of a monotone inclusion problem, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators: the composition of quasi-nonexpansive and firmly nonexpansive mappings in real Hilbert spaces. We prove that the sequence {xn} generated by the proposed iterative algorithm converges strongly to a common element of the three sets above, without any commuting assumption on the mappings. Finally, applications of our theorems to finding common solutions of some nonlinear problems, namely composite minimization problems, convex optimization problems, and fixed point problems, are given to validate our new findings.

Introduction

Throughout this paper, we assume that H is a real Hilbert space with inner product 〈·,·〉 and norm ∥·∥, and that K is a nonempty closed convex subset of H. For a set-valued operator A:H→2^H, the domain of A, D(A), the image of a subset S of H, A(S), the range of A, R(A), and the graph of A, G(A), are defined as follows:

$$D(A):= \{x\in\,H\,:\,Ax\neq\emptyset\},\,\,A(S):= \cup \{Ax\,:\,x\in\,S\},$$
$$R(A):= A(H),\,\,G(A):=\{ (x,u):\,\,x\in\,D(A),\,\,u\in\,Ax\}.$$

An operator A:K→H is called monotone if

$$\begin{array}{@{}rcl@{}} \langle Ax-Ay,\,x-y\rangle \geq0,~~\forall~x,y\in K. \end{array} $$

An operator A:K→H is said to be strongly monotone if there exists a constant k∈(0,1) such that

$$\langle Ax-Ay, x-y\rangle \geq k\Vert x-y\Vert^{2},\,\,\, \forall x,y\in K.$$

It is said to be α-inverse strongly monotone if there exists a constant α>0 such that

$$\begin{array}{@{}rcl@{}} \langle Ax-Ay,\,\, x-y\rangle\geq \alpha \|Ax-Ay \|^{2},~~\forall~x,y\in K. \end{array} $$

It is immediate that if A is α-inverse strongly monotone, then A is monotone and \(\frac{1}{\alpha}\)-Lipschitz continuous.

Let A:H→H be a single-valued nonlinear mapping and B:H→2^H be a set-valued mapping. The variational inclusion problem is as follows: find x∈H such that

$$ 0\in B(x)+A(x). $$
(1)

We denote the set of solutions of this problem by (A+B)−1(0). If A=0, then problem (1) becomes the inclusion problem introduced by Rockafellar [1]. Inclusions of the form specified by (1) arise in numerous problems of fundamental importance in mathematical optimization, either directly or through an appropriate reformulation. In what follows, we provide some motivating examples.

General monotone inclusions

Consider the inclusion problem

$$ \text{find}\,\, x\in H_{1}\,\,\text{such that}\,\, 0 \in (A + K^{*} BK)(x), $$
(2)

where \(A : H_{1}\to 2^{H_{1}}\) and \(B : H_{2}\to 2^{H_{2}}\) are maximally monotone operators and K:H1→H2 is a bounded linear operator with adjoint K∗. As was observed in [2], solving (2) can be equivalently cast as the following monotone inclusion posed in the product space:

$$ \text{find}\,\, \left(\begin{array}{c} x\\ y\\ \end{array}\right) \in H_{1}\times H_{2}\,\,\text{such that}\,\, \left(\begin{array}{c} 0\\ 0\\ \end{array}\right) \in \left(\begin{array}{cc} A& 0 \\ 0&B^{-1} \\ \end{array}\right) \left(\begin{array}{c} x\\ y\\ \end{array}\right) + \left(\begin{array}{cc} 0& K^{*} \\ -K& 0 \\ \end{array}\right) \left(\begin{array}{c} x\\ y\\ \end{array}\right). $$
(3)

Notice that the first operator in (3) is maximally monotone whereas the second is bounded and linear (in particular, it is Lipschitz continuous with full domain). Consequently, (3) is also of the form specified by (1).

Saddle point problems

Many convex optimization problems can be formulated as the saddle point problem:

$$ \min_{x\in H}\max_{y\in H} \left(g(x)+\Phi(x,y)-f(y)\right), $$
(4)

where f,g:H→(−∞, +∞] are proper, lsc, convex functions and \(\Phi : H \times H \to \mathbb {R}\) is a smooth convex-concave function. Problems of this form naturally arise in machine learning, statistics, etc., where the dual (maximization) problem comes either from dualizing the constraints in the primal problem or from using the Fenchel-Legendre transform to leverage a nonsmooth composite part. Through its first-order optimality condition, the saddle point problem (4) can be expressed as the monotone inclusion

$$ \text{find}\,\, \left(\begin{array}{c} x\\ y\\ \end{array}\right) \in H\times H\,\,\text{such that}\,\, \left(\begin{array}{c} 0\\ 0\\ \end{array}\right) \in \left(\begin{array}{c} \partial g(x) \\ \partial f(y) \\ \end{array}\right) + \left(\begin{array}{c} \nabla_{x}\Phi(x,y) \\ -\nabla_{y}\Phi(x,y)\\ \end{array}\right), $$
(5)

which is of the form specified by (1). In general, monotone inclusions of the form (1) are nonlinear, and there is no known method for finding closed-form solutions of them. Consequently, methods for approximating solutions of such inclusions are of interest.

The best-known splitting method for solving the inclusion (1) when B is single-valued is the forward-backward method, so called because each iteration combines one forward evaluation of B with one backward evaluation of A; it was introduced by Passty [3] and Lions and Mercier [4]. More precisely, the method generates a sequence according to

$$ x_{n+1}= (I+\lambda_{n} A)^{-1} (I-\lambda_{n} B)x_{n},\,\,\lambda_{n}>0, $$
(6)

under the condition that D(B)⊂D(A). It was shown (see, for example, [5]) that weak convergence of (6) requires quite restrictive assumptions on A and B, such as that the inverse of A is strongly monotone, or that B is Lipschitz continuous and monotone and the operator A+B is strongly monotone on D(B). Hence, modifications are necessary in order to guarantee strong convergence of the forward-backward splitting method (see, for example, [5–12] and the references contained therein).
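To make the forward-backward iteration concrete, here is a minimal numerical sketch on the real line. The concrete choices are ours, for illustration only: B(x)=x−3 as the single-valued (forward) part and A=∂|·| as the set-valued (backward) part, whose resolvent (I+λA)^(−1) is the soft-thresholding map. The unique solution of 0∈A(x)+B(x) is then x=2.

```python
def soft_threshold(v, lam):
    # resolvent (I + lam*A)^{-1} of the set-valued part A = subdifferential of |.|
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def forward_backward(x0, lam=0.5, iters=200):
    # x_{n+1} = (I + lam*A)^{-1} (I - lam*B) x_n, with forward part B(x) = x - 3
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - 3.0), lam)
    return x
```

For this toy choice the iteration map is a contraction, so the iterates approach the solution x=2 geometrically; in general, as discussed above, only weak convergence can be expected without further assumptions.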

A map T:K→K is said to be Lipschitz if there exists an L≥0 such that

$$ \|Tx-Ty\|\le L\|x-y\|,\,\, ~~~\forall x,y\in K, $$
(7)

if L<1, T is called a contraction, and if L=1, T is called nonexpansive. We denote by Fix(T) the set of fixed points of the mapping T, that is, Fix(T):={x∈D(T):x=Tx}. We assume that Fix(T) is nonempty. A map T is called quasi-nonexpansive if ∥Tx−p∥≤∥x−p∥ holds for all x∈K and p∈Fix(T). The mapping T:K→K is said to be firmly nonexpansive if

$$\|Tx-Ty\|^{2}\leq \|x-y\|^{2} -\|(x - y)-(Tx-Ty) \|^{2},\,\forall x,y\in K.$$

We remark here that a nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive; however, the converse may not be true, as the following example [13] shows.

Example 1

[13] Let \(H = \mathbb {R}\) and define a mapping T:H→H by

$$\begin{array}{@{}rcl@{}} Tx= \left \{ \begin{array}{ll} \frac{x}{2}\sin\left(\frac{1}{x}\right),\,\,\, x\neq 0\\\\ 0,\,\,\,\,\,\,\, x=0. \end{array} \right. \end{array} $$
(8)

Then, T is quasi-nonexpansive but not nonexpansive.
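Both claims can be checked numerically. The sketch below verifies the quasi-nonexpansiveness inequality ∥Tx−0∥≤∥x−0∥ on sample points (here Fix(T)={0}), and exhibits a pair of nearby points at which the difference quotient exceeds 1, so T is not nonexpansive. The sample points are our own choices, not taken from [13].

```python
import math

def T(x):
    # the mapping of Example 1
    return 0.0 if x == 0.0 else 0.5 * x * math.sin(1.0 / x)

# quasi-nonexpansive: |Tx - 0| <= |x - 0| since |sin(1/x)| <= 1 and Fix(T) = {0}
samples = [0.9, 0.5, 1.0 / (2.0 * math.pi), -0.3, 1e-3]
assert all(abs(T(x)) <= abs(x) for x in samples)

# not nonexpansive: near x = 1/(2*pi) the derivative of T is about -pi,
# so the difference quotient for nearby points exceeds 1
x, y = 1.0 / (2.0 * math.pi), 1.0 / (2.0 * math.pi) + 1e-6
assert abs(T(x) - T(y)) / abs(x - y) > 1.0
```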

Fixed point theory has proved to be a very powerful and effective tool for solving a large number of problems which emerge from real-world applications and can be translated into equivalent fixed point problems. In order to obtain approximate solutions of fixed point problems, various iterative methods have been proposed (see, e.g., [14–19] and the references therein).

In 2013, Yuan [20], motivated by the fact that forward-backward method is remarkably useful for finding fixed points of nonlinear mapping, proved the following theorem.

Theorem 1

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let A:C→H be an α-inverse strongly monotone operator and S:C→C be a quasi-nonexpansive mapping. Let B:H→2^H be a maximal monotone operator such that the domain of B is included in C, F:=Fix(S)∩(A+B)−1(0) is nonempty, and I−S is demiclosed at zero. Let {αn} be a real sequence in [0,1] and {λn} be a positive real sequence. Let {xn} be a sequence in C generated by the following iterative process:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{1}\in C,\\ C_{1}=C\\ y_{n}= \alpha_{n} x_{n} +(1-\alpha_{n}) SJ_{\lambda_{n}}(x_{n}- \lambda_{n} Ax_{n}),\\ C_{n+1}=\lbrace z\in C_{n}:\,\, \Vert y_{n}-z \Vert\leq \Vert x_{n}-z \Vert \rbrace,\\ x_{n+1}=P_{C_{n+1}}x_{1},\,\,\,\,n\geq 1, \end{array} \right. \end{array} $$
(9)

where \(J_{\lambda _{n}} = (I + \lambda _{n} B)^{-1}.\) Suppose that the sequences {αn} and {λn} satisfy the following restrictions:

  (i) 0≤αn≤a<1;

  (ii) 0<b≤λn≤c<2α.

Then, the sequence {xn} converges strongly to PFx1, the metric projection of x1 onto F.

However, we observe that the recursion formula studied in Theorem 1 is not simple: each iteration requires constructing the set Cn+1 and computing the metric projection onto it.

Recently, iterative methods for nonexpansive mappings have been applied to solve convex minimization problems; see, e.g., [14, 16] and the references therein. A typical problem is to minimize a quadratic function over the set of the fixed points of a nonexpansive mapping on a real Hilbert space H:

$$ \min_{x\in Fix(T)}\Big(\frac{1}{2}\langle Ax, x \rangle-\langle b, x\rangle \Big). $$
(10)

In [14], Xu proved that the sequence {xn} defined by the iterative method below, with initial guess x0∈H chosen arbitrarily:

$$ x_{n+1}= \alpha_{n} b + (I- \alpha_{n}A)Tx_{n},\,\, \,\, n\geq 0, $$
(11)

converges strongly to the unique solution of the minimization problem (10), where T is a nonexpansive mapping on H and A is a strongly positive bounded linear operator. In 2006, Marino and Xu [16] extended Moudafi’s results [15] and Xu’s results [14] via the following general iteration: x0∈H and

$$ x_{n+1}= \alpha_{n} \gamma f(x_{n}) + (I- \alpha_{n}A)Tx_{n}, \,\, n\geq 0, $$
(12)

where \(\{\alpha _{n}\}_{n \in \mathbb {N}} \subset (0,1)\), A is a bounded linear operator on H, and T is a nonexpansive mapping. Under suitable conditions, they proved that the sequence {xn} defined by (12) converges strongly to a fixed point x* of T, which is the unique solution of the following variational inequality:

$$ \langle Ax^{*}-\gamma f(x^{*}), x^{*}-p\rangle\leq 0,\,\,\,\, \forall p\in Fix(T). $$

If T1 and T2 are self-mappings on K, a point x∈K is called a common fixed point of Ti (i=1,2) if x∈Fix(T1)∩Fix(T2). To find solutions of common fixed point problems, several iterative approximation methods have been introduced and studied. Such problems arise when solving various problems in science and applied science; see [21, 22] for instance. We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2), and that in almost all results on common fixed points of nonlinear mappings in Hilbert spaces, commuting assumptions on Ti (i=1,2) are needed.

One of the major problems in optimization is to find:

$$ x^{*} \in H\,\, \text{such that}\,\, g(x^{*})= \min_{y\in H} g(y). $$
(13)

The set of all minimizers of g on H is denoted by argminy∈H g(y). A successful and powerful tool for solving this problem is the well-known proximal point algorithm (PPA, for short), which was initiated by Martinet [23] in 1970 and later studied by Rockafellar [1] in 1976. Let H be a real Hilbert space and g:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. The PPA is defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{ll} x_{1}\in H,\\ x_{n+1}=\text{argmin}_{y\in H} \left[ g(y)+\frac{1}{2\lambda_{n}}\Vert x_{n}-y\Vert^{2}\right], \end{array} \right. \end{array} $$
(14)

where λn>0 for all n≥1. In [1], Rockafellar proved that the sequence {xn} given by (14) converges weakly to a minimizer of g. He then posed the following question:

Q1 Does the sequence {xn} converge strongly? This question was resolved in the negative by Güler [24], who produced a proper lower semi-continuous and convex function g in l2 for which the PPA converges weakly but not strongly. This leads naturally to the following question:

Q2 Can the PPA be modified to guarantee strong convergence? In response to Q2, several works have been done (see, e.g., Güler [24], Solodov and Svaiter [25], Kamimura and Takahashi [26], Lehdili and Moudafi [27], Reich [28] and the references therein).
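For intuition, the PPA (14) can be sketched numerically on the real line, where weak and strong convergence coincide. The concrete choice g(y)=|y| below is ours for illustration; its proximal step has the closed form of a soft threshold.

```python
def prox_abs(x, lam):
    # argmin_y ( |y| + (1/(2*lam)) * (x - y)**2 ), the proximal step for g = |.|
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ppa(x0, lam=0.7, iters=50):
    # proximal point algorithm (14) with constant step lam
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x
```

Each step moves the iterate a distance λ toward the minimizer 0 of g and then stays there, so for this one-dimensional example the PPA reaches the minimizer exactly after finitely many steps.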

Motivated by the fixed point techniques of Yuan [20], by the fact that the class of quasi-nonexpansive mappings properly includes that of nonexpansive mappings, and by improvements of the proximal point algorithm, we propose a new iterative scheme for finding a common element of the set of solutions of an inclusion problem with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators in a real Hilbert space. We show that the proposed iterative scheme converges strongly to a common element of the three sets, and new strong convergence theorems are deduced. Our proposed algorithm does not involve commuting assumptions on Ti (i=1,2). Our technique of proof is of independent interest.

Preliminaries

The demiclosedness of a nonlinear operator T usually plays an important role in dealing with the convergence of fixed point iterative algorithms.

Definition 1

Let H be a real Hilbert space and T:D(T)⊂H→H be a mapping. I−T is said to be demiclosed at 0 if for any sequence {xn}⊂D(T) such that {xn} converges weakly to p and xn−Txn converges strongly to zero, we have p∈Fix(T).

Lemma 1

(Demiclosedness Principle, [29]) Let H be a real Hilbert space and K be a nonempty closed and convex subset of H. Let T:KK be a nonexpansive mapping. Then, IT is demiclosed at zero.

Lemma 2

([30]) Let H be a real Hilbert space. Then, for any x,yH, the following inequalities hold:

$$\lVert x+ y\rVert^{2} \leq \lVert x\rVert^{2}+ 2\langle y, x+y \rangle.$$
$$\Vert \lambda x+(1-\lambda)y \Vert^{2}=\lambda \Vert x \Vert^{2}+(1-\lambda) \Vert y \Vert^{2}-(1-\lambda)\lambda \Vert x-y \Vert^{2},\,\,\, \lambda\in (0,1).$$

Let B:H→2^H be a maximal monotone set-valued mapping. For a positive number λ, we define the resolvent operator \(J_{\lambda }^{B}\) generated by B and λ as follows:

$$J_{\lambda}^{B}x=(I+\lambda B)^{-1}x,\,\, \forall x\in H.$$

It is easy to see that the resolvent operator \(J_{\lambda }^{B}\) is single-valued, nonexpansive, and 1-inverse strongly monotone; moreover, a solution of problem (1) is a fixed point of the operator \(J_{\lambda }^{B} (I-\lambda A)\) for all λ>0.
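This fixed-point characterization can be checked numerically. In the sketch below, with toy choices of our own: B is the normal cone of [0,1], whose resolvent is the metric projection onto [0,1], and A(x)=x−3 is 1-inverse strongly monotone. Iterating x ↦ J_λ^B(x−λAx) reaches the fixed point x=1, which indeed solves 0∈A(x)+B(x), since −A(1)=2 lies in the normal cone of [0,1] at 1.

```python
def J_B(x, lam):
    # resolvent (I + lam*B)^{-1} when B is the normal cone of [0, 1]:
    # for every lam > 0 it is the metric projection onto [0, 1]
    return min(max(x, 0.0), 1.0)

def A(x):
    return x - 3.0  # 1-inverse strongly monotone on R

lam, x = 0.5, 0.25
for _ in range(100):
    x = J_B(x - lam * A(x), lam)
# x is now a fixed point of J_B(I - lam*A), hence a solution of 0 in A(x) + B(x)
```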

Lemma 3

[4] Let B:H→2^H be a maximal monotone mapping and A:H→H be a Lipschitz continuous monotone mapping. Then, the mapping B+A:H→2^H is maximal monotone.

Lemma 4

([31]) Assume that {an} is a sequence of nonnegative real numbers such that an+1≤(1−αn)an+αnσn for all n≥0, where {αn} is a sequence in (0,1) and {σn} is a sequence in \( \mathbb {R}\) such that

\((a)\,\, \sum _{n=0}^{\infty }\alpha _{n} = \infty \), \((b)\,\,\limsup _{n\rightarrow \infty }\,\sigma _{n}\leq 0\) or \( \,\,\sum _{n=0}^{\infty }\arrowvert \sigma _{n} \alpha _{n}\arrowvert <\infty \). Then, \({\lim }_{n\rightarrow \infty }a_{n}=0\).

Lemma 5

[32] Let K be a nonempty closed convex subset of a real Hilbert space H and A:KH be a k-strongly monotone and L-Lipschitzian operator with k>0, L>0. Assume that \(0<\eta <\frac {2k}{ L^{2}}\) and \(\tau =\eta \Big (k-\frac { L^{2}\eta }{2}\Big).\) Then, for each \(t\in \Big (0, min\{1,\,\, \frac {1}{\tau }\}\Big),\) we have

$$\| (I-t\eta A)x-(I-t\eta A)y\| \leq (1- t\tau) \| x-y\|,\,\, \forall\, x,y\in K.$$

Lemma 6

[33] Let {tn} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{t_{n_{i}}\}\) of {tn} such that \({t_{n_{i}}} \leq {t_{n_{i+1}}}\) for all i≥0. For sufficiently large numbers \(n\in \mathbb {N},\) an integer sequence {τ(n)} is defined as follows:

$$\tau(n) = \max\lbrace k\leq n:\,\,t_{k} \leq t_{k+1} \rbrace.$$

Then, τ(n)→∞ as n→∞ and

$$\max\lbrace t_{\tau(n)},\,\,\, t_{n} \rbrace\leq t_{\tau(n)+1}.$$

Lemma 7

Let H be a real Hilbert space and A:H→H be an α-inverse strongly monotone mapping. Then, I−θA is a nonexpansive mapping for every θ∈[0,2α].

Proof

For all x,yH, we have

$$\begin{array}{@{}rcl@{}} \Vert (I-\theta A)x-(I-\theta A)y\Vert^{2} &=& \Vert (x - y)- \theta (Ax- Ay)\Vert^{2}\\ &=& \Vert x-y \Vert^{2}- 2\theta\langle Ax-Ay, x-y \rangle+\theta^{2}\Vert Ax-Ay \Vert^{2}\\ &\leq & \Vert x-y \Vert^{2}+\theta(\theta - 2\alpha)\Vert Ax-Ay \Vert^{2}. \end{array} $$

We obtain the desired result. □

Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. For every λ>0, the Moreau-Yosida resolvent \(J_{\lambda }^{F}\) of F is defined by:

$${J_{\lambda}^{F}}x= \text{argmin}_{u\in H} \left[ F(u)+\frac{1}{2\lambda}\Vert x-u\Vert^{2}\right],$$

for all x∈H. It was shown in [24] that the set of fixed points of the resolvent associated with F coincides with the set of minimizers of F. Also, the resolvent \({J_{\lambda }^{F}}\) of F is nonexpansive for all λ>0.
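A small closed-form check of this coincidence, with an assumed quadratic F of our own: for F(u)=(u−2)², the Moreau-Yosida resolvent can be computed by hand, and its fixed point is the minimizer u=2 for every λ>0.

```python
def prox_F(x, lam):
    # J_lam^F x = argmin_u ( (u - 2)**2 + (1/(2*lam)) * (x - u)**2 );
    # setting the derivative to zero: 2*lam*(u - 2) + (u - x) = 0
    return (x + 4.0 * lam) / (1.0 + 2.0 * lam)

# the minimizer of F (u = 2) is the fixed point of J_lam^F, for any lam > 0
for lam in (0.1, 1.0, 25.0):
    assert abs(prox_F(2.0, lam) - 2.0) < 1e-12
```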

Lemma 8

(Miyadera [34]) Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. For every r>0 and μ>0, the following holds:

$$J^{F}_{r}x= J^{F}_{\mu}\left(\frac{\mu}{r}x +\left(1-\frac{\mu}{r}\right)J^{F}_{r}x \right).$$

Lemma 9

Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Then, for every x,y∈H and λ>0, the following sub-differential inequality holds:

$$ \frac{1}{\lambda}\Vert J_{\lambda}^{F} x-y \Vert^{2}-\frac{1}{\lambda}\Vert x-y \Vert^{2}+ \frac{1}{\lambda}\Vert x-J_{\lambda}^{F} x\Vert^{2}+ F\left(J_{\lambda}^{F} x\right)\leq F(y). $$
(15)

Main results

We start with the following result.

Lemma 10

Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Then, Fix(T1)∩Fix(T2)=Fix(T1∘T2) and T1∘T2 is a quasi-nonexpansive mapping on K.

Proof

We split the proof into two steps.

Step 1: First, we show that Fix(T1)∩Fix(T2)=Fix(T1∘T2). We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2). Thus, we only need to show that Fix(T1∘T2)⊂Fix(T1)∩Fix(T2). Let p∈Fix(T1)∩Fix(T2) and q∈Fix(T1∘T2). By using the properties of T1 and T2, we have

$$\begin{array}{@{}rcl@{}} \ \Vert q-p \Vert^{2} & = & \Vert T_{1}\circ T_{2} q- T_{1}p \Vert^{2} \ \\ & \leq & \Vert T_{2}q-p \Vert^{2}. \end{array} $$
(16)

Using the fact that T2 is firmly nonexpansive, we have

$$\begin{array}{@{}rcl@{}} \| T_{2}q-p\|^{2}&\leq& \langle T_{2}q-p, q-p\rangle\\ &=& \frac{1}{2}(\| T_{2}q -p\|^{2}+ \|q-p\|^{2}- \| T_{2}q-q\|^{2}), \end{array} $$

which yields

$$ \| T_{2}q -p\|^{2}\leq \|q-p\|^{2}- \| T_{2}q-q\|^{2}. $$
(17)

Using (16), inequality (17) becomes

$$\begin{array}{@{}rcl@{}} \| T_{2}q-p\|^{2} &\leq& \|q-p\|^{2}- \| T_{2}q-q\|^{2}\\ & \leq & \Vert T_{2}q-p \Vert^{2}- \| T_{2}q-q\|^{2}. \end{array} $$

Clearly, T2q−q=0, which implies that

$$q= T_{2}q.$$

Keeping in mind that T1∘T2q=q, we have

$$q=T_{1}\circ T_{2}q= T_{1}q.$$

Thus, q∈Fix(T1)∩Fix(T2). Hence, Fix(T1)∩Fix(T2)=Fix(T1∘T2).

Step 2: We show that T1∘T2 is a quasi-nonexpansive mapping on K. Let x∈K and p∈Fix(T1∘T2). Then, p∈Fix(T1)∩Fix(T2) by Step 1. We observe that

$$\begin{array}{@{}rcl@{}} \|T_{1}\circ T_{2}x-p \| &=&\|T_{1}\circ T_{2}x-T_{1}p \| \\ &\leq & \| T_{2}x-p \|\\ &\leq& \|x-p\|. \end{array} $$

This completes the proof. □
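Lemma 10 can be sanity-checked numerically with concrete choices of our own: T2 the metric projection onto [0,2], which is firmly nonexpansive, and T1 the quasi-nonexpansive mapping of Example 1, with Fix(T1)={0} and Fix(T2)=[0,2]. On a grid, the only fixed point of T1∘T2 is 0, matching Fix(T1)∩Fix(T2)={0}, and T1∘T2 satisfies the quasi-nonexpansiveness inequality.

```python
import math

def T1(x):
    # quasi-nonexpansive mapping of Example 1; Fix(T1) = {0}
    return 0.0 if x == 0.0 else 0.5 * x * math.sin(1.0 / x)

def T2(x):
    # firmly nonexpansive: metric projection onto [0, 2]; Fix(T2) = [0, 2]
    return min(max(x, 0.0), 2.0)

grid = [k / 10.0 for k in range(-50, 51)]
fixed_points = [x for x in grid if abs(T1(T2(x)) - x) < 1e-12]
assert fixed_points == [0.0]                        # Fix(T1 o T2) = Fix(T1) n Fix(T2) = {0}
assert all(abs(T1(T2(x))) <= abs(x) for x in grid)  # quasi-nonexpansive with p = 0
```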

We now prove the following theorem.

Theorem 2

Let H be a real Hilbert space and K be a nonempty closed convex subset of H. Let A:K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B:H→2^H be a maximal monotone operator such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping, and let M:K→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈K g(u) is nonempty. Let {xn} be a sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \Big), \end{array} \right. \end{array} $$
(18)

where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions:

(i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and λn∈(λ, d)⊂(0, min{1, 2α});

(ii) \(\liminf _{n\rightarrow \infty } (1-\theta _{n})\theta _{n}> 0\), I−T1∘T2 is demiclosed at the origin, \(0<\eta <\frac {2\mu }{ L^{2}}\), and 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\)

Then, the sequence {xn} generated by (18) converges strongly to x*∈Γ, which solves the following variational inequality:

$$ \langle \eta Mx^{*} -\gamma f(x^{*}), x^{*}-p \rangle\leq 0,\,\,\,\, \forall p\in \Gamma. $$
(19)

Proof

From the choice of η and γ, the operator ηM−γf is strongly monotone, so the variational inequality (19) has a unique solution in Γ. In what follows, we denote by x* the unique solution of (19). Without loss of generality, we may assume \( \alpha _{n}\in \Big (0, \min \Big\{1, \frac {1}{\tau }\Big\}\Big).\) Now, we prove that the sequence {xn} is bounded. Let p∈Γ. Then, g(p)≤g(u) for all u∈K. This implies that

$$g(p)+\frac{1}{2\lambda_{n}}\Vert p-p\Vert^{2} \leq g(u)+\frac{1}{2\lambda_{n}}\Vert u-p\Vert^{2}$$

and hence \({J_{\lambda _{n}}^{g}}p=p\) for all n≥1, where \({J_{\lambda _{n}}^{g}}\) is the Moreau-Yosida resolvent of g in K. Hence,

$$\|u_{n} -p\|\leq \| x_{n}-p\|,\,\,\, \forall n\geq 0.$$

By using (18) and Lemma 10, we have

$$\begin{array}{@{}rcl@{}} \| v_{n}-p\|^{2}&=&\Big\|\theta_{n}(u_{n}-p) + (1-\theta_{n})(T_{1}\circ T_{2} u_{n}-p)\Big\|^{2}\\ &=&\theta_{n} \| u_{n}-p\|^{2} + (1-\theta_{n}) \| T_{1}\circ T_{2} u_{n} -p\|^{2} -(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n} -u_{n}\|^{2}\\ &\leq &\theta_{n} \|u_{n}-p\|^{2} + (1-\theta_{n})\|u_{n}-p\|^{2}-(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n} -u_{n}\|^{2}. \end{array} $$

Hence,

$$ \|v_{n}-p\|^{2} \leq \| u_{n}-p\|^{2}-(1-\theta_{n})\theta_{n} \|T_{1}\circ T_{2} u_{n}-u_{n}\|^{2}. $$
(20)

Since θn∈(0,1), we obtain

$$ \| v_{n}-p \|\leq \|u_{n}-p \|. $$
(21)

For each n≥0, we put \(z_{n} : = J_{\lambda _{n}}^{B}(I-\lambda _{n} A)v_{n}.\) Then, from Lemma 7, we have

$$\|z_{n}-p\| = \|J_{\lambda_{n}}^{B} (I-\lambda_{n} A)v_{n}- p\|\leq \| v_{n}-p\|,\,\,\, \forall n\geq 0.$$

Therefore,

$$ \|z_{n}-p\|\leq \|v_{n}-p \| \leq \|u_{n}-p\| \leq \| x_{n}-p\|. $$
(22)

Hence, by using Lemma 5 and inequalities (22) and (18), we have

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-p\rVert & = & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M)z_{n} \Big)- p\rVert\\ &\leq& \alpha_{n} \gamma \lVert f(x_{n}) -f(p)\rVert + (1-\tau\alpha_{n}) \lVert z_{n} -p\rVert + \alpha_{n}\lVert \gamma f(p)- \eta Mp\rVert \\ &\leq&(1-\alpha_{n}(\tau- b\gamma))\lVert x_{n}-p\rVert+\alpha_{n}\lVert \gamma f(p)- \eta Mp\rVert\\ &\leq& \max{\{\lVert x_{n} - p\|,\frac{\lVert \gamma f(p)- \eta M p \|}{\tau- b\gamma}\}}. \end{array} $$

By induction, we can conclude that

$$\lVert x_{n}-p\rVert \leq \max{\{\lVert x_{0} - p\|,\frac{\lVert \gamma f(p)- \eta M p \|}{\tau- b\gamma}\}}, \,\,\,\, n \geq 1.$$

Hence, {xn} is bounded. By using Lemma 5 and inequality (20), we obtain

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-p\rVert^{2} &\leq & \lVert \alpha_{n}(\gamma f(x_{n})-\eta Mp)+ (I-\eta \alpha_{n} M)(z_{n} -p)\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} -p\rVert^{2}\\&& +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert z_{n} -p\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert v_{n} -p\rVert^{2}+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert z_{n} -p\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2} + (1-\tau\alpha_{n})^{2}\lVert x_{n}-p \rVert^{2}-(1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n}-u_{n}\|^{2}\\ &&+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert x_{n} -p\rVert. \end{array} $$

Hence,

$$\begin{array}{@{}rcl@{}} (1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n} \| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} \leq \|x_{n}-p\|^{2} -\lVert x_{n+1}-p\rVert^{2} + \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta Mp\rVert^{2}\\+ 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mp\rVert\lVert x_{n} -p\rVert. \end{array} $$

Since {xn} is bounded, there exists a constant C>0 such that

$$ (1-\tau\alpha_{n})^{2}(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} \leq \|x_{n}-p \|^{2} -\lVert x_{n+1}-p\rVert^{2} + \alpha_{n} C. $$
(23)

Next, we prove that xn→x*. To see this, let us consider two possible cases.

Case 1. Assume that the sequence {∥xn−x*∥} is monotonically decreasing. Then, {∥xn−x*∥} must be convergent. Clearly, we have

$$ {\lim}_{n\rightarrow \infty} \Big[\Vert x_{n}-x^{*} \Vert^{2}-\Vert x_{n+1}-x^{*}\Vert^{2}\Big]= 0. $$
(24)

It then follows from (23) that

$$ {\lim}_{n\rightarrow \infty}(1-\theta_{n})\theta_{n}\| T_{1}\circ T_{2} u_{n}-u_{n} \|^{2} =0. $$
(25)

Since \(\liminf _{n\rightarrow \infty } (1-\theta _{n}) \theta _{n}> 0,\) we have

$$ {\lim}_{n\rightarrow \infty}\Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert =0. $$
(26)

We observe that

$$\begin{array}{@{}rcl@{}} \Vert v_{n}-u_{n}\Vert &= & \Vert \theta_{n} u_{n} + (1-\theta_{n}) T_{1}\circ T_{2} u_{n}-u_{n}\Vert\\ &= & \Vert \theta_{n} u_{n} + (1-\theta_{n}) T_{1}\circ T_{2} u_{n}- \theta_{n} u_{n}- (1-\theta_{n}) u_{n}\Vert\\ & =& (1-\theta_{n})\Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert\\ &\leq& \Vert T_{1}\circ T_{2} u_{n}-u_{n} \Vert. \end{array} $$

Therefore, from (26), we get that

$$ {\lim}_{n\rightarrow \infty} \| v_{n}-u_{n} \|=0. $$
(27)

By using Lemma 9 with y=x* and the fact that g(x*)≤g(un), we get

$$ \|x_{n}-u_{n}\|^{2} \leq \|x_{n}-x^{*}\|^{2}-\|u_{n}-x^{*}\|^{2}. $$
(28)

From (18) and Lemma 5, we obtain

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &= & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) z_{n} \Big) -x^{*}\rVert^{2}\\ &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\alpha_{n}\eta M)(z_{n}-x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} - x^{*}\rVert^{2} +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \Vert x_{n}-x^{*} \Vert^{2}- (1-\tau\alpha_{n})^{2} \Vert x_{n}-u_{n} \Vert^{2} \\ &&+ 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n}-x^{*}\rVert. \end{array} $$

Since {xn} is bounded, there exists a constant C1>0 such that

$$ (1- \tau \alpha_{n})^{2} \|x_{n}-u_{n}\|^{2}\leq \|x_{n}-x^{*}\|^{2}-\|x_{n+1}-x^{*}\|^{2}+\alpha_{n} C_{1}. $$
(29)

It then follows from (24) and αn→0 that

$$ {\lim}_{n\rightarrow \infty}\| x_{n}-u_{n}\|=0. $$
(30)

On the other hand, using Lemma 7, we have

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &= & \lVert P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) z_{n} \Big)-x^{*}\rVert^{2}\\ &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\alpha_{n}\eta M)(z_{n} -x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n} -x^{*}\rVert^{2} +2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \Big[ \Vert v_{n}-x^{*} \Vert^{2}+ \lambda (d-2\alpha) \Vert Av_{n}-Ax^{*} \Vert^{2} \Big] \\ &&+ 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta Mx^{*}\rVert\lVert z_{n}-x^{*}\rVert. \end{array} $$

Therefore, we have

$$\begin{array}{@{}rcl@{}} (1-\tau\alpha_{n})^{2}\lambda(2\alpha-d) \Vert Av_{n}-Ax^{*} \Vert^{2} & \leq & \lVert x_{n}-x^{*}\rVert^{2}- \lVert x_{n+1}-x^{*}\rVert^{2}+\alpha_{n} C_{2}, \end{array} $$

where C2 is a positive constant. Since αn→0 as n→∞, using inequality (24) and the boundedness of {xn}, we obtain

$$ {\lim}_{n\rightarrow \infty}\Vert Av_{n}-Ax^{*} \Vert=0. $$
(31)

Since \(J_{\lambda _{n}}^{B}\) is 1-inverse strongly monotone, using (18), we have

$$\begin{array}{@{}rcl@{}} \lVert z_{n}-x^{*}\rVert^{2} &=& \lVert J_{\lambda_{n}}^{B}(I-\lambda_{n} A) v_{n}- J_{\lambda_{n}}^{B}(I-\lambda_{n} A)x^{*}\rVert^{2}\\ &\leq & \langle z_{n}-x^{*}, (I-\lambda_{n} A)v_{n}-(I-\lambda_{n} A)x^{*} \rangle\\ &=& \frac{1}{2}\Big[ \lVert (I-\lambda_{n} A) v_{n}-(I-\lambda_{n} A)x^{*}\rVert^{2} +\lVert z_{n}-x^{*}\rVert^{2}\\&&-\lVert (I-\lambda_{n} A) v_{n}-(I-\lambda_{n} A)x^{*}-(z_{n}-x^{*})\rVert^{2} \Big]\\ &\leq & \frac{1}{2}\Big[ \lVert v_{n}-x^{*}\rVert^{2}+\lVert z_{n}-x^{*}\rVert^{2}-\lVert v_{n}- z_{n}\rVert^{2}+2\lambda_{n}\langle z_{n}-x^{*}, A v_{n}-Ax^{*}\rangle-{\lambda_{n} }^{2}\Vert Av_{n}-Ax^{*} \Vert^{2} \Big]. \end{array} $$

So, we obtain

$$\lVert z_{n}-x^{*}\rVert^{2} \leq \lVert v_{n}-x^{*}\rVert^{2} -\lVert v_{n}- z_{n}\rVert^{2}+2\lambda_{n} \langle z_{n}-x^{*}, A v_{n}-Ax^{*}\rangle-{\lambda_{n} }^{2}\Vert A v_{n}-Ax^{*} \Vert^{2},$$

and thus

$$\begin{array}{@{}rcl@{}} \lVert x_{n+1}-x^{*}\rVert^{2} &\leq& \lVert \alpha_{n}(\gamma f(x_{n})-\eta M x^{*})+ (I-\alpha_{n}\eta M)(z_{n} -x^{*})\rVert^{2}\\ &\leq& \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} + (1-\tau\alpha_{n})^{2} \lVert z_{n}-x^{*}\rVert^{2}+2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n}-x^{*}\rVert\\ &\leq & \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} +\lVert v_{n}-x^{*}\rVert^{2}- (1-\tau\alpha_{n})^{2}\lVert v_{n}- z_{n}\rVert^{2}- (1-\tau\alpha_{n})^{2} {\lambda_{n}}^{2}\Vert A v_{n}-Ax^{*} \Vert^{2}\\&&+ 2\lambda_{n} (1-\tau\alpha_{n})^{2}\langle z_{n}-x^{*}, A v_{n}-Ax^{*} \rangle + 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert\\ &\leq & \alpha_{n}^{2} \lVert \gamma f(x_{n}) -\eta M x^{*}\rVert^{2} +\lVert x_{n}-x^{*}\rVert^{2}- (1-\tau\alpha_{n})^{2}\lVert v_{n}- z_{n}\rVert^{2} - (1-\tau\alpha_{n})^{2} {\lambda_{n}}^{2}\Vert A v_{n}-Ax^{*} \Vert^{2}\\&&+ 2\lambda_{n} (1-\tau\alpha_{n})^{2}\langle z_{n}-x^{*}, A v_{n}-Ax^{*} \rangle + 2 \alpha_{n}(1-\tau\alpha_{n})\lVert\gamma f(x_{n}) -\eta M x^{*}\rVert\lVert z_{n} -x^{*}\rVert. \end{array} $$

Since αn→0 as n→∞, using inequalities (24) and (31), we obtain

$$ {\lim}_{n\rightarrow \infty}\Vert v_{n}- z_{n} \Vert^{2}=0. $$
(32)

Next, we prove that \(\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle \leq 0.\) Since H is reflexive and {xn} is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of {xn} such that \(x_{n_{k}}\) converges weakly to some x** in K and

$$\limsup_{n\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle={\lim}_{k\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n_{k}}\rangle.$$

From (30), the subsequence \(\{u_{n_{k}}\}\) also converges weakly to x**. Combining this with (26) and the demiclosedness of I−T1∘T2 at the origin, we obtain x**∈Fix(T1∘T2). Using Lemma 10, we have x**∈Fix(T1)∩Fix(T2). Using (18) and Lemma 8, we arrive at

$$\begin{array}{@{}rcl@{}} \lVert x_{n}-J^{g}_{\lambda}x_{n}\rVert &\leq & \lVert u_{n}-J^{g}_{\lambda}x_{n}\rVert+ \lVert u_{n} -x_{n}\rVert\\ &\leq & \lVert J^{g}_{\lambda_{n}}x_{n} -J^{g}_{\lambda}x_{n}\rVert+ \lVert u_{n} -x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert + \lVert J^{g}_{\lambda}\Big(\frac{\lambda_{n}-\lambda}{\lambda_{n}}J^{g}_{\lambda_{n}}x_{n}+ \frac{\lambda}{\lambda_{n}}x_{n} \Big)- J^{g}_{\lambda}x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert + \lVert \frac{\lambda_{n}-\lambda}{\lambda_{n}}J^{g}_{\lambda_{n}}x_{n}+ \frac{\lambda}{\lambda_{n}}x_{n} -x_{n}\rVert\\ &\leq & \lVert u_{n} -x_{n}\rVert +\Big(1-\frac{\lambda}{\lambda_{n}}\Big)\lVert u_{n} -x_{n}\rVert\\ &\leq & \Big(2-\frac{\lambda}{\lambda_{n}}\Big)\lVert u_{n} -x_{n}\rVert. \end{array} $$

Hence,

$$ {\lim}_{n\rightarrow \infty}\lVert x_{n}-J^{g}_{\lambda}x_{n}\rVert=0. $$
(33)

Since \(J^{g}_{\lambda }\) is single-valued and nonexpansive, (33) and Lemma 1 give \(x^{**} \in Fix (J^{g}_{\lambda })=\text {argmin}_{u\in K}\, g(u).\) Let us show that \(x^{**}\in (A+B)^{-1}(0)\). Since A is α-inverse strongly monotone, A is a Lipschitz continuous monotone mapping. It follows from Lemma 3 that A+B is maximal monotone. Let \((v,u)\in G(A+B)\), i.e., \(u-Av\in B(v)\). Since \(z_{n_{k}} = J_{\lambda _{n_{k}}}^{B}(v_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}}),\) we have \( v_{n_{k}}- \lambda _{n_{k}} A v_{n_{k}}\in (I +\lambda _{n_{k}} B)z_{n_{k}},\) i.e., \(\frac {1}{\lambda _{n_{k}}}(v_{n_{k}}-z_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}})\in B(z_{n_{k}}).\) By the monotonicity of B, we have

$$\langle v-z_{n_{k}}, u-Av- \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}} -\lambda_{n_{k}} A v_{n_{k}}) \rangle \geq 0 $$

and so

$$\begin{array}{@{}rcl@{}} \langle v-z_{n_{k}}, u \rangle &\geq & \langle v-z_{n_{k}}, Av+ \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}} -\lambda_{n_{k}} A v_{n_{k}}) \rangle\\ &= & \langle v-z_{n_{k}}, Av-Az_{n_{k}}\rangle + \langle v-z_{n_{k}}, Az_{n_{k}}-Av_{n_{k}}\rangle + \langle v-z_{n_{k}}, \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}}) \rangle\\ &\geq& \langle v-z_{n_{k}}, Az_{n_{k}}-Av_{n_{k}}\rangle+ \langle v-z_{n_{k}}, \frac{1}{\lambda_{n_{k}}} (v_{n_{k}}-z_{n_{k}}) \rangle, \end{array} $$

where the last inequality uses the monotonicity of A.

Since \(v_{n}-z_{n}\to 0\), \(Av_{n}-Az_{n}\to 0\), and \(z_{n_{k}} \rightharpoonup x^{**},\) we get

$${\lim}_{k\to +\infty}\langle v-z_{n_{k}}, u \rangle = \langle v-x^{**}, u\rangle\geq 0$$

and hence \(x^{**}\in (A+B)^{-1}(0)\). Therefore, \(x^{**}\in Fix(T_{1})\cap Fix(T_{2})\cap (A+B)^{-1}(0)\cap \text{argmin}_{y\in K}\, g(y)\). On the other hand, since \(x^{*}\) solves (19), we then have

$$\begin{array}{@{}rcl@{}} \limsup_{n\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle &=&{\lim}_{k\to +\infty}\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n_{k}}\rangle\\ &=&\langle \eta Mx^{*}-\gamma f(x^{*}),x^{*}- x^{**}\rangle\leq 0. \end{array} $$

Finally, we show that \(x_{n}\to x^{*}\). From (18) and the properties of the metric projection, we get

$$\begin{array}{@{}rcl@{}} \| x_{n+1}-x^{*}\|^{2} &=& \| P_{K}(\alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}) -x^{*} \|^{2}\\ & \leq& \langle \alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}-x^{*}, x_{n+1}-x^{*} \rangle\\ &=& \langle \alpha_{n} \gamma f(x_{n})+ (I-\eta \alpha_{n} M) z_{n}-x^{*} -\alpha_{n} \gamma f(x^{*})+\alpha_{n} \gamma f(x^{*})\\&&-\alpha_{n}\eta Mx^{*}+\alpha_{n} \eta Mx^{*}, x_{n+1}-x^{*} \rangle\\ & \leq& \Big(\alpha_{n} \gamma \| f(x_{n})-f(x^{*})\| + \| (I-\alpha_{n} \eta M)(z_{n}-x^{*})\|\Big) \Vert x_{n+1}-x^{*} \Vert\\&&+\alpha_{n} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n+1}\rangle\\ & \leq& (1-\alpha_{n}(\tau- b\gamma)) \| x_{n}-x^{*}\| \Vert x_{n+1}-x^{*} \Vert + \alpha_{n} \langle \eta Mx^{*}\!-\gamma f(x^{*}), x^{*}-x_{n+1} \rangle\\ & \leq& (1-\alpha_{n}(\tau- b\gamma))\| x_{n}-x^{*}\|^{2}+ 2\alpha_{n} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n+1}\rangle. \end{array} $$

Hence, by Lemma 4, we conclude that the sequence {xn} converges strongly to \(x^{*}\in Fix(T_{1})\cap Fix(T_{2})\cap (A+B)^{-1}(0)\cap \text{argmin}_{y\in K}\, g(y)\).

Case 2. Assume that the sequence \(\{\lVert x_{n}-x^{*}\rVert\}\) is not monotonically decreasing. Set \(B_{n}=\lVert x_{n}-x^{*}\rVert\) and let \(\pi : \mathbb {N}\to \mathbb {N} \) be the mapping defined for all nn0 (for some n0 large enough) by \( \pi (n)= \max \lbrace k\in \mathbb {N} : k\leq n,\,\,\, B_{k}\leq B_{k+1}\rbrace.\) Obviously, {π(n)} is a non-decreasing sequence such that π(n)→∞ as n→∞ and Bπ(n)Bπ(n)+1 for all nn0. From (23), we have

$$ (1-\tau\alpha_{\pi(n)})^{2} (1-\theta_{\pi(n)})\theta_{\pi(n)}\| u_{\pi(n)}- T_{1}\circ T_{2} u_{\pi(n)}\|^{2} \leq \alpha_{\pi(n)} C.$$

Hence,

$${\lim}_{n\rightarrow \infty}\lVert u_{\pi(n)}- T_{1}\circ T_{2}u_{\pi(n)}\rVert =0. $$

By a similar argument as in Case 1, we can show that \(\{x_{\pi(n)}\}\) is bounded in \(H\), \({\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-x_{\pi (n)} \Vert =0\), \({\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-v_{\pi (n)} \Vert =0\), \({\lim }_{n\rightarrow \infty }\Vert v_{\pi (n)}-z_{\pi (n)} \Vert =0 \), and \(\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{\pi (n)}\rangle \leq 0.\) We have, for all nn0,

$$0\leq \lVert x_{\pi(n)+1}-x^{*} \rVert^{2}- \lVert x_{\pi(n)}-x^{*} \rVert^{2}\leq \alpha_{\pi(n)}[- (\tau- b\gamma) \lVert x_{\pi(n)}-x^{*} \rVert^{2} +2\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{\pi(n)+1} \rangle], $$

which implies that

$$\lVert x_{\pi(n)}-x^{*} \rVert^{2} \leq \frac{ 2}{\tau- b\gamma} \langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}- x_{\pi(n)+1}\rangle.$$

Then, we have

$${\lim}_{n\rightarrow \infty}\lVert x_{\pi(n)}-x^{*} \rVert^{2} =0.$$

Therefore,

$${\lim}_{n\rightarrow \infty} B_{\pi(n)}={\lim}_{n\rightarrow \infty} B_{\pi(n)+1}=0.$$

Thus, by Lemma 6, we conclude that

$$0\leq B_{n}\leq \max\lbrace B_{\pi(n)},\,\,B_{\pi(n)+1}\rbrace=B_{\pi(n)+1}.$$

Hence, \({\lim }_{n\rightarrow \infty }B_{n}=0;\) that is, {xn} converges strongly to \(x^{*}\). This completes the proof. □

We now apply Theorem 2 when T1 is a nonexpansive mapping. In this case, the demiclosedness assumption (that \(I-T_{1}\circ T_{2}\) is demiclosed at the origin) is not necessary.

Theorem 3

Let \(A: K\to H\) be an α-inverse strongly monotone operator and \(g:K\to(-\infty, +\infty]\) be a proper, lower semi-continuous, and convex function. Let \(T_{1}:K\to K\) be a nonexpansive mapping and \(T_{2}:K\to K\) be a firmly nonexpansive mapping. Let B be a maximal monotone operator on H into \(2^{H}\) such that the domain of B is included in K, let \(f:K\to K\) be a b-Lipschitzian mapping, and let \(M:K\to H\) be a μ-strongly monotone and L-Lipschitzian operator such that \(\Gamma :=Fix(T_{1})\cap Fix(T_{2})\cap (A+B)^{-1}(0)\cap \text{argmin}_{u\in K}\, g(u)\) is nonempty. Let {xn} be the sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \Big(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \Big), \end{array} \right. \end{array} $$
(34)

where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and \(\lambda_{n}\in(\lambda, d)\subset(0, \min\{1, 2\alpha\})\); (ii) \(\liminf_{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0\) and \(0<\eta <\frac {2\mu }{ L^{2}},\) 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (34) converges strongly to \(x^{*}\in\Gamma\), which solves the variational inequality:

$$ \langle \eta Mx^{*} -\gamma f(x^{*}), x^{*}-p \rangle\leq 0,\,\,\,\, \forall p\in \Gamma. $$
(35)

Proof

Since \(T_{1}\circ T_{2}\) is a nonexpansive mapping, the proof follows from Lemma 1 and Theorem 2. □
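To see how the anchoring term αnγf(xn) in schemes such as (34) forces strong convergence to one specific element of the solution set, consider a drastically simplified special case, not the full scheme of this paper: H = ℝ, g ≡ 0, A = B = 0, M = I, γ = η = 1, and f ≡ u constant. The iteration then reduces to the classical Halpern scheme xn+1 = αnu + (1 − αn)Txn, which with αn = 1/(n+1) converges to the fixed point of the nonexpansive map T nearest to the anchor u. All names below are illustrative:

```python
def halpern(u, x0, T, iters=1000):
    """Halpern-type iteration x_{n+1} = a_n*u + (1 - a_n)*T(x_n), a_n = 1/(n+1)."""
    x = x0
    for n in range(iters):
        a = 1.0 / (n + 1)
        x = a * u + (1.0 - a) * T(x)
    return x

# T = metric projection onto the interval [1, 3], so Fix(T) = [1, 3]:
# the fixed point set is a continuum, yet the anchor selects one point.
T = lambda x: min(max(x, 1.0), 3.0)

# With anchor u = 0, the iterates approach the fixed point closest to u, namely 1.
x = halpern(0.0, 5.0, T)
```

Without the vanishing anchor term the Krasnoselskii–Mann-type iterate would converge only weakly in general; the αn-term is what upgrades this to strong convergence toward a selected solution.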

Now, we consider the following quadratic optimization problem:

$$ \min_{x\in \Gamma}\Big(\frac{\eta}{2}\langle Mx, x \rangle- h(x)\Big), $$
(36)

where \(M:K\to H\) is a strongly positive bounded linear operator, \(\Gamma :=Fix(T_{1})\cap Fix(T_{2})\cap (A+B)^{-1}(0)\cap \text{argmin}_{u\in K}\, g(u)\), and h is a potential function for γf (i.e., \(h'(x) = \gamma f(x)\) for all x∈K).

Hence, one has the following result.

Theorem 4

Let \(A: K\to H\) be an α-inverse strongly monotone operator and \(g:K\to(-\infty, +\infty]\) be a proper, lower semi-continuous, and convex function. Let \(T_{1}:K\to K\) be a quasi-nonexpansive mapping and \(T_{2}:K\to K\) be a firmly nonexpansive mapping. Let B be a maximal monotone operator on H into \(2^{H}\) such that the domain of B is included in K, let \(f:K\to K\) be a b-Lipschitzian mapping, and let \(M:K\to H\) be a strongly positive bounded linear operator with coefficient μ>0 such that \(\Gamma :=Fix(T_{1})\cap Fix(T_{2})\cap (A+B)^{-1}(0)\cap \text{argmin}_{u\in K}\, g(u)\) is nonempty. Let {xn} be the sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in K,\\\\ u_{n}=\text{argmin}_{u\in K} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= P_{K} \left(\alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{B}(v_{n}- \lambda_{n} Av_{n}) \right), \end{array} \right. \end{array} $$
(37)

where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and \(\lambda_{n}\in(\lambda, d)\subset(0, \min\{1, 2\alpha\})\); (ii) \(\liminf_{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0,\) \(I-T_{1}\circ T_{2}\) is demiclosed at the origin, and \(0<\eta <\frac {2\mu }{ \Vert M\Vert ^{2}},\) 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { \Vert M\Vert ^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (37) converges strongly to the unique solution of problem (36).

Proof

We note that a strongly positive bounded linear operator M is ‖M‖-Lipschitzian and μ-strongly monotone; the proof then follows from Theorem 2. □

Application to some nonlinear problems

In this section, we apply our main results to finding a common solution of a composite convex minimization problem, a convex optimization problem, and a fixed point problem involving composite operators.

Problem 1

Let H be a real Hilbert space. We consider the minimization of composite objective function of the type

$$ \min_{x\in H}\, \Big(\Psi(x)+\Phi(x)\Big), $$
(38)

where \(\Psi : H \to \mathbb {R} \cup \{+\infty \}\) is a proper, convex, and lower semi-continuous functional and \(\Phi : H \to \mathbb {R}\) is a convex, continuously Fréchet differentiable functional.

Many optimization problems from image processing [7], statistical regression, and machine learning (see, e.g., [36] and the references contained therein) can be cast in the form of (38). Observe that Problem 1 is equivalent to finding x∗∈H such that

$$ 0\in \partial \Psi(x^{*})+ \nabla \Phi(x^{*}). $$
(39)
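Condition (39) is exactly the situation handled by the forward-backward step \(z_{n} = J_{\lambda_{n}}^{\partial\Psi}(v_{n}-\lambda_{n}\nabla\Phi v_{n})\) appearing in the algorithms above. As a hedged one-dimensional illustration (a minimal special case, not the full scheme of this paper; the function choices below are ours): take Ψ(x) = |x|, whose resolvent is the soft-thresholding map, and Φ(x) = ½(x − 1)²:

```python
def soft_threshold(x, lam):
    # Resolvent J_lam of the subdifferential of Psi(x) = |x| (soft-thresholding).
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def forward_backward(x0, lam=0.5, iters=200):
    # x_{n+1} = J_lam(x_n - lam * grad Phi(x_n)) with Phi(x) = 0.5*(x-1)^2.
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - 1.0), lam)
    return x

# The unique minimizer of |x| + 0.5*(x-1)^2 is x* = 0, since 0 lies in the
# subdifferential at 0:  [-1, 1] + (0 - 1) = [-2, 0].
x_star = forward_backward(2.0)
```

The iterates satisfy precisely the fixed-point characterization of (39): x solves 0 ∈ ∂Ψ(x) + ∇Φ(x) if and only if x is a fixed point of the forward-backward map.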

It is well known that \(\partial\Psi\) is maximal monotone (see, e.g., Minty [37]).

Lemma 11

(Baillon and Haddad [38]) Let H be a real Hilbert space, Φ a continuously Fréchet differentiable, convex functional on H, and \(\nabla\Phi\) the gradient of Φ. If \(\nabla\Phi\) is \(\frac {1}{\alpha }\)-Lipschitz continuous, then \(\nabla\Phi\) is α-inverse strongly monotone.
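As a quick numerical sanity check of Lemma 11 (an illustration, not a proof; the matrix and test points below are arbitrary choices of ours): for the quadratic Φ(x) = ½⟨Qx, x⟩ with Q symmetric positive semidefinite, ∇Φ(x) = Qx is L-Lipschitz with L = ‖Q‖, and the lemma predicts ⟨Qx − Qy, x − y⟩ ≥ (1/L)‖Qx − Qy‖²:

```python
# Q = [[2, 1], [1, 2]] is symmetric with eigenvalues 1 and 3, so grad Phi(x) = Q x
# is L-Lipschitz with L = 3 and, by Baillon-Haddad, (1/3)-inverse strongly monotone.
Q = [[2.0, 1.0], [1.0, 2.0]]
L = 3.0

def grad_phi(x):
    return [Q[0][0]*x[0] + Q[0][1]*x[1], Q[1][0]*x[0] + Q[1][1]*x[1]]

def inner(a, b):
    return a[0]*b[0] + a[1]*b[1]

def ism_gap(x, y):
    # <grad(x)-grad(y), x-y> - (1/L)*||grad(x)-grad(y)||^2; nonnegative if the lemma holds.
    d = [x[0] - y[0], x[1] - y[1]]
    gx, gy = grad_phi(x), grad_phi(y)
    g = [gx[0] - gy[0], gx[1] - gy[1]]
    return inner(g, d) - inner(g, g) / L

pairs = [([1.0, 0.0], [0.0, 1.0]), ([2.0, -1.0], [0.5, 3.0]), ([-4.0, 2.0], [1.0, 1.0])]
gaps = [ism_gap(x, y) for x, y in pairs]
```

Inverse strong monotonicity of ∇Φ is exactly what the condition λn < 2α in the theorems above exploits: it makes the forward step I − λn∇Φ nonexpansive.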

Hence, from Theorem 2, we have the following result.

Theorem 5

Let H be a real Hilbert space and \(g:H\to(-\infty, +\infty]\) be a proper, lower semi-continuous, and convex function. Let \(\Phi : H \to \mathbb {R} \) be a continuously Fréchet differentiable, convex functional on H such that \(\nabla\Phi\) is \(\frac {1}{\alpha }\)-Lipschitz continuous. Let \( \Psi : H \to \mathbb {R} \cup \{+\infty \}\) be a proper, convex, and lower semi-continuous functional and \(f:H\to H\) be a b-Lipschitzian mapping. Let \(T_{1}:H\to H\) be a quasi-nonexpansive mapping, let \(T_{2}:H\to H\) be a firmly nonexpansive mapping, and let \(M:H\to H\) be a μ-strongly monotone and L-Lipschitzian operator such that \(\Gamma :=Fix(T_{1})\cap Fix(T_{2})\cap (\partial\Psi +\nabla\Phi)^{-1}(0)\cap \text{argmin}_{u\in H}\, g(u)\) is nonempty. Let {xn} be the sequence defined as follows:

$$\begin{array}{@{}rcl@{}} \left \{ \begin{array}{lll} x_{0}\in H,\\\\ u_{n}=\text{argmin}_{u\in H} \Big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\Big],\\\\ v_{n}=\theta_{n} u_{n}+(1-\theta_{n})T_{1}\circ T_{2} u_{n},\\\\ x_{n+1}= \alpha_{n} \gamma f(x_{n}) +(I-\alpha_{n}\eta M) J_{\lambda_{n}}^{\partial \Psi}(v_{n}- \lambda_{n} \nabla \Phi v_{n}), \end{array} \right. \end{array} $$
(40)

where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0,\,\,\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and \(\lambda_{n}\in(\lambda, d)\subset(0, \min\{1, 2\alpha\})\); (ii) \(\liminf_{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0,\) \(I-T_{1}\circ T_{2}\) is demiclosed at the origin, and \(0<\eta <\frac {2\mu }{ L^{2}},\) 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (40) converges strongly to a point \(x^{*}\in \text{argmin}_{u\in H}\, g(u)\), which is a minimizer of Ψ+Φ on H and is also a common fixed point of T1 and T2.

Proof

We set \(B=\partial\Psi\), \(A=\nabla\Phi\), and K=H in Theorem 2. Then, the result follows from Theorem 2. □
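Each of the schemes (34), (37), and (40) begins with the step \(u_{n}=\text{argmin}_{u} \big[ g(u)+\frac{1}{2\lambda_{n}}\Vert u-x_{n}\Vert^{2}\big]\), i.e., the Moreau proximal map of g evaluated at xn. A hedged numerical check of this step (the quadratic g and all values below are illustrative choices, not from the paper): for g(u) = u²/2 the proximal map has the closed form x/(1 + λ), which a crude grid search over the objective reproduces:

```python
def prox_objective(u, x, lam, g):
    # The function minimized in the proximal step: g(u) + (1/(2*lam)) * (u - x)^2.
    return g(u) + (u - x) ** 2 / (2.0 * lam)

def prox_grid(x, lam, g, lo=-10.0, hi=10.0, steps=20001):
    # Brute-force approximation of argmin_u [ g(u) + (1/(2*lam)) * (u - x)^2 ].
    best_u, best_v = lo, prox_objective(lo, x, lam, g)
    for i in range(1, steps):
        u = lo + (hi - lo) * i / (steps - 1)
        v = prox_objective(u, x, lam, g)
        if v < best_v:
            best_u, best_v = u, v
    return best_u

g = lambda u: 0.5 * u * u           # illustrative choice: g(u) = u^2 / 2
x, lam = 3.0, 0.5
u_numeric = prox_grid(x, lam, g)    # closed form predicts x / (1 + lam) = 2.0
```

In practice one uses closed-form resolvents whenever they are available; the grid search here only verifies the closed form on a toy instance.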

Remark 1

Many already studied problems in the literature can be considered as special cases of this paper; see, for example, [1, 3, 4, 14, 16, 17, 23, 26, 28, 39] and the references therein.

Availability of data and materials

Not applicable.

References

  1. Rockafellar, R. T.: Maximal monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976).

  2. Radu, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).

  3. Passty, G. B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383–390 (1979).

  4. Lions, P. L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979).

  5. Chen, G. H.-G., Rockafellar, R. T.: Convergence rates in forward-backward splitting. SIAM J. Optim. 7(2), 421–444 (1997).

  6. López, G., Martín-Márquez, V., Wang, F., Xu, H. K.: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal., 1–25 (2012). doi:10.1155/2012/109236.

  7. Bredies, K.: A forward-backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Probl. 25(1), 1–20 (2009).

  8. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000).

  9. Dadashi, V., Postolache, M.: Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math., 1–11 (2019). https://doi.org/10.1007/s40065-018-0236-2.

  10. Adly, S.: Perturbed algorithms and sensitivity analysis for a general class of variational inclusions. J. Math. Anal. Appl. 201(3), 609–630 (1996).

  11. Radu, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).

  12. Shimoji, K., Takahashi, W.: Strong convergence theorems of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Amer. Math. Soc. 125, 3641–3645 (1997).

  13. Dotson, Jr., W. G.: Fixed points of quasi-nonexpansive mappings. J. Aust. Math. Soc. 13, 167–170 (1972).

  14. Xu, H. K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003).

  15. Moudafi, A.: Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 241, 46–55 (2000).

  16. Marino, G., Xu, H. K.: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43–52 (2006).

  17. Marino, G., Xu, H. K.: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336–346 (2007).

  18. Sow, T. M. M.: A modified generalized viscosity explicit methods for quasi-nonexpansive mappings in Banach spaces. Funct. Anal. Approx. Comput. 11(2), 37–49 (2019).

  19. Yao, Y., Zhou, H., Liou, Y. C.: Strong convergence of modified Krasnoselskii-Mann iterative algorithm for nonexpansive mappings. J. Appl. Math. Comput. 29, 383–389 (2009).

  20. Yuan, H.: On solutions of inclusion problems and fixed point problems. Fixed Point Theory Appl. 2013, 11 (2013).

  21. Gunduz, B., Akbulut, S.: Common fixed points of a finite family of I-asymptotically nonexpansive mappings by S-iteration process in Banach spaces. Thai J. Math. 15(3), 673–687 (2017).

  22. Halpern, B.: Fixed points of nonexpanding maps. Bull. Amer. Math. Soc. 73, 957–961 (1967).

  23. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. (French) Rev. Française Informat. Recherche Opérationnelle. 4, 154–158 (1970).

  24. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991).

  25. Solodov, M. V., Svaiter, B. F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. Ser. A. 87, 189–202 (2000).

  26. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2003).

  27. Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239–252 (1996).

  28. Reich, S.: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 183, 118–120 (1994).

  29. Browder, F. E.: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 100, 201–225 (1967). https://doi.org/10.1007/BF01109805.

  30. Chidume, C. E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Springer Verlag Series: Lecture Notes in Mathematics, vol. 1965 (2009). ISBN 978-1-84882-189-7.

  31. Xu, H. K.: Iterative algorithms for nonlinear operators. J. London Math. Soc. 66(2), 240–256 (2002).

  32. Wang, S.: A general iterative method for an infinite family of strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Lett. 24, 901–907 (2011).

  33. Maingé, P. E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008).

  34. Miyadera, I.: Nonlinear Semigroups. Translations of Mathematical Monographs, American Mathematical Society, Providence (1992).

  35. Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures, Second Edition. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel (2008).

  36. Wang, Y., Xu, H. K.: Strong convergence for the proximal-gradient method. J. Nonlinear Convex Anal. 15(3), 581–593 (2014).

  37. Minty, G. J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29, 341–346 (1962).

  38. Baillon, J. B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977).

  39. Khatibzadeh, H., Mohebbi, V.: On the iterations of a sequence of strongly quasi-nonexpansive mappings with applications. Numer. Funct. Anal. Optim. 41(3), 231–256 (2020).

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

The authors read and approved the final manuscript.

Corresponding author

Correspondence to T. M. M. Sow.

Ethics declarations

Competing interests

The author declares that there are no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Sow, T. General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators. J Egypt Math Soc 28, 20 (2020). https://doi.org/10.1186/s42787-020-00080-w

Keywords

  • General-type proximal algorithm
  • Convex minimization problem
  • Monotone inclusion problem
  • Composite operators