General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators
Journal of the Egyptian Mathematical Society volume 28, Article number: 20 (2020)
Abstract
The main purpose of this paper is to introduce a new general-type proximal point algorithm for finding a common element of the set of solutions of a monotone inclusion problem, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators: the composition of a quasi-nonexpansive and a firmly nonexpansive mapping in real Hilbert spaces. We prove that the sequence {xn} generated by the proposed iterative algorithm converges strongly to a common element of the three sets above, without any commutativity assumption on the mappings. Finally, applications of our theorems to finding a common solution of some nonlinear problems, namely composite minimization problems, convex optimization problems, and fixed point problems, are given to validate our new findings.
Introduction
Throughout this paper, we assume that H is a real Hilbert space with inner product 〈·,·〉 and norm ∥·∥, and that K is a nonempty closed convex subset of H. For a set-valued operator A:H→2H, the domain of A, D(A), the image of a subset S of H under A, A(S), the range of A, R(A), and the graph of A, G(A), are defined as follows:
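The displays defining these sets are not reproduced above; in standard notation they read
\[ D(A)=\{x\in H: Ax\neq \emptyset\},\qquad A(S)=\bigcup\{Az: z\in S\}, \]
\[ R(A)=A(H),\qquad G(A)=\{(x,u)\in H\times H: x\in D(A),\ u\in Ax\}. \]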
An operator A:K→H is called monotone if
An operator A:K→H is said to be strongly monotone if there exists a positive constant k∈(0,1) such that
It is said to be α-inverse strongly monotone if there exists a constant α>0 such that
It is immediate that if A is α-inverse strongly monotone, then A is monotone and Lipschitz continuous (with Lipschitz constant 1/α).
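For the reader's convenience, we restate these monotonicity notions in their standard form (the paper's own displays are not reproduced above): for all x, y in the domain,
\[ \langle Ax-Ay,\; x-y\rangle \ge 0 \quad \text{(monotone)}, \]
\[ \langle Ax-Ay,\; x-y\rangle \ge k\|x-y\|^{2} \quad \text{(k-strongly monotone)}, \]
\[ \langle Ax-Ay,\; x-y\rangle \ge \alpha\|Ax-Ay\|^{2} \quad \text{(α-inverse strongly monotone)}. \]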
Let A:H→H be a single-valued nonlinear mapping and B:H→2H be a set-valued mapping. The variational inclusion problem is as follows: find x∈H such that 0∈Ax+Bx; we refer to this inclusion as problem (1).
We denote the set of solutions of this problem by (A+B)−1(0). If A=0, then problem (1) becomes the inclusion problem introduced by Rockafellar [1]. Inclusions of the form specified by (1) arise in numerous problems of fundamental importance in mathematical optimization, either directly or through an appropriate reformulation. In what follows, we provide some motivating examples.
General monotone inclusions
Consider the inclusion problem
where \(A : H_{1}\to 2^{H_{1}}\) and \(B : H_{2}\to 2^{H_{2}}\) are maximally monotone operators and K:H1→H2 is a linear, bounded operator with adjoint K∗. As was observed in [2], solving (2) can be equivalently cast as the following monotone inclusion posed in the product space:
Notice that the first operator in (3) is maximally monotone whereas the second is bounded and linear (in particular, it is Lipschitz continuous with full domain). Consequently, (3) is also of the form specified by (1).
Saddle point problems
Many convex optimization problems can be formulated as the saddle point problem:
where f,g:H→(−∞, +∞] are proper, lower semi-continuous, convex functions and \(\Phi : H \times H \to \mathbb {R}\) is a smooth convex-concave function. Problems of this form arise naturally in machine learning, statistics, etc., where the dual (maximization) problem comes either from dualizing the constraints in the primal problem or from using the Fenchel-Legendre transform to leverage a nonsmooth composite part. Through its first-order optimality condition, the saddle point problem above can be expressed as the monotone inclusion
which is of the form specified by (1). In general, monotone inclusions of type (1) are nonlinear, and there is no known method for finding closed-form solutions of them. Consequently, methods of approximating solutions of such inclusions are of interest.
The best-known splitting method for solving the inclusion (1) when B is single-valued is the forward-backward method, called so because each iteration combines one forward evaluation of B with one backward evaluation of A, introduced by Passty [3] and Lions and Mercier [4]. More precisely, the method generates a sequence according to
under the condition that D(B)⊂D(A). It was shown, see for example [5], that weak convergence of (6) requires quite restrictive assumptions on A and B, such that the inverse of A is strongly monotone or B is Lipschitz continuous and monotone and the operator A+B is strongly monotone on D(B). Hence, the modification is necessary in order to guarantee the strong convergence of forward-backward splitting method (see, for example, [5–12] and the references contained in them).
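For orientation, the classical forward-backward iteration for (1), with the roles of A and B as described in the preceding paragraph (B single-valued, evaluated explicitly; A handled through its resolvent), has the standard form, stated here for completeness since the display (6) is not reproduced above:
\[ x_{n+1}=(I+\lambda_{n}A)^{-1}\bigl(x_{n}-\lambda_{n}Bx_{n}\bigr),\qquad \lambda_{n}>0, \]
that is, one forward (explicit) step on B followed by one backward (resolvent) step on A.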
A map T:K→K is said to be Lipschitz if there exists an L≥0 such that
if L<1, T is called a contraction, and if L=1, T is called nonexpansive. We denote by Fix(T) the set of fixed points of the mapping T, that is, Fix(T):={x∈D(T):x=Tx}. We assume that Fix(T) is nonempty. A map T is called quasi-nonexpansive if ∥Tx−p∥≤∥x−p∥ holds for all x∈K and p∈Fix(T). The mapping T:K→K is said to be firmly nonexpansive if
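Since the corresponding displays are not reproduced above, we restate the usual inequalities: T is L-Lipschitz if
\[ \|Tx-Ty\|\le L\|x-y\| \qquad \text{for all } x,y\in K, \]
and T is firmly nonexpansive if
\[ \|Tx-Ty\|^{2}\le \langle x-y,\; Tx-Ty\rangle \qquad \text{for all } x,y\in K. \]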
We remark here that a nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive; however, the converse need not be true. See the following example [13].
Example 1
[13] Let \(H = \mathbb {R}\) and define a mapping T:H→H by
Then, T is quasi-nonexpansive but not nonexpansive.
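The defining formula of T is not reproduced above. A standard example of this phenomenon, possibly the one intended in [13], is the map
\[ Tx=\frac{x}{2}\sin\Bigl(\frac{1}{x}\Bigr)\ \ (x\neq 0),\qquad T0=0. \]
Here Fix(T)={0} and |Tx−0|≤|x|/2≤|x−0| for all x, so T is quasi-nonexpansive; on the other hand, T′ is unbounded near the origin, so T cannot be nonexpansive.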
Fixed point theory has proved to be a very powerful and effective tool for solving a large number of problems that emerge from real-world applications and can be translated into equivalent fixed point problems. In order to obtain approximate solutions of fixed point problems, various iterative methods have been proposed (see, e.g., [14–19] and the references therein).
In 2013, Yuan [20], motivated by the fact that the forward-backward method is remarkably useful for finding fixed points of nonlinear mappings, proved the following theorem.
Theorem 1
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let A: C→H be an α-inverse strongly monotone operator and S:C→C be a quasi-nonexpansive mapping. Let B be a maximal monotone operator on H into 2H whose domain is included in C, such that F:=Fix(S)∩(A+B)−1(0) is nonempty and I−S is demiclosed. Let {αn} be a real number sequence in [0,1] and {λn} be a positive real number sequence. Let {xn} be a sequence in C generated by the following iterative process:
where \(J_{\lambda _{n}} = (I + \lambda _{n} B)^{-1}.\) Suppose that the sequences {αn} and {λn} satisfy the following restrictions:
- (i)
0≤αn≤a<1;
- (ii)
0<b≤λn≤c<2α.
Then, the sequence {xn} converges strongly to PFx1.
However, we observe that the recursion formula studied in Theorem 1 is not simple.
Recently, iterative methods for nonexpansive mappings have been applied to solve convex minimization problems; see, e.g., [14, 16] and the references therein. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space H:
In [14], Xu proved that the sequence {xn} defined by the iterative method below, with initial guess x0∈H chosen arbitrarily:
converges strongly to the unique solution of the minimization problem (10), where T is a nonexpansive mapping on H and A a strongly positive bounded linear operator. In 2006, Marino and Xu [16] extended Moudafi's results [15] and Xu's results [14] via the following general iteration: x0∈H and
where \(\{\alpha _{n}\}_{n \in \mathbb {N}} \subset (0,1),\) A is a bounded linear operator on H, and T is nonexpansive. Under suitable conditions, they proved that the sequence {xn} defined by (12) converges strongly to a fixed point of T, which is the unique solution of the following variational inequality:
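Since the displays (10)–(12) and the limiting variational inequality are not reproduced above, we recall the classical forms from [14, 16], which should agree with the original displays: the minimization problem is
\[ \min_{x\in \mathrm{Fix}(T)}\ \tfrac{1}{2}\langle Ax,\,x\rangle-\langle x,\,b\rangle, \qquad b\in H \text{ given}, \]
Xu's iteration reads
\[ x_{n+1}=\alpha_{n}b+(I-\alpha_{n}A)Tx_{n}, \]
Marino and Xu's general iteration reads
\[ x_{n+1}=\alpha_{n}\gamma f(x_{n})+(I-\alpha_{n}A)Tx_{n}, \]
where f is a contraction and γ>0 a suitable constant, and the limiting variational inequality is
\[ \langle (A-\gamma f)x^{*},\, x-x^{*}\rangle \ge 0\qquad \text{for all } x\in \mathrm{Fix}(T). \]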
If T1 and T2 are self-mappings on K, a point x∈K is called a common fixed point of Ti (i=1,2) if x∈Fix(T1)∩Fix(T2). To find solutions of common fixed point problems, several iterative approximation methods have been introduced and studied. Such problems can be applied to solving various problems in science and applied science; see [21, 22] for instance. We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2), and that in almost all results on common fixed points of nonlinear mappings in Hilbert spaces, commutativity assumptions are imposed on Ti (i=1,2).
One of the major problems in optimization is to find x∈H such that \(g(x)=\min _{y\in H} g(y)\).
The set of all minimizers of g on H is denoted by argminy∈Hg(y). A successful and powerful tool for solving this problem is the well-known proximal point algorithm (PPA for short), which was initiated by Martinet [23] in 1970 and later studied by Rockafellar [1] in 1976. Let H be a real Hilbert space and g:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. The PPA is defined as follows:
where λn>0 for all n≥1. In [1], Rockafellar proved that the sequence {xn} given by (14) converges weakly to a minimizer of g. He then posed the following question:
Q1 Does the sequence {xn} converge strongly? This question was resolved in the negative by Güler [24], who produced a proper lower semi-continuous and convex function g on l2 for which the PPA converges weakly but not strongly. This leads naturally to the following question:
Q2 Can the PPA be modified to guarantee strong convergence? In response to Q2, several works have been done (see, e.g., Güler [24], Solodov and Svaiter [25], Kamimura and Takahashi [26], Lehdili and Moudafi [27], Reich [28] and the references therein).
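For completeness (the display (14) is not reproduced above), the classical PPA update referenced here has the standard form
\[ x_{n+1}=\operatorname{argmin}_{y\in H}\Bigl[g(y)+\frac{1}{2\lambda_{n}}\|y-x_{n}\|^{2}\Bigr]=J^{g}_{\lambda_{n}}x_{n},\qquad \lambda_{n}>0, \]
where \(J^{g}_{\lambda}\) denotes the Moreau-Yosida resolvent of g recalled in the Preliminaries.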
Motivated by the fixed point techniques of Yuan [20], by the fact that the class of quasi-nonexpansive mappings properly includes that of nonexpansive mappings, and by improvements of the proximal point algorithm, we propose a new iterative scheme for finding a common element of the set of solutions of an inclusion problem with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of minimizers of a convex function, and the set of solutions of a fixed point problem with composite operators in a real Hilbert space. We show that the proposed iterative scheme converges strongly to a common element of the three sets. New strong convergence theorems are then deduced. Our proposed algorithm does not involve commutativity assumptions on Ti (i=1,2). Our technique of proof is of independent interest.
Preliminaries
The demiclosedness of a nonlinear operator T usually plays an important role in dealing with the convergence of fixed point iterative algorithms.
Definition 1
Let H be a real Hilbert space and T:D(T)⊂H→H be a mapping. I−T is said to be demiclosed at 0 if, for any sequence {xn}⊂D(T) such that {xn} converges weakly to p and ∥xn−Txn∥ converges to zero, we have p∈Fix(T).
Lemma 1
(Demiclosedness Principle, [29]) Let H be a real Hilbert space and K be a nonempty closed and convex subset of H. Let T:K→K be a nonexpansive mapping. Then, I−T is demiclosed at zero.
Lemma 2
([30]) Let H be a real Hilbert space. Then, for any x,y∈H, the following inequalities hold:
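The displays of Lemma 2 are not reproduced above. Two standard Hilbert-space estimates of the kind typically collected in such a lemma, and used repeatedly below, are
\[ \|x+y\|^{2}\le \|x\|^{2}+2\langle y,\;x+y\rangle, \]
\[ \|tx+(1-t)y\|^{2}= t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2},\qquad t\in[0,1]. \]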
Let a set-valued mapping B:H→2H be maximal monotone. We define the resolvent operator \(J_{\lambda }^{B}\) generated by B and λ>0 by \(J_{\lambda }^{B}:=(I+\lambda B)^{-1}\).
It is easy to see that the resolvent operator \(J_{\lambda }^{B}\) is single-valued, nonexpansive, and 1-inverse strongly monotone and, moreover, that a solution of problem (1) is a fixed point of the operator \(J_{\lambda }^{B} (I-\lambda A)\) for all λ>0.
Lemma 3
[4] Let B:H→2H be a maximal monotone mapping and A:H→H be a Lipschitz continuous monotone mapping. Then, the mapping A+B:H→2H is maximal monotone.
Lemma 4
([31]) Assume that {an} is a sequence of nonnegative real numbers such that an+1≤(1−αn)an+αnσn for all n≥0, where {αn} is a sequence in (0,1) and {σn} is a sequence in \( \mathbb {R}\) such that
(a) \(\sum _{n=0}^{\infty }\alpha _{n} = \infty \); (b) \(\limsup _{n\rightarrow \infty }\sigma _{n}\leq 0\) or \(\sum _{n=0}^{\infty }| \sigma _{n} \alpha _{n}| <\infty \). Then, \({\lim }_{n\rightarrow \infty }a_{n}=0\).
Lemma 5
[32] Let K be a nonempty closed convex subset of a real Hilbert space H and A:K→H be a k-strongly monotone and L-Lipschitzian operator with k>0, L>0. Assume that \(0<\eta <\frac {2k}{ L^{2}}\) and \(\tau =\eta \Big (k-\frac { L^{2}\eta }{2}\Big).\) Then, for each \(t\in \Big (0, min\{1,\,\, \frac {1}{\tau }\}\Big),\) we have
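The concluding estimate of Lemma 5 is not reproduced above; the standard inequality established in [32] under these hypotheses (stated here for completeness) is
\[ \|(I-t\eta A)x-(I-t\eta A)y\|\le (1-t\tau)\|x-y\| \qquad \text{for all } x,y\in K. \]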
Lemma 6
[33] Let {tn} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{t_{n_{i}}\}\) of {tn} such that \({t_{n_{i}}} \leq {t_{n_{i+1}}}\) for all i≥0. For sufficiently large numbers \(n\in \mathbb {N},\) an integer sequence {τ(n)} is defined as follows:
Then, τ(n)→∞ as n→∞ and
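The displays of Lemma 6 are not reproduced above. In the standard formulation of Maingé [33], the integer sequence is defined, for all n beyond some n0, by
\[ \tau(n):=\max\{k\in\mathbb{N}: k\le n,\ t_{k}\le t_{k+1}\}, \]
and the conclusion reads
\[ t_{\tau(n)}\le t_{\tau(n)+1}\quad\text{and}\quad t_{n}\le t_{\tau(n)+1}\qquad\text{for all } n\ge n_{0}. \]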
Lemma 7
Let H be a real Hilbert space and A:H→H be an α-inverse strongly monotone mapping. Then, I−θA is a nonexpansive mapping for every θ∈[0,2α].
Proof
For all x,y∈H, we have
\[ \|(I-\theta A)x-(I-\theta A)y\|^{2}=\|x-y\|^{2}-2\theta\langle x-y,\,Ax-Ay\rangle+\theta^{2}\|Ax-Ay\|^{2}\le \|x-y\|^{2}-\theta(2\alpha-\theta)\|Ax-Ay\|^{2}\le\|x-y\|^{2}, \]
since A is α-inverse strongly monotone and θ∈[0,2α]. We obtain the desired result. □
Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. For every λ>0, the Moreau-Yosida resolvent of F, \(J_{\lambda }^{F}\), is defined by
\[ J_{\lambda }^{F}x=\operatorname{argmin}_{u\in H}\Bigl[F(u)+\frac{1}{2\lambda}\|u-x\|^{2}\Bigr] \]
for all x∈H. It was shown in [24] that the set of fixed points of the resolvent associated to F coincides with the set of minimizers of F. Also, the resolvent \({J_{\lambda }^{F}}\) of F is nonexpansive for all λ>0.
Lemma 8
(Miyadera [34]) Let H be a real Hilbert space and F:K→(−∞, +∞) be a proper, lower semi-continuous, and convex function. For every r>0 and μ>0, the following holds:
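The identity of Lemma 8 is not reproduced above. The well-known resolvent identity of Miyadera [34], which is the form typically invoked in this setting, reads
\[ J^{F}_{r}x=J^{F}_{\mu}\Bigl(\frac{\mu}{r}x+\Bigl(1-\frac{\mu}{r}\Bigr)J^{F}_{r}x\Bigr)\qquad\text{for all } x\in H. \]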
Lemma 9
Let H be a real Hilbert space and F:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Then, for every x,y∈H and λ>0, the following sub-differential inequality holds:
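The display of Lemma 9 is not reproduced above. A standard form of this sub-differential inequality, which follows from the (1/λ)-strong convexity of the proximal objective defining \(J^{F}_{\lambda}\), is
\[ \|J^{F}_{\lambda}x-y\|^{2}\le \|x-y\|^{2}-\|x-J^{F}_{\lambda}x\|^{2}+2\lambda\bigl(F(y)-F(J^{F}_{\lambda}x)\bigr). \]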
Main results
We start by the following result.
Lemma 10
Let H be a real Hilbert space and let K be a nonempty closed convex subset of H. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Then, Fix(T1)∩Fix(T2)=Fix(T1∘T2) and T1∘T2 is a quasi-nonexpansive mapping on K.
Proof
We split the proof into two steps.
Step 1: First, we show that Fix(T1)∩Fix(T2)=Fix(T1∘T2). We note that Fix(T1)∩Fix(T2)⊂Fix(T1∘T2). Thus, we only need to show that Fix(T1∘T2)⊆Fix(T1)∩Fix(T2). Let p∈Fix(T1)∩Fix(T2) and q∈Fix(T1∘T2). By using properties of T1 and T2, we have
Using the fact that T2 is firmly nonexpansive, we have
which yields
Using (16) implies that (28) becomes
Clearly, ∥T2q−q∥=0, which implies that T2q=q, that is, q∈Fix(T2).
Keeping in mind that T1∘T2q=q, we then have T1q=T1(T2q)=q, that is, q∈Fix(T1).
Thus, q∈Fix(T1)∩Fix(T2). Hence, Fix(T1)∩Fix(T2)=Fix(T1∘T2).
Step 2: We show that T1∘T2 is a quasi-nonexpansive mapping on K. Let x∈K and p∈Fix(T1∘T2). Then, p∈Fix(T1)∩Fix(T2) by Step 1. We observe that, since T1 is quasi-nonexpansive and T2 is (firmly) nonexpansive with T2p=p,
\[ \|T_{1}(T_{2}x)-p\|\le \|T_{2}x-p\|=\|T_{2}x-T_{2}p\|\le\|x-p\|. \]
This completes the proof. â–¡
We now prove the following theorem.
Theorem 2
Let H be a real Hilbert space and K be a nonempty closed convex subset of H. Let A: K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator on H into 2H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping and M:K→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u) is nonempty. Let {xn} be a sequence defined as follows:
where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions:
(i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) \(\liminf _{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0\); I−T1∘T2 is demiclosed at the origin; and \(0<\eta <\frac {2\mu }{ L^{2}}\), 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (18) converges strongly to x∗∈Γ, which solves the following variational inequality:
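Neither the display of scheme (18) nor that of the variational inequality (19) is reproduced above. In view of the proof below, the variational inequality presumably reads
\[ \langle \eta Mx^{*}-\gamma f(x^{*}),\; x^{*}-x\rangle\le 0\qquad\text{for all } x\in\Gamma, \]
and, judging from the quantities u_n, v_n, z_n used in the proof, a plausible reading of scheme (18) is
\[ u_{n}=J^{g}_{\lambda_{n}}x_{n},\qquad v_{n}=\theta_{n}u_{n}+(1-\theta_{n})(T_{1}\circ T_{2})u_{n},\qquad x_{n+1}=\alpha_{n}\gamma f(x_{n})+(I-\alpha_{n}\eta M)J^{B}_{\lambda_{n}}(I-\lambda_{n}A)v_{n}. \]
Both displays are sketches and should be checked against the published article.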
Proof
From the choice of η and γ, the operator ηM−γf is strongly monotone, so the variational inequality (19) has a unique solution in Γ. In what follows, we denote by x∗ the unique solution of (19). Without loss of generality, we may assume \( \alpha _{n}\in \Big (0, \min \{1, \frac {1}{\tau }\}\Big).\) Now, we prove that the sequence {xn} is bounded. Let p∈Γ. Then, g(p)≤g(u) for all u∈K. This implies that
and hence \({J_{\lambda _{n}}^{g}}p=p\) for all n≥1, where \({J_{\lambda _{n}}^{g}}\) is the Moreau-Yosida resolvent of g in K. Hence,
By using (18) and Lemma 10, we have
Hence,
Since θn∈(0,1), we obtain
For each n≥0, we put \(z_{n} : = J_{\lambda _{n}}^{B}(I-\lambda _{n} A)v_{n}.\) Then, from Lemma 7, we have
Therefore,
Hence, by using Lemma 5 and inequalities (22) and (18), we have
By induction, we can conclude that
Hence, {xn} is bounded. By using Lemma 5 and inequality (20), we obtain
Hence,
Since {xn} is bounded, there exists a constant C>0 such that
Next, we prove that xn→x∗. To see this, let us consider two possible cases.
Case 1. Assume that the sequence {∥xn−x∗∥} is monotonically decreasing. Then, {∥xn−x∗∥} must be convergent. Clearly, we have
It then implies from (23) that
Since \({\lim }_{n\rightarrow \infty }\inf (1-\theta _{n}) \theta _{n}> 0,\) we have
Observe that
Therefore, from (26), we get that
By using Lemma 9 and since g(x∗)≤g(un), we get
From (18) and Lemma 5, we obtain
Since {xn} is bounded, there exists a constant C1>0 such that
It then follows from (24) and αn→0 that
On the other hand, using Lemma 7, we have
Therefore, we have
where C2 is a positive constant. Since αn→0 as n→∞, and using inequality (24) and the boundedness of {xn}, we obtain
Since \(J_{\lambda _{n}}^{B}\) is 1-inverse strongly monotone, using (18) we have
So, we obtain
and thus
Since αn→0 as n→∞, and using inequalities (24) and (31), we obtain
Next, we prove that \(\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{n}\rangle \leq 0.\) Since H is reflexive and {xn} is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of {xn} such that \(x_{n_{k}}\) converges weakly to x∗∗ in K and
From (26) and the demiclosedness of I−T1∘T2, we obtain x∗∗∈Fix(T1∘T2). Using Lemma 10, we have x∗∗∈Fix(T1)∩Fix(T2). Using (18) and Lemma 8, we arrive at
Hence,
Since \(J^{g}_{\lambda }\) is single-valued and nonexpansive, using (33) and Lemma 1, we get \(x^{**} \in Fix (J^{g}_{\lambda })=\text {argmin}_{u\in K}\, g(u).\) Let us show that x∗∗∈(A+B)−1(0). Since A is α-inverse strongly monotone, A is a Lipschitz continuous monotone mapping. It follows from Lemma 3 that A+B is maximal monotone. Let (v,u)∈G(A+B), i.e., u−Av∈B(v). Since \(z_{n_{k}} = J_{\lambda _{n_{k}}}^{B}(v_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}}),\) we have \( v_{n_{k}}- \lambda _{n_{k}} A v_{n_{k}}\in (I +\lambda _{n_{k}} B)z_{n_{k}},\) i.e., \(\frac {1}{\lambda _{n_{k}}}(v_{n_{k}}-z_{n_{k}}-\lambda _{n_{k}} A v_{n_{k}})\in B(z_{n_{k}}).\) By the maximal monotonicity of A+B, we have
and so
Since ∥vn−zn∥→0, ∥Avn−Azn∥→0, and \(z_{n_{k}} \rightharpoonup x^{**},\) we get
and hence x∗∗∈(A+B)−1(0). Therefore, x∗∗∈Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminy∈Kg(y). On the other hand, since x∗ solves (19), we have
Finally, we show that xn→x∗. From (18) and properties of metric projection, we get
Hence, by Lemma 4, we conclude that the sequence {xn} converges strongly to x∗∈Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminy∈Kg(y).
Case 2. Assume that the sequence {∥xn−x∗∥} is not monotonically decreasing. Set Bn=∥xn−x∗∥ and define a mapping \(\pi : \mathbb {N}\to \mathbb {N} \) for all n≥n0 (for some n0 large enough) by \( \pi (n)= \max \lbrace k\in \mathbb {N} : k\leq n,\,\,\, B_{k}\leq B_{k+1}\rbrace.\) Obviously, {π(n)} is a non-decreasing sequence such that π(n)→∞ as n→∞ and Bπ(n)≤Bπ(n)+1 for n≥n0. From (23), we have
Hence,
By a similar argument as in Case 1, we can show that {xπ(n)} is bounded in H, \({\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-x_{\pi (n)} \Vert =0\), \({\lim }_{n\rightarrow \infty }\Vert u_{\pi (n)}-v_{\pi (n)} \Vert =0\), \({\lim }_{n\rightarrow \infty }\Vert v_{\pi (n)}-z_{\pi (n)} \Vert =0 \), and \(\limsup _{n\to +\infty }\langle \eta Mx^{*}-\gamma f(x^{*}), x^{*}-x_{\pi (n)}\rangle \leq 0.\) We have, for all n≥n0,
which implies that
Then, we have
Therefore,
Thus, by Lemma 6, we conclude that
Hence, \({\lim }_{n\rightarrow \infty }B_{n}=0,\) that is {xn} converges strongly to x∗. This completes the proof. □
We now apply Theorem 2 in the case where T1 is a nonexpansive mapping. In this case, the demiclosedness assumption (I−T1∘T2 is demiclosed at the origin) is not necessary.
Theorem 3
Let A: K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator on H into 2H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping and M:K→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u) is nonempty. Let {xn} be a sequence defined as follows:
where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) \(\liminf _{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0\) and \(0<\eta <\frac {2\mu }{ L^{2}}\), 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (34) converges strongly to x∗∈Γ, which solves the variational inequality:
Proof
Since T1 and T2 are nonexpansive, the composition T1∘T2 is nonexpansive; hence, by Lemma 1, I−T1∘T2 is demiclosed at the origin, and the proof follows from Theorem 2. □
Now, we consider the following quadratic optimization problem:
where M:K→H is a strongly positive bounded linear operator, Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u), and h is a potential function for γf (i.e., \(h^{\prime }(x) = \gamma f(x)\) on K).
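The display of problem (36) is not reproduced above. Quadratic optimization problems of this type are usually written in the form
\[ \min_{x\in\Gamma}\ \frac{\eta}{2}\langle Mx,\,x\rangle-h(x), \]
which is presumably the problem intended here; this should be checked against the published display.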
Hence, one has the following result.
Theorem 4
Let A: K→H be an α-inverse strongly monotone operator and g:K→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let T1:K→K be a quasi-nonexpansive mapping and T2:K→K be a firmly nonexpansive mapping. Let B be a maximal monotone operator on H into 2H such that the domain of B is included in K, let f:K→K be a b-Lipschitzian mapping and M:K→H be a strongly positive bounded linear operator with coefficient μ>0 such that Γ:=Fix(T1)∩Fix(T2)∩(A+B)−1(0)∩argminu∈Kg(u) is nonempty. Let {xn} be a sequence defined as follows:
where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) \(\liminf _{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0\), I−T1∘T2 is demiclosed at the origin, and \(0<\eta <\frac {2\mu }{ \Vert M\Vert ^{2}}\), 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { \Vert M\Vert ^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (37) converges strongly to the unique solution of problem (36).
Proof
We note that a strongly positive bounded linear operator M is ∥M∥-Lipschitzian and μ-strongly monotone; the proof then follows from Theorem 2. □
Application to some nonlinear problems
In this section, we apply our main results to finding a common solution of a composite convex minimization problem, a convex optimization problem, and a fixed point problem involving composed operators.
Problem 1
Let H be a real Hilbert space. We consider the minimization of a composite objective function of the type \(\min _{x\in H}\bigl \{\Psi (x)+\Phi (x)\bigr \}\), which we refer to as problem (38),
where \(\Psi : H \to \mathbb {R} \cup \{+\infty \}\) is a proper, convex, and lower semi-continuous functional and \(\Phi : H \to \mathbb {R}\) is a convex functional.
Many optimization problems from image processing [7], statistical regression, machine learning (see, e.g., [36] and the references contained therein), etc., can be adapted into the form of (38). Observe that Problem 1 is equivalent to finding x∗∈H such that 0∈∂Ψ(x∗)+∇Φ(x∗).
It is well known that ∂Ψ is maximal monotone (see, e.g., Minty [37]).
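To make the connection with (38) concrete, the following minimal numerical sketch implements the classical forward-backward (proximal-gradient) step underlying Problem 1 on a toy ℓ1-regularized least-squares instance. It is not the full scheme (40) of Theorem 5; the instance Φ(x)=½∥Dx−b∥², Ψ(x)=μ∥x∥1 and all names (soft_threshold, forward_backward, D, b, mu) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch: the classical forward-backward (proximal-gradient) step
#     x_{n+1} = prox_{lam*Psi}( x_n - lam * grad Phi(x_n) )
# for problem (38), on the toy instance Phi(x) = 0.5*||Dx - b||^2,
# Psi(x) = mu*||x||_1. This is NOT the full scheme (40) of Theorem 5.

def soft_threshold(x, t):
    """Proximal operator of u -> t*||u||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(D, b, mu, lam, n_iters=500):
    """Forward-backward iteration for min_x 0.5*||Dx-b||^2 + mu*||x||_1."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ x - b)                      # forward step on the smooth part
        x = soft_threshold(x - lam * grad, lam * mu)  # backward (resolvent) step on the nonsmooth part
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)
    # grad Phi is ||D^T D||-Lipschitz, hence (1/||D^T D||)-inverse strongly monotone;
    # any step size in (0, 2/||D^T D||) is admissible.
    lam = 1.0 / np.linalg.norm(D.T @ D, 2)
    x_star = forward_backward(D, b, mu=0.1, lam=lam)
    obj = 0.5 * np.linalg.norm(D @ x_star - b) ** 2 + 0.1 * np.abs(x_star).sum()
    print("objective value:", obj)
```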
Lemma 11
(Baillon and Haddad [38]) Let H be a real Hilbert space, Φ a continuously Fréchet differentiable, convex functional on H and ∇Φ the gradient of Φ. If ∇Φ is \(\frac {1}{\alpha }\)-Lipschitz continuous, then ∇Φ is α-inverse strongly monotone.
Hence, from Theorem 2, we have the following result.
Theorem 5
Let H be a real Hilbert space and g:H→(−∞, +∞] be a proper, lower semi-continuous, and convex function. Let \(\Phi : H \to \mathbb {R} \) be a continuously Fréchet differentiable, convex functional on H whose gradient ∇Φ is \(\frac {1}{\alpha }\)-Lipschitz continuous. Let \( \Psi : H \to \mathbb {R} \cup \{+\infty \}\) be a proper, convex, and lower semi-continuous functional and f:H→H be a b-Lipschitzian mapping. Let T1:H→H be a quasi-nonexpansive mapping, let T2:H→H be a firmly nonexpansive mapping, and let M:H→H be a μ-strongly monotone and L-Lipschitzian operator such that Γ:=Fix(T1)∩Fix(T2)∩(∂Ψ+∇Φ)−1(0)∩argminu∈Hg(u) is nonempty. Let {xn} be a sequence defined as follows:
where {λn},{θn}, and {αn} are sequences in (0,1) satisfying the following conditions: (i) \({\lim }_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \), and λn∈(λ, d)⊂(0, min{1, 2α}); (ii) \(\liminf _{n\rightarrow \infty }(1-\theta _{n})\theta _{n}> 0\), I−T1∘T2 is demiclosed at the origin, and \(0<\eta <\frac {2\mu }{ L^{2}}\), 0<γb<τ, where \(\tau =\eta \Big (\mu -\frac { L^{2} \eta }{2}\Big).\) Then, the sequence {xn} generated by (40) converges strongly to a point x∗∈argminu∈Hg(u) which is a minimizer of Ψ(x)+Φ(x) on H and is also a common fixed point of T1 and T2.
Proof
We set B=∂Ψ, A=∇Φ, and K=H in Theorem 2; by Lemma 11, ∇Φ is α-inverse strongly monotone. The proof then follows from Theorem 2. □
Remark 1
Many problems already studied in the literature can be considered as special cases of the results of this paper; see, for example, [1, 3, 4, 14, 16, 17, 23, 26, 28, 39] and the references therein.
Availability of data and materials
Not applicable.
References
Rockafellar, R. T.: Maximal monotone operators and proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976).
Boţ, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).
Passty, G. B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383–390 (1979).
Lions, P. L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979).
Chen, G. H. -G., Rockafellar, R. T.: Convergence rates in forward-backward splitting. SIAM J. Optim. 7(2), 421–444 (1997).
López, G., Martín-Márquez, V., Wang, F., Xu, H. K.: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236, 1–25 (2012). doi:10.1155/2012/109236.
Bredies, K.: A forward-backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Probl. 25(1), 1–20 (2009).
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000).
Dadashi, V., Postolache, M.: Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math., 1–11 (2019). https://doi.org/10.1007/s40065-018-0236-2.
Adly, S.: Perturbed algorithms and sensitivity analysis for a general class of variational inclusions. J. Math. Anal. Appl. 201(3), 609–630 (1996).
Boţ, R. I., Csetnek, E. R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM J. Optim. 23, 2011–2036 (2013).
Shimoji, K., Takahashi, W.: Strong convergence theorems of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Amer. Math. Soc. 125, 3641–3645 (1997).
Dotson, Jr., W. G.: Fixed points of quasi-nonexpansive mappings. J. Austral. Math. Soc. 13, 167–170 (1972).
Xu, H. K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003).
Moudafi, A.: Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 241, 46–55 (2000).
Marino, G., Xu, H. K.: A general iterative method for nonexpansive mappings in Hibert spaces. J. Math. Anal. Appl. 318, 43–52 (2006).
Marino, G., Xu, H. K.: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336–346 (2007).
Sow, T. M. M.: A modified generalized viscosity explicit methods for quasi-nonexpansive mappings in Banach spaces. Funct. Anal. Approx. Comput. 11(2), 37–49 (2019).
Yao, Y., Zhou, H., Liou, Y. C.: Strong convergence of a modified Krasnoselskii-Mann iterative algorithm for nonexpansive mappings. J. Appl. Math. Comput. 29, 383–389 (2009).
Yuan, H.: On solutions of inclusion problems and fixed point problems. Fixed Point Theory Appl. 2013, Article 11 (2013).
Gunduz, B., Akbulut, S.: Common fixed points of a finite family of I-asymptotically nonexpansive mappings by S-iteration process in Banach spaces. Thai J. Math. 15(3), 673–687 (2017).
Halpern, B.: Fixed points of nonexpanding maps. Bull. Amer. Math. Soc. 73, 957–961 (1967).
Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4, 154–158 (1970).
Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991).
Solodov, M. V., Svaiter, B. F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. Ser. A 87, 189–202 (2000).
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2003).
Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization. 37, 239–252 (1996).
Reich, S.: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 183, 118–120 (1994).
Browder, F. E.: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 100, 201–225 (1967). https://doi.org/10.1007/BF01109805.
Chidume, C. E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer-Verlag, London (2009). ISBN 978-1-84882-189-7.
Xu, H. K.: Iterative algorithms for nonlinear operators. J. London Math. Soc. 66(2), 240–256 (2002).
Wang, S.: A general iterative method for an infinite family of strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Lett. 24, 901–907 (2011).
Maingé, P. E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008).
Miyadera, I.: Nonlinear semigroups. Translations of Mathematical Monographs, American Mathematical Society, Providence (1992).
Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures, 2nd edn. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel (2008).
Wang, Y., Xu, H. K.: Strong convergence for the proximal-gradient method. J. Nonlinear Convex Anal. 15(3), 581–593 (2014).
Minty, G. J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29, 341–346 (1962).
Baillon, J. B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977).
Khatibzadeh, H., Mohebbi, V.: On the iterations of a sequence of strongly quasi-nonexpansive mappings with applications. Numer. Funct. Anal. Optim. 41(3), 231–256 (2020).
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Contributions
The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares that there are no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sow, T. General-type proximal point algorithm for solving inclusion and fixed point problems with composite operators. J Egypt Math Soc 28, 20 (2020). https://doi.org/10.1186/s42787-020-00080-w
DOI: https://doi.org/10.1186/s42787-020-00080-w
Keywords
- General-type proximal algorithm
- Convex minimization problem
- Monotone inclusion problem
- Composite operators