
Three-point iterative algorithm in the absence of the derivative for solving nonlinear equations and their basins of attraction

Abstract

In this paper, we propose and analyze a new higher-order iterative algorithm for solving the nonlinear equation \(g(x)=0\), \(g:{\mathbb {R}}\longrightarrow {\mathbb {R}}\). The algorithm is derivative-free, since the first derivative is replaced by a suitable difference approximation, and we study its basins of attraction when it is used to find complex roots of complex functions \(g:{\mathbb {C}}\longrightarrow {\mathbb {C}}\). To show the effectiveness of the proposed algorithm in the real and complex domains, numerical results for the considered examples are given and illustrated graphically. The basins of attraction of existing methods and of our algorithm are presented and compared to clarify their performance. In the experiments the proposed algorithm satisfied the stopping condition \(|x_{m}-\alpha |<1.0 \times 10^{-15}\) within at most three iterations, so it can be applied to solve many types of nonlinear equations efficiently.

Introduction

Nonlinear problems in the physical sciences are difficult to discuss for various reasons. To begin with, what are nonlinear problems? Practically every problem in theoretical physics is described by nonlinear mathematical relations, with the possible exception of quantum theory, and even there it is debatable whether the final theory will be linear or nonlinear. From this point of view, the largest part of theoretical physics leads to nonlinear problems. Solving nonlinear equations, which arise in many branches of science and engineering, is therefore one of the most important problems in numerical analysis. Newton’s method is the best-known and most widely used method for solving nonlinear equations. Multipoint iterative methods overcome the theoretical limit of one-point methods regarding convergence order and computational efficiency, and they have become a powerful tool for finding roots of nonlinear equations, boundary value problems, systems of nonlinear equations, etc. The maximum attainable efficiency of multipoint methods without memory is closely related to the hypothesis of Kung and Traub [1], who conjectured that the convergence order of any multipoint method without memory using n function evaluations is not larger than \(2^{n-1}\). A number of modifications of Newton’s method with improved rates of convergence have been reported by previous researchers, and many papers have been written about iterative methods for solving nonlinear equations; for details, see [2,3,4,5]. In [6], three new root-finding algorithms for solving nonlinear equations in one variable were proposed and analyzed with the help of the variational iteration technique. Variants of the Frontini–Sormani method and other higher-order methods for finding simple and multiple roots of nonlinear equations have also been proposed; in particular, an optimal fourth-order method and a family of sixth-order methods for finding a simple root have been constructed (see, for instance, [7, 8]).

The basin of attraction is a way to visualize how a method behaves as a function of the different starting points. In this work, we discuss the possibility of approximating the derivative by suitable difference approximations. It is shown that the presented algorithm has eighth-order convergence, and this theory is supported by computational results. It is observed that for several functions the suggested algorithm can produce even better accuracy than other methods. We therefore consider an iterative method for solving nonlinear equations in the real and complex domains, a significant area of research in numerical analysis with interesting applications in several branches of pure and applied science. For finding a simple root \(\alpha\) of a function \(g: {\mathbb {R}}\longrightarrow {\mathbb {R}}\), i.e. \(g(\alpha )=0\) with \(g'(\alpha )\ne 0\), Newton’s method uses the iteration

$$\begin{aligned} x_{m+1}=x_{m}-\frac{g(x_{m})}{g'(x_{m})}. \end{aligned}$$

Newton’s method is the most popular and simplest such algorithm, but it incorporates the derivative of the function. In contrast, Steffensen’s method [9, 10]

$$\begin{aligned} x_{m+1} = x_{m}- \big (g(x_{m})\big )^{2}\big /\big (g (x_{m}+ g (x_{m}))-g (x_{m})\big ),\quad m = 0,1,2,3, \ldots \end{aligned}$$

is a variant of Newton’s method that does not use the derivative of the function. In this method, the derivative is approximated numerically by the difference quotient \(\big (g(x_{m}+g(x_{m}))-g(x_{m})\big )/g(x_{m})\). Steffensen’s method has the same order of convergence as Newton’s method, being based on an approximation of the first derivative. The motivation behind this work is to develop a new eighth-order derivative-free algorithm. This work is organized as follows. In the “Preliminaries” section, the basic concepts used in the work are presented. The “Construction of presented iterative method and analysis of convergence” section studies the construction of the proposed method and analyses its convergence order. The “Results and discussion” section presents results and discussion in the real and complex domains: in the “Numerical problems in real domain” section, we consider five numerical examples to demonstrate the performance of the proposed algorithm, and in the “Graphical comparison for the basins of attraction” section the methods are compared graphically by means of their basins of attraction. In the “Some real-life applications” section, four application problems are solved. Finally, the “Conclusion and future work” section concludes the paper.
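As a point of reference for the derivative-free constructions that follow, a minimal Python sketch of the classical Steffensen iteration described above is given below; the function name, tolerance and iteration cap are illustrative choices rather than details taken from [9, 10].

import math

def steffensen(g, x0, tol=1e-15, max_iter=100):
    # Newton's method with g'(x) replaced by the forward-difference
    # quotient (g(x + g(x)) - g(x)) / g(x).
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        denom = g(x + gx) - gx
        if denom == 0.0:
            break
        x_new = x - gx * gx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the simple root of x - cos(x) = 0 near x0 = 1
print(steffensen(lambda x: x - math.cos(x), 1.0))   # ~0.7390851332151607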

Preliminaries

The equation g(x)=0, x \(\in\) \({\mathbb {R}},\) is called a nonlinear equation if the function g(x) is an algebraic function of degree other than one (for example, a polynomial) or a transcendental function of x, and it does not involve derivatives or integrals of the unknown. A value of x that satisfies the equation g(x)=0 is called a root or a zero of g(x). The following definitions are needed for the subsequent convergence analysis.

Definition 1

[11] Suppose that \(g: [a,b]\rightarrow {\mathbb {R}}\) and that the following conditions hold:

(i) \(g(a)g(b)<0\),

(ii) \(g\in C^{2}[a,b]\) and \(g'(x)g''(x)\ne 0,\quad x\in [a,b]\).

Then the sequence \(\{x_m\}\) defined by Newton’s method, starting from an initial estimate \(x_0\in [a,b]\), converges to the exact solution \(\alpha\) of \(g(x)=0\) in [a, b]. Additionally, the following estimate

$$\begin{aligned} |x_m - \alpha |\le \frac{C_1}{2C_2}|x_m -x_{m-1}|^{2},\quad m\ge 1, \end{aligned}$$

holds, where \(C_{1}= \max _{x\in [a,b]}|g''(x)|\) and \(C_{2}= \min _{x\in [a,b]}|g'(x)|\).

Definition 2

[12] Let g(x) be a real function with a root \(\alpha\), and let \(\{x_{m}\}\) be a sequence of real numbers that converges to \(\alpha\). The order of convergence p is given by

$$\begin{aligned} \lim _{m\longrightarrow \infty }\frac{x_{m+1}-\alpha }{(x_{m}-\alpha )^p} = \xi \ne 0, \end{aligned}$$

where \(\xi\) is a constant called the asymptotic error constant and \(p\in {\mathbb {R}}^{+}\).

Definition 3

[13] Suppose that \(e_{m}=x_{m}-\alpha\) is the error at the \(m\)th iteration; then the error equation is

$$\begin{aligned} e_{m+1}=\zeta e_{m}^p+O(e_{m}^{p+1}). \end{aligned}$$

If this error equation holds, then p is the order of convergence of the iterative method.

Definition 4

[13] Let \(x_{m-2}, x_{m-1}, x_{m}\) and \(x_{m+1}\) be four successive iterations close to \(\alpha\). The computational order of convergence may be approximated by

$$\begin{aligned} COC\approx \frac{\ln \big |\left( \frac{ x_{m+1}-x_{m} }{x_{m}-x_{m-1}}\right) \big |}{\ln \big |\left( \frac{x_{m}-x_{m-1}}{x_{m-1}-x_{m-2}}\right) \big |}. \end{aligned}$$
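For illustration, the COC of Definition 4 can be estimated directly from four successive iterates; the following minimal Python sketch (the helper name coc and the sample Newton iterates for \(x^2-2\) are only illustrative) computes it.

import math

def coc(x0, x1, x2, x3):
    # Computational order of convergence (Definition 4) from the four
    # successive iterates x_{m-2}, x_{m-1}, x_m, x_{m+1}.
    num = math.log(abs((x3 - x2) / (x2 - x1)))
    den = math.log(abs((x2 - x1) / (x1 - x0)))
    return num / den

# Newton iterates for x^2 - 2 starting at x_0 = 1 give a COC close to 2:
print(coc(1.0, 1.5, 1.4166666667, 1.4142156863))    # ~1.97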

Definition 5

[4] Let \(\alpha\) be a number and \(\alpha _c\) an approximation of it. We consider two different ways to measure the error of such an approximation:

Absolute Error = \(|\alpha _c-\alpha |\)    and       Relative Error = \(|\alpha _c-\alpha ||\alpha |^{-1}\).

Also in this work we will discuss some specific problems using the basin of attraction as a standard for comparison.

We shall also need some definitions from [14]. Let \(R: {\mathbb {C}}\longrightarrow {\mathbb {C}}\) be a rational map on the Riemann sphere.

Definition 6

Let \(z\in {\mathbb {C}}\); then its orbit is defined as \(\text {orb}(z)=\{ z,R(z),R^{2}(z),\cdots ,R^{m}(z),\cdots \}.\)

Definition 7

A point \(z_0\) is a fixed point of the rational map R if \(R(z_0)=z_0.\)

Definition 8

A point \(z_0\) is a periodic point of period m if \(R^{m}(z_0)=z_0\), where m is the smallest such integer.

Definition 9

A point \(z_0\) is called attracting if \(|R'(z_0)|< 1\), repelling if \(|R'(z_0)|> 1\), and neutral if \(|R'(z_0)|= 1.\) If the derivative is also zero then the point is called super-attracting.

Construction of presented iterative method and analysis of convergence

For solving nonlinear equations, we derive the derivative-free iterative technique by using the following approximation of the first derivative \(g'(x_{m})\):

$$\begin{aligned} g'(x_{m})\approx \big (g(x_{m}+\theta g(x_{m}))-g(x_{m}-\theta g(x_{m}))\big )\big /2 \theta g(x_{m}), \end{aligned}$$
(1)

where \(\theta \in {\mathbb {R}}\) and \(\theta \ne 0\). Let us consider the method in [15]:

$$\begin{aligned} y_{m}& = x_{m}-\frac{g(x_{m})}{g'(x_{m})},\nonumber \\ z_{m}& = x_{m}-\bigg (1+\frac{g(y_{m})}{g(x_{m})-2g(y_{m})}\bigg )\frac{g(x_{m})}{g'(x_{m})},\nonumber \\ x_{m+1}& = z_{m}-\bigg (1+\frac{2g(y_{m})}{g(x_{m})-2g(y_{m})}\bigg )\frac{g(z_{m})}{g'(x_{m})}. \end{aligned}$$
(2)

By using Eq. (1), we obtain the following new eighth-order derivative-free algorithm for solving a nonlinear equation.

Eighth-order derivative-free iteration algorithm (8th BM): Substituting the approximation (1) for the derivative \(g'(x)\) in Eq. (2), we obtain the proposed derivative-free algorithm as follows:

8th BM: Given an initial approximation \(x_{0}\) (close to \(\alpha\), the root of \(g(x)=0\)), we find the approximate solution by

$$\begin{aligned} y_{m}& = x_{m}-\frac{2 \theta g^{2}(x_{m})}{g(x_{m}+\theta g(x_{m})) -g(x_{m}-\theta g(x_{m}))},\nonumber \\ z_{m}& = x_{m}-\bigg (\frac{g^{2}(x_{m})-g(x_{m})g(y_{m})+g^{2}(y_{m})}{g^{2}(x_{m}) -2g(x_{m})g(y_{m})+g^{2}(y_{m})}\bigg )\frac{2 \theta g^{2}(x_{m})}{g(x_{m}+\theta g(x_{m}))-g(x_{m}-\theta g(x_{m}))},\nonumber \\ x_{m+1}& = z_{m}-\frac{2 \theta g^{2}(z_{m})}{g(z_{m}+\theta g(z_{m})) -g(z_{m}-\theta g(z_{m}))}. \end{aligned}$$
(3)

Steps for calculating root using 8th BM

Step 1: Define the function g(x).

Step 2: Choose an initial guess \(x_0\).

Step 3: By using the formula (3), we calculate the next approximation of the root \(x_{i+1}, (i=0,1,2,\cdots ).\)

Step 4: Choose a prescribed accuracy \(\epsilon\) and repeat Step 3 until the stopping criterion \(|x_{i}-\alpha |<\epsilon\) is satisfied. In order to prove the convergence of 8th BM, we establish the following theorem with the help of Maple software.
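Before stating the theorem, the scheme (1)–(3) is summarized in a minimal Python sketch; the function name bm8, the default value of \(\theta\), and the stopping rule on successive iterates (used because the exact root \(\alpha\) is normally unknown) are illustrative choices and not part of the original Maple implementation.

import math

def bm8(g, x0, theta=1.0, tol=1e-15, max_iter=50):
    # Derivative-free eighth-order scheme of Eq. (3); g'(x) is replaced by
    # the symmetric difference quotient of Eq. (1).
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:
            return x
        d = g(x + theta * gx) - g(x - theta * gx)
        if d == 0.0:
            break
        step = 2.0 * theta * gx * gx / d          # plays the role of g(x)/g'(x)
        y = x - step
        gy = g(y)
        den = gx * gx - 2.0 * gx * gy + gy * gy
        if den == 0.0:
            break
        z = x - ((gx * gx - gx * gy + gy * gy) / den) * step
        gz = g(z)
        if gz == 0.0:
            return z
        dz = g(z + theta * gz) - g(z - theta * gz)
        if dz == 0.0:
            break
        x_new = z - 2.0 * theta * gz * gz / dz
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the simple root of cos(x) - x = 0 near x0 = 1
print(bm8(lambda x: math.cos(x) - x, 1.0))        # ~0.7390851332151607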

Theorem

Suppose that \(g:(a,b)\longrightarrow {\mathbb {R}}\) has sufficiently many continuous derivatives on the interval (a, b). If \(\alpha \in (a,b)\) is a simple root of g(x) and \(x_{0}\) is sufficiently close to \(\alpha\), then 8th BM satisfies the following error equation:

$$\begin{aligned} e_{m+1}=(\theta ^4 F^4 c^2_{3}+2 \theta ^2 F^2 c^2_{3}-4 \theta ^2 F^2 c^2_{2} c_{3}+c^2_{3}-4 c_{3} c^2_{2}+4 c^4_{2}) c^3_{2} e^8_{m}+O (e^9_{m}). \end{aligned}$$
(4)

Proof

Let the error at step m be denoted by \(e_{m} = x_{m} -\alpha\), and let \(F= g'(\alpha )\) and \(c_{k}=\frac{1}{k!}\frac{g^{(k)}(\alpha )}{g'(\alpha )},\quad k=2,3,\ldots\). Expanding \(g(x_{m})\) around the root \(\alpha\) and expressing it in powers of the error \(e_{m}\), we obtain

$$\begin{aligned} g(x_{m})& = g(\alpha )+(x_{m}-\alpha )g'(\alpha )+\frac{(x_{m}-\alpha )^2}{2!}g^{(2)}(\alpha )+\frac{(x_{m}-\alpha )^3}{3!} g^{(3)}(\alpha )\nonumber \\&\quad+\,\frac{(x_{m}-\alpha )^4}{4!}g^{(4)}(\alpha )+ \frac{(x_{m}-\alpha )^5}{5!}g^{(5)}(\alpha )+\frac{(x_{m} -\alpha )^6}{6!}g^{(6)}(\alpha )\nonumber \\&\quad+\,\frac{(x_{m}-\alpha )^7}{7!}g^{(7)}(\alpha )+\frac{(x_{m}-\alpha )^8}{8!}g^{(8)}(\alpha )+\ldots \nonumber \\& = F\big (e_{m}+c_{2}e^{2}_{m}+c_{3}e^{3}_{m}+ c_{4}e^{4}_{m}+c_{5}e^{5}_{m}+c_{6} e^{6}_{m}+c_{7}e^{7}_{m}+c_{8}e^{8}_{m}+\ldots \big ). \end{aligned}$$
(5)

Squaring \(g(x_{m})\) from Eq. (5) and multiplying by \(2\theta\), we get

$$\begin{aligned} 2 \theta g^{2}(x_{m})& = 2 \theta F^2 e^2_{m}+4 \theta F^2 c_{2} e^3_{m}+2 \theta F^2 (c^2_{2}+2 c_{3}) e^4_{m}+4 \theta F^2 (c_{2} c_{3}+c_{4}) e^5_{m}\nonumber \\&\quad+\,2 \theta F^2 (2 c_{2} c_{4}+2 c_{5}+c^2_{3}) e^6_{m}+... \end{aligned}$$
(6)

Expanding \(g(x_{m}+\theta g(x_{m}))\) and \(g(x_{m}-\theta g(x_{m}))\) around the root \(\alpha\) and expressing them in powers of the error \(e_{m}\), we get

$$\begin{aligned} g(x_{m}+\theta g(x_{m}))& = F (1+\theta F) e_{m}+F c_{2} (3 \theta F+1+\theta ^2 F^2) e^2_{m}+F (2 \theta F c^2_{2}\nonumber \\&\quad+\,2 \theta ^2 F^2 c^2_{2}+c_{3}+4 \theta F c_{3}+3 c_{3} \theta ^2 F^2+\theta ^3 F^3 c_{3}) e^3_{m}\nonumber \\ &\quad+\,F (5 \theta F c_{2} c_{3}+8 \theta ^2 F^2 c_{2} c_{3}+3 \theta ^3 F^3 c_{2} c_{3}+c_{4}\nonumber \\&\quad+\,5 \theta F c_{4}+6 c_{4} \theta ^2 F^2+4 c_{4} \theta ^3 F^3+c_{4} \theta ^4 F^4+\theta ^2 F^2 c^3_{2}) e^4_{m}+..., \end{aligned}$$
(7)
$$\begin{aligned} g(x_{m}-\theta g(x_{m}))& = -F (-1+\theta F) e_{m}+F c_{2} (-3 \theta F+1+\theta ^2 F^2) e^2_{m}-F (2 \theta F c^2_{2}\nonumber \\&\quad-\,2 \theta ^2 F^2 c^2_{2}-c_{3}+4 \theta F c_{3}-3 c_{3} \theta ^2 F^2+\theta ^3 F^3 c_{3}) e^3_{m}\nonumber \\&\quad+\,F (-5 \theta F c_{2} c_{3}+8 \theta ^2 F^2 c_{2} c_{3}-3 \theta ^3 F^3 c_{2} c_{3}+c_{4}\nonumber \\&\quad-\,5 \theta F c_{4}+6 c_{4} \theta ^2 F^2-4 c_{4} \theta ^3 F^3+c_{4} \theta ^4 F^4+\theta ^2 F^2 c^3_{2}) e^4_{m}-... \end{aligned}$$
(8)

Using Eqs. (7) and (8), we have

$$\begin{aligned} g(x_{m}+\theta g(x_{m})) -g(x_{m}-\theta g(x_{m}))&=2 \theta F^2 e_{m}+6 \theta F^2 c_{2} e^2_{m}+(4 c^2_{2} \theta F^2+8 \theta F^2 c_{3}+2 c_{3} \theta ^3 F^4) e^3_{m}\nonumber \\&\quad +(10 c_{3} \theta F^2 c_{2}+6 c_{3} \theta ^3 F^4 c_{2}+10 \theta F^2 c_{4}+8 c_{4} \theta ^3 F^4) e^4_{m}+... \end{aligned}$$
(9)

Combining Eqs. (6) and (9), we get

$$\begin{aligned} \frac{2 \theta g^{2}(x_{m})}{g(x_{m}+\theta g(x_{m})) -g(x_{m}-\theta g(x_{m}))}& = e_{m}-c_{2} e^2_{m}+(2 c^2_{2}-2 c_{3}-c_{3} \theta ^2 F^2) e^3_{m}+(7 c_{2} c_{3}\nonumber \\&\quad+\,\theta ^2 F^2 c_{2} c_{3}-3 c_{4}-4 c_{4} \theta ^2 F^2-4 c^3_{2}) e^4_{m}+... \end{aligned}$$
(10)

By considering these relations and \(y_m\) in Eq. (3), we get

$$\begin{aligned} y_{m}& = \alpha +c_{2} e^2_{m}+(-2 c^2_{2}+2 c_{3}+c_{3} \theta ^2 F^2) e^3_{m}+(-7 c_{2} c_{3}\nonumber \\&\quad-\,\theta ^2 F^2 c_{2} c_{3}+3 c_{4}+4 c_{4} \theta ^2 F^2+4 c^3_{2}) e^4_{m}+... \end{aligned}$$
(11)

Next, we expand \(g(y_m)\) around \(\alpha\) using the result in Eq. (11); accordingly, we get

$$\begin{aligned} g(y_{m})& = F c_{2} e^2_{m}+F (-2 c^2_{2}+2 c_{3}+c_{3} \theta ^2 F^2) e^3_{m}-F (7 c_{2} c_{3}\nonumber \\&\quad+\,\theta ^2 F^2 c_{2} c_{3}-3 c_{4}-4 c_{4} \theta ^2 F^2-5 c^3_{2}) e^4_{m}-... \end{aligned}$$
(12)

By considering these relations and \(z_m\) in Eq. (3), we get

$$\begin{aligned} z_{m}& = \alpha +(2 c^3_{2}-c_{2} c_{3}-\theta ^2 F^2 c_{2} c_{3}) e^4_{m}+(-10 c^4_{2}+14 c_{3} c^2_{2}\nonumber \\&\quad+\,5 \theta ^2 F^2 c^2_{2} c_{3}-2 c^2_{3}-3 \theta ^2 F^2 c^2_{3}-\theta ^4 F^4 c^2_{3}-2 c_{2} c_{4}-4 \theta ^2 F^2 c_{2} c_{4}) e^5_{m}+... \end{aligned}$$
(13)

Expanding \(g(z_{m})\) about \(\alpha\) and using Eq. (13), we obtain

$$\begin{aligned} g(z_{m})& = -F c_{2} (-2 c^2_{2}+c_{3}+c_{3} \theta ^2 F^2) e^4_{m}-F (10 c^4_{2}-14 c_{3} c^2_{2}\nonumber \\&\quad-\,5 \theta ^2 F^2 c^2_{2} c_{3}+2 c^2_{3}+3 \theta ^2 F^2 c^2_{3}+\theta ^4 F^4 c^2_{3}+2 c_{2} c_{4}+4 \theta ^2 F^2 c_{2} c_{4}) e^5_{m}+... \end{aligned}$$
(14)

Combining Eqs. (13) and (14) we get

$$\begin{aligned} \frac{2 \theta g^{2}(z_{m})}{g(z_{m}+\theta g(z_{m})) -g(z_{m}-\theta g(z_{m}))}& = -(-2 c^2_{2}+c_{3}+c_{3} \theta ^2 F^2) c_{2} e^4_{m}+(-10 c^4_{2}+14 c_{3} c^2_{2}+5 \theta ^2 F^2 c^2_{2} c_{3}\nonumber \\&\quad-\,2 c^2_{3}-3 \theta ^2 F^2 c^2_{3}-\theta ^4 F^4 c^2_{3}-2 c_{2} c_{4}-4 \theta ^2 F^2 c_{2} c_{4}) e^5_{m}+... \end{aligned}$$
(15)

By using Eqs. (13) and (15) in the last expression of Eq. (3), we obtain

$$\begin{aligned} x_{m+1}& = \alpha +(\theta ^4 F^4 c^2_{3}+2 \theta ^2 F^2 c^2_{3}-4 \theta ^2 F^2 c^2_{2} c_{3}+c^2_{3}-4 c_{3} c^2_{2}+4 c^4_{2}) c^3_{2} e^8_{m}+O (e^9_{m}). \end{aligned}$$
(16)

From Eq. (16) and \(e_{m+1} = x_{m+1}-\alpha\), we finally have

$$\begin{aligned} e_{m+1}=(\theta ^4 F^4 c^2_{3}+2 \theta ^2 F^2 c^2_{3}-4 \theta ^2 F^2 c^2_{2} c_{3}+c^2_{3}-4 c_{3} c^2_{2}+4 c^4_{2}) c^3_{2} e^8_{m}+O (e^9_{m}). \end{aligned}$$
(17)

The last equation shows that 8th BM has eighth-order convergence. This completes the proof. \(\square\)
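The eighth-order behaviour predicted by Eq. (17), and the COC of Definition 4, can also be checked numerically. The following sketch repeats the steps of Eq. (3) in high precision with mpmath; the test function, the precision and the starting point are illustrative choices.

from mpmath import mp, mpf, cos, log

mp.dps = 600                                   # about 600 significant digits
theta = mpf(1)
g = lambda x: cos(x) - x                       # simple root alpha ~ 0.739085...

xs = [mpf(1)]                                  # x_0 = 1
for _ in range(3):
    x = xs[-1]
    gx = g(x)
    step = 2 * theta * gx**2 / (g(x + theta * gx) - g(x - theta * gx))   # via Eq. (1)
    y = x - step
    gy = g(y)
    z = x - ((gx**2 - gx * gy + gy**2) / (gx**2 - 2 * gx * gy + gy**2)) * step
    gz = g(z)
    xs.append(z - 2 * theta * gz**2 / (g(z + theta * gz) - g(z - theta * gz)))

# COC of Definition 4 computed from x_0, ..., x_3; a value close to 8 is expected
x0, x1, x2, x3 = xs
print(log(abs((x3 - x2) / (x2 - x1))) / log(abs((x2 - x1) / (x1 - x0))))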

Results and discussion

Numerical problems in real domain

In this section, we give the results of some numerical examples to compare our proposed algorithm with the methods in [16], namely the Dehghan Method 2 (DM2), the King Method (KM) and the Proposed Free Derivative Method (PFDM). We use five examples to display the effectiveness of the presented algorithm. All the computations were done using Maple 18 and satisfied the condition \(|x_{m}-\alpha |<1.0 \times 10^{-15}\), with the maximum number of iterations less than or equal to three. Table 1 lists the absolute value of the given nonlinear function \(g_i(x_m),i=1,2,3,4,5, m=3,\) for our proposed algorithm at \(\theta =1,-1,0.5,-0.5\). In addition, it can be seen in Table 1 that the computational order of convergence (COC) coincides with the theoretical results. The results in Table 1 are given in terms of the number of significant digits for each test function at the 3rd iteration; for example, \(1.0 \times 10^{-41}\) means that the absolute value of the given nonlinear function \(g_1(x_3)\) at the 3rd iteration is zero up to 41 decimal places. In Table 2, “Div” indicates that the algorithm does not converge before the maximum allowed number of iterations is reached. From Table 2 one can see that the computational results achieved are not far apart. For \(g_1(x)\) with initial guess \(-1.5\), 8th BM requires 3 iterations for the different values of \(\theta\), DM2 requires 5, KM requires 36, and PFDM requires 4 iterations. For initial guess \(-1.0\), 8th BM requires 3 or 4 iterations, while KM, PFDM and DM2 require 4 iterations. Thus the quickest algorithm to reach the root is ours. For \(g_2(x)\) the method with the fewest iterations is again 8th BM. As far as the numerical results are concerned, for most of the functions we tested the proposed algorithm is competitive with the methods we compare against. The computational results presented in Tables 1 and 2 show that our algorithm is more efficient than the methods proposed in [16]. Figures 1, 2 and 3 show the values of the iterates \((x_{i})\) at various iteration numbers for different values of \(\theta\); these figures show that the proposed algorithm reaches the exact solution in as few as 2 iterations, which indicates that 8th BM is effective for any \(\theta\). Absolute errors at different iteration numbers for various values of \(\theta\) are shown in Figs. 4 and 5; these figures show that 8th BM converges quickly and accurately in a small number of iterations. Consequently, 8th BM can be considered an improvement over existing derivative-free methods for solving nonlinear equations. The following examples are used for numerical verification:

$$\begin{aligned} \left\{ \begin{array}{lllll} g_{1}(x)=-(x-\cos (x)),\quad \quad \alpha =0.7390851332151606\\ g_{2}(x)=\sin (x)-0.5,\quad \quad \alpha =0.5235987755982989\\ g_{3}(x)=1-(x^{2}-\sin ^{2}(x)),\quad \quad \alpha =1.4044916482153412\\ g_{4}(x)=x(x^{2}-1)+3,\quad \quad \quad \quad \quad \alpha =-1.6716998816571610\\ g_{5}(x)=-(0.5+\cos (x)-\tan (x)),\quad \qquad \alpha =0.8570567764718169\\ \end{array}\right. \end{aligned}$$
Table 1 Numerical results for test functions
Table 2 Comparison of different methods for solving test functions
Fig. 1

Iteration values at different iteration numbers for 8th BM at different \(\theta\) (\(\theta =-1, -0.5, 0.5, 1\)) respectively for \(g_1(x).\)

Fig. 2

Iteration values at different iteration numbers for 8th BM at different \(\theta\) (\(\theta =-1, -0.5, 0.5, 1\)) respectively for \(g_2(x).\)

Fig. 3

Iteration values at different iteration numbers for 8th BM at different \(\theta\) (\(\theta =-1, -0.5, 0.5, 1\)) respectively for \(g_3(x).\)

Fig. 4

Absolute errors at different iteration numbers for 8th BM with different \(\theta\) (\(\theta =-1, -0.5, 0.5, 1\)) respectively for \(g_4(x).\)

Fig. 5

Absolute errors at different iteration numbers for 8th BM with different \(\theta\) (\(\theta =-1, -0.5, 0.5, 1\)) respectively for \(g_5(x).\)


Graphical comparison for the basins of attraction

Here we compare some high-order simple-root finders in the complex plane using their basins of attraction. We consider the polynomial \(g(z) =z^{r}-1,\ z \in {\mathbb {C}}\), whose zeros are the roots of unity

$$\begin{aligned} \omega _{k} = \cos \left( \frac{2\pi (k-1)}{r}\right) + i\sin \left( \frac{2\pi (k-1)}{r}\right) ; k = 1, 2,...,r. \end{aligned}$$

The basin of attraction of a root of the function g(z) consists of all starting points \(z_0\) that are attracted to \(\omega _{k}\). We use these basins to compare iterative methods. In the computational examples, we take \(D = [-2, 2] \times [-2, 2]\subset {\mathbb {C}}\), discretized into \(250 \times 250\) points, and we apply our algorithm starting from each \(z_0\) in D. The basin of attraction for complex Newton’s method was first studied by Cayley [22]. The basin of attraction is a way to see how an algorithm behaves as a function of the different starting points, and it provides another way to compare iterative methods. We assign a color to each point \(z_0\in {\mathbb {C}}\) according to the root to which the corresponding iterative algorithm starting from \(z_0\) converges; for details, one may see [17, 18]. The following test functions are considered for the comparison: \(g(z) =z^{r}-1,\ r=2,3,4,5\), among others listed below. We compare the newly proposed algorithm (8th BM) with four different methods: the method of Bhavna Panday and Jai Prakash Jaiswal [13], the Changbum method (CMB) [19], the Sharma method (SB) [20] and the Behzad method (BG) [21]. We choose these nonlinear functions to assess the accuracy of the newly proposed algorithm for different \(\theta\) in finding complex roots of complex functions. The roots of the functions used are listed below, and the computations were carried out using Maple 18. Many scientific calculations in numerous areas of science demand a high level of numerical accuracy. We use the following test functions for the comparison with the other methods:

$$\begin{aligned} \left\{ \begin{array}{lllll} g_{1}(z)=z^2-1,\quad \quad \quad z^{*}=\{\pm 1\}\\ g_{2}(z)=z^3-1,\quad \quad \quad z^{*}=\{-0.5 \pm 0.866025 i, 1\}\\ g_{3}(z)=z^2-z+\frac{1}{z},\quad z^{*}=\{0.877439 \pm 0.744862 i, -0.754878\}\\ g_{4}(z)=z^{4}-1,\quad \quad \quad z^{*}=\{\pm i, \pm 1\}\\ g_{5}(z)=z^{5}-1,\quad \quad \quad z^{*}=\{1,-0.809017 - 0.587785i, 0.309017 \pm 0.951057i ,-0.809017 + 0.587785i\}\\ g_{6}(z)=z^{3}-z,\quad \quad \quad z^{*}=\{0,\pm 1\}\\ g_{7}(z)=z^{2}+2z-1,\quad \quad \quad z^{*}=\{-1.46771+0.226699i,-0.4533980i, 1.467710+0.2266990i\}.\\ \end{array}\right. \end{aligned}$$

The sequence \(\{z_k \}_{k=0}^{\infty }\) is the orbit of the point \(z_0\); if it converges to a root \(z^{*}\), we say that \(z_0\) is attracted to \(z^{*}\). The set of initial points whose sequences converge to \(z^{*}\) is the basin of attraction of \(z^{*}\). Boundaries between basins are generally fractal in nature. 8th BM, which was given above for the real domain, is also used to produce the plots for complex polynomials that visualize the root-finding process. Figures 6 and 7 show the basins of attraction of 8th BM at \(\theta =0.5\), \(\theta =1\) and of the methods in [13, 21], from left to right respectively, for the quadratic and cubic polynomials; the red color marks the roots \(z^{*}\). The convergence of 8th BM is rapid when the initial points are chosen near a root, and the intensity of the colors indicates that the proposed algorithm converges in fewer than 5 iterations. 8th BM is accurate with a small number of iterations and has the largest basins of attraction at \(\theta =0.5\), \(\theta =1\) for \(g_{1}(z)\). Figures 8, 9, 10, 11 and 12 show the basins of attraction of 8th BM and of the other methods in [13, 19, 21]; the presented algorithm is globally convergent with the lowest number of iterations. When the polynomial degree increases from 3 to 7, 8th BM has more difficulty and its number of iterations increases. 8th BM also shows fewer scattered points compared with the other methods.
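For reproducibility, the following Python sketch shows one way such basin pictures can be generated for 8th BM on \(g(z)=z^{3}-1\); the grid size, iteration cap, tolerance and colouring are illustrative choices and do not reproduce the exact settings used for the figures.

import numpy as np
import matplotlib.pyplot as plt

g = lambda z: z**3 - 1
roots = np.array([1, -0.5 + 0.8660254j, -0.5 - 0.8660254j])   # cube roots of unity
theta, max_iter, tol = 0.5, 25, 1e-6

def basin_index(z):
    # Iterate Eq. (3) from z; return the index of the attracting root (-1 if none).
    try:
        for _ in range(max_iter):
            gz = g(z)
            d = g(z + theta * gz) - g(z - theta * gz)
            step = 2 * theta * gz**2 / d
            y = z - step
            gy = g(y)
            w = z - ((gz**2 - gz * gy + gy**2) / (gz**2 - 2 * gz * gy + gy**2)) * step
            gw = g(w)
            z = w - 2 * theta * gw**2 / (g(w + theta * gw) - g(w - theta * gw))
            dist = np.abs(roots - z)
            if dist.min() < tol:
                return int(dist.argmin())
    except (ZeroDivisionError, OverflowError):
        pass
    return -1

side = np.linspace(-2, 2, 250)                 # the 250 x 250 grid over D
img = [[basin_index(complex(a, b)) for a in side] for b in side]
plt.imshow(img, extent=[-2, 2, -2, 2], origin="lower")
plt.title("Basins of attraction of 8th BM for z^3 - 1 (theta = 0.5)")
plt.show()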

Fig. 6
figure 6

Plots of 8th BM for \(\theta =0.5\), \(\theta =1\) and the method in [13], respectively for \(g_1(z).\)

Fig. 7
figure 7

Plots of 8th BM for \(\theta =0.5\), \(\theta =1\) and the method in SB [20] respectively for \(g_2(z).\)

Fig. 8
figure 8

Plots of 8th BM for \(\theta =0.5\), \(\theta =1\) and the method in BG [21] respectively for \(g_3(z).\)

Fig. 9
figure 9

Plots of 8th BM for \(\theta =0.5\), \(\theta =1\) and the method in CMB [19] respectively for \(g_4(z).\)

Fig. 10
figure 10

Plots of 8th BM for \(\theta =0.5\) and \(\theta =1\) respectively for \(g_5(z).\)

Fig. 11
figure 11

Plots of 8th BM for \(\theta =0.5\) and \(\theta =1\) and the method in [13] respectively for \(g_6(z).\)

Fig. 12
figure 12

Plots of 8th BM for \(\theta =0.5\) and \(\theta =1\) respectively for \(g_7(z)\) and the method in [13]

Some real-life applications

In this section we present some applications and compare our results to well-known methods:

Application 1

The depth of embedment x of a sheet-pile wall is governed by the following equation [23]:

$$\begin{aligned} x=\frac{1}{4.62}(x^{3}+2.87x^{2}-10.28). \end{aligned}$$

It can be rewritten as

$$\begin{aligned} g(x)=\frac{1}{4.62}(x^{3}+2.87x^{2}-10.28)-x. \end{aligned}$$

An engineer has estimated the depth to be \(x=2.5\). Here we find the root of the equation \(g(x)=0\) with initial point 2.5 and compare some well-known methods with our proposed algorithm.
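As an illustration (using the bm8 sketch from the algorithm construction section, which is our own illustrative helper rather than the original Maple code), the computation can be set up as follows.

# Sheet-pile wall: root of g(x) = (x^3 + 2.87 x^2 - 10.28)/4.62 - x, x0 = 2.5
g = lambda x: (x**3 + 2.87 * x**2 - 10.28) / 4.62 - x
print(bm8(g, 2.5))    # depth of embedment, close to x = 2.0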

Application 2

The vertical stress \(\eta _z\) produced at a point in an elastic continuum under the edge of a strip footing supporting a uniform pressure p is given by Boussinesq’s formula [23] as:

$$\begin{aligned} \eta _z=\frac{p}{\pi }\big (x+\sin (x)\cos (x)\big ). \end{aligned}$$

A scientist is interested in estimating the value of x at which the vertical stress \(\eta _z\) equals 25 percent of the footing pressure p. The initial estimate is \(x=0.4\). Setting \(\eta _z\) equal to 25 percent of p, the above can be rewritten as:

$$\begin{aligned} g(x)=\frac{x+\sin (x)\cos (x)}{\pi }-0.25. \end{aligned}$$

Now we find the root of the equation \(g(x)=0\) with initial point 0.4 and compare some well-known methods with our proposed algorithm.

Application 3

Many applications in science and engineering that involve determining an unknown quantity lead to a root-finding problem. Planck’s radiation law, which appears in [24, 25], is one of them; it is given by

$$\begin{aligned} \phi (\mu )=\frac{8\pi hc\mu ^{-5}}{e^{hc/\mu kT}-1}, \end{aligned}$$

which gives the energy density within an isothermal blackbody. Here, \(\mu\) is the wavelength of the radiation, T is the absolute temperature of the blackbody, k is Boltzmann’s constant, h is Planck’s constant, and c is the speed of light. Suppose we wish to determine the wavelength \(\mu\) that maximizes the energy density \(\phi (\mu )\). Differentiating the previous equation, we get

$$\begin{aligned} \phi '(\mu )=\bigg (\frac{8\pi hc\mu ^{-6}}{e^{hc/\mu kT}-1}\bigg ) \bigg (\frac{e^{hc/\mu kT}(hc/\mu kT)}{e^{hc/\mu kT}-1}-5\bigg )=D\,E. \end{aligned}$$

It can be checked that a maximum of \(\phi\) occurs when \(E=0\), that is, when \(\frac{e^{hc/\mu kT}(hc/\mu kT)}{e^{hc/\mu kT}-1}=5\). Setting \(x= hc/\mu kT\), the above equation becomes

$$\begin{aligned} 1-0.2x=e^{-x}. \end{aligned}$$

Let us define

$$\begin{aligned} g(x)=1-0.2x-e^{-x}. \end{aligned}$$

The aim is to find a root of the equation \(g(x)=0\). Obviously, the root \(x=0\) is not of interest here. As argued in [24], at \(x=5\) we have \(1-0.2x=0\) while \(e^{-5} \approx 6.74 \times 10^{-3}\), so another root of the equation \(g(x)=0\) is expected to occur near \(x=5\). The approximate root of g(x) is 4.96511423174427630369. Consequently, the wavelength of radiation \(\mu\) at which the energy density is maximum is approximated by \(\mu \approx \frac{hc}{4.96511423174427630369\, kT}.\)
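Using the same illustrative bm8 sketch from the algorithm construction section, this root can be computed starting from the suggested neighbourhood of \(x=5\):

import math

g = lambda x: 1 - 0.2 * x - math.exp(-x)
x_star = bm8(g, 5.0)                    # bm8 is the sketch given earlier
print(x_star)                           # ~4.965114231744276
# Wien-type estimate of the wavelength of maximum energy density:
# mu ~ h*c / (x_star * k * T)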

Application 4

Study of the multipactor effect [26]. The trajectory of an electron in the air gap between two parallel plates is given by

$$\begin{aligned} x(t)=x_0+\big (\nu _0 +eE_0(m\omega )^{-1} \sin (\omega t_0 +\Psi )\big )(t-t_0) +e E_0(m\omega ^2)^{-1} (\cos (\omega t +\Psi )+\sin (\omega +\Psi )), \end{aligned}$$

where \(E_0\sin (\omega t + \Psi )\) is the RF electric field between the plates, \(x_0\) and \(\nu _0\) are the position and velocity of the electron at time \(t_0\), and e and m are the charge and the rest mass of the electron, respectively. For a particular choice of parameters, one can deal with the simpler expression:

$$\begin{aligned} f(x)=x-0.5\cos (x)+0.25\pi . \end{aligned}$$

The required zero of the above function is \(-0.3094661392082146514....\)

Table 3 shows the numerical results with respect to the number of iterations (m). The numerical results for the above real-life problems demonstrate the validity and applicability of the proposed algorithm and show that it is well suited to all the application experiments. In most cases, the proposed algorithm shows better performance than the existing methods.

Table 3 Comparison of results for Applications

Conclusion and future work

In this study, we suggested a derivative-free iterative algorithm with different values of the parameter \(\theta\) to solve nonlinear equations in the real and complex domains. Since the proposed algorithm is derivative-free, it can also be applied to nonsmooth equations, with positive and promising results. Moreover, the algorithm is particularly appropriate for applications in which the required derivatives are lengthy to compute. Tables 1 and 2 display the performance of the suggested algorithm in terms of accuracy, speed, number of iterations, and computational order of convergence compared with other known algorithms. Figures 1, 2, 3, 4 and 5 show that 8th BM converges quickly and accurately in a small number of iterations. Figures 6, 7, 8, 9, 10, 11 and 12 show that the basins of attraction of the new algorithm can compete with those of other optimal eighth-order algorithms in the literature. The theoretical order of convergence and the COC are verified on the considered problems. Five examples in the real domain and seven in the complex domain are solved, and 8th BM produces better results than the compared methods. The maximum number of iterations needed to reach an absolute error less than \(10^{-15}\) is three. Four real-life applications are solved, and the new algorithm produces better results than the other compared methods.

In future work, we plan to proceed as follows. We will investigate the solution of systems with a large number of equations, and we will extend the codes so that they handle systems of algebraic equations.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

8th BM: Eighth-order derivative-free iteration algorithm.

References

1. Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. ACM 21, 643–651 (1974)

2. Bahgat, M.S.M., Hafiz, M.A.: New two-step predictor-corrector method with ninth order convergence for solving nonlinear equations. J. Adv. Math. 2, 432–437 (2013)

3. Bahgat, M.S.M., Mufida, A.N.: Some families of one and two-step iterative methods for approximating multiple roots of nonlinear equations. J. Egypt. Math. Soc. 26(3), 429–441 (2018)

4. Hafiz, M.A., Bahgat, M.S.M.: Solving nonsmooth equations using family of derivative-free optimal methods. J. Egypt. Math. Soc. 21, 38–43 (2013)

5. Bahgat, M.S.M.: New two-step iterative methods for solving nonlinear equations. J. Math. Res. (2012). https://doi.org/10.5539/jmr.v4n3p128

6. Naseem, A., Rehman, M.A., Abdeljawad, T.: Higher-order root-finding algorithms and their basins of attraction. J. Math. (2020). https://doi.org/10.1155/2020/5070363

7. Prem, B.C.: Optimal methods for finding simple and multiple roots of nonlinear equations and their basins of attraction. Nepali Math. Sci. Rep. 37(1–2), 14–29 (2020). https://doi.org/10.3126/nmsr.v37i1-2.34065

8. Behl, R., Zafar, F., Junjua, M., Alshomrani, A.S.: A new two-point scheme for multiple roots of nonlinear equations. Math. Methods Appl. Sci. 43, 2421–2443 (2020)

9. Jain, P.: Steffensen type methods for solving non-linear equations. Appl. Math. Comput. 194(2), 527–533 (2007). https://doi.org/10.1016/j.amc.2007.04.087

10. Jain, P., Chand, P.B.: Derivative free iterative methods with memory having higher R-order of convergence. Int. J. Nonlinear Sci. Numer. Simul. 21(6), 641–648 (2007)

11. Demidovich, B.P., Maron, A.I.: Computational Mathematics. MIR Publishers, Moscow (1987)

12. Huang, S., Rafiq, A., Muhammad, R.S., Faisal, A.: New higher order iterative methods for solving nonlinear equations. Hacettepe J. Math. Stat. 47(1), 77–91 (2018)

13. Bhavna, P., Jai, P.J.: A new seventh and eighth-order Ostrowski's type schemes for solving nonlinear equations with their dynamics. Gen. Math. Notes 28(1), 1–17 (2015)

14. Madhu, K., Jayaraman, J.: Higher order methods for nonlinear equations and their basins of attraction. Mathematics 4, 22 (2016). https://doi.org/10.3390/math4020022

15. Kou, J., Li, Y., Wang, X.: Some variants of Ostrowski's method with seventh-order convergence. J. Comput. Appl. Math. 209, 153–159 (2007)

16. Hajjah, A., Imran, M., Gamal, M.D.H.: A two-step iterative method free from derivative for solving nonlinear equations. Appl. Math. Sci. 8, 8021–8027 (2014)

17. Scott, M., Neta, B., Chun, C.: Basin attractors for various methods. Appl. Math. Comput. 218(6), 2584–2599 (2011)

18. Neta, B., Scott, M., Chun, C.: Basins of attraction for several methods to find simple roots of nonlinear equations. Appl. Math. Comput. 218(21), 10548–10556 (2012)

19. Chun, C., Lee, M.Y., Neta, B., Dzunic, J.: On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 218, 6427–6438 (2012)

20. Sharma, R., Bahl, A.: An optimal fourth order iterative method for solving nonlinear equations and its dynamics. J. Complex Anal. 1–9 (2015)

21. Behzad, G.: A new general fourth-order family of methods for finding simple roots of nonlinear equations. J. King Saud Univ. Sci. 23, 395–398 (2011)

22. Cayley, A.: The Newton–Fourier imaginary problem. Am. J. Math. 2, 97 (1879)

23. Griffiths, D.V., Smith, I.M.: Numerical Methods for Engineers, 2nd edn. Chapman and Hall/CRC (Taylor and Francis Group), Boca Raton, FL, USA (2011)

24. Bradie, B.: A Friendly Introduction to Numerical Analysis. Pearson Education Inc., London (2006)

25. Jain, D.: Families of Newton-like methods with fourth-order convergence. Int. J. Comput. Math. 90, 1072–1082 (2013)

26. Anza, S., Vicente, C., Boria, B., Armendáriz, V.E.: Long-term multipactor discharge in multicarrier systems. Phys. Plasmas 14, 82–112 (2007)

27. Petkovic, M.S., Neta, B., Petkovic, L.D., Dzunic, J.: Multipoint Methods for Solving Nonlinear Equations. Elsevier, Amsterdam (2012)

28. Parimala, S., Kalyanasundaram, M., Jayaraman, J.: Optimal fourth order methods with its multi-step version for nonlinear equation and their dynamics. SeMA J. (2019). https://doi.org/10.1007/s40324-019-00191-0

29. Parimala, S., Jayakumar, J.: Some new higher order weighted Newton methods for solving nonlinear equation with applications. Math. Comput. Appl. 24, 59 (2019). https://doi.org/10.3390/mca24020059


Acknowledgements

The author would like to thank the referee for his/her valuable comments and suggestions, which improved the manuscript to its present form. The author also acknowledges the authors of the cited literature for providing the initial idea for this work.

Funding

Not applicable.

Author information


Contributions

The author prepared all parts of this paper. The author read and approved the final manuscript.

Corresponding author

Correspondence to Mohamed S. M. Bahgat.

Ethics declarations

Competing interests

The author declares no potential conflicts of interest with respect to the research and authorship of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bahgat, M.S.M. Three-point iterative algorithm in the absence of the derivative for solving nonlinear equations and their basins of attraction. J Egypt Math Soc 29, 23 (2021). https://doi.org/10.1186/s42787-021-00132-9
