
Convergence analysis and parity conservation of a new form of a quadratic explicit spline with applications to integral equations

Abstract

In this study, a new form of a quadratic spline is obtained, where the coefficients are determined explicitly by variational methods. Convergence is studied and parity conservation is demonstrated. Finally, the method is applied to solve integral equations.

Introduction

In mathematics, physics, and engineering, among other disciplines, there is a frequent need to fit discrete sets of data and to approximate functions. In general, one wants to know the values at intermediate points, which can be obtained through interpolation polynomials. In practice, however, high-order polynomials can introduce significant errors: they typically present fluctuations that are not present in the function to be interpolated. For this reason, this work considers piecewise interpolation, which is particularly useful when the data to be fitted alternate between smooth behavior and sharp changes. The focus is on quadratic piecewise interpolants S of continuous real functions. The methods and algorithms found in the literature depend on some chosen criterion for the calculation of one of the coefficients of S. A thorough treatment of interpolation methods, particularly spline methods, is given in [1]; some optimization algorithms can be consulted in [2].

There is a very large and current body of literature on quadratic splines and collocation methods. In [3], quadratic spline interpolation is used where the coefficients of the polynomials are determined in matrix form, similarly in [4]. Typically, all of these methods require the resolution of algebraic systems or recursive equations. In this article, a variational alternative that minimizes the fluctuations of the interpolation polynomial S is presented. Its coefficients are determined explicitly through simple arithmetic computations.

There is great interest in Fredholm-Volterra integral equations in engineering, physics, and other disciplines, which drives the need to find solutions. As is well known, explicit solutions are very difficult to obtain, so approximation methods are in order. Many works have analyzed and proposed numerical methods, including several based on splines. A recent interesting work is [5], in which the collocation method with Chebyshev polynomials is applied to numerically solve the Fredholm-Volterra problem; in [6], this kind of integral equation is solved with B-splines; in [7], fifth-order splines are used; in [8], a quadratic spline with an end condition is utilized; and in [9], variable-order fractional functional differential equations are solved with Legendre collocation. In the present article, the spline obtained is used to solve Fredholm-Volterra integral equations and fractional differential equations.

An advantage of the spline presented in this work with respect to the cited spline methods is that an explicit law is determined for the computation of the coefficients. This avoids the resolution of algebraic systems which, although linear, are not exempt from method error and may be ill conditioned, requiring a rescaling of the system. As for the application to integral equations, and in particular to differential equations with fractional derivatives, an algebraic system must still be solved because the solution itself is unknown; even so, the computation of the coefficients of that algebraic system is simple and explicit.

The work is organized as follows: in “The quadratic spline” section, the interpolator is presented; in the “Convergence analysis” section, the convergence is studied; in the “Parity conservation” section, it is shown that S maintains the parity of the function to be interpolated; in the “Fredholm integral equation” section, the results are applied for Fredholm linear integral equations, similarly for Volterra linear integral equations in the “Volterra integral equation” section; in the “Examples” section, the numerical results are shown; and finally, in the “Conclusions” section, the conclusions are presented.

The quadratic spline

Consider an interval [a,b] in which n+1 equidistant nodes x0=a<x1<…<xn=b are selected, where xk−xk−1=h, k=1,…,n. Let \(y:[a,b]\rightarrow \mathbb {C}\) be a C1 function, and let yk=y(xk), k=0,…,n. Let Ik:=[xk−1,xk], k=1,…,n. It is desired to determine the quadratic piecewise interpolator S(x) that interpolates y(x) at the nodes xk, k=0,…,n, and whose derivative S′(x) is continuous at the interior nodes xk, k=1,…,n−1.

Let S(x) be the function in [x0,xn] defined through the polynomials Pk(x) so that:

$$ S(x)=P_{k}(x), \; x \in I_{k}, \; k=1,\ldots,n. $$
(1)

Since S is an interpolator, Pk(x) must verify:

$$\begin{array}{*{20}l} P_{k}(x_{k-1}) & = y_{k-1}, \; k=1,\ldots,n, \\ P_{k}(x_{k}) & = y_{k}, \; k=1,\ldots,n. \end{array} $$

The continuity of S′(x) at the interior nodes imposes that:

$$ P_{k}^{\prime}(x_{k})=P_{k+1}^{\prime}(x_{k}), \; k=1,\ldots,n-1. $$
(2)

Lagrange polynomials pk(x) over Ik are built as:

$$ p_{k}(x)=\frac{x-x_{k-1}}{h} \, y_{k}-\frac{x-x_{k}}{h} \, y_{k-1}, \; k=1,\ldots,n, $$
(3)

which satisfy pk(xk−1)=yk−1, pk(xk)=yk, k=1,…,n. Then:

$$ P_{k}(x)=p_{k}(x)+a_{k}(x-x_{k-1})(x-x_{k}), \; k=1,\ldots,n. $$
(4)

In [3, 4], the coefficients ak are obtained by linear algebraic systems. Here, an explicit formula will be found. From (2) and through simple algebraic operations:

$$\begin{array}{*{20}l} a_{k+1} & =\Delta_{k}-a_{k}, \; k=1,\ldots,n-1, \\ \Delta_{k} & =\frac{y_{k-1}-2y_{k}+y_{k+1}}{h^{2}}, \end{array} $$
(5)

so each ak=ak(a1) is a linear function of a1. The coefficient a1 is determined from an additional condition.

From (5):

$$ a_{k}=(-1)^{k+1}a_{1}+r_{k}, \; k \geq 2, $$
(6)
$$ r_{k}=(-1)^{k+1}\sum\limits_{j=1}^{k-1}(-1)^{j}\Delta_{j}, \; k \geq 2. $$
(7)

Thus:

$$ a_{k}=(-1)^{k+1}\left(a_{1}+\sum\limits_{j=1}^{k-1}(-1)^{j}\Delta_{j}\right), \; k \geq 2. $$
(8)

In this way, an explicit expression in terms of elementary operations is available for the coefficients of the polynomial S. In addition, if we define r1=0, it is easy to show that:

$$r_{k+1} =\Delta_{k}-r_{k}, \; k=1,\ldots,n-1. $$
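As an implementation aside (an illustration only, not part of the derivation; the function names below are ours), the second differences Δj and the quantities rk can be computed in a few lines:

```python
import numpy as np

def second_differences(y, h):
    """Delta_j = (y_{j-1} - 2 y_j + y_{j+1}) / h^2, for j = 1, ..., n-1."""
    y = np.asarray(y, dtype=float)
    return (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2

def r_coefficients(delta):
    """r_1 = 0 and r_{k+1} = Delta_k - r_k, k = 1, ..., n-1 (definition (7))."""
    r = np.zeros(len(delta) + 1)      # r[0] stores r_1, ..., r[-1] stores r_n
    for k in range(1, len(r)):
        r[k] = delta[k - 1] - r[k - 1]
    return r
```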

Next, in order to minimize the fluctuations, a1 is determined such that the sum of the quadratic errors between Pk(x) and pk(x) in each Ik is minimum. Let E be defined as \(E=\sum _{k=1}^{n} E_{k}\), where:

$$E_{k}=\int\limits_{x_{k-1}}^{x_{k}}[P_{k}(x)-p_{k}(x)]^{2}\,dx. $$

Taking into account (3) and (4):

$$E=E(a_{1})=\sum\limits_{k=1}^{n}a_{k}^{2}\int\limits_{x_{k-1}}^{x_{k}}(x-x_{k})^{2}(x-x_{k-1})^{2}\,dx, $$

from where:

$$E(a_{1})=\frac{h^{5}}{30}\sum\limits_{k=1}^{n} a_{k}^{2}. $$

Then, recalling (6), \(\frac {\partial } {\partial a_{1}}a_{k}=(-1)^{k+1}\), from which \(\frac {\partial ^{2} E(a_{1})}{\partial a_{1}^{2}}=\frac {1}{15} n h^{5} > 0\). Therefore, the value a1 that minimizes E is the one satisfying \(\frac {\partial E(a_{1})}{\partial a_{1}}=0\). This leads to:

$$a_{1}=\frac{1}{n}\sum\limits_{k=1}^{n} (-1)^{k} r_{k}. $$

From (7):

$$ a_{1}=-\frac{1}{n}\sum\limits_{j=1}^{n-1}(n-j)(-1)^{j}\Delta_{j}. $$
(9)

Finally, combining (8) and (9):

$$ a_{k}=(-1)^{k+1}\sum\limits_{j=1}^{n-1}\left(\frac{j}{n}+s_{j}-1\right)(-1)^{j}\Delta_{j}, \; k \geq 2, $$
(10)

where sj=1 if j≤k−1 and sj=0 if j>k−1.

Taking into account the definition of Δj:

$$ a_{k}=\sum_{j=0}^{n} c_{k,j} \, y_{j}, $$
(11)

where ck,j is given by:

$$c_{k,j}= \left\{\begin{array}{ll} \frac{(-1)^{k}}{h^{2}}\beta_{1} & \text{if} \; j=0 \\ \frac{(-1)^{k+1}}{h^{2}}(2\beta_{1}+\beta_{2}) & \text{if} \; j=1 \\ \frac{(-1)^{k+j}}{h^{2}}(\beta_{j-1}+2\beta_{j}+\beta_{j+1}) & \text{if} \; 1< j< n-1 \\ \frac{(-1)^{k+n-1}}{h^{2}}(2\beta_{n-2}+\beta_{n-1}) & \text{if} \; j=n-1 \\ \frac{(-1)^{k+n}}{h^{2}}\beta_{n-1} & \text{if} \; j=n \\ \end{array}\right. $$

where \(\beta _{j}=\frac {j}{n}\) if j≤k−1 and \(\beta _{j}=\frac {j}{n}-1\) if j>k−1.

Equation (11) can be rewritten in matrix form as A=C·Y, where Yn+1,1={yj}j=0,…,n and \({C}_{n,n+1}=\{c_{k,j}\}_{k=1,\ldots,n}^{j=0,\ldots,n}\). Note that the matrix C depends only on h and n, not on the data to interpolate. Equations (3) and (4) define the matrices pn,1(x)={pk(x)}k=1,…,n and Pn,1(x)={Pk(x)}k=1,…,n. Then, the spline S can be written in the following matrix form:

$${P}(x)={p}(x)+{X(x) \cdot A} = {p}(x)+{X(x) \cdot C \cdot Y}. $$

where X(x) is the n×n diagonal matrix whose kth diagonal entry is (x−xk−1)(x−xk). The importance of this equation lies in the fact that the matrix C is calculated only once, because it depends only on h and n and not on the function to interpolate. For each data collection Y, the only new calculation is the matrix-vector product C·Y.
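For illustration (a minimal sketch, not the authors' code; the names are ours), the whole construction can be coded directly from (3), (4), (5), and (9); forming the matrix C explicitly amounts to applying the coefficient map below to the canonical basis vectors:

```python
import numpy as np

def spline_coefficients(y, h):
    """Explicit coefficients a_1, ..., a_n: a_1 from (9), then the recursion (5)."""
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    delta = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2      # Delta_1, ..., Delta_{n-1}
    j = np.arange(1, n)
    a = np.zeros(n)                                       # a[k-1] holds a_k
    a[0] = -np.sum((n - j) * (-1.0) ** j * delta) / n     # eq. (9)
    for k in range(1, n):
        a[k] = delta[k - 1] - a[k - 1]                    # eq. (5)
    return a

def spline_eval(x, nodes, y, a):
    """S(x) = p_k(x) + a_k (x - x_{k-1})(x - x_k) on the subinterval containing x."""
    h = nodes[1] - nodes[0]
    k = int(np.clip(np.searchsorted(nodes, x, side="right"), 1, len(nodes) - 1))
    p = (x - nodes[k - 1]) / h * y[k] - (x - nodes[k]) / h * y[k - 1]   # eq. (3)
    return p + a[k - 1] * (x - nodes[k - 1]) * (x - nodes[k])           # eq. (4)

# usage: interpolate sin(2*pi*x) on [-1, 1] with n = 20 subintervals
nodes = np.linspace(-1.0, 1.0, 21)
vals = np.sin(2.0 * np.pi * nodes)
a = spline_coefficients(vals, nodes[1] - nodes[0])
print(spline_eval(0.13, nodes, vals, a), np.sin(2.0 * np.pi * 0.13))
```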

Convergence analysis

In this section, we prove that the spline presented in this work converges with order O(h) when the interpolated function has a bounded second derivative.

Theorem 1

Consider an interval [a,b] partitioned at x0=a<x1<…<xn=b, where xk−xk−1=h, k=1,…,n. Consider also a C1 function \(y: [a,b] \longrightarrow \mathbb {C}\) with bounded second derivative. Let S(x) be the spline presented in this work that interpolates y(x). Then, S(x) converges uniformly to y(x) as h→0, with convergence order O(h).

Proof:

Let:

$$ D=\max_{x\in[a,b]} |y(x)-S(x)|. $$
(12)

Recalling (1), (3), and (4), it follows that:

$$ D \leq \max_{k \in \{1,\ldots,n\}} \max_{x\in I_{k}} \left|y-\left(\frac{x-x_{k-1}}{h}y_{k} - \frac{x-x_{k}}{h}y_{k-1}\right)\right|+|a_{k}| h^{2}. $$
(13)

From (10):

$$|a_{k}| \leq \sum_{j=1}^{n-1} \left|\frac{j}{n} + s_{j} - 1\right| |\Delta_{j}|. $$

Noting that \(|\frac {j}{n} + s_{j} - 1| \leq 1\) for all j=1,…,n−1, and that the second derivative of y(x) is bounded, so that |Δj|≤M with M a bound for |y″| on [a,b], it follows that:

$$|a_{k}| \leq (n-1) M = \left(\frac{b-a}{h}-1\right) M. $$

Substituting this into (13):

$$D \leq \max_{k \in \{1,\ldots,n\}} \max_{x\in I_{k}} \left|y-\left(\frac{x-x_{k-1}}{h}y_{k} - \frac{x-x_{k}}{h}y_{k-1}\right)\right|+ \left(\frac{b-a}{h}-1\right) M h^{2}. $$

Using the result of [1]:

$$\phantom{\dot{i}\!}\left|y-\left(\frac{x-x_{k-1}}{h}y_{k} - \frac{x-x_{k}}{h}y_{k-1}\right)\right| \leq \frac{h^{2}}{2} \max_{x \in I_{k}} |y^{\prime\prime}(x)|. $$

Combining the two bounds:

$$D \leq \frac{h^{2}}{2}\,M+\left(\frac{b-a}{h}-1\right) M h^{2} = M h \left(b-a - \frac{h}{2}\right), $$

which proves the O(h) convergence.

Parity conservation

Next, it will be shown that the developed spline preserves the parity of the interpolated function.

Theorem 2

The coefficients ak of the spline S that interpolates y=y(x), −a<x<a, with n+1 equidistant nodes satisfy:

  • If y is even, then ak=ank+1, k=1,…,n.

  • If y is odd, then ak=−ank+1, k=1,…,n.

Proof:

The proof is given for y even.

Since y is even, yj=yn−j, j=0,…,n, from which it is immediate that Δj=Δn−j, j=1,…,n−1.

Case 1: n odd.

It will be proved that \(a_{k}=a_{n-k+1}, \; k=1,\ldots,\frac {n-1}{2}\) by induction on k.

  • Let us see that a1=an. From (8):

    $$a_{n}=(-1)^{n+1}\left[a_{1}+\sum\limits_{j\,=\,1}^{n-1}(-1)^{j}\Delta_{j}\right]=a_{1}+\sum\limits_{j=1}^{\frac{n-1}{2}}(-1)^{j}\Delta_{j}+\sum \limits_{j=\frac{n+1}{2}}^{n-1}(-1)^{j}\Delta_{j} $$
    $$=a_{1}+\sum\limits_{j=1}^{\frac{n-1}{2}}(-1)^{j}\Delta_{j}+\sum \limits_{j=\frac{n+1}{2}}^{n-1}(-1)^{j}\Delta_{n-j}=a_{1}+\sum \limits_{j=1}^{\frac{n-1}{2}}(-1)^{j}\Delta_{j}+\sum\limits_{j=1}^{\frac{n-1}{2}}(-1)^{n-j}\Delta_{j} $$
    $$=a_{1}. $$
  • Suppose that ak=an−k+1 for some \(k=1,\ldots,\frac {n-3}{2}\).

Let us see that ak+1=an−k. From (5):

$$a_{k+1}=\Delta_{k}-a_{k}=\Delta_{n-k}-a_{n-k+1}=a_{n-k}. $$

Therefore, \(a_{k}=a_{n-k+1}, \; k=1,\ldots,\frac {n-1}{2}\) if n is odd.

Case 2: n even.

It will be proved that \(a_{k}=a_{n-k+1}, \; k=1,\ldots,\frac {n}{2}\) by induction on k.

  • Let us see that a1=an. From (8):

    $$a_{n}=(-1)^{n+1}\left[a_{1}+\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j} \right]=-a_{1}-\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j}. $$

    Then, it is enough to prove that \(a_{1}=-\frac {1}{2}\sum \limits _{j=1} ^{n-1}(-1)^{j}\Delta _{j}\). From (9):

    $$a_{1}=-\frac{1}{n}\sum\limits_{j=1}^{n-1}(n-j)(-1)^{j}\Delta_{j}=-\frac{1}{2}\left\{\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j} +\sum\limits_{j=1}^{n-1}\left[\frac{2(n-j)}{n}-1\right](-1)^{j}\Delta_{j}\right\} $$
    $$=-\frac{1}{2}\left\{\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j}+\sum \limits_{j=1}^{\frac{n}{2}-1}\left(1-\frac{2j}{n}\right)(-1)^{j}\Delta_{j} +\sum\limits_{j=\frac{n}{2}+1}^{n-1}\left(1-\frac{2j}{n}\right)(-1)^{j} \Delta_{n-j}\right\} $$
    $$=-\frac{1}{2}\left\{\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j}+\sum \limits_{j=1}^{\frac{n}{2}-1}\left(1-\frac{2j}{n}\right)(-1)^{j}\Delta_{j} +\sum\limits_{j=1}^{\frac{n}{2}-1}\left(-1+\frac{2j}{n}\right)(-1)^{n-j} \Delta_{j}\right\} $$
    $$=-\frac{1}{2}\sum\limits_{j=1}^{n-1}(-1)^{j}\Delta_{j}. $$
  • Suppose that ak=an−k+1 for some \(k=1,\ldots,\frac {n}{2}-1\).

Let us see that ak+1=an−k. From (5):

$$a_{k+1}=\Delta_{k}-a_{k}=\Delta_{n-k}-a_{n-k+1}=a_{n-k}. $$

Therefore, \(a_{k}=a_{n-k+1}, \; k=1,\ldots,\frac {n}{2}\) if n is even.

The proof for y odd and n even is similar to case 1 and the proof for y odd and n odd is similar to case 2.

Observation: In two of the cases (y even, n odd and y odd, n even), the property holds independently of the choice of a1, while in the other two cases (y even, n even and y odd, n odd), the choice of a1 is fundamental. It is easy to verify that in these cases the property does not hold for every a1.
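A quick numerical check of the theorem is sketched below (an illustration only; the routine re-implements the explicit law (5), (9) of “The quadratic spline” section, and the test functions and node count are chosen here for convenience):

```python
import numpy as np

def coeffs(y, h):
    # explicit spline coefficients: a_1 from (9), then a_{k+1} = Delta_k - a_k
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    d = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2
    j = np.arange(1, n)
    a = np.zeros(n)
    a[0] = -np.sum((n - j) * (-1.0) ** j * d) / n
    for k in range(1, n):
        a[k] = d[k - 1] - a[k - 1]
    return a

nodes = np.linspace(-1.0, 1.0, 12)                 # n = 11 subintervals, nodes symmetric about 0
h = nodes[1] - nodes[0]
a_even = coeffs(np.cos(np.pi * nodes), h)          # even function
a_odd = coeffs(np.sin(np.pi * nodes), h)           # odd function
print(np.allclose(a_even, a_even[::-1]))           # a_k =  a_{n-k+1}  ->  True
print(np.allclose(a_odd, -a_odd[::-1]))            # a_k = -a_{n-k+1}  ->  True
```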

Corollary 1

The polynomials Pk(x) of the spline S that interpolates y=y(x), −a<x<a, with n+1 nodes satisfy:

  • If y is even, then Pk(x)=Pnk+1(−x), k=1,…,n.

  • If y is odd, then Pk(x)=−Pnk+1(−x), k=1,…,n.

Proof:

Again, the proof is given for y even.

Recalling (3), it is easy to verify that pk(x)=pn−k+1(−x), k=1,…,n.

Then, taking into account that ak=an−k+1, k=1,…,n, and that xk=−xn−k, k=0,…,n, the result follows immediately.

The demonstration for the case y odd is similar.

Fredholm integral equation

There exist numerous works on the numerical solution of Fredholm linear integral equations of the first and second kind. In [6], they are evaluated numerically using a mixed form between splines and Lagrange interpolation. This kind of problem is solved using Taylor expansions in [10]. In [11], the least squares method is utilized. Quadrature methods are numerous; see for example [12]. An interesting work is [5], where Chebyshev polynomials are used. Finally, in [7], splines of order five are used. Here, the quadratic spline determined by our method is applied to numerically solve the linear problem. Consider the following equation:

$$ y(x)-\lambda \int\limits_{a}^{b}K(x,s) \, y(s) \, ds=f(x), $$
(14)

where y(x) is to be determined, a≤x≤b, K(x,s) is continuous in Ω=[a,b]×[a,b], f(x) is continuous in I=[a,b], a and b are finite, and λ is a real parameter. When f(x)≡0, the equation is homogeneous and λ becomes an eigenvalue (or characteristic root) associated to the eigenfunction y(x) in (14). Consider a partition of the interval I with nodes xj=a+jh, \(h=\frac {b-a}{n}\), j=0,1,…,n. Using the quadratic spline S developed in “The quadratic spline” section, y(x) is interpolated through the values \(\{y_{j}\}_{j=0}^{n}\), which must be determined. Evaluating (14) at the nodes xj:

$$y_{j}-\lambda{\int\limits_{a}^{b}}K(x_{j},s) \, S(s) \, ds = f_{j}, \; j=0,1,\ldots,n, $$

where fj=f(xj). From (1):

$${\int\limits_{a}^{b}}K(x_{j},s) \, S(s) \, ds={\sum\limits_{k=1}^{n}}\,{\int\limits_{x_{k-1}}^{x_{k}}}K(x_{j},s) \, P_{k}(s) \, ds. $$

Since ak is linear in yj, j=0,…,n, this leads to a linear system. Taking into account (4) and (11):

$$y_{j}-\lambda{\sum\limits_{k=1}^{n}}\left\{{\sum\limits_{i=0}^{n}}(\delta_{i,k} \, m_{j,k-1}-\delta_{i,k-1} \, m_{j,k}+d_{k,i}) \, y_{i}\right\} \, = f_{j}, \; j=0,1,\ldots,n, $$

where δi,k is the Kronecker delta, ck,i is that of (11), \(m_{j,k}=\frac {1}{h}\int \limits _{x_{k-1}}^{x_{k}} K(x_{j},s)(s-x_{k}) \, ds\), and \(d_{k,i}=c_{k,i}\int \limits _{x_{k-1}}^{x_{k}}K(x_{j},s)(s-x_{k-1})(s-x_{k}) \, ds\) (the dependence of dk,i on j through K(xj,s) is left implicit).

From here:

$$ {\sum\limits_{i=0}^{n}}(\delta_{j,i}-\lambda\alpha_{j,i})\,y_{i} = f_{j}, \; j=0,1,\ldots,n, $$
(15)
$$\alpha_{j,i}={\sum\limits_{k=1}^{n}}\left(\delta_{i,k}\,m_{j,k-1}-\delta_{i,k-1}\,m_{j,k}+d_{k,i}\right). $$

In the non-homogeneous case, (15) is a linear algebraic system whose solution determines \(\{y_{j}\}_{j=0}^{n}\) and therefore S(x). Remark that if λ does not coincide with an eigenvalue of the associated homogeneous equation, the integral equation has a solution; if it matches an eigenvalue, it does not always have one. Therefore, the non-homogeneous system of equations does not always have a solution.

When f(x)≡0 in [a,b], the system is homogeneous and the problem of eigenvalues λ and eigenfunctions y(x) is solved in the usual way.
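To make the non-homogeneous procedure concrete, the sketch below (an illustration with our own naming, not the authors' code) assembles and solves system (15). It exploits the linearity of S in Y: column i of the matrix is obtained from the spline generated by the i-th canonical basis vector, and the integrals are approximated here by Gauss–Legendre quadrature on each subinterval. Equation (20) from the “Examples” section, which corresponds to λ=−2, is used as a test.

```python
import numpy as np

def spline_coeffs(y, h):
    # explicit spline coefficients: a_1 from (9), then a_{k+1} = Delta_k - a_k
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    d = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2
    j = np.arange(1, n)
    a = np.zeros(n)
    a[0] = -np.sum((n - j) * (-1.0) ** j * d) / n
    for k in range(1, n):
        a[k] = d[k - 1] - a[k - 1]
    return a

def spline_eval(s, nodes, y, a):
    # vectorized evaluation of S at the points s (eqs. (3) and (4))
    h = nodes[1] - nodes[0]
    k = np.clip(np.searchsorted(nodes, s, side="right"), 1, len(nodes) - 1)
    p = (s - nodes[k - 1]) / h * y[k] - (s - nodes[k]) / h * y[k - 1]
    return p + a[k - 1] * (s - nodes[k - 1]) * (s - nodes[k])

def solve_fredholm(K, f, lam, a, b, n, quad_pts=6):
    nodes = np.linspace(a, b, n + 1)
    t, w = np.polynomial.legendre.leggauss(quad_pts)
    mid, half = (nodes[1:] + nodes[:-1]) / 2, (nodes[1:] - nodes[:-1]) / 2
    s = (mid[:, None] + half[:, None] * t).ravel()        # quadrature points on [a, b]
    ws = (half[:, None] * w).ravel()                      # corresponding weights
    A = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        e = np.zeros(n + 1); e[i] = 1.0                   # i-th canonical basis vector
        Si = spline_eval(s, nodes, e, spline_coeffs(e, nodes[1] - nodes[0]))
        for j in range(n + 1):
            A[j, i] = np.sum(ws * K(nodes[j], s) * Si)    # alpha_{j,i} of (15), by quadrature
    Y = np.linalg.solve(np.eye(n + 1) - lam * A, f(nodes))
    return nodes, Y

# usage: eq. (20), y(x) + 2*int_0^1 e^{x-t} y(t) dt = 2 x e^x, i.e. lambda = -2
nodes, Y = solve_fredholm(lambda x, t: np.exp(x - t), lambda x: 2 * x * np.exp(x),
                          -2.0, 0.0, 1.0, 10)
print(np.max(np.abs(Y - np.exp(nodes) * (2 * nodes - 2.0 / 3.0))))   # max nodal error
```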

Volterra integral equation

Volterra integral equation of the second kind

Writing the Volterra integral equation of the second kind as

$$ y(x)=f(x)+\lambda\int\limits_{a}^{x} K(x,s) \, y(s)\,ds, $$
(16)

it is of interest to determine y(x), a≤x≤b, where K(x,s) is continuous in Ω=[a,b]×[a,b], f(x) is continuous in I=[a,b], and λ is a real parameter.

For the numerical resolution of (16), partition I using the nodes xj=a+jh, h=(b−a)/n, j=0,1,…,n. The quadratic spline S interpolates y(x) through the values \(\{y_{j}\}_{j=0}^{n}\), which must be determined. Then, evaluating (16) at the n+1 nodes xj gives:

$$\begin{array}{*{20}l} y_{0} & = f_{0}, \\ y_{j} & = f_{j}+\lambda\int\limits_{a}^{x_{j}}K(x_{j},s)\,y(s)\,ds, \; j=1,\ldots,n. \end{array} $$

Taking into account that y(x) is approximated by S(x):

$$y_{j}=f_{j}+\lambda\sum\limits_{k=1}^{j}\int\limits_{x_{k-1}}^{x_{k}}K(x_{j},s)\,P_{k}(s)\,ds, \; j=1,\ldots,n. $$

As in the “Fredholm integral equation” section, these equations represent a non-homogeneous linear system of order n, from whose resolution \(\{y_{j}\}_{j=0}^{n}\) is determined.
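A corresponding sketch for the second-kind Volterra equation (again only an illustration with our own naming) uses the fact that, for each node xj, the integration range [a,xj] is the union of the subintervals Ik with k≤j, so the subinterval contributions of the basis splines are accumulated row by row. Equation (22) from the “Examples” section (λ=−1) is used as a test, together with the closed-form solution y(x)=(1+x2)−3/2, which can be checked by differentiating that equation.

```python
import numpy as np

def spline_coeffs(y, h):
    # explicit spline coefficients: a_1 from (9), then a_{k+1} = Delta_k - a_k
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    d = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2
    j = np.arange(1, n)
    a = np.zeros(n)
    a[0] = -np.sum((n - j) * (-1.0) ** j * d) / n
    for k in range(1, n):
        a[k] = d[k - 1] - a[k - 1]
    return a

def solve_volterra2(K, f, lam, a, b, n, quad_pts=6):
    nodes = np.linspace(a, b, n + 1)
    h = nodes[1] - nodes[0]
    t, w = np.polynomial.legendre.leggauss(quad_pts)
    A = np.zeros((n + 1, n + 1))
    for i in range(n + 1):                               # basis spline generated by e_i
        e = np.zeros(n + 1); e[i] = 1.0
        c = spline_coeffs(e, h)
        for k in range(1, n + 1):                        # P_k of that spline on I_k
            s = 0.5 * (nodes[k - 1] + nodes[k]) + 0.5 * h * t
            Pk = ((s - nodes[k - 1]) / h * e[k] - (s - nodes[k]) / h * e[k - 1]
                  + c[k - 1] * (s - nodes[k - 1]) * (s - nodes[k]))
            for j in range(k, n + 1):                    # I_k contributes only to rows j >= k
                A[j, i] += 0.5 * h * np.sum(w * K(nodes[j], s) * Pk)
    Y = np.linalg.solve(np.eye(n + 1) - lam * A, f(nodes))
    return nodes, Y

# usage: eq. (22), y(x) = 1/(1+x^2) - int_0^x s/(1+x^2) y(s) ds, i.e. lambda = -1
nodes, Y = solve_volterra2(lambda x, s: s / (1.0 + x**2), lambda x: 1.0 / (1.0 + x**2),
                           -1.0, 0.0, 1.0, 10)
print(np.max(np.abs(Y - (1.0 + nodes**2) ** -1.5)))      # compare with y = (1+x^2)^(-3/2)
```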

Volterra integral equation of the first kind

Writing the Volterra integral equation of the first kind as:

$$ f(x)=\int\limits_{0}^{x}K(x,s)\,y(s)\,ds. $$
(17)

In the particular case in which K(x,s), \(\frac {\partial }{\partial x}K(x,s)\), f(x), and f′(x) are continuous in 0≤x≤b, 0≤s≤x, and K(x,x) does not vanish in 0≤x≤b, it is possible to differentiate (17) with respect to x, obtaining:

$$y(x)=\frac{f^{\prime}(x)}{K(x,x)}-\int\limits_{0}^{x}\frac{1}{K(x,x)}\,\frac{\partial}{\partial x}K(x,s)\,y(s)\,ds, $$

which is an equation of the second kind.

A method is now proposed to solve (17) that requires neither these continuity conditions on f(x) and K(x,s) nor the restriction on the zeros of K(x,x). In the same way as in the previous cases, y(x) is determined at the nodes xk, 0≤xk≤b, x0=0, xn=b. Recalling (17), F(x) is defined as:

$$ F(x)=-f(x)+\int\limits_{0}^{x} K(x,s)\,y(s)\,ds, $$
(18)

If y(x) is the solution, then F(x)≡0; in particular, F(xk)=0, k=0,1,…,n. Taking into account that y is approximated by S, (18) writes as:

$$F(x)=-f(x)+\sum_{k=1}^{n-1}\int\limits_{x_{k-1}}^{x_{k}}K(x,s) \, P_{k}(s) \, ds+\int\limits_{x_{n-1}}^{x}K(x,s) \, P_{n}(s) \, ds, \; x_{n-1} \leq x \leq x_{n}. $$

To determine \(\{y(x_{k})\}_{k=0}^{n}\), y(x) is approximated by the spline S(x) given by (4). Evaluating (18) at xk, k=1,…,n:

$$0=-f_{k}+\sum\limits_{j=1}^{k}\int\limits_{x_{j-1}}^{x_{j}}K(x_{k},s)\,P_{j}(s)\,ds, \; k=1,\ldots,n, $$

and, given that Pj(s) depends linearly on \(\{y(x_{k})\}_{k=0}^{n}\), a system of n non-homogeneous linear equations is obtained which determines \(\{y(x_{k})\}_{k=1}^{n}\) parametrized by y0=y(x0). The latter is fixed by the following ansatz: y(x0) is the value that minimizes \(G=\int \limits _{x_{n-1}}^{x_{n}}F^{2}(x) \, dx\).

Since each Pj is linear in y0, F is affine in y0 and G is a quadratic function of y0 with non-negative leading coefficient; hence G is differentiable and it suffices to impose \(\frac {\partial }{\partial y_{0}}G=0\).
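Because G is exactly quadratic in y0, its minimizer can be recovered from three samples, with no iterative optimizer; the sketch below (our illustration, where G stands for the map that solves the linear system for a trial y0, forms F on [xn−1,xn], and integrates F2 numerically) shows this one-dimensional step.

```python
def argmin_quadratic(G):
    """Minimizer of a scalar function known to be quadratic with positive leading
    coefficient, recovered exactly from the samples G(0), G(1), G(2)."""
    g0, g1, g2 = G(0.0), G(1.0), G(2.0)
    A = (g0 - 2.0 * g1 + g2) / 2.0            # leading coefficient (half the second difference)
    B = (-3.0 * g0 + 4.0 * g1 - g2) / 2.0     # linear coefficient
    return -B / (2.0 * A)                     # vertex of the parabola

# sanity check on a known parabola G(t) = (t - 3.7)^2 + 1
print(argmin_quadratic(lambda t: (t - 3.7) ** 2 + 1.0))   # -> 3.7
```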

Examples

Quadratic spline

Consider f(x)=|x|, −1≤x≤1, interpolated with the Lagrange polynomial Ln(x) on equidistant nodes. As is well known, ∥Ln(x)−f(x)∥2 does not tend to 0 as n→∞ (∥·∥2, Lebesgue norm 2). Let en be:

$$ e_{n}(\cdot)=\int\limits_{x_{0}}^{x_{n}}[\cdot-f(x)]^{2} \, dx. $$
(19)

It can be seen that for n>15 the Lagrange interpolation I presents sharp fluctuations. In Table 1, some errors are calculated.

Table 1 Errors for f(x)=|x| with Lagrange interpolation I

However, with the S spline from this work, the undesired fluctuations are markedly reduced, as seen in Table 2. Note that even though f is not a function of class C1, a good interpolant is achieved.

Table 2 Errors for f(x)=|x| with the S spline

Regarding the space convergence order, using D defined in (12), it holds that \(\int _{-1}^{1} (|x|-S(x))^{2} dx \leq \int _{-1}^{1} D^{2} dx = 2 \, D^{2}\). In this example, for n=20, D=5·10−2, so 2 D2=5·10−3, which is indeed larger than e20=6.6·10−4. The space convergence order is linear, as shown in Fig. 1. The dots were obtained numerically, and the curve was obtained by a least squares fit.

Fig. 1: D vs h for f(x)=|x| with the S spline

As a second example, consider g(x)= sin2πx, −1≤x≤1. Interpolations are made with the Quadratic Spline end Condition routine (Q) from [8], with the natural cubic spline of Mathematica 9.0.1.0 software (M) and with the S spline from this work. Results are shown in Table 3.

Table 3 Errors en for g(x)= sin2πx

Therefore, in this example, the spline presented here is a better approximation than the Quadratic Spline end Condition routine.

Regarding the space convergence order, for n=50, D=1.3·10−4 then 2 D2=3.38·10−8 which is effectively bigger than e50(S)=9·10−9. In this case, Fig. 2 shows that the space convergence order is cubic. The dots were numerically obtained, and the curve was obtained by the least squares method using a third degree polynomial.

Fig. 2: D vs h for g(x)= sin2πx with the S spline

Observe that in the first example a linear space convergence order was obtained while in the second it was cubic. The difference in behavior is due to the fact that f(x)=|x| has no continuous derivative at x=0 while the derivatives of g(x)= sin2πx are continuous at every point.

Fredholm equation

Consider two further examples from [13]. The first is:

$$ y(x)+2\int\limits_{0}^{1} e^{x-t}\,y(t)\,dt=2\,x\,e^{x}, $$
(20)

whose solution is \(y(x)=e^{x}(2x-\frac {2}{3})\). The solution obtained using the quadratic spline gives the results shown in Table 4, where the error en is defined in (19) and ET= max|y(x)−yn(x)| for x0≤x≤xn.

Table 4 Errors en and ET with the S spline for (20)

The second example is a homogeneous equation (an eigenvalue problem):

$$ y(x)-\lambda\int\limits_{0}^{1}(2xt-4x^{2})\,y(t)\,dt=0, $$
(21)

whose solution is y(x)=x(1−2x), the eigenvalue being λ=−3, of multiplicity 2. The results are shown in Table 5.

Table 5 Errors en and ET with the S spline for (21)

Finally, consider from [11]:

$$\begin{array}{ll} y(x) & =f(x)+\int\limits_{0}^{1}(x+s)\,y(s)\,ds,\\ f(x) & =1+\cos(x)-(1+x)\sin1-\cos1, \end{array} $$

whose solution is y(x)= cosx. With the S spline, for n=5 and n=10, e5(S)=1.17·10−3 and e10(S)=1.79·10−5 are obtained respectively. In [11], y is approximated by {1,x,x2} using the least squares method (L) where it is obtained that e2(L)=1.49·10−3, which is very similar to our results.

Volterra equation

Consider from [13]:

$$ y(x)=\frac{1}{1+x^{2}}-\int\limits_{0}^{x}\frac{s}{1+x^{2}}\,y(s) \, ds. $$
(22)

The results are shown in Table 6.

Table 6 Errors en and ET with the S spline for (22)

From [6], consider:

$$ \exp(-x^{2})+\frac{\lambda}{2} \, x \, [1-\exp(-x^{2})]=y(x)+\lambda \int\limits_{0}^{x} x\,s\,y(s)\,ds,\;\; 0\leq x\leq 1, $$
(23)

for λ=1, y(x)= exp(−x2). The results are shown in Table 7.

Table 7 Errors en and ET with the S spline for (23)

In [6], the error is not specified and the solutions are only compared graphically. For the spline developed here, the curves corresponding to the numerical and analytical solutions overlap completely.

Again from [13]:

$$ -x^{3}+\int_{0}^{x}(x-s)^{2}\,y(s)\,ds=0, $$
(24)

whose solution is y(x)≡3. The results are shown in Table 8.

Table 8 Errors en and ET with the S spline for (24)

Fractional differential equation

Let α be a real positive number and denote by m=⌈α⌉ the smallest integer not less than α. The Caputo fractional derivative, \(D^{\alpha }_{a}\), of an m-times differentiable function of a real variable, y(x), is defined as [14]

$$ D^{\alpha}_{a} \,y(x) = \frac 1{\Gamma(m-\alpha)} \int_{a}^{x} \frac{y^{(m)}(t)}{(x-t)^{\alpha-m+1}} \,dt, $$
(25)

where y(m)(t) means the mth derivative of the function y(t).

Let \(y(x):[0,X]\longmapsto \mathbb R\) be a differentiable real function of the real variable x and \(f(x):[0,X] \longmapsto \mathbb R\) continuous. Let 1<α≤2. Then, let us consider the following fractional differential equation:

$$ \left\{\begin{array}{rl} D^{\alpha}_{0} y(x) & =f(x) y(x), \; 0< x<X,\\ D^{k}y(0) & =y_{0}^{(k)}, \; k=0,1,2,...,m-1. \end{array}\right. $$
(26)

Converting the initial value problem for the differential equation into an equivalent Volterra integral equation [14]:

$$y(x)= {\sum\limits_{k=0}^{m-1}} \frac{x^{k}}{k!}D^{k}y(0)+\frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha -1}f(t)y(t)dt. $$

Consider the fractional oscillator [15]:

$$ D_{0}^{\alpha}y(x) =-y(x), $$
(27)

where 1<α≤2, whose exact solution is:

$$y(x)=c_{1} \ E_{\alpha,1}(-x^{\alpha})+c_{2} \ x \ E_{\alpha,2}(-x^{\alpha}), $$

where Eα,β(z) is the so-called Mittag-Leffler function

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k+\beta)}\,. $$
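For reference, the Mittag-Leffler function can be evaluated directly from this series; the sketch below (plain truncation, adequate only for moderate |z|) also evaluates E_{α,1}(−x^α), which is what the general solution of (27) reduces to for the initial data y(0)=1, y′(0)=0 used in the experiment reported below.

```python
import math

def mittag_leffler(alpha, beta, z, terms=80):
    """E_{alpha,beta}(z) by direct summation of the series; plain truncation,
    adequate for moderate |z| only."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# exact solution of the fractional oscillator (27) with y(0) = 1, y'(0) = 0: E_{alpha,1}(-x**alpha)
alpha = 1.5
for x in (0.0, 0.5, 1.0, 2.0):
    print(x, mittag_leffler(alpha, 1.0, -x**alpha))
```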

Taking into account the results of the “Volterra integral equation” section, the Volterra equation associated to (27) is solved with the S spline and compared with the exact solution. The parameters used are α=3/2, X=10, y(0)=1, and y′(0)=0. Results are shown in Table 9.

Table 9 Errors en and ET with the S spline for (27)

Conclusions

Using variational calculus, a quadratic spline method that minimizes the spline fluctuations has been developed. Although piecewise interpolation has been studied for several decades, since its development by the mathematician I. J. Schoenberg in the 1940s, the main advantage of the present scheme is that the coefficients of the spline are determined explicitly through simple arithmetic calculations, without recursive equations or the resolution of algebraic systems. A quadratic spline was chosen because in this case the explicit law for the coefficients of the interpolating piecewise polynomial is obtained simply; for higher-order splines, although they improve the interpolation, we were not able to find a simple and explicit expression for the coefficients. Owing to the simplicity of the determination of the coefficients, this variational method may be regarded as an adaptive scheme. The spline has a linear space convergence order for functions with bounded second derivative, and it preserves the parity of the function. Given its simplicity, it is also useful for solving Fredholm-Volterra linear integral equations and fractional differential equations. Some weaknesses of the method are that the nodes must be equally spaced and that we could not extend this form to general higher-order splines or to multiple dimensions. In a second stage, the results will be extended to fractional ordinary differential equations.

Availability of data and materials

All data generated or analyzed during this study are included in this article.

References

  1. Kincaid, D. R., Cheney, W.: Numerical Analysis: Mathematics of Scientific Computing, third ed. Brooks Cole Publishing Company, California (1991).

  2. Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes: The Art of Scientific Computing, first ed. Cambridge University Press, New York (1986).

  3. Foucher, F., Sablonnière, P.: Quadratic spline quasi-interpolants and collocation methods. Math. Comput. Simulat. 79, 3455–3465 (2009). https://doi.org/10.1016/j.matcom.2009.04.004.

  4. Rana, S. S.: Quadratic spline interpolation. J. Approx. Theory. 57, 300–305 (1989). https://doi.org/10.1016/0021-9045(89)90045-2.

  5. Youssri, Y. H., Hafez, R. M.: Chebyshev collocation treatment of Volterra–Fredholm integral equation with error analysis. Arab. J. Math. (2019). https://doi.org/10.1007/s40065-019-0243-y.

  6. Maleknejad, K., Derili, H.: Numerical solution of integral equations by using combination of Spline-collocation method and Lagrange interpolation. Appl. Math. Comput. 175, 1235–1244 (2006). https://doi.org/10.1016/j.amc.2005.08.034.

  7. Chen, F., Wong, P. J. Y.: Solutions of Fredholm integral equations via discrete biquintic splines. Math. Comput. Model. 57, 551–563 (2013). https://doi.org/10.1016/j.mcm.2012.07.007.

  8. Behforooz, G.: Quadratic spline. Appl. Math. Lett. 1, 177–180 (1988). https://doi.org/10.1016/0893-9659(88)90067-5.

  9. Hafez, R. M., Youssri, Y. H.: Legendre-collocation spectral solver for variable-order fractional functional differential equations. Comput. Methods Differ. Equ. 8, 99–110 (2020). https://doi.org/10.22034/cmde.2019.9465.

  10. Maleknejad, K., Aghazadeh, N.: Numerical solution of Volterra integral equations of the second kind with convolution kernel by using Taylor-series expansion method. Appl. Math. Comput. 161, 915–922 (2005). https://doi.org/10.1016/j.amc.2003.12.075.

  11. Wang, Q., Wang, K., Chen, S.: Least squares approximation method for the solution of Volterra-Fredholm integral equations. J. Comput. Appl. Math. 272, 141–147 (2014). https://doi.org/10.1016/j.cam.2014.05.010.

  12. Panda, S., Martha, S. C., Chakrabarti, A.: A modified approach to numerical solution of Fredholm integral equations of the second kind. Appl. Math. Comput. 271, 102–112 (2015). https://doi.org/10.1016/j.amc.2015.08.111.

  13. Krasnov, M. L., Kiseliov, A. I., Makárenko, G. I.: Integral Equations. Mir, Moscow (1982).

  14. Diethelm, K.: The Analysis of Fractional Differential Equations. An Application Oriented Exposition Using Differential Operators of Caputo Type. Springer Verlag, Berlin (2010).

  15. Narahari Archar, B. N., Hanneken, J. W., Enck, T., Clarke, T.: Dynamics of the fractional oscillator. Phys. A. 297, 361–367 (2001). https://doi.org/10.1016/S0378-4371(01)00200-X.


Acknowledgements

We want to thank the reviewers of this work, whose suggestions have contributed to its improvement.

Funding

This research has been partially sponsored by the Universidad Nacional de Rosario through the projects ING495 “Estudio de diversos problemas con ecuaciones diferenciales fraccionarias” and 80020180100064UR “Esquemas numéricos para la resolución de modelos de orden fraccionario relativos al tratamiento de la infección por VIH”. The first author is also sponsored by CONICET through an internal doctoral fellowship.


Contributions

The authors read and approved the final manuscript.

Corresponding author

Correspondence to Alberto José Ferrari.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ferrari, A.J., Lara, L.P. & Santillan Marcus, E.A. Convergence analysis and parity conservation of a new form of a quadratic explicit spline with applications to integral equations. J Egypt Math Soc 28, 30 (2020). https://doi.org/10.1186/s42787-020-00091-7

