The discrete scheme corresponding to the original problem (1)–(3) is as follows:

$$ \mathrm{For}\;i=1,2,...,N-1,\kern0.6em {L}_1^N{Y}_i={f}_i-{b}_i{\phi}_{i-N}, $$

(27)

$$ \mathrm{For}\;i=N+1,...,2N-1,\kern0.6em {L}_2^N{Y}_i={f}_i, $$

(28)

subject to the boundary conditions:

$$ {Y}_i={\phi}_i,\kern0.5em i=-N,-N+1,...,0, $$

(29)

$$ {K}^N{Y}_{2N}={Y}_{2N}-\sum \limits_{i=1}^{2N}\frac{g_{i-1}{Y}_{i-1}+4{g}_i{Y}_i+{g}_{i+1}{Y}_{i+1}}{3}{h}_i, $$

(30)

and

$$ {D}^{-}{Y}_N={D}^{+}{Y}_N, $$

(31)

where

$$ {\displaystyle \begin{array}{l}{L}_1^N{Y}_i=-\varepsilon {\delta}^2Y\left({x}_i\right)+a\left({x}_i\right){D}^0Y\left({x}_i\right)+b\left({x}_i\right)Y\left({x}_i\right)\\ {}{L}_2^N{Y}_i=-\varepsilon {\delta}^2Y\left({x}_i\right)+a\left({x}_i\right){D}^0Y\left({x}_i\right)+b\left({x}_i\right)Y\left({x}_i\right)+c\left({x}_i\right)Y\left({x}_{i-N}\right)\end{array}}. $$
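The operators \( {D}^{+},{D}^{-},{D}^0 \), and \( {\delta}^2 \) appearing in \( {L}_1^N \) and \( {L}_2^N \) are the standard forward, backward, central, and second-order difference quotients. As a minimal illustrative sketch (assuming a uniform mesh of width *h*; the function and variable names below are ours, not from the paper), they can be evaluated at the interior mesh points as follows:

```python
import numpy as np

def difference_operators(y, h):
    """Standard finite difference quotients on a uniform mesh of width h,
    evaluated at the interior points i = 1, ..., len(y) - 2.

    D_plus  : forward difference  (Y[i+1] - Y[i]) / h
    D_minus : backward difference (Y[i] - Y[i-1]) / h
    D0      : central difference  (Y[i+1] - Y[i-1]) / (2h)
    delta2  : second difference   (Y[i+1] - 2Y[i] + Y[i-1]) / h^2
    """
    d_plus = (y[2:] - y[1:-1]) / h
    d_minus = (y[1:-1] - y[:-2]) / h
    d0 = (y[2:] - y[:-2]) / (2 * h)
    delta2 = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2
    return d_plus, d_minus, d0, delta2

# Sanity check on y = x^2 over [0, 2]: for a quadratic, D0 reproduces
# the derivative 2x exactly and delta2 reproduces y'' = 2 exactly.
h = 0.1
x = np.arange(0.0, 2.0 + h / 2, h)
dp, dm, d0, d2 = difference_operators(x**2, h)
```

Because the test function is quadratic, the central and second differences are exact, which is a convenient check that the stencils are coded correctly.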

**Lemma 5:** (Discrete Maximum Principle) Assume that

$$ \sum \limits_{i=1}^{2N}\frac{g_{i-1}+4{g}_i+{g}_{i+1}}{3}{h}_i=\rho <1 $$

and that the mesh function *ψ*(*x*_{i}) satisfies *ψ*(*x*_{0}) ≥ 0 and *K*^{N}*ψ*(*x*_{2N}) ≥ 0. Then, \( {L}_1^N\psi \left({x}_i\right)\ge 0,\forall {x}_i\in {\varOmega}_1^{2N} \), \( {L}_2^N\psi \left({x}_i\right)\ge 0,\forall {x}_i\in {\varOmega}_2^{2N} \), and *D*^{+}*ψ*(*x*_{N}) − *D*^{−}*ψ*(*x*_{N}) ≤ 0 together imply that \( \psi \left({x}_i\right)\ge 0,\forall {x}_i\in {\overline{\varOmega}}^{2N} \).

**Proof:** Define

$$ s\left({x}_i\right)=\Big\{{\displaystyle \begin{array}{l}\frac{1}{8}+\frac{x_i}{2},\kern0.5em {x}_i\in \left[0,1\right]\cap {\overline{\varOmega}}^{2N},\\ {}\frac{3}{8}+\frac{x_i}{4},\kern0.5em {x}_i\in \left[1,2\right]\cap {\overline{\varOmega}}^{2N},\end{array}} $$

Note that \( s\left({x}_i\right)>0,\forall {x}_i\in {\overline{\varOmega}}^{2N} \), \( {L}_1^Ns\left({x}_i\right)>0,\forall {x}_i\in {\varOmega}_1^{2N} \), \( {L}_2^Ns\left({x}_i\right)>0,\forall {x}_i\in {\varOmega}_2^{2N} \), \( s(0)>0 \), \( {K}^Ns\left({x}_{2N}\right)>0 \), and \( {D}^{+}s\left({x}_N\right)-{D}^{-}s\left({x}_N\right)<0 \).

Let \( \mu =\max \left\{\frac{-\psi \left({x}_i\right)}{s\left({x}_i\right)}:{x}_i\in {\overline{\varOmega}}^{2N}\right\} \). Then, there exists \( {x}_k\in {\overline{\varOmega}}^{2N} \) such that *ψ*(*x*_{k}) + *μs*(*x*_{k}) = 0 and \( \psi \left({x}_i\right)+\mu s\left({x}_i\right)\ge 0,\forall {x}_i\in {\overline{\varOmega}}^{2N} \). Therefore, the function (*ψ* + *μs*) attains its minimum at *x* = *x*_{k}. Suppose the lemma does not hold; then *μ* > 0.

**Case (i):** *x*_{k} = *x*_{0}. Then 0 < (*ψ* + *μs*)(*x*_{0}) = 0, a contradiction.

**Case (ii):** \( {x}_k\in {\varOmega}_1^{2N} \). Then \( 0<{L}_1^N\left(\psi +\mu s\right)\left({x}_k\right)\le 0 \), a contradiction.

**Case (iii):** *x*_{k} = *x*_{N}. Then 0 ≤ (*D*^{+} − *D*^{−})(*ψ* + *μs*)(*x*_{N}) < 0, a contradiction.

**Case (iv):** \( {x}_k\in {\varOmega}_2^{2N} \). Then \( 0<{L}_2^N\left(\psi +\mu s\right)\left({x}_k\right)\le 0 \), a contradiction.

**Case (v):** *x*_{k} = *x*_{2N}. Then

$$ 0<{K}^N\left(\psi +\mu s\right)\left({x}_{2N}\right)=\left(\psi +\mu s\right)\left({x}_{2N}\right)-\sum \limits_{i=1}^{2N}\frac{g_{i-1}\left(\psi +\mu s\right)\left({x}_{i-1}\right)+4{g}_i\left(\psi +\mu s\right)\left({x}_i\right)+{g}_{i+1}\left(\psi +\mu s\right)\left({x}_{i+1}\right)}{3}{h}_i\le 0 $$

It is a contradiction. This completes the proof of the lemma.

**Lemma 6:** Let *ψ*(*x*_{i}) be any mesh function. Then, for 0 ≤ *i* ≤ 2*N*,

$$ \left|\psi \left({x}_i\right)\right|\le C\max \left\{\left|\psi \left({x}_0\right)\right|,\left|{K}^N\psi \left({x}_{2N}\right)\right|,\underset{i\in {\Omega}_1^{2N}\cup {\Omega}_2^{2N}}{\max}\left|{L}^N\psi \left({x}_i\right)\right|\right\}. $$

**Proof:** For the proof, refer to [16].

The following theorem shows the parameter-uniform convergence of the developed scheme.

### Theorem 1:

Let *y*(*x*_{i}) and *y*_{i} be the exact solution of (1)–(3) and the numerical solution of (17), respectively. Then, for sufficiently large *N*, the following parameter-uniform error estimate holds:

$$ \underset{0<\varepsilon \le 1}{\sup}\left\Vert y\left({x}_i\right)-{y}_i\right\Vert \le C{N}^{-2} $$

(32)

**Proof:** Let us consider the local truncation error defined as follows:

$$ {L}^h\left(y\left({x}_i\right)-{y}_i\right)=-\varepsilon \sigma \left(\rho \right)\left(\frac{d^2}{d{x}^2}-{D}^{+}{D}^{-}\right)y\left({x}_i\right)+a\left({x}_i\right)\left(\frac{d}{dx}-{D}^0\right)y\left({x}_i\right), $$

(33)

where \( \varepsilon \sigma \left(\rho \right)=a(1)\frac{N^{-1}}{2}\coth \left(a(1)\frac{N^{-1}}{2\varepsilon}\right) \) since \( \rho =\frac{N^{-1}}{\varepsilon } \). By assumption, *ε* ≤ *h* = *N*^{−1}.

Keeping *N* fixed and taking the limit as *ε* → 0, we obtain the following:

$$ \underset{\varepsilon \to 0}{\lim}\varepsilon \sigma \left(\rho \right)=\underset{\varepsilon \to 0}{\lim }a(1)\frac{N^{-1}}{2}\coth \left(a(1)\frac{N^{-1}}{2\varepsilon}\right)=C{N}^{-1}. $$
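This limiting behavior of the fitting factor can be checked numerically. The following sketch is our own illustration (the names `fitted_factor`, `N`, and `a1` are assumed; the coefficient *a*(1) = 3 is taken from Example 1 below): it evaluates *εσ*(*ρ*) for decreasing *ε* and confirms that it approaches *a*(1)*N*^{−1}/2, since coth(*t*) → 1 as *t* → ∞.

```python
import numpy as np

def fitted_factor(eps, N, a1):
    """eps * sigma(rho) = a(1) * (h/2) * coth(a(1) * h / (2*eps)), h = 1/N."""
    h = 1.0 / N
    t = a1 * h / (2 * eps)
    return a1 * (h / 2) / np.tanh(t)  # coth(t) = 1 / tanh(t)

N, a1 = 100, 3.0   # a1 = a(1) = 3 as in Example 1 below
# As eps -> 0 with N fixed, coth(.) -> 1, so the factor tends to
# a1 * h / 2 = C * N^{-1}, as stated in the limit above.
vals = [fitted_factor(eps, N, a1) for eps in (1e-2, 1e-4, 1e-8)]
```

Since coth(*t*) > 1 for all *t* > 0, every computed value lies above the limit *a*(1)/(2*N*), and the smallest *ε* already reproduces it to machine precision.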

From the Taylor series expansion, the bounds for the differences become:

$$ \Big\{{\displaystyle \begin{array}{l}\left\Vert \left(\frac{d^2}{d{x}^2}-{D}^{+}{D}^{-}\right)y\left({x}_i\right)\right\Vert \le C{N}^{-3}\left\Vert \frac{d^4\left(y\left({x}_i\right)\right)}{d{x}^4}\right\Vert \\ {}\left\Vert \left(\frac{d}{dx}-{D}^0\right)y\left({x}_i\right)\right\Vert \le C{N}^{-2}\left\Vert \frac{d^3\left(y\left({x}_i\right)\right)}{d{x}^3}\right\Vert \end{array}}, $$

where \( \left\Vert \frac{d^k\left(y\left({x}_i\right)\right)}{d{x}^k}\right\Vert =\underset{x_i\in \left({x}_0,{x}_{2N}\right)}{\sup}\left|\frac{d^ky\left({x}_i\right)}{d{x}^k}\right|,k=3,4 \).

Now, using the bounds and the assumption *ε* ≤ *N*^{−1}, (33) reduces to:

$$ {\displaystyle \begin{array}{l}\left\Vert {L}^h\left(y\left({x}_i\right)-{y}_i\right)\right\Vert =\left\Vert -\varepsilon \sigma \left(\rho \right)\left(\frac{d^2}{d{x}^2}-{D}^{+}{D}^{-}\right)y\left({x}_i\right)+a\left({x}_i\right)\left(\frac{d}{dx}-{D}^0\right)y\left({x}_i\right)\right\Vert \\ {}\kern6em \le \left\Vert -\varepsilon \sigma \left(\rho \right)\left(\frac{d^2}{d{x}^2}-{D}^{+}{D}^{-}\right)y\left({x}_i\right)\right\Vert +\left\Vert a\left({x}_i\right)\left(\frac{d}{dx}-{D}^0\right)y\left({x}_i\right)\right\Vert \\ {}\kern6.25em \le C{N}^{-3}\left\Vert \frac{d^4\left(y\left({x}_i\right)\right)}{d{x}^4}\right\Vert +C{N}^{-2}\left\Vert \frac{d^3\left(y\left({x}_i\right)\right)}{d{x}^3}\right\Vert \end{array}}. $$

(34)

Here, the target is to show that the scheme converges independently of the perturbation parameter *ε*.

By using the bounds for the derivatives of the solution in Lemma 4, we obtain:

$$ {\displaystyle \begin{array}{l}\left\Vert {L}^h\left(y\left({x}_i\right)-{y}_i\right)\right\Vert \le C{N}^{-3}\left\Vert \frac{d^4\left(y\left({x}_i\right)\right)}{d{x}^4}\right\Vert +C{N}^{-2}\left\Vert \frac{d^3\left(y\left({x}_i\right)\right)}{d{x}^3}\right\Vert \\ {}\kern6em \le C{N}^{-3}\left(1+{\varepsilon}^{-4}\exp \left(\frac{-\alpha \left(1-{x}_j\right)}{\varepsilon}\right)\right)+C{N}^{-2}\left(1+{\varepsilon}^{-3}\exp \left(\frac{-\alpha \left(1-{x}_j\right)}{\varepsilon}\right)\right)\\ {}\kern6em \le C{N}^{-2}\left(1+{\varepsilon}^{-4}\exp \left(\frac{-\alpha \left(1-{x}_j\right)}{\varepsilon}\right)\right),\kern0.5em \mathrm{since}\kern0.5em {\varepsilon}^{-4}\ge {\varepsilon}^{-3}\end{array}}. $$

(35)

**Lemma 7:** For a fixed mesh and for *ε* → 0, it holds:

$$ \underset{\varepsilon \to 0}{\lim}\underset{1\le j\le N-1}{\max}\frac{\exp \left(\frac{-\alpha \left(1-{x}_j\right)}{\varepsilon}\right)}{\varepsilon^m}=0,\kern0.5em m=1,2,3,... $$

(36)

**Proof:** Refer to [19].
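Lemma 7 can also be verified numerically on a fixed mesh. The sketch below (our own illustration; `layer_ratio` and its parameters are assumed names, with α = 1 and *m* = 3 chosen arbitrarily) evaluates the maximum of the layer ratio over the interior mesh points for decreasing *ε*:

```python
import numpy as np

def layer_ratio(eps, m, alpha=1.0, N=100):
    """max over x_1, ..., x_{N-1} of exp(-alpha*(1 - x_j)/eps) / eps^m."""
    x = np.linspace(0.0, 1.0, N + 1)[1:-1]   # interior points x_1, ..., x_{N-1}
    return np.max(np.exp(-alpha * (1.0 - x) / eps)) / eps**m

# For a fixed mesh, the maximum occurs at x_{N-1} = 1 - 1/N, where the
# exponential decays like exp(-alpha/(N*eps)) and eventually dominates
# any negative power eps^{-m}, driving the ratio to zero.
ratios = [layer_ratio(eps, m=3) for eps in (1e-2, 1e-3, 1e-4, 1e-5)]
```

Note the ratio is not monotone in *ε*: it first grows as *ε*^{−m} and only collapses once *ε* is small relative to the fixed mesh spacing, which is exactly why Lemma 7 requires the mesh to be fixed before taking the limit.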

Using Lemma 7 in (35) results in:

$$ \left\Vert {L}^h\left(y\left({x}_i\right)-{y}_i\right)\right\Vert \le C{N}^{-2} $$

(37)

Hence, by the discrete maximum principle, we obtain:

$$ \left\Vert y\left({x}_i\right)-{y}_i\right\Vert \le C{N}^{-2}. $$

(38)

Thus, (38) establishes the required estimate (32). This completes the proof.

**Remark:** A similar analysis for convergence may be carried out for the finite difference scheme (24).

### Richardson Extrapolation

This is an acceleration technique that combines two computed approximations of the solution; the combination turns out to be an improved approximation. From the local truncation error term, we have:

$$ \mid y\left({x}_i\right)-{y}_i\mid \le C(h) $$

(39)

where *y*(*x*_{i}) and *y*_{i} are the exact and approximate solutions, respectively, and *C* is a constant independent of the mesh size *h*.

Let *Ω*^{4N} be the mesh obtained by bisecting each mesh interval of *Ω*^{2N}, and denote the computed solution on *Ω*^{4N} by \( {\overline{y}}_i \). Since (39) holds for any *h* ≠ 0, it follows that:

$$ y\left({x}_i\right)-{y}_i\le C(h)+{R}^{2N},\kern0.84em {x}_i\in {\varOmega}^{2N} $$

(40)

Similarly, since it holds for any \( \frac{h}{2}\ne 0 \), we obtain:

$$ y\left({x}_i\right)-{\overline{y}}_i\le C\left(\frac{h}{2}\right)+{R}^{4N},\kern0.84em {x}_i\in {\varOmega}^{4N} $$

(41)

where the remainders *R*^{2N} and *R*^{4N} are *O*(*h*^{2}). Combining the inequalities in (40) and (41) leads to \( y\left({x}_i\right)-\left(2{\overline{y}}_i-{y}_i\right)\approx O\left({h}^2\right) \), which suggests that

$$ {\left({y}_i\right)}^{ext}=2{\overline{y}}_i-{y}_i $$

(42)

is also an approximation of *y*(*x*_{i}). Using this approximation to estimate the truncation error, we obtain:

$$ \mid y\left({x}_i\right)-{\left({y}_i\right)}^{ext}\mid \le C\left({h}^2\right) $$

(43)

where *C* is independent of the mesh size *h*. Thus, Richardson extrapolation accelerates the first-order convergent method to second-order convergence, as shown in (43). Hence, the proposed method is second-order convergent.
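The extrapolation step (42) can be illustrated on any first-order approximation. In the sketch below (our own example, not the scheme of the paper; `forward_diff` and the test point are assumed names), a first-order forward-difference derivative is computed with steps *h* and *h*/2, and the combination 2ȳ − *y* cancels the leading *O*(*h*) error term:

```python
import numpy as np

def forward_diff(f, x, h):
    """First-order forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

f, x0, h = np.sin, 1.0, 1e-3
y_h = forward_diff(f, x0, h)        # error O(h)
y_h2 = forward_diff(f, x0, h / 2)   # error O(h/2)
y_ext = 2 * y_h2 - y_h              # Richardson combination (42): error O(h^2)

exact = np.cos(x0)
```

Expanding both approximations in Taylor series shows that the *O*(*h*) terms cancel exactly in 2*y*_{h/2} − *y*_{h}, leaving an *O*(*h*^{2}) remainder, which is the mechanism behind (42) and (43).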

### Numerical examples and results

In this section, two examples are considered to illustrate the applicability of the numerical method discussed above. The exact solutions of these test problems are not known. Therefore, the double mesh principle is used to estimate the errors and compute the numerical rate of convergence of the computed solution. The double mesh formula used to determine the maximum absolute error is defined as follows:

$$ {E}_{\varepsilon}^N=\underset{0\le i\le 2N}{\max}\mid {Y}_i^N-{Y}_{2i}^{2N}\mid $$

where \( {Y}_i^N \) and \( {Y}_{2i}^{2N} \) are the *i*^{th} components of the numerical solutions for *N* and 2*N*, respectively. We compute the uniform error and the rate of convergence using the formulas:

$$ {E}^N=\underset{\varepsilon }{\max }{E}_{\varepsilon}^N\kern0.5em \mathrm{and}\kern0.5em {R}^N={\log}_2\left(\frac{E^N}{E^{2N}}\right) $$

The numerical results are presented for the values of the perturbation parameter *ε* ∈ {10^{−4}, 10^{−8}, ..., 10^{−20}}.
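The double mesh computation can be sketched as follows. This is our own illustration, not the paper's scheme: since the full fitted-operator solver is not reproduced here, a central-difference derivative of a known function stands in for the numerical solution, and `central_derivative` and `double_mesh_error` are assumed names. The coarse solution at a common point is compared with the fine solution at the same point, and the rate is estimated from two successive double-mesh errors.

```python
import numpy as np

def central_derivative(f, N, a=0.0, b=2.0):
    """Central-difference derivative values at the interior points of a
    uniform mesh with N subintervals on [a, b] (a second-order method)."""
    h = (b - a) / N
    x = np.linspace(a, b, N + 1)
    return x[1:-1], (f(x[2:]) - f(x[:-2])) / (2 * h)

def double_mesh_error(f, N):
    """E^N = max_i |Y^N_i - Y^{2N}_{2i}| at the common interior points."""
    _, coarse = central_derivative(f, N)
    _, fine = central_derivative(f, 2 * N)
    # Coarse interior point i coincides with fine interior index 2i - 1.
    return np.max(np.abs(coarse - fine[1::2]))

f = np.sin
E_N = double_mesh_error(f, 64)
E_2N = double_mesh_error(f, 128)
rate = np.log2(E_N / E_2N)   # close to 2 for a second-order method
```

For a method of order *p*, the double-mesh error behaves like *Ch*^{p}, so halving the mesh divides it by 2^{p} and the computed rate approaches *p*; here the rate comes out close to 2.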

**Example 1:**

$$ {\displaystyle \begin{array}{l}-\varepsilon {y}^{{\prime\prime} }(x)+3{y}^{\prime }(x)+y(x)-y\left(x-1\right)=1,\kern0.72em x\in \left(0,1\right)\cup \left(1,2\right)\\ {}\kern0.48em y(x)=1,\kern0.72em x\in \left[-1,0\right]\\ {}\kern0.48em y(2)-\varepsilon \underset{0}{\overset{2}{\int }}\frac{x}{3}y(x) dx=2\end{array}} $$

**Example 2:**

$$ {\displaystyle \begin{array}{l}-\varepsilon {y}^{{\prime\prime} }(x)+5{y}^{\prime }(x)+\left(x+1\right)y(x)-y\left(x-1\right)={x}^2,\kern0.72em x\in \left(0,1\right)\cup \left(1,2\right)\\ {}\kern0.48em y(x)=1,\kern0.72em x\in \left[-1,0\right]\\ {}\kern0.48em y(2)-\varepsilon \underset{0}{\overset{2}{\int }}\frac{x}{3}y(x) dx=2\end{array}} $$