Error Control - Maple Help

Error Control for Numerical Solution of IVPs in Maple

Description

 • dsolve[numeric] approximates the solution of an initial value problem (IVP) for a system of ordinary differential equations (ODEs).  Although ODEs arise in various forms, they can always be rewritten as a first-order system, so to explain error control, it is assumed that the ODEs have the form

$\frac{ⅆ}{ⅆx}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}y\left(x\right)=f\left(x,y\left(x\right)\right),y\left(a\right)=\mathrm{y0}$

 for $x$ in the range $a..b$. Here $x$ is the independent variable and $y\left(x\right)$ is the vector of dependent variables.
 • Numerical methods solve this IVP by approximating the solution successively at a discrete set of points ${x}_{0}=a<{x}_{1}<{x}_{2}<\mathrm{...}<{x}_{m}=b$. Using an approximation ${y}_{i}$ to $y\left({x}_{i}\right)$, and possibly approximations at a few ${x}_{j}$ prior to ${x}_{i}$, an approximation ${y}_{i+1}$ is computed to $y\left({x}_{i+1}\right)$.  This is called taking a step to ${x}_{i+1}$ with a step of size $h={x}_{i+1}-{x}_{i}$.
 • At each step the solver makes a truncation and/or discretization error that depends on the method and the length of the step.  The cumulative effect of these errors depends on the stability of the IVP near the solution $y\left(x\right)$. If the IVP is stable near the solution (solutions with nearby initial data do not diverge from one another), errors are not amplified; if the IVP is unstable near the solution (solutions with nearby initial data diverge), errors are amplified.
 • Adaptive solvers estimate the discretization error at each step and control it by adjusting the step size.  The cumulative error in the numerical solution depends on the stability of the IVP, but for moderately stable problems, the solvers are tuned so that the error is comparable to the tolerances on the discretization error.  For the efficient solution of an IVP, the solvers use the largest step size that results in a discretization error smaller than the tolerances. That is, a large step size is used if the solution is easy to approximate and a small one if it is difficult.
 • Unfortunately, there is no "best" numerical method for IVPs. A method for which the discretization error is $\mathrm{O}\left({h}^{p}\right)$ is said to be of order $p$.  If the step size $h$ is sufficiently small, a method of higher order has a smaller discretization error, but because higher order methods are more expensive per step, they are an advantage only if the tolerances are sufficiently small. In particular, dsolve[rkf45] is generally more efficient than the higher order dsolve[dverk78] for modest tolerances (the default), but dsolve[dverk78] is generally more efficient for stringent tolerances.  The dsolve[lsode] code adapts the order, as well as the step size, so as to be efficient over a wide range of tolerances.
 • The Maple numerical IVP solvers control the discretization error by means of the options abserr, relerr, minstep, maxstep, and initstep. Note that the classical methods do not estimate the discretization error or vary the step size to control it.
 • The options that pertain to error control are:

 abserr relerr initstep maxstep minstep

 • Not all options apply to all methods, and exceptions are noted in the discussion of each option. In all cases, the default values are specific to each method. For additional information, consult the individual method help pages.
 • abserr and relerr are tolerances for the discretization error: abserr is an absolute error tolerance and relerr is a relative error tolerance. The exact meaning of these tolerances depends on the solver. To explain this, let the dependent variables at the i-th step be ${\left({y}_{i}\right)}_{j},j=1..n$ and the estimated discretization error be ${\mathrm{approxerr}}_{j}$.
 • For the rkf45, ck45, rosenbrock, the related dae extension methods, the taylorseries method, and the mebdfi method, the inequality

${\mathrm{approxerr}}_{j}\le \mathrm{abserr}+\mathrm{relerr}\left|{\left({y}_{i}\right)}_{j}\right|$

 must be satisfied for all ${\left({y}_{i}\right)}_{j}$ simultaneously, $j=1..n$.
 Note: all of the solvers listed above except taylorseries also accept a per-component absolute error tolerance, in which case the inequality becomes:

${\mathrm{approxerr}}_{j}\le {\mathrm{abserr}}_{j}+\mathrm{relerr}\left|{\left({y}_{i}\right)}_{j}\right|$
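With these solvers, a per-component absolute tolerance is supplied to dsolve as a list, one entry per dependent variable. A minimal sketch, assuming the list entries follow the solver's ordering of the dependent variables (the system and tolerance values are illustrative):

```maple
# Two-component system; a tighter absolute tolerance is requested for the
# second component than for the first (values chosen for illustration only)
sys := [diff(y(t), t) = -y(t) + z(t), diff(z(t), t) = -z(t),
        y(0) = 1, z(0) = 1]:
# abserr given as a list applies component-wise; relerr remains a scalar
dsol := dsolve(sys, numeric, method = rkf45,
               abserr = [1.0e-6, 1.0e-10], relerr = 1.0e-7):
dsol(1.0);
```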

 • For the gear and dverk78 methods, the inequality

${\mathrm{approxerr}}_{j}\le \mathrm{max}\left(\mathrm{abserr},\mathrm{relerr}\left|{\left({y}_{i}\right)}_{j}\right|\right)$

 must be satisfied for all ${\left({y}_{i}\right)}_{j}$ simultaneously, $j=1..n$.
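Because these two methods take the maximum of the two bounds, setting abserr negligibly small yields essentially pure relative error control, and setting relerr negligibly small yields essentially pure absolute error control. A sketch of the former, with illustrative values:

```maple
# Essentially pure relative error control with dverk78: abserr is made
# negligible compared to relerr (values chosen for illustration only)
sys := [diff(y(x), x) = -y(x), y(0) = 1]:
dsol := dsolve(sys, numeric, method = dverk78,
               abserr = 1.0e-14, relerr = 1.0e-8):
dsol(2.0);
```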
 • lsode measures the error in the sense of root-mean-square:

$\sqrt{\frac{\sum _{j=1}^{n}\frac{{\mathrm{approxerr}}_{j}^{2}}{{\left(\mathrm{abserr}+\mathrm{relerr}\left|{\left({y}_{i}\right)}_{j}\right|\right)}^{2}}}{n}}\le 1$
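The weighted root-mean-square measure can be evaluated directly to see when lsode would accept a step; a sketch with illustrative error estimates and solution values:

```maple
# Illustrative: the weighted RMS test used by lsode; a step is accepted
# when the measure is at most 1
approxerr := [1.0e-9, 5.0e-9, 2.0e-9]:
yi := [1.0, 10.0, 0.1]:
abserr := 1.0e-8:  relerr := 1.0e-8:
n := 3:
# For these values the measure is about 0.112, well below 1, so the
# step would be accepted
evalf(sqrt(add((approxerr[j]/(abserr + relerr*abs(yi[j])))^2, j = 1 .. n)/n));
```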

 • The other options, minstep, maxstep, and initstep, are generally intended as fine-tuning options and must be used with care.
 The minstep parameter places a minimum on the size of the steps taken by the solver, and forces an error if the solver cannot achieve the required error tolerance without reducing the step below the minimum. This can be used to recognize singularities, and can also be used to limit the cost of the computation, though a better way to accomplish this is to limit the number of evaluations of $f\left(x,y\right)$; see dsolve[maxfun].
 The maxstep parameter places a maximum on the size of the steps taken by the solver, so that even if the step control indicates that a larger step is possible, the size of the step will not exceed the imposed limit. This can be used to ensure that the solver does not lose the scale of the problem.
 You can specify to the solver the scale of the problem near the initial point $x=a$ by supplying an initial step size as initstep.  The solver uses this step size if the error of the step is acceptable. Otherwise, it reduces the step size and tries again.
 The minstep and maxstep options are not available for the solvers rkf45, ck45, rosenbrock, and the related dae extension methods, as they require control of these parameters to provide a good solution.
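The step-control options are passed directly to dsolve. A sketch using the dverk78 method, which accepts all three options (the option values are illustrative):

```maple
sys := [diff(y(x), x) = -y(x), y(0) = 1]:
# Suggest an initial step, cap the step at 0.1 so the solver does not
# lose the scale of the problem, and raise an error if a step below
# 1.0e-10 would be required to meet the tolerances
dsol := dsolve(sys, numeric, method = dverk78,
               initstep = 0.01, maxstep = 0.1, minstep = 1.0e-10):
dsol(1.0);
```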

Examples

 > $\mathrm{dsys}≔\left[\mathrm{diff}\left(y\left(x\right),x,x\right)=-y\left(x\right),y\left(0\right)=0,\mathrm{D}\left(y\right)\left(0\right)=1\right]$
 ${\mathrm{dsys}}{≔}\left[\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){=}{-}{y}{}\left({x}\right){,}{y}{}\left({0}\right){=}{0}{,}{\mathrm{D}}{}\left({y}\right){}\left({0}\right){=}{1}\right]$ (1)

The first example solves this IVP with the default rkf45 method and default error tolerances.  The errf function measures the difference between the computed solution and the exact solution $\mathrm{sin}\left(x\right)$. This example shows how the error can vary over the interval of integration. The remaining examples show the effect of tighter tolerances on the error.

The rkf45 method with default error tolerances. Note that the option maxfun=0 is needed to remove the limit on the number of function evaluations performed for the integration out to x=900:

 > $\mathrm{dsol1}≔\mathrm{dsolve}\left(\mathrm{dsys},\mathrm{numeric},\mathrm{maxfun}=0\right)$
 ${\mathrm{dsol1}}{≔}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (2)
 > $\mathrm{errf}≔x↦\mathrm{evalf}\left(\mathrm{abs}\left(\mathrm{sin}\left(x\right)-\mathrm{rhs}\left(\mathrm{dsol1}\left(x\right)\left[2\right]\right)\right)\right)$
 ${\mathrm{errf}}{≔}{x}{↦}{\mathrm{evalf}}{}\left(\left|{\mathrm{sin}}{}\left({x}\right){-}{\mathrm{rhs}}{}\left({{\mathrm{dsol1}}{}\left({x}\right)}_{{2}}\right)\right|\right)$ (3)
 > $\left[\mathrm{errf}\left(9\right),\mathrm{errf}\left(90\right),\mathrm{errf}\left(900\right)\right]$
 $\left[{4.46240948692722}{×}{{10}}^{{-7}}{,}{8.91718218909432}{×}{{10}}^{{-6}}{,}{0.0000929084983034567}\right]$ (4)

The following plot shows the growth of the global error over the first portion of the interval of computation.

 > $\mathrm{plot}\left(\mathrm{errf},0..9\right)$

The rkf45 method with tighter error tolerances:

 > $\mathrm{dsol2}≔\mathrm{dsolve}\left(\mathrm{dsys},\mathrm{numeric},\mathrm{abserr}=1.×{10}^{-8},\mathrm{relerr}=1.×{10}^{-8},\mathrm{maxfun}=0\right)$
 ${\mathrm{dsol2}}{≔}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (5)
 > $\mathrm{errf}≔x↦\mathrm{evalf}\left(\mathrm{abs}\left(\mathrm{sin}\left(x\right)-\mathrm{rhs}\left(\mathrm{dsol2}\left(x\right)\left[2\right]\right)\right)\right)$
 ${\mathrm{errf}}{≔}{x}{↦}{\mathrm{evalf}}{}\left(\left|{\mathrm{sin}}{}\left({x}\right){-}{\mathrm{rhs}}{}\left({{\mathrm{dsol2}}{}\left({x}\right)}_{{2}}\right)\right|\right)$ (6)
 > $\left[\mathrm{errf}\left(9\right),\mathrm{errf}\left(90\right),\mathrm{errf}\left(900\right)\right]$
 $\left[{1.15294584435155}{×}{{10}}^{{-8}}{,}{2.42536584593722}{×}{{10}}^{{-7}}{,}{2.62007740259307}{×}{{10}}^{{-6}}\right]$ (7)
 > $\mathrm{plot}\left(\mathrm{errf},0..9\right)$

The dverk78 method with the same tolerances.  This solver is more efficient than rkf45 at these tolerances, so it is not necessary to increase the default maxfun.

 > $\mathrm{dsol3}≔\mathrm{subs}\left(\mathrm{dsolve}\left(\mathrm{dsys},\mathrm{numeric},\mathrm{output}=\mathrm{listprocedure},\mathrm{method}=\mathrm{dverk78},\mathrm{abserr}=1.×{10}^{-8},\mathrm{relerr}=1.×{10}^{-8}\right),y\left(x\right)\right)$
 ${\mathrm{dsol3}}{≔}{\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (8)
 > $\mathrm{errf}≔x↦\mathrm{evalf}\left(\mathrm{abs}\left(\mathrm{sin}\left(x\right)-\mathrm{dsol3}\left(x\right)\right)\right)$
 ${\mathrm{errf}}{≔}{x}{↦}{\mathrm{evalf}}{}\left(\left|{\mathrm{sin}}{}\left({x}\right){-}{\mathrm{dsol3}}{}\left({x}\right)\right|\right)$ (9)
 > $\left[\mathrm{errf}\left(9\right),\mathrm{errf}\left(90\right),\mathrm{errf}\left(900\right)\right]$
 $\left[{4.93375695853615}{×}{{10}}^{{-9}}{,}{1.66151348235388}{×}{{10}}^{{-9}}{,}{4.79303242650886}{×}{{10}}^{{-7}}\right]$ (10)

This solution is sufficiently accurate that, to compute the error accurately, we must use more than the default 10 digits of accuracy in Maple.

 > $\mathrm{Digits}≔\mathrm{trunc}\left(\mathrm{evalhf}\left(\mathrm{Digits}\right)\right)$
 > $\mathrm{plot}\left(\mathrm{errf},0..9\right)$

The dverk78 method with even tighter error tolerances:

 > $\mathrm{dsol4}≔\mathrm{subs}\left(\mathrm{dsolve}\left(\mathrm{dsys},\mathrm{numeric},\mathrm{output}=\mathrm{listprocedure},\mathrm{method}=\mathrm{dverk78},\mathrm{abserr}=1.×{10}^{-10},\mathrm{relerr}=1.×{10}^{-10}\right),y\left(x\right)\right)$
 ${\mathrm{dsol4}}{≔}{\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (11)
 > $\mathrm{errf}≔x↦\mathrm{evalf}\left(\mathrm{abs}\left(\mathrm{sin}\left(x\right)-\mathrm{dsol4}\left(x\right)\right)\right)$
 ${\mathrm{errf}}{≔}{x}{↦}{\mathrm{evalf}}{}\left(\left|{\mathrm{sin}}{}\left({x}\right){-}{\mathrm{dsol4}}{}\left({x}\right)\right|\right)$ (12)
 > $\left[\mathrm{errf}\left(9\right),\mathrm{errf}\left(90\right),\mathrm{errf}\left(900\right)\right]$
 $\left[{1.33253463818761}{×}{{10}}^{{-10}}{,}{2.34880781491142}{×}{{10}}^{{-10}}{,}{4.41993575073241}{×}{{10}}^{{-9}}\right]$ (13)
 > $\mathrm{Digits}≔\mathrm{trunc}\left(\mathrm{evalhf}\left(\mathrm{Digits}\right)\right)$
 > $\mathrm{plot}\left(\mathrm{errf},0..9\right)$