Problem Solvers

This section describes some problem solvers. These functions are either built in or read from a file at program start.

Solving Equations

In UTIL, some functions are defined which are of general use. First, there are some functions for solving equations.

    >bisect("f",a,b,...)

uses the bisection method to solve f(x,...)=0. f must be a function of the real variable x which may have additional parameters "...". These parameters can be passed to bisect and are in turn passed to f. You can use the syntax

    >bisect("f",a,b;v)

to pass the additional parameter v to f(x,v). This syntax applies to all functions below that accept additional parameters to be passed to a function.
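As a minimal sketch, a hypothetical function f(x,v)=x^2-v can be solved for x with v=3 passed after the semicolon:

    >function f(x,v)
    $  return x^2-v;
    $endfunction
    >bisect("f",1,2;3)

This should return an approximation of sqrt(3).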

The method uses the internal epsilon to stop the iteration. You can specify your own epsilon with

    >bisect("f",a,b,eps=0.1)

Note that parameters with values must be the last parameters of all. The eps parameter can be applied to many functions in this section.

It is possible to pass an expression to bisect. This expression must be a string containing a valid EULER expression for f(x). The name of the unknown variable must be x.

    >bisect("x^2-2",1,2);

All functions in this section will work with expressions in x.

    >secant("f",a,b,...)

uses the secant method for the same purpose. "f" may again be an expression containing the variable x.

If you want to find the root of an expression containing one or more variables, you can use

    >root("expr",x)

expr must be any valid EULER expression. All variables in this expression must be global variables of type real. This function should not be used inside other functions, unless that function sets the global variables first! x must be the name of the variable for which you want to solve expr==0. The new value of x solves the equation. E.g.,

    >a=2; x=1; root("x^2-a",x)

will set x equal to sqrt(2).

If you need to find roots of a function of several variables, you can use Newton's method or the method of Broyden.

    >broyden("f",x)

or

    >broyden("f",x,J,...)

uses the Broyden method to find the roots of f(x,...)=0. This time, f may be a function of a vector x. J is an approximation of the Jacobian at the starting point x. If J==0 or J is missing, the function computes J itself. Again, additional parameters are passed to f.
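As a sketch, a hypothetical function describing the intersection of the circle x[1]^2+x[2]^2=2 with the line x[1]=x[2]:

    >function f(x)
    $  return [x[1]^2+x[2]^2-2, x[1]-x[2]];
    $endfunction
    >broyden("f",[1,0])

Starting from [1,0], this should converge to the root [1,1].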

    >newton("f","f1",x,...)

finds a zero of f using the Newton method for a function of one variable. You must provide the derivative of f in f1.
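For example, with the expression x^2-2 and its derivative 2*x,

    >newton("x^2-2","2*x",1)

should converge to sqrt(2) from the starting point 1. Similarly,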

    >newton2("f","Df",x,...)

is used for several variables. f(x) must compute a 1xn vector y from the 1xn vector x. Df(x) must compute the nxn derivative matrix at x.
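A minimal sketch, using hypothetical functions f and Df for the system x[1]^2=x[2], x[1]+x[2]=2:

    >function f(x)
    $  return [x[1]^2-x[2], x[1]+x[2]-2];
    $endfunction
    >function Df(x)
    $  return [2*x[1],-1;1,1];
    $endfunction
    >newton2("f","Df",[2,2])

This should converge to the solution [1,1].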

Optimization

The minimum of a convex function (maximum of a concave function) can be computed with fmin (fmax). E.g.

    >fmin("x^3-x",0,1)

If f is a function, you may also use fmin("f",0,1). As always, additional parameters are passed to f.

The function fmin uses the Golden Ratio method. However, Brent's method may be faster and is available with

    >brentmin("x^3-x",0.5)

where the parameters are an initial guess, and optionally a step size for the search and a stopping epsilon. The basic built-in function is brent. All minima and maxima of a function or expression (in x) are computed with

    >fextrema("x^3-x",a,b)

where a and b are the interval ends. An optional parameter n determines the number of subintervals to be investigated. The minimum of a function in several variables can be found with the Nelder-Mead simplex algorithm.

    >neldermin("f",v)

This time, f must take a 1xn vector as an argument and return a real number. Alternatively, you may pass an expression in x which evaluates to a real number for a 1xn vector x. v is the starting point for the search. The basic built-in function is nelder. Optionally, you may pass a delta as the size of the first simplex and a stopping epsilon.
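As a sketch, a hypothetical quadratic function with its minimum at [1,2]:

    >function f(x)
    $  return (x[1]-1)^2+(x[2]-2)^2;
    $endfunction
    >neldermin("f",[0,0])

The result should be close to [1,2].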

Linear Optimization

EULER has a built-in simplex algorithm, which minimizes c'*x among all x satisfying x>=0 and A*x<=b.

    >simplex(A,b,c)

will compute the solution x. To check for feasibility and boundedness, use

    >{x,r}=simplex(A,b,c)

r will be 0 if x is a correct minimum. If r is -1, the problem is not feasible. If r is 1, the problem is unbounded; in this case, x contains a feasible point. In any case, A must be a real mxn matrix, b an mx1 matrix (a column vector), and c a 1xn matrix (a row vector). x will be an nx1 column vector.
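For example, to minimize -x1-x2 subject to x1+2*x2<=4, 3*x1+x2<=6 and x>=0:

    >A=[1,2;3,1]; b=[4;6]; c=[-1,-1];
    >{x,r}=simplex(A,b,c)

The minimum is attained at x=[1.6;1.2] with r=0.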

Note that the internal epsilon is used for the numerical computations.

Integration

    >simpson("f",a,b)

or

    >simpson("f",a,b,n,...)

computes the Simpson integral of f over [a,b]. n determines the discretization, and additional parameters are passed to f(x,...). f must be able to handle vector input for x. Again, "f" may be an expression in x.
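For example,

    >simpson("x^2",0,1)

should return 1/3 (up to rounding), since the Simpson rule is exact for polynomials up to degree three.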

    >gauss("f",a,b)

or

    >gauss("f",a,b,n,...)

performs Gauss integration with 10 knots. If n is specified, the interval is subdivided into n subintervals. f(x,...) must be able to handle vector input for x. "f" may be an expression in x.

    >romberg("f",a,b)

or

    >romberg("f",a,b,n,...)

uses the Romberg method for integration. n is the starting discretization. All other parameters are as in "simpson". Again, "f" may be an expression containing the variable x.

There is also an adaptive integrator

    >adaptiveint("f",a,b,eps)

where eps is the desired accuracy. Optionally, you may pass the number of initial subdivisions. Further parameters are passed to f.
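For example, assuming pi is available as the built-in constant,

    >adaptiveint("sin(x)",0,pi,0.00001)

should return a value close to 2.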

Differentiation

To take the derivative, one may use the dif function, as in

    >dif("f",x)
    >dif("f",x,n)

(the latter for the n-th derivative). Again, "f" may be an expression in "x".
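For example,

    >dif("sin(x)",0)

should return a value close to cos(0)=1.

There are some approximation tools. Polynomial interpolation has been discussed above.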

Splines

There is also spline interpolation

    >sp=spline(t,s)

which returns the second derivatives of the natural cubic spline through (t[i],s[i]). To evaluate this spline at x,

    >splineval(t,s,sp,x)

is available.
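A minimal sketch, interpolating four points:

    >t=[0,1,2,3]; s=[0,1,4,9];
    >sp=spline(t,s);
    >splineval(t,s,sp,1.5)

evaluates the natural cubic spline through the points (t[i],s[i]) at x=1.5.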

Regression Analysis

    >polyfit(t,s,n)

fits a polynomial of degree n to (t[i],s[i]) in the least-squares sense. This is an application of

    >fit(A,b)

which computes x such that the L_2-norm of Ax-b is minimal.
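For example, the points below lie exactly on the parabola s=t^2, so the fitted coefficients should reproduce that polynomial:

    >t=[-1,0,1,2]; s=[1,0,1,4];
    >polyfit(t,s,2)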

Iteration and Fixed Points

    >iterate("f",x,...)

seeks a fixed point of f by iterating the function from x. If you provide additional arguments ..., these are passed to f. Of course, f must be a function of type

    >function f(x)
    $...
    $endfunction
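As a sketch, a hypothetical function implementing the Babylonian iteration for sqrt(2):

    >function f(x)
    $  return (x+2/x)/2;
    $endfunction
    >iterate("f",1)

should converge to sqrt(2).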

If you want to see the iterations, use the following variant.

    >niterate("f",x,n,...)

iterates f starting at x n times and returns the vector of iterated values.

    >steffenson("f",x);

seeks a fixed point of f starting from x, using the Steffensen operator. nsteffenson is similar to niterate.

    >map("f",x,...)

evaluates the function f(x,...) at each element of x. Use it when f(x,...) does not work directly because the function f does not accept a vector x.
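As a sketch, a hypothetical function whose body requires a scalar argument, since the upper integration limit must be a single number:

    >function f(t)
    $  return simpson("x^2",0,t);
    $endfunction
    >map("f",[1,2,3])

evaluates f at 1, 2 and 3 in turn and should return approximately [1/3,8/3,9].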

Differential Equations

One of the differential equation solvers is the Heun method. To use it, write the function f which defines the differential equation y'=f(t,y). y can be a 1xn vector. Then call

    >heun("f",t,y0,...)

The extra parameters ... are passed to f. y0 is the starting value y(t[1]), and t must be a vector of points. The function returns the values of the solution at the points t. The accuracy depends on the distances from t[i] to t[i+1]. heun is implemented as a UTIL function.
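A minimal sketch, solving y'=y with y(0)=1 (the exact solution is exp(t)), assuming the colon notation generates the vector of points:

    >function f(t,y)
    $  return y;
    $endfunction
    >t=0:0.1:1;
    >heun("f",t,1)

should return values close to exp(t), ending near 2.71828 at t=1.

A faster solver, implemented as a built-in function, is the Runge-Kutta method, which also has an adaptive variant.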

    >runge("f",t,y0)

does the same as heun. Optionally, a number of steps to take between the points of t can be passed. Thus you have to pass additional arguments for f as

    >runge("f",t,y0;...)

The adaptive Runge-Kutta method is called adaptiverunge and takes an epsilon and an initial step size as optional arguments. Internally, these functions use the built-in runge1 and runge2, respectively.
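For example, with the same hypothetical f as above,

    >runge("f",[0,1],1,100)

should take 100 intermediate steps between t=0 and t=1 and return values close to [1,2.71828].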