 Introduction to the Global Optimization Toolbox - Maple Programming Help


Introduction to the Global Optimization Toolbox

Overview of Features

Maplesoft has partnered with Noesis Solutions to develop a new version of the Maple Global Optimization Toolbox, powered by Optimus® technology and first released with the Global Optimization Toolbox 17.

The new global solver offers two solver methods, a differential evolution algorithm and a hybrid algorithm that uses interpolating response surface models, as well as new options to guide the search.  In addition, the updated toolbox is fully compatible with existing code.

Highlights of some of the new options available for the two new solver methods:

Differential Evolution Algorithm

 • Average step width — the variation in design variables used as stopping criterion
 • Population size — the number of designs evaluated during each iteration
 • Inverse crossover probability — the probability that a design variable is carried over unchanged from its predecessor
 • Target weighting factor — applies weighted differences of the variables of the randomly selected designs of the previous generation, allowing you to set the balance between computation speed and probability of success

Hybrid Algorithm

 • Theta method — methods for maximizing the predictive quality of the model, either by maximizing the likelihood of the set of points with respect to the model or by minimizing the semi-norm of the correlation matrix
 • Number of sigma — how much attention is given to the estimated standard deviation when selecting the next points
 • Nugget — the smoothing factor for the kriging response surface model
 • Optimum search method — different methods for locating new promising points in the design space

The following sample application solves an investment optimization problem in two ways.

Introduction

Traditional investment performance benchmarks, such as the Sharpe Ratio, approximate the returns distribution with its mean and standard deviation. This, however, assumes the distribution is normal. Many modern investment vehicles, such as hedge funds, display fat tails, skew, and kurtosis in their returns distributions, so they cannot be adequately benchmarked with traditional approaches.

One solution, proposed by Shadwick and Keating in 2002, is the Omega Ratio. This divides the returns distribution into two halves: the area below a target return and the area above it. The Omega Ratio is the probability-weighted gains above the target divided by the probability-weighted losses below it, so a higher value is better.

For a set of discrete returns, the Omega Ratio is given by

$\mathrm{Ω}\left(L\right)=\frac{E\left[\mathrm{max}\left(R-L,0\right)\right]}{E\left[\mathrm{max}\left(L-R,0\right)\right]}$

where L is the target return and R is a vector of returns.

This application finds the asset weights that maximize the Omega Ratio of a portfolio of ten investments, given their simulated monthly returns and a target return.

This is a non-convex problem and requires a global optimizer for a rigorous solution. However, a transformation of the variables (valid only for Omega Ratios greater than 1) converts the optimization into a linear program.

This application implements both approaches: the former uses Maple's Global Optimization Toolbox, and the latter uses Maple's linear programming features. For the data set provided in this application, both approaches give comparable results.
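As a quick cross-check outside Maple, the discrete formula above can be sketched in NumPy. The returns matrix and weights below are made-up illustrative values, not the data set used later in this application.

```python
import numpy as np

def omega_ratio(L, returns, weights):
    """Omega Ratio: expected gains above the target L divided by
    expected losses below it, for discrete per-period returns."""
    p = returns @ weights                      # per-period portfolio returns
    gains = np.maximum(p - L, 0.0).mean()      # E[max(R - L, 0)]
    losses = np.maximum(L - p, 0.0).mean()     # E[max(L - R, 0)]
    return gains / losses

# Two illustrative funds, four periods, equal weights (hypothetical numbers)
returns = np.array([[0.30, 0.05],
                    [-0.10, 0.20],
                    [0.15, -0.05],
                    [0.05, 0.10]])
print(omega_ratio(0.1, returns, np.array([0.5, 0.5])))  # → 0.6
```

A ratio below 1 means expected shortfall below the target outweighs expected gains above it, which is why the optimization below seeks weights that push the ratio as high as possible.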

Returns Data and Minimum Acceptable Return

 > $\mathrm{restart}:$

Monthly hedge fund returns

 >

Number of funds

 > $N≔\mathrm{LinearAlgebra}\left[\mathrm{ColumnDimension}\right]\left(\mathrm{data}\right)$
 ${N}{:=}{10}$ (3.1)

Number of returns for each fund

 > $S≔\mathrm{LinearAlgebra}\left[\mathrm{RowDimension}\right]\left(\mathrm{data}\right)$
 ${S}{:=}{36}$ (3.2)

Target Return

 > $L≔0.1:$

Omega Ratio

 >

"Strawman" portfolio of equal weights at the target return

 > $\mathrm{OmegaRatio}\left(L,\mathrm{data},\left[0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1\right]\right)$
 ${4.003192510}$ (3.3)

Global Optimization

The new Global Optimization Toolbox finds the optimum asset weights easily.

 > $\mathrm{resultsGO}:=\mathrm{GlobalOptimization}\left[\mathrm{GlobalSolve}\right]\left('\mathrm{OmegaRatio}'\left(L,\mathrm{data},\left[\mathrm{seq}\left({w}_{i},i=1..10\right)\right]\right),\left\{\mathrm{add}\left({w}_{i},i=1..N\right)=1\right\},\mathrm{seq}\left({w}_{i}=0..1,i=1..N\right),\mathrm{maximize}\right):$

Optimized Omega Ratio

 > ${\mathrm{resultsGO}}_{1}$
 ${6.97039644046392137}$ (3.2.1)

Optimized investment weights

 > $\mathrm{weightsGO}≔{\mathrm{resultsGO}}_{2}$
 ${\mathrm{weightsGO}}{:=}\left[{{w}}_{{1}}{=}{0.00000276016608452556}{,}{{w}}_{{2}}{=}{0.00000228470381952173}{,}{{w}}_{{3}}{=}{3.94708622564188}{}{{10}}^{{-7}}{,}{{w}}_{{4}}{=}{0.410741791062930}{,}{{w}}_{{5}}{=}{8.41236359461206}{}{{10}}^{{-7}}{,}{{w}}_{{6}}{=}{0.234868780012149}{,}{{w}}_{{7}}{=}{0.353335563853041}{,}{{w}}_{{8}}{=}{0.00000174272058245384}{,}{{w}}_{{9}}{=}{0.000176546056103011}{,}{{w}}_{{10}}{=}{0.000869375380069815}\right]$ (3.2.2)
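For readers without the toolbox, the same maximization can be sketched with SciPy's `differential_evolution`; its `popsize`, `recombination`, `mutation`, and `tol` parameters loosely mirror the population size, crossover probability, weighting factor, and average-step-width options described earlier. The returns matrix here is a synthetic stand-in for the data set elided above, so the numbers will not match the worksheet output.

```python
import numpy as np
from scipy.optimize import differential_evolution

def omega_ratio(L, returns, weights):
    p = returns @ weights
    losses = np.maximum(L - p, 0.0).mean()
    return np.maximum(p - L, 0.0).mean() / max(losses, 1e-12)  # guard divide-by-zero

# Synthetic stand-in for the elided returns matrix: 36 months x 5 funds
rng = np.random.default_rng(7)
data = rng.normal(0.1, 0.5, size=(36, 5))
L = 0.1

def negative_omega(w):
    w = w / w.sum()              # impose add(w_i) = 1 by normalizing
    return -omega_ratio(L, data, w)

result = differential_evolution(negative_omega, bounds=[(1e-9, 1.0)] * 5,
                                popsize=25, recombination=0.7,
                                mutation=0.5, tol=1e-6, seed=3)
weights = result.x / result.x.sum()
print(-result.fun)               # optimized Omega Ratio
```

Normalizing inside the objective is a simple way to handle the equality constraint; the toolbox's `GlobalSolve` accepts the constraint directly.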

Linear Program

Finding the optimum asset weights using the linear programming method:

 > $\mathrm{eq1}≔\mathrm{seq}\left(\mathrm{add}\left({\mathrm{data}}_{i,j}{w}_{j},j=1..N\right)-{u}_{i}+{d}_{i}-Lt=0,i=1..S\right):$
 > $\mathrm{eq2}≔\mathrm{add}\left({w}_{j},j=1..N\right)=t:$
 > $\mathrm{eq3}≔\mathrm{add}\left(\frac{{d}_{i}}{S},i=1..S\right)=1:$
 > $\mathrm{obj}≔\mathrm{add}\left(\frac{{u}_{i}}{S},i=1..S\right):$
 > $\mathrm{cons}≔\mathrm{seq}\left({u}_{i}\ge 0,i=1..S\right),\mathrm{seq}\left({d}_{i}\ge 0,i=1..S\right),\mathrm{seq}\left({w}_{j}\ge 0,j=1..N\right):$
 > $\mathrm{resultsLP}≔\mathrm{Optimization}\left[\mathrm{LPSolve}\right]\left(\mathrm{obj},\left\{\mathrm{eq1},\mathrm{eq2},\mathrm{eq3},\mathrm{cons}\right\},\mathrm{maximize},\mathrm{assume}=\mathrm{nonnegative}\right):$

Optimized Omega Ratio

 > ${\mathrm{resultsLP}}_{1}$
 ${6.97971754550025}$ (3.3.1)

Optimized investment weights

 > $\mathrm{assign}\left(\mathrm{select}\left(\mathrm{has},{\mathrm{resultsLP}}_{2},t\right)\right)$
 > $\mathrm{weightsLP}:=\mathrm{map}\left(i→\mathrm{lhs}\left(i\right)=\frac{\mathrm{rhs}\left(i\right)}{t},\mathrm{select}\left(\mathrm{has},{\mathrm{resultsLP}}_{2},w\right)\right)$
 ${\mathrm{weightsLP}}{:=}\left[{{w}}_{{1}}{=}{0.}{,}{{w}}_{{2}}{=}{0.}{,}{{w}}_{{3}}{=}{0.}{,}{{w}}_{{4}}{=}{0.409790222161672}{,}{{w}}_{{5}}{=}{0.}{,}{{w}}_{{6}}{=}{0.219617381328148}{,}{{w}}_{{7}}{=}{0.333211183439266}{,}{{w}}_{{8}}{=}{0.}{,}{{w}}_{{9}}{=}{0.}{,}{{w}}_{{10}}{=}{0.0373812133486924}\right]$ (3.3.2)
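The variable transformation used above (scale the portfolio by an auxiliary variable t so that the expected shortfall is normalized to 1, then maximize the expected gain) can be sketched with SciPy's `linprog`. The returns matrix is again a synthetic stand-in for the elided data, with one uniformly bad month added so that every portfolio has some downside and the LP stays bounded.

```python
import numpy as np
from scipy.optimize import linprog

def omega_ratio(L, returns, weights):
    p = returns @ weights
    return np.maximum(p - L, 0.0).mean() / np.maximum(L - p, 0.0).mean()

# Synthetic stand-in: 36 months x 5 funds, plus one uniformly bad month
rng = np.random.default_rng(7)
data = rng.normal(0.15, 0.5, size=(36, 5))
data[0, :] = -0.5
S, N = data.shape
L = 0.1

# Variables x = [w_1..w_N, u_1..u_S, d_1..d_S, t], all nonnegative
n = N + 2 * S + 1
c = np.zeros(n)
c[N:N + S] = -1.0 / S                      # maximize add(u_i/S); linprog minimizes

A_eq = np.zeros((S + 2, n))
for i in range(S):                         # add(data[i,j]*w_j) - u_i + d_i - L*t = 0
    A_eq[i, :N] = data[i]
    A_eq[i, N + i] = -1.0
    A_eq[i, N + S + i] = 1.0
    A_eq[i, -1] = -L
A_eq[S, :N] = 1.0                          # add(w_j) = t
A_eq[S, -1] = -1.0
A_eq[S + 1, N + S:N + 2 * S] = 1.0 / S     # add(d_i/S) = 1
b_eq = np.zeros(S + 2)
b_eq[S + 1] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
weights = res.x[:N] / res.x[-1]            # undo the scaling: w/t
print(-res.fun, weights)                   # optimized Omega Ratio and weights
```

When the optimal Omega Ratio exceeds 1, the u and d variables are complementary at the optimum, so the LP objective equals the Omega Ratio of the recovered weights, matching the agreement between the two methods seen in the worksheet.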