LSSolve (Matrix Form) - Maple Help

Optimization[LSSolve](Matrix Form)

solve a least-squares problem in Matrix Form

 Calling Sequence

 LSSolve([c, G], lc, bd, opts)

 LSSolve([n, q], p, nc, nlc, lc, bd, opts)

 LSSolve([n, q], p, lc, bd, opts)

Parameters

 c - Vector; linear least-squares Vector

 G - Matrix; linear least-squares Matrix

 lc - (optional) list; linear constraints

 bd - (optional) list; bounds

 n - $\mathrm{posint}$; number of variables

 q - $\mathrm{posint}$; number of residuals

 p - procedure; least-squares objective function

 nc - (optional) $\mathrm{nonnegint}$ or list of 2 nonnegints; number of nonlinear constraints

 nlc - (optional) procedure; nonlinear constraints

 opts - (optional) equation(s) of the form option = value where option is one of assume, constraintjacobian, feasibilitytolerance, infinitebound, initialpoint, iterationlimit, method, objectivejacobian, optimalitytolerance, or output; specify options for the LSSolve command

Description

 • The LSSolve command solves a least-squares (LS) problem, which involves computing the minimum of an objective function having one of the forms shown below, possibly subject to constraints. Generally, a local minimum is returned unless the problem is convex.
 • LSSolve accepts linearly-constrained, linear LS problems of the following form.
 minimize $\frac{1}{2}{‖c-Gx‖}^{2}$
 subject to
 $A·x\le b$ (linear inequality constraints)
 $\mathrm{Aeq}·x=\mathrm{beq}$ (linear equality constraints)
 $\mathrm{bl}\le x\le \mathrm{bu}$ (bounds)
 where $x$ is the vector of problem variables; $c$, $b$, $\mathrm{beq}$, $\mathrm{bl}$ and $\mathrm{bu}$ are vectors; and $G$, $A$ and $\mathrm{Aeq}$ are matrices.  The relations involving matrices and vectors are element-wise. The dimension of $c$ must be greater than or equal to the dimension of $x$.
 LSSolve also accepts nonlinear LS problems of the following form.
 minimize $F\left(x\right)=\left(\frac{1}{2}\right)\left({\mathrm{f1}\left(x\right)}^{2}+{\mathrm{f2}\left(x\right)}^{2}+\mathrm{...}+{\mathrm{fq}\left(x\right)}^{2}\right)$
 subject to
 $v\left(x\right)\le 0$ (nonlinear inequality constraints)
 $w\left(x\right)=0$ (nonlinear equality constraints)
 $A·x\le b$ (linear inequality constraints)
 $\mathrm{Aeq}·x=\mathrm{beq}$ (linear equality constraints)
 $\mathrm{bl}\le x\le \mathrm{bu}$ (bounds)
 where each $\mathrm{fi}\left(x\right)$ is a real-valued function of $x$; $v\left(x\right)$ and $w\left(x\right)$ are vector-valued functions of $x$; and the other components are as described previously. The algorithms used by LSSolve assume the residuals $\mathrm{fi}\left(x\right)$ and the constraints are twice continuously differentiable. LSSolve will sometimes succeed even if this condition is not met. The number of residuals must be greater than or equal to the dimension of $x$.
 • This help page describes how to specify the problem in Matrix form. For details about the exact format of the objective function and the constraints, see the Optimization/MatrixForm help page. The algebraic and operator forms for specifying an LS problem are described in the Optimization[LSSolve] help page.  The Matrix form is more complex, but leads to more efficient computation.
 • Consider the first calling sequence for linearly constrained, linear LS problems. The first parameter $\left[c,G\right]$ is a list containing the LS Vector and Matrix respectively.
 The second parameter lc is an optional list of linear constraints. The most general form is $\left[A,b,\mathrm{Aeq},\mathrm{beq}\right]$, where A and Aeq are Matrices, and b and beq are Vectors. This parameter can take other forms if either inequality or equality constraints do not exist. For a full description of how to specify general linear constraints, refer to the Optimization/MatrixForm help page.
 The third parameter $\mathrm{bd}$ is an optional list $\left[\mathrm{bl},\mathrm{bu}\right]$ of lower and upper bounds.  In general, bl and bu must be $n$-dimensional Vectors.  The Optimization/MatrixForm help page describes alternate forms that can be used when either bound does not exist and provides more convenient ways of specifying the Vectors. Non-negativity of the variables is not assumed by default, but can be specified using the assume = nonnegative option.
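 As an illustration of the first calling sequence (a sketch not taken from the original examples; all data here is hypothetical), a small linearly constrained problem with bounds $0\le {x}_{i}\le 2$ could be set up as follows, assuming the Optimization package has been loaded with with(Optimization):

```maple
c := Vector([1, 1], datatype = float):
G := Matrix([[1, 0], [0, 1]], datatype = float):
A := Matrix([[1, 1]], datatype = float):    # x[1] + x[2] <= 1
b := Vector([1], datatype = float):
bl := Vector([0, 0], datatype = float):     # lower bounds
bu := Vector([2, 2], datatype = float):     # upper bounds
LSSolve([c, G], [A, b], [bl, bu]);
```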
 • Use the second calling sequence for nonlinear LS problems.  Again, refer to the Optimization/MatrixForm help page for more details about the format of each parameter.
 The first parameter [n, q] is a list containing the number of problem variables followed by the number of residuals.
 The second parameter p is a procedure, $\mathrm{proc}\left(x,y\right)\mathrm{...}\mathrm{end proc}$, that computes the values of the residuals.   The current point is passed as the Vector $x$, and the values of $\mathrm{f1}\left(x\right)$, $\mathrm{f2}\left(x\right)$, ..., $\mathrm{fq}\left(x\right)$ are returned using the Vector parameter $y$.
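 For example (a sketch with hypothetical residuals ${\mathrm{f1}}\left(x\right)={x}_{1}-1$ and ${\mathrm{f2}}\left(x\right)={x}_{1}{x}_{2}-2$, so $n=2$ and $q=2$), such a procedure might look like:

```maple
p := proc(x, y)
    y[1] := x[1] - 1.0;        # f1(x)
    y[2] := x[1]*x[2] - 2.0;   # f2(x)
end proc:
```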
 The third parameter nc is a list of two non-negative integers representing the number of nonlinear inequality constraints and the number of nonlinear equality constraints.  If there are no equality constraints, nc can be given as a single integer, the number of inequality constraints.
 The fourth parameter nlc is a procedure, $\mathrm{proc}\left(x,y\right)\mathrm{...}\mathrm{end proc}$, that computes the values of the nonlinear constraints.  The current point is passed as the Vector $x$, and the values of $v\left(x\right)$ followed by the values of $w\left(x\right)$ are returned using the Vector parameter $y$.
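 Continuing the sketch (hypothetical constraints, not from the original examples): with one inequality constraint ${x}_{1}^{2}+{x}_{2}^{2}-4\le 0$ and one equality constraint ${x}_{1}-{x}_{2}=0$, the nc parameter would be the list [1, 1] and the constraint procedure would fill the Vector y with the inequality values first:

```maple
nlc := proc(x, y)
    y[1] := x[1]^2 + x[2]^2 - 4.0;   # v1(x) <= 0
    y[2] := x[1] - x[2];             # w1(x) = 0
end proc:
```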
 The fifth parameter lc, representing linear constraints, and the sixth parameter bd, representing bounds, are as described for the first calling sequence.
 • If the residuals are nonlinear and there are no nonlinear constraints, the third calling sequence, in which parameters nc and nlc are omitted, can be used.
 • Maple returns the solution as a list containing the final minimum value and a point (the extremum).  If the output = solutionmodule option is provided, then a module is returned.  See the Optimization/Solution help page for more information.

Options

 The opts argument can contain one or more of the following options. These options are described in more detail in the Optimization/Options help page.
 • assume = nonnegative -- Assume that all variables are non-negative.
 • constraintjacobian = procedure -- Use the provided procedure to compute the Jacobian matrix of the constraints.  The form required for the procedure is described in the Nonlinear Constraints section of the Optimization/MatrixForm help page.
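 As a hedged sketch (the exact procedure signature is defined in the Optimization/MatrixForm help page; the constraints here are hypothetical), the Jacobian of the constraints $\mathrm{v1}\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}-4$ and $\mathrm{w1}\left(x\right)={x}_{1}-{x}_{2}$ could be supplied as a procedure that fills a Matrix parameter row by row:

```maple
nlcjac := proc(x, J)
    J[1, 1] := 2.0*x[1];  J[1, 2] := 2.0*x[2];   # gradient of v1
    J[2, 1] := 1.0;       J[2, 2] := -1.0;       # gradient of w1
end proc:
# passed to the solver as:  constraintjacobian = nlcjac
```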
 • feasibilitytolerance = realcons(positive) -- Set the maximum absolute allowable constraint violation.
 • infinitebound = realcons(positive) -- Set any value of a variable greater than the infinitebound value to be equivalent to infinity during the computation.
 • initialpoint = Vector --  Use the provided initial point, which is an n-dimensional Vector of numeric values.
 • iterationlimit = posint -- Set the maximum number of iterations performed by the algorithm.
 • method = modifiednewton or sqp -- Specify the method.  See the Optimization/Methods help page for more information.
 • objectivejacobian = procedure -- Use the provided procedure to compute the Jacobian matrix of the objective function residuals.  The form required for the procedure is described in the Nonlinear Least-Squares Objective section of the Optimization/MatrixForm help page.
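 For instance (again a sketch under the signature described in the Optimization/MatrixForm help page, using the hypothetical residuals $\mathrm{f1}\left(x\right)={x}_{1}-1$ and $\mathrm{f2}\left(x\right)={x}_{1}{x}_{2}-2$), the residual Jacobian could be provided as:

```maple
pjac := proc(x, J)
    J[1, 1] := 1.0;    J[1, 2] := 0.0;    # gradient of f1
    J[2, 1] := x[2];   J[2, 2] := x[1];   # gradient of f2
end proc:
# passed to the solver as:  objectivejacobian = pjac
```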
 • optimalitytolerance = realcons(positive) -- Set the tolerance that determines whether an optimal point has been found. This option is not available when the problem is linear.
 • output = solutionmodule -- Return a module as described in the Optimization/Solution help page.

Notes

 • The LSSolve command uses various methods implemented in a built-in library provided by the Numerical Algorithms Group (NAG). See the Optimization/Methods help page for more details. The solvers are iterative in nature and require an initial point.  The quality of the solution can depend greatly on the point chosen, particularly for nonlinear problems, so it is recommended that you provide a point through the initialpoint option. Otherwise, a point is automatically generated.
 • The computation is performed in floating-point. Therefore, all data provided must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values.  For best performance, Vectors and Matrices should be constructed with the $\mathrm{datatype}=\mathrm{float}$ option and all procedures should work with evalhf. Because the solver fails when a complex value is encountered, it is sometimes necessary to add additional constraints to ensure that the objective function and constraints always evaluate to real values. For more information about numeric computation in the Optimization package and suggestions on how to obtain the best performance using the Matrix form of input, see the Optimization/Computation help page.
 • For some methods of solving nonlinear problems, the computation is more efficient when derivatives of the objective function and constraints are available.  Use objectivejacobian and constraintjacobian to set these options. For information on the methods that use derivatives, see the Optimization/Methods help page.
 • Although the assume = nonnegative option is accepted, general assumptions are not supported by commands in the Optimization package.
 • An answer is returned when necessary first-order conditions for optimality have been met and the iterates have converged.  If the initial point already satisfies the conditions, then a warning is issued.  Generally, the result is a local extremum but it is possible for the solver to return a saddle point.  It is recommended that you try different initial points with each problem to verify that the solution is indeed an extremum.
 Occasionally the solver will return a solution even if the iterates have not converged but the point satisfies the first-order conditions.  Setting infolevel[Optimization] to 1 or higher will produce a message indicating this situation if it occurs.
 • If LSSolve returns an error saying that no solution could be found, it is recommended that you try a different initial point or use tolerance parameters that are less restrictive.

Examples

 > $\mathrm{with}\left(\mathrm{Optimization}\right):$

Use the first calling sequence for LSSolve to minimize the objective function $\left(\frac{1}{2}\right)\left({\left(2-7x\right)}^{2}+{\left(-3-5x\right)}^{2}\right)$ in Matrix form.

 > $c≔\mathrm{Vector}\left(\left[2,-3\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $G≔\mathrm{Matrix}\left(\left[\left[7\right],\left[5\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{LSSolve}\left(\left[c,G\right]\right)$
 $\left[{6.49324324324324209}{,}\left[\begin{array}{c}{-0.0135135135135135}\end{array}\right]\right]$ (1)

Minimize $\left(\frac{1}{2}\right)\left({\left(1-x\right)}^{2}+{\left(1-y\right)}^{2}+{\left(1-z\right)}^{2}\right)$ subject to the constraints $6x+3y\le 1$ and $x\le 0$.

 > $c≔\mathrm{Vector}\left(\left[1,1,1\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $G≔\mathrm{Matrix}\left(\left[\left[1,0,0\right],\left[0,1,0\right],\left[0,0,1\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $A≔\mathrm{Matrix}\left(\left[\left[6,3,0\right],\left[1,0,0\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $b≔\mathrm{Vector}\left(\left[1,0\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{lc}≔\left[A,b\right]:$
 > $\mathrm{LSSolve}\left(\left[c,G\right],\mathrm{lc}\right)$
 $\left[{0.711111111111111138}{,}\left[\begin{array}{c}{-0.0666666666666667}\\ {0.466666666666667}\\ {1.}\end{array}\right]\right]$ (2)
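 A nonlinear problem can be solved with the third calling sequence. The following sketch (not part of the original examples) uses the residuals of the Rosenbrock test function, $\mathrm{f1}\left(x\right)=1-{x}_{1}$ and $\mathrm{f2}\left(x\right)=10\left({x}_{2}-{x}_{1}^{2}\right)$, whose minimum is known to be 0 at ${x}_{1}={x}_{2}=1$, starting from the customary initial point $\left(-1.2,1\right)$:

```maple
p := proc(x, y)
    y[1] := 1.0 - x[1];            # f1(x)
    y[2] := 10.0*(x[2] - x[1]^2);  # f2(x)
end proc:
LSSolve([2, 2], p, initialpoint = Vector([-1.2, 1.0], datatype = float));
```

 The solver should converge to a point near $\left[1,1\right]$ with an objective value near 0.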