
Regression Commands

  

The Statistics package provides various commands for fitting linear and nonlinear models to data points and performing regression analysis. The fitting algorithms are based on least-squares methods, which minimize the sum of the squared residuals.
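In general terms (the notation here is generic and not tied to any particular command), for data points (x[i], y[i]), i = 1..n, and a model function f with parameter vector p, the fitted parameters are chosen to solve

\[ \min_{p} \; \sum_{i=1}^{n} \bigl( y_i - f(x_i;\, p) \bigr)^{2}. \]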

 

Available Commands

Linear Fitting

Nonlinear Fitting

Other Commands

Using the Regression Commands

Examples

References

Available Commands

ExponentialFit - fit an exponential function to data

Fit - fit a model function to data

LeastTrimmedSquares - robust linear regression

LinearFit - fit a linear model function to data

LogarithmicFit - fit a logarithmic function to data

Lowess - produce lowess smoothed functions

NonlinearFit - fit a nonlinear model function to data

OneWayANOVA - generate a one-way ANOVA table

PolynomialFit - fit a polynomial to data

PowerFit - fit a power function to data

PredictiveLeastSquares - fit a predictive linear model function to data

RepeatedMedianEstimator - robust linear regression

Linear Fitting

• A number of commands are available for fitting a model function that is linear in the model parameters to given data.  For example, a model function such as a + b*t^2 is linear in the parameters a and b, though it is nonlinear in the independent variable t.

• The LinearFit command is available for multiple general linear regression.  For certain classes of model functions involving only one independent variable, the PolynomialFit, LogarithmicFit, PowerFit, and ExponentialFit commands are available. The PowerFit and ExponentialFit commands use a transformed model function that is linear in the parameters.
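As a brief illustration of a model that is linear in its parameters but nonlinear in the independent variable, the following sketch fits a + b*t^2 to a small set of data points; both the data values and the model expression are illustrative assumptions chosen only to show the calling sequence.

with(Statistics):
# Illustrative data only.
T := Vector([1.0, 2.0, 3.0, 4.0, 5.0], datatype = float):
V := Vector([1.9, 5.1, 10.2, 16.8, 26.0], datatype = float):
# Linear in a and b, so LinearFit applies even though the model is quadratic in t.
LinearFit(a + b*t^2, T, V, t);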

Nonlinear Fitting

• The NonlinearFit command is available for nonlinear fitting.  An example is a model function such as exp(a*x + b*y), where a and b are the parameters and x and y are the independent variables.

• This command relies on local nonlinear optimization solvers available in the Optimization package.  The LSSolve and NLPSolve commands in that package can also be used directly for least-squares and general nonlinear minimization.
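The sketch below shows both routes on a simpler one-variable problem: a NonlinearFit call, and the same least-squares problem posed directly to Optimization:-LSSolve as a list of residual expressions. The data values, the model a*exp(b*t), and the initial values are illustrative assumptions.

with(Statistics):
# Illustrative data only.
T := Vector([0.0, 1.0, 2.0, 3.0, 4.0], datatype = float):
V := Vector([1.0, 1.4, 2.1, 2.9, 4.3], datatype = float):
# Nonlinear in the parameter b, so a nonlinear solver is required.
NonlinearFit(a*exp(b*t), T, V, t, initialvalues = [a = 1, b = 0.3]);
# The same problem, given to the Optimization package as residual expressions.
Optimization:-LSSolve([seq(V[i] - a*exp(b*T[i]), i = 1 .. numelems(T))]);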

Other Commands

• The general Fit command allows you to provide either a linear or nonlinear model function.  It then determines the appropriate regression solver to use.

• The OneWayANOVA command generates the standard ANOVA table for one-way classification, given two or more groups of observations.
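For instance, a sketch of how the Fit command dispatches on the form of the model; the data values and both model expressions are illustrative assumptions.

with(Statistics):
X := Vector([1.0, 2.0, 3.0, 4.0], datatype = float):
Y := Vector([2.1, 4.4, 9.3, 16.5], datatype = float):
# Linear in a and b: Fit uses the linear least-squares solver.
Fit(a + b*x^2, X, Y, x);
# Nonlinear in b: Fit uses the nonlinear solver instead.
Fit(a*x^b, X, Y, x, initialvalues = [a = 1, b = 1]);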

Using the Regression Commands

• Various options can be provided to the regression commands. For example, the weights option allows you to specify weights for the data points, and the output option allows you to control the format of the results (see the sketch following this list).  The options available for each command are described briefly in the command's help page and in greater detail in the Statistics/Regression/Options help page.

• The format of the solutions returned by the regression commands is described in the Statistics/Regression/Solution help page.

• Most of the regression commands use methods implemented in a built-in library provided by the Numerical Algorithms Group (NAG).  The underlying computation is done in floating-point.  Either hardware or software (arbitrary precision) floating-point computation can be specified.

• The model function and data sets may be provided in different ways.  Full details are available in the Statistics/Regression/InputForms help page. The regression routines work primarily with Vectors and Matrices.  In most cases, lists (both flat and nested) and Arrays are also accepted and automatically converted to Vectors or Matrices.  Consequently, all output, including error messages, uses these data types.
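A short sketch of these points with illustrative data; the specific values, weights, and requested outputs are assumptions chosen only to show the calling sequence, and the lists given for the data are converted to Vectors internally.

with(Statistics):
# Data supplied as lists; they are converted to Vectors automatically.
X := [1, 2, 3, 4, 5]:
Y := [2.3, 3.9, 6.2, 8.1, 9.9]:
# Give the last two observations twice the weight of the others.
W := Vector([1, 1, 1, 2, 2], datatype = float):
LinearFit([1, x], X, Y, x, weights = W,
    output = [leastsquaresfunction, residualsumofsquares, standarderrors]);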

Examples

Define Vectors X and Y, containing values of an independent variable x and a dependent variable y.

Find the values of a and b that minimize the least-squares error when a model function of the form a + b*x is used.

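The worksheet input for these two steps has the following general shape; the data values below are illustrative stand-ins, not the original data.

with(Statistics):
# Illustrative data standing in for the original X and Y.
X := Vector([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], datatype = float):
Y := Vector([2.1, 3.2, 3.9, 5.1, 5.8, 7.2], datatype = float):
LinearFit(a + b*x, X, Y, x);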

It is also possible to return a summary of the regression model using the summarize option:
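A sketch of such a call, continuing with the X and Y above; summarize = true is one accepted form of the option.

LinearFit(a + b*x, X, Y, x, summarize = true);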

The summary reports the fitted model; a table of coefficients for a and b with columns Estimate, Standard Error, t-value, and P(>|t|); the R-squared and adjusted R-squared values; and a residuals table giving the Residual Sum of Squares, Residual Mean Square, Residual Standard Error, the Degrees of Freedom, and a Five Point Summary (Minimum, First Quartile, Median, Third Quartile, Maximum) of the residuals.

Fit a polynomial of degree 3 through this data.


Use the output option to see the residual sum of squares and the standard errors.

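Sketches of these two calls, again using the illustrative X and Y defined above; the output names requested are the standard ones for the quantities mentioned.

PolynomialFit(3, X, Y, x);
PolynomialFit(3, X, Y, x, output = [residualsumofsquares, standarderrors]);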

Fit a model function that is nonlinear in the parameters, such as a*exp(b*x).

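A sketch of the corresponding call; the model expression and initial values are illustrative assumptions.

NonlinearFit(a*exp(b*x), X, Y, x, initialvalues = [a = 1, b = 0.3]);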

Consider now an experiment where three quantities x, y, and z influence a fourth, measured quantity according to an approximate relationship with unknown parameters a, b, and c. Six data points are given by the following matrix; the first three columns contain the values of x, y, and z, and the last column contains the corresponding measured values.


We take an initial guess that the first term will be approximately quadratic in x, choose a rough starting value for b, and for c we don't even know whether it's going to be positive or negative, so we guess 0. We compute both the model function and the residuals. Also, we select more verbose operation by setting infolevel[Statistics] to a positive value.
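The call here has the following general shape; both the data Matrix and the model expression (with an unknown exponent a on x) are illustrative assumptions standing in for the original six data points and the original relationship.

infolevel[Statistics] := 1:
# Illustrative stand-in for the 6 x 4 data matrix (columns x, y, z, measured value).
Data := Matrix([[1.0, 1.2, 0.5, 1.6], [1.5, 0.9, 1.1, 1.9],
                [2.0, 1.5, 0.7, 2.4], [2.5, 1.1, 1.4, 2.6],
                [3.0, 1.8, 0.9, 3.1], [3.5, 1.3, 1.6, 3.3]], datatype = float):
NonlinearFit(x^a + b*x^2/y + c*y*z, Data[.., 1 .. 3], Data[.., 4], [x, y, z],
    initialvalues = [a = 2, b = 1, c = 0],
    output = [leastsquaresfunction, residuals]);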

In NonlinearFit (algebraic form)


We note that Maple selected the nonlinear fitting method. Furthermore, the fitted exponent on x turns out to be considerably smaller than our initial guess of 2, and the other guesses were not very good either. However, this problem is well enough conditioned that Maple finds a good fit anyway.

Now suppose that the relationship used to model the data is altered to a*x + b*x^2/y + c*y*z, which is linear in the parameters a, b, and c.

We adapt the calling sequence very slightly:
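The adapted call has the following general shape; here it still uses the illustrative Data from the previous sketch, whereas the summary shown below was computed from the original data.

Fit(a*x + b*x^2/y + c*y*z, Data[.., 1 .. 3], Data[.., 4], [x, y, z],
    initialvalues = [a = 2, b = 1, c = 0],
    output = [leastsquaresfunction, residuals]);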

In Fit

In LinearFit (container form)

final value of residual sum of squares: .0537598869493245

Summary:
----------------
Model: .82307292*x-.16791011*x^2/y-.75802268e-1*y*z
----------------
Coefficients:
    Estimate  Std. Error  t-value  P(>|t|)
a    0.8231    0.1898      4.3374   0.0226
b   -0.1679    0.0940     -1.7862   0.1720
c   -0.0758    0.0182     -4.1541   0.0254
----------------
R-squared: 0.9600, Adjusted R-squared: 0.9201


This time, Maple selected the linear fitting method, because the expression is linear in the parameters. In addition, since infolevel is greater than 0 and the fitting method is linear, a summary of the regression is displayed. The initial values for the parameters are not used.

Finally, consider a situation where an ordinary differential equation leads to results that need to be fitted. The system is an initial value problem for a quantity x(t) that depends on two parameters a and b, which we want to find, and on a quantity z that we can vary between experiments; what we can measure is the value of x(t) at t = 1. We perform 10 experiments at different values of z, and the results are as follows.


We now need to set up a procedure that NonlinearFit can call to obtain the value of x(1) for a given input value z and a given pair of parameters a and b. We do this using dsolve/numeric.

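The original differential equation is not reproduced above; the sketch below uses a hypothetical ODE only to illustrate the parameters option of dsolve/numeric, which is what the later calls to ODE_Solution rely on.

# Hypothetical ODE standing in for the original system; a, b, and z are left
# as parameters to be set later.
ode := diff(x(t), t) = -b*x(t) + a*x(t)^2 + z;
ODE_Solution := dsolve({ode, x(0) = 1}, numeric, parameters = [a, b, z]);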

We now have a procedure ODE_Solution that can compute the correct value, but we need to write a wrapper that has the form that NonlinearFit expects. We first need to call ODE_Solution once to set the parameters, then another time to obtain the value of x(t) at t = 1, and then return this value (for more information about how this works, see dsolve/numeric). By hand, we can do this as follows:
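With hypothetical parameter values (the specific numbers here are not the original ones), the two steps look like this; for some other parameter settings the numeric integration fails with the error shown below.

ODE_Solution(parameters = [a = 0.5, b = 1.0, z = 0.2]);
eval(x(t), ODE_Solution(1));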


Error, (in ODE_Solution) cannot evaluate the solution past the initial point, problem may be complex, initially singular or improperly set up

Note that for some settings of the parameters, we cannot obtain a solution. We need to take care of this in the procedure we create (which we call f), by returning a value that is very far from all output points, leading to a very bad fit for these erroneous parameter values.

f := proc(zValue, aValue, bValue)
  global ODE_Solution, a, b, z, x, t;
  # Set the parameters for the numeric ODE solution.
  ODE_Solution('parameters' = [a = aValue, b = bValue, z = zValue]);
  try
    # Return the value of x(t) at t = 1.
    return eval(x(t), ODE_Solution(1));
  catch:
    # If the integration fails for these parameter values, return a value far
    # from all data points so that such parameters give a very bad fit.
    return 100;
  end try;
end proc;


We need to provide an initial estimate for the parameter values, because the fitting procedure only searches locally. We go with values that provided a solution above.


References

  

Draper, Norman R., and Smith, Harry. Applied Regression Analysis. 3rd ed. New York: Wiley, 1998.

Applications

Parameter Estimation for an N-Channel Enhancement MOSFET

See Also

CurveFitting

Statistics

Statistics/Computation

Statistics/MaximumLikelihoodEstimate

Statistics/Regression/Options

Statistics/Regression/Solution

TimeSeriesAnalysis

 

