Dakota Reference Manual, Version 6.15
moving_least_squares


Moving Least Squares surrogate models

Specification

Alias: none

Argument(s): none

Child Keywords:

Required/Optional   Dakota Keyword    Dakota Keyword Description
Optional            basis_order       Polynomial order for the MLS bases
Optional            weight_function   Selects the weight function for the MLS model
Optional            export_model      Exports surrogate model in user-specified format(s)
Optional            import_model      Imports surrogate model from archive file
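
For illustration, a minimal sketch of how this keyword might appear in a Dakota input file; the enclosing model and surrogate global block and the dace_method_pointer value are assumed context for this page, not part of the specification above:

    # Hypothetical input fragment: global surrogate built by moving least squares
    model
      surrogate global
        dace_method_pointer = 'DACE'
        moving_least_squares
          basis_order = 2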

Description

Moving least squares is a further generalization of weighted least squares, where the weighting is "moved" or recalculated for every new point where a prediction is desired [66].

The implementation of moving least squares is still under development. It tends to work well in trust-region optimization methods, where the surrogate model is constructed over a small number of points in a constrained region. The present implementation may not work as well globally.

Known Issue: When using discrete variables, significant differences in surrogate behavior have sometimes been observed across computing platforms. The cause has not yet been fully diagnosed and is under investigation. In addition, guidance on the appropriate construction and use of surrogates with discrete variables is still being developed. In the meantime, users should be aware of the risk of inaccurate results when using surrogates with discrete variables.

Theory

Moving least squares can be considered a more specialized version of linear regression. In linear regression, one usually attempts to minimize the sum of the squared residuals, where a residual is the difference between the surrogate model and the true function value at a fixed set of points.

In weighted least squares, the residual terms are weighted, so the optimal coefficients of the polynomial regression function, denoted by $\hat{f}({\bf x})$, are obtained by minimizing the weighted sum of squares over the $N$ data points:

\[ \sum_{n=1}^{N} w_{n} {\parallel \hat{f}({\bf x}_{n}) - f({\bf x}_{n}) \parallel}^{2} \]
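
Since the weights are recalculated for every new prediction point, this weighted fit is re-solved each time a prediction is made. The following minimal 1-D sketch in Python illustrates that idea; it is not Dakota's implementation, and the Gaussian weight, the bandwidth h, and the test data are illustrative assumptions standing in for the weight_function and training-data choices:

    # Minimal 1-D moving least squares sketch (illustrative, not Dakota's code).
    import numpy as np

    def mls_predict(x, xs, fs, h=0.3, order=2):
        # Gaussian weights w_n(x), recomputed for every prediction point x:
        # this is the "moving" part of moving least squares.
        w = np.exp(-((x - xs) / h) ** 2)
        # Weighted polynomial least squares: minimize
        # sum_n w_n || fhat(x_n) - f(x_n) ||^2
        # by solving sqrt(w) V c = sqrt(w) f in the least-squares sense.
        V = np.vander(xs, order + 1)
        coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * V,
                                   np.sqrt(w) * fs, rcond=None)
        return np.polyval(coef, x)

    # Hypothetical training data: 20 samples of a smooth test function.
    xs = np.linspace(0.0, 1.0, 20)
    fs = np.sin(2.0 * np.pi * xs)
    print(mls_predict(0.37, xs, fs))  # local quadratic fit evaluated at x = 0.37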