Dakota Reference Manual  Version 6.15
linear_constraints

Description

Many methods can make use of linear equality or inequality constraints.

As the name implies, linear constraints are constraints that are linear functions of the variables. Constraints that are nonlinear functions of variables are specified using the nonlinear_constraints family of keywords. From a Dakota usage point of view, the most important difference between linear and nonlinear constraints is that the former are specified entirely within the Dakota input file and calculated by Dakota itself, while the latter must be calculated by the user's simulation and returned as responses to Dakota.
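As an illustration, the fragment below sketches how linear constraints might be specified for a hypothetical two-variable problem with one linear inequality (x1 + x2 <= 5) and one linear equality (2*x1 + x2 = 3). The driver name my_simulation and all numeric values are placeholders, and the linear_* keywords are shown here in the method block; consult the individual keyword pages for the placement accepted by your Dakota version.

    # Hedged sketch of a Dakota input with linear constraints; values are illustrative.
    method
      npsol_sqp
        linear_inequality_constraint_matrix = 1.0  1.0    # row of A for x1 + x2
        linear_inequality_upper_bounds      = 5.0          # x1 + x2 <= 5
        linear_equality_constraint_matrix   = 2.0  1.0     # row of A for 2*x1 + x2
        linear_equality_targets             = 3.0          # 2*x1 + x2 = 3

    variables
      continuous_design = 2
        initial_point   0.5   0.5
        lower_bounds   -10.0 -10.0
        upper_bounds    10.0  10.0
        descriptors     'x1'  'x2'

    interface
      fork
        analysis_drivers = 'my_simulation'   # user-supplied driver (hypothetical)

    responses
      objective_functions = 1
      numerical_gradients
      no_hessians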

The Optimization chapter of the User's Manual[4] states which methods support linear constraints. Of those methods, a subset strictly obey linear constraints; that is, the optimizer generates no candidate points that violate the constraints. These include asynch_pattern_search, the optpp_* family of optimizers (with the exception of optpp_fd_newton), and npsol_sqp. The other methods seek feasible solutions (i.e., solutions that satisfy the linear constraints) but may violate the constraints as they run. Linear constraints may also be violated, even when using an optimizer that itself strictly respects them, if numerical_gradients are used: in this case, Dakota may request evaluations that lie outside the feasible region when computing a gradient near the boundary.
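To make the gradient caveat concrete, the responses fragment below (an assumed variation on the sketch above) requests Dakota-computed forward-difference gradients; the step size is a placeholder. If the current iterate sits exactly on the boundary x1 + x2 = 5, the forward perturbation of x1 yields an evaluation point with x1 + x2 slightly greater than 5, i.e., just outside the feasible region, even though the optimizer itself never proposes an infeasible iterate.

    # Hedged sketch: forward differences perturb one variable at a time, so a
    # gradient requested at a point on the boundary x1 + x2 = 5 generates an
    # evaluation just outside the feasible region.
    responses
      objective_functions = 1
      numerical_gradients
        method_source dakota
        interval_type forward
        fd_step_size = 1.e-5      # illustrative step size
      no_hessians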

One final limitation bears mentioning: linear constraints are compatible only with continuous variables; no discrete variable types are permitted when linear constraints are used.

Related Topics

Related Keywords