Dakota Reference Manual
Version 6.15
Explore and Predict with Confidence
Perform each function evaluation in a separate working directory
Alias: none
Argument(s): none
Default: no work directory
Child Keywords:
| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
|---|---|---|---|
| Optional | | named | The base name of the work directory created by Dakota |
| Optional | | directory_tag | Tag each work directory with the function evaluation number |
| Optional | | directory_save | Preserve the work directory after function evaluation completion |
| Optional | | link_files | Paths to be linked into each working directory |
| Optional | | copy_files | Files and directories to be copied into each working directory |
| Optional | | replace | Overwrite existing files within a work directory |
When performing concurrent evaluations, it is typically necessary to cloister simulation input and output files in separate directories to avoid conflicts. When the work_directory feature is enabled, Dakota creates a directory for each evaluation, with optional tagging (directory_tag) and saving (directory_save), as with files, and executes the analysis driver from that working directory.
The directory may be named with a string, or left anonymous to use an automatically generated directory in the system's temporary file space, e.g., /tmp/dakota_work_c93vb71z/. The optional link_files and copy_files keywords specify files or directories which should appear in each working directory.
When using work_directory, the analysis_drivers may be given by an absolute path, located in (or relative to) the startup directory alongside the Dakota input file, included in the list of template files linked or copied, or found on the $PATH (%PATH% on Windows).
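A common pattern is to keep template files next to the Dakota input file and link or copy them into each work directory, which also makes a driver script in the templates resolvable; here templatedir/, sim.template, and driver.py are assumed example names:

```
interface
  fork
    analysis_drivers = 'driver.py'   # resolved from the startup directory, the
                                     # linked/copied templates, or the $PATH
    work_directory
      named = 'workdir'
      directory_tag
      link_files = 'templatedir/*'   # linked into each work directory
      copy_files = 'sim.template'    # copied (writable) into each work directory
```

Linking is cheaper and keeps a single shared copy of large read-only inputs, while copying is appropriate for files the simulation will modify in place.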