Calculate the mutual information between prior and posterior
Alias: none
Argument(s): none
Child Keywords:
Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Optional | | ksg2 | Use second Kraskov algorithm to compute mutual information
The mutual information quantifies how much information two random variables contain about each other. It is a measure of the mutual dependence of two random variables. The mutual information is a non-negative measure, with zero representing complete independence of the two random variables. For continuous random variables \(X\) and \(Y\) with joint density \(p(x,y)\) and marginal densities \(p(x)\) and \(p(y)\), the mutual information is

\[
I(X; Y) = \int \int p(x, y) \, \log \frac{p(x, y)}{p(x)\, p(y)} \, dx \, dy .
\]
The mutual information can also be interpreted as the reduction in uncertainty of one random variable due to knowledge of another. By specifying mutual_info, the mutual information between the posterior parameters and the prior parameters is calculated.
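As a sanity check on the definition, the integral above has a closed form for a bivariate normal pair; this is a standard textbook result and is not specific to Dakota. With correlation coefficient \(\rho\) and natural logarithms,

\[
I(X; Y) = -\tfrac{1}{2} \log\!\left(1 - \rho^2\right),
\]

which is zero for \(\rho = 0\) (independence) and increases without bound as \(|\rho| \to 1\).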
The mutual information is calculated using a \(k\)-nearest neighbor approximation algorithm. As of Dakota 6.6, there are two such algorithms available, both of which are derived in [58]. By default, Dakota uses the first such algorithm; the second may be selected by specifying the keyword ksg2. Further details can be found in the Dakota Theory Manual [15].
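For readers who want to see the \(k\)-nearest-neighbor idea concretely, the sketch below implements the first Kraskov-Stoegbauer-Grassberger (KSG) estimator in Python. It is an illustrative, brute-force version only, not Dakota's implementation; the function name and defaults are hypothetical.

```python
import numpy as np
from scipy.special import digamma

def ksg1_mutual_information(x, y, k=6):
    """Estimate I(X;Y) with the first Kraskov (KSG1) k-nearest-neighbor estimator.

    x, y : paired samples, arrays of shape (N,) or (N, d).
    k    : number of nearest neighbors; small values (4-8) are typical.

    Brute-force O(N^2) version for clarity; production codes use k-d trees.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    n = len(x)

    # Pairwise Chebyshev (max-norm) distances in each marginal space.
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)
    dz = np.maximum(dx, dy)          # joint-space max-norm distance
    np.fill_diagonal(dz, np.inf)     # a point is not its own neighbor
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)

    # eps[i] = distance from point i to its k-th nearest neighbor in the joint space.
    eps = np.sort(dz, axis=1)[:, k - 1]

    # n_x[i], n_y[i] = number of points strictly closer than eps[i] in each marginal.
    n_x = np.sum(dx < eps[:, None], axis=1)
    n_y = np.sum(dy < eps[:, None], axis=1)

    # KSG1 estimate (in nats).
    return digamma(k) + digamma(n) - np.mean(digamma(n_x + 1) + digamma(n_y + 1))


# Quick check against the closed-form bivariate-normal result above:
rng = np.random.default_rng(0)
rho = 0.8
samples = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=2000)
# Should land near the exact value -0.5*log(1 - 0.8**2) ~= 0.511 nats.
print(ksg1_mutual_information(samples[:, 0], samples[:, 1]))
```

The Chebyshev (max-norm) metric is used because the KSG derivation counts marginal neighbors inside the hyper-rectangle set by the joint-space k-th-neighbor distance; with few samples the estimate can dip slightly below zero for nearly independent variables, which is the behavior noted under Additional Discussion below.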
Expected Output
If mutual_info is specified, the calculated value will be reported to the screen at the end of the calibration.
Additional Discussion
Due to the necessary approximation of the multidimensional integral above, a negative mutual information may be reported for applications whose true value is close to or equal to zero. As of Dakota 6.6, mutual information calculations are primarily used in the implementation of the experimental_design algorithm.
Examples

    method
      bayes_calibration queso
        dram
        seed = 34785
        chain_samples = 1000
        posterior_stats mutual_info
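In this example, bayes_calibration queso selects the QUESO Bayesian calibration library, dram chooses the delayed rejection adaptive Metropolis sampler, seed fixes the random number generator for reproducibility, chain_samples sets the length of the MCMC chain, and posterior_stats mutual_info requests the mutual information estimate described above, which is printed once the calibration finishes.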