
Latin Uncertainty Quantification Analysis

Uncertainty Quantification (UQ) is a critical step in assessing the reliability of a model or system. Unlike sensitivity analysis, its purpose is not to rank which parameter is the most sensitive, but to answer a more important question: given that my input parameters are uncertain within certain ranges, what is the possible range of my output results (performance metrics), and what is their probability distribution?

tricys uses the efficient Latin Hypercube Sampling (LHS) technique to perform UQ analysis. LHS is a stratified sampling method that can uniformly explore the entire multi-dimensional parameter space with a relatively small number of sample points, thus efficiently assessing how input uncertainty propagates to the model output.
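
The stratification property can be illustrated with a few lines of generic Python using SciPy's qmc module (a conceptual sketch, not tricys code): with N samples in d dimensions, each parameter's range is split into N equal-probability bins, and every bin receives exactly one sample.

import numpy as np
from scipy.stats import qmc

# Conceptual LHS illustration (not tricys code): N samples, d parameters.
N, d = 8, 2
sampler = qmc.LatinHypercube(d=d, seed=0)
unit_samples = sampler.random(n=N)            # points in the unit hypercube [0, 1)^d

# Each dimension is stratified: binning into N intervals hits every bin exactly once.
bins = np.floor(unit_samples * N).astype(int)
for j in range(d):
    print(f"dimension {j}: occupied bins = {sorted(bins[:, j])}")   # 0 .. N-1, each once

# Scale to physical ranges, e.g. blanket.TBR in [1.05, 1.25] and plasma.fb in [0.03, 0.07].
physical_samples = qmc.scale(unit_samples, l_bounds=[1.05, 0.03], u_bounds=[1.25, 0.07])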

For common configurations such as metric definitions, glossaries, and unit maps, please refer to the Automated Analysis Main Page.

1. Configuration File Example

The configuration for an LHS analysis is very similar to that of a Sobol global sensitivity analysis; the main difference is that analyzer.method is set to "latin".

{
    // ... (paths, simulation)
    "sensitivity_analysis": {
        "enabled": true,
        "analysis_cases": [
            {
                "name": "SALIB_LATIN_Analysis",
                "independent_variable": ["pulseSource.width", "plasma.nf", "plasma.fb", "tep_fep.to_SDS_Fraction[1]", "blanket.TBR"],
                "independent_variable_sampling": {
                      "pulseSource.width": { "bounds": [50, 90], "distribution": "unif" },
                      "plasma.nf": { "bounds": [0.1, 0.9], "distribution": "unif" },
                      "plasma.fb": { "bounds": [0.03, 0.07], "distribution": "unif" },
                      "tep_fep.to_SDS_Fraction[1]": { "bounds": [0.1, 0.8], "distribution": "unif" },
                      "blanket.TBR": { "bounds": [1.05, 1.25], "distribution": "unif" }
                },
                "dependent_variables": [
                    "Startup_Inventory",
                    "Self_Sufficiency_Time",
                    "Doubling_Time"
                ],
                "analyzer": {
                    "method": "latin",
                    "sample_N": 256
                }
            }
        ],
        // ... (Common configurations)
    }
}

2. Key Configuration Items Explained

  • independent_variable (list): A list of all input parameters considered as sources of uncertainty.
  • independent_variable_sampling (object): A dictionary that defines the sampling range and probability distribution for each uncertain parameter (the sketch after this list shows how such a specification maps onto a SALib problem definition).
  • analyzer (object): Defines the analysis method to be used and its parameters.
    • method: Must be set to "latin".
    • sample_N: The number of samples generated by Latin Hypercube Sampling; this is also the total number of simulations that will be run.
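
A sampling specification like the one above corresponds naturally to a SALib "problem" definition. The helper below is a hypothetical illustration of that mapping; the function name and conversion details are assumptions, not the actual tricys implementation:

from SALib.sample import latin

# Hypothetical helper (illustration only, not the tricys API): convert a
# tricys-style sampling specification into a SALib problem definition.
def to_salib_problem(sampling_spec: dict) -> dict:
    names = list(sampling_spec.keys())
    return {
        "num_vars": len(names),
        "names": names,
        "bounds": [sampling_spec[name]["bounds"] for name in names],
        "dists": [sampling_spec[name]["distribution"] for name in names],
    }

sampling_spec = {
    "blanket.TBR": {"bounds": [1.05, 1.25], "distribution": "unif"},
    "plasma.fb": {"bounds": [0.03, 0.07], "distribution": "unif"},
}
problem = to_salib_problem(sampling_spec)
samples = latin.sample(problem, 256)   # one row of parameter values per simulation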

3. Workflow Explained

When tricys receives an analysis task that includes an analyzer field (like the LHS analysis in this example), it initiates a special workflow based on the SALib library.

  1. Identify Analysis Type: In the main flow of tricys.simulation.simulation_analysis.py, the program first detects that this is a SALib analysis task.
  2. Define Problem and Sample: The program delegates the task to the tricys.analysis.salib.py module, which defines a SALib "problem space" according to the configuration and calls the LHS sampling function to generate N parameter sample points (see the sketch after this list).
  3. Execute Batch Simulations: The N generated parameter samples are written to a temporary CSV file, and tricys then executes a simulation for each sample point.
  4. Collect and Analyze Results: After all simulations are completed, the program calculates the N output results for each performance metric and performs a statistical analysis on them (calculating mean, standard deviation, percentiles, etc.).
  5. Generate Report and Charts: Finally, the program generates the final Markdown report based on the statistical analysis results, which includes detailed statistical data tables and output distribution charts (histograms and cumulative distribution function plots).
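
Conceptually, steps 2 through 4 reduce to the following sketch. It is illustrative only: run_one_simulation is a placeholder standing in for the Modelica simulation backend, and the real logic lives in tricys.analysis.salib.py.

import numpy as np
from SALib.sample import latin

def run_one_simulation(parameter_values):
    # Placeholder: in tricys this would run the Modelica model with the given
    # parameters and extract a performance metric from the result.
    return float(np.sum(parameter_values))   # dummy stand-in for the real model response

# Step 2: define the problem space and draw N Latin Hypercube samples.
problem = {
    "num_vars": 2,
    "names": ["blanket.TBR", "plasma.fb"],
    "bounds": [[1.05, 1.25], [0.03, 0.07]],
}
samples = latin.sample(problem, 256)                  # shape (256, num_vars)

# Step 3: one simulation per sample point.
outputs = np.array([run_one_simulation(row) for row in samples])

# Step 4: statistical summary of the N output values.
summary = {
    "mean": outputs.mean(),
    "std": outputs.std(ddof=1),
    "min": outputs.min(),
    "max": outputs.max(),
    "percentiles": {p: np.percentile(outputs, p) for p in (5, 25, 50, 75, 95)},
}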

4. Analysis Report Output

The core content of the report is an independent statistical analysis for each dependent variable (performance metric). For example, for the Startup_Inventory metric, the report will include:

  1. Statistical Summary: Provides a set of core statistics, including mean, standard deviation, minimum, and maximum values.
  2. Key Distribution Points / CDF: Provides a series of percentiles (e.g., 5%, 25%, 50% (median), 75%, 95%) to accurately describe the cumulative distribution of the output metric.
  3. Output Distribution (Histogram Data): A table showing the frequency distribution of the output results in different numerical intervals.
  4. Output Distribution Plot: An embedded .png chart containing two subplots (a rough plotting sketch follows this list):
    • Histogram: Visually displays the shape of the probability density distribution of the output metric.
    • Cumulative Distribution Function (CDF): Shows the probability that the output metric value is less than or equal to a certain specific value.
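
A chart of this kind can be reproduced with a short matplotlib sketch; the axis labels and the placeholder sample data below are illustrative only, not actual tricys output:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder values standing in for the N simulated results of one metric.
values = np.random.default_rng(0).normal(loc=2.5, scale=0.4, size=256)

fig, (ax_hist, ax_cdf) = plt.subplots(1, 2, figsize=(10, 4))

# Left subplot: histogram of the output distribution.
ax_hist.hist(values, bins=20, edgecolor="black")
ax_hist.set_xlabel("Startup_Inventory [kg]")
ax_hist.set_ylabel("Frequency")
ax_hist.set_title("Histogram")

# Right subplot: empirical cumulative distribution function.
sorted_values = np.sort(values)
ax_cdf.plot(sorted_values, np.arange(1, len(sorted_values) + 1) / len(sorted_values))
ax_cdf.set_xlabel("Startup_Inventory [kg]")
ax_cdf.set_ylabel("P(X <= x)")
ax_cdf.set_title("Cumulative Distribution Function")

fig.tight_layout()
fig.savefig("output_distribution.png")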

How to Interpret UQ Results

The focus of uncertainty quantification analysis is to understand the distribution of the output, not the ranking of parameter sensitivity.

  • Look at Central Tendency: The Mean and Median tell you where the performance metric is most likely to fall.
  • Look at Dispersion: The Standard Deviation and the range between the 5% and 95% percentiles reveal how much uncertainty there is in the output. A wider range indicates that uncertainty in the input parameters has a larger impact on system performance, suggesting poorer robustness.
  • Look at Distribution Shape: The shape of the histogram is important. A symmetric, bell-shaped curve similar to a normal distribution is relatively ideal. If the distribution shows a long tail or is skewed, the system may be prone to extremely good or bad results under certain parameter combinations, which is crucial for risk assessment (the sketch after this list shows how dispersion and skewness can be quantified).
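
These three checks can be computed directly from the N sampled output values. A minimal sketch follows; the input file name doubling_time_samples.csv is hypothetical and used only for illustration:

import numpy as np
from scipy import stats

# Hypothetical file holding the N sampled values of one metric, one value per line.
values = np.loadtxt("doubling_time_samples.csv")

central = {"mean": values.mean(), "median": np.median(values)}
dispersion = {
    "std": values.std(ddof=1),
    "p5_p95_range": np.percentile(values, 95) - np.percentile(values, 5),
}
shape = {"skewness": stats.skew(values)}   # > 0 indicates a long right tail

print(central, dispersion, shape)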

5. Full Example Configuration

example/analysis/5_latin_uncertainty_analysis/latin_uncertainty_analysis.json

{
    "paths": {
        "package_path": "../../example_model_single/example_model.mo"
    },
    "simulation": {
        "model_name": "example_model.Cycle",
        "variableFilter": "time|sds.I[1]",
        "stop_time": 12000.0,
        "step_size": 0.5
    },
    "sensitivity_analysis": {
        "enabled": true,
        "analysis_cases": [
            {
                "name": "SALIB_LATIN_Analysis",
                "independent_variable": ["pulseSource.width", "plasma.nf", "plasma.fb", "tep_fep.to_SDS_Fraction[1]", "blanket.TBR"],
                "independent_variable_sampling": {
                    "pulseSource.width": { "bounds": [50, 90], "distribution": "unif" },
                    "plasma.nf": { "bounds": [0.1, 0.9], "distribution": "unif" },
                    "plasma.fb": { "bounds": [0.03, 0.07], "distribution": "unif" },
                    "tep_fep.to_SDS_Fraction[1]": { "bounds": [0.1, 0.8], "distribution": "unif" },
                    "blanket.TBR": { "bounds": [1.05, 1.25], "distribution": "unif" }
                },
                "dependent_variables": [
                    "Startup_Inventory",
                    "Self_Sufficiency_Time",
                    "Doubling_Time"
                ],
                "analyzer": {
                    "method": "latin",
                    "sample_N": 256
                }
            }
        ],
        "metrics_definition": {
            "Startup_Inventory": { "source_column": "sds.I[1]", "method": "calculate_startup_inventory" },
            "Self_Sufficiency_Time": { "source_column": "sds.I[1]", "method": "time_of_turning_point" },
            "Doubling_Time": { "source_column": "sds.I[1]", "method": "calculate_doubling_time" },
            "Required_TBR": {
                "source_column": "sds.I[1]",
                "method": "bisection_search",
                "parameter_to_optimize": "blanket.TBR",
                "search_range": [1, 1.5],
                "tolerance": 0.005,
                "max_iterations": 10
            }
        },
        "glossary_path": "../../example_glossary/example_glossary.csv",
        "unit_map": {
            "Doubling_Time": { "unit": "days", "conversion_factor": 24 },
            "Startup_Inventory": { "unit": "kg", "conversion_factor": 1000 },
            "Self_Sufficiency_Time": { "unit": "days", "conversion_factor": 24 },
            "width": { "unit": "%" }
        }
    }
}

6. AI-Enhanced Analysis

All analysis modules in tricys are deeply integrated with Large Language Models (LLMs), capable of automatically converting raw charts and data into structured, academic-style reports.

6.1. How to Enable

In your analysis case configuration (i.e., within any object in the analysis_cases list or in the params of a post_processing task), add "ai": true to activate the AI analysis feature for that case.

"analysis_cases": [
    {
        "name": "TBR_Analysis_with_AI_Report",
        "independent_variable": "blanket.TBR",
        "independent_variable_sampling": [1.05, 1.1, 1.15, 1.2],
        "dependent_variables": [ "Doubling_Time" ],
        "ai": true
    }
]

6.2. Environment Setup

Before using this feature, you must create a file named .env in the project's root directory and fill in your Large Language Model API credentials. This ensures that your keys are kept secure and are not committed to version control.

# .env file in project root
API_KEY="sk-your_api_key_here"
BASE_URL="https://your_api_base_url/v1"
AI_MODEL="your_model_name_here"
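
The credentials are then read from the environment at runtime; the following is a minimal sketch of that pattern using python-dotenv (illustrative only, not necessarily the exact tricys loading code):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()                    # reads the .env file from the working directory

api_key = os.getenv("API_KEY")
base_url = os.getenv("BASE_URL")
ai_model = os.getenv("AI_MODEL")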

6.3. Output Reports

When enabled, tricys produces two AI-enhanced outputs in the case's report folder alongside the standard analysis report:

  • analysis_report_{case_name}_{model_name}.md: Appends an in-depth textual interpretation of the data and charts, generated by the AI, to the end of the core report.
  • academic_report_{case_name}_{model_name}.md: A well-structured, academic-style report written entirely by the AI. This report typically includes sections such as Abstract, Introduction, Methods, Results and Discussion, and Conclusion, and can be used directly for presentations or as a draft for a paper.