A key difference between how we in CASC use machine learning (ML) and how the commercial sector uses it is that a working model is rarely the ultimate goal. Instead, high-sensitivity predictive models lead to new insights and enable us to form new hypotheses about physical phenomena.

We are developing techniques that reveal the interpretable components in these often opaque models, as well as approaches for effective communication between a model and its domain users. This strategy calls for novel techniques that combine human understanding and machine intelligence.
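To make the idea of revealing interpretable components in an opaque model concrete, here is a toy sketch of one widely used interpretability technique, permutation importance: shuffle one input feature at a time and measure how much the model's error grows. This is a generic illustration, not a description of CASC's specific methods, and the synthetic data and least-squares "black box" below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data (assumption): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# Stand-in for an opaque model: a least-squares fit treated as a black box.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def permutation_importance(predict, X, y):
    """Importance of feature j = increase in MSE after shuffling column j."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

scores = permutation_importance(predict, X, y)
print(scores.argmax())  # feature 0 dominates, matching how y was constructed
```

The same probe applies to any predictor, which is why model-agnostic diagnostics like this are a natural bridge between a trained model and a domain scientist.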

Explainable artificial intelligence (AI) lies at the intersection of ML, statistics, visualization, human–computer interaction, and more. This emerging research area is rapidly becoming not only a crucial capability for LLNL but also a core strength. CASC’s integrated research teams jointly tackle these challenges, earning widespread recognition for their contributions.

[Figure: three panels showing scale-up from nano to macro]

Surprising Places You'll Find ML

Researchers explain why challenges in water filtration, wildfire prediction, and carbon capture are becoming more tractable thanks to groundbreaking data science methodologies.

[Illustration: the Learn by Calibrating (LbC) method, with simulator inputs feeding a prediction estimator]

Learn by Calibrating

A deep-learning approach to designing emulators for scientific processes is more accurate and efficient than existing methods.
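To illustrate what an emulator is and why calibration matters, here is a deliberately simple sketch: a cheap surrogate is fit to a toy "simulator," and its prediction intervals are sized from held-out residuals so that empirical coverage matches a target. This captures the spirit of calibration-driven emulation only; it is not the LbC implementation, which uses deep networks, and the sine-wave simulator and polynomial surrogate are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (assumption) for an expensive scientific simulator.
def simulator(x):
    return np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

# Sparse simulator runs serve as training data for the emulator.
x_train = rng.uniform(0, 2, size=200)
y_train = simulator(x_train)

# Cheap surrogate: a polynomial fit plays the role of the learned emulator.
coeffs = np.polyfit(x_train, y_train, deg=9)
emulate = lambda x: np.polyval(coeffs, x)

# Calibration: choose an interval half-width from held-out residual
# quantiles so that empirical coverage matches a 90% target.
x_cal = rng.uniform(0, 2, size=500)
resid = np.abs(simulator(x_cal) - emulate(x_cal))
half_width = np.quantile(resid, 0.9)

# Verify coverage on fresh points: roughly 90% should fall in the interval.
x_test = rng.uniform(0, 2, size=1000)
covered = np.abs(simulator(x_test) - emulate(x_test)) <= half_width
print(round(covered.mean(), 2))
```

An emulator with calibrated intervals can stand in for the simulator in design loops while still reporting how much its predictions should be trusted.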