Explainable Artificial Intelligence in Deep Learning-Based Solar Storm Predictions

Adam O. Rawashdeh, Jason T.L. Wang, Katherine G. Herbert

Research output: Contribution to journal › Conference article › peer-review

Abstract

A deep learning model is often considered a black-box model, as its internal workings tend to be opaque to the user. Because of this lack of transparency, it is challenging to understand the reasoning behind the model’s predictions. Here, we present an approach to making a deep learning-based solar storm prediction model interpretable, where solar storms include solar flares and coronal mass ejections (CMEs). The model, based on a long short-term memory (LSTM) network with an attention mechanism, aims to predict whether an active region (AR) on the Sun’s surface that produces a flare within 24 hours will also produce a CME associated with the flare. The crux of our approach is to model data samples in an AR as time series and use the LSTM network to capture the temporal dynamics of the data samples. To make the model’s predictions accountable and reliable, we leverage post hoc model-agnostic techniques, which help elucidate the factors contributing to the predicted output for an input sequence and provide insights into the model’s behavior across multiple sequences within an AR. To our knowledge, this is the first time that interpretability has been added to an LSTM-based solar storm prediction model.
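To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch of an attention-equipped LSTM classifier for an AR time series, paired with a simple occlusion-style attribution, one common post hoc model-agnostic technique. Everything here is an illustrative assumption: the class and function names, the layer sizes, the feature count, and the choice of occlusion itself are hypothetical, since the abstract does not name the paper's network hyperparameters or its specific explanation methods.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """Hypothetical LSTM with additive attention over time steps."""
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, 1)  # scores each time step
        self.head = nn.Linear(hidden_size, 1)  # binary CME / no-CME logit

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.lstm(x)                    # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
        ctx = (w.unsqueeze(-1) * h).sum(dim=1)               # weighted context
        return self.head(ctx).squeeze(-1), w   # logit and per-step weights

def occlusion_attribution(model, x, baseline=0.0):
    """Model-agnostic attribution for a single sequence (batch of one):
    zero out each feature across all time steps and record how much the
    predicted CME probability drops."""
    model.eval()
    with torch.no_grad():
        base_prob = torch.sigmoid(model(x)[0])
        scores = []
        for j in range(x.shape[-1]):
            x_pert = x.clone()
            x_pert[..., j] = baseline          # occlude feature j everywhere
            scores.append((base_prob - torch.sigmoid(model(x_pert)[0])).item())
    return scores  # larger score = feature mattered more for this prediction

# Hypothetical usage: one AR sequence of 24 time steps with 18 features.
model = AttentiveLSTM(n_features=18)
x = torch.randn(1, 24, 18)
logit, attn_weights = model(x)          # weights highlight influential time steps
scores = occlusion_attribution(model, x)  # per-feature attribution scores
```

The attention weights explain one input sequence from inside the network, while the occlusion scores are computed purely from model outputs and so would apply to any predictor, which is the sense in which such techniques are model-agnostic.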

Original language: English
Journal: Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS
Volume: 38
State: Published - 14 May 2025
Event: 38th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2025 - Daytona Beach, United States
Duration: 20 May 2025 – 23 May 2025
