Abstract
Multimodal sentiment analysis aims to achieve a precise understanding of emotion by integrating complementary textual, visual, and audio information. However, issues such as sentiment discrepancies between modalities, ineffective integration of multimodal information, and the complexity of temporal order dependencies significantly constrain model performance. The authors propose an LLM-guided Hierarchical Spatio-Temporal Graph Network (L-HSTGN) that addresses these problems through multimodal large-model feature enhancement, bidirectional spatio-temporal joint modeling, and a dynamic gate fusion mechanism. First, they generate cross-modal sentiment pseudo-labels with a multimodal large model and optimize the single-modal representations with adversarial regularization. Second, they develop a bidirectional spatio-temporal convolution module that concurrently extracts local-global temporal features and dynamic spatial correlations.
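The dynamic gate fusion mechanism mentioned above can be illustrated with a minimal sketch: input-dependent gates weight each modality's contribution to the fused representation. This is a generic illustration of gated fusion, not the paper's actual implementation; the feature dimension, gate network, and variable names (`text`, `visual`, `audio`, `W`, `b`) are all hypothetical, and the parameters are randomly initialized here rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative feature dimension

# Hypothetical single-modal feature vectors (text, visual, audio)
text, visual, audio = rng.normal(size=(3, d))

# Gate network parameters: one logit per modality from the joint context
# (randomly initialized here; learned end-to-end in a real model)
W = rng.normal(scale=0.1, size=(3, 3 * d))
b = np.zeros(3)

def dynamic_gate_fusion(t, v, a):
    """Fuse three modality vectors with input-dependent softmax gates."""
    concat = np.concatenate([t, v, a])       # joint cross-modal context
    logits = W @ concat + b                  # one scalar logit per modality
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()                     # softmax weights, sum to 1
    return gates[0] * t + gates[1] * v + gates[2] * a, gates

fused, gates = dynamic_gate_fusion(text, visual, audio)
print(fused.shape, gates.sum())
```

Because the gates are computed from the concatenated modality features, the fusion weights adapt per input, which is the usual motivation for gating over a fixed-weight average.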
| Original language | English |
|---|---|
| Journal | International Journal of Information Systems in the Service Sector |
| Volume | 16 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Graph Network
- Hierarchical
- Large Language Model
- Multimodal Sentiment Analysis
- Spatio-Temporal Network
Title
LLM-Guided Multimodal Information Fusion With Hierarchical Spatio-Temporal Graph Network for Sentiment Analysis