TY - GEN
T1 - Evaluating Small Language Models for Intrusion Detection on Automotive Embedded Platforms
AU - Salah, Islam
AU - Son, Junggab
AU - Robila, Stefan
AU - Kim, Daeyoung
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-2231-8/2025/11.
PY - 2026/2/4
Y1 - 2026/2/4
N2 - The increasing reliance on embedded computing and communication systems in vehicles has expanded their attack surface, making embedded automotive systems increasingly vulnerable to malware and intrusion. Traditional intrusion detection systems, originally designed for general-purpose computing, are often too resource-intensive for deployment in automotive environments. This study investigates the use of Small Language Models (SLMs) for lightweight intrusion detection in embedded automotive systems. We implement a CAN-to-text transformation that allows transformer-based SLMs to model Controller Area Network (CAN) traffic as contextual sequences and effectively detect anomalous communication patterns. We evaluate three representative SLMs, namely MiniLM, DistilBERT, and TinyBERT, on an embedded development board. Among them, MiniLM achieved the most balanced performance, offering high detection accuracy while consuming less power and memory than DistilBERT. TinyBERT provided better computational efficiency, but at the cost of reduced accuracy, limiting its use in safety-critical environments. The results demonstrate the feasibility of using compact transformer architectures for embedded intrusion detection. Our findings indicate that compact transformer-based models can effectively balance accuracy and efficiency, making them viable candidates for next-generation in-vehicle intrusion detection systems.
AB - The increasing reliance on embedded computing and communication systems in vehicles has expanded their attack surface, making embedded automotive systems increasingly vulnerable to malware and intrusion. Traditional intrusion detection systems, originally designed for general-purpose computing, are often too resource-intensive for deployment in automotive environments. This study investigates the use of Small Language Models (SLMs) for lightweight intrusion detection in embedded automotive systems. We implement a CAN-to-text transformation that allows transformer-based SLMs to model Controller Area Network (CAN) traffic as contextual sequences and effectively detect anomalous communication patterns. We evaluate three representative SLMs, namely MiniLM, DistilBERT, and TinyBERT, on an embedded development board. Among them, MiniLM achieved the most balanced performance, offering high detection accuracy while consuming less power and memory than DistilBERT. TinyBERT provided better computational efficiency, but at the cost of reduced accuracy, limiting its use in safety-critical environments. The results demonstrate the feasibility of using compact transformer architectures for embedded intrusion detection. Our findings indicate that compact transformer-based models can effectively balance accuracy and efficiency, making them viable candidates for next-generation in-vehicle intrusion detection systems.
KW - Automotive security
KW - Controller Area Network (CAN)
KW - Embedded systems
KW - Intrusion detection
KW - Small Language Models
UR - https://www.scopus.com/pages/publications/105030413162
U2 - 10.1145/3769002.3769959
DO - 10.1145/3769002.3769959
M3 - Conference contribution
AN - SCOPUS:105030413162
T3 - 2025 Research in Adaptive and Convergent Systems, RACS 2025
BT - 2025 Research in Adaptive and Convergent Systems, RACS 2025
PB - Association for Computing Machinery, Inc
T2 - 2025 Research in Adaptive and Convergent Systems, RACS 2025
Y2 - 16 November 2025 through 19 November 2025
ER -