Alexandra Weber (Telespazio Germany GmbH), Peter Franke (Telespazio Germany GmbH)

Space missions increasingly rely on Artificial Intelligence (AI) for a variety of tasks, ranging from planning and monitoring of mission operations, to processing and analysis of mission data, to assistant systems such as a bot that interactively supports astronauts on the International Space Station. In general, the use of AI brings about a multitude of security threats. In the space domain, initial attacks have already been demonstrated, e.g., the Firefly attack, which manipulates automatic forest-fire detection via sensor spoofing. In this article, we provide an initial analysis of security risks that are specifically critical for the use of AI in space, and we discuss corresponding security controls and mitigations. We argue that rigorous risk analyses with a focus on AI-specific threats will be needed to ensure the reliability of future AI applications in the space domain.
