Alexandra Weber (Telespazio Germany GmbH), Peter Franke (Telespazio Germany GmbH)

Space missions increasingly rely on Artificial Intelligence (AI) for a variety of tasks, ranging from planning and monitoring of mission operations, to processing and analysis of mission data, to assistant systems such as a bot that interactively supports astronauts on the International Space Station. In general, the use of AI brings about a multitude of security threats. In the space domain, initial attacks have already been demonstrated, including the Firefly attack, which manipulates automatic forest-fire detection using sensor spoofing. In this article, we provide an initial analysis of specific security risks that are critical for the use of AI in space, and we discuss corresponding security controls and mitigations. We argue that rigorous risk analyses with a focus on AI-specific threats will be needed to ensure the reliability of future AI applications in the space domain.
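
To make the sensor-spoofing threat mentioned above more concrete, the following minimal sketch shows how a naive threshold-based fire detector can be driven to a false negative when an adversary overwrites a single sensor reading before it reaches the detection logic. The detector, sensor names, temperatures, and threshold are invented for illustration only; they are not taken from the Firefly attack or from any real mission system.

```python
# Hypothetical illustration of sensor spoofing against a simple
# threshold-based fire detector. All names, values, and the detection
# logic are invented for illustration purposes.

from dataclasses import dataclass


@dataclass
class ThermalReading:
    sensor_id: str
    temperature_c: float  # reported surface temperature in degrees Celsius


def detects_fire(readings: list[ThermalReading], threshold_c: float = 120.0) -> bool:
    """Naive detector: flag a fire if any sensor exceeds the threshold."""
    return any(r.temperature_c > threshold_c for r in readings)


def spoof(readings: list[ThermalReading], target_id: str, fake_temp_c: float) -> list[ThermalReading]:
    """Adversary replaces one sensor's value before it reaches the detector."""
    return [
        ThermalReading(r.sensor_id, fake_temp_c) if r.sensor_id == target_id else r
        for r in readings
    ]


if __name__ == "__main__":
    genuine = [ThermalReading("north", 25.0), ThermalReading("south", 180.0)]
    print(detects_fire(genuine))                         # True: a real hotspot is present
    # Spoofing the hot sensor suppresses the alarm (a false negative).
    print(detects_fire(spoof(genuine, "south", 20.0)))   # False
```

The same spoofing pattern could equally be used in the opposite direction, injecting fabricated hotspots to trigger false alarms, which is why input validation and sensor cross-checks are among the mitigations worth considering.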
