Quan Zhang (Tsinghua University), Yiwen Xu (Tsinghua University), Zijing Yin (Tsinghua University), Chijin Zhou (Tsinghua University), Yu Jiang (Tsinghua University)

Java deserialization vulnerabilities have long been a serious security concern for Java applications. By injecting malicious objects with carefully crafted structures, attackers can chain existing methods invoked during deserialization to mount diverse attacks such as remote code execution. To mitigate such attacks, developers are encouraged to implement policies that restrict the object types an application is allowed to deserialize. However, designing precise policies requires expertise and significant manual effort, often resulting in policies that are either missing or inadequate.
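For context, the JDK's serialization filtering mechanism (JEP 290) is one common way for developers to express such a policy by hand. The sketch below is illustrative only; the package names and filter pattern are hypothetical and not taken from the paper.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

public class ManualAllowlistExample {
    static Object safeDeserialize(byte[] data) throws IOException, ClassNotFoundException {
        // Allow only application DTOs and a few core JDK classes; reject everything else.
        // The pattern is illustrative -- a real policy must enumerate every type the
        // application legitimately deserializes, which is the manual effort DeseriGuard
        // aims to remove.
        ObjectInputFilter allowlist = ObjectInputFilter.Config.createFilter(
                "com.example.dto.*;java.util.*;java.lang.*;!*");
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            in.setObjectInputFilter(allowlist);
            return in.readObject();
        }
    }
}
```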

In this paper, we propose DeseriGuard, a tool designed to help developers seamlessly secure their applications against deserialization attacks. It automatically formulates a policy based on the application's semantics and then enforces it to block illegal deserialization attempts. First, DeseriGuard uses dataflow analysis to construct a semantic-aware property tree, which records the potential structures of deserialized objects. Based on the tree, DeseriGuard identifies the object types that can be safely deserialized and synthesizes an allowlist policy. Then, using a Java agent, DeseriGuard seamlessly enforces the policy at runtime to protect various deserialization procedures. In our evaluation, DeseriGuard successfully blocks all deserialization attacks on 12 real-world vulnerabilities. In addition, we compare DeseriGuard's automatically synthesized policies with 109 developer-designed policies; the results show that DeseriGuard restricts 99.12% more classes. Meanwhile, we run the policy-enhanced applications against their unit and integration tests, which demonstrates that DeseriGuard's policies do not interfere with the applications' execution and induce a negligible time overhead of 2.17%.
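As a rough illustration of what runtime enforcement of a synthesized allowlist could look like (a minimal sketch, not the paper's agent-based implementation), an allowlist can be installed as the process-wide serial filter so every ObjectInputStream in the JVM is checked. The class names below are hypothetical stand-ins for types DeseriGuard would derive from its property tree.

```java
import java.io.ObjectInputFilter;
import java.util.Set;

public class AllowlistEnforcer {
    // Hypothetical allowlist; DeseriGuard would synthesize these types
    // from its semantic-aware property tree.
    private static final Set<String> ALLOWED = Set.of(
            "com.example.model.Order",
            "com.example.model.Customer",
            "java.util.ArrayList",
            "java.lang.String");

    public static void install() {
        ObjectInputFilter filter = info -> {
            Class<?> clazz = info.serialClass();
            if (clazz == null) {
                // Metadata-only callbacks (e.g., depth or stream-size checks): leave undecided.
                return ObjectInputFilter.Status.UNDECIDED;
            }
            // Arrays are judged by their component type.
            while (clazz.isArray()) {
                clazz = clazz.getComponentType();
            }
            if (clazz.isPrimitive() || ALLOWED.contains(clazz.getName())) {
                return ObjectInputFilter.Status.ALLOWED;
            }
            return ObjectInputFilter.Status.REJECTED;
        };
        // Applies to every ObjectInputStream created afterwards in this JVM.
        ObjectInputFilter.Config.setSerialFilter(filter);
    }
}
```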
