Yikang Chen (The Chinese University of Hong Kong), Yibo Liu (Arizona State University), Ka Lok Wu (The Chinese University of Hong Kong), Duc V Le (Visa Research), Sze Yiu Chau (The Chinese University of Hong Kong)

Over the last decade, a series of papers has been published on using static analysis to detect cryptographic API misuse. In each paper, apps are checked against a set of rules to see whether violations exist. A common theme among these papers is that rule violations are plentiful, often at the scale of thousands. Interestingly, while much effort has gone into tackling false negatives, little has been said about (1) whether the misuse alarms are indeed correct and meaningful, and (2) what future work can improve upon apart from finding more misuses.
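For concreteness, the snippet below is a minimal, hypothetical Java example (ours, not drawn from any surveyed paper) of the kind of rule violation such detectors flag: requesting `Cipher.getInstance("AES")` without an explicit mode, which most JCA providers resolve to ECB.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class MisuseExample {
    public static byte[] encrypt(byte[] key, byte[] plaintext) throws Exception {
        // "AES" without an explicit mode and padding defaults to
        // "AES/ECB/PKCS5Padding" on typical JCA providers, violating the
        // common rule "do not use ECB mode for encryption".
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return cipher.doFinal(plaintext);
    }
}
```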

In this paper, we take a deep dive into the rule violations reported by various academic papers, as well as the rules, models, and implementations of their detectors, in an attempt to (1) explain the gap between their misuse alarms and actual vulnerabilities, and (2) shed light on possible directions for improving the precision and usability of misuse detectors. Our analysis suggests that the small-scale inspections done by previous work had some unfortunate blind spots, leaving problems in their rules, models, and implementations unnoticed, which in turn led to unnecessary overestimation of misuses (and vulnerabilities). To facilitate future research on the topic, we distill these avoidable false alarms into high-level patterns that capture their root causes, and we discuss design, evaluation, and reporting strategies that can improve the precision of misuse findings. Furthermore, to demonstrate the generalizability of these false alarm patterns and improvement directions, we also investigate a popular industry detector and a dynamic detector, and discuss how some of the false alarm patterns do and do not apply to them. Our findings suggest that precisely reporting cryptographic misuses still leaves much room for future work to improve upon.
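To make the notion of an avoidable false alarm concrete, the following hypothetical Java snippet (again ours, not the paper's) shows a case where a context-insensitive "never use MD5" rule fires even though the digest only names a cache entry and carries no security expectation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class FalseAlarmExample {
    // MD5 here merely derives a stable cache file name from a URL; no
    // collision resistance is relied upon, yet a rule that flags every
    // MessageDigest.getInstance("MD5") call still reports a misuse.
    public static String cacheKey(String url) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(url.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);
    }
}
```

Distinguishing such benign uses from genuine vulnerabilities requires context about how the output is consumed, which a purely syntactic rule does not capture.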
