Friday, 1 March

  • 08:00 - 09:00
    Breakfast
    Boardroom with Foyer
  • 09:00 - 09:10
    Welcome and Opening Remarks
    Kon Tiki Ballroom
  • 09:10 - 10:10
    Keynote Talk by Pierre Laperdrix (CNRS, Univ Lille, Inria Lille)
    Kon Tiki Ballroom
    • The web is a fantastic platform that has transformed our society. In the span of two decades, browsers went from rendering text and images to becoming massive pieces of software filled with advanced technology and multimedia capabilities. From a security and privacy perspective, a lot has changed: our communications have become more private, and components are now properly isolated from one another. But are these changes always positive? Is the web evolving too quickly, to the detriment of users and their online privacy? In this presentation, we will see that the answer is complex, with innovation, privacy and legislation consistently counterbalancing one another.

      Speaker's Biography: Pierre Laperdrix is currently a research scientist for CNRS in the Spirals team in the CRIStAL laboratory in Lille, France. Previously, he was a postdoctoral researcher in the PragSec lab at Stony Brook University and, later, in the Secure Web Applications Group at CISPA. His research interests span several areas of security and privacy with a strong focus on the web. One of his main goals is to understand what is happening on the web in order to design countermeasures that better protect users online.

  • 10:10 - 10:30
    Morning Coffee Break
    Boardroom with Foyer
  • 10:30 - 12:00
    Session 1: Network Security on the Web
    Chair: Shujiang Wu (F5)
    Kon Tiki Ballroom
    • It has been shown that post-quantum key exchange and authentication with ML-KEM and ML-DSA, NIST’s post-quantum algorithm picks, will have an impact on the performance of TLS 1.3 as used on the Web and in other applications. Studies so far have focused on the overhead of quantum-resistant algorithms on the TLS time-to-first-byte (handshake time). Although these works have been important in quantifying the slowdown in connection establishment, they do not capture the full picture for real-world TLS 1.3 connections, which carry sizable amounts of data. Intuitively, introducing an extra 10KB of ML-KEM and ML-DSA exchanges into the connection negotiation will inflate the connection establishment time proportionally more than it will increase the total time of a Web connection carrying 200KB of data. In this work, we quantify the impact of ML-KEM and ML-DSA on typical TLS 1.3 connections which transfer a few hundred KB from the server to the client. We study the slowdown in the time-to-last-byte of post-quantum connections under normal network conditions and in more unstable environments with high packet delay variability and loss probabilities. We show that the impact of ML-KEM and ML-DSA on the TLS 1.3 time-to-last-byte under stable network conditions is lower than the impact on the time-to-first-byte and diminishes as the transferred data increases. The time-to-last-byte increase stays below 5% for high-bandwidth, stable networks. Under low-bandwidth, stable network conditions, a 32% increase in time-to-first-byte translates into an under-15% increase in time-to-last-byte when transferring 50KiB of data or more. Even when congestion control affects connection establishment, the additional slowdown drops below 10% as the connection data increases to 200KiB. We also show that connections in lossy or volatile networks could see a higher impact from post-quantum handshakes, but their time-to-last-byte degradation still drops as the transferred data increases. Finally, we show that such connections are already significantly slow and volatile regardless of the TLS handshake.
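
      As a rough illustration of why a fixed handshake overhead matters less as the transfer grows, the sketch below models the relative time-to-last-byte increase for a given amount of extra handshake data. The link parameters and the simplified 2-RTT establishment cost are illustrative assumptions, not the paper's measured testbed.

        // Toy model: total time = handshake time + transfer time. The post-quantum
        // penalty is a (mostly) fixed cost, so its relative weight shrinks as the
        // amount of application data grows. All parameters are illustrative only.
        const RTT_MS = 50;                          // assumed round-trip time
        const BANDWIDTH_BPS = 1_000_000;            // assumed 1 Mbps low-bandwidth link
        const PQ_EXTRA_BYTES = 10 * 1024;           // ~10KB of ML-KEM/ML-DSA material

        function transferMs(bytes: number): number {
          return (bytes * 8 * 1000) / BANDWIDTH_BPS;
        }

        function relativeTtlbIncrease(payloadBytes: number): number {
          const classicalHandshake = 2 * RTT_MS;    // TCP + 1-RTT TLS 1.3, simplified
          const pqHandshake = classicalHandshake + transferMs(PQ_EXTRA_BYTES);
          const classicalTtlb = classicalHandshake + transferMs(payloadBytes);
          const pqTtlb = pqHandshake + transferMs(payloadBytes);
          return (pqTtlb - classicalTtlb) / classicalTtlb;
        }

        for (const kib of [0, 50, 200]) {
          const pct = (relativeTtlbIncrease(kib * 1024) * 100).toFixed(1);
          console.log(`${kib} KiB payload -> ~${pct}% TTLB increase`);
        }

      Even this toy model, which ignores congestion control and loss, reproduces the qualitative trend described above: the relative penalty is largest for the handshake itself and shrinks as the payload grows.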

    • Mohammed Aldeen, Sisheng Liang, Zhenkai Zhang, Linke Guo (Clemson University), Zheng Song (University of Michigan – Dearborn), and Long Cheng (Clemson University)

      Graphics processing units (GPUs) on modern computers are susceptible to electromagnetic (EM) side-channel attacks that can leak sensitive information without physical access to the target device. Website fingerprinting through these EM emanations poses a significant privacy threat, capable of revealing user activities from a distance. This paper introduces EMMasker, a novel software-based solution designed to mitigate such attacks by obfuscating the EM signals associated with web activity. EMMasker operates by generating rendering noise within the GPU using WebGL shaders, thereby disrupting the patterns of EM signals and confounding any attempt at identifying user online activities. Our approach strikes a balance between the effectiveness of obfuscation and system efficiency, ensuring minimal impact on GPU performance and the user's browsing experience. Our evaluation shows that EMMasker significantly reduces the average accuracy of state-of-the-art EM website fingerprinting attacks from 81.03% to 22.56%, without imposing a high resource overhead. Our results highlight the potential of EMMasker as a practical countermeasure against EM side-channel website fingerprinting attacks, enhancing privacy and security for web users.
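
      To give a flavor of the general approach, the fragment below keeps a GPU busy with throwaway WebGL work from a hidden canvas. It is an illustrative sketch under the assumption that constant, randomized shader activity masks rendering-dependent EM patterns; it is not the authors' EMMasker implementation, which additionally balances obfuscation against overhead.

        // Illustrative only: render a compute-heavy fragment shader every frame on a
        // hidden canvas so that GPU activity no longer tracks the page being rendered.
        const canvas = document.createElement('canvas');
        canvas.width = 256;
        canvas.height = 256;
        const gl = canvas.getContext('webgl')!;     // assume WebGL is available

        const vertexSrc = `
          attribute vec2 pos;
          void main() { gl_Position = vec4(pos, 0.0, 1.0); }`;
        const fragmentSrc = `
          precision mediump float;
          uniform float seed;
          void main() {
            float v = seed;
            for (int i = 0; i < 200; i++) {         // arbitrary busy-work loop
              v = fract(sin(v * 12.9898 + float(i)) * 43758.5453);
            }
            gl_FragColor = vec4(v, v, v, 1.0);
          }`;

        function compile(type: number, src: string): WebGLShader {
          const shader = gl.createShader(type)!;
          gl.shaderSource(shader, src);
          gl.compileShader(shader);
          return shader;
        }

        const program = gl.createProgram()!;
        gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
        gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
        gl.linkProgram(program);
        gl.useProgram(program);

        // Two triangles covering the whole viewport.
        gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
        gl.bufferData(gl.ARRAY_BUFFER,
          new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
        const pos = gl.getAttribLocation(program, 'pos');
        gl.enableVertexAttribArray(pos);
        gl.vertexAttribPointer(pos, 2, gl.FLOAT, false, 0, 0);
        const seed = gl.getUniformLocation(program, 'seed');

        function frame() {
          gl.uniform1f(seed, Math.random());        // vary the workload each frame
          gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
          requestAnimationFrame(frame);
        }
        requestAnimationFrame(frame);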

    • Naif Mehanna (Univ. Lille / Inria / CNRS), Walter Rudametkin (IRISA / Univ Rennes), Pierre Laperdrix (CNRS, Univ Lille, Inria Lille), and Antoine Vastel (Datadome)

      Free proxies have been widespread since the early days of the Web, helping users bypass geo-blocked content and conceal their IP addresses. Various proxy providers promise faster Internet or increased privacy while advertising lists comprising hundreds of readily available free proxies. However, while paid proxy services advertise support for encrypted connections and high stability, free proxies often lack such guarantees, making them prone to malicious activities such as eavesdropping or modifying content. Furthermore, there is a market that encourages exploiting devices to install proxies.

      In this paper, we present a 30-month longitudinal study analyzing the stability, security, and potential manipulation of free web proxies that we collected from 11 providers. Our collection resulted in over 640,600 proxies, which we cumulatively tested daily. We find that only 34.5% of proxies were active at least once during our tests, showcasing the general instability of free proxies. Geographically, a majority of proxies originate from the US and China. Leveraging the Shodan search engine, we identified 4,452 distinct vulnerabilities on the proxies’ IP addresses, including 1,755 vulnerabilities that allow unauthorized remote code execution and 2,036 that enable privilege escalation on the host device. Through software analysis of the proxies’ IP addresses, we find that 42,206 of them appear to run on MikroTik routers. Worryingly, we also discovered 16,923 proxies that manipulate content, indicating potential malicious intent by proxy owners. Ultimately, our research reveals that the use of free web proxies poses significant risks to users’ privacy and security. The instability, vulnerabilities, and potential for malicious actions uncovered in our analysis lead us to strongly caution users against relying on free proxies.
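
      One simple way to detect the kind of content manipulation measured here is to fetch a stable reference page both directly and through a proxy and compare the bodies. The sketch below uses Node's undici for proxying; the baseline URL and proxy address are placeholders, and this is not the authors' measurement pipeline, which also has to account for dynamic content and transient failures.

        // Flag a proxy as manipulating content if the body it returns for a known,
        // stable page differs from the body fetched directly.
        import { fetch, ProxyAgent } from 'undici';
        import { createHash } from 'node:crypto';

        const BASELINE_URL = 'http://example.com/';   // placeholder reference page
        const PROXY_URL = 'http://203.0.113.7:8080';  // placeholder free proxy

        const sha256 = (body: string) =>
          createHash('sha256').update(body).digest('hex');

        async function isManipulating(): Promise<boolean> {
          const direct = await (await fetch(BASELINE_URL)).text();
          const proxied = await (
            await fetch(BASELINE_URL, { dispatcher: new ProxyAgent(PROXY_URL) })
          ).text();
          return sha256(direct) !== sha256(proxied);
        }

        isManipulating()
          .then((modified) => console.log(modified ? 'content modified by proxy' : 'content intact'))
          .catch((err) => console.log('proxy unreachable or errored:', err));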

  • 12:00 - 13:30
    Lunch
    Lawn
  • 13:30 - 14:30
    Keynote Talk by Shuo Chen (Microsoft Research Redmond)
    Kon Tiki Ballroom
    • In this talk, I will share my reflections on web security research. There are a number of superficial understandings about the nature of web security issues, the focus of defense technologies and the emerging concept of Web3. To deepen these understandings, it is necessary to see the Web as a “multi-mind” computing paradigm, which has two fundamental characteristics: (1) it is an open platform on which people with potential conflicts of interest (COI) can add code modules; (2) app functionalities are achieved by running through multiple COI modules. These characteristics distinguish the Web from other computing paradigms, such as personal computing, cloud computing and even distributed computing. Recognizing the intrinsic multi-mind nature of the Web, I will use concrete examples to show some unique research angles. I will explain that web security problems are not simply general security problems manifested in the Web. Accordingly, there are novel and promising approaches that provide a methodological basis for defense. In the last part of the talk, I will argue that Web3 is a natural next stage in the evolution of the Web.

      Speaker's Biography: Shuo Chen is a senior principal researcher at Microsoft Research Redmond. His interest is in studying operational systems to understand their security challenges and develop systematic solutions. He has worked in the areas of software-as-a-service, browsers, web privacy/security and blockchain/smart contracts. His research has led to several real-world security pushes, such as a cross-company effort to fix browser bugs that compromise HTTPS security; Microsoft Internet Explorer team’s effort to systematically fix GUI-spoofing (phishing) bugs; and a cross-company effort to fix logic bugs in e-commerce, online payment and single-sign-on services. His research has been covered by media outlets such as CNN, CNET and MIT Tech Review. He also works in the area of program verification for browsers, web protocols and smart contracts. Shuo has served on the program committees of IEEE S&P, USENIX Security, ACM CCS, DSN, and others. He obtained his Ph.D. degree from the University of Illinois at Urbana-Champaign.

  • 14:30 - 15:10
    Session 2: Work In Progress
    Chair: Xu Lin (Washington State University)
    Kon Tiki Ballroom
    • Online fraud has emerged as a formidable challenge in the digital age, presenting a serious threat to individuals and organizations worldwide. Characterized by its ever-evolving nature, this type of fraud capitalizes on the rapid development of Internet technologies and the increasing digitization of financial transactions. In this paper, we address the critical need to understand and combat online fraud by conducting an unprecedented analysis based on extensive real-world transaction data.

      Our study involves a multi-angle, multi-platform examination of fraudsters' approaches, behaviors and intentions. The findings of our study are significant, offering detailed insights into the characteristics, patterns and methods of online fraudulent activities and providing a clear picture of the current landscape of digital deception. To the best of our knowledge, we are the first to conduct such large-scale measurements using industrial-level real-world online transaction data.

    • Nikolaos Pantelaios and Alexandros Kapravelos (North Carolina State University)

      Introduced over a decade ago, Chrome extensions now exceed 200,000 in number. In 2020, Google announced a shift in extension development with Manifest Version 3 (V3), aiming to replace the previous Version 2 (V2) by January 2023. This deadline was later extended to January 2025. The company’s decision is grounded in enhancing three main pillars: privacy, security, and performance.

      This paper presents a comprehensive analysis of the Manifest V3 ecosystem. We start by investigating the adoption rate of V3, detailing the percentage of adoption from its announcement up until 2024. Our findings indicate that, prior to the 2023 pause, less than 5% of all extensions had transitioned to V3, despite the looming deadline for the complete removal of V2, while currently nine out of ten new extensions are uploaded in Manifest V3. Furthermore, we compare the security and privacy enhancements between V2 and V3 and evaluate the improved security attributable to V3’s safer APIs, examining how certain APIs, which were vulnerable or facilitated malicious behavior, have been deprecated or removed in V3. We dynamically execute 517 confirmed malicious extensions and observe an 87.8% reduction in APIs related to malicious behavior due to the improvements of V3. We discover that only 154 (29.8%) of these extensions remain functional post-conversion. This analysis leads to the conclusion that V3 reduces the avenues for abuse of such APIs. However, despite the reduction in APIs associated with malicious activities, the new Manifest V3 protocol is not immune to such behavior. Our research demonstrates, through a proof of concept, the adaptability of malicious activities to V3. After the proof-of-concept changes are applied, we show that 290 (56%) of the examined malicious extensions retain their capability to conduct harmful activities within the V3 framework. They can achieve this by incorporating web accessible resources, a method that facilitates the injection of third-party JavaScript code. Finally, this paper is also the first to document the impact of user and community feedback on the transition from V2 to V3, analyzing the percentage of initial issues that have been resolved, and proposing future directions and mitigation strategies for the continued evolution of the browser extension ecosystem.
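
      For readers less familiar with what the V2-to-V3 migration involves, the fragments below contrast minimal manifests for the two versions (illustrative fields only, not taken from the paper's dataset): persistent background pages become service workers, the blocking webRequest permission gives way to declarativeNetRequest, and host patterns move into a separate host_permissions list.

        // Minimal, illustrative manifest fragments showing the headline V2 -> V3 changes.
        const manifestV2 = {
          manifest_version: 2,
          name: 'example-extension',
          version: '1.0',
          background: { scripts: ['background.js'], persistent: true },
          browser_action: { default_popup: 'popup.html' },
          permissions: ['storage', 'webRequest', 'webRequestBlocking', '<all_urls>'],
        };

        const manifestV3 = {
          manifest_version: 3,
          name: 'example-extension',
          version: '1.0',
          background: { service_worker: 'background.js' },  // event-driven service worker
          action: { default_popup: 'popup.html' },           // browser_action and page_action merged
          permissions: ['storage', 'declarativeNetRequest'], // blocking webRequest is gone
          host_permissions: ['<all_urls>'],                  // host patterns split out
        };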

  • 15:10 - 15:40
    Afternoon Coffee Break
    Boardroom with Foyer
  • 15:40 - 16:40
    Session 3: Programming Language Security on the Web
    Kon Tiki Ballroom
    • Are GitHub stars a good surrogate metric for assessing the importance of open-source code? While security research frequently uses them as a proxy for importance, the reliability of this relationship has not been studied yet. Furthermore, its relationship to the download numbers provided by code registries – another commonly used metric – has yet to be ascertained. We address this research gap by analyzing the correlation between GitHub stars and download numbers as well as their correlation with detected deployments across websites. Our data set consists of 925,978 data points across three web programming languages: PHP, Ruby, and JavaScript. We assess deployment across websites using 58 hand-crafted fingerprints for JavaScript libraries. Our results reveal a weak relationship between GitHub stars and download numbers, ranging from a correlation of 0.47 for PHP down to 0.14 for JavaScript, as well as a large number of low-star but high-download projects for PHP and Ruby, and the opposite pattern for JavaScript, with a noticeably higher count of high-star but apparently low-download libraries. Concerning the relationship with detected deployments, we discovered a correlation of 0.61 and 0.63 with stars and downloads, respectively. Our results indicate that both downloads and stars provide a moderately strong indicator of the importance of client-side deployed JavaScript libraries.
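
      As an example of what a hand-crafted deployment fingerprint can look like, the probes below check for well-known globals in the page context. These particular checks are illustrative and are not taken from the paper's 58 fingerprints.

        // Illustrative in-page fingerprints: each probe reports a library and, where
        // exposed, its version string. Intended to run in the context of a visited page.
        type Detection = { library: string; version?: string };

        const probes: Array<() => Detection | null> = [
          () => (window as any).jQuery?.fn?.jquery
            ? { library: 'jQuery', version: (window as any).jQuery.fn.jquery }
            : null,
          () => (window as any).React?.version
            ? { library: 'React', version: (window as any).React.version }
            : null,
          () => (window as any).Vue?.version
            ? { library: 'Vue', version: (window as any).Vue.version }
            : null,
        ];

        const detected = probes
          .map((probe) => probe())
          .filter((d): d is Detection => d !== null);

        console.log(detected);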

    • Code automatically generated by large language models is expected to be used in software development. A previous study verified the security of 21 types of code generated by ChatGPT and found that ChatGPT sometimes generates vulnerable code. However, although ChatGPT produces different output depending on the input language, the effect of the input language on the security of the generated code is not clear. There is therefore a concern that non-native English-speaking developers may generate insecure code or be forced to bear unnecessary burdens. To investigate the effect of language differences on code security, we instructed ChatGPT in English and in Japanese to generate code with the same content, producing a total of 450 code samples under six different conditions. Our analysis showed that insecure code was generated from both English and Japanese prompts, but in most cases the weaknesses were independent of the input language. In addition, the results of validating the same prompts across different programming languages suggest that the security of the generated code tends to depend on the security and usability of the APIs provided by the output programming language.
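
      As a concrete illustration of the kind of weakness such audits look for (a generic example, not one of the 450 samples from this study; the db.query interface is an assumption for the sketch), generated database code may concatenate user input into a query instead of binding parameters:

        // A generic query interface; the exact client API is assumed for this sketch.
        interface Db { query(sql: string, params?: unknown[]): unknown; }

        // Insecure pattern sometimes seen in generated code: user input concatenated
        // directly into the SQL string (SQL injection, CWE-89).
        function findUserInsecure(db: Db, name: string) {
          return db.query(`SELECT * FROM users WHERE name = '${name}'`);
        }

        // Safer equivalent: a bound parameter, so the input is never interpreted as SQL.
        function findUserSafe(db: Db, name: string) {
          return db.query('SELECT * FROM users WHERE name = ?', [name]);
        }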

  • 16:40 - 17:00
    Awards and Closing Remarks
    Kon Tiki Ballroom