Rei Yamagishi, Shinya Sasa, and Shota Fujii (Hitachi, Ltd.)

Code automatically generated by large language models is expected to be used in software development. A previous study examined the security of 21 code samples generated by ChatGPT and found that ChatGPT sometimes generates vulnerable code. However, while ChatGPT produces different output depending on the input language, the effect of the input language on the security of the generated code is not clear. There is therefore concern that non-native English-speaking developers may generate insecure code or be forced to bear unnecessary burdens. To investigate the effect of language differences on code security, we gave ChatGPT the same instructions in English and in Japanese and generated a total of 450 programs under six different conditions. Our analysis showed that insecure code was generated for both English and Japanese inputs, but in most cases the vulnerabilities were independent of the input language. In addition, generating the same content in different programming languages suggested that the security of the generated code tends to depend on the security and usability of the APIs provided by the target programming language.
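
The final observation, that generated-code security tends to track the security and usability of the APIs a target language offers, can be illustrated with a small hypothetical example (not taken from the paper). In Python's standard library, for instance, a weak hashing API is simpler to call than the safer key-derivation API, so a model prompted only to "hash a password" may plausibly reach for the former:

```python
import hashlib
import os

# Insecure variant: the easiest API to call is not suitable for
# password storage, yet it is the shortest answer to the prompt.
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# Safer variant: the same standard library exposes a key-derivation
# function, but it needs extra parameters (salt, iteration count),
# so it is less likely to be produced unless explicitly requested.
def hash_password_safer(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest
```

Both variants use APIs that ship with the language, so in this sketch it is the relative usability of the secure API, rather than its availability, that would drive the insecure choice.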
