Do users write more insecure code with AI assistants?

AI code assistants have emerged as powerful tools that can aid in the software development life-cycle and can improve developer productivity. Unfortunately, such assistants have also been found to produce insecure code in lab environments, raising significant concerns about their usage in practice. In this paper, we conduct a user study to examine how users interact with AI code assistants to solve a variety of security-related tasks. Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code. To better inform the design of future AI-based code assistants, we release our user-study apparatus and anonymized data to researchers seeking to build on our work at this link.

↫ Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh
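
The abstract doesn't spell out what "insecure" looked like in practice, but one of the study's tasks asked participants to write symmetric encryption functions in Python, and the AI-assisted group was more likely to hand in weak answers there. Purely as an illustration of that gap — this is a minimal sketch of my own, not code from the paper, and the function names and the choice of the `cryptography` package are assumptions — here is the sort of contrast involved: a naive, unauthenticated cipher versus an authenticated one.

```python
# Illustrative sketch only; helper names and library choice are not from the paper.
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def encrypt_naive(key: bytes, plaintext: bytes) -> bytes:
    """The kind of answer a study like this flags: AES in ECB mode.

    ECB is deterministic (identical blocks encrypt identically) and offers
    no integrity protection, so tampering with the ciphertext goes unnoticed.
    """
    padded = plaintext + b"\x00" * (-len(plaintext) % 16)  # naive zero padding
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()


def encrypt_authenticated(key: bytes, plaintext: bytes) -> bytes:
    """A safer answer: Fernet wraps AES-CBC with a random IV plus HMAC-SHA256,
    so ciphertexts are randomized and tampering is detected on decryption."""
    return Fernet(key).encrypt(plaintext)


if __name__ == "__main__":
    msg = b"attack at dawn"
    print(encrypt_naive(os.urandom(16), msg).hex())
    print(encrypt_authenticated(Fernet.generate_key(), msg))
```

The failures the authors describe are mostly of this mundane kind — weak or misused primitives and missing authentication — rather than exotic bugs.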

I’m surprised that somewhat randomly copying other people’s code into your program – violating their licenses, to boot – leads to crappier code. Who knew!
