The rise of AI coding assistants marks a significant leap forward in software development. With capabilities that streamline tasks, these tools promise a new level of efficiency. However, a recent joint report by France’s National Cybersecurity Agency (ANSSI) and Germany’s Federal Office for Information Security (BSI) highlights crucial security and privacy considerations for organisations adopting AI coding assistants.

This blog will explore the advantages and risks these AI tools bring, along with actionable steps to integrate them responsibly into business practices.

Opportunities

AI coding assistants offer many benefits for developers and organisations alike, primarily by automating repetitive coding tasks. Advantages include:

  1. The ability to generate code snippets, functions, and even complex classes from simple prompts. This is particularly useful for routine algorithms, as it allows developers to focus on more intricate aspects of their projects.
  2. By detecting code errors and generating test cases, coding assistants help reduce the time developers spend on debugging and quality assurance.
  3. Maintaining code standards is essential, and AI assistants help enforce consistent code formatting, documentation, and commenting – which improves readability and collaboration across teams.
  4. AI is evolving to translate older programming languages into modern alternatives, which can help with maintaining legacy systems.

Key security and privacy risks

Despite these benefits, there are inherent risks in relying on AI coding assistants. Here are four of the most pressing concerns:

  1. Sensitive information, like API keys and proprietary code, can be inadvertently exposed through AI coding assistants, particularly if they are cloud-based. Ensuring data privacy requires careful consideration of contractual terms and data handling practices.
  2. AI-generated code can be error-prone; a 2021 study found that roughly 40% of the programs produced by an AI coding assistant contained security vulnerabilities.
  3. AI tools can be susceptible to attacks such as package hallucination, where an assistant suggests a dependency that does not actually exist and that an attacker can later register under the same name, and prompt injection, where malicious input manipulates the assistant's output to introduce vulnerabilities (see the dependency-check sketch after this list).
  4. Over-reliance on AI-generated solutions may weaken the critical review phase, particularly among newer developers, which could lead to flawed code being accepted uncritically.
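
Package hallucination in particular lends itself to a simple automated check. The following is a minimal sketch, not a complete defence: it verifies that a dependency suggested by an assistant actually exists on PyPI (via PyPI's public JSON API) before anyone installs it. The package names in the example are hypothetical.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    Uses PyPI's public JSON API: a 404 response means the package
    does not exist and may be a hallucinated or typosquatted name.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # surface other HTTP errors rather than guessing

# Hypothetical example: the second package name is deliberately made up
for pkg in ["requests", "reqeusts-auth-helper"]:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: found on PyPI - still review maintainer and history")
    else:
        print(f"{pkg}: NOT FOUND - do not install without investigation")
```

Existence alone is not proof of safety: attackers have registered previously hallucinated names precisely so that they would resolve, so unfamiliar dependencies still warrant manual review of their maintainers and history.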

Actions to consider

To harness the advantages of AI coding assistants while mitigating the risks, the joint report recommends that organisations adopt these best practices:

  • Opt for enterprise versions that allow organisations to control data-sharing settings and prevent inadvertent training data contributions (see my colleague Tracy Elliott’s blog for more on this subject).
  • Incorporate AI tools into your data protection impact assessment (DPIA) and privacy notices to assess and document potential impacts on data protection and security.
  • Establish an AI usage policy that sets out which tools are approved and how developers may use them.
  • Don’t treat AI coding assistants as a substitute for experienced developers; use them alongside systematic risk analyses.
  • Developers should critically review AI-generated code, conduct vulnerability scans, and collaborate with security teams to ensure the integrity of the generated code (a minimal example of such a check follows this list).
  • Enterprises are advised to control access to these tools, preferably using secure, enterprise-managed configurations, and to provide guidelines on what data can be shared with these assistants.
  • Developers and security teams should receive specific training on the risks and safe use of AI tools.
  • When engaging with new vendors, review how their AI tools handle data, including retention periods and whether the data is used for training.
  • Align AI adoption with local and international regulations (e.g., GDPR, NIS2 Directive) to ensure compliance and avoid penalties.
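
As a concrete illustration of the review point above, here is a minimal sketch of a pre-commit-style check that flags hard-coded credentials in AI-generated code before it is committed or shared. The patterns, file paths, and script name are illustrative assumptions, not an exhaustive ruleset; dedicated scanners such as gitleaks or truffleHog are far more thorough.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship much larger rulesets.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for suspected secrets in one file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    # Hypothetical usage: python scan_secrets.py src/
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    all_findings = [f for p in root.rglob("*.py") for f in scan_file(p)]
    for finding in all_findings:
        print(finding)
    sys.exit(1 if all_findings else 0)  # non-zero exit fails a CI or pre-commit step
```

Because the script exits with a non-zero status when it finds a match, it can be wired into a pre-commit hook or CI pipeline so that suspect code is blocked before it leaves the developer's machine.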

Integrating AI coding assistants into development workflows must go hand in hand with privacy-conscious practices. Organisations need to balance the benefits of AI against its potential risks, maintaining a proactive stance on data protection so they can leverage these tools responsibly.

Although AI might initially seem unfamiliar or daunting, it has the potential to be a transformative tool for organisations. A theme of IRISSCON 2024 was that some experts believe AI has more benefits for defenders than opportunities for cybercriminals. Despite the associated risks, it’s possible to use AI responsibly and effectively when the appropriate safeguards and compliance measures are in place.


About the Author: Clíona Perrick

Clíona Perrick is a Data Protection Consultant with BH Consulting.
