An Artificial Intelligence (AI) solution is only as good as the data that trains it. The more complete the data on which an AI is trained, the more valuable its output will be. An AI gains insights from the data and can make predictions, automate processes and perform the other tasks it is trained to do.

But just like a human being, if an AI has nothing on which to base its predictions, its output will be worthless. Worse still, if an AI system is fed deliberately false data, its outcomes, predictions and actions can have devastating results.

Imagine a scenario in the health sector where an AI solution makes diagnostic decisions based on large health datasets and the models trained on them. If this data is tampered with, the AI can make incorrect and harmful decisions, potentially leading to loss of human life.
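To make that tampering risk concrete, the short Python sketch below is an illustration only: it uses synthetic data from scikit-learn rather than any real medical dataset, and the 40 per cent flip rate is an arbitrary assumption. It trains the same simple classifier twice, once on clean labels and once after an attacker has silently relabelled part of the "disease" class as "healthy", a basic form of data poisoning that shows up as missed positive cases.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a diagnostic dataset (illustration only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Simulated poisoning attack: relabel 40% of the positive ("disease")
# class as negative ("healthy"), biasing the model toward missed cases.
rng = np.random.default_rng(0)
positives = np.where(y_tr == 1)[0]
flipped = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# The poisoned model misses far more true positives on held-out data.
print("recall on positives, clean:   ", recall_score(y_te, clean_model.predict(X_te)))
print("recall on positives, poisoned:", recall_score(y_te, poisoned_model.predict(X_te)))
```

Because the flip here is deliberately targeted at one class, the harm appears as a sharp drop in recall, exactly the kind of silent failure a clinician relying on the system would not see coming.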

Therefore, robust cybersecurity measures are essential for protecting AI systems from malicious manipulation and for ensuring the integrity and reliability of their operations.

Recently, the UK's National Cyber Security Centre (NCSC), together with the US Cybersecurity and Infrastructure Security Agency (CISA), developed guidelines for secure AI development, which another 16 countries have agreed to implement.

These guidelines are crucial for ensuring that AI systems function as intended, are available when needed, and do not reveal sensitive data to unauthorised parties. The guidelines emphasise the importance of developing, deploying and operating AI systems in a secure and responsible manner, considering the novel security vulnerabilities unique to AI.

The guidelines are structured around four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. Each section of the guidelines offers detailed advice and best practices for providers of AI systems, whether the systems are created from scratch or built upon existing tools and services. The guidelines are intended for a broad audience, including data scientists, developers, managers, decision-makers and risk owners, urging all stakeholders to read and apply these guidelines.

  1. Secure design: The guidelines emphasise the importance of incorporating security at the earliest stages of AI development. This includes assessing potential risks and vulnerabilities specific to AI technologies. They advocate designing AI systems that are resilient to attacks and can maintain data integrity.
  2. Secure development: In the development phase, the focus is on implementing robust coding practices and safeguarding the AI supply chain. This involves scrutinising source code, managing dependencies and ensuring that development tools are secure (see the sketch after this list for one simple supply-chain safeguard). The guidelines encourage regular security audits and stress the need for transparency in AI algorithms.
  3. Secure deployment: Deployment of AI systems must be done with the utmost care, ensuring that the deployment environment is secure. The guidelines recommend rigorous testing procedures, including penetration testing and vulnerability scanning, to identify and address potential security issues before widespread deployment.
  4. Secure operation and maintenance: Once AI systems are operational, continuous monitoring and maintenance become crucial. The guidelines suggest regular updates and patch management to mitigate emerging threats. They also recommend the implementation of incident response plans to handle any security breaches effectively.
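One simple, widely used supply-chain safeguard of the kind the development and deployment sections describe is verifying a downloaded model artifact against a digest published out-of-band by its provider before loading it. The Python sketch below is a minimal illustration, assuming a hypothetical model file and a placeholder digest; it is not taken from the guidelines themselves.

```python
import hashlib
import sys

# Digest published out-of-band by the model provider
# (placeholder value; a real check would use the provider's digest).
EXPECTED_SHA256 = "replace-with-the-published-sha256-digest"

def verify_artifact(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the expected one."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

if __name__ == "__main__":
    artifact = sys.argv[1]  # e.g. a path to a downloaded model file
    if not verify_artifact(artifact, EXPECTED_SHA256):
        sys.exit("Integrity check failed: refusing to load the model artifact.")
    print("Integrity check passed; safe to proceed with loading.")
```

The design point is that the check happens before the artifact is ever loaded or executed, so a tampered model downloaded from a compromised source is rejected rather than deployed.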

In general, the guidelines advocate a 'secure by default' approach, closely aligned with established cybersecurity practices. The principles prioritised include taking ownership of security outcomes for customers, embracing radical transparency and accountability, and building organisational structures in which security is a top priority.

It should also be recognised that AI systems are subject to novel security vulnerabilities which necessitate a different approach to cybersecurity. The guidelines introduce concepts like "adversarial machine learning", where attackers exploit vulnerabilities in machine learning components, including hardware, software, workflows and supply chains. This can lead to unintended behaviours in AI systems, such as compromised performance, unauthorised actions or the extraction of sensitive information.
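As one concrete illustration of adversarial machine learning, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy logistic "model" in plain Python/NumPy. The model, its weights and the perturbation budget are all assumptions chosen so the example stays self-contained; real attacks target far more complex systems, but the principle, a small input perturbation aligned with the loss gradient that flips the prediction, is the same.

```python
import numpy as np

# Toy linear "model": score = w.x + b, predict class 1 if sigmoid(score) > 0.5.
rng = np.random.default_rng(1)
w = rng.normal(size=10)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

# A correctly classified input with true label 1.
x = w / np.linalg.norm(w)  # deliberately placed on the "class 1" side
y = 1
assert predict(x) == y

# FGSM step: for logistic loss, the gradient w.r.t. x is (sigmoid(w.x+b) - y) * w,
# so nudging each feature in the sign of that gradient increases the loss.
grad_x = (sigmoid(w @ x + b) - y) * w
eps = 1.0  # perturbation budget; exaggerated here so the flip is guaranteed
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))      # 1
print("adversarial prediction:", predict(x_adv))  # flipped to 0
```

In realistic settings the budget is tiny, e.g. pixel changes invisible to a human, yet the same gradient-following logic can still flip a classifier's output, which is why the guidelines treat these vulnerabilities as distinct from conventional software flaws.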

The guidelines are also a call to action, urging countries to recognise secure AI development as a cornerstone of their national cybersecurity strategies. Adopting a 'secure by default' approach, focusing on the entire lifecycle of AI systems from design to deployment and operation, and emphasising continuous vigilance and adaptation to emerging threats are key components of this strategy.

In conclusion, the adoption and implementation of AI development security guidelines should be a priority for all nations, including Cyprus, which is rapidly gaining recognition for its burgeoning tech sector.

As Cyprus continues to grow as a tech hub, with numerous development companies at its core, implementing these guidelines is essential to harness the full potential of AI technologies.

This not only safeguards against the evolving landscape of cybersecurity threats but also ensures that the development and deployment of AI in Cyprus are conducted in a manner that is innovative, efficient and secure.

Originally published by Cyprus Mail.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.