UK Releases Code of Practice for Securing AI

See the 13 general principles aimed at developers, operators, and organizations.

The UK government has developed a voluntary Code of Practice aimed at addressing AI cybersecurity risks.

This Code of Practice applies to developers, system operators, and organizations that create, deploy, or manage AI systems. According to the announcement, the Code “equips organizations with the tools they need to thrive in the age of AI. From securing AI systems against hacking and sabotage, to ensuring they are developed and deployed in a secure way, the Code will help developers build secure, innovative AI products.”

Specifically, the Code sets out 13 cybersecurity principles encompassing the software development lifecycle – secure design, secure development, secure deployment, secure maintenance, and secure end of life. The general principles are:

  1. Raise awareness of AI security threats and risks.
  2. Design your AI system for security as well as functionality and performance.
  3. Evaluate the threats and manage the risks to your AI system.
  4. Enable human responsibility for AI systems.
  5. Identify, track and protect your assets.
  6. Secure your infrastructure.
  7. Secure your supply chain.
  8. Document your data, models and prompts.
  9. Conduct appropriate testing and evaluation.
  10. Maintain communication and processes associated with end-users and affected entities.
  11. Maintain regular security updates, patches and mitigations.
  12. Monitor your system’s behavior.
  13. Ensure proper data and model disposal.

See the announcement for details.

02/17/2025
