
New NCSC Guidelines for AI: The Importance of Security


The recently published Guidelines for Secure AI System Development—a collaboration between the UK’s National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and an array of international partners—offers vital insights for companies choosing AI vendors or building with AI.  

This comprehensive guide emphasizes that while AI brings numerous benefits, it also introduces unique security risks. As such, businesses must ensure their AI systems are developed, deployed, and operated securely and responsibly. The guidelines highlight four critical areas:

  • Secure design: involves guidelines for the design stage of AI system development, covering understanding risks, threat modeling, and specific considerations for system and model design.  
  • Secure development: applies to the development stage, focusing on supply chain security, documentation, and asset and technical debt management.  
  • Secure deployment: encompasses guidelines for the deployment stage, including protecting infrastructure and models from compromise, threat, or loss, and developing incident management processes.  
  • Secure operation and maintenance: relates to actions relevant after system deployment, involving logging, monitoring, update management, and information sharing.

Across all four areas, the recurring themes are understanding risk, securing the supply chain, managing incidents, and monitoring systems continuously.
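To make the operation and maintenance area more concrete, here is a minimal, illustrative sketch in Python of structured logging around an AI inference call. It is not taken from the guidelines themselves; the `call_model` wrapper, the `model` callable, and its `version` attribute are hypothetical placeholders for whatever your vendor's API actually provides.

```python
# Illustrative sketch of the "secure operation and maintenance" theme:
# log each model call with enough context to support monitoring and
# incident response. Names here are hypothetical placeholders.
import hashlib
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_audit")

def call_model(model, prompt: str) -> str:
    """Wrap an AI inference call with an auditable, structured log record."""
    request_id = str(uuid.uuid4())
    started = time.time()
    output = model(prompt)  # placeholder for the vendor's inference API
    logger.info(json.dumps({
        "event": "model_inference",
        "request_id": request_id,
        "model_version": getattr(model, "version", "unknown"),
        # Hash rather than store raw prompts, so logs support monitoring
        # without retaining potentially sensitive input data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_ms": round((time.time() - started) * 1000, 1),
        "output_chars": len(output),
    }))
    return output
```

Records like these can feed directly into the monitoring and incident-management processes the guidelines call for at the deployment and operation stages.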

For businesses selecting AI vendors, these guidelines underline the importance of choosing partners who prioritize security, governance, transparency, explainability, and accountability. It’s not just about the technological prowess of AI, but also about how it’s built and maintained. Security must be a core aspect throughout the AI system’s lifecycle.  

In the current landscape, it’s essential for businesses to critically evaluate potential AI vendors, looking beyond immediate functionality to the long-term implications of security and ethical use. By aligning with vendors that adhere to these guidelines, businesses can harness the transformative power of AI securely and responsibly, upholding high standards of trust and integrity in the digital age.

If you’re interested in learning more about how to introduce responsible AI practices to your business, check out this blog!
