News & Analysis

Google Plays it SAIF on Generative AI

The company has unveiled a secure AI framework that could ensure the security of all future AI models as part of risk mitigation

At a time when generative AI has raised uncomfortable questions about its safety and security and their impact in the hands of bad actors, Google has launched a Secure AI Framework (SAIF) that, it claims, is designed to make all future AI models secure by default. If one were to believe Google, this could well be the magic bullet that triggers AI towards humanity’s good. 

Of course, all of what we know comes only from a blog post by Google where it states that SAIF is a conceptual framework for security AI systems that would mitigate all AI-related risks – be it model theft, data poisoning during machine learning and training, malicious code injections to extract confidential data and much more. 

Responsible actors need to act responsibly

According to Phil Venebles, Google Cloud VP and Royal Hansen, VP of privacy, safety and security engineering at Google, “a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they’re secure-by-default.” 

The blog underscores the potential of generative AI and says the pursuit of progress within these new frontiers of innovation requires industry-wide security standards for building and deploying the technology in a responsible manner. The authors say Google believes that SAIF would be the way to achieve these goals. 

What’s SAIF and how does it work?

The blog post provides a brief overview of SAIF through a PDF document (download it here) and goes on to provide use cases and how the framework can be implemented in another document here. In brief the post notes that there are six core elements around which the secure AI framework would function. These are: 

  • Expand strong security foundations to the AI ecosystem – Google recommends leveraging secure-by-default infrastructure protections and expertise built over the last two decades and starting to scale and adopt those security foundations in the context of AI and the evolving threat landscape.
  • Extend detection and response to bring AI into an organization’s threat universe – Security teams should collaborate with trust and safety, threat intelligence and counter-abuse teams to monitor inputs and outputs of generative AI systems to detect anomalies and use threat intelligence to anticipate attacks.
  • Automate defenses to keep pace with existing and new threats – As adversaries may use AI to scale their impact, it is essential for organizations to also use AI and its emerging capabilities to stay nimble and cost-effective in protection efforts.
  • Harmonize platform-level controls to ensure consistent security – Consistency across control frameworks can support AI risk mitigation and scale protections across different platforms and tools for all AI applications. This includes extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. 
  • Adapt controls to adjust mitigations and create faster feedback loops for AI deployment – Constant testing of implementations through continuous learning can ensure detection and protection capabilities address the changing threat environment. This includes techniques like reinforcement learning based on incidents and user feedback and involves steps such as updating training data sets, fine-tuning models to respond strategically to attacks etc. 
  • Contextualize AI system risks in surrounding business processes – Conduct assessment of the end-to-end business risk including data lineage, validation and operational behavior monitoring for certain types of applications and construct automated checks to validate AI performance.

A good start, but more needs to be done

Industry experts believe that while SAIF could be a good start to observe how AI functions as they gain competence. The challenge would be to constantly keep abreast of their actions to check whether these are aligned with human values. While a security protocol is welcome while using generative AI, there is much more that needs to be done on this front. 

At best what Google’s announcement does is indicate that there is a global view that companies are taking on the risks around generative AI and its security. In fact, a recent report by Forrester had spoken about zero-trust for AI and SAIF appears to be in sync with this idea.  

Leave a Response