Secure Enterprise AI Adoption: A Primer

Enterprise AI is here, yet many IT departments lack strategies to govern these AI platforms securely. This quick primer discusses the risks of ungoverned AI, the secure AI operations lifecycle, and core activities to consider when governing AI across the organization.

1. Overview

Artificial intelligence is a powerful technology that can unlock many opportunities within the enterprise. Yet it is also a technology whose inner workings few people truly understand. Fortunately, leveraging AI is much like operating a vehicle: one need not be an automotive engineer to drive safely. Accordingly, the goal of this framework is to focus on the safe operation of AI within the enterprise across its entire adoption lifecycle.

This document is intended as a primer rather than a full-fledged framework. It is the 30,000-foot view of AI governance, articulating the major components of an AI governance program.

2. Artificial Intelligence Risks  

Before examining the details of how to safely operate AI capabilities, it is important to understand the risks associated with artificial intelligence. As with any information technology solution, AI can be hacked, manipulated, and generally misused for malicious purposes, with significant consequences for the AI operator. Furthermore, AI’s complexity and often opaque architecture make securing such systems especially challenging.

AI risks can be organized into four key categories: enterprise data security, model security, prompt security, and regulatory compliance. Each category is described in further detail below.

  1. Enterprise Data Security – Focuses on securing the data flowing into and out of AI systems. This is one of the highest-risk areas for AI adoption.
  2. Model Security – The governing parameters, or “rules,” that ensure machine learning models behave in a manner that is legal, ethical, and secure.
  3. Prompt Security – Prompts are essentially inputs to AI models and, like any type of computer input, are subject to malicious injection and manipulation.
  4. Regulatory Compliance – In addition to existing data protection laws, dozens of emerging AI-centric laws will soon be enacted. Keeping abreast of these regulations is crucial for ensuring compliance with local, federal, and global regulatory requirements.
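To make the prompt-security risk above concrete, the sketch below screens user input for a few well-known injection phrases before it reaches a model. The patterns and function are hypothetical and illustrative only; real defenses require layered controls, not keyword matching alone.

```python
import re

# Illustrative only: a handful of common injection phrasings. A production
# control would combine many layers (policy models, output filtering, least
# privilege), not a static keyword list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"reveal your (system )?prompt",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the prompt looks safe, False if it matches a known
    injection pattern and should be blocked or flagged for review."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A governance program would treat a screen like this as one control among many, with its hit rate monitored and its patterns reviewed as attack techniques evolve.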

3. Secure AI Operations Lifecycle

The ultimate goal of an AI governance framework is to ensure security, safety, and compliance across the entire lifecycle of AI adoption. While many organizations focus on individual pieces of AI, such as data security or procurement checklists, that approach is incomplete and leaves the organization at risk. A simplified view of the overall AI adoption lifecycle is illustrated below:

Governance occurs across all phases of the secure AI operations lifecycle, especially as the models grow and iterate over time.

4. Activities Checklist

Various activities must occur across the different stages of the secure AI operations lifecycle. The non-exhaustive list below outlines several key activities to incorporate into your overall lifecycle strategy:

Although the illustration above is linear, the process is cyclical: post-deployment operations and monitoring naturally feed back into the plan and design phase as new requirements and feature requests emerge.

Appendix

Below is a starting point for examining AI vendor security and governance capabilities. While the initial vetting of a vendor is important, AI will evolve as adoption increases within the enterprise. Therefore, it is far more important to continuously monitor and govern AI across the lifecycle described above.

1. Sample AI Vendor Security Questionnaire

Core IT Security Controls

  1. Does the solution support SAML single sign-on (SSO)?
  2. Can we automatically provision and deprovision users via APIs, such as SCIM?
  3. Can we programmatically access sign-in and audit logs via API?
  4. Does the product support source IP allowlists to restrict access?
  5. Does the system implement role-based access control (RBAC)?
  6. Is the solution single-tenant or multi-tenant?
  7. Where is the solution hosted? In your own dedicated data center, or in a public cloud such as AWS?
  8. Do you hold SOC 2 and ISO/IEC 27001 certifications?

Data Architecture

  9. Explain how data access controls from source systems are honored within this solution.
  10. Are third-party data sources used to train our enterprise-specific models?
  11. If third-party data is used, have all original content creators consented to its use?
  12. Will our enterprise data ever be used, anonymized or not, by other entities?
  13. Can the solution crawl Internet sources on demand, such as from user prompts?
  14. Will the solution contain our customer information?

Explainability and Interpretability

  15. Are outputs explainable? That is, can we determine how the model reached a particular conclusion in a reasonably intuitive way?
  16. Will prompts, or any other form of input, be accessible to the public Internet?

Generative AI

  17. If the solution can create novel works, such as programming code, images, audio, or video, where does the training data come from? If from third-party sources, are all licenses royalty-free and open source?

Model Training and Validation

  18. Are administrator-level prompt engineering controls available to ensure the model behaves fairly and ethically?
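As an illustration of the log-access question in the checklist, the sketch below shows what programmatically pulling sign-in and audit logs from a vendor might look like. The endpoint path, bearer-token scheme, and response shape are assumptions for illustration only; consult the vendor's actual API documentation.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical sketch: the /v1/audit-logs endpoint, bearer-token header, and
# {"events": [...]} response shape are assumptions, not a real vendor API.

def build_audit_log_request(base_url: str, token: str, since: str) -> urllib.request.Request:
    """Build an authenticated request for audit events newer than `since` (ISO 8601)."""
    query = urllib.parse.urlencode({"since": since})
    return urllib.request.Request(
        f"{base_url}/v1/audit-logs?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )

def fetch_audit_logs(base_url: str, token: str, since: str) -> list:
    """Retrieve sign-in/audit events, e.g. for ingestion into a SIEM or log archive."""
    with urllib.request.urlopen(build_audit_log_request(base_url, token, since)) as resp:
        return json.loads(resp.read())["events"]
```

Scheduling a pull like this on a regular cadence, and alerting on gaps, is one way to turn a questionnaire answer into an ongoing governance control.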
