AI governance is the proactive practice of monitoring and managing risk from artificial intelligence in a company or non-profit. Privacy, fairness, and safety are all parts of a responsible AI program. An AI governance program must understand and monitor risks from the use and deployment of AI, and keep track of regulatory requirements.
Some companies have a group responsible solely for AI governance. Others ask their corporate governance, privacy, or risk and safety teams to take on AI governance as part of their role. Some tech companies have also housed responsible AI teams within their engineering departments, and those teams can inform AI governance work.
Here are three immediate, foundational tips for anyone tasked with operating an AI governance program.
- Define artificial intelligence for your org so you don’t spend time internally haggling over what counts and what doesn’t. The current EU regulatory perspective is that AI includes machine learning models, so make sure your definition is broad enough to align with laws and regulations. The European Commission publishes its own definition, which is a useful reference point.
- Create an AI inventory. What are all the different places in your org that have AI projects? What are they doing, and how? I’ve seen first-hand how difficult it can be to find all the ML projects at a tech company, much less assess their privacy properties. Organizations can start now by keeping track of who is doing AI projects and how. This does not mean tracking every experiment, but rather every project that uses AI internally or includes it in a product. As a side note, tracking the details of each ML experiment can be useful for testing and debugging, but is probably too granular for a company-wide AI governance program.
- Create a culture of continuous monitoring. This is a lesson learned from privacy engineering that bears repeating. Any organization that thinks it can do privacy once – or AI governance once – and be done will fail to address changes in the product, the culture, and the law. (Learn more about privacy atrophy.) Furthermore, the belief that once is enough leads to frustration: I’ve had engineers tell me they “did privacy last year” and shouldn’t need to do it again. Setting the expectation that privacy and AI governance will change over time and need to be revisited can head off that frustration, and lets teams plan and budget for AI governance.
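The inventory and continuous-monitoring tips above can be sketched in code. This is a minimal illustration, not a prescribed tool: the entry fields, the project names, the 180-day review cadence, and the helper function are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Minimal sketch of an AI inventory entry. The field names (owner, purpose,
# uses_personal_data, last_reviewed) are illustrative, not a standard schema.
@dataclass
class AIProject:
    name: str
    owner: str                 # team accountable for the project
    purpose: str               # what the system does and where it is deployed
    uses_personal_data: bool
    last_reviewed: date        # date of the most recent governance review

# The inventory itself can start as a simple list that teams append to.
inventory = [
    AIProject("churn-model", "growth-team", "predicts customer churn",
              True, date(2024, 1, 15)),
    AIProject("doc-search", "platform-team", "internal semantic search",
              False, date(2023, 11, 2)),
]

# Continuous monitoring: flag projects overdue for a fresh governance review.
# The 180-day cadence is an assumed example, not a regulatory requirement.
REVIEW_INTERVAL = timedelta(days=180)

def overdue_for_review(project: AIProject, today: date) -> bool:
    return today - project.last_reviewed > REVIEW_INTERVAL

today = date(2024, 6, 1)
overdue = [p.name for p in inventory if overdue_for_review(p, today)]
print(overdue)  # only "doc-search" has gone more than 180 days unreviewed
```

Even a spreadsheet serves the same purpose; the point is that the inventory records an accountable owner and a review date, so that "revisit regularly" becomes a query rather than a memory exercise.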
In conclusion, launching an effective AI governance program demands a proactive, comprehensive approach. Recognizing the connection between privacy, fairness, and safety is fundamental to responsible AI practice, whether the work lands with a dedicated AI governance team or with existing corporate governance, privacy, or risk and safety teams. The three foundational tips outlined here – clearly defining artificial intelligence, creating an AI inventory, and fostering a culture of continuous monitoring – are the building blocks of a robust AI governance initiative. AI governance is not a one-time effort but an evolving process; setting that expectation lets organizations plan and budget so their governance stays responsive as products, culture, and the law change.