Navigating the Expanding Role of Privacy Professionals in the Age of AI

Rebecca // January 19

AI governance is increasingly handled by privacy teams.  

Privacy professionals at medium-sized and large companies are a busy bunch. Their roles encompass a wide range of tasks, from staying abreast of evolving international privacy regulations to addressing data subject access requests (DSARs), conducting regular audits of data usage, and educating colleagues about the importance of data privacy. And while many privacy professionals relish the diversity and interplay of their tech and soft skills, it’s undeniable that this can be a demanding job.

As if running a privacy program weren’t enough, in 2023 more and more privacy professionals were also made responsible for AI practices and governance. Talking to other privacy professionals, I’ve seen this shift happen in two ways: bottom-up or top-down.

The bottom-up approach occurs organically when teams throughout the company reach out to the privacy team because there is no one else to turn to. In these cases, business units have been eager to leverage the hype surrounding large language models (LLMs) by fine-tuning models on company data. For example, a customer service team might envision automating responses using an LLM trained on past customer tickets. In such instances, collaboration with the privacy team is typically initiated only after the AI initiative has begun, when the department seeks guidance or approval to proceed.

When companies lack a well-established AI governance program, privacy professionals become the natural go-to partners for AI initiatives. Because this bottom-up expansion of responsibilities happens gradually and organically, privacy pros typically don’t receive any additional training or resources to go with it.

On the other hand, some privacy professionals have been explicitly tasked by company leadership to focus on AI. In these cases, privacy teams or individuals assume responsibility for preparing for AI regulations, assessing AI risks, establishing company-wide AI policies, and training various departments on responsible AI practices. The resources available to these privacy teams vary, with some receiving additional training or personnel support, while others continue to manage their AI responsibilities alongside their existing workload.

As the role of privacy professionals expands in the age of AI, companies must recognize how crucial they are to safeguarding user privacy and ensuring responsible AI practices. Take stock of whether your privacy team is also handling AI, and how that came about. To support these professionals in their ever-evolving responsibilities, invest in their training and resources, and add personnel where necessary so they can manage their growing workload effectively.

The evolving role of privacy professionals in the age of AI is a testament to their adaptability and expertise. As AI continues to permeate organizations, privacy professionals will play an increasingly pivotal role in ensuring that AI is used responsibly, ethically, and in a manner that protects user privacy.  

By empowering your privacy professionals with the skills and support they need, you’ll not only enhance their ability to protect user privacy but also position your organization as a leader in responsible AI practices. In the dynamic world of AI, privacy is not just a compliance issue; it’s a cornerstone of innovation and trust.

Get your free E-Book here

Start your journey to adversarial privacy testing with our free e-book. I've written this book for privacy and security professionals who want to understand privacy red teams and privacy pen testing.

  1. When is an adversarial privacy test helpful?
  2. Who are privacy adversaries and what are their motivations?
  3. When to build a team in-house versus hiring an external team?
About the Author Rebecca

Dr. Rebecca Balebako builds data protection and trust into software products. As a certified privacy professional (CIPP/E, CIPT, Fellow of Information Privacy) ex-Googler, and ex-RANDite, she has helped multiple organizations improve their responsible AI and ML programs.

