Privacy Red Teams

Rebecca // August 7

Some large companies are building privacy red teams to test their privacy controls. Other companies are exploring adding privacy capabilities to their security red team efforts. This introduction is for people who want to learn how a privacy red team is different from a security red team.

What is a red team?

A security red team is a group of ethical hackers employed by a company to attack the company’s own systems, mimicking real-world exploits. Red team members are employees or consultants hired specifically to hack an organization; they do so with permission and typically follow specified rules of engagement. If a red team succeeds in infiltrating a private network, stealing intellectual property, or transferring money from corporate accounts, it will roll back those actions and work with internal teams to prevent such attacks from occurring in the future.

The US National Institute of Standards and Technology (NIST) states that a red team’s objective is to “improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders in an operational environment.”

Why have a privacy red team?

Companies subject to the GDPR or other privacy regulations may already be integrating privacy by design, in which privacy is an integral part of designing and building any new system that collects data about people.

Privacy red teams play a role in companies with strong privacy-by-design programs because they test systems after they are built. Many steps lie between a design and its implementation, and errors can creep in at each one. Privacy red teams test whether system implementations actually deliver the robust privacy protections they were designed for.

How does a security red team differ from a privacy red team?

NIST defines privacy as: “The right of a party to maintain control over and confidentiality of information about itself.”

In many ways, a privacy red team is like a security red team: it improves enterprise privacy by demonstrating successful attacks on a company’s systems so that such attacks can be prevented. Privacy red teams are composed of ethical hackers with engineering, security, and privacy skills. Unlike security red teams, however, a privacy red team typically focuses on data about real people, often customers or employees of the company. A privacy red team may also consider end-user understanding: good privacy practices ensure both that data is protected and that the user’s experience of control and confidentiality is reliable.

Focus on people’s data

Privacy red teams focus on data about people. Security red teams may target company resources such as intellectual property or financial systems; a privacy red team is unlikely to be interested in those corporate resources. Instead, it targets systems that contain data about people. These people are the “data subjects”, and they may be the company’s end users or its employees.

Privacy red teams may try to simulate both insider threats and external threats to people’s data.

External threats include attempts to steal customer financial data, passwords, or other information whose loss may cause real harm to users. They may also include attempts to gain access to personal data for strategic benefit. One example is the Office of Personnel Management (OPM) breach, in which the personnel records of 20 million Americans were stolen, likely by a nation-state actor seeking political gain. (My own fingerprints and employment data were stolen in that attack.)

Insider threats are not limited to malicious attacks. They also include employees poking around in user data beyond its specified purpose, which can happen when employees are not well trained on privacy or when appropriate controls aren’t in place. One example is the recent report of Tesla employees reviewing videos from car cameras and sharing embarrassing moments with other employees.

Focus on user controls

Privacy red teaming also looks at user controls and expectations. Privacy red teams may compare the privacy notice to what the system actually does. For example, they may try to confirm that data is actually deleted when a user deletes an account. They may also try to use personal data for purposes other than those the user agreed to: can billing data be used by a marketing team to send ads, even though the user provided that information only for billing?
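To make the deletion check concrete, here is a minimal sketch in Python. The base URL, endpoints, and token handling are all hypothetical stand-ins for whatever system is under test; the point is simply to delete an account through the normal user flow and then probe every known read surface for leftover data.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def verify_account_deletion(session_token: str, user_id: str) -> bool:
    """Delete an account, then check that its data is really gone (hypothetical API)."""
    headers = {"Authorization": f"Bearer {session_token}"}

    # Step 1: request account deletion through the same flow a real user would use.
    resp = requests.delete(f"{BASE_URL}/account/{user_id}", headers=headers)
    resp.raise_for_status()

    # Step 2: probe every known read surface. A passing test means each one
    # now refuses to return the data, rather than serving stale copies.
    read_surfaces = [
        f"/account/{user_id}",
        f"/export/{user_id}",
        f"/search?user={user_id}",
    ]
    for endpoint in read_surfaces:
        check = requests.get(BASE_URL + endpoint, headers=headers)
        if check.status_code == 200:
            print(f"FAIL: {endpoint} still returns data after deletion")
            return False
    return True
```

In a real engagement, the list of read surfaces would come from data-flow mapping: backups, caches, analytics pipelines, and data shared with third parties are where deleted data typically lingers.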

A privacy red team might successfully execute a privacy attack on a system that is working as designed. This can happen when the system wasn’t designed to account for today’s privacy values and expectations. For example, the privacy world’s definition of an identifiable dataset has changed in the past 10 years, and the definitions of anonymization and pseudonymity are being updated as new attacks are found. An anonymization procedure built 10 years ago may no longer meet today’s standards, so a privacy red team may test such datasets against current standards and known attacks.
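One simple, well-known check of this kind is k-anonymity: how small is the smallest group of records sharing the same quasi-identifiers (attributes like ZIP code and birth year that are individually harmless but identifying in combination)? The sketch below, with invented column names and sample records, shows the idea using pandas.

```python
import pandas as pd

def min_k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group sharing one combination of quasi-identifiers.

    A result of 1 means at least one record is unique on those attributes
    alone -- a classic re-identification risk in a "de-identified" dataset.
    """
    return int(df.groupby(quasi_identifiers).size().min())

# Toy "anonymized" dataset: names are gone, but quasi-identifiers remain.
records = pd.DataFrame({
    "zip_code":   ["8001", "8001", "8002", "8003"],
    "birth_year": [1980, 1980, 1975, 1990],
    "gender":     ["F", "F", "M", "F"],
    "diagnosis":  ["flu", "cold", "flu", "asthma"],  # sensitive attribute
})

k = min_k_anonymity(records, ["zip_code", "birth_year", "gender"])
print(f"Dataset is {k}-anonymous")  # prints 1: two of the rows are unique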

Privacy red teams must do good, not cause harm

A privacy red team needs to establish clear rules about accessing and modifying people’s data. These rules will likely be informed by ethical user experience guidelines. A privacy red team attack should always begin with a clear understanding of the risks and benefits to the data subjects.

Risks should be eliminated where possible, for example by creating test accounts to attack instead of touching real people’s data; I highly recommend this approach. If risks can’t be eliminated, they should be minimized, for example by setting up protective controls for user data touched by a red team attack or by using willing participants who have agreed to take part. Furthermore, the benefits of a red team exercise should outweigh the potential harms. To maximize those benefits, identified vulnerabilities should be fixed as quickly as possible, and any attack paths and outcomes should be rolled back.
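As a small illustration of the test-account approach, here is a sketch that seeds synthetic decoy accounts using the Faker library. The field names and tagging convention are assumptions for illustration, not a prescribed format; the tag exists so red-team accounts are easy to find and clean up after the exercise.

```python
from faker import Faker  # third-party library for generating realistic fake data

fake = Faker()

def make_test_account(tag: str = "privacy-redteam") -> dict:
    """Build a synthetic account payload so the exercise never touches real people."""
    # Field names below are invented for illustration.
    return {
        "name": fake.name(),
        "email": f"{tag}+{fake.uuid4()}@example.com",  # example.com is a reserved test domain
        "address": fake.address(),
        "created_by": tag,  # marker for post-exercise cleanup
    }

# Seed a handful of decoy accounts before the engagement begins.
test_accounts = [make_test_account() for _ in range(5)]
```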

In summary, privacy red teams are an emerging concept in the privacy world. In many ways, they are similar in form and method to security red teams. However, privacy red teams specifically focus on people’s data and on the user experience of controlling that data. Furthermore, privacy red teams must reduce any harm to the people whose data they access in the exercise, starting with a cost-benefit analysis of the outcomes of their attack.

Want a privacy red team to streamline and improve your data protection?

We offer AI auditing and privacy red teams.

Learn More:

Get your free E-Book here

Start your journey to adversarial privacy testing with our free e-book. I’ve written this book for privacy and security professionals who want to understand privacy red teams and privacy pen testing.

  1. When is an adversarial privacy test helpful?
  2. Who are privacy adversaries and what are their motivations?
  3. When to build a team in-house versus hiring an external team?
About the Author: Rebecca

Dr. Rebecca Balebako builds data protection and trust into software products. As a certified privacy professional (CIPP/E, CIPT, Fellow of Information Privacy), ex-Googler, and ex-RANDite, she has helped multiple organizations improve their responsible AI and ML programs.

Our Vision

We help companies build data protection that their users love.

Privacy by Default

Respect

Quality Process

Health

Inclusion
