AI Audit – Continuous Testing and Human Review

The Story

Reputation and trust are vital for your company.  At the same time, you want to take advantage of the new opportunities offered by generative AI, LLMs, or AI in general.  

The AI apps your company offers should be useful and delightful products.  They should not create problems for your company -- like data breaches, inaccuracy, or data leaks.  

How can you check the relevant threats and run continuous monitoring on your apps?  

The Problem:

  • AI risks from privacy and security are under scrutiny.  You want to be able to proactively face changes instead of being on the back foot.  
  • You aren't sure how to measure and test the effectiveness of your AI protections. 
  • You need to provide assurance to regulators and consumers that your company is taking the right data protection steps.  
  • The EU AI Act requires high-risk AI products to do security testing, so you are interested in complying.  
  • While several tools exist that help you measure "Responsibility," it isn't clear what the metrics mean or which ones are most important for your business strategy.

The Solution:

We offer a proven method for identifying the strengths and weaknesses of your AI, ML, LLM, and generative AI app.  Do you want to identify harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system?

  • AI Red Teams are proactive and documented steps to measure your AI security and privacy.
  • Threat modeling with stakeholders drives alignments about priorities and budgets.
  • Risk assessment and threat analysis identify priorities related to privacy, responsibility, fairness, security, and transparency.

How we work to test you AI apps.

step 1

Business Goals and Threat Assessment

Based on your organization's context, we outline the goals and threats of a new AI product.  

We run a stakeholder alignment meeting to drive consensus on the desired behavior of an AI app.

At the end of this step, you receive a list of desired features and guardrails for your AI app.  

step 2


We define you'll want to test against for safe and responsible AI.

We develop a plan of action for the right balance of continuous audit and human testing.  We work with you to define the scope and rules of engagement. 

At the end of this step, you'll have clear metrics for your responsible AI app, and a test plan.  Whether you have one AI project or multiple, this framework for responsible AI will allow your developers to understand how to translate and measure business goals.

step 3

Develop and run the automated tests with human oversight

Our diverse team of experts will emulate adversaries and run tests on the AI app. 

We build an automated toolkit to for continuous monitoring of the AI app.

At the end of this step, you'll understand what attacks and guardrails were effective.  

step 4

Report and Transfer

We help you understand how to improve you  AI app through a report on the options and next steps. 

We hand over the red team toolkit and provide training on updating and evaluating the results. 

At the end of this step, you'll have a process that you can use to continuously monitor your AI apps.

Schedule a Meet and Greet

Do you want to talk to us about data protection? Let's have a 15-minute meet and greet.  

Call includes:

  • Honest answers to your questions about whether our service is a good fit for your organization.
  • Adversary elicitation: to improve your threat models, we can brainstorm privacy adversaries.  

Call does not include:

  • Sales Pitch 
  • No strings attached: "It isn't for us" is an acceptable decision

What others say about working with us:

M. Le Tilly - Privacy Investigation at Google

Rebecca is both a highly-skilled privacy engineer and a reliable manager.  She consistently provided relevant pointers and asked critical questions based on her extensive knowledge of current and upcoming privacy regulations and trends. She's capable of understanding complex systems quickly and applies a privacy lens systematically.  Working with her has been a blast and an amazing learning experience!

A. Wyeth - Tech Lead at Google

 Rebecca's data protection knowledge and ability to think through what was best for the user were essential to the success of our Privacy Red Team exercises. She helped us to identify and mitigate potential privacy risks, and she provided valuable insights on how to communicate our findings to get the best impact. 

 I highly recommend her to anyone looking for a data protection expert.

G. Honvoh Chabi -  Manager at Tech Startup

Rebecca ensures processes are followed and tasks effectively completed within the time frame by her team and any third party involved. She is sociable and passionate about her job.

We bring expertise on building Responsible AI

Responsible AI frameworks, development, and monitoring.

  • Expert data protection experience: We've been working on privacy, security, and fairness since 2010. We've helped start-ups, large tech companies, and specialized government agencies define strategies for machine learning, including face recognition and speech recognition. 
  • Values-based engineering and design: Are you a values-driven organization that wants to make the world a better place? So are we!  We are committed to compassionate development for all technology users. 
  • Translating business to technical requirements: We help overcome gaps in understanding and language between different teams.  We translate business goals into engineering requirements.  
  • International and multi-cultural:  We have work experience in 5 countries, including in Europe and the USA. We offer an international perspective. We are based in Switzerland and are available during European work hours.

Get your free Privacy Testing E-Book!

Start your journey to adversarial privacy testing with our free E-book.  I've written this book for privacy and security professionals who want to understand privacy red teams and privacy pen testing.

  1. 1
    When is an adversarial privacy test helpful?
  2. 2
    Who are privacy adversaries and what are their motivations?
  3. 3
    When to build a team in-house versus hiring an external team?

Our Vision

 We work together with companies to build Responsible AI solutions that are lasting and valuable. 

Privacy by Default


Quality Process