AI Audit – Security and Privacy Red Teams for AI and ML

Identify safety and privacy threats to your AI, ML, LLM, and generative AI app, before someone else does.  AI Red Teams are proactive and documented steps to measure your AI security and privacy.

We test for unforeseen or undesirable system behaviors, security holes, data leaks, and toxic language.  

The Problem:

  • AI risks from privacy and security are under scrutiny.  You want to be able to proactively face changes instead of being on the back foot.  
  • The EU AI Act requires high-risk AI products to do security testing. You need to provide regulators and consumers assurance that your company is taking the right data protection steps. 
  • You have a reputation and brand to maintain.  Trust and loyalty are important. 

The Solution:

We offer a proven method for testing privacy and security in AI apps. 

  • AI Red Teams are proactive and documented steps to measure your AI security and privacy.
  • Threat modeling with stakeholders drives alignments about priorities and budgets.
  • Risk assessment and threat analysis identify priorities related to privacy, responsibility, fairness, security, and transparency.

We bring expertise on building Responsible AI

Responsible AI frameworks, development, and monitoring.

  • Expert data protection experience: We've been working on privacy, security, and fairness since 2010. We've helped start-ups, large tech companies, and specialized government agencies define strategies for machine learning, including face recognition and speech recognition. 
  • Values-based engineering and design: Are you a values-driven organization that wants to make the world a better place? So are we! We are committed to compassionate development for all technology users. 
  • Translating business to technical requirements: We help overcome gaps in understanding and language between different teams. We translate business goals into engineering requirements.  
  • International and multi-cultural: We have work experience in 5 countries, including in Europe and the USA. We offer an international perspective. We are based in Switzerland and are available during European work hours.
      

What others say about working with us:

M. Le Tilly - Privacy Investigation at Google


Rebecca is both a highly-skilled privacy engineer and a reliable manager.  She consistently provided relevant pointers and asked critical questions based on her extensive knowledge of current and upcoming privacy regulations and trends. She's capable of understanding complex systems quickly and applies a privacy lens systematically.  Working with her has been a blast and an amazing learning experience!

A. Wyeth - Tech Lead at Google


 Rebecca's data protection knowledge and ability to think through what was best for the user were essential to the success of our Privacy Red Team exercises. She helped us to identify and mitigate potential privacy risks, and she provided valuable insights on how to communicate our findings to get the best impact. 


 I highly recommend her to anyone looking for a data protection expert.

G. Honvoh Chabi -  Manager at Tech Startup


Rebecca ensures processes are followed and tasks effectively completed within the time frame by her team and any third party involved. She is sociable and passionate about her job.

Schedule a Meet and Greet

Do you want to talk to us about data protection? Let's have a 15-minute meet and greet.  

Call includes:

  • Honest answers to your questions about whether our service is a good fit for your organization.
  • Adversary elicitation: to improve your threat models, we can brainstorm privacy adversaries.  

Call does not include:

  • Sales Pitch 
  • No strings attached: "It isn't for us" is an acceptable decision

How we work to test your AI apps.

step 1

Business Goals and Threat Assessment

Based on your organization's context, we outline the goals and threats of a new AI product.  

We run a stakeholder alignment meeting to drive consensus on the desired behavior of an AI app.

At the end of this step, you receive a list of desired features and guardrails for your AI app.  

step 2

Prepare

We define what you'll want to test against for safe and responsible AI.

We develop a plan of action for the right balance of continuous audit and human testing. We work with you to define the scope and rules of engagement. 

At the end of this step, you'll have clear metrics for your responsible AI app and a test plan. Whether you have one AI project or multiple, this framework for responsible AI will allow your developers to understand how to translate and measure business goals.

step 3

Develop and run the automated tests with human oversight

Our diverse team of experts will emulate adversaries and run tests on the AI app. 

We build an automated toolkit for continuous monitoring of the AI app.

At the end of this step, you'll understand what attacks and guardrails were effective.  

step 4

Report and Transfer

We help you understand how to improve your AI app through a report on the options and next steps. 

We hand over the red team toolkit and provide training on updating and evaluating the results. 

At the end of this step, you'll have a process that you can use to continuously monitor your AI apps.

Test and measure your AI agent, app, or system. 

We offer AI auditing and privacy red teams.

Our Vision

 We work together with companies to build Responsible AI solutions that are lasting and valuable. 

Privacy by Default

respect

Quality Process

HEALTH

Inclusion

>