Proactive Data Protection Strategies for Startups


Improve trust and safety so your tech startup scales.  

Does your tech startup handle confidential or sensitive information?  Do you want to build data protection into your tech stack now, so that you can scale with trust and safety?  Do you want an effective, lean strategy that enables your development, IT, and marketing teams?  

Let us help you.  

We've worked with pre-launch and MVP tech startups that use ML and AI, helping them proactively identify and resolve privacy and reputation risks.  We help you build and implement a data protection strategy that makes customers flock to you.  After working with us, companies can better communicate to investors, partners, and government agencies how they address data protection, safety, and trust.  

We can deliver similar results for your company while staying within your existing budget. 

Check Out What We Offer 

Data protection roadmap

Improve trust and safety with a technical strategy for privacy.  Gain a clear roadmap for privacy and data protection through best engineering practices that both legal and security teams love. 

Privacy Red Team and Vulnerability Tests

Identify privacy risks before motivated attackers do.  Protect and fix your software through scoped privacy penetration tests or privacy red teams. 

What We Offer

Data protection consulting for tech startups that have sensitive information.

  • Expert data protection experience:  We've been working on privacy since 2010. We've helped startups, large tech companies, and specialized government agencies define strategies for machine learning, including face recognition and speech recognition.   
  • Values-based engineering and design: Are you a values-driven organization that wants to make the world a better place? So are we!  We are committed to compassionate privacy development for all technology users. 
  • Engineering skills and technical know-how: We specialize in AI and voice-enabled devices, drawing on our background in speech recognition and home devices.
  • International and multi-cultural:  We have work experience in 5 countries, including Europe and the USA, and offer an international perspective.  We are based in Switzerland and available during European work hours.
      

 Hello! 

I'm Rebecca Balebako

I'm the founder of Balebako Privacy Engineer.

With more than a decade of experience as a privacy engineer at Google, Waze, and RAND, I have helped tech startups and international corporations identify and fix data protection vulnerabilities and build features that people love.  

I offer expertise on Privacy Red Teams for large corporations, and data protection strategies for tech startups.  

  • Google in Switzerland
  • Waze
  • RAND Corporation
  • Dragon Systems Speech Recognition
  • Carnegie Mellon University
  • Harvard University

What others say about working with us:

M. Le Tilly - Privacy Investigation at Google


Rebecca is both a highly-skilled privacy engineer and a reliable manager.

A. Wyeth - Tech Lead at Google


 Rebecca's data protection knowledge and ability to think through what was best for the user were essential to the success of our Privacy Red Team exercises. 


 I highly recommend her to anyone looking for a data protection expert.

G. Honvoh Chabi -  Manager at Tech Startup


Rebecca ensures that processes are followed and that her team and any third parties involved complete tasks effectively and on time. 

Schedule a Meet and Greet

Do you want to talk to us about whether this would benefit your data protection program? Let's have a 15-minute meet and greet.  

Call includes:

  • Honest answers to your questions about whether this service is a good fit for your organization.
  • Adversary elicitation: To improve your threat models, we can brainstorm privacy adversaries.  

Call does not include:

  • A sales pitch 
  • Strings attached: "It isn't for us" is an acceptable outcome

Learn About Data Protection with Our In-depth Articles

AI Red Teams: Can LLMs test LLMs for harms?

AI Red-Teams aren’t exactly like security red teams

Fixing AI Vulnerabilities found by AI Red Teams

Resilient Privacy

When classification models leak private information

Automating User Deletion
