Hello world, I’m Brain (yes 🧠). After six years of technical writing and public speaking in the AI space, plus experience in the financial industry as a data scientist, I have stepped into a new role as a Developer Evangelist at Safe Intelligence. I’m here to help you build and deploy AI that is better, provably safe, and fair.
Looking back, I can see how the dots connect to bring me to this role. My goal is to demystify complex machine learning challenges for developers and AI practitioners and to showcase Safe Intelligence’s automatic verification and robustification offerings. That means creating demos, developing content, speaking publicly, and organizing events.

Safe Intelligence’s vibrant culture was evident from the moment I joined. My onboarding included meeting the entire team—an extraordinary group who balance serious work with hearty laughter and an uncanny ability to find great lunch spots (we have a random restaurant picker that might be overdue for an agentic upgrade 🤖, if you ask me!). The product and service introductions began with hands-on demos, giving me a firsthand look at our cutting-edge verification and robustification technologies, and I immediately sensed how committed the team is to pushing the boundaries of making ML models robust and reliable. We’re excited to soon share these powerful solutions, which strengthen model performance and robustness across the entire ML lifecycle. Over the past few weeks, I’ve been diving deep into our platforms, toolkits, and internal workflows through GitHub and documentation.

I attended my first MLOps meetup in London on 26 March 2025. It was an electrifying experience—Alex Jones from Monzo discussed unifying feature engineering across data and backend systems, Callum Court explored the power of multi-task deep learning in fraud detection, and Eleanor Shatova wowed everyone with how LLMs can supercharge search. Between bites of mouthwatering chocolate cake and conversations with fellow professionals, I saw firsthand how diverse and passionate the AI community here really is.

Creating an internal walkthrough of our offerings has been a highlight so far, providing me with a hands-on look at how everything interconnects. I’ve also had the chance to learn from some brilliant ML experts here in London, and I’ll soon be hopping around to more local meetups—if you’re in town, come say hi and let’s chat about AI, ML, and building models that are safe and reliable. A big part of my role involves content creation—blog posts, short webinars, and videos—all aimed at helping developers level up their ML workflows. Whether it’s covering the basics or diving into advanced topics, my goal is to make every piece engaging, accessible, and genuinely useful.

As AI models grow increasingly complex, understanding how they’ll behave in real-world scenarios becomes trickier. Traditional testing isn’t enough for high-stakes environments like finance or healthcare, which is why Safe Intelligence uses formal verification to mathematically prove properties of a model’s behavior under specified conditions.

Exciting things are ahead—new content, meetups, and more ways to connect with the AI community. Interested in learning more about robust, verifiable AI systems? Say hi or explore our early-access ML validation platform: Safe Intelligence Early Access. Let’s shift AI from guesswork to guarantees.
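To give a flavor of what “mathematically prove” means here, below is a minimal, self-contained sketch of one classic verification idea: for a tiny linear classifier, interval arithmetic over an epsilon-ball of inputs yields sound output bounds, so we can prove the predicted class cannot flip for *any* perturbation in that ball. This toy example is purely illustrative (the model, numbers, and function names are mine, not Safe Intelligence’s API), but it captures the contrast with testing: instead of sampling a few inputs, we bound all of them at once.

```python
# Toy robustness verification for a linear classifier via interval arithmetic.
# Illustrative sketch only -- not Safe Intelligence's actual tooling.

def interval_affine(W, b, lo, hi):
    """Propagate the input box [lo, hi] through y = W @ x + b.

    For an affine map, picking the interval endpoint by the sign of each
    weight gives exact (not just sound) output bounds.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def verify_robust(W, b, x, eps, label):
    """Prove the prediction stays `label` for every input within eps of x."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    out_lo, out_hi = interval_affine(W, b, lo, hi)
    # Robust iff the true class's worst-case score beats every other
    # class's best-case score over the whole input box.
    return all(out_lo[label] > out_hi[j]
               for j in range(len(b)) if j != label)

W = [[1.0, -0.5], [0.2, 0.3]]  # two classes, two features (made-up numbers)
b = [0.1, -0.1]
x = [1.0, 0.0]                 # classified as class 0
print(verify_robust(W, b, x, eps=0.1, label=0))  # True: provably robust
print(verify_robust(W, b, x, eps=2.0, label=0))  # False: ball too large
```

Real verifiers extend this idea to deep networks, where nonlinearities force looser bounds and cleverer analysis, but the promise is the same: a proof over an entire region of inputs rather than a handful of test points.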
Thanks for being part of my first-month journey; see you around!