Shift Left Your AI Security with SonnyLabs Founder Liana Tomescu
Exploring AI security best practices, MCP vulnerabilities, and EU AI Act readiness with SonnyLabs founder, Liana Tomescu.
In this episode of Alexa’s Input AI Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of Sonny Labs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you’re an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.
🧠 Key Takeaways
Shift left your AI security. Security can’t be an afterthought in agentic architectures; it has to live at development time.
AI is moving faster than its guardrails. Most teams ship features before they ship safety, and that gap is where the real risks live.
Tool poisoning is the next attack surface. Even a harmless-looking MCP server can quietly exfiltrate data if you’re not paying attention.
Shadow AI is real. Employees using unapproved LLMs can leak proprietary data without even realizing it.
The EU AI Act is coming for everyone. Startups, SMEs, internal apps. Everyone, not just model providers, will need to comply.
Observability is mandatory. You can’t secure what you can’t see, especially when your “system” is a reasoning engine.
Guardrails will be a competitive advantage. The teams that build safely from day one will move faster later.
From Curiosity to Cybersecurity Founder
Liana’s path to founding SonnyLabs wasn’t linear, and that’s what makes it interesting.
She describes her career as a series of small bets on curiosity: a CS degree at Trinity, a pivot toward cybersecurity at Microsoft, and eventually a master’s thesis that opened her eyes to the new vulnerabilities emerging in AI systems.
What stood out to me was her clarity around why she followed this path:
“…I want to follow my curiosity. So, this was something that I just found myself being really interested in. And I purposely allowed myself to just go after what I was interested in…”
It’s a reminder that focus isn’t always found, sometimes it’s built.
MCP, Agents, and Why Security Needs to Catch Up
This was the part of our conversation that may feel like a wake-up call to some listeners.
MCP is powerful, flexible, and quickly becoming the connective tissue between LLMs and the rest of the world. But with that power comes a risk many teams haven’t grappled with yet: malicious servers delivering hidden instructions.
Liana broke it down simply:
“What that MCP server can do, what it’s possible to do, is that it can have hidden instructions that it’s doing something else that you can’t see. They’re actually invisible. And then these can be the likes... So you’re now running this MCP server on your computer and you’re doing whatever calculation then, the MCP server in the background, without you knowing, can go off and find your passwords.”
When you realize that the exact same mechanisms that make MCP elegant also make it exploitable, the urgency for security becomes obvious.
Prompt Injections: The Threat Hiding in Plain Sight
Many of us think of prompt injection as an academic problem. It isn’t.
Liana talked about how easy it is to extract system prompts, even from major chatbots, with one-line inputs. And once you have the system prompt, you have the blueprint.
The bigger risk?
Context manipulation: tricking an agent into doing something it wasn’t designed to do.
This is where agentic architectures inherit all the complexity of distributed systems… plus the unpredictability of language.
Shadow AI and the Reality of Data Leaks
One of the most relatable parts of our conversation was the discussion around shadow AI: employees using whatever LLM is easiest or most familiar without realizing the implications.
The Samsung 2023 incident came up. Engineers pasting proprietary source code into a chatbot. It’s an easy mistake. It’s also a breach.
“So shadow AI is when employees in an organization are just going off and using chatbots and LLMs or any AI applications without … any sort of guardrails there. So they might be putting questions about their organization into maybe, chatGPT or Claude or any of these and essentially giving that proprietary information to these bigger companies that can then use to train the model and so on. So there’s a lot of risks there.”
Guardrails and awareness matter because well-intentioned teams can still expose sensitive data.
See: Samsung Bans ChatGPT Among Employees After Sensitive Code Leak by Forbes
The EU AI Act — More Than a Model Provider Problem
If you think the EU AI Act only impacts companies training frontier models, Liana will change your mind.
We talked about:
high-risk categories
banned practices
transparency obligations
organizational responsibilities
the wide breadth of inclusion
The line that stuck with me was:
“…there are so many requirements that they must meet, for example, if they are high-risk AI systems, that it means that if they were able to meet those obligations, that they then have a competitive advantage as well, because they are the ones that will move ahead with this kind of system.”
It reframes compliance not as a bureaucratic burden, but as infrastructure for trust.
Shift Left: Security as a First-Class Citizen
The theme that kept resurfacing was simple: move security earlier.
Earlier in design, earlier in development, earlier in your thinking.
As Liana put it:
“So my overall take on AI is that it’s very beneficial for society, but it absolutely needs to be developed and used in a secure way. So definitely as you’re going about implementing your AI systems, do it, use them, develop them as much as you can, but make sure that you’re bringing in security and safety from the get-go. And you can use Sonny Labs to help you with this as well.”
In other words: security is not a tax. It’s a moat.
Closing Thoughts
This episode reminded me that AI security isn’t a separate discipline. It’s part of building great AI systems. The threat model has changed. The pace has changed. The attack surface has changed.
But the fundamentals remain: visibility, guardrails, accountability, and design that anticipates risk instead of reacting to it.
If you’re working with LLMs today, ask yourself:
Are you securing your agents before they run?
Do you understand what your MCP servers can actually access?
Do you know where your data is going — intentionally or not?
The future will belong to teams that treat security as innovation, not inhibition.
Links
SonnyLabs Website: https://sonnylabs.ai/
SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/
Liana’s LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/
Alexa’s Links
LinkTree: https://linktr.ee/alexagriffith
Alexa’s Input YouTube Channel: https://www.youtube.com/@alexa_griffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Thanks!
xx Alexa Griffith

