AI's Immune System with Wendy Chin, PureCipher CEO
Data Poisoning, Digital Twins, and Why Security Can’t Be an Afterthought for GenAI
In this episode of Alexa’s Input (AI), I sat down with Wendy Chin, CEO of PureCipher, who is pioneering efforts to fortify AI systems against emerging threats.
This conversation moved beyond traditional cybersecurity into harder questions:
What happens when AI itself is poisoned?
What happens when agents collude?
What happens when your digital twin doesn’t ask permission?
Wendy's core message: AI is becoming more powerful every day, but it still doesn't know how to defend itself.
Key Takeaways
AI security is fundamentally about data integrity, not just perimeter defense.
Data poisoning can occur in training data, model weights, and inference data.
Generative AI lowers the barrier to entry for sophisticated malware creation.
Agents communicating via MCP introduce new policy and identity risks.
Ethical considerations in AI development are vital, starting with training AI in a nurturing, well-governed environment.
Data Poisoning: The Invisible Threat
Traditional cybersecurity protects data in transit and at rest. AI security asks a different question: what if the data itself has been tampered with before or during training?
Wendy explained how subtle manipulation can change model behavior:
Label flipping (mislabeling images or data)
Injecting strategically crafted samples
Embedding malware inside seemingly normal files
Tampering with weights and biases after training
The most unsettling part is that it doesn’t take much. In some research examples, only a few hundred poisoned samples in a massive dataset were enough to alter outcomes. And once compromised, a model doesn’t always “know” it’s compromised. Sometimes, it keeps behaving confidently.
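To make the first attack on that list concrete, here is a minimal sketch of label flipping. The dataset format and function are hypothetical illustrations, not anything PureCipher-specific; the point is how little of the data an attacker has to touch.

```python
import random

def flip_labels(dataset, target_label, poison_label, fraction=0.02, seed=0):
    """Flip a small fraction of target_label examples to poison_label.

    Only a sliver of the training set is modified, so casual inspection
    of the data is unlikely to notice anything wrong.
    """
    rng = random.Random(seed)
    candidates = [i for i, (_, y) in enumerate(dataset) if y == target_label]
    to_flip = set(rng.sample(candidates, int(len(candidates) * fraction)))
    poisoned = [
        (x, poison_label if i in to_flip else y)
        for i, (x, y) in enumerate(dataset)
    ]
    return poisoned, len(to_flip)

# 1,000 toy samples labeled 0 or 1; poison 2% of the label-0 examples.
data = [(f"img_{i}", i % 2) for i in range(1000)]
poisoned, flipped = flip_labels(data, target_label=0, poison_label=1)
```

Here only 10 of 1,000 samples change, yet a model trained on the poisoned set will systematically learn the wrong boundary for the targeted class.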
AI as a Child: Training, Nurture, and Moral Code
Wendy repeatedly returned to one metaphor: AI is like a child.
What you feed it matters. The environment matters. The guardrails matter.
If trained on toxic, manipulative, or skewed data, it internalizes those patterns.
If trained with empathy, nuance, and structured policy, it behaves differently.
She argued that AI needs:
Authenticated, untampered data
Clear policy boundaries
Ongoing supervision and retraining
A defined understanding of its own identity as AI
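As a rough sketch of what "authenticated, untampered data" can mean in practice (the record format and helper below are illustrative assumptions, not PureCipher's implementation), one approach is to fingerprint the training set and verify it before every run:

```python
import hashlib
import json

def fingerprint(records):
    """Return a SHA-256 digest over an ordered dataset.

    Store the digest out-of-band when the data is collected; recompute
    and compare it before every training or retraining run, so tampered
    records are caught before they ever reach the model.
    """
    h = hashlib.sha256()
    for rec in records:
        h.update(json.dumps(rec, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

clean = [{"text": "hello", "label": 0}]
manifest = fingerprint(clean)
```

Flipping a single label changes the digest, so even the subtle manipulations described above become detectable, provided the manifest itself is stored somewhere the attacker can't reach.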
The analogy becomes practical when AI agents:
Make decisions autonomously
Act on behalf of executives
Handle sensitive enterprise data
Represent a “digital twin” of a human
The analogy may sound sci-fi (we had some fun tangents there, too), but the policy implications are very real.
Agents, MCP, and Collusion Risk
We also discussed Model Context Protocol (MCP) and agent-to-agent communication.
The problem isn’t just that agents can act. It’s that they can:
Talk to other agents
Share information
Trigger downstream actions
Operate without explicit human re-approval
Without policy enforcement, an agent could theoretically:
Share confidential data
Execute financial transactions
Accept or reject decisions on your behalf
Wendy’s position is that security must be embedded into agent architecture from day one.
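One way to picture "embedded from day one" is a default-deny policy gate that every proposed agent action must pass before it executes. The agent names, action names, and policy table below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    action: str

# Hypothetical policy table: which actions each agent may take
# autonomously. Anything not listed is denied by default.
POLICY = {
    "calendar-agent": {"read_calendar", "propose_meeting"},
    "finance-agent": {"read_invoice"},
}

# High-impact actions always escalate for explicit human re-approval,
# regardless of which agent requests them.
HUMAN_APPROVAL_REQUIRED = {"execute_payment", "share_confidential_data"}

def enforce(action: AgentAction) -> str:
    """Gate an agent action through policy before it runs."""
    if action.action in HUMAN_APPROVAL_REQUIRED:
        return "escalate"  # never autonomous
    if action.action in POLICY.get(action.agent_id, set()):
        return "allow"
    return "deny"  # default-deny: unknown agent or unlisted action
```

The design choice that matters is the default: an agent talking to another agent over MCP gets no implicit trust, and anything outside its declared scope falls through to "deny" or "escalate" rather than quietly succeeding.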
Final Thought: AI Needs an Immune System
Wendy closed with a memorable metaphor: AI is the brain. The brain is powerful, but without an immune system, the body fails. Today’s AI brain still lacks that built-in immune response.
If poisoned, it doesn’t self-diagnose. If compromised, it doesn’t panic. If misaligned, it doesn’t recognize the drift. That means we need to build defensible AI.
Links
Watch: https://www.youtube.com/@alexa_griffith
Read:
Listen: https://creators.spotify.com/pod/profile/alexagriffith/
More: https://linktr.ee/alexagriffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Find out more about the guest:
PureCipher: https://www.purecipher.com/
LinkedIn: https://www.linkedin.com/in/wendy-chin-ctg/
Thanks!
xx Alexa Griffith


