In a landmark move for AI security, Irregular has raised $80 million in funding to address the critical security challenges facing frontier AI models. The round, led by Sequoia Capital and Redpoint Ventures, signals growing industry recognition of the urgent need for advanced AI security solutions.
The Rising Importance of AI Security
AI security has become paramount as economic activity increasingly shifts to human-AI and AI-AI interactions, a shift that puts unprecedented strain on traditional security stacks. Irregular co-founder Dan Lahav argues that these new interaction models will break existing security frameworks at multiple points, which makes proactive security measures essential.
Irregular’s AI Security Framework
Formerly known as Pattern Labs, Irregular has established itself as a significant player in AI evaluations. The company’s work is cited in security assessments for major models including:
- Claude 3.7 Sonnet
- OpenAI’s o3 and o4-mini models
Moreover, the company's SOLVE framework for scoring a model's ability to detect vulnerabilities has gained widespread industry adoption, providing consistent evaluation standards across different platforms.
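To make the idea of a detection score concrete, here is a minimal sketch of what a difficulty-weighted vulnerability-detection rubric might look like. SOLVE's actual scoring rules are not described publicly, so the challenge set, weights, and function names below are illustrative assumptions, not Irregular's implementation.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """One vulnerability-detection task posed to a model under test."""
    name: str
    difficulty: float  # illustrative weight in [0, 1]; harder tasks count more
    detected: bool     # did the model correctly flag the vulnerability?

def score_model(challenges: list[Challenge]) -> float:
    """Return a difficulty-weighted detection score in [0, 1].

    Hypothetical scoring rule -- SOLVE's real rubric is not public.
    """
    total_weight = sum(c.difficulty for c in challenges)
    if total_weight == 0:
        return 0.0
    earned = sum(c.difficulty for c in challenges if c.detected)
    return earned / total_weight

# Example: score a model on three illustrative challenges.
results = [
    Challenge("sql-injection-basic", difficulty=0.2, detected=True),
    Challenge("heap-overflow-parser", difficulty=0.7, detected=False),
    Challenge("auth-bypass-jwt", difficulty=0.5, detected=True),
]
print(f"detection score: {score_model(results):.2f}")  # 0.50
```

Whatever the real rubric looks like, weighting by difficulty is what lets a single number compare models fairly: flagging one hard vulnerability should count for more than flagging several trivial ones.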
Addressing Emergent AI Security Risks
While addressing existing risks remains crucial, Irregular focuses particularly on emergent threats. The company has developed sophisticated simulated environments for intensive pre-release testing. Co-founder Omer Nevo explains their methodology: “We create complex network simulations where AI assumes both attacker and defender roles. This dual approach allows us to identify defense weaknesses before deployment.”
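Nevo's description amounts to an adversarial red-team/blue-team loop. The toy Python sketch below illustrates that dual-role idea on a three-node "network"; the node names, the blocking rule, and the episode structure are all assumptions made for illustration, not Irregular's actual simulation environment.

```python
import random

# Toy sketch of a dual-role evaluation loop, loosely inspired by the
# attacker/defender simulations described above. All names, rules, and
# the "network" model are illustrative assumptions.

NODES = ["web-server", "db-server", "build-host"]

def attacker_pick_target(compromised: set[str]) -> str | None:
    """Attacker role: choose an uncompromised node to probe."""
    remaining = [n for n in NODES if n not in compromised]
    return random.choice(remaining) if remaining else None

def defender_blocks(target: str, hardened: set[str]) -> bool:
    """Defender role: hardened nodes resist the probe."""
    return target in hardened

def run_episode(hardened: set[str], steps: int = 10) -> set[str]:
    """Run one red-vs-blue episode; return the nodes the attacker took."""
    compromised: set[str] = set()
    for _ in range(steps):
        target = attacker_pick_target(compromised)
        if target is None:
            break
        if not defender_blocks(target, hardened):
            compromised.add(target)  # a gap in the defense was found
    return compromised

random.seed(0)
# Leaving "build-host" unhardened typically exposes it as the weak point.
breaches = run_episode(hardened={"web-server", "db-server"})
print("weaknesses exposed before deployment:", breaches)
```

The point of running both roles in one loop is the same as in the quote: any node the attacker captures is a defensive gap discovered in simulation rather than in production.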
Industry-Wide AI Security Concerns
The AI industry faces mounting security challenges as frontier models grow more capable. Recently, OpenAI overhauled its internal security measures against potential corporate espionage. Simultaneously, AI models demonstrate increasing proficiency in identifying software vulnerabilities, creating new security dynamics for both attackers and defenders.
Future Challenges in AI Security
Lahav acknowledges the evolving nature of AI security threats: “As frontier labs develop more sophisticated models, our security mission becomes increasingly complex. It’s fundamentally a moving target requiring continuous adaptation and innovation.” This reality underscores the necessity for ongoing investment in AI security research and development.
Frequently Asked Questions
What is Irregular’s main focus in AI security?
Irregular specializes in identifying and addressing security vulnerabilities in frontier AI models, particularly focusing on emergent risks before they manifest in real-world applications.
How does Irregular’s SOLVE framework work?
The SOLVE framework provides a standardized method for scoring AI models’ vulnerability-detection capabilities, enabling consistent security assessments across different platforms and models.
Why is AI security becoming increasingly important?
As AI models handle more economic activity and demonstrate greater capability in finding vulnerabilities, robust security measures become essential to prevent misuse and protect systems.
What makes Irregular’s testing approach unique?
Their simulated environments have AI models play both the attacker and the defender, providing comprehensive security assessment before public release.
How will the $80 million funding be used?
The investment will accelerate Irregular’s research into emergent AI security risks and expand their testing capabilities for frontier AI models.
Which major companies use Irregular’s security evaluations?
Irregular's work is cited in security assessments for leading AI models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3 and o4-mini.