AI Security Research Engineer
The Mission
The last major shift in cybersecurity produced CrowdStrike and SentinelOne. The next shift, AI-driven offense, will be bigger, and defense isn't ready.
AI is compressing the cost and skill required for offensive cyber capability. Attacks that once required nation-state resources will soon run autonomously, at machine speed. The industry's response so far has been to replay scripted attack simulations and hope for the best. That's not going to hold.
0Labs is building an AI-native platform and service for continuous purple teaming. Teams of agents execute real, adaptive cyber campaigns, then automatically generate the detection rules to stop them. Attack, detect, fix, repeat. Continuously.
Our bet: the future of cyber resilience isn't bigger walls. It's faster adaptation.
We're a backed, early-stage startup, moving very quickly. We're already working at the frontier with AI Safety Labs and have early design partnerships with enterprises. We have a front-row seat to the intersection of AI and offensive security, and we're building the company that defines it.
The Role
We're looking for part-time and full-time contractors to support ongoing AI security research and product development. The immediate focus is our AI control evaluation harness: a system that benchmarks how well monitoring and oversight mechanisms detect unsafe agent behavior. You'll extend evaluation scenarios, improve scoring frameworks, and stress-test agent deployments across a range of attack surfaces.
Beyond this, you'll contribute to broader research that feeds directly into early product development: designing agent architectures, running adversarial evaluations, and turning findings into engineering decisions.
What we're looking for
- Hands-on experience building and evaluating AI agents
- Comfort working with evaluation and benchmarking frameworks (designing metrics, interpreting results, iterating on methodology)
- Strong programming, automation, and infrastructure skills
- Ability to work independently in an ambiguous, fast-moving environment
- Interest in AI safety, security, or adversarial ML is a plus but not required
Compensation
Competitive, based on experience and scope.
How To Apply
Please send your CV and describe a project you've built that you're proud of.
careers@0labs.ai