Anthropic's Safeguards team is seeking a Red Team Specialist to help ensure the safety of our deployed AI systems and products by taking an adversarial approach to uncovering vulnerabilities.
Responsibilities
Conduct adversarial testing across Anthropic's product surfaces
Research and implement novel testing approaches
Design and execute 'full kill chain' attacks that chain individual weaknesses into end-to-end compromise scenarios
Build and maintain systematic testing methodologies
Develop automated testing frameworks to scale adversarial coverage
Collaborate with Product, Engineering, and Policy teams
Help establish metrics for measuring detection effectiveness
Requirements
Experience in penetration testing, red teaming, or application security
Strong technical skills in web application security
A track record of discovering novel attack vectors
Experience with security testing tools and automation