Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Design at Anthropic sits at the intersection of craft, research, and product intuition. We're a small team working on products that millions of people use daily—and on interactions that don't have established conventions yet.
Our work shapes how people experience AI: whether Claude feels like a tool or a collaborator, whether it earns trust or erodes it. We partner closely with engineers and researchers, often designing around capabilities that are emerging in real time. That means staying close to the models, prototyping rapidly, and being comfortable with ambiguity.
We care deeply about craft—the details that make something feel polished and trustworthy—but we ship fast and learn in the open. We'd rather get something in front of users and iterate than wait for perfection.
Contribute to the strategic direction of our tools, rooted in deep user empathy
Define feature areas with exceptional attention to detail and polish, identifying opportunities to improve the quality and consistency of broader flows
Craft beautiful, polished, and delightful user interfaces that build trust and showcase the power of our AI technology
Collaborate with product managers, engineers, AI researchers, and other stakeholders to define product vision, strategy, and roadmaps
Rapidly prototype ideas using code and other methods to communicate concepts and build excitement
Find creative ways to ship high-quality work in a fast-paced, often ambiguous, resource-constrained startup environment
You ship fast and stay in motion. You'd rather get something in front of users this week than perfect it for a month. You prototype in code, stay scrappy when speed matters, and know when to polish versus when to learn.
You're rethinking the basics. Many UI primitives were designed for a different era. You're excited to question fundamental assumptions and invent patterns that feel native to AI.
You're AI-native in how you work. You're already using Claude Code or similar tools to extend what you can build. You see AI as a creative partner in your own practice.
You stay close to the models. You pay attention to where capabilities are heading, not just what Claude can do today. You design for the product we're becoming.
You elevate craft while moving fast. You care about the details—the pixels, the copy, the edge cases—and find ways to maintain that care while shipping quickly.
8+ years of product design experience (experience designing complex workflows, enterprise/B2B SaaS, developer tools, or API products preferred)
Strong portfolio showcasing user-centric design thinking, polished UI craftsmanship, and innovative interaction paradigms
Proven track record of executing end-to-end on large and complex products or a series of products in ambiguous environments
Excellent collaboration and communication skills to work effectively with cross-functional teams and influence without authority
Passion for building safe, beneficial AI technologies at scale to enable new possibilities
Experience with prototyping, especially using front-end code (e.g. HTML/CSS/JS) preferred
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.

We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process