Trust & Safety Analyst

OpenAI


Job description

We are looking for experienced Trust & Safety Analysts to collaborate closely with internal teams to ensure safety and compliance on OpenAI platforms. You will help design and implement policies, processes, and automated systems that take action against bad actors and minimize abuse at scale; handle high-risk, high-visibility customer cases with care; and build feedback loops that improve our trust & safety policies and detection systems. Ideally, you have worked in a fast-paced startup environment, have handled a breadth of integrity-related issues of varying sensitivity and complexity, and are comfortable building processes and systems from zero to one.

Please note: This role may involve handling sensitive content, including material that may be highly confidential, sexual, violent, or otherwise disturbing.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In This Role You Will

  • Support new launches by partnering with Product and Policy teams to stand up safety workflows, tooling, and vendor programs ahead of release.
  • Interpret and apply usage policies to complex scenarios, providing clear guidance to customers and internal teams, and capturing edge cases to refine the policy set.
  • Own high‑stakes escalations—investigate quickly, coordinate across Legal, Compliance, and Engineering, and drive incidents to resolution with minimal noise.
  • Build and scale processes for human‑in‑the‑loop labeling, user reporting & content moderation, appeals, and other trust workflows—always with quality as the North Star.
  • Analyze data for trends and create tight feedback loops that inform detection models, product features, and policy updates.
  • Advance moderation quality by defining KPIs, setting up QA programs, and iterating on reviewer training and tooling.
  • Equip internal & external teams with playbooks, SOPs, and hands‑on training that deepen their understanding of our systems and safety philosophy.
  • Serve as escalation POC for high‑complexity cases, acting as the nexus between product, compliance, legal, ops, and customer‑facing teams.
  • Drive automation at scale, rolling up your sleeves to implement operational efficiencies; strong experience leveraging LLMs to enable automation is required.

You Might Thrive In This Role If You

  • Build for scale. You’ve taken operations from zero to one while keeping the quality bar high.
  • Bring 5+ years in Trust & Safety, integrity, or compliance ops, including hands‑on experience with content moderation and vendor management.
  • Combine strong analytical skills with the ability to communicate nuanced, technical issues to both engineers and non‑technical stakeholders.
  • Have proven program‑management skills: you prioritize ruthlessly, juggle multiple launches, and bias toward action.
  • Are comfortable in ambiguity and energized by solving novel problems in a fast‑moving, startup‑style environment.
  • Maintain a humble, collaborative attitude and a willingness to learn whatever is needed to help your team and our users succeed.

Job details

Published on: April 17, 2025
Seniority level: Mid-Senior level
Job type: Full-time
Job location: United States