OpenAI Launches Safety Fellowship
OpenAI has announced the OpenAI Safety Fellowship, a pilot program that will fund a cohort of external researchers to conduct independent work on AI safety and alignment. The fellowship runs from September 14, 2026, to February 5, 2027.
Key Details:
- Fellows will receive a monthly stipend, computing resources, and mentorship from OpenAI researchers. They are expected to produce significant research output by the program's end, such as a paper, benchmark, or dataset.
- Priority research areas include safety evaluation, robustness, scalable mitigation strategies, privacy-preserving methods, agentic oversight, and high-severity misuse domains.
- Applications close on May 3, with successful candidates notified by July 25.
Background:
The fellowship was announced on April 6, 2026, hours after a New Yorker investigation reported that OpenAI had dissolved its safety teams and dropped safety language from its IRS filings. The investigation also reported that OpenAI had downplayed the importance of existential safety when asked about it.
The announcement follows the earlier dissolutions of OpenAI's Superalignment and AGI-readiness teams, as well as the dissolution of the Mission Alignment team (Superalignment's successor) in February 2026. Many key figures associated with safety oversight at OpenAI had left the company by early 2026.