OpenAI launches safety fellowship as it broadens outside research push

OpenAI on April 6 announced a new Safety Fellowship, a pilot program aimed at external researchers, engineers and practitioners working on the safety and alignment of advanced AI systems. The company said the program is designed to support rigorous research on issues including evaluation, robustness, privacy-preserving safety methods, agentic oversight and high-severity misuse risks.

The announcement comes as OpenAI continues to expand both its product footprint and its policy and safety work. The fellowship is set to run from September 14, 2026, through February 5, 2027, and OpenAI said fellows will work with company mentors and a cohort of peers.

What OpenAI announced

In its April 6 post, OpenAI said it is accepting applications for the fellowship as a way to support independent work on questions that matter for current and future AI systems. The company described the initiative as a pilot program for people pursuing technically strong, empirically grounded research with relevance to the broader safety community.

OpenAI said it is especially interested in applicants focused on safety evaluation, ethics, scalable mitigations, privacy-preserving methods, agentic oversight and misuse prevention. The company did not disclose the number of fellows it plans to select.

Why the program matters

The fellowship adds another channel through which OpenAI is trying to shape the safety conversation around advanced AI. By inviting outside researchers into its orbit, the company is signaling that it wants more independent work tied to the risks and controls surrounding increasingly capable systems.

That effort sits alongside a broader set of recent OpenAI announcements centered on safety, governance and policy. In the same week, the company also published a child safety blueprint and a separate industrial policy document, underscoring how OpenAI is pairing product development with public-facing policy work.

OpenAI’s broader safety and policy agenda

OpenAI’s April 6 industrial policy post called for “people-first” ideas to help expand opportunity and build resilient institutions in the age of advanced AI. The company said it was also establishing pilot fellowships and focused research grants to support work that builds on those ideas.

Separately, OpenAI said on April 7 that it had published a child safety blueprint aimed at strengthening U.S. child protection frameworks in the age of AI. The company said it worked with outside organizations including the National Center for Missing & Exploited Children, the Attorney General Alliance and Thorn on that framework.

What to watch

The key question now is how many researchers apply, what kinds of projects OpenAI ultimately funds and whether the fellowship produces work that influences the company’s own safety practices or broader industry standards. OpenAI said the program is a pilot, so its scale and long-term shape will likely depend on the response it receives.

Source Reference

Primary source: OpenAI
Source date: 2026-04-06