OpenAI Launches Security Fellowship to Fund External AI Research

  • By John K. Waters
  • 04/16/26

OpenAI is broadening its security efforts beyond its own walls with a new Security Fellowship that will fund external researchers to study AI risks. The OpenAI Security Fellowship will run for six months, from September 2026 to February 2027, according to a press release, expanding the company's engagement in alignment and security work. The effort comes as AI companies face increasing scrutiny over how they manage risks associated with rapidly advancing systems.

The program is open to researchers, engineers, and practitioners from outside the company. Participants will receive stipends, access to OpenAI models, and technical support to conduct research in areas such as robustness, privacy, agent oversight, and abuse prevention. Fellows are expected to produce outputs such as research papers, benchmarks, or datasets.

OpenAI said the fellowship is meant to “support high-impact research on the security and alignment of advanced AI systems” and to expand the number of people working on technical safety challenges. The program reflects a broader trend among major AI developers to fund external research through fellowships, residencies, and academic partnerships.

For instance, Anthropic, a rival AI company focused on safety, runs a similar fellows program that supports independent researchers working on alignment, interpretability, and AI security. The program provides funding, mentorship, and compute resources, with participants generally producing publicly available research.

Google and its DeepMind unit run a series of student researcher and fellowship programs that place participants on research teams for several months. These programs cover a broad range of AI topics, including safety-related work, though they are not always explicitly branded as alignment-focused.

Microsoft and Meta have likewise expanded funding for external AI research through academic partnerships, grants, and residency-style programs, typically aimed at advancing work on responsible AI and system reliability.

Together, these initiatives form a growing ecosystem of externally funded research tied to leading AI labs.

OpenAI said the priority areas for its fellowship include “agentic oversight” and “high-severity abuse domains,” reflecting concerns about systems capable of taking multi-step actions with limited human intervention. Recent advances in AI capabilities have enabled systems to perform more complex tasks, including coding, research assistance, and workflow automation. This has shifted some safety concerns from harmful outputs toward the potential for unexpected or harmful actions taken by autonomous or semi-autonomous systems.

The growth of fellowship programs comes amid rising demand for AI safety researchers, a relatively small but expanding field. Companies are offering competitive compensation and access to computing resources to attract talent as they compete to develop advanced models. At the same time, governments and regulators are increasing pressure on AI developers to demonstrate that systems can be deployed safely and reliably.

While external programs may broaden participation in safety work, they do not change internal decision-making processes at AI companies. Researchers participating in fellowships typically do not have direct authority over product releases. Their work is usually advisory, focused on identifying risks and proposing mitigation strategies. Responsibility for deploying AI systems remains with the companies that build and operate them.

OpenAI said the fellowship is part of a broader effort to support research and improve understanding of AI risks, but did not provide details on how findings from the program would be integrated into product decisions.

The first cohort of the OpenAI Security Fellowship is expected to be selected later this year. For more information, visit the OpenAI website.

About the Author

John K. Waters is the editorial director of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than 20 years, and he has written more than a dozen books. He also co-scripted the documentary Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email secured]