Why Did OpenAI's Head Of Safety Run Away?

More big moves at OpenAI are raising questions about the company's path forward and how close it is to AGI.

Daniel MacDougall

OpenAI Safety Lead Departs, Citing Concerns Over the Company's Focus

Jan Leike, former head of OpenAI's "Superalignment" team, made waves in May 2024 when he abruptly left the company and voiced concerns on X (formerly Twitter) about the prioritization of "shiny products" over safety in AI development.
Leike's departure followed that of OpenAI co-founder and chief scientist Ilya Sutskever, adding to a period of instability at the influential research lab.

Leike expressed his belief that OpenAI, originally envisioned as a leader in safe AI development, had strayed from its core mission. He argued for a "safety-first" approach to Artificial General Intelligence (AGI), a hypothetical future in which machines reach or surpass human-level intelligence.

Jan Leike's Comments

Jan Leike stated on X, "I joined because I thought OpenAI would be the best place in the world to do this research.
However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity."

Jan also stated, "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

One line stood out: "OpenAI must become a safety-first AGI company."

This raises further questions about the company's development of AGI and how far along it is.

These statements have fueled speculation and controversy. If Jan Leike were the only person leaving over OpenAI's safety direction, he could be dismissed as just another disgruntled employee, but a wave of departures citing safety concerns raises eyebrows.

And how close is OpenAI to AGI? Or is the company already discussing how to let the genie out of the bottle?

The company has restructured its safety team. But is it simply hiring "yes" type employees?

Typically, when people leave a large company and are immediately hired by the competition, their words carry weight, especially when they are this public about their views on corporate safety and direction.

Leike's public criticism followed a period of internal conflict at OpenAI. News outlets reported a failed attempt by some board members, including Sutskever, to oust CEO Sam Altman in late 2023. While the reasons behind this attempted leadership shift remain unclear, it highlighted a potential clash within OpenAI regarding its focus and direction.

Leike's exit doesn't signal the end of his involvement in AI safety. Shortly after leaving OpenAI, he announced his new role at Anthropic, a rival AI research company.
There, he leads a team dedicated to scalable oversight, weak-to-strong generalization, and automated alignment research, all areas crucial for ensuring the safe development and deployment of advanced AI.

The debate sparked by Leike's departure underscores the ongoing challenge of balancing innovation with responsible development in the field of AI.
OpenAI has since announced the formation of a new safety committee. Still, questions remain about how effectively it can address these concerns and keep AI safety a top priority. That is a hard case to make when the head of safety leaves and slams the door on the way out.