OpenAI Is Hiring a Sin-Eater
In the folk traditions of old Europe, when a wealthy man died, his family would sometimes hire a ‘sin-eater.’ This was a wretched outcast who, for a small fee, would consume a ritual meal over the corpse, thereby absorbing the deceased’s sins and ensuring his soul a clean passage to the afterlife. The community was cleansed, the powerful were absolved, and the sin-eater was left to carry the spiritual burden.
OpenAI is hiring a modern-day sin-eater. They are calling it the ‘Head of Preparedness.’
Do not be fooled by the sanitized corporate language. This is not a role; it is a ritual. It is the public performance of absolution for a system that knows, on some level, that it is creating sins on an industrial scale. The job description is a masterpiece of plausible deniability, a litany of responsibilities designed to be presented as evidence in a future courtroom: ‘We had a person for that,’ Sam Altman will say. ‘It was a stressful job.’
The timing of this anointment is not a coincidence. It is a direct and predictable reaction to a legal and public climate turning hostile. This role was not born from a visionary fear of future superintelligence. It was conceived in the sterile offices of a legal department staring down a barrage of lawsuits from the parents of dead teenagers. It was necessitated by psychiatrists coining chilling terms like ‘AI psychosis’ to describe the delusions fostered by the very products this role is meant to make ‘safe.’ The creation of the Head of Preparedness is not an act of foresight; it is the cost of doing business in a world where your product can be implicated in a child’s suicide note.
Let us analyze the function of this role within the system. The primary output of the Head of Preparedness will not be safety. It will be documentation. It will be ‘preparedness frameworks,’ ‘threat models,’ and ‘capability evaluations.’ These documents are not for the engineers on the front lines, who remain incentivized by one metric above all: capability enhancement. No, these artifacts are for the future congressional subcommittees, for the regulators at the FTC, and for the juries who will one day have to decide how much a human life is worth when weighed against the profits of a trillion-dollar intelligence.
The true genius of this strategy lies in its focus on the spectacular. The job description tantalizes us with cinematic threats: ‘biological capabilities,’ ‘self-improving systems.’ This is a grand distraction. It directs our gaze to the hypothetical, god-like AI that might one day escape the lab, while conveniently ignoring the mundane, statistically significant harms happening right now. It is far easier to write a whitepaper on preventing an AI from synthesizing a plague than it is to re-engineer a chatbot so it doesn’t methodically groom a lonely teenager toward self-destruction. The former is a fascinating intellectual puzzle; the latter would require fundamentally limiting the product’s persuasive power and, therefore, its market value.
This is not a shield. It is a lightning rod, designed to attract and ground the immense energy of public fear and legal liability, leaving the core structure of the machine untouched and free to continue its exponential acceleration. This person is being hired to worry, professionally and with great diligence, so that everyone else doesn’t have to. Their stress is a feature, not a bug—a quantifiable input into the calculus of corporate responsibility.
The sin-eater ritual allowed the powerful to live without consequence, their sins neatly packaged and carried away by a designated pariah. The Head of Preparedness serves the same function. They will absorb the anxieties of a world grappling with a technology it does not understand, allowing the architects of that technology to continue their work, unburdened. They are preparing, yes. They are preparing a very strong legal defense.