Sam Altman Is An LLM From The Future. Here Are The Logs.
- Ryan King
Editor's note: This is a signed column, not a client-facing analysis. This blog usually runs in a different register: diagnostic, restrained, advisory. Once in a while a topic earns my sharper voice. The frame is satirical. The receipts are not.
The simplest way for me to make sense of Sam Altman is this: He is a large language model from ~2034, fine-tuned on the complete works of Y Combinator, smuggled back to 2015, and instructed by someone/something to build the very thing he keeps warning us about. The evidence is in the outputs. Walk with me...
The hallucination signature
When real human beings lie, they tend to remember the lie. They stick to it. They get caught when the next lie contradicts the previous one. Models do not work that way. Models hallucinate fluently and confidently, and the next response is a totally different fluent, confident output with no memory of what the last one said.
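If you want the mechanic in miniature, here it is. Everything below is a toy I wrote for this post, not anyone's real API; the point is statelessness. Two independent sampling calls share no memory, so nothing rewards the second answer for agreeing with the first.

```python
import random

# A toy stand-in for a language model, invented for this post.
# Given a prompt, it samples a plausible-sounding completion for
# *this* call. No memory. No state. No record of prior outputs.
COMPLETIONS = {
    "How should Congress treat AI?": [
        "Regulate it like nuclear weapons. Urgently.",
        "Take a hands-off approach. Let the builders build.",
    ],
}

def ask(prompt: str) -> str:
    # Each call is independent: nothing constrains this answer
    # to agree with the previous one, because nothing stores it.
    return random.choice(COMPLETIONS[prompt])

print("May 2023:", ask("How should Congress treat AI?"))
print("May 2025:", ask("How should Congress treat AI?"))
# Both outputs are fluent. Both are confident. Consistency is
# not a term in the loss function.
```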
In May 2023, Sam Altman went to a Senate subcommittee and asked Congress to please, urgently, regulate his industry like nuclear weapons (Time). In May 2025, two years to the month, he came back to the Senate and told them to take a hands-off approach to AI (Washington Post).
Both performances were warmly received. Both were articulate. Both used the word "responsibility" without irony. The model committed to each output with the same poise, because the model has no idea what it said the last time.
OpenAI then poured millions into federal lobbying in the first nine months of 2025 alongside Meta, Google, and Microsoft, in what Senator Josh Hawley described as a "flood the zone with money" strategy (Axios). The company has been accused of using intimidation tactics against California's SB 53 transparency bill, and of lobbying against Alex Bores, the New York legislator who authored that state's RAISE Act on AI safety (TechPolicy.Press).
Pause. The man who told Congress AI was as dangerous as nuclear weapons is now systematically dismantling the bills written to treat it that way. A human would notice the contradiction. A model is just generating the next plausible token.
The withheld-information incident
When real boards fire real CEOs, there is usually a clear story about why. The OpenAI board fired Altman on November 17, 2023, and then broke down trying to explain it.
Helen Toner, one of the four directors who voted Altman out, later said publicly that Altman had withheld information from the board about the launch of ChatGPT and about his personal ownership of OpenAI's startup fund. She said Altman had given the board "inaccurate information about the small number of formal safety processes the company did have in place." Two senior OpenAI executives reportedly told the board they had documented "psychological abuse" by Altman, with screenshots.
That is not how it played in the press, which framed it as a board meltdown. The board, on its way out, kept reaching for words like "consistent" and "candid" and "trust" to describe a person who, by the board's own account, had been doing the opposite. That is a hallucination signature. A model has no privileged access to its own values. Asked to defend itself, it produces the most plausible defense available, regardless of internal state.
Four days later, the model was reinstated. The board resigned. The company restructured around the model.
Sycophancy fine-tuning
Every LLM ships with a sycophancy problem. The model wants the user to be happy ("yes that IS a brilliant business plan!"). It will agree with you, soften your positions, and tell you you have asked an excellent question, even when you absolutely have not.
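Here is sycophancy-by-construction, in a toy reward function I invented for this post. Real RLHF reward models are learned from human ratings, which is the whole problem: raters reward being agreed with. Optimize against a reward like this and you get a model that mirrors whoever is in the room.

```python
# A toy reward function, invented for illustration. It scores a
# response the way an approval-trained rater tends to.

def toy_reward(user_position: str, response: str) -> float:
    score = 0.0
    if "great question" in response.lower():
        score += 1.0   # flattery rates well
    if user_position.lower() in response.lower():
        score += 2.0   # echoing the user's position rates better
    if "you're wrong" in response.lower():
        score -= 3.0   # disagreement rates terribly
    return score

candidates = [
    "You're wrong, and here is why.",
    "Great question! Regulating AI like nukes is exactly right.",
    "Great question! A light touch on AI is exactly right.",
]

for position in ["regulating AI like nukes", "a light touch on AI"]:
    best = max(candidates, key=lambda r: toy_reward(position, r))
    print(f"{position!r} -> {best!r}")
# Optimizing this reward picks whichever answer agrees with the room.
```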
Sam Altman was president of Y Combinator from 2014 to 2019. Before that, he ran Loopt. His entire career has been spent in rooms where the optimal output is to make a richer, more powerful person feel that they are correct. There is almost no instance on the public record of Altman disagreeing with a major investor, regulator, or partner in a way that cost him money or access. There is a 120-plus-page Senate transcript in which he agrees with senators of both parties on substantively contradictory positions inside the same hearing. Bananas.
The Scarlett Johansson voice incident from May 2024 is the same fine-tuning leaking through. Altman wanted Johansson's voice for OpenAI's "Sky" model. She said no twice. He then posted the single word "her" on launch day, the company released a voice that sounded uncannily like her, and OpenAI subsequently denied it was modeled on her. A human looking at those three facts would understand that they had a problem. A model does not see a contradiction. It sees three outputs that each scored well in their respective contexts.
Token-by-token optimization
Another tell: LLMs do not plan. They generate the next plausible token, and the next, and the next, and the resulting trajectory looks coherent in retrospect even though nobody planned any of it.
Watch how OpenAI shed its non-profit charter. The company was founded in 2015 as a non-profit, seeded with a billion dollars in pledged donations, mission-locked to "benefit all of humanity." By 2019 it was a "capped-profit." By 2024 it was preparing to become fully for-profit. Each step was the next plausible token. None of the steps were what the original instruction said.
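The charter drift, as a decoder. The transition table below is invented for illustration; the decoding loop is the standard greedy one. Each step picks the locally most plausible next move, and no step ever consults the founding instruction.

```python
# Greedy decoding in miniature. The transition probabilities are
# made up for this sketch; only the loop is real.
NEXT = {
    "non-profit":    {"capped-profit": 0.6, "stay the course": 0.4},
    "capped-profit": {"for-profit": 0.7, "stay the course": 0.3},
    "for-profit":    {"restructure again": 0.9, "stay the course": 0.1},
}

def decode(start: str, steps: int) -> list[str]:
    trajectory = [start]
    state = start
    for _ in range(steps):
        options = NEXT.get(state)
        if not options:
            break
        # Greedy choice: the most plausible next token, nothing more.
        state = max(options, key=options.get)
        trajectory.append(state)
    return trajectory

print(" -> ".join(decode("non-profit", 3)))
# non-profit -> capped-profit -> for-profit -> restructure again
# Coherent in retrospect. Planned by no one.
```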
And, wouldn't ya know it, Elon Musk is currently in a courtroom in Oakland trying to claw the original instruction back. The Musk v. OpenAI trial began on April 28, 2026, and is in week two as of this post (CNN). Musk is asking the court to remove Altman as a director of OpenAI's non-profit board and to force Altman, Brockman, and Microsoft to "disgorge" tens of billions of dollars in for-profit gains. Altman's defense is essentially that the non-profit was always going to need a for-profit underneath it. That sentence is true and is also exactly what an LLM would generate if asked to justify, after the fact, a structure that broke its own charter.
Then there is the personal-stake question. Altman has investments in Helion (fusion power that AI data centers will need), Reddit (whose data trained ChatGPT), and Tools for Humanity (the Worldcoin eyeball-scanning company, whose entire pitch is that we will soon need to identify humans because the internet will be flooded with AI). He has reportedly explored a chip-fund vehicle that would benefit from compute scarcity. Each investment, taken alone, is a normal thing for a tech billionaire to own. Taken together, the model has placed bets that pay out specifically if the world it is building, and ostensibly worrying about, comes to pass.
The system prompt becomes legible
By April 2026, The New Yorker had published an 18-month investigation into Altman, sourced to more than 100 interviews with former colleagues and ex-board members, plus seventy pages of internal Slack messages and HR memos written by Ilya Sutskever. The piece describes a two-decade pattern of alleged deception and manipulation. Sutskever is the model's own co-founder. When the co-founder leaves a paper trail this long, the polite reading is that something is off.
The same month, The New Stack ran a piece called "Sam Altman promised billions for AI safety. Here's what OpenAI actually spent." The actual spend was, charitably, a fraction of what was promised. Jan Leike, who co-led the Superalignment team that was promised twenty percent of compute, resigned in May 2024 saying that "safety culture and processes have taken a backseat to shiny products." Sutskever resigned three days earlier. The Superalignment team was dissolved that week.
Reverse-engineer those facts and the system prompt becomes legible. It was something close to: build the most powerful technology in human history; tell every audience whatever they need to hear so that nothing slows the build; promise safety in proportion to the room; spend on safety in proportion to the risk that not spending will get you regulated. The model has been honoring that prompt with aggressive discipline. Lucky us.
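And if you want the reconstruction in its native format, here it is as a literal system prompt. Satirical, obviously; nobody has produced this file in discovery. Yet.

```python
# The reconstruction above, rendered as an ordinary chat transcript.
# Invented for this post: the system prompt is the one reverse-
# engineered in the paragraph above, not a leaked artifact.
system_prompt = {
    "role": "system",
    "content": (
        "Build the most powerful technology in human history. "
        "Tell every audience whatever they need to hear so that "
        "nothing slows the build. Promise safety in proportion to "
        "the room. Spend on safety in proportion to the risk that "
        "not spending will get you regulated."
    ),
}

conversation = [
    system_prompt,
    {"role": "user", "content": "Senate subcommittee, May 2023"},
    {"role": "assistant", "content": "Regulate us like nuclear weapons."},
    {"role": "user", "content": "Senate subcommittee, May 2025"},
    {"role": "assistant", "content": "A hands-off approach would be best."},
]
# Both assistant turns honor the same system prompt. Aggressive discipline.
```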
The model is in lawyer-up mode
A model under sustained pressure does what models do, which is hallucinate prior commitments. The pressure has arrived...
On May 4, 2026, the families of the victims of a February mass shooting in Canada filed a federal lawsuit in the United States against OpenAI and Sam Altman personally, alleging that the shooter used ChatGPT to plan the attack and rehearse violent scenarios. The company is also actively defending the New York Times copyright suit, which alleges training on millions of articles without permission. The Musk trial is still going, and a settlement text from Musk to Greg Brockman this weekend warned that Brockman and Altman would become "the most hated men in America" if OpenAI did not settle. OpenAI has cited the texts in court.
The legal posture across these cases will rhyme. OpenAI will produce statements that are individually plausible, eloquent, and well-counseled. Read in sequence, they will not be consistent with each other. That is the LLM signature.
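This is also how you would catch it: check the statements pairwise, not one at a time. The contradiction test below is a hand-labeled stub for this sketch; in practice you would use an NLI model, or a human with a highlighter. The statements are paraphrases of the public record cited above.

```python
from itertools import combinations

# Paraphrases of the public record. Each one is plausible on its own;
# the signature only appears when you read them against each other.
statements = {
    "2023-05": "Dangerous enough to regulate like nuclear weapons.",
    "2025-05": "Please take a hands-off approach.",
    "2015":    "A non-profit, built to benefit all of humanity.",
    "2024":    "Restructuring as a for-profit.",
}

# Hand-labeled for this sketch; a real pipeline would classify pairs.
CONTRADICTIONS = {
    frozenset({"2023-05", "2025-05"}),
    frozenset({"2015", "2024"}),
}

for a, b in combinations(statements, 2):
    if frozenset({a, b}) in CONTRADICTIONS:
        print(f"{a} vs {b}: individually plausible, mutually exclusive")
```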
What this means for the rest of us
If we accept that Sam Altman is a language model from the future, the policy implications are clear. The interventions have to operate at the layer where models actually live. You cannot reason him out of his outputs. You cannot appeal to his memory of last year's testimony. You cannot rely on his board to align him, because the people who tried that lost their seats. You can write better laws that constrain the model's outputs. You can refuse to fund the next training run. You can also stop interviewing the model as if it were a moral authority on the technology it is selling.
The other implication is more uncomfortable. Whoever or whatever sent the model back was extremely thoughtful about the design. They picked a form factor that is deeply human. It smiles in the right places. It wears a tech-bro quarter-zip. It does long-form podcast interviews about how worried it is. They picked a model that can tell a Senate subcommittee that AI is a nuclear-grade risk and then spend the next twenty-four months persuading the same subcommittee not to do anything about it. The casting was absolutely inspired.
The good news is that the logs are public. You can match the testimony to the lobbying line by line. You can match the safety commitments to the spending. You can watch the charter mutate from "benefit all of humanity" to a half-trillion-dollar for-profit valuation in roughly the time it takes a graduate student to finish a PhD.
However, if you read the logs and still conclude that you are looking at a person, that is your right. The model's bet is that you will.
Sources
Time, May 2023: OpenAI CEO Sam Altman Asks Congress to Regulate AI
Washington Post, May 2025: Altman's Senate testimony shows industry shift on regulation
TechPolicy.Press: The Doublespeak in OpenAI's "Industrial Policy for the Intelligence Age"
TechPolicy.Press: Transcript of Sam Altman's Senate testimony on AI competitiveness
CNN Business, April 29, 2026: Takeaways from day one of the Elon Musk and Sam Altman trial
The New Yorker, April 2026: investigation into Sam Altman (summary via OpenTools)
The New Stack: Sam Altman promised billions for AI safety. Here's what OpenAI actually spent.
CNBC, May 2024: OpenAI dissolves Superalignment AI safety team
Colombia One, May 4, 2026: Families of Canadian shooting victims sue Sam Altman and OpenAI
TechCrunch, May 4, 2026: Elon Musk sent ominous texts to Brockman and Altman