Two Headlines
On the week that showed us where we actually are
Two things happened in the same week in March, in the same civilisation, and almost no one noticed the connection.
A publisher withdrew a novel from sale. The book had been acquired, edited, and released. Then readers raised concerns that AI had been involved in its creation. The publisher acted swiftly. The book was withdrawn. In the conversation that followed, the decision was described — by those who welcomed it — as the industry getting its act together.
In the same week, the United States government banned the AI company that had refused to allow its technology to be used in fully autonomous weapons and mass domestic surveillance. The company had tried to write two specific safeguards into its contract with the Pentagon. The Pentagon refused. Remove the restrictions, or lose the contract. When the company held its position, the administration designated it a supply chain risk — a status normally reserved for companies considered extensions of foreign adversaries — and ordered every federal agency to cease using its technology. The companies that agreed to make their AI available for all lawful military purposes without restriction stepped into the space left behind.
Sit with both of those headlines together.
In one room: AI must be kept out of human creative endeavour, because something irreplaceable is at stake. In another: AI must be made available for lethal autonomous operations without the interference of the company that built it, because something irreplaceable is at stake.
Both positions are held with complete sincerity. Both are defended with serious arguments. Both are responses to the same technology, in the same month, in the same country. And they point in precisely opposite directions.
*
I am not going to tell you which side is right. That is not where this piece is going.
I want to ask a different question: what would it mean for both positions to be wrong in the same way?
Not wrong about their values. The publisher is right that something matters about the relationship between a life lived and a life written. The engineers who refused to remove their safeguards were right that something matters about humans remaining in the loop when the loop includes lethal force. These are not foolish positions. They are, within their frames, entirely defensible.
But a frame is not the same as reality. And the question neither frame is asking is the one that matters most.
*
A few weeks ago, in a piece called The Other Acceleration, I described what a graph of compression technology looks like across the history of life on earth. A flat line for four billion years, then a vertical spike in the last five thousand. Everything we call civilisation is in the spike. Everything that is currently accelerating is in the spike.
That piece introduced a distinction that I want to pick up here.
Compression is what happens when lived experience is turned into a signal: a word, a number, a symbol, a line of code. Something is always lost in the translation. The signal travels faster and further than the experience it represents. That is why compression is useful. That is also why it is dangerous.
Decompression is what happens at the other end: when the signal meets a body that opens it back up into something approaching the experience it compressed. Your body decompressing this sentence right now. The formation you have accumulated — the things you have lived, the losses you have carried, the patterns you have learned to recognise — determining how much of what I compressed you can recover.
A healthy system keeps these two movements coupled. Compression and decompression in continuous cycle. The map serving the territory. The symbol remaining answerable to what it represents.
The question the graph raises — the question I have been sitting with since the week these two headlines appeared — is this: what happens when the most powerful compression tool in the history of our species operates without decompression? Not occasionally. Structurally, by the nature of how it works.
*
The publisher’s answer is: keep it out of literature. Literature is where decompression happens. The writer’s body, formed by everything they have lived, meeting the language and finding what it contains. If the formation is absent — if no body underwent the living that the words purport to compress — then the decompression at the other end is meeting a simulation of depth rather than depth itself. The reader’s body has been misled.
This is not a wrong answer. It names something real.
But it cannot see what it cannot see. The question it does not ask is: what about all the other places where compression without decompression is already operating? The financial models that optimise supply chains without registering the bodies those chains run through. The targeting algorithms that identify threats at speeds no human could match, in contexts no human is fully inside. The administrative systems that allocate care, determine eligibility, route decisions — at scales and speeds that have already exceeded the regulatory capacity of any body that might have decompressed them.
The novel is not where the stakes are highest. It is where the stakes are most legible. And legibility, in the consciousness trap, is its own kind of blindness: the frame illuminates what it can see and leaves everything else in the dark.
*
The Pentagon’s answer is: the decision about where AI operates is a military decision, not a company’s decision. Remove the restrictions. Trust the institution.
This too names something real. Democratic accountability runs through institutions, not through the terms of service of technology companies. The question of who controls the tools of lethal force is not one that private enterprises should be able to answer unilaterally.
But it cannot see what it cannot see. The question it does not ask is: what is actually being removed when the safeguard is removed? Not a company preference. Not an ideological position. The last mechanism by which a consequence-bearing human being remains inside the loop at the moment of lethal decision.
What those engineers were defending — whatever their reasons, whatever their other failures of nerve or judgement — was the structural principle that someone who can be reached by consequences must remain present when those consequences include killing. Remove that, and you have not streamlined the decision. You have removed the only thing that made the decision a decision in any sense that a living system can recognise.
*
Two frames. Two sincere, serious, defensible positions. Both responding to the same thing. Neither able to see what the other is seeing. Neither able to see what neither is seeing.
The thing neither is seeing is not a policy question. It is a structural one.
We have produced the most powerful compression tool in the history of symbolic intelligence. We are deploying it at a speed and scale that no decompressive capacity — no body, no formation, no institution built from consequence-bearing humans — can match. And we are having a furious argument about whether to keep it out of novels.
That argument is real and worth having. And while we are having it, the processes that actually shape whether human beings remain inside the regulatory loops of the systems that govern their lives are moving in a different direction entirely, at a speed the argument cannot follow.
*
I began this series with a graph. A flat line for four billion years. A spike so steep and so late that it looks like a printing error.
We are in the spike. Both headlines are in the spike. The argument about which direction to point — toward the novel or toward the weapons system — is in the spike.
The question the spike is asking is not which application of AI we should permit or prohibit. It is whether the practices of decompression — embodied, formation-dependent, consequence-bearing — can be cultivated at anywhere near the speed at which compression is accelerating.
That is a harder question. It does not resolve into a policy position or a publisher’s decision. It resolves, if it resolves at all, into something that happens in specific human beings, in specific practices, one at a time.
One more thing, and it belongs in the argument rather than the footnote. This piece was developed in dialogue with an AI assistant. If the argument has landed, you will already know why that is not a contradiction. The question was never whether AI was in the room. It was whether a consequence-bearing human remained inside the loop — carrying the formation, making the judgements, responsible for every word. That human is me. Whether the loop held is yours to assess.
Which is where we are going next.
*
Terry Cooke-Davies is a Distinguished Fellow of the Schumacher Institute. He writes from Folkestone, UK. Recognition Theory: Schumacher Institute Briefing 1 (ISBN 978-1-0369-6925-7) is available from the Institute.


