What will humans soon be unable to do?

Generative AI, especially LLMs, is thought by many to be leading us toward a "surplus of intelligence": a seemingly boundless supply of automated assistance producing code, essays, jokes, art, even scientific papers. Herbert Simon famously argued that wherever technology produces a surplus, we should look for the new scarcity it creates; a surplus of information, for instance, consumes attention.

There is enormous enthusiasm for what a surplus of intelligence might produce, and alongside it a prominent critical community has grown, questioning the nature of that intelligence and its broader political-economic foundations and function. But I want to ask a different question: what does a surplus of "intelligence" consume?

We know, of course, that it consumes carbon, water, and energy. It consumes time: the time of educators trying to teach the hard-won skills of writing and reflection, of software developers reviewing pull requests with subtle errors, of professionals reviewing legal documents, resumes, policies, and other paperwork that may or may not express the considered preferences of their authors.

We also know from early empirical work that it consumes entry-level jobs for technical writers, transcription and translation services, software developers, marketers, even photographers and artists. Perhaps these will be replaced, and the skill of debugging a WordPress plugin or writing enthusiastic copy about the healing powers of sea moss will be lost alongside our memory for phone numbers and our facility with paper maps. There's an optimism that for every job lost, a new one will be gained.

You'll frequently hear arguments of the form "Sure, LLMs can do this, but they'll never do that", where "this" is something that even months ago we might have thought impossible for a computer, and "that" is something increasingly, ineffably, "human". It reminds me of Daniel Dennett's "Quining Qualia" and his story of Chase and Sanborn, two coffee tasters who can no longer tell whether the coffee has changed or whether they have. Once all the discriminable properties and functional differences are stripped away, there's nothing left to latch onto. What is left over for the distinctively human is looking increasingly esoteric. Maybe we're not so special. Maybe this kind of desperate argument just bootstraps a new phlogiston.

But I don't think so. I think we risk the atrophy of important capacities: paradoxically, the very ones we need to benefit from the promised abundance of surplus intelligence. What is being risked is our opportunity, and our motivation, to develop the critical and intuitive skills that come from doing the cognitive heavy lifting ourselves: recognizing subtle errors in code and argument, refining our thoughts through trial and failure, cultivating a personal taste for quality, knowing our own minds.

That little reflex to tab over to an LLM to rephrase a text, to draft an email, to polish a phrase, to read some error logs: it gets stronger and stronger the more it seems to deliver what we want. But does it change what we want? Does it change what seems good enough? Can we become good writers without being bad ones? Can we have senior engineers without juniors? Can we have experts without apprentices? Will a surplus of intelligence consume its sources?