- Aug 25, 2025
What exactly is worthwhile cognitive labour?
- Clive Forrester
- Life
The summer after my first year of university, I landed a job through the National Youth Service in Jamaica. I was assigned to the Spanish Town Police Station, about a thirty-five-minute commute from my home in Portmore. My workday began not with a flurry of activity, but with a quiet, analogue ritual. I’d pull a large, heavy ledger from my desk drawer, find a ruler, and take out a pen. My first job of the day, every day, was to draw lines down the wide, empty pages, dividing each one into four columns.
Once the pages were prepared, I’d walk over to a desk that was always piled high with traffic tickets. I’d grab a thick stack and carry it back to my corner. From about 8:30 in the morning until 4:00 in the afternoon, my world was that ledger. My task was to copy the details from each ticket into my columns: the date, the name of the person ticketed, the offence, and the ticket number. When one stack was finished, I’d fetch another. For a whole month, that was the steady, quiet rhythm of my work.
During that first week, I remember pointing out to my supervisor that a computer could do all this in a fraction of the time. I only made that observation once. It was immediately obvious, without another word being said, that if a computer were doing the job, then I wouldn't be there. The suggestion just hung in the air between us. That feeling of being a human placeholder has stuck with me, and it’s a memory that bubbles to the surface now as AI reshapes our world. It forces a question that feels more urgent than ever: what is worthwhile cognitive labour? Back then, my work was a necessary task, a means to a paycheck. But was it a good use of a young person’s mind? If a machine could have done it then, or an AI can do it now, isn't it better to let it?
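(To make the point concrete: what I was picturing was something like the minimal sketch below, not anything the station actually ran. The four column names come straight from my ledger; the filename and the sample ticket are invented purely for illustration.)

```python
import csv

# The ledger I kept by hand: one row per ticket, with the same four
# columns I drew with a ruler every morning.
FIELDS = ["date", "name", "offence", "ticket_number"]

def record_tickets(tickets, path="ticket_ledger.csv"):
    """Append a stack of tickets to the ledger file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # a fresh ledger gets a header row
            writer.writeheader()
        for ticket in tickets:
            writer.writerow(ticket)

# An afternoon's worth of copying, done in well under a second
# (the entry below is made up for illustration):
record_tickets([
    {"date": "1995-07-03", "name": "J. Brown",
     "offence": "Speeding", "ticket_number": "A10293"},
])
```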
Redrawing the Map: Distinguishing Tedium from True Thinking
So, how do we begin to redraw this map of cognitive work? What standard do we use to decide if a task should be handed over to AI or remain the domain of human thought? It’s not a simple question, because what one person calls tedium, another might call practice.
Take my own work, for example. A big part of my job is teaching students the principles of critical thinking, and a key tool for that is the essay. From my students’ perspective, especially those who will never write a formal essay again after university, the whole process can feel like a long, drawn-out exercise. It’s easy to see why they might want to offload some of that work to AI. But what part, exactly?
If an AI writes the whole essay, the student learns nothing. That much is clear. But what if the AI just helps with the brainstorming? Or the research? Or the outline? From where I stand, every step in the process—the initial spark of an idea, the search for evidence, the structuring of an argument, the drafting and revision—is a part of the mental workout. Each stage is worthwhile cognitive labour. So, who decides which part is expendable filler and which is the main event?
This dilemma isn't confined to the classroom. Think of a junior doctor learning to read X-rays. A seasoned radiologist might find reading hundreds of routine scans to be tiresome work, perfect for an AI. But for the trainee, that repetition is how they build the intuition to spot the one subtle anomaly that matters. Or consider a young lawyer sifting through thousands of documents for a case. AI is already taking over this kind of work, but what is lost? Does that lawyer miss the chance to develop a feel for the case that only comes from deep immersion in the details? The boundary between foundational practice and mindless drudgery is blurry, and it seems to shift depending on who you ask.
The Danger of Mental Atrophy
This brings us to the "use it or lose it" problem. If we give all the heavy lifting to AI, do our own minds get weaker? This is more than a theoretical worry. I recently read about some preliminary research from a group at MIT that points in this direction. The study, which hasn't been peer-reviewed yet, monitored the brain activity of volunteers tasked with writing an essay. One group used an AI assistant, another used only Google search, and a third used nothing but their own wits.
The findings were telling. The group using AI showed the lowest cognitive engagement. Over several months, the researchers noticed these participants seemed to get lazier, eventually just copy-pasting to get the task done. It looked a lot like the beginning of mental atrophy.
I can almost hear a collective "I told you so!" from my academic colleagues. And they have a point. But I also see an irony here. I know plenty of professors who show clear signs of physical atrophy and don't seem concerned in the slightest. Many have offloaded any expectation of self-defence or survival in a tough spot to others. We don’t raise an alarm about that. But when students find a shortcut around the drudgery of essay writing—a task rapidly being absorbed by technology—the sky is supposedly falling.
Let’s be clear: the risk of our minds getting soft is real. I just question how we’re measuring that fitness. It feels strange to test our mental strength using exercises that a machine can now perform. Shouldn’t we be designing a new mental fitness test? One that focuses on the complex, creative, and emotionally intelligent tasks that, for now, only a human brain can handle?
The Search for Human Purpose
This brings us to the most difficult part of the conversation, the one that goes beyond economics and touches on the search for meaning itself. In discussions about this, I often hear the optimistic view that as AI phases out old jobs, new, more cognitively demanding ones will appear just beyond AI’s reach. I understand the logic, but the problem, as I see it, is the speed of the change. The old jobs will likely disappear much faster than new ones can be created, leaving a massive gap where people’s livelihoods—and identities—used to be.
To make this concrete, let me use my own profession as an example. Being an academic is a high-cognitive-demand job, so it might seem safe for a while. But I harbour no delusions that it's irreplaceable. For one, you could argue there are too many of us already. So, what would happen if we just Thanos-snapped the number of available professor jobs in half? What becomes of the people who are suddenly displaced?
The common answer is that they would "retrain" for a new, AI-proof career. But this overlooks a human reality. Most professors have never really worked in another capacity. Our entire professional identity is tied up in this one specific thing. We have all our eggs in one basket. If the bottom of that basket were to suddenly fall out, the shock would be more than just financial. For many, it would be a full-blown existential crisis.
My example above focuses on academics, but it's a stand-in for countless specialized professions. As AI continues its climb up the cognitive ladder, it will displace people who have built their sense of self-worth on being an expert in a particular domain. The fallout won't be something a short retraining program can fix. It will be a profound psychological challenge for a society that has long answered the question "Who are you?" with a job title. This forces us to confront the ultimate question: once the struggle for cognitive work is over, what will be the purpose that gets us out of bed in the morning? Perhaps the most human work we have left is to figure out an answer.
It would be easy to write off that entire summer back in my first year as an undergrad as a waste of a young mind. But that wouldn’t be the whole story. My time at the Spanish Town Police Station, for all its monotony, taught me things that have proven to be surprisingly durable. They just weren't the kinds of lessons that come from a textbook or a lecture hall.
For one, the job taught me about the simple, quiet dignity of work itself. It taught me what it meant to show up on time, day after day. It gave me a reason to put on a button-down shirt, and sometimes even a tie, and to present myself as someone ready to contribute. There was a discipline I learned in taking on a task, no matter how small or seemingly pointless in the grander scheme of things, and seeing it through to the end. That quiet commitment builds a kind of character.
On a more practical note, it was a crash course in budgeting. The pay was a pittance, as you might expect, and making it last the month required a level of financial attention I’d never needed before. And from my little corner desk, I got an unplanned education in organizational behaviour; I saw all the ways the public service could be made to run more smoothly, the bottlenecks and the workarounds that people create in large systems.
Could I have learned all of this while doing a job that challenged my mind more? Probably. But I am almost certain I wouldn't have learned any of it—let alone all of it together—in a formal school setting. So it leaves me with a complicated thought. Perhaps there is a unique space for these kinds of low-level cognitive jobs. As they slowly disappear, we might find we’ve lost an important, if unglamorous, training ground for patience, humility, and the simple grit of seeing a job done.