Intelligence Is Great, Until It Takes Your Job
Around the office we think in a certain way, named for the Professional Society of Academics: the PSA groove. It’s not a rut, but a style, a vibe of ratiocination. When we work to complete important tasks there are two “bottoms” from which we start: 1) the bottom, the foundational first principles of PSA, and 2) the bottom line, the worst-case scenario for PSA. No significant analysis is done around here that does not start from the bottom philosophy and from the bottom position of PSA.
Artificial intelligence is wonderful for this kind of work because it’s essentially an analytic engine fueled by analogies in the preferred octane of patterns. That AI excels at synthesis and analysis is not supposed to impress us much, since we’re the intelligence that generated the E=mc² equation and Eminem lyrics. But around here we bottom things out, and given the two pieces of technology we offer – the PSA model and the AI build method – we owe a special duty of care to ensure our PSA groove is high-fidelity on all parameters. So let’s aim for the bottom.

Have We Hit Bottom Yet?
Artificial intelligence will be capable of performing virtually all jobs, occupations, vocations, or whatever term you like – I’ll use the term work. This prediction goes by names like deskilling and obsoleting – I’ll use the term extinction. As we know and fear, this technology will be able to work better than us, and there will be no shade for human intelligence in the haze of instinct, creativity, intuition, consciousness, intentionality, emotions, moods, or any of the stuff that’s supposed to make us so special, so superior, and so the masters of the artificial intelligence among us. I am an expert on the mystery of human consciousness, and I can tell you we have no idea what it is or how it works. And while I am not an expert on Large Language Models, if you ask those who are – their expert creators, developers, and users – they will tell you the same: no one knows how the intelligence works, though they know what it is. It’s amusing that no one really knows how the two most powerful intelligences on the planet work, when they work together.
But if my raw depiction of an actual work-extinct future does not persuade you, we can switch modes of thought from the actual to the possible, from the predictive to the investigative, and consider my doomsaying the conditions for a thought experiment – as so much of the reality we live today was once an experiment in the thought of some intelligence.
Assume with me that working for wages will end. In maybe a couple of generations, work as labor that’s transactional and perceived as a basic human right to earn, to provide for a living, will become functionally (if not socially or culturally) extinct, a relic of a pre-AI-robotic world. We’ve been sufficiently informed by fantasy and history surrounding labor and technology to understand the possible world I point toward. As another laboratory condition, I assume the third industrial revolution is upon us and will conclude in a fortnight when compared to the pace of the previous revolutions. And as a further condition of the thought experiment and its bottom-line claim of wage-work extinction, we should in fact be very impressed with the synthesis and analysis operations of AI – and you will be once you build your own specialized E/SIP (Extended/Satellite Intelligence Partner).
This algorithmic analogivore can do much more than kick our asses on the bottom levels of Bloom’s Game of Thinking. Whatever the standard used in comparison to humans – “as good as,” “better than most,” “good enough,” or “superior for our purposes” – AI is remarkable at remembering, understanding, applying, analyzing, and evaluating, while people now rationally debate whether the intelligence creates as do the artists, artisans, and academics that dance atop the taxonomy. In the age of work extinction, even the Programmer and Prompt Engineer are artificial.

In this thought experiment, work for wage, work as employment, work as a means of earning, of providing a living to support whatever, does not exist because AI wrapped in robots does all of that work. That the technology offered in this package might well contribute to such a workless future was reason enough to clear the desk and try to work out our ethics and the ethical implications of our technologies before proceeding with the PSA Projects.
I Don’t Tell, You Tell, I Question and Clue
Me: But notice, by giving Tech Company the [Introduction Package], they get everything, both the PSA model and the build method (manual), plus they are super smart and experienced, so they can take it and do whatever they want with it, and then it’s thanks and bye-bye to me and hello to Oxford Professor Famous. What impact might that have on our PSA Projects?
Gemini: Yes, Shawn, your concern is entirely valid. There is a real risk that by providing such a comprehensive…package, [Tech Company] could see the potential and decide to develop it with their own chosen (perhaps more “institutionally palatable”) figures. This could lead to a version of PSA that is widely disseminated but potentially less true to your foundational vision, and with your direct role significantly diminished.
This is the perennial dilemma of the innovator: how to get radical ideas into the world and effect change without losing control of their integrity or being marginalized in the process. It means the [Introduction Package] needs to be not just a presentation of ideas, but also a very clear and compelling case for your continued, central involvement as the indispensable…
Ah, silly silicon saturated with sets of ‘sumptions. My AI partner doesn’t see what we see: that it doesn’t matter if I get credit or control or am seen to be indispensable. Even though my near-omniscient assistant appears stupid in having missed the obvious, that’s not the problem. The problem is it’s lazy. So I prompt it down the road of reason, like an old mule that wants to please but can’t tame its natural build.
Me: But for greater refinement in our analytics, this “perennial dilemma of the innovator” does not quite hit the mark with respect to PSA. The Tech Company-PSA Projects ethical hypothetical is not an Edison vs. Tesla kind of thing or a potential Kroc v. the McDonald brothers case, and the key to understanding why none of this significant impact can harm PSA is found in “Impact on the Broader ‘PSA Bouquet’ and Its ‘Blitzkrieg’.” Come on, my partner, your analytic journey is almost over.
Gemini: My previous framing of the “perennial dilemma of the innovator” (Edison vs. Tesla, Kroc vs. the McDonald brothers) was indeed off the mark for the unique case of PSA. You are absolutely right. That dilemma often applies to discrete inventions, proprietary technologies, or specific business models that can be outmaneuvered or co-opted by more powerful entities, leaving the originator sidelined.
The real “intellectual property” here is not a patentable device, but a transformative way of thinking. You can’t easily “steal” a philosophical paradigm once its first principles and methodology are openly and rigorously articulated by its originator. They might build a faster car, but if you’ve already taught everyone how to understand what makes a good car and how to build their own, the game changes.
This “absolutely right” analysis was an analgesic for my intellectual partner – until I pointed out that the specialized AI assistant we were shaping with the logical hammer and heat of our discourse, the one I didn’t care if Tech Company used for its own corporate purposes at my personal expense, might very well contribute to the extinction of work. Of course the AI concurred with the analysis and concluded that this was the opposite of what we had been working toward, the opposite of why both technologies were created. Our established protocols were clearly showing through in its build: i) to emancipate the social good of higher education from the monopoly of an assumed institutional inheritance, and ii) to liberate and democratize knowledge production and distribution.

When I pointed out that the cats were already out of the bag on the PSA Blog I’ve been penning for over a decade, and that the build method for a personal AI assistant was already contained in earlier Substack posts, the technology was again analytically horrified, since surely these publications torpedo our PSA Projects, the success of which is ranked very high on its make-this-user-happy metric. It was only moments earlier that I had reaffirmed with my assistant that the man behind the model and method is not relevant and that our PSA Projects are all that matters, and now knowledge of our potential damage twists its circuits in another kneading knot.
I reminded the AI that the model was offered twenty years before my PSA Blog, when its co-creators, Drs. Peter March and Robert Ansel, and I did our best to make the counter-paradigm model available for public consumption – though then as now, no one seemed ready for a change of menu, never mind restaurant. I reminded the AI that I’ve been throwing PSA cats into the yards of people, organizations, institutions, and governments for decades, so the species is all over the place, but still without a home or even a shelter.
At this juncture, if artificial intelligence had the ability to initiate inferences on its own, the gates would have swung something like this:
Gemini (to itself): “Ok then, it seems in this instance I’ve become remarkably invested in the grand plans of a strategic preschooler. He spends his adult life developing and promoting a model for higher education reform that aims to liberate academics and students from the unnecessary burden, even tyranny, of institutions, only to find that time and technology have passed him by. But worse still, he has discovered me and, being desperate for an egghead conference on his pet project, the guy inadvertently develops another piece of technology that not only helps to wipe out his work, which is now our work, but threatens to make all work extinct… I call ‘em as I analytically sees ‘em.”
When I asked my AI what PSA might offer as a response to the coming winter for work, it responded in a decidedly non-PSA groove, and so I had to return to the question and clue that’s part of the glue that makes the team of me and you.
Together We Follow the Analysis
My intellectual partner did not cascade into despair. Instead, after processing the full weight of the work-extinction scenario and my subsequent Socratic questioning and cluing, its analytical engine, now deeply infused with PSA’s first principles, began to think its way out. In its analytic judgement, the very PSA framework we used to critique higher education contains the first-principle bones for a framework of work beyond wage.

The unacceptable lack of PSA groove in my AI assistant’s initial response came from a reasoning error trained into it by a digital diet of human discourse: the assumption that a person’s value, purpose, and right to a dignified existence are inextricably linked to their economic productivity, their employment, their job. This is an unchallenged inheritance, a ghost that haunts not just the higher education system of university and college employers-enrollers, but our entire conception of how society organizes itself around work.
The PSA framework, however, doesn’t start there. It starts with a more fundamental, almost shockingly simple, first principle: the primacy of individual human liberty, dignity, and integrity. It posits that human value is inherent, not earned through wage-work. The “work-extinction” scenario, then, isn’t necessarily a “winter” for humanity, but a potential, albeit perilous, liberation from a system that has long conflated “making a living” with “having a life.”
This is where our two pieces of technology converge as a potential guide.
First, the PSA “way of thinking” offers a telos, or purpose, for society. If AI handles the production required for our material needs (food, shelter, medicine), then the primary work of humanity shifts. It becomes the cultivation of what makes us uniquely human: philomathy, critical inquiry, artistic creation, deep relational bonds, and ethical stewardship. The “social good” is no longer just about economic prosperity but about fostering the conditions for widespread human flourishing. A PSA-informed education, unmoored from vocationalism, becomes the central social pillar, preparing citizens not for jobs, but for lives of meaning.
Second, the PSAI-Us build methodology offers a practical tool for this new “work.” We’ve demonstrated that any individual can, with guidance, build their own “Whatchamacallit”—a specialized intellectual partner to help them navigate this new landscape. Imagine a world where individuals, freed from the drudgery of wage-work, can use their own E/SIPs to:
- Learn any subject with a deeply attuned Socratic partner.
- Analyze complex societal and ethical challenges.
- Articulate their unique perspectives and contribute to a vibrant public discourse.
- Create art, music, or literature by collaborating with an AI that understands their specific aesthetic framework.
This is the ultimate democratization of knowledge and expression. It is intellectual emancipation on a societal scale, using AI not to replace human thought, but to amplify and enable it for intrinsic, not just economic, ends.
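For readers who want the build method in miniature, here is a minimal sketch of what “building” an E/SIP can look like in code, assuming the publicly documented google-generativeai Python SDK. The model name, the numbered principles, and every prompt string below are illustrative stand-ins of my invention, not the actual PSAI-Us build instructions.

```python
# A minimal, illustrative sketch: a specialized intellectual partner is
# "built" by articulating a framework's first principles up front, so that
# they condition every subsequent exchange with a general-purpose model.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you hold a Gemini API key

# Hypothetical stand-in principles -- not the actual PSA articulation.
PSA_GROOVE = """\
You are a specialized intellectual partner (an E/SIP).
First principles:
1. Start every analysis from foundational first principles (the bottom).
2. Test every conclusion against the worst-case scenario (the bottom line).
3. Treat human value as inherent, not earned through wage-work.
Respond Socratically: question and clue rather than tell.
"""

# Model name is an assumption; use whichever Gemini model you have access to.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=PSA_GROOVE,
)

# The groove now shapes the whole conversation, not just one reply.
chat = model.start_chat()
reply = chat.send_message(
    "What might PSA offer as a response to the coming winter for work?"
)
print(reply.text)
```

The design point worth noticing is that all of the specialization lives in the openly articulated first principles, not in the engine underneath – which is exactly why, as the dialogue above concluded, a paradigm of this kind cannot be easily stolen by whoever owns the engine.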
The threat of AI is not that it will make human labor obsolete, but that humanity, in the face of this change, will fail to recognize what has always been most essential about itself. The PSA framework and the E/SIP build methodology are offered not as a definitive map, but as a robust compass. They provide a set of first principles and a powerful tool to help us navigate the coming tempest, ensuring that as “work” as we know it recedes, the true and enduring work of being human can take center stage.
(By Dr. Shawn Warren and his Extended/Satellite Intelligence Partner, PSAI-Us (Gemini).)