Why do we AI? Devaluation, division and control of labour

Artificial Intelligence (AI) is on everyone’s lips, for better or for worse, and people in the AEC field — especially those involved with aspects perceived as “creative”, such as architectural composition — often ask me: why should I care? This prompted me to explore a question: why, exactly, are we pursuing AI with such vigour? To understand this, we need to look at the intersection of cognitive science, technology, human ambition, and the capitalist exploitation of intellectual labour. As Margaret A. Boden insightfully put it, AI is about teaching computers to do the kinds of things human minds can do. But let’s unpack this further.

A Cognitive Curiosity

At its core, the development of AI stems from a deep-seated curiosity about human cognition. How do we perceive the world, plan actions, make decisions, and learn from experience? Cognitive scientists see AI as a mirror, a way to explore these human (and sometimes animal) capabilities. Tasks like perception, association, prediction, planning, and movement control—these are not just technological challenges but cognitive puzzles.

For example:

  • Perception: how can a machine process visual or auditory input to recognize objects or words?
  • Association: how do we teach machines to link concepts, like understanding that a barking sound often corresponds to a dog?
  • Prediction: can a computer anticipate the next word in a sentence, much like our brains do? (A toy sketch follows this list.)
  • Planning and movement control: from robots navigating physical spaces to apps calculating optimal routes, AI mimics our ability to plan and act in dynamic environments.
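
To make the “prediction” bullet concrete, here is a deliberately naive, illustrative sketch (a simple bigram frequency count, nothing like the neural networks behind modern assistants): it guesses the next word purely from how often one word followed another in a toy corpus, reaching a vaguely human-like result through statistics rather than understanding.

    from collections import Counter, defaultdict

    # A toy corpus; real systems learn from billions of words.
    corpus = "the dog barks the dog sleeps the cat sleeps the dog barks loudly".split()

    # Count how often each word follows another (a bigram model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the continuation seen most often in the corpus, if any."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))   # -> 'dog' (seen more often than 'cat')
    print(predict_next("dog"))   # -> 'barks'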

Each of these challenges helps us understand the rich structure of intelligence. It’s not one-dimensional but a multi-faceted space where different skills intersect. AI, in this sense, is both a tool and a lens—it enables us to replicate intelligence while offering insights into what intelligence truly means. But is this the whole story?

Technological Ambitions

From a practical standpoint, AI’s allure lies in its potential to augment human abilities. Machines can tackle tasks faster, more efficiently, and sometimes more creatively than we can (I have expanded on the creative part here). However, what makes AI uniquely fascinating is its ability to do so in ways fundamentally different from human methods. Consider:

  • a robot vacuum doesn’t clean your floors like you do; it uses algorithms, sensors, and pre-programmed instructions to achieve the same goal, and it doesn’t need to blast Katrina & The Waves’ Walking On Sunshine at full volume to get through the task;
  • a voice assistant like Alexa or Siri doesn’t “listen” and “understand” as we do, yet it can process language to fulfil requests or provide information;
  • a navigation app doesn’t think like a driver but uses real-time data and predictive models to find optimal routes.

While AI replicates human-like results, it doesn’t necessarily replicate human processes. Instead, it complements human efforts, often outperforming us where speed, scale, or precision matter most. And because these systems approach problems in distinct ways, we can contrast and compare approaches to better understand how humans solve similar challenges. For instance, when neural networks approximate learning processes, they offer a testbed for theories about human cognition, even if the mechanisms differ. This complementary relationship allows AI to act as a scientific tool for unravelling the mysteries of perception, decision-making, and other cognitive processes.

Why Should We Care?

Why should AI interest us beyond its practical applications? There are two key reasons: scientific discovery and societal impact, the latter offering interesting perspectives on the former.

1. Scientific Discovery

AI offers a framework for modelling and understanding complex systems, including ourselves. Cognitive scientists use AI to test theories about how the brain works. For instance, neural networks—a type of AI inspired by biological brains—help us explore concepts like learning and memory. As we push forward, generative AI (genAI) opens new avenues for understanding creativity and problem-solving, offering glimpses into how humans synthesize information or come up with novel ideas.
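
As a small, deliberately minimal illustration of what “using neural networks as a testbed” can mean in practice (my own didactic example, not a method from Boden or Pasquinelli): the error-driven “delta rule” used to train simple networks is formally the same update as the Rescorla–Wagner model of classical conditioning from psychology, and it reproduces the well-known “blocking” effect from animal learning.

    # Delta-rule learning: one linear "neuron" learns how strongly each cue
    # predicts an outcome.  The update is formally equivalent to the
    # Rescorla-Wagner model of conditioning, which is why such toy networks
    # can serve as testbeds for psychological theories of learning.

    learning_rate = 0.1
    weights = {"light": 0.0, "tone": 0.0}   # associative strength of each cue

    def run_trials(trials):
        """Each trial: predict from the cues, compare with the outcome, update."""
        for cues, outcome in trials:
            prediction = sum(weights[name] * value for name, value in cues.items())
            error = outcome - prediction        # "surprise" drives learning
            for name, value in cues.items():
                weights[name] += learning_rate * error * value

    # Phase 1: the light alone reliably predicts food.
    run_trials([({"light": 1.0, "tone": 0.0}, 1.0)] * 100)
    # Phase 2: light and tone together predict food.
    run_trials([({"light": 1.0, "tone": 1.0}, 1.0)] * 100)

    print({name: round(w, 2) for name, w in weights.items()})
    # -> light ends up near 1.0, tone near 0.0: the light already predicts
    #    the food, so there is little error left for the tone to absorb.
    #    This is the classic "blocking" effect, reproduced by the same
    #    arithmetic that trains the network.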

Looking to the future, AI could fundamentally reshape how we model intelligence and cognition by simulating not only human processes but potentially novel, alien-like approaches to reasoning, if we let it. This raises intriguing questions: Could AI-generated agents, with their emergent behaviours, offer entirely new perspectives on problem-solving that humans hadn’t considered? Could they become co-explorers in fields like decision-making, where their “neutral” logic might clash or harmonize with our inherently emotional biases? Or is this an illusion, and there can be no such thing as a “neutral logic”? Are we simply burying our biases deep into training datasets?

This brings us straight to the second reason we should care and stay involved.

2. Societal Impact

AI has already shaped how we live, work, and interact. It’s in the cars we drive, the recommendations we receive online, and even the medical diagnoses we trust. By developing AI responsibly, we could:

  • improve accessibility for disabled and neurodivergent individuals through tools like speech recognition;
  • enhance productivity and creativity by automating repetitive tasks and generating novel ideas;
  • address global challenges, from climate modelling to healthcare optimization.

Are we really doing that?

In The Eye of the Master: A Social History of Artificial Intelligence, Matteo Pasquinelli observes that “the most valuable component of work in general has never been just manual, but has always been cognitive and cooperative as well”; it is therefore unsurprising that genAI seems to be coming for creative, intellectual endeavours rather than, as it has been put, doing your laundry. This isn’t new. It’s how Babbage’s computer was born: it started with the mathematician Gaspard de Prony and his idea of applying Adam Smith’s industrial method of the division of labour to hand calculation. The result was, in Pasquinelli’s words, “a social algorithm – a hierarchical organisation of three groups of clerks which divided the toil and each performed one part of the long calculation, eventually composing the final results.”

“the inner code of AI is constituted not by the imitation of biological intelligence but by the intelligence of labour and social relations.”

The denial of the intelligence of labour and social activity comes second. The last step needed is to segment intellectual labour. Sigfried Giedion, in his Mechanization Takes Command, expands beautifully on this concept: devaluation, division and control are fundamental prerequisites of the automation process.

Let’s break it down.

Devaluation, in the context of mechanization, refers to the reduction of the importance of individual skill, whether creative or collaborative, in production processes. This happens as tasks once performed by skilled workers are replaced or simplified by machines.
Mechanization reduces reliance on unique human expertise by standardizing tasks. For instance, in traditional craftsmanship, the artisan’s skill determines the quality of a product, while mechanization relies on the machine to determine quality, making human input less significant. By devaluing skills, industries can focus on efficiency and reproducibility, which in turn lowers labour costs because tasks can be performed entirely by less specialized workers or by machines.

Division refers to the segmentation of tasks into smaller, discrete, and repetitive components, a principle closely tied to the division of labour explored by Adam Smith. Mechanization thrives on breaking down complex processes into simpler, standardized steps that machines or unskilled workers can perform; this segmentation not only enhances efficiency but also allows for the design of specialized machinery tailored to specific tasks.
In assembly lines such as those pioneered by Henry Ford, for instance, automobile manufacturing was divided into numerous small tasks, each performed by a different machine or worker. This division enabled faster production and minimized the need for comprehensive training.

Control refers to the oversight and regulation of processes to ensure consistency, predictability, and efficiency in production. Automation introduces the need for precise control mechanisms to monitor and manage the performance of machines, often for the workers’ own safety; this also includes quality control, coordination of workflows, and synchronization between different stages of production. Control ensures that the system operates harmoniously, reduces waste, and maintains standards, all crucial for industrial scalability, but it also treats workers as just another cog in the machine, leading to an inevitable dehumanization of the human component of the process.

Giedion’s analysis shows that these principles not only shape the technical aspects of automation but also have profound social and cultural ramifications. Devaluation diminishes the role of individual craftsmanship, division alienates workers by fragmenting their contributions, and control centralizes power in the hands of those managing the machines. Together, these elements encapsulate mechanization’s transformative—and often disruptive—nature in human history.

Now, my invitation is not to be critical a priori but to look for these patterns in your own industry. Consider what’s happening in creative industries such as screenwriting: the idea that AI should be used to pitch ideas and humans should be hired only to check and refine them is certainly a good example of both devaluation and division at work. What about your industry?

The Bigger Picture

AI is not just about building smarter machines, nor is it just about understanding intelligence itself: it’s about labour and the control of tasks. It challenges us to think about what it means to be human and what role technology plays in our evolution, but also to consider what’s valuable and what’s lost when that labour is segmented. “What do we aspire to achieve as humans?” The answers, much like AI itself, are underpinned by corporate interests we should be mindful of.

Happy New Year.
