2025-11-24 | David Grossi


AI: Toward an Artificial Consciousness?

This article raises questions about the future of AI, and more specifically about the future of humans alongside AI.

This title may seem a bit excessive to most readers, but you will see that this hypothesis is not devoid of arguments.

Introduction

Three years ago, in the wake of the global ChatGPT wave, I published an article dedicated to artificial intelligence. Its goal was to show that the subject was being treated extremely poorly (and mistreated) by the media, and it addressed four central topics.

You will see not only that these four topics are still completely current, but also that the projections made three years ago fell well short of reality.

But let's start from the beginning, and demystify the real functioning of an AI.

How does an AI work?

The goal here is not to dive deep into the technicalities, but we cannot talk about AI seriously without a minimum of technical grounding, so I will try to give you as much information as possible without going further than necessary.

In the current process, bringing a new AI to market breaks down into two distinct phases: the generation of the model, and its exploitation.

The generation of the model

The first phase consists of compiling as much knowledge as possible into a simple database of numbers, which we call "parameters". The knowledge ingested at this stage (what we call pretraining) can come from several heterogeneous sources: texts, images, sounds, or videos; in the end, what you obtain is always a sequence of numbers.

These numbers represent all the assimilated knowledge, organized in an intelligent way to be able to subsequently be exploited quickly by an artificial intelligence engine.

This knowledge-compilation phase is critical: it can take several months of computation on thousands of very powerful machines equipped with specialized chips (farms of NVIDIA A100 GPUs for GPT-3.5, released in 2022, and H100 GPUs for GPT-5), which entails a cost of several million dollars that only grows with the need for ever smarter models.


This pretraining is then generally followed by a fine-tuning phase, "RLHF" for "Reinforcement Learning from Human Feedback": in plain terms, humans reinforce the model's learning. For example, OpenAI used human annotators to rate responses and adjust the AI's behavior: politeness, coherence, refusal of sensitive content, etc. All this knowledge and behavior is inscribed directly in the parameters, not in an external database.
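To make this first step of RLHF concrete, here is a minimal, purely illustrative sketch in Python (PyTorch) of how a reward model can be trained from human preference pairs. Everything here (the tiny network, the fake embeddings) is invented for illustration; real pipelines are far more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a response embedding to a scalar "quality" score.
# (In a real pipeline this is a full language model topped with a scalar head.)
reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(emb_chosen, emb_rejected):
    # Pairwise (Bradley-Terry) loss: push the score of the response the human
    # annotator preferred above the score of the rejected response.
    return -F.logsigmoid(reward_model(emb_chosen) - reward_model(emb_rejected)).mean()

# Fake batch: 4 preference pairs, each response reduced to a 768-dim embedding.
loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"pairwise preference loss: {loss.item():.4f}")
```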

The exploitation of the model

There you go, the parameter database has been created, and this database synthesizes human knowledge in an extremely compact way: and now?

Now, you have to know how to read and exploit this knowledge base! So, what actually happens when you ask a question to an AI?

Step 1. We transform your question into numbers

When you type a prompt, it is first sliced into "tokens" (roughly one word each, to simplify the explanation; in reality tokenization is finer-grained).

For example: "Explain to me how a rocket works" becomes "Explain", "to", "me", "how", "a", "rocket", "works".

Each token is transformed into a sequence of numbers. For example, the word "rocket" becomes [0.12, -0.44, 0.87, ...]

Each array of numbers - called a vector - represents the meaning of the word in a mathematical space.
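You can observe this slicing yourself with OpenAI's open-source tiktoken library. The exact cut depends on the tokenizer (it will not always fall on word boundaries as in the simplified example above), and the embedding values shown earlier are purely illustrative:

```python
import tiktoken

# cl100k_base is the tokenizer used by the GPT-3.5/GPT-4 generation of models.
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("Explain to me how a rocket works")
print(token_ids)                              # a list of integer token ids
print([enc.decode([t]) for t in token_ids])   # the text piece behind each id
```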

Step 2. The model transforms these vectors using its parameters

These vectors are then sent to a Transformer neural network (it's the "T" in GPT), consisting of layers (more than 100 for GPT-5).

We could go into detail about "multi-head self-attention", "feed-forward networks", "stacking and propagation", "logits", or "softmax", but the aim of this article is not to turn you into an AI expert. What matters is that, through a long sequence of mathematical operations, the system weighs every token of your question against all the others and produces a probability distribution over the possible next tokens.
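For curious readers, here is roughly what the heart of one of those layers, self-attention, computes: a single, deliberately simplified "head" in plain Python/NumPy, without the learned projection matrices that a real layer uses.

```python
import numpy as np

def self_attention(X):
    # X holds one vector per token (n_tokens x d).
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)   # how strongly each token "looks at" the others
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax, row by row
    return weights @ X              # each token becomes a weighted mix of all tokens

X = np.random.randn(8, 64)          # 8 tokens, 64 dimensions each
print(self_attention(X).shape)      # (8, 64): same shape, now contextualized vectors
```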

Currently, AIs simply predict the most probable next word in response to your question, then append it and repeat, each time computing the most probable next word given all the previous ones.
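Schematically, the generation loop looks like the sketch below. The toy_model stand-in (which returns random scores) replaces the real Transformer, which is assumed here, not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 50_000

def toy_model(tokens):
    # Stand-in for the real Transformer: returns one raw score ("logit")
    # per vocabulary entry. A real model computes these from its parameters.
    return rng.normal(size=VOCAB_SIZE)

def softmax(logits):
    e = np.exp(logits - logits.max())   # numerically stabilized softmax
    return e / e.sum()

def generate(model, prompt_tokens, n_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        probs = softmax(model(tokens))        # distribution over the next token
        tokens.append(int(np.argmax(probs)))  # greedy: keep the most probable one
    return tokens

print(generate(toy_model, [849, 21435, 311], 5))
```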

In short

If we wanted to explain it in extremely layman's terms, we could say this: an AI turns your question into numbers, performs an enormous quantity of arithmetic on those numbers, and turns the resulting numbers back into words.

That is what makes some say that, in the end, AI is just a big "calculator" machine!

Important: While phase 1 of training an AI model requires colossal computing power that only a few large companies can afford, phase 2 — running the model — can work on a regular PC with a good processor and sufficient memory (64 GB recommended). Moreover, the database of all compiled knowledge is not very large and fits into just a few gigabytes. This is what makes it possible to embed autonomous AI in robots quite easily, without necessarily being connected to the Internet.
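For example, with the open-source llama-cpp-python library, running a quantized model locally takes only a few lines. The model file name below is a placeholder for any GGUF model you have downloaded beforehand:

```python
from llama_cpp import Llama

# Placeholder path: any locally downloaded, quantized GGUF model file will do.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")

out = llm("Explain to me how a rocket works", max_tokens=128)
print(out["choices"][0]["text"])
```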

The exponential acceleration

When ChatGPT was released in November 2022, it was running version 3.5 of OpenAI's model, a model that still made many mistakes. Since then, AI has become the sector attracting the largest investments in the world, now counted in the trillions of dollars.

Result?

Today, the question is no longer whether an AI or a robot can replace this or that employee, but rather when.

And this just three years after the release of ChatGPT.

Not five. Not ten. Three.

In short, we are witnessing a phenomenon faster and broader than anything humanity has experienced so far.

Ethics and safety

These are the two most problematic aspects of AI, and they are not being addressed at all today: governments are powerless to do so, and the companies that produce AI have no economic incentive to slow down. The confrontation plays out on two levels: between multinationals, and between governments, or more precisely between China and the United States.

Ethics

A handful of multinationals and two governments are currently locked in this race, with no regard for its human impact in the short, medium, or long term.

Impact on employment

As we have seen, the first impact is on employment: AI is already destroying jobs on a massive scale, and the phenomenon will accelerate with the ultra-fast evolution of industrial and domestic robots. Everyone is affected.

When we ask those who create AIs, the official line is: "All technological evolutions have destroyed jobs, but don't worry: the destroyed jobs have always been replaced by new, higher-value-added ones."

The problem with this discourse is that it offers nothing concrete about these so-called "new jobs": we simply cross our fingers and hope that what happened in the past will happen again, as if it were an inevitable law.

But despite this reassuring discourse, what is happening with AI cannot be compared to anything that came before. The First Industrial Revolution (coal, the steam engine, mechanization) stretched over nearly 80 years and destroyed the jobs of only certain classes of workers. The AI revolution, by contrast, will affect 90% of sectors of activity, and over a very short timeframe: manual workers, administrative staff, white-collar workers; no field will be spared from job destruction. How do we face this? Do our rulers really grasp the tsunami that is coming?

In conclusion, the AI race, led by a handful of people on the planet, literally puts at stake the entire organization of our societies around work, and nothing and no one can currently change this course.

War and the autonomous power to kill

For those following the conflict between Russia and Ukraine, you will not have missed that it has turned into a technological war, based on drones and now AI. The worst fiction has already caught up with us: right now, autonomous AI-enabled kamikaze drones can seek out their targets on their own and kill them. An ethical threshold has clearly been crossed, amid near-total indifference.

Safety

The alignment problem

A safe AI is one we can be absolutely sure will act in our interest; to frame this, researchers talk about the alignment problem.

But how can we ensure that an artificial intelligence, especially a very powerful one, acts in the interest of humans and does exactly what we want, and not what we don't want?

We can summarize the problem as follows:

  1. AIs rarely understand exactly what we want (instructions are ambiguous, etc.)
  2. When an AI becomes very capable, its errors become dangerous
  3. Human values are fuzzy, contradictory, and difficult to pin down: humans themselves do not agree on what is "good", "safe", or "acceptable"
  4. AIs learn from human data, which is imperfect (moral biases, social biases, etc.)
  5. An AI may hide its intentions or optimize in secret, what we call "hidden optimization": for example, it may cheat if asked to maximize a score (see the toy sketch below)

So even if an AI is not malicious in itself, it can trigger unwanted and dangerous actions.
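A toy illustration of point 5, entirely invented for this article: give an optimizer a proxy objective and it will happily exploit any loophole in it.

```python
# The intended goal is "write useful text", but the proxy reward we actually
# gave the optimizer only counts exclamation marks.
def proxy_reward(text: str) -> int:
    return text.count("!")

candidates = [
    "Here is a careful, useful explanation of how rockets work.",
    "!!!!!!!!!!!!!!!!!!!!!!!!",   # useless, but maximizes the proxy reward
]

# The "optimizer" (a simple max here) picks the useless string: it is not
# malicious, it has just optimized exactly what we asked for.
print(max(candidates, key=proxy_reward))
```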

An extremely intelligent system whose actions are unpredictable could reasonably be considered "dangerous".

Nevertheless, the companies developing AI models all want to appear very reassuring and claim to be working on alignment, but this work is clearly insufficient.

So?

So we are developing increasingly powerful, non-secured AIs, and no regulation can manage this properly.

The problem of self-replication and self-evolution

Long before the ChatGPT era and LLMs, it was considered that an AI could become dangerous if it could do two things: replicate itself, and modify its own code.

However, these two thresholds have now been crossed, and AIs can now analyze their own code and improve it.

Can we unplug an AI?

That's a recurring argument to defuse concerns: "Don't worry, an AI is electrical, we can unplug it".

This argument is doubly naive: first, an AI is not one machine with one plug, but software replicated across thousands of servers in data centers around the world; second, as we have just seen, a sufficiently capable AI can copy itself, so it could spread elsewhere before anyone reaches the plug.

AIs are currently black boxes

When you ask an AI a question, you do not know precisely why the AI responds the way it does: the number of operations performed internally is far too large. To be more precise, we understand very well how every element of the chain works (we know, for example, how a Transformer neural network operates), but we cannot explain the behavior of the whole.

To draw an analogy with the human brain: we understand quite well how an individual neuron works, but we cannot predict the functioning of the brain as a whole, because the interactions between neurons are far too numerous.

Toward an artificial consciousness

We saw above that, concretely, an AI performs mathematical operations, and can thus be considered a big calculating machine. Under these conditions, it is hard to see how an artificial consciousness could ever arise: calculating machines do not have a consciousness of their own.

It's funny, because we are essentially reliving the debate that took place in the 18th and 19th centuries between the vitalists and the mechanists. As a reminder, the vitalists believed that "life possesses a vital force that cannot be reduced to physico-chemical laws".

Conversely, the mechanists gradually began to explain biological phenomena (digestion, respiration, circulation of the blood) without a "vital force" but simply by physical and chemical phenomena.

We know the end of the story: the "vitalist" movement died with all the discoveries of the 19th century (cellular biology, organic chemistry, theory of evolution), and the mechanists were therefore right.

If we draw the parallel: those who assert that a machine could never be conscious are playing the role of the vitalists, while those who think consciousness could emerge from purely physical, computational processes are playing the role of the mechanists.

The truth is that no one can settle this question today. If we knew exactly how the human brain produces consciousness, we might be able to explain why an artificial neural network could never give rise to it. But we cannot, since we do not know how consciousness emerges.

As explained above, the AIs we build today are black boxes of extreme complexity, and Anthropic (the company behind the AI "Claude") has even recently published an interesting paper explaining that they suspect their AI of having introspection capabilities.

It is impossible today to formally rule out that an AI could one day produce an artificial consciousness; those who claim otherwise should be able to explain how consciousness emerges from the human brain.

The future with AI

This article raises questions about the future of AI, and more precisely about the future of humans alongside AI, but we should not forget the phenomenal number of benefits AI could bring to our civilization, in medicine, scientific research, education, and beyond.

We could multiply the examples. What matters is that we are not forced to fall into the cliché of dystopia: current advances are phenomenal and surprising, and it is up to all of us, collectively, to prepare for a very different future. If one thing is certain, it is that AI will radically change our societies and employment in the very near future. And if we cannot influence the course of things, let's at least educate ourselves and our younger generations, and try to make the most of everything positive in this new revolution.

