Artificial intelligence… a vast subject!
As a startup founder in the #frenchtech and #nocode space focused on #workflow, I’ve always had my head down trying to turn my modest company into something… extraordinary?
The AirProcess product is among the top Cloud applications in the #nocode space – it even matches some products worth billions on the stock market – and it is probably the best (so much for modesty) in the category of decision-making #workflow tools that optimize company processes.
But being located in the middle of the Indian Ocean on a paradise island – Réunion – where the fundraising and venture capital ecosystem is almost non-existent, you have to put in ten times more energy to move forward. You can’t have it all: sun, coconut trees, and investors nearby!
To make a long story short, I am busy every day until late at night, and I have no spare time left to post about my life and opinions on social media.
Until today.
I decided to pick up the digital pen to discuss a topic that, I believe, is badly handled – and mistreated – by mass media: Artificial Intelligence.
Artificial intelligence deserves (much) more noise
I will try to address the topic in two parts:
1 – First I will give examples to illustrate how the AI field is quietly taking over the world.
2 – Then I will talk about the security, ethical and philosophical problems this raises.
Ambitious program… so I will force myself to be as concise and factual as possible, even though the subject matter seems endless to me.

Part I – Why the general public does not understand at all what is happening
a – The little parrot GPT-3.5 takes flight
Because yes, almost everyone has talked about it, but the debates I heard were (almost) all at a shockingly superficial, journalistic level: no research work, no vision, little information, or even misinformation.
Between the foolish journalists – I choose my words carefully – who had fun discrediting AI by showing that ChatGPT had poor results on the 2023 math baccalaureate, and our digital minister who called ChatGPT an “approximate parrot,” I am worried.
First, I am surprised that the person best placed in France to set a strategy for a country of 70 million people has so little vision and understanding of the subject.
Our minister’s remark is of an astonishing, distressing, worrying, sad level of naivety… or revolting too, because we would all like to see the most qualified people in their fields at the top of the French State.
I am not especially pro-USA – especially since they dominate the IT market in which I am struggling to stand out with AirProcess – but one has to recognize their foresight in taking AI much more seriously than we do. This week, the US Senate heard Sam Altman, CEO of OpenAI, for nearly 3 hours to evaluate measures to take regarding AI. For those who have the time, I strongly encourage you to listen to the entire hearing, which was of very high quality, both in the questions asked and in the answers given.
The hearing is available here:
https://www.youtube.com/embed/P_ACcQxJIsg?feature=oembed
Today, we are witnessing the greatest revolution of humanity, in a deafening media silence.
The world has instantly split into two categories:
- the technophiles and the curious, who understood what is happening, and who follow the news week by week with neurons on fire
- the others, whose only information is the sad gruel served up on TV and radio, where guests are invited who sometimes have no business being there.
The wake-up will be hard!
To begin, let’s set the context.
ChatGPT, our dear “approximate parrot”, broke all imaginable records:
- Netflix took 3 and a half years to reach its first million users.
- Airbnb, 2 and a half years.
- Twitter, 2 years.
- Facebook, 10 months.
- ChatGPT, 5 days.
The site launched by OpenAI received 150 million visits in the month of its release in November 2022.
In April 2023, just 5 months later, it received 1.8 billion visits per month.
When ChatGPT was released, Google triggered an internal alert (“Code red”), because for the first time in years, its business model was threatened.
Nice track record for a mere “approximate parrot,” dear Mr. Minister of Digital Affairs.
And I will stop here with the statistics because I could go on for a long time.
But above all, the subject is far from ending here.

b – GPT-3.5, GPT-4…
At first, we play with ChatGPT.
It’s fun, because it’s still a bit clumsy, which lets us laugh at its missteps.
Yes, of course, it has cognitive biases, so internet users have fun making it say somewhat racist things, or listing forbidden sites even though it is theoretically restricted by OpenAI. Associations are already outraged… and OpenAI is doing everything to “wokify” its AI, in vain…
In short, we play, we laugh a lot, and then we move on, because toys are fun for five minutes.
In March 2023, GPT-4 was released.
The model has already gone from 175 billion parameters to a reported trillion or so.
GPT-4 is multimodal: it no longer understands only text but can also analyze images and sounds, and experiments with video are already under way.
While some continue to play, the savvier ones have already understood the interest of AI and launched businesses to exploit this new Eldorado.
But above all, researchers from Microsoft’s research lab (the main shareholder of OpenAI) began testing the capabilities of the new algorithm and published a 155-page paper titled:
“Sparks of Artificial General Intelligence: Early experiments with GPT-4”
For those interested, the full study is available at this address:
https://arxiv.org/pdf/2303.12712.pdf
For those who do not know, Artificial General Intelligence (referred to as AGI) is the Holy Grail of AI researchers, because it is, by definition, versatile (like “natural” intelligence) instead of being specialized like classical AIs, which are very often “expert systems” (that is, trained in a specific field, and unsuited for everything else).
Let’s be very clear: at this stage, GPT-4 is not yet qualified as AGI, and it is still far from it.
However, the “sparks of general intelligence” it displayed should, in my view, constitute a very powerful alarm signal about the digital tools we – humans – are building for our civilization.
We are doing nothing less than creating a new form of intelligence that competes with us...
…if "competition" is even the right word! Because unlike so-called "natural" intelligence, the speed at which AIs evolve is not constrained by the biological cycle of human beings.

c – Moore’s Law has already been beaten
In computing, we often talk about “Moore’s Law.” It is originally an empirical observation made in 1965 by Gordon Moore, co-founder of Intel. It posits that the number of transistors on an electronic chip approximately doubles every two years, which leads to a significant increase in computing power and computer performance.
Since power doubles roughly every two years, this is an exponential law: a 1967 computer is roughly twice as powerful as a 1965 computer. The one from 1969 is therefore 4 times more powerful than that of 1965, the one from 1971 eight times more powerful… then 16, then 32, then 64, then 128, etc.
It turns out that this approximation by Gordon Moore has held true until today.
There were 58 years between 1965 and 2023, or 29 two-year cycles. Our 2023 computers are therefore approximately 2 to the power of 29 times more powerful than those of 1965.
That is about 540 million times more powerful.
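The compounding above is easy to verify in a few lines. This is just a back-of-the-envelope check of the article's own arithmetic, using the two-year doubling assumption:

```python
# Back-of-the-envelope check of Moore's Law compounding:
# one doubling every two years between 1965 and 2023.
years = 2023 - 1965          # 58 years
cycles = years // 2          # 29 two-year doubling cycles
factor = 2 ** cycles         # total growth factor

print(cycles)                # 29
print(factor)                # 536870912, i.e. about 540 million
```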
All this to say what?
Until recently, some researchers thought AI power was simply correlated with the computing power of computers. The correlation is real, but it turns out not to be directly proportional.
To begin with, through innovation and the discovery of new architectures, the power of supercomputers has roughly doubled every 14 months in recent decades.
Likewise for AIs: the discovery of new architectures has made their evolution leap in a non-linear way.
The GPT-type architectures (for Generative Pre-trained Transformer) that are at the origin of the “ChatGPT” revolution all come from a single scientific paper from 2017, entitled “Attention Is All You Need”, available here:
https://arxiv.org/pdf/1706.03762.pdf
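For the curious, the central idea of that 2017 paper – scaled dot-product attention, softmax(QKᵀ/√d)·V – fits in a few lines. The pure-Python toy below is a sketch of the formula only, not of a real transformer:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of vectors (lists of floats) of dimension d."""
    d = len(K[0])
    out = []
    for q in Q:
        # similarity of this query with every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # output = weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

The whole "GPT" family stacks many layers of exactly this operation (plus learned projections); the formula itself is that small.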
The AIs available in a year may be 10 times more capable than those of today. Or 1,000 times. Or 100,000 times?
No one can seriously make a reliable prediction about the emergent behaviors of AI at this stage of advancement, and moreover, Sam Altman confirms this transparently in his hearing.
This exponential and potentially uncontrollable acceleration is precisely what worried the physicist Stephen Hawking.

d – The acceleration in recent months is phenomenal
On page 40 of the Microsoft Research study, GPT-4 not only solves a very high-level (master's-level) math problem; above all, it does so with a creativity that, for now, is not explained by those who designed the system. And GPT-4 demonstrates this creativity several times in the study, which I again invite you to read in full.
By early 2023, we had therefore already reached a stage where the creators of a system are reduced to hypothesizing about what could have generated such powerful, creative, and unexpected results.
At the end of March 2023, just 4 months after the release of GPT-3.5, a developer under the pseudonym "Significant Gravitas" reasoned that, since ChatGPT can list the tasks needed to accomplish an objective, it would be interesting to hand that task list to several other GPT instances to carry out.
By adding a collective memory of the work done and the results produced, this isolated developer managed to make AIs collaborate effectively to achieve much more complex goals.
He soberly named his system “AutoGPT”.
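The mechanism just described – one model decomposing a goal into tasks, other instances executing them, with a shared memory of results – can be sketched in a handful of lines. Everything here is hypothetical illustration: `plan` and `execute` stand in for real LLM calls, which this sketch replaces with canned strings.

```python
def plan(goal):
    """Stand-in for an LLM call that decomposes a goal into tasks.
    A real agent would prompt a model here; we return a canned list."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task, memory):
    """Stand-in for a worker LLM instance executing one task.
    It can consult the shared memory of previous results."""
    return f"done: {task} (context: {len(memory)} prior results)"

def run_agent(goal):
    """Minimal AutoGPT-style loop: plan, then execute each task,
    accumulating every result in a shared memory."""
    memory = []
    for task in plan(goal):
        result = execute(task, memory)
        memory.append(result)
    return memory

for line in run_agent("a market study"):
    print(line)
```

The real AutoGPT adds re-planning, tool use, and persistent storage on top of this skeleton, but the core idea is this loop.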
The source code was posted on GitHub (a website where developers share their source code) and handed to the community "as is", with a disclaimer:
As an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
Result: on April 2, his source code took off and gathered 132,000 stars on GitHub in a month and a half.
Again, for the uninitiated, it should be known that very large open-source projects take years to collect stars.
WordPress, the most widespread open-source CMS in the world, has 17,500 stars. Strapi, the trendiest Headless CMS, took nearly 6 years to collect its 53,800 stars.
So something notable is happening here.
A movement of "autonomous agents" similar to AutoGPT was thus launched, and in barely 15 days there were already nearly a dozen, created by developers around the planet.
More than 100,000 developers are currently working with versions of AutoGPT or variants of this AI installed on their machines.
Meanwhile, major players like Google, Apple, and Facebook continue to work on their own LLM-based AIs (Large Language Model) to stay in the race: this week, at its annual conference, Google presented Bard, an AI even more impressive than ChatGPT:
https://www.youtube.com/embed/cNfINi5CNbY?feature=oembed
e – Powerful AIs are within everyone’s reach
Also in mid-March 2023, Facebook’s AI engine – named LLaMA – leaked!
A series of YouTube tutorials then sprang up on how to install this AI on… a simple desktop PC. You read that right.
Contrary to popular belief, and to the great surprise of many experts, running an AI as powerful as ChatGPT or LLaMA does not require an enormous amount of power.
What requires a lot of computing power is training: generating the billions of parameters (the model's weights) that the AI uses.
Once those weights are generated, they are just static data that the algorithm reads at high speed.
For example, the file containing the 7 billion parameters that feed LLaMA weighs only 64GB. That's just a large USB stick.
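As a rough sanity check on such figures, the on-disk size of a model is approximately parameter count × bytes per parameter; the exact total depends on the numeric precision and file format used to store the weights (the bytes-per-parameter values below are standard format sizes, not figures from the leak itself):

```python
# Approximate on-disk size of a 7-billion-parameter model,
# depending on how each weight is stored.
params = 7_000_000_000

for name, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: about {gb:.0f} GB")
```

Whatever the precision, the result is a few tens of gigabytes at most: well within the reach of an ordinary desktop disk.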
To be clear: today, in May 2023, just 6 months after the appearance of ChatGPT, we are witnessing a new development almost every week, and an AI as powerful as LLaMA fits in your pocket. Embedded AIs are bound to explode in the short term.
What could possibly go wrong?

Part II – Considerations on security, ethics, and philosophy
Given these elements, it’s time to get to the heart of the reflection:
A – the first reflection concerns the security of humanity and the ethics around AI.
B – the second reflection is philosophical.
a – Ethics and security
For these first two points – security and ethics – we, Humanity, have already lost.
For ethics, first.
An open letter signed by more than 27,000 people – including renowned scientists in the field of AI – called for a moratorium on development, to give time to lay down the rules of the game.
This letter, which I invite you to read here (https://futureoflife.org/open-letter/pause-giant-ai-experiments/), rightly reminds us that AI concerns the future of all humanity and should therefore not be delegated to a few "unelected tech leaders." Analysis firms already estimate that AI will destroy more than 100 million jobs worldwide in the very short term – all on OpenAI's unilateral decision to release ChatGPT into the wild in November 2022.
Of course, it is far too late for a moratorium: the bomb was dropped by OpenAI, and Microsoft has already appropriated the technology by integrating it into Bing – and soon all its products. Google has already replied with Bard, and the story is only beginning.
Billions of dollars are at stake in a race that has been launched de facto.
None of the big tech players will give up this race, at the risk of being relegated to the background by the competition. So?
So ethics will wait.
For security, next.
Unlike nuclear or biological weapons, there is no need for significant financial means or a state-of-the-art laboratory to work with artificial intelligence.
Any developer can now work on the most cutting-edge AIs, without restriction, and regardless of motivation.
No government will now be able to stop this movement, nor regulate it by legislation.
It needed to be done well before ChatGPT was thrown onto the market and created the spark.
Again, many had sounded the alarm on the subject, and not the least intelligent (and I still think of Stephen Hawking).
Just in the field of cybersecurity, it must be understood that ChatGPT can very easily assist novice hackers by considerably lowering the skill barrier to entering cybercrime.
In the field of deepfakes, Sam Altman himself warned that we will see fakes on steroids: fakes that are more and more numerous and harder and harder to identify, to the point that, according to him, this could be an issue for the upcoming American elections.
AIs are now so comfortable with natural language that it will be virtually impossible to detect bots on social networks, and one can easily program AIs to steer and manipulate opinion.
Yet, although all this may seem important, the main issue is in my opinion not here.

b – Philosophy
In artificial intelligence, there are two schools of thought that are radically opposed, exactly like in the days of vitalists and physicalists.
The first school believes that AIs will always be weak, the second that AIs could one day become strong.
I will try to popularize these two terms as much as possible for non-specialists.
A so-called “weak” AI is a simple computer simulation.
In plain terms, even if this AI is extremely powerful and gives you the full illusion that you are talking to a living being endowed with emotions, this is only simulated: it is only simple computer calculations, with “outputs” (the AI’s behavior) that respond to “inputs” (the stimuli).
Weak AI therefore only processes information, without putting any meaning into it, nor any emotion, nor having awareness of what it does. It’s just a very beautiful calculating machine.
Conversely, so-called “strong” AI will behave exactly like weak AI in appearance, except that the processing of these billions of pieces of information will give rise to a consciousness, in the same way that the processing of billions of pieces of information in a human brain gives rise to consciousness.
To date, the debate remains philosophical because even if an AI reached that stage, we would have no way to know or test whether it “simulates” a consciousness, or whether it has truly become conscious.
These two schools of thought each have arguments in favor of their hypothesis (weak AI or strong AI).
Facebook CEO Mark Zuckerberg, for example, is on the side of weak AI: his discourse has therefore always been very reassuring, insisting that AIs will always be controllable entities at the service of humanity. Beautiful, useful machines.
Conversely, Stephen Hawking described AI as a “Civilization destroyer”. No need to translate, I think.
His hypothesis was that the same causes produce the same effects: since no one today can clearly explain how the human neural network produces consciousness, no one can guarantee with certainty that an artificial neural network will not produce an artificial consciousness.
In any case, the train is running at high speed, and humanity will have to adapt.
Sam Altman, via his Worldcoin project, already proposes to scan our irises in exchange for a financial reward in cryptocurrency (article here) in order to create an absolute digital-identity platform on an AI foundation. Science-fiction readers and film buffs will appreciate the nod to 1984 and Minority Report. We could also have fun imagining what the military could do with AIs embedded in Boston Dynamics robots. There, it is more on the Terminator side that one would have to look for inspiration.
This fictional world is approaching at high speed, and for the moment the media are not covering it in proportion to what is happening.
Conclusion
Despite all this, I do not intend to write an alarmist article.
This article is intended to inform those who do not make the effort to follow this revolution deemed "too technical." This revolution – because it is indeed a revolution, and not a simple evolution – deserves all our attention as citizens if we do not want to merely submit to the uses of AI that large multinationals – or malicious governments – have planned for us.
At all times, technological revolutions have overturned the way humans live.
Fire… electricity… nuclear power… genetics… the Internet… and now, AI.
I am confident that humanity will know how to adapt, but this time, we will have to adapt faster than usual. As a fan of science fiction, I have always dreamed that AI could one day reach this level of power, but it seemed to me possible only on a very distant horizon.
I have always been passionate about AI: I chose and attended a school in this field, and I use it daily as an early adopter. Not only does AI save me a huge amount of time in my work, it also pushes me out of my comfort zone, because I know it can help me at every stage of my progress. Today, I fully consider it my second brain.
I have already integrated AI into AirProcess, and I will continue to look for uses that allow my clients to save time.
If I had to attempt a somewhat simplistic metaphor, it would be this: a human digging with his hands digs slowly. If you give him a shovel, he will dig 100 times faster. But if you teach him to operate an excavator, he will dig 10,000 times faster.
One must consider that artificial intelligence is the excavator of intelligence. It creates a colossal leverage effect for anyone who wants to take hold of it and use it.
AI can also be a powerful ally of humanity in solving complex problems, such as improving technologies to limit CO2 emissions, much faster production of new drugs, accelerated research work on cancer or orphan diseases, the development of new alternatives to fossil fuels, and, why not, the discovery of new solutions to improve wealth sharing?
In short, AI is a very promising technological gem in every domain where gray matter is at work. We only need to use it well… and, if possible, take it seriously instead of calling it an approximate parrot 🙂
To conclude all this, it is important to understand that AI is not a topic treated in proportion to its importance: even if AI has existed for a long time, recent advances constitute a real revolution that will transform our societies, destroy jobs and create others, increase risks in some areas (and decrease them in others), transform art in almost all its forms, accelerate research in almost every field, … and the list could go on.
If it is too late to stop or even slow down the movement, it is important to keep informed and prepare for the changes. As for the birth of an artificial consciousness? We’ll see!
Sincerely.
PS: except for the photo of Gordon Moore, all the images in this article were generated by an artificial intelligence.