The need for legislation focused on specific societal issues…
For several years, AI has been the source of numerous technological fantasies in the collective imagination.
However, while the uses most covered in the media, such as those referenced above, are critical areas in need of regulation to prevent abuse, they are far from representative of what AI is today, especially in Europe.
Therefore, it would not be advisable to create catch-all legislation for every possible application of this technology.
So far, however, the standard European approach has often been to respond to the emergence of disruptive technologies by implementing a legislative framework meant to help these new practices succeed.
This is where the challenge lies: truly allowing them to thrive rather than stifling them.
Legislation covering AI as a whole, by limiting innovation and research in the sector, could be extremely damaging and unduly favor countries outside the European community.
"We are only truly afraid of what we don't understand," said Maupassant.
And indeed this is the crux of the problem: how can our representatives properly legislate on a subject as recent, vast and complex as AI?
The most common response would be to delegate certain monitoring tasks to the private sector, which is best placed to "certify" ethical AI and reject the more dangerous uses.
Beyond the fact that these certification companies, in order to be fair, would have to apply distinct, objective criteria to every application area, what would this mean for industrial property rights?
The few European gems that have developed proprietary AI, and that embarked on this path precisely because they enjoyed significant freedom to develop, would then be forced to grant a third party access to the core of their business, on which their competitive advantage depends.
That alone would be enough to hinder the development of a promising ecosystem, or risk seeing it driven out to other parts of the world.
…for Europe to have a strong presence in tomorrow’s technology scene
So even though "strong AI" does not yet exist and probably will not for a number of years, it would surely be preferable for the moment to let the European ecosystem grow and develop, gaining market share and consolidating its leading role, rather than legislating now as a precaution.
Let's draw a parallel with a topic that is more relevant than ever: genetic modification.
Some possible advancements in this sector are of serious concern, while others are sources of fantastic innovation.
However, during the development of vaccines for the current health crisis, French laboratories partly opted, citing this same precautionary approach, not to work on messenger RNA technology, and have therefore fallen behind other countries.
Regulating AI too strictly today, thereby weakening scientific development in Europe, would once again leave the field open to the other major powers, the United States and China.
Worse still, it could allow the arrival of AI-based tools programmed according to values we do not share, even though consensus on the ethical values of AI is precisely the main objective of the legislation currently under discussion.
Once these foreign tools have been widely adopted, no regulation will be able to change anything. Another comparison is instructive here: the GDPR. Conceived as Europe's weapon against the Big Tech Four, it has only reinforced their power by slowing the development of healthy, local competition: competition that no longer has access to certain data and, unlike the internet giants, lacks the means to circumvent the legislation.
Therefore, rather than aiming to legislate quickly and broadly, our public authorities would be wise to engage with the AI ecosystem through other means: financing, training initiatives, supporting research… It is now up to them to be innovative in their approach, so that Europe can take a leading role on the international stage.