
“The issue of deepfakes is a two-sided problem,” say Anuragini Shirish, Professor of Management Information Systems at Institut Mines-Telecom Business School, and her co-author Shobana Komal, Intellectual Property Rights Attorney. In their article entitled A socio-legal inquiry on Deepfakes, the researchers examine the ways in which states can legislate against the harmful uses of deepfakes, both for ordinary people and for political figures and celebrities.
Thursday October 10, South Korea: the country’s Council of Ministers approves a new bill. It aims to toughen penalties for digital sex crimes using deepfakes, in response to the wave of non-consensual pornographic content produced with this technology. Yet South Korea remains one of the world leaders in AI, thanks in particular to Samsung. According to Anuragini Shirish’s article, the country takes a more systemic approach to the problem: artificial intelligence is part of its social organization, and the context in which AI is used must be taken into account.
The researchers, however, put forward another approach. This is a perfect opportunity to look at the issue of deepfakes, the importance they have gained in recent years, and the legislation surrounding them elsewhere in the world.
World tour of legislation
First of all, we have to start with the special case of the European Union. It is the only jurisdiction to treat AI as a subject of legislation in its own right, taking a cautious view of the technology’s merits and introducing safeguards to prevent abuse as far as possible. Through its Code of Practice on Disinformation, it obliges platforms and social networks to remove content deemed problematic (misinformation, defamation, malicious deepfakes, etc.) on pain of a fine of up to 6% of the company’s global turnover. These platforms must be transparent, accountable, and respectful of human rights when identifying harmful deepfakes.
Anglo-Saxon countries such as the US and the UK take a more moderate approach. Less aggressive than the EU, they focus on understanding the technology and its uses. The US may not have succeeded in passing any real legislation on deepfakes, but it has invested heavily in the means of identifying such content and in campaigns to raise public awareness of the threats posed by malicious deepfakes. The UK, for its part, has relied on pre-existing regulations to combat misinformation and fraud. It has recognized pornographic deepfakes as a new criminal offense, but does not yet regulate other AI-generated content created without the subject’s consent. It is looking to European guidelines to educate its population about the potential abuses of artificial intelligence.
China, for its part, has put in place fairly extensive regulations to avoid the risk of misinformation and to ensure the country’s cybersecurity. The Cyberspace Administration of China has issued a series of measures requiring platforms, especially those that influence public opinion, to counter the spread of false information. In addition, before releasing their software, developers must specify the concept, purpose, and primary functions of the system in question, all of which must be traceable so that the authorities can easily identify the source of any content. Some academics, however, criticize the lack of transparency regarding the content being targeted and see this as a potential large-scale censorship enterprise.
A social study of deepfakes
Deepfakes are not, however, necessarily malicious. Beyond advancing research into the uses of AI, they open new perspectives for many sectors of society, such as education and culture. Deepfakes also offer a new way of anonymizing people, whether on social networks or for journalists protecting sources in filmed interviews. On the other hand, they present a number of pitfalls, facilitating fraud, scams, misinformation, and sex crimes. The technology’s accessibility to so many people makes it highly contentious.
This is why studies assessing the psychological, financial, and even political impact of deepfakes are multiplying. In January 2024, an audio recording imitating Joe Biden was sent to 5,000 people, telling them not to vote in the American primaries. It was, of course, a deepfake, orchestrated by a rival presidential campaign to pick up a few votes. A table in the article summarizes the potential consequences of poorly designed AI legislation.
It has to be said that today’s laws are struggling to keep up with technological developments. The most worrying part of the problem is deep learning: the ability of AI systems to learn from their mistakes is improving generation techniques so fast that most identification systems could quickly become obsolete. What’s more, the Davos Global Risks Report tells us that disinformation is the biggest short-term risk facing our societies over the next two years, making constant attention from governments essential.
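To make that adversarial dynamic concrete, here is a minimal sketch, in PyTorch, of the generator-versus-detector training loop behind many deepfake systems (a generative adversarial network). This is an illustration under our own assumptions, not code from the article: every name, layer size, and hyperparameter below is invented for the example.

```python
# Minimal GAN-style sketch (illustrative only): a generator ("forger")
# learns directly from a detector's feedback, so every improvement in
# detection also trains a better forgery.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes, not real media dimensions

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
detector = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for genuine media samples

for step in range(1000):
    # 1) Train the detector to tell real from fake.
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(detector(real_batch), torch.ones(32, 1)) + \
             loss_fn(detector(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the detector: it literally learns
    #    from the mistakes the detector catches, which is why each
    #    generation of detection tools risks obsolescence.
    fake = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The point for regulators sits in step 2: any detector deployed widely enough can itself become the training signal for the next generation of forgeries.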