The ethical challenges of Artificial Intelligence


Between algorithmic bias, facial recognition and manipulation

The rapid advent of artificial intelligence in our daily lives is generating considerable enthusiasm, promising significant advances in a variety of fields. However, behind this technological revolution lie crucial ethical challenges that deserve sustained attention. From the distortion of social representations by algorithmic biases, to concerns about discriminatory facial recognition, to the rise of deepfakes and their potential to manipulate information, this article will look at the social risks associated with the growing use of artificial intelligence. 

Algorithmic biases in image generators 

Recent breakthroughs in artificial intelligence (AI), particularly image generators, have generated widespread enthusiasm. However, behind the neutral facade of these tools lie algorithmic biases likely to exert a significant influence on our social perception.

A revealing example of this problem is the campaign by Heetch, a French ride-hailing (VTC) company, entitled "Greetings from la banlieue". The campaign denounces the clichés about the suburbs propagated by Midjourney's AI: when prompted with the term "banlieue", the image generation model produces representations that distort reality and are often loaded with negative stereotypes. This points to a wider issue: algorithmic biases in AI can shape how we perceive social groups and thereby reinforce prejudice.

Faced with these challenges, it is imperative to adopt a more critical approach to the development and use of artificial intelligence. Companies such as Heetch are leading the way in highlighting these issues and seeking creative solutions.

Facial recognition as a source of discrimination 

The emergence of artificial intelligence has been accompanied by the growing use of facial recognition, and together these technologies can create problems of discrimination. Joy Buolamwini, a Ghanaian-American researcher, has revealed worrying shortcomings in some commercial software: she has shown that several algorithms have markedly higher error rates on the faces of women and people with darker skin.

The root of these problems most likely lies in training the algorithms on insufficiently diverse databases. If the training data does not adequately reflect human diversity, and in particular under-represents women with darker skin, the resulting model performs poorly on exactly those faces. This finding highlights the ethical concerns surrounding the increasing use of artificial intelligence in society.
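To make the problem concrete, here is a minimal sketch of a disaggregated error audit in the spirit of Buolamwini's work, assuming pandas is available. The predictions and group labels are invented for illustration; a real audit would use a benchmark balanced across gender and skin tone.

    import pandas as pd

    # Hypothetical output of a gender-classification model, with a group
    # label attached to each test image (all values are invented).
    results = pd.DataFrame({
        "group":     ["lighter_male", "lighter_male", "lighter_female", "lighter_female",
                      "darker_male", "darker_male", "darker_female", "darker_female"],
        "true":      ["male", "male", "female", "female", "male", "male", "female", "female"],
        "predicted": ["male", "male", "female", "female", "male", "male", "male", "female"],
    })

    results["error"] = results["true"] != results["predicted"]

    # Error rate per demographic group: large gaps between groups are the
    # signature of training data that under-represents some of them.
    print(results.groupby("group")["error"].mean())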

In 2017, the French Data Protection Authority (CNIL) had already warned programmers about the risk of artificial intelligence reflecting, or even amplifying, existing discrimination in our society.

Algorithmic bias and employment risks 

Researchers currently point to the risk of discrimination as one of the main vulnerabilities of artificial intelligence: AI algorithms tend to entrench racist or sexist stereotypes. This was strikingly illustrated by Amazon's former recruitment algorithm. Analysts discovered that the program, built around an automated scoring system, penalized applications that mentioned the word "women's" (for instance "women's chess club captain"). Such discriminatory biases originate mainly in the datasets on which these systems are trained, in this case databases of CVs from previous, predominantly male, applicants.
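The mechanism is easy to reproduce on a toy scale. The sketch below, assuming scikit-learn is available, trains a scoring model on a handful of invented CV snippets whose historical "hired" labels skew male; it illustrates where the bias comes from and is in no way a reconstruction of Amazon's system.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented CV snippets with historical outcomes that reflect a
    # male-dominated hiring record.
    cvs = [
        "captain of the chess club, software engineering internship",
        "software engineering internship, hackathon winner",
        "captain of the women's chess club, software engineering internship",
        "women's coding society lead, hackathon winner",
    ]
    hired = [1, 1, 0, 0]

    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(cvs), hired)

    # The model learns a negative weight for the token "women": it has
    # absorbed the historical bias, not anything about competence.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print(weights["women"])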

Two categories of bias can be distinguished in AI: algorithmic bias and societal bias. In the first case, AIs are trained on biased data, including data produced by AI itself; when a model feeds on its own output, its biases tend to be amplified. In the second case, societal biases stem from prejudices and stereotypes anchored in the collective unconscious, which makes them difficult to detect and therefore to correct.
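The self-reinforcing nature of the first category can be shown with a toy simulation: a "model" that slightly exaggerates the majority group present in its training data, retrained each round on a mix dominated by its own output. The numbers below are arbitrary and only meant to illustrate the drift.

    REAL_SHARE = 0.60        # share of the majority group in the original data
    share = REAL_SHARE

    for generation in range(10):
        # The model over-represents the majority a little more than its data does.
        model_output_share = min(1.0, share * 1.05)
        # Next round's training set: 20% real data, 80% synthetic content.
        share = 0.2 * REAL_SHARE + 0.8 * model_output_share
        print(f"generation {generation}: majority share = {share:.3f}")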

Deepfakes and information manipulation

As the internet giants tighten their grip on our digital and real lives, advances in artificial intelligence exacerbate this domination, often in the service of political ideologies or lobbying. Deepfakes are produced with sophisticated algorithms such as Generative Adversarial Networks (GANs), which make it possible to manipulate videos and images and even to generate extremely realistic false content.
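For readers curious about what "adversarial" means here, the following minimal sketch, assuming PyTorch is available, pits a generator against a discriminator on a toy one-dimensional distribution. Real deepfake models follow the same logic with convolutional networks and image data at a vastly larger scale.

    import torch
    import torch.nn as nn

    latent_dim = 8

    # Generator: turns random noise into a fake sample.
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: estimates the probability that a sample is real.
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0          # toy "real" data
        fake = G(torch.randn(64, latent_dim))

        # Discriminator step: learn to tell real from fake.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: learn to fool the discriminator.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    # After training, generated samples cluster around the real mean (~3.0).
    print(G(torch.randn(1000, latent_dim)).mean().item())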

Faced with these growing dangers, the major technology companies are themselves turning to AI to detect and counter deepfakes. The CNIL, for its part, has stated its intention to create a legislative and regulatory framework for facial recognition.

In conclusion, the emergence of artificial intelligence raises crucial questions about the preservation of truth, privacy and the fight against discrimination of all kinds. As technologies continue to advance, it becomes imperative to develop robust regulations and control mechanisms to minimize the risks inherent in these tools. The future of AI will depend on our ability to balance potential benefits with protection against abuse, ensuring a safer and more ethical digital future for all.


An article by Soukayna KOUYDER