The Future of Synthetic Media: Exploring AI Deepfakes

Cover: Deepfaked Mark Zuckerberg. Source: https://www.flickr.com/

“If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said …, seeing will no longer be believing.” (Source: http://www.brookings.edu)

Maurício Pinheiro

1. Introduction

AI 2041: Ten Visions for Our Future, published in 2021 by Kai-Fu Lee, former president of Google China, and Chen Qiufan, a celebrated novelist, provides a fascinating exploration of how artificial intelligence (AI) will shape our world over the next twenty years. The book has received wide acclaim and was named a Wall Street Journal, Washington Post, and Financial Times Best Book of the Year. As Yann LeCun, Turing Award winner and chief AI scientist at Facebook, notes, “This inspired collaboration between a pioneering technologist and a visionary writer of science fiction offers bold and urgent insights.” Mark Cuban describes the book as “Amazingly entertaining . . . Lee and Chen take us on an immersive trip through the future. . . . Eye-opening.”

The book delves into the ways AI will revolutionize various aspects of daily human life. It predicts that AI will create brand-new forms of communication and entertainment and ultimately challenge the organizing principles of our economic and social order by liberating us from routine work. However, the book also emphasizes that AI will bring new risks, such as autonomous weapons and smart technology that inherits human bias. In ten gripping short stories, Lee and Chen introduce readers to a range of eye-opening 2041 settings. Through these stories, AI 2041 offers urgent insights into our collective future and reminds readers that, ultimately, humanity remains the author of its destiny.

In the following discussion, I will delve into the theme of the book's second story, “Gods Behind the Mask,” which centers on deepfakes, without revealing any spoilers.

2. A Brief History of Deepfakes

With the emergence of artificial intelligence (AI) technology, a new phenomenon called “deepfakes” has become increasingly prevalent in recent years. Deepfakes involve the creation of manipulated images, videos, or audio using AI algorithms, often utilizing face-swapping or voice synthesis techniques to produce content that appears hyper-realistic and difficult to distinguish from genuine footage.

The use of manipulated media has a long history, dating back to the early days of photography. However, the emergence of artificial intelligence has enabled the creation of hyper-realistic media with relative ease.

Source: https://help.evolphin.com/wp-content/uploads/2014/09/Photoshop-screenshot.jpg

The first AI deepfake was created in 2017 by a Reddit user who used open-source deep learning tools to swap the faces of celebrities onto pornographic content. Since then, the technology has advanced rapidly, and deepfakes have become increasingly sophisticated and difficult to detect. You can try face-swapping yourself online at sites such as https://faceswap.webit.ai/.

Machine learning algorithms, particularly deep learning neural networks, can analyze vast amounts of data to mimic the appearance and behavior of real people and objects, making it possible to create deepfakes that are often indistinguishable from the real thing. Unfortunately, this technology has already been used maliciously. For example, in 2018, Jordan Peele created a deepfake video for BuzzFeed in which former President Barack Obama appears to call President Trump a “dipsh*t.” The video went viral and raised concerns about the use of deepfakes to spread misinformation.

The porn industry has been embroiled in numerous scandals involving the unauthorized use of deepfakes to superimpose celebrities’ faces onto the bodies of pornographic performers. At the same time, various online tools for creating one’s own deepfake pornography are available, and numerous websites cater to this demand. However, if you reside in California, bear in mind that in 2019 Governor Gavin Newsom signed two deepfake bills into law, regulating their use in the state.

Such incidents have highlighted the need for better detection methods and regulations to prevent the spread of malicious deepfakes. While researchers, policymakers, and technology companies are working to address this issue, deepfakes are likely to remain a challenge for the foreseeable future.

The creation of deepfakes involves using advanced machine learning algorithms to manipulate existing images or videos, replacing the original content with new, synthetic material. Techniques such as face-swapping or voice synthesis are often used to create hyper-realistic content that can deceive viewers into thinking that they are watching something genuine.

The implications of deepfakes are vast, and they have raised significant concerns about the potential misuse of this technology. For instance, deepfakes can be used to spread disinformation, defame individuals, and even influence political campaigns. As such, there is an urgent need to develop technologies that can detect and mitigate the spread of deepfakes.

In this context, this article aims to provide an overview of deepfakes, including their history, techniques used to create them, and the potential impact they may have on society. It will also explore some of the challenges associated with detecting and mitigating deepfakes and discuss possible solutions to address this growing problem.

3. How AI Deepfakes Work

A. Overview of deep learning

Deep learning is a subset of machine learning that uses artificial neural networks to process large amounts of data. These neural networks can be trained to perform complex tasks, such as image and speech recognition, natural language processing, and even creating realistic images, videos, and audio. For example, an AI algorithm can be trained to recognize objects in images, such as dogs or cats, by feeding it a large dataset of labeled images. The algorithm can then use that knowledge to identify dogs and cats in new images it has never seen before.
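The train-on-labeled-data, classify-unseen-inputs loop described above can be sketched at toy scale. The snippet below trains a single-layer (logistic) classifier on labeled 2-D points as a minimal stand-in for the dog/cat image example; the dataset, seed, and learning rate are arbitrary illustrative choices, not from any real system, and a real image classifier would use a deep convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled "dataset": class 0 clustered near (-2, -2), class 1 near (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Single-layer network (logistic regression) trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid prediction
    grad = p - y                           # error signal (cross-entropy gradient)
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

# Classify points the model has never seen: it generalizes beyond training data.
new_points = np.array([[-3.0, -1.5], [2.5, 1.8]])
preds = (1 / (1 + np.exp(-(new_points @ w + b))) > 0.5).astype(int)
print(preds)  # expected: [0 1]
```

The same three ingredients (labeled data, a parameterized model, and gradient-based training) scale up to the deep networks behind deepfakes.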

B. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are a type of deep learning algorithm used to create AI deepfakes. GANs work by pitting two neural networks against each other: one that generates content, and another that tries to distinguish between real and fake content. This back-and-forth process continues until the generated content is indistinguishable from the real content. For example, a GAN can be trained to generate images of human faces by first feeding it a dataset of real human faces. The GAN can then generate new faces by producing variations on the faces it has learned from the dataset.
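The adversarial back-and-forth can be shown in miniature. In the hedged sketch below, the "real" data are samples from a 1-D Gaussian rather than face images: the generator is just two scalars (a shift and a scale applied to noise), the discriminator is a logistic classifier, and the gradient updates are written out by hand. All hyperparameters are arbitrary assumptions for illustration; real deepfake GANs use deep convolutional networks on both sides.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1 / (1 + np.exp(-t))

m, a = 0.0, 1.0        # generator parameters: fake = m + a * noise
w, b = 0.0, 0.0        # discriminator parameters: D(x) = sigmoid(w*x + b)
lr, n = 0.05, 64

for step in range(2000):
    real = rng.normal(4, 0.5, n)           # "real" data: N(4, 0.5)
    z = rng.normal(0, 1, n)
    fake = m + a * z

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    pr, pf = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - pr) * real - pf * fake).mean()
    b += lr * ((1 - pr) - pf).mean()

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    pf = sigmoid(w * fake + b)
    g = (1 - pf) * w                       # d log D(fake) / d fake
    m += lr * g.mean()
    a += lr * (g * z).mean()

# The generator's mean drifts from 0 toward the data mean (~4) as the
# two networks push against each other.
print(round(m, 2))
```

The same dynamic, with images in place of scalars, is what lets a GAN trained on real faces produce convincing synthetic ones.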

Source: https://www.frontiersin.org/

An example of how GANs are used to create AI deepfakes can be seen in the creation of realistic human faces. One neural network generates synthetic faces while the other tries to determine whether the face is real or fake. Through continuous training, the network generating the faces improves its ability to create realistic-looking images, while the other network improves its ability to distinguish between real and fake faces. The result is a set of images that are incredibly convincing and can easily pass as real, even though they are entirely generated by the AI algorithm.

C. Face-swapping techniques

One popular technique for creating AI deepfakes is face-swapping, which involves replacing the face of one person in an existing video or image with the face of another person. This is done by training a GAN on a dataset of images of both individuals and then using that algorithm to generate a new image with one person’s face swapped for the other. For example, a GAN can be trained to swap the faces of two actors in a movie scene, making it appear as if one actor is playing the role of the other.
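Real face-swapping pipelines first detect and align faces and then regenerate them with a trained autoencoder or GAN, which is far beyond a short snippet. The sketch below illustrates only the final compositing step: blending a synthesized face region back into a target frame with a feathered alpha mask. Synthetic arrays stand in for real frames, and the function name and parameters are invented for illustration.

```python
import numpy as np

def blend_region(target, source, top, left, feather=4):
    """Blend `source` into `target` at (top, left) with feathered edges."""
    h, w = source.shape[:2]
    # Alpha mask that fades from 0 at the patch border to 1 inside,
    # hiding the seam where the swapped face meets the original frame.
    ys = np.minimum(np.arange(h), np.arange(h)[::-1])
    xs = np.minimum(np.arange(w), np.arange(w)[::-1])
    alpha = np.clip(np.minimum.outer(ys, xs) / feather, 0, 1)[..., None]
    out = target.astype(float).copy()
    patch = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * source + (1 - alpha) * patch
    return out.astype(target.dtype)

# Synthetic stand-ins for a video frame and a generated face region.
frame_a = np.full((64, 64, 3), 30, dtype=np.uint8)    # dark background frame
face_b = np.full((16, 16, 3), 200, dtype=np.uint8)    # bright "face" patch
swapped = blend_region(frame_a, face_b, top=24, left=24)
```

In a real pipeline this blending runs per frame, after the GAN has generated the replacement face for each one.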

Recently, a deepfake video featuring Tom Cruise went viral on social media platforms. The video showed a person who appeared to be Tom Cruise performing some stunts and magic tricks, but the video was actually a deepfake created by a visual effects artist. The artist used a GAN to train an AI model on thousands of images of Tom Cruise and then applied that model to create a video of someone else performing like him. The video highlighted the potential dangers of deepfake technology and the need for better regulations to prevent its misuse.

D. Audio and video synthesis

AI deepfakes can also be used to synthesize audio and video content. For example, an AI algorithm can be trained to generate a video of a person speaking even if no such video exists. The algorithm can use existing footage of the person to generate realistic lip movements, while a separate AI algorithm synthesizes the person’s voice to match the movements. Another example is using AI to create a realistic video of a person dancing by training the algorithm on a dataset of videos of people dancing and then synthesizing a new video of a person performing a dance routine that doesn’t exist in reality.

In recent news, there have been reports of individuals utilizing AI technology to clone people’s voices in order to scam money by providing false bank account information. This fraudulent activity involves the use of deep learning algorithms, which are trained to mimic the voice of a specific person by analyzing and learning from large datasets of their speech patterns. With this capability, scammers can make it seem like the phone call is coming from a trusted source, such as a family member or a bank representative, and provide false account information in an attempt to deceive the recipient into transferring money. This sophisticated form of voice cloning poses a significant risk for individuals, and it is crucial to be vigilant and cautious when receiving unsolicited phone calls.

Artificial intelligence ‘clones’ voice in false kidnapping scam

Overall, AI deepfakes work by using deep learning algorithms to create realistic media that is difficult to distinguish from authentic media. As the technology continues to improve, it is becoming increasingly challenging to detect deepfakes, making it essential to develop effective methods for identifying and preventing their spread.

4. Risks and Challenges

While AI deepfakes offer many exciting possibilities, they also present several risks and challenges that must be considered.

A. Spread of misinformation

The potential for the spread of misinformation is one of the most significant risks associated with AI deepfakes. The advancement of technology has made it increasingly easy to create realistic-looking videos and images that are hard to distinguish from the real ones. This creates a significant risk of deepfakes being used to spread false information, which can lead to disastrous consequences. For example, a deepfake video of a politician could be created to make it appear that they made a controversial statement or committed an illegal act, which could result in widespread outrage or damage to their reputation. Furthermore, these deepfakes can be shared on social media platforms, where they can easily go viral, potentially influencing public opinion and creating chaos. The spread of misinformation can be extremely harmful and can have severe consequences, such as social unrest, distrust in institutions, and even violence. Therefore, it is crucial to develop effective methods to detect and prevent the spread of deepfakes to minimize the risks associated with the spread of false information.

B. Privacy violations

Privacy violations are a significant concern when it comes to AI deepfakes. As the technology advances, it is becoming easier to create deepfakes that look and sound like real people, which poses a serious threat to individuals’ privacy. For instance, deepfakes could be used to create fake videos or images of individuals engaging in inappropriate behavior, such as sexual acts or violence. These videos could then be shared online or used for extortion or blackmail, potentially causing significant harm to the victim’s reputation and mental health. Furthermore, deepfakes could be created without an individual’s consent or knowledge, violating their privacy rights. For example, a deepfake could be created using footage of a person taken in a public place or from their social media accounts, without their permission or knowledge. This could result in the individual being portrayed in a false or negative light, leading to reputational damage or other negative consequences.

C. Legal and ethical implications

The legal and ethical concerns surrounding AI deepfakes are complex and multifaceted. In addition to violating privacy laws, the use of deepfakes in political campaigns or advertising raises questions about their legality and ethics. For example, a deepfake video of a political candidate could be created and shared online, potentially misleading voters and altering the outcome of an election. Similarly, the use of deepfakes in advertising could be seen as deceptive and unethical, as companies may use the technology to create false claims about their products or services.

Furthermore, the use of AI deepfakes in certain contexts may also be illegal. For instance, using deepfakes to defame or harass an individual can be considered as cyberbullying, and in many countries, it is a punishable offense. Moreover, deepfakes can be used for financial crimes such as identity theft, fraud, or blackmail, making them an even more significant threat to society.

Therefore, it is crucial to develop legal and ethical guidelines for the creation and use of AI deepfakes to prevent their harmful impact on society. These guidelines could include requiring the consent of individuals featured in deepfakes, regulating their use in political campaigns, advertising, or news media, and establishing penalties for those who create and distribute malicious deepfakes.

D. Detection and prevention

Detecting and preventing the spread of AI deepfakes is a significant challenge that requires the development of effective methods. As the technology improves, it is becoming increasingly difficult to distinguish between real and fake media. This makes it essential to develop new techniques that can identify deepfakes before they are widely distributed.

One potential solution is to use blockchain technology to register all authorial photos or videos before they are published. By doing this, it would be possible to create an immutable record of the original content that could be used to verify the authenticity of the media later. This could help prevent deepfakes from being distributed as authentic content, as anyone could check the blockchain to ensure that the media they are viewing has not been tampered with.

Graphic of data fields in Bitcoin block chain. By Matthäus Wander, 22 June 2013. Source: Wikimedia Commons.
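The registration idea can be sketched as a minimal local hash chain. In the sketch below, each published photo or video is hashed and committed to a block that also commits to the previous block, so tampering with either the media or the record is detectable. A real deployment would anchor these records to an actual blockchain network; the block layout, field names, and helper functions here are invented for illustration.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Genesis block anchors the chain.
chain = [{"index": 0, "prev": "0" * 64, "media_hash": None}]

def register(media: bytes) -> dict:
    """Append a block committing to the media's hash and the prior block."""
    prev_block = chain[-1]
    block = {
        "index": prev_block["index"] + 1,
        "prev": sha256(json.dumps(prev_block, sort_keys=True).encode()),
        "media_hash": sha256(media),
    }
    chain.append(block)
    return block

def verify(media: bytes, block: dict) -> bool:
    """Check the media against its registered hash and re-check all links."""
    if sha256(media) != block["media_hash"]:
        return False
    for later, earlier in zip(chain[1:], chain):
        if later["prev"] != sha256(json.dumps(earlier, sort_keys=True).encode()):
            return False
    return True

original = b"...raw bytes of the original video..."   # placeholder media
blk = register(original)
print(verify(original, blk))            # True: media matches its record
print(verify(b"deepfaked bytes", blk))  # False: hash mismatch exposes tampering
```

Because each block's hash depends on the previous one, rewriting an old registration would invalidate every later link, which is what makes the record effectively immutable.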

Other techniques for detecting and preventing deepfakes include using machine learning algorithms to identify patterns that are consistent with deepfakes, as well as developing new watermarking and encryption technologies. While these approaches are still in their infancy, they hold great promise for helping to prevent the spread of misinformation and protect individuals’ privacy in the future.
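As a concrete, hedged example of the watermarking idea, the sketch below hides a key-derived bit pattern in the least significant bit of each pixel. Regenerating or lossily re-encoding the image, as a deepfake pipeline would, destroys the pattern and flags the media. Production watermarks are far more robust than this; the scheme, key, and thresholds are minimal illustrative assumptions.

```python
import numpy as np

def embed(image: np.ndarray, key: int = 42) -> np.ndarray:
    """Overwrite each pixel's lowest bit with a key-derived pattern."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, image.shape, dtype=np.uint8)
    return (image & 0xFE) | bits

def check(image: np.ndarray, key: int = 42) -> float:
    """Return the fraction of watermark bits still intact."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, image.shape, dtype=np.uint8)
    return float(((image & 1) == bits).mean())

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(img)
print(check(marked))        # 1.0: untouched watermarked image verifies fully
tampered = marked // 2 * 2  # crude stand-in for lossy re-encoding
print(check(tampered))      # ~0.5: watermark destroyed, media flagged
```

An intact watermark scores near 1.0, while any regeneration of the pixels drops the score to chance, so a simple threshold separates original media from resynthesized copies.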

Overall, AI deepfakes present several significant risks and challenges that must be considered as the technology continues to advance. It is essential to develop effective methods for detecting and preventing their spread while also considering the legal and ethical implications of their use.

5. Current and Future Applications

The development of AI deepfakes has opened up many possibilities for their use in a variety of applications. Here are some examples:

A. Entertainment and Media

AI deepfakes can be used in the entertainment and media industry to create more immersive and engaging experiences. For instance, they can be used to create realistic special effects, such as digitally adding or removing actors or changing the appearance of a set. This can help save time and money during production, as well as create new possibilities for storytelling.

One example of this is the use of deepfake-style technology in the Star Wars film “Rogue One.” The producers digitally recreated the late actor Peter Cushing, who died in 1994 and played the character of Grand Moff Tarkin in the original 1977 “Star Wars” film.

B. Political Manipulation

AI deepfakes can also be used for political manipulation, which is one of the most concerning applications of this technology. For instance, deepfakes can be used to create videos of political candidates saying or doing something they never actually did. This could potentially sway public opinion and influence election outcomes. Disputed videos alleged to be deepfakes have already surfaced in Malaysian politics.

C. Advertising and Marketing

AI deepfakes can be used in advertising and marketing to create more engaging and realistic campaigns. For example, a deepfake video could be created to show a product in use or demonstrate its capabilities. This can help companies stand out from their competitors and create a more memorable experience for consumers.

One example we already mentioned is the deepfake video created for BuzzFeed in which comedian and director Jordan Peele impersonates former President Barack Obama.

D. Social Media and User-Generated Content

AI deepfakes can also be used in social media and user-generated content. For example, a deepfake video could be created to show a person doing something they never actually did, such as performing a stunt or dancing. This can generate more engagement and interest in the content.

However, this type of application can also be problematic. For example, deepfake pornographic videos have become a major issue, with people’s faces being swapped onto the bodies of pornographic actors without their consent.

Overall, AI deepfakes have a wide range of potential applications, both now and in the future. While there are significant risks and challenges associated with their use, it is important to continue exploring the possibilities of this technology while also developing effective methods for detecting and preventing their spread. One potential solution, mentioned earlier, is to register original, untampered photos and videos on a blockchain before they are published, which can help prevent the spread of deepfakes.

6. Conclusion

AI deepfakes have great potential to revolutionize many fields, but their use presents significant risks and challenges. This paper explores their workings, potential applications, risks, and challenges, including the crucial role of deep learning and Generative Adversarial Networks (GANs). Future research should focus on developing more accurate detection methods and exploring positive applications. Responsible use is crucial, including ethical creation and use with consent, detecting and preventing deception, and developing effective safeguards and guidelines. By developing responsible guidelines and continuing to explore their potential, we can harness the power of AI deepfakes while mitigating their risks and preventing harm to individuals and society.

#DeepFakes #AI #ArtificialIntelligence #MachineLearning #DeepLearning #AIin2041 #GenerativeAdversarialNetworks #GAN #AIDeepfakes #SyntheticMedia #DigitalManipulation #PoliticalManipulation #EntertainmentIndustry #DetectionMethods #FaceSwap

References:

https://pt.wikipedia.org/wiki/Deepfake

https://www.creativebloq.com/features/deepfake-examples

https://sundayguardianlive.com/opinion/deepfakes-destroy-democracy

https://abcnews.go.com/Technology/wireStory/deepfake-porn-growing-problem-amid-ai-race-98618485

https://www.businessinsider.com/california-deepfake-laws-politics-porn-free-speech-privacy-experts-2019-10

https://www.msn.com/en-us/news/politics/deepfake-targeted-law-proposed-by-ny-da/ar-AA1a3yLY

https://news.yahoo.com/man-created-deepfake-porn-former-162219573.html

https://beebom.com/best-deepfake-apps-websites/

https://www.youtube.com/channel/UCKpH0CKltc73e4wh0_pgL3g



Copyright 2024 AI-Talks.org
