NoiseGPT app sparks fears of ‘deepfake chaos’

A new chatbot, similar to ChatGPT, can turn text into celebrity voices, creating “deepfakes” in the style of Morgan Freeman, Jordan Peterson, Donald Trump and many more.

NoiseGPT can even be trained by users to imitate their own voice, or that of their friends, relatives or colleagues.

Imagine getting a birthday voice message from your favorite US president, or hearing a voice from beyond the grave, with John Lennon or Elvis sharing personal details that only your close relatives would know.

This is the selling point of the latest chatbot application to be released following the much-hyped launch of the Microsoft-backed (MSFT) ChatGPT artificial intelligence content generator in November 2022.

NoiseGPT’s chief operating officer Frankie Peartree told Yahoo Finance UK: “We are training the AI to mimic about 25 celebrity voices at the moment, and will soon have more than 100 celebrity voices to offer.”

NoiseGPT was released on Telegram on Monday, allowing users to send friends messages voiced by famous celebrities.

Peartree said instructions on how to train the app to use your own voice will be available on the company’s website soon.

The app can be used on any smartphone that can download the Telegram messaging app, widening its potential for mass adoption.
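
NoiseGPT has not published how its bot works, but the user-facing flow fits the standard Telegram Bot API. Below is a minimal illustrative sketch of a text-to-speech bot of this kind: it long-polls `getUpdates` for incoming text and replies with a voice note via `sendVoice`. Only those two Telegram API methods are real; the bot token and the `synthesize_voice` stand-in are hypothetical.

```python
# Hypothetical sketch of a Telegram text-to-speech bot loop.
# getUpdates and sendVoice are real Telegram Bot API methods;
# BOT_TOKEN and synthesize_voice() are illustrative placeholders.
import requests

BOT_TOKEN = "123456:ABC-EXAMPLE"  # placeholder, not a real token
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def synthesize_voice(text: str, voice: str) -> bytes:
    """Stand-in for a TTS model returning OGG/Opus audio bytes."""
    raise NotImplementedError("plug in your own text-to-speech backend")

def poll_and_reply() -> None:
    offset = None
    while True:
        # Long-poll Telegram for new messages.
        updates = requests.get(
            f"{API}/getUpdates",
            params={"timeout": 30, "offset": offset},
            timeout=40,
        ).json()["result"]
        for update in updates:
            offset = update["update_id"] + 1
            message = update.get("message")
            if not message or "text" not in message:
                continue
            audio = synthesize_voice(message["text"], voice="not Donald Trump")
            # Telegram expects voice notes as OGG files encoded with Opus.
            requests.post(
                f"{API}/sendVoice",
                data={"chat_id": message["chat"]["id"]},
                files={"voice": ("reply.ogg", audio, "audio/ogg")},
            )

if __name__ == "__main__":
    poll_and_reply()
```

A production bot would typically register a webhook rather than polling, but the polling form keeps the sketch self-contained.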

The prospect of AI applications imitating your own voice, or that of anyone a voice sample can be obtained from, has raised concerns, such as children receiving messages that appear to come from their parents.

Creating a deepfake is not, in itself, technically illegal in any jurisdiction. However, the potential of deepfakes to create mistrust, suspicion, and manipulation is a concern.

NoiseGPT says it will try to guard against the personal and intellectual property rights violations that deepfake technology enables. When users select a celebrity voice for their lines, the choices are labelled “not Donald Trump” or “not Jennifer Lawrence”, to avoid infringement.

Is society on the verge of falling into deepfake chaos?

Peartree thinks it won’t all be bad. He told Yahoo Finance UK: “I think it’s a good thing, it’s going to create some chaos at the beginning, but eventually we’ll find a balance. That was also the concern when, say, Photoshop came out.”

He added that, given the legal implications, resistance to censorship was factored into the application’s design. The application is not stored on a centralized server, but uses blockchain-based decentralized storage.

“Legal issues are one of the reasons why we will decentralize quickly, both for the training and the API connection, so we cannot be censored,” he said.

The decentralized nature of the application means that the computational burden of running it will be shared by computers around the world, which will “run the models, training and API feed in people’s homes”. Running the program on a home computer will be rewarded with NoiseGPT cryptocurrency tokens.
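
NoiseGPT has not published its network protocol, so the following is purely a hypothetical sketch of what a home “worker node” in a decentralized inference network of this kind might look like: fetch a pending job from a coordinator, run the model locally, and submit the result against a wallet address that accrues token rewards. Every endpoint and field name here is invented for illustration.

```python
# Hypothetical sketch of a home worker node in a decentralized
# inference network: fetch a TTS job, run it locally, return the
# result, and accrue token rewards. The coordinator URL, API routes
# and reward scheme are all invented; NoiseGPT's protocol is not public.
import time
import requests

COORDINATOR = "https://example-network.invalid/api"  # placeholder endpoint

def run_tts_job(text: str, voice: str) -> bytes:
    """Stand-in for running the speech model on the local machine."""
    raise NotImplementedError

def worker_loop(wallet_address: str) -> None:
    while True:
        # Ask the network for a pending job (hypothetical API).
        resp = requests.get(
            f"{COORDINATOR}/next-job", params={"worker": wallet_address}
        )
        if resp.status_code == 204:  # no work available right now
            time.sleep(5)
            continue
        job = resp.json()
        audio = run_tts_job(job["text"], job["voice"])
        # Submit the result; the network would credit tokens to the wallet.
        requests.post(
            f"{COORDINATOR}/submit",
            params={"job_id": job["id"], "worker": wallet_address},
            data=audio,
        )
```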

Peartree said: “People who create popular new voices for the app will also be rewarded in the cryptocurrency.

“There is currently a 5% tax on any transaction with this cryptocurrency, but this will be removed in the future. All funds are used for development and operations; there were no team tokens, and the entire supply was publicly sold.”
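
As a worked example of the fee model, assuming the 5% is deducted from the amount transferred (the exact mechanics have not been specified):

```python
# Worked example of a flat 5% transfer tax, assuming the fee comes
# out of the sent amount (an assumption; the mechanics aren't public).
TAX_RATE = 0.05

def net_after_tax(amount: float) -> tuple[float, float]:
    """Return (fee, amount received) for a transfer of `amount` tokens."""
    fee = amount * TAX_RATE
    return fee, amount - fee

fee, received = net_after_tax(100.0)
print(f"Sending 100 tokens: {fee:.0f} token fee, recipient gets {received:.0f}")
# -> Sending 100 tokens: 5 token fee, recipient gets 95
```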

Legal and social implications of deepfake technology

Being able to manipulate the human voice challenges the veracity of the information we receive online and over the phone, and calls into question even the voice and video communications we receive through messaging apps.

This also has implications for relations between nation states, as the technology could be used to influence rivals and sway public opinion.

Policymakers are now working to mitigate the risks of deepfakes, but current UK law has yet to catch up.

Existing laws only cover the distribution of real images, particularly in cases such as revenge porn, where private and confidential explicit material is publicly shared by an ex-partner.

If an offender creates and shares deepfake material featuring the identity of their “target” in pornographic content, they can only be prosecuted if they directly harass the target by sending them the material, or if the violation involves copyright infringement.

The legal and wider societal implications of deepfake technology can extend to:

  • Infringement of intellectual property rights: deepfake technology can be used to impersonate someone who owns intellectual property, potentially violating their rights.

  • Violation of personal rights: deepfakes can be used to create exploitative or pornographic content, infringing on an individual’s privacy and personal rights.

  • Reputational damage: deepfakes can spread false information and damage an individual’s reputation, potentially leading to consequences in their personal and professional lives.

  • Compromise of data protection and privacy: deepfakes can threaten an individual’s privacy and data protection, making them vulnerable to identity theft and other forms of cybercrime.

  • Distortion of political agendas: deepfakes can be used to manipulate public opinion, especially at times of heightened political tension, such as elections.

  • Spreading disinformation: deepfakes can be used to spread false information and can lead to a general mistrust of news sources, individuals and institutions.

  • Liability issues: the use of deepfakes in marketing and other promotional materials can lead to liability issues if consumers are misinformed or misled.

  • Threat to national security: deepfakes can create geopolitical tension and pose a threat to national security if used to spread false information or manipulate public opinion.

Deepfakes are becoming highly realistic, and as the technology advances, online video and audio communications may become increasingly difficult to trust.
