K-pop has recently evolved into more than a subculture. There are now deepfake K-pop girl groups, virtual solo singers, and groups whose members have avatar counterparts in another world. One all-virtual K-pop girl trio was created entirely with deepfake technology.
Eternity is an AI girl group and the first virtual K-pop idol act produced by an AI graphics studio. There are also hybrid girl groups such as Aespa, whose real members are paired with avatar counterparts known as “ae.”
K-pop Deepfake: Who Sings?
Some may assume that a deepfake K-pop girl group sings using Vocaloid; in fact, real people sing behind them. After listening to their songs, many realize there are genuine singers behind the avatars, even though the group itself is virtual. Eternity recruited its vocals through an audition called “BE my voice”: anyone interested in being Eternity’s voice could audition on a voice-training matching platform.
The winner of this audition was chosen to sing on the group’s third single album. Like prominent VTubers, whose personalities come from the people behind them, virtual idol groups show only a virtual face while real individuals sing, dance, and perform every movement.
K-pop Deepfake: Who Speaks?
Does the fact that they cannot sing on their own imply that they cannot speak? The answer may differ between groups. For example, in an SM Entertainment interview between the real Karina and ae-Karina, her virtual counterpart, the avatar’s voice does not sound as if a real person is speaking behind her.
However, a company that develops virtual humans has created a virtual YouTuber called Rui, and there is a real person behind her, since she produces a wide range of content such as mukbang (eating shows), vlogs, and cover songs. Because Rui can sing and dance, the person behind her is assumed to be a singer. She has posted cover-dance and cover-song videos to attract K-pop fans.
Deepfake Is the Future of Content Production
In 2021, millions of South Korean TV viewers tuned in to the MBN channel to catch up on the latest headlines. Regular news anchor Kim Joo-Ha began going over the day’s news at the top of the hour. It was a fairly typical lineup for late 2020, complete with Covid-19 and pandemic-response updates.
However, this broadcast was unusual: Kim Joo-Ha was not actually on screen. She had been replaced with a “deepfake” version of herself, a computer-generated duplicate that attempts to replicate her voice, movements, and facial expressions flawlessly.
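Deepfakes of this kind are commonly built with an autoencoder approach: one shared encoder compresses any face into features, and a separate decoder per person reconstructs that person’s appearance; swapping decoders at inference time transfers the identity. The toy Python sketch below illustrates only that swap idea; the “faces,” functions, and arithmetic are all invented for illustration, not a real deepfake system.

```python
# Toy sketch of the classic deepfake autoencoder idea:
# a shared encoder maps any face to features, and each
# person gets their own decoder. Decoding another person's
# features with *your* decoder produces the face swap.
# All values and transforms here are invented for illustration.

def shared_encoder(face):
    """Compress a 'face' (list of pixel values) into normalized features."""
    return [p / 255.0 for p in face]

def make_decoder(identity_offset):
    """Build a decoder that reconstructs faces in one person's 'style'.

    The identity_offset stands in for everything a trained decoder
    would have learned about one specific person.
    """
    def decoder(features):
        return [round(f * 255.0) + identity_offset for f in features]
    return decoder

decoder_anchor = make_decoder(identity_offset=0)   # "trained" on the anchor
decoder_double = make_decoder(identity_offset=10)  # "trained" on the double

source_face = [120, 64, 200]            # a frame of the source performer
features = shared_encoder(source_face)

reconstructed = decoder_anchor(features)  # normal reconstruction
swapped = decoder_double(features)        # deepfake: other decoder applied
```

In a real system the encoder and decoders are deep neural networks trained on many video frames of each person, but the structural trick is the same as in this sketch: encode the performer, decode with the target’s decoder.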
Viewers had been notified in advance that this would happen, and South Korean media reported a mixed reaction afterward: some were amazed at how accurate it looked, while others worried that the real Kim Joo-Ha might lose her job. MBN said it would continue to use deepfakes for breaking news, while Moneybrain, the South Korean company behind the AI technology, said it would now seek additional media buyers in China and the United States.
Despite the negative connotations of the colloquial name “deepfakes,” the technology is seeing growing commercial use. AI-generated video, also known as synthetic media, is becoming more common in journalism, entertainment, and education as the technology grows more sophisticated.
Synthesia, a London-based startup that generates AI-powered training videos for clients such as global advertising giant WPP and business consultancy Accenture, was an early commercial adopter. “It’s the future of content production,” says Victor Riparbelli, CEO and co-founder of Synthesia. To create an AI-generated video with Synthesia’s technology, you choose an avatar from a list, type in the words you want it to speak, and that’s pretty much it.
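That “pick an avatar, type a script” workflow maps naturally onto a simple web request. The sketch below shows what such a request *could* look like; the endpoint, field names, and helper function are hypothetical placeholders invented for illustration, not Synthesia’s actual API.

```python
import json

# Hypothetical sketch of a text-to-avatar-video request.
# The endpoint and all field names below are invented for
# illustration; they are NOT a real service's API.

API_URL = "https://api.example.com/v1/videos"  # placeholder endpoint

def build_video_request(avatar_id: str, script: str, language: str = "en") -> str:
    """Assemble a JSON body: choose an avatar, provide the words to speak."""
    payload = {
        "avatar": avatar_id,      # which on-screen presenter to render
        "script": script,         # the text the avatar will speak
        "language": language,
        "output_format": "mp4",
    }
    return json.dumps(payload)

body = build_video_request("anchor_female_01", "Here are today's top stories.")
# In a real integration, this body would be POSTed to the service's
# endpoint with an authentication token, and the service would return
# a rendered video of the chosen avatar speaking the script.
```

The point of the sketch is how little the user supplies: an avatar choice and a script, with the rendering of voice, lip sync, and movement handled entirely server-side.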
Overall, businesses have worked out how to employ the technology where it is needed. Yet even when sophisticated vocal technology, such as rap-style voice synthesizers, is available, it is not enough to captivate audiences the way the aesthetics of K-pop girl groups do.
People anticipate that as the technology advances, producers will no longer need a person to drive the avatars’ dancing through motion capture. Singing is expected to be one of the next areas where the technology can be applied.