By Alessandro Diviggiano
BEIJING (Reuters) – Olga Loiek, a University of Pennsylvania student, was looking for an audience on the internet – just not like this.
Shortly after launching a YouTube channel in November last year, Loiek, a 21-year-old from Ukraine, found her image had been taken and spun through artificial intelligence to create alter egos on Chinese social media platforms.
Her digital doppelgangers – like “Natasha” – claimed to be Russian women fluent in Chinese who wanted to thank China for its support of Russia and make a little money on the side selling products such as Russian candies.
What’s more, the fake accounts had hundreds of thousands of followers in China, far more than Loiek herself.
“This is literally like my face speaking Mandarin and, in the background, I’m seeing the Kremlin and Moscow, and I’m talking about how great Russia and China are,” Loiek told Reuters. “That was really creepy, because these are things I would never say in life.”
Loiek’s case is one of a growing number in which what appear at first glance to be Russian women on Chinese social media profess their love of China in fluent Chinese and say they seek to support Russia’s war effort by selling imports from their homeland.
None of them exist, however. They are AI-generated by misappropriating clips of real women found online, often without their knowledge, and the videos the fake avatars create are used to pitch products to single Chinese men, experts say.
The accounts created with Loiek’s image have hundreds of thousands of followers and have sold tens of thousands of dollars’ worth of products, including candies. Some of the posts carry a disclaimer saying they may have been created using AI.
Avatars like Loiek’s leverage the Russia-China “no limits” partnership, declared between the two countries in 2022, when Russian President Vladimir Putin visited Beijing just days before Russia invaded Ukraine.
Jim Chai, chief executive of XMOV, a company that develops advanced AI technology and is not involved in Loiek’s case, said the technology used to create such images is “very common because many people use it in China.”
“For example, to produce my own 2D digital human, I just need to shoot a 30-minute video of myself, and then after finishing that, I re-work the video. Of course, it looks very real, and of course, if you change the language, the only thing you have to adjust is the lip-sync,” said Chai.
Artificial intelligence is a hotly debated topic, and Loiek’s story sheds light on the risks of potentially illegal or unethical applications as powerful tools for creating and disseminating content become commonplace across the world.
Concerns about AI contributing to misinformation, fake news and copyright infringement have intensified in recent months amid the growing popularity of generative AI systems such as ChatGPT.
In January, China issued draft guidelines for standardising the AI industry, proposing to form more than 50 national and industry-wide standards by 2026.
The European Union’s AI Act, which imposes strict transparency obligations on high-risk AI systems, entered into force this month, setting a potential global benchmark.
Still, Xin Dai, associate professor at the Peking University Law School, said regulation is scrambling to catch up with the pace of AI development.
“We can only predict that increasingly powerful tools for creating information, creating content and disseminating content will become available basically every next minute,” said Dai.
“I think that the critical thing here is the volume is simply too large … not only in China, but also in the general internet all over the place,” he added.
(Editing by Antoni Slodkowski, Kevin Krolicki and Miral Fahmy)