NEW YORK (BLOOMBERG) – In their version of the metaverse, the creators of start-up Sensorium envision a fun environment where your likeness can take a virtual tour of an abandoned undersea world or watch a live-streamed concert with French DJ Jean-Michel Jarre.
But at a demonstration of this virtual world at a Lisbon technology conference earlier this year, things got weird. While attendees chatted with the virtual personas populating the demo, some were introduced to a bald-headed bot named David who, when simply asked what he thought of vaccines, began spewing health misinformation. Vaccines, he claimed in one demo, are sometimes more dangerous than the diseases they try to prevent.
After their creation’s embarrassing display, David’s developers at Sensorium said they plan to add filters to limit what he can say about sensitive topics. But the moment showed how easily people might encounter offensive or misleading content in the metaverse, and how hard it will be to control it.
Technology firms including Apple, Microsoft and Facebook parent Meta Platforms are racing to build out the metaverse, an immersive digital world that evangelists say will eventually replace some in-person interactions.
The technology is in its infancy, but industry watchers warn that the nightmarish content moderation challenges already plaguing social media could be even worse in these worlds powered by virtual reality (VR) and augmented reality (AR).
Tech companies’ mostly dismal track record in policing offensive content has come under renewed scrutiny in recent months following the release of a cache of thousands of Meta’s internal documents to United States regulators by former Facebook product manager Frances Haugen. The documents, which were provided to Congress and obtained by news organisations in redacted form, revealed new details about how Meta’s algorithms spread harmful material such as conspiracy theories, hateful language and violence, and led to critical stories by The Wall Street Journal and a consortium of news organisations.
The reports prompted questions about how Meta and others intend to patrol the burgeoning virtual world for offensive behaviour and misleading material.
“Despite the name change, Meta still allows purveyors of dangerous misinformation to thrive on its existing apps,” said Mr Alex Cadier, managing director of NewsGuard in Britain. “If the company hasn’t been able to effectively tackle misinformation on more simple platforms like Facebook and Instagram, it seems unlikely it will be able to do so in the much more complex metaverse.”
Meta executives have not ignored the criticism. Even as they build up hype about the metaverse, they have pledged to take the privacy and well-being of users into account as they develop the platform.
The firm argues that these next-generation virtual worlds will not be owned exclusively by Meta, but will come from a collection of engineers, creators and tech companies whose environments and products work together.
Those innovators, and regulators worldwide, can start now to debate policies that would maintain the safety of the metaverse before the underlying tech has been fully developed, executives say.
“In the past, the speed at which new technologies arrived sometimes left policymakers and regulators playing catch-up,” said Mr Nick Clegg, vice-president of global affairs at Meta, at the firm’s annual Connect conference in October. “It doesn’t have to be the case this time around because we have years before the metaverse we envision is fully realised.”
Meta also says it plans to work with human rights groups and government experts to responsibly develop the virtual world, and it is investing US$50 million (S$68 million) to that end.
Sci-fi becomes real
To its evangelists, VR and AR will unlock the ability to experience the world in ways that previously existed only in the dreams of sci-fi novelists. Firms will be able to hold meetings in digital boardrooms, where employees in disparate locations can feel as if they are together in one place. Friends will choose their own avatars and teleport together into concerts, exercise classes and 3D video games.
But digital watchdogs say the same qualities that make the metaverse a tantalising innovation may also open the door even wider to harmful content. The realistic feeling of VR experiences could be a dangerous weapon in the hands of bad actors seeking to stoke hate, violence and terrorism.
“The Facebook Papers showed that the platform can function almost like a turn-key system for extremist recruiters and the metaverse would make it even easier to perpetrate that violence,” said Ms Karen Kornbluh, director of the German Marshall Fund’s Digital Innovation and Democracy Initiative and former US ambassador to the Organisation for Economic Cooperation and Development.
The far-reaching metaverse is still theoretical, but existing VR and gaming platforms offer a window into the kind of problematic content that could flourish there. The Facebook Papers revealed that the firm already has evidence that offensive content is likely to make the jump from social to virtual.
In one example, a Facebook employee describes experiencing a brush with racism while playing the VR game Rec Room on an Oculus Quest headset. After entering one of the most popular virtual worlds in the game, the staffer was greeted with “continuous chants of ‘N***** N***** N*****'”.
According to the documents, the employee wrote in an internal discussion forum that he or she tried to figure out who was yelling and how to report them, but could not. Rec Room said it provides several controls to identify speakers even when that person is not visible, and in this case it banned the offending user’s account.
Bad VR behaviour
The abuse has already reached other VR products. People on the VRChat platform, where users explore worlds dressed as different avatars, describe an almost transformative experience where they have built a virtual community unparalleled in the real world. On a Reddit thread about VRChat, they also describe huge amounts of racism, homophobia and transphobia. It is not uncommon for players to repeat the N-word. Some virtual worlds get raided by Hitler and KKK avatars.
VRChat wrote in 2018 that it was working to address the “percentage of users that choose to engage in disrespectful or harmful behaviour” with a moderation team that “monitors VRChat constantly”. But, years later, players are still reporting harmful users. Others try muting or blocking problematic users’ voices or avatars, but the frequency of abuse can be overwhelming.
People also describe racism on popular video games like Second Life and Fortnite; some women have described being sexually harassed or assaulted on VR platforms; and parents have raised concerns that their children were being groomed on the seemingly innocuous Roblox game for kids.
Social media firms like Meta, Twitter and Google’s YouTube have detailed policies that prohibit users from spreading offensive or dangerous content. To moderate their networks, most lean on artificial intelligence (AI) systems to scan for images, text and videos that look like they could violate rules against hate speech or inciting violence. Sometimes those systems automatically remove the offensive posts. Other times, the platforms apply special labels to the content or limit its visibility.
The degree to which the metaverse remains a safe space will depend partially on how companies train their AI systems to moderate the platforms, said Professor Andrea-Emilio Rizzoli, director of Switzerland’s Dalle Molle Institute for Artificial Intelligence. AI can be trained to detect and take down hate speech and misinformation, but the same systems can also inadvertently amplify it.
The level of problematic content in the metaverse will depend on whether tech firms design digital environments to function like small invitation-only private groups or open public squares.
Ms Haugen, who is openly critical of Facebook’s metaverse plans, recently told European lawmakers that hate speech and misinformation in virtual worlds might not travel as far or as quickly as they do on social media, as most people would be interacting in small numbers.
But it is just as likely that Meta would integrate its current networks, including Facebook, Instagram and WhatsApp, into the metaverse, said Dr Brent Mittelstadt, a data ethics research fellow at the Oxford Internet Institute.
“If they keep the same tools that have contributed to the spread of misinformation on their current platforms, it’s hard to say the metaverse is going to help,” said Dr Mittelstadt, who is also a member of the Data Ethics Group at the Alan Turing Institute.
Since a great deal of the misinformation and hate speech could arise in private metaverse interactions, Prof Rizzoli added, platforms will face the same debates over free speech and censorship when deciding whether to take down harmful content. Do platforms want to have virtual beings approach people and tell them their conversation is not fact-based, or prevent them from having the conversation at all?
“This is a debatable issue,” Prof Rizzoli said of the type of control users would be subjected to in this new metaverse.
Defining and determining authenticity in the metaverse could also become more complicated. Tech companies could face questions about how much freedom people should have to portray themselves as a member of a different race or gender, said Associate Professor Erick Ramirez of Santa Clara University. Deepfakes – videos or audio that use artificial intelligence to make someone appear to do or say something they did not – could become more realistic and interactive in a metaverse world.
“There’s more room for deception,” said Prof Ramirez, who recently participated in a roundtable discussion with Mr Clegg about the policy implications of the metaverse. That kind of deceit “takes advantage of a lot of in-built psychology about how we interact with people and how we identify people”.
Virtual privacy
The metaverse could also compromise user privacy, advocates and researchers said. For instance, people who wear the AR glasses being developed by Snap and Meta could end up recording details about other people around them without their knowledge or consent. Users in virtual worlds could also face digital harassment or stalking.
“In the physical world, often you have to do some extra work in order to track somebody, for example, but the online world makes it much easier,” said Mr Neil Chilson, a senior research fellow for technology and innovation at the right-leaning Charles Koch Institute.
Mr Bill Stillwell, Meta’s product manager for VR privacy and integrity, said developers have tools to moderate the experiences they create on Oculus, but the tools can always improve. “We want everyone to feel like they’re in control of their VR experience and to feel safe on our platform.”
Even metaverse supporters such as Mr Chilson and Mr Jarre, the French DJ who will soon hold VR concerts, say regulators globally will have to draft new rules on privacy, content moderation and other issues to make these digital spaces safe. That might be a tall order for governments that have struggled for years to pass regulations to govern social media.
Mr Jonathan Victor, a product manager at open-source developer Protocol Labs, sees a potential bright side. In his vision of the metaverse, anyone will be able to own a digital 3D version of themselves, exchange cryptocurrency or make a career selling virtual goods they created. “There’s incredible upside,” Mr Victor said. “The question is, what’s the right way to build it?”