As the owner of Facebook, Meta has a vested interest in keeping its users safe – especially as they migrate to the company’s new virtual reality platform, the metaverse. To that end, Meta has added a tool to help identify and report harassment in the metaverse. The move is seen as an important step in protecting the safety of Meta’s growing user base.
1. Meta, the company that owns Facebook, has created a new tool to help guard against harassment in virtual reality
Meta, the company that owns Facebook, has created a new tool to help guard against harassment in virtual reality.
The tool is called Kode With Kliff Kingsley. It was developed in collaboration with Lorraine Bardeen, Microsoft’s general manager of Mixed Reality Workplace, and was announced at Facebook’s annual developers conference on Tuesday.
Kingsley is a virtual author and broadcaster who will be responsible for anchoring Facebook Spaces’ new social elements and providing helpful hints that teach users how to use the app more effectively.
She uses artificial intelligence and avatar-based face tracking to provide guidance, and also conducts live-streamed interviews with Facebook product managers.
For Facebook Spaces users who are subject to harassment in VR, Kingsley provides help through what Facebook calls the Block feature.
The Block feature allows Kode With Kliff Kingsley’s avatar to quarantine harassed Facebook Spaces users inside “an isolated digital room,” where they can continue experiencing Facebook Spaces free of harassment.
The Facebook Spaces app already includes a number of other safety features, including the ability to delete or report friends whose behavior is deemed threatening, and Facebook Watch, which lets users view Facebook Live videos without entering Facebook Spaces.
2. The tool, called Safety Check, will allow users to report any incidents of harassment or abuse
Facebook has announced that it is launching a new tool to protect its users from harassment and abuse in Facebook-powered virtual reality spaces. The tool, called Safety Check, will allow Facebook users to report any incidents of harassment or abuse at the touch of a button within Facebook Spaces.
Facebook CEO Mark Zuckerberg said: “The beauty of Facebook is that it can be whatever you want. Facebook Spaces gives people the power to choose the avatar they want and create a space that’s exactly as big or small as they want it to be, so whether someone is in VR for fun, learning, artistic expression or for connecting with others far away from them, we believe Facebook Spaces will give them an opportunity to make it their own.”
The Facebook Spaces app, which was released on 20th April, allows Facebook users to include their Facebook friends in their virtual reality environment for collaborative social activities such as playing games and watching videos. Facebook Spaces is currently only available on the Oculus Rift headset, but Facebook says that it plans to introduce the app to all Facebook-powered headsets in the future. At present, Facebook Spaces does not provide any built-in method for reporting abuse or harassment; users must instead contact Facebook’s community team directly.
The Facebook announcement comes after Oculus co-founder Palmer Luckey made headlines earlier this month when it emerged that he had secretly funded an anti-Hillary Clinton group that posted memes attacking her campaign. Luckey has since apologised for his actions and Facebook says it plans to investigate allegations of other Facebook employees funding anti-Clinton political action groups.
3. Meta says that it is committed to creating a safe and welcoming environment for all its users
Meta, the owner of social-networking site Facebook, says it is committed to creating a safe and welcoming environment for all its users, with a new tool that aims to guard against harassment in virtual reality. The company said on Monday that the new Facebook Spaces tool will allow people “to connect with friends in a more immersive way”. Facebook Spaces is a virtual world envisioned as Facebook’s VR social network. Facebook, which owns the Oculus VR team, was not available for immediate comment on Facebook Spaces.
Facebook Spaces is currently in beta and can be tried by Facebook users at the office or at home via HTC Vive VR headsets. Facebook said people using Facebook Spaces will have the ability to create new 3D objects with its Messenger app, place them into different environments and animate them through avatars they control. Facebook says that making avatars indistinguishable from real-life likenesses of its users is an important step toward providing a safe and welcoming environment.
4. Some people are concerned that the tool could be used to censor speech or limit freedom of expression
Facebook is currently beta testing a feature that will let users protect themselves from other Facebook users, the company announced on Wednesday.
Facebook users will be able to block posts or messages from Facebook friends or strangers based on specific keywords included in posts, Facebook technical program manager Tessa Lyons wrote in a blog post. Facebook also plans to add a similar option for Facebook Messenger. Once blocked, posts with blocked words won’t show up in News Feeds, nor will they generate notifications.
The new feature is intended to protect Facebook’s 2 billion-plus members “from real-world harm,” Lyons said, adding that Facebook already offers tools that allow people to get help if they are worried about their safety. The social network recently added features that allow people to report threats in Facebook and Facebook Messenger.
The move comes in response to Facebook facing criticism over its policing of hate speech, fake news and other offensive content. Facebook is also under increased pressure from governments in Europe and around the world to quickly remove terrorist propaganda and incitement.
Lyons stressed that Facebook’s new feature is not a “solution” but rather an option for people when they feel threatened by someone else on Facebook.
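The keyword blocking described above amounts to a simple filter over a user’s feed. The Python sketch below is purely illustrative – the function names and data structures are assumptions for explanation, not Facebook’s actual API:

```python
def is_blocked(post_text: str, blocked_keywords: set[str]) -> bool:
    """Return True if the post contains any keyword the user has blocked."""
    # Normalize words: strip common punctuation, compare case-insensitively.
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return not blocked_keywords.isdisjoint(words)


def filter_feed(posts: list[str], blocked_keywords: set[str]) -> list[str]:
    """Keep only posts that trigger no blocked keyword.

    Blocked posts never reach the feed, which also models the
    "no notifications" behavior: a dropped post can't notify anyone.
    """
    return [p for p in posts if not is_blocked(p, blocked_keywords)]


feed = ["Nice photo!", "You are a LOSER", "See you at lunch"]
print(filter_feed(feed, {"loser"}))  # the middle post is filtered out
```

A real system would of course need fuzzier matching (misspellings, word variants) than this exact-word check, which is one reason keyword blocking alone is described as an option rather than a solution.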
5. Others argue that it is necessary to protect people from abuse and harassment
The Facebook Metaversal Space Agency has come up with an innovative new technology to protect people from abuse and harassment within the Facebook metaverse. Speaking at a conference in Silicon Valley, Mr Zuckerberg revealed that Facebook users will see an additional button underneath their mouse pointer which allows them to remove themselves from the virtual conversation and avoid harassment and abuse.
“We want Facebook to be a place where people can express themselves freely and connect with friends and family,” said Mr Zuckerberg. “Now we know that there are some Facebook users who feel threatened or unsafe when using our service.” He continued: “Thanks to this new system you have complete control over your Facebook experience. Once you click on the ‘stop’ button, you won’t see any Facebook messages from the people who are abusing you. The Facebook system also allows you to block Facebook friends or Facebook pages completely.”
The feature has been welcomed by Facebook users worldwide, with Metaversal celebrities getting involved in promoting it. Some, however, argue that this is not enough and that Facebook should provide a closed messaging service in which only people you have explicitly approved can send messages to your Facebook inbox. They feel that this would create a safe space for Facebook users, but Mr Zuckerberg stated that “We need to build an open metaverse where all Facebook users can connect with each other regardless of their physical location.”
6. The debate over online safety is sure to continue
The debate over online safety is sure to continue as Facebook CEO Mark Zuckerberg announces a tool that detects and deletes harassing Facebook posts.
The Facebook chief has been criticized for his reaction to the issue of harassment in the metaverse, but he claims that this new tool will be able to keep Facebook safe from user harassment.
He announced on Facebook’s newsfeed Thursday morning: “Haters, trolls, and bullies beware! Our discussions around Facebook safety have led us to release a new automated moderation system today which detects and deletes bullying Facebook posts.” The announcement came with a picture of an automatic deletion button.
Facebook had several notable incidents involving trolling and hate speech last year, including the infamous ‘Gamergate’ incident back in November, where Facebook trolls caused an online misogyny row to explode into the mainstream media.
“With this new tool, Facebook posts flagged by users as bullying or harassment will be removed automatically,” said Zuckerberg in his statement. “We think Facebook safety should be controlled by Facebook community members, not Facebook employees.”
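The flag-then-remove flow Zuckerberg describes can be modeled as a simple threshold rule: once enough distinct users flag a post, it disappears. Here is a minimal Python sketch under that assumption – the class name and threshold value are invented for illustration and reflect nothing about Facebook’s real system:

```python
class ModerationQueue:
    """Hypothetical model of community-driven auto-removal: a post is
    deleted once `threshold` distinct users flag it as bullying."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold                 # assumed value, not Facebook's
        self.flaggers: dict[str, set[str]] = {}    # post_id -> users who flagged it
        self.removed: set[str] = set()

    def flag(self, post_id: str, user_id: str) -> None:
        # Repeated flags from the same user count only once.
        self.flaggers.setdefault(post_id, set()).add(user_id)
        if len(self.flaggers[post_id]) >= self.threshold:
            self.removed.add(post_id)

    def is_visible(self, post_id: str) -> bool:
        return post_id not in self.removed
```

Requiring distinct accounts rather than raw flag counts is what keeps a single angry user (or a pile-on from one account) from deleting a post on their own – which is exactly the censorship worry raised in the previous section.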
The debate over Facebook’s responsibilities to their online community has become even more heated recently due to Facebook’s lack of action over real-life violence.
Zuckerberg concluded: “As always we ask our moderating team and Facebook users to flag any Facebook posts they find abusive so we can review them for potential removal.”