Opportunities for the security industry in the Metaverse

Digital Twins | Metaverse

10th of November 2022

KEY FACTORS OF THE ANALYSIS

  • The main opportunities the arrival of the metaverse poses for security companies relate to new risks to physical and moral integrity, as well as to increased exposure to the risks inherent to digital activity, such as cybercrime, harassment, extortion, and fraud. It also offers opportunities to provide services that protect against data theft, safeguard digital assets, and offer crypto custody. 

  • The Oasis Consortium is made up of multiple video game and digital platform companies and has developed standards to protect safety and ethical behavior in virtual environments. Its proposals include requiring that companies operating in the metaverse recruit professionals who ensure the safety and protection of interactions between users. 

  • Meta has also announced significant investments in research into possible risks to the security and physical and moral integrity of metaverse users. To this end, collaborations are being established with a variety of external public and private institutions. Machine-learning and artificial intelligence models have the potential to automate part of these tasks. 

The arrival of the metaverse will expose users to new types of security risks and vulnerabilities, leading to new business opportunities in the industry to respond to these protection needs. As part of this analysis, we have identified the risks that could potentially emerge as well as a number of pioneering initiatives. 

NEW TYPES OF RISK AND THEIR CLASSIFICATION

The security risks linked to the metaverse can be divided into different types:

  1. Physical security risks: first and foremost, the use of extended reality technologies can affect the user's real-life safety. For example, continuous use of 3D headsets for several hours a day can cause disorientation when returning to normal life. It can also lead to virtual habits being acquired that are incompatible with real life, giving rise to high-risk situations involving traffic or physical injury. Companies in the security sector might therefore consider offering, to both individuals and companies, mechanisms to minimize this type of risk in their personal and professional spheres, whether through training services or by establishing standards and best practices for using the metaverse. 
  2. Mental health risks: the prolonged use of virtual reality is associated with the risk of isolation, with the potential to lead to violent or depressive behavior. As with the first point, the task of security companies in this regard could involve prevention and training, although there could also be opportunities to take direct action: identifying cases of risk based on behavioral patterns in the virtual world and then acting accordingly in real life.
  3. Risks to physical and moral integrity: the metaverse remains an unregulated space, where many legislative limits are yet to be properly defined. However, as the regulatory frameworks are gradually adapted, the use of surveillance measures will become more necessary to ensure compliance with these norms. One example in relation to this point is the fact that the first sexual harassment and rape claims in virtual spaces are now being made (4). Security companies will therefore have a role to play in protecting metaverse users from these dangers and other limits in relation to personal consent and individual freedom. Protecting minors is also of vital importance. As children could also be users of the metaverse, there is an increased danger of their rights being violated through intimidation, extortion, harassment, etc. This new form of protection must now move from the realm of security in real physical spaces, to digital spaces online and now also to virtual spaces and environments. 
  4. Privacy risks: the arrival of the metaverse will result in an exponential increase in the volume of personal data registered online, as part of an individual's daily actions. Again, legislation is going to have to be very present to safeguard the right to privacy and the protection of information. Given the digital nature of this space, business opportunities already available online are expanded to offer protection against privacy intrusions or violations of the right to privacy and data confidentiality.
  5. Cybersecurity: related to the point above is one of the most important dangers posed by the metaverse, which is the risk of cyber attacks. These are of systemic importance to all types of organizations and will require reinforced security measures. Although security companies can also operate in this field, the space is already saturated with specialized providers experienced in cybersecurity. However, new startups dedicated to metaverse cybersecurity are also emerging, such as Arkose Labs (10), with which synergies are possible.
  6. Identity risks: the arrival of virtual avatars poses new dangers in relation to identity theft in digital spaces and cybercriminals having new opportunities to steal personal data, manipulate identification details or impersonate others, etc. Security companies can work to offer technological tools to users that allow them to adequately prevent these dangers.
  7. Economic scams and virtual blackmail: the use of cryptocurrencies and virtual exchange assets, based on blockchain and independent of traditional fiat currencies, entails additional exposure to fraud and deceit. Furthermore, avatars make it easier for criminals to adopt false identities and commit crimes of extortion or manipulation. The metaverse thus increases users' exposure to new dangers, including catfishing (creating false identities to lure people into supposed online relationships, then asking for financial favors or participation in abusive and deceptive actions) and doxing (accessing an individual's personal data or private documents, such as photos, through online deception, then extorting money from that person by threatening to publish them). Security companies can develop systems that make it easier to identify this type of crime, as well as offer their customers prevention and response services in the event of such incidents. This may include creating whistleblowing channels through which users who believe they are victims of these crimes can seek assistance. 
  8. Risk of theft of virtual assets, goods and properties: in relation to cybersecurity risks, there are also dangers in relation to the theft of virtual goods: from avatars, to property, objects or cryptocurrencies, etc. A very clear example of this can be seen in relation to crypto custody services. 
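To make point 6 above more concrete, one possible anti-impersonation tool is an identity-verification service that signs a token binding an avatar to a verified identity, which clients check before trusting the displayed name. The service, token format, and key handling below are illustrative assumptions only, not an existing metaverse standard.

```python
# Hypothetical sketch: avatars carry an identity token signed by a
# verification service; clients validate the signature before trusting
# the displayed identity. All names here are illustrative assumptions.

import hashlib
import hmac

SERVICE_KEY = b"demo-secret"  # in practice, a key managed by the verifier


def issue_token(avatar_id: str) -> str:
    """Return a signed token binding an avatar ID to the verifier's key."""
    sig = hmac.new(SERVICE_KEY, avatar_id.encode(), hashlib.sha256).hexdigest()
    return f"{avatar_id}:{sig}"


def verify_token(token: str) -> bool:
    """Check that the token was issued by the verifier and not forged."""
    avatar_id, _, sig = token.partition(":")
    expected = hmac.new(SERVICE_KEY, avatar_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A forged token fails verification because the attacker cannot reproduce the HMAC signature without the service key.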

This represents a fairly comprehensive list of the new risks that security companies can respond to in relation to the metaverse; however, are there already any initiatives in the sector specific to this area? 

OASIS CONSORTIUM AND METAVERSE SECURITY STANDARDS

One initiative in the field of metaverse security is the creation, in 2020, of the non-profit organization known as the Oasis Consortium. This global body consists of a group of companies from the video game industry and other online businesses that seek to undertake joint action to promote a more ethical network where people can interact more securely. (1) 

At the start of 2022, the Oasis Consortium published its initial standards for user safety in the metaverse. These recommendations include hiring a trust and safety officer responsible for ensuring compliance with the standards and detecting possible cases of unethical or discriminatory behavior, or behavior in breach of individual rights. 

All companies that form part of the consortium at present (including well-known metaverse platforms like Roblox and video game giants like Riot Games) have subscribed to these standards and have committed to applying these measures to achieve the stated objective. Together, these companies account for hundreds of millions of users to whom protection shall be provided in their virtual spaces. 

Oasis Consortium has indicated that the decentralized nature of the metaverse must be taken into account by companies, when self-regulating their own protection and security measures on behalf of users. 

Another objective of the consortium is getting big tech firms, including Meta, to join the initiative. Meta's inclusion would mean the practical standardization of its metaverse security recommendations worldwide.

META INVESTMENTS FOR SECURITY RESEARCH

Meta has already made several public statements in which it claims to be working on building a responsible metaverse, placing a priority on the security of virtual spaces. In one of these statements, Meta asserted that it was not unilaterally building the metaverse, rather that it aimed to do so in collaboration with regulators, legal experts and other industry partners to guarantee the effectiveness of the metaverse in real life. This is where security companies could be seen as potential partners by the creators of metaverses. (6) 

In November 2021, Meta announced a $10 billion investment in its virtual reality research lab, Reality Labs, with a view to identifying the problems affecting the creation of the metaverse in terms of security, integrity, equity, and social inclusion. To this end, it is also working with various government agencies, non-profit organizations, and academic institutions. Part of this investment is set aside for external companies to conduct independent research into these security issues. 

SECURITY SCORING SYSTEM

The creation of standards aside, another strategy the Oasis Consortium plans to implement is a scoring system that allows users to easily establish the level of trust and safety offered by a virtual reality platform. The scoring works much like rating systems already used in other fields, such as hotel classifications, energy-efficiency ratings, and financial product risk levels.
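The consortium has not published a concrete scoring algorithm, so the following is only a hypothetical sketch of how such a banded rating could work: weighted safety metrics are combined into a single score and mapped to a letter grade, as with energy-efficiency labels. The metric names and weights are illustrative assumptions.

```python
# Hypothetical sketch of a platform safety rating. Metric names and
# weights are illustrative assumptions, not Oasis Consortium specifications.

def safety_rating(metrics: dict) -> str:
    """Combine normalized metrics (each 0.0-1.0) into an A-E rating."""
    weights = {
        "moderation_coverage": 0.4,  # share of sessions with active moderation
        "report_resolution": 0.3,    # share of user reports resolved
        "age_verification": 0.3,     # strength of age/identity checks
    }
    score = sum(weights[k] * metrics.get(k, 0.0) for k in weights)
    bands = [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.4, "D")]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "E"
```

A platform scoring highly on all metrics would earn an "A", while one reporting no safety measures at all would fall to "E".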

THE FIGURE OF THE VIRTUAL SECURITY OFFICER

Another of the measures proposed by Oasis is to require that companies operating in the metaverse and other virtual environments recruit a person responsible for trust and safety. This role already exists at some large companies; however, to date there is no official obligation or standard indicating the extent to which this measure must be complied with. 

Should this initiative become effective, subject to the approval of all the companies participating in the consortium, a precedent would be set that could be extended to other public and private organizations involved in the creation of the metaverse, opening a huge door for security companies to provide professionals specializing in protecting the security of virtual spaces. 

ARTIFICIAL INTELLIGENCE TO DETECT HARASSMENT AND HATE CRIMES

Much of this work, structured around the protection and integrity of metaverse users, could potentially be automated. To this end, proposals have already been made to apply artificial intelligence and machine-learning algorithms capable of detecting, both automatically and in real time, virtual behaviors that entail hate or harassment. 

For example, the Center for Countering Digital Hate conducted research that involved recording interactions in VRChat, a social game accessed through Oculus virtual reality headsets, over several weeks. (8) In this game, participants form virtual communities through their avatars and play card games or meet up to interact in different virtual public spaces, such as clubs. 

As part of this research, more than 100 incidents were identified, some of them involving minors. Some cases involved sexual or violent harassment; in others, attempts had been made to expose minors to sexually explicit content. 

The incidents identified by this type of research need not be reviewed only by human professionals: they can also be used to train machine-learning models, allowing artificial intelligence algorithms to detect similar behavior automatically. 
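As a minimal illustration of what automated, real-time flagging might look like, the sketch below escalates suspicious chat messages to human moderators. A production system would use trained classifiers over text, audio, and behavioral signals; this rule-based version, with purely illustrative patterns, only shows the shape of such a pipeline.

```python
# Minimal illustration of automated flagging of chat messages for human
# review. Patterns are illustrative only; a real deployment would use
# curated lexicons and trained machine-learning models.

import re

FLAG_PATTERNS = [
    re.compile(r"\bkill yourself\b", re.IGNORECASE),
    re.compile(r"\bsend (me )?(your )?photos?\b", re.IGNORECASE),
]


def flag_message(message: str) -> bool:
    """Return True if the message should be escalated to a moderator."""
    return any(p.search(message) for p in FLAG_PATTERNS)


def triage(messages: list) -> list:
    """Filter a stream of messages down to those needing human review."""
    return [m for m in messages if flag_message(m)]
```

The same triage structure applies whether the per-message check is a keyword rule, as here, or a trained model's prediction.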

Virtual security professionals would also serve as moderators in the face of certain conflicts arising between platform users. 

The research needed to really understand how AI can be applied to this type of situation is still very broad; however, it undoubtedly represents another opportunity for security companies to develop early warning tools that can later be marketed. 

To this end, the role that law enforcement authorities will play is yet to be made clear. The ultimate provisions of the regulations will determine the opportunities to which companies in the industry will have access. 

These regulations must also specify the extent to which users' security rights and information privacy rights are compatible, as the two could come into conflict. For example, a user could file a complaint with a virtual security guard for having been subjected to harassment or discrimination; however, it would be difficult for that person to provide quick proof if platforms are not allowed to record or store what happens on them. In that case, the situation would resemble one in the "real world", in which public legal action would have to be taken. The same applies to artificial intelligence algorithms designed to handle only anonymized data, whose capabilities are limited by data protection and information privacy rights.

BIBLIOGRAPHY

01. MIT TECHNOLOGY REVIEW. El reto casi imposible de ofrecer seguridad y privacidad en el metaverso. January 24, 2022 [accessed March 23, 2022]. Available at: https://www.technologyreview.es/s/13950/el-reto-casi-imposible-de-ofrecer-seguridad-y-privacidad-en-el-metaverso 

02. DIGITAL TRENDS. ¿Es seguro el metaverso? Los 5 riesgos asociados a esta tecnología. March 02, 2022 [accessed March 23, 2022]. Available at: https://es.digitaltrends.com/sociales/es-seguro-el-metaverso-cinco-riesgos/ 

03. IT DIGITAL SECURITY. Estos son los riesgos de seguridad del metaverso. December 24, 2021 [accessed March 23, 2022]. Available at: https://www.itdigitalsecurity.es/actualidad/2021/12/estos-son-los-riesgos-de-seguridad-del-metaverso 

04. YAHOO FINANZAS. Una mujer denuncia una violación grupal virtual en el metaverso de Facebook. February 6, 2022 [accessed March 23, 2022]. Available at: https://es.finance.yahoo.com/noticias/mujer-denunci%C3%B3-violada-virtualmente-metaverso-170000579.html 

05. COMPUTER WEEKLY. El despliegue del metaverso conlleva nuevos riesgos y desafíos de seguridad. February 16, 2022 [accessed March 23, 2022]. Available at: https://es.finance.yahoo.com/noticias/mujer-denunci%C3%B3-violada-virtualmente-metaverso-170000579.html 

06. META. Building the Metaverse Responsibly. September 27, 2021 [accessed March 23, 2022]. Available at: https://about.fb.com/news/2021/09/building-the-metaverse-responsibly/ 

07. META QUEST. Keeping people safe in VR and beyond. November 12, 2021 [accessed March 23, 2022]. Available at: https://www.oculus.com/blog/keeping-people-safe-in-vr-and-beyond/ 

08. THE NEW YORK TIMES. The Metaverse's Dark Side: Here Come Harassment and Assaults. December 30, 2021 [accessed March 23, 2022]. Available at: https://www.nytimes.com/2021/12/30/technology/metaverse-harassment-assaults.html 

09. INFORMATION AGE. David Mahdi, chief strategy officer and CISO advisor at Sectigo, discusses what organisations operating the metaverse must consider when it comes to its security. March 15, 2022 [accessed March 23, 2022]. Available at: https://www.information-age.com/we-need-to-talk-about-metaverse-security-123498964/ 

10. VERDICT.CO.UK. Trying to do business with your customers, securely, online? This metaverse security CEO might know something. December 9, 2021 [accessed March 23, 2022]. Available at: https://www.verdict.co.uk/trying-to-do-business-with-your-customers-securely-online-this-metaverse-security-ceo-might-know-something/