How to digitally protect oneself in the Metaverse
When a survey asked 6,000+ users whether they had ever interacted with AI, just over a third (34%) said yes. In reality, 84% of them had interacted with an AI interface, meaning 50% could not tell whether they were dealing with an AI or a human service provider.
Now, what happens when the AI requests your personal information to provide you with a service? Who has jurisdiction over that information, and does your consent apply to AI and human service providers indiscriminately?
These are some of the thorny questions we’d have to answer as the vision for a sophisticated, converged metaverse becomes a reality.
Data protection and privacy are major concerns for metaverse companies, developers, and users alike. For users, failures here could mean violations of personal privacy, identity theft, and other types of fraud.
Companies that fail to factor in data protection and privacy rights in the metaverse could face heavy penalties in the long term – like the $5 billion fine the US FTC imposed on Facebook over privacy violations.
The metaverse can be described as a three-dimensional virtual space where users can engage in social interactions and also interact with their virtual surroundings using advanced human-computer interface (HCI) technology.
If data privacy is a problem in today’s 2D, Web 2.0 world, then the embodied internet of the metaverse adds a more complex dimension to the challenge. Consumers will use entirely new technologies to interact with the metaverse – e.g., electromyography-enabled haptic gloves.
The data collection, storage, and utilisation processes via these devices are yet to be fully documented. Also, user anonymity could become a bigger issue in the metaverse.
Hyper-realistic avatars, like the Codec Avatars being developed by Facebook, could allow users to hide their identity or even make it possible for children to appear as adults. How would this impact consent in the metaverse?
Simply put, the metaverse blurs the lines between the real and the virtual at a scale never seen before. We are still reeling from the personal rights impacts of the internet, and the next wave is already at the gates.
There are six factors companies must consider as they prepare to operate in the metaverse.
HCI devices could collect a wide variety of data types, including users’ biometric information.
Users must be educated on the privacy implications, and consent mechanisms must be simple enough for users to engage with meaningfully.
Consent should also be refreshed regularly rather than assumed in perpetuity, and these mechanisms must be upgraded whenever a new data type is introduced.
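As a rough illustration of what "no perpetual permission" could look like in practice, here is a minimal sketch of a consent ledger where every grant covers exactly one data type and expires after a fixed window. The class and field names (and the 90-day refresh window) are hypothetical, not drawn from any real platform's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentRecord:
    """One explicit grant for one data type -- no blanket, perpetual consent."""
    data_type: str                       # e.g. "eye_tracking", "emg_gesture"
    granted_at: datetime
    ttl: timedelta = timedelta(days=90)  # hypothetical refresh window

    def is_valid(self, now: datetime) -> bool:
        return now < self.granted_at + self.ttl

class ConsentLedger:
    def __init__(self) -> None:
        self._grants: dict[str, ConsentRecord] = {}

    def grant(self, data_type: str, now: datetime) -> None:
        self._grants[data_type] = ConsentRecord(data_type, now)

    def may_collect(self, data_type: str, now: datetime) -> bool:
        # Unknown data types default to "no" -- a new sensor or data
        # stream always requires a fresh, explicit grant.
        rec = self._grants.get(data_type)
        return rec is not None and rec.is_valid(now)
```

The key design choice is that the default answer is always "no": a grant for one data type says nothing about any other, and an expired grant is treated the same as one that was never given.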
The metaverse will be populated by both human and AI entities – and with time, it could become difficult to tell the two apart.
For complete transparency, AI bots (i.e., digital humans) must come with labels so that users always know who they are sharing their data with.
Further, these AI bots are modelled on real humans who willingly share their biometric data – the rights and consent rules governing these exchanges have to be clearly outlined.
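The labelling idea above can be sketched in a few lines: if the AI flag is set at registration and the label is applied server-side, users always see who (or what) they are talking to. The `Entity` type and `[AI]` badge format are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    is_ai: bool  # set once at registration; not user-editable

def display_name(entity: Entity) -> str:
    # The label is appended by the platform, so a bot operator
    # cannot strip it from the client side.
    return f"{entity.name} [AI]" if entity.is_ai else entity.name
```

Making the entity immutable (`frozen=True`) reflects the transparency requirement: the AI/human distinction is a platform-level fact, not a profile setting.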
Currently, data protection and privacy laws are not consistent around the world. EU’s GDPR, for example, lays down specific rules pertaining to EU citizens.
Different US states have different laws, like the CCPA in California, and the UK has its own version of the GDPR along with additional Privacy and Electronic Communications Regulations (PECR).
Meanwhile, the metaverse could become a wholly separate territory, operating universally yet outside any single jurisdiction.
Until legislation catches up, this demands stringent self-regulation.
One of the biggest drivers of data misuse is that most of the internet is perceived as a free service.
In reality, services like Google and Facebook are funded by ad revenues collected via ad targeting based on user data. By compensating users for collecting their information, some of these issues could be avoided in the metaverse.
For instance, in privacy-focused browsers like Brave, tracking cookies are blocked by default, and users can earn rewards or tokens if they choose to view targeted ads.
Since the metaverse will house massive volumes of user data, the technology has to be watertight. Developers must be careful to keep vulnerabilities to an absolute minimum and adopt secure coding principles.
Data breaches and accidental exposure could prove costly for companies in the long term, and regular testing and upgrades are needed to address this.
Finally, there will be situations where companies must choose between data privacy and user convenience or ease-of-use. For example, interoperability between two platforms is much smoother when a single set of terms & conditions governs both.
But ideally, for the user’s sake, consent should be renewed at every point of data re-entry, even if that means an additional authentication layer.
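The per-transfer consent check described above could look something like this: a transfer succeeds only with destination-specific consent plus a fresh authentication step, so consent given on one platform never carries over implicitly. The function name and parameters are assumptions for illustration.

```python
def authorize_transfer(destination: str,
                       destination_consents: set[str],
                       reauthenticated: bool) -> bool:
    """Permit a cross-platform data transfer only if the user has
    consented to this specific destination AND has just completed
    a fresh authentication step (the 'additional layer')."""
    return destination in destination_consents and reauthenticated
```

Requiring both conditions is the privacy-over-convenience trade-off: a single blanket T&C would make `destination_consents` a single global flag, which is exactly what per-point renewal avoids.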
The first step to ensure data protection and privacy in the metaverse is building privacy-sensitive technologies from the ground up.
Facebook has taken several measures in this direction. It recently shut down its facial recognition system, which identified users whenever they appeared in tagged photos and elsewhere.
It is also strengthening its age verification procedures to ensure age-appropriate interactions on its platforms. The company has also announced a Transfer Your Information (TYI) tool, aligned with the GDPR’s data-portability requirements, that lets users move their data out of Facebook whenever they want.
Finally, the company is working on privacy-enhancing technologies (PETs) to curb the use of personal data for ads through cryptography and statistical techniques.
All of this together will go a long way towards building a safe, privacy-sensitive, and regulated metaverse for users.
Other companies building their own metaverses, or looking to operate in one, must adhere to similar principles now, even though a mature metaverse may be a decade or more away.