A Washington, D.C.-based technology think tank, the Information Technology & Innovation Foundation (ITIF), released a report on Tuesday warning that efforts to protect children in the two-dimensional world of social media could harm the 3D realm of virtual and augmented reality.
The report explained that if the Kids Online Safety and Privacy Act (KOSPA) is passed, AR/VR platforms could be forced to enforce their policies the same way social media platforms do.
It continued that by giving the Federal Trade Commission (FTC) the authority to deem content on AR/VR platforms harmful, the law could prompt the agency to over-censor content, or the platforms themselves might censor content to avoid liability.
“Our concern with KOSPA is that it could lead to over-censorship if the FTC is given the power to police content,” said Alex Ambrose, the ITIF policy analyst who wrote the report.
“It’s another way a political party could decide what’s harmful,” she told TechNewsWorld. “The FTC could say content like environmental protection, climate change, and global warming is anxiety-inducing, so we need to get rid of all content related to climate change because it could cause anxiety in children.”
Avoiding Over-Censorship
The fear of over-censorship has become a common theme in discussions of online regulation, noted Andy Lulham, COO of VerifyMy. He told TechNewsWorld that he believes this fear, while understandable, is largely unfounded. “Well-crafted government regulation is not the enemy of online freedom but its protector in the digital age.”
The key, Lulham maintained, lies in the approach. “Blanket, heavy-handed regulations risk tipping the scales toward over-censorship,” he said. “However, a nuanced, principles-based regulatory framework can enhance online freedom while protecting vulnerable users.” He cited privacy regulations such as the GDPR as an example of such a balanced approach.
The GDPR (General Data Protection Regulation), which has been in effect since 2018, is a comprehensive European Union data protection law that regulates how companies collect, store, and use the personal data of EU residents.
“I am adamant that regulation should focus on mandating robust safety systems and processes rather than dictating content decisions,” Lulham added. This approach, he said, puts the onus on platforms to develop comprehensive trust and safety strategies rather than creating a climate of fear and over-removal.
Transparency, he stated, should be the cornerstone of any effective regulation. Mandating detailed transparency reports, he explained, can hold platforms accountable without resorting to heavy-handed content policing. That accountability helps prevent overreach and builds public confidence in both the platforms and the regulatory framework.
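To make the idea concrete, here is a minimal sketch of the kind of data such a transparency report might aggregate. The field names and figures are illustrative assumptions, not a format any regulation actually mandates.

```python
# Minimal sketch of data a platform transparency report might aggregate.
# All fields and numbers below are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class TransparencyReport:
    period: str                        # reporting window, e.g., "2024-Q3"
    items_reviewed: int = 0            # total items assessed
    items_removed: int = 0             # removals across all categories
    removals_by_category: dict = field(default_factory=dict)
    appeals_filed: int = 0             # user appeals of removal decisions
    appeals_granted: int = 0           # removals reversed on appeal

    def reversal_rate(self) -> float:
        """Share of appealed removals that were reversed, a rough
        signal of over-removal that a regulator could track."""
        return self.appeals_granted / self.appeals_filed if self.appeals_filed else 0.0


# Made-up example figures, purely for illustration.
report = TransparencyReport(
    period="2024-Q3",
    items_reviewed=1_200_000,
    items_removed=8_400,
    removals_by_category={"harassment": 3_100, "violent_content": 2_250},
    appeals_filed=900,
    appeals_granted=180,
)
print(f"Appeal reversal rate: {report.reversal_rate():.1%}")  # 20.0%
```

A consistently high reversal rate across reporting periods would be exactly the kind of overreach signal Lulham suggests transparency can expose without regulators policing individual content decisions.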
“Furthermore,” he said, “I support regulations that require clear and accessible appeal processes for content removal decisions.” Such a safeguard, he noted, can correct the inevitable mistakes and prevent unwarranted content removal.
Lulham acknowledged that critics might argue any regulation will lead to some sort of censorship. “However, I contend that the greater threat to freedom of expression comes from unregulated spaces where vulnerable users can be silenced by abusive and harassing behavior,” he said. “Well-designed regulations create a level playing field that amplifies diverse voices.”
AR/VR: The Good, the Bad and the Ugly
The ITIF report noted that AR/VR is often overlooked in discussions of online safety. Immersive technologies, it explained, stimulate imagination, foster creativity, and build social connections, all of which are essential to children’s development.
However, the report acknowledged that addressing the risks immersive technologies pose to children is a challenge. Most of the immersive technologies on the market, it noted, are not intended for children under 13. When children explore spaces designed for adults, they can be exposed to inappropriate content and develop harmful habits and behaviors.
Addressing these risks, it added, will require a combination of market innovation and thoughtful policymaking. Companies’ design decisions, content moderation practices, parental controls, trust and safety strategies, and policies will all have a major impact on the safety of the metaverse.
The report acknowledged, however, that public policy interventions will be needed to address certain safety threats. Policymakers, ITIF noted, have already taken up children’s safety on “2D” platforms such as social media, producing regulations that may also affect AR/VR technology.
Before enacting such regulations, the report urged, policymakers should consider the ongoing safety efforts of AR/VR developers and ensure those tools can remain effective. Where safety tools prove insufficient, it continued, policymakers should focus on targeted interventions that address proven harms rather than hypothetical risks.
Ambrose explained that while most online services work to remove harmful material, some will always slip through. “The same issues we see on today’s platforms, such as incitement to violence, vandalism, and the spread of harmful content, will continue on immersive platforms,” she said.
“The metaverse is going to thrive on massive amounts of data, so we can assume that these issues will be pervasive — maybe even more pervasive than what we see today,” she added.
Safety by Design
Lulham agreed that design decisions made by companies will determine the safety of the metaverse.
“In my opinion, the decisions that companies make about online safety will be pivotal in creating a secure environment for children,” he stated. “The current environment is fraught with risk, and I think companies have the power and responsibility to reshape it.”
The design of user interfaces, he said, is the first line of defense for children. Companies that prioritize age-appropriate, intuitive design, he explained, can fundamentally change how children interact online. “By designing interfaces that naturally guide and educate users toward safer behaviors, it is possible to reduce harmful encounters.”
Content moderation, he added, has reached a crucial point. “The volume of content requires a paradigm change in our approach,” he noted. “While AI-powered tools are essential, they are not a panacea. I argue that a hybrid approach, combining advanced AI with human oversight, is the way to go to strike the right balance between protection and censorship.”
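As an illustration of that hybrid approach, the sketch below routes each item by an AI-assigned harm score and escalates only the ambiguous middle band to human reviewers. The thresholds, names, and scores are illustrative assumptions, not any platform’s actual system.

```python
# Sketch of hybrid moderation routing: an AI classifier scores each item,
# and only ambiguous cases are escalated to human review.
# Thresholds and identifiers are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"


@dataclass
class ModerationResult:
    item_id: str
    harm_score: float   # 0.0 (benign) to 1.0 (harmful), from an AI model
    decision: Decision


# Hypothetical thresholds; real systems tune these per harm category.
REMOVE_THRESHOLD = 0.95   # confident enough to auto-remove
ALLOW_THRESHOLD = 0.20    # confident enough to auto-allow


def route(item_id: str, harm_score: float) -> ModerationResult:
    """Route one item based on the AI model's harm score."""
    if harm_score >= REMOVE_THRESHOLD:
        decision = Decision.REMOVE        # clear-cut violation
    elif harm_score <= ALLOW_THRESHOLD:
        decision = Decision.ALLOW         # clearly benign
    else:
        decision = Decision.HUMAN_REVIEW  # ambiguous: human oversight
    return ModerationResult(item_id, harm_score, decision)


if __name__ == "__main__":
    for item, score in [("post-1", 0.98), ("post-2", 0.05), ("post-3", 0.60)]:
        print(route(item, score))
```

Widening or narrowing the human-review band is how such a system would trade protection against over-censorship, which is the balance Lulham describes.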
Parental controls, he argued, are frequently overlooked yet essential. These tools shouldn’t be mere add-ons; they should be designed as core features, receiving the same attention as the platform itself. “I see a future in which these tools are so intuitive and efficient that they become an integral part of family digital life,” he said.
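One way to read “core feature rather than add-on” is that the platform’s own session logic consults the controls like any other account setting. The sketch below illustrates that idea; every field name and limit is an illustrative assumption.

```python
# Sketch of parental controls modeled as a first-class platform setting,
# enforced by core session logic rather than bolted on afterward.
# All field names and limits are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ParentalControls:
    max_session_minutes: int = 60         # daily screen-time budget
    max_content_rating: str = "E10+"      # content rating ceiling
    voice_chat_allowed: bool = False      # social features off by default
    stranger_contact_allowed: bool = False


def can_start_session(controls: ParentalControls, minutes_used_today: int) -> bool:
    """Core platform code checks the controls before starting a session,
    the same way it would check any other account setting."""
    return minutes_used_today < controls.max_session_minutes


controls = ParentalControls()
print(can_start_session(controls, minutes_used_today=45))  # True
print(can_start_session(controls, minutes_used_today=75))  # False
```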
Trust and safety strategies, he argued, will separate successful platforms from declining ones. Companies that adopt a holistic strategy, integrating robust age verification and real-time monitoring with transparent reporting, will set the standard, he declared. For companies serious about protecting children, regular engagement with child safety specialists and policymakers is non-negotiable.
“In essence,” he continued, “I see the future of online safety for children as one where ‘safety by design’ isn’t just a buzzword but the fundamental principle driving all aspects of platform development.”
The report stated that children will play a vital role in the market adoption of immersive technologies.
It acknowledged that fostering innovation in this emerging field while creating a secure environment for all AR/VR users will be a challenge. All stakeholders, it added, including parents, companies, and regulators, have a role to play in balancing privacy and safety concerns while delivering innovative and engaging immersive experiences.