
What’s in store for the online world in 2023?


Our senior experts make their predictions about the online world.

Significant advancements in technology make it difficult to predict what will happen in the online safety space this year.

Netsafe’s Leadership Team have shared their predictions for the online world in 2023. From the Metaverse to artificial intelligence, here are five predictions for the online safety space.

Brent Carey – Chief Executive Officer

This year the Metaverse will become a relevant alternate reality, appealing to people as an escape from the real world.

It offers an immersive 3D internet where users can interact, play, and even do business deals with each other. As it becomes better understood, people will expect platforms to build in online safety protections. For example, tools for detecting harmful and illegal content will need to be different in the Metaverse because it works differently from today’s platforms.

The Metaverse will also feature the ‘bubble’ concept that Kiwis know from New Zealand’s COVID-19 risk management plan. Users will want to activate a bubble to ensure they are not grouped with strangers or lulled into a false sense of security.

Andrea Leask – Chief Operating Officer

As 2023 is an election year, misinformation and disinformation will play a significant role in the public’s perception of political candidates and parties.

New Zealand’s content regulatory regime combines statutory law and voluntary self-regulation. It should prevent the excess of fake news that America experienced during the Trump era. However, the growing number of alternative news websites has the potential to cause trouble, as we saw at last year’s Wellington protest.

The internet, particularly social media, has the power to accelerate socio-political movements. The election is an opportunity for people to spread misinformation and disinformation and be heard, because such claims flourish in the absence of legitimate, evidence-based information.

Leanne Ross – Chief Customer Officer

The explosion of Artificial Intelligence (AI) into the mainstream at the end of 2022, spurred on by the release of ChatGPT, sent shivers down the spines of communications professionals everywhere.

AI can now write university essays undetected, beat Grandmasters at chess, produce award-winning art, design proteins for science, and probably do a lot more we don’t know about yet.

We’re already seeing what can happen when AI is used to harm. “Deepfake” manufactured videos have moved from targeting politicians and celebrities to targeting the public. Deepfake image-based abuse is just as damaging for victims as if the content shared had been real.

2023 will bring more discussion around ethical AI and transparency. The proposed European AI Act aims to regulate AI use cases according to their level of risk and would affect AI usage worldwide.

In public discourse here, we should also see calls for more investment in critical thinking skills in our education system and for media literacy training available to all segments of society, especially our most vulnerable.

As sectors across society adopt AI for innovation and efficiency, it’s more important than ever that we educate ourselves to critique these technologies as we interact with them more and more in our everyday lives.

Michael des Tombe – Legal Advisor  

While we await the outcome of the Government’s content regulatory review, and the legislation that will follow, we will likely see some changes in the way tech companies approach content regulation in New Zealand this year.

Overseas, the EU has recently adopted the Digital Services Act (DSA). Although it does not come into full force until 2024, it will likely set the global standard for content regulation given the EU’s size and global influence.

To avoid different internal practices in each region, global tech companies may choose to adopt the DSA’s standards and apply them in other jurisdictions. New Zealand might then start to see changes in platform terms and conditions and the introduction of DSA processes such as illegal content reporting.

2023 looks to be an important year for content regulation – let’s hope it ushers in a new era for online safety.

Sean Lyons – Chief Safety Officer

This year, technology’s importance in keeping us safe will make its mark.

New technology brings new opportunities to cause harm, and a reminder to stay grounded and not be swept up by the latest service or device. As more people use technology, there are more factors to consider about exactly what they are engaging with.

There are increasing calls for technology and its developers to do more for online safety. Often these calls come from people with a particular viewpoint: for example, those who champion data encryption to defend an individual’s privacy rights versus those who oppose encryption because of its impact on our ability to protect the young and vulnerable from sexual exploitation online.

Although their intentions are not bad, these views might be at odds with one another, with laws around the world, or with a company’s operational model.

With everyone wanting different things from the online world, the only way to resolve such fundamental differences is by applying technology to the technology itself: for example, using algorithms to remove, or notify users about, content that might be harmful to the individual.

There is a long way to go, but the changes already underway in the global regulatory landscape will force technology developers to pick up the pace. This will not only foster innovation in this space but also provide us with the safe online spaces we want.
