By Graeme McGilliard and Jake Foster
Is the UK government’s new Online Safety Act a major step forward in protecting the public, or the first step towards state censorship of the internet?
Opinion is divided over what the new legislation will mean for the public and for brands that talk to different audiences online.
Ofcom has published draft guidance setting out what tech firms must do to protect users from illegal content online. The guidance was prompted by research highlighting the dangers many children face online.
Three in five under-18s say they have been contacted online in a way that made them feel uncomfortable, and around 30% have received an unwanted follow or friend request.
Technology companies – including social media, gaming, search and sharing sites – will have to follow these codes of practice to meet their obligations under the Online Safety Act, which requires them to assess the risk of their users being harmed by illegal content on their sites and to take reasonable steps to protect users from it.
Ofcom will also require some platforms to use hash matching technology to detect child sexual abuse material (CSAM). The technology converts an image into a string of numbers known as a ‘hash’ and compares it against a database of hashes generated from known CSAM images; if there is a match, a known CSAM image has been found.
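To illustrate the principle only – this is not Ofcom’s specification or any vendor’s actual system, and production tools typically use perceptual hashes such as PhotoDNA, which survive resizing and re-encoding, rather than the cryptographic hash used here – a minimal Python sketch of hash matching might look like this:

import hashlib

# Hypothetical database of hashes generated from known images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(image_bytes: bytes) -> str:
    # Convert the image's bytes into a fixed-length string of numbers (a 'hash').
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    # Compare the uploaded image's hash against the database of known hashes.
    return image_hash(image_bytes) in KNOWN_HASHES

In this sketch, an uploaded image whose hash appears in the database would be flagged as a match; anything else passes through unflagged.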
Some commentators – including Wikipedia founder Jimmy Wales – are extremely sceptical about the motivation behind these new powers, fearing they are a pathway towards state censorship.
Ofcom says those powers will not be consulted on until 2024 and are unlikely to come into force until around 2025.