The UK’s communications regulator, Ofcom, says it is prepared to “disrupt” tech platforms that don’t comply with the country’s controversial new Online Safety Act, including cutting them off from payment systems or even blocking them from the UK.
The act—a sprawling piece of legislation that covers a spectrum of issues, from how technology platforms should protect children from abuse to scam advertising and terrorist content—became law in October. Today, the regulator released its first round of proposals for how the act will be implemented and what technology companies will need to do to comply.
The proposed regulations would compel Big Tech companies to tackle avenues for grooming children for abuse on their platforms, and to have “adequate” trust and safety teams to limit the spread of harmful content. Companies will also have to name an individual in the UK who can be held personally accountable for violations.
“Our supervision activity starts today,” says Gill Whitehead, a former Google executive who now heads Ofcom’s Online Safety Group. “From today, we will be supervising, one-to-one, the largest firms and the firms that we think may have the highest risks of certain types of illegal harms … The tech firms need to step up and really take action.”
Ofcom’s proposals give some clarity over what tech companies will need to do to avoid penalties for breaching the act, which could include fines of up to 10 percent of their global revenue and criminal charges for executives. But the proposals are unlikely to reassure messaging platforms and online privacy advocates, who say that the act will compel platforms to undermine end-to-end encryption and create backdoors into their services, opening them up to privacy violations and security risks.
In defending the Online Safety Act, the government and its backers have portrayed it as essential to protecting children online. Ofcom’s first tranche of proposals, which will be followed by more consultations stretching into 2024, focuses heavily on limiting minors’ access to disturbing or dangerous content, and on preventing them from being groomed by potential abusers.
Ofcom says its research shows that three out of five children between the ages of 11 and 18 in the UK have received unwanted approaches that made them feel uncomfortable online, and that one in six have been sent, or been asked to share, naked or semi-naked images. “Scattergun” friend requests are used by adults looking to groom children for abuse, Whitehead says. Under Ofcom’s proposals, companies would need to take steps to prevent children from being approached by people outside their immediate networks, including barring accounts they aren’t connected to from sending them direct messages. Children’s friend lists would be hidden from other users, and they wouldn’t appear in their own connections’ lists.
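To make the proposed defaults concrete, here is a minimal sketch of how a platform might enforce them; it is an illustration under assumptions, not code from Ofcom or any platform, and the names `Account`, `can_send_dm`, and `visible_friends` are hypothetical stand-ins for whatever a real service uses.

```python
# Hypothetical sketch of the default protections described above:
# accounts flagged as belonging to children reject direct messages from
# unconnected accounts, and their friend lists are hidden from others.
from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    is_child: bool
    connections: set[str] = field(default_factory=set)

def can_send_dm(sender: Account, recipient: Account) -> bool:
    """A child account only accepts DMs from its existing connections."""
    if recipient.is_child:
        return sender.user_id in recipient.connections
    return True

def visible_friends(viewer: Account, target: Account) -> set[str]:
    """A child's friend list is visible only to the child themselves,
    so they also never surface in other users' connection views."""
    if target.is_child and viewer.user_id != target.user_id:
        return set()
    return target.connections

if __name__ == "__main__":
    child = Account("child1", is_child=True, connections={"friend1"})
    stranger = Account("stranger1", is_child=False)
    assert not can_send_dm(stranger, child)           # scattergun DM blocked
    assert visible_friends(stranger, child) == set()  # friend list hidden
```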