X/Twitter boss Elon Musk's defiance over the eSafety Commissioner's takedown orders unveils the hostility large platforms have always held for meaningful public accountability.
Rather than exercise any semblance of corporate social responsibility, the latest events of this saga demonstrate a level of contempt that has until now been hidden in plain view.
Violent, objectionable footage will continue to spread online, leaving regulators to play a never-ending game of whack-a-mole. As the eSafety Commissioner and X/Twitter face off in a legal process, the limits of the Online Safety Act are being tested in real time.
There is a showdown here that is bigger than the eSafety Commissioner and Musk. It is a confrontation many years in the making between governments and big tech over power and responsibility.
This latest incident is one of many in which large platforms have revealed that they treat the safety and wellbeing of users as an optional exercise, and one to be pursued squarely on their own terms.
Reputational risk, once held out as incentive enough for platforms to ensure user safety, has become an early casualty in this week's belligerent battle. Indeed, Musk has doubled down on chasing a bad reputation, angling for more notoriety and baiting for online engagement in the process.
Appallingly, he has deliberately pointed his thousands-strong goon squad to attack the Commissioner herself in disgusting and sexist terms. And Musk is not only flipping the middle finger at concerned regulators - he's hounding not-for-profits and attempting to use the courts to silence their public interest research and maim the accountability sector. These are just some of the 'offline' harms that happen while we wait for weak regulation to fail.
Large platforms are exploiting the gaps in Australia's patchy framework for tech accountability and goading the government and regulators to make the next move.
Over at Meta, the company's shameless withdrawal from the News Media Bargaining Code and its courtroom manoeuvres to evade liability for negligent and harmful advertising systems are another example of this alarming but utterly unsurprising shift.
These platforms are heaving with systemic risks that give rise to harms extending beyond surface-level content. Various platforms' content recommender systems (or algorithms) have been routinely found to actively promote content that is harmful to mental and physical health, such as eating disorder content and pro-suicide content.
A recent Reset.Tech study found variable results in platform mitigations. TikTok appears to block algorithmic distribution of eating disorder content, indicating it is operationally feasible.
However, Instagram's algorithm was only partially effective at preventing this type of content from spreading into young people's feeds. Over on X, the algorithm takes users from pro-eating disorder content to pro-suicide content in fewer than 40 posts.
Content can harm, but not all online harms come from surface-level content. Much of it stems from the digital systems that power the distribution of ads and posts.
It suits Musk and his peers to keep the regulatory fight confined to reactive measures like content takedowns. Proactive measures targeting these platforms' underlying systems would require a substantial redistribution of company resourcing.
It would mean that, for example, regulators could probe the supercharged advertising underbelly that milks Australians of millions of dollars in online scams annually.
A systemic approach, such as an overarching "duty of care", would be a strong start.
We must move away from focusing on individual pieces of content and strengthen the foundations of the Online Safety Act so regulators have options beyond the time-consuming notice-and-takedown model and the performance of information requests (the latter also overly relied upon in the proposed misinformation bill). In its place could be a positive, legally enforceable obligation compelling platforms to mitigate harms before they occur.
Whatever the outcome of the courtroom showdown between X and the eSafety Commissioner, it has become clear that the current model cannot keep up with the sheer volume of user-generated content.
Without systemic regulation over online safety, social media companies will continue to shirk their responsibilities, leaving the government to deal with the fallout.
If we truly want to hold platforms accountable, we must abandon this relentless game of whack-a-mole for a more robust and enforceable system that protects the safety and dignity of all users.
- Alice Dawkins is executive director of Reset.Tech