Instagram to Use AI to Censor Mean Comments

Instagram will soon trial a new AI-powered system for dealing with online bullying and abuse. Currently, the Facebook-owned social media platform handles the problem with a conventional, human-dependent system that relies on users flagging and reporting comments as spam, abuse, or hate speech.

On Monday, however, Instagram Head Adam Mosseri reportedly revealed that the company has started testing a new feature that uses AI to warn users when a comment they are about to post may be offensive. Even more ominously, Mosseri also said that the platform will shortly begin trialing a related AI feature that automatically hides comments it deems offensive without the commenter being aware that their comment was hidden.

‘Big Brother’ Turns Out to Be AI

When George Orwell wrote “1984,” the dystopian ‘Big Brother’ system was at least overseen by living, breathing humans, even if it manifested as a series of recorded messages. Even that, apparently, is too much to ask in 2019. After playing every kind of ethical roulette imaginable with its users, including allowing user data to fall into the hands of billionaire supervillains and functioning as a sort of political mind-control black hole for certain age demographics, Facebook has clearly not had enough of being the bad guy in the room.

Explaining how the not-at-all-creepy new feature works in a blog post, the company said:

“In the last few days, we started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it’s posted. This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification. From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

In other words, Instagram’s big idea for tackling online bullying is to urge users to self-censor. We can all look forward to messages from the super-brilliant AI urging us to remove the “offensive” comment we left under a friend’s post that was clearly intended to be tongue-in-cheek. The very idea of being digitally tapped on the shoulder by an internet robot every time one posts something it considers naughty is like something out of a satirical TV sketch.
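For readers curious what a pre-post “nudge” flow like the one Instagram describes might look like, here is a minimal sketch. Everything in it is an assumption for illustration: the function names, the threshold, and especially the keyword-based scorer, which stands in for whatever trained ML classifier Instagram actually uses.

```python
# Hypothetical sketch of a pre-post comment nudge, NOT Instagram's implementation.
# The word list and threshold are illustrative stand-ins for a trained classifier.

OFFENSIVE_TERMS = {"idiot", "loser", "ugly"}  # assumed toy vocabulary
NUDGE_THRESHOLD = 0.2                         # assumed cut-off for showing the prompt


def offensiveness_score(comment: str) -> float:
    """Toy scorer: fraction of words that look abusive (placeholder for an ML model)."""
    words = comment.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_TERMS)
    return flagged / len(words)


def submit_comment(comment: str, confirm_anyway: bool = False) -> str:
    """Publish the comment, or ask the user to reconsider before it goes live."""
    if offensiveness_score(comment) >= NUDGE_THRESHOLD and not confirm_anyway:
        return "nudge: 'Are you sure you want to post this?'"
    # Per Instagram's description, the user can still post after reflecting.
    return "published"


if __name__ == "__main__":
    print(submit_comment("you are such an idiot"))                       # nudge
    print(submit_comment("you are such an idiot", confirm_anyway=True))  # published
    print(submit_comment("great photo!"))                                # published
```

The key design point, going by the company’s own description, is that the check runs before the comment is delivered and the final decision stays with the user.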

On seeing this, one would be forgiven for thinking that the only thing that could make it worse would be if the project were codenamed ‘Skynet’ and came with indestructible military androids, but Instagram has somehow managed to make it even worse. Much worse.

Turning Instagram Into Facebook

Alongside the user-nudging feature, the company also announced the launch of a new tool called Restrict, which lets users hide comments from certain accounts both from themselves and, crucially, from everyone else. In other words, rather than offer an anti-bullying tool that helps users avoid seeing offensive comments without jeopardizing the conversation (as Twitter does), Instagram is going full-on Facebook.

Users who do not want to use the existing ‘limited comment’ function, which restricts commenting to a selected group of friends, will now effectively be able to decide who appears to comment on their supposedly public posts by controlling whose comments anyone else can see. That means we can say goodbye to any kind of honest interaction, especially on brand and influencer pages. Instagram users can now astroturf their own comment sections instead of simply restricting their comments or going private.
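To make the visibility trick concrete, here is a minimal sketch of the rule the article describes: a restricted account’s comments stay visible to that account (so the commenter doesn’t notice anything), while everyone else sees a cleaned-up thread. The data model and rule are assumptions for illustration, not Instagram’s actual implementation.

```python
# Hypothetical sketch of 'Restrict'-style comment filtering, assumptions only.
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str


def visible_comments(comments, viewer, restricted):
    """Return the comments a given viewer would see under the assumed Restrict rule."""
    shown = []
    for c in comments:
        # Assumed rule: a restricted account's comment stays visible to its own
        # author, but is hidden from the post owner and the general public.
        if c.author in restricted and viewer != c.author:
            continue
        shown.append(c)
    return shown


if __name__ == "__main__":
    thread = [Comment("fan", "Love this!"), Comment("critic", "Broke after a week.")]
    print([c.text for c in visible_comments(thread, "random_user", {"critic"})])  # ['Love this!']
    print([c.text for c in visible_comments(thread, "critic", {"critic"})])       # both comments
```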

Facebook already knows more about us than we know about ourselves, and it has designs on someday controlling the money we all spend. Even for such a company, however, this is something unprecedented.
