Social media giant Facebook, which also owns the popular apps Instagram, Messenger and WhatsApp, reportedly assesses about 5 lakh (500,000) reports of revenge porn every month.
But that number seems low considering that Facebook now has around 2.6 billion monthly active users.
Earlier this year, Facebook launched artificial intelligence (AI) tools that can spot revenge porn, also known as non-consensual intimate images, before it is reported by users, NBC News reported.
In 2017, the company launched a pilot project that let users submit intimate pictures to the platform so that its AI tool could be trained to identify and remove such pictures if they appeared on the platform.
"In hearing how terrible the experiences of having your image shared was, the product team was really motivated in trying to figure out what we could do that was better than just responding to reports," NBC News quoted Radha Plumb, head of product policy research at Facebook.
Facebook has a team of around 25 people -- excluding content moderators -- working full-time to fight revenge porn.
The team's goal is not only to quickly remove pictures or videos once they have been reported, but also to use AI to detect the images the moment they are uploaded, preventing them from being shared at all.
(IANS)