

How can they police Instagram uploads...

5 replies

gutrotweins · 07/02/2019 18:07

... when there are over 1000 posts per second?

OP posts:
SlinkyDinkyDoo · 07/02/2019 18:14

Software?

gutrotweins · 07/02/2019 18:19

But software looking for what? I'm thinking of the clampdown on self-harm content - there are so many different things the software would have to search for.

OP posts:
Willowdenedixon · 07/02/2019 19:23

Data mining of images: software can identify suspect images, similar to how facial recognition software works. For example, I recently took a picture of a branded food item (with no text reference to what it was) to check with DH that I was buying the right one; since then I have got sponsored content on Instagram, websites, etc for that brand. The software looks for identifying features, and the captions and hashtags on IG pictures also provide flags for what to look for. It isn't people checking them by hand; they just get a list of flagged items.
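The "identifying features" idea can be sketched with a toy perceptual hash: reduce an image to a compact fingerprint, then compare fingerprints by how many bits differ. This is purely illustrative (the function names and the tiny 8x8 "images" are made up for the example; real platforms use far more robust fingerprints), but it shows why a slightly edited copy of a known image still matches.

```python
# Toy sketch of perceptual-hash image matching - illustrative only,
# not any platform's real implementation.

def average_hash(pixels):
    """Hash an 8x8 grid of grayscale values (0-255): each bit records
    whether that pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# A "known flagged" image, and a slightly brightened copy of it.
flagged = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
similar = [[min(255, p + 10) for p in row] for row in flagged]

d = hamming_distance(average_hash(flagged), average_hash(similar))
print("distance:", d)  # 0 - the brightened copy still matches
```

Because the hash compares each pixel to the image's own mean, uniform edits like brightening barely change the fingerprint, so re-uploads of a known image can be caught without any human looking at them.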

gutrotweins · 07/02/2019 21:00

I realise that people won't be checking them Grin, and that there are clever ways of finding images.
But methods of self-harming are so diverse - knives, razors, rope, arms, necks, pills, etc - plus all the tags. I don't see how they're going to isolate all of the relevant images.

OP posts:
Willowdenedixon · 08/02/2019 06:14

Computers learn over time what to look for. They'll never catch them all, but they'll associate similar images: say a more obvious picture is identified and then that user posts something less obvious, the computer learns to find those pics too. People flagging a picture as inappropriate also 'teaches' it what to look for.
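The learning-from-flags loop described above can be sketched as a tiny online classifier that nudges its weights every time users flag (or don't flag) a post. This is a toy example under big simplifications - real moderation models are neural networks trained on millions of labelled images, whereas here the "features" are just caption words and all names are invented for illustration.

```python
# Toy online learner updated from user flags - illustrative only.
from collections import defaultdict

class FlagLearner:
    def __init__(self):
        # One weight per caption word, starting at zero.
        self.weights = defaultdict(float)

    def score(self, words):
        """Higher score = more likely to need review."""
        return sum(self.weights[w] for w in words)

    def learn(self, words, flagged):
        # Perceptron-style update: nudge each word's weight towards
        # +1 for flagged content, -1 for content deemed fine.
        target = 1.0 if flagged else -1.0
        for w in words:
            self.weights[w] += 0.5 * target

model = FlagLearner()
# Users flag an obvious post; the model picks up its vocabulary...
model.learn(["tw", "selfharm", "blades"], flagged=True)
model.learn(["dinner", "recipe"], flagged=False)
# ...so a later, less obvious post that shares a word scores higher.
print(model.score(["selfharm", "coping"]))  # positive -> queued for review
print(model.score(["dinner", "holiday"]))   # negative -> left alone
```

This is the mechanism behind "it learns from flags": each human flag shifts the model, so later posts that merely resemble flagged ones get caught even though no rule was ever written for them.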
