Facebook says its artificial intelligence systems now report more offensive photos than human users do. If such content is caught in time, it can be removed before it hurts the person or group it targets. Previously, at least one person had to see and flag offensive content uploaded by a user intent on disturbing others. Offensive content here means anything that constitutes hate speech, is threatening or pornographic, incites violence, or contains nudity or graphic or gratuitous violence. For example, a bully, jilted ex-lover, stalker, terrorist or troll could post offensive photos to someone’s wall, a group, an event or the news feed, TechCrunch reports.
But by the time such content is seen, flagged as unpleasant and removed by Facebook, it may already have been viewed by the people concerned and done its damage. Artificial intelligence now helps Facebook apply proactive moderation at scale by having computers scan every uploaded image before anyone sees it. Facebook’s Director of Engineering for Applied Machine Learning, Joaquin Candela, said that today more offensive photos are being reported by AI algorithms than by people.
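The report does not describe Facebook’s internal pipeline, but the general idea of scanning each upload with a classifier before publication can be sketched roughly as follows. Everything in this sketch is a hypothetical illustration: the score_image stub, the category names and the thresholds are assumptions, not Facebook’s actual system.

```python
from dataclasses import dataclass

# Hypothetical moderation categories and per-category thresholds.
# A real system would tune these against human-review outcomes.
THRESHOLDS = {
    "hate_symbol": 0.80,
    "nudity": 0.85,
    "graphic_violence": 0.75,
}


@dataclass
class ModerationResult:
    allowed: bool
    flagged: dict  # category -> score, for every category over threshold


def score_image(image_bytes: bytes) -> dict:
    """Stand-in for a trained image classifier.

    A production system would run a vision model here; this stub
    returns neutral scores so the sketch stays runnable.
    """
    return {category: 0.0 for category in THRESHOLDS}


def moderate_upload(image_bytes: bytes) -> ModerationResult:
    """Scan an image at upload time, before any user can see it."""
    scores = score_image(image_bytes)
    flagged = {c: s for c, s in scores.items() if s >= THRESHOLDS[c]}
    # Anything over threshold is held for removal or human review
    # instead of waiting for a viewer to report it.
    return ModerationResult(allowed=not flagged, flagged=flagged)


if __name__ == "__main__":
    result = moderate_upload(b"...raw image bytes...")
    print("publish" if result.allowed else f"hold for review: {result.flagged}")
```

The key design point is the ordering: classification happens before publication, so content that exceeds a threshold can be held back without a single viewer ever having to see and report it.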
Facebook also says that at least 25 per cent of its engineers now regularly use its internal AI platform to build features and run the business. AI could ultimately help social networks fight hate speech; just yesterday, Facebook, Twitter and Microsoft agreed to newly introduced hate speech rules.
Facebook’s AI technology also helps rank News Feed stories, describes the content of photos aloud for users with vision impairments, and automatically writes closed captions for video ads, which increase view time by up to 12 per cent.