Facebook is a powerful tool these days, so can it use the resources at its disposal to help stop suicides? That possibility is getting closer to reality, and it all started with a video of a 12-year-old committing suicide that was shared widely across the social network. Facebook apparently could not control the spread of the tragic video, and the company wasn't sure whether it even violated its terms and conditions.
A month later, Mark Zuckerberg made it clear in a global community manifesto that the company intends to take on more of a parental role, one that acknowledges its importance and its impact on more than 2 billion people around the globe.
On Wednesday, the social media giant announced that it is ready to take the next step toward a safer community on Facebook. It has offered suicide reporting tools for almost 10 years, but it now plans to use artificial intelligence to identify which members of the community need support, in order to prevent suicide. According to Zuckerberg, the company is testing a new system that applies pattern recognition to posts to identify suicide risk.
The AI analyzes the words used in posts, as well as comments from friends such as "I'm here to help" or "Are you okay?", which might indicate that someone is struggling. The tool will not automatically report at-risk accounts to Facebook; instead, the next time the person logs in, it will make the options for reporting self-injury and suicide more prominent.
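Facebook has not published the details of its system, but the idea described above can be illustrated with a minimal sketch: scan a post's comments for concern phrases like the ones quoted in the article and flag the post for review. The function name, phrase list, and logic here are all hypothetical assumptions for illustration, not Facebook's actual implementation, which reportedly uses far more sophisticated pattern recognition.

```python
import re

# Hypothetical concern phrases, based on the examples quoted in the article.
# A real system would use learned patterns, not a hand-written list.
CONCERN_PATTERNS = [
    r"\bare you okay\b",
    r"\bare you ok\b",
    r"\bi'?m here to help\b",
]

def flag_for_review(comments):
    """Return True if any friend comment matches a concern pattern.

    Note: this only flags content for review; mirroring the article,
    it does not report anyone automatically.
    """
    combined = " ".join(comments).lower()
    return any(re.search(pattern, combined) for pattern in CONCERN_PATTERNS)

# Usage: comments from friends on a worrying post
comments = ["Are you okay? Please message me", "thinking of you"]
print(flag_for_review(comments))  # True
```

Even this toy version shows why the approach relies on friends' reactions as a signal: people in distress often do not describe their state directly, but those close to them tend to respond in recognizable ways.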
Many people are impressed that the company is committed to using its resources to help users who genuinely need it, and it is indeed a significant step forward in suicide prevention.