Facebook is reaching out to users vulnerable to suicide through AI

According to emerging reports, Facebook has begun testing the use of artificial intelligence in an effort to identify users who may be at risk of suicide.

The AI tool will scan and filter posts and comments which contain language indicative of pain, sadness or concern, and ultimately send the results to a human review team.

At this point, the review team will reach out to the user flagged by the tool and offer help in the form of support services.

The news comes amid an announcement that the corporation has introduced a new safety feature to its Facebook Live function, allowing users to immediately bring a troubling stream to the attention of the team.

Those sceptical of the new feature suggest the site should immediately cut a stream if suicide is mentioned, rather than flagging it for staff.

Disagreeing with this argument, Jennifer Guadagno, the project's lead researcher, said: "What the experts emphasised was that cutting off the stream too early would remove the opportunity for people to reach out and offer support."

"So, this opens up the ability for friends and family to reach out to a person in distress at the time they may really need it the most."

Explaining the motivation behind the initiative in a recent manifesto, Mark Zuckerberg said: "Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community."

The AI tool is currently being tested in the United States; up until now, Facebook relied on users to flag worrying content.