Facebook is using artificial intelligence (AI) to help prevent suicides among its users. Tools already exist to report a post by someone who may be suicidal, but the new features go further. In a blog post, the company said it will offer:
- Integrated suicide prevention tools to help people in real time on Facebook Live
- Live chat support from crisis support organizations through Messenger
- Streamlined reporting for suicide, assisted by artificial intelligence
Facebook already uses AI to monitor its Facebook Live feature for offensive content. The updates will let viewers reach out directly to someone who is broadcasting live, report the video to Facebook, and use existing reporting tools more easily.
The company is also testing a pattern-recognition algorithm that scans posts for signs of suicidal thoughts. Flagged posts go to the Community Operations team, which reviews them to determine whether the person is in danger. The new features are launching in the United States first, with expansion to other countries to follow.
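To make the flag-then-review pipeline concrete, here is a purely illustrative toy sketch. Facebook has not published how its algorithm works; a real system would use a trained classifier over many signals, not a keyword list. The patterns and function names below are assumptions for illustration only.

```python
import re

# Illustrative concern patterns -- NOT Facebook's actual signals.
CONCERN_PATTERNS = [
    r"\bwant(?:s)? to die\b",
    r"\bend it all\b",
    r"\bno reason to live\b",
]

def flag_for_review(post_text: str) -> bool:
    """Return True if a post matches any concern pattern and should be
    queued for human review -- the step the article attributes to
    Facebook's Community Operations team."""
    text = post_text.lower()
    return any(re.search(pattern, text) for pattern in CONCERN_PATTERNS)

posts = [
    "Had a great day at the park!",
    "I feel like there's no reason to live anymore",
]
# Only matching posts are escalated; a human reviewer makes the final call.
flagged = [p for p in posts if flag_for_review(p)]
```

The key design point the article describes is that the algorithm only surfaces candidates; trained people, not software, decide whether someone is actually at risk.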
Facebook’s founder and CEO, Mark Zuckerberg, would also like to use AI to spot terrorist posts and other inflammatory content. He said:
“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization.”
This will take years to develop, because the algorithm must reliably distinguish a news story about an attack from actual propaganda.
I applaud these efforts to prevent suicide. If you or someone you know is having suicidal thoughts, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). You can also reach the Crisis Text Line by texting HOME to 741741. Facebook is partnering with organizations around the world to help prevent suicides.
Featured image via abilk.com