On Thursday, February 13, 2014, 15-year-old Jayah Ram-Jackson jumped to her death from her grandmother’s Upper West Side high-rise. Ram-Jackson had a history of depression and mental health issues, but according to her family, no one anticipated the tragedy ahead.
When a loved one decides to take their own life, everyone asks themselves, “Was there something I could have done?” or “Did I miss any signs or cries for help?” Most of the time the family dwells on minute phrases or actions from the days leading up to the death, but let’s face it, hindsight is always 20/20. Rarely do we have a clear, documented threat days before a loved one takes their own life. On Tuesday, just two days before Ram-Jackson plunged to her death, she posted on Facebook, “I’m actually just going to wait for someone to make a petition for me to kill myself because it’s inevitable…like, we all see it coming.” Unfortunately, no one did see it coming. Ram-Jackson’s family should in no way blame themselves for not seeing this post, although they naturally may. But is someone else to blame? Should someone else be responsible for not catching this post earlier?
In such a social-media-controlled world, most teens in America have a Facebook account. If a teen contemplating suicide wanted to cry for help, I can think of no better place for that cry to be heard loud and clear than on their Facebook page. But are these cries being heard loud and clear? Facebook’s Statement of Rights and Responsibilities specifically states that “we…are not responsible for the content or information users transmit or share on Facebook.” With this new tool to detect suicide before it occurs, should Facebook have a duty to spot these cries for help and intervene? Should the government be putting some program into effect, or is it the guardians of these teens who should be responsible for monitoring their Facebook pages? It seems to me that it is easier for a teen to post a half-joking, half-serious status about taking their own life on Facebook than to physically go up to a parent or friend and talk about what they are feeling. Facebook has become an outlet, a friend, a therapist to these teens, and we should not waste these opportunities to step in and get a teen help before they do something that they most definitely have not fully thought through.
Should someone be responsible for catching these cries for help posted by teenagers? If it is Facebook, will it willingly set up a filter to catch certain buzzwords? If it is the government, is it allowed to monitor every word that Facebook users post in order to save lives? If it is the guardians, should they have a duty to continuously check their teen’s Facebook page?
Facebook has stated from the beginning that it is not responsible for what is posted on its site, so it is unlikely to create this type of program. Before Facebook existed, a guardian never had a duty to read a child’s diary searching for potential suicide threats, so it is also unlikely that a law would be created to force parents to consistently monitor their child’s social media pages. The government, on the other hand, may have the power to intervene. There is no privacy issue with these Facebook posts because users no longer have an expectation of privacy, having voluntarily relinquished this information to a third party. The issue then becomes: is the government willing to set a program in motion that monitors these posts?
This highlights an interesting issue. The federal government has granted immunity to ISPs through Section 230 of the Communications Decency Act. Congress enacted the law in part to shield ISPs from liability for defamatory or reckless posts or other civil wrongs committed by individuals, particularly those who posted anonymously. The issue you raise, however, goes to a far greater matter: one of life and death. Perhaps there should be some type of law mandating that ISPs employ a crisis intervention supervisor who could alert government officials to threats of serious violence against a poster or others.
Of course, this idea is costly in both time and resources. Individuals would need to be hired, and there is always the concern of false threats. But, like everything else we do in the law, the issue comes down to one of balance, and I believe, based on your post, that the scales perhaps tip in favor of guarding against preventable suicides.