When violence unfolds on social streaming media

Steve Stephens's random and vicious killing of Cleveland retiree Robert Godwin Sr. is not the only violent crime to unfold in near-real time on Facebook.

Earlier this year, a group of Chicago teens live-streamed themselves beating a disabled boy. 

Last summer, the aftermath of a police officer shooting Minnesota man Philando Castile was streamed by his girlfriend, who was with him in the car.

And suicides and sexual assaults have streamed on Facebook Live.

All of these incidents highlight the danger of allowing anyone to broadcast live from anywhere, with the simple click of a button.

"The people on social media know the power and harm it can do," said Cleveland Police Chief Calvin Williams, who is leading the investigation into Godwin's murder. "This is something that should not have been shared around the world, period."

Godwin's murder wasn't broadcast live, but Stephens posted it on Facebook shortly after, where it remained for two hours. Facebook says it removed the clip 23 minutes after it was first reported to the company.

Stephens also reportedly live-streamed several other videos immediately after the shooting, including one in which he says, "I just snapped."

While speaking at a conference, Facebook CEO Mark Zuckerberg offered his condolences to Godwin's family.

"We have a lot of work and we will keep doing all we can to prevent tragedies like this from happening," Zuckerberg said.  

"Given the magnitude of content and videos that are posted on social media, it's not always possible for companies to immediately take down these videos," social media attorney Pedram Tabibi said.

Tabibi said that a social media company like Facebook is unlikely to be held liable for a crime like Godwin's murder because of First Amendment and other legal protections, but he said he does see room for improvement in how the companies handle this kind of content.

"They could ... potentially put technology in the videos so the technology itself may recognize a crime is occurring and help shut it down quickly before it reaches millions of viewers," Tabibi said.

Using that kind of artificial intelligence to screen and flag content is something Zuckerberg has said Facebook is already testing out. But in an online posting earlier this year, he said it will take many years to fully develop the system.