“Are you safe?” Facebook asks. So far, the tech giant says, it has posed the question to tens of millions of people in the wake of disasters.
They click “I’m safe,” and their friends and relatives know they survived the hurricane, or the earthquake or the bombing. And a company that hopes to revolutionize human interaction brings calm to the aftermath of chaos.
But sometimes — like when Safety Check’s increasingly complex algorithms flag a peaceful Black Lives Matter protest or a long-extinguished fire as a catastrophe — users can only wonder in confusion: Safe from what?
The two-year-old system stumbled again this week when it alerted people in Bangkok to an apparent mass bombing, which turned out to be a protester lobbing large firecrackers — or possibly “ping pong bombs” — outside a government building.
The man had made scenes at the Government House before, outlets in Thailand reported. But after he threw several small explosives from a rooftop on Tuesday — injuring no one, then surrendering peacefully — Safety Check took his demonstration to a new level.
Man Tosses Small Bombs at Gov House for ‘Justice’ https://t.co/rc3saPFW8h
— Khaosod English (@KhaosodEnglish) December 27, 2016
“The Explosion in Bangkok,” as Facebook’s code automatically dubbed it, was pushed out to phones across the city. “Are you safe?”
Users who clicked were led to a BBC News story about a 2015 bombing that killed and injured many people in central Bangkok. The story was inaccurately dated, and Facebook conflated it with the present “Explosion in Bangkok.”
“Even experienced journalists, who would have realized the story was not genuine, inadvertently gave it some credence by responding to the Facebook prompt,” a BBC correspondent in Thailand complained to the outlet.
Others, like the Verge, taunted: “Facebook got fooled by its own algorithm.”
Fake bomb news fools thousands in Bangkok https://t.co/qu1qZRkIGD pic.twitter.com/nEBWEiGcpK
— Bangkok Post (@BangkokPostNews) December 27, 2016
“We want to make sure that doesn’t happen again,” said Anna White, a spokeswoman for Facebook.
She defended the Bangkok alert, but acknowledged that misinformation went out with it.
“We’re obviously learning with every incident,” she said. “At a scale of 1.8 billion people on the platform, it can’t be developed in a vacuum.”
Indeed, Safety Check has suffered its growing pains in something more like a glass house.
The idea seemed simple enough when it was announced in 2014.
“Safety Check is our way of helping our community during natural disasters,” Facebook CEO Mark Zuckerberg wrote that October. It “gives you an easy and simple way to say you’re safe and check on all your friends and family in one place.”
In its first incarnation, Facebook employees activated Safety Check manually — and sparingly — for natural disasters.
Users in the Philippines were the first asked to check in after Typhoon Ruby struck in December 2014. The system proved popular in other countries as the hurricane season advanced.
It got complicated in 2015, when Facebook expanded Safety Check beyond natural disasters.
An apparent Islamic State attack that killed 130 people in Paris in November 2015 marked the first time that Facebook turned the system on for a man-made catastrophe, said White, the Facebook spokeswoman.
And immediately, the company was criticized for not using it earlier.
“Facebook has come under fire for implementing a ‘safety check’ feature for the Paris terrorist attacks, but not after deadly bomb blasts hit Beirut” one day earlier, the Independent reported.
Tropical storms had been no trouble, but now Facebook was being accused of cherry-picking terrorist attacks.
Apparently Beirut doesn’t deserve a safety check on Facebook
ZUCK bro grow up… https://t.co/t5zAdYkioS
— lrlikhon (@lrlikhon) November 15, 2015
On the other hand, the company was mocked after a bombing in Pakistan last spring — when its too-eager algorithm worried over people on the other side of the world.
Hmm…. Facebook safety check thinks I’m in Pakistan. (I’m in Chicago) pic.twitter.com/HHp5ntwv7A
— Nisha Chittal (@NishaChittal) March 27, 2016
But Facebook has never been known to shrink from its ambitions, even in the face of criticism. It expanded and enhanced Safety Check again this year, shifting editorial discretion over catastrophes to computers and the crowd.
Beginning with a Canadian wildfire in May, Facebook started to test an automated system for spotting disasters. A third-party company (Facebook won’t identify it) monitors police scanners and news feeds around the world, flagging crises of all kinds.
Then, Facebook watches its users to see who cares.
“If enough people are talking about the event, the system automatically sends those people messages inviting them to check in as safe,” Wired reported in a profile on the company’s innovations in emergency response.
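As described, the trigger reduces to a two-stage threshold test: an external feed flags a candidate incident, and check-in invitations go out only if enough nearby users are already posting about it. Facebook has not published its implementation, so the following is a minimal sketch of that logic — every name, parameter, and threshold here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    title: str      # e.g. "The Explosion in Bangkok"
    location: str   # area flagged by the third-party monitor
    flagged: bool   # True once scanners/news feeds report a crisis

def should_send_safety_check(incident, posts_about_event, users_in_area,
                             min_posters=100, min_share=0.01):
    """Invite check-ins only when an external source has flagged a crisis
    AND enough local users are already talking about it.
    All thresholds are invented for illustration."""
    if not incident.flagged or users_in_area == 0:
        return False
    share = posts_about_event / users_in_area
    return posts_about_event >= min_posters and share >= min_share

# A quiet incident does not clear the chatter threshold...
quiet = Incident("The Small Fire", "Dallas", flagged=True)
print(should_send_safety_check(quiet, posts_about_event=12, users_in_area=500_000))   # False

# ...but one that many locals are posting about does.
loud = Incident("The Explosion in Bangkok", "Bangkok", flagged=True)
print(should_send_safety_check(loud, posts_about_event=8_000, users_in_area=400_000))  # True
```

A design like this also illustrates the failure modes the article describes: if the chatter threshold is met for a misidentified or stale incident, the alert goes out anyway.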
Like the original rollout, the experiment drew rave reviews at first.
In June, Safety Check warned Floridians about the attack on a nightclub in Orlando “11 minutes before police officially announced that there had been a shooting at Pulse,” Wired reported.
White said one user stayed up all night, checking in on hundreds of friends.
But the beta test also hit a few bumps.
A month after the Orlando shooting, a writer for Chicago Magazine got a Safety Check about an incident unhelpfully titled “The Violent Crime” — and unhelpfully mapped to a radius that included nearly the entire city, plus a slice of Lake Michigan.
Many others were just as confused, with some wondering whether Facebook was trying to make a comment on Chicago’s epidemic of shootings.
“So now we have a safety check just for living in Chicago,” one user wrote. “Well I guess we do need it.”
You know Chicago is unsafe when Facebook have a safety check-in.
— Corey A. Hardiman (@HopeDealerCH) July 28, 2016
Reporters eventually traced the mysterious “Violent Crime” to a shooting at a house party that led to three deaths.
As with everything about Facebook, its flaws are amplified to enormous proportions.
Defending Safety Check, White said the community-triggered system has been activated more than 300 times since June, compared with just 39 activations in the year and a half when employees handled it manually.
The majority of alerts work as intended, and not just for headline-grabbing incidents like the murder of five police officers in Dallas.
An automatic Safety Check also went out for a hazmat spill in Atchison, Kan., that sent dozens to a hospital in October — though not many outside the state heard about it.
“Events that may not seem like a big deal around the globe . . . are a big deal for people in that town or government building,” White said.
But sometimes, no one is quite sure what Safety Check is making a fuss about.
In September, a historic hotel caught fire in Dallas. Though the blaze was contained, it drew attention and triggered a Safety Check.
For some reason, many in Dallas suddenly got the alert on their phones three days after the fire had been put out.
“Wait, what Dallas fire? An old Facebook notification causes mass confusion,” said a headline in the Dallas Morning News.
I heard Dallas was On fire
— Alecia (@CatoMccoyHigh) September 23, 2016
The same week, Facebook mass-pinged phones about “The Protest in North Carolina.” This time, Safety Check caused not just confusion but indignation.
A man had died at a Black Lives Matter-inspired march in Charlotte two nights before the alert. But demonstrations on the night of the Safety Check were reported to be largely peaceful.
Some accused Facebook of equating black protesters with a public danger — exactly the sort of political controversy Facebook had hoped to avoid.
Facebook instituting a safety check for the Charlotte protests is an example of how a “neutral” platform can still editorialize a situation
— Victor Luckerson (@VLuck) September 23, 2016
Whatever the masses think, the company seems happy with the new-and-improved Safety Check: It announced last month that the automated system is here to stay.
White noted that it’s always a work in progress.