Twitter Chaos Endangers Public Safety, Emergency Managers Warn

Yeahhhh if these sorts of accounts are not only gonna get the blue check but get promoted by Twitter’s algorithm it might be time to explore other options.

— Andy Hazelton (@AndyHazelton) November 11, 2022

Twitter has also been somewhat useful in giving authorities up-to-date on-the-ground information during unfolding emergencies. It can be used to crowdsource what streets are flooding in a storm, for example. During Hurricane Harvey in 2017, when the 911 system became overwhelmed, some of those stranded by floodwaters tweeted at emergency services.

Twitter itself has touted its usefulness and concerted efforts to improve in this area. In a blog post dated October 13 (two weeks before Musk took over), the company proclaimed it “has become a critical communication tool for responding to natural disasters” and that it has a “longstanding commitment to working alongside global partners and developers to share important information, provide real-time updates, facilitate relief efforts” and combat misinformation.

There have, of course, been growing pains. Hutton cites the case of Southern California’s 2017 Thomas Fire, which was then the largest wildfire in the state’s recorded history. One of the Twitter hashtags used during the event was awash in random, often unrelated tweets, drowning out official sources, she says. Issues such as these prompted Twitter to verify official government accounts—and to make sure its algorithms elevated them. The company also manually curated news alerts and other aggregation features during emergencies, says former Twitter employee Tom Tarantino, who worked with emergency managers during his time there. Additionally, Twitter introduced various policies to curb the spread of misinformation and to respond to violations. These measures ranged from a warning message appended to a tweet to the suspension of an account.

The blue check was a crucial aspect of Twitter’s efforts to ensure correct information was getting out during crises, including the COVID pandemic. After Musk took over, the sudden rollout of the $8-per-month “Blue Verified” program immediately sowed confusion as fake accounts emerged.

Initially, at least some legacy verified accounts received a second label: a check mark and the word “Official” written in gray below the account name. But this feature was halted on the same day it was rolled out, November 9. It has since reemerged, though it appears to be applied unevenly. The Weather Channel and the Department of Homeland Security both have it, but as of the time of publication, the National Weather Service does not. “If you’re looking for coherence, it just doesn’t quite exist yet,” says a current Twitter employee who asked to remain anonymous for fear of retaliation. “We’re just iterating live.”

Neither Twitter nor Musk replied to e-mailed and tweeted requests for comment on the criteria used for this label or to questions about how the company plans to avoid impersonators and the spread of misinformation. Twitter product management director Esther Crawford said in a tweet before the initial rollout of the “Official” designation that it would apply to “government accounts, commercial companies, business partners, major media outlets, publishers and some public figures.” Technology news website the Verge reported that Twitter plans to impose waiting periods for signing up for Twitter Blue (a subscription package that includes Blue Verified). The report also said that if an account changes its name, its check mark will be removed until Twitter approves that new name. But these measures would still leave open possibilities for impersonation.

Though Twitter removed the spoof accounts that popped up after the Blue Verified launch fairly quickly, many had already been screenshotted and shared widely. Companies, including pharmaceutical manufacturer Eli Lilly, also had to send out tweets countering information shared by the fake accounts. “I think that in the hour it took for Eli Lilly to correct that tweet and say, ‘That wasn’t us,’ that’s an hour that we generally don’t have in emergency management,” Hutton says.

If any updated version of Blue Verified doesn’t adequately label trusted sources, people scrolling through Twitter could see a blue-check account giving inaccurate or even harmful guidance, such as telling people to evacuate when they should be sheltering in place. “It’s going to cost people time, which ultimately costs them lives and injury and property during an emergency,” Hutton says. Prestley says research has shown that people often do check other sources for confirmation. But any added steps needed to verify information can delay taking action. “The sooner that people can take action, obviously, the better,” he says.

The spoof accounts that did pop up under Blue Verified largely seemed to be created as jokes or to expose problems inherent in the new program. But “it doesn’t matter if you’re intending harm or not. There is harm caused by these actions because you sow confusion at a time when there’s already mass confusion,” the current Twitter employee says. Hutton and others have raised concerns that once the novelty of creating fake accounts wears off—and people become less vigilant about double-checking sources—more dedicated bad actors could eventually exploit that space if there is no way to distinguish Blue Verified accounts from authoritative sources of information.

People inside Twitter “have been trying to communicate with [Musk] and share concerns,” the current Twitter employee says. “But the reality is that he is limited in his willingness to engage with those people and take those concerns seriously and act on them.” Wealthy people like Musk have far more resources than others to protect themselves from extreme events, Hutton says. “When you’re insulated from consequence, as many billionaires are, I think it’s easy to wave off a lot of these concerns” and not realize how “dangerous and even possibly deadly” some of these issues can be for more vulnerable groups during an emergency.

Also of concern to emergency managers and forecasters are the impacts of the massive staff layoffs at Twitter following Musk’s takeover. Dedicated teams had previously created news alerts and other curated products that emphasized credible sources. But “those teams do not exist anymore” after the layoffs, says Tarantino, the former employee. Gone, too, are large parts of the trust and safety teams and other people responsible for content moderation, as well as many of the engineers responsible for keeping the site running smoothly. Notably, problems with the two-factor authentication function (which helps protect accounts from unauthorized access) kept some users from logging on to their accounts on November 14. Hutton notes the possibility of an emergency manager being locked out of their account by such a glitch during a crisis. “It’s just unfortunate that, I think, a platform that has been woven into the fabric of what we do as society these days, that rug is being pulled out very quickly in terms of trustworthiness,” Hutton says.

Such instability not only raises security and clarity concerns; it could also drive people away from Twitter altogether. And if enough users leave the site, maintaining a presence there will become less effective for emergency managers. If people do leave in droves or if Twitter otherwise ceases to function, “that would be a pretty tremendous loss to our ability to communicate during these types of events,” Prestley says.

Emergency managers have few alternatives in the social media world because it would take several other apps to replicate what Twitter can do, Montano and others say. This approach “spreads out where people are getting information, spreads out where we have to be posting information,” Montano says. “It just makes everything more complex at a time where you don’t necessarily want more complexity.” Also, local emergency management offices have limited staff and time to maintain multiple social media presences, Hutton adds. “Depending on what direction Twitter goes here,” Montano says, “there is potential for some huge gaps in how emergency management unfolds.”

Tarantino advises users, particularly those who represent authoritative sources, to continue to maintain their Twitter accounts in order to fill the site with as much trustworthy information as possible. Abandoning accounts leaves a vacuum for bad actors to fill, he says. Hutton advises people to use Twitter’s list feature to round up accounts they currently know and trust, making it easier to sort good information from bad. She also encourages people to sign up for emergency alerts from their local jurisdiction.

“Disasters are relatively inevitable, unfortunately,” Hutton says. “The next time something big happens, especially a no-notice sort of a thing” such as an earthquake or a tornado, “if we are in our current state of affairs with social media, I think it’s going to be very, very confusing and chaotic—more so than it needs to be.”

ABOUT THE AUTHOR(S)

Andrea Thompson, an associate editor at Scientific American, covers sustainability. Follow Andrea Thompson on Twitter.