The fact that Russian-linked bots penetrated social media to influence the 2016 U.S. presidential election has been well documented, and the details of the deception are still trickling out.
In fact, on October 17, Twitter disclosed that foreign interference dating back to 2016 involved 4,611 accounts — most affiliated with the Internet Research Agency, a Russian troll farm. There were more than 10 million suspicious tweets and more than 2 million GIFs, videos and Periscope broadcasts.
In this season of another landmark election — a recent poll showed that about 62 percent of Americans believe the 2018 midterm elections are the most important midterms in their lifetime — it is natural to wonder if the public and private sectors have learned any lessons from the 2016 fiasco — and what is being done to better protect against this malfeasance by nation-state actors.
There is good news and bad news here. Let’s start with the bad.
Two years after the 2016 election, social media still sometimes looks like a reality show called “Propagandists Gone Wild.” Hardly a major geopolitical event takes place in the world without automated bots generating or amplifying content that exaggerates the prevalence of a particular point of view.
On October 22, The Wall Street Journal reported that Russian bots helped inflame the controversy over NFL players kneeling during the national anthem. Researchers from Clemson University told the newspaper that 491 accounts affiliated with the Internet Research Agency posted more than 12,000 tweets on the issue, with activity peaking soon after a September 22, 2017 speech by President Trump in which he said team owners should fire players for taking a knee during the anthem.
The problem isn’t confined to the United States. Two years after bots were blamed for helping sway the 2016 Brexit vote in Britain, Twitter bots supporting the anti-immigration Sweden Democrats increased significantly this spring and summer in the lead-up to that country’s elections.
These and other examples of continuing misinformation-by-bot are troubling, but it’s not all doom and gloom. I see positive developments, too.
First, awareness. Recognizing a problem is the necessary first step toward solving it, and cognizance of bot meddling has soared in the last two years amid all the disturbing headlines.
About two-thirds of Americans have heard of social media bots, and the vast majority of those people are worried bots are being used maliciously, according to a Pew Research Center survey of 4,500 U.S. adults conducted this summer. (It’s concerning, however, that far fewer respondents said they’re confident they can actually recognize when accounts are fake.)
Second, lawmakers are starting to take action. On September 28, California Gov. Jerry Brown signed legislation that, as of July 1, 2019, makes it illegal to use a bot to try to influence voter opinion, or for any other purpose, without disclosing the account’s artificial nature. The measure joins anti-ticketing-bot laws at the federal level and in New York State as among the first bot-fighting statutes in the United States.
While I support the increase in awareness and the focused interest from legislators, I do feel the California law has some holes. The measure is difficult to enforce because it’s often very hard to identify who is behind a bot network, its penalties aren’t clearly defined, and an individual state is inherently limited in what it can do to attack a national and global issue. Still, the law is a good start, and it shows that governments are beginning to take the problem seriously.
Third, the social media platforms — which have faced congressional scrutiny over their failure to address bot activity in 2016 — have become more aggressive in pinpointing and eliminating bad bots.
It’s important to remember that while they have some responsibility, Twitter and Facebook are victims here too, taken for a ride by bad actors who have hijacked these commercial platforms for their own political and ideological agendas.
While it can be argued that Twitter and Facebook should have done more, sooner, to differentiate the human from the non-human fakes in their user rolls, it bears remembering that bots are a newly acknowledged cybersecurity challenge. The traditional paradigm of a security breach has been a hacker exploiting a software vulnerability. Bots don’t do that; they attack online business processes and thus are difficult to detect through customary vulnerability-scanning methods.
I thought there was admirable transparency in Twitter’s October 17 blog accompanying its release of information about the extent of misinformation operations since 2016. “It is clear that information operations and coordinated inauthentic behavior will not cease,” the company said. “These types of tactics have been around for far longer than Twitter has existed — they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge.”
Which leads to the fourth reason I’m optimistic: technological advances.
In the earlier days of the internet, in the late 1990s and early 2000s, networks were extremely susceptible to worms, viruses and other attacks because protective technology was in its early stages of development. Intrusions still happen, obviously, but security technology has grown much more sophisticated and many attacks occur due to human error rather than failure of the defense systems themselves.
Bot detection and mitigation technology keeps improving, and I think we’ll get to a state where it becomes as automatic and effective as email spam filters are today. Security capabilities that too often are siloed within networks will integrate more and more into holistic platforms better able to detect and ward off bot threats.
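To make the idea concrete, here is a minimal sketch of the kind of behavioral scoring such detection systems layer together. Everything in it is invented for illustration: the signals, thresholds, and weights are hypothetical and do not reflect any real platform’s detection logic, which combines far more features at far larger scale.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # days since the account was created
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # fraction of posts that are near-duplicates, 0..1
    followers: int
    following: int

def bot_score(acct: Account) -> float:
    """Combine a few behavioral signals into a 0..1 suspicion score.

    Illustrative only: real detection systems use many more signals
    (timing patterns, network structure, content analysis) and learned
    weights rather than hand-picked ones like these.
    """
    score = 0.0
    if acct.age_days < 30:
        score += 0.25                      # very new accounts are riskier
    if acct.posts_per_day > 50:
        score += 0.35                      # superhuman posting cadence
    score += 0.25 * acct.duplicate_ratio   # copy-paste amplification
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        score += 0.15                      # follows many, followed by few
    return min(score, 1.0)
```

The point of the sketch is the design shift it illustrates: instead of scanning for a software vulnerability, the defense profiles behavior, which is why bot mitigation looks more like spam filtering than like traditional intrusion detection.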
So while we should still worry about bots in 2018, and the world continues to wrap its arms around the problem, we’re seeing significant action that should bode well for the future.
The health of democracy and companies’ ability to conduct business online may depend on it.