Another day, another political grilling for social media platform giants.
The Senate Intelligence Committee’s fourth hearing took place this morning, with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey present to take questions as U.S. lawmakers continue to probe how foreign influence operations are playing out on Internet platforms — and eye up potential future policy interventions.
During the session US lawmakers voiced concerns about “who owns” data they couched as “rapidly becoming me”. An uncomfortable conflation for platforms whose business is human surveillance.
They also flagged the risk of more episodes of data manipulation intended to incite violence, such as has been seen in Myanmar — and Facebook especially was pressed to commit to having both a legal and moral obligation towards its users.
The value of consumer data was also raised, with committee vice chair, Sen. Mark Warner, suggesting platforms should actively convey that value to their users, rather than trying to obfuscate the extent and utility of their data holdings. A level of transparency that will clearly require regulatory intervention.
Here’s our round-up of some of the other highlights from this morning’s session.
Google not showing up
Today’s hearing was a high profile event largely on account of two senior bums sitting on the seats before lawmakers — and one empty chair.
Facebook sent its COO Sheryl Sandberg. Twitter sent its bearded wiseman CEO Jack Dorsey (whose experimental word of the month appears to be “cadence” — as in he frequently said he would like a greater “cadence” of meetings with intelligence tips from law enforcement).
Google, by contrast, sent no one at the seniority level lawmakers had requested. Which meant the company instantly became the politicians’ favored punchbag, with senator after senator laying into Alphabet for empty-chairing them at the top exec level.
Whatever Page and Pichai were too busy doing to avoid answering awkward questions about Alphabet’s business activities and ambitions in China, the no-show looks like a major own goal: it was open season for senators to slam the company.
Page staying away also made Facebook and Twitter look the very model of besuited civic responsibility and patriotism just for bothering to show up.
We got “Jack” and “Sheryl” first name terms from some of the senators, and plenty of “thanks for turning up” heaped on them from all corners — with some very particular barbs reserved for Google.
“I want to commend both of you for your appearance here today for what was no doubt going to be some uncomfortable questions. And I want to commend your companies for making you available. I wish I could say the same about Google,” said Senator Tom Cotton, addressing those in the room. “Both of you should wear it as a badge of honor that the Chinese Communist Party has blocked you from operating in their country.”
“Perhaps Google didn’t send a senior executive today because they’ve recently taken actions such as terminating a co-operation they had with the American military on programs like artificial intelligence that are designed not just to protect our troops and help them fight in our country’s wars but to protect civilians as well,” he continued, warming to his theme. “This is at the very same time that they continue to co-operate with the Chinese Communist Party on matters like artificial intelligence or partner with Huawei and other Chinese telecom companies who are effectively arms of the Chinese Communist Party.
“And credible reports suggest that they are working to develop a new search engine that would satisfy the Chinese Communist Party’s censorship standards after having disclaimed any intent to do so eight years ago. Perhaps they did not send a witness to answer these questions because there is no answer to these questions. And the silence we would hear right now from the Google chair would be reminiscent of the silence that that witness would provide.”
Even Sandberg seemed to cringe when offered the home-run opportunity to stick the knife into Google — when Cotton asked both witnesses whether their companies would consider taking these kinds of actions.
But after a split second’s hesitation her media training kicked in — and she found a way of diplomatically giving Google the asked-for kicking. “I’m not familiar with the specifics of this at all but based on how you’re asking the question I don’t believe so,” was her reply.
After his own small pause, Dorsey, the man of fewer words, added: “Also no.”
Dorsey repeat apologizing
‘We haven’t done a good job of that’ was the most common refrain falling from Dorsey’s bearded lips this morning as senators asked why the company hasn’t managed to suck less from all sorts of angles — whether that’s by failing to provide external researchers with better access to data to help them help it fight malicious interference; or failing to inform individual users who’ve been the targeted victims of Twitter fakery that that abuse has been happening to them; or just failing to offer any kind of contextual signal to its users that some piece of content they’re seeing is (or might be) maliciously fake.
But then this is the man who has defended providing a platform to people who make a living selling lies, so…
“We haven’t done a good job of that in the past,” was certainly phrase of the morning for a contrite Dorsey. And while admitting failure is at least better than denying you’re failing, it’s still just that: Failure.
And continued failure has been a Twitter theme for so long now, when it comes to things like harassment and abuse, that it’s starting to feel intentional. (As if, were you able to cut Twitter open, you’d find the words ‘feed the trolls’ running all the way through its business.)
Sadly the committee seemed to be placated by Dorsey’s repeat confessions of inadequacy. And he really wasn’t pressed enough. We’d have liked to see a lot more grilling of him over short term business incentives that tie his hands on fighting abuse.
Amusingly, one senator rechristened Dorsey “Mr Darcey”, after somehow tripping over the two syllables of his name. But actually, thinking about it, ‘Pride and Prejudice’ might be a good theme for the Twitter CEO to explore during one of his regular meditation sessions.
Y’know, as he ploughs through a second turgid decade of journeying towards self-awareness — while continuing to be paralyzed, on the business, civic and, well, human-being fronts, by rank indecision about which people and points of view to listen to (pro tip: if someone makes money selling lies and/or spreading hate you really shouldn’t be letting them yank your operational chain) — leaving his platform (the would-be “digital public square”, as he kept referring to it today) incapable of upholding the healthy standards it claims to want to have. (Or daubed with all manner of filthy graffiti, if you want a visual metaphor.)
The problem is that Twitter’s stated position/mission, per Dorsey’s prepared statement to the committee, of keeping “all voices on the platform” is hubris. It’s a flawed ideology that results in massive damage to the very free speech and healthy conversation he professes to want to champion, because Nazis are great at silencing the people they hate and harass.
Unfortunately Dorsey still hasn’t had that eureka moment yet. And there was no sign of any imminent awakening judging by this morning’s performance.
Sandberg’s oh-so-smooth operation — but also an exchange that rattled her
The Facebook COO isn’t chief operating officer for nothing. She’s the queen of the polished, non-committal soundbite. And today she almost always had one to hand — smoothly projecting the impression that the company is always doing something. Whether that’s on combating hate speech, hoaxes and “inauthentic” content, or IDing and blocking state-level disinformation campaigns — thereby shifting attention off the deeper question of whether Facebook is doing enough. (Or even whether its platform might not be the problem itself.)
Albeit the bar looks very low indeed when your efforts are being set against Twitter and an empty chair. (Aka the “invisible witness” as one senator sniped at Google.)
Very many of her answers courteously informed senators that Facebook would ‘follow up’ with answers and/or by providing some hazily non-specific ‘collaborative work’ at some undated future time — which is the most professional way to kick awkward questions into the long grass.
Though do it long enough and the grass can start to bite back, because it’s grown so long and unkempt it now harbors some very angry snakes.
Senator Kamala Harris — very clearly seething at this point, having had her questions to Facebook batted away since November 2017, when its general counsel first testified to the committee on the disinformation topic — was determined to get under Sandberg’s skin. And she did.
The exchange that rattled the Facebook COO started off around how much money it makes off of ads run by fake accounts — such as the Kremlin-backed Internet Research Agency.
Sandberg slickly reframed “inauthentic content” as the even more boring-sounding “inorganic content” — now several psychological steps removed from the shockingly outrageous Kremlin propaganda the company eventually disclosed.
She added it was equivalent to .004% of content in News Feed (hence Facebook’s earlier contention to Harris that it’s “immaterial to earnings”).
It’s not so much the specific substance of the question that’s the problem here for Facebook — with Sandberg also smoothly reiterating that the IRA had spent about $100k (which is petty cash in ad terms) — it’s the implication that Facebook’s business model profits off of fakes and hate, and is therefore amorously entwined in bed with both.
“From our point of view, Senator Harris, any amount is too much,” continued Sandberg after she rolled out the $100k figure, and now beginning to thickly layer on the emulsion.
Harris cut her off, interjecting: “So are you saying that the revenue generated was .004% of your annual revenue?”, before adding the pointed observation: “Because of course that would not be immaterial” — which drew a rare stuttered double “so” from Sandberg.
“So what metric are you using to calculate the revenue that was generated associated with those ads, and what is the dollar amount that is associated then with that metric?” pressed Harris.
Sandberg couldn’t provide the straight answer being sought, she said, because “ads don’t run with inorganic content on our service” — claiming: “There is actually no way to firmly ascertain how much ads are attached to how much organic content; it’s not how it works.”
“But what percentage of the content on Facebook is organic?” rejoined Harris.
That elicited a micro-pause from Sandberg, before she fell back on the usual: “I don’t have that specific answer but we can come back to you with that.”
Harris pushed her again, wondering if it’s “the majority of content”?
“No, no,” said Sandberg, sounding almost flustered.
“Your company’s business model is complex but it benefits from increased user engagement… so, simply put, the more people that use your platform the more they are exposed to third party ads, the more revenue you generate — would you agree with that?” continued Harris, deliberately laboring the point, the better to reel her in.
After another pause Sandberg asked her to repeat this hardly complex question — before affirming “yes, yes” and then hastily qualifying it with: “But only I think when they see really authentic content because I think in the short run and over the long run it doesn’t benefit us to have anything inauthentic on our platform.”
Harris continued to hammer on how Facebook’s business model benefits from greater user engagement as more ads are viewed via its platform — linking it to “a concern that many have is how you can reconcile an incentive to create and increase your user engagement with the content that generates a lot of engagement is often inflammatory and hateful”.
She then skewered Sandberg with a specific example of Facebook’s hate speech moderation failure — and by suggestive implication a financially incentivized policy and moral failure — referencing a ProPublica report from June 2017 which revealed the company had told moderators to delete hate speech targeting white men but not black children — because the latter were not considered a “protected class”.
Sandberg, sounding uncomfortable now, said this was “a bad policy that has been changed”. “We fixed it,” she added.
“But isn’t that a concern with hate, period — that not everyone is looked at the same way?” wondered Harris.
Facebook “cares tremendously about civil rights” said Sandberg, trying to regain the PR initiative. But she was again interrupted by Harris — wondering when exactly Facebook had “addressed” that specific policy failure.
Sandberg was unable to put a date on when the policy change had been made. Which obviously now looked bad.
“Was the policy changed after that report? Or before that report from ProPublica?” pressed Harris.
“I can get back to you on the specifics of when that would have happened,” said Sandberg.
“You’re not aware of when it happened?”
“I don’t remember the exact date.”
“Do you remember the year?”
“Well you just said it was 2017.”
“So do you believe it was 2017 when the policy changed?”
“Sounds like it was.”
The awkward exchange ended with Sandberg being asked whether or not Facebook had changed its hate speech policies to protect not just those designated as legally protected classes of people.
“I know that our hate speech policies go beyond the legal classifications, and they are all public, and we can get back to you on that,” she said, falling back on yet another pledge to follow up.
Twitter agreeing to bot labelling in principle
We flagged this earlier but Senator Warner managed to extract from Dorsey a quasi-agreement to labelling automation on the platform in future — or at least providing more context to help users navigate what they’re being exposed to in tweet form.
He said Twitter has been “definitely” considering doing this — “especially this past year”.
Although, as we noted earlier, he had plenty of caveats about the limits of its powers of bot detection.
“It’s really up to the implementation at this point,” he added.
How exactly ‘bot or not’ labelling will come to Twitter isn’t clear. Nor was there any timeframe.
But it’s at least possible to imagine the company could add some sort of suggestive percentage of automated content to accounts in future — assuming Dorsey can find his first, second and third gears.
Lawmakers worried about the impact of deepfakes
Deepfakes, aka AI-powered manipulation of video to create fake footage of people doing things they never did, are, perhaps unsurprisingly, already on the radar of reputation-sensitive U.S. lawmakers — even though the technology itself is hardly in widespread, volume usage.
Several senators asked whether (and how comprehensively) the social media companies archive suspended or deleted accounts.
Clearly politicians are concerned. No senator wants to be ‘filmed in bed with an intern’ — especially one they never actually went to bed with.
The response they got back was a qualified yes — with both Sandberg and Dorsey saying they keep such content if they have any suspicions.
Which is perhaps rather cold comfort when you consider that Facebook had — apparently — zero suspicions about all the Kremlin propaganda violently coursing across its platform in 2016 and generating hundreds of millions of views.
Since that massive fuck-up the company has certainly seemed more proactive on the state-sponsored fakes front — recently removing a swathe of accounts linked to Iran which were pushing fake content, for example.
Although unless lawmakers regulate for transparency and audits of platforms there’s no real way for anyone outside these commercially walled gardens to be 110% sure.
Sandberg’s clumsy affirmation of WhatsApp encryption
Since the WhatsApp founders left Facebook (one earlier this year, the other last fall), there have been rumors that the company might be considering dropping the flagship end-to-end encryption the messaging platform boasts — specifically to help with its monetization plans around linking businesses with users.
And Sandberg was today asked directly if WhatsApp still uses e2e encryption. She replied by affirming Facebook’s commitment to encryption generally — saying it’s good for user security.
“We are strong believers in encryption,” she told lawmakers. “Encryption helps keep people safe, it’s what secures our banking system, it’s what secures the security of private messages, and consumers rely on it and depend on it.”
Yet on the specific substance of the question, which had asked whether WhatsApp is still using end-to-end encryption, she pulled out another of her professionally caveated responses — telling the senator who had asked: “We’ll get back to you on any technical details but to my knowledge it is.”
Most probably this was just her habit of professional caveating kicking in. But it was an odd way to reaffirm something as fundamental as the e2e encrypted architecture of a product used by billions of people on a daily basis. And whose e2e encryption has caused plenty of political headaches for Facebook — which in turn is something Sandberg has been personally involved in trying to fix.
Should we be worried that the Facebook COO couldn’t swear under oath that WhatsApp is still e2e encrypted? Let’s hope not. Presumably the day job has just become so fettered with fixes she just momentarily forgot what she could swear she knows to be true and what she couldn’t.