On its face, Jack Dorsey’s resignation as CEO of Twitter last week was just another shift in the Silicon Valley furniture, albeit an outsized one given Dorsey’s iconic stature. At a different level though, the move suggests a new chapter in the debate about social media platforms, regulation, the future of the internet, and ultimately how we define and allow free speech.
As the Jack Dorsey chapter of Twitter Inc. came to a close, a ten-year Twitter veteran, Parag Agrawal, formerly its Chief Technology Officer, took the helm. Agrawal’s new role portends a faster-moving company. But it may also signal an even more algorithmically filtered, boosted and suppressed conversation in the years to come.
How so?
Imagine you had an algorithm that could instantly calculate the health or danger of any given tweet or online conversation. For instance, it might look in on a substantive, respectful debate among career astrophysicists and assign a positive score of 4852325 and climbing, but score a hateful, racist tirade at -2439492, trending lower and lower.
Wouldn’t such an algorithm be something you could use to make Twitter conversations healthier? Couldn’t it also be used to block bad actors and boost desired, good-faith discussion, thereby reducing harm and promoting peace? The man who once championed this tantalizing, risky idea within Twitter is its new Chief Executive Officer, Parag Agrawal.
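To make the thought experiment concrete, here is a deliberately naive sketch of such a scorer. Everything in it is invented for illustration: the keyword lists, the weights, and the scoring rule bear no relation to anything Twitter actually built.

```python
# Toy "conversation health" scorer. Civil vocabulary raises the score,
# toxic vocabulary lowers it. Purely illustrative; real systems use
# trained models, not keyword lists.

CIVIL_MARKERS = {"evidence", "source", "thanks", "interesting", "agree"}
TOXIC_MARKERS = {"idiot", "shut up", "liar", "hate"}

def health_score(messages):
    """Return a crude running score for a list of messages."""
    score = 0
    for msg in messages:
        text = msg.lower()
        score += 10 * sum(marker in text for marker in CIVIL_MARKERS)
        score -= 25 * sum(marker in text for marker in TOXIC_MARKERS)
    return score

debate = ["Interesting claim, what's your source?",
          "Thanks, here's the evidence I mentioned."]
tirade = ["You're an idiot and a liar.", "Shut up."]

print(health_score(debate))  # positive
print(health_score(tirade))  # negative
```

Even this toy exposes the hard part: someone had to choose the word lists and the weights, and those choices are value judgments, not mathematics.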
So what, you say. But therein lies a fierce and philosophical battle raging about how free speech is defined, protected and suppressed online. Twitter has become a significant tool for the world’s information consumers and purveyors — which in a sense, is all of us. It has been especially useful for journalists, particularly in the understanding of and reporting upon real-time news events. It’s become highly influential in politics and social change. And as a result, the CEO of the Internet’s real-time public square is far more than a tech executive. Ultimately, he and the people he appoints, pays, and evaluates, cast the deciding vote on consequential questions, such as: what news, political and scientific discussion are allowed to be consumed? What are we allowed to know is happening right now? Who determines what can be expressed, and what cannot? What gets boosted, and who gets booted?
Dorsey stepped down via an email to his team on November 29, 2021, posting it to where else but his Twitter account. He wrote that being “founder-led” can be a trap for a company, and he wanted Twitter to grow beyond him. He expressed “bone deep” trust and confidence in successor and friend, Agrawal.
But there were likely other reasons behind his decision too. Dorsey’s been public about wanting to spend more time promoting Bitcoin; he’s called being a Bitcoin missionary his “dream job.” There’s also the small matter that his other day-job has been running Block Inc., (formerly known as Square), the ubiquitous small-business payments processor, which is now worth more than $80 billion and employs more than 5,000. Adding to the incentive: Dorsey also owns a much bigger personal stake in Block than he does in Twitter.
A final, less discussed contributor might be simmering investor dissatisfaction. Twitter’s stock price is languishing in the mid $40’s, the same range it traded in eight years ago. And while its user base is growing, it has not grown at the pace many investors expect; the gains have been small compared to Facebook, Instagram and TikTok.
Activist investors Elliott Management and its ally Silver Lake Partners own significant stakes in Twitter, and they pushed for new leadership and faster innovation. According to Axios, while Elliott Management resigned its board seat in 2020, it demanded and got two things in return: new management, and a plan to increase the pace of innovation. Also looming large are regulator moves, debates over user safety and privacy, and controversy over moderation.
Agrawal has impressive technical chops. He earned a BS in Computer Science and Engineering from the prestigious Indian Institute of Technology (IIT), then a PhD in Computer Science from Stanford University in 2012. He worked brief stints at Microsoft Research, AT&T Labs and Yahoo before joining Twitter in 2011, rising through the ranks over ten years. He led the company’s machine learning efforts, and he has been intimately involved in a research project called “BlueSky,” a decentralized, peer-to-peer social network protocol.
Agrawal has moved quickly, shaking up Twitter’s leadership team. Head of design and research Dantley Davis is stepping down — the scuttlebutt is that Dantley demonstrated an overly blunt and caustic management style that rubbed too many employees the wrong way. Head of engineering Michael Montano is also departing by year’s end. Agrawal’s lines of authority are now more streamlined; he has expressed a desire to impose more “operational rigor.”
“We want to be able to move quick and make decisions, and [Agrawal] is leading the way with that,” said Laura Yagerman, Twitter’s new head of corporate communications. Agrawal’s swift change in key leadership positions suggests that Dorsey didn’t leave entirely of his own volition.
While Dr. Agrawal brings deep technical experience to the role of CEO, most outside observers are focused intently on his viewpoints regarding free speech and censorship.
Every day, voters, world leaders, journalists and health officials turn to Twitter to exchange ideas. As I write this today, the public square is pondering the dangers (or potentially nascent optimistic signs) of a new COVID variant. Foreign policy Twitter is abuzz about troops massing on Ukraine’s border and China’s activities in both Hong Kong and the South China Sea. Law enforcement Twitter is asking the public for crowdsourced detective work on the latest tragic homicides.
What they all have in common is this: these stories often come to the world’s attention via Twitter. Twitter decides which types of speech should be off-limits on its platform. They say who gets booted, and what gets boosted. In other words, they have a big role in defining the collective Overton Window of online conversation. Ultimately, Twitter’s moderation policies, algorithms and (for now at least) human editorial team decide what can and cannot be said, what gets amplified, and what gets shut down.
Further, our world increasingly conflates the concepts of internet “consensus” and truth. So how do we go about deciding what information is true, and what is gaslighting? Which sources will Twitter deem “credible” and which untrustworthy? What labels will get slapped on your tweets?
The CEO of Twitter has an enormously powerful role in determining what does and doesn’t come to the public’s attention, what catches fire and what does not, and who is anointed with credibility. Agrawal knows this intimately; it’s been a big part of his work for the past several years. Twitter’s servers process nearly one billion tweets every day. And usage has swelled to nearly 220 million daily active users, with few signs of slowing.
More important, perhaps, is the highly influential nature of these users. Seth Godin called such influencers “sneezers of the Idea Virus.” Watch any cable TV news channel for more than fifteen minutes, and you’re likely to encounter someone talking about what someone has tweeted. Indeed a very high number of politicians, journalists, celebrities, government and policy officials use Twitter regularly, either to spread, consume or evaluate information. Twitter’s moderation policies can act quickly to fan an ember, or snuff it out.
During Dorsey’s tenure, Twitter came under withering fire for too-hastily suppressing and blocking views. It’s also come under fire for the opposite reason — not being fast enough to block and remove misinformation (for instance “Gamergate,” and later “QAnon” and communication surrounding January 6th.)
Most recently, concern over Twitter’s moderation policies and its blocking, amplification and suppression decisions has been fiercest from civil libertarians, the right, and center-right. Among the examples:
- In October 2020, just weeks before the presidential election, Twitter blocked the New York Post for weeks over its explosive scoop on Hunter Biden’s laptop, preventing the paper from participating on the platform in the run-up to the vote. Twitter first said the ban was because the materials were hacked, though to this day there is no definitive proof they were obtained that way; subsequent reporting by Politico this year independently confirmed the authenticity of several of those emails. Dorsey later apologized for the blocking, calling it a “total mistake,” though he wouldn’t say who made it.
- Twitter locked the account of the Press Secretary of the United States for retweeting that Biden laptop story.
- In October 2020, Twitter suspended former Border Patrol Commissioner Mark Morgan for tweeting favorably about the border wall.
- Twitter temporarily banned and then permanently suspended Donald Trump, a sitting president of the United States, citing his repeated violations of terms of service, specifically its Glorification of Violence policy. Yet Twitter does not ban organizations like the Taliban, nor does it suspend world leaders who threaten the nation of Israel’s existence; they generally only remove individual tweets.
Even before several of these incidents above, in a 2018 interview at NYU, Dorsey admitted that Twitter’s conservative employees often don’t feel comfortable expressing their opinions. And he conceded both that Twitter is often gamed by bad-faith actors, and that he’s not sure that Twitter will ever build a perfect antidote to that gamification. In 2020, a massive hack exposed the fact that Twitter has administrative banning and suppression tools, which among other things allow their employees to prevent certain topics from trending, and which also likely block users and/or specific tweets from showing up in “trending” sections and/or searches.
As Twitter’s influence rose, these decisions caused consternation among some lawmakers, and Dorsey was pressed to sit before multiple Congressional hearings, where he was asked about these instances and more.
One big issue is “bots” (short for robots): automated programs that use Twitter’s platform while posing as users. They amplify certain memes by posting content, liking and retweeting, and replying affirmatively to things they are programmed to agree with (or negatively to things they are not). They are a prime example of how Twitter, in its “wild west” initial era, often let its platform be manipulated.
Twitter’s initial response was slow; one has to remember that bots inflate usage numbers, which in turn can help create a feeling of traction (and ad-ready eyeballs). But bots are often designed with malicious intent, to skew the public’s perception of what’s catching fire or to lend credibility to false stories. Since 2016, Twitter has gotten more aggressive about cleaning out bots, and in 2018 it greatly restricted use of its application programming interface (API). And after the January 6th riot at the United States Capitol, after years of hedging, Twitter finally took aggressive action to de-platform the conspiracy fringe group QAnon, suspending 70,000 accounts related to that movement. Dorsey regretted that this ban came “too late.”
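The amplification dynamic bots exploit can be sketched as a toy simulation. The numbers and the “trending” rule below are invented for illustration; the point is only that a naive popularity metric cannot distinguish organic interest from automation.

```python
# Toy simulation of how a small botnet can distort "trending" counts.
from collections import Counter

def trending(tweets, top_n=2):
    """Rank hashtags by raw mention count, a naive metric that
    can't tell organic interest from coordinated automation."""
    counts = Counter(tag for _, tag in tweets)
    return [tag for tag, _ in counts.most_common(top_n)]

# 1,000 genuine users each mention #election once;
# 50 bots each post #hoax 30 times.
organic = [(f"user{i}", "#election") for i in range(1000)]
botnet  = [(f"bot{i}", "#hoax") for i in range(50) for _ in range(30)]

print(trending(organic + botnet))  # ['#hoax', '#election']
```

Fifty automated accounts outrank a thousand real people, which is why platforms moved from raw counts toward per-account rate limits and bot detection.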
The justification for these interventions often centers around harm. Or perhaps more accurately, it centers around what Twitter’s human and algorithmic decisionmakers judge in the snapshot moment to be “harmful.”
What’s Agrawal’s attitude about free speech? While some civil libertarians and commentators on the political right initially cheered Dorsey’s departure, that enthusiasm quickly cooled. That’s because Agrawal has in the past signaled very clearly that he believes Twitter’s censorship policy should not be about free speech, but about reducing harm and even improving peace. You can get an idea for Agrawal’s philosophy in his extended remarks with MIT Technology Review in November 2018:
“[Twitter’s] role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation and our moves are reflective of things that we believe lead to a healthier public conversation. The kinds of things that we do about this is, focus less on thinking about free speech, but thinking about how the times have changed. One of the changes today that we see is speech is easy on the internet. Most people can speak. Where our role is particularly emphasized is who can be heard. The scarce commodity today is attention. There’s a lot of content out there. A lot of tweets out there, not all of it gets attention, some subset of it gets attention. And so increasingly our role is moving towards how we recommend content and that sort of, is, is, a struggle that we’re working through in terms of how we make sure these recommendation systems that we’re building, how we direct people’s attention is leading to a healthy public conversation that is most participatory.” (Emphasis added.)
In 2010, he tweeted:
In other words, his viewpoint (at least in 2010) appears to be that book banning might be not only acceptable but desirable if it increases societal peace. This sentiment is most definitely not aligned with those who believe the best antidote to speech with which you disagree is more and better speech. As one wag put it, “I’ll be happy to ban all forms of hate speech, as long as you let me define what it is. Deal?”
More of Agrawal’s outlook can be discerned from his November 2018 interview with MIT Technology Review. From roughly 2015 to 2018, he and the rest of the technical team at Twitter put great effort into determining whether the health of any given public conversation could be scored algorithmically. Thus far, that effort appears to have yielded disappointment. Yet Agrawal seems undaunted in the quest.
Agrawal’s Holy Grail of algorithmically scoring the “health” or potential “harm” of a public conversation isn’t yet fully possible. Thus, Twitter employs humans to curate discussion and to block, ban, suppress and promote (through sorting) certain expressions over others. Given that human editors are expensive, Twitter focuses them on a few subjects; Agrawal specifically names pandemic response and election integrity as the two areas he deems most appropriate for such intervention. Yet let’s keep in mind he also clearly believes that automated algorithmic “scoring” of healthy conversation is both possible and desirable.
Our approach to it isn’t to try to identify or flag all potential misinformation. But our approach is rooted in trying to avoid specific harm that misleading information can cause.
Dr. Parag Agrawal, Twitter’s new CEO, MIT Technology Review November 2018
While controlling discussion to promote peace might seem to be an unalloyed good, it’s not at all clear that a harm-reducing, peace-promoting Internet public square is also necessarily a truth-promoting one. For one thing, truth doesn’t care about its impact. And it isn’t always revealed right away. Our understanding and interpretation of facts change over time. It seems increasingly often that things which we once “knew” to be certain are suddenly revealed to be quite different. Would such an algorithm optimize for the wrong things, leaving us less informed in the process? These and other conundra confront Twitter’s new CEO, who took office last week.
In a way, Agrawal’s appointment as Twitter CEO can be seen as an important waypoint in the Internet’s transformation from techno-libertarianism to a much more progressive worldview with a willingness to use a heavier hand. Anti-establishment civil libertarians used to typify many Internet and tech leaders’ outlook. Yet quite steadily over the past decade, a progressive worldview has grown dominant. While one side values free speech and civil liberties as paramount values, the other believes societal peace, equity, and least “harm” trump other goals. And for some, if free speech needs to be sacrificed to achieve it, so be it. Throughout his tenure, Dorsey himself has shown elements of each philosophy.
Agrawal may be a technocrat progressive. In 2017, he donated to the ACLU so it could sue President Trump. He has also likened religion to a pyramid scheme:
Yet Agrawal is far from a censorship extremist. He advocates more open access to Twitter’s archives through Application Programming Interfaces (APIs), and more third-party analysis of what’s discussed on the platform.
One hopeful sign is that Agrawal has already experienced his own “my old tweets have been taken greatly out of context” moment, immediately after being named Twitter’s new CEO. Critics on the right seized on this October 26th, 2010 tweet of his, suggesting it somehow demonstrates that he’s equating white people with racists:
But as he quickly explained, “I was quoting Asif Mandvi from The Daily Show,” noting that his intent was precisely the opposite. Agrawal was joking about the harm of stereotypes. He was of course not making a factual statement.
As someone who sides more with the civil libertarians on free speech, I hope he remembers that it was his ability to respond with more speech that clarified his true feelings and conveyed a truth that went beyond the first 280 characters. Wasn’t it good for him that he could dispel the controversy and continue to engage, rather than being banned because some lower-level employee determined his first tweet caused harm under at least one subjective interpretation?
Perhaps the central conundrum is that content moderation is impossible to get perfectly “fair” or least-harm-imposing. No algorithm or human will be able to make the correct decision at every moment. Thus, guidelines need to exist which define an optimal content moderation policy. For that, you need the platform’s leader to define what should be optimized via such a policy. Truth? Liberty? Fairness? Viewpoint Diversity? Peace?
Back to the thought-exercise which started this piece. Would everyone score the “harm” of a given conversation the same way, or the credibility or intent of the speakers? Obviously, we wouldn’t. Algorithms — especially machine learning algorithms — are tremendously powerful, but they also can give an illusion of objective authority. In reality, they’re only as good as their data training and evaluation sets, and those evaluation sets have explicit goals. Each desirable metric chosen to optimize (Peace, Truth, Viewpoint Diversity, etc.) would yield very different algorithms. And the result would be very different content moderation, amplification and suppression policies.
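That claim, that the chosen objective determines the resulting moderation policy, can be shown with a toy ranking example. The tweets, attribute scores and weightings below are all invented for this sketch.

```python
# Toy illustration: the metric a platform optimizes determines what
# gets amplified. Three objectives, three different "top" tweets.

tweets = [
    {"text": "Outrage-bait hot take", "engagement": 0.9, "civility": 0.2,  "accuracy": 0.3},
    {"text": "Careful expert thread", "engagement": 0.4, "civility": 0.9,  "accuracy": 0.9},
    {"text": "Feel-good platitude",   "engagement": 0.6, "civility": 0.95, "accuracy": 0.5},
]

def top_tweet(tweets, weights):
    """Amplify whichever tweet scores highest under the given objective."""
    score = lambda t: sum(weights[k] * t[k] for k in weights)
    return max(tweets, key=score)["text"]

# Optimize for attention, for "peace," or for truth: three different feeds.
print(top_tweet(tweets, {"engagement": 1.0}))  # Outrage-bait hot take
print(top_tweet(tweets, {"civility": 1.0}))    # Feel-good platitude
print(top_tweet(tweets, {"accuracy": 1.0}))    # Careful expert thread
```

Same data, same code, three different public squares; the values live in the weights, not the algorithm.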
Agrawal’s view will likely evolve, but for the moment, he appears to prioritize what he considers “healthy conversation” and avoidance of “harm.” It is how he actually defines health and harm which will be very important indeed. For it will determine what we know to be true, and from whom we hear.
Jack Dorsey’s resignation letter concludes with this statement: “My one wish is for Twitter Inc. to be the most transparent company in the world.”
That would be most welcome. But they have a very long way to go indeed. Godspeed, Dr. Agrawal.
Steve: I’m not sure I agree with the framing of the free speech issue as what we’re “allowed” to see and what we’re not. We’re allowed to say anything we want. Shout it from the rooftops. The question is whether any private company should be obligated to transmit or amplify what anyone says. Before social media, news organizations were the main conduit of news. They used their own human versions of algorithms – editors – to decide what “free speech” to pay attention to and amplify. They weren’t forced to pay attention to speech that didn’t fit their editorial judgment. Clearly social media aren’t journalism, but where is the implicit promise that the platforms can’t have standards by which they decide who gets amplified on their platforms? Where would you draw the line? Explicit death threats? It’s all too easy to say that the cure for harmful speech is more, better speech, but every medium of exchange has to have some rules or it ceases to function.
I think the slippery slope here is the business model on which these platforms are built. They have every incentive to inflate viewership of tweets, posts, etc. as much as possible so that users get addicted to the idea that they have an enormous audience. The outsized required scale of the number of users and their audience is necessary to justify company valuations and stock prices.
So instead of every Tweet competing with every other Tweet on an equal basis for an audience, Twitter puts its thumb on the scale to amplify some content over other content. As we know, those interventions reflect value systems built into the algorithms – in this case maximizing audience for advertising and flattering users so they use the platform more. It’s really as simple as that. In such a scenario, is it any surprise that content value becomes the product of mob rule, elevating not the “best” content (whatever that might be) but content with the greatest potential to attract the most consumption, no matter how calorie-free (or toxic) it might be?
To get to a workable version of your “free speech,” one might largely eliminate algorithmic amplification and have content spread be chronologically based. Every Tweet competes with every other and users have to take a more conscious role in choosing what content they want to see. But then, users would follow radically fewer accounts, their own follower numbers would fall and without the illusion of a huge following, usership would plummet. No major platform would go for that because they couldn’t make money at scale.
Lastly, I’m wary of framing this as a free speech issue, since government (so far) isn’t suppressing or dictating what can or can’t be said. The problem is in conceding monopolistic control to these giant platforms because of their scale to the point that we confuse them with being THE public square. They’re not. They’re private companies. If the rules they operate under make them anti-competitive, then that’s a good place to start changing things.
Great thoughts, thanks. You’re quite right to drill down on my use of the word “allowed.”
It admittedly includes a bit of fast-forwarding of the videotape here, and a recognition that the one-time defenders of free speech have greatly diminished their stance, and in too many cases, joined the other team.
Twitter decided it was not acceptable for the nation’s fourth largest newspaper and the Press Secretary of the United States to participate in its platform when it shared startling news that suggested the son of the president, and perhaps even the president himself, were engaged in influence peddling with one of our nation’s greatest rivals. This was suppressed. Though they were free to “speak” elsewhere, they were not free to participate in any discussion on the world’s real-time news discussion platform.
Ultimately, the vast majority of the world’s speech will be transmitted over the dominant platforms, and those platforms have become ever-more concentrated. While for the moment we can continue to say the things we want to say, one-time defenders of free speech have morphed. Their goals have changed.
That includes both the once techno-libertarian enclaves of Silicon Valley and now long-time free-speech champions like the ACLU (https://www.nytimes.com/2021/06/06/us/aclu-free-speech.html). While it’s true, as some have suggested, that people can just “create their own platform,” the oligopolistic nature of online hosts means these potential other town squares, too, are at risk of being shut down.
Directionally, the movement over the past decade has been to place ever-more subjective limitations on speech (or at least punitive consequences for it), rather than to maintain and defend it. This is different from how I once imagined the Information Age evolving; I did not expect it to move inexorably toward more restrictions, filtering, boosting, and banning.
Societally too, on both activist movements and university campuses, speech is frequently equated with violence and with harm. (Ironically, leaders of several of these movements often argue that property destruction is not violence. “It’s a beautiful thing, the destruction of words,” as Orwell wrote.)
Of course, much of it depends upon what one considers “harmful,” or ultimately what trained algorithms detect is “harmful.” The devil’s ultimately in the details. The Supreme Court has wrestled with harmful speech in its “fire in a crowded theater” rubric. Personally, I like that framing. Incitement-to-violence clearly crosses the line, but even that cannot be a hard-and-fast rule — one -should- shout “fire” in a crowded theater if one really does see fire.
Twitter ultimately can and does do as it pleases. My hope is that they exercise as light and transparent a hand on that tiller as they can. The best antidote to speech with which we disagree is more and better speech. I think the civil rights activists in the 60’s got that exactly right.
The horse left the barn on the “private company” framing. Much as I detest Trump, removing an ex-POTUS from its system without due process is more than throwing an obstreperous customer out of the store.
I don’t have any answers about Twitter — government intervention by court or bureaucracy would be even worse.
But Twitter is far more than a “private company”. It acts as a central coordinating system in media, which is obviously central.
Probably best thing to do is same thing as with Microsoft: leave it alone and hope that alternatives emerge. Or stop using it, as I have w/ Facebook.
I should also add that it’s in the platforms’ interests to frame the speech issue as a content moderation problem and a question of figuring out where to draw the line. Framing it thus nibbles around the edges of the problem, while avoiding the larger issue of defining quantity as quality in the service of selling ads and user info.
Moderation is an interesting word. It’s defined as “the action of making something less extreme, intense, or violent.” And two decades ago, it might have suggested a team of human beings whose main goal was exactly that — removing posts with swear-words, not-suitable-for-work images, etc.
But algorithms now do far more than removal. They can boost, downboost and put people and hypotheses — even plausible ones — in the “you’re no longer credible” bin. Since topic search is such a big part of how people use Twitter, algorithms can boost certain views and facts, and suppress other views and facts. For example, for more than an entire year, the viewpoint that COVID might first have emerged accidentally from a lab was deemed a “conspiracy theory” and derided, labeled with danger-warnings, and suppressed. This certainly diminished any possible momentum for international investigation when any potential evidence was freshest.
Yet as we sit here today, two years later, more than half of the American public now believes this to be the most plausible theory of emergence. What changed wasn’t the emergence of any new bombshell information; it was that first a New York Magazine essayist and later a 20+ year veteran of the NY Times dared discuss the topic.
The filtering, promotion and suppression policies of a real-time news discussion and dissemination platform will continue to have a huge societal impact on what the public believes to be within the realm of possibility.
A related, and I think terrific piece by Abigail Shrier today:
https://abigailshrier.substack.com/p/what-i-told-the-students-of-princeton
I would not describe Twitter as a benevolent agent. Twitter has ideological leanings, and its ethical track record reveals definite biases. Moreover, Twitter should never be entrusted with responsibility for gatekeeping or authentication; centralizing more authority is not the answer. Twitter exists in the marketplace of ideas, and we should not allow Big Tech to monopolize it. Make them compete, and truth, beauty and goodness will prevail in the end.
We focus on the CEOs of these companies, but I think we should also pay attention to the employees, the way they are hired, and the company culture they create. It’s these employees, or at least the activist cadre of them, that drive a company in certain directions. I can well imagine that the emphasis on peace is not coming from Dorsey so much as from empowered employees. True, CEOs love to replicate themselves through hiring policies, but eventually all those employees outvote the C-suite. The same is true of media companies.
Great article and sobering insights, Steve.
A memory: I remember the first time on Facebook I noticed there was tinkering with my feed. Friends started to disappear. Acquaintances popped up in their place.
The simple chronology of posts was abandoned. You lost the straightforward control of curating your own relationships. At the time it felt like they were clumsily trying to help!
It was pre-virality. It’s almost impossible to get your head around it. Facebook was still casting about trying to find a business model! They were mocked!
I don’t believe there’s an algorithm to cleanup the algorithms, and this CEO isn’t any more likely to find it than the next CEO. Isn’t the mischief in algorithmically driving engagement in the first place? There’s no will to give up the thrill of retweets. And there’s no money in chronological Facebook feeds.
But there’s a small comfort in conversations like this one, responses in a comment thread from people I can learn from. New friends and ideas, indistinguishable from early Facebook exchanges. What a pleasure they were!
So, if, say, banning companies from starting social media fires as a business model was ever enforced, ideas could still travel, freely, without regulation. That social media companies will regulate themselves is as pointless as an alcoholic hiding liquor from himself.
I could live with a world where ideas were free, but they couldn’t be commoditized. I’m sure there’s a lot to tear apart here, but really that’s my point.
Anyway, thanks Steve for a chance to think out loud and fumble my way through it all. Very thoughtful and inspiring read. As always with you.