How Big Tech Fueled Capitol Insurrection Radicalization

Sophia Scott ‘21, Editor-in-Chief

 

On January 6, 2021, amidst a joint session of Congress gathered to certify Joe Biden’s Electoral College presidential victory, the nation’s eyes turned towards the United States Capitol to witness another historic moment: thousands of Trump supporters invaded the sacred congressional chambers, causing lawmakers to evacuate, don gas masks, and barricade themselves in their offices, hiding in fear for their lives. In a location kept unidentified for safety reasons, legislators sat transfixed by live news footage of splintered glass windows, billowing Confederate flags, and clouds of tear gas and pepper spray filling the Capitol Rotunda.

US Capitol police officers try to stop supporters of President Donald Trump, including Jake Angeli (R), a QAnon supporter known for his painted face and horned hat, from entering the Capitol on January 6th, 2021, in Washington, DC. (Courtesy of Saul Loeb/AFP/Getty Images)

The nation watched in horror as the hallowed halls of American democracy were desecrated by the very American citizens it exists to protect and defend—all against the backdrop of centuries-old artwork depicting the founding and founders of the great American democratic experiment. Five people died, including a Capitol Police officer. Countless legislators were left psychologically traumatized. And in the aftermath of the violence, the historic Capitol building was littered with trash, Trump campaign memorabilia, damaged artifacts, and blood, which was even smeared on a 19th-century marble bust of President Zachary Taylor.

A bust of President Zachary Taylor in the Capitol building stained with blood on Wednesday, January 6th. (Courtesy of Anna Moneymaker, The New York Times)

While watching the insurrection unfold live on television screens across the nation, many Americans wondered, “How on earth could this have happened?” 

To understand how we got here, we must examine how these rioters organized: the Big Tech industry fueled their radicalization by popularizing algorithms designed to compel users to engage repeatedly with extremist content.

Parler, a Twitter-like social media platform geared towards far-right conservatives, functioned not only as an echo chamber for hate groups and anti-government extremism, but also as an extremely effective recruitment, advertising, and coordination tool for the January 6th pro-Trump Capitol insurrection. Parler allowed blatant threats against legislators and police officers to remain visible, enabling users to share and interact with these posts. Later, the FBI traced many of these threats back to insurrectionists who participated in the Capitol raid.

However, more mainstream social media platforms such as Facebook and Twitter also played an integral, more deep-seated role in fueling the Capitol insurrection. These platforms’ algorithms fostered the spread of misinformation and extremist content, thus contributing to the radicalization of the Trump-supporting mob that stormed the building.

For example, Facebook laid the algorithmic groundwork for the spread of extremist content when it pioneered the “Like” button in 2009. This engagement tool, later adopted by other major social media platforms, gathers user-feedback data on individual posts, which in turn teaches the platform’s algorithm to show users more of the same kind of content. For instance, if a user clicks “Like” on a Facebook post about a conspiracy theory such as QAnon, Facebook’s algorithm will then direct more content about conspiracy theories and misinformation toward that user’s News Feed.
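The feedback loop described above can be illustrated with a deliberately simplified sketch. This toy model is not drawn from any real platform’s code; the names (ToyFeed, record_like, rank_feed) and the topic tags are invented for illustration. It shows only the core dynamic: each “Like” raises a user’s learned affinity for a topic, and the feed then ranks similar posts higher, which invites further engagement with that same topic.

```python
from collections import defaultdict

# Illustrative toy model of an engagement-driven recommender.
# All names here are hypothetical, not from any real platform.
class ToyFeed:
    def __init__(self, posts):
        self.posts = posts                  # list of (post_id, topic) pairs
        self.affinity = defaultdict(float)  # topic -> learned engagement weight

    def record_like(self, topic):
        # Each "Like" nudges the user's affinity for that topic upward.
        self.affinity[topic] += 1.0

    def rank_feed(self):
        # Posts on topics the user has engaged with rise to the top,
        # closing the feedback loop: engagement begets more of the same.
        return sorted(self.posts, key=lambda p: self.affinity[p[1]], reverse=True)

feed = ToyFeed([("a", "sports"), ("b", "conspiracy"), ("c", "news")])
feed.record_like("conspiracy")
feed.record_like("conspiracy")
top_post = feed.rank_feed()[0]  # the conspiracy post now ranks first
```

Real recommendation systems weigh many more signals (watch time, shares, social graph), but the self-reinforcing structure is the same: the ranking function optimizes for predicted engagement, not for the accuracy or safety of the content.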

After Facebook popularized this user-behavior-driven recommendation system, other major social media platforms, including YouTube, Instagram, and President Trump’s platform of choice, Twitter, implemented equivalent algorithm-driven recommendation systems. As a result, these Big Tech platforms have transformed into engines of radicalization and epicenters of disinformation, often serving as hotspots for recruiting participants for real-life manifestations of their online machinations.

Inflamed Trump supporters stormed the Senate side of the Capitol on Wednesday afternoon, after the president’s rally. (Courtesy of Jason Andrew, The New York Times)

The most prominent example of Big Tech’s role in fueling the radicalization responsible for the Capitol insurrection occurred on December 20, 2020, mere weeks before the insurrection, when then-President Trump tweeted, “Statistically impossible to have lost the 2020 Election. Big protest in DC on January 6th. Be there, will be wild!” Twitter did not take down the President’s incendiary post. As a result, it was liked, retweeted, and shared thousands of times. Big Tech has engineered platforms with algorithms that empower ordinary citizens, hate groups, politicians, and world leaders alike to incite mass violence, rope millions of people into conspiracy theories based on lies, and organize them to act on these extremist viewpoints in real life.

However, in the wake of the Capitol insurrection, YouTube, Twitter, and Facebook have each taken public steps to make their platforms less hospitable to extremist content. After the events of January 6th, every major social media platform suspended former President Trump’s accounts for his role in inciting the violence at the Capitol building. Many other accounts of public figures and ordinary people associated with perpetuating violent extremism were banned as well. Yet encrypted messaging platforms popular with the far right, such as Signal and Telegram, saw dramatic spikes in new users after Big Tech’s recent crackdown on disinformation and incitement of violence. This mass migration of conspiracy theorists and extremists toward online communication platforms without strict content regulation guidelines constitutes a brewing issue: a highly radicalized group of users removed from mainstream social media platforms is now confined to an echo chamber of rapid communication, propaganda, and extremist content.

Nonetheless, the Biden administration is currently developing a plan to address online radicalization, conspiracy theories, and extremism. On January 22, 2021, White House Press Secretary Jen Psaki announced the administration’s efforts to counter domestic terrorism and dismantle online networks of radicalization. Congress is also debating reforms to legislation that regulates online freedom of expression. Some members of Congress favor forcing Big Tech platforms to change their algorithms to reduce radicalization feedback loops, but none of these reform proposals has yet become law.

Four years ago on January 20, 2017, when President Donald J. Trump delivered his inaugural address, he decried the country’s major cities as havens for violence, and infamously proclaimed, “this American carnage stops right here and stops right now.” Though his declaration was initially dismissed as hyperbolic, under the former president’s leadership, the “American carnage” he foretold ultimately came to fruition at the hands of his own supporters, radicalized by Big Tech platforms that facilitated the spread of his constant lies about election fraud and dog-whistle calls to violence.

How it Started vs. How it’s Going