Editor’s Note: Sally Hubbard is director of strategic enforcement at the Open Markets Institute, a former antitrust enforcer and author of Monopolies Suck: 7 Ways Big Corporations Rule Your Life and How to Take Back Control. The opinions expressed in this commentary are her own.

Less than a week before the US presidential election, the CEOs of Google, Facebook and Twitter appeared before the Senate Commerce Committee to testify about Section 230 of the Communications Decency Act — the 1996 law that limits platforms’ liability for content posted by users and gives them latitude to moderate content as they see fit. The question at hand was whether Section 230 should be reformed to hold these tech companies accountable for the content on their platforms and for their moderation practices.

Republicans on the committee have accused the digital platforms of liberal bias and have called for limits on Section 230’s liability shield, which has granted Big Tech almost blanket immunity in the past. Democrats scoff at the notion of there being any anti-conservative slant on the platforms, pointing to data that shows right-wing content garnering much of the traffic on Facebook and to reports that Facebook has modified its newsfeed algorithm to the detriment of left-leaning news sites. Instead, Democrats fault the tech platforms for failing to adequately moderate content to stop the spread of disinformation, harmful content, extremism and voter manipulation.

Both critiques miss the heart of the matter.

First, the platforms’ business models — originally designed to manipulate and addict — are the very reason that disinformation goes viral in the first place. Second, whether the tech platforms are making fair, unbiased moderation decisions about what content reaches millions of people is the wrong question to ask. Instead, Congress and the American people should ask why we are allowing Big Tech that power in the first place. We must drastically deconcentrate control over speech so that any single company’s bias or business model cannot have a sizable impact on public discourse.

Too much control over the marketplace of ideas is antithetical to democracy. That’s why America has long had rules that limit such power, like media ownership limits that prohibit the same corporation from owning a TV station and newspaper in the same place. (Trump’s Federal Communications Commission, incidentally, is currently trying to roll back those rules.) The government failed to limit consolidation and allowed Facebook and Google to morph into monopoly monsters. Big Tech didn’t simply grow by being the best; the House Judiciary Antitrust Subcommittee’s recent 450-page report details just how extensive their anticompetitive behavior has been.

About 68% of American adults get some news from social media, according to Pew. And Facebook is now the second-largest news provider as measured by share of Americans' attention, according to a University of Chicago study.

Meanwhile, the news industry has been in a state of crisis, with newsroom employment falling 51% between 2008 and 2019, and more than half of America's counties now lacking a daily local paper. In the digital age, online advertising could largely support journalism just as traditional advertising did in the past. But Facebook and Google account for 85% to 90% of the growth of the more than $150 billion North American and European digital advertising market, according to Jason Kint, CEO of Digital Content Next, a main trade association for publishers. In 2019, Facebook made roughly $1.2 billion a week from ad revenue alone. Not only is a strong press critical to combat deceptive propaganda, but the very same business model that sucks away news publishers' revenue also helps disinformation and polarization thrive.

The platforms have made some policy changes purporting to lessen disinformation and election interference. Facebook announced it is blocking new political and issue ads the week before the election, while allowing existing ads to continue to run, and it will take measures to combat voter suppression, like partnering with state election authorities to remove false claims about voting. Google-owned YouTube has promised to remove videos that include deceptive information about voting, and Google has said it launched two features in Search that offer information about voting. Twitter has rolled out changes like labeling misleading tweets and removing tweets meant to incite election interference. But a recent study by the German Marshall Fund found that “the level of engagement with articles from outlets that repeatedly publish verifiably false content has increased 102 percent since the run-up to the 2016 election.” The platforms’ problems are nowhere near solved.

Through their extensive data collection, the platforms have amassed so much information on each individual user, and on millions of people like them, that their algorithms can target individuals based on their particular vulnerabilities, precisely identifying which people are most likely to be susceptible to, as one investigative report found, "pseudoscience," like coronavirus misinformation.

Facebook and Google have grown their data collection capabilities through a series of mergers (think Google’s purchases of DoubleClick, Admob, YouTube and Android; and Facebook’s acquisitions of Instagram and WhatsApp). In turn, their data collection furthers their monopoly power and is the reason they dominate digital advertising. The platforms can track you and then allow other parties to hyper-target ads and content at you based on that surveillance.

Facebook and Google’s data collection infrastructures are vast indeed. The platforms have all-encompassing views of what people read, think, do, believe, buy, watch, where they go and who they are with. Google, for example, collects user information through its own services like Search, YouTube, Maps, Android, Chrome, Google Play, Gmail and Google Home.

Mark Zuckerberg often claims that Facebook is committed to free expression. But free expression is impossible when one company controls the flow of speech to close to 3 billion people, using individualized, targeted feeds of content intended to make people react. Instead of a public marketplace of ideas, each user sees a private, personalized set of facts, making it nearly impossible to counter harmful speech with more speech.

Advertising used to be based on context (e.g. you see an ad for diapers because you're reading an article about newborns), not identity (e.g. you see an ad for diapers because a social media platform identified you as pregnant after collecting your ovulation data from an app). An important solution is to return to context-based advertising. At the very least, digital platforms should be required to tell users who is funding the messages they see and why exactly they are being personally targeted, under basic truth-in-advertising rules already on the books. Facebook has provided an ad library application programming interface for researchers, but critics say it is not transparent enough.

Leaving it up to the platforms to self-regulate is not a solution: The platforms’ business models are too profitable for the corporations to give them up voluntarily.

The House Judiciary Antitrust Subcommittee's report provides Congress with a path forward that will enable it to deconcentrate the platforms' control over speech: structurally break them up, reform the antitrust laws, pass non-discrimination rules that prohibit platforms from giving preferential treatment to certain types of content or to their own products, and pass interoperability rules that require platforms to make their infrastructure compatible with upstart competitors. Strong privacy rules are also needed to curb the surveillance that fuels the platforms' targeting.

Starting today, we must dismantle the platforms’ business models that are tearing apart America — and countries around the globe. And we must enforce the antitrust laws we already have, while pushing for legislative reform. Our democracy cannot afford to wait.