Cambridge Analytica’s Abuse Shows Why Diversity In Tech Matters

Technology changes, but the deliberate marginalization of racial minorities stays the same.
Americans cast their votes in Ferguson, Missouri, on Nov. 4, 2014.
Scott Olson via Getty Images

Last week, while in West Virginia for a roundtable discussion, President Donald Trump made a sharp detour from his scheduled remarks on taxes to rail against alleged voter fraud in elections. “In many places, like California, the same person votes many times,” he told his audience. “They always like to say, ‘Oh, that’s a conspiracy theory.’ Not a conspiracy theory, folks. Millions and millions of people.”

In an ironic twist better suited for Greek tragedy than American presidential politics, Trump’s demonstrably false claims of electoral fraud reappeared just as Facebook CEO Mark Zuckerberg was preparing to testify before Congress about a long list of political, digital and privacy abuses enabled by the social media giant. Much of the controversy hinges on the scandalous misdeeds of Cambridge Analytica, the now-infamous data and political consulting firm that worked extensively on the president’s 2016 campaign and improperly accessed the private information of some 87 million Facebook users.

In doing so, Cambridge Analytica provided the means for the Trump campaign not only to activate likely supporters but to influence minorities not to vote at all. (While Congress and the public are still sorting out the details, these efforts to depress minority voter turnout paralleled the campaign of “information warfare” orchestrated by Russia’s Internet Research Agency during the presidential contest.)

Encouraging low turnout through questionable means is a longstanding political strategy, now compounded by voter ID laws, to undermine the electoral impact of minority voters.

During the 1964 presidential election, for instance, a Republican consultant was indicted on charges of electoral fraud after he distributed more than a million misleading leaflets that claimed Martin Luther King Jr. wanted black voters to write in his name for president.

In 1980, consultants on Ronald Reagan’s election team proposed a strategy of “holding down the black turnout” as a means of neutralizing the political impact of black voters. The idea was to encourage political apathy and voter abstention among African-Americans through attack ads focused on President Jimmy Carter.

Cambridge Analytica accessed the private information of some 87 million Facebook users for political purposes.
Chris J. Ratcliffe via Getty Images

While the deliberate political marginalization of racial minorities hasn’t changed over the decades, what has become increasingly clear as Facebook and Cambridge Analytica offered their alarming and convoluted narratives is that the technology that allows politicos to target these groups has evolved dramatically. Five decades later, a million leaflets have evolved into millions of Facebook “dark posts” aimed at discouraging minorities from supporting Democratic candidates.

Although Republicans were the first to technologically “innovate” in their anti-democratic tactics, the privacy and political implications cross partisan lines because all political parties have access to these tools. Anyone or any group that seeks to depress or suppress the turnout of a particular constituency now has the unregulated technological means and platform to do so. And voter suppression is an undemocratic weapon, no matter which political party deploys it.

What does our democracy look like if candidates regularly use these technological tools to stifle political rights in local or state races? On ballot referendums? Would Republicans still be silent if the Democratic Party deployed its own version of Cambridge Analytica to suppress the turnout of rural white voters in presidential elections?

These broader political and racial implications are issues that Facebook appears to have ignored. The romanticized narrative of the tech industry is that it’s a meritocracy: Anyone can participate; it democratizes access and opportunity; all voices are elevated. The reality of the industry is the opposite. Intentionally or not, tech titans have shown they simply don’t consider the disparate and often detrimental impact their products can have on already marginalized people.

Cambridge Analytica’s ability to target marginalized groups of voters was only possible because Facebook completely overlooked the potential for a nefarious organization to do so. This oversight, in the face of a long and glaring history of such attempted exploits, is a symptom of an industry culture that prioritizes speed and deprioritizes the lives of users in general, but racial minorities in particular. That, in turn, is a reflection of the absence of diversity in Silicon Valley, especially in leadership and policymaking positions.

Facebook CEO Mark Zuckerberg has been called before Congress to discuss what his company didn't do to protect its users.
Yasin Ozturk / Anadolu Agency via Getty Images

These political and social issues aren’t fully considered and vetted, or are flat-out ignored, because marginalized groups are not at the table when technology is made, “use cases” are discussed, and implementation is tested. Inclusive innovation, with members of marginalized groups sitting in positions of power, surely would have caught some of this. An engineer or product manager who comes from a community that was historically targeted by voter suppression efforts is far more likely to anticipate vulnerabilities, and may have caught such manipulation prior to the 2016 election.

Alas, the Equal Employment Opportunity Commission reminds us that only 1 percent of management in Silicon Valley is black. Facebook just added its first black director, Ken Chenault, to the board in January. Facebook and its peers can mitigate these issues going forward under social pressure, but some changes will have to be mandated by federal regulation. We’ve seen over the last decade that, if left to their own devices, tech companies will do only what is required and nothing more.

While the law lags behind the pace of technology, the last year has taught us that regulatory oversight of social media, and of individual data privacy and portability, is imperative in politics.

Consumers should own their information across every site and repository where it’s stored. Give them the opportunity to opt in, transparently, to what they do and don’t share, and when, how and with whom. If that’s not possible, give them the choice to permanently delete their data. Federal legislation along the lines of the European Union’s General Data Protection Regulation would be a good place to start, but state and local governments could also lead the way.

Diversity is imperative at the highest levels in tech. Companies must elevate the opinions, suggestions and thought leadership of minorities, particularly around ideation, testing and implementation of new products. If they can’t do so quickly within their own leadership, they should do it through supplier diversity programs and by hiring consultants to provide this perspective.

The first step, however, is for companies to humbly acknowledge that they don’t have all the answers. That’s OK. Hire and engage with people who have the same lived experiences as those users and customers that the C-suite may not relate to.

Facebook’s failures provide a direct rebuttal of ex-Googler James Damore’s anti-diversity screed and former Apple diversity head Denise Young Smith’s argument about cognitive diversity being the only kind that matters. Rooms full of white tech executives ― of various genders, ages and presumably political beliefs ― can fall into these mistakes via complacency, privilege and unacknowledged blind spots.

People of color in these rooms would have seen this coming. Include them, elevate them and believe them when it comes to decision-making. Or don’t, to the detriment of the industry and democracy alike.

Leah Wright Rigueur, a historian and assistant professor of public policy at the Harvard Kennedy School, is the author of The Loneliness of the Black Republican. You can find her on Twitter at @leahrigueur.

Bärí A. Williams previously served as lead counsel for Facebook and created its supplier diversity program. She is a tech industry legal and operations executive currently at Marqeta, a start-up adviser and former head of business operations, North America, for StubHub. Follow her on Twitter @BariAWilliams.
