
The Rise of Platform Authoritarianism

Artificial intelligence is changing the workplace, with real dangers to anti-discrimination and privacy principles.
Ifeoma Ajunwa,
Assistant Professor with a joint appointment at Cornell ILR School and Cornell Law School
April 9, 2018

When Mark Edelstein, an unemployed 58-year-old social media strategist, logged in to Facebook last December, he was shown ads that reflected some of his interests, including an ad for marketing software and an ad for a trip-booking website that he later used to book a trip to visit his mother in Florida.

What Edelstein did not see that day was a posting for a social media director job at HubSpot. That’s because this ad, identified in an investigation by ProPublica, was targeted only to Facebook users aged 27-40. Edelstein never had a chance at the job — simply because of his age.

Facebook isn’t the only platform accused of this type of discrimination: Google and LinkedIn have also been accused of allowing ads that exclude audiences older than 40. Social media platforms have argued that they are immune from liability for such discrimination claims under the Communications Decency Act, which shields platforms from responsibility for actions taken by third parties on their sites. Recent allegations, however, point to Facebook using its own platform to disseminate age-restricted employment ads in violation of the Age Discrimination in Employment Act.

Platforms like Facebook are increasingly becoming part of the hiring process. Job seekers must engage with these platforms solely on the terms dictated by the platform, without complete information about how the data will be used or stored. Job applicants also have no guarantee that those terms conform to existing laws and regulations. I call this phenomenon “platform authoritarianism.”


The use of algorithmic platforms in the hiring process is becoming the norm: A recent study found that nearly all Global 500 companies have done away with paper applications in favor of automated hiring platforms that screen and sort applicants. In the era of machine-learning algorithms, which may constantly change the variables used for decision-making, there may be no sure way to audit hiring decisions. This also means that the use of protected variables or their proxies in the hiring process could go undetected.

The rise of platform authoritarianism may enable companies to engage in discriminatory hiring practices. A class action complaint filed in February 2017, for example, alleges that Facebook allowed for “ethnic affinity” group advertising, meaning that employers could show employment ads to specific demographics that corresponded to racial groups. Consider, for example, that Facebook derived its affinity groups by grouping together users it had identified as “Spanish dominant.” Such proxies could serve as stand-ins for protected categories, such as race, in contravention of the Civil Rights Act.


Platform authoritarianism also raises concerns beyond the hiring process. For example, scheduling platforms such as Kronos, which allow for “on-demand scheduling,” are becoming ubiquitous, giving rise to employment situations in which workers are at the mercy of erratic, automated scheduling with no guaranteed hours. Productivity platforms, which collect not just keystrokes but also biometric data, raise new legal questions about employee privacy rights. A major concern is that these platforms are contributing to a technologically enabled workplace panopticon that gradually erodes worker privacy and human dignity.

The capability for massive and indefinite data collection afforded by workplace algorithms raises legal questions about data ownership and control. As platform authoritarianism demands that both job applicants and employees share more and more of their personal data, the law has not kept pace, leaving applicant and employee data vulnerable to misuse. There is a real risk that platform operators could sell sensitive personal employee data to unknown third parties. Given the recent revelations that Cambridge Analytica was able to harvest the Facebook data of millions of users, this concern is not far-fetched.

Artificial intelligence is changing the workplace, with real dangers to anti-discrimination and privacy principles. Acknowledging the rise of platform authoritarianism is the first step toward reform, but state and federal regulators must devote real resources to understanding and addressing these new realities to ensure civil rights for all job applicants and employees.

This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.

