Regulating Bot Speech


We live in a world of artificial speakers with real impact. Chatbots befriend children in order to acquire marketing data. Robotic telemarketers laugh at the suggestion that they are not real. Russian social media bots foment sufficient political strife to merit a spotlight in Congressional hearings. Concerns over bot speech have led prominent figures in technology to call for regulation. Legislators have begun to heed these calls, drafting laws that would require online bots to clearly indicate that they are not human. This essay is the first to consider how efforts to regulate bot speech might fare under the First Amendment. At first blush, requiring a bot to self-disclose raises little in the way of free speech concerns—it does not censor speech per se, nor does it unmask the identity of the person behind the automated account. A deeper analysis, however, reveals several areas of First Amendment tension that any bot disclosure law would need to address. These include a poor fit between the disclosure requirement and the harms such a law would aim to address, the potential for unmasking anonymous speakers in the enforcement process, and the creation of a scaffolding for censorship by private actors and other governments. We offer recommendations for legislators who seek to respond to the real risks autonomous speakers pose while avoiding these pitfalls.

The full paper is available here.

Ryan Calo, University of Washington, Stanford Law School, Yale Law School; Madeline Lamo, University of Washington

Brought to you by ICLR.