
Friday, June 22

Do you lose free speech rights if you speak using a computer?


Scholar argues computer-generated speech is not protected by the Constitution.

It took guts for the New York Times to publish an op-ed by Tim Wu, the Columbia law professor who coined the phrase "network neutrality," arguing that the First Amendment doesn't protect the contents of the New York Times website. A significant amount of the content on the Times website—stock tickers, the "most e-mailed" list, various interactive features—was generated not by human beings, but by computer programs. And, Wu argues, that has constitutional implications:
Protecting a computer’s "speech" is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship. The First Amendment has wandered far from its purposes when it is recruited to protect commercial automatons from regulatory scrutiny.
OK, I fibbed. The target of Wu's op-ed was Google and Facebook, not the New York Times. But accepting Wu's audacious claim that computer-generated content doesn't deserve First Amendment protection endangers the free speech rights not only of the tech titans, but of every modern media outlet.
No one believes that the output of computer programs, as such, is protected by the First Amendment. It would be ridiculous, for example, to argue that the First Amendment barred the government from regulating a computer that controlled a nuclear power plant. But when a firm is in the business of providing information to the public, that information enjoys First Amendment protection regardless of whether the firm creates the information "by hand" or with a computer.

"Computer speech" at the New York Times

Wu's argument depends on drawing a sharp distinction between constitutionally protected human speech and computer speech that is unprotected by the First Amendment. But closer examination demonstrates how nonsensical this distinction is. To make the point, we don't need to look any further than the Grey Lady herself.
Articles published by the New York Times are often composed using word processors, and pages in the print newspaper are laid out using page layout software. The nytimes.com website is sent to readers by a Web server (a computer program) and rendered by a Web browser (also a computer program).
Of course, Wu isn't talking about those programs. He means programs that are directly involved in the production or selection of content. But the New York Times website has plenty of examples of those too. The home page features an automated stock ticker. A box on the right-hand side of the page shows "most e-mailed" and "recommended for you" stories—also generated automatically. The millions of ads the Times shows its readers every month are almost certainly chosen by computer algorithms.
In 2010, the Times produced an interactive feature called "You Fix the Budget." Users were invited to try to balance the US federal budget by choosing a mix of spending cuts and tax increases. A January feature, called "What Percent Are You?," invited readers to enter their household income to see how it compared with others in hundreds of metropolitan areas around the country. Features like this would be impossible to create "by hand."
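To get a sense of why such a feature requires software, consider a minimal sketch of an income-percentile lookup. The metro names and income figures below are invented for illustration, and the function is a hypothetical stand-in; it is not the Times' actual code or data.

```python
# Hypothetical sketch of a "What Percent Are You?"-style lookup.
# The income samples and metro names are invented for illustration.
from bisect import bisect_left

# Pretend distribution: sorted household incomes (in dollars) per metro area.
INCOME_SAMPLES = {
    "Example Metro A": [18_000, 32_000, 47_500, 61_000, 89_000, 140_000],
    "Example Metro B": [21_000, 38_000, 55_000, 72_000, 104_000, 180_000],
}

def income_percentile(income: float, metro: str) -> float:
    """Return the share of sampled households in `metro` earning less than `income`."""
    samples = INCOME_SAMPLES[metro]
    rank = bisect_left(samples, income)
    return 100.0 * rank / len(samples)

if __name__ == "__main__":
    for metro in INCOME_SAMPLES:
        pct = income_percentile(60_000, metro)
        print(f"$60,000 is above {pct:.0f}% of sampled households in {metro}")
```

Repeating that lookup instantly for any income a reader types, across hundreds of metro areas, is exactly the kind of task only a program can do.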
On election night, the Times typically has an extensive section of its website featuring election results from around the country, complete with maps, charts, and poll results. These features are updated in real time, far too quickly for a human staff to keep them up to date.
The Times employs Nate Silver, a statistician who collects thousands of poll results and produces sophisticated mathematical models of election outcomes. These models are complex enough that his results could only be generated by a computer, and indeed even Silver himself can't always explain exactly why the model produces a particular outcome.
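The flavor of that kind of computation can be suggested with a toy example. The sketch below is emphatically not Silver's actual model; it is a simple recency- and sample-size-weighted average over made-up poll numbers, included only to show why aggregating thousands of polls is inherently a job for software.

```python
# Illustrative toy poll-averaging step: weight each poll by sample size
# and by how recent it is. NOT Nate Silver's model; numbers are invented.
from dataclasses import dataclass
from math import exp

@dataclass
class Poll:
    days_old: int           # days since the poll closed
    sample_size: int        # number of respondents
    candidate_share: float  # e.g. 0.52 means 52% support

def weighted_average(polls: list[Poll], half_life_days: float = 14.0) -> float:
    """Combine polls, giving larger and more recent polls more weight."""
    num = den = 0.0
    for p in polls:
        weight = p.sample_size * exp(-p.days_old / half_life_days)
        num += weight * p.candidate_share
        den += weight
    return num / den

if __name__ == "__main__":
    polls = [Poll(2, 900, 0.51), Poll(10, 1500, 0.48), Poll(30, 600, 0.53)]
    print(f"Weighted support estimate: {weighted_average(polls):.1%}")
```

Even in this toy version, the choices that matter, which polls to include, how fast old polls should fade, how much a large sample counts, are made by a person and merely executed by the machine.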

Censoring computers means censoring people

Obviously, it would be ridiculous to say that the New York Times website doesn't get First Amendment protection because too much of the site is generated by software. It wouldn't make sense to say that the government can freely regulate sections of the site—like Silver's projections or the "most e-mailed" list—simply because they are software-generated. Nor would it make sense to say that a feature like the "most e-mailed" list would receive more First Amendment protection if a human being, rather than a computer, compiled the list of most e-mailed pages.
Silver was exercising his First Amendment rights when he designed his model and selected the polls that drive it. Similarly, the Times was exercising its First Amendment right when it decided that readers would be interested in stories that were frequently e-mailed by other readers. Regulating these sections of the site raises exactly the same First Amendment issues as regulating the content or placement of traditional news stories.
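A short sketch makes the point concrete. The snippet below is a hypothetical illustration of how a "most e-mailed" list might be computed, not the Times' actual system; the function name, the 24-hour window, and the sample data are all invented.

```python
# Minimal hypothetical sketch of a "most e-mailed" ranking. The editorial
# decision (rank stories by how often readers e-mailed them recently) is
# made by people; the program just carries it out.
from collections import Counter
from datetime import datetime, timedelta

def most_emailed(events, now=None, window_hours=24, top_n=5):
    """events: iterable of (timestamp, article_id) e-mail-share records."""
    now = now or datetime.now()
    cutoff = now - timedelta(hours=window_hours)
    counts = Counter(article for ts, article in events if ts >= cutoff)
    return [article for article, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    now = datetime.now()
    events = [
        (now - timedelta(hours=1), "budget-feature"),
        (now - timedelta(hours=2), "budget-feature"),
        (now - timedelta(hours=3), "election-map"),
        (now - timedelta(hours=30), "old-story"),  # outside the 24-hour window
    ]
    print(most_emailed(events))
```

Every design decision in that snippet, the time window, the ranking rule, how many stories to show, is a human editorial judgment expressed in code, which is exactly why regulating its output raises the same First Amendment issues as regulating a hand-picked list.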
The same point applies to Google. The question isn't whether Google's computers have First Amendment rights. Obviously they don't. Rather, the people who own and operate Google's computers—its engineers, executives, and shareholders—have First Amendment rights. And regulating the contents of Google's website raises First Amendment issues regardless of how Google might have used software to generate that content.
Perhaps the most perverse aspect of Wu's argument is that it seems to offer the highest level of First Amendment protection to those aspects of Google's website that have attracted the most criticism. For example, critics have charged that it is anticompetitive for Google to give its own services, such as Google Books and Google Maps, a more prominent position in search results than competing services. But this has generated controversy precisely because a human Google employee overrode the default behavior of Google's software to move Google products to the top of the page. Similarly, some critics fault Google for manually removing search results it regards as spam. Again, Google's alleged sin is using too much human judgment, not too little.
Wu's theory seems to imply that manual manipulation of search results is entitled to a higher level of First Amendment protection than automated, organic search results. That doesn't make any sense.
Ars alum Julian Sanchez has pointed out that Wu's argument also seems to imperil free speech rights for video games. The Supreme Court recently ruled that video games are entitled to the same First Amendment protections as other forms of media. But the contents of a video game are generated by software in much the same way Google's search results are. Would the state of California have won the case if it had raised Wu's "computers don't have free speech rights" argument? I hope not.
So it's true that computers don't have First Amendment rights. As Paul Alan Levy points out, neither do printing presses. But people have free speech rights, and those rights apply even if we use computers to help us speak.