Social Media, Inc.: The Global Politics of Big Data

June 22, 2012

Ron Deibert

World Politics Review: June 19, 2012

In June 2012, Google’s acrimonious relationship with the People’s Republic of China took a couple of new turns. To help Chinese users access information freely from behind the controls of the Great Firewall of China, Google created a unique feature for its popular search engine: When users attempt to search for banned keywords, Google warns them that doing so might cause their Google connection to be interrupted and suggests alternative spellings and phrasings that would ensure they can access the desired content. In effect, Google’s search engine now facilitates the circumvention of China’s censorship of the Internet.

The second new feature is slightly more ominous: a warning issued to some users of Google’s Gmail whose accounts Google suspects might be targeted by “state-sponsored” hackers. Attribution of any attack in cyberspace is extraordinarily difficult, and Google has remained tight-lipped about how it knows a particular attack is state-sponsored. Nor has Google explained to what extent it would issue this warning to users who might be victimized by states other than China. These questions are particularly relevant given that, in the same week, the New York Times reported in detail that the United States and Israel were behind the Stuxnet worm that sabotaged uranium enrichment facilities in Iran. It is doubtful that Google would give the same warning to Iranians working in critical infrastructure facilities — “state-sponsored” may simply be a synonym for “China-sponsored.”

The new Google features, which are certain to further irritate the Chinese leadership, display a degree of boldness in the face of the Chinese market that few companies are willing to emulate. But the issues at play have broader implications than just the ongoing tensions between Google and China. They are part of a larger trend emerging in world politics: the growing political importance of the corporate giants that own and operate cyberspace. The decisions these companies take for commercial reasons can end up having political consequences, both domestically and internationally. And at the same time as they begin to flex their political muscles, the corporate behemoths that already control huge swaths of cyberspace are being deputized by governments with more-expansive policing responsibilities.

Are we entering a new age of corporate power in cyberspace? Are companies like Facebook and Google examples of a new type of “corporate sovereignty,” as Rebecca MacKinnon suggests?

The starting point for exploring these issues is the nature of both cyberspace itself and the universe of “Big Data” we have created as our communications ecosystem expands and multiplies in all directions. Over the past several decades, societies have come to depend increasingly on a globally networked information and communications infrastructure. Our lives have, in essence, been turned inside out, as data that we used to store on our desktops and in our filing cabinets are now entrusted to servers and networks beyond our immediate grasp and distributed across territorial jurisdictions around the planet. The quantity of data we enter into this common pool resource is exploding in all dimensions as the number of devices we use to connect to it, and to each other, expands dramatically. Though the concept of “exponential” is often misused in common parlance, it seems like an apt description of the staggering volumes of data that we collectively create as we go about our daily lives texting, tweeting, searching and sending.

Although our first-hand experiences in cyberspace may seem ethereal, it is important to understand that all of that data transits through cables, radio waves, cellular towers and other media, and is stored on physical machines somewhere. The infrastructure for all of this complex networking is primarily owned and operated by the private sector — companies that run our Internet services, telecommunications networks, mobile phones and satellites. Until recently, this very article would have been saved locally on a desktop computer; only 20 years ago it likely would have been drafted on a typewriter, or with pen and paper. But like so many other documents today, it was instead composed on Google Docs and edited on Dropbox.

This massively expanding universe of Big Data has spawned an ecosystem of its own, composed of companies whose job it is to repurpose and commercialize the personal data we give away for free. All of our “likes,” “pokes” and “tweets” are geolocated and cross-referenced to our purchasing habits, social networks and professional interests, then sold to companies to help them more accurately target their advertisements. As the profit from exploiting that data increases, the urgency to acquire, store, mine and analyze increasingly personalized information grows in a self-reinforcing manner, extending the reach of companies ever deeper into our personal lives, which in turn increases our own dependence on those platforms. Internet service providers (ISPs), web hosting companies, cloud and mobile providers, massive telecommunications and financial companies, and a host of new digital market organisms digest and process unimaginably large volumes of information about each and every one of us, which are then sold back to us as value-added products and services, or used in advertisements for yet more products and services.

The companies behind all of this data-repackaging have become behemoths, in terms of both revenue and size. The social networking site Facebook now counts more than 900 million users, or almost three times the current population of the United States and more than the entire number of Internet users in 2005. Even taking into account its recent disappointing IPO, the company’s current market capitalization stands at roughly $61 billion. But Facebook is puny compared to some of its competitors: Apple’s quarterly profits in 2012 exceeded $11 billion, and its current market capitalization stands at roughly $525 billion, which is more than Microsoft (at $244 billion) and Google (at $192 billion) combined.

The sheer size of these companies, combined with our dependence on them for almost all of our communications experiences, can make the decisions they take enormously consequential for society and politics. As MacKinnon puts it, “We have a problem: The political discourse in the U.S. and in many other democracies now depends increasingly on privately owned and operated digital intermediaries. Whether unpopular, controversial and contested speech has the right to exist on these platforms is left up to unelected corporate executives, who are under no legal obligation to justify their decisions.”

As these companies grow and mature, we should expect them to exercise political influence at home and abroad, in an attempt to shape public policy in accordance with their commercial interests. Google’s recent very public lobbying and advocacy activities are a case in point. In addition to issuing the user warnings outlined above, the company has undertaken a growing number of efforts designed to shore up Internet “openness” in line with its corporate preferences. It has supported free speech activists and research networks, such as its Google policy fellows network. It has held two major conferences, called Internet at Liberty, designed to raise awareness about threats to an open Internet and provide opportunities for free speech activists to network and share ideas. Its vigorous opposition to the SOPA and PIPA bills, backed by considerable financial resources, was seen by many as instrumental in their defeat in the U.S. Congress.

Of course, having such a powerful company support research and lobby on behalf of Internet openness is mostly a welcome development. But many are justifiably concerned about the implications of such a wealthy company throwing its weight behind political causes in a selective and partial manner. What would Google’s attitude be toward activists who focus their attention on Google itself? And if Google funds them, will these activists temper their criticism? What about Google’s resistance to privacy protections? Is its support for human rights online only selectively applied to those areas that mesh with the company’s private interests, but actively resisted in others that do not? These are all serious issues worthy of close consideration.

Deliberate lobbying and advocacy campaigns are the most obvious examples of corporate political power. But political consequences can emerge from seemingly apolitical decisions taken purely for commercial reasons, including the structure of interaction created by the terms of service itself. We have come to see social media like Twitter and Facebook as the infrastructure not just for entertainment but for political discourse, too, and we increasingly depend on them as the online equivalent of town halls or village squares. The companies themselves often contribute to this notion. For example, Facebook recently offered its users the opportunity to vote on its privacy policy, making its 900 million users the “largest electorate in the world.”

Rarely, however, do average citizens step back and examine the constraints, along with the opportunities, presented by these companies in the constitution of those public spheres — the “unprecedented synthesis of corporate and public spaces,” as Steve Coll described it in his New Yorker essay “Leaving Facebookistan.” In fact, according to the Electronic Frontier Foundation’s Jillian York, social media are less like town squares and more like shopping malls, bound by private regulations. Decisions taken by these companies for commercial or other reasons, without public input or accountability, can end up having enormous political consequences for freedom of speech and association as well as access to information. Here, it is important to remind ourselves of the political economy of social media: While social media may seem in many ways like “imagined communities,” in the language of Benedict Anderson, they are ones in which the members of the community are more like serfs than citizens, and in which users are both consumers and product. Social media might thus best be described as recessed quasi-public spheres: While we may increasingly use these platforms for political purposes, politics is only a byproduct of their intended purpose — and one that is highly constrained by terms of service that remain beyond the direct control of users.

The recessed nature of political participation in social media is exacerbated when one factors territorial jurisdiction into the equation. Most social media platforms have international customer bases, offering services all over the world. However, at the end of the day the companies are registered and headquartered in some political jurisdiction, and thus are subject to its laws and regulations. So when we use Gmail, Facebook and other social media platforms, we may be placing personal data under the jurisdiction of laws and regulations that we as citizens have no direct input in formulating. For example, any data stored on Google servers, no matter their physical location, are subject to U.S. Patriot Act provisions on data sharing because Google is a company domiciled in the United States and thus subject to U.S. law. In response to these concerns, Norwegian legislators recently proposed regulations that would restrict the use of Google products and services by public officials.

The jurisdictional dimensions of social media bring up an important and nuanced point: While social media companies may wield increasing political power, so too are they increasingly subject to the growing assertions of state power in cyberspace, particularly around security concerns. This is all the more salient given the sea change in governments’ approach to cyberspace security and governance over the past several years. Whereas in the early days of the Internet, state policy was either absent or deliberately hands-off, today governments are seeking to shape and secure cyberspace as an urgent priority. Because much of what constitutes cyberspace is in private sector hands, in order to secure it, governments must enlist or otherwise compel the private sector to police the data and networks it controls within state-based territorial jurisdictions. These pressures have led to a gradual downloading of policing responsibilities to the private sector in order to monitor users, filter access to information and control free speech for political or other purposes.

Naturally, expectations of how and under what authority private sector actors should police cyberspace can vary widely from one political jurisdiction to another. Compliance with “local laws” can bring with it tough choices and might even compromise larger principles, such as those relating to human rights or privacy protections. Companies like Google, Microsoft, Research in Motion (RIM), Yahoo!, Twitter, Facebook and many others have all faced these growing pressures as their operations expand worldwide. And though these tensions are most often associated with efforts to penetrate potentially lucrative markets in authoritarian countries, they can arise in democratically governed countries as well. Consider the case of India, which has pushed for an increasingly stringent set of requirements on Internet, social media and mobile providers to police the Internet. Recently, the Indian government requested that Yahoo!, Gmail and other email providers route all emails accessed in India through servers located in the country, even if the actual email account is registered in a foreign jurisdiction. After a prolonged negotiation shrouded in secrecy, it appears that RIM has been compelled to do likewise with the servers for its popular BlackBerry service.

The process of downloading policing responsibilities is also occurring in the advanced industrialized world, typically under the rubric of “lawful access,” which refers to the legalized interception of communications and the search and seizure of information by law enforcement agencies, often with coordinated responsibilities on the part of the private sector. To take just one example, Canada’s proposed Bill C-30 sparked a major controversy by mandating that ISPs and telecommunications companies retain and share information without judicial oversight and install surveillance equipment of the government’s choosing on their networks.

In other cases, the downloading of controls to the private sector has opened up new markets for the commercial exploitation of the very data companies are required to police. As companies are required to inspect their networks and data, new technologies, products and services are emerging that enable them to do so more effectively and efficiently. The American privacy researcher Chris Soghoian, who has studied how new policing responsibilities are affecting corporate behavior, found that some companies actually derive revenue from charging fees for “lawful access.” He notes that the volume of requests received by one U.S.-based wireless carrier, Sprint, grew so large that its 110-member in-house electronic surveillance team could not keep up. To address the problem, Sprint automated the process by developing a Web interface that gives law enforcement agents direct access to users’ data for a fee. That website was subsequently used more than 8 million times in a single year.

A similar dynamic can be seen in the market for products and services that assist ISPs in censoring communications. The OpenNet Initiative has tracked a growing number of nondemocratic countries using Western technologies to censor access to communications and monitor users’ online habits. For example, citizens of Yemen, Sudan, the United Arab Emirates and Qatar now live in communications ecosystems whose boundaries are patrolled by the Canadian company Netsweeper. The company helpfully sorts all Web content into ready-made categories, such as “alternative lifestyles,” “sex education” and “political,” so that ISP operators can simply check off the baskets of content to which they want to restrict citizens’ access. Unfortunately, many of the clients of these products are authoritarian regimes that lack transparency, leaving citizens behind a double blind of unaccountability: that of their own nontransparent governments and that of commercial filtering companies located in foreign jurisdictions and selling proprietary products.

The trends outlined above are certainly ominous for rights and democracy: We have created an environment in which our digital lives have been, in essence, turned inside out and our private data entrusted to enormous social media companies. We depend on these behemoths for social, economic and political activities to such an extent that their operating decisions can end up having enormous political consequences. At the same time, governments are pushing more and more responsibilities downward to the private sector to police cyberspace, setting up a potentially troubling dynamic involving the exercise of private authority over a quasi-public good without public accountability.

As concerning as these trends should be, however, they also need to be put in proper historical and contextual perspective. Cyber behemoths of the social media world may be formidable, but they are certainly not the first or only examples of large corporations wielding political influence. Only a few hundred years ago, two multinational corporations, the Hudson’s Bay Company and the English East India Company, effectively ruled large parts of Canada and India respectively. Throughout the 20th century, political economists lamented the role of multinational corporations, whose power and influence have often been used to manipulate the domestic political systems of developing countries. Giants of the resource-extraction industries throw their political weight around in Washington with a degree of sophistication that makes the Googles and Facebooks of the world look like amateurs. In short, the relationship between the private sector and public authority, and the impact of that relationship on citizen and consumer rights, is a constantly evolving problem, part of an ongoing process of industrialization, globalization and democratization. Social media giants are but the latest manifestation of this dynamic.

Nor should we necessarily lament the self-conscious political roles taken on by social media companies, as Google has done in recent years. Having these companies think of themselves and justify their services in broader terms, acting not just out of narrow self-interest but also as stewards tending a common pool resource, reflects an ethic of responsibility that should be encouraged for everyone’s benefit. Certainly there are times when such pretenses may be little more than self-serving fig leaves designed to sell products. But in the long run, nurturing such expectations may lead to more weighty considerations of the public good, as well as to norms of self-governance and corporate social responsibility.

Above all else, however, it is imperative that as our communication environment continues to undergo the radical transformation to the universe of Big Data, the timeless principles of division, oversight and restraint be rigorously applied to the institutions that own and operate the spaces through which we think, share and act, and to the public institutions that regulate those spaces. Privacy commissioners, antitrust regulators, independent watchdogs and consumer advocacy groups will be critical to the application of these principles. Here, it is essential that citizens not confuse social media companies with political institutions, and that political authorities not delegate core responsibilities to actors who are ill-suited to perform them. Citizens, private companies and states all have roles to play to ensure that cyberspace is an open commons of information within which citizens’ rights are protected by the rule of law. As we move swiftly into the bewildering new universe of Big Data, we will need to ensure those roles are clearly defined.

Ron Deibert is director of the Citizen Lab and Canada Centre for Global Security Studies, Munk School of Global Affairs, University of Toronto. He is a co-founder and principal investigator of the OpenNet Initiative and the Information Warfare Monitor projects (2003-2012). He is a co-editor of “Access Denied” (2008), “Access Controlled” (2010) and “Access Contested” (2011) with MIT Press. He was a co-author and principal investigator of the “Ghostnet” cyber-espionage report (2009), and the author of the forthcoming book “Black Code” (2013).
