States began treating civilian populations like data sets in the late 1990s. In 2002, the Defense Advanced Research Projects Agency (DARPA) established its Information Awareness Office (IAO), whose motto was Scientia est potentia – knowledge is power. Since knowledge was now being produced in “ones and zeroes”, the security apparatus understood it had to reorient its gaze. A new will to knowledge ensued: “collect it all, tag it, store it” (in order to foresee it all). So what, then, is new about the Cambridge Analytica scandal?
From the point of view of a company like Cambridge Analytica, you are not only an algorithmic identity – the sum of your social network – you are valuable because of it. The scandal has made this explicit to the public. Another novelty is that the Other in this case is not the supposedly all-seeing state. In fact, in the financialised intermeshing of companies, contractors, subcontractors, politicians, elections, apps, friends of friends, profile quizzes and user agreement fine print, there seems to be no discernible Other at all. In this way, the scandal can be understood as a contemporary symptom of the arbitrariness of jouissance. But how exactly?
Out of all the Facebook apps that could potentially have mined the data of 87 million Facebook users, it is perhaps no surprise that the one in question was a personality quiz designed by psychologists. Instead of being stored away in a raw data ocean, the quiz data was ‘psychographically’ remodelled, using other lifestyle, demographic and geographical data sets. This was done by a private company, Cambridge Analytica, and its subcontractors, to create a targeted advertising platform of political suggestion and ‘fake news’. In a world full of data vendors and lobbyists – and given Facebook’s own advertising standards – this is routine. Nevertheless, the scandal explicitly ties an already controversial election outcome to the use of personalised data for private profit, without user awareness.
As a representative of electoral democracy, Congress then stepped in to regulate. The Mark Zuckerberg testimony can be read as an attempt by the state to separate itself, and electoral politics, from the data practices of Facebook and Cambridge Analytica. At stake here is not so much whether the state, represented here by individual senators, is acting hypocritically, or to what extent it is complicit. Rather, what symptom does the testimony itself speak?
During Zuckerberg’s testimony his speech was quick, strategically evasive and precise – coached beforehand by a team of lawyers – while most senators stumbled over basic technical terminology. How can the state, in so far as it is represented here, step in to legally regulate this company if there is not a shared technical language? Even if that is somehow the point – that senators and users alike are left in the dark – the testimony still makes a certain historical lag between big tech and government apparent, at the very moment of their near overlap. Instead of being the object of regulation, then, Zuckerberg’s language, as representative of the new ‘network’ norm, in turn regulates the senators as a juridico-political ‘group’ presiding over a declining empire. One symptom the testimony speaks of is thus a larger historical shift from ‘group’ to ‘network’, at the level of the law and subjectivity.
For E.U. antitrust regulators, however, as well as for Facebook’s stock price and users, the scandal has had further consequences, demanding interior shifts in the company, notably in its ethical orientations. Yet even if these interior shifts work to put a check on Facebook’s misuses (for example, hiring more Burmese speakers to counter the use of Facebook in facilitating the Rohingya genocide), this does not by any measure mean a halt in the company’s reach or technical expansion. ‘Ethics’, in fact, comes hand in hand here with developments in Artificial Intelligence – the challenge of, for instance, creating an algorithm to detect hate speech. Artificial Intelligence, not the state, becomes the ‘regulator’ of an era defined by the arbitrariness of jouissance.
Cambridge Analytica whistleblower Brittany Kaiser responded to the scandal with the hashtag #ownyourdata. In this discourse, data, as your most “valuable asset”, modifies the juridico-philosophical fiction of personal autonomy, in turn raising the question of whether it is possible to reinsert free will into algorithmic determination. Yet Kaiser’s question seems framed more within the terms of private ownership than collective political autonomy: if your data is so profitable for corporations, why not get in on it yourself? Kaiser’s response is thus neither a dialectisation of the symptom nor a revival of ethics. Rather, by suggesting a further entrepreneurialisation of the self, it remains at the level of Homo numericus. Here, the distinction between the individual, as ‘un-dividable’, and the singular, a characteristic of the divided subject in psychoanalysis, is crucial.
Chamayou, Grégoire, “Oceanic Enemy: A Brief Philosophical History of the NSA”, Radical Philosophy 191, May/June 2015.
Voruz, Véronique, “Ethics and Morality in the Time of the Decline of the Symbolic”, Psychoanalytical Notebooks 25: Autism, 2012.
Cf. Jackie Wang, Carceral Capitalism (New York: Semiotext(e), 2018), for a discussion of the racialisation of algorithmic identity and the debt economy.
Voruz, ibid.
Immediately after the testimony, Facebook’s stock soared. But later in the summer, Facebook suffered the worst one-day stock drop in Wall Street history. “Facebook shares plunge explained”, https://www.ft.com/video/ad97b53d-3a27-4072-a5bd-84f8cbf7dd3c.
Cf. also “Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?” in the 17 September 2018 issue of The New Yorker.
Cf. Cyrus Saint Amand Poliakoff, “Data Transference”, LRO 76.
Image: Jaap Arriens/NurPhoto via Getty