In short, it is about collecting vast amounts of information and only constructing hypotheses about it afterwards. It is difficult to imagine the quasi-monstrous quantity of data to be analysed over twenty years: classed, sorted, and above all understood. How can sense be extracted from this multitude of parameters, the separate parts [pièces détachées] of what constitutes a human being and his or her life? It is a challenge as much for medical science as for the social and human sciences: how can one grasp the subject that can supposedly be deduced from this data?

This goes right to the heart of “Big Data” projects, in which the logistics of analysis rest on collecting a mass of information over a period of years, which is then stored and blended, and from which knowledge can supposedly be extracted. We are no longer in the traditional model of research, where one sets out from hypotheses to be tested. Here, a line of causality is expected to emerge from the data without even having been imagined beforehand. The data is supposed to deliver up its own understanding; it is thus supposedly able to make reality speak without prior interpretation. As if the choice of what is to be measured were not already in itself an interpretation, a hypothesis, that remains unspoken.

It must also be said that this type of strategy requires funding that completely outstrips current norms: the total budget has not yet been disclosed, but it is in all likelihood pharaonic, and it implies the support of a philanthropic foundation associated with New York University, which is to plan and oversee the launch in 2017. One could ask what it is that attracts those providing the funding, what it is that so fascinates the researchers, if not the hope of an absolute knowledge.

It is an attempt to attain a knowledge without remainder, like the one sought by the geographers in Borges’s famous story, published in The Aleph and Other Texts, who constructed a map so precise that it ended up covering the entire territory. One might even ask whether the researchers of the HUMAN project are fascinated less by pure knowledge than by the absence of remainder, by the fact that not even a scrap would any longer escape their investigation.

This gets right to the heart of a broader trend, a generalised will to master information – one of the major scientific shifts of recent years, with genetics as a driving force: one produces data, then links it to observed realities in order to build a matrix of prediction, in which a sufficient quantity of data would always lead to the same consequence.

One of the problems with the HUMAN study is that it concerns not only biological parameters but also social, individual, and private data – with a view to making links between the individual, their biology, and their behaviour, and then producing predictions of socio-behavioural risks, without granting the individual the benefit of the doubt, of the unpredicted, of everything still being possible.

Yet one can say that, even though science has always used its formulas and measurements to try to process the whole of the real, the real has resisted investigation and has never ceased to surprise us. Beyond the strategies of science, the real was always produced in a beyond, setting things off on other tracks.

This time, will the strategies of Big Data succeed in totally mastering it? Or, on the contrary, will a part that cannot be mastered return, perhaps in a way that is more destabilising than ever? Perhaps we should already be asking ourselves what scientific revolution will emerge when the paradigm of Big Data exhausts itself in making an inventory of the world.

* Science, October 2015