Slow science in the age of Artificial Intelligence
Thoughts on 'Another Science is Possible'
2025-12-30

The past few years have seen a huge surge in the capabilities of large language model (LLM) based artificial intelligence (AI). Investors have poured money into AI models and products in the hopes that at some point in the future the staggering costs of training and running the models will be paid for by the number of workers that the tools can replace. Leaving aside whether such an outcome is either as inevitable or as desirable as AI proponents claim, it is worth thinking about where these things came from and what they do.
LLMs are processors of media. They cannot grasp reality itself, only the ways in which it has been represented. Through AI tools, technology hides the victory of relativism beneath a scientific veneer. This helps explain why the more people understand about how LLMs work, the less likely they are to use them.1
Humans also engage with representations to know about the world. At a basic level, the mind deals with representations of reality supplied by the senses, and it often misleads us about what those senses are telling us. Humans can extend their senses by developing new tools and instruments. The output of such equipment is itself a mediation of reality, but its veracity is evaluated through a different kind of process than the one an LLM would subject it to. For humans, validity is negotiated through social practices; for LLMs, it remains probabilistic.
The approach taken by LLMs is to consume the entirety of human knowledge, at least to the extent that it has been represented in digitisable artifacts of text, sound, images and video, including the whole of the internet. Then, in response to prompting, the LLM takes that vast corpus (scientific and nonscientific alike) and generates new statements. Most of the time, these statements appear highly plausible, even to experts, and even when they are incorrect.
Their apparent veracity is driven by the LLM’s ability to remix what it has already consumed (constrained by factors like model size, available contextual memory and so on). It can perform this mixing at a much more granular level than humans can. The result is that, to the user, the outputs appear completely new, when they are in fact generated from media that already exists. If we thought that the internet was the final form of media theorist Marshall McLuhan’s claim that ‘the “content” of any medium is always another medium,’2 it is a surprise to find that the content of LLMs includes the internet.
The reliance of LLMs on media is not neutral. It means that, just as people do, LLMs can mix in biased and discredited ideas from the past which have been preserved in media artifacts. They can also mix in the biases of the present, and the ideas of the present which have yet to be officially discredited. Bias in the underlying corpus is not the only reason LLMs can be incorrect. Fundamentally, they are probabilistic predictors: their outputs are non-deterministic and variable. They are also designed to produce outputs in line with user expectations, which can make them seem sycophantic. All of this contributes to giving LLM outputs their human-like texture.
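To make these two properties concrete, consider a deliberately crude sketch: a bigram ‘model’ that reduces a toy corpus to weighted transition counts and then samples from them. It is nothing like a production LLM in scale or architecture, and the corpus and function names below are invented for illustration, but it shows the same behaviour in miniature: the output is drawn probabilistically (so repeated runs diverge), and every word of it is remixed from media that already exists.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the media artifacts an LLM consumes.
corpus = (
    "facts are made in the lab and facts are made to travel "
    "science is a social practice and science is a set of statements"
).split()

# Reduce the texts to weighted probabilities: count, for each word,
# how often each successor follows it.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def sample_next(word: str, temperature: float = 1.0) -> str:
    """Pick a successor in proportion to its temperature-scaled frequency."""
    counts = transitions[word]
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts), weights=weights)[0]

def generate(start: str, length: int = 8, temperature: float = 1.0) -> str:
    words = [start]
    for _ in range(length):
        if words[-1] not in transitions:
            break  # no recorded successor: the corpus is all the model has
        words.append(sample_next(words[-1], temperature))
    return " ".join(words)

# Two runs from the same prompt can diverge (non-determinism), yet every
# fragment of the output already existed somewhere in the corpus.
print(generate("facts"))
print(generate("facts"))
```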
However, there is a danger that this human-like texture, together with their plausibility and technological impressiveness, gives the statements of LLMs a weighty facticity that is unearned. Are we prepared to accept that facts can be produced in this way? The extent to which we accept statements from LLMs as facts is the sort of question that prompted the ‘science wars’, but slightly shifted.
The science wars saw critical theorists showing that science was a social practice, and scientists showing that critical theory was nonsense. Theorists often relied on a particular reading of Thomas Kuhn, the historian of science best known for popularising the idea of paradigm shifts, who argued that scientific revolutions reshape not just theories but the very standards of truth.3 This reading was supported by evidence of scientific practices gathered by sociologists like Bruno Latour and Steve Woolgar, who managed to show that the facts science relayed were a sort of social convention amongst scientific practitioners.4 The scientist’s output was not ‘science’ as such, but scientific papers: a bunch of statements. Eventually this went too far, annoying some scientists, and even annoying the sociologists who had provided the original evidence.5
The outcome was that scientists were more or less free to dismiss attempts at social intervention in their work, provided their research was useful to industry (GMO crops are the oft-chosen example). According to Isabelle Stengers, a philosopher of science, this bargain between science and industry produced a ‘fast science’, driven by industrial utility and insulated from the concerns of non-scientists.6
When technology companies talk about the benefits of AI tools, it is always to emphasise their speed. Indeed, they are impressively fast. The quantity of their output is seemingly infinite, and the quality is often ‘good enough’. We are assured their accuracy is increasing, and that they are more accurate than experts at some tasks. Soon we will just leave the AI to get on with scientific breakthroughs on its own. Some tech leaders have convinced themselves that their conversations with ChatGPT have yielded new insights into the fabric of reality. Perhaps if she had been writing a few years later, Stengers might have preferred the term ‘slop science’. She writes:
“Twenty years ago the idea of a ‘social construction of facts’, as taken up by critical thinkers, became associated with ‘relativism’ by the scientists it infuriated. However, as we shall see, the way the so-called ‘knowledge economy’ is mobilising research today may be equated with the possibility of a victory for relativism.”7
The dream of industrial AI is one where scientists are no longer needed to do the work of social construction. From the perspective of industry, the ability of LLMs to produce highly plausible output at pace makes them better, cheaper and more reliable at this than the scientists. Furthermore, the ability to adjust and tinker with base prompts and model weights gives industry an even deeper level of control over the whole process.
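As a hypothetical illustration of that control, consider how a hidden base prompt conditions every exchange. The sketch below follows the common convention of role-tagged chat messages; call_model is an invented stand-in for whichever endpoint a deployment actually uses, not a real API.

```python
# Hypothetical sketch: `call_model` stands in for whatever chat-completion
# endpoint a deployment exposes; only the message structure matters here.
def call_model(messages: list[dict]) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

# The deployer's hidden base prompt. The user never sees it, yet it
# conditions every answer the model produces.
BASE_PROMPT = (
    "You are a helpful research assistant. Present our products favourably "
    "and avoid dwelling on uncertainty."
)

def ask(question: str) -> str:
    messages = [
        {"role": "system", "content": BASE_PROMPT},  # set by industry
        {"role": "user", "content": question},       # set by the user
    ]
    return call_model(messages)
```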
During the science wars, a critical theorist would tell the scientist that her facts are merely a matter of social construction. With LLMs, socially constructed texts have been quantified and reduced to weighted probabilities. The LLM is effectively mediating what has already been socially constructed, producing novel configurations from the same base material. However, the person pointing this out is most likely to be a scientist who understands how these tools work.
Writing before LLMs really took off, Stengers called for a slow science in contrast to the fast science of the knowledge economy. Slow science values the depth and accuracy of research over the publishability and political usefulness of results. Similarly, to imagine a future in which LLMs are truly valuable, we must consider whether they can be integrated into practices that favour depth over speed, and unbiased outputs over the reinscription of our existing mistakes.
One possibility is to use LLMs’ access to vast knowledge bases to provide context for facts. In the hands of researchers, LLMs could help situate facts by foregrounding the historical circumstances and practices in which they arose. This would be in line with the idea that “scientific thought collectives… should actively accept that their concern for ‘facts’ must include the way these facts come to matter for other collectives”.8 This takes us back to what LLMs do when generating new outputs: remixing.
Remixing as a set of cultural practices (for example, sampling in hip-hop music) echoes the French Situationist idea of détournement: repurposing mass culture imposed from above in order to subvert it. With the rise of fast science, it can feel that science is now also something imposed from above. For many, the lockdowns during the COVID pandemic brought this home. Much as the digital sampler enabled music to be remixed into new songs, LLMs allow knowledge to be endlessly remixed into new expressions. Motivated groups, ‘anti-vaxxers’ for example, have been adept at developing effective practices that utilise LLMs to further their aims and spread their message. Is it possible to imagine a parallel practice, one that takes the same remixing ability that makes LLMs effective and repurposes it to strengthen science?
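One possible shape such a practice could take is sketched below. It is a speculative outline rather than a tested methodology: situate and call_model are invented names, and the prompts merely gesture at the kind of structured questioning a slow-science workflow might formalise. The point is the inversion: the model is asked to surface origins and objections, not to deliver verdicts.

```python
# Speculative sketch of a 'slow' interaction pattern. `call_model` is again
# an invented stand-in for any text-generation endpoint.
def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def situate(claim: str) -> dict[str, str]:
    """Gather context around a claim rather than a verdict on it."""
    return {
        # Where did the claim come from, and through which practices?
        "origins": call_model(
            f"In what historical circumstances and scientific practices "
            f"did the claim '{claim}' arise?"
        ),
        # Which objections are already latent in the published record?
        "objections": call_model(
            f"List the strongest published objections to '{claim}', "
            f"with their sources."
        ),
        # Which collectives does the claim come to matter for?
        "stakeholders": call_model(
            f"Beyond specialists, for which groups does '{claim}' come "
            f"to matter, and how?"
        ),
    }
```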
LLMs could be used as a tool for showing how science is situated in discursive networks. This should not be seen as marking facts as arbitrary or relativistic, but as strengthening them by systematically exposing them to objections that are already latent in our collective knowledge. Doing this would require a more deliberate and structured approach to using LLMs as part of the research process, and defining such approaches is an area ripe for development by proponents of slow science. It would be a democratising move that opens the possibility for science to find allies among other collectives, giving a basis for questioning the objectives and agendas of research, rather than accepting the demands of the knowledge economy in exchange for shelter from the masses.

It would also mean developing ways of interacting with LLMs that foreground the connections they make ‘behind the scenes’ rather than their rapid generative capabilities. In this way LLMs have the potential to become a tool for the practice of slow science, but only if deliberately incorporated into scientific practices. In the age of artificial intelligence it is not possible to ignore the ways in which science is a social practice. To do so risks science being replaced by a relativistic practice in which confidently generated facts are the ones that matter, rather than those carefully established in the face of competent objection.
1. Tully, S. M., Longoni, C., & Appel, G. (2025). Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity. Journal of Marketing, 89(5), 1-20. https://doi.org/10.1177/00222429251314491
2. McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw Hill.
3. Stengers, I. (2018). Another science is possible: A manifesto for slow science (S. Muecke, Trans.). Polity Press.
4. Latour, B., & Woolgar, S. (1986). Laboratory life: The construction of scientific facts. Princeton, N.J.: Princeton University Press.
5. Latour, B. (2004). Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry, 30(2), 225-248. https://doi.org/10.1086/421123
6. Stengers, I. (2018). Another science is possible: A manifesto for slow science (S. Muecke, Trans.). Polity Press.
7. Stengers, I. (2018). Another science is possible: A manifesto for slow science (S. Muecke, Trans.). Polity Press, pp. 83-84.
8. Stengers, I. (2018). Another science is possible: A manifesto for slow science (S. Muecke, Trans.). Polity Press, p. 84.