With the election of Joe Biden as the next US president, a collective sigh of relief spread across much of the world’s media. Normality, as well as a vaccine, had appeared on the horizon.
But, unfortunately, while there is good news, it is unlikely that we will be able to put the events of 2020 to bed quickly. We argued in May that the immediate consequence of “the crisis” was likely to be short-term stasis; now that we’ve passed this peak, the actual impact of the pandemic will quickly become apparent.
This is a phenomenon the theorist Paul David termed “technological presbyopia”: the human tendency to overestimate change in the short term but underestimate change over the longer term.
You don’t need to be a historian of technology to see that one lasting legacy of the pandemic is likely to be even greater use of surveillance, artificial intelligence and algorithmic control. Most developed nations have used technology to try to manage public health, and it is probable that these organisational assumptions will set future paradigms of governance.
In Chinese cities, citizens are apparently colour-coded. In Britain, the National Health Service’s contact tracing app assigns risk scores to users. Some observers have proposed the use of enhanced apps that award points and benefits to users who assiduously wear masks and practise social distancing.
It is not unthinkable that in the future, a subtly branded social credit-inspired initiative, perhaps presented in the form of a game, could take root in liberal technocratic states.
Meanwhile, the International Monetary Fund has argued that the world needs “a new Bretton Woods”, a systematic attempt to rebalance the terms of global trade similar in scale to the initiatives that followed World War II.
The World Economic Forum (WEF) has promoted the idea of “the great reset”, an attempt to coordinate a comprehensive political and social response to the current crisis that goes far beyond seeing off Covid-19.
In one of the WEF’s widely shared videos, eight predictions are made for the year 2030. These include the 3D printing of human organs, the increasing use of drones and, most strikingly, “you’ll own nothing and be happy”.
At such a fraught moment, it’s no surprise to us historians that provocative visions of the future are popping up everywhere. But we should be wary of such visions.
While predicting the future has always been the preserve of the powerful, one lesson from the history of futurology is that if you can’t imagine a world where you’re not in control, then you’ve only a slim chance of predicting a future shaped by genuinely transformative change.
Equally, because so many of the inventions of the past two decades have been in medical and information technologies, we’re surrounded by sophisticated simulations of the future that tend to blind us to the more prosaic material realities around us.
For example, one thread in many predictions of the future is that we’re about to enter a fourth industrial revolution. This is not presented as a choice; organisations like the WEF claim it is an inevitability.
Except, it isn’t. We have, after all, been here before. Public concerns about the threat of automation were also strong in the United States in the late 1950s and the 1960s.
In 1972, Ernest Mandel argued that the world stood on the brink of a “third industrial revolution” in which robots would replace industrial labour and unleash a cybernetic future.
Arguably what actually happened is that working-class jobs in the West were exported to East Asia and Latin America, and investment in production methods stagnated because of the easy availability of cheap labour.
As David Edgerton put it in The Shock of the Old, for billions of people around the world the internet remains less important than corrugated iron.
Having forgotten this recent past, we’re highly susceptible, according to Meredith Broussard, to “technochauvinist” perspectives. Because we’re often told technology is the major cause of change, we can’t imagine a future in which the solution to our problems is not more and more technology.
This is particularly important because, while information technologies depend on decentralised infrastructure, they enable the centralisation of attention. We’re now seeing this playing out in the political sphere. To say this is not to be anti-technology; it is to be against technology that does not serve social purposes.
From a historical perspective, we’d argue that organisations like the WEF play a role in today’s society akin to that of the church during the industrial revolution: they aim to inspire a respect for order, cooperation, and elite political leadership.
They exist to bridge the gap between the vision of the future we’re currently being promised and its actuality. For the WEF, predictions are normative claims, the gaming out of a permanent present rather than genuine insight into the future.
The role of the future for the ordinary citizen is, or should be, emancipatory. It exists to carve out a space for moral and social reflection. It stresses that individual actions in aggregate can change the course of world affairs. In a time of flux like the present, this makes the collective identification and agreement of social priorities an urgent task.
Secular history doesn’t accept the idea of predestination; it assumes that, with enough imaginative ingenuity, “the end of days” can be endlessly deferred. The best lesson history can offer about predictions is that the future is always as much a social invention as a technological one.
