Studying the future
You don't need to be a scientist to up-skill as a futuristic badass.
Thinking about the future makes me, roughly speaking, want to shit my pants. Sure, thoughts about what comes next can be equal parts daunting and exhilarating. We can (and should) consider it optimistically, thinking about the countless opportunities that tomorrow holds for us: new technologies, new ideas, new ways of living, new horizons for our species. It can all get very exciting and uplifting to consider!
But, of course, for many good reasons, the future often concerns or terrifies us. It can be argued that fear is a well-justified response – perhaps a necessary one. There are countless different flashpoints of concern about the future, but I really wanted to start with the LAW.
LAW stands for “Lethal Autonomous Weapon”. In simple terms, this is a weaponised machine that can act with a large degree of independence. Human input is typically limited to giving the final command to engage; in some cases, however, LAWs have no human input whatsoever. The fear here is an obvious and longstanding one: the trope of the killer robot has haunted popular media for well over a century.
Landmines are a type of LAW. They operate autonomously, and with a savage lethality. Landmines are an especially brutal weapon because of their ability to operate effectively for decades, lingering underground like silent killers long after the conflict that brought them there has ended.
They are like a soldier gone mad: one who kills indiscriminately, and who refuses to be recalled from active duty. Landmines are brutal not just because they prevent nearby villages and communities from returning to normal life, but because of the way they ruthlessly pursue their purpose.
The way we have used landmines, the effects they have wrought, and our inability to predict and respond effectively to their unintended consequences say a lot, I think, about our capacity to control technological development more generally. If you think landmines are bad, consider what something like a modern-day drone could become if its development took a similar trajectory.
If it weren’t obvious already, LAWs and landmines should make it clearer what kind of narrative I’m building up to here: not all progress is good. Some developments create unforeseen and significant problems that persist stubbornly for decades.
This observation leads us to an equally obvious conclusion: we want to optimise the future – to limit the unintended negatives of things like technological development, while at the same time reaping the benefits that progress brings.
The Watchdogs of the Future
Across the world, spanning just about every conceivable industry, there exist groups with the explicit purpose of optimising the future. This involves monitoring, questioning, and yes, at times even challenging our ideas about human progress. Some, like the Future of Life Institute, focus on issues like LAWs, actively campaigning, disseminating information, and providing expert opinion and research to prevent another landmine-type problem. Folks like these are the “watchdogs of the future”.
I’ve given them a somewhat comic-book name, partly because I think these outfits are staffed by kick-ass sustainability superheroes, and partly because there is already the faint whiff of superhero drama lingering above this whole endeavour anyway.
Think tanks like the Future of Life Institute are often a revolving door of titans of industry, technology, philosophy, medicine, and other fields. All those big names and leading experts huddled together into incredibly well-resourced interdisciplinary teams, ready to save the world. It’s not without a little bit of inherent badassery, you know?
Within the field of Artificial Intelligence (AI) and technological progress more broadly, many other concerns loom on the horizon. AI-led automation threatens economic and social upheaval. Big data, surveillance technology, and similar developments will continue to have significant implications for personal privacy and government control.
In other fields like biotechnology, advances in genetics, cloning and life extension pose equally concerning questions about development and, perhaps more fundamentally, about what it means to be human in the future.
We could be just a few decades away from game-changing, physiology-redefining advances – things straight out of science-fiction stories. Things you would probably be startled to realise are already being pursued in substantive ways: full brain mapping at the resolutions required for digital emulation of a physical brain, downloading and uploading memories via implants, recording dreams.
The world of Promethean pseudo-immortal gods we see in shows like Altered Carbon, and the nightmare scenarios of brain tinkering that litter the series Black Mirror are closer to reality than I think most people realise. It’s enough to have scared many a billionaire into setting up some kind of “futures think tank” – sort of like Mr Burns releasing the hounds, but good.
These scary stories echo, I think, the general anxiety we all feel when on the cusp of paradigm-shifting change. The future at this specific time, I think rightly, makes us shit our collective pants. Perhaps shows like Altered Carbon and Black Mirror, which capture that dread so perfectly, are popular in part because they articulate our fears about the future on our behalf.
That’s why I think watchdogs of the future are so necessary going forward – they’re helping prevent outcomes we all collectively fear, just as writers, artists and storytellers are helping us voice that anxiety and grapple with it.
It’s also why I think, from the perspective of a university student, that subjects like sustainability are valuable in and of themselves, but far from the only ones that can be useful in this field. Any discipline can be applied towards sustainability, but some fields – like futures studies – are explicitly related to issues of sustainability while still carving out their own paths.
One thing that has become clear to me, though, after talking to students of futures studies is that part of the coursework involves teaching students how to look at things in ways many don’t, and how to ask questions few would think – or, importantly, want – to ask. There’s an element of diplomacy and communication skills required, which surprised me to learn but seems obvious in retrospect.
I already know, from studying the UN’s efforts in recent years, that convincing people to change because of what you claim to see happening in the future is a hard sell. It’s not enough to offer a convincing model or vision of the future; you also need the skills to get people to engage with it, and that’s true whether you’re talking about climate change or autonomous drones.
As a sustainability student, I often see the future as a murky, uncertain morass of hypotheticals. The science of climate change may offer high-certainty models of our ecological future, but predicting political and economic responses to that looming crisis is infinitely more difficult. Futures studies people seem less daunted, and the systematic ways in which they think about the future are unlike anything I’ve learned, at least so far, in my own travels through sustainability. That’s reassuring to me.
The neoliberal, competitive mindset we’re often encouraged to adopt as students might engender a sense of fear instead: “Oh shit, this other discipline is going to compete with me in the workforce!” The scale and diversity of the sustainability challenges and opportunities we see today and in the future, however, mean there’s plenty of hard work to go around – and room for every discipline to contribute.