The internet was meant to set us free.
Tech has radically changed the way we live our lives. But have we unwittingly handed too much away to shadowy powers behind a wall of code, all manipulated by a handful of Silicon Valley utopians, ad men, and venture capitalists? And, in light of recent data breach scandals around companies like Facebook and Cambridge Analytica, what does that mean for democracy, our delicately balanced system of government that was created long before big data, total information and artificial intelligence? In this urgent polemic, Jamie Bartlett argues that through our unquestioning embrace of big tech, the building blocks of democracy are slowly being removed. The middle class is being eroded, sovereign authority and civil society are weakened, and we citizens are losing our critical faculties, maybe even our free will.
The People Vs Tech is an enthralling account of how our fragile political system is being threatened by the digital revolution. Bartlett explains that by upholding six key pillars of democracy, we can save it before it is too late. We need to become active citizens; uphold a shared democratic culture; protect free elections; promote equality; safeguard competitive and civic freedoms; and trust in a sovereign authority. This essential book shows that the stakes couldn’t be higher and that, unless we radically alter our course, democracy will join feudalism, supreme monarchies and communism as just another political experiment that quietly disappeared.
Jamie Bartlett is the bestselling author of The Dark Net, an examination of the hidden corners of the internet, and Radicals Chasing Utopia: Inside the Rogue Movements Trying to Change the World. He is the Director of the Centre for the Analysis of Social Media at the think-tank Demos. He also writes on technology for the Spectator, the Telegraph and several other publications on how the internet is changing politics and society. In 2017 Jamie presented the two-part BBC Two documentary series The Secrets of Silicon Valley. He lives in London.
Also by Jamie Bartlett
Radicals: Outsiders Changing the World
The Dark Net
Orwell versus the Terrorists: Crypto-Wars and the Future of Surveillance
IN THE COMING few years either tech will destroy democracy and the social order as we know it, or politics will stamp its authority over the digital world. It is becoming increasingly clear that technology is currently winning this battle, crushing a diminished and enfeebled opponent. This book is about why this is happening, and how we can still turn it around.
By ‘technology’ I do not mean all technology, of course. The word itself (like ‘democracy’) came from an amalgamation of two Greek words – techne, meaning ‘skill’, and logos, meaning ‘study’ – and therefore encompasses practically everything in the modern world. I am not referring to the lathe, the power-loom, the motor car, the MRI scanner or the F-16 fighter jet. I mean specifically the digital technologies associated with Silicon Valley – social media platforms, big data, mobile technology and artificial intelligence – that are increasingly dominating economic, political and social life.
It’s clear that these technologies have, on balance, made us more informed, wealthier and, in some ways, happier. After all, technology tends to expand human capabilities, produce new opportunities, and increase productivity. But that doesn’t necessarily mean that they’re good for democracy. In exchange for the undeniable benefits of technological progress and greater personal freedom, we have allowed too many other fundamental components of a functioning political system to be undermined: control, parliamentary sovereignty, economic equality, civic society and an informed citizenry. And the tech revolution has only just got going. As I’ll show, the coming years will see further dramatic improvements in digital technology. On the current trajectory, within a generation or two the contradictions between democracy and technology will exhaust themselves.
Strangely for an idea that nearly everyone claims to value, no one can agree on precisely what democracy means. The political theorist Bernard Crick once said its true meaning is ‘stored up somewhere in heaven’. Broadly speaking, it is both a principle of how to govern ourselves, and a set of institutions which allow for sovereignty to be derived from the people. Exactly how this works changes from place to place and over time, but easily the most workable and popular version is modern liberal representative democracy. When I use the term ‘democracy’ from now on, this is what I’m referring to (and I am only looking at mature, Western democracies – to look beyond that is a different subject entirely). This form of democracy typically means that representatives of the people are elected to make decisions on their behalf, and that there is a set of interlocking institutions making the whole thing work. This includes periodic elections, a healthy civil society, certain individual rights, well-organised political parties, an effective bureaucracy and a free and vigilant media. Even that is not enough – democracies also need committed citizens who believe in the wider democratic ideals of distributed power, rights, compromise and informed debate. Every stable modern democracy shares nearly all of these features.
This is not another book-length whinge about rapacious capitalists who masquerade as cool tech guys, nor a morality tale about grasping multinationals. Democracy has seen off plenty of them over the years. While there are certainly contradictions in minimising tax while claiming to empower people, doing so doesn’t necessarily betray insincerity. And, at first glance, technology is a boon to democracy. It certainly improves and extends the sphere of human freedom and offers access to new information and ideas. It gives previously unheard groups in society a platform and creates new ways to pool knowledge and coordinate action. These are aspects of a healthy democratic society too.
However, at a deep level, these two grand systems – technology and democracy – are locked in a bitter conflict. They are products of completely different eras and run according to different rules and principles. The machinery of democracy was built during a time of nation-states, hierarchies, deference and industrialised economies. The fundamental features of digital tech are at odds with this model: non-geographical, decentralised, data-driven, subject to network effects and exponential growth. Put simply: democracy wasn’t designed for this. That’s not really anyone’s fault, not even Mark Zuckerberg’s.
I’m hardly alone in thinking this, by the way. Many early digital pioneers saw how what they called ‘cyberspace’ was mismatched with the physical world, too. John Perry Barlow’s oft-quoted 1996 Declaration of the Independence of Cyberspace sums up this tension rather well: ‘Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world … Your legal concepts of property, expression, identity, movement and context do not apply to us. They are all based on matter, and there is no matter here.’ This is an exhilarating statement of the freedom offered by the internet that still holds digital aficionados in thrall. But democracy is based on matter, in addition to the legal concepts of property, expression, identity and movement. If you scratch beneath Silicon Valley’s corporate pieties about connectivity, networks and global communities, you’ll find that an anti-democratic impulse continues to exist.
In the following pages, I will argue that there are six key pillars that make democracy work, not just as an abstract idea, but also as a workable system of collective self-government that people believe in and support. These are:
ACTIVE CITIZENS: Alert, independent-minded citizens who are capable of making important moral judgements.
A SHARED CULTURE: A democratic culture which rests on a commonly agreed reality, a shared identity and a spirit of compromise.
FREE ELECTIONS: Elections that are free, fair and trusted.
STAKEHOLDER EQUALITY: Manageable levels of equality, including a sizeable middle class.
COMPETITIVE ECONOMY AND CIVIC FREEDOM: A competitive economy and an independent civil society.
TRUST IN AUTHORITY: A sovereign authority that can enforce the people’s will, but remains trustworthy and accountable to them.
In the following chapters I will examine these pillars, and explain why and how they are threatened. In some cases they are under siege already. In other cases I will look a little further ahead and argue that they soon will be. Whether it’s the rise of smart machines limiting our capacity for moral judgement, the reappearance of tribal politics, or the prospect of mass unemployment as hyper-efficient robots displace humans who need breaks, democracy is under assault from all sides. Some of these threats are familiar. There is nothing particularly new about angry politics, unemployment or citizen apathy, although they are taking a new form. But other threats are entirely novel: smart machines might replace human decision-makers, transforming political choices in ways we don’t yet fully understand. Invisible algorithms are creating new, hard-to-see sources of power and injustice. As more of the world gets connected, it will be easier for a small number of rogue actors to cause immense damage and harm, often beyond the reach of the law. We don’t have a clue how to deal with these problems.
In the final chapter I project how things might unfold if we continue on our current trajectory. We won’t witness a repeat of the 1930s, everyone’s favourite analogy. Rather, I believe that democracy will fail in new and unexpected ways. The looming dystopia to fear is a shell democracy run by smart machines and a new elite of ‘progressive’ but authoritarian technocrats. And the worst part is that lots of people will prefer this, since it will probably offer them more prosperity and security than what we have now.
But we shouldn’t start smashing the machines just yet. For one thing, there is currently a tech arms race between democratic societies and their Russian and Chinese counterparts, and it is important for the democracies to win this race. And if subjected to democratic control, the tech revolution could transform our societies in myriad positive ways. However, both tech and democracy need to change dramatically. At the end of the book, I have 20 suggestions for how democracy – and more importantly, each of us – must change in order to survive in an era of ubiquitous intelligent machines, big data and a digital public sphere.
At this point you might well think I am a hypocrite, that I probably wrote this book on a laptop, used Google for my research, will tweet about the publication date and hope it sells strongly on Amazon. That’s all true! Like many of us, I simultaneously rely on, love and detest all the technologies I write about. In fact, I have been working at the forefront of technology and politics for the last decade, at Demos, one of the UK’s leading think tanks. Since I started there in 2008 I’ve written pamphlets about how digital technology would breathe new life into our desperately tired political system. Over the years my optimism drifted into realism, then morphed into nervousness. Now it is approaching mild panic. I still believe that technology can be a force for good in our politics – and that many of the big tech companies hope it can be, too – but for the first time I am genuinely concerned about the long-term prospects of the system that Winston Churchill once famously referred to as ‘the worst kind of government, except for all the others that have been tried’.
The great tech pioneers, of course, do not share this concern because they are firm believers in a sunny techno-utopia and in their ability to take us there. I have been fortunate enough to interview some of them, and have spent a lot of time either in Silicon Valley itself or with people who inhabit that world. In my experience they are rarely evil and most have faith in the emancipatory power of digital technology. Many of the technologies they build are wonderful. But that makes them potentially more dangerous. Just like the eighteenth-century French revolutionaries, who believed they could construct a world based on abstract principles like equality, these latter-day utopians are busily dreaming up a society dictated by connectivity, networks, platforms and data. Democracy (and indeed the world) does not run like this – it is slow, deliberative and grounded in the physical. Democracy is analogue rather than digital. And any vision of the future that runs contrary to the reality of people’s lives and wishes can only end in disaster.
We live in a giant advertising panopticon which keeps us addicted to devices; this system of data collection and prediction is merely the most recent iteration in a long history of efforts to control us; it is getting more advanced by the day, which has serious ramifications for potential manipulation, endless distraction and the slow diminishing of free choice and autonomy.
FOUNDING MYTHS ARE important for industries. They shape how companies see themselves and reflect how they wish to be seen by others. The founding myth for social media is that they are the heirs to the ‘hacker culture’ – Facebook’s HQ address is 1 Hacker Way – which ties them to rule-breakers like 1980s phone phreaker Kevin Mitnick, the bureaucracy-hating computer lovers of the Homebrew Computer Club scene and, further back, to maths geniuses like Alan Turing or Ada Lovelace. But Google, Snapchat, Twitter, Instagram, Facebook and the rest have long ceased to be simply tech firms. They are also advertising companies. Around 90 per cent of Facebook and Google’s revenue comes from selling adverts. The basis of practically the entire business of social media is the provision of free services in exchange for data, which the companies can then use to target us with adverts.fn1
This suggests a very different, and far less glamorous, lineage: a decades-long struggle by suited ad men and psychologists to uncover the mysteries of human decision-making and locate the ‘buy!’ button that lurks somewhere in our frontal lobe. A more cogent founding story is the early years of American psychology, which emerged as a serious academic discipline a century ago alongside the beginnings of mass consumer culture. Psychology had been developing in Europe – and especially Germany – for some years, and was imported to the US before the First World War. But the American variety diverged from the European fascination with philosophical whimsies like ‘free will’ and ‘the mind’. Driven by pioneers such as James Cattell and Harlow Gale, it looked instead at how to turn the question of human decision-making into a hard science that could be used by business.1
In 1915 John Watson became president of the American Psychological Association. He argued that all human behaviour was essentially the product of measurable external stimuli, and could therefore be understood and controlled through study and experiment. This approach became known as behaviourism, and was later popularised further by the work of B.F. Skinner. The promise of malleable humans was catnip to companies hoping to sell products, and behaviourism spread through the corporate world like a virus. For some years, businesses – encouraged by Watson and others – believed they had godlike powers over desires, hopes, fears and, of course, shopping. Behaviourism was knocked out of fashion somewhat in the 1920s with the arrival of statistical market research (which, unlike behaviourism, actually required asking people questions). But together, behaviourism and market research signalled a more scientific approach to advertising that has been with us ever since.
If John Watson were alive today, he would be employed as ‘chief nudger’ at Google, Amazon or Facebook. Social media platforms are the latest iteration of the behaviourist desire to manage society through scientific observation of the mind, via a complete information loop: testing products on people, getting feedback and redesigning the model. Another name for this idea is what Yuval Noah Harari calls ‘dataism’: the belief that the mathematical laws of data apply to humans as well as machines. The notion that with enough data the mysteries of the human mind can be understood and influenced is perhaps the dominant philosophy in Silicon Valley today. In an oft-cited essay from 2008, Chris Anderson, then editor-in-chief of Wired, hailed the ‘end of theory’. Scientific theories were unnecessary, he said, now that we have big data. ‘Out with every theory of human behaviour … Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity.’ Google engineers don’t speculate and theorise about why people visit one site over another – they just try things and see what works.
In the bowels of every inspirationally branded tech firm some of the world’s smartest minds are paid small fortunes to work out why you click on things, and to get you to click on more things. Although the secret of Facebook’s success is ultimately the human psyche (humans are creatures that like to copy and watch each other and Facebook is the greatest system ever invented to allow us to see and be seen) this is supplemented by every imaginable tactic to keep you hooked. Nothing is left to chance, since even the smallest improvement can be worth a fortune. Tech companies run thousands of tests with millions of users – tweaking backgrounds, colours, images, tones, fonts and audio – all to maximise user experience and user clicks.2 Facebook’s homepage is carefully designed to be full of visible numbers – likes, friends, posts, interactions and new messages (and always in red! Urgent!). Autoplay, endless scroll and reverse chronological timelines are all sculpted to keep your attention.3
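To make that process concrete, here is a minimal, hypothetical sketch of the kind of test described above: two versions of a single interface detail are shown to different groups of users and the better-clicking variant is kept. The variant names and click rates are invented purely for illustration; this is not any company’s actual code or data.

```python
import random

def simulate_user(click_probability):
    """Return True if a simulated user clicks on the element."""
    return random.random() < click_probability

def run_ab_test(variants, users_per_variant=100_000):
    """Show each variant to its own group of users and measure click-through rate."""
    results = {}
    for name, true_rate in variants.items():
        clicks = sum(simulate_user(true_rate) for _ in range(users_per_variant))
        results[name] = clicks / users_per_variant
    return results

# Invented example: does a red or a grey notification badge get more clicks?
variants = {"red_badge": 0.061, "grey_badge": 0.058}
rates = run_ab_test(variants)
print(rates)                              # e.g. {'red_badge': 0.0609, 'grey_badge': 0.0581}
print("ship:", max(rates, key=rates.get))  # keep whichever variant performed better
```

Multiply a fraction of a percentage point by a couple of billion users and it becomes obvious why nothing is left to chance.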
It’s certainly working. Hordes of us are now members of a zombie army that walks while looking down at our phones and chats to distant disembodied avatars rather than whoever is sitting next to us. Like many people, I consider myself a witness to these changes rather than a participant, and so last year I downloaded an app called RealizD, which counted how often and for how long I checked my phone.
Monday 27th November: 103 pick-ups, 5 hours 40 minutes
Tuesday 28th November: 90 pick-ups, 4 hours 29 minutes
Wednesday 29th November: 63 pick-ups, 6 hours 1 minute
Thursday 30th November: 58 pick-ups, 3 hours 42 minutes
Friday 1st December: 71 pick-ups, 4 hours 12 minutes
According to these results, on average I pick up and check my phone 77 times per day. Take out sleep and that’s roughly once every twelve minutes. I’m not alone. According to Adam Alter, addictions to alcohol and tobacco are giving way to digital dependency, an epidemic of checking, picking up, swiping and clicking.4 Significant numbers of people now say they are addicted to the internet and could not live without their phone.5 Some academics even think declining drug and alcohol intake among young people might be caused by them getting their dopamine rushes through pings and beeps.fn2 ‘In 2004 Facebook was fun,’ writes Alter. ‘In 2016 it’s addictive.’6 This is no accident. Welcome to the attention economy.
The reason I check my phone roughly once every twelve minutes is the constant but inconsistent feedback. Studies have shown that the anticipation of information is deeply involved with the brain’s dopamine reward system, and that addictiveness is maximised when the rate of reward is most variable.7 This is designed-in too, through the use of ‘push notifications’, which are the little beeps and messages that pop up to let you know when something has arrived in your inbox. Similarly, the introduction of a ‘like’ button in 2009 came from a much older subfield of – yes, this really exists – Liking Studies, which has long shown that likability is an advert’s most potent characteristic.8 (Apparently Facebook originally planned an ‘awesome’ button.)9 Sean Parker, Facebook’s first President, recently called the ‘like’ button ‘a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology’. He said that he, Mark Zuckerberg and others understood this, ‘And we did it anyway’.10
The Holy Grail for the social media giants, just as it always has been for all ad men, is to understand you better than you understand yourself. To predict what you will do, say and even think. Facebook doesn’t collect data about you for fun; it does it to get inside your head. What the company knows about you, based solely on the untold hours you’ve spent there, is enough to fill several binders – interests, age, friends, job, activity and more. And that’s not all. Facebook has partnerships with quietly powerful ‘data brokers’ like Acxiom, which has information on over 500 million active consumers worldwide, with thousands of data points per person: things like age, race, sex, weight, height, marital status, education level, politics, buying habits, health worries and holidays, often scooped up from other shops and records.11 Armed with all this information, cross-referenced and analysed, companies can target you with ever more refined advertising.
Amazingly, this data collection frenzy is just getting started. By 2020 there will be around 50 billion internet-enabled devices – four times as many as there are now – each of them hoovering up data: cars, fridges, clothes, road signs and books. Your precious daughter playing with her doll: data point! Your partner adding some sugar to her tea: data point! Nothing will be safe from these giant, insatiable data monsters. Google has started to send Street View photographers into shops, offices and museums in order to create detailed 3D models of the places you might want to go. Smart homes want to know your preferred temperature, when you wash, what you cook, how long you sleep for. Everything will be collected, analysed and compared against everything else, in a relentless quest for dataism.
The data windfall is far beyond human analysis these days, which is why algorithms have become so central to the modern economy. An algorithm is a simple mathematical technique: a set of instructions that a computer follows in order to complete a task. That’s the technical description, but in truth these are the magic keys to the kingdom, which filter, predict, correlate, target and learn. Your life is already guided by algorithms that determine everything from Amazon recommendations and your Facebook news feed to the things that pop up on your Google search. Your dating matches. Your route to work. Your music. News aggregators. Your clothes.
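As a toy illustration of what such a set of instructions can look like – and emphatically not how any particular company’s recommendation system actually works – here is a few-line ‘people who liked X also liked Y’ algorithm, using invented data:

```python
from collections import Counter

# Invented example data: each set is one user's likes.
histories = [
    {"Kate Bush", "The Sopranos", "Terminator 2"},
    {"Kate Bush", "The Sopranos"},
    {"Kate Bush", "The Sopranos"},
    {"Kate Bush", "Terminator 2"},
    {"The Sopranos", "Terminator 2"},
]

def recommend(item, histories):
    """Suggest whatever is most often liked alongside `item`."""
    companions = Counter()
    for liked in histories:
        if item in liked:
            companions.update(liked - {item})
    return companions.most_common(1)

print(recommend("Kate Bush", histories))  # [('The Sopranos', 3)]
```

A real recommendation engine is vastly more sophisticated, but the principle – follow the instructions, count the data, surface the most likely next click – is the same.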
The scary thing about modern big data algorithms is how they can figure things out about us that we barely know ourselves. Humans are often quite predictable, and with enough data – even trivial or meaningless scraps such as what songs you play – algorithms can learn very important things about what sort of person you are.
Back in 2011, Dr Michal Kosinski, then a psychologist at Cambridge University, developed an online survey to measure respondents’ personality traits. For decades psychologists have developed techniques to work out someone’s personality through questionnaires.fn3 Kosinski was interested in whether online data might determine something important about a person’s personality without the need for a survey: perhaps it might be possible to generate a psychological profile simply based on things people had liked on Facebook. So Kosinski and his team set up several personality tests and posted them on Facebook, inviting people to respond. The surveys went viral – we do live in the age of narcissism, after all – and millions of people took part. By cross-referencing people’s survey answers against their Facebook likes, he was able to work out the correlation between the two. From that he created an algorithm that could determine, from likes alone, intimate details of millions of other users who hadn’t taken the survey. In 2013 he published the results, showing that easily accessible digital records of behaviour can be used to quickly and accurately predict sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender.12fn4
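The underlying logic is simple enough to sketch. What follows is an illustrative reconstruction – not Kosinski’s actual model, data or code – of the two steps just described: learn the relationship between likes and surveyed traits for people who took the test, then apply it to people who did not. The page counts, respondent numbers and the ‘openness’ trait below are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented data: rows are survey-takers, columns are Facebook pages;
# a 1 means that user liked that page.
n_survey_takers, n_pages = 1_000, 50
likes = rng.integers(0, 2, size=(n_survey_takers, n_pages))

# Their survey results: a binary trait such as "scored above average on openness".
# Here the trait is synthetically tied to the first few pages, purely for the demo.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_survey_takers) > 2.5).astype(int)

# Step 1: learn the correlation between likes and the surveyed trait.
model = LogisticRegression(max_iter=1_000).fit(likes, trait)

# Step 2: predict the trait for users who never took the survey,
# using nothing but their likes.
new_users = rng.integers(0, 2, size=(3, n_pages))
print(model.predict_proba(new_users)[:, 1])  # estimated probability each user has the trait
```

With millions of respondents and many thousands of pages, rather than the toy numbers above, predictions of this kind become strikingly accurate – which is what the 2013 results showed.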
In 2017 I went to visit Michal Kosinski at Stanford University, where he is now based. Many consider Stanford to be the university of Silicon Valley – it is nearby, and the founders of Cisco, Google, Hewlett-Packard and Yahoo! all graduated from there. Michal, who looks far too young to be a university professor, took me into his office in the Graduate School of Business (of course) and agreed to give me a demonstration of how this system works. I submitted my roughly 200 Facebook likes into his algorithm: The Sopranos, Kate Bush, Terminator 2, The Spectator magazine, etc. The algorithm went out into the world, looking at other people who had similar combinations, or variants of combinations. A little wheel spun around on the screen for a few seconds while the algorithm worked its magic and the results popped out: open-minded, liberal, artistic and extremely intelligent. This is obviously a very accurate system, I told Michal. Far more bizarrely, it also determined that I was not religious, but that if I were, I’d be Catholic. I couldn’t have put it better myself – I went to a Catholic comprehensive school aged 5–18, and while I have a soft spot for the religion, I am no practising churchgoer. Similarly, it predicted my job to be in journalism, and that I had a strong interest in history; I studied history at university and have a Master’s degree in historical research methods.
All this from Facebook likes, which have nothing to do with my background or upbringing. ‘This is one of the things people don’t get about these predictions,’ Michal told me. ‘Obviously if you like Lady Gaga on Facebook, I can tell you like Lady Gaga … what’s really world-changing about those algorithms is that they can take your music preferences or your book preferences and extract from this seemingly innocent information very accurate predictions about your religiosity, leadership potential, political views, personality and so on.’ I’ll show you in Chapter Three how political parties might use this at election time. But I left Michal’s office with the sensation that this sort of insight was very exciting, but also a new source of power that we barely understand, let alone control.
The logical end goal of dataism is for each of us to be reduced to a unique, predictable and targetable data point. Anyone who’s tried to talk to a chatbot or seen an ad for something they just bought knows that these technologies are far from perfect. But the direction of travel is clear, and it is easy to imagine the ways in which every choice you make might one day be subject to a series of algorithmically informed nudges, all carefully and perfectly calibrated around you. Just imagine! Get up nice and early, thanks to the auto-set alarm that knows your calendar and average getting-ready time (factoring in typical traffic). A data-driven breakfast would be proposed after a quick analysis of the health stats of you and thousands of others like you, to ensure the perfect balance of nutrients you might need today. (Plus: a small reduction in your health insurance premiums, if you take its advice.) Hop in your driverless car, which is just returning from a night shift earning money for you as an autonomous taxi. And, as you relax into the journey, your personal AI assistant bot will advise you on what to say in today’s key sales meeting, based on previous performance and who else will be present. Before being whisked back home …
The possibilities for advertising here would of course be phenomenal. If you fell off the diet bandwagon, or were even statistically likely to fall off it – based on an analysis of sleep patterns, diet, word use on Facebook and voice tone – you would get an ad for the local gym. A personal AI assistant would be telling you things you need, exactly when you need them, and you wouldn’t even know why.
It’s easy to lose sight of the positives, because this sounds like an episode of Charlie Brooker’s Black Mirror. I run a centre at Demos that specialises in big data analysis, and we’ve found new ways to understand social trends, illness, terrorism and much more. Data can and will help people hold governments to account by making more information about departmental performance available. It’s inevitable that we will one day have personal AIs that negotiate for us with company AIs (think credit cards, car loans, pensions and investments).14 This is all good news from the user’s perspective.
However, this whole pattern that leads from data collection to analysis to prediction to targeting presents three challenges to the life of a democratic citizen. The first is the question of whether being under the glare of social media and constant data collection allows people to mature politically. The second is the danger that these tools are used to manipulate, distract and influence us in ways that are not in our best interests. The third is more hypothetical and existential, concerning whether we even trust ourselves to make important moral decisions at all. We’ll take each one in turn.
Back in 1890, in a landmark – and still highly relevant – article for the Harvard Law Review