My older son was two years and four months old the first time I saw him mesmerised. He was sitting on the kitchen floor in our apartment in Leiden, and my wife had propped her phone against the base of the cupboard so that he could watch something — a short animation of a Dutch nursery rhyme, bright colours, simple melody, thirty seconds on a loop — while she cooked. He sat perfectly still. His mouth was slightly open. His eyes did not move from the screen. I said his name. Nothing. I said it louder. Nothing. I crouched in front of him, blocking the screen, and he leaned sideways to see around me. Not toward me. Around me. He was not ignoring me. He had not decided that the screen was more important. He had, in a neurological sense, ceased to register that I was there. The screen had captured his attention with a completeness that no toy, no face, no voice in the room could match.
I picked him up. He screamed. Not the protest cry of a toddler who has been interrupted in play — I knew that sound well. This was the raw, bewildered rage of an organism that had been disconnected from a stimulus its brain had locked onto. I held him. He arched away, toward the phone. The animation was still looping. Within ten seconds of my putting him down, he was still again. Mouth open. Eyes fixed. Gone.
I am a zoologist. I recognise a supernormal stimulus when I see one.
The term was coined by Nikolaas Tinbergen in the early 1950s, drawing on his experiments with herring gulls. Tinbergen discovered that gull chicks, which normally peck at the red spot on their parent's beak to solicit food, would peck more vigorously at an artificial model with an exaggerated spot — bigger, redder, more contrasted than any real beak. A plain red knitting needle ringed with three white bands outperformed the parent's own head. The chick's nervous system was calibrated to respond to a specific signal, and when that signal was amplified beyond anything found in nature, the response was amplified too. The organism did not choose the supernormal stimulus. Its biology chose for it. The signal hijacked the circuitry. Tinbergen called these stimuli "supernormal" because they exceeded the parameters the nervous system had evolved to process, producing responses more intense than any natural trigger could elicit. He received the Nobel Prize in Physiology or Medicine in 1973, alongside Karl von Frisch and Konrad Lorenz, partly for this work. The implications for human behaviour were noted at the time. They were not acted upon. We knew what was coming. We let it come.
My son's phone was a supernormal stimulus. The animation delivered colour, movement, pattern, sound, and novelty at a rate and intensity that no physical object in the room could match — not his wooden blocks, not the cat, not his mother's face. His visual cortex was receiving input engineered to exceed the parameters his nervous system had evolved to process. He was two years old. He had no defence against it. Neither, as it turns out, do I. I check my phone before I check on my children in the morning. I know this. I have measured it. On four consecutive mornings in January 2024, I timed myself: the interval between my alarm sounding and my picking up my phone averaged eleven seconds. The interval between my alarm sounding and my walking to my sons' room averaged fourteen minutes. I have not changed this. Knowing what the stimulus is doing has not altered my response to it, because the circuitry it targets is older and faster than the knowledge. If that admission unsettles you, good. It should. I am a trained zoologist who studies attentional systems, and the device in my pocket defeats me every morning before I am fully conscious. What chance does a twelve-year-old have?
This chapter is about information — about what it does in nature, what it was supposed to do for humans, and what it does now. The story follows the same arc as every other system examined in this part of the book: a good impulse, a beautiful beginning, and a slow inversion that turned nourishment into parasitism.
In nature, information serves the organism. This statement is so foundational that it barely registers as a claim, but it is worth establishing precisely because the modern situation has reversed it so completely.
A vervet monkey on the Kenyan savanna produces three acoustically distinct alarm calls for three different predator classes. Robert Seyfarth, Dorothy Cheney, and Peter Marler demonstrated this in a landmark paper in Science in 1980: when a vervet sees a leopard, it produces one call and nearby monkeys run into the trees; when it sees a martial eagle, it produces a different call and monkeys look up or dive into bushes; when it sees a python, a third call, and monkeys stand upright and scan the ground. The calls are not reflexive screams. They are semantically specific signals — the first documented example of what might reasonably be called referential communication in a non-human animal. The information flows from sender to receiver. The receiver changes its behaviour. The behaviour increases the probability of survival. The information serves the organism.
A honeybee returning to the hive from a productive flower patch performs what Karl von Frisch, in research spanning from 1919 to 1945, described as the waggle dance — a figure-eight movement on the vertical surface of the comb, in which the angle of the waggle run relative to vertical encodes the direction of the food source relative to the sun, and the duration of the run encodes the distance. The vigour of the dance correlates with the quality of the source. Attending bees detect the dance through physical vibration and the scent carried on the dancer's body, then fly directly to the indicated location. Von Frisch received the Nobel Prize in 1973 for deciphering this communication system. The mechanism is elegant beyond anything most human engineers have designed: a hundred-milligram insect converts a three-dimensional spatial location into a two-dimensional symbolic representation, transmits it through choreography, and the receivers decode it accurately enough to fly to a flower patch they have never visited. The information serves the organism.
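Von Frisch's code is compact enough to express in a few lines. The sketch below is illustrative only: the function name and the calibration constant (a round figure of roughly 750 metres of distance per second of waggling, which in reality varies by study and by population of bees) are assumptions made for the sake of the example, not von Frisch's published values.

```python
# Illustrative decoder for the waggle-dance code described above:
# the angle of the waggle run relative to vertical encodes the bearing
# of the food source relative to the sun, and the duration of the run
# encodes the distance.

# Assumed calibration constant, for illustration only; real values
# vary between studies and bee populations.
METRES_PER_SECOND_OF_WAGGLE = 750

def decode_waggle_dance(sun_azimuth_deg, waggle_angle_deg, waggle_duration_s):
    """Convert a dance reading into a compass bearing and a distance.

    sun_azimuth_deg   -- compass bearing of the sun as seen from the hive
    waggle_angle_deg  -- angle of the waggle run, measured clockwise from
                         vertical on the comb
    waggle_duration_s -- duration of the waggle run, in seconds
    """
    # A run straight up the comb (0 degrees) means "fly toward the sun";
    # 90 degrees clockwise means "fly 90 degrees clockwise of the sun".
    bearing = (sun_azimuth_deg + waggle_angle_deg) % 360
    distance_m = waggle_duration_s * METRES_PER_SECOND_OF_WAGGLE
    return bearing, distance_m

bearing, distance = decode_waggle_dance(sun_azimuth_deg=180,
                                        waggle_angle_deg=90,
                                        waggle_duration_s=2.0)
print(bearing, distance)  # prints: 270 1500.0
```

The point of the sketch is the structure, not the numbers: two scalar dance parameters suffice to reconstruct a location the receiver has never visited.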
Birdsong attracts mates and delineates territory. The distinctive odour of a wolf pack's scent marks communicates occupancy to rival packs without the risk of physical confrontation. The distress calls of elephant calves summon the matriarch. The electric signals of weakly electric fish in murky water establish dominance hierarchies without contact. Across every phylum, every class, every order, the pattern is the same: information is produced by the organism, for the organism, in service of the organism's survival and reproduction. The information is, in the deepest biological sense, nutritive. It nourishes the animal's capacity to navigate its environment.
There is something worth noticing about all of these systems, something so obvious that it is easy to miss: in nature, the information stops. The vervet's alarm call is brief. The bee's dance has a duration. The birdsong ends at dusk. The signal is produced, received, acted upon, and then the organism returns to other activities — foraging, resting, socialising, grooming. No animal in any habitat on earth lives in a state of continuous information reception. The signal arrives, the organism responds, and then the channel closes. The off switch is not a feature of natural information systems. It is a precondition. Can you think of a single species, in any environment on this planet, that receives information continuously, without pause, from waking to sleep? You can. You are looking at it. We are the only one.
If you wanted to identify the single most consequential innovation in the history of human information systems, a reasonable candidate would be Johannes Gutenberg's moveable type printing press, developed around 1440 in Mainz. Before Gutenberg, the production of a single book required months of scribal labour. After Gutenberg, a press could produce several hundred copies per day. By 1500, an estimated twenty million volumes had been printed in Europe. By 1600, between one hundred fifty and two hundred million. Literacy rates in Europe, which stood at roughly thirty percent at the time of the press's invention, had risen to forty-seven percent by 1650. The Reformation, the Scientific Revolution, the Enlightenment — each was downstream of the capacity to replicate and distribute information cheaply, accurately, and at scale. Copernicus's De revolutionibus was printed in 1543. Within decades, astronomers across the continent were working from identical data. The printing press did not merely spread knowledge. It made knowledge cumulative. A scientist in Warsaw could build on a finding in Padua because the finding arrived intact. The information served the organism.
But within the press itself was the seed of the inversion. Tim Wu, in The Attention Merchants — a history of the commercial capture of human attention published in 2016 — traces the turning point to a specific person: Benjamin Day, a twenty-three-year-old printer who in 1833 launched the New York Sun, the first penny newspaper. Before Day, newspapers were expensive — six cents per copy, sold to subscribers, funded by subscription revenue. Day's innovation was not journalistic. It was economic. He sold the Sun for one cent — below the cost of production — and funded the deficit with advertising. The newspaper was no longer a product sold to readers. The readers were the product sold to advertisers. Day's model was spectacularly successful: by 1836, the Sun had a daily circulation of thirty thousand, the largest in the United States. The information still reached the organism. But the information was no longer there to serve the organism. The organism was there to serve the advertiser. This is the inversion. This is where the river changed direction. And we did not notice, because the water still looked the same.
Wu's history tracks this inversion through every subsequent medium — radio in the 1920s, television in the 1950s, cable in the 1980s, the internet in the 2000s — and in each case the pattern is identical. The medium begins as a tool for delivering information to the audience. Within a generation, the medium's business model transforms: the audience is delivered to the advertiser, and the information is shaped to maximise the size, duration, and emotional state of that delivery. The content is not incidental to this process. The content is the mechanism. Whatever captures attention most efficiently is what gets produced, and what captures attention most efficiently is, by the available evidence, not what informs the organism but what arouses it.
The arousal principle has a name. The newsroom phrase, codified by the 1980s, is "if it bleeds, it leads." The research behind it is now extensive. A study of one hundred and five thousand headline variations from the publisher Upworthy, published in Nature Human Behaviour in 2023, found that each positive word in a headline decreased the click-through rate by roughly one percent, while each negative word increased it by 2.3 percent. The pattern is neurological. Functional MRI scans demonstrate that threatening stimuli activate the amygdala and periaqueductal gray — the brain's threat-detection circuitry — earlier and more strongly than neutral or positive stimuli. The response is not a choice. It is a reflex inherited from the same evolutionary pressures that produced the vervet's alarm call: the organism that attends to threats survives more often than the organism that does not. The difference is that the vervet's threat was real. The threat on the screen is a representation — a signal designed to trigger the alarm circuitry without a predator in the vicinity. The organism responds as though a leopard has been sighted. No leopard has been sighted. The organism has been activated for the benefit of an advertiser. And we scroll on, alarm bells ringing in our nervous systems, unable to look away, because looking away from a threat is the one thing our biology will not permit. Do we see what has been done to us?
Edward Bernays understood this before the science confirmed it. The nephew of Sigmund Freud — his mother was Freud's sister Anna — Bernays published Propaganda in 1928, in which he argued, with a candour that reads now as either admirably transparent or profoundly chilling, that "the conscious and intelligent manipulation of the organised habits and opinions of the masses is an important element in democratic society." Bernays did not invent manipulation. He industrialised it. He rebranded cigarettes as feminist "Torches of Freedom" in 1929 to double the tobacco industry's addressable market. He engineered the idea that bacon and eggs constitute a proper American breakfast — on behalf of the Beech-Nut Packing Company — by soliciting endorsement letters from physicians. He did not sell products. He sold desires, anxieties, and identities, and he did so by applying his uncle's theories about the unconscious to the emerging science of mass communication. The organism was not informed. It was engineered.
The systems described above — the penny press, broadcast advertising, Bernays's public relations — operated within a constraint that now seems almost quaint: they were bounded in time. A newspaper was read and set aside. A radio programme had a beginning and an end. A television was switched off at night. The information arrived in discrete units, and between units the organism returned to its life — to conversation, to silence, to sleep, to thought. The off switch existed, and most people used it.
The constraint dissolved in the early 2010s, and the speed of the dissolution is historically unprecedented. In 2007, Apple introduced the iPhone. By 2012, half of American adults owned a smartphone. By 2015, sixty-eight percent. By 2024, ninety-seven percent of American adults aged eighteen to forty-nine carried, at all times, a device capable of delivering information continuously, designed by the most well-resourced engineering teams in human history to ensure that they did. The colonisation was complete within a single decade. Has any other environmental change in our species' history happened this fast?
Tristan Harris, a former design ethicist at Google who co-founded the Center for Humane Technology, has described the resulting information environment in terms borrowed from the gambling industry. The parallel is not metaphorical. It is mechanical. The smartphone's pull-to-refresh gesture — the downward swipe that reloads the feed — mimics the arm of a slot machine. The mechanism is identical: the user performs an action, experiences a moment of anticipation, and receives a reward of variable magnitude. Sometimes the refresh produces something engaging. Sometimes it does not. The variability is the point. B.F. Skinner demonstrated in the 1950s that variable reinforcement schedules — rewards that arrive unpredictably — produce the most persistent behaviour. A pigeon that receives food every tenth peck will stop pecking when the food stops. A pigeon that receives food at random intervals will peck for hours after the reward ceases. The feed is a variable reinforcement schedule delivered through a screen. The organism pecks. The organism cannot stop pecking. This is not a character failure. It is a design specification. We are all the pigeon. The feed is the lever.
The design is explicit. Harris coined the phrase "human downgrading" to describe what he calls an interconnected system of mutually reinforcing harms — addiction, distraction, isolation, polarisation, misinformation — that degrades human capacity in order to capture human attention. The feed does not aim to inform. It aims to engage. Engagement is measured in time-on-screen, and time-on-screen is maximised by content that provokes emotional arousal — specifically fear, outrage, and anxiety, because these are the emotions that activate the organism's threat-detection circuitry, the circuitry that evolved to override all other motivational systems. The vervet does not continue foraging when it hears a leopard alarm. It stops everything and attends. The feed triggers the leopard alarm. There is no leopard. The organism attends anyway.
Steve Rathje, Jay Van Bavel, and Sander van der Linden quantified this at scale. In a study published in the Proceedings of the National Academy of Sciences in 2021, they analysed social media posts from members of the US Congress and from news organisations across Facebook and Twitter. The finding was stark: posts about political opponents — out-group content — were shared approximately twice as often as posts about one's own political group. Out-group language was the single strongest predictor of engagement — stronger than emotional language alone, stronger than moral language, stronger than any other linguistic feature measured. The algorithm that governs what appears in a user's feed learns this. It promotes what is shared. What is shared is what provokes outrage about the other side. The feed, through pure optimisation, becomes a machine for manufacturing tribal hostility — not because anyone designed it to polarise, but because polarisation is what the metrics reward. Our outrage is their revenue. Our division is their product.
The phrase "attention economy" is now common enough to have lost its force. But it is worth restoring its literal meaning. An economy is a system for the allocation of scarce resources. Attention is a scarce resource — the organism can attend to only one thing at a time, and the hours of the day are finite. The technology companies that dominate the modern information environment — their combined market capitalisation exceeding ten trillion dollars — compete for a share of this finite resource, and their revenue is a direct function of the share they capture. Every second of attention has a price. Every swipe, every tap, every minute of scrolling generates data that is sold to advertisers who wish to place their message in front of the organism's eyes at the moment of maximum receptivity. The organism is not the customer. The organism is the inventory. The advertisers are the customers. The feed is the product. Benjamin Day's inversion, refined through a century and a half of increasing sophistication, has reached its logical conclusion: the entire information environment surrounding the animal is architected not to inform the animal but to monetise its attention. And we walk around inside it, every day, believing we are the customer.
I title this chapter after the screens themselves — the coloured boxes that now occupy the centre of nearly every room our species inhabits, the boxes we carry in our pockets, the boxes we mount on our walls, the boxes we place before our children and then wonder why the children have changed.
The data on what the boxes have done to the young of the species are now extensive enough that the debate, among researchers who study the question, has shifted from "is there an effect?" to "how large is the effect and through what mechanisms does it operate?"
Jean Twenge, a psychologist at San Diego State University, and Jonathan Haidt, a social psychologist at New York University's Stern School of Business, have compiled and analysed the most comprehensive datasets available. The timeline is consistent across every measure. Between 2010 and 2012 — the period during which smartphone ownership among American teenagers crossed the fifty percent threshold — trends in adolescent mental health that had been stable or improving for decades reversed sharply. Twenge, drawing on the CDC's Youth Risk Behavior Surveillance System and emergency department admission data, published figures that are difficult to read without stopping. Between 2010 and 2020, emergency room admissions for self-harm among girls aged ten to fourteen tripled. Among girls aged fifteen to nineteen, they more than doubled. Suicide rates for girls aged ten to fourteen increased by one hundred and sixty-seven percent. Depression among teen girls rose steeply — Twenge and Haidt have documented that rates of major depressive episodes in this group have increased by over one hundred and forty-five percent since 2010. These are not small effects obscured by statistical noise. They are population-level shifts visible to the naked eye in any graph that plots the data. Our daughters are telling us something. Are we listening?
The correlation is temporal: the inflection point coincides with smartphone adoption, not with any other candidate variable — not the 2008 financial crisis (which preceded the inflection by several years), not changes in diagnostic criteria (which remained stable), not academic pressure (which increased linearly rather than inflecting). It is dose-dependent: Twenge's analysis of the Monitoring the Future survey data shows that heavy users of smartphones were twice as likely as light users to report low well-being, and heavy users of social media were twice as likely as non-users to show clinical levels of depressive symptoms. And it is cross-national: the pattern appears in Canada, the UK, Australia, Scandinavia — every country in which smartphone adoption followed a similar timeline. Haidt, in The Anxious Generation, published in 2024, describes what he calls the "Great Rewiring of Childhood" — the replacement, between roughly 2010 and 2015, of a play-based childhood with a phone-based childhood, and the measurable consequences of that replacement across every available mental health indicator.
The mechanism is not mysterious. The adolescent brain is in a period of intense social calibration — the organism is building the neural architecture for social cognition, identity formation, and emotional regulation that will serve it for the rest of its life. This calibration evolved to occur through face-to-face interaction in small groups: reading facial expressions, interpreting tone of voice, learning to navigate conflict and repair rupture, experiencing the embodied feedback of physical presence. The coloured box replaces this with a disembodied information stream — algorithmically curated, comparison-saturated, and continuous. The organism receives constant social information without the social context that gives information its meaning. A "like" is social feedback stripped of face, voice, body, and nuance. A comment section is conflict without repair. An image feed is social comparison without the tempering effect of physical co-presence. The adolescent brain calibrates to this environment because the adolescent brain calibrates to whatever environment it finds itself in. That is not a flaw. That is exactly what adolescent brains evolved to do. The flaw is the environment. We built the environment. We handed it to our children. The flaw is ours.
I want to apply the zoological frame directly, because the analogy is not merely illustrative. It is, I believe, diagnostic.
Imagine a zookeeper who introduces a novel enrichment device into the enclosure of a social primate colony. The device is a screen — a coloured box that displays moving images and emits sounds. The device is designed not by the zoo's welfare team but by an external commercial entity whose revenue depends on the amount of time the animals spend interacting with it. The zoo installs the device and observes the results.
Within the first year, the following behavioural changes are documented. Several adolescent animals begin spending six to eight hours per day in front of the device, at the expense of foraging, social grooming, play, and sleep. Social behaviour in the colony shifts: physical proximity between individuals decreases; direct social interactions become shorter and less frequent; new dominance conflicts emerge around access to the device. Self-directed behaviour increases — animals exhibit more scratching, hair-pulling, and skin-picking, behaviours that in primate ethology are classified as indicators of anxiety. In a subset of adolescent females, self-injurious behaviour appears — a category of behaviour that is among the most alarming welfare indicators in captive primate management, associated with chronic stress, social isolation, and inadequate environmental conditions. Reproductive behaviour and appetite decline in the most affected individuals. Sleep architecture is disrupted.
The zoo's veterinary team reviews the data. The welfare assessment is unambiguous: the device is classified as a welfare hazard. It is removed immediately. The external commercial entity protests that the device was popular — the animals chose to use it, voluntarily, for hours at a time. The veterinary team responds that popularity is not a welfare indicator. An animal will consume sugar until its teeth rot. An animal will self-stimulate to the exclusion of all other activity. The question is not whether the animal engages with the device. The question is whether the device serves the animal's welfare. The data say it does not. The device is removed.
The smartphone was introduced to adolescent humans with no welfare assessment whatsoever.
There was no ethological baseline study. There was no controlled introduction. There were no welfare indicators defined in advance. There was no monitoring protocol. There was no threshold at which the device would be removed. There was no removal mechanism at all. The device was distributed to the entire juvenile population of the species simultaneously, by commercial entities whose revenue model depended on maximising the time the juveniles spent using it, and the welfare consequences were observed retrospectively, years later, by researchers who had no authority to intervene. The zookeeper who introduced a device to a primate colony under these conditions — no welfare assessment, no monitoring, no exit criteria, commercial interests governing the design — would face professional censure. For the human species, it was called Christmas morning. We wrapped the supernormal stimulus in paper and handed it to our children as a gift.
I am aware that this comparison will strike some readers as patronising — as though I am suggesting that adolescents are animals in a zoo. I am suggesting something more uncomfortable than that. I am suggesting that we afford less consideration to the welfare of our own young than a competent zoo affords to a troop of capuchin monkeys. The zoo has a welfare protocol. The zoo monitors behaviour. The zoo removes hazards. We handed our children a supernormal stimulus optimised for addiction and called it a gift. What does that make us?
In ecology, parasitism is defined as a relationship in which one organism benefits at the expense of another. The parasite extracts resources from the host — energy, nutrients, reproductive capacity — and redirects them toward the parasite's own survival and reproduction. The host is not destroyed, at least not immediately. The most successful parasites keep the host alive and functional, because a dead host is a lost resource. The ideal parasite extracts the maximum possible benefit while maintaining the host in a state of just-adequate functioning.
The modern information environment meets every criterion of a parasitic ecology. The host is the human organism. The resource being extracted is attention — the finite cognitive capacity of the animal, its capacity to notice, to process, to care. The parasite is the network of commercial entities — platforms, advertisers, content producers, algorithmic systems — that have evolved, through competitive selection pressures not unlike those in a natural ecosystem, to extract attention with increasing efficiency. The host is not destroyed. The host goes to work, eats meals, maintains basic social bonds. The host functions. But the resource extracted — the hours of attention redirected from embodied experience to screen-mediated engagement — is not trivial. Surveys of media use consistently find that the average American adult spends approximately seven hours per day consuming digital media. Seven hours. Nearly half of our waking lives. That time is not spent in social interaction, in physical activity, in creative endeavour, in rest, in contact with the natural environment. It is spent in front of a coloured box, generating data and revenue for the parasite. We feed it our attention. It feeds us back our fears.
The feedback loop is the parasitic mechanism. Make the host afraid — fear activates the attentional circuitry and binds the host to the information stream. Sell the host a product that promises to address the fear — insurance, supplements, political identity, the next scroll that will contain something reassuring. Repeat. The fear is never resolved, because resolved fear does not generate engagement. The loop is self-sustaining: the more the host consumes, the more afraid the host becomes, and the more afraid the host becomes, the more the host consumes. The American Psychological Association's 2017 Stress in America survey, subtitled The State of Our Nation, found that ninety-five percent of American adults followed the news regularly, and more than half reported that doing so caused them stress. A separate study found that anxiety and depression symptoms could be measured after just fourteen minutes of news consumption. The organism is being nourished with poison and returning for more, because the poison is engineered to taste like survival. We know it is making us sick. We cannot stop consuming it. That is not weakness. That is a parasite doing what parasites do.
The zoological parallel is precise. In 2016, Whisson and colleagues documented the collapse of a koala population at Cape Otway in Victoria — the study I discussed in Chapter 2. The koalas maintained good body condition scores right up until the crash. They looked fine. Their keepers — in this case, conservation biologists — did not see the decline until it was too late, because the decline was subclinical: chronic stress, immune suppression, progressive nutritional deficit, all masked by the appearance of normal behaviour. The coloured box produces the same pattern in the human organism. The organism looks fine. It goes to work. It smiles in photographs. The cortisol is elevated. The sleep is disrupted. The social bonds are thinning. The adolescent females are presenting at emergency departments. The organism is compensating, compensating, compensating. The trajectory is not a gradual slide. It is a threshold event — and we do not yet know where our threshold is.
It would be dishonest to end this chapter without acknowledging what was good, because the impulse underneath the information environment is not merely good — it is among the greatest achievements of the species.
The impulse is this: to share knowledge across the group.
Every information system humans ever built — every one — began as an attempt to make knowledge available to more members of the species. The oral tradition of hunter-gatherer societies, in which critical survival knowledge was encoded in stories and transmitted across generations, was information-sharing. The development of writing in Mesopotamia around 3400 BCE was information-sharing. The Library of Alexandria was information-sharing. The printing press was information-sharing. The public library movement of the nineteenth century — Andrew Carnegie alone funded over two thousand five hundred libraries — was information-sharing. The BBC's original charter, drafted in 1927, stated that the purpose of the corporation was to inform, educate, and entertain. Wikipedia, which as of 2024 contains over six point eight million articles in English, written and maintained by approximately thirty-nine thousand active volunteer editors — seventy-one percent of whom report being motivated by a desire to share knowledge, and sixty-nine percent by a belief that information should be free — is information-sharing. The impulse is magnificent. The impulse says: what I know should not be locked inside me; it should be available to anyone who needs it. This is our species at its best. This is what we are capable of when the system serves the animal instead of extracting from it.
The printing press made knowledge cumulative. The public library made it accessible. The internet made it instantaneous. These are real achievements. The species that shares knowledge across its members at this speed and scale has, in principle, the capacity to solve any problem its collective intelligence can frame. That capacity is real. It produced the eradication of smallpox, the sequencing of the human genome, the detailed understanding of climate systems that, were the knowledge acted upon, would allow the species to stabilise its own habitat. The good impulse works. It has always worked.
What happened is what happened to money, to justice, to education, to every system examined in this section of the book: the good impulse was captured by an economic model that inverted its function. Information that was supposed to serve the organism was restructured to serve the attention market. The printing press became the penny press. The public broadcaster became the cable network. The internet became the feed. The library became the slot machine. Each transition preserved the appearance of the original function — you are still receiving information, after all — while reversing the underlying relationship. The organism is no longer the beneficiary. The organism is the resource. We are the product. Our attention is what is being sold. And the tragedy is that most of us do not know it.
The tragedy is not that the information is false. Much of it is accurate. The tragedy is that the system selects for information that activates the organism's threat circuitry, regardless of whether the threat is real, relevant, or actionable. The most important news story in any given week is almost never the most emotionally arousing one. The most emotionally arousing story is the one that gets shared, discussed, and amplified — because the algorithm rewards engagement, and engagement is a function of arousal, and arousal is a function of threat. The organism is informed. It is informed about the wrong things, in the wrong proportions, through a channel that never closes. The information is technically nutritive. The diet is pathological. We are eating information junk food and wondering why our minds feel sick.
I said at the beginning that I check my phone before I check on my children. I want to return to this, because the admission is not incidental. It is the data.
I am a trained zoologist. I have spent years studying the biological needs of organisms. I understand, in detail, the mechanisms by which the coloured box captures attention. I have read Twenge, Haidt, Harris, Wu, Rathje, and every study cited in this chapter. I know what the device is doing to my attentional circuitry. I know that my children, sleeping in the next room, constitute — by every biological measure, by every evolutionary priority, by every ethical framework I hold — a more important stimulus than anything the screen could possibly display. I know all of this. And every morning, for eleven seconds, I reach for the phone. If you are reading this on a phone, you already know the feeling. You know the pull. You feel it even now.
If knowledge were sufficient to change behaviour, this chapter would not need to be written. I would read the research, adjust my behaviour, and the problem would be solved. But the research itself explains why the research is insufficient. The attentional capture mechanisms exploited by the coloured box target neural systems that operate below conscious deliberation. The dopaminergic reward pathways activated by variable reinforcement do not consult the prefrontal cortex before firing. The threat-detection circuitry that makes the notification badge feel urgent does not pause to assess whether the urgency is real. The organism responds to the stimulus before the organism's rational faculties have an opportunity to intervene. I know this. The knowledge does not help. The eleven seconds happen anyway.
This is not a confession of weakness. It is an observation about the asymmetry between individual cognition and industrial design. The coloured box in my pocket was designed by teams of engineers numbering in the thousands, informed by behavioural research costing billions, optimised through A/B testing on billions of users, with the explicit objective of maximising the probability that I will pick it up and the duration for which I will look at it. I am one organism, with one prefrontal cortex, opposing an industry. The contest is not close. And yet we blame ourselves for losing it. We call it a lack of discipline. We call it a failure of willpower. We locate the problem in the animal, not the enclosure. Sound familiar? It should. It is the same diagnostic error that runs through every chapter of this book.
The question this raises — the question that sits underneath every data point in this chapter — is not "why can't people put down their phones?" That question assigns the failure to the organism. A zookeeper does not ask why the animal keeps returning to the enrichment device that is damaging its welfare. A zookeeper asks why the device was placed in the enclosure, who placed it there, and how it can be removed. The failure is not the animal's. The failure is in the enclosure design. The organism is doing exactly what the organism's biology predicts it will do when exposed to a supernormal stimulus backed by a trillion-dollar industry. The surprise would be if it did anything else.
There is one final dimension of the modern information environment that distinguishes it from every previous information system, and it is perhaps the most damaging of all.
Every natural information system has a closing condition. The vervet's alarm call ends when the predator departs. The bee's waggle dance concludes when the dance is complete. The birdsong ceases at nightfall. The organism receives the signal, responds, and then the signal stops, and the organism returns to a state of baseline alertness — attending to its environment without being captured by any single stimulus within it. This baseline state is not empty. It is, in neurological terms, the state in which the default mode network is active — the brain at rest, engaged in consolidation, imagination, self-reflection, and social cognition. In this mode the organism processes experience, integrates memory, and generates the internal narratives that, for humans, constitute identity and meaning. It is not idle time. It is, by any welfare measure, essential cognitive maintenance. When was the last time you experienced it? When was the last time your brain was genuinely at rest — not sleeping, not consuming, not scrolling, just present?
The coloured box eliminates this state. The feed has no end. There is no bottom to the page, no final segment, no closing credits. The infinite scroll — a design innovation introduced by Aza Raskin in 2006, who has since publicly expressed regret — removes the natural stopping point that every previous information medium contained. The book has a last page. The newspaper has a back page. The television programme has an ending. The feed has none. The organism scrolls until it decides to stop, and the decision to stop must be generated internally, against the full force of variable reinforcement, social comparison, and threat-activated attention. Most organisms do not generate the decision until fatigue intervenes. The device accompanies the organism to bed. The blue-spectrum light emitted by the screen suppresses melatonin production — the research, detailed by Matthew Walker at Berkeley, is conclusive — delaying sleep onset, reducing sleep quality, and disrupting the circadian biology on which every physiological system depends. The organism lies in the dark, face illuminated, scrolling, and the information stream does not close because the information stream was not designed to close. It was designed to continue until the organism loses consciousness. We have all been there. The blue glow on the ceiling. The promise to ourselves that this is the last scroll. It never is.
Seven hours per day. Nearly half of waking life. The American Time Use Survey quantifies what every individual already senses: the coloured box has displaced enormous portions of embodied experience — physical activity, face-to-face social interaction, unstructured time outdoors, silence, sleep. Each of these is, as previous chapters have established, a biological requirement of the species. None of them is provided by the coloured box. The device provides stimulation in the place of nourishment. The organism mistakes one for the other because the stimulation activates the same reward circuitry that nourishment does, in the same way that a herring gull chick mistakes a stick with three red stripes for its mother. We are the gull chick. The striped stick is in our pocket.
I am sitting in my study in Leiden, writing this chapter on a screen, and the device that I have spent six thousand words describing as a welfare hazard is eighteen inches from my left hand. It has buzzed twice since I began this section. I have not looked at it. This is not because I have transcended the mechanism. It is because I am engaged in a task that is temporarily capturing my attention more effectively than the variable reinforcement schedule on the phone. The moment I stop writing, I will look at it. The mechanism is not paused. It is waiting.
The coloured box is a parasite, and like all successful parasites, it is invisible to the host. The host does not experience the interaction as parasitism. The host experiences it as information, entertainment, connection, identity — as something it chose. The host is not wrong about the choice. The host is wrong about the choosing. The choice architecture was designed by an industry that profits from the host's engagement, and the engagement is maximised by features that exploit the host's evolved attentional biases. The organism chose the supernormal stimulus in the same sense that the herring gull chick chose the striped stick: it did not. Its biology chose for it, and the stick was placed there by an entity that understood the biology better than the organism does.
The good impulse — to share knowledge across the species — remains intact. It is buried under the parasitic overlay, but it is still there, in every Wikipedia editor, every open-access journal, every teacher who shares a resource, every parent who reads aloud. The task is not to destroy the information infrastructure. The task is to restore its original orientation: information that serves the organism, rather than an organism that serves the information. A zookeeper who discovered that the enrichment programme had been redesigned by a commercial entity to maximise the entity's revenue rather than the animal's welfare would not blame the animal. The zookeeper would redesign the programme. That is our task. Not to abandon the technology. Not to smash the coloured boxes. But to redesign the relationship — to put the organism back at the centre, where it belongs.
The coloured boxes shape what the animal thinks about. They determine which threats feel urgent, which comparisons feel relevant, which emotions are activated and which are suppressed. They are, in this sense, the most powerful environmental feature of the modern enclosure — more influential than the food, the shelter, the working conditions, the justice system, or the schools, because they mediate the organism's perception of all of these. The animal that controls the information controls the experience of the enclosure.
But who controls the information? The same question applies to every dimension of the enclosure — who decides, who governs, who holds the power to shape the conditions in which the animal lives? Media shapes what the animal thinks about. Governance determines what the animal is allowed to do. The coloured boxes are the enclosure's sensory environment. The next chapter examines who designed the enclosure itself — and who, in the history of this strange and struggling species, has been given the keys.