Preface. This book conveys a sense of wonder and awe about how our brains work and how we become who we are. I think if you read the excerpts below you will understand why Artificial Intelligence will probably never come close to general intelligence or to being as smart as human beings, who are able to learn, have emotions, and consequently have motivation and curiosity. Heck, I doubt AI will even become as intelligent as ants after reading “Journey to the Ants: A Story of Scientific Exploration” by Bert Holldobler and Edward O. Wilson.
Our brains have 86 billion neurons with 200 trillion connections that constantly grow, die, or change as we live our lives and learn from our experiences. Computer neural networks will never be that complex or able to change themselves so flexibly; they are hard-wired, not LIVE wired. Which means AI is not an existential threat unless some idiot gives AI software written by humans complete authority over deciding whether to launch nuclear weapons after Congress votes to take that choice away from the President. Now there’s an existential threat…
Here are some of the main ideas:
“…Our brain machinery isn’t fully preprogrammed, but instead shapes itself by interacting with the world. As we grow, we constantly rewrite our brain’s circuitry to tackle challenges, leverage opportunities, and understand the social structures around us. Our species has successfully taken over every corner of the globe because we represent the highest expression of a trick that Mother Nature discovered: don’t entirely pre-script the brain; instead, just set it up with the basic building blocks and get it into the world.
If you had a magical video camera with which to zoom in to the living, microscopic cosmos inside the skull, you would witness the neurons’ tentacle-like extensions grasping around, feeling, bumping against one another, searching for the right connections to form or forgo, like citizens of a country establishing friendships, marriages, neighborhoods, political parties, vendettas, and social networks. The elaborate pattern of connections in the brain—the circuitry—is full of life: connections between neurons ceaselessly blossom, die, and reconfigure. You are a different person than you were at this time last year, because the gargantuan tapestry of your brain has woven itself into something new.
Yesterday you were marginally different. And tomorrow you’ll be someone else again.
Imagine you were born 30,000 years ago. You have exactly your same DNA, but you slide out of the womb and open your eyes onto a different time period. What would you be like? Would you relish dancing in pelts around the fire while marveling at stars? Would you bellow from a treetop to warn of approaching saber-toothed tigers? Would you be anxious about sleeping outdoors when rain clouds bloomed overhead? Whatever you think you’d be like, you’re wrong. It’s a trick question. Because you wouldn’t be you. Not even vaguely. This caveman with identical DNA might look a bit like you, as a result of having the same genomic recipe book. But the caveman wouldn’t think like you. Nor would the caveman strategize, imagine, love, or simulate the past and future quite as you do.
Why? Because the caveman’s experiences are different from yours. Although DNA is a part of the story of your life, it is only a small part. The rest of the story involves the rich details of your experiences and your environment, all of which sculpt the vast, microscopic tapestry of your brain cells and their connections. What we think of as you is a vessel of experience into which is poured a small sample of space and time. You imbibe your local culture and technology through your senses. Who you are owes as much to your surroundings as it does to the DNA inside you.
Contrast this story with a Komodo dragon born today and a Komodo dragon born 30,000 years ago. Presumably it would be more difficult to tell them apart by any measure of their behavior. What’s the difference? Komodo dragons come to the table with a brain that unpacks to approximately the same outcome each time. The skills on their résumé are mostly hardwired (eat! mate! swim!), and these allow them to fill a stable niche in the ecosystem. But they’re inflexible workers. If they were airlifted from their home in southeastern Indonesia and relocated to snowy Canada, there would soon be no more Komodo dragons.”
Here are some paragraphs from the book about the difference between the brain and a computer:
Contemporary AI could never, by itself, decide that it finds irresistible a particular sculpture by Michelangelo, or that it abhors the taste of bitter tea, or that it is aroused by signals of fertility. AI can dispatch 10,000 hours of intense practice in 10,000 nanoseconds, but it does not favor any zeros and ones over others. As a result, AI can accomplish impressive feats, but not the feat of being anything like a human.
Current AI algorithms don’t care about relevance: they memorize whatever we ask them to. This is a useful feature of AI, but it is also the reason AI is not particularly humanlike. AI simply doesn’t care which problems are interesting or germane; instead, it memorizes whatever we feed it. Whether distinguishing a horse from a zebra in a billion photographs, or tracking flight data from every airport on the planet, it has no sense of importance except in a statistical sense.
Think about riding a bicycle, a machine that our genome presumably didn’t see coming. Our brains originally shaped themselves in conditions of climbing trees, carrying food, fashioning tools, and walking great distances. But successfully riding a bicycle introduces a new set of challenges, such as carefully balancing the torso, modifying direction by moving the arms, and stopping suddenly by squeezing the hand. Despite the complexities, any seven-year-old can demonstrate that the extended body plan is easily added to the résumé of the motor cortex.
Think of how effectively this strategy creates biodiversity. A live-wired brain does not need to be swapped out for each genetic change to the body plan. It adjusts itself. And that’s how evolution can so effectively shape animals to fit any habitat. Whether hooves or toes are appropriate to the environment, fins or forearms, or trunks or tails or talons, Mother Nature doesn’t have to do anything extra to make the new animal operate correctly. Evolution really couldn’t work any other way: it simply would not operate quickly enough unless body-plan changes were easy to deploy and brain changes followed without difficulty.
Related posts on Artificial Intelligence here
Alice Friedemann, www.energyskeptic.com. Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, “When Trucks Stop Running: Energy and the Future of Transportation”, “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Women in ecology. Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts
***
Eagleman D (2020) Livewired: The inside story of the ever-changing brain. Vintage.
The bawling baby eventually stops crying, looks around, and absorbs the world around it. It molds itself to the surroundings. It soaks up everything from local language to broader culture to global politics. It carries forward the beliefs and biases of those who raise it. Every fond memory it possesses, every lesson it learns, every drop of information it drinks—all these fashion its circuits to develop something that was never pre-planned, but instead reflects the world around it.
The human brain consists of 86 billion cells called neurons: cells that shuttle information rapidly in the form of traveling voltage spikes. Neurons are densely connected to one another in intricate, forest-like networks, and the total number of connections between the neurons in your head is in the hundreds of trillions (around 0.2 quadrillion). To calibrate yourself, think of it this way: there are 20 times more connections in a cubic millimeter (a cube about 0.04 inch on a side) of cortical tissue than human beings on the entire planet.
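A quick back-of-the-envelope check, using only the two figures quoted above, shows what they imply about the average neuron. The arithmetic is my own sketch, not the book's:

```python
# Sanity-check the figures quoted above: 86 billion neurons,
# ~0.2 quadrillion (200 trillion) connections between them.
neurons = 86e9
connections = 0.2e15

# Average connections per neuron implied by these two numbers
per_neuron = connections / neurons
print(f"average connections per neuron: {per_neuron:,.0f}")  # ≈ 2,326
```

Each neuron, on average, touches a few thousand others, which matches the "reaches out to touch 10,000 neighbors" figure later in the excerpt to within an order of magnitude.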
The brain is a dynamic system, constantly altering its own circuitry to match the demands of the environment and the capabilities of the body.
When you learn something—the location of a restaurant you like, a piece of gossip about your boss, that addictive new song on the radio—your brain physically changes. The same thing happens when you experience a financial success, a social fiasco, or an emotional awakening. When you shoot a basketball, disagree with a colleague, fly into a new city, gaze at a nostalgic photo, or hear the mellifluous tones of a beloved voice, the immense, intertwining jungles of your brain work themselves into something slightly different from what they were a moment before.
In contrast, humans thrive in ecologies around the globe. What’s the trick? It’s not that we’re tougher, more robust, or more rugged than other creatures: along any of these measures, we lose to almost every other animal. Instead, it’s that we drop into the world with a brain that’s largely incomplete. As a result, we have a uniquely long period of helplessness in our infancy. But that cost pays off, because our brains invite the world to shape them—and this is how we thirstily absorb our local languages, cultures, fashions, politics, religions, and moralities. Dropping into the world with a half-baked brain has proven a winning strategy for humans.
Our DNA is not a fixed schematic for building an organism; rather, it sets up a dynamic system that continually rewrites its circuitry to reflect the world around it and to optimize its efficacy within it.
Think about the way a schoolchild will look at a globe of the earth and assume there is something fundamental and unchanging about the country borders. In contrast, a professional historian understands that country borders are functions of happenstance and that our story could have been run with slight variations: a would-be king dies in infancy, or a corn pestilence is avoided, or a warship sinks and a battle tips the other way. Small changes would cascade to yield different maps of the world.
Neurons are locked in competition for survival. Just like neighboring nations, neurons stake out their territories and chronically defend them. They fight for territory and survival at every level of the system: each neuron and each connection between neurons fights for resources. As the border wars rage through the lifetime of a brain, maps are redrawn in such a way that the experiences and goals of a person are always reflected in the brain’s structure. If an accountant drops her career to become a pianist, the neural territory devoted to her fingers will expand;
At the end of 1945, Tokyo found itself in a bind. Through the period that spanned the Russo-Japanese War and two world wars, Tokyo had devoted 40 years of intellectual resources to military thinking. This had equipped the nation with talents best suited for only one thing: more warfare. What would they do with their vast numbers of military engineers? Over the next few years, Tokyo shifted its social and economic landscape by redeploying its engineers toward new assignments. Thousands were tasked with building the high-speed bullet train known as the Shinkansen. Those who had previously designed aerodynamic navy aircraft now crafted streamlined railcars. Those who had worked on the Mitsubishi Zero fighter plane now devised wheels, axles, and railing to ensure the bullet train could operate safely at high speeds. It beat its swords into plowshares.
You type rapidly on your laptop because you don’t have to think about the details of your fingers’ positions, aims, and goals. It all just proceeds on its own, seemingly magically, because typing has become part of your circuitry. By reconfiguring the neural wiring, tasks like this become automatized, allowing fast decisions and actions. Compare this with hitting the correct keys on a musical instrument you’ve never played before. For these sorts of untrained tasks, you rely on conscious thinking, and that is comparably quite slow.
Consider the feeling of stumbling on a diary entry that you wrote many years ago. It represents the thinking, opinions, and viewpoint of someone who was a bit different from who you are now, and that previous person can sometimes border on the unrecognizable.
Like the globe, the brain is a dynamic, flowing system, but what are its rules? The number of scientific papers on brain plasticity has bloomed into the hundreds of thousands. But even today, as we stare at this strange pink self-configuring material, there is no overarching framework that tells us why and how the brain does what it does.
The thrill of life is not about who we are but about who we are in the process of becoming.
By 1874, Charles Darwin wondered if this basic idea might explain why rabbits in the wild had larger brains than domestic rabbits: he suggested that the wild rabbits were forced to use their wits and senses more than the domesticated ones and that the size of their brains followed. In the 1960s, researchers began to study in earnest whether the brain could change in measurable ways as a direct result of experience. Rats raised in enriched environments performed better at tasks and were found at autopsy to have long, lush dendrites (the treelike branches growing from the cell body). In contrast, rats from deprived environments were poor learners and had abnormally shrunken neurons.
In the early 1990s, researchers in California realized they could take advantage of autopsies to compare the brains of those who completed high school with those who completed college. In analogy to the animal studies, they found that an area involved in language comprehension contained more elaborate dendrites in the college educated.
Why was Einstein Einstein? Surely genetics mattered, but he is affixed to our history books because of every experience he’d had: the exposure to cellos, the physics teacher he had in his senior year, the rejection of a girl he loved, the patent office in which he worked, the math problems he was praised for, the stories he read, and millions of further experiences—all of which shaped his nervous system into the biological machinery we distinguish as Albert Einstein. Each year, there are thousands of other children with his potential but who are exposed to cultures, economic conditions, or family structures that don’t give sufficiently positive feedback. And we don’t call them Einsteins.
If DNA were the only thing that mattered, there would be no particular reason to build meaningful social programs to pour good experiences into children and protect them from bad experiences. But brains require the right kind of environment if they are to correctly develop.
When the first draft of the Human Genome Project came to completion at the turn of the millennium, one of the great surprises was that humans have only about 20,000 genes. Given the complexity of the brain and the body, biologists had assumed that hundreds of thousands of genes would be required. So how does the massively complicated brain, with its 86 billion neurons, get built from such a small recipe book?
The answer pivots on a clever strategy implemented by the genome: build incompletely and let world experience refine. It’s a great trick on the part of Mother Nature, allowing the brain to learn languages, ride bicycles, and grasp quantum physics, all from the seeds of a small collection of genes. Our DNA is not a blueprint; it is merely the first domino that kicks off the show.
With 20,000 genes and 200 trillion connections between neurons, how could the details possibly be prespecified? That model could never have worked. Mother Nature’s strategy of unpacking a brain relies on proper world experience. Without it, the brain becomes malformed and pathological. Like a tree that needs nutrient-rich soil to arborize, a brain requires the rich soil of social and sensory interaction.
Neurons spend a small fraction of their time sending abrupt electrical pulses (also called spikes). The timing of these pulses is critically important. Let’s zoom in to a typical neuron. It reaches out to touch 10,000 neighbors. But it doesn’t form equally strong relationships with all 10,000. Instead, the strengths are based on timing. If our neuron spikes, and then a connected neuron spikes just after that, the bond between them is strengthened. This rule can be summarized as neurons that fire together, wire together.
They don’t host barbecues, but instead they release more neurotransmitters, or set up more receptors to receive the neurotransmitters, thus causing a stronger link between them.
How does this simple trick lead to a map of the body? Consider what happens as you bump, touch, hug, kick, hit, and pat things in the world. When you pick up a coffee mug, patches of skin on your fingers will tend to be active at the same time. When you wear a shoe, patches of skin on your foot will tend to be active at the same time. In contrast, touches on your ring finger and your little toe will tend to enjoy less correlation, because there are few situations in life when those are active at the same moment.
After interacting with the world for a while, areas of skin that happen to be co-active often will wire up next to one another, and those that are not correlated will tend to be far apart. The consequence of years of these co-activations is an atlas of neighboring areas: a map of the body. In other words, the brain contains a map of the body because of a simple rule that governs how individual brain cells make connections with one another: neurons that are active close in time to one another tend to make and maintain connections between themselves. That’s how a map of the body emerges in the darkness.
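The "fire together, wire together" rule described above is simple enough to simulate. Here is a minimal Python sketch in which receptors that are often co-active (like patches of skin on the same finger) end up strongly connected, while rarely co-active receptors (a finger and a toe) stay weakly connected. The receptor counts, firing probabilities, and +1 strengthening step are my own toy assumptions, not figures from the book:

```python
import random

random.seed(0)

# Toy Hebbian simulation: cells that fire together, wire together.
# Receptors 0-2 play the role of finger skin patches; 3-5 are toe patches.
N = 6
w = [[0.0] * N for _ in range(N)]        # connection strengths

for _ in range(1000):
    # At any given moment, either the hand or the foot is in use.
    group = {0, 1, 2} if random.random() < 0.5 else {3, 4, 5}
    # Receptors in the active group fire often; others only as noise.
    active = [i for i in range(N)
              if random.random() < (0.8 if i in group else 0.05)]
    for i in active:                      # strengthen every co-active pair
        for j in active:
            if i != j:
                w[i][j] += 1.0

print("finger-finger strength:", w[0][1])   # co-active often: strong
print("finger-toe strength:   ", w[0][4])   # rarely co-active: weak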
France’s king, Louis XIV, started to intuit an important lesson: if he wanted New France to firmly take root, he had to keep sending ships—because the British were sending even more ships. He understood that Quebec wasn’t growing rapidly enough because of a lack of women, and so he sent 850 young women. The problem was that the British were sending far more young men and women. By 1750, when New France had 60,000 inhabitants, Britain’s colonies boasted a million. That made all the difference in the subsequent wars between the two powers: despite their alliances with the Native Americans, the French were badly outstripped.
For a short time, the government of France forced newly released prisoners to marry local prostitutes, and then the newlywed couples were linked with chains and shipped off to Louisiana to settle the land. But even these French efforts were insufficient. By the end of their sixth war, the French realized they had lost. New France was dissolved. The spoils of Canada moved under the control of Great Britain, and the Louisiana Territory went to the young United States. The waxing and waning of the French grip on the New World had everything to do with how many boats were being sent over. In the face of fierce competition, the French had simply not shipped enough people over the water to keep a hold on their territory. The same story plays out constantly in the brain. When a part of the body no longer sends information, it loses territory.
When a person’s eyes are damaged, signals no longer flood in along the pathways to the occipital cortex (the portion at the back of the brain, often thought of as “visual” cortex). And so that part of the cortex becomes no longer visual. The coveted territory is taken over by the competing kingdoms of sensory information.
Age matters. In those born blind, the occipital cortex is completely taken over by other senses. If a person goes blind at an early age—say, at five years old—the takeover is less comprehensive. For the “late blind” (those who lost vision after the age of ten), the cortical takeovers are even smaller. The older the brain, the less flexible it is for redeployment.
Ronnie Milsap is just one of many blind musicians; others include Andrea Bocelli, Ray Charles, Stevie Wonder, Diane Schuur, José Feliciano, and Jeff Healey. Their brains have learned to rely on the signals of sound and touch in their environment, and they become better at processing those than sighted people.
While musical stardom is not guaranteed for blind people, brain reorganization is. As a result, perfect musical pitch is overrepresented in the blind, and blind people are up to ten times better at determining whether a musical pitch subtly wobbles up or down. Many blind persons develop over time a considerable ability to avoid obstacles by means of auditory cues from sounds of their own making, including their own footsteps, cane tapping, or finger snapping.
Memorization can benefit from the extra cortical real estate. In one study, blind people were tested to see how well they could remember lists of words. Those with more of their occipital cortex taken over scored higher: they had more territory to devote to the memory task.
Color-blind people don’t have it all bad: they are better at distinguishing between shades of gray. Although the military excludes color-blind soldiers from certain jobs, they have come to realize that the color-blind can spot enemy camouflage better than people with normal color vision.
Deaf people have better peripheral visual attention.
There is nothing special about visual cortex neurons. They are simply neurons that happen to be involved in processing edges or colors in people who have functioning eyes. These exact same neurons can process other types of information in the sightless.
Sighted participants were blindfolded for five days, during which time they were put through an intensive Braille-training paradigm. At the end of five days, the subjects had become quite good at detecting subtle differences between Braille characters—much better than a control group of sighted participants who underwent the same training without a blindfold. But especially striking was what happened to their brains, as measured in the scanner. Within five days, the blindfolded participants had recruited their occipital cortex when they were touching objects.
One of the unsolved mysteries in neuroscience is why brains dream. What are these bizarre nighttime hallucinations about? Do they have meaning? Or are they simply random neural activity in search of a coherent narrative? And why are dreams so richly visual, igniting the occipital cortex every night into a conflagration of activity?
Consider the following: in the chronic and unforgiving competition for brain real estate, the visual system has a unique problem to deal with. Because of the rotation of the planet, it is cast into darkness for an average of 12 hours every cycle (though this does not hold in current, electricity-blessed times). We’ve already seen that sensory deprivation triggers neighboring territories to take over. So how does the visual system deal with this unfair disadvantage? By keeping the occipital cortex active during the night.
We suggest that dreaming exists to keep the visual cortex from being taken over by neighboring areas. After all, the rotation of the planet does not affect anything about your ability to touch, hear, taste, or smell; only vision suffers in the dark. As a result, the visual cortex finds itself in danger every night of a takeover by the other senses. And given the startling rapidity with which changes in territory can happen (remember the forty to sixty minutes we just saw), the threat is formidable. Dreams are the means by which the visual cortex prevents takeover.
A key point to appreciate is that these nighttime volleys of activity are anatomically precise. They begin in the brainstem and are directed to only one place: the occipital cortex. If the circuitry grew its branches broadly and promiscuously, we’d expect it to connect with many areas throughout the brain. But it doesn’t. It aims with anatomical exactitude at one area alone: a tiny structure called the lateral geniculate nucleus, which broadcasts specifically to the occipital cortex. Through the neuroanatomist’s lens, this high specificity of the circuit suggests an important role.
From this perspective, it should be no surprise that even a person born blind retains the same brainstem-to-occipital-lobe circuitry as everyone else. What about the dreams of blind people? Would they be expected to have no dreaming at all because their brains don’t care about darkness? The answer is instructive. People who have been blind from birth (or were blinded at a very young age) experience no visual imagery in their dreams, but they do have other sensory experiences, such as feeling their way around a rearranged living room or hearing strange animals barking. This matches perfectly with the lessons we learned a moment ago: that the occipital cortex of a blind person becomes annexed by the other senses. Thus, in the congenitally blind, nighttime occipital activation still occurs, but it is now experienced as something nonvisual.
People who become blind after the age of seven have more visual content in their dreams than those who become blind earlier—consistent with the fact that the occipital lobe in the late-blind is less fully conquered by other senses, and so the activity is experienced more visually.
Some mammals are born immature—meaning they’re unable to walk, get food, regulate their own temperature, or defend themselves. Examples are humans, ferrets, and platypuses. The animals born immature have much more REM sleep—up to about eight times as much. In our interpretation, when a highly plastic brain drops into the world, it needs to constantly fight to keep things balanced. When a brain arrives mostly solidified, there is less need for it to engage in the nighttime fighting.
All mammalian species spend some fraction of their sleep time in REM, and that fraction steadily decreases as they get older. In humans, infants spend half of their sleeping time in REM, adults spend only 10–20 percent of sleep in REM, and the elderly spend even less. This cross-species trend is consistent with the fact that infants’ brains are so much more plastic, and thus the competition for territory is even more critical. As an animal gets older, cortical takeovers become less possible. The falloff in plasticity parallels the falloff of time spent in REM sleep.
Nature does not need to genetically rewrite the brain each time it wants to try out a new body plan; it simply lets the brain adjust itself. And this underscores a point that reverberates throughout this book: the brain is very different from a digital computer.
We discovered that when people with normal visual systems are blindfolded for as little as an hour, their primary visual cortex becomes active when they perform tasks with their fingers or when they hear tones or words. Removing the blindfold quickly reverts the visual cortex so that it responds only to visual input.
The Potato Head model of evolution: I use this name to emphasize that all the sensors that we know and love—like our eyes and our ears and our fingertips—are merely peripheral plug-and-play devices. You stick them in, and you’re good to go. The brain figures out what to do with the data that come in.
In the same way that you can plug in an arbitrary nose or eyes or mouth for Potato Head, likewise does nature plug a wide variety of instruments into the brain for the purpose of detecting energy sources in the outside world.
Can you be born without a tongue, but otherwise healthy? Sure. That’s what happened to a Brazilian baby named Auristela. She spent years struggling to eat, speak, and breathe. Now an adult, she underwent an operation to put in a tongue, and at present she gives eloquent interviews on growing up tongueless. The extraordinary list of the ways we can be disassembled goes on.
We can look across the animal kingdom and find all kinds of strange peripheral devices, each of which is crafted by millions of years of evolution. If you were a snake, your sequence of DNA would fabricate heat pits that pick up infrared information. If you were a black ghost knifefish, your genetic letters would unpack electrosensors that pick up on perturbations in the electrical field. If you were a bloodhound, your code would write instructions for an enormous snout crammed with smell receptors. If you were a mantis shrimp, your instructions would manufacture eyes with sixteen types of photoreceptors. The star-nosed mole has 22 finger-like appendages on its nose, and with these it feels around and constructs a 3-D model of its tunnel systems.
Does the brain have to be redesigned each time? I suggest not. In evolutionary time, random mutations introduce strange new sensors, and the recipient brains simply figure out how to exploit them. The devices we come to the table with—eyes, noses, ears, tongues, fingertips—are not the only collection of instruments we could have had. These are simply what we’ve inherited from a lengthy and complex road of evolution.
Hundreds of studies on transplanting tissue or rewiring inputs support the model that the brain is a general-purpose computing device—a machine that performs standard operations on the data streaming in—whether those data carry a glimpse of a hopping rabbit, the sound of a phone ring, the taste of peanut butter, the smell of salami, or the touch of silk on the cheek. The brain analyzes the input and puts it into context (what can I do with this?), regardless of where it comes from. And that’s why data can become useful to a blind person even when they’re fed into the back, or ear, or forehead.
The vOICe is not the only visual-to-auditory substitution approach; recent years have seen a proliferation of these technologies. For example, the EyeMusic app uses musical pitches to represent the up-down location of pixels: the higher a pixel, the higher the note. Timing is exploited to represent the left-right pixel location: earlier notes are used for something on the left; later notes represent something on the right. Color is conveyed by different musical instruments: white (vocals), blue (trumpet), red (organ), green (reed), yellow (violin).
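The pitch/timing/instrument scheme described above is concrete enough to sketch in a few lines of Python. The tiny "image", the note names, and the scan width below are my own illustrative choices, not EyeMusic's actual parameters; only the three mapping rules (column to time, row to pitch, color to instrument) come from the description above:

```python
# Toy vision-to-sound encoding in the spirit of EyeMusic:
# column -> time (left plays first), row -> pitch (higher row = higher note),
# color -> instrument, using the instrument table from the text.
INSTRUMENT = {"white": "vocals", "blue": "trumpet", "red": "organ",
              "green": "reed", "yellow": "violin"}
PITCH = ["C5", "E4", "G3"]     # row 0 is the top of the image: highest note

image = [                      # a 3x4 "image"; None means an empty pixel
    ["blue", None,  None,    None],
    [None,   "red", None,    None],
    [None,   None,  "white", "white"],
]

events = []                    # (time_step, pitch, instrument) triples
for col in range(4):                    # scan left to right = early to late
    for row, line in enumerate(image):
        color = line[col]
        if color is not None:
            events.append((col, PITCH[row], INSTRUMENT[color]))

print(events)
```

Playing the resulting events in order reconstructs the image as a short melody: a high trumpet note first, then a middle organ note, then two low vocal notes.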
If it seems surprising that a blind person can come to “see” with her tongue or through cell phone earbuds, just remember how the blind come to read Braille. At first the experience involves mysterious bumps on the fingertips. But soon it becomes more than that: the brain moves beyond the details of the medium (the bumps) for a direct experience of the meaning.
Given that 5 percent of the world has disabling hearing loss, researchers some years ago got interested in ferreting out the genetics involved. Unfortunately for any hope of a simple answer, the community has so far discovered more than 220 genes associated with deafness.
The skin, in contrast, is focused on other measures, and it has poor spatial resolution. Conveying an inner ear’s worth of information to the skin would require several hundred vibrotactile motors—too many to fit on a person.
We’ve developed our technology into many different form factors, such as a chest strap for children.
The children also began to vocalize more, because for the first time they were closing a loop: they made a noise and immediately registered it as a sensory input. Although you don’t remember, this is how you trained to use your ears when you were a baby. You babbled, cooed, clapped your hands, banged the bars of your crib…and you got feedback into these strange sensors on the side of your head. That’s how you deciphered the signals coming in: by correlating your own actions with their consequences. So imagine wearing the chest strap yourself. You speak aloud “the quick brown fox,” and you feel it at the same time. Your brain learns to put the two together, understanding the strange vibratory language.
We have also made a wristband (called Buzz) that has only four motors. It’s lower resolution, but more practical for many people’s lives. Philip reports he can tell when his dogs are barking, when the faucet is running, or when the doorbell rings. I quizzed him carefully on his internal experience: Did it feel like buzzing on his wrist that he had to translate, or did it feel like direct perception? In other words, when a siren passed on the street, did he feel that there was a buzzing on his skin, which meant siren…or did it feel that there was an ambulance out there? He was very clear that it was the latter.
The idea of converting sound into touch is not new. In 1923, Robert Gault, a psychologist at Northwestern University, heard about a deaf and blind ten-year-old girl who claimed to be able to feel sound through her fingertips, as Helen Keller had done. Researchers have attempted to make sound-to-touch devices ever since, but in previous decades the machinery was too large and computationally weak to make for a practical device.
In the early 1930s, an educator at a school in Massachusetts developed a technique for two deafblind students. Being deaf, they needed a way to read the lips of speakers, but they were both blind as well, rendering that impossible. So the technique consisted of placing a hand over the face and neck of the person speaking. The thumb rested lightly on the lips and the fingers fanned out to cover the neck and cheek, and in this way the students could feel the lips moving, the vocal cords vibrating, and even air coming out of the nostrils. Because the original pupils were named Tad and Oma, the technique became known as Tadoma. Thousands of deafblind children have been taught this method and have attained proficiency at understanding language almost equal to that of hearing listeners. All the information is coming in through the sense of touch.
There are many reasons to take advantage of the system of touch. For example, a little-known fact is that people with prosthetic legs have to do an enormous amount of work to learn how to walk with them. Given the high quality of prosthetics, why is walking so difficult? The answer is simply that you don’t know where the prosthetic leg is. Your good leg is streaming an enormous amount of data to the brain, telling about the position of your leg, how much the knee is bent, how much pressure is on the ankle, the tilt and twist of the foot, and so on. But with the prosthetic leg, there’s nothing but silence: the brain has no idea about the limb’s position. So we attached pressure and angle sensors on the prosthetic leg and fed the data into the Neosensory Vest. As a result, a person can feel the position of the leg, much like a normal leg, and can rapidly learn how to walk again.
This same technique can be used for a person with a real leg that has lost sensation—as happens in Parkinson’s disease and many other conditions. We use sensors in a sock to measure motion and pressure, and feed the data into the Buzz wristband. By this technique, a person understands where her foot is, whether her weight is on it, and whether the surface she’s standing on is even.
In 2004, inspired by the promise of visual-to-auditory translation, a color-blind artist named Neil Harbisson attached an “eyeborg” to his head. The eyeborg is a simple device that analyzes a video stream and converts the colors to sounds. The sounds are delivered via bone conduction behind his ear. So Neil hears colors. He can plant his face in front of any colored swatch and identify it. “That’s green,” he’ll say, or, “That’s magenta.” Even better, the eyeborg’s camera detects wavelengths of light beyond the normal spectrum; when translating from colors to sound, he can encode (and come to perceive in the environment) infrared and ultraviolet, the way that snakes and bees do.
When it came time to update his passport photo, Neil insisted he didn’t want to take off the eyeborg. It was a fundamental part of him, like a body part, he argued. The passport office ignored the plea: their policy disallowed electronics in an official photo. But then the passport office received support letters from his doctor, his friends, and his colleagues. A month later, his new passport photo included the eyeborg, a success on the basis of which Neil claims to be the first officially sanctioned cyborg. And with animals, researchers have taken this idea one step further: mice are color-blind…but not if you genetically engineer photoreceptors to give them color vision. With an extra gene, mice can now detect and distinguish different colors.
Many people go in for cataract surgery and have their lenses exchanged for a synthetic replacement. As it turns out, the natural lens blocks ultraviolet light, but the replacement lens does not. So patients find themselves tapping into ranges of the electromagnetic spectrum they couldn’t see before: many objects have a blue-violet glow that other people don’t see.
Because of a long road of evolutionary particularities, we have two eyes placed on the front of our heads, giving us a visual angle on the world of about 180 degrees. In contrast, the compound eyes of houseflies give them almost 360 degrees of vision. So what if we could leverage modern technology to gain the joy of fly vision? A group in France has done just that with FlyVIZ, a helmet that lets users see in 360 degrees. Their system consists of a helmet-mounted camera that scans the whole scene and compresses it to a display in front of the user’s eyes. The designers of the FlyVIZ note that users have a (nauseating) adjustment period upon first donning the helmet. But it’s surprisingly brief: after 15 minutes of wearing the headset, a user can grab an object held anywhere around him, dodge someone sneaking up on him, and sometimes catch a ball thrown to him from behind.
Already, devices from hearing aids to our Buzz wristband can reach beyond the normal hearing scale. Why not expand into the ultrasonic range so that one can hear sounds only available to cats and bats? Or the infrasonic, hearing sounds with which elephants communicate? And consider smell. Remember the bloodhound dog, which can smell odors well beyond our comprehension? Consider building an array of molecular detectors and feeling different substances. Instead of needing a drug dog with its huge snout, you could directly experience that depth of odor detection yourself.
Todd Huffman is a biohacker. His hair is often dyed some primary color or another; his appearance is otherwise indistinguishable from a lumberjack’s. Some years ago, Todd ordered a small neodymium magnet in the mail. He sterilized the magnet, sterilized a surgical knife, sterilized his hand, and implanted the magnet in his finger. Now Todd feels magnetic fields. The magnet tugs when exposed to electromagnetic fields, and his nerves register this. Information normally invisible to humans is now streamed to his brain via the sensory pathways of his finger. His perceptual world expanded the first time he reached for a pan on his electric stove. The stove casts off a large magnetic field (because of the electricity running through a coil). He hadn’t been aware of that tidbit of knowledge, but now he can feel it.
Another biohacker, Shannon Larratt, explained in an interview that he could feel the power running through cables and could therefore use his fingers to diagnose hardware issues without having to pull out a voltage meter. If his implants were removed, he says, he would feel blind. A world is detectable that previously was not: palpable shapes live around microwave ovens, computer fans, speakers, and subway power transformers.
What if you could detect not only the magnetic field around objects but also the one around the planet? After all, animals do it. Turtles return to the same beaches on which they were hatched to lay their own eggs. Migrating birds wing each year from Greenland to Antarctica and then back again to the same spot. Starting in 2005, scientists at Osnabrück University wondered if a wearable device could allow humans to tap into that signal. They built a belt called the feelSpace. The belt is ringed with vibratory motors, and the motor pointing to the north buzzes. As you turn your body, you always feel a buzzing in the direction of magnetic north. At first, it feels like a pesky humming—but over time it becomes spatial information: a feeling that north is there. Over several weeks, the belt changes how people navigate: their orientation improves, they develop new strategies, they gain a higher awareness of the relationship between different places.
The environment feels more ordered. The layout of locations can be more easily remembered. As one participant described the experience, “The orientation in the cities was interesting. After coming back, I could retrieve the relative orientation of all places, rooms and buildings, even if I did not pay attention while I was actually there.” Instead of thinking about moving through space as a sequence of cues, they thought about their routes from a global perspective. It’s a new kind of human experience. The same user goes on: “During the first two weeks, I had to concentrate on it; afterwards, it was intuitive. I could even imagine the arrangement of places and rooms where I sometimes stay. Interestingly, when I take off the belt at night I still feel the vibration: When I turn to the other side, the vibration is moving too—this is a fascinating feeling!” Interestingly, after users take off the belt, they often report that they have a better sense of orientation for a while.
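The belt's core logic is simple enough to sketch. Assuming a ring of evenly spaced motors and a compass heading (a geometry I'm inventing for illustration; the published feelSpace hardware may differ), the controller just activates whichever motor currently points closest to north:

```python
# Minimal sketch of the feelSpace idea: a ring of N motors around the
# waist, and the one pointing closest to magnetic north buzzes.

N_MOTORS = 16  # motors evenly spaced around the belt (assumed count)

def north_motor(heading_degrees, n_motors=N_MOTORS):
    """Index of the motor to activate.

    heading_degrees: which way the wearer faces (0 = north, 90 = east).
    Motor 0 sits at the wearer's front; indices increase clockwise.
    Relative to the body, north lies at minus the heading."""
    north_angle = (-heading_degrees) % 360
    degrees_per_motor = 360 / n_motors
    return round(north_angle / degrees_per_motor) % n_motors

print(north_motor(0))   # facing north: the front motor buzzes
print(north_motor(90))  # facing east: a motor on the wearer's left
```

As the wearer turns, the active motor slides around the ring, which is exactly the stable "north is there" signal the participants describe.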
Whatever data the brain receives, it makes use of.
In 1938, an aviator named Douglas Corrigan flew a plane from the United States to Dublin, Ireland. In those early days of aircraft, there were few navigation aids—generally just a compass combined with a length of string to indicate the direction of airflow relative to the plane. In a recounting of the event, The Edwardsville Intelligencer quoted a mechanic who described Corrigan as an aviator “who flies by the seat of his pants,” and by most accounts this was the beginning of the expression in the English language. To “fly by the seat of one’s pants” meant to steer by feeling the plane. After all, the body part that had the most contact with the plane was the pilot’s rump, so that was the pathway by which information was transmitted to the pilot’s brain. The pilot felt the plane’s movements and reacted accordingly. If the aircraft slipped toward the lower wing during a turn, the pilot’s buttocks would slide downhill. If the aircraft skidded toward the outside of the turn, a slight g-force pushed the pilot uphill.
We’re expanding the perception of drone pilots. The Vest streams five different measures from a quadcopter—pitch, yaw, roll, orientation, and heading—and that improves the pilot’s ability to fly it. The pilot has essentially extended his skin up there, far away, where the drone is. In case the romantic notion ever strikes you: airline pilots were not better when they flew by the seat of their pants, without instrumentation. Flights are made safer with a cockpit full of instruments, allowing a pilot to measure elements he otherwise can’t access. For instance, a pilot can’t tell from his derriere if he is flying level or in a banked turn.
Because we live in a world bloated with information now, it’s likely that we’re going to have to transition from accessing big data to experiencing it more directly. In that spirit, imagine feeling the state of a factory—with dozens of machines running at once—and you’re plugged in to feel it. You become the factory. You have access to dozens of machines at once, feeling their production rates in relation to one another. You sense when things are running out of alignment and need attention or adjustment. I’m not talking about a machine breaking: that kind of problem is simple to hook up to an alert or an alarm. Instead, how can you understand how the machines are running in relation to one another? This approach to big data gives deeper patterns of insight.
Imagine feeding real-time patient data into surgeons’ backs so that they don’t have to look up at monitors during an operation. Or being able to feel the invisible states of one’s own body—such as blood pressure, heart rate, and the state of the microbiome—thereby elevating unconscious signals into the realm of consciousness. And let’s take this one step further. At Neosensory, we’ve been exploring the concept of a shared perception. Picture a couple who feels each other’s data: the partner’s breathing rate, temperature, galvanic skin response, and so on. We can measure this data in one partner and feed it over the internet into a Buzz worn by the other partner. This has the potential to unlock a new depth of mutual understanding. Imagine your spouse calling from across the country to ask, “Are you okay? You feel stressed.” This may prove a boon or a bane to relationships, but it opens new possibilities of pooled experience.
What if you walked around with the Neosensory Vest and felt a data feed of neighboring weather cells from the surrounding 200 miles? At some point you should be able to develop a direct perceptual experience of the weather patterns of the region, at a much larger scale than a human can normally experience. You can let your friends know whether it’s going to rain. That would be a new human experience, one that you can never get in the standard, tiny, limited human body you currently have.
Or imagine the Vest feeds you real-time stock data so that your brain can extract a sense of the complex, multifaceted movements of the world markets. The brain can do tremendous work extracting statistical patterns, even when you don’t think you’re paying attention. So by wearing the Vest around all day, and being generally aware of what’s going on around you (news stories, emerging fashions on the street, the feel of the economy, and so on), you may be able to develop strong intuitions—better than the models—about where the market is going next. That too would be a very new kind of human experience.
You might ask, why not just use eyes or ears for this? Couldn’t you hook up a stock trader with virtual reality (VR) goggles to view real-time charts from dozens of stocks? The problem is that vision is necessary for too many of our daily tasks. The stock trader needs her eyes to be able to find the cafeteria, see her boss coming, or read her email. Her skin, in contrast, is a high-bandwidth, unused information channel.
As a result of feeling the high-dimensional data, the stock trader might be able to perceive the big picture (oil is about to crash) long before she can pick out the individual variables (Apple is going up, Exxon is sinking, and Walmart is holding steady).
There are an unimaginable number of streams that could be fed from the internet. We’ve all heard of the Spidey sense: the tingling sensation by which Peter Parker detected trouble in the vicinity. Why not have a Tweety sense? Let’s start with the proposition that Twitter has become the consciousness of the planet. It rides on a nervous system that has encircled the earth, and important ideas (and some non-important ones) trend above the noise floor and rise to the top. Not because of corporations who want to tell you their messages, but instead because an earthquake in Bangladesh, or the death of a celebrity, or a new discovery in space has captured the imagination of enough people around the world.
The interests of the world ascend, just as do the most important issues in an animal’s nervous system (I’m hungry; someone’s approaching; I need to find water). On Twitter, the ideas that break the surface may or may not be important, but they represent at every moment what’s on the mind of the planetary population.
At the TED conference in 2015, Scott Novich and I algorithmically tracked all the tweets with the hashtag “TED.” On the fly, we aggregated the hundreds of tweets and pushed them through a sentiment analysis program. In this way, we could use a large dictionary of words to classify which tweets were positive (“awesome,” “inspiring,” and so on) and which were negative (“boring,” “stupid,” and so forth). The summary statistics were fed to the Vest in real time. I could feel the sentiment of the room and how it changed over time. It allowed me to have an experience of something larger than what an individual human can normally achieve: being tapped into the overall emotional state of hundreds of people, all at once. You can imagine a politician wanting to wear such a device while she addresses tens of thousands of people: she would get on-the-fly insight into which of her proclamations were coming off well and which were bombing.
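A word-list sentiment tally like the one we used is straightforward to sketch. The word lists and scoring below are stand-ins of my own, not the actual program from the demo:

```python
# Toy dictionary-based sentiment tally: count positive versus negative
# words across a batch of tweets and reduce them to one summary score.

POSITIVE = {"awesome", "inspiring", "brilliant", "moving"}
NEGATIVE = {"boring", "stupid", "confusing", "dull"}

def crowd_sentiment(tweets):
    """Return a score in [-1, 1]: +1 all positive, -1 all negative."""
    pos = neg = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            word = word.strip(".,!?#")  # drop trailing punctuation/hashes
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

tweets = [
    "That talk was awesome! #TED",
    "Inspiring stuff, truly moving",
    "honestly a bit boring",
]
print(crowd_sentiment(tweets))
```

The resulting scalar is exactly the kind of slowly shifting signal that maps naturally onto vibration intensity.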
If you want to think big, forget hashtags and go for natural language processing of all the trending tweets on the planet: imagine compressing a million tweets per second and feeding the abridgments through the Vest. You’d be plugged into the consciousness of the planet. You might be walking along and you’d suddenly detect a political scandal in Washington, or forest fires in Brazil, or a new skirmish in the Middle East. This would make you more worldly—in a sensory sense.
René Descartes spent a good deal of time wondering how he could know the real reality that surrounded him. After all, he knew our senses often fool us, and he knew that we often mistake our dreams for waking experiences. How could he know if an evil demon were systematically deceiving him, feeding him lies about the world that surrounded him? In the 1980s, the philosopher Hilary Putnam upgraded this question to “am I a brain in a vat?” How would you know if scientists had removed your brain from your body, and were merely stimulating your cortex in the right ways to make you believe that you were experiencing the touch of a book, the temperature on your skin, the sight of your hands? In the 1990s, the question became “am I in the Matrix?” In modern times, it’s “am I in a computer simulation?”
Neuroscientists at Stanford are working on a method to insert 100,000 electrodes into a monkey, which (if damage to the tissue is minimized) may tell us remarkable new things about the detailed characteristics of the networks. Several emerging companies, still in their infancy, hope to increase the speed of brain communication to the outside world by writing and reading neural data rapidly by means of direct plug-ins.
The problem is not theoretical but practical. When an electrode is placed into the brain, the tissue slowly tries to push it out, in the same way that the skin of your finger pushes out a splinter. That’s the small problem. The bigger one is that neurosurgeons don’t want to perform the operations, because there is always the risk of infection or death on the operating table. And beyond disease states (such as Parkinson’s or severe depression), it’s not clear that consumers will undergo an open-head surgery just for the joy of texting their friends more rapidly. An alternative might be to sneak electrodes into the tree of blood vessels that branches throughout the brain; however, the problem here is the possibility of damaging or blocking the vessels.
Inside the vault of the skull, the brain has access only to electrical signals racing around among its specialized cells. It doesn’t directly see or hear or touch anything. Whether the inputs represent air compression waves from a symphony, or patterns of light from a snow-covered statue, or molecules floating off a fresh apple pie, or the pain of a wasp sting—it’s all represented by voltage spikes in neurons. If we were to watch a patch of brain tissue with spikes dashing to and fro, and I were to ask whether we were watching the visual cortex or auditory cortex or somatosensory cortex, you couldn’t tell me. I couldn’t tell you. It all looks the same.
If the idea of learning a new sense seems foreign, just remember that you’ve done this yourself. Consider how babies learn how to use their ears by clapping their hands together or by babbling something and catching the feedback in their ears. At first the air compressions are just electrical activity in the brain; eventually they become experienced as sound. Such learning can be seen with people who are born deaf and eventually get cochlear implants as adults. At first, the experience of the cochlear implant is not like sound at all. One friend who got cochlear implants first described their effect as painless electrical shocks inside her head; she had no sensation that it had anything to do with sound. But after about a month it became “sound,” albeit lousy, like a tinny and distorted radio. Eventually she came to hear fairly well with them. This is the same process that happened to each of us when we were learning to use our ears; we simply don’t remember it.
The brain needs to learn how to see, just as it needs to learn how to control its arms and legs.
With the right sort of data compression, what are the limits to the kinds of data we can take in? Could we add a sixth sense with a vibrating wristband and then a seventh with a direct plug-in? How about an eighth with a tongue grid and a ninth with a Vest? It’s impossible at the moment to know what the limits might be. All we know is that the brain is gifted at sharing territory among different inputs; we saw earlier how smoothly it does so.
On the other hand, given the finite territory in the brain, is it possible that each added sense will reduce the resolution of the others, such that your new sensory powers will come at the cost of slightly blurrier vision and slightly worse hearing and slightly reduced sensation from the skin? Who knows? Answers about our limits remain pure speculation until they can be put to the test in the coming years.
If the Potato Head model is correct, and the brain acts as a general-purpose computer, then this suggests that data coming in will eventually become associated with an emotional experience. Whatever the data stream, and however it gets there, it can carry passions.
Imagine that you take on a new stream of stock market data. You suddenly get information that tech is tanking, and you’re heavily invested in that sector. Will it feel bad? Not just cognitively bad, but emotionally aversive, like the smell of rotten meat or the sting of an ant bite?
All the examples we’ve tackled so far involve input from the body’s senses. What about the brain’s other job, output to the limbs of the body? Is that also flexible? Could you embellish your body with more arms, mechanical legs, or a robot on the other side of the world controlled by your thoughts? Glad you asked.
All the animals in the kingdom (including us) possess surprisingly similar genomes. So how do creatures come to operate such wonderfully varied equipment—like prehensile tails, claws, larynxes, tentacles, whiskers, trunks, and wings? How do mountain goats get so good at leaping up rocks? How do owls get so good at plunging down upon mice? How do frogs get so good at hitting flies with their tongues? To understand this, let’s return to our Potato Head model of the brain, in which varied input devices can be attached. Exactly the same principle applies to output. In this view, Mother Nature has the freedom to experiment with outlandish plug-and-play motor devices. Whether fingers, flappers, or fins; whether two legs, four legs, or eight; whether hands or talons or wings—the fundamental principles of brain operation don’t need to be redesigned each time.
A baby learns how to shape her mouth and her breath to produce language—not by genetics, nor by surfing Wikipedia, but instead by babbling. Sounds come out of her mouth, and her ears pick up on those sounds. Her brain can then compare how close her sound was with the utterances she’s hearing from her mother or father. Helping things along, she earns positive reactions for some utterances and not for others. In this way, the constant feedback allows her to refine her speech until she mellifluously converses in English or Chinese. In the same way, the brain learns how to steer its body by motor babbling. Just observe that same baby in her crib. She bites her toes, slaps her forehead, tugs on her hair, bends her fingers, and so on, learning how her motor output corresponds to the sensory feedback she receives.
By this technique, we eventually learn to walk, bring strawberries to our mouths, stay afloat in a pool, dangle on monkey bars, and master jumping jacks. We use the same learning method to attach extensions to our bodies.
In the earliest days of aviation, pilots used ropes and levers to make their flying machines extensions of their own bodies, and the task for a modern pilot is of course no different: the pilot’s brain builds a representation of the plane as a part of himself. And this happens with piano virtuosos, chain-saw lumberjacks, and drone pilots: their brains incorporate their tools as natural extensions to be controlled. In this way, a blind person’s cane extends not just away from the body but into the brain circuitry.
László Polgár has three daughters. He loves chess and he loves his daughters, so he launched a small experiment: he and his wife homeschooled the girls on many subjects, and they trained them rigorously in chess. Daily, they hopped and skipped the assorted pieces across the sixty-four squares. By the time the eldest daughter, Susan, turned 15 years old, she had become the top-ranked female chess player in the world. In 1986, she qualified for the Men’s World Championship—a first-time achievement for a female—and within five years she had earned the Men’s Grandmaster title. In 1989, in the middle of Susan’s astounding accomplishments, her 14-year-old middle sister, Sofia, achieved fame for her “Sack of Rome”—her stunning victory at a tournament in Italy—which ranked as one of the strongest performances ever by a 14-year-old. Sofia went on to become an International Master and Woman Grandmaster. And then there was the youngest sister, Judit, who is widely considered the best female chess player on record. She achieved Grandmaster status at the tender age of 15 years and four months and remains the only woman on the World Chess Federation’s top 100 list. For a while she held a position in the top ten. What accounts for their success? Their parents lived by the philosophy that geniuses are made, not born. They trained the girls daily. They not only exposed them to chess; they fed them on chess. The girls received hugs, stern looks, approval, and attention based on their chess performance. As a result, their brains came to have a great deal of circuitry devoted to chess.
After one of his concerts, an admiring concertgoer said to the violinist Itzhak Perlman, “I would give my life to play like that.” To which Perlman replied, “I did.” Each morning, Perlman drags himself out of bed at 5:15 am. After a shower and breakfast, he begins his 4.5-hour morning practice. He breaks for lunch and an exercise session, then launches into his afternoon practice for another 4.5 hours. He does this every day of the year, except for concert days, when he does only the morning practice session.
This is what underlies getting good at something. Professional tennis players such as Serena and Venus Williams spend years of training so that the right moves will come automatically in the heat of the game: step, pivot, backhand, charge, fall back, aim, smash. They train for thousands of hours to burn the moves down into the unconscious circuitry of the brain; if they tried to run a game just on high-level cognition, there’s little chance they could win. Their victories emerge from crafting their brains into overtrained machinery.
You might have heard of the 10,000-hour rule. The general idea is correct: you need massive amounts of repetition to dig the subway maps of the brain.
When medical students study for their final exams over the course of three months, the gray matter volume in their brains changes so much it can be seen on brain scans with the naked eye. Similar changes transpire when adults learn how to read backward through a mirror. And the areas of the brain involved in spatial navigation are visibly different in London taxi drivers from the rest of the population. In each hemisphere the taxi drivers have an enlarged region of the hippocampus, an area involved in internal maps of the outside world.
You’re more than what you eat; you become the information you digest.
Imagine that a friend suffers a stroke that damages part of her motor cortex, and as a result one of her arms becomes mostly paralyzed. After trying many times to use her weakened arm, she gets frustrated and uses her good arm to accomplish all of the necessary tasks in her daily routine. This is the typical scenario, and her weak arm becomes only weaker. The lessons of live-wiring offer a counterintuitive solution known as constraint therapy: strap down her good arm so that it cannot be used. This forces her to employ the weak arm.
Reward is a powerful way to rewire the brain, but happily your brain doesn’t require cookies or cash for each modification. More generally, change is tied to anything that is relevant to your goals. If you’re in the far north and need to learn about ice fishing and different types of snow, that’s what your brain will come to encode. In contrast, if you’re equatorial and need to learn which snakes to avoid and which mushrooms to eat, your brain will devote its resources accordingly.
Acetylcholine broadcasts widely throughout the brain, and as a result it can trigger changes with any kind of relevant stimulus, whether a musical note, a texture, or a verbal accolade. It is a universal mechanism for saying this is important—get better at detecting this. It marks relevance by expanding the neural territory devoted to a task, and those changes in territory map onto improvements in performance. This was originally demonstrated in studies with rats. Two groups were trained in the difficult task of grabbing sugar pellets through a small, high slot. In one group, the release of acetylcholine was blocked with drugs. For the normal rats, two weeks of practice led to an increase in their speed and skill and a correspondingly large increase in the brain region devoted to the forepaw movement. For the rats without the acetylcholine release, the cortical area didn’t grow, and accuracy at reaching the sugar pellet never improved. So the basis of behavioral improvement is not simply the repeated performance of a task; it also requires neuromodulatory systems to encode relevance. Without acetylcholine, the 10,000 hours is wasted time.
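The logic of that experiment can be caricatured in a few lines of code. This is purely an illustration of the gating idea, not a biophysical model: a simple learning rule whose updates are multiplied by an acetylcholine "relevance" signal, so that with the signal at zero, practice changes nothing.

```python
# Toy illustration of neuromodulator-gated plasticity: a single "skill"
# weight is nudged toward a target on every practice trial, but the
# size of each nudge is scaled by an acetylcholine relevance signal.

def practice(weight, trials, ach_level, learning_rate=0.1, target=1.0):
    """Run practice trials; updates are gated by ach_level (0 to 1)."""
    for _ in range(trials):
        error = target - weight
        weight += learning_rate * ach_level * error
    return weight

normal = practice(weight=0.0, trials=50, ach_level=1.0)
blocked = practice(weight=0.0, trials=50, ach_level=0.0)  # ACh blocked
print(f"with ACh: {normal:.3f}, without ACh: {blocked:.3f}")
```

Both runs perform identical repetitions; only the gated run improves, which is the chapter's point about repetition without relevance.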
Recall Fred Williams, who (unlike Serena and Venus) hates tennis. Why doesn’t his brain change, even after the same number of hours of practice? Because his neuromodulatory systems are not engaged. As he drills backhands over and over, he’s like the rats grabbing the pellets without the acetylcholine.
Cholinergic neurons reach out widely across the brain, so when these neurons start chattering away, why doesn’t that turn on plasticity everywhere they reach, causing widespread neural changes? The answer is that acetylcholine’s release (and effect) is modulated by other neuromodulators. While acetylcholine turns on plasticity, other neurotransmitters (such as dopamine) are involved in the direction of change, encoding whether something was punishing or rewarding.
How does the modifiability of the brain—and its relationship to relevance—bear on teaching our young? The traditional classroom consists of a teacher droning on, possibly reading from bulleted slides. This is suboptimal for brain changes, because the students are not engaged, and without engagement there is little to no plasticity. The information doesn’t stick.
The trick of inspiring curiosity is woven into several traditional forms of learning. Jewish religious scholars study the Talmud by sitting in pairs and posing interesting questions to each other. (Why does the author use this particular word rather than another? Why do these two authorities differ in their account?) Everything is cast as a question, forcing the learning partner to engage instead of memorize. I recently stumbled on a website that poses “Talmudic questions” about microbial biology: “Given that spores are so effective in ensuring survival of bacteria, why don’t all species make them?”
More generally, this is why joining a study group always helps: from calculus to history, it activates the brain’s social mechanisms to motivate engagement.
In the 1980s, the author Isaac Asimov gave an interview with the television journalist Bill Moyers. Asimov saw the limits of the traditional education system with clear eyes: “Today, what people call learning is forced on you. Everyone is forced to learn the same thing on the same day at the same speed in class. But everyone is different. For some, class goes too fast, for some too slow, for some in the wrong direction. Give everyone a chance…to follow up their own bent from the start, to find out about whatever they’re interested in by looking it up in their own homes, at their own speed, in their own time—and everyone will enjoy learning.”
It is through this lens of triggering interest that philanthropists such as Bill and Melinda Gates aim to build adaptive learning. The idea is to leverage software that quickly determines the state of knowledge of each student and then instructs each on exactly what he needs to know next. Like having a one-to-one student-teacher ratio, this approach keeps each student at the right pace, meeting him where he is right now with material that will captivate.
The internet allows students to answer questions as soon as they pop into their heads, delivering the solution in the context of their curiosity. This is the powerful difference between just-in-case information (learning a collection of facts just in case you ever need to know them) and just-in-time information (receiving information the moment you seek the answer). Generally speaking, it’s only in the latter case that we find the right brew of neuromodulators present. The Chinese have an expression: “An hour with a wise person is worth more than 1,000 books.” This insight is the ancient equivalent of what the internet offers: when the learner can actively direct her own learning (by asking the wise person precisely the question she wants to answer), the molecules of relevance and reward are present. They allow the brain to reconfigure. Tossing facts at an unengaged student is like throwing pebbles to dent a stone wall.
If the student can’t get the answer, the questions stay at the same level; when he gets the answer correct, the questions get harder. There’s still a role for the teacher: to teach foundational concepts and to guide the path of learning. But fundamentally, given how brains adapt and rewrite their wiring, a neuroscience-compatible classroom is one in which students drill into the vast sphere of human knowledge by following the paths of their individual passions.
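The staircase rule described here (hold the difficulty level after a miss, raise it after a correct answer) can be sketched as a tiny loop. This is only an illustrative sketch; the `AdaptiveQuiz` class and its method names are hypothetical, not from any real adaptive-learning product.

```python
class AdaptiveQuiz:
    """Minimal sketch of the adaptive-difficulty rule described above:
    stay at the same level after a wrong answer, step up after a correct one."""

    def __init__(self, max_level=10):
        self.level = 1
        self.max_level = max_level

    def record_answer(self, correct):
        # Correct answers raise the difficulty; wrong answers hold it steady,
        # keeping the material at the edge of the student's current knowledge.
        if correct and self.level < self.max_level:
            self.level += 1
        return self.level


quiz = AdaptiveQuiz()
quiz.record_answer(True)   # level rises to 2
quiz.record_answer(False)  # level stays at 2
```

The point of the design is the one the text makes: the student is always met where she is right now, with material just hard enough to stay captivating.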
Given that the brain becomes wired from experience, what are the neural consequences of growing up on screens? Are the brains of digital natives different from the brains of the generations before them? It comes as a surprise to many people that there aren’t more studies on this in neuroscience. Wouldn’t our society want to understand the differences between the digital- and analog-raised brain? Indeed we would, but the reason there are few studies is that it is inordinately difficult to perform meaningful science on this. Why? Because there’s no good control group against which to compare a digital native’s brain. You can’t easily find another group of 18-year-olds who haven’t grown up with the net. You might try to find some Amish teens in Pennsylvania, but there are dozens of other differences with that group, such as differences in religious beliefs, culture, and education.
You might be able to turn up some impoverished children in rural China, or in a village in Central America, or in a desert in Northern Africa. But there are going to be other major differences between those children and the digital natives whom you intended to understand, including wealth, education, and diet. Perhaps you could compare millennials against the generation that came before them, such as their parents, who did not grow up online—but who instead played street stickball and stuffed Twinkies in their mouths as they watched The Brady Bunch. But this is also problematic: between two generations there are innumerable differences of politics, nutrition, pollution, and cultural innovation, such that one could never be sure what to attribute any brain differences to.
Even when we’re by ourselves, there is no end to the learning that goes on when we look up a Wikipedia page, which cascades us down the next link and then the next, such that six jumps later we are learning facts we didn’t even know we didn’t know. The great advantage of this comes from a simple fact: all new ideas in your brain come from a mash-up of previously learned inputs, and today we get more new inputs than ever before.
The strategy of ignoring the unchanging keeps the system poised to detect anything that moves or shifts or transforms. At the extreme, this is how reptile visual systems work: they can’t see you if you stand still, because they only register change.
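As a loose computational analogy (not a model of any actual visual system), the same strategy appears in computer vision as frame differencing: pixels that stay constant between frames produce no signal, so a motionless object is invisible. The function below is a minimal sketch under that assumption.

```python
def changed_pixels(prev_frame, curr_frame, threshold=10):
    """Return coordinates of pixels whose brightness changed by more than
    `threshold` between two grayscale frames (lists of lists of ints).
    Unchanging regions, like a motionless observer, produce no signal."""
    changes = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(c - p) > threshold:
                changes.append((x, y))
    return changes


# A static scene yields nothing; a spot that brightens is detected.
still = [[0, 0], [0, 0]]
moved = [[0, 255], [0, 0]]
changed_pixels(still, still)   # -> []
changed_pixels(still, moved)   # -> [(1, 0)]
```

The design choice mirrors the text: by ignoring everything that stays the same, the detector spends its effort only on what moves, shifts, or transforms.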
Your brain actively recalibrates, because that allows it to burn less energy. You pay attention to the unexpected bang, the unforeseen brush on your skin, the surprising movement in your periphery. This recalibration is the basis of the ugly symptoms of drug withdrawal. The more the brain is adapted to the drug, the harder the fall when the drug is taken away. Withdrawal symptoms vary by drug—from sweating to shakes to depression—but they all have in common a powerful absence of something that is anticipated.
This understanding of neural predictions also gives an understanding of heartbreak. People you love become part of you—not just metaphorically, but physically. You absorb people into your internal model of the world. Your brain refashions itself around the expectation of their presence. After the breakup with a lover, the death of a friend, or the loss of a parent, the sudden absence represents a major departure from homeostasis. As Kahlil Gibran put it in The Prophet, “And ever has it been that love knows not its own depth until the hour of separation.”