January 18, 2024
In 2007, I published my first book, The Overflowing Brain. I wanted to investigate how the increased availability of information in the digital age more often puts us in situations where we are distracted and trying to do multiple things at once. In the book I also aimed to explain how this overwhelms our Stone Age brain, which has certain limits, chief among them attention and working memory.
The contrast between our limited cognition and our unlimited access to information makes us increasingly distractible: more and more often, we fail to keep relevant information in mind. Constant access to irrelevant information distracts us and erodes our ability to focus deeply. When we attempt to accomplish multiple things at once–like a circus juggler with twelve balls–we inevitably drop some of them. Our brain overflows.
Even though I saw potential issues with this increased flow of information in some circumstances, back then I wasn’t worried about long-term negative effects. Our brains are no more damaged by too much information than you would be driven insane by reading too many books. The brain is malleable, and we can stretch most of its capacities. One simple example is how data from IQ tests over the past 100 years shows a general increase in our ability to solve abstract problems (the so-called Flynn effect), likely due to an increasingly complex daily life filled with more and more mental challenges.
The same year that The Overflowing Brain was published, Apple launched the iPhone. Within a few years, we had Facebook, Twitter, Instagram and Snapchat. In 2006, our old GSM network was replaced by the ten-times-faster 3G, which shortly thereafter was ousted by 4G. Suddenly, we could surf the web on the street: constant access to, and constant need for, information.
This increase in the amount of information and distractions will probably take another leap with the arrival of large language models, such as GPT. One of the more immediate risks with AI is that it will be used by organizations and countries to spread propaganda and fake news. This will first hit social media, but it could eventually threaten the internet itself, should the web come to contain more false information than correct. What I described as an “information flood” in 2007 seems more like a creek compared to the tsunami of information we now encounter.
The Cultural Co-Evolution of Man and Machine
Throughout history, humans have always used tools and, later, created machines. These technologies have in turn affected our way of life: the interactions between man and machine create continuous feedback loops. Most of these have, on the whole, been good. Technological advancements have eased our way of life, improved healthcare and education, and decreased poverty globally.
The new technological landscape of the early 2000s brought with it a lot of optimism. However, the naïve hope of those years appears to have given way to growing skepticism. In the interaction between man and machine, many feel the balance shifting toward the machine. In this dance, humans are no longer leading but are swung around like rag dolls in the arms of technology.
The criticism now comes even from the very people who created the technology in question. Steve Jobs famously limited his own children’s use of the iPad. Chamath Palihapitiya, former VP of user growth at Facebook, does not use the social network he helped create because he finds its algorithms too manipulative: “the short-term, dopamine-driven feedback loops that we have created are destroying how society works.”
Now, we once again stand before a new shift in the technological landscape: AI. In 12 years we’ve moved from IBM’s Watson to AlphaGo, AlphaFold, and OpenAI’s ChatGPT and DALL-E, which shock and awe us all with their ability to create images, write texts, and solve scientific problems. Estimates of when we’ll see a general AI–one that is better than people in all areas of problem-solving and decision-making–are constantly revised, and the predicted date keeps moving closer to the present moment.
Positive or Negative Feedback Loops?
With this type of technology comes immense opportunity but also increased risk. On the one hand, AI could revolutionize learning, opening up a continuous cycle of positive feedback loops in which humans, through unlimited access to better digital education, create technology and computers that provide us with increased knowledge and opportunities. This in turn can make us even more competent and support us in developing new groundbreaking technologies and better learning: inventions to improve our communities and standard of living on a never-before-seen scale.
But it is a delicate balance. Increasingly, we see evidence of how this new information tsunami is eroding our collective attention span and impairing our ability to tell truth from falsehood. Thoughtless digitalization harms our schools and the cognitive development of our children. We are at risk of entering a fast-moving negative feedback loop: a downward spiral.
Whether this new stage of our evolution yields net positive or net negative results will come down to the types of tools we allocate resources toward developing, and how these are regulated and used on a global scale.
Professor of Cognitive Neuroscience