20.49: Notes on the Attention Economy

ℹ️ This is not a newsletter, nor is it a weekly update one can expect every Sunday. These are just random thoughts that came up while preparing episode 13 of Safareig.

Marketplaces of human futures

The Internet has enabled a new breed of “free” products with unprecedented reach. Run by organizations whose business model is predicated solely on keeping people engaged, these products compete in the business of selling attention.

The reason we don’t pay upfront for these products is that they are not actually free. It all comes at a price, and sometime, somewhere, somebody ends up paying for it. In this case, advertisers are the ones paying for the service.

We’ve all heard the cliché: “if you are not paying for the product, you are the product.”

However, I’ve always found this idea too simplistic a resolution to a very complex problem. I prefer Jaron Lanier’s rephrasing: “it is the gradual, slight, imperceptible change in our behavior and perception that is the product.”

Now we are getting somewhere: these products have grown around marketplaces that trade in human futures.

The interesting part, though, is understanding how these markets operate: by collecting massive amounts of data, they build models of us; each model, a digital avatar of sorts, is then used to predict what we are going to do next; and finally, they sell certainty, the certainty that a placed ad will land.
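To make that pipeline a little more tangible, here is a toy sketch in Python. Everything in it is invented for illustration (the events, the “avatar”, the probabilities), and real systems are incomparably more sophisticated, but the shape is the same: collect behavior, model the user, sell the predicted certainty of engagement.

    from collections import Counter

    # 1) Collect behavioral events for one user (invented data).
    events = ["clicked:sports", "watched:sports", "clicked:politics",
              "watched:sports", "clicked:sports"]

    # 2) Build a crude "digital avatar": how often each topic draws the user in.
    interests = Counter(event.split(":")[1] for event in events)
    total = sum(interests.values())
    avatar = {topic: count / total for topic, count in interests.items()}

    # 3) Predict the next action: the chance the user engages with a topic.
    def p_engage(topic: str) -> float:
        return avatar.get(topic, 0.0)

    # 4) This certainty is what gets sold: an advertiser pays more for an
    #    ad slot the model is confident the user will engage with.
    for topic in ("sports", "politics", "cooking"):
        print(f"P(engage | {topic}) = {p_engage(topic):.2f}")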

It inevitably reminds me of Yuval Noah Harari’s advice when it comes to preparing for the future: “know thyself.”

Algorithms could come to know us better than we know ourselves. They don’t need to know or understand us perfectly, just better than we do. Historically, this wasn’t much of a problem, because no external agent could know more about us than we did, no matter how little that was. But now, for the first time in history, an algorithm can potentially know, predict, and manipulate our will.

Magic tricks at scale

We don’t understand where our minds are vulnerable, yet tech companies are using everything we know about psychology to exploit those vulnerabilities. The brightest engineers and designers are now employed by these companies to make technology more persuasive.

We live in an unsolicited, continuous A/B test run on society. We are being played by a technology that taps into our minds’ flaws, whose end game is to hack our psychology and ultimately modify our behavior.

In other words, technology can shape real-world behavior and emotions without ever triggering our awareness.
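To ground the A/B-test metaphor, here is what such a toy experiment could look like (every name and number below is invented): two feed variants are served at random, and whichever keeps people scrolling longer gets shipped.

    import random

    # Stand-in for real user behavior; variant "B" is stickier by design.
    def session_minutes(variant: str) -> float:
        base = 12.0 if variant == "A" else 15.0
        return max(0.0, random.gauss(base, 4.0))

    # Randomly assign simulated users to a variant, record session length.
    results = {"A": [], "B": []}
    for _ in range(10_000):
        variant = random.choice(["A", "B"])
        results[variant].append(session_minutes(variant))

    # Ship whichever variant captured more attention on average.
    means = {v: sum(xs) / len(xs) for v, xs in results.items()}
    winner = max(means, key=means.get)
    print(f"Ship variant {winner}: {means[winner]:.1f} min average session")

Nobody in that loop asks whether the extra minutes were good for anyone; the test answers the only question it was written to answer.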

We’ve moved away from Steve Jobs’s idea of tech as a “bicycle for the mind”, a tool of sorts. Social media is not a tool waiting to be used; instead, it is constantly demanding more of our attention.

A dopamine-driven culture

Social media is built on top of an evolutionary, hard-wired need for connection. Yet our communication is now mediated by third parties that decide whom we connect with.

However, we have not evolved to be aware of what thousands of people think of us. We conflate this ceaseless social validation with true value.

There is an unambiguous link between mental health problems (anxiety, depression, even suicide) and excessive social media use.

We are playing cards against an AI. Think of it as a game between our primal brain and an exponentially improving technology pointed at us all the time. Who’s going to win?

Algorithms that shape our reality

All these products are usually designed by small teams, mostly white men in their late twenties, who make decisions on behalf of two billion people.

Algorithms are opinions embedded in code: they are not objective; they are optimized for some definition of success. Few people understand how they work, and even so, once programmed, we can’t fully predict how they will end up behaving.
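Here is a minimal sketch of where the opinion hides (with invented posts and scores): the ranking code is identical in both runs, only the definition of success changes, yet that single line decides what surfaces.

    # Each post: (title, expected_clicks, factual_accuracy). Invented data.
    posts = [
        ("Outrageous conspiracy!", 0.9, 0.10),
        ("Dry but accurate report", 0.2, 0.95),
        ("Celebrity gossip", 0.7, 0.40),
    ]

    # Two competing "definitions of success": the opinion embedded in code.
    def score_engagement(post):  # optimize for clicks, i.e. ad revenue
        return post[1]

    def score_accuracy(post):    # optimize for factual accuracy
        return post[2]

    for name, score in [("engagement", score_engagement),
                        ("accuracy", score_accuracy)]:
        top = max(posts, key=score)
        print(f"Optimizing for {name}: '{top[0]}' ranks first")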

We think we are in control, yet we see what algorithms want us to see. We each live in our own Truman Show: an entire subjective reality attuned to our tastes and preferences, a machine for confirmation bias.

We ask ourselves: how come all these people are so stupid? Aren’t they seeing the same information I am? The fact is, they are not. Which brings us to fake news, conspiracy theories, and polarization in society.

Truth is boring, and algorithms were designed to give us more of what we want, to maximize profit. Hence, we get more fake, polarizing information, which spreads faster than truth.

The dark side of these tools is their potential abuse (by large organizations) to influence and control behavior at scale: for example, to sway the course of an election, or to destabilize society and erode its fabric.

Tech is not the existential threat in itself. The threat is tech’s ability to bring out the worst in society; the worst in society is the existential threat.

The consequence, beyond polarization, is that we can’t agree on what’s true. We don’t even have a shared understanding of reality anymore.

The way out

There is no “quick fix” to the problem, and AI is not the solution: AI can’t distinguish between true and fake news, and it doesn’t have a better proxy for value than a click. We can’t rely on the very technology that created the problem to solve it.

Tech companies are trapped by their own business model and shareholder pressures; their very financial incentives prevent them from addressing their flaws. The bigger they get, the harder it is to change course and put the genie back in the bottle.

At the end of the day, financial incentives rule the world, so any solution must start by realigning them. However, as long as the industry remains unregulated, we are worth more staring at a screen than out living our lives.

First published on December 06, 2020