🦢 Black Swan characteristics — and some necessary definitions:
History is opaque. You see what comes out, not the script that produces events, the generator of history. There is a fundamental incompleteness in your grasp of such events, since you do not see what's inside the box, how the mechanisms work.
The normal is often irrelevant: can you assess the danger of a criminal by examining what he does on an ordinary day?
We glorify "treatment", but few reward acts of prevention. Imagine a politician who had enforced severe airport controls on September 10: despite having "saved the world from a terrorist attack", he would have been vilified.
It is easier and faster to destroy than to build. 9/11 lasted hours, but technological implementations can take decades. Positive Black Swans take time to show their effect while negative ones happen very quickly.
Events that are non-repeatable are ignored before their occurrence, and overestimated after (for a while). After a Black Swan, such as September 11, 2001, people expect it to recur when in fact the odds of that happening have arguably been lowered.
We've been tricked into thinking that most of the inner workings of our society follow a Gaussian distribution, when in fact the opposite is true: they are decidedly sensitive to the extremes.
The main idea is that Gaussian (bell-curve) variations face a headwind that makes probabilities drop at a faster rate as you move away from the mean, while "scalables" impose no such restriction.
This idea is illustrated throughout the book with the juxtaposition of Mediocristan (where no single instance can affect the aggregate, featuring the physical, non-scalable, and subject to gravity) vs. Extremistan (where a single unit can affect the total in a disproportionate way; usually related with scalable matter, such as social issues or 1s and 0s).
In Mediocristan, as the sample size increases, the observed average will present itself with less and less dispersion — the distribution (or bell shape) will be narrower. In a nutshell, this is the "law of large numbers": uncertainty in Mediocristan vanishes under averaging.
The consequence of this is that variations around the average of the Gaussian, also called "errors," are not truly worrisome. They are small and they wash out. They are domesticated fluctuations around the mean.
An extreme example of how the randomness of the Gaussian is tamable by averaging is "the stability" of a coffee cup. If the cup were one large particle, or acted as one, then its jumping off the table would be a problem. But since the cup is the sum of trillions of very small particles, it is extremely stable.
However, the same does not hold in a winner-take-all environment such as Extremistan. Intellectual, scientific, and artistic activities belong to the province of Extremistan, where there is a severe concentration of success, with a very small number of winners claiming a large share of the pot.
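The Mediocristan/Extremistan contrast above can be sketched with a small Monte Carlo experiment. The distributions and names are illustrative assumptions (a Gaussian standing in for Mediocristan, a heavy-tailed Pareto for Extremistan; `mean_dispersion` is a helper invented here), not anything from the book:

```python
import random
import statistics

random.seed(42)

def mean_dispersion(draw, n_means=500, sample_size=1000):
    """Spread (std. dev.) of many sample means: does averaging tame randomness?"""
    means = [statistics.fmean(draw() for _ in range(sample_size))
             for _ in range(n_means)]
    return statistics.stdev(means)

# Mediocristan: Gaussian "heights" -- no single draw moves the average much.
gauss = lambda: random.gauss(170, 10)
# Extremistan: Pareto "wealth" with tail index 1.1 -- one draw can dominate.
pareto = lambda: random.paretovariate(1.1)

print(mean_dispersion(gauss))   # small, and shrinks further as sample_size grows
print(mean_dispersion(pareto))  # much larger; averaging barely helps
```

For the Gaussian the dispersion of the mean falls like 1/sqrt(sample_size); for a Pareto with tail index below 2 the variance is infinite, so no amount of averaging domesticates it.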
There are two varieties of randomness, qualitatively different. One does not care about extremes; the other is severely impacted by them. One does not generate Black Swans; the other does.
The "labor" person sells his work. Some professions, such as dentists, restaurants, consultants, or massage professionals, cannot be scaled — no matter how highly paid, your income is subject to gravity.
Your revenue depends on your continuous efforts more than on the quality of your decisions.
The "idea" person sells an intellectual product in the form of a transaction or a piece of work. As "idea" person you do not have to work hard, only think intensely. You do the same work whether you produce a hundred units or a thousand.
🖇️ It is the idea of zero marginal cost and removing local barriers and friction:
🔖 The inventor of the chessboard requested the following compensation: one grain of rice for the first square, two for the second, four for the third, eight, then sixteen, and so on, doubling every time, sixty-four times.
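The chessboard request is a geometric series whose total is easy to check; the per-grain weight used below for the tonnage is a rough assumption (~25 mg), not a figure from the book:

```python
# Doubling rice on each of the 64 squares: 1 + 2 + 4 + ... + 2**63.
total = sum(2**square for square in range(64))
print(total)  # 18446744073709551615, i.e. 2**64 - 1

# Rough tonnage, assuming ~25 mg (0.025 g) per grain -- a hypothetical figure.
tonnes = total * 0.025 / 1e6
print(tonnes)  # on the order of hundreds of billions of tonnes
```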
There is more money in designing than in actually making: organizations can get paid for just thinking, organizing, and leveraging their know-how and ideas while subcontracted factories in developing countries do the grunt work. Which explains why losing manufacturing jobs can be coupled with a rising standard of living.
Capitalism is, among other things, the revitalization of the world thanks to the opportunity to be lucky. Luck is the grand equalizer, because almost everyone can benefit from it.
📍 Stochastic tinkering: free markets work because they allow aggressive trial-and-error, not by rewarding skills. We are all benefiting from overconfident entrepreneurs who fall for the narrative fallacy.
By the same token, organizations can go bust as often as they like, thus subsidizing us consumers by transferring their wealth into our pockets.
Out of the 500 largest U.S. companies in the 1950s, only 74 were still part of the S&P 500 in the 1990s. A few had disappeared in mergers; the rest either shrank or went bust.
The more bankruptcies, the better for us — unless they are "too big to fail" and require subsidies, which is an argument in favor of letting companies go bust early.
✏️ The strategy is to tinker as much as possible and try to collect as many Black Swan opportunities as you can.
There is no way to derive profits from traded securities since these instruments have automatically incorporated all the available information. Therefore, public information can be useless, since prices can already "include" all such information, and news shared with millions gives you no real advantage.
It is easier for the rich to get richer, for the famous to become more famous.
—— (adapted from) Matthew 25:29
Ideas are not like genes: what people call "memes," which compete with one another using people as carriers, are not truly like genes. Ideas spread because their carriers are self-serving agents who are interested in them, and interested in distorting them in the replication process.
To be contagious, a mental category must agree with our nature.
📍 The long tail implies that the small guys, collectively, should control a large segment of culture and commerce, via the niches and subspecialties that can now survive thanks to the Internet.
Despite the idea that networks are evenly balanced and distributed, they have a natural tendency to organize themselves around an extremely concentrated architecture: a few nodes are extremely connected and serve as hubs; others barely so.
This has implications in Extremistan: we'll see more periods of calm and stability, with most problems concentrated into a small number of Black Swans.
We will have fewer, but more severe, crises: conflicts with extremely low probability, yet capable of degenerating into the total decimation of the human race, conflicts from which nobody is safe anywhere.
The same applies to globalization: it creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans. We have never lived before under the threat of a global collapse.
We can find confirmation for just about anything.
Read books are far less valuable than unread ones. Black Swan logic makes what you don't know (antiknowledge) far more relevant than what you do know.
📍 The problem of induction: how can we know the future, given knowledge of the past? How can we figure out properties of the (infinite) unknown based on the (finite) known?
We get closer to the truth by negative instances, not by verification. It is misleading to build a general rule from observed facts. Contrary to conventional wisdom, our body of knowledge does not increase from a series of confirmatory observations.
See the turkey example: what can a turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that "little less" that may make all the difference.
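The turkey's growing confidence can be caricatured in a few lines. The naive `n / (n + 1)` estimate is a simple stand-in for inductive confidence invented for illustration, not a formula from the book:

```python
# The turkey is fed for 1,000 straight days; a naive inductive estimate of
# "I'll be fed again tomorrow" only ever climbs with each confirming day.
fed_days = 1000
confidence = fed_days / (fed_days + 1)
print(round(confidence, 3))  # 0.999 -- highest right before the surprise
```

Every confirming observation raises the estimate, which is exactly why it peaks at the worst possible moment.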
Karl Popper: you formulate a (bold) conjecture and you start looking for the observation that would prove you wrong.
Something has worked in the past, until it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.
On top of that, past Black Swans can't be used to predict the future: one cannot derive predictions from explanations and patterns formulated ex post.
Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified.
📍 Domain-specific: our reactions, mode of thinking, intuitions, depend on the context in which the matter is presented. We react to information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system.
We tend to use different mental modules for different events: our brain lacks a central all-purpose computer that starts with logical rules and applies them equally to all possible situations.
🖇 An idea from Thinking, Fast and Slow: large samples are more precise than small samples, which is the same as saying that extreme outcomes are more likely to be found in small samples. Put differently, large samples are more stable and fluctuate less around the long-term average.
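A quick simulation of that point, under an assumed standard normal; `extreme_rate` and its threshold are illustrative choices, not anything from either book:

```python
import random
import statistics

random.seed(0)

def extreme_rate(sample_size, trials=2000, threshold=0.5):
    """Fraction of samples (standard normal) whose mean strays past the threshold."""
    hits = sum(
        abs(statistics.fmean(random.gauss(0, 1) for _ in range(sample_size))) > threshold
        for _ in range(trials)
    )
    return hits / trials

print(extreme_rate(5))    # extreme means are common in small samples
print(extreme_rate(100))  # and all but vanish in large ones
```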
We tend to learn the precise, just facts; not the general, the rules.
📍 The narrative fallacy: the lack of ability to look at sequences of facts without weaving an explanation into them; forcing a logical link, an arrow of relationship, upon them.
We like stories, we like to summarize, and simplify — to reduce the dimension of matters.
The more random information is, the greater the dimensionality, thus the more difficult to summarize. Explanations bind facts together. They make them more easily remembered and make more sense.
Where this propensity can go wrong is when it increases our impression of understanding. It pushes us to think that the world is less random than it actually is.
There is a biological foundation on why we tend towards simplification; it has to do with the chemical dopamine. Information is costly to store. The more orderly, patterned, and narrated, the easier it is to store in one's mind.
Our brains suffer from indexing limitations rather than from limits on conscious storage capacity. We hold life-long memories, but struggle to remember a phone number.
Narrativity and causality make time flow in a single direction, and mess with our remembrance of past events: we tend to remember facts from our past that fit a narrative.
We recall events while knowing the answer of what happened subsequently. It is impossible to ignore posterior information. This simple inability to remember not the true sequence of events, but a reconstructed one will make history appear in hindsight to be far more explainable than it actually was, or is.
Memory is dynamic, not set in stone the moment it is recorded. We continuously revise and re-narrate past events in the light of what appears to make logical sense after these events occur.
Opinions converge. If you selected 100 journalists in isolation, you would get 100 opinions. But the process of having people report in lockstep caused the dimensionality of the opinion set to shrink — they converged on opinions and used the same items as causes.
Each story will immediately seek and draw a cause to make matters more concrete, and fit into a narrative you can "buy". It is as if they wanted to be wrong with infinite precision.
Narratives also make us react to stories we can relate to rather than to statistics:
✏️ Signal self-confidence: humans will believe anything you say provided you do not exhibit the smallest shadow of diffidence.
A bottle or a cistern? Our enjoyment declines with additional quantities. These nonlinear relationships are ubiquitous in life. Linear relationships are truly the exception.
Mother Nature destined us to derive enjoyment from a steady flow of pleasant small, but frequent, rewards. How good each one is matters rather little; frequency matters more. To have a pleasant life you should spread these small "affects" across time as evenly as possible.
The same property in reverse applies to our unhappiness. It is better to lump all your pain into a brief period rather than have it spread out over a longer one.
It follows that the right strategy for our current environment may not offer internal rewards and positive feedback. The world has changed too fast for our genetic makeup.
People devoted to "their own Black Swan" spend most of their time waiting for the big day that (usually) never comes. This takes your mind away from the pettiness of life — the cappuccino that is too warm or too cold, the waiter too slow or too intrusive, the food too spicy or not enough — all these considerations disappear because you have your mind on much bigger and better things.
However, there is a price you might end up paying for seeking a Black Swan: the hippocampus is the structure where memory is supposedly controlled. It is the most plastic part of the brain; it is also the part assumed to absorb all the damage from chronic stress. These small, seemingly harmless stressors do not strengthen you; they can amputate part of your self.
✏️ Takeaway: look at your portfolio once a month, instead of updates every minute.
📍 Anticipated utility by Danny Kahneman or affective forecasting by Dan Gilbert: you are about to buy a new car. It is going to change your life. You do not anticipate that the effect of the new car will eventually wane and that you will revert to the initial condition, as you did last time. If you had expected this, you probably would not have bought it.
We fail to learn from our past errors in projecting the future of our affective states. We grossly overestimate the length of the effect of misfortune on our lives. You will adapt to anything, as you probably did after past misfortunes.
A life saved is a statistic; a person hurt is an anecdote.
The losers (or the dead) are not the ones writing the history books: drowned worshippers, being dead, would have a lot of trouble advertising their experiences from the bottom of the sea. This can fool the casual observer into believing in miracles. We call this the problem of silent evidence.
The graveyard of failed persons will be full of people who shared the following traits: courage, risk taking, optimism... Just like the population of millionaires. There may be some differences in skills, but what truly separates the two is for the most part a single factor: luck. Plain luck.
If we have a large enough sample pool and an extreme asymmetry of outcomes it is easy (yet wrong) to assume the winners, the lucky ones, shared innate talent or unique skill.
📍 Beginner's luck: those who start gambling will be either lucky or unlucky (given that the casino has the advantage, a greater number will be unlucky). The lucky ones, with the feeling of having been selected by destiny, will continue gambling; the others, discouraged, will stop and will not show up in the sample.
You either have to believe some transcendental intervention is in play or accept the lucky gambler's skill and insight in picking the winning numbers. But if you take into account the quantity of gamblers out there, and the number of gambling sessions (several million episodes in total), then it becomes obvious that such strokes of luck are bound to happen.
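A rough simulation of this survivorship filter. The 49% win rate and the drop-out rule are hypothetical numbers invented for illustration:

```python
import random

random.seed(7)

# Hypothetical game with a 49% win chance per round (the house keeps its edge).
def career(rounds=10):
    """Keep gambling only while not behind; the discouraged vanish from the sample."""
    bankroll = 0
    for _ in range(rounds):
        bankroll += 1 if random.random() < 0.49 else -1
        if bankroll < 0:
            return None  # drops out: silent evidence
    return bankroll

results = [career() for _ in range(100_000)]
survivors = [r for r in results if r is not None]
print(len(survivors) / len(results))    # a minority survive...
print(sum(survivors) / len(survivors))  # ...and their average record looks "skilled"
```

The game has a negative expectation, yet the visible population (the survivors) shows a positive average record, purely because the losers are filtered out of view.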
Be suspicious of the "because" and handle it with care. Particularly in situations where you suspect silent evidence.
The fact that you survived is a condition that may weaken your interpretation of the properties of the survival.
Our life may have come under a serious threat, but, having survived it, we retrospectively underestimate how risky the situation actually was.
Same applies to evolution: it is a series of flukes, some good, many bad. You only see the good. But, in the short term, it is not obvious which traits are really good for you, particularly if you are in the Black Swan–generating environment of Extremistan.
Think of the odds of the parameters being exactly where they need to be to induce our existence (any deviation from the optimal calibration would have made our world explode, collapse, or simply not come into existence).
For someone who observes all adventurers, the odds of finding a Casanova are not low at all: there are so many adventurers, and someone is bound to win the lottery ticket.
The problem here with the universe and the human race is that we are the surviving Casanovas, or the lucky gambler. When you start with many adventurous Casanovas, there is bound to be a survivor, and guess what: if you are here talking about it, you are likely to be that particular one.
So we can no longer naïvely compute odds without considering that the condition that we are in existence imposes restrictions on the process that led us here.
Even at a more personal level: if I am now writing these lines, it is certainly because history delivered a "rosy" scenario, one that allowed me to be here, a historical route in which I avoided massacres, wars, and many other "dead ends".
Our being here is a consequential low-probability occurrence, and we tend to forget it.
Do not compute odds from the vantage point of the winning gambler (or the lucky Casanova), but from all those who started in the cohort. If you look at the entire starting population taken as a whole, you can be close to certain that one of them (but you do not know in advance which one) will show stellar results just by luck.
This is why we do not see Black Swans: we worry about what has happened, not what may happen but did not. We respect what has happened, ignoring what could have happened.
✏️ Wrapping up Part I: shut down the television set, minimize time spent reading newspapers, ignore the blogs. Train your reasoning abilities to control your decisions; nudge System 1 (the heuristic or experiential system) out of the important ones. This insulation from the toxicity of the world will have an additional benefit: it will improve your well-being.
We tend to "tunnel" while looking into the future, making it business as usual, Black Swan–free, when in fact there is nothing usual about the future.
The future ain't what it used to be.
—— Yogi Berra
📍 Epistemic arrogance: our hubris concerning the limits of our knowledge. We overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states. We are demonstrably arrogant about what we think we know.
When you are employed, hence dependent on other people's judgment, looking busy can help you claim responsibility for the results in a random environment. The appearance of busyness reinforces the perception of causality, of the link between results and one's role in them.
📍 Belief perseverance: the tendency not to reverse opinions you already have. Once we produce a theory, we are not likely to change our minds — so those who delay developing their theories are better off. Additional knowledge of the minutiae of daily business can be useless, even actually toxic.
📍 Past and future asymmetry: in people's minds, the relationship between the past and the future does not learn from the relationship between the past and the past previous to it. Because of this introspective defect we fail to learn about the difference between our past predictions and the subsequent outcomes. When we think of tomorrow, we just project it as another yesterday.
This makes us believe in definitive solutions — yet not consider that those who preceded us thought that they too had definitive solutions. We laugh at others and we don't realize that someone will be just as justified in laughing at us on some not too remote day.
🔖 Imagine being a member of a higher-level species, far more sophisticated than humans. You would certainly laugh at the people laughing at the non-human primates. Yet to those people amused by the apes, the idea of a being who would look down on them the way they look down on the apes cannot immediately come to mind. If it did, they would stop laughing.
Do not trust experts: note the difference between "know-how" and "know-what." The Greeks made a distinction between technē and epistēmē. The empirical school wanted its practitioners to stay closest to technē (craft), and away from epistēmē (knowledge).
Professions that deal with the socioeconomic future and base their studies on the non-repeatable past have an expert problem. They do not know what they do not know.
We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness.
🔖 Interesting view around executive vs. craftsman: needless to say they were usually sleep-deprived. Being an executive does not require very developed frontal lobes, but rather a combination of charisma, a capacity to sustain boredom, and the ability to shallowly perform on harrying schedules. [...] and that strange activity called the business meeting, in which well-fed, but sedentary, men voluntarily restrict their blood circulation with an expensive device called a necktie.
Montaigne fully accepted human weaknesses and understood that no philosophy could be effective unless it took into account our deeply ingrained imperfections, the limitations of our rationality, the flaws that make us human.
To me utopia is an epistemocracy, a society in which anyone of rank is an epistemocrat, and where epistemocrats manage to be elected. It would be a society governed from the basis of the awareness of ignorance, not knowledge.
📍 Modern politics: we are made to follow leaders who can gather people together because the advantages of being in groups trump the disadvantages of being alone. It has been more profitable for us to bind together in the wrong direction than to be alone in the right one. Those who have followed the assertive idiot rather than the introspective wise person have passed us some of their genes.
📍 The riddle of induction: we project a straight line only because we have a linear model in our head. The fact that a number has risen for 1,000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1,001.
The same past data can confirm a theory and also its exact opposite! If you survive until tomorrow, it could mean that either (a) you are more likely to be immortal, or (b) you are closer to death.
Used correctly and in place of more visceral reactions, the ability to project effectively frees us from immediate, first-order natural selection — as opposed to more primitive organisms that were vulnerable to death and only grew by the improvement in the gene pool through the selection of the best. In a way, projecting allows us to cheat evolution: it now takes place in our head, as a series of projections and counterfactual scenarios.
It is as if evolution has put us on a long leash whereas other animals live on the very short leash of immediate dependence on their environment.
Randomness, in the end, is just unknowledge.
📍 Forward-backward problem — assuming nonlinearity: a single butterfly flapping its wings in New Delhi may be the certain cause of a hurricane in North Carolina, though the hurricane may take place a couple of years later.
However, given the observation of a hurricane in North Carolina, it is dubious that you could figure out the causes with any precision: there are billions of billions of such small things as "wing-flapping butterflies" that could have caused it. The process from the butterfly to the hurricane is greatly simpler than the reverse process from the hurricane to the potential butterfly. Confusion between the two is disastrously widespread in common culture.
Pascal advocated eliminating the need to understand the probabilities of a rare event, since there are fundamental limits to our knowledge of these; rather, we can focus on the payoff and benefits of an event if it takes place.
This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can't know) is the central idea of uncertainty. Much of my life is based on it. You can build an overall theory of decision making on this idea.
Be the only one looking for the unread books.
📍 Seek for asymmetry: put yourself in situations where favorable consequences are not capped and much larger than the unfavorable ones. Indeed, the notion of asymmetric outcomes is the central idea of this book.
People are ashamed of losses, and thus engage in strategies that produce very little volatility but are exposed to blowups. This trade-off between volatility and risk can show up in careers that give the appearance of being stable, like jobs at IBM until the 1990s. When laid off, the employee faces a total void: he is no longer fit for anything else.
The strategy is to be as hyper-conservative and hyper-aggressive as you can be, instead of being mildly aggressive or conservative:
📍 Convex combination: the average will still be medium risk, but constitutes a positive exposure to the Black Swan. When you have a very limited loss you need to get as aggressive, as speculative, and sometimes as "unreasonable" as you can.
Nero engaged in a strategy that he called "bleed." You lose steadily, daily, except when some event takes place for which you get paid disproportionately well. No single event can make you blow up, on the other hand — some changes in the world can produce extraordinarily large profits that pay back such bleed for years.
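A minimal sketch of such an asymmetric, capped-downside exposure. All numbers here (the 90/10 split, the 5% chance of a 50x payoff) are hypothetical illustrations, not an actual strategy from the book:

```python
import random

random.seed(1)

def barbell(wealth=100.0, safe_frac=0.9, years=10):
    """Park 90% safely; spread the rest on long shots with uncapped upside."""
    safe = wealth * safe_frac                 # worst case: this is all that remains
    stake = wealth * (1 - safe_frac) / years  # the "bleed" lost in most years
    payoff = 0.0
    for _ in range(years):
        # Hypothetical long shot: 5% chance of a 50x payoff, else the stake is lost.
        if random.random() < 0.05:
            payoff += stake * 50
    return safe + payoff

outcomes = [barbell() for _ in range(10_000)]
print(min(outcomes))  # the loss is capped near the safe 90%
print(max(outcomes))  # the upside is not
```

The point of the structure is visible in the two prints: no run can lose more than the speculative sliver, while a lucky run is paid disproportionately well.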
✏️ Go to parties: maximize the serendipity around you. Do not look for the precise and the local. Work hard, but not at grunt work: work at chasing Black-Swan-like opportunities and maximizing your exposure to them.