28 February 2017

Book Review

The Drunkard's Walk: How Randomness Rules Our Lives

Author: Leonard Mlodinow
Publisher: Penguin, 2009

This book is about the important role that probability and statistics play in our lives. Its title comes from the haphazard path traced by molecules in a fluid. This phenomenon was first observed by Robert Brown, later analyzed by Boltzmann and Maxwell, and finally explained mathematically by Einstein in one of his 1905 papers. Just as random bombardments by molecules on a piece of pollen suspended in a liquid sometimes reinforce one another and give the pollen a visible nudge, so do random events in our lives sometimes shape our future more than we could have predicted.
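To make the picture concrete, here is a minimal sketch (my own, not from the book) of a one-dimensional drunkard's walk in Python: each step is an independent random nudge, yet the cumulative displacement can drift surprisingly far from the starting point.

```python
import random

def drunkards_walk(steps=1000, seed=42):
    """Simulate a 1-D random walk: each step is +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += rng.choice([-1, 1])
    return position

# The individual nudges average out to zero, yet the net displacement after
# many steps is typically on the order of sqrt(steps), not zero.
final_positions = [drunkards_walk(1000, seed=s) for s in range(20)]
print(final_positions)
```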

There are, in fact, two problems with accepting this. First, our minds are not really built to either generate or recognize randomness; they naturally look for patterns and try to find the cause of things. Second, we like to be in control. We value skill and ability. To say that our future is determined by chance is quite the opposite of what we like to believe.

The author arrives at a compromise. Yes, skill and ability count for something. The more we practice, the better we get at the things we do. But the link between how good we are at something and the success we achieve is not quite direct. Plenty of things can get in the way to either reinforce or divert that link. This is not as bleak as fatalism: our abilities improve our chances of success but do not guarantee it. We therefore have to first recognize the importance of chance, and then understand it better so that we do not misinterpret or misapply its laws.

The author presents a number of interesting examples and real-world case studies to show how simple situations can be misread. In one study, people judged that a woman described to them was more likely to be a bank teller and a feminist than to be simply a bank teller, even though the former is a subset of the latter: a story with more detail somehow appears more credible. When analyzing the success of movies or the winning streaks of basketball teams, the result is often attributed to the leader or manager. In fact, exceptional performances or failures are commonly followed by a normal result. This is called regression to the mean, and we should not mistake it for causality. It is therefore important not to judge people by their results alone; look at their skills and abilities and be brave enough to bet on them.
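As a rough illustration of regression to the mean (my own sketch, not code from the book), the following models each observed result as a fixed skill plus random luck; the top performers in one round score close to the average in the next, even though nothing about their skill has changed.

```python
import random

random.seed(1)

def performance(skill, luck_spread=10.0):
    """One observed result: underlying skill plus a random luck component."""
    return skill + random.gauss(0.0, luck_spread)

# 1000 players with identical skill; only luck separates their results.
skills = [50.0] * 1000
round1 = [performance(s) for s in skills]
round2 = [performance(s) for s in skills]

# Take the top 10% of round 1 and see how the same players did in round 2.
ranked = sorted(range(len(skills)), key=lambda i: round1[i], reverse=True)
top = ranked[:100]
avg_top_round1 = sum(round1[i] for i in top) / len(top)
avg_top_round2 = sum(round2[i] for i in top) / len(top)

print(f"Top performers, round 1 average: {avg_top_round1:.1f}")  # well above 50
print(f"Same players, round 2 average:   {avg_top_round2:.1f}")  # back near 50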

Another interesting example is the case against O. J. Simpson, where we learn about the inversion error. Apparently, the defense argued that only 1 in 2500 men who batter their wives go on to kill them. On this basis, the verdict was not guilty. The fact that should have been considered is that, of all battered women who are murdered, about 90% are killed by their abusers. We often confuse the two conditional probabilities, possibly because we have not studied conditional probability as codified by Thomas Bayes.
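To see why the two probabilities differ so sharply, here is a rough calculation in the spirit of the book's argument, using illustrative figures of my own (the background homicide rate below is an assumption, not a statistic from the book): even if few batterers go on to kill, among battered women who are murdered the batterer is by far the most likely culprit.

```python
# Illustrative figures, not statistics quoted in the book.
battered_women = 100_000

p_killed_by_batterer = 1 / 2500         # the defense's figure
p_killed_by_someone_else = 1 / 20_000   # assumed background homicide rate

killed_by_batterer = battered_women * p_killed_by_batterer   # 40 women
killed_by_other = battered_women * p_killed_by_someone_else  # 5 women

# The relevant question: given that a battered woman was murdered,
# what is the probability that the batterer did it?
p_batterer_given_murdered = killed_by_batterer / (killed_by_batterer + killed_by_other)
print(f"{p_batterer_given_murdered:.0%}")  # close to 90%
```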

The author relates a personal anecdote in which his doctor told him he had tested HIV positive. Upon investigation, the author discovered that his doctor had got his probabilities wrong. While HIV tests may be accurate, false positives also occur. If you test positive, the chance that you actually carry the virus is high only if you are in a high-risk group, not if you are in a low-risk group. This is an example where a basic knowledge of probability has practical value in one's life.
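A minimal Bayes calculation, with assumed illustrative numbers rather than the book's exact figures, shows how a positive result from an accurate test can still leave the probability of infection low when the condition is rare in your group.

```python
def posterior_positive(prevalence, sensitivity=0.999, false_positive_rate=0.001):
    """P(infected | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Assumed prevalences: roughly 1 in 10,000 in a low-risk group,
# roughly 1 in 100 in a high-risk group.
print(f"Low-risk group:  {posterior_positive(1 / 10_000):.1%}")   # about 9%
print(f"High-risk group: {posterior_positive(1 / 100):.1%}")      # about 91%
```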

The other interesting story is the Monty Hall problem. There are three doors, one of which hides an expensive prize. You choose one. The host then throws open one of the other two doors and lets you decide whether to switch your choice. Most people do not care, thinking the odds are really 50/50 between the two unopened doors. The fact is that once the host intervenes, the game is no longer random: the host has used his knowledge to open a door that does not hide the prize. If you stick, you win only when your original choice was right, which happens 1/3 of the time; if you switch, you win whenever your original choice was wrong, which happens 2/3 of the time.
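A quick simulation (my own sketch, not code from the book) makes the 1/3 versus 2/3 split easy to verify:

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Estimate the probability of winning the Monty Hall game."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        choice = rng.randrange(3)
        # Host opens a door that is neither the player's choice nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"Stay:   {monty_hall(switch=False):.3f}")  # about 0.333
print(f"Switch: {monty_hall(switch=True):.3f}")   # about 0.667
```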

The book starts with probability and ends with statistics. The author distinguishes the two briefly: "The former concerns predictions based on fixed probabilities; the latter concerns the inference of those probabilities based on observed data." We are often fooled by reports in the press of how ratings have improved by some percentage points; what we are not told is that the number is only a statistical measure with its own limits of accuracy. The same is true of grades given to students or the weight of loaves of bread. In fact, variation in measurements is so natural that if it is absent we should suspect something is wrong; this is used nowadays to detect fraud. During World War II, London was bombed by V-2 rockets, and many believed the rockets sought their targets with extreme precision. But an analysis in 1946 showed that the pattern of hits was consistent with a random distribution.
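That analysis divided the city into small grid squares and compared how many squares were hit 0, 1, 2, ... times against what a Poisson distribution predicts for purely random targeting. Here is a small sketch of the same idea, using made-up grid and rocket counts rather than the original data:

```python
import math
import random
from collections import Counter

random.seed(0)

squares, hits = 600, 500  # illustrative grid size and rocket count (assumed)
impacts = [random.randrange(squares) for _ in range(hits)]   # purely random targeting
counts = Counter(Counter(impacts).values())                  # squares hit k times, k >= 1
counts[0] = squares - sum(counts.values())                   # squares never hit

lam = hits / squares  # average hits per square
for k in range(5):
    expected = squares * math.exp(-lam) * lam**k / math.factorial(k)
    print(f"{k} hits: observed {counts.get(k, 0):4d}, Poisson predicts {expected:6.1f}")
```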

What is interesting in this book is the way the author weaves these anecdotes together with the development of the mathematics of probability and statistics. He begins with the Greeks and Romans, but the theory of randomness really starts with Gerolamo Cardano in the 16th century, who is credited with the idea of the sample space. Soon after, Galileo recognized the importance of permutations as distinct from combinations. In 1654, Pascal attempted to solve a gaming problem and in the process came up with his now famous Pascal's Triangle; it was also Pascal who introduced the notion of the expected value of a random variable. Jakob Bernoulli is credited with the law of large numbers, which tells us how many trials we must perform to obtain a result close to the actual value with a given confidence. Thomas Bayes showed us how to improve initial estimates in light of new observations. Laplace, building on Gauss's work, published what is now called the Central Limit Theorem. Karl Pearson designed the chi-square test to check whether observations conform to a distribution.

In conclusion, randomness does not imply chaos. There is orderliness in the sense that numbers follow statistical distributions. Within certain limits of confidence, we can infer and predict. While this is true of a population in general, we cannot predict how a particular individual's future will turn out. In hindsight, it's usually easy to explain why something happened because the alternatives have been forgotten. Looking forward, it's hard to predict what will happen.

About the Author

Arvind Padmanabhan

Arvind Padmanabhan graduated from the National University of Singapore with a master’s degree in electrical engineering. With fifteen years of experience, he has worked extensively on various wireless technologies including DECT, WCDMA, HSPA, WiMAX and LTE. He is passionate about training and is keen to build an R&D ecosystem and culture in India. He recently published a book on the history of digital technology: http://theinfinitebit.wordpress.com.
