

Posted

Anyone else looking forward to this? It may touch on a lot of hot-topic subjects from the past year or so. Particularly, why experts often don't know what they're talking about.

  • 1 month later...
Posted

It’s a good book! The “noise” topic has been discussed by Mr. Kahneman and others before, but the book is an updated summary, kind of. Here’s a summary and review which i’ve tried real hard to make bias- and noise-free. The topic (decision making and communication) has always been a parallel interest wherever most of my time has been spent (professional and personal). FWIW.

 

The topic is tangentially relevant to an investment board because of the behavioral finance aspect. The main author is well known and the co-authors are O. Sibony, an ‘academic’ business consultant and C. Sunstein, an ‘academic’ behavioral specialist with some ‘political’ experience (under the Obama administration).

 

They explain that noise is not bias. Biases are many (‘social’ and cognitive) but their defining characteristic is that they tend to cause decisions to be wrong in a consistent and systematic way. Biases are well known and are easy to recognize (especially in others...). The book is not about biases, although the difference between noise and bias, contrary to what the authors seem to suggest, is not completely binary but lies more along a spectrum, especially for some types of noise (see paragraphs below).

 

Helpful picture from a previous relevant publication to illustrate the difference between noise and bias:

 

[Image: noise vs. bias illustration]

 

Their definition of noise is basically unwanted variability in judgments or decision-making outcomes not explained by biases. The backbone of the book is to show real-life examples where noise is a source of poor decision outcomes in many fields: medical, legal, business valuations and appraisals, economic forecasts, etc. They define two basic types (or sources) of noise and suggest potential solutions to measure noise, reduce it and improve decision making. The basic message is that inconsistency in judgments has a lot to do with noise, and they try to show ways to introduce some kind of discipline into the wildness.

 

The most basic form of noise (and the most amenable to improvement) is the variability due to general circumstances unrelated and irrelevant to the actual decision process (e.g. mood, weather, fatigue, unrelated circumstances of daily life). This type of noise explains the significant intra-observer variability (the same person evaluating the same data at different times) that has been documented in many areas. This seems like a rather mundane concern but it’s a huge problem in many important domains. It’s been shown that sentencing and various other decisions with legal implications can be influenced significantly by completely unrelated and irrelevant factors.

 

The other forms of noise, which can be grouped together, reflect a general tendency for individuals to lean certain ways and/or to think differently (different thought processes or different patterns of assessment). There is a potential intersection with bias here, but it’s important to make a distinction when trying to measure and address this sub-type of noise because the strategies are different. Even if it originates from individual variations, and even if it has bias-like characteristics at the individual level, it is really noise and can be impactful at the systemic level.

 

{Personal note: in some cases (administrative courts and others) where i’ve been involved as an ‘expert’, the two opposing parties would learn which judge was sitting only on the morning of the hearing and, in a significant number of cases, this would trigger some kind of settlement or even dropping the case because of the extreme ways some judges were consistently leaning (typically a right-left thing or a variation thereof), which underlined the excessive variability embedded in the process (it felt almost like a lottery at times).}

 

For example, the authors describe how physicians come to different conclusions (diagnoses) when assessing the same patient or the same set of data. This is also striking in sub-groups where the objectivity of the data is more prominent in the decision process (e.g. analysis of pathology specimens under the microscope, imaging studies evaluated by radiologists), where one would not expect such variability. In fact, when the relevant participants learn about such variability levels, they are surprised and had expected much less variation. There is both inter-observer variability and intra-observer variability. The intra-observer variability is clearly and essentially unwanted noise, especially when the variability is large and impactful (it often is). They describe similar scenarios in the legal field and in the process of various business appraisals, including underwriting decisions. Some of this noise is related to uncertainty in the underlying problem or issue, but a significant part of it is detrimental and unwanted.

 

A key aspect that is only touched upon in the book is the importance of not completely eliminating noise because, at the systemic level, one has to strike a reasonable balance between the asset of the potential to be right unconventionally (atypical but possible) and the liability of being wrong unconventionally (typical). In fact, one could argue that, as an individual, being a successful contrarian investor requires succeeding unconventionally. But the book is not a recipe to beat the market; it’s a guide to improve systematic decision making, and it seems clear that noise levels are very high in various areas of decision making and that the systemic cost of this noise is also very high. There can be tremendous value in intuitive judgments, especially in certain situations, but the noise associated with intuitions can be very large and damaging. Intuition is one of the driving forces behind “animal spirits”. Animal spirits should not be killed but they have to be kept in check.

 

{People who follow the opioid crisis saga may be interested to look into what a specific judge is trying to do in an Ohio court. The judge has taken an intuitive (and potentially noisy) route and is meeting checks and balances. The GSEs’ potential to leave conservatorship and the legal implications also reveal noise-related issues as well as the issue of different and opposing schools of thought and various levels of decision powers (legislative versus judicial).}

 

The first step is to accept that noise exists and that it can be measured and remediated. The authors define how noise can be measured (a ‘noise audit’) and describe various ‘decision hygiene’ (noise reduction) techniques. They also address some cost-benefit considerations to weigh before undertaking such projects.
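
To make the ‘noise audit’ idea concrete, here is a minimal sketch (my own illustration, not from the book, with made-up numbers): several judges assess the same cases, and the noise for each case is simply the spread of their judgments around the case average.

```python
import numpy as np

# Hypothetical noise audit: rows = judges, columns = identical cases.
# Each cell is a judge's numerical assessment (e.g. a dollar estimate).
judgments = np.array([
    [100.0, 250.0,  80.0],
    [140.0, 210.0,  95.0],
    [ 90.0, 300.0,  70.0],
    [120.0, 260.0, 110.0],
])

# Noise for each case is the spread of judgments around the case mean;
# a standard deviation is used here as the working measure.
case_means = judgments.mean(axis=0)
noise_sd = judgments.std(axis=0, ddof=1)
relative_noise = noise_sd / case_means   # scale-free version for comparing cases

for i, (m, s, r) in enumerate(zip(case_means, noise_sd, relative_noise), start=1):
    print(f"case {i}: mean={m:.0f}, noise (sd)={s:.1f}, relative noise={r:.0%}")
```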

 

Although not explicitly mentioned in the book, simply taking the noise effect and its consequences on the decision-making process seriously (similar to the Hawthorne effect) can be associated with improved outcomes, although this may be temporary in nature. But constructive attempts to improve the culture can result in lasting and self-sustaining improvements. The authors give concrete examples and suggest focusing on areas where unwanted variability is high and where standardization of the process and the introduction of objective criteria, even if the underlying variables are subjective in nature, can effectively reduce noise at acceptable cost (financial costs, productivity issues, etc.).

 

They give the basic example of hand washing. Hand washing, in certain selected circumstances, has been shown to reduce bad outcomes. By making the hand-washing component automatic, or part of a standard routine, some noise tends to be automatically eliminated from the decision-making process and outcomes are improved. The aviation industry has understood this for a long time (the safety aspect, the importance of protocols, checklists, etc.).

 

{In my humble experience, apparently small and simple (but well designed) improvements can result in immediate, impressive and sustainable improvements.}

 

They also suggest the possibility of aggregating more than one decision maker when unusually impactful decisions are made. This is applied in selecting candidates for top management positions and in higher-education candidate selection. You can see this aspect operating at the Board of Directors level, especially if decisions (such as mergers) are separated into sub-units and evaluated independently. This is controversial and may not work, because of the potential echo-chamber effect (among others) and because of the risk of missing the forest for the trees.
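
As a rough illustration of why aggregating independent judgments reduces noise (a toy simulation of my own, not from the book): if individual errors are independent, the spread of the averaged judgment shrinks roughly with the square root of the panel size, and the echo-chamber concern is precisely that real panels are not independent.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0        # the unknown "right" answer
judge_sd = 20.0           # assumed spread of an individual judgment
n_trials = 10_000

for panel_size in (1, 4, 9, 16):
    # Each trial: a panel of independent judges whose judgments are averaged.
    panels = true_value + judge_sd * rng.standard_normal((n_trials, panel_size))
    averaged = panels.mean(axis=1)
    print(f"panel of {panel_size:2d}: sd of the averaged judgment = {averaged.std():.1f}")

# The sd shrinks roughly as 1/sqrt(panel size) -- but only if the judgments are
# independent, which is exactly what an echo chamber undermines.
```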

 

{This is the idea behind the structure of Supreme Courts. Anecdotally, i've been involved in individual peer reviews that could (and sometimes did) significantly curtail the professional activities of those being reviewed, and i've found that independent reviews coalesced into a 'committee' tend to work best under these circumstances, partly to reduce noise.}

 

They contend that intuitions and hunches have to be controlled somehow but, even if conceptually attractive, this is often hard to achieve in real life, especially when human factors are critical, such as during a negotiating process, or when biases play a major role.

 

A promising area is the introduction of algorithms (potentially AI or AI-like) to be used alongside the decision-making process.
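
As a sketch of what ‘algorithm alongside judgment’ can mean in its simplest form (my own toy example with hypothetical scores, in the spirit of the simple ‘improper’ linear models discussed in the judgment literature): standardize a few agreed-upon criteria and average them into a mechanical score that anchors, rather than replaces, the human decision.

```python
import numpy as np

# Hypothetical candidates scored 1-5 on three agreed-upon criteria
# (e.g. relevant experience, track record, references).
scores = np.array([
    [4, 3, 5],
    [2, 5, 3],
    [5, 4, 4],
], dtype=float)

# Standardize each criterion, then take an unweighted average: a simple
# "improper" linear model used as an anchor alongside human judgment.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
mechanical_score = z.mean(axis=1)

for i, s in enumerate(mechanical_score, start=1):
    print(f"candidate {i}: mechanical score = {s:+.2f}")
```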

 

Whatever is attempted, ease of use and practicality are required for acceptance by market participants. Also, poorly designed guidelines or algorithms can be sources of additional noise (and bias).

 

-Additional personal and anecdotal comments of questionable significance

 

{When i started out in practice about 25 years ago, after a few weeks, i noticed that a very high proportion of the consultations referred to me were of questionable relevance, for various reasons, and this was supported by various (poorly designed) incentives. For many reasons, changing incentives was the fundamental thing to do, but that’s complicated and attempts are often met with great resistance. This resulted in a lot of noise and poor outcomes, mostly because of poor resource allocation (quantity, quality and timing). After a few relatively simple steps (and engaging with others with skin in the game), all consultations in my area (about 500k population) became essentially channeled into a centralized process with a single reception area, where consultation requests were assessed using simple and objective criteria and a simple, standardized form that reflected current standards of practice, with the possibility (not the default option) to go around the standard procedure. This simple process resulted in a significant reduction of noise and a marked improvement in timely resource allocation. Over the years, many similar projects followed using various protocols, for example in dealing with frequently seen clinical presentations. It became obvious that noise can be reduced and it typically ends up as a win-win for all (or at least most) involved.

 

Later on, i also became involved in the legal arena (independent expert reports, independent decisions) and it became obvious that noise was a significant problem. By using standardized criteria, education and training to reach ‘expert’-level admissibility, by standardizing the evaluation/decision process and by listing the requirements expected in formal reports, the noise level was significantly reduced. For example, it is increasingly recognized that expert reports, as well as decisions and case law, based on specific criteria (motivated by facts and reasoning, rules-based) carry more weight than work produced on the basis of vague principles or even impressions. A recent review completed in a specific segment (one that was felt to be problematic), simply verifying the basic features considered required in an ‘expert’ report, concluded that about a third of the reports had no value and another third had little value. Authors of such reports have been notified (reduce the noise or do something else).}

----- 

The authors show that, in some areas, including some critical areas, the noise level is excessive and has consequences. This can be improved. In some specific areas, decentralized hunches and superficial impressions do a lot more harm than good, and checks and balances are required in order to improve the decision-making process.

 

-----

**Clearly an optional section here, simply skip if your worldview is threatened by it. 

 

-Comments about the coronavirus episode

 

This is now considered a twilight-zone topic here, and some may have a point about its relevance to investments. That’s fine. The elephant in the room, though, is not the relevance or even the ‘political’ aspect; it’s that keeping online conversations constructive becomes challenging when the style is dominated by noisy and biased tribal thinking and by inappropriate components that destroy engagement.

 

AFAIK, the book was not intended to be applied to the Covid-19 issue and the main author has only superficially associated the two in related interviews. From limited information, it appears that Mr. Kahneman thinks that a lot of noise happened as a result of the pandemic but he seems to suggest that biases played a much more important role.

 

The noise played out mostly in relation to a growing diversity of alternative assessment styles and polarization of thought processes.

 

That's all i'm goin' to say about that.

 

For those interested, Mr. Michael Lewis has recently released an interesting book about the pandemic called The Premonition. The noise concept is not the primary theme, but the author, through a few ‘stories’, tries to explain the gaps between reputation, capacity and actual performance, and how talent can be wasted when leadership fails. Mr. Lewis also touches upon the possibility, related to growing noise in the public environment, that 2020-21 exposed a deeper developing fissure than simply the controversial place of ‘science’ in public life.

 

**End of optional section

-----

 

It’s interesting to compare this recent book to the work Mr. Kahneman produced a long time ago (1970s) on the value and risks of intuitions, and to see how he has evolved toward a refined definition of noise, distinct from biases. Overall, a good book if noise is your thing and if you have enjoyed Kahneman and Tversky’s previous works.

Posted (edited)

Thank you @Cigarbutt and @Gregmal for sharing. 

 

This is a fascinating multi-disciplinary look at the issue.  I haven't read the book yet, but have been thinking about this topic through the lens of multiple disciplines, including the ones you mentioned. I'm still synthesizing, but the discipline that has helped me the most so far in reaching a deeper understanding here is probably AI.

 

When trying to improve the accuracy & precision of decisions made by AI neural nets running on silicon-based computation, sometimes you go back and try to figure out what led to each incorrect decision.  Sometimes it might be noise or bias in your raw input, a lot of times it might be missing signals, and other times it might be an issue in one of the intermediate signals in your machine learning ensemble or one of the layers of the neural net. The interesting thing is that the issues end up being some of the same things @Cigarbutt mentioned above for human neural nets running on hydrocarbon-based computation.

 

One of the most fascinating moments for me was when the team was analyzing an incorrect decision made by the AI to figure out why it got it wrong, and then realized that the AI's decision was actually the right one and the human expert on the team had made the wrong decision and labeled it incorrectly.  The look on the human neural net's face was priceless :-).  He defended his decision up to a point, but finally went all red and said, "Humans can't get all decisions right, you know." 🙂  When silicon-based neural nets can start outperforming hydrocarbon-based neural nets for some decisions at such an early stage of AI, it really makes you realize that maybe life is broader than our hydrocarbon-focused view. 

Edited by LearningMachine
Posted (edited)

Human behaviour is not empirical science. Repeat the same experiment 100x and humans will generate a ‘range of answers’; empirical science will generate one. That ‘range of answers’ is 'noise' - and bias.

 

Engineers routinely correct for ‘signal’ noise by applying an electronically opposing background noise. The amplitudes cancel each other out, leaving the signal. Run the signal through an ‘optimizer’ to get an actionable result.
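
A toy illustration of the cancellation idea (my own sketch, with synthetic data): if you can measure the background noise, injecting its opposite removes it from the received signal; in practice the noise reference is only an estimate, so cancellation is imperfect.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)

signal = np.sin(2 * np.pi * 5 * t)           # the 5 Hz signal we actually care about
noise = 0.8 * rng.standard_normal(t.size)    # measured background noise
received = signal + noise

# Cancellation idea: inject the opposite of the measured noise so the amplitudes cancel.
cleaned = received - noise

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
print("rms error before cancellation:", round(rms(received - signal), 3))
print("rms error after cancellation: ", round(rms(cleaned - signal), 3))
```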

 

We do the same thing in ‘real life’ - when we get our news from opposing media organisations. Each media stream viewed as propaganda, opposing propagandas cancelling each other out – mostly leaving just the facts.  Apply ‘common sense’ to the stream of facts, to produce an actionable result.  

 

The very good (Taleb) recognize that the process is fragile. They ALSO apply an overlay by which to make the result anti-fragile, should the process fail. In ‘real life’ we call it risk management, ideally obtained via the school of hard knocks.

 

For most people, simply keep a circle of friends from as wide a range of different cultures, life experiences, occupations, etc. as possible. Common sense stays grounded, and you get to enjoy the best of life. You will also learn a thing or two!

 

SD

 

Edited by SharperDingaan
Posted (edited)
14 minutes ago, SharperDingaan said:

Human behaviour is not empirical science. Repeat the same experiment 100x and humans will generate a ‘range of answers’, empirical science will generate one. 

 

 

Different instances of the same reinforcement-learning-based AI model also generate a 'range of answers', just like different instances (humans) of the same/similar reinforcement-learning-based human neural net.

 

Even the same instance of a reinforcement-learning-based AI model will generate a 'range of answers' as it learns or accumulates bias, just like the same human neural net can generate a 'range of answers' as it learns or accumulates bias.

 

I believe that if you have enough computation power and signals, you can model human behavior.  What some of the tech companies do today, rewarding reinforcement-learning-based neural nets for predicting what human neural nets will do and getting them to do what the model wants, is a start at modeling human behavior.  Over time, I think human neural nets will be no match for silicon-based neural nets that have access to way more data and computation power.

Edited by LearningMachine
Posted (edited)

The empirical sciences are physics, chemistry, math, etc. The iron rule of 1 + 1 ALWAYS equals 2. An AI model is just a probability-driven decision tree. We are enamoured with AI because, for given probabilities and inputs, we will always get a unique result - hence it's empirical! Sadly ....... no.

 

As soon as you have more than one variable determining the result, they influence each other. In the portfolio world, this is the covar matrix used to create a portfolio sitting on the efficient frontier. You also learn very quickly that the covariances are NOT static - they change as the market prices of the stocks in the portfolio change. Hence, when prices are rapidly changing (up or down), the portfolio must be periodically rebalanced. Point? A 'range' of portfolio combinations, as the covar changes.
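
A minimal sketch of the covar point (made-up numbers, not a recommendation): the same weights give a different portfolio risk once the covariances drift, which is why the 'efficient' mix has to be revisited.

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])   # hypothetical portfolio weights

# Covariance matrix of three stocks' returns (made-up numbers).
cov_before = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])
# Same stocks after a market move: covariances have crept up.
cov_after = cov_before + 0.02 * (np.ones((3, 3)) - np.eye(3))

for label, cov in (("before", cov_before), ("after ", cov_after)):
    variance = w @ cov @ w      # portfolio variance = w' * Sigma * w
    print(f"{label}: portfolio volatility = {np.sqrt(variance):.1%}")

# Same weights, different covar matrix -> different risk: the 'efficient'
# portfolio is a moving target as the covariances drift.
```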

 

When there are too many variables, or stocks, all you can do is simulate and assess the distribution of predicted results. Thereafter, the value-add is in using distribution skewness to your advantage. AI calls that 'learning'; humans call it 'opportunity to manipulate!'
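
And a toy version of 'simulate and assess the distribution' (my own made-up return assumptions): simulate many portfolio outcomes, then look at the skewness of the result rather than just the mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_sims = 50, 100_000

# Made-up return model: ordinary idiosyncratic moves plus a rare market-wide
# shock that hits every stock at once -- the source of the fat left tail.
ordinary = rng.normal(0.06, 0.15, size=(n_sims, n_assets))
crash = rng.random(n_sims) < 0.03
market_shock = np.where(crash, rng.normal(-0.35, 0.10, n_sims), 0.0)
returns = ordinary + market_shock[:, None]

portfolio = returns.mean(axis=1)   # equal-weighted portfolio outcome per simulation

mean, sd = portfolio.mean(), portfolio.std()
skewness = np.mean((portfolio - mean) ** 3) / sd ** 3
print(f"mean {mean:.3f}, sd {sd:.3f}, skew {skewness:.2f}")   # strongly negative skew
```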

 

Different POV

 

SD

 

 

 

Edited by SharperDingaan
  • 2 weeks later...
Posted (edited)

@LongHaul

i'm happy to hear that. i understand that you show a healthy suspicion (disdain?) of experts.

i also saw this interview on CNN:

https://www.cnn.com/videos/tv/2021/05/21/amanpour-kahneman-noise.cnn

This is early Sunday morning and here are a few additional points (some investment- and noise-related, some not) which may interest you.

 

Recently (i got this through a Morningstar article), i spent about 5 minutes on a study (if it interests you, i could give you a link, but i wouldn't bother since the study has significant limitations) which is relevant to the link between noise and investment decisions. The study was done in Taiwan (where retail investing is very significant) and the authors were able to show an increased level of buying activity in momentum stocks (through local brokerage houses) in the days following a specific event: the public announcement of a local lottery win. It's possible that "feeling lucky" makes one feel like a real winner. This seems like a slippery and noisy slope.

 

Some time ago, you mentioned a book which i had read before and which i did not find particularly useful, but the topic is fascinating: Fear. And there may be a link to noise. Fear can be noisy (and ridiculed) but it's an emotion wired deep within our brain that has contributed to the survival of the species. For somebody heavily 'biased' toward rational thinking, how do you decrease noise but preserve the basic fear instinct that can be fundamentally useful (even for survival, financial or otherwise)?

When i was in training in the 80s, a few weeks of psychiatry in the real world were required. As you may know, psychiatry is a very noisy specialty, a weakness which has been very partially addressed by devising a series of committee-type "DSM" diagnostic manuals with lists of specific criteria. The real difficulty is that the underlying criteria (fatigue etc.) are very subjective. By combining very specific criteria into a constellation of symptoms, the noise is reduced, at least partly. The main lesson that i got from this experience was how to integrate fear into the algorithm for a specific situation. There was a rule that, when alone with a psychiatry patient (psychiatry patients can be, at times, dangerous), one had to sit between the door and the patient (to be able to flee or call for help in the event of a threat).

There's been some work to quantify fear or to find alternative and more objective measures of fear (including AI stuff) but this is an area where fundamental human intuition cannot be replaced. The best, by far, indicator of fear (as a threat to one's security or survival) is an intrinsic feeling that is difficult to describe but that is also unmistakable (if you're appropriately alert to it). When deeply felt, at least keep a clear path to the door (or the exit). It's the same with the fear and greed thing in capital markets that Mr. Buffett describes.

Edit (final comment):

This is an area where you may want to accept a higher relative rate of false positives (margin of safety) but, obviously, that's an individual responsibility and the decision outcome may be correlated with the level of one's predisposition to "feeling lucky".

Edited by Cigarbutt
Posted (edited)

I read this a few weeks ago and my only criticism is that it shouldn't have been a book, it could have been an article.  If you read Cigarbutt's summary then you have 90% of the information you would have gotten by reading the book.  It just went on way too long saying the same things over and over in different ways with different examples.

 

Edited by rkbabang
  • 1 month later...
Posted

After reading a couple dozen books, including this one, on, let's say, programmed behavior, it's amazing how little control I have over what I do except in very specific areas and only for moments of time. I've fought this notion for a decade... fighting against the nihilism that's resulted from this awakening. It took years, but at this point it's clear that it doesn't change anything. It doesn't change the howness (read The Master and His Emissary, another favorite) of life. Understanding this howness vs whatness has resulted in me truly understanding my circle of competence, or let's say it led to a step-change function at least.

 

Anyone who likes these sorts of books should read Kegan's In Over Our Heads and McGilchrist's The Master and His Emissary as companions.

  • 2 years later...
  • 2 months later...
Posted (edited)

Re-listening to my audiobook again.

 

Kahneman separates decision-making errors into bias (systematic deviations) and system noise. System noise is split into level noise (differences between people) and pattern noise (variation within people). Pattern noise is split into stable pattern noise (internal variance that results from an individual's past experiences and personality and their interactions with specific situations) and occasion noise (random events that affect decisions). 

 

Interestingly, in DK's studies, he found that system noise is a larger component of decision error than bias; pattern noise accounts for more of system noise than level noise does (50-80% of system noise); and finally stable pattern noise is often greater than level noise, which is greater than occasion noise (stable pattern noise was about 4x larger in a study of model judges). 
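
To make the decomposition concrete, here is a minimal sketch (my own toy numbers, single occasion per judge, so occasion noise is folded into the pattern term): with a judges-by-cases matrix, level noise is the spread of the judges' average levels, pattern noise is the judge-by-case interaction left over, and their squares add up to squared system noise.

```python
import numpy as np

# Hypothetical noise audit: rows = judges, columns = cases, one occasion each.
j = np.array([
    [6.0, 3.0, 8.0, 5.0],
    [7.5, 4.0, 7.0, 6.5],
    [5.0, 2.5, 9.0, 4.0],
])

grand = j.mean()
judge_level = j.mean(axis=1)     # each judge's overall severity / optimism
case_mean = j.mean(axis=0)       # the consensus view of each case

# Level noise: how much the judges differ in their overall level.
level_noise_var = judge_level.var()

# Pattern noise: the judge-by-case interaction left after removing levels and
# case means (with a single occasion per judge, occasion noise is folded in here).
residual = j - judge_level[:, None] - case_mean[None, :] + grand
pattern_noise_var = residual.var()

system_noise_var = (j - case_mean[None, :]).var()
print(f"system noise^2  = {system_noise_var:.2f}")
print(f"level noise^2   = {level_noise_var:.2f}")
print(f"pattern noise^2 = {pattern_noise_var:.2f}")
# Check: system noise^2 = level noise^2 + pattern noise^2 (it does, exactly).
```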

 

Level noise can be addressed with, e.g., guidelines and rules. But stable pattern noise is much more challenging to address because it is due to people's uniqueness in judgment abilities. 

 

This got me thinking about the overall effectiveness of the market's price-aggregation mechanism over the long term, especially in light of indexing, the gamification of investing (in effect lowering the barrier to entry for non-expert investors) and the deluge of information. I can see how the search bubble and the algorithmic feeding of information based on past searches can super-charge homogeneity of thinking and in turn bias market participants. But what about noise, especially stable pattern noise, since it makes up so much of the error in decisions? 

 

With fewer and fewer people doing primary work (e.g. indexing), and more and more retail non-experts participating (more level noise, more pattern noise, momentum buying, short-term trading), the implication seems to be that the quality of judgment has declined. What are the implications here? I would think that there should be more volatility (more violent ups and downs, coupled with longer-than-average periods of over- and under-valuation of markets relative to history). 

 

This, coupled with Gigerenzer's concept of wicked environments and Max Bennett's (A Brief History of Intelligence) discussion of how humans learn (the neocortex's ability to simulate one's own and other minds, and future needs; humanity's ability to communicate ideas from generation to generation; the propensity for tribalism as a form of punishing and rewarding those who don't conform to consensus values to maximize survival of the species, an evolutionary teaching mechanism), makes me think that successful investing at the individual level will require increasingly more patience (slow down decision-making, more incrementalism in building positions) and persistent efforts to learn and understand the few deep factors (in the businesses) that on the balance of probabilities will favor survivability and resilience (thereby maximizing time arbitrage, i.e. target very deep value or very high quality, less in-between), and that the two most important analytical skills are the ability to change your mind with changing data and the ability to filter out useless information. 

 

Perhaps the other implication is that the old adage, that time in the market is more important than timing the market, is less applicable today. Perhaps sitting in cash and taking time to really understand what you are buying is more important now than before. 

Edited by jfan
