Into-the-box thinking

Something I’ve learned doing my MBA is that it is possible for someone to be very intelligent, well-credentialed, and clearly a subject-matter expert, yet show a startling lack of intellectual curiosity or talent for critical thinking.

If someone presents themselves as an expert on a subject and I don’t know them, I try to gauge how far I can trust what they’re telling me. I do this by asking them a difficult question or challenging something they’ve said. The way they respond tells you an awful lot about what that person can really teach you. I used to do this with technical experts in my previous job, and most of the time they’d fall over themselves to explain their point in considerable detail. Thanks to one electrical engineer, I now know rather more than I used to about variable frequency drives. Opposite the electrical engineer sat a naval architect who I’d often pop in and see just because he’d start talking about some aspect of his work which I’d find interesting. Other times, particularly with managers but rarely with engineers, the response would be an instant dismissal based on the first thing which popped into their head. They most likely do this because they’re incompetent; they get away with it because of their position in the hierarchy.

But I’ve discovered even knowledgeable people like professors can respond this way too. My theory is that it’s possible to become very successful in a given field by applying the prevailing orthodoxy and doing exactly as everyone expects without the slightest deviation. In this way you can become very competent in your chosen subject – until someone chucks a curveball at you and it becomes clear you’ve had no practice in dealing with dissenting opinions. Some professors clearly like their views to be challenged or a strange idea thrown at them. “Okay, let’s look at this,” one might say, and a discussion ensues. Or another will say: “Ah, no. This is why you’re wrong. What you need to consider is…” But others don’t seem to like it at all, and on occasion it’s obvious they’re hearing common objections to an orthodox position for the first time.

Sadly, I think this is the future of education and expertise. Very bright people will be channeled into narrowly focused areas of expertise and discouraged from ever thinking for themselves outside the boundaries set by those who control the subject. A simple test of this theory is to listen to an expert in one field talk about another. More often than not it’s incoherent, emotionally driven gibberish reminiscent of a protest organised by high-schoolers. I suspect the root of the problem lies partly in the pervasive culture of credentialism. If the certificate didn’t matter, there’d be no point attending a university or business school to get from a lecturer what you could easily learn by reading a book and doing some exercises. The added value a lecturer brings is the ability to go beyond the orthodoxy, stimulate discussion, push the boundaries a little, explore ideas, and get some real-world experience thrown into the mix. There were one or two classes where I’d have happily paid just to hear the professor speak, because he had some fascinating insights into the world of business and management you’d never find in a textbook. But if the certificate is what matters most, lectures will turn into sessions where a professor simply regurgitates whatever you can find online or in a book.

The trouble with me – and there is always trouble with me – I go to school to learn, not to get a certificate. I also have one eye keenly trained on what I paid.


28 thoughts on “Into-the-box thinking”

  1. “A simple test of this theory is to listen to an expert in one field talk about another.”

    This. In spades.

    However, my diagnosis of the problem might be more fundamental: it’s a lack of principles, and of building upon that foundation. If all you are doing is parroting orthodoxy, you have nothing you can apply outside that narrow field.

    If, however, you have the fundamental underpinnings of the subject and you have been taught/had experience applying them to the real world and unknown situations, then new situations do not present an existential problem. You look at it and go, “well, I don’t know about this, but maybe we can model it as x.” Perhaps it might generate a working approximation, not perfect, but good enough.

    The next really interesting test is what happens if your initial attempts do NOT provide a working approximation. It is a marker of confidence and maturity to admit that this is not your area of expertise and perhaps you would be wise to consult someone in that field. Even better if you can say “this is the tricky unsolved problem to which I do not have the answer”.

    This lack of principles is why you can get the comedy of people feeding students quotes about immigration and getting them to go off the deep end about how Trump is horrible, before they are told that the quote was actually Obama in 2009 or HRC in 2016…

  2. “The trouble with me – and there is always trouble with me – I go to school to learn, not to get a certificate. I also have one eye keenly trained on what I paid.”

    More and more school isn’t about learning. It’s about getting the certificate.

    If I ever somehow write a screenplay, I won’t go to film school to learn how to make it. Why would I spend £30K on being taught by people for a few hours each day who have never worked in film when there’s a ton of books on the theory and I can get a video course for £100 from Scorsese, Ron Howard or Werner Herzog, people who are masters at the craft?

  3. What Hector just said. The trouble with me is I’ve been having the same issues as Tim, but since about 1994. But for me it’s even worse. Even the experts within their fields cannot tolerate real thinking. I had a statistics professor in college who did not understand the Monty Hall problem. His dogged insistence that “luck has no memory” overrode his ability to understand the problem. As a student of 19 or 20 I trusted what he was teaching, but I recall other students arguing with him for quite some time after class regarding this. It was something I had mostly forgotten about until about 1994, when I read a book by Marilyn vos Savant in which she discussed the blowback she had gotten from one of her syndicated newspaper columns discussing this problem. She published an exchange that she had had with a college professor about it who insisted that she was wrong. That college professor was the same one I had 10 years earlier. And this was math. Something with an objective answer. Imagine how screwed up the other professional credentialing systems can get by simply making copies of copies of copies of ideas. AGW comes to mind.

  4. Another benefit of a good teacher, over and above books and exercises, is that they can quickly free you when you get stuck. Saves a lot of time.

  5. A theory about this type of thinking, particularly with regard to academia (I’ve seen this expounded elsewhere, for what that’s worth); Hector might want to opine as to whether it’s bollocks or not:

    Historically, academia was founded in empiricism (once we got the Churches out of the way). That is, given a set of bridges that were still standing, academics tried to discover why they were still up, and why others had fallen down. The result of this is that they would develop theories about how to build a bridge, which would include how not to build a bridge. Consequently, academic theory would lag real-world practice, but applying a confirmed (by evidence) theory would have lower risk when building a bridge, particularly the long-tail risk (ie, hoped to be rare, but horrifically expensive) of a collapse (which includes the social cost of bad poetry). Now, armed with a theory of what does not work, yer actual academic can start to extend the theory of what does work. So a successful academic has various forms of prestige and power, both within the internal academic world and in the external “real” world, and this is shared by “academics” generally.

    There’s a bit of a problem here; anyone entering academia can assume that prestige and power simply by association. Two forms of gatekeeping should mitigate this problem: peer-reviewed research and replicability.

    The thing is, as “academics” begin to research areas with very little, or conflicting, evidence, both those methods begin to fail. The peer-review (and certification) process should discriminate against schoolboy errors, but will be under pressure from the power structures within academia. The lack of evidence means that “what not to do” is entirely missing from any useful theory that might have applications.

    The end result is that those certified, with a bit of research under their belts, start to agitate for experiments to be carried out in order to confirm their theories, from the assumed position of power and prestige.

    The bastard problem being that the risk, including long-tail, of any experiment is now completely arse about face; the chances of it going completely tits-up and the costs are utterly unknown. Of course, if you do manage to get your theory confirmed, your prestige and power go completely through the roof. Academics look suspiciously like priests, devoted to defending the faith at any cost.

  6. Ducky

    “Historically, academia was founded in empiricism (once we got the Churches out of the way)”

    William of Ockham might have something to say about that.

  7. Monty Hall problem

    If you don’t like the math, just make a Monte Carlo simulation in Excel and see which strategy outperforms. Takes 10 minutes, which is less time than arguing about it.
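
    For anyone who’d rather not fire up Excel, here’s a minimal sketch of that simulation in Python (my language choice, not the commenter’s; the door count is a parameter, which also covers the many-door variants discussed further down):

```python
import random

def play(n_doors=3, switch=True, trials=100_000):
    """Estimate the win rate for the stay/switch strategies by Monte Carlo."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)   # prize placed uniformly at random
        pick = random.randrange(n_doors)  # contestant's initial choice
        # The host, who knows where the car is, opens every other door
        # except one, always avoiding the car. That leaves the pick and
        # one closed door: the car's door if the pick was wrong, or an
        # arbitrary empty door if the pick was right.
        if switch:
            final = car if pick != car else next(
                d for d in range(n_doors) if d != pick)
        else:
            final = pick
        wins += (final == car)
    return wins / trials

print(play(switch=True))    # ~0.667 with three doors
print(play(switch=False))   # ~0.333
```

    Run it and switching wins about two-thirds of the time; with play(n_doors=100, switch=True) it climbs to around 0.99.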

  8. If you don’t like the math, just make a Monte Carlo simulation in Excel and see which strategy outperforms. Takes 10 minutes, which is less time than arguing about it.

    Well, exactly. And that’s how Ms. vos Savant addressed it. But some academics cannot be convinced that they are wrong. What I learned from that is that whenever possible, don’t argue, bet. Construct a parallel experiment using a finite, obtainable problem space and then challenge the bloviator to bet on it. As I began using this strategy, people stopped arguing with me about such things. Oh, they come up with excuses; the most popular amongst such people is “I don’t gamble”. I’ve often suspected this is why religions are so opposed to gambling. Of course there’s the damage that it does to society, but I’m not sure many such folks really understand why that is. I sense amongst some a fear of facing reality.

  9. Recusant – he well might, but he’s dead, so I’ll never know. Anyway, he studied Theology, which, umm, was the domain of the Church(es).

    I was really getting at the whole “And yet it moves” thing, about 300 years later, and the Enlightenment.

  10. What I learned from that is that whenever possible don’t argue, bet.

    I’ve short-circuited any number of bloviating twerps with exactly this. People seem unwilling to waste my time when it may cost them 20 quid.

    Although it’s not working so well of late because word has got around (“Daniel has a math degree. He doesn’t gamble. If he’s asking you to bet, it’s because he already knows the answer. He’s taking your money.”)

    he studied Theology, which, umm, was the domain of the Church(es).

    Your understanding of the origins of academia in the West is…lacking. And a nice example of the phenomenon Tim’s talking about.

  11. Well, would someone care to show me where I’m wrong? Perfectly willing to accept advice.

    (Come to think of it, “Hector might want to opine as to whether it’s bollocks or not” was in the post)

  12. Although it’s not working so well of late because word has got around (“Daniel has a math degree. He doesn’t gamble. If he’s asking you to bet, it’s because he already knows the answer. He’s taking your money.”)

    EXACTLY my experience. Except for the part about having a math degree. Though sometimes additionally accompanied by eye rolls from the more obtuse ones.

  13. Ducky, it goes both ways. The French rationalist school would go with the theory and say that the Monte Carlo simulations were just wrong and not representative of the world as we construct it. British empiricists go with the simulations and try to describe why common sense doesn’t work.

    2 totally opposed viewpoints. But I must say that the laws of probability are beyond my understanding. No one can tell me that the Monty Hall results make sense…. And yet it moves

  14. The trouble with me – and there is always trouble with me – I go to school to learn, not to get a certificate. I also have one eye keenly trained on what I paid.

    Me too – as you know

  15. Ducky, what you put forward was generally too vague to really attribute any definite truth value to, and academia is a big place these days with many fields, which makes it even harder to put forward anything too overarching.

    >Historically, academia was founded in empiricism (once we got the Churches out of the way).

    Not really. And you can’t really say ‘Once we got the churches out of the way’ because that’s removing most of the foundations. But generally academia became a lot more empiricist in the first half of the twentieth century. Before that, a lot less so.

    Entrenched ‘power structures’ within universities: they’re still a problem, but it’s not a cut-and-dried matter; it’s very complex and slippery. Even more so than I suggested in my novel.

  16. @ Diogenes “But I must say that the laws of probability are beyond my understanding. No one can tell me that the Monty Hall results make sense…. And yet it moves”
    Bayesian probabilities I think: the difference between the probability before any box is opened and the revised probability given the new information that the opened box is not a winning box.

  17. >I had a statistics professor in college who did not understand the Monty Hall problem. His dogged insistence that “luck has no memory” overrode his ability to understand the problem… And this was math. Something with an objective answer.

    In a way, it’s philosophy (and logic) as much as math. I’ve found that some mathematicians don’t properly understand probability. They can do the probability maths fine, but they don’t appreciate what probability is, and that can lead them astray. In particular, they don’t understand that probability is always relative to a set of information (or to a set of premises, to put it as logic). That means that p can have probability x relative to one set of premises, but probability y relative to another set of premises. New information comes in, the probability of p can change. There’s no probability of p simpliciter; it’s always relative to some evidence.

    Moreover, there’s not always a clear-cut probability of p relative to x. In some contexts, like dice-throwing, there is, but in others, like horse-racing, there isn’t. Some mathematicians try to evade this by saying that we’re talking about completely different sorts of things here, but we’re not.

    The Monty Hall problem: that was one that always went down well with my students. One way of looking at it was just to say that when you make your initial choice — say you chose door no. 1 — obviously that has a one-third chance of being the winning door (or box). And that means that the odds of the prize being behind one of the other doors, 2 or 3, is two-thirds. Those odds *don’t* change when the producer deliberately opens one of the other doors that he knows has no prize behind it. Let’s say he opens 2. We know that at least one of 2 and 3 will not have a prize behind it, and we know that the producer will always choose one that doesn’t have a prize, so the fact that an empty door is opened doesn’t change the fact that there is a two-thirds chance of the prize being behind door 2 or 3, rather than the door you chose, 1.

    But what has changed is that we now know that it can’t be 2, so the two-thirds chance must belong to door no. 3. So it always makes sense to switch, because you’re switching to a door with a higher probability.

    What leads some people astray is that they’re conflating this situation with one where the producer opens one of the other two doors at random, and it turns out by luck to be empty. In that case the situation has become 50-50.

    It’s also easier to see if you increase the numbers. Suppose there are 100 doors, and only one has the prize. You choose a door, say no. 23. That’s got a 1/100 chance of being right, and there’s a 99/100 chance of the prize being behind one of the other 99 doors. If the producer opens the 98 non-prize-winning doors out of those 99, then it should be obvious that there’s a 99% chance that the prize is behind the last of those 99 doors, and only a 1% chance of it being behind the door you chose. Clearly, in that situation, you should switch.
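
    In symbols (my generalisation of that argument, assuming the producer knows where the prize is and always opens n − 2 empty doors):

    \[
    P(\text{stay wins}) = \frac{1}{n}, \qquad P(\text{switch wins}) = \frac{n-1}{n},
    \]

    so with n = 100 switching wins 99% of the time, and even at n = 3 it doubles your chances.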

  18. It’s also easier to see if you increase the numbers. Suppose there are 100 doors, and only one has the prize

    Exactly, Hector. My betting strategy with those who argue this is to ask for a deck of 52 cards and a joker. You need a third person to actually do this, but describing it turns lights on in people’s heads. Have your mark pick which card he thinks is the joker. Then turn over all the non-joker cards until only two remain face down (his pick and one other), and offer, say, a 5:1 payoff if he picked the right one. After all, according to his belief it’s a 50/50 shot at the end, so 5:1 has high profit potential. Hopefully I explained that right…
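
    To put rough numbers on that bet (my arithmetic, assuming the 52-cards-plus-joker setup above and a one-unit stake): the mark’s original pick is the joker with probability 1/53, not 1/2, so his expected profit per unit staked is

    \[
    E[\text{profit}] = 5\cdot\tfrac{1}{53} - 1\cdot\tfrac{52}{53} = -\tfrac{47}{53} \approx -0.89.
    \]

    Generous-looking odds, terrible bet.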

    But don’t get me started on how Bayesian analysis is taught. And I’m not a math major.

  19. It’s all to do with randomness.

    The Monty Hall paradox was explained to me in a business math(s) course at a US university like this: because the producer’s choice is non-random, the odds stay the same. As Hector Drummond’s explanation implies, where randomness is introduced (ie where the producer selects a box in ignorance of whether or not the box he selects holds the prize), the odds shift to reflect that randomness.

  20. Three varieties of ‘probability’/statistical likelihood—
    a. The mechanism and all possible outcomes are known: eg throwing a (true) die or picking a card from a pack.
    b. The result in any particular case is not certain but given enough instances the likelihood of every possible result can be calculated: eg life expectancy.
    c. There is a ‘true value’ to be determined but its measurement is uncertain: eg a Gaussian distribution.

  21. Academics have powerful incentives to appear clever. Otherwise there is no justification for their existence.
    Confessing ignorance does not make the confessor look clever, so this is avoided more than is honest.
    Stating the obvious does not give the appearance of cleverness, so counter-intuitive theories are preferred, even though the obvious answer is more likely to be useful.
    Use of plain English does not make them look clever, whereas use of jargon and complicated language structure does so there is more of that than is useful – with the added benefit that it makes errors harder to detect.
    Pointing out the errors of fellow academics makes you unpopular, may lose you promotions, and brings the whole profession into disrepute, so there is less of this than there should be.

  22. Actually, the power to express complex ideas in simple words and concepts makes you look incredibly smart. See Jordan Peterson. But you have to be actually smart to make the difficult look easy.

    Using big words doesn’t make you look smart, and everyone knows that.

    What it does do is make it very difficult to counter your argument. Because you have to not only follow the verbiage, you have to translate it so the others in the argument agree. Which is almost always more work than it’s worth.

    But if you want to actually persuade people, fancy language is not your friend.

  23. Some people find the Monty Hall problem confusing because they don’t necessarily think of it as an exercise in probabilities, and even if they do, they feel some critically important information is missing. In particular, Monty’s logic is unclear. Is he following a fixed algorithm? If yes, what is it? Or is he playing a game with you, seeking to keep the car for his employer? Is it a repeated game, since Monty has played it thousands of times with other players?

    It’s not obvious that the probabilistic approach is the only sensible one but let’s focus on it right now.

    One convenient formalization postulates that the doors are all equally likely to win; that Monty knows where the car is; and that, when necessary, he chooses between two empty doors at random, on a 50-50 basis. It shouldn’t be hard to solve (and/or simulate) this problem, although the solution may seem counterintuitive (legend has it that Erdős himself was only convinced by a Monte Carlo simulation).

    But the trick here – in this particular setup – is that the unconditional probability of winning by switching and the probability of winning by switching conditional on Monty’s opening one of the remaining two doors both happen to be equal to 2/3. For the former, the logic is simple: in the end, there are only two doors to choose from and staying has a 1/3 chance of winning, so switching has 2/3. Actually, you don’t have to assume anything about Monty’s behavior to get this result – these assumptions are superfluous.

    For the conditional probabilities, use Bayes’ theorem. It should yield the same 2/3 if Monty chooses between two empty doors by tossing a fair coin. There’s no improvement, ironically, on the prior probability. However, suppose the doors are numbered and you know that Monty always prefers Door N to Door M when N>M. If you pick Door 1 and he opens Door 2, the car is behind Door 3 for sure. If he opens Door 3, you’re indifferent between staying and switching. Still, switching doesn’t make you worse off.
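
    Written out (a sketch of that Bayes step, under the fair-coin tie-break assumption above): say you pick Door 1 and Monty opens Door 2. Then

    \[
    P(\text{opens 2}\mid\text{car at 1}) = \tfrac{1}{2},\qquad
    P(\text{opens 2}\mid\text{car at 2}) = 0,\qquad
    P(\text{opens 2}\mid\text{car at 3}) = 1,
    \]

    \[
    P(\text{car at 3}\mid\text{opens 2})
    = \frac{\tfrac{1}{3}\cdot 1}{\tfrac{1}{3}\cdot\tfrac{1}{2}+\tfrac{1}{3}\cdot 0+\tfrac{1}{3}\cdot 1}
    = \tfrac{2}{3},
    \]

    exactly the prior 2/3, which is the “no improvement” point above.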

    The big question is, which probability should we look for, unconditional or conditional? There’s a 2010 article discussing this by Richard Gill of Leiden University, The One and Only True Monty Hall Paradox. He believes that using unconditional probability is the better probabilistic approximation, but prefers a game-theoretical approach, where choosing at random and then switching turns out to be the optimal strategy. He also says this:

    My mother, who was one of Turing’s computers at Bletchley Park during the war, but who had had almost no schooling and in particular never learnt any mathematics, is the only person I know who immediately said: switch, by immediate intuitive consideration of the 100-door variant of the problem.

  24. “There’s no improvement, ironically, on the prior probability. ”

    Not so much ironic as insightful, or (retrospectively) intuitive, I think – in a way it’s not surprising the “information” Hall provides you, in this variant of the problem, is in fact entirely uninformative about the one probability that matters. He was always going to open another door, regardless of whether you picked the correct door or not, and at the point Hall picks between the two remaining doors they’re essentially symmetric.

    Lovely post, by the way!

  25. @MyBurningEars: Agreed – thanks for the comment. Two more cents from me: Leonard Mlodinow writes, in The Drunkard’s Walk, that “those who found themselves in the situation described in the problem and switched their choice won about twice as often as those who did not.” The show ran about 4,500 times over 27 years so it seems a pretty good sample.
