When every answer and its opposite appears equally obvious, then, as Lazarsfeld put it, “something is wrong with the entire argument of ‘obviousness.’”
Whether it’s learning to fit in at a new school, or learning the ropes in a new job, or learning to live in a foreign country, we’ve all had to learn to negotiate new environments that at first seem strange and intimidating and filled with rules that we don’t understand but eventually become familiar. Very often the formal rules—the ones that are written down—are less important than the informal rules, which, just like the rule about subway seats, may not even be articulated until we break them. Conversely, rules that we do know about may not be enforced, or may be enforced only sometimes depending on some other rule that we don’t know about. When you think about how complex these games of life can be, it seems kind of amazing that we’re capable of playing them at all. Yet in the way that young children learn a new language seemingly by osmosis, we learn to navigate even the most novel social environments more or less without even knowing that we’re doing it.
But as a number of management scholars have shown in recent years, corporate plans—whether strategic bets, mergers and acquisitions, or marketing campaigns—also fail frequently, and for much the same reasons that government plans do.22 In all these cases, that is, a small number of people sitting in conference rooms are using their own commonsense intuition to predict, manage, or manipulate the behavior of thousands or millions of distant and diverse people whose motivations and circumstances are very different from their own.23
Continuing this litany of irrationality, psychologists have found that human judgments are often affected by the ease with which different kinds of information can be accessed or recalled. People generally overestimate the likelihood of dying in a terrorist attack on a plane relative to dying on a plane from any cause, even though the former is strictly less likely than the latter, simply because terrorist attacks are such vivid events. Paradoxically, people rate themselves as less assertive when they are asked to recall instances where they have acted assertively—not because the information contradicts their beliefs, but rather because of the effort required to recall it. They also systematically remember their own past behavior and beliefs to be more similar to their current behavior and beliefs than they really were. And they are more likely to believe a written statement if the font is easy to read, or if they have read it before—even if the last time they read it, it was explicitly labeled as false.14
All our participants were recruited from a website called Amazon’s Mechanical Turk, which Amazon launched in 2005 as a way to identify duplicate listings among its own inventory. Nowadays, Mechanical Turk is used by hundreds of businesses looking to “crowd-source” a wide range of tasks, from labeling objects in an image to characterizing the sentiment of a newspaper article or deciding which of two explanations is clearer. However, it is also an extremely effective way to recruit subjects for psychology experiments.
We also found that for any given pay rate, workers who were assigned “easy” tasks—like sorting sets of two images—completed more tasks than workers assigned medium or hard tasks (three and four images per set respectively). All of this, in other words, is consistent with common sense. But then the kicker: in spite of these differences, we found that the quality of their work—meaning the accuracy with which they sorted images—did not change with pay level at all.
In other words, no matter what they were actually paid—and remember that some of them were getting paid ten times as much as others—everyone thought they had been underpaid. What this finding suggested to us is that even for very simple tasks, the extra motivation to perform that we intuitively expect workers to experience with increased financial incentives is largely undermined by their increased sense of entitlement.
Cumulative advantage models have disruptive implications for the kinds of explanations that we give of success and failure in cultural markets. Commonsense explanations, remember, focus on the thing itself—the song, the book, or the company—and account for its success solely in terms of its intrinsic attributes. If we were to imagine history being somehow “rerun” many times, therefore, explanations in which intrinsic attributes were the only things that mattered would predict that the same outcome would pertain every time. By contrast, cumulative advantage would predict that even identical universes, starting out with the same set of people and objects and tastes, would nevertheless generate different cultural or marketplace winners.
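The “rerun history” thought experiment is easy to make concrete. The sketch below is a minimal Pólya-urn-style simulation (my own illustrative assumption, not the book's actual model; the song count, download count, and seeds are arbitrary): all songs start identical, each new listener picks a song with probability proportional to its current download count, and each rerun of history typically crowns a different winner.

```python
import random

def run_market(n_songs=10, n_downloads=5000, seed=None):
    """One 'rerun of history': a Polya-urn-style cumulative-advantage market.

    Every song starts with one download (identical intrinsic appeal).
    Each new listener picks a song with probability proportional to its
    current download count, so small early leads snowball.
    """
    rng = random.Random(seed)
    counts = [1] * n_songs
    for _ in range(n_downloads):
        # Weighted choice: popularity breeds popularity.
        pick = rng.choices(range(n_songs), weights=counts)[0]
        counts[pick] += 1
    return counts

# "Rerun history" twenty times from identical starting conditions
# and record which song wins each time.
winners = []
for rerun in range(20):
    counts = run_market(seed=rerun)
    winners.append(counts.index(max(counts)))
```

Because every song is intrinsically identical, an attributes-only explanation would predict the same winner every rerun; instead, different seeds (different accidents of early history) produce different winners.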
Employees, for example, may well influence their bosses as much as their bosses influence them, but they are not equally likely to name each other as sources of influence—simply because bosses are supposed to be influential, whereas employees are not. In other words, our perceptions of who influences us may say more about social and hierarchical relations than influence per se.
As it turned out, the most important condition had nothing to do with a few highly influential individuals at all. Rather, it depended on the existence of a critical mass of easily influenced people who influence other easy-to-influence people.
What we concluded, therefore, is that the kind of influential person whose energy and connections can turn your book into a bestseller or your product into a hit is most likely an accident of timing and circumstances. An “accidental influential” as it were.22
Of 74 million events in our data, only a few dozen generated even a thousand retweets, and only one or two got to ten thousand. In a network of tens of millions of users, ten thousand retweets doesn’t seem like that big a number, but what our data showed is that even that is almost impossible to achieve. For practical purposes, therefore, it may be better to forget about the large cascades altogether and instead try to generate lots of small ones. And for that purpose, ordinary influencers may work just fine. They don’t accomplish anything dramatic, so you may need a lot of them, but in harnessing many such individuals, you can also average out much of the randomness, generating a consistently positive effect.
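The averaging-out claim is just the statistics of pooling independent noisy outcomes. The toy branching-process sketch below (an illustrative assumption on my part, with made-up parameters `p`, `fanout`, and `max_hops`; it is not the study's actual cascade model) shows that while any single seeded cascade is wildly variable, the total reach of fifty ordinary seeds has a much smaller relative spread:

```python
import random
import statistics

def cascade_size(rng, p=0.3, fanout=3, max_hops=4):
    """Size of one toy 'retweet cascade': each exposed user shows the
    message to `fanout` followers, each of whom reshares with
    probability p, for at most max_hops hops."""
    size, frontier = 1, 1
    for _ in range(max_hops):
        frontier = sum(1 for _ in range(frontier * fanout) if rng.random() < p)
        size += frontier
        if frontier == 0:
            break
    return size

rng = random.Random(42)
# Outcomes of betting on one seed at a time vs. pooling 50 ordinary seeds.
single = [cascade_size(rng) for _ in range(2000)]
pooled = [sum(cascade_size(rng) for _ in range(50)) for _ in range(2000)]

# Coefficient of variation (spread relative to the mean): pooling 50
# independent seeds shrinks it by roughly a factor of sqrt(50).
cv_single = statistics.stdev(single) / statistics.mean(single)
cv_pooled = statistics.stdev(pooled) / statistics.mean(pooled)
```

The point of the sketch is only the comparison: `cv_pooled` comes out far below `cv_single`, which is why many small cascades can deliver a consistently positive effect even though no single one is dramatic.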
In other words, although it is tempting to attribute the outcome to a single special person, we should remember that the temptation arises simply because this is how we’d like the world to work, not because that is how it actually works.
Common sense and history conspire to generate the illusion of cause and effect where none exists. On the one hand, common sense excels in generating plausible causes, whether special people, or special attributes, or special circumstances. And on the other hand, history obligingly discards most of the evidence, leaving only a single thread of events to explain. Commonsense explanations therefore seem to tell us why something happened when in fact all they’re doing is describing what happened.
At no point in time is the story ever really “over.” Something always happens afterward, and what happens afterward is liable to change our perception of the current outcome, as well as our perception of the outcomes that we have already explained.
Psychologists have found that simpler explanations are judged more likely to be true than complex explanations, not because simpler explanations actually explain more, but rather just because they are simpler.
The real problem of prediction, in other words, is not that we are universally good or bad at it, but rather that we are bad at distinguishing predictions that we can make reliably from those that we can’t.
Whenever people get together—in social gatherings, sports crowds, business firms, volunteer organizations, markets, political parties, or even entire societies—they affect one another’s thinking and behavior.
Put another way, there is a difference between being uncertain about the future and the future itself being uncertain. The former is really just a lack of information—something we don’t know—whereas the latter implies that the information is, in principle, unknowable. The former is the orderly universe of Laplace’s demon, where if we just try hard enough, if we’re just smart enough, we can predict the future. The latter is an essentially random world, where the best we can ever hope for is to express our predictions of various outcomes as probabilities.
As I mentioned at the beginning of the previous chapter, keeping track of our predictions is not something that comes naturally to us: We make lots of predictions, but rarely check back to see how often we got them right. But keeping track of performance is possibly the most important activity of all—because only then can you learn how accurately it is possible to predict, and therefore how much weight you should put on the predictions you make.12
This is the strategy paradox. The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution—not the stuff of success for sure, but more likely to lead to persistent mediocrity than colossal failure. Great strategy, by contrast, is marked by clarity of vision, bold leadership, and laser-focused execution. When applied to just the right set of commitments, great strategy can lead to resounding success—as it did for Apple with the iPod—but it can also lead to resounding failure. Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.
Mintzberg recommended that planners rely less on making predictions about long-term strategic trends and more on reacting quickly to changes on the ground. Rather than attempting to anticipate correctly what will work in the future, that is, they should instead improve their ability to learn about what is working right now. Then, like Zara, they should react to it as rapidly as possible, dropping alternatives that are not working—no matter how promising they might have seemed in advance—and diverting resources to those that are succeeding.
Plans fail, in other words, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.
The Halo Effect, in other words, turns conventional wisdom about performance on its head. Rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.
Whenever we find ourselves describing someone’s ability in terms of societal measures of success—prizes, wealth, fancy titles—rather than in terms of what they are capable of doing, we ought to worry that we are deceiving ourselves.