(Roughly) Daily

Posts Tagged ‘behavioral psychology’

“The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts”*…

In a recent post we considered “agnotology”—the study of ignorance. Today, John Timmer unpacks a related phenomenon…

The world is full of people who have excessive confidence in their own abilities. This is famously captured by the Dunning-Kruger effect: people who lack expertise in something necessarily lack the knowledge needed to recognize their own limits. Now, a different set of researchers has come out with what might be viewed as a corollary to Dunning-Kruger: People have a strong tendency to believe that they always have enough data to make an informed decision—regardless of what information they actually have.

The work, done by Hunter Gehlbach, Carly Robinson, and Angus Fletcher, is based on an experiment in which they intentionally gave people only partial, biased information, finding that people never seemed to consider they might only have a partial picture. “Because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not,” they write. The good news? When given the full picture, most people are willing to change their opinions…

[Timmer explains the experiment and runs through the particulars of the results]

… This is especially problematic in the current media environment. Many outlets have been created with the clear intent of exposing their viewers to only a partial view of the facts—or, in a number of cases, the apparent intent of spreading misinformation. The new work clearly indicates that these efforts can have a powerful effect on beliefs, even if accurate information is available from various sources…

The full PLOS One paper is here.

When given partial info, most feel confident that’s all they need to know: “People think they already know everything they need to make decisions,” from @jtimmer.bsky.social in @arstechnica.com.

* Bertrand Russell

###

As we read widely, we might spare a thought for a victim of just this sort of misplaced confidence, John Scopes; he died on this date in 1970. A teacher in Dayton, Tennessee, he was prosecuted in 1925 for teaching evolution in the local high school.

… [Scopes] was accused of violating Tennessee’s Butler Act, which had made it illegal for teachers to teach human evolution in any state-funded school. The trial was deliberately staged in order to attract publicity to the small town of Dayton, Tennessee, where it was held. Scopes was unsure whether he had ever actually taught evolution, but he incriminated himself deliberately so the case could have a defendant.

Scopes was found guilty and was fined $100 (equivalent to $1,700 in 2023), but the verdict was overturned on a technicality. The trial served its purpose of drawing intense national publicity, as national reporters flocked to Dayton to cover the high-profile lawyers who had agreed to represent each side. William Jennings Bryan, three-time presidential candidate and former secretary of state, argued for the prosecution, while Clarence Darrow served as the defense attorney for Scopes. The trial publicized the fundamentalist–modernist controversy, which set modernists, who said evolution could be consistent with religion, against fundamentalists, who said the word of God as revealed in the Bible took priority over all human knowledge. The case was thus seen both as a theological contest and as a trial on whether evolution should be taught in schools…

… In 1958 the National Defense Education Act was passed with the encouragement of many legislators who feared the United States education system was falling behind that of the Soviet Union. The act yielded textbooks, produced in cooperation with the American Institute of Biological Sciences, which stressed the importance of evolution as the unifying principle of biology. The new educational regime was not unchallenged. The greatest backlash was in Texas, where attacks were launched in sermons and in the press. Complaints were lodged with the State Textbook Commission. However, in addition to federal support, a number of social trends had turned public discussion in favor of evolution. These included increased interest in improving public education, legal precedents separating religion and public education, and continued urbanization in the South. This led to a weakening of the backlash in Texas, as well as to the repeal of the Butler Act in Tennessee in 1967…

source

Scopes (source)

“See no evil, hear no evil, speak no evil”*…

Altruists seek to understand how their actions will affect others—while willful ignorance can free people to act selfishly. Linh Vu and Margarita Leib explain…

Willful ignorance abounds in daily life. People regularly look the other way rather than examining the consequences of their actions. Despite the plethora of scientific evidence for climate change, for instance, many people still avoid engaging with facts about global warming. They don’t always want to know about the harsh living conditions of farm animals. And consumers often put aside ethical concerns about how the products they purchase were sourced.

As behavioral scientists, we wanted to understand just how prevalent willful ignorance is—as well as why people engage in it. Together with our colleagues, we pooled data from multiple research projects that collectively involved more than 6,000 individuals. We discovered that willful ignorance is common and harmful, with 40 percent of people choosing “not to know” the consequences of their actions to free themselves of guilt while maximizing their own gains. But we also found that about 40 percent of people are altruistic: rather than avoiding information about the consequences of their actions, they seek it out to increase the benefits to others…

[The authors unpack their findings…]

… Our findings hint at ways to combat willful ignorance. In the studies we analyzed, decision-making occurred within a moral framing: you could benefit yourself at the expense of your partner. This presentation is fertile ground for willful ignorance because it poses a threat to one’s self-image, heightening the sense that—if you know what’s really going on—you will have to make harder choices to be a good person.

If we can avoid putting a strong moral emphasis on decisions, it may make people feel less threatened and, as a result, be less willfully ignorant. Other research groups have found promising ways to do this. For instance, we can present choices in ways that highlight ethical options first, such as making vegetarian menus the default, while still allowing people to opt for meat, as part of an effort to promote sustainable food choices. Or we could encourage people to think more positively about good deeds rather than guilt-trip them for what they have failed to do. Highlighting recent global achievements, such as healing the ozone layer, for instance, can inspire people to keep up the good work rather than feeling like the battle is lost and that the situation is all gloom and doom.

In short, we can encourage one another and ourselves toward more selfless and generous actions…

Addressing the all-too-prevalent problem of willful ignorance: “Why Some People Choose Not to Know,” from @scientificamer. Eminently worth reading in full.

Apposite: “How David Attenborough Went From Delighting at the Natural World to Pleading for Its Future.”

* Proverb (originating in Japan in the 16th century)

###

As we encourage inquiry, we might spare a thought for Rachel Carson; she died on this date in 1964. A pioneering environmentalist, her book Silent Spring– a study of the long-term dangers of pesticide use– challenged the practices of agricultural scientists and the government, and called for a change in the way humankind relates to the natural world.

The more clearly we can focus our attention on the wonders and realities of the universe about us, the less taste we shall have for destruction.
– Rachel Carson

 source

“For what are myths if not the imposing of order on phenomena that do not possess order in themselves? And all myths, however they differ from philosophical systems and scientific theories, share this with them, that they negate the principle of randomness in the world.”*…

And we humans are, as Kit Yates explains, myth-making animals…

Unfortunately, when it comes to understanding random phenomena, our intuition often lets us down. Take a look at the image below. Before you read the caption, see if you can pick out the data set generated using truly uniform random numbers for the coordinates of the dots (i.e., for each point, independent of the others, the horizontal coordinate is equally likely to fall anywhere along the horizontal axis and the vertical coordinate is equally likely to fall anywhere along the vertical).

Three data sets, each with 132 points. One represents the position of the nests of Patagonian seabirds, another the position of ant colony nest sites and the third represents randomly generated coordinates. Can you guess which one is which?

The truly randomly distributed points in the figure are those in the left-most image. The middle image represents the position of ants’ nests that, although distributed with some randomness, demonstrate a tendency to avoid being too close together in order not to overexploit the same resources. The territorial Patagonian seabirds’ nesting sites, in the right-most image, exhibit an even more regular and well-spaced distribution, preferring not to be too near to their neighbors when rearing their young. The computer-generated points, distributed uniformly at random in the left-hand image, have no such qualms about their close proximity.

If you chose the wrong option, you are by no means alone. Most of us tend to think of randomness as being “well spaced.” The tight clustering of dots and the frequent wide gaps of the genuinely random distribution seem to contradict our inherent ideas of what randomness should look like…
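The clustering Yates describes is easy to reproduce for yourself: scatter points uniformly at random and the tightest nearest-neighbor gap comes out far smaller than our “well spaced” intuition expects. A minimal sketch (the 132-point count follows the figure caption; the unit square and seed are illustrative assumptions):

```python
import math
import random

random.seed(0)

# Scatter 132 points uniformly at random in the unit square,
# mirroring the left-hand panel of the figure above.
points = [(random.random(), random.random()) for _ in range(132)]

def nearest_neighbor_distances(pts):
    """For each point, the distance to its closest neighbor."""
    dists = []
    for i, (x1, y1) in enumerate(pts):
        best = min(
            math.hypot(x1 - x2, y1 - y2)
            for j, (x2, y2) in enumerate(pts)
            if i != j
        )
        dists.append(best)
    return dists

nn = nearest_neighbor_distances(points)

# A perfectly even 12-by-11 grid of 132 points would keep neighbors
# roughly 0.09 apart; truly random points routinely land much closer.
print(f"closest pair: {min(nn):.4f}")
print(f"mean nearest-neighbor gap: {sum(nn) / len(nn):.4f}")
```

Running this a few times with different seeds makes the point: the minimum gap is consistently a fraction of the mean gap, which is exactly the tight clustering that makes genuine randomness look “non-random” to us.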

… As a case in point, after noticing a disproportionate number of Steely Dan songs playing on his iPod shuffle, journalist Steven Levy questioned Steve Jobs directly about whether “shuffle” was truly random. Jobs assured him that it was and even got an engineer on the phone to confirm it. A follow-up article Levy wrote in Newsweek garnered a huge response from readers having similar experiences, questioning, for example, how two Bob Dylan songs shuffled to play one after the other (from among the thousands of songs in their collections) could possibly be random.

We ascribe meaning too readily to the clustering that randomness produces, and, consequently, we deduce that there is some generative force behind the pattern. We are hardwired to do this. The “evolutionary” argument holds that tens of thousands of years ago, if you were out hunting or gathering in the forest and you heard a rustle in the bushes, you’d be wise to play it safe and to run away as fast as you could. Maybe it was a predator out looking for their lunch and by running away you saved your skin. Probably, it was just the wind randomly whispering in the leaves and you ended up looking a little foolish—foolish, but alive and able to pass on your paranoid pattern-spotting genes to the next generation…

This… is just one example of the phenomenon known in the psychology literature as pareidolia, in which an observer interprets an ambiguous auditory or visual stimulus as something they are familiar with. This phenomenon, otherwise known as “patternicity,” allows people to spot shapes in the clouds and is the reason why people think they see a man in the moon. Pareidolia is itself an example of the more general phenomenon of apophenia, in which people mistakenly perceive connections between and ascribe meaning to unrelated events or objects. Apophenia’s misconstrued connections lead us to validate incorrect hypotheses and draw illogical conclusions. Consequently, the phenomenon lies at the root of many conspiracy theories—think, for example, of extraterrestrial seekers believing that any bright light in the sky is a UFO.

Apophenia sends us looking for the cause behind the effect when, in reality, there is none at all. When we hear two songs by the same artist back-to-back, we are too quick to cry foul in the belief that we have spotted a pattern, when in fact these sorts of clusters are an inherent feature of randomness. Eventually, the dissatisfaction caused by the clustering inherent to the iPod’s genuinely random shuffle algorithm led Steve Jobs to implement the new “Smart Shuffle” feature on the iPod, which meant that the next song played couldn’t be too similar to the previous song, better conforming to our misconceived ideas of what randomness looks like. As Jobs himself quipped, “We’re making it less random to make it feel more random.”…
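The iPod anecdote can be checked the same way: shuffle a music library uniformly at random, and back-to-back plays by one artist turn out to be the norm rather than evidence of a rigged algorithm. A small simulation (the library of 10 artists with 20 songs apiece is a hypothetical stand-in, not data from the Levy story):

```python
import random

random.seed(1)

# Hypothetical library: 10 artists, 20 songs apiece (200 songs total).
library = [(artist, song) for artist in range(10) for song in range(20)]

def same_artist_pairs(playlist):
    """Count adjacent pairs of songs played by the same artist."""
    return sum(
        playlist[i][0] == playlist[i + 1][0]
        for i in range(len(playlist) - 1)
    )

trials = 1_000
hits = 0
for _ in range(trials):
    shuffled = random.sample(library, len(library))  # a uniform random shuffle
    if same_artist_pairs(shuffled) > 0:
        hits += 1

# With 200 songs, roughly one in ten adjacent pairs shares an artist,
# so virtually every truly random shuffle produces at least one
# same-artist back-to-back pair.
print(f"shuffles with a same-artist pair: {hits / trials:.1%}")
```

On a library like this, each adjacent pair has about a 19-in-199 chance of sharing an artist, so a 200-song shuffle is expected to contain many such pairs: the listeners who wrote to Levy were observing randomness working exactly as it should.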

“Why Randomness Doesn’t Feel Random,” an excerpt from How to Expect the Unexpected: The Science of Making Predictions—and the Art of Knowing When Not To, by @Kit_Yates_Maths in @behscientist.

* Stanislaw Lem

###

As we ponder purported patterns, we might send carefully-discerned birthday greetings to a man who did in fact find a pattern (or at least a meaning) in what might have seemed random and meaningless: Robert Woodrow Wilson; he was born on this date in 1936. An astronomer, he detected– with Bell Labs colleague Arno Penzias– cosmic microwave background radiation: “relic radiation”– that’s to say, the “sound”– of the Big Bang. It’s familiar to those of us old enough to remember watching an old-fashioned television after the test pattern was gone (when there was no broadcast signal received): the “fuzz” we saw and the static-y sounds we heard included that “relic radiation” being picked up.

Their 1964 discovery earned them the 1978 Nobel Prize in Physics.

 source

“A few scribbles on a blackboard… can change the course of human affairs”*…


What’s the most transformative piece of technology in U.S. classrooms? Smart boards? Laptops? In a 2000 paper on computers in education, Steven D. Krause argues that it’s one that’s been around for nearly two centuries: the blackboard. And he suggests that if we want to understand how teachers adopt technology, we might want to study its history.

To understand the impact of blackboards, Krause writes, we need to consider what schools were like before them. Around 1800, most U.S. schools were one-room log buildings with a fireplace at one end and a single window at the other. “Writing lessons” generally meant students working on their own, whittling goose-quill pens and copying out texts.

When chalkboards first arrived in the early nineteenth century, they came as a revelation to teachers and education experts. In 1841, one educator declared that the blackboard’s unknown inventor “deserves to be ranked among the best contributors to learning and science, if not among the greatest benefactors of mankind.” Around the same time, another writer praised blackboards for “reflecting the workings, character and quality of the individual mind.”

It’s important to remember that school budgets and student-teacher ratios in the early nineteenth century would seem ludicrous to a modern school district. One teacher might be responsible for hundreds of students, with very little funding for supplies.

Krause writes that one prominent way of using the blackboard to improve education under these circumstances was known as the Lancasterian method, after British educator Joseph Lancaster. Lancaster prescribed particular ways of physically arranging the classroom so that a teacher could work with a large group all at once…

The whole dusty story at “How blackboards transformed American education.”  Read Krause’s paper, “‘Among the Greatest Benefactors of Mankind’: What the Success of Chalkboards Tells Us about the Future of Computers in the Classroom,” here.

* Stanislaw Ulam

###

As we clean the erasers, we might send repetitive-but-instructive birthday greetings to Edwin Ray Guthrie; he was born on this date in 1886. A philosopher and mathematician by training, he became a leading behavioral psychologist, specializing in the psychology of learning and, more specifically, in the role association plays in acquiring skills. He’s probably best remembered for his belief that all learning is based on a stimulus-response association, instantiated in his Law of Contiguity, which held that “a combination of stimuli which has accompanied a movement will on its recurrence tend to be followed by that movement.” Movements are, he argued, small stimulus-response combinations; these movements make up an act. Thus a learned behavior– an act that is consolidated by the learner so that it can be repeated– is, at its root, a series of movements. Guthrie believed that what is learned are the movements (of which the behaviors are simply a result).

 source


Written by (Roughly) Daily

January 9, 2018 at 1:01 am