Cathleen O'Grady
arstechnica
November 2, 2015
Image credit: Walking in Sydney on Flickr
Many people are already familiar with the concept of confirmation bias, which is the tendency for people to seek out arguments that support their existing opinions. It turns out that we’re not only addicted to seeking information that confirms our biases, we’re also willing to tolerate really weak arguments to support our opinions. So weak, in fact, that if we’re tricked into thinking our own arguments come from a stranger, we’re likely to reject them.
A recent paper in the journal Cognitive Science explored this “selective laziness of reasoning,” finding that people really are quite sloppy with their own arguments. The laziness is selective, though—when we’re assessing the arguments of other people, we’re actually inclined to be pretty tough, especially when we disagree with their conclusions.
Lies, tricks, and deceit
Previous research has found that people produce weak arguments for their own beliefs; other studies have indicated that people are quite rigorous at assessing other people's arguments. But no previous study managed to show both happening at the same time. It's important to do this because circumstances can vary widely across different studies, making it difficult to be sure that people are definitely applying different standards.

To test their hypothesis of selective laziness, the authors of the Cognitive Science paper created a situation in which 237 workers on Amazon Mechanical Turk were tricked into thinking that their own arguments came from other people. They were presented with a series of logical problems and prompted for their answers as well as an explanation for their reasoning.
Once they were finished, the participants moved on to the second phase of the experiment. They were told that for each of the problems, they would be reminded of their own answer and shown the answer and argument of another participant. If they thought the other participant’s arguments were good, they would be allowed to change their own answer.
For most of the problems, this setup contained no substantial trickery, except that the answers the researchers provided weren’t actually from another participant. If the participants answered correctly, they were shown the most frequent incorrect answer with an argument that could plausibly support it. If they answered incorrectly, they were shown the correct answer with a supporting argument.
For a single question in the set, however, the authors changed this setup: the participants' original answers were altered slightly and presented as their initial responses. The participants' real answers, and the reasoning they'd given for them, were then presented as if they came from a different participant.
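To make the manipulation concrete, here is a minimal sketch, in Python, of the feedback logic described above. It is an illustration only, not the authors' materials: the Question fields and the slightly_alter() helper are hypothetical stand-ins for whatever the real stimuli contained.

from dataclasses import dataclass

@dataclass
class Question:
    correct_answer: str
    correct_argument: str
    most_common_wrong: str   # most frequent incorrect answer
    wrong_argument: str      # an argument that could plausibly support it

def slightly_alter(answer: str) -> str:
    # Stand-in: the study only says the target answer was "altered slightly".
    return answer + " (reworded)"

def phase_two_display(q: Question, own_answer: str, own_argument: str,
                      is_target: bool):
    """Return (what is shown as 'your answer', what is shown as another
    participant's answer and argument) for one problem."""
    if is_target:
        # The single manipulated question: an altered answer is passed off
        # as the participant's own, while their real answer and argument
        # are attributed to a stranger.
        return slightly_alter(own_answer), (own_answer, own_argument)
    if own_answer == q.correct_answer:
        # Correct answers are challenged with the most frequent wrong
        # answer plus a plausible supporting argument.
        return own_answer, (q.most_common_wrong, q.wrong_argument)
    # Incorrect answers are countered with the correct answer and support.
    return own_answer, (q.correct_answer, q.correct_argument)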
Naturally, quite a number of people noticed the switch, so the experiment was followed by a question about whether participants had noticed their own answers being fed back to them. Slightly more than half said they had, but only 32 percent correctly identified the question where it had happened.
The analysis focused on the people who hadn’t detected the switch. These people were generally quite unconvinced by their own arguments, rejecting them over half of the time.
People were also reasonably good at detecting whether their original argument had been good or bad. When they had initially chosen the incorrect answer and then were presented with their own argument for that answer, they ended up rejecting it two-thirds of the time. On the other hand, if they originally had the correct answer, they ended up keeping it nearly two-thirds of the time.
Overall, people chose to change their answers in line with better arguments. In the initial test, only 41 percent of people had chosen the correct answer on the manipulated question. In the second phase, when they were given the opportunity to be persuaded by an argument and make the switch, 62 percent got it right—a big jump upward.
More chance for self-scrutiny
As telling as these results might be, there were still limitations in the experiment. Most importantly, people weren't really given the opportunity to scrutinize and rethink their own arguments at any point. That makes it difficult to be certain that they wouldn't be critical of their own reasoning if they were given another chance to think things through.

It's also possible that pure and simple practice was playing a role: in the second round, participants had already seen and thought about the questions, so maybe their performance improved just because they were getting more time for consideration.
To handle some of these issues, the researchers conducted a second, similar experiment with one difference: a new group of 174 participants was first asked to answer the questions without being asked to provide their reasoning. In a second round, they were invited to rethink their answers, change them if they preferred, and then give a reason for their final answer. Finally, they were presented with answers from "other participants."
This setup allowed the researchers to check whether people’s answers improved in the second round, helping to isolate the effect of practice. It turned out that for the most part, the participants didn’t change their answers—only 23 people did; 141 didn’t. This outcome was true whether or not the initial answers were correct, suggesting that it wasn’t the extra practice at the problems that was causing the scores to improve.
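As a rough illustration of that check, a function like the one below could measure how stable answers were across the two rounds; it is a sketch under assumed inputs, not the authors' analysis code.

def share_unchanged(round_one: dict, round_two: dict) -> float:
    """Fraction of participants who kept their round-one answer after
    being invited to rethink it. round_one and round_two map participant
    IDs to answers on the same problem. A fraction near 1, as the
    reported counts (23 changed, 141 not) imply, suggests that later
    improvement came from the presented arguments rather than from extra
    time with the problems."""
    kept = sum(round_one[p] == round_two[p] for p in round_one)
    return kept / len(round_one)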
This time, 46 percent of participants reported noticing the deception, so the analysis focused on the 54 percent who hadn’t spotted it. These participants rejected their own answers 58 percent of the time, and again, they were more likely to change to the correct answer. As a result, the number of people getting the answer right jumped up from 43 percent to 64 percent.
Simple logic problems aren’t the real world
The results show that people are more likely to be critical of their own arguments when they think they are coming from someone else, the authors write. They also show that people “can discriminate strong from weak arguments when they think they are someone else’s.” So we’re not bad at reasoning, just indulging ourselves.

It’s encouraging that people were inclined to adopt the correct answers when they were persuaded by strong reasoning. That said, an experiment dealing with logic problems can’t necessarily be generalized to bigger, messier beliefs and arguments like vaccines or climate change. When individuals’ beliefs are more tied up with political and cultural identities, there are likely to be other influences on people’s decision to question and change their beliefs.
Cognitive Science, 2015. DOI: 10.1111/cogs.12303
http://arstechnica.com/science/2015/11/if-you-think-your-own-logic-came-from-someone-else-you-might-not-believe-it/