What Makes Children Motivated and Successful?

Thursday 20 September 2012


The following article explores how children succeed.
Which matters more, cognitive ability or motivation?
Angela Duckworth, a psychologist at the University of Pennsylvania, has made it her life’s work to analyze which children succeed and why. She says she finds it useful to divide the mechanics of achievement into two separate dimensions: motivation and volition. Each one, she says, is necessary to achieve long-term goals, but neither is sufficient alone. Most of us are familiar with the experience of possessing motivation but lacking volition: You can be extremely motivated to lose weight, for example, but unless you have the volition—the willpower, the self-control—to put down the cherry Danish and pick up the free weights, you’re not going to succeed. If children are highly motivated, self-control techniques and exercises—things like learning how to distract themselves from temptations or to think about their goals abstractly—might be very helpful. But what if students just aren’t motivated to achieve the goals their teachers or parents want them to achieve? Then, Duckworth acknowledges, all the self-control tricks in the world aren’t going to help.

But that doesn’t mean it’s impossible to shift a person’s motivation. In the short term, in fact, it can be surprisingly easy. Consider a couple of experiments done decades ago involving IQ and M&M’s. In the first test, conducted in Northern California in the late 1960s, a researcher named Calvin Edlund selected 79 children between the ages of 5 and 7, all from “low-middle class and lower-class homes.” The children were randomly divided into an experimental group and a control group. First, they all took a standard version of the Stanford-Binet IQ test. Seven weeks later, they took a similar test, but this time the kids in the experimental group were given one M&M for each correct answer. On the first test, the two groups were evenly matched on IQ. On the second test, the IQ of the M&M group went up an average of 12 points—a huge leap.


A few years later, two researchers from the University of South Florida elaborated on Edlund’s experiment. This time, after the first, candy-less IQ test, they divided the children into three groups according to their scores on the first test. The high-IQ group had an average IQ score on the first test of about 119. The medium-IQ group averaged about 101, and the low-IQ group averaged about 79. On the second test, the researchers offered half the children in each IQ category an M&M for each right answer, just as Edlund had; the others in each group received no reward. The medium-IQ and high-IQ kids who got candy didn’t improve their scores at all on the second test. But the low-IQ children who were given M&M’s for each correct answer raised their IQ scores to about 97, almost erasing the gap with the medium-IQ group.

The M&M studies were a major blow to the conventional wisdom about intelligence, which held that IQ tests measured something real and permanent—something that couldn’t be changed drastically with a few candy-covered chocolates. They also raised an important and puzzling question about the supposedly low-IQ children: Did they actually have low IQs or not? Which number was the true measure of their intelligence: 79 or 97?

This is the kind of frustrating but tantalizing puzzle that teachers face on a regular basis, especially teachers in high-poverty schools. You’re convinced that your students are smarter than they appear, and you know that if they would only apply themselves, they would do much better. But how do you get them to apply themselves? Should you just give them M&M’s for every correct answer for the rest of their lives? That doesn’t seem like a very practical solution. And the reality is that for low-income middle-school students, there are already tremendous rewards for doing well on tests—not immediately and for each individual correct answer, but in the long term. If a student’s test scores and GPA through middle and high school reflect an applied IQ of 97 instead of 79, he is much more likely to graduate from high school and then college and then to get a good job—at which point he can buy as many bags of M&M’s as he wants.

But as every middle-school teacher knows, convincing students of that logic is a lot harder than it seems. Motivation, it turns out, is quite complex, and rewards sometimes backfire. In their book Freakonomics, Steven Levitt and Stephen Dubner recount the story of a study researchers undertook in the 1970s to see if giving blood donors a small financial stipend might increase blood donations. The result was actually that fewer people gave blood, not more.

And while the M&M test suggests that giving kids material incentives to succeed should make a big difference, in practice, it often doesn’t work that way. In recent years, the Harvard economist Roland Fryer has essentially tried to extend the M&M experiment to the scale of a metropolitan school system. He tested several different incentive programs in public schools—offering bonuses to teachers if they improved their classes’ test results; offering incentives like cellphone minutes to students if they improved their own test results; offering families financial incentives if their children did better. The experiments were painstaking and carefully run—and the results have been almost uniformly disappointing. There are a couple of bright spots in the data—in Dallas, a program that paid young kids for each book they read seems to have contributed to better reading scores for English-speaking students. But for the most part, the programs were a bust. The biggest experiment, which offered incentives to teachers in New York City, cost $75 million and took three years to conduct. And in the spring of 2011, Fryer reported that it had produced no positive results at all.

This is the problem with trying to motivate people: No one really knows how to do it well. It is precisely why we have such a booming industry in inspirational posters and self-help books and motivational speakers: What motivates us is often hard to explain and hard to measure.

Part of the complexity is that different personality types respond to different motivations. We know this because of a series of experiments undertaken in 2006 by Carmit Segal, then a postdoctoral student in the Harvard economics department and now a professor at a university in Zurich. Segal wanted to test how personality and incentives interacted, and she chose as her vehicle one of the easiest tests imaginable, an evaluation of basic clerical skills called the coding-speed test. It is a very straightforward test. First, participants are given an answer key in which a variety of simple words are each assigned a four-digit identifying number. The list looks something like this:

    game ..... 2715
    chin ..... 3231
    house .... 4232
    hat ...... 4568
    room ..... 2864
And then a little lower on the page is a multiple-choice test that offers five four-digit numbers as the potential correct answer for each word:

                A      B      C      D      E
    1. game    2864   3231   2715   4232   4568
    2. chin    3231   4232   4568   2864   2715
    3. house   4568   2864   4232   2715   3231

All you have to do is find the right number from the key above and then check that box (1C, 2A, 3C, etc.). It’s a snap, if a somewhat mind-numbing one.
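
To make the mechanics concrete, here is a minimal sketch in Python of how a test like this could be represented and scored. The key and questions are the illustrative ones above, not Segal’s actual test items.

```python
# A minimal sketch of the coding-speed test mechanics, using the
# illustrative key and questions above (not Segal's actual items).

# The answer key: each simple word is assigned a four-digit code.
KEY = {"game": 2715, "chin": 3231, "house": 4232, "hat": 4568, "room": 2864}

# Each question names a word and offers five candidate codes (options A-E).
QUESTIONS = [
    ("game",  [2864, 3231, 2715, 4232, 4568]),
    ("chin",  [3231, 4232, 4568, 2864, 2715]),
    ("house", [4568, 2864, 4232, 2715, 3231]),
]

def correct_option(word, options):
    """Return the letter (A-E) of the option that matches the key."""
    return "ABCDE"[options.index(KEY[word])]

for number, (word, options) in enumerate(QUESTIONS, start=1):
    print(f"{number}{correct_option(word, options)}")  # prints 1C, 2A, 3C
```
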
Segal located two large pools of data that included scores from thousands of young people on both the coding-speed test and a standard cognitive-skills test. One pool was the National Longitudinal Survey of Youth, or NLSY, a huge survey that began tracking a cohort of more than 12,000 young people in 1979. The other was a group of military recruits who took the coding exam as part of a range of tests they had to pass in order to be accepted into the U.S. Armed Forces. The high-school and college students who were part of the NLSY had no real incentive to exert themselves on the tests—the scores were for research purposes only and didn’t have any bearing on their academic records. For the recruits, though, the tests mattered very much; bad scores could keep them out of the military.

When Segal compared the scores of the two groups on each test, she found that on average, the high-school and college kids did better than the recruits on the cognitive tests. But on the coding-speed test, it was the recruits who did better. Now, that might have been because the kind of young person who chose to enlist in the armed forces was naturally gifted at matching numbers with words, but that didn’t seem too likely. What the coding-speed test really measured, Segal realized, was something more fundamental than clerical skill: the test takers’ inclination and ability to force themselves to care about the world’s most boring test. The recruits, who had more at stake, put more effort into the coding test than the NLSY kids did, and on such a simple test, that extra level of exertion was enough for them to beat out their more-educated peers.

Now, remember that the NLSY wasn’t just a one-shot test; it tracked young people’s progress afterward for many years. So next Segal went back to the NLSY data, looked at each student’s cognitive-skills score and coding-speed score in 1979, and then compared those two scores with the student’s earnings two decades later, when the student was about 40. Predictably, the kids who did better on the cognitive-skills tests were making more money. But so were the kids who did better on the super-simple coding test. In fact, when Segal looked only at NLSY participants who didn’t graduate from college, their coding-test scores were every bit as reliable a predictor of their adult wages as their cognitive-test scores. The high scorers on the coding test were earning thousands of dollars a year more than the low scorers.
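
Segal’s comparison can be pictured as a simple two-predictor analysis: take each person’s two 1979 scores and see how well each one predicts wages twenty years on. Below is a rough sketch in Python of that kind of comparison; the data is synthetic (an invented “ability” trait and “effort” trait, with assumed weights), so it illustrates only the shape of the method, not the NLSY data or Segal’s actual results.

```python
import numpy as np

# Synthetic stand-in for the NLSY records: an unobserved ability trait
# (what the cognitive test targets) and an unobserved effort trait
# (what the coding test ends up measuring). All weights here are
# invented for illustration; none of this is Segal's data.
rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(size=n)
effort = rng.normal(size=n)

cognitive_score = ability + 0.3 * rng.normal(size=n)
coding_score = 0.3 * ability + effort + 0.3 * rng.normal(size=n)
log_wage_at_40 = 0.5 * ability + 0.5 * effort + rng.normal(size=n)

# Compare each 1979 score as a predictor of wages two decades later.
for name, score in [("cognitive", cognitive_score), ("coding", coding_score)]:
    r = np.corrcoef(score, log_wage_at_40)[0, 1]
    print(f"{name} score vs. log wage at ~40: r = {r:.2f}")
```
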

And why? Does the modern American labor market really put such a high value on being able to compare mindless lists of words and numbers? Of course not. And in fact, Segal didn’t believe that the students who did better on the coding test actually had better coding skills than the other students. They did better for a simple reason: They tried harder. And what the labor market does value is the kind of internal motivation required to try hard on a test even when there is no external reward for doing well. Without anyone realizing it, the coding test was measuring a critical noncognitive skill that mattered a lot in the grown-up world.

Segal’s findings give us a new way of thinking about the so-called low-IQ kids who took part in the M&M experiment in south Florida. Remember, they scored poorly on the first IQ test and then did much better on the second test, the one with the M&M incentive. So the question was: What was the real IQ of an average “low-IQ” student? Was it 79 or 97? Well, you could certainly make the case that his or her true IQ must be 97. You’re supposed to try hard on IQ tests, and when the low-IQ kids had the M&M’s to motivate them, they tried hard. It’s not as if the M&M’s magically gave them the intelligence to figure out the answers; they must have already possessed it. So in fact, they weren’t low-IQ at all. Their IQs were about average.

But what Segal’s experiment suggests is that it was actually their first score, the 79, that was more relevant to their future prospects. That was their equivalent of the coding-test score, the low-stakes, low-reward test that predicts how well someone is going to do in life. They may not have been low in IQ, but they were low in whatever quality it is that makes a person try hard on an IQ test without any obvious incentive. And what Segal’s research shows is that that is a very valuable quality to possess.



