Tuesday, January 17, 2006

How do you measure intelligence?

It's that time of year again, with thousands of high school students sitting for the SAT. It's a different beast than the one most of us took; your score is now out of 2400 points, not 1600. There are no antonyms, you can use calculators, quantitative comparisons have disappeared, and you need to write a 25-minute essay. USA Today has an article about the test, and the anxiety it is causing students, here.

These changes, instituted last year, are just the most recent in a long line of reforms since the Scholastic Aptitude Test was introduced in 1926. The College Board says the test has "evolved to remain aligned with classroom practices." As such, the math section has started covering more advanced work, the word "Aptitude" has been removed, and the letters SAT now don't stand for anything.

I'm quite torn on this. On one hand, the SAT has always been about determining your ability to succeed in college. One's performance in the high school curriculum is usually the best predictive measure in this regard. A standardized exam that tests the high school curriculum helps colleges compare students across schools (when straight-A students score 1750 at one school and 2100 at another, you have a pretty good indication of which school is more rigorous). And I'm thrilled that schools now have to teach writing (the most neglected "R" in the past).

On the other hand, one of the original marks in the SAT's favor is that it didn't cover the high school curriculum. Before the early 1900s, few people beyond the sons of wealthy, WASPy families attended college. Before pursuing higher education, these young men attended a handful of northeastern prep schools. The SAT provided a way for young people from high schools that hadn't aligned their curricula with Harvard to prove they were just as much Harvard material. After all, if the SAT didn't test knowledge of the high school curriculum, nothing prevented a brilliant working-class kid from a bad school from outscoring a privileged, but not so bright, boy from Exeter.

Or at least that's the theory. It's hard to design a pure intelligence test. As kids have started being exposed to more logic puzzles and games like mazes, their ability to solve such puzzles has risen. The old SAT covered subject matter such as geometry; my math score (but presumably not my intelligence) rose 140 points on the old test after taking a summer class in the topic.

The fine line between innate intelligence and subject matter exposure is one reason many educators are wary of using intelligence tests alone to select kids for gifted programs. The National Education Association website has an article on a math program in Connecticut that tries, explicitly, to expand the pool of kids identified as capable of doing advanced mathematical work, by moving beyond IQ tests.

"Kids are selected based on multiple criteria, including a special assessment of nonverbal math ability, which measures such things as spatial sense and reasoning, and standardized tests when available. Teacher recommendations and prior grades also factor in. Opening up the selection process (gifted programs in the past often selected students based on IQ scores alone) has allowed students with less obvious talents to benefit," the article says.

Yet at the same time that teacher recommendations and grades are being used, the NEA article warns teachers that "Sometimes actions speak louder than words. A kid who seems bored or disinterested (even acting up) may, in fact, need more challenging work." Since such students are less likely to be earning high marks and teacher praise, opening up the selection process might miss them.


Stormia said...

About the SAT... personally, I miss the old test. The new one is an unnecessarily long ordeal (especially the day it was first given; I was at the test for six hours). The old one seemed to test logic more, and I miss analogies. But I do very well on the new test (800V, 740M, 740W), so I suppose I can't complain. I think that no matter how the test is designed, some kids are going to be at a disadvantage, and the test is going to take up ridiculous amounts of time. The test will never be perfect, and the problem with making it 'better' is that no one can agree upon what 'better' might mean.

jason smith said...

Dear Laura,

I strongly support your point that access to gifted education should not be based on one test. The ability of one test to assess true intellectual ability for an individual is limited at best.

For example, the correlation between different IQ tests, or between IQ and the old SAT, is around 0.7, which means the score on one test accounts for only 49% of the variance in performance on another. The coefficient is actually lower at higher scores (regression to the mean).

Put in lay terms, a correlation of 0.7 or less means a child has a good chance of being found PG on one test and not even G on another.
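[To make the 0.7 figure concrete, here is a rough simulation of my own (not from any of the articles above). It assumes scores on two tests are standardized normal variables with correlation 0.7, and that a gifted program cuts off around the 98th percentile; both of those are illustrative assumptions, as is every variable name in the sketch.]

```python
import random
import math

random.seed(0)
r = 0.7  # assumed correlation between two IQ-style tests

# Shared variance is the square of the correlation coefficient.
print(f"shared variance: {r**2:.0%}")  # → 49%

# Simulate paired standardized scores (mean 0, sd 1) with correlation r:
# b is built from a plus independent noise scaled so corr(a, b) = r.
n = 100_000
pairs = []
for _ in range(n):
    a = random.gauss(0, 1)
    b = r * a + math.sqrt(1 - r**2) * random.gauss(0, 1)
    pairs.append((a, b))

# Hypothetical "gifted" cutoff near the 98th percentile (z ≈ 2.054).
cutoff = 2.054
above_a = [(a, b) for a, b in pairs if a >= cutoff]
also_above_b = [p for p in above_a if p[1] >= cutoff]
print(f"qualify on test A: {len(above_a)}")
print(f"of those, also qualify on test B: {len(also_above_b)}")
```

Under these assumptions, only a minority of the children who clear the cutoff on one test also clear it on the other, which is the "PG on one test, not even G on another" point in plain numbers.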

IMO (I anticipate being flamed on this one here) tests should not be used to EXCLUDE children from gifted programs, but rather to identify students not found based on their class work. If the child cannot keep up with the class, then they can be put back with standard schooling, but if they are doing well they should be allowed to stay, no matter what their test scores.


PS - in the few school systems with gifted programs (including my nieces'), it is common for children to be kicked out, independent of their performance, based on yearly IQ or other aptitude testing.

Anonymous said...

I don't know how to measure it, but I think the test should be less academics-focused and instead look at 3 areas. 1. Problem solving; 2 Ability to generate original thought and ideas; 3. Emotional awareness/control. I add the third one not because it's about intelligence but rather a sort of predictor of success.

jo_jo said...

Jason, I'm dumbfounded that your nieces' gifted program is set up this way. It shows such ignorance of who gifted people are and what they need, not to mention human development and the purpose of tests. That's really very disturbing.

Concerning the SAT, I see the point of it for getting into college, but I do think that as a measure of intelligence it is an example of correlation, not causation. Especially if you can raise your score so drastically with extra training.

jason smith said...

Hi Jo Jo

Unfortunately, a lot of them in NJ are set up that way. It's largely because the resources for them are so minimal - they almost seem to look for every way they can to reduce the class size. (Of course, that is the last step before eliminating the program altogether.)

Actually, the SAT is as good an IQ test as anything else. There was an excellent article by Frey and Detterman demonstrating this recently (see this press report on it: http://www.boston.com/news/globe/ideas/articles/2004/07/04/the_sat_tests/ ).

The real myth here is that IQ tests are NOT coachable. The WAIS uses a lot of very standard puzzle-type questions you can find on the internet or in games magazines, as well as standard vocabulary testing like on the SAT. The Stanford-Binet (SB) tests, up until the fifth edition, were mostly vocabulary and reading comprehension - again shown to be coachable through the experience with the SAT.

Anyway, my main point, building on Laura's point about the lack of an absolute test measure of ability, is that since no one test can give a very accurate measure, gifted programs should be more inclusive in enrolling students. Furthermore, performance in the program, not on a test, should determine who stays with the program.