How do you measure intelligence?
It's that time of year again, with thousands of high school students sitting for the SAT. It's a different beast from the one most of us took: your score is now out of 2400 points, not 1600. There are no antonyms, you can use calculators, quantitative comparisons have disappeared, and you need to write a 25-minute essay. USA Today has an article here about the test and the angst it's causing students.
These changes, instituted last year, are just the most recent in a long line of reforms since the Scholastic Aptitude Test was introduced in 1926. The College Board says the test has "evolved to remain aligned with classroom practices." As such, the math section has started covering more advanced work, the word "Aptitude" has been dropped, and the letters SAT no longer stand for anything.
I'm quite torn on this. On one hand, the SAT has always been about predicting your ability to succeed in college, and performance in the high school curriculum is usually the best predictive measure in this regard. A standardized exam that tests that curriculum helps colleges compare students across schools (when straight-A students score around 1750 at one school and around 2100 at another, you have a pretty good indication of which school is more rigorous). And I'm thrilled that schools now have to teach writing (the most neglected "R" in the past).
On the other hand, one of the original marks in the SAT's favor is that it didn't cover the high school curriculum. Before the early 1900s, few people beyond the sons of wealthy, WASPy families attended college, and before pursuing higher education, these young men attended a handful of northeastern prep schools. The SAT gave young people from high schools that hadn't aligned their curricula with Harvard a way to prove they were just as much Harvard material. After all, if the SAT didn't test knowledge of the high school curriculum, nothing prevented a brilliant working-class kid from a bad school from outscoring a privileged, but not so bright, boy from Exeter.
Or at least that's the theory. It's hard to design a pure intelligence test. As kids have started being exposed to more logic puzzles and games like mazes, their ability to solve such puzzles has risen. The old SAT covered subject matter such as geometry; my math score (but presumably not my intelligence) rose 140 points on the old test after taking a summer class in the topic.
The fine line between innate intelligence and subject matter exposure is one reason many educators are wary of using intelligence tests alone to select kids for gifted programs. The National Education Association website has an article on a math program in Connecticut that explicitly tries to expand the pool of kids identified as capable of advanced mathematical work by moving beyond IQ tests.
"Kids are selected based on multiple criteria, including a special assessment of nonverbal math ability, which measures such things as spatial sense and reasoning, and standardized tests when available. Teacher recommendations and prior grades also factor in. Opening up the selection process (gifted programs in the past often selected students based on IQ scores alone) has allowed students with less obvious talents to benefit," the article says.
Yet even as teacher recommendations and grades are being used, the NEA article warns teachers that "Sometimes actions speak louder than words. A kid who seems bored or disinterested (even acting up) may, in fact, need more challenging work." Since such students are less likely to be earning high marks and teacher praise, opening up the selection process might still miss them.