In January 2016, nervous high schoolers around the world took the old SAT for the last time. In March 2016, the new SAT was administered for the first time. For students eager to gain admission to a prestigious university, for parents hoping to tout their child’s success, and for school administrators who often take the blame for low performance, news of a redesigned SAT was welcome. But is it truly the miracle aptitude test the world is searching for? Despite some welcome changes, the controversy surrounding standardized testing seems to be growing rather than shrinking, and no one is quite sure how to handle it.
The Scholastic Aptitude Test was originally devised in 1926 as a comprehensive measure of potential that did not take class or background into account. Because it was adapted from the World War I Army I.Q. test, its creators assumed there was no effective way to prepare for an exam meant to measure innate intelligence. As early as 1938, however, Stanley Kaplan devised a method for improving SAT scores, one that launched the highly successful tutoring service that bears his name. Nowadays, America spends approximately 4.5 billion dollars a year on similar academic services aimed at raising scores.
But for decades now, people have questioned the test and its efficacy in predicting collegiate success. David Coleman, the current president of the College Board, the SAT’s parent company, has himself admitted that “Unequal test-prep access is a problem,” and that it was evident that “no parents, whatever their socioeconomic status, were satisfied” by the test or the overall painful process. These issues, along with the test’s reputation for convoluted and often awkward questions, frustrated college admissions officers to the point that many schools began eliminating the SAT (and its rival, the ACT) from the admissions process altogether. A now famous University of California study called the SAT a “relatively poor predictor of student performance.” News like this is compounded by researchers like Les Perelman, former director of writing at M.I.T., who raised students’ scores on the writing portion simply by instructing them to add incorrect facts, irrelevant quotations, fluff sentences, and little-known words.
Long story short, something needed to change. In the new version, there are no penalties for wrong answers, there are only four answer choices (both already features of the ACT), and the vocabulary is more familiar. However, the new reading passages are filled with complex sentence structures, the sections are longer, and the math problems are often clouded in unnecessary words. Jed Applerouth, who heads a national tutoring service, estimated that the “new math test was 50 percent reading comprehension,” and that “students will need to learn how to wade through all the language to isolate the math.”
Despite the hopes of the College Board, critics of the new test have found that it correlates even more loosely with actual IQ than previous versions did. Because both versions are known for their “trickiness,” college admissions officers and the general public alike are unsure how to react. Only time, and new research, will tell whether these changes were worth it.