This is a brief excerpt from The Read-Aloud Handbook by Jim Trelease (Penguin, 2013, 7th edition), now available as both a paperback and an e-book.
Chapter 5: SSR—sustained silent reading, reading aloud's silent partner—continued
What About Those Computerized “Reading Incentive” Programs?
Thirty years ago, when The Read-Aloud Handbook was first published, the idea of computerized reading incentive/reading management programs would have sounded like futurism. Today it is one of the most hotly debated concepts among educators and parents: Should children read for intrinsic rewards (the pleasure of the book), or should they be enticed to read for extrinsic rewards—prizes (or grades)?
 
Advantage Learning Systems’ Accelerated Reader and Scholastic’s Reading Counts, the two industry leaders, work this way: The school library contains a core collection of popular and traditional children’s books, each rated by difficulty (the harder and thicker the book, the more points it has).
Accompanying the books is a computer program that poses questions after the student has read each book. Passing the computer quiz earns points for the student reader, which can be redeemed for prizes like school T-shirts, privileges, or items donated by local businesses.
Both programs strongly endorse SSR as an integral part of their program and require substantial library collections. Both Accelerated Reader and Reading Counts have expanded their scope beyond incentives to include substantial student management and assessment tools.
Before going forward with this subject, I must note in full disclosure that I have been a paid speaker at three Accelerated Reader national conventions. I spoke on the subjects of reading aloud, SSR, and home/school communication problems, topics I have addressed at conventions for major education associations over a three-decade period.
Too many schools are doing the same thing with reading programs that other districts sadly have done with the game of basketball.
I have written and spoken both favorably and negatively about these computerized programs, but in recent years I’ve grown increasingly uneasy with the way they are being used by school districts. Too often now I see them being abused in ways similar to the way some places abuse a sport, turning it from recreation into a form of religion.
An increasing number of dedicated educators and librarians also are alarmed by the way computerized reading programs are being used. The original design was a kind of “carrot on a stick”—using points and prizes to lure reluctant readers to read more. For a while the big complaint from critics was about these points or incentives. I didn’t have a problem with that as long as the rewards didn’t get out of hand (and some have).
The real problem, as I see it, arrived when districts bought the programs with the idea that they would absolutely lift reading scores. “Listen,” declared the school board member, “if we’re spending fifty grand on this program that’s supposed to raise scores, then how can we allow it to be optional? You know the kids who’ll never opt for it, the ones with the low scores, will drag everyone else’s scores down. No, it’s gotta be mandatory participation.”
And to cement it into place, the district makes the point system 25 percent of the child's grade for a marking period. They have taken the carrot off the stick, leaving only the stick—a new grading weapon.
Here is a scenario that has been painted by more than a few irate librarians (school and public) in affluent districts that are using the computerized programs:
The parent comes into the library looking desperately for a “seven-point book.” “What kind of book does your son like to read?” asks the librarian.
The parent replies impatiently, “Doesn’t matter. He needs seven more points to make his quota for the marking period, which ends this week. Give me anything with seven points.”
In cases like that, we’re back to same ol’ same ol’: “I need a book for a book report. It’s due on Friday, so it can’t have too many pages.”
As for the research supporting the computerized programs, it is hotly contested, and there are no long-term studies with adequate control groups. True, the students read more, but is that because the district has poured all that money into school libraries and added SSR to the daily schedule? Where's the long-term research comparing twenty-five "computerized" classes with twenty-five classes that have rich school and classroom libraries and daily SSR in the schedule? So far, it's not there.
Believe it or not, high reading scores have been achieved in communities without computerized incentive programs, places where there are first-class school and classroom libraries, where the teachers motivate children by reading aloud to them, giving book talks, and including SSR time as an essential part of the daily curriculum. James K. Zaharis Elementary School in Mesa, Arizona, under principal Mike Oliver, is just such a place.
And the money that would have gone to the computer programs went instead to building a larger library collection. Unfortunately, such schools are rare. Where the scores are low, often the teachers’ knowledge of children’s literature is also low, the library collection is meager to dreadful, and drill-and-skill supplants SSR time. (Consider the blight of empty bookshelves in urban and rural schools noted in chapter 6.)
Are there any other negatives associated with these computerized programs?
Here are some serious negatives to guard against:
- Some teachers and librarians have stopped reading children’s and young adult books because the computer will ask the questions instead.
- Class discussion of books decreases because a discussion would give away test answers, and all that matters is the electronic score.
- Students narrow their book selection to only those titles included in the program's point list.
- In areas where the points have been made part of either the grade or classroom competition, some students attempt books far beyond their level and end up frustrated. (For an example of how to use such programs correctly, see page 69 of the print edition.)
Before committing precious dollars to such a program, a district should decide its purpose: Is the program there to motivate children to read more or to create another grading platform?
Susan Straight is no lightweight critic. With six novels to her credit (including a finalist for the National Book Award), along with an Edgar Award (given to mystery writers) and inclusion in the 2003 Best American Short Stories, this literature professor and mother of three carries some ballast in her literary criticism. In 2009 she took on Accelerated Reader.
Her argument was not with its good intentions but with how it is implemented and its point system (which often comes down to “thicker is better”). She wrote:
Librarians and teachers report that students will almost always refuse to read a book not on the Accelerated Reader list, because they won’t receive points. They base their reading choices not on something they think looks interesting, but by how many points they will get. The passion and serendipity of choosing a book at the library based on the subject or the cover or the first page is nearly gone, as well as the excitement of reading a book simply for pleasure.
This is not all the fault of Renaissance Learning [AR], which I believe is trying to help schools encourage students to read. Defenders of the program say the problem isn’t with Accelerated Reader itself, but with how it is often implemented, with the emphasis on point-gathering above all else. But when I looked at Renaissance Learning’s Web site again this summer, I noticed the tag line under the company name: “Advanced Technology for Data-Driven Schools.”
That constant drive for data is all too typical in the age of No Child Left Behind, helping to replace a freely discovered love of language and story with a more rigid way of reading.12

Straight and her daughter winced at the rating given to To Kill a Mockingbird: 15 points. She couldn’t help but gulp at the rating for Harry Potter and the Order of the Phoenix: 44 points. And then there was Gossip Girl, 8 points, with this AR description: “Enter the world of Gossip Girl and watch the girls drown in luxury while indulging in their favorite sports—jealousy, betrayal and late-night bar-hopping.”
If you were keeping score, the evaluations would look like this: Harry Potter is three times better than Mockingbird, and Mockingbird is only twice as good as Gossip Girl. Is there something wrong with the rating system here? How about the value system?
NEXT: The Print Climate in the home, school, and library