Secrets Of The Test Security Industry

Soon it'll be time for kids to start taking tests and for educators to start administering them -- all under a cloud of suspicion and mistrust about the tests' security. So I thought I'd pass along this Q and A with test security expert John Fremer, who talks about how big, how complicated, and how dangerous cheating prevention efforts can be (people have been killed!), who cheats (everyone but state agencies, apparently), and the high-tech tactics that might be used to prevent cheating on the "common core" assessments (can you say "Linear On The Fly Testing"?). What's your take on test security -- too much, too little, the wrong kind? Read below and share your thoughts.

Q:  How big is the testing industry, in terms of tests or dollars per year?  What about the test security industry?

JF:  The testing industry is tens of billions of dollars annually
with many facets - test development, test administration, test scoring,
test program management, test preparation, etc. The test security
industry is much smaller, and much of the expenditure is buried within
other contracts. Overall it is probably in the tens of millions of
dollars annually.

Q:  What would you consider a normal, average amount of cheating each year?

JF:  My guess is that one to two percent of educators have some
involvement in cheating when they play a major role in high stakes
testing. This is a very low rate compared to many other fields, although
some fields have even lower levels of cheating.

Q:  Education's not the worst?  What fields have higher rates of cheating, and which have lower?

JF: The Information Technology area has very significant problems, in
part because a substantial number of their test takers take exams in
other countries around the world.  The legal testing arena has
relatively low rates of testing misbehavior, in good measure because
they exercise very firm control of the testing process.

Q:  What did you find when you looked into cheating allegations in Atlanta?

JF:  We confirmed that there were some schools and some classes
within schools where the results strongly suggest testing misbehavior on
the part of educators. In Atlanta, looking at the school with the
greatest number of inexplicable results, the probability of those
results occurring by chance is one over ten to the 52nd power. I liken
this to flipping two coins and having the first land and stay on its
edge and the second landing and staying on the edge of the first.
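
To give a feel for the kind of arithmetic behind a "one in ten to the 52nd" claim, here is a minimal sketch of a chance-alone calculation, of the sort used in erasure analyses. The numbers (100 answer changes, a 4% baseline wrong-to-right rate, 40 observed wrong-to-right changes) are entirely hypothetical and are not the actual Atlanta figures or Caveon's method; the point is only how a binomial tail probability collapses toward zero.

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing
    at least k flagged events among n trials by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 100 answer changes in one classroom, a 4%
# baseline wrong-to-right rate, and 40 observed wrong-to-right changes.
p_chance = binomial_tail(n=100, k=40, p=0.04)
print(f"{p_chance:.2e}")  # far below one in a billion billion
```

Real forensic analyses are more elaborate (they model each student and item), but the logic is the same: compute how likely the observed pattern is under honest behavior, and flag results where that likelihood is astronomically small.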

Q:  What if anything do you think you or the state or
district could have done to prevent the dispute over your findings
that's taking place with the Governor?

JF:  I think things would have gone more smoothly if I had arranged
to meet directly with the Governor's Office of Student Achievement to be
absolutely certain that our data requests and the reasons for them
were spelled out in the clearest possible way. As to the Atlanta Public
Schools, I have recommended that they get permission to run their own
cheating detection analyses to give them more ability to focus on what
specific kinds of problems are occurring and where.

Q:  How's your relationship with Governor's Office of Student Achievement ED Kathleen Mathers these days?

JF:  People seem intrigued by the disagreement that I have with
Kathleen Mathers about Caveon not getting all the data we asked for and
with the outcome of Caveon and KPMG's analyses and with our conclusions.
Still, I truly applaud her for carrying out cheating analyses and then
calling for a thorough review of what accounted for the results that her
vendor came up with. Kathleen's activities are quite commendable and at
the forefront of how a state testing leader should address possible
cheating.

Q:  Is most cheating done by kids, teachers, school-level staff, or district or state staff?

JF:  Cheating is found in all of educational testing, including
teacher-made tests which account for the majority of all testing in
education. In these situations cheating by students is the most common;
teachers have little reason to cheat on tests they wrote themselves.

Q:  What about in school- district- or state-wide testing situations?

JF:  Student cheating may still be the most prevalent, but attention
tends to focus on educator cheating, of which most occurs at the school
level, some at the district level, and almost none to my knowledge at
the state level.

Q:  Is most cheating done on high-stakes tests that affect graduation or on lower stakes testing that affects school ratings?

JF:  Cheating occurs on all high stakes tests, regardless of the
purpose. The degree of cheating is very much impacted by the way the
program is managed. Very careful monitoring, clear testing rules,
explicit consequences, systematic cheating analyses, etc. when done
carefully and thoroughly can keep cheating to a quite low level. It is
extremely difficult to completely avoid cheating and very expensive to
come close.

Q:  Are there more cheating problems now or is it just that there's more testing and more coverage in the media under NCLB?

JF:  I believe that there is more cheating now. There is also more
media coverage, but I think that is partly because there is more
cheating to cover. It is true that the public's appetite for cheating
stories seems very strong.

Q:  What's the worst test security breach or cheating scenario you've ever seen or heard about?

JF:  We track stories about test cheating from around the world and
proctors have been killed while trying to maintain test security.

Q:  What's the worst breach over the past year or two?

JF:  In a certification testing program outside of education, an
entire computer-delivered test item bank was stolen before even one
person had taken the test for real.

Q:  What field did that happen in, and was it reported to the public?

JF: It was Information Technology.  I don't think that there was a public report about it.

Q:  Will the common core of standards and the new tests coming with them make things worse or better in terms of cheating?

JF:  The incentive to compromise the tests will go up because of the
widespread use of common test items, so the return to "test pirates"
will go up. Also, the same test items will be given in many different
places. Unless the cooperating states prepare conscientiously and
follow through with great skill and focus, cheating will go up
substantially.  Fortunately we have time to get this right.

Q:  What will be required to keep the common tests secure?

JF:  In some content areas, test items can be composed as they are
being administered, sometimes called "Linear on the Fly Testing." In
this approach there are no test items to steal in advance and the ones
administered will not be used again exactly as is. This is a powerful
anti-cheating approach where it can be applied.
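
The core idea of Linear on the Fly Testing can be sketched in a few lines: each examinee gets a freshly assembled fixed-length form drawn from a large item bank under a content blueprint, so there is no single pre-built booklet to steal. The item bank and blueprint below are invented for illustration; real LOFT engines also balance item difficulty, exposure rates, and psychometric equivalence across forms, which this toy version omits.

```python
import random

# Toy item bank (hypothetical): 20 items in each of three content areas.
ITEM_BANK = [
    {"id": f"{area}-{i}", "area": area}
    for area in ("algebra", "geometry", "statistics")
    for i in range(20)
]

def assemble_loft_form(bank, blueprint, seed):
    """Assemble a unique linear form on the fly: draw the required
    number of items per content area from the bank, per examinee."""
    rng = random.Random(seed)  # per-examinee randomization
    form = []
    for area, count in blueprint.items():
        pool = [item for item in bank if item["area"] == area]
        form.extend(rng.sample(pool, count))
    rng.shuffle(form)  # interleave content areas
    return form

blueprint = {"algebra": 5, "geometry": 5, "statistics": 5}
examinee_1 = assemble_loft_form(ITEM_BANK, blueprint, seed=1)
examinee_2 = assemble_loft_form(ITEM_BANK, blueprint, seed=2)
```

Every form satisfies the same blueprint, but different examinees almost never see the same set of items in the same order -- which is why stealing any one administered form buys a "test pirate" very little.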

A 35-year ETS veteran who's now Executive Vice President
of Caveon Test Security, Fremer is an
increasingly familiar name in news coverage of the testing industry --
including most recently the hotly disputed instances of cheating in the
Atlanta Public Schools, in which Governor Sonny Perdue and others
believe that cheating was much more widespread than Caveon and others
have found it to be (see AJC's Get Schooled for some background).  Caveon works with 15 states and the DoD.

Cross-posted from TWIE


Leave a comment
  • I think that Alexander's post was very powerful. If one goes to the CPS website
    and opens the pull-down for "school PM," you will find something called the CPS PM white paper and documents for what are called Instructional Leadership Teams (ILTs).

    One very important reference in the CPS PM strategy is a book titled "Driven by Data" by Paul Bambrick-Santoyo. This book is very hostile to test security at the level of interim assessments and calls for open interim tests that can be directly taught to. I recommend blog readers become familiar with this book.

    Here is one of my favorite quotes from Driven by Data: "Many companies that sell interim assessments do not allow schools to see the product - either before or after administration - because they want to keep results "valid" [quote marks by author]. It cannot be said more strongly; if standards are meaningless until you define how to assess them, then curriculum scope and sequences lack a road map for rigor without a transparent assessment. Transparent assessments allow teachers to plan more effectively and increase rigor across schools. The goal is not to compare schools (that's the purpose of summative state tests!) - it is to guide instruction at the classroom level. This is not possible with transparent assessments."

    I found this to be very interesting in relation to Alexander's informative post.

    Rod Estvan

  • I had a typing error. The end of the quote from Driven by Data should have read "This is not possible with out transparent assessments."


  • What Chicago Teacher stated is in a way what "Driven by Data" advocates, although I do not think the author would agree with the idea that students be "drilled" on the questions. CPS does not, in its performance management documents, indicate that it agrees with Paul Bambrick-Santoyo's position that the actual test questions on interim assessments be revealed to teachers prior to the administration of the assessment as part of instructional planning. But CPS also does not say it disagrees with that idea, and it does publicly promote the book. (See page 14 of the CPS Performance Management white paper titled "Performance Management: A Focus on Excellence," August 2010.)

    The book Driven by Data does have some value in that it provides teachers with a blueprint for examining the questions students miss on tests and, when a high percentage of students miss the same question, considering its relation to instruction and the need for re-teaching. But the book simply does not admit that no one question or group of questions equates to all particular aspects of the various grade-level standards, particularly now the common core standards.

    Here is an example of what I mean. The common core standards for grade 2 include this one: "Describe the overall structure of a story, including describing how the beginning introduces the
    story and the ending concludes the action." Students may not be able to correctly answer questions attempting to test for this standard when reading one short story, but can on another short story. In fact, students may get this standard for certain types of stories written at their reading level and not for others. It also depends on how the "overall structure of the story" is actually tested and what is expected of a 2nd grade student. Assuring mastery of this standard is no simple thing.

    Mr. Bambrick-Santoyo states very clearly his thinking writing: "Assessments are not the end of the teaching and learning process; they're the starting point." I am not sure this is right in the least for every student and for every standard. I would like very much to see how the Scantron Performance Series measures the common core standard I have just given for second grade and the reading passage they give to students to be tested on.

    Rod Estvan

  • Well, if someone is telling them that Scantron means nothing, he or she is setting the school up for failure. Wow. Why would anyone say that in this critical time of accountability? I tell my students that EVERY assessment or test they take is important to them and that their scores tell the school what course of action it will take on their behalf. That usually fires them up because they want to prove how smart they are.
    I can't stand it when people tell them assessments are meaningless. I mean, seriously, why would anyone do that? On the high school level, kids are often told the Work Keys are not important. Sometimes, people never cease to amaze me.
