It’s Not About the Score; It’s the Cut-Score

***As this information continues to change and develop, this post will be updated.

There has been a lot of news about cut scores in the last few days, but the only thing people really seem to understand is that they don't really understand cut scores. So, let's break it down.

What is a “cut score”?
This one is pretty straightforward: it is a cut-off point. If you took a number line and divided it into ranges (which scores count as advanced, which as proficient, and so on), those dividing points are the "cuts."
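To make that concrete, here is a minimal sketch, in Python, of how cut scores turn a score into a performance label. The cut-point numbers are made up for illustration only; they are not Tennessee's actual values:

```python
from bisect import bisect_right

# Hypothetical cut scores on the scale-score number line (NOT real TCAP values).
CUTS = [700, 730, 760]                      # the dividing points, i.e., the "cuts"
LEVELS = ["below basic", "basic", "proficient", "advanced"]

def performance_level(scale_score):
    """Find which range of the number line a scale score falls into."""
    return LEVELS[bisect_right(CUTS, scale_score)]

print(performance_level(745))  # "proficient" under these made-up cuts
```

Notice that nothing about a student's answers has to change for the label to change; move a cut point and the same score lands in a different category.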

What is a “quick score”?
A quick score is a temporary score. It is delivered quickly after the tests have been administered, to give students and teachers a number that can be used for final grades and/or placement in the next year's courses.

If these tests are as carefully created and reliable as they claim to be, why can’t they deliver the real score, rather than a “quick score”?
That’s a good question. Keep asking it. (I’m pretty sure you already know the answer.) If anyone gives you an answer that doesn’t involve manipulating test data, question that, too.

What is an “equated” score?
This is just taking a score and making it comparable to another score. For example, we might have one test that produces a score of 1 to 5, and a similar test that produces a score of 0% to 100%. Comparing those, a score of 4 would mean two very different things. To make a comparison, we might “equate” (translate) the 4 to 80%.

That sounds easier than it is. When creating an "equated" score, additional factors are taken into consideration beyond just the number. One of those tests may have questions that are far more difficult than the other, for example. So, to make an equated score, the difficulty of each test is also considered, along with the number of questions, the kinds of questions, and other factors, to arrive at what a score on one test would "equate" to on the other.
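Here is a minimal sketch of just the "translate the number" step, which is the easy part; everything above about difficulty and question types is what real equating adds on top of it:

```python
def naive_percent(score, max_score):
    """Translate a raw score into a simple percentage of the maximum."""
    return 100 * score / max_score

# The "translate the 4 to 80%" example from above:
print(naive_percent(4, 5))  # 80.0
```

A real equating procedure starts from something like this and then adjusts for how hard each test actually was, so the same percentage on two tests of different difficulty does not get treated as the same performance.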

What are “pre-equated” and “post-equated” scores?
“Pre-equated” scores are scores that are equated BEFORE students are tested.

“Post-equated” scores are equated AFTER students are tested.

Why does it matter whether cut scores are equated before or after testing?
Now we are in a tricky area. It looks simple, but it is not.

(If you would like to read a paper on the topic, "A Comparison of Pre-Equating and Post-Equating Using Large-Scale Assessment Data" explains things well.)

Most organizations use PRE-equated scores, because they are better able to justify where the cut-offs occur. If a test creator has carefully considered their questions, the course content those questions map to, and the difficulty of the questions, they should be able to reasonably set the cut-off points where students should be expected to perform if they have "basic" knowledge, "proficient" knowledge, or "advanced" knowledge.

Since the Tennessee Department of Education uses POST-equated scores, it can set the cut scores willy-nilly, to whatever it wants, AFTER the tests have been taken.

This changes EVERYTHING your students and teachers have been told. They were given a curriculum to use and scores to work toward. Their "quick scores" showed that they mastered the material. But the Department of Education gets to go back and CHANGE ITS MIND about the cut scores.

Can you imagine being in a class where the teacher gives you a grade, and then over the summer sends you a note saying he recalculated the grades and your grade is completely different?! That is what the Tennessee Department of Education can do by determining cut scores AFTER the tests have been taken and scored.

A local reporter told me that she is trying to do a story on this, but is having trouble getting information from the Department of Education!  How is that acceptable? Why aren’t parents on the doorstep of the TN DOE?

 

The Tennessee Education Association has also been trying to get answers and posted this information on May 27, 2015, on their Facebook page:

TCAP Update:
Following the state's conference call, we now know that the state did change its methodology for calculating quick scores for students in grades 3-8. It is now using the cube root method the state has been using for high school EOCs. This change in methodology resulted in apparent grade inflation, leading parents and educators to believe students had performed better than in previous years. The change resulted in about a 4-point increase in cut scores from the method used in 2014.

Please visit the link below for documents provided by the state in its attempt to explain these changes. TEA still has many, many questions about the reliability of both the quick and cut scores, why these changes were made and how proficiency levels are determined. We will continue our efforts to get more answers from the state and insist that they #showthemath.

State Documentation of TCAP scores

Below are some of the official answers TEA has been able to get so far. Please note that TEA has been doing its due diligence on this issue, and there has been more information each time I have looked at their page. Please use the link above to follow their findings.

Quick Score/Proficiency level correlation:
We have not changed the mark or expectation for student proficiency on TCAP; there have been no changes to cut scores for proficiency levels. I’d also like to clarify that quick scores are no longer tied to TCAP performance levels. For example, a quick score of 85 is not equivalent to the cut score for proficient. We compare student performance each year based on the scale scores.  The scale scores determine the cut points for performance levels (i.e. below basic, basic, proficient, advanced). We always produce equating tables in the fall that clearly define the raw score equivalent cut points based on the scale score. This is designed to help teachers know what to expect early in the school year. The equating tables for 3-8 achievement can be found here.  The equating tables for EOC can be found here.

Student performance expectations for the proficiency threshold have not changed. They are exactly the same as last year, and these expectations are exactly the same as the equating tables which we published online in the fall for teachers to access. Quick scores do not determine proficiency levels. I have attached a FAQ – A Guide to Understanding Quick Scores – that we created to help explain the purpose for quick scores. In addition, please see the attached TCAP Scoring Flow Chart that shows how and where quick scores fall into the scoring process. It is clear from the flow chart that quick scores have no relationship to performance levels. Quick scores are used only to calculate a 100-point grading scale. There are various methodologies that can be used to create a 100-point grading scale from the raw score, and, this year, we used the cube root method for grades 3-8, as we have done for EOCs over the past several years.

Quick Score Calculation:
What was the rationale for making this change to the cube root method? Is it possible to see the formula used for this calculation?

The rationale for making the change was to create a consistent methodology for generating quick scores and one that was not dependent upon TCAP performance levels like the interval scaling method used in 3-8 achievement since 2012. We updated the methodology to be consistent with what we are doing for End of Course exams.  We will be engaging directors of schools in more conversations about quick scores for 2015-16.

I have attached (linked above) a memo from April 2012, TCAP Quick Score Conversion Guidance, which includes the interval scaling methodology for generating quick scores in grades 3-8.  I have also attached the Cube Root Quick Score Calculation guidance that details the cube root method used this year for all grades.

Proficiency Levels:
What are the proficiency level ranges for Below Basic, Basic, Proficient, and Advanced for the various assessments? How do these ranges compare to previous years?

The equating tables for 3-8 achievement and EOCs are posted online, and they show the scale score ranges for each performance level.  These scale score ranges are the same for 2015 as they were in 2014.
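For readers who want to see what the cube root method mentioned in those answers actually does to a score, here is a rough sketch. The exact formula is in the state's Cube Root Quick Score Calculation guidance linked above; the version below is my own reconstruction of the general approach, with illustrative numbers:

```python
def cube_root_quick_score(raw_score, max_score):
    """Convert a raw score to a 100-point "quick score" with a cube-root curve.
    (My reconstruction of the general approach, not the state's exact formula.)"""
    fraction = raw_score / max_score
    return round(100 * fraction ** (1 / 3))

# A straight percentage would call 40 out of 50 an 80; the cube-root curve lifts it.
print(cube_root_quick_score(40, 50))  # 93
```

A curve like that pulls most scores upward compared to a straight percentage, which is consistent with the "apparent grade inflation" TEA describes in its update above.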

 

The fact that teachers, districts, parents, and communities are having difficulty getting timely and adequate answers from the State Department of Education should be very concerning. It certainly makes things look fishy.

Next Up: High School EOC Cut Scores, Predicted Scores, and Misuse in Teacher Data
