Value-Added:  Great Expectations

Growth

Achievement is your height.  Growth is when you need new pants.


Achievement tells where you are.  Growth tells how far you've come.  It's important to have both.  With growth metrics, we can see if struggling students are catching up.  Growth metrics also serve as a warning sign when students are not learning enough in a year.


The simplest growth metric is a score difference.  For example, here's a simple formula based on the difference between this year's test score and last year's test score, assuming that each test covers a year's worth of material. 

((this year's score + max points) - last year's score) / max points = growth in years

We add the max points because this year's test covers the next year's worth of material: the student first closes out last year's test (max points minus last year's score), then earns this year's score on new material.  We divide by the total number of points to get growth in terms of years.  Generally this number is expected to be 1.  (We'll look at exceptions, such as tracked classes, in a later post.) 


For example, here's a student with growth surpassing our 1 year goal, assuming a max score of 1000:

((897 + 1000) - 847) / 1000 = 1.05 years
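Here's the same calculation as a small Python sketch (the function name and setup are just for illustration):

def growth_in_years(this_year, last_year, max_points):
    # Score difference converted to years, assuming each test covers one
    # year of material and this year's test sits a year beyond last year's.
    return (this_year + max_points - last_year) / max_points

# The student above: 847/1000 last year, 897/1000 this year
print(growth_in_years(897, 847, 1000))  # 1.05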

Notice that the amount of growth isn't shown on the report card at all.  Currently, only the growth difference metric (value-added) is given.  So, our first recommendation is to show the amount of growth on the report card.


One potential pitfall occurs if a student completely fails the tests.  Mathematically we have:

((0 + 1000) - 0) / 1000 = 1.0 years

but this can hardly be true.  In fact, any score below 25% (equivalent to guessing on multiple choice) is ineligible for computing a growth score.  That's not good...
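One way to encode that eligibility rule, purely as a sketch (the real rules certainly have more nuance than a single 25% cutoff):

def growth_if_eligible(this_year, last_year, max_points, guessing_floor=0.25):
    # Return growth in years, or None when either score falls below the
    # guessing floor and no growth score should be reported.
    if min(this_year, last_year) < guessing_floor * max_points:
        return None  # ineligible: the score is indistinguishable from guessing
    return (this_year + max_points - last_year) / max_points

print(growth_if_eligible(0, 0, 1000))      # None, even though the raw formula says 1.0
print(growth_if_eligible(897, 847, 1000))  # 1.05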


We should provide a growth score for everyone.  What can we do?  Offer below-grade-level testing.


The good news is, the testing world is moving to skill-based approaches that assess mastery, regardless of age or grade level.  Skill-based approaches avoid the zero-score problem, since each skill is mastered individually.  A grade-level equivalent is then computed based on the skills mastered. 
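The post doesn't pin down how that grade-level equivalent is computed, so here is one toy way it might work, assuming each skill is tagged with a grade and a grade counts once most of its skills are mastered:

def grade_level_equivalent(mastered, skills_by_grade, threshold=0.8):
    # Toy mapping: the highest consecutive grade in which the student has
    # mastered at least `threshold` of the skills.  Real skill-based systems
    # will use their own (likely more sophisticated) mapping.
    equivalent = 0
    for grade in sorted(skills_by_grade):
        skills = skills_by_grade[grade]
        if sum(s in mastered for s in skills) / len(skills) >= threshold:
            equivalent = grade
        else:
            break
    return equivalent

skills = {1: ["count", "add"], 2: ["subtract", "place value"],
          3: ["multiply", "divide"], 4: ["fractions", "decimals"]}
mastered = {"count", "add", "subtract", "place value", "multiply", "divide", "fractions"}
print(grade_level_equivalent(mastered, skills))  # 3: grade 4 is only half mastered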


So, there we have it!  Progress.  

 

We can easily add this growth metric to the report card, using the average growth for the district.  The growth distribution could also be shown, like we've done for achievement.  
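For instance, here's a minimal sketch of the district-level numbers that could back such a report card, assuming we have each student's growth in years (the growth bands are arbitrary choices for illustration):

def district_growth_summary(growth_values):
    # Average growth plus a simple distribution over growth bands.
    average = sum(growth_values) / len(growth_values)
    bands = {"below 0.5": 0, "0.5 to 1.0": 0, "1.0 to 1.5": 0, "1.5 and up": 0}
    for g in growth_values:
        if g < 0.5:
            bands["below 0.5"] += 1
        elif g < 1.0:
            bands["0.5 to 1.0"] += 1
        elif g < 1.5:
            bands["1.0 to 1.5"] += 1
        else:
            bands["1.5 and up"] += 1
    shares = {band: count / len(growth_values) for band, count in bands.items()}
    return average, shares

average, shares = district_growth_summary([1.05, 0.9, 1.2, 0.4, 1.0])
print(round(average, 2))  # 0.91
print(shares)             # fraction of the district's students in each band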

 

Next, we'll look at value-added, which is a measure of growth compared to expectations. 



Remember the 80's?  A time of big hair and bold clothes?  When the future was as bright as your socks?  The 80's brought us another invention - value-added metrics.  Dr. William Sanders, a statistician at the University of Tennessee, proposed a method for a problem that had been bothering him: measuring teacher effectiveness.


Now, we'll look at what goes into "teacher effectiveness" in a minute, because this topic is often treated superficially and that's neither fair nor productive.  But, for the moment, let's look at the motivation behind value-added metrics, which is to generate a fair evaluation despite differences in the student population. 


Imagine we have two teachers, Mrs. P. and Mrs. Q.  We'd like to know whom to nominate for our Teacher of the Year award.  According to Dr. Sanders in this Ohio Department of Education reference, the simplest way is to compare the average student growth (class average gain) for Mrs. P.'s class and Mrs. Q.'s class.


But - have you seen Mrs. Q.'s class?  What if Mrs. P. has a class of angels?  And Mrs. Q. ... not so much?  It's not really fair to blame a teacher for student characteristics that impact learning.  The problem is exacerbated at the district level, where conditions such as poverty can hamper student readiness.  


Dr. Sanders posited that the solution to unfair blame is to introduce growth expectations.  


Value-added "extra" growth = Actual growth - Expected growth


where any "extra" growth is assumed to be due to teacher or district quality.  The "extra" growth can be positive (presuming high-quality teaching), negative (presuming low-quality teaching), or zero (exactly as expected).

 

We've talked about computing actual growth.  How is expected growth determined?  In general, there are two approaches. 

 

The first approach is to set a growth expectation by explicitly estimating the impact of external factors on a growth goal.  For example, we might start with a growth goal of one year, subtract a bit for students receiving free or reduced-price lunch, and add a bit for students in honors classes.  This is called a "gain-score" model.
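A sketch of what such a gain-score expectation could look like; the adjustment sizes here are invented purely for illustration:

def expected_growth_gain_score(free_or_reduced_lunch=False, honors=False):
    # Start from a one-year goal and adjust for external factors.  A real
    # gain-score model would estimate these adjustments from data.
    expectation = 1.0
    if free_or_reduced_lunch:
        expectation -= 0.1
    if honors:
        expectation += 0.1
    return expectation

print(expected_growth_gain_score(free_or_reduced_lunch=True))  # 0.9 years expected
print(expected_growth_gain_score(honors=True))                 # 1.1 years expected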


The second approach (which the ODE currently uses) is to set expectations based on past student test performance.  In particular, Ohio uses the SAS EVAAS method.  Here, all of the student characteristics that might affect performance are assumed to be reflected in previous years' scores.  Roughly, the model assumes that an individual student will show similar growth each year, and ascribes any difference (good or bad) to the teacher.
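EVAAS itself is a much more elaborate longitudinal statistical model, but the core idea can be caricatured with a sketch like this, where a student's expectation is simply their own average growth in prior years:

def expected_growth_from_history(prior_growth_in_years):
    # Caricature of a prior-performance expectation: expect roughly the growth
    # the student has shown before.  This is NOT the actual EVAAS calculation,
    # just the intuition behind it.
    return sum(prior_growth_in_years) / len(prior_growth_in_years)

history = [0.9, 1.0, 0.95]                  # growth in each prior year
expected = expected_growth_from_history(history)
actual = 0.8
print(round(actual - expected, 2))          # -0.15: negative value-added this year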


Once you select an approach, the rest is details... and details... and details.  The goal of all of these details is to minimize the amount of unfair blame.


It's important to note that all value-added approaches rely on a set of assumptions.  That is, certain conditions must be true for the metric to report valid results.  Unfortunately, we've violated one of them: the tests must have enough stretch to accurately measure the achievement of both high and low achievers.  Sadly, the extremely low achievement scores in some districts are indistinguishable from multiple-choice guessing, so true achievement is not being measured.


It turns out there is a surprising SECOND criterion that's crucial for evaluating value-added approaches.  We'll talk about this in our next post.


Recommendations

1. Show the total amount of growth on the report card.

2. Allow below-grade-level testing (in addition to above-grade-level testing, which is already allowed).
