William Sanders VAM Testimony 2006 and William Sanders have/had a generic relationship

Testified William Sanders VAM Testimony 2006
Testified William Sanders
Start Date 2006-00-00
Notes Statement of William L. Sanders Before the Committee on Education and the Workforce Hearing on “No Child Left Behind: Can Growth Models Ensure Improved Education for All Students,” July 27, 2006

Thank you, Mr. Chairman. My name is William L. Sanders; I am presently Senior Manager, Value-Added Research and Assessment, SAS Institute, Inc. Additionally, I hold the honorary title of Senior Research Fellow with the University of North Carolina. Previously, I was Professor and Director of the Value-Added Research and Assessment Center at the University of Tennessee.

My letter of invitation asked me to comment on my experience with value-added and growth models, how growth or value-added models might fit into state-developed accountability systems under No Child Left Behind (NCLB), the benefits and challenges of implementation, and some of our major research findings. In my remarks I will address each of these requests. However, the overall intent of my remarks is to make the case to Congress that the addition of a properly constructed growth component to the adequate yearly progress (AYP) measure will make NCLB fairer to schools and will provide positive benefits to a greater percentage of their student populations.

My experiences. I am a statistician who fortuitously got involved in educational research 24 years ago. At that time in Tennessee, under the leadership of Governor Lamar Alexander, there was considerable discussion of how to improve the effectiveness of public schooling for that state’s students. These discussions inevitably led to the question of how to quantitatively measure the impact of schooling on measures of academic performance, especially measures of academic growth attributable to various schooling entities. In fact, some of the same quantitative issues now being raised in the growth model discussions were being raised in that era.
After learning of these issues, and being knowledgeable of statistical mixed-model theory and methodology, I felt that there existed solutions to many of the pertinent questions being cited as impediments to using student test data to provide quantitative, reliable, robust measures of schooling influences on the rate of academic progress of student populations. Using this methodology, my colleagues and I built the quantitative system on which the Tennessee Value-Added Assessment System is based. Perhaps I was not the first, but I was one of the first, to apply the term “value-added assessment” to the measurement of educational outcomes.

Value-added assessment provides measures of the influence that educational entities (i.e., districts, schools, and classrooms) have on the rate of student academic progress. All value-added procedures use longitudinal data (i.e., they follow the progress of individual students over grades) to obtain measures of these influences. These measures provide information about the effectiveness of schools or districts in providing the opportunity for academic progress for all students. Tennessee has had value-added measures as part of its accountability system since 1993.

However, not all value-added modeling efforts give equivalent results. Some of the more simplistic value-added approaches should be rejected because of the serious biases and/or unreliable estimates they produce. If value-added models are to be used as part of an accountability system, then there are some minimal criteria that must be required. To dampen the error of measurement associated with a single test score for an individual student, all test data over grades and subjects for each individual student must be used in the analysis. However, not all students have the same quantity of test data: low-scoring students disproportionately have more missing longitudinal data than higher-scoring students.
Thus, any value-added modeling approach must be sophisticated enough to provide unbiased, reliable measures using all data for each student, no matter how sparse or complete. Simple posttest-minus-pretest averages and simple regression approaches, which use only the previous year’s score as a predictor variable, are examples of value-added attempts that should not be used.

Next, I would like to make a distinction between the use of value-added models (like the Tennessee Value-Added Assessment System) and growth, or projection, models to be used as part of NCLB. In accountability systems, value-added models use longitudinal data to provide a summative measure of the aggregate progress of all students attending a school. The projection (growth) model recently approved for Tennessee to augment AYP uses longitudinal data to ascertain whether a student is on a trajectory to reach a proficiency standard three years in the future. The same data structure is used for two different purposes. However, the same statistical issues (fractured student records, using all data for each student, etc.) are just as important and must be accommodated in the projection (growth) models as in the value-added models. Again, some of the more simplistic approaches to the measurement of growth should not be used because of their innate biases and greater unreliability. On this topic, I concur with the U.S. Department of Education’s peer review team’s comment that all of each student’s prior data should be used, not just two data points.

Why should NCLB be augmented to allow projection (growth) models? Students enter a school with a wide range of achievement levels. Under existing rules, if a school has an entering population whose achievement level is extremely low, then regardless of the magnitude of progress of these students, it is most difficult for the school to make its AYP targets.
For those schools that are eliciting superior academic growth for their student populations, this additional measure can clearly differentiate them and enable them to be recognized proudly for their effectiveness.

Even with this augmentation of AYP, which two states have now been approved to pilot by the Department of Education, there is another consideration that Congress should weigh prior to the reauthorization of NCLB. Under the existing AYP rules, schools can be meeting their AYP targets, yet within those schools the progress rates of students who are currently proficient can be so modest that their likelihood of not meeting proficiency in the future is greatly increased. Unfortunately, it appears that in too many schools in jeopardy of not meeting their AYP targets, more instructional effort is focused on the “bubble kids” (i.e., those kids who are perceived to be near proficiency), with less effort extended to other students. The mistaken belief is that some students are so far behind that, regardless of effort, they will not reach proficiency this year, while other students will meet the proficiency requirements without much curricular attention. The focus on “bubble” students leaves two groups of students vulnerable: those furthest behind academically and those barely proficient.

To provide a disincentive for this practice, I recommend that Congress give serious consideration to replacing the existing “safe harbor” component of the AYP rules with a projection component, so that schools receive credit for keeping all students on trajectories to be proficient in the future. This should tend to alleviate the problem of not focusing appropriate instructional effort on all students. All states will now have annual testing in place to allow the projection approach to be applied. At a minimum, states should be allowed to substitute this better approach for the existing “safe harbor” rules.
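The projection idea described in the testimony — fitting a trend to all of a student’s available prior scores, not just the last two, and asking whether the trajectory reaches a proficiency cut score three years out — can be sketched in a few lines. This is an illustration only: the actual TVAAS/EVAAS projection models are far richer multivariate mixed models, and the student scores, grades, and cut score below are hypothetical.

```python
# Minimal sketch of a trajectory projection: least-squares linear trend
# over ALL observed (grade, score) pairs for one student, projected
# three years past the last observed grade and compared to a cut score.
# Hypothetical data; not the TVAAS/EVAAS methodology itself.

def project_score(grades, scores, years_ahead=3):
    """Fit y = a + b*grade by least squares and project years_ahead
    beyond the last observed grade. Missing years simply appear as
    absent (grade, score) pairs; every available point is used."""
    n = len(scores)
    mean_g = sum(grades) / n
    mean_s = sum(scores) / n
    slope = (sum((g - mean_g) * (s - mean_s) for g, s in zip(grades, scores))
             / sum((g - mean_g) ** 2 for g in grades))
    intercept = mean_s - slope * mean_g
    return intercept + slope * (max(grades) + years_ahead)

# Hypothetical student with a fractured record (no grade-5 score):
grades = [3, 4, 6, 7]
scores = [405, 418, 447, 462]   # scale scores (made up)
cut = 500                       # hypothetical proficiency cut score

projected = project_score(grades, scores)   # score expected at grade 10
on_track = projected >= cut
```

Using every available point dampens the influence of any single noisy score; a two-point version of the same calculation would swing with the measurement error of whichever two tests happened to be used, which is the objection the testimony raises against simple pretest/posttest approaches.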
In summary, NCLB is beginning to yield many positive results in raising the academic achievement of a large segment of the nation’s student population. The suggested tweak of adding a projection (growth) component would be an improvement. Our research, accumulated over the past 24 years, has certainly documented that effective schooling will trump socio-economic influences if it is sustained over time for each student. The data resulting from the implementation of NCLB, if wisely used, can lay the information base for ensuring that all students have the opportunity to learn, consistent with their achievement levels.