The 7 Biggest Complaints Teachers Have About Testing—and How to Fix Them

“Tests are my favorite part of school!” said no one ever.

Sponsored By Renaissance
[Image: Broken pencil on a standardized test answer sheet]

“Tests are my favorite part of school!” said no student ever. Or teacher. Or administrator. Or parent. Or guardian. Or anyone, really. And it’s not hard to see why: many aspects of testing in schools are genuinely frustrating for educators. At the same time, assessments are among the most powerful tools educators have. They can inform instruction, personalize learning, and accelerate student growth.

The good news is that much of what makes assessments so frustrating is totally fixable. Drawing on insights from the recently released Tests and Time: An Assessment Guide for Education Leaders, here are seven of the biggest complaints teachers have about testing in schools—and how to fix them:

1. Over-Testing

A typical student may take more than 100 district and state exams between kindergarten and high school graduation, and that’s before counting the tests built into many curriculum programs and the assessments created by classroom educators. In some test-heavy districts, students may take more than twenty standardized assessments in a single year. In a typical 180-day school year, that’s one test every nine school days!

Across the country, criticism of over-testing in schools has been rising, and many states now allow families to opt their children out of some or even all state exams. However, most tests are actually administered at the district, school, or classroom level, which means even “opted-out” students are still taking the majority of their assessments.

Suggested Fix: 

It’s important to remember that assessments serve an essential function in education: they give educators critical information about student learning so they can better tailor instruction, support, and intervention to student needs. While over-testing in schools can be a problem, testing students too little is also a real concern if educators aren’t getting the data they need.

One way to test students less without sacrificing key data insights is to look for assessments that serve multiple purposes. If your universal screener, diagnostic assessment, growth measure, progress monitoring tool, and standards mastery assessment are five different tests, you need to test students five separate times. If you have one assessment that serves all these purposes, then you can test students just once and still get all the data you need.

[Graphic: One assessment, multiple purposes]

2. Lost Instructional Time

Whenever you set aside time for assessment, the time available for instruction shrinks. If students take 100 exams across K–12 and each is an hour long, that’s 100 hours of testing, or about 15 school days of instruction lost (assuming roughly six and a half instructional hours per day). And because many tests are longer than one hour (some can run 3.5 hours or more), some students lose 40 or more school days to testing.

Suggested Fix:

While educators can’t control the length of their state tests, they can control the length of district- and school-selected assessments, such as universal screeners and progress monitors. Thanks to computer-adaptive testing and other advances in learning science, there are now assessments that provide valid, reliable data about student learning in 20 minutes or less. One such assessment has already saved educators 7.7 million instructional hours this year (and counting).

3. Delayed Results

As if spending days taking assessments isn’t bad enough, sometimes educators and their students have to spend days, weeks, or even months waiting for the results. If one of the main goals of assessment is to guide and inform instruction, then any delay is already too long.

Suggested Fix:

Again, while educators can’t control how quickly they get results from state tests, they can look for options that provide immediate results when evaluating district- and school-selected assessments.

And those delayed state test results? There’s a work-around for that, too. Certain interim assessments can now accurately predict student performance (in some cases, within 3 to 5 percent) on state tests months in advance. Imagine knowing how your students will perform on end-of-year tests at the beginning of the school year!

[Illustration: Bored students]

4. Bored and Frustrated Students

Although students rarely enjoy taking tests, the experience can be particularly unpleasant for learners at the upper and lower ends of achievement. High-achieving students can quickly grow bored with overly easy questions, while low-achieving students may feel stressed, anxious, or intimidated by overly challenging items. For both groups, negative emotions and distractions can even produce artificially low scores that don’t reflect students’ true ability levels.

Suggested Fix:

Computer-adaptive tests (CATs) come to the rescue again! Not only does a CAT typically need about half as many items as a traditional fixed-form assessment to be just as reliable and valid, it can also provide more precise measures for low- and high-achieving students than traditional fixed-form testing. That’s because a CAT mimics a wise examiner, automatically adjusting the difficulty of questions based on each student’s responses. Students generally answer fewer questions, and the questions they do see are tailored to their specific skill level.

[Graph: CAT tailors item difficulty to match student level]
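If you’re curious how the adaptive logic works under the hood, here’s a minimal sketch in Python. The item bank, the simple up-or-down difficulty rule, and the simulated student are all hypothetical simplifications; real CAT engines rely on far more sophisticated, psychometrically validated ability estimation.

```python
# Minimal sketch of adaptive item selection (hypothetical item bank and a simple
# up/down rule; real CAT engines use psychometric models to estimate ability).
from dataclasses import dataclass


@dataclass(frozen=True)
class Item:
    question: str
    difficulty: int  # 1 (easiest) through 10 (hardest)


def run_adaptive_test(item_bank, answer_fn, num_items=10, start_difficulty=5):
    """Serve items one at a time, moving difficulty up after a correct answer
    and down after an incorrect one."""
    difficulty = start_difficulty
    asked = []  # (item, was_correct) pairs
    for _ in range(num_items):
        # Choose an unused item whose difficulty is closest to the current estimate.
        used = {item for item, _ in asked}
        candidates = [item for item in item_bank if item not in used]
        if not candidates:
            break
        item = min(candidates, key=lambda i: abs(i.difficulty - difficulty))
        correct = answer_fn(item)
        asked.append((item, correct))
        # Adjust the difficulty estimate, keeping it within the 1-10 range.
        difficulty = min(10, difficulty + 1) if correct else max(1, difficulty - 1)
    return difficulty, asked


# Toy example: a bank of 30 items and a simulated student who answers
# correctly whenever the item's difficulty is 6 or below.
bank = [Item(f"Question {d}-{n}", d) for d in range(1, 11) for n in range(3)]
estimate, history = run_adaptive_test(bank, lambda item: item.difficulty <= 6)
print(f"Estimated level after {len(history)} items: {estimate}")
```

The key idea is simply that each response updates the estimate of what the student can do, so the next question lands near the edge of their ability instead of far above or below it.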

5. Lack of Actionable Data

“What’s next?” That’s the question many educators find themselves asking when reviewing student assessment data.

Assessment is sometimes defined as “the process of collecting information (data) for the purpose of making decisions for or about individuals.” In other words, administering a test to gather data is only the first step of a two-step process. The second step is using that data to enhance teaching and learning, and this part can be the hardest, especially if your assessment doesn’t offer clear action steps.

Suggested Fix:

The key is something called a learning progression. In simplified terms, a learning progression lays out skills in the order they are best taught.

When an assessment is linked to a learning progression, a student’s test score places them at a specific point within that progression. As a result, educators can see which skills students have already learned and which ones they’re ready to learn next. Some learning progressions even provide instructional resources linked to specific skills, so educators can see what to teach and how to teach it, all within their assessment software.

"Learning Progressions Show the Way Forward" illustration of Student climbing on blocks - testing in schools

6. Misalignment With State Standards

Hopefully, your state test is aligned to your state standards, but what about your district or school assessments? You may be surprised by the answer.

Many organizations claim their assessments are “aligned” to state standards but then provide the exact same test in all 50 states. This means the assessment may present skills in the wrong order or at the wrong grade level for your state, skip skills your state requires, and add ones it doesn’t. Teachers won’t be able to fully trust the information the assessment provides, and there’s nothing worse than an untrustworthy test.

Suggested Fix:

Make sure your district or school assessment is truly tailored to your needs, ideally with a learning progression built specifically for your state. For example, even if you’re in a state that uses the Common Core or standards based on it, a “Common Core” assessment may still be misaligned if your state has added to or otherwise altered those standards. When evaluating an assessment, be critical of alignment and correlation documents; vendors can produce them even when all they have is a general list of skills used across multiple states. Don’t hesitate to ask for more proof: the domains, headings, and language of your state’s standards should be directly reflected in the assessment itself and in its learning progression.

7. Disparate Reporting and Inconsistent Data

Do you have inconsistent data across your district or school because different tests are used in different grades or with different student groups? Do you shudder to think of how much time you’ve spent trying to cobble together data from disparate sources? Is data lost when students change buildings, schools, or even districts?

Suggested Fix:

You won’t have inconsistent, disparate, or disconnected data if you use a single assessment solution across all grades (pre-K–12) and for students of all ability levels, including English Language Learners, students in intervention, and gifted and talented learners. Some solutions can even aggregate data from multiple sources (such as interim assessments, state summative tests, and student practice programs) into a single overview of student mastery.

Get the Most Out of Your Assessment Data

Ready for more assessment tips? Click to save and print your free assessment guide for school leaders!

From the length and frequency of testing to the ability of assessments to track student learning over time, Tests and Time: An Assessment Guide for Education Leaders explores the different interactions of tests and time—and what they mean for today’s educators. You’ll get essential insights into finding and using your best assessment to accelerate student learning.

[Image: Tests and Time book cover]