

Spring 2023

Talent Newsletter

In this issue of Talent: Director's Message: Making Assessments Better | Igniting Spatial Talent Development | Assessing the State of Assessment

Director's Message: Making Assessments Better  

The Center for Talent Development was founded 41 years ago as a Talent Search program, an approach conceived and created by Dr. Julian Stanley at Johns Hopkins University. A key tenet of Talent Search is assessment, specifically above-grade-level assessment, which is used to discover students’ strengths and determine the most effective programming to develop their talents.  

"Paula Olszewski-Kubilius"Good assessments can provide insights about academic achievement and instructional readiness and shed some light on potential and realized abilities related to school learning and cognitive development. From a talent development perspective, assessment should be aimed at identifying strengths and determining where needs exist for enrichment, acceleration, or supplemental programming.

But there is still much to learn about how to design good assessments—across domains, in ways that promote equity, and in light of the purpose(s) they serve. School-based assessments are often designed to measure academic achievement on grade-level content or benchmarks, and they can be helpful starting points. Unfortunately, these assessments are frequently used inappropriately and ineffectively as endpoints, and even worse as gatekeepers to programs and services. In this edition of Talent, we explore efforts to “do assessment better”—by using tools differently, using new and better tools, and improving data literacy to understand what assessments can and cannot tell us.

Dr. Joni Lakin, co-creator of the CogAT, talks with us about the work she is doing on a new spatial abilities assessment for the CogAT that she believes will help educators find and serve students with advanced spatial reasoning abilities. Spatial abilities are employed in a wide variety of domains, including engineering, arts, and mathematics, but they have been overlooked by schools in terms of both talent identification and development.

CTD’s Assessment Coordinator Melissa Hinshaw discusses the most widely used assessments in schools, their strengths and limitations, and the importance of data literacy and assessment culture. How well educators administer assessments, how well they prepare students to engage with them, and what they know about the data all have an impact on the usefulness and reliability of the results. If educators want to discover students’ talents and provide the most effective programming for their development, assessment has to be done differently and more effectively.

Good assessment leads to optimally matched instruction, resulting in the talent development we are all hoping to see blossom. For all the challenges we face in the current assessment landscape, there are very real opportunities and successes, as evidenced by over 50 years of Talent Search, and presented by the experts in this edition of Talent.

Paula Olszewski-Kubilius

Igniting Spatial Talent Development

When you’re packing a car trunk, building a bookshelf with visual-only directions (think IKEA), solving geometric equations, or trying to understand geologic land formations, you’re using a cognitive capability called spatial reasoning.

"Joni Lakin"Educational researchers at Northwestern, including the Center for Talent Development, and elsewhere have been investigating how spatial reasoning could be—and why it should be—better incorporated into testing regimes and curricula in K-12 education. The reasons include the recognition that spatial reasoning is involved in many domains, particularly STEM fields and making students aware of their capabilities in this area would improve counseling towards professions and careers that are a good fit. Including spatial ability in assessment protocols could additionally result in the identification of the talents of a wider array of students.

Joni Lakin, a professor of educational studies at The University of Alabama who’s been involved in producing the widely used Cognitive Abilities Test (CogAT) and has administered it both in the general education classroom and for gifted and talented identification, says she would like to see that assessment and others broadened beyond the three traditional domains of verbal, quantitative, and nonverbal figural reasoning.

To that end, Dr. Lakin is leading a team of researchers—which also includes Northwestern’s David Uttal, Paula Olszewski-Kubilius, and Susan Richert Corwith—that received a four-year grant in 2021 from the Institute of Education Sciences to develop and validate the CogAT Spatial Assessment, aimed at students in grades 2-8. They are doing so in collaboration with Riverside Insights, the Itasca, Illinois-based publisher of CogAT.

[Audio: Joni Lakin On Spatial Reasoning]

“One of the limitations of the current test is that it doesn’t measure spatial thinking skills,” she says. “Spatial reasoning is that visual, mental image; that ability to create mental images and either imagine objects rotating in space, or imagining how different objects might fit together. … If you struggle with putting together furniture or packing the trunk of a car, that’s spatial visualization skills you’re reasoning with.”

In addition to geometry, geology and engineering, researchers have found that even early math skills rely on spatial reasoning. “Their sense of the number line at the early grade levels, and even pre-K, is found to be a spatial challenge,” Dr. Lakin says. “So students who struggle with spatial thinking may also struggle with early mathematics, and that can kind of have a trickle-down effect throughout their math experience.” Beyond that, she says, visual arts call on spatial skills, as does—perhaps paradoxically—computer science. “We’re finding more and more opportunities where spatial thinking might be part of these other areas of skill.”

While the existing CogAT test can be administered both online and on paper, the Spatial Assessment will be digital-only, which Dr. Lakin says will allow for more interactive question formats. “One of the designs that we have is based on what engineers call orthographic versus isometric views. It’s different ways that you can represent the same objects in two and three dimensions,” she says. “It probably looks like someone playing Minecraft. It has some elements where it’s interactive for the student, rather than just the multiple-choice selection.”

[Audio: Spatial Skills And Effects On Learning]

The new spatial portion of the CogAT will help tease out whether students have strengths or weaknesses in spatial reasoning, and in which types. This, in turn, will give teachers different ways to engage students in the classroom—either by building out spatial skills so they’re less of an impediment for students who find them challenging, or by encouraging students with stronger skills to develop those talents in domains like engineering design.

Dr. Lakin has been working with Dr. Uttal and other researchers to understand how and why students with strengths either only or primarily in the spatial realm tend to get overlooked in K-12 education. Their four-year grant is following 2,300 students and 30 teachers in urban, rural, and suburban districts—in Illinois, Alabama, and Arkansas—and their work thus far has included a large-scale validity study to develop norms and technical guides, as well as cognitive lab and eye-tracking studies. Still to come: examining test reliability and sensitivity, and the final data analysis and dissemination of findings, scheduled to occur in 2025.

“We have this pool of students that we’ve identified, and our data has shown us that there might be up to 3 million U.S. students whose talent lies in spatial reasoning—but not necessarily in verbal or quantitative,” she says. “So we’re really interested in who those 3 million students are, and how their talents might be developed.” This work was published with Dr. Jon Wai of the University of Arkansas, who is also a collaborator on the grant.

“We’ve looked at the demographic characteristics, and we do think that there’s evidence that including spatial reasoning skills [in assessments] might help us identify students from historically underrepresented groups, to develop their talents and to integrate them into some of these talent development systems." –Dr. Joni Lakin 

The group’s research has highlighted that while such students are generally successful, they tend to struggle more in school and might need either more support, or different kinds of support, than other gifted and talented students, Dr. Lakin says. Using traditional testing measures like the SAT or ACT that don’t include spatial reasoning may identify those with verbal or quantitative skills, but not the distinct group of spatially talented students.

“In addition, we’ve looked at the demographic characteristics, and we do think that there’s evidence that including spatial reasoning skills [in assessments] might help us identify students from historically underrepresented groups, to develop their talents and to integrate them into some of these talent development systems, like CTD,” she says. It is important to consider the many ways that people can have and exhibit talent. Deliberately looking for spatial talent “may diversify our pool of students who are receiving talent development services.”

The researchers have been working with CTD to help flesh out what the in-classroom response would look like, once the results on the new CogAT for spatial reasoning start rolling out. While teacher resources and guides exist for how to use the verbal, quantitative and nonverbal CogAT scores to design curricula, the interventions might look different on the spatial side, Dr. Lakin says.

For younger children, developing spatial thinking might involve simple exercises for parents or early childhood educators to administer, like looking at puzzle pieces and talking about their respective shapes. For older students, it could be an assignment to draw and represent objects in two dimensions.

[Audio: CogAT® Spatial Assessment And Talent Identification]

“There are opportunities to have projects where they’re drawing schematics, or creating designs that require some kind of visual representation,” Dr. Lakin says. “There’s lots of opportunities for students to demonstrate those skills and to use that to support their knowledge. Like in social studies, there are graphs and figures that represent historical events—and that could be a way of the student engaging differently with social studies topics.”

Such projects will help identify these students who sometimes fall through the cracks. “They may not have the opportunities, especially in high school, to develop those technical skills that might allow them to be successful in STEM and other careers,” she says. “Having these understandings of what their needs are and how to engage them in the curriculum will help us to reach these students. And the preliminary evidence suggests that these students may be really creative and able to innovate in some of these important areas like STEM. … And that’s really the goal—to broaden our perspective of who we’re considering to be talented, and then, on the education side, providing them with appropriate services.”

Assessing the State of Assessment

Talent contributing editor Ed Finkel spoke with Melissa Hinshaw, assessment coordinator at CTD, about different ways to assess students, particularly advanced learners; how such measurements have evolved over the years; and the best ways to use data for instruction, among other topics.

What follows is a transcript of their Q-and-A, edited for space and clarity.

EF: How should we assess students’ cognitive capabilities and general knowledge? How does that change regarding advanced learners?

MH: There are lots of aspects involved in doing assessment well. There are many well-done achievement assessments—they’re carefully written, they’re well-aligned to standards. They have robust norm-sets that allow us to compare students to same-age and same-grade peers, and they allow us to measure growth over time. How well we, as educators, administer them, and how well we prepare our students to engage with them, is a different story. Oftentimes, I see datasets that do not tell the complete story of achievement, or student growth, because of poor test conditions. That’s a phenomenon that I like to call a toxic assessment culture. The way assessment is talked about in school systems has created anxiety and a toxicity around this process.

"Melissa Hinshaw"When we talk about measuring cognitive capabilities, that's a different construct, based more on general learning. These tests allow students to demonstrate their problem-solving and reasoning by finding sequences and relationships in domains like verbal, nonverbal, and quantitative. They can provide information about how students engage with academic content. They’re much less ubiquitous in schools, though, and schools most often use them as part of an identification process for specialized services. They are a valuable companion to academic assessments, though, because they can help teachers maximize learning in school.

When we talk about specific attention to advanced learners, these types of tests are used to build a needs case for advanced academic placements or enrichment. And a synthesis of data, meaning the cognitive data along with achievement data—along with other indicators including performance and motivation—tells us a data story of need and learning readiness for something different from what their peers need. Using assessments to identify needed services requires attention to the quality of the assessments, how well we give them, and the overall screening process. Schools certainly want to use assessments to create a wide pool of candidates to consider for advanced programming, then determine what the specific needs are and which students have similar needs.

EF: Why is it important to use multiple measures in this process?

MH: We want to have different looks at different students so that we don’t miss anybody. Not every student comes to school or comes to assessment with the same experiences. And if students have had limited exposure to academic content, then sometimes cognitive abilities tests can level the playing field a little bit by letting them demonstrate their ability to think and problem-solve, instead of telling us what they've already learned or what they already know. Still, there is no single tool that tells the whole story, and assessment is dogged by significant questions, specifically about bias and the misuse of data.

And while test developers are attending to the issue of bias in tests, making significant efforts to assess validity and reliability with large, diverse populations of students, all assessments are impacted to some extent by opportunity, exposure, and context, and that is important to keep in mind. As often as possible, we want to provide students with rigorous content and instruction so that they can demonstrate their skills and potential. Understanding data and assessments is critical. As Dr. Neil Lewis Jr. explained in a recent piece in The Atlantic, there is a lesson we can learn from standardized tests: their outcomes, or outputs, are data, and data can be interpreted and used in multiple ways. It is critical that educators understand the benefits and limitations of assessment tools and do not use them as gatekeepers in inappropriate ways.

EF: How has measurement of both ability and learning evolved in recent years and decades? What have been the key reforms or levers of change? And what are the key differences between those two types of measures?

MH: Over the past 20 years, there’s been a shift to better use of assessment data to inform student performance related to acquisition of standards, effectiveness of schools, and effectiveness of instruction. The most notable change, though, is the shift towards mandating the assessment of academic proficiency—measuring knowledge and skills aligned to grade-level standards. It is reasonable that we would measure mastery of content standards based on exposure to curriculum by grade level. What I mean by that is, fourth-grade students who are exposed to fourth-grade curriculum are then, at the end of the school year, assessed on that exposure and experience so we know whether they've mastered the content and are ready to move on to the next level.

Where the system has some flaws is that it really accounts only for a student's grade level; it does not necessarily account for what students may be ready for above or below grade level. While students in the middle of a bell curve might be fairly well served, those who are ready to learn well above or well below grade level sometimes are not, and that can cause a mismeasurement of the effectiveness of instruction. If a student already knows much of the information, it’s really difficult for us to understand whether that instruction was effective for them.

[Audio: Cognitive Abilities and Academic Achievement Assessments]

This process was also negatively affected by No Child Left Behind, which sought to create proficiency at every grade level for every student, a lofty goal. At the same time, stopping at proficiency excluded a lot of students who entered school already proficient in the grade-level content. The system was often punitive for schools that didn’t meet required scores on an annual basis, which created a negative assessment culture. The more recent Every Student Succeeds Act sought to remedy these poor assessment practices by adding a component that measures growth in learning: we’re going to measure academic achievement, but we’re also going to factor in where students started in the process. And while this did relieve some stress on schools, areas still in need of improvement include the accountability construct and the poor way growth is measured for high achievers.
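To make that last point concrete, here is a minimal simulation in Python (an invented illustration, not any actual accountability model): two hypothetical groups of students make identical real learning gains, but a test whose reporting scale tops out at a fixed ceiling records far less growth for students who start near or above that ceiling.

```python
import random

def measure(true_score, ceiling=100):
    """Observed score on a test whose reporting scale tops out at `ceiling`."""
    return min(true_score, ceiling)

random.seed(1)

# Hypothetical students: (true score at the start of the year, true learning gain).
# Both groups make the same 15 points of real learning growth.
mid_range = [(random.uniform(40, 70), 15) for _ in range(1000)]
advanced = [(random.uniform(85, 110), 15) for _ in range(1000)]

def mean_measured_growth(students):
    gains = [measure(start + gain) - measure(start) for start, gain in students]
    return sum(gains) / len(gains)

print(f"mid-range measured growth: {mean_measured_growth(mid_range):.1f}")  # ~15, fully visible
print(f"advanced measured growth:  {mean_measured_growth(advanced):.1f}")   # well under 15
```

The advanced group's learning is just as real, but much of it happens above the highest score the test can report—the gap that above-grade-level assessment, discussed later in this interview, is designed to close.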

EF: What are the best ways to use the data that are gleaned from such assessments in classroom instruction? And what questions should educators be asking themselves as they make these determinations?

MH: Classroom instruction is the reason why we have assessment in schools. We want to help our educators understand what students know and are ready to learn. And we have a large volume of data for teachers to wade through: the assessments that we were just talking about, which measure proficiency and mastery. Those are not necessarily the best tools for teachers to use to participate in what I like to call responsive planning. We want teachers using current data to plan responsive lessons for students on a regular basis. A lot of schools are using interim, or adaptive, assessment data. Most schools give these assessments two or three times during the year, and these interim assessments give us the ability to understand student readiness in the moment.

"We want to partner with schools and systems to help educators understand, 'How do I plan effectively for those kids that are consistently performing beyond the standards that are coming into the classroom?'"–Melissa Hinshaw

And teachers can use that data to guide the best assessment practices in the classroom for planning, based on the curriculum, and marry it with what they know about student readiness. That leads to diagnostic assessments to understand: Is the student ready for this instruction? Or do I need to get them ready before the instruction comes? Or are they beyond ready for this instruction? Do I need to work on some sort of enrichment model for students who are demonstrating that they already know the content? And that’s where our work here at CTD becomes important, because we want to partner with schools and systems to help educators understand, “How do I plan effectively for those kids that are consistently performing beyond the standards that are coming into the classroom?”
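For readers curious what “adaptive” means mechanically, the sketch below shows one common approach in simplified form: a Rasch-model item-selection loop that keeps choosing the item closest to its current estimate of the student's ability. The item bank, step size, and update rule here are invented for illustration and are not drawn from any particular assessment product.

```python
import math
import random

def p_correct(ability, difficulty):
    """Rasch model: probability that a student answers an item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_test(true_ability, item_bank, n_items=15, step=0.6):
    """Repeatedly give the item nearest the current estimate, then nudge the estimate."""
    estimate = 0.0
    remaining = list(item_bank)
    for _ in range(n_items):
        item = min(remaining, key=lambda d: abs(d - estimate))
        remaining.remove(item)
        answered_correctly = random.random() < p_correct(true_ability, item)
        estimate += step if answered_correctly else -step
        step *= 0.85  # shrink the adjustment so the estimate settles
    return estimate

random.seed(7)
bank = [i / 10 for i in range(-40, 41)]  # item difficulties from -4.0 to +4.0
print(adaptive_test(true_ability=2.5, item_bank=bank))  # should land near 2.5
```

Because each response steers the next item toward the student's level, a short adaptive form can locate readiness above or below grade level in a way a fixed grade-level test cannot.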

EF: What would you say are two or three of the most critical misunderstandings about assessment and assessment data as they pertain to advanced learners?

MH: Advanced learners, by definition, are ready for academically advanced material. And these thresholds of learning readiness are really important for us to understand for accurate placement in programs or courses, depending on where students are in their learning journey. Too often, schools use limited assessments that create somewhat of a “ceiling effect” for our advanced learners. We may not understand completely what their zone of proximal development or learning “sweet spot” is, because the scope of the assessment is just too limited. And while some advanced learners are well-served by robust classroom differentiation and enrichment, the more advanced a student is, the more urgent and different their needs are. And we need to find a way to measure that.

[Audio: How Assessments Have Changed Over Time]

To best help us understand the learning readiness of these most advanced students, there is above-grade-level assessment; historically, it’s also been called Talent Search. And this research-based practice is aimed at raising the ceiling of assessments so that we can best understand how far above their current grade level students are ready to learn. For years, organizations like the Center for Talent Development have helped families and schools administer assessments like the ACT®, the SAT®, and the PSAT™ 8/9 early for talented students. And here at CTD, we also have the ability to compare these talented students to each other; we create a new, more custom norm pool for the students. It helps us understand better not only what they’re ready to learn, but what might be the best programs and placements to help them reach their full potential.
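As a rough sketch of why a custom norm pool matters (the scores below are invented for illustration; real norm construction uses much larger samples and careful equating), the same score can be re-ranked within an above-grade-level talent-search pool instead of against grade-level peers:

```python
def percentile_rank(score, norm_pool):
    """Percent of the norm pool scoring below the given score."""
    below = sum(1 for s in norm_pool if s < score)
    return 100.0 * below / len(norm_pool)

# Invented SAT-style math scores, for illustration only.
grade_level_norms = [380, 410, 440, 470, 500, 520, 540, 560, 590, 620]
talent_search_pool = [520, 560, 590, 620, 650, 680, 700, 720, 750, 780]

score = 650
print(percentile_rank(score, grade_level_norms))   # 100.0: above every grade-level peer
print(percentile_rank(score, talent_search_pool))  # 40.0: the custom pool still differentiates
```

Against grade-level norms the student simply maxes out the comparison; within the talent-search pool, the same score becomes informative about how far ahead the student is and what placement might fit.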

EF: How should educators be attempting to measure learning loss in the wake of the pandemic? What about among advanced learners?

MH: The topic of learning loss has been on the lips of educators for almost three years now. Some students and schools have seen notable differences in data points and growth trajectories. But I believe the alarm was sounded too soon and too loudly, because what we’re beginning to see now is some recovery in a lot of our data sets in terms of achievement, both for students and for schools. Because our educators have worked diligently to intervene, measure these losses, and accelerate that lagged learning, achievement looks more like it did pre-pandemic.

But one group to call out in this phenomenon is our advanced learners. Here at CTD, when the pandemic first happened and schools shifted to mostly online learning, we saw a large influx of first-time students accessing our enrichment programs. While learning online, these advanced students were moving more quickly through content and, possibly for the first time, at their own pace, which is faster and more compacted. And this looked like a much better fit because they had the freedom and autonomy to learn new material at their own pace. We saw leaps and jumps in growth for students because of this phenomenon. While learning loss does exist in many contexts, we’re seeing that learning acceleration has happened for some advanced students, and that is important to consider in terms of long-term planning for instruction.

© 2023 Northwestern University Center for Talent Development