A colleague of mine in the education community and I were discussing SEL assessment a couple of years ago. Her district had recently administered SELweb EE, our SEL assessment system, to assess children in first through third grades and they now had terrific information about their students’ social-emotional strengths and needs. I was feeling pretty good. Mission accomplished, right?
As my colleague put it, now that we had good assessment data, it was time to consider two important questions: “So what?” and “Now what?” The first (“So what?”) is shorthand for, “What do the assessment data tell us about our students?” The second (“Now what?”) is shorthand for, “Now that we understand something about our students’ SEL, what do we do with what we have learned?”
In Part 1 of this two-part blog series, I’ll share some ideas about how to answer the question, “So what?” Part 2 will provide guidance in answer to the question, “Now what?”
What Do the Assessment Data Tell Us About Our Students?
Imagine for a moment that your district has assessed SEL in all elementary-aged students. You receive reports summarizing student performance on this assessment. To answer the question, “So what?” I recommend you engage in a systematic data use process. After all, data won’t do anything if no one looks at it, makes sense of it, and considers what they might do about what they learn.
The Meaning of SEL Data Use
When I say “data use,” I have something very specific in mind. Using data means doing these things:
1. Understand what assessment scores mean.
2. Review the data completely, which includes:
   - Individuals reviewing the data and developing an initial interpretation of the data.
   - The team discussing the assessment in a well-run forum.
3. In teams, refine initial interpretations into an understanding of student strengths and needs (and list questions not answered by the data) so that together, the team can:
   - Identify actions to build on student strengths and address student needs,
   - Commit to an action plan, and
   - Implement the action plan.
In this blog post, we’ll discuss point 1 and the first half of point 2.
A lot of activities might seem like “data use” but aren’t. Emailing a summary of findings is not data use. A five-minute presentation in a staff meeting is not data use. Talking about the SEL assessment findings in the lunch room is not data use. District leadership interpreting the findings and sending a directive about what to do to building-level staff is not data use.
So how do you accomplish high-quality data use? The mechanics will depend on your district’s practices and constraints, but there are several actions that, if done with sufficient vigor, can increase the chances that everyone will come to a very good answer to the question, “So what?”
Interpreting SEL Assessment Scores
Scores and skills. The foundation of data use is a good understanding of score reports. This consists of two parts. The first is understanding what skill each score in the assessment report reflects. SELweb EE score reports, for example, include scores for “emotion recognition,” “social perspective-taking,” “social problem-solving,” “self-control,” and overall SEL skill. Each consumer of the score report should know which score corresponds to what skill. More importantly, you should know what each of these skills looks like in practice. This means knowing what the assessment items look like and being able to picture how the skill expresses itself in children’s daily lives.
What’s in a number? You also need to know what the numbers mean. For example, if a child receives a score of 100 on SELweb’s social problem-solving module, what does that mean? In the case of SELweb, which is normed, individual student performance on the assessment is compared to a large sample of same-aged peers. We scaled the standard score so that the mean is 100 and the standard deviation is 15. That means that a child who receives a score of 100 performed equivalent to the average score of her same-aged peers. A child who achieves a score of 115 performed one standard deviation above the mean. A student who scores 1 standard deviation above the mean (in this case 115) has scored better than 84% of her peers, while a student who scores 1 standard deviation below the mean (in this case 85) has scored better than only 16% of his peers. Plus or minus a standard deviation is commonly used to refer to the “average range.” However, there is no hard and fast rule to determine the boundary between good enough and not good enough performance. You have to impose that boundary on the scores.
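The standard-score arithmetic above can be sketched in a few lines of Python. This is an illustration of the general math of normed scores (mean 100, SD 15), assuming scores are approximately normally distributed; it is not part of SELweb’s actual scoring software.

```python
from statistics import NormalDist

# Normed standard scores as described above: mean 100, standard deviation 15.
# Assuming an approximately normal distribution, a standard score can be
# converted to a percentile rank (percent of same-aged peers scoring below it).
score_distribution = NormalDist(mu=100, sigma=15)

def percentile_rank(standard_score: float) -> int:
    """Approximate percent of same-aged peers scoring below this standard score."""
    return round(score_distribution.cdf(standard_score) * 100)

print(percentile_rank(100))  # 50 -- exactly average
print(percentile_rank(115))  # 84 -- one SD above the mean
print(percentile_rank(85))   # 16 -- one SD below the mean
```

Note that the percentile rank only locates a student relative to peers; as the text says, deciding where “good enough” performance begins is a judgment you must impose on the scores.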
Other assessments will have different scoring systems, some of which will be criterion-referenced, or benchmarked against a predetermined criterion. Others will be norm-referenced, as is the case with SELweb. Your job is to be clear about the meaning of the numbers.
To get comfortable with either the skill each score reflects or the meaning of the numbers, consult the assessment manual first. If you still feel confused, contact the test publisher. Your school psychologist can also provide guidance, as school psychologists are highly trained in test scoring and interpretation.
Reviewing and Reflecting on SEL Assessment Data
Now it’s time to start making sense of the data. The first step is for each individual on the team to review the assessment reports. Spend enough focused time with the data to develop an initial interpretation of what the data mean. Your job is to be able to answer the question, “What are our students’ social-emotional strengths and needs?”
The next step is to convene the team. Your meeting goals are: (1) come to a common understanding of what the data say; (2) interpret what the data mean; and (3) generate testable hypotheses about student social-emotional skill development. You can do this by engaging in an exercise a friend of mine called “see, think, wonder.”
See—the facts in the data. The first step is to describe what is seen. In this stage—the “see” portion of see, think, wonder—the conversation focuses strictly on the facts. So for example, it is fair game to say, “I see that on emotion recognition, students in my class are on average 5 points below average.” That is a description of fact. It is straying into interpretation to say, “Emotion recognition is a weakness in my students.” Describing the data in terms of strengths and weaknesses goes beyond the surface facts and begins to apply an interpretive judgment to the data. During the “see” part of the conversation, the facilitator’s job is to keep the team reporting facts without straying into interpretation or planning.
Think—interpret and make connections. In the next stage, the “think” part of the conversation, participants can start imposing interpretations of the data onto the facts. The question here is, “What do these facts make you think?” Here are some fair interpretations:
- “I think the high scores on social problem-solving show that our students are generally good at working out differences, but there are a few outliers.”
- “The low scores on social perspective-taking show that we have room to work on that skill, even though some of our students are clearly really good at this.”
- “It doesn’t surprise me that our students are low on ratings of problem behavior. I am surprised that they were below average on socially skilled behavior.”
As was the case during the “see” phase of the conversation, the facilitator’s job is to elicit thoughts from participants until the group has articulated multiple interpretations of the data.
Wonder—develop hypotheses about what’s going on. The final phase of this conversation is the “wonder” part of see, think, wonder, and it involves generating testable hypotheses. In this phase of the conversation, discussion participants ask questions that the data raised. Respondents are encouraged to start their responses with, “I wonder…” Examples might be, “I wonder if our kids are missing out on opportunities to learn social problem-solving because the adults intervene to solve problems and kids don’t learn to work them out,” or “I wonder whether the troubles we’re seeing on measures of self-control are contributing to my difficulties with classroom management.”
All of these statements draw connections between assessment scores and other things that might be going on in students’ lives. Each can be rephrased as a testable hypothesis: the more adults solve children’s problems for them, the less children will learn to solve their own problems; the more trouble students have with self-control, the more difficult classroom management will be. Note that I’m not saying any of these statements is accurate. But you can see how wondering about the data can easily generate interesting and testable hypotheses, some of which may be grounds for action.
Strong data use is not easy. It requires time, effort, and work to understand the story your SEL data are telling you. If you take these steps, chances are, you’ll have come to a solid answer to the question, “So what?” With that in hand, you can turn your attention to the question, “Now what?” which I’ll discuss in my next blog post.