Remember that cell phone you used to carry around in the mid-90s? You know, that waffle iron with the three-foot extendable antenna that did one thing: make phone calls?

Now, your cell phone is as thin as your credit card, and it has replaced your camera, your calendar, and, oh yes, your credit card! With these fast advances in technology, we need a workforce that is technologically literate. We’ve been using a sledgehammer where we can now use a scalpel. For example, even before we had cell phones, we had a sense in education that poor kids needed more help.

Federal agencies gave funds to provide extra services to poor kids. Now, our data systems can tell us which individual kids need more help. Although a larger proportion of the poor kids need more help, many of them don’t. And some of the rich kids need more help. Because we can now home in on precisely which students need more help in reading, or which are ready for advanced math, federal agencies have changed the way they offer grant funds.

Where We Came From

When President Johnson launched the War on Poverty in the 60s, Title 1 was born. Title 1 gave schools with a lot of poor kids extra money. Kids who paid anything less than the standard price for lunch were identified as low income, and the money was divvied up accordingly. The money could be spent on services for poor kids. Accountability for the money meant showing that, yes, indeed, poor kids were served. Impact on academics was assumed but not measured.

Title 1’s method of serving students would be seen as fuzzy from today’s scalpel-precision perspective. During this sledgehammer era in education, many products were marketed as “great for poor kids,” so schools could spend Title 1 money on them. Looking back, some people have called this “profiting from the poor.”

Gradual Changes Began

Schools with poor kids continued to receive extra money and, understandably, they purchased products and provided extra services for those kids. We’d beaten the Russians to the moon, the final battle in the space race that began when the Russians launched Sputnik. Nothing motivates like competition, and competition during the Cold War was strong. We the people began developing things, and not just Tang. This development was slow at first, but it began to move exponentially fast.

After the Cold War, schools became complacent again, and a handful of nerds was making it happen for everybody else. Everybody wanted the latest thing, and they wanted it smaller, cheaper, faster, and capable of doing so many more things. Getting a cell phone became the rite of passage for tweens and teens. Neat, hand-written ledgers were replaced by spreadsheets and databases.

You no longer had to sign the guest book at the hotel. The hotel had a record of every time you’d stayed with them, which movies you watched, what you ordered from room service, and every Orangina you drank from the fridge-stocked bar. By the 1980s, when “nerd” was still an insult and Pluto was still a planet, the supply of workers needed for this rapid technological movement could not keep up with the demand.

That, coupled with embarrassingly low scores on international standardized math and science tests, revitalized the quest for schools to churn out technophiles. No Child Left Behind (NCLB) was the bipartisan brainchild born of this bind. NCLB promoted cleaning up the data so we could interpret it in ways that helped us promote success.

NCLB required that any state taking federal funds had to:

  1. Say what they hoped kids would learn in each core subject in each grade.
  2. Test the degree to which kids learned those things.
  3. Report the test results by subgroup.

For number 1, the states were allowed to decide what it was they wanted their kids to learn. They could set the bar as high or as low as they wanted to; they just had to say where it was.


Confusion Reigned During Transitional Years

By the time NCLB rolled out, schools could keep track of what the kids were learning, and the Department of Education wanted to know. The data showed that in general, poor kids were not learning much. This caused much scratching of heads. After all, Title 1 poured extra money into schools with lots of poor kids. Yet, the data showed large percentages of poor kids were academically behind.

Surely giving the poor kids remedial work, keeping them on the lowest math track, and protecting them from the stresses of advanced courses should have made them all geniuses by now, right? If you think I am making this up, NCLB itself can disabuse you of that notion. According to NCLB, any poor student at a Title 1 school that hadn’t met certain benchmarks was eligible for free tutoring from an approved tutoring service of the parents’ choice. Some districts provided their own tutoring and served poor students who were behind. But most schools hired local, commercial tutoring services to come in after school and tutor.

We evaluated many of these programs, and read the evaluation reports of others. One after-school tutoring program in a large urban district was using four commercial tutoring services. The poor kids’ parents were told their children were eligible for after-school tutoring from [very expensive commercial tutoring services].  All the poor kids were eligible. We were called in near the end of the services to evaluate the program.

Did the kids get to the point where they were reading at grade level? At least the DoE was asking the right question. The problem was that most of the kids were doing fine before the program. We got their pre-program reading and math scores and found that most were already at or above grade level. No one had looked at the pre-test scores, though, and all students received remediation. In fact, the tutoring services were only allowed to use state-approved curricula, and all of the state-approved curricula were remedial.
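The check no one ran is simple to describe. Here is a minimal sketch in Python of what it might have looked like; the benchmark cutoffs, field names, and student records are made up for illustration, not taken from the actual program’s data.

```python
# Compare each enrolled student's pre-program score to a grade-level benchmark
# BEFORE assigning remediation. All values below are hypothetical placeholders.

GRADE_LEVEL_BENCHMARK = {3: 430, 4: 442, 5: 451}  # assumed scale-score cutoffs

students = [
    {"id": "A01", "grade": 3, "pre_reading": 455},
    {"id": "A02", "grade": 4, "pre_reading": 410},
    {"id": "A03", "grade": 5, "pre_reading": 470},
]

# Students below the benchmark are candidates for remediation...
needs_remediation = [
    s for s in students if s["pre_reading"] < GRADE_LEVEL_BENCHMARK[s["grade"]]
]
# ...while these students were already at or above grade level going in.
already_at_or_above = [
    s for s in students if s["pre_reading"] >= GRADE_LEVEL_BENCHMARK[s["grade"]]
]

print(f"{len(already_at_or_above)} of {len(students)} enrolled students "
      "were already at or above grade level before services began.")
```

A few lines like these, run against the pre-test file before the first tutoring session, would have shown that most of the enrolled students did not need remedial curriculum at all.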

Teachers reported to us that these students were all reading below grade level. Title 1 funds were paying for these services. Poor students were receiving the services, just like everyone was used to. The difference was that now there was an objective for the services, and that objective was to be measured with reading scores.

Many schools saw this new requirement of measuring the outcomes with pre- and post-reading scores as something they could just hire Edstar to do. They continued to operate in the traditional paradigm. 

Here’s a quote from the district’s own Evaluation and Research team’s report:

“Only students receiving free or reduced-price lunch (FRL) were eligible. More than half of those served scored at or above grade level before service, while some students who scored below grade level were not eligible. The curricular materials used were remedial and not designed to extend the learning of students scoring at grade level” (Paeplow & Baenen, 2006).

What happens when you treat capable people as if they’re not? They are measurably damaged; at least the subjects in our evaluation were. Among the group in this example, 11% of 3rd- through 5th-grade students who were at or above grade level before these services were below grade level after their remediation. We compared that to a control group of 3rd- through 5th-grade students who were at or above grade level and didn’t receive any services. Only 2% of those dropped below grade level. It was worse for K-2 kids: 23% of them went from at-or-above to below grade level after they received the tutoring (compared to 8% in the control group).
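For readers who want to see the arithmetic behind those percentages, here is a small Python sketch of the comparison. The student records below are invented placeholders; only the calculation itself mirrors what we did.

```python
# Among students who started at or above grade level, what share ended up
# below grade level afterward, in the served group versus the control group?

def drop_rate(records):
    """Share of students who started at/above grade level but ended below it."""
    started_ok = [r for r in records if r["pre_at_grade_level"]]
    dropped = [r for r in started_ok if not r["post_at_grade_level"]]
    return len(dropped) / len(started_ok) if started_ok else 0.0

served = [
    {"pre_at_grade_level": True, "post_at_grade_level": False},
    {"pre_at_grade_level": True, "post_at_grade_level": True},
    # ...one record per tutored student
]
control = [
    {"pre_at_grade_level": True, "post_at_grade_level": True},
    {"pre_at_grade_level": True, "post_at_grade_level": True},
    # ...one record per untutored student
]

print(f"Served:  {drop_rate(served):.0%} dropped below grade level")
print(f"Control: {drop_rate(control):.0%} dropped below grade level")
```

In our evaluation, that served-versus-control gap (11% versus 2% in grades 3-5, 23% versus 8% in K-2) is what signaled that the remediation itself was doing harm.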


Poor students are often considered “at-risk” kids. So are Black and Hispanic students, and this is often because these kids are assumed to be poor. They are at risk of being victims of the soft prejudice of low expectations. In other words, they are at risk of being labeled “at risk.” This misnomer was sanctioned by NCLB. The term “at-risk” was used interchangeably to describe both low-income and low-achieving students. NCLB said that Title 1 schools that missed certain benchmarks had to serve all kids who were at risk of failing academically, had to serve only poor kids, and had to serve all poor kids. “Say what?” you ask. So did we.

We have a long string of emails from the Department of Education that basically says being poor and academically failing are the same thing. It is easy to see where this confusion came from. When it was difficult to know specifically which students were below grade level, because good data and easy-to-use technology didn’t exist, generalizations were used.

It must have been implicitly assumed that poor kids are behind at school and that Title 1 services will help them catch up. Then someone who is good with data comes in at the end and helps report how many of the poor kids could now read at grade level. A lot more than comparing data at the end of the program was needed.

As a result, many very smart poor kids got remedial work and were put on remedial tracks. Not that this wasn’t already happening, but it got worse. The confusion was bipartisan. Some school districts quit using data altogether. They did not want to know, or to use the data to better align services. As we moved into the 21st century, all government departments started moving toward better record keeping and better accountability. This was not due to some president’s opinion. It was due to moving forward in time.

The Office of Management and Budget called for a review of all federal grants. It created a reviewing tool called the Program Assessment Rating Tool (PART). This system assigned scores to programs based on whether the services were related to the goals, whether the goals were appropriate for the individuals served, and whether student success was measured against quality standards and assessments.

Programs that could not demonstrate whether they had been effective, because they lacked data or clear performance goals, were given the PART rating “Results Not Demonstrated.” After years of chances to improve, nearly half (47%) of the U.S. Department of Education grant programs rated by the government were still labeled “Results Not Demonstrated,” illustrating how difficult the transition to outcome-based accountability was.

An Example: After School Programs

The federally funded Department of Education afterschool programs were called 21st Century Community Learning Center (21st CCLC) grants. Before NCLB, they were to provide child care after school for families that couldn’t afford it. After NCLB, they were supposed to serve failing kids instead of poor kids. (But remember, many educators believed these were the same thing.) Now, though, programs were supposed to have some positive impact on academic goals (or get canned).

The goal of this grant became bringing kids to grade level (just like the previous Title 1 program example). School districts all over the country were supposed to quit serving poor kids if those kids were at grade level. On paper, Department of Education money for poor kids simply got switched out for money for failing kids, but in reality the poor kids were still getting the services, which usually involved remediation. During this time, Edstar evaluated countless programs that were confusing poor kids for failing kids, and we would compare pre- and post-reading scores to see how many kids met the goal of rising to grade level. Most of the students were at or above grade level prior to service.

Summary

To get federal grant money, schools and nonprofits now need to show that data supports the need, that the services are research-based, and that the objectives are written in terms of data. Very specific program records need to be kept, and financial records must be kept specifically for the grant. The budget must align with the proposal. Students need to be targeted based on academic data.

Educators and nonprofit staff may not have the skills and knowledge required for the new world of federal grants.  This new skill set requires technical knowledge and different ways of thinking.

Call to Action

Counselors can be instrumental in ushering in a new paradigm, one in which data is used before the services are provided, so that the proper students can be served. Counselors have been leaders in this move toward data. The American School Counselor Association national model provides them with a framework for using data, now a requirement for many funding sources. Goals are set, services are aligned to meet the goals, and students who meet certain data criteria are served.
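To make the contrast with the old paradigm concrete, here is a small Python sketch of what “serving students who meet certain data criteria” might look like. The measure, cutoff, and field names are hypothetical, not taken from any particular district’s system.

```python
# Define the goal's data criteria first, then pull the roster of students who
# actually meet them, rather than serving everyone flagged as low income.

goal_criteria = {
    "measure": "reading_percentile",
    "threshold": 40,          # serve students below the 40th percentile (assumed cutoff)
    "grades": {6, 7, 8},
}

roster = [
    {"id": "S1", "grade": 7, "reading_percentile": 22, "frl": True},
    {"id": "S2", "grade": 7, "reading_percentile": 78, "frl": True},
    {"id": "S3", "grade": 8, "reading_percentile": 35, "frl": False},
]

to_serve = [
    s for s in roster
    if s["grade"] in goal_criteria["grades"]
    and s[goal_criteria["measure"]] < goal_criteria["threshold"]
]

# S2 is low income but on track; S3 is not low income but behind. The academic
# criterion, not the lunch status, decides who gets the service.
print([s["id"] for s in to_serve])   # -> ['S1', 'S3']
```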

This is the trifecta for success: setting goals, providing services that have been shown to meet those goals, and serving the right kids. Take out any one of these and, at best, time is wasted. At worst, the kids are harmed and resources are wasted.

Try Our School Counselor Apps

Reference

Paeplow, C., & Baenen, N. (2006). E & R Report No. 06.09: Evaluation of Supplemental Educational Services at Hodge Road Elementary School 2005-06. Raleigh. Retrieved from http://www.wcpss.net/evaluation-research/reports/2006/0609ses_hodge.pdf