Harnessing Data to Help Children Learn: Lessons from the 2018-19 Evaluation of Luminos’ Second Chance Program in Liberia

Lindsey Wang, Luminos Program Analyst

Lindsey Wang is a Program Analyst at the Luminos Fund, where she is instrumental in program monitoring, evaluation, and reporting. She joined Luminos in 2016 after graduating from MIT in Mechanical Engineering and is entering Harvard Kennedy School this autumn.

I would like to tell you a story. In the Dargweh community of Liberia, West Africa, an 11-year-old girl steps into a classroom for the first time in two years. She attended school previously and can name a few letters of the alphabet but is unable to read even two-letter words. Years helping her mother in the market taught her to perform simple sums in her head, but she doesn’t know how to write any numbers. In her new classroom, she chants and claps alongside her peers, repeating the names of letters, their sounds, and words beginning with those letters. A. Ah. Africa. B. Buh. Bird. A letter. A sound. A word. She memorizes the pattern and steps to the front of the class to lead her classmates in song. Outside, she can hear toddlers from her community chanting along, drawn to the boisterous chorus rising from the cinder block building.

Ten months later, imagine returning to this one-room classroom in Dargweh to find that this 11-year-old girl can now not only identify all 26 letters but also read entire paragraphs about Sammy and his sister Satta. She’s more than happy to tell you that if Yatta has 8 pencils and Abdul has 5 pencils, Yatta has 3 more pencils than Abdul. When she encounters an unfamiliar word, she holds out her left arm and taps it with her right hand, moving from her shoulder to her wrist, one tap for each phonetic sound: shuh, oh, puh. Shop.

In 10 months, thanks to her own tenacity and the Luminos Second Chance program, this girl jumped from near illiteracy to acing a second-grade reading comprehension assessment. Her progress is real, and we have the data to prove it.

As the Luminos Fund’s Program Analyst, I had the great fortune to attend the first week of Luminos Second Chance classes in Liberia in September 2018 and the final week of classes in June 2019. I observed similar advances in dozens of the children I met as I supervised the baseline and endline Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA), which measure the learning levels of a sample of students before and after our program. Our Liberian program team—Program Manager Abba Karnga and Program Coordinator Alphanso Menyon—diligently arranged for enumerators (the third-party professionals who conduct evaluations and capture raw data) to randomly sample five students from each of our Second Chance classes during the first week of school. At the end of the 10-month program, those same five students were given the same test by the same enumerator. These kinds of data enable Luminos to identify program strengths and the weaknesses we need to rectify for the next cohort of students. The baseline and endline evaluations are our report card, so to speak.
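For readers curious what that sampling step looks like in practice, here is a minimal sketch in R, the language I used for our data work. The roster and its column names are invented for illustration; the actual draw was organized with our program team in the field.

```r
# Minimal illustrative sketch (not Luminos' actual code): draw five students
# at random from each class roster so the same children can be re-tested at
# endline. All names and column labels here are hypothetical.
library(dplyr)

set.seed(2018)  # fix the seed so the draw is reproducible

# Toy roster: three classes of 30 students each
roster <- data.frame(
  class_id   = rep(c("C01", "C02", "C03"), each = 30),
  student_no = rep(1:30, times = 3)
)
roster$student_id <- sprintf("%s-%03d", roster$class_id, roster$student_no)

baseline_sample <- roster %>%
  group_by(class_id) %>%
  slice_sample(n = 5) %>%  # five students per class
  ungroup()

# Save the list so enumerators can find the same students at endline
write.csv(baseline_sample, "baseline_sample.csv", row.names = FALSE)
```

Fixing the seed matters here: it makes the sample auditable, so anyone rerunning the script against the same roster can confirm which five children belong in the endline round.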

——————————

Everyone in international development knows that external, independent evaluations are essential, but we may underestimate what it takes to get them right. Leading up to my trip and on my flight to Liberia, I buried myself in lecture notes and slide decks from an evaluation management training I had attended, until finally, somewhere over the Atlantic Ocean, I realized that no workshop would fully prepare me for the boots-on-the-ground experience of supervising an evaluation.

The lessons that have fundamentally shaped my approach to managing independent evaluations came not from lectures but from visiting classrooms and speaking with enumerators. Now, with the June 2019 endline evaluation completed, I can reflect on the entire process and share a few of those lessons here.

Pilot the survey instrument. Pilots may not always be possible due to time or resource constraints, but the experience of testing a survey with subjects before it launches is invaluable and will strengthen the actual evaluation. We were fortunate to pilot our survey instrument at a Monrovia government school a few days before the baseline evaluation began. Looking over my field notes, I have pages of scribbles even though our pilot took place during just one afternoon. I jotted down every mistake that enumerators came across in the survey and every set of instructions that students did not understand. I noted the names of enumerators I thought were particularly skilled at putting children at ease. Receiving test data collected during the pilot also made it easier for me to prepare an R script to run data checks on the actual evaluation data as they were received from enumerators each evening. This script was a critical time-saver and allowed me to respond swiftly to data discrepancies and issues that arose in the field.
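To give a sense of what that script did, here is a hedged sketch of the kind of nightly checks it could run. The column names (student_id, enumerator, letters_correct) are hypothetical stand-ins, not our actual schema.

```r
# Illustrative sketch of nightly checks on incoming EGRA/EGMA records.
# Column names are hypothetical; the real instrument has many more fields.
library(dplyr)

check_data <- function(df) {
  issues <- list()

  # A duplicated student ID usually means a record was uploaded twice
  dups <- df %>% count(student_id) %>% filter(n > 1)
  if (nrow(dups) > 0) issues$duplicate_ids <- dups

  # Scores outside the instrument's possible range point to entry errors
  # (e.g., a letter-identification subtask scored out of 26)
  bad_scores <- df %>% filter(letters_correct < 0 | letters_correct > 26)
  if (nrow(bad_scores) > 0) issues$out_of_range <- bad_scores

  # Required fields should never be blank
  blanks <- df %>% filter(is.na(student_id) | is.na(enumerator))
  if (nrow(blanks) > 0) issues$missing_fields <- blanks

  issues  # an empty list means the evening's upload looks clean
}
```

A helper along these lines turns each evening’s review into a scan of a short list of flagged records, which makes it far easier to raise discrepancies with the field team the next morning.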

The enumerator training prior to the September 2018 baseline evaluation

Build relationships with your enumerators. While piloting the survey instrument, I received the most insightful feedback from the twelve enumerators preparing to evaluate our students. Rufus noted that students seemed to struggle to read words not because they didn’t know the words but because the font was too small. Sarah suggested that marking incorrect responses on paper in front of the child might be discouraging, which prompted the other enumerators to change their own processes and mark their papers under the table. During evaluations, enumerators have a front-row seat and can share qualitative insights into students’ knowledge and behavior. At the endline, Margaret shared with me—with a beaming smile—that students seemed much more confident in their abilities than they did at the start of the program year, something that wouldn’t have been clear from the data alone. One student, she reported, even corrected her as she tried to demonstrate how to break a word into its phonetic sounds. Without this direct line of communication with the enumerators, I would have a less nuanced understanding of Luminos results.

Raise concerns early and often. I was nervous going into the baseline evaluation. Was I ready to be an authority on the Luminos program and supervise the enumerator team in the field? I had the Luminos leadership team’s support, and they reminded me that, in that room, no one knew more about the program than I did. “Don’t hesitate to raise concerns,” they told me, so I didn’t. I disputed the phrasing of one of the questions. I stressed to enumerators the importance of putting students at ease and reassuring the children that their performance would not affect their enrollment in our program. It surprises me even now how easily the survey team and I fell into a good rhythm: I would observe the enumerators and recommend a change to the survey; the survey firm’s manager would adjust the instrument; and the cycle would begin again. This ease is a testament to the survey firm’s professionalism and investment in conducting a rigorous, informative evaluation in service of Luminos’ mission.

Dive in (and be prepared to sweat the details). Did we edit the survey so that both addition and subtraction questions require numeric responses? Does every enumerator know that we will no longer be reading the examples for question 5? The night before the baseline evaluation launched, I caught myself drafting an email to the field coordinators with a few more observations from the pilot, only to realize that the enumerators and field coordinators were probably asleep and wouldn’t respond to my emails at 3:00 a.m.! In the end, despite some sleep deprivation, the exhilaration of accompanying the survey into the field kept me motivated. After four hours of driving over pothole-ridden dirt roads (the same pace our Liberian colleagues keep every week), I would return to my room to start running data checks, keeping an eye out for enumerator errors and data inconsistencies. In evaluations, as in our Second Chance program as a whole, success lies in the details.
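One inconsistency worth watching for is an enumerator whose students score very differently from everyone else’s, which can signal that a test is being administered or marked differently. A rough sketch of that check, again with invented data and an arbitrary threshold:

```r
# Hypothetical sketch: flag enumerators whose average scores sit far from
# the rest, which can indicate inconsistent administration or marking.
library(dplyr)

# Toy data standing in for one evening's uploads
set.seed(1)
responses <- data.frame(
  enumerator = rep(c("E1", "E2", "E3", "E4"), each = 25),
  score      = c(rnorm(75, mean = 15, sd = 3),   # E1-E3 score similarly
                 rnorm(25, mean = 22, sd = 3))   # E4 is an outlier
)

by_enum <- responses %>%
  group_by(enumerator) %>%
  summarise(mean_score = mean(score), n = n())

# Compare each enumerator's mean to the median of all enumerator means;
# the 5-point threshold is a judgment call, not a statistical rule
flagged <- by_enum %>%
  filter(abs(mean_score - median(by_enum$mean_score)) > 5)

print(flagged)  # E4 shows up for a follow-up conversation
```

A flag like this is a prompt for a conversation, not an accusation; sometimes the explanation is as innocent as one enumerator being assigned a stronger class.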

——————————

Sleepless nights, constant survey revisions, and many miles logged on bumpy dirt roads. Conducting evaluations can be tedious and time-consuming, expensive and exacting. Why do we do it?

Data drives decision-making and real-time program enhancements. When mid-year internal monitoring reports flagged that our students were struggling with language arts, the Luminos program team in Liberia acted immediately and restructured the curriculum around phonics. A few weeks later, when facilitators met for our semi-annual training, Luminos staff and curriculum consultants delivered a new training module in phonics that led to increased emphasis on literacy in the classroom. Real-time data collection and analysis enables efficient and agile program improvements. This process helps Luminos fulfill our commitment to deliver high-quality education to students in joyful, welcoming, safe, and instructive classrooms.

Lindsey crunching the evaluation data back in Boston

Data is key to achieving impact at scale. At Luminos, I have seen firsthand how a lean NGO-operated education program can evolve into broad education policies that governments fund and implement. In Ethiopia, where Luminos also runs classrooms, our academic research partners at the University of Sussex Centre for International Education have rigorously evaluated that program’s pedagogy, implementation, and long-term impact on students’ educational prospects. Excitingly, Ethiopia’s Ministry of Education has now adopted the Second Chance program model as a national strategy to reach out-of-school children, largely due to the rich body of evidence demonstrating our program’s impact. Going into the baseline and endline process in Liberia, I understood that for our Liberia program to follow a similar path to scale, we must produce another compelling body of evidence, beginning with this evaluation.

Not all NGOs are fortunate enough to have strong evaluation partners. Even when they do, evaluations can be expensive, especially for small teams. But, without data, how does an organization self-reflect, implement better strategies, and, frankly, attract more investment? Only through data-driven action, dialogue, and policymaking can the global community address systemic inequalities with sustainable solutions.

When you’re deep in the analysis process—trying to make sense of one thousand data points—it’s easy to lose sight of why data matter. Data matter because our decisions and policies have implications for real people. Data should be the foundation for policymaking, not only to scale effective programs more efficiently but because, in the end, each of those data points represents a person or a community. Remember the young girl who aced the second-grade reading comprehension test? I know her only as Student B014, but she is a reminder that the data we collect are more than a series of numbers. She is a person with dreams and aspirations of her own. She is a daughter. She is a friend.

This autumn, I am taking my experience with data-driven program management to Harvard Kennedy School in pursuit of a Master in Public Policy, and I will continue working at Luminos on a project basis. In my academic studies and career so far, I have approached international development as an implementer. At HKS, I look forward to bringing my implementer’s lens to the policymaking table. As I transition to this next chapter, I proudly carry with me the humanity and dignity that the Luminos Fund brings to its work, whether around a conference table in Boston or in a one-room classroom in Liberia.
