Showing Up, Raw Data, and Logistics: 3 Lessons on Conducting a Successful Baseline Evaluation in Ghana


Ernesta Orlovaitė is Associate Director of Programs at the Luminos Fund. Ernesta oversees the design and delivery of Luminos’ program in Ghana, collaborating closely with the government and local implementing partners. She also guides Luminos’ efforts to strengthen its capacity for data-based decision-making and drive better outcomes for our students. Previously, Ernesta worked as a Product Manager at Google, leading product design and development teams in Switzerland and Japan. 


Launching an education program in a new country is an unforgettable experience. As a seemingly endless list of tasks gets shorter, emails and calls give way to something much more tangible: printing primers, delivering teacher training, and, finally, ushering excited children to the classrooms for their first lesson. The first day in class is the singular focus in the weeks and months before program launch – getting everything ready just in time is a monumental undertaking.

Yet at the same time, the first day is just that – the first day in a long journey of learning to read, write, and do basic maths; of learning to learn; and of falling in love with it. In that journey, every day matters. It’s that journey, joyful and child-centered, that transports Luminos students from zero to functional literacy and numeracy in just one school year.

On March 8, 2022, as dusk fell over the hills of Ashanti, Ghana, where we celebrated the launch of the Luminos program for 1,500 out-of-school children, we were already thinking about our next big goal: an external evaluation of our first year in Ghana.

Kicking off an independent program evaluation in Ghana

In Ghana, we are lucky to be working with an experienced local partner, Educational Assessment and Research Centre (EARC). In 2015, EARC, together with Ghana Education Service and RTI International, administered the national Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA) to more than 7,000 Primary 2 pupils in twelve languages across all ten regions of the country.

We knew we’d want to use the national assessment instruments to evaluate our program. These instruments have been extensively tested and translated into Asante Twi, the language of instruction in Luminos classrooms this year. Using them would also allow us to compare our students’ progress against learning achievements in formal schools. But we also knew that Luminos classrooms are, in several important ways, different from government schools, making assessment delivery significantly more challenging. So, we got involved – and here’s what we learned.

Lesson 1: Attend enumerator training and provide rapid feedback

EARC ran a five-day enumerator training in Kumasi, the capital of Ashanti, the week before data collection started. Under the guidance of field coordinators, enumerators would visit classrooms to administer EGRA/EGMA and record student responses.

Enumerator training is a critical component of the data collection process, so the day after our program officially launched, I arrived at the Bethel Methodist Primary School to observe a practice EGRA/EGMA delivery to Grade 2 students. After the first round, I had several pages’ worth of feedback, and so did the two field coordinators from EARC. Huddling together in an empty classroom, we discussed our reflections from that first attempt.

Enumerator training in Kumasi.

Some mistakes the enumerators made were mundane and would go away with further practice. For example, with students seated facing them, several enumerators indicated the wrong direction of reading – right to left instead of left to right. In the second attempt, not a single enumerator repeated the mistake.

A more serious issue – and one that’s difficult to catch when you don’t speak the language of assessment – was not sticking to the assessment script. With the two field coordinators fluent in Asante Twi, however, we identified and addressed the problem right away. As one of the coordinators emphatically put it while pointing at the enumerator manual, “Read this, and you will go to heaven.”

Knowing that EARC is an experienced partner, and seeing most enumerators administer EGRA/EGMA with fluency and precision, I wasn’t too concerned about the technical aspects of the evaluation. What worried me was how our students would experience the assessment.

Luminos works with some of the most vulnerable children in Ashanti. A typical student enrolled in our program would be an 11-year-old who is unable to read even the simplest of words. She might have been kept out of school because her family could not afford a school uniform. She might be tired because that morning she had been working on the family farm. She might be distracted because she hadn’t had lunch before coming to class. The experiences of Luminos children are very different from those of 11-year-olds at the Bethel Methodist Primary School. Few children enjoy assessments. I was worried our students would hate them and fail to demonstrate the extent of their true knowledge.


As I continued observing enumerators, I kept bringing it up – the importance of building rapport with the student, of making the assessment feel like a (granted, rather boring) game, of creating a welcoming environment, and of treating Luminos children with the same level of respect the enumerators were treating each other and me. As the day progressed, I demonstrated the behaviours that I observed and wanted to correct, celebrated behaviours I wanted to replicate, told heart-warming stories about our students, and gave passionate elevator pitches on rapport building. I might have overdone it, but that’s a small price to pay if, in return, our children showed off all their skills and had fun while doing so.

Traveling to Hamidu to observe EGRA/EGMA data collection.

Lesson 2: Get access to raw data and analyse it daily

EARC used Tangerine, a mobile data collection tool, to record EGRA/EGMA observations in our classrooms. There are numerous advantages to using tablets in data collection, including significantly higher data quality: no more fiddling with timers or trying to decipher confusing question skip logic. A wonderful side effect of using digital tools is the opportunity to analyse data daily to identify and immediately address issues in assessment administration.


An enumerator uses a tablet during training.

In fact, raw data can be analysed even before assessments start. With practice at the Bethel Methodist Primary School completed, we walked back to the training venue for the first assessor accuracy test. All 20 enumerators completed the same assessment delivered by the two field coordinators. By the time I reached Accra the next morning, I had everything I needed to perform a quick enumerator accuracy analysis.

After three days of training, our enumerators had an average accuracy rate of 92%. That comfortably cleared the 90% accuracy goal set for the 2015 national EGRA/EGMA and confirmed my initial impression of the EARC team’s experience: they were doing well and would do even better by the time training finished.

Digging deeper into the accuracy data, I noticed a few interesting patterns. EGRA, it turns out, is significantly more challenging to administer than EGMA (90% and 97% enumerator accuracy, respectively), with the phonemic awareness subtask, at an appalling 69% accuracy rate, giving everyone a headache. On the other hand, having worked with challenging EGRA/EGMA data before, I was pleased with the highly consistent task timings. If we are to scale the raw non-word reading scores by time-to-completion, we had better trust that the time-to-completion metric is accurate – and now I knew we could.
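The analysis itself doesn’t need to be elaborate. As a rough sketch in Python (the file and column names here are hypothetical – the actual Tangerine exports and my scripts differ), the core of it is scoring each enumerator’s recorded responses against the gold-standard responses recorded by the field coordinators, then breaking accuracy down by subtask:

```python
import pandas as pd

# Hypothetical exports: one row per enumerator-item pair from the
# accuracy test, plus the field coordinators' gold-standard responses.
responses = pd.read_csv("accuracy_test.csv")   # enumerator, subtask, item, response
gold = pd.read_csv("gold_standard.csv")        # subtask, item, gold_response

merged = responses.merge(gold, on=["subtask", "item"])
merged["correct"] = merged["response"] == merged["gold_response"]

# Overall accuracy per enumerator: flags who needs individual support.
print(merged.groupby("enumerator")["correct"].mean().sort_values())

# Accuracy per subtask: surfaces trouble spots like phonemic awareness.
print(merged.groupby("subtask")["correct"].mean().sort_values())

# Fluency-style scaling of a timed subtask: correct items per minute,
# assuming the export records seconds used alongside items correct.
nwr = pd.read_csv("nonword_reading.csv")       # student, items_correct, seconds_used
nwr["correct_per_minute"] = nwr["items_correct"] * 60 / nwr["seconds_used"]
```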

That night, I shared with the EARC team my reflections and the names of the two or three enumerators who needed individual support. The next day, my feedback was incorporated into the training.

Once assessments began in our classrooms, I continued analysing the raw data. Rather than trying to gain insights into the baseline learning achievements of our students, I scoured it for issues the EARC team could address right away. As I shared my reflections with the field coordinators (“I rather doubt there were 256 boys present in our classroom in Aframso”), they passed the feedback along to the assessors. We did end up with one more classroom recording the attendance of 207 boys (again, a typo), but I expect we would have seen quite a few more errors if not for the quick feedback loop.
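These daily checks don’t need to be sophisticated either. With roughly 25 children per classroom (1,500 students across 60 classrooms), a simple range check is enough to catch a “256 boys” typo. A minimal sketch, again in Python with hypothetical column names:

```python
import pandas as pd

# Hypothetical daily export: one row per classroom visit.
visits = pd.read_csv("daily_export.csv")  # classroom, boys_present, girls_present

# With roughly 25 children per classroom, anything above a generous
# threshold is almost certainly a data-entry error worth a phone call.
MAX_PLAUSIBLE = 40
suspicious = visits[(visits["boys_present"] > MAX_PLAUSIBLE)
                    | (visits["girls_present"] > MAX_PLAUSIBLE)]
if not suspicious.empty:
    print("Flag for the field coordinators:")
    print(suspicious[["classroom", "boys_present", "girls_present"]])
```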

Lesson 3: Don’t underestimate logistical challenges

As enumerator training finished, my “build rapport” mantra gave way to a fixation on logistics. Our classrooms are very different from a typical primary school in Ashanti. We work in some of the most marginalised communities – many don’t have a phone signal, some can only be reached by a motorbike (and it better not rain!), and few can be found on the map. Visiting 60 remote classrooms in five days is a tall order when merely finding these communities can be a challenge.

School surroundings in Hamidu, one of the remote communities our classrooms operate in.

Our goal was to ensure that the EARC team could complete the assessment in five days. That weekend, Angie Thadani (Luminos Senior Director of Programs) and I sat down and listed all the different ways data collection might go wrong, from enumerators not being able to reach teachers over the phone (definitely happened) to enumerators failing to reach the assigned communities (also definitely happened). For each issue, no matter how outlandish, we identified a solution (or three). By the end of the day, we had a Plan B, a Plan C, and a set of simple mechanisms to improve the chances of Plan A succeeding.

The single most effective solution was connecting people. Nothing beats a phone call (once it finally goes through) in which the supervisor tells the enumerator how to get to the community, where to stop and ask for directions, and what kind of vehicle is needed to traverse the terrain. In low-connectivity contexts, WhatsApp is another must-have tool, great for planning the next day’s classroom visits once everyone’s back at the base.


Flexibility is another key ingredient. For example, our teachers and supervisors worked together to start teaching earlier in the day where possible (in Ghana, Luminos classes take place in formal school buildings and thus start in the afternoon once the other students have departed) so that EARC assessors would not have to travel after dark. As assessors became more familiar with the landscape, data collection schedules changed as well – on some days, a single team might assess two classrooms, while on others, reaching a single community could take hours and hours.

Finally, when all else fails, there’s luck. I planned to observe the first day of assessments, arranging to meet EARC assessors in Abotreye at midday. Abotreye is not the hardest-to-reach community we work in. Nevertheless, to get there in time, I had to get up at 3AM, spend hours in a (thankfully air-conditioned) car, and even push it on a particularly bad strip of the road. But I made it to Abotreye in time. The assessors, however, didn’t. Luck came afterwards – I ran into them a few hours later, alone with their backpacks (with no vehicle in sight), seemingly pondering their options. We picked them up, drove them to the nearest classroom, and left a few hours later as they were finishing the day’s work.

On the road to Abotreye, we came across a particularly bad strip of road.

What’s next

Working with such a strong evaluation partner was an incredible experience. The Luminos Fund knew we could trust EARC to deliver high-quality EGRA and EGMA assessments in our classrooms in Ghana. But we also knew that the context we work in is unusual.

Informed by our understanding of the unique features of our classrooms and guided by its extensive experience administering learning assessments in Ghana, EARC completed the Luminos EGRA/EGMA baseline on time. We are yet to receive the final dataset and the accompanying report, but the raw data is already telling a story – one that we will share next time.

Harnessing Data to Help Children Learn: Lessons from the 2018-19 Evaluation of Luminos’ Second Chance Program in Liberia


Lindsey Wang, Luminos Program Analyst

Lindsey Wang is Program Analyst at the Luminos Fund where she is instrumental in program monitoring, evaluation, and reporting. She joined Luminos in 2016 as a Mechanical Engineering graduate of MIT and is entering Harvard Kennedy School this autumn.

I would like to tell you a story. In the Dargweh community of Liberia, West Africa, an 11-year-old girl steps into a classroom for the first time in two years. She attended school previously and can name a few letters of the alphabet but is unable to read even two-letter words. Years helping her mother in the market taught her to perform simple sums in her head, but she doesn’t know how to write any numbers. In her new classroom, she chants and claps alongside her peers, repeating the names of letters, their sounds, and words beginning with those letters. A. Ah. Africa. B. Buh. Bird. A letter. A sound. A word. She memorizes the pattern and steps to the front of the class to lead her classmates in song. Outside, she can hear toddlers from her community chanting along, drawn to the boisterous chorus rising from the cinder block building.

Ten months later, imagine returning to this one-room classroom in Dargweh to find that this 11-year-old girl can now not only identify all 26 letters but also read entire paragraphs about Sammy and his sister Satta. She’s more than happy to tell you that if Yatta has 8 pencils and Abdul has 5 pencils, Yatta has 3 more pencils than Abdul. When she encounters an unfamiliar word, she holds out her left arm and taps it with her right hand, moving from her shoulder to her wrist, one tap for each phonetic sound: shuh, oh, puh. Shop.

In 10 months, thanks to her own tenacity and the Luminos Second Chance program, this girl jumped from near illiteracy to acing a second-grade reading comprehension assessment. Her progress is real, and we have the data to prove it.

As the Luminos Fund’s Program Analyst, I had the great fortune to attend the first week of Luminos Second Chance classes in Liberia in September 2018 and the final week of classes in June 2019. I observed similar advances in dozens of the children I met as I supervised the baseline and endline EGRA and EGMA assessments that measure the learning levels of a sample of students before and after our program. Our Liberian program team—Program Manager Abba Karnga and Program Coordinator Alphanso Menyon—diligently arranged for enumerators (the third-party professionals who conduct evaluations and capture raw data) to randomly sample five students from each of our Second Chance classes during the first week of school. At the end of the 10-month program, those same five students were given the same test by the same enumerator. These kinds of data enable Luminos to identify program strengths and the weaknesses we need to rectify for the next cohort of students. The baseline and endline evaluations are our report card, so to speak.

——————————

Everyone in international development knows that external, independent evaluations are essential, but we may underestimate what it takes to get them right. Leading up to my trip and on my flight to Liberia, I buried myself in lecture notes and slide decks from an evaluation management training I had attended until finally, somewhere over the Atlantic Ocean, I realized that no workshop would prepare me completely for the boots-on-the-ground experience of supervising an evaluation.

The lessons that have fundamentally shaped my approach to managing independent evaluations came not from lectures but from visiting classrooms and speaking with enumerators. Now, with the June 2019 endline evaluation completed, I can reflect on the entire process and share a few of those lessons here.

Pilot the survey instrument. Pilots may not always be possible due to time or resource constraints, but the experience of testing a survey with subjects before it launches is invaluable and will strengthen the actual evaluation. We were fortunate to pilot our survey instrument at a Monrovia government school a few days before the baseline evaluation began. Looking over my field notes, I have pages of scribbles even though our pilot took place during just one afternoon. I jotted down every mistake that enumerators came across in the survey and every set of instructions that students did not understand. I noted the names of enumerators I thought were particularly skilled at putting children at ease. Receiving test data collected during the pilot also made it easier for me to prepare an R script to run data checks on the actual evaluation data as they were received from enumerators each evening. This script was a critical time-saver and allowed me to respond swiftly to data discrepancies and issues that arose in the field.
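My script was written in R; purely to illustrate the kinds of nightly checks it ran, here is a rough Python sketch (the file layouts, column names, and specific checks are hypothetical). It verifies that each evening’s records match the baseline sample: no duplicate students, five students per class, and the same enumerator at endline as at baseline:

```python
import pandas as pd

# Hypothetical export of one evening's completed assessments.
day = pd.read_csv("endline_2019_06_12.csv")    # student_id, class_id, enumerator, ...
baseline = pd.read_csv("baseline_sample.csv")  # student_id, class_id, enumerator

# No student should appear twice in a day's data.
print(day[day.duplicated("student_id", keep=False)])

# Each fully assessed class should contribute exactly the five
# students sampled at baseline.
counts = day.groupby("class_id")["student_id"].nunique()
print(counts[counts != 5])

# Endline records must match the baseline sample, enumerator included.
merged = day.merge(baseline, on="student_id", how="left",
                   suffixes=("_endline", "_baseline"), indicator=True)
print(merged[merged["_merge"] == "left_only"])  # student IDs not in the baseline
matched = merged[merged["_merge"] == "both"]
print(matched[matched["enumerator_endline"] != matched["enumerator_baseline"]])
```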

The enumerator training prior to the September 2018 baseline evaluation

Build relationships with your enumerators. While piloting the survey instrument, I received the most insightful feedback from the twelve enumerators preparing to evaluate our students. Rufus noted that students seemed to struggle to read words not because they didn’t know the words but because the font was too small. Sarah suggested that marking incorrect responses on a paper in front of the child may be discouraging, which prompted the other enumerators to change their own processes and mark their papers under the table. During evaluations, enumerators have a front-row seat and can share more qualitative insights into students’ knowledge and behavior. At the endline, Margaret shared with me—with a beaming smile—that students seemed much more confident in their abilities than they did at the start of the program year, something that wouldn’t have been clear from the data alone. One student, she reported, even corrected her as she tried to demonstrate how to break up a word into its phonetic sounds. Without this direct line of communication to the enumerators, I would have a less nuanced understanding of Luminos results.

Raise concerns early and often. I was nervous going into the baseline evaluation. Was I ready to be an authority on the Luminos program and supervise the enumerator team in the field? I had the Luminos leadership team’s support and they reminded me that, in that room, no one knew more about the program than I did. “Don’t hesitate to raise concerns,” they told me, so I didn’t. I disputed the phrasing of one of the questions. I stressed to enumerators the importance of putting students at ease and reassuring the children that their performance would not impact their enrollment in our program. It surprises me even now how easily the survey team and I fell into a good rhythm. I would observe the enumerators and recommend a change to the survey. The survey firm’s manager would adjust the instrument and the cycle would begin again. This ease is a testament to the survey firm’s professionalism and investment in conducting a rigorous, informative evaluation in service of Luminos’ mission.

Dive in (and be prepared to sweat the details). Did we edit the survey so that both addition and subtraction questions require numeric responses? Does every enumerator know that we will no longer be reading the examples for question 5? The night before the baseline evaluation launched, I caught myself drafting an email to the field coordinators with a few more observations from the pilot only to realize that the enumerators and field coordinators were probably asleep and wouldn’t respond to my emails at 3:00 a.m.! In the end, despite some sleep deprivation, the exhilaration of accompanying the survey into the field kept me motivated. After four hours of driving over pothole-ridden dirt roads – the same pace of work our Liberian colleagues keep every week – I would return to my room to start running data checks, keeping an eye out for enumerator errors and data inconsistencies. In evaluations, as in our Second Chance program as a whole, success lies in the details.

——————————

Sleepless nights, constant survey revisions, and many miles logged on bumpy dirt roads. Conducting evaluations can be tedious and time-consuming, expensive and exacting. Why do we do it?

Data drives decision-making and real-time program enhancements. When mid-year internal monitoring reports flagged that our students were struggling with language arts, the Luminos program team in Liberia acted immediately and restructured the curriculum around phonics. A few weeks later, when facilitators met for our semi-annual training, Luminos staff and curriculum consultants delivered a new training module in phonics that led to increased emphasis on literacy in the classroom. Real-time data collection and analysis enables efficient and agile program improvements. This process helps Luminos fulfill our commitment to deliver high-quality education to students in joyful, welcoming, safe, and instructive classrooms.

Lindsey crunching the evaluation data back in Boston

Data is key to achieving impact at scale. At Luminos, I have seen firsthand how a lean, NGO-operated education program can evolve into broad, government-funded and government-implemented education policy. In Ethiopia, where Luminos also runs classrooms, our academic research partners at the University of Sussex Centre for International Education have rigorously evaluated that program’s pedagogy, implementation, and long-term impact on students’ educational prospects. Excitingly, Ethiopia’s Ministry of Education has now adopted the Second Chance program model as a national strategy to reach out-of-school children, largely due to the rich body of evidence demonstrating our program’s impact. Going into the baseline/endline process in Liberia, I understood that for our Liberia program to follow a similar path to scale, we must produce another compelling body of evidence — beginning with this evaluation.

Not all NGOs are fortunate enough to have strong evaluation partners. Even when they do, evaluations can be expensive, especially for small teams. But, without data, how does an organization self-reflect, implement better strategies, and, frankly, attract more investment? Only through data-driven action, dialogue, and policymaking can the global community address systemic inequalities with sustainable solutions.

When you’re deep in the analysis process—trying to make sense of one thousand data points—it’s easy to lose sight of why data matter. Data matter because our decisions and policies have implications for real people. Data should be the foundation for policymaking, not only to scale effective programs more efficiently but because, in the end, each of those data points represents a person or a community. Remember the young girl who aced the second-grade reading comprehension test? I know her only as Student B014, but she is a reminder that the data we collect are more than a series of numbers. She is a person with dreams and aspirations of her own. She is a daughter. She is a friend.

This autumn, I am taking my experience with data-driven program management to Harvard Kennedy School in pursuit of a Master in Public Policy and will continue working at Luminos on a project basis. In my academic studies and career so far, I have approached international development as an implementer. At HKS, I look forward to bringing my implementer’s lens to the policymaking table. As I transition to this next chapter, I proudly carry with me the humanity and dignity that the Luminos Fund brings to its work, whether around a conference table in Boston or in a one-room classroom in Liberia.
