Using a Retrospective Pre-Post Design to Evaluate Early Childhood Programs

“How do I demonstrate the impact of my early childhood program?”

This is a common question considered by many early childhood professionals and one that we love to discuss here at the Institute for Child Success. 

Designing an evaluation to understand the impact of your early childhood program or system can feel overwhelming and complex. There are many different evaluation designs, each with its own strengths and limitations. What is best for one program, context, or setting may not be best for another.

In a newly published brief, we compare and contrast the “traditional pre-post evaluation design” with the less common “retrospective pre-post evaluation design” to help early childhood programs and organizations consider some of the options that exist for their own program evaluations.  

Traditional vs. Retrospective Pre-Post Evaluation Design

In short, the traditional pre-post evaluation design compares a variable of interest (e.g., skills, knowledge, attitudes, and/or behavior) before the program starts and again after the program ends to examine whether any change took place among the participants.  

The retrospective pre-post evaluation design takes place entirely after the program has ended and uses two sets of questions: one asks about the variable of interest at the current moment (e.g., “How would you rate your skill level now?”), and the other asks participants to reflect back to before the program started (e.g., “How would you rate your skill level before the program started?”). This is another way of examining whether any change took place among the participants.
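To make the contrast concrete, here is a minimal sketch of how data from each design might look and how a simple change score could be computed. All ratings below are invented 1–5 self-ratings for illustration only; they do not come from any real program.

```python
from statistics import mean

# Traditional pre-post: one survey before the program, another after.
traditional = {
    "pre":  [2, 3, 2, 4, 3],   # collected at the start of the program
    "post": [4, 4, 3, 5, 4],   # collected at the end of the program
}

# Retrospective pre-post: a single survey after the program, with two
# question sets ("now" and "thinking back to before the program").
retrospective = {
    "then": [1, 2, 2, 3, 2],   # "How would you rate your skill level before?"
    "now":  [4, 4, 3, 5, 4],   # "How would you rate your skill level now?"
}

trad_change = mean(traditional["post"]) - mean(traditional["pre"])
retro_change = mean(retrospective["now"]) - mean(retrospective["then"])
print(f"Traditional change:   {trad_change:+.1f}")
print(f"Retrospective change: {retro_change:+.1f}")
```

In this invented example the retrospective design shows a larger change, which is the pattern you would expect if participants overrated their starting skills on a traditional pre-survey because they “didn’t know what they didn’t know.”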

Each approach has different strengths and limitations. For example, it is important to consider the potential biases that come with memory recall, or the way that a shift in perspective between two assessment time points can affect the accuracy of the data collected. The evaluation design you ultimately select will depend on your unique context and needs, and on careful consideration of the advantages and disadvantages of each approach.

The following case example highlights the use of the retrospective pre-post design, which can be a great option for many early childhood programs.  

Example: Retrospective Pre-Post Evaluation Design  

Let’s imagine you direct a community-based family center that offers an array of services, including parenting classes. You have decided to adopt an evidence-based parenting program to support young children’s social-emotional development and prevent challenging behaviors. You have done your research and selected the Incredible Years Preschool Parenting Program, which has a robust evidence base showing it is effective and well-liked by families. Even though the program has been shown to be effective, you still want to evaluate its success in your setting and with the community of families you serve.

How do you pick the right approach to your evaluation? You first ask yourself some key questions to help lead your way: 

Question: What is the research question I want to answer?
Answer: You want to know whether parents’ knowledge of social-emotional development improves after participating in the program. You recognize that this is a question of knowledge change. After reading our recent brief, you think that a pre-post design may be a good match for your question of interest.

Question: What resources do I have to dedicate to the evaluation?
Answer: You have a small staff, including two group facilitators who will run the program. They are willing to give questionnaires to the parents as part of the program implementation. You have a small budget you can allocate to the evaluation, including stipends for participants who complete a questionnaire. You have access to Excel for data analysis. You do not have any staff with specific expertise in evaluation or data analysis, so you will need to do the analysis yourself. You think that an evaluation approach that is simple and low cost will be the best fit for your current resources and knowledge background.

Question: Do I need a control group?
Answer: You understand the benefits of a control group, which allows you to more closely tie evaluation findings to the program itself. However, given your available resources and the fact that this program has already had multiple randomized clinical trials showing its effectiveness, you decide that a control group is not a priority at the moment.

Question: Are there any specific evaluation design limitations that are particularly relevant to my program or approach?
Answer: You read about the “response shift bias” limitation of the traditional pre-post design and realize this is definitely relevant to your program and question of interest. You expect parents to gain knowledge about children’s social-emotional development over the course of the program, and it is possible that their pre-program ratings may be an overestimate of their knowledge given that they “don’t know what they don’t know.” There is a risk that a traditional pre-post design may underestimate the program effect because of this.

Given your responses to these key questions, you decide that the retrospective pre-post design is a great fit for your needs: it is low-cost and low-burden (requiring only one survey administration) and avoids the risk of “response shift bias.” Although this approach has limitations of its own, you feel confident that, for your question of interest and context, it will help you understand more about how the program works with the families you serve.
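As a sketch of what the analysis might look like, here is one way to tabulate retrospective pre-post questionnaire results. The scenario above uses Excel; this Python version is just an equivalent illustration, and the participant IDs and ratings are invented (1–5 self-reports of knowledge of social-emotional development).

```python
from statistics import mean, stdev

# Each row: (participant, "before the program" rating, "now" rating),
# both collected on the single post-program questionnaire.
responses = [
    ("P01", 2, 4), ("P02", 3, 4), ("P03", 1, 3),
    ("P04", 2, 5), ("P05", 3, 3), ("P06", 2, 4),
]

# Per-parent change score: "now" minus recalled "before".
diffs = [now - then for _, then, now in responses]

print(f"Mean 'before' rating: {mean(t for _, t, _n in responses):.2f}")
print(f"Mean 'now' rating:    {mean(n for *_, n in responses):.2f}")
print(f"Mean change:          {mean(diffs):+.2f} (SD {stdev(diffs):.2f})")
print(f"Improved: {sum(d > 0 for d in diffs)} of {len(diffs)} parents")
```

Summaries like these (mean change, spread, and the share of participants who improved) are simple to produce in a spreadsheet as well, which keeps the analysis within reach of a program without dedicated evaluation staff.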


Just like there is no one perfect measure, there is also no one perfect program evaluation design. Selecting a design requires thoughtful consideration of your context, taking stock of available resources, and making decisions on how to prioritize designs based on their strengths and limitations. 

For more information about measurement & evaluation, visit our Resources library. You can also join our mailing list for free resources. We also provide personalized measurement & evaluation services for early childhood programs: visit our website to learn more.

For more information on measurement selection and valuable measurement criteria, visit the free IMPACT Measures Tool®, an interactive database of over 400 measures, with research-backed ratings of each measure. Set preferences, filter, compare, and access measures that may be useful for your purpose and community.
