Getting started with a basic Learning Agenda
Updated: Feb 24, 2020
A learning agenda is a broad approach to learning about programs for the purposes of improvement, coordination of evaluation activities, and building a reflective learning culture within an organization. In our work at Aspect, we develop a learning agenda in tandem with a communication plan that identifies stakeholders or groups within the community and what they need to know or do related to our program.
Because we primarily partner with educational institutions (non-profits, school districts, higher-ed), most of the work we do at Aspect focuses on learning outcomes. Most of the learning activities in our evaluations are therefore focused on collecting evidence of learning from participants. However, learning activities can be anything an organization or program undertakes to reflect on their practice and learn about their work. For example, a literature review might be a learning activity that answers the question, “What does it mean to be environmentally literate?”
In this post, I break down the components of the learning agenda and why each is useful, and then discuss why our partner communities need to be part of the conversation.
Learning Questions

What questions can we ask and answer that will enable us to do our best work as effectively as possible?
How can we test and explore our Theory of Change?
In what ways does [program activity] improve [outcome] for [participant]?
How can we best make progress on centering equity in our programs?
Indicators

Indicators are where we might find the answers to our learning questions. What might our participants say or do that would help us answer this question? What evidence should we look for?
For example: What does it look like to center equity in our programs? What does success look like?
This guides our decisions about possible learning activities, where and when we might collect data, or other places we might look for evidence to answer our learning questions.
Learning Activities and Timing
What strategies and activities will provide evidence for each question?
In our evaluation work at Aspect, learning activities are often data collection with program participants, but it really depends on the question asked and the kinds of evidence that are needed to answer it. Some learning questions might be best answered by identifying outputs (e.g., how many teachers were served in professional development), through strategic planning sessions, or through other activities.
Analysis and Communication

What analyses will we need to do? How will we communicate what we learn to others in ways that will support their role with our organization?
This section clarifies the best ways to communicate what we’ve learned to a variety of audiences: internal and external stakeholders, community members, and others who need to know the answers to the learning questions. It’s especially important to focus on how to best inform decision-making. Internally, this might include how program staff use evidence to make program improvements or just plan the next participant session. Externally, an organization might be making the case for more funding or pitching a partnership idea.
In another post, we’ll dive into what it looks like to develop a communication plan that will clarify who our community is and what they need to know or do. In a utilization-focused evaluation that sets out to make evaluation as useful as possible, this step serves as a sort of backwards-planning moment, potentially driving the learning questions, as well as making sure the learning activities and products are going to serve us well.
Key Decisions or Impact
Why is it important to answer this question? What will our community be able to do or decide with what we learn?
In a way, identifying the key impacts of the learning questions helps to justify the evaluation efforts and keep us on track. If the key impacts don’t align with what our stakeholders need to know and do, then we revisit our questions and make sure we’re effectively and efficiently producing useful information about our programs.
An essential reflection as we build a learning agenda is:
“Who gets to decide which questions we ask?”
A longstanding issue in research and evaluation is that we typically “do” research and evaluation to our partner communities (program participants, their families, school communities, neighbors, etc.), instead of with them.
Marginalized communities are often further marginalized when we collect data and then use it to speak for or narrate the experience of others, or worse, weaponize data to further marginalize people. Involving partner communities in building the learning agenda from the ground up is one step we can take to ensure that the questions we’re asking align with the goals of our partners, honoring and amplifying their voices. It also sets the stage to then co-create solutions and opportunities, in our programs and in evaluation. Experts and organizations that do this well use their privilege and platform to create opportunities to collaborate with partner communities in conversations about what success looks like, and to make space for others to be authors of their own stories. In our own work, as white women and evaluators, we're centering this kind of authentic collaboration in all of our client conversations, as well as shifting our evaluations as they get carried out (so it's not just talk).
Stepping back to build a learning agenda can feel like a luxury in a time when there is a constant need for immediate data, but my clients find the process helpful for reflecting on what evaluation is for in their programs and how it will contribute to their overall goals. Creating a team made up of internal and external community members to build a learning agenda helps ensure the evaluation process is meaningful and productive in the long run, not just for us as program providers, but for the communities we work in as well.
Here are a couple of resources to get started with: