Once data is collected, it must be analyzed, interpreted, and digested for its meaning. Next steps involve:
- Drawing evaluative conclusions that describe and assess the quality and value of your program and its components.
- Debriefing and discussing your draft interpretations and patterns with key stakeholders to get their input on the context of the findings and any ‘reality checks’ on the data.
- Providing insights into possibilities, options, and improvements moving forward (for both the program and the evaluation). Programs need a formal plan that articulates when and how data is shared with various stakeholder groups, and how that information is used to improve the program and more effectively meet client expectations and needs.
- Involving diverse stakeholders in discussing the findings and developing the best actions for course correction and for what’s next.
- Deciding on modes of dissemination that allow multiple stakeholders to participate in the sharing.1
As Davidson describes, the most ‘actionable’ evaluations are the ones that follow these six key elements:
- Clear purpose
- The right stakeholder engagement strategy from the start (including a focus on the participants themselves)
- Important, big-picture questions
- Well-evidenced, well-reasoned answers
- Succinct, straight-to-the-point reporting
- Actionable insights and collaborative problem-solving for moving forward2
- Bania, M. (2015). Evaluation of mentoring programs. Presentation for the Ontario Mentoring Coalition, Toronto, ON.
- Davidson, J. (2012). Actionable evaluation basics. New Zealand: Real Evaluation.