Program Planning with Evaluation in Mind
At Canva University we believe in demonstrating impact through learning outcomes. That means we consistently measure and report on the impact of our programs in a way that is meaningful to our stakeholders. We do this by focusing on learning transfer, which is the process of putting learning to work in a way that improves performance.
To find out more about the importance of focusing on learning transfer, read more about the …
To do this, before any program is established, our learning designers/partners work with stakeholders to identify the learning needs, objectives and measurement criteria for success using this @Program Planning Doc (demo).
Bloom's Taxonomy and the Kirkpatrick Model
Globally, we use Bloom's Taxonomy to create our objectives and use the Kirkpatrick model to measure our success across all programs:
Level 1: Reaction – what is our learners' initial reaction to the material, facilitator and experience
Level 2: Learning – how much information was effectively absorbed during the training, measured against the learning objectives of the experience, session or program
Level 3: Behaviour – how much the training has influenced the behaviour of participants, and how they've applied this information on the job
Level 4: Results – what impact this training has had at the business level
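To make the four levels concrete, here's a minimal sketch (Python, purely illustrative and not part of our actual tooling) of how evaluation data for a single program could be organised by Kirkpatrick level. The field names, metrics and numbers are assumptions for the example only:

```python
# Illustrative only: organising one program's evaluation data by Kirkpatrick level.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class LevelResult:
    level: int                                          # Kirkpatrick level 1-4
    description: str                                    # what the level measures
    scores: list[float] = field(default_factory=list)   # e.g. survey ratings or quiz results

    def summary(self) -> float:
        # Average score for the level, or NaN if nothing has been collected yet.
        return round(mean(self.scores), 2) if self.scores else float("nan")

# Hypothetical data for a single program (not real Canva numbers).
program_evaluation = [
    LevelResult(1, "Reaction: post-session satisfaction survey", [4.2, 4.6, 3.9]),
    LevelResult(2, "Learning: quiz scores against the objectives", [0.80, 0.90, 0.75]),
    LevelResult(3, "Behaviour: manager-observed application on the job", [3.5, 4.0]),
    LevelResult(4, "Results: change in the business metric we targeted", [0.12]),
]

for result in program_evaluation:
    print(f"Level {result.level}: {result.description} -> {result.summary()}")
```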
Here's an example of the standard we expect in our learning measurement for onboarding:
When measuring the success of the onboarding experience (30 – 90 days), we aim to measure enablement, which refers specifically to how quickly people become productive and how they experience our culture. To measure that, we need to break down the things that help people do their job, as demonstrated below.
Our formal reporting is done seasonally in the last week of that season and is submitted to the Head of Learning for discussion/review. The reporting template will be circulated in a calendar invite but will roughly look like this. It is then sent to our stakeholders for feedback and input. We expect our learning designers and partners to report on:
Overall satisfaction of the program
How many people went through the program
Summary of what's working well / what could be improved
Risk factors: Are there any areas that are at risk of breaking, or are not performing as we expected? Are there any upcoming initiatives that might interrupt learning transfer for participants, or impact on the training overall?
Sample size of respondents
Specific Demographics: Makeup of specialties (if relevant) of the participants, tenure, team
Impact on business outcomes
Data Collection Methods
Interviews: Interviews are a means of gathering 1:1 information in a structured or semi-structured way
Focus groups: Focus groups are conducted as a means of gathering thematic information in a scaled manner
Online surveys: Online surveys are used to gather quantitative and qualitative information from a larger and often remote audience
Profiling and assessment: This can be used when current and future capability needs to be identified. It can also be used to inform learner personas
Facilitated workshops: Used when needing to engage and identify themes from a key stakeholder group
Observation: Used to understand how work or tasks are performed, particularly when looking at job re-design or task-related information
Change readiness, assessment, audit, review: Often used when reviewing processes, ecosystems and organisational capabilities
Desktop reviews: This includes reviewing existing client frameworks, models, strategy and job descriptions/role profiles that will inform other areas of the analysis
Other thoughts and ideas around data:
6 Month Focus Group: We ask newbies directly about the impact of the training they've experienced, referencing the scores they gave us in their bootcamp surveys when curating our questions
Comparing against the industry benchmark: This will serve as our control group when we run A/B testing
A/B testing: segmenting people into these quadrants and then measuring how they fare across different skills / competencies (see the sketch after this list):
have taken the mentoring course and have mentored outside of Canva
have not taken the mentoring course but have mentored outside of Canva
have not taken the mentoring course and haven't mentored outside of Canva
have taken the course but have not mentored outside of Canva
Pilots: Google measures the effectiveness of its programming by comparing a pilot group to people who haven't received the training. They did this with their manager training program.
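As a loose illustration of that quadrant comparison, here's a small sketch using pandas. The column names ("took_course", "mentored_externally", "competency_score") and the scores are hypothetical; real data would come from our surveys and people systems:

```python
# Illustrative sketch of the 2x2 quadrant comparison described above.
import pandas as pd

# Hypothetical responses: whether each person took the mentoring course,
# whether they have mentored outside of Canva, and a 1-5 competency rating.
responses = pd.DataFrame({
    "took_course":         [True,  True,  False, False, True,  False],
    "mentored_externally": [True,  False, True,  False, False, True],
    "competency_score":    [4.5,   3.8,   4.1,   3.2,   4.0,   3.9],
})

# Group people into the four quadrants, then compare headcount and average score.
quadrants = (
    responses
    .groupby(["took_course", "mentored_externally"])["competency_score"]
    .agg(["count", "mean"])
    .rename(columns={"count": "n", "mean": "avg_score"})
)
print(quadrants)
```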
Poorly Performing Programs
When learning and development initiatives and programs fail to achieve their learning objectives of either an observed change in people's behaviour or a shift in mindset, that's when we know there's something wrong. As learning practitioners, it's important to set clear objectives upfront about what we want people to do as a result of the program, and what we expect to have changed in the organisation from 1 month to 6 months after participants have finished.
Some signals for poorly performing programs may be:
Low engagement, either in the form of participation or clickthroughs: this may mean we're not directly solving the problems that people have. We may not have been clear during our performance consultations about what prime performance should be, and may not be completely aware of the challenges people face within their roles.
Walt Disney Learning and Development Consultant David James says:
We've been fooled that classroom training is actually learning in the same way that we've chosen completion as a high metric of learning. It's simply presence and exposure. What we should be doing instead is fully understanding what people are trying to do and giving them support when they need it, tailored to their context and role. Let's measure efficacy, not engagement.
Low uptake of the mindsets, skills or behaviours that we've identified as learning objectives for people who attend programs.
Why Leadership Training Fails – and What to Do About It (Beer et al., 2016): "Participants in corporate education programs often tell us that the context in which they work makes it difficult for them to put what they've been taught into practice"
In most follow-up studies after leadership learning and development programs, it was found that most supervisors had regressed to their pre-training views. If the system doesn't change, it will set people up to fail.
Unclear direction on strategy and values communicated in the programs.
Most of our programs and initiatives don't specify concrete, measurable behaviours and standards. For example, our definition of a mentor is very subjective and immeasurable. Is the purpose of being a mentor to connect someone to the right people, or is it to upskill someone rapidly in technical skills, potentially at the expense of your own output? Are they meant to coach you through problems, or act as a cultural ambassador helping you feel more included? Or all of these things? Mentoring has been reported to take up to 30% of a mentor's time; what happens when someone is spending 70% of their time mentoring someone?
Data Experimentation
If we were ever to get more experimental with our data, with the aim of forecasting and predicting trends and determining correlations between initiatives and success metrics, we would try to follow standard data best practices and consider these questions:
How much data should we aim to collect, and what proportion of people? (Sample size: when comparing two cohorts you have to have one control group and one variant, and each also has to be large enough → roughly 100, so the normal distribution and the law of large numbers apply.) See the sketch below.
What other data collection methods do we need to get a holistic understanding of our programs?
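To show why the control/variant split and the "large enough" sample matter, here's a hedged sketch of one way to compare two cohorts. The data are simulated, and the two-sample t-test is just one reasonable option rather than a mandated standard:

```python
# Illustrative sketch: comparing a control cohort to a variant cohort once
# both samples are reasonably large (~100 each, per the note above).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=3.6, scale=0.8, size=100)  # e.g. ramp-up scores without the training
variant = rng.normal(loc=3.9, scale=0.8, size=100)  # e.g. ramp-up scores with the training

# Two-sample t-test: is the difference between cohorts likely to be more than noise?
t_stat, p_value = stats.ttest_ind(variant, control)
print(f"control mean={control.mean():.2f}, variant mean={variant.mean():.2f}")
print(f"t={t_stat:.2f}, p={p_value:.3f}")  # a small p-value suggests a real difference
```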
Examples to research:
Google does a good job with learning data, so we might need to research there
Some ideas for eng onboarding involve:
1. In-session measure of learning
2. Overall satisfaction with bootcamp and belonging to Canva / their team
3. Mentor / mentee reported accreditation of all material learned and demonstrated
4. Mentee confidence in speaking up and asking for help
How does this apply to you?