
- Indicator 4: Analyse and appropriately use learner information which has been gathered formally and informally.
- Indicator 5: Design and plan the best learning programme.
Remember - Evaluation is something we all do, all of the time.
In fact - evaluation is a natural, normal process in any service - we do it all the time when we formally or informally discuss our work and assess parts of it. This is known as organic evaluation. It is really a part of our lives - we evaluate most things we do and decisions we make. It is human nature to examine and use experience to advance ideas, avoid mistakes and improve the way we do things.
Evaluation is a much less daunting process if we can accept it as a part of our service which helps us to respond to the needs of the organisation and the community, rather than an exercise done purely to placate or manipulate our funders.
Evaluation is a positive part of the whole service rather than a separate, compartmentalised exercise outside the context of the "real work".
Even though evaluation is something we do all the time, there are three commonly used terms it is useful to be familiar with. These are: formative evaluation, process evaluation and outcome evaluation. Each type differs in the stage of the programme on which it focuses.
Covered in this article:
- Types of evaluation
- Why evaluate?
- Approaches to evaluation
- An "evaluation plan"
- Potential evaluation pitfalls
- Some methods of evaluation
Types of evaluation
Formative Evaluation
Formative evaluation is the collection and feedback of information, for use in developing and improving the programme as it is designed and implemented (see Turner et al., 1992, p. 8).
Process Evaluation
Process evaluation attempts to provide clear documentation of what the programme consisted of in practice, and answers questions of how and why a programme produced the results it did.
Outcome Evaluation
Outcome evaluation assesses the programme’s effects, allowing judgement on whether programme objectives have been achieved (see Turner et al., 1992, pp. 10-11).
Why evaluate?
Evaluation of programmes, projects, procedures, or performances can be used to:
- check for success
- indicate quality
- provide new or better ideas
- improve the service
- add to the learning and delivery of the programme
- add to the group process and status
- assist in networking and developing relationships
- assist with marketing and promoting the service.
Effective evaluation benefits everyone involved in the exercise.
The service provider gets:
- improved confidence to deliver the service
- improved methods of delivery
- more information about what is wanted
- clarification of objectives or programme goals
- clarification of the needs of the end users and of the service provider
- new relationships and networks
- improved credibility and status.
The client (the person or community using the service) gets:
- improved service that delivers what is required
- education about the service, its potential, and its usefulness
- information to assist in making informed decisions.
The funder gets:
- accountability - knowing how funds have been spent
- knowledge about what future funding needs and trends might be
- information about the service and the environment in which it operates
- good publicity from a successful project.
Approaches to evaluation
There are two approaches that can be used in evaluation - qualitative and quantitative. Usually both approaches are used to gain a complete picture.
Quantitative evaluation is about collecting numbers to understand how well something is working. It looks at things like how many people took part, how often something happened, or how much something changed. For example, you might count how many learners finished a course or ask people to rate their experience on a scale from 1 to 5. This type of evaluation helps show patterns or trends using facts and figures.
Qualitative evaluation is about understanding people’s thoughts, feelings, and experiences. Instead of numbers, it uses words and stories to explain what happened and why. This might include comments from a feedback form, interviews with learners, or notes from someone watching a session. It helps give a deeper understanding of what worked well and what could be improved.
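The quantitative side described above comes down to simple arithmetic on counts and ratings. As a minimal sketch (all figures below are invented purely for illustration):

```python
# A hypothetical sketch of summarising quantitative evaluation data.
# The numbers are made up for illustration only.

enrolled = 40                        # learners who started the course
completed = 31                       # learners who finished
ratings = [4, 5, 3, 4, 4, 5, 2, 4]   # 1-5 experience ratings from a feedback form

completion_rate = completed / enrolled * 100
average_rating = sum(ratings) / len(ratings)

print(f"Completion rate: {completion_rate:.0f}%")
print(f"Average rating: {average_rating:.1f} / 5")
```

Collected consistently, these few numbers are enough to show a pattern or trend from one programme to the next.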
Key evaluation questions
Workshop participants identified several key questions that community groups need to consider as a planning tool before designing an evaluation. Depending on the answers to these questions, community groups can decide whether the focus of their evaluation falls into formative, process and/or outcome evaluation. It might also be useful to workshop these questions as a way of getting everyone involved committed to the process.
What are the questions you might ask to begin with?
- Why do we need to evaluate?
- Who will own the evaluation?
- What will make it most useful?
- Do we know enough about our organisation and the project to be able to provide accurate information?
- Does everyone involved know what we are trying to do with evaluation and why we are wanting to do it?
- Who should be involved?
- Who will use the information?
So, you have decided to go ahead - what are some of the questions you might look at?
- What works successfully and what does not?
- What might work better?
- What has been achieved?
- What opportunities have opened up?
- What opportunities have been missed?
- What resources and costs were involved?
- How can we be more cost-effective?
- What critical issues arise?
- What needs to be included in future planning?
- How do we use this information to maintain, support or withdraw the service?
- Has this clarified the goals and objectives of the service?
An “evaluation plan”
As an outcome of the process outlined above, your group will have developed the basis of an “evaluation plan.” You need to be able to state the following:
- purpose of the evaluation
- stakeholders involved/their needs
- time frames
- costs and other resources involved
- consultation with people affected
- methods to be used
- personnel required
- reporting/presentation format
- utilisation/distribution of information for maximum effect.
Potential evaluation pitfalls
Evaluation can raise issues which could cause difficulties and create resistance to the process and idea of evaluation. With foresight they can be managed to prevent major problems.
- People involved with the service or project feel insecure or threatened by evaluation.
Information and involvement are the key to getting people on board. Make sure that people are given as much information about the nature of the exercise as possible, preferably before they hear it through a grapevine where rumour and speculation can raise anxiety levels. People need an opportunity to ask questions, get answers and, best of all, contribute to designing the evaluation plan and process.
- Vagueness and conflict about what the exercise is supposed to achieve.
Think carefully about what information you need and want, then make an evaluation plan:
- State exactly what the evaluation will cover
- Set out why it is being done, what will be done, how it will be done, who will be involved and what outcomes you hope for
- Specify how you will present and use the information gathered through evaluation.
- Time, effort, and costs involved.
It needs to be done, so it needs to be planned and budgeted for.
Spending time and money on evaluation can eventually save money through the development of more effective delivery systems and better use of resources.
Good planning ensures that time and money are not wasted on unnecessary or unfocussed activities.
Some methods of evaluation
All methods have advantages and disadvantages. It is up to the organisation to identify the method which best suits its needs, the available resources, the purpose for which it is going to be used and the context of the evaluation.
You need to know:
- is it a one-off exercise or part of a larger process?
- what is the main purpose of the evaluation?
- what time, finance and resources are available?
- will one method or a variety provide the best results?
According to a paper by Alan Arnott (2023), a combination of two or more methods which complement each other can make the evaluation exercise more productive: for example, a combination of quantifiable methods, interviews, and snapshots.
Some suggested methods
Quantifiable Information
Collection of data. Figures relating to the organisation and its service that can be linked into other data such as demographic information from Statistics New Zealand. It can be useful to provide absolute numbers such as numbers of clients, numbers of Māori, women and unemployed using the service compared to the total number of such people in your area. Quantitative information can be particularly useful if collected over time to show the changes in an organisation, its service, and clients’ circumstances. Your organisation will find collecting information over time is also useful for planning the service and strategising for the future.
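Comparing your own figures with area totals, and tracking them over time, is the kind of calculation described above. A hypothetical sketch (all numbers invented; real figures would come from your own records and Statistics New Zealand):

```python
# Hypothetical sketch: comparing service use with an area population figure,
# and showing change over time. All numbers are invented for illustration.

area_unemployed = 1200   # unemployed people in the area (e.g. a census figure)

# unemployed clients who used the service, by year
clients_unemployed = {2021: 84, 2022: 110, 2023: 150}

for year, clients in clients_unemployed.items():
    reach = clients / area_unemployed * 100
    print(f"{year}: {clients} clients, {reach:.1f}% of unemployed people in the area")
```

A short table like this, kept up to date each year, shows at a glance whether the service is reaching more of the people it is intended for.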
Performance indicators
A useful management tool to ensure that everyone is involved at the beginning of the project or programme. Indicators should be set with the client group so that everyone is clear about what is being measured and what indicates a successful programme. Indicators and the process for assessment need to be agreed to by all concerned.
Evaluation portfolio
A folder/file that keeps information such as news articles, letters of commendation. Such information can provide a profile of the organisation and is particularly useful for process and outcome evaluation.
Keep copies of documents such as:
- newsletters
- reports
- brochures/programmes
- speeches/presentations
- submissions
- drawings, posters
- news articles.
Use for:
- preparing and giving speeches/seminars/presentations
- brochures/advertising
- reporting to funders or community
- publicity/profile building
- briefing inexperienced staff
- showing to the community group with which you are working.
Action research
Action research is a means of planning, developing, and evaluating programmes or courses while they are operating. The group is involved in devising its own methods and activities. The use of participants’ knowledge is central with the group an integral feature of the process.
Organisations should use an outside facilitator with action research skills, as this enables everyone to participate while keeping the process on track.
Action research is a formal process of structured discussion involving planning, implementing, collecting information/evidence, and reviewing.
Sampling
There are four types of sampling outlined here:
- Random
- Quota
- Focussed
- Convenience or Low Cost (see Turner et al., 1992, p. 34).
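Two of these approaches can be illustrated with Python's standard library. A minimal sketch, using an invented client list:

```python
# Illustrative sketch of random and convenience sampling.
# The client list is hypothetical, purely for demonstration.

import random

clients = [f"client_{i}" for i in range(1, 101)]  # 100 hypothetical clients

# Random sampling: every client has an equal chance of being selected.
random_sample = random.sample(clients, 10)

# Convenience (low cost) sampling: take whoever is easiest to reach,
# e.g. the first ten clients on the list.
convenience_sample = clients[:10]

print(random_sample)
print(convenience_sample)
```

The random sample is the more defensible of the two, since a convenience sample may over-represent whichever clients happen to be easiest to contact.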
Surveys
Surveys are presented in questionnaire format and can be administered with or without face-to-face contact (for instance, online surveying). Careful design and structure are vital, as they produce information that will be perceived as credible. Get help if you are not confident (university, polytechnic etc.). Run a trial/pilot to make sure that your questions are clear and that what you are asking is understood by participants. Keep questions clear, simple, and unambiguous.
Focus Groups
An effective way of getting wide ranging as well as specific information. Focus groups are structured discussion groups for specific topics or issues. Clarify the topic and ensure good facilitation (someone with no involvement with the organisation is often best).
Recording day-to-day comment
Acknowledge and record the informal processes by which effectiveness of parts of the service are assessed. It is a way of using the structures and networks of an organisation to ensure feedback. Encourage comment and feedback from people involved at all levels. Take time to reflect on criticism.
Make opportunities for individuals and groups to discuss and report back. Ensure the confidentiality of response.
An external "audit" or review
Such an exercise may be driven from outside the organisation - such as by the funder, or it may be that the organisation contracts an outside group to conduct a review.
It is best to treat it like a contract situation. Establish and clarify the terms of reference, agree methods, check data and interpretation, set time frames and meet regularly during the exercise to review the process.
Such an approach should not be done “to” or “on” the organisation but as a “partnership” with the organisation.
Further information to assist:
- Guenther, J., Arnott, A., & Williams, E. (2009). Measuring the unmeasurable: Complex evaluations in the Northern Territory (Working Paper Series 1, Paper 1). CoValuator.
- Stats NZ. (2019). A guide to good survey design (5th ed.).
- Dick, B. (1997). Qualitative evaluation for program improvement. Retrieved from https://www.aral.com.au/resources/qualeval.html
- Duigan, P., Dehar, M., & Casswell, S. (1992). Planning evaluation of health promotion programmes: A framework for decision making. Auckland: Auckland School of Medicine.
- Turner, A., Casswell, S., & MacDonald, J. (1992). Doing evaluation: A manual for health promotion workers. Auckland: Auckland School of Medicine.
The National Resource Centre for Adult Education and Community Learning ran a workshop in May 1996 on evaluation. Participants at the evaluation workshop who helped contribute to this article:
- Pauline Kislick and Tui Tararo - Wellington City Council
- Judi Altinkaya - National Association of Home Tutor Schemes
- Eliz Mortland - Ruapehu REAP
- Barbara Turner - NZ Council of Social Services
- Rebecca Forbes - New Zealand Prostitutes Collective
- Merlin Sansom - Porirua City Council
- Brenda Smith - Baseline
- Jan Prankerd - National Resource Centre
- Janet Carlyle & Lyndsey McAteer - Hutt City Council
- Barbara Lambourn & Jennie Darby - Trustees, National Resource Centre
- Mike Roguski - Hutt City Council