In the Evidence Portal, each program is given a ‘strength of evidence’ rating. This helps us understand the quality and volume of evidence that sits behind each program. The Evidence Portal uses the Evidence Rating Scale detailed in the Technical Specifications.
There are 7 different ratings a program can receive:

- Well supported by research evidence
- Supported research evidence
- Promising research evidence
- Mixed research evidence (with no adverse effects)
- Mixed research evidence (with adverse effects)
- Evidence fails to demonstrate effect
- Evidence demonstrates adverse effects
The rating a program receives depends on the:

- direction of effect (positive, neutral or negative) found for each outcome domain
- number and type of studies that have evaluated the program (for example, whether a systematic review with a meta-analysis has been conducted).
The approach for rating evidence drew on and adapted the methods of other publicly available evidence rating scales, including the Early Intervention Foundation Evidence Standards and the What Works Clearinghouse Procedures and Standards Handbook (Version 4.0) (United States Department of Education, 2017).
To rate a program, we first rate each outcome domain the program reports on. We review the outcomes each study reports for the program, identify the direction of effect, and then identify where each outcome sits on the Evidence Rating Scale. See Section 2.6.4 in the Technical Specifications.
Once we’ve rated the evidence for each outcome domain, we then give each program an overall evidence rating. This is done by looking at the outcomes the program contributes to and how many studies have evaluated the program to identify where the program sits on the Evidence Rating Scale. See Section 2.6.5 in the Technical Specifications.
A program is given the rating ‘well supported by research evidence’ if a systematic review with a meta-analysis has been conducted on the program and the meta-analysis found that the program had a positive impact on client outcomes.
A program is given the rating ‘supported research evidence’ if at least two evaluations of the same program have been conducted and those evaluations show the program had a positive impact on client outcomes. A program will NOT be given this rating if any adverse (negative) outcomes are found.
A program is given the rating ‘promising research evidence’ if at least one evaluation shows the program has a positive impact on client outcomes. A program will NOT be given this rating if any adverse (negative) outcomes are found.
Programs can have mixed evidence if at least one client outcome was positive and another was neutral and/or negative.
Programs with at least one negative outcome alongside a neutral or positive outcome are rated as ‘mixed research evidence (with adverse effects)’. Caution should be used when implementing these programs, because they could have a negative impact on a particular outcome for your clients. However, we can still use information about these programs to understand what doesn’t work.
Programs with a combination of positive and neutral outcomes are rated as ‘mixed research evidence (with no adverse effects)’. Caution should also be used when implementing these programs. You should carefully review the client outcomes the program can have a positive impact on and the outcomes it is unlikely to achieve.
A program is given the rating ‘evidence fails to demonstrate effect’ if an evaluation shows that the program did not have a positive or negative effect on client outcomes.
While the program may not be effective in the specific context it was evaluated in, information about that program could still be useful to help us understand what does and doesn’t work for our clients. If the evidence shows that a program has no benefit, then it is recommended to consider alternative programs or activities.
A program is given the rating ‘evidence demonstrates adverse effects’ if an evaluation shows that the program only had a negative impact on client outcomes.
It is not recommended that these programs are implemented. However, they have been included on the Evidence Portal so we can understand what programs and activities don’t work.
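The rating rules above can be summarised as a simple decision procedure. The sketch below is an illustration only, not the Portal's implementation: the `ProgramEvidence` fields and the `rate_program` function are hypothetical summaries of a program's evaluation findings, the ordering of the checks is one reading of the rules, and the real rating process also considers study quality and the outcome-domain ratings described in the Technical Specifications.

```python
from dataclasses import dataclass


@dataclass
class ProgramEvidence:
    """Hypothetical summary of a program's evaluation findings (illustration only)."""
    positive_outcomes: int        # outcome domains with a positive direction of effect
    neutral_outcomes: int         # outcome domains with no detectable effect
    negative_outcomes: int        # outcome domains with an adverse (negative) effect
    positive_evaluations: int     # separate evaluations showing a positive impact
    meta_analysis_positive: bool  # a systematic review with meta-analysis found a positive impact


def rate_program(e: ProgramEvidence) -> str:
    """One reading of the rating rules described above (not the Portal's actual procedure)."""
    # Only negative outcomes: the evidence demonstrates adverse effects.
    if e.negative_outcomes and not (e.positive_outcomes or e.neutral_outcomes):
        return "Evidence demonstrates adverse effects"
    # Any negative outcome alongside neutral or positive outcomes: mixed (with adverse effects).
    if e.negative_outcomes:
        return "Mixed research evidence (with adverse effects)"
    # No positive or negative outcomes: the evidence fails to demonstrate an effect.
    if not e.positive_outcomes:
        return "Evidence fails to demonstrate effect"
    # Positive and neutral outcomes, no adverse outcomes: mixed (with no adverse effects).
    if e.neutral_outcomes:
        return "Mixed research evidence (with no adverse effects)"
    # Only positive outcomes from here on.
    if e.meta_analysis_positive:
        return "Well supported by research evidence"
    if e.positive_evaluations >= 2:
        return "Supported research evidence"
    return "Promising research evidence"


# Example: two evaluations, positive outcomes only, no meta-analysis.
print(rate_program(ProgramEvidence(3, 0, 0, 2, False)))  # Supported research evidence
```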
Rating | Evidence Rating Scale description
---|---
Well supported by research evidence | A systematic review with a meta-analysis of the program found a positive impact on client outcomes.
Supported research evidence | At least two evaluations of the program found a positive impact on client outcomes, and no adverse (negative) outcomes were found.
Promising research evidence | At least one evaluation of the program found a positive impact on client outcomes, and no adverse (negative) outcomes were found.
Mixed research evidence (with no adverse effects) | Evaluations found a combination of positive and neutral client outcomes, with no adverse (negative) outcomes.
Mixed research evidence (with adverse effects) | Evaluations found at least one adverse (negative) client outcome alongside neutral or positive outcomes.
Evidence fails to demonstrate effect | Evaluations found the program had neither a positive nor a negative effect on client outcomes.
Evidence demonstrates adverse effects | Evaluations found the program had only a negative impact on client outcomes.

*On this rating scale, high-quality indicates studies with low-to-moderate risk of bias.