Our Managing Director, Christine, co-authored a paper on the economics of assistive technology (AT) programmes and policies with three international experts in the field. It investigated how best to determine the value of an assistive technology programme by comparing total costs against overall impacts.
In summary, it noted that limitations in current economic assessment methodologies, and the resulting opacity of programmes’ overall impact:
• constrains current funding below efficient levels,
• limits the pace and extent of programme improvements, and
• makes comparative investments more difficult to assess.
There is an easier-to-read summary of this blog post at the bottom of this page if you prefer.
Some alternatives were identified and recommended for further testing across a range of global AT programme contexts. The peer-reviewed paper was presented at the Global Report on Assistive Technology (GReAT) Summit at the World Health Organisation in Geneva in 2019.
If you are interested in reading the full paper, you can download it as a PDF from the Day 1 GReAT Consultation proceedings here. It appears on pages 248–268.
How can current economic assessment approaches limit AT programmes?
The authoring team looked at how current economic approaches are used today, and where they are most valuably applied. They also investigated where alternative approaches can help generate more accurate and useful measures of a programme’s effectiveness against defined objectives, and of net or comparative return on investment. These alternative approaches are most relevant to the assessment of non-financial impacts, such as an individual’s quality of life or greater social cohesion. We outlined approaches that we felt could apply to both public and private programmes, and across a range of contexts from high- to low-income environments.
A more consistent and robust economic assessment approach for AT programmes would increase the ability to select higher impact programmes, improve efficiency and delivery impact, unlock increased investment in AT, and optimise resource allocation over time.
This project was done pro bono, with no sponsors or external influences on the research or findings. Open Inclusion, and Christine specifically, were delighted to contribute, bringing her background in micro-economics and in programme strategy, design and delivery to this important topic. It gave us the opportunity to collaborate with some wonderful global experts, researchers and practitioners in the field of assistive technology, and to deeply consider and debate global leading practices in disability-inclusive resource allocation.
Christine’s co-authors on the paper were:
- David Banes, who has worked in the field of assistive technology throughout his long and illustrious career. He has a deep understanding of international and cross-cultural implementation, and of practical considerations, from living and working in the Middle East and Europe and from working in situ on major projects in Africa and Asia. He is a regular contributor to, and contractor with, global organisations such as the UN, ILO and WHO.
- Dr. Natasha Layton, an occupational therapist, tertiary educator and practitioner, as well as a leader at ARATA (the Australian Rehabilitation and Assistive Technology Association) and an academic researcher at Swinburne and now Monash Universities in Australia. She is also a regular contractor with the WHO in the field of assistive technology provision. She brought deep knowledge of how programme delivery can influence individuals, and of the challenges of assessing this.
- Siobhan Long, of Enable Ireland, a very experienced practitioner and leader of the national body in Ireland charged with translating assistive technology programmes and policies into practice – assessing and delivering solutions to individuals in the community. She leads the National AT Training Service.
The key issues
The basic thesis of the paper is that the standard economic assessment approaches used to define return on investment (ROI) are incomplete and potentially inaccurate measures of project or programme impact, for three primary reasons:
- Failure to accurately capture the “return”. The “investment”, or cost, side of an ROI calculation is usually quite clear, but converting all impacts (the “return”) into a financial basis for comparing costs against benefits is often neither possible nor accurate. For some areas of AT impact, financial measures are appropriate and feasible; for others, they are feasible but often inaccurate; and for still others, converting to a financial basis cannot be done credibly.
- Maintaining measurement attention over time. Impacts occur across very long time horizons, up to decades. For example, an intervention that helps a child gain confidence and enjoyment in learning in their teens may improve their employability and income throughout their working life. What may be a minor uplift in grades (measurable) and more positive teacher assessments (subjectively and non-numerically measured) may have very significant long-term payback for the individual, their family and close community, and society. Impacts therefore need to be captured over very long time horizons, and assessments need to be efficient to minimise the burden of doing so.
- Including quality-of-life improvements. Some impacts, particularly those related to a personal sense of agency and quality of life, are not easily converted into a measure that can be compared across individuals or over time, or turned into an accurate financial metric. These need a new, more consistent approach to capture them and allow comparison against costs, so that total ROI and impact relative to other projects can be known.
These difficulties tend to systematically undervalue the real net benefit of interventions and, as a result, lead to endemic underinvestment in assistive technology programmes globally.
This is because the costs and effort of implementing a programme are relatively easy to assign a financially measurable market value, yet the positive benefits it enables are much more difficult to measure and assess comprehensively and accurately.
It also limits understanding of comparative options and of progress towards more efficient solutions over time. If you cannot tell which of two programmes was better in the specific contexts in which they were applied, it is hard to continually improve, optimise resource allocation, and ensure the positive impact and contextual appropriateness of possible interventions.
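To see how omitted benefits distort the picture, here is a minimal arithmetic sketch. All figures are hypothetical and purely illustrative; they are not drawn from the paper or any real programme.

```python
# Hypothetical figures for a single AT programme (illustrative only).
costs = 100_000                  # full provision-pathway costs
financial_benefits = 80_000      # benefits that can credibly be monetised
non_financial_benefits = 60_000  # rough value of quality-of-life gains, often omitted

def roi(benefits, costs):
    """Simple return on investment: net benefit relative to cost."""
    return (benefits - costs) / costs

# Counting only monetisable benefits, the programme looks like a net loss...
print(f"Financial-only ROI: {roi(financial_benefits, costs):.0%}")  # -20%
# ...but including the harder-to-measure impacts reverses the conclusion.
print(f"ROI with wider impacts: "
      f"{roi(financial_benefits + non_financial_benefits, costs):.0%}")  # 40%
```

The same programme flips from apparently loss-making to clearly worthwhile once the non-financial return is counted, which is the undervaluation mechanism the paper describes.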
A recommended approach for further research and testing
Our recommended approach, requiring further research and testing, was to improve the assessment in the following ways.
- Measure costs across the full provision pathway (which can be quite long, in some cases extending over years), from assessment to provision, setup, use, and maintenance or support. Measure outcomes across a significantly longer time horizon too, as relevant to each programme.
- Measure impact and outcomes of the intervention to the individual recipient, their immediate family, community, carers and friends and the broader society, recognising changes in social capital such as reduced inequality and greater social cohesion.
- Assess the net impacts using a currency / financial value approach only where it is possible to do so in a way that is a relatively complete and accurate representation of that change.
- Assess the net impacts using non-financial measures where outcomes cannot be financially translated easily, accurately, or consistently, such as a sense of personal agency and choice. Any such measure needs to capture change against a pre-assessed baseline.
- Impacts that are not captured financially may still use consistent standardised measures (such as years of schooling attained or final year results) noting changes against a pre-programme baseline.
- Some impacts may need to be measured using behavioural or attitudinal assessments against a baseline to identify the direction and extent of change.
- Different programmes may validly prioritise and weight one outcome ahead of another. For example, a programme aimed at improving mental health and wellness may prioritise changes in the sense of mental wellness over educational, employment, or physical-health outcomes. Another programme aimed at improving specific skills would prioritise attainment of those skills ahead of other possible outcomes, such as changes to social connections.
- Use as consistent a set of measures, and approaches to gathering them, as possible across programmes of varying scale, duration, resource context, culture, and outcome priorities. This allows greater comparability of programmes and a deeper understanding of what impacts were achieved.
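One way the points above could be operationalised is sketched below: score each outcome area against a pre-programme baseline on a common scale, then combine the changes with programme-specific weights. The measure names, weights, and scores are illustrative assumptions, not figures from the paper.

```python
# Hypothetical outcome areas, each scored on a common 0-10 scale at
# baseline (pre-programme) and at follow-up. Names and numbers are
# illustrative only.
baseline  = {"personal_agency": 3.0, "education": 5.0, "mental_wellbeing": 4.0}
follow_up = {"personal_agency": 6.0, "education": 6.0, "mental_wellbeing": 7.0}

# Programmes may validly weight outcomes differently; a wellbeing-focused
# programme might choose weights like these (summing to 1).
weights = {"personal_agency": 0.3, "education": 0.2, "mental_wellbeing": 0.5}

def weighted_change(baseline, follow_up, weights):
    """Weighted average change against the pre-programme baseline."""
    return sum(weights[m] * (follow_up[m] - baseline[m]) for m in weights)

print(f"Weighted impact score: {weighted_change(baseline, follow_up, weights):.2f}")
```

Because every programme reports change against its own baseline on a shared scale, two programmes with different outcome priorities can still be compared on the same footing.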
We created a starter list, for comment, of impact measurement areas and options for consistently capturing measures in each. These included: personal independence and choice, educational outcomes, employment and meaningful work/engagement, mental health and wellbeing, physical health, and community and social participation and connection.
In the paper, we applied the suggested approach to a dataset from eight years of AT provision in Ireland, which yielded some fascinating insights.
We concluded the paper by asking AT researchers and programme designers to test the suggested approach in their work and to build on the concept, challenging it with specific datasets and experiences across a range of contexts, in order to create a more consistent and accurate measure of the overall impact of AT programmes.
Easy read version
The paper and the ideas in it can be a bit hard to describe, so here is a summary in clearer English for those short on time or focus, or who simply prefer it.
- We wrote a paper with some other experts about assistive technologies that can help disabled people.
- Assistive technology means any tools that help disabled people live more independently and do more of the things they would like to do (such as a programme that reads text aloud for someone who finds it hard to use a computer, or a wheelchair for someone who can’t walk).
- Our work showed that it is hard to correctly measure how helpful and valuable assistive technology can be.
- The positive impact is usually bigger than what you can easily measure.
- Costs are easier to measure than the benefits.
- Because the measurement is hard, governments spend too little on programmes that could help disabled people, the government, and the community.
- We suggested some ways that can help make it easier to measure the benefit of providing tools to disabled people (assistive technology).
- This could also help work out which programme is best when there is more than one to choose from.
- If we use the approach we suggested, it could mean governments give more money to make sure people with disabilities have the tools they need as they will know that it is worth it.