The importance of using an evidence-based approach, and then evaluating road safety interventions, was stressed to delegates attending a road safety conference held in North Wales last week.
Road safety professionals came together at the Road Safety Conference: North Wales to discuss some of the challenges they face in delivering road safety education, training and publicity initiatives.
The presenters included Christina Brown, RoSPA’s road safety evaluation officer, who outlined a case study produced by RoSPA and Cambridgeshire County Council’s road safety team which looked at the evaluation of a training scheme for primary pupils.
The case study is based on ScootSmart, a playground-based scooter training course developed by Cambridgeshire County Council for Years 3 and 4 pupils. It has been published on the Road Safety Evaluation website.
The case study follows the Cambridgeshire road safety team as they design, plan and begin to conduct their evaluation.
Christina’s presentation highlighted some of the challenges in designing and conducting evaluations of road safety ETP interventions and described how they can be overcome. It also included tips on how to make sure a road safety ETP evaluation is successful.
For more advice or information about road safety ETP evaluation contact Christina Brown on 0121 2482149.
I believe that Xbox and other gaming consoles have potentially had a significant impact on reducing road casualties amongst those who use them. I know they have kept my kids off the roads. Has anyone thought to evaluate that?
Dan Holdsworth, Merseyside
Thanks for the offer Duncan, however in the unlikely event of me ever appearing before a Magistrate, I would not be pleading insanity anyway.
Hugh Jones, Cheshire
My reasoning seems to be well understood by the entire aviation safety industry as well as most of the world’s manufacturing industries, Mandy. If you don’t quite get it I am more than happy to point you in the direction of some resources that might help. For a first step you could do no better than studying the work of Sidney Dekker at: http://sidneydekker.com/papers/
Have a read through those and let me know how you get on.
Should you need it Hugh, I would be happy to appear on your behalf in front of the Magistrates, but I’m fairly certain that road accidents are dealt with in a higher court than that.
Duncan MacKillop, Stratford on Avon
Duncan I have to say I do not understand your reasoning here. I think it will be best if we agree to differ.
Mandy Rigault. Oxfordshire.
“It wasn’t really my fault, it was my brain’s”. Try saying that to the Magistrate.
Hugh Jones, Cheshire
The simple answer to your question Mandy is knowing about brains: why we have one and how it works. Understand brains and you understand everything. Behaviourists, of course, treat the brain as a ‘black box’ and its functions as unknowable, which is why behaviourist interventions tend not to work and are often counter-productive.
Duncan MacKillop, Stratford on Avon
That is an interesting claim, Duncan. How do you prove that no other external factor was involved in the decision making process of all the road users in your area?
Mandy Rigault. Oxfordshire.
Mandy:
My contention is that despite so many external factors other than the RS intervention it is perfectly possible to attribute any casualty reduction directly to that intervention alone! The fact that the road safety industry has agreed on the idea that it is impossible does not necessarily make it impossible for those of us who have already achieved it outside of the road safety sphere.
Duncan MacKillop, Stratford on Avon
I think the comparison of a human being with a vehicle is not helpful. Unlike machines, humans are influenced by many external factors which vary from day to day, individual to individual and possibly by geographical location. Humans get tired, angry, stressed, distracted, happy, blinded by low sunlight, possessed by a sudden urge to show off to their passenger; they change the radio channel, send a text, sneeze and so on. I’m sure I don’t have to try and produce an exhaustive list of the factors which can affect a person’s road user behaviour – and that’s without mentioning media coverage of RTCs, personal experience of RTC involvement etc. Because people are subject to so many external factors other than the RS intervention, it is impossible to attribute any casualty reduction directly to that intervention alone.
As Nick says, all ETP work is designed to contribute to a reduction in road death and injury. My point was that we can’t evaluate the work if we haven’t stated what we wish the outcomes to be in advance of doing the work. In the past some ETP interventions would take place and then RS teams would wonder how to evaluate them. Since the work by the DfT and RoSPA to produce E-valu-it, ETP teams have a tool to guide and support them in their evaluation methods, and case studies to learn from.
As Duncan said in one of his previous comments, it is not sufficient to put up a few posters and hope they work, nor is it particularly helpful to measure recall – just remembering something doesn’t necessarily mean you are going to act on it, or that you have the knowledge and skills to act on it (hedgehog evaluation, for those who haven’t guessed what I am alluding to!).
I think it is heartening that we are having this conversation and that so much interest is being shown. Ultimately we all want to achieve the same results, so a healthy and respectful debate on how to do this has to be a positive thing.
Mandy Rigault. Oxfordshire.
Eric:
The Casualty Reduction Partnerships involve themselves with all three of the ‘E’s (as you’d expect from a partnership), i.e. Engineering, Enforcement and Education, whereas RSOs – usually employed by councils – tend to be just Education, hence E, T & P.
Hugh Jones, Cheshire
There may be a multitude of factors, Nick, but they are all knowable and all quantifiable! That is why knowledge of variation (and of all the variables) is such an important requirement of the system of profound knowledge.
The average car is made up of 30,000 parts that all exhibit variation, yet the car industry perfectly understands how this variation affects the performance and reliability of the end product. If the car industry can manage this feat, how come the road safety industry can’t manage the evaluation of a far smaller number of variables?
Here’s a little slideshow that shows how to do it. http://youtu.be/-uzKXgZGkco
Duncan MacKillop, Stratford on Avon
So why have many of them renamed themselves “Casualty Reduction Partnerships” over the last few years? Also, you say “the primary objective is to positively influence driver behaviour”. So if an intervention emanating from a road safety team had a negative influence on driver behaviour, you would presumably all agree that that intervention should be withdrawn?
Eric Bridgstock, Independent Road Safety Research, St Albans
Duncan:
The role of the road safety officer is to develop and deliver road safety education, training and publicity (ETP) interventions and campaigns. While the ultimate goal of these initiatives is to contribute towards casualty reduction, the primary objective is to positively influence driver behaviour. Given the multitude of factors that can come together to produce a collision, casualty reduction cannot be used as the sole measure of success of ETP initiatives.
Nick Rawlings, editor, Road Safety News
Mandy’s revelation that the road safety industry had “long ago agreed that we will never be able to attribute any casualty reductions directly to our work alone” is shocking to say the least! Without knowing whether the actions of the industry are having a positive effect, it would be safe to assume that the industry doesn’t know whether it’s having a negative effect either! If the industry doesn’t know whether its actions are killing or curing then it should take immediate steps to find out!
Duncan MacKillop, Stratford on Avon
In order to evaluate any given intervention you first need to state a clear objective (or objectives). After that, the method of evaluation – questionnaire, focus group etc. – needs to be determined. I thought we (road safety professionals) had long ago agreed that we will never be able to attribute any casualty reductions directly to our work alone, thus making casualty reduction an aim or a goal but not an objective? It seems to me perfectly reasonable for elected members or the general public to want to know what value they are getting for the money they pay us or allocate to our budgets. We need to be able to answer the question “what have you achieved?”, so we need clear and measurable objectives for everything we do. If we know what we want to achieve at the outset, measuring whether or not we have achieved it should be relatively simple, surely? If we can’t measure it, it ain’t an objective!
Mandy Rigault. Oxfordshire.
The ‘Think’ website states that the campaign is based upon “how persuasive and motivating the communications are and the extent to which they will change attitudes and ultimately behaviours”. Doesn’t that all sound a lot like “putting up some safety posters and hoping that they work”?
Duncan MacKillop, Stratford on Avon
Having used the E-valu-it Toolkit recently I can only endorse the need for good evaluation. Why continue with projects and initiatives if you cannot prove they work, or at least have any effect? I have been working on evaluation for some time, looking at the effectiveness of our Speed Indicator Device (SID) programme, and what is already clear is that our present approach to the regularity of deployments returns very little change in driver behaviour. However, deploying SIDs over a longer period, albeit at random times, returned better results. The evaluation continues, and will do for some time, and this will shape the way we move forward. Constant evaluation is not easy, but it is essential if we are to make a difference, particularly with ever-diminishing resources.
Rob Camp, Dorset County Council – Road Safety ETP Team
Duncan’s first comment is both right and wrong. He is right that the before/after comparison is essentially simple – but only after the confounding factors have been identified and removed from the equation. Taking long-term trends into account (either by adjusting the data or by showing one alongside the other) is relatively easy but needs to be done with care. At least as important, especially when looking for what are usually small changes, is to compare “after” not with the often abnormal “just before” data which led to the intervention, but with “well before” data more likely to represent normality.
Often overlooked, but often critical, is that as interventions are applied at any time of year, annual data is simply not precise enough to differentiate between what (as Dave rightly points out) had already happened prior to the intervention and could not therefore have been caused by it, and what happened afterwards and might have been.
But it is equally important to recognise that whatever effect an intervention might have will reach its maximum within weeks, or a few months at most – and that if it cannot be measured in that time period, it does not exist, whatever might happen in later years.
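To illustrate the baseline point with some made-up monthly figures – a rough sketch, not real data:

```python
# Hypothetical monthly casualty counts (illustrative numbers only):
well_before = [14, 12, 15, 13, 16, 14, 13, 15, 12, 14, 16, 13]  # a normal year
just_before = [19, 21, 18, 20, 22, 19]  # abnormal spike that triggered the scheme
after = [15, 13, 14, 12, 15, 13, 14, 12, 13, 15, 14, 12]  # the year after

def monthly_mean(counts):
    return sum(counts) / len(counts)

# Comparing "after" with the abnormal "just before" period exaggerates the
# apparent benefit (regression to the mean):
naive_change = monthly_mean(after) - monthly_mean(just_before)

# Comparing with the "well before" baseline gives a fairer estimate:
fair_change = monthly_mean(after) - monthly_mean(well_before)

print(f"naive change: {naive_change:+.2f} casualties/month")
print(f"fair change:  {fair_change:+.2f} casualties/month")
```

With these figures the “just before” comparison suggests a large benefit, while the fairer “well before” comparison shows almost none.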
Idris Francis, Fight Back With Facts, Petersfield
Developments in evaluation methodology are to be welcomed and I congratulate Christina on her work. In addition to knowing whether a given intervention worked or not, we need to know why it worked or why it failed. In the long term and in the context of developing future interventions, the ‘why’ can be as important as the result.
Road safety is a continually evolving field and its practitioners are constantly reviewing what they do through reflection, feedback and continuing professional development. Knowing why some parts of a given intervention achieved better results and why some parts did not is valuable information that can be used in the design of future interventions. Any new developments that can assist in the feedback and continuous improvement cycle should be welcomed. Other professions use recognised tools for continuous improvement, for example Failure Mode and Effects Analysis (FMEA).
Mark – Wiltshire
Duncan:
I’m sorry to say that I fear you are in danger of losing credibility on this newsfeed, which is a pity given the thought-provoking contributions you’ve made over the past 12 months or so.
To imply that the road safety officer/professional’s role involves “putting up some safety posters” is, with all due respect, ridiculous.
A quick whiz around this newsfeed will give an idea of the scope and breadth of campaigns and interventions that are constantly being developed.
You may not approve of some or all of them, but to dismiss them in the terms you use below is, in my view, unfair and unreasonable.
Nick Rawlings, editor, Road Safety News
Imagine if you owned a factory that produced ‘safely completed motorcycle journeys’ rather than one making widgets. In your annual report and accounts you might state that from a safety point of view it was a pretty good year as you only managed to kill and cripple a couple of thousand of your employees. As the owner of such a company you might ask your senior staff why you are managing to ruin so many lives and you might ask them to find ways of reducing the number. It would be very surprising if those senior managers came back and told you that it is very difficult to work out how to reduce the number so rather than do that they are going to put up some safety posters instead and hope that works. You might then wonder how long you will continue to employ those senior staff if that’s the best they can come up with.
Duncan MacKillop, Stratford on Avon
Derek and Duncan,
The reason I disagreed with Duncan’s statement is that evaluating road safety interventions is not “rather simple”.
On other posts Duncan suggests he has a good understanding of psychology and human factors, and with that knowledge should be able to comprehend the task faced in evaluating behaviour change and ascertaining links between marketing/training/education activities and people’s subsequent intentions and actions, and any long-term relationship any of those have with collision involvement.
Yes, some interventions can be evaluated in simple ways using before/after data, but others need much more complex evaluation processes.
Matt Staton, Cambridgeshire
It’s great to see some interesting discussion here. I think Dave and Hugh are pretty much on the money. What I’m talking about is education, training and publicity, and in these areas of road safety it is very difficult to pin down anything you do to a change in accident rates.
So, if you do some cycle training and there happens to be a reduction in accidents at that time, how do you know it was due to your training? It might have been a very wet few months and so fewer people cycled, the police could have been doing some enforcement activity at the same time, potholes may have been filled, more cycle infrastructure could have been put in – all of these things can also influence the accident rate.
Evaluation is about checking what you have done has had a positive influence. Have those on the cycle course learned the knowledge and skills to make them safer? Will they actually put this into practice when the trainer is not around? Are there any unintended consequences?
Christina Brown, Birmingham
I absolutely agree with Ms Brown that we need to start evaluating the effects of road safety interventions. The standard before v after comparison does not evaluate interventions; it reports what would have occurred anyway, plus or minus the unknown effect of the intervention. Was Duncan’s comment tongue-in-cheek (normally his views are well worth taking on board)?
Accurate and honest evaluations may perhaps still not be sufficient to be believed, though, and that’s why I believe we need to go further by implementing all interventions within simple scientific trials, wherever possible. Results from such trials could, I believe, start a real revolution in road safety thinking that could see improvements that would not have been possible without such trials.
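As a rough sketch of what such a trial might look like – the sites, the allocation and the casualty changes below are all hypothetical:

```python
import random

# Simple randomised trial: sites eligible for an intervention are randomly
# split into 'treated' and 'control' groups, so that long-term trends and
# regression to the mean affect both groups equally.
random.seed(1)

sites = [f"site_{i}" for i in range(20)]  # candidate sites (hypothetical)
random.shuffle(sites)
treated, control = sites[:10], sites[10:]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-site changes in casualties (after minus before):
change_treated = [-2, -1, 0, -3, -1, -2, 0, -1, -2, -1]
change_control = [-1, 0, -1, -2, 0, -1, -1, 0, -1, -1]

# Compare the change between groups, rather than simple before/after
# at the treated sites alone:
effect = mean(change_treated) - mean(change_control)
print(f"estimated intervention effect: {effect:+.1f} casualties per site")
```

The point of the random allocation is that whatever would “have occurred anyway” shows up in the control group too, so the difference between the groups isolates the intervention’s effect.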
Dave Finney, Slough
I think the evaluation referred to in the news item relates to the effect an intervention may, or may not, have had on a particular ‘target’ group – whether by road user type, age etc. – rather than just an anonymous number of accidents. The item specifically relates to scooter training, so I suppose they would want to know how the recipients responded to the training, whether they subsequently put it into practice and hopefully did not become victims. How could you measure that?
The same problem applies to all road safety education campaigns. Enforcement action is perhaps slightly easier to evaluate on individuals, and engineering is perhaps the easiest to assess, as it usually relates to one particular site and you can see almost straight away whether it has worked or not.
Hugh Jones, Cheshire
This ‘agree/disagree’ facility clearly is of little or no value when such a comment from Duncan – which is pure logic – has so many people disagreeing with it. Duncan has stated a simple basic fact. What is there to disagree with?
Derek Reynolds, Salop.
How many collisions before versus how many collisions after? Evaluating road safety interventions is really rather simple.
Duncan MacKillop, Stratford on Avon