New study: pricey, lightweight helmets aren’t always safer

Escape Collective

September 20, 2024

A new study by Imperial College London reveals that higher-priced, lighter helmets don’t necessarily provide better protection against head injuries. The Helmet Impact Protection Effectiveness Rating (Hiper) project, funded by the Road Safety Trust, tested 30 helmets and found that a higher price is no guarantee of better protection.

In fact, one of the top-performing helmets, the £50 Specialized Tactic Mips, outperformed its higher-priced competitors, some of which were even found to slightly increase the risk of injury. Other topline findings involve the role of helmet weight – heavier helmets were associated with less effective protection – and the performance of rotational energy management systems like Mips; the nine top-scoring helmets all used Mips, even though not all Mips-equipped helmets were rated among the most protective.

Published in the Annals of Biomedical Engineering, ICL’s study aimed to evaluate helmet safety beyond the European EN1078 helmet standard – which does not include a rotational impact assessment – and in the process produce a more useful new safety rating system. After testing – which we’ll look at in more detail below – the project rated helmets from 0 to 5 based on their performance in lab crash simulations.

Those ratings are available on ICL’s new Hiper website, providing a third-party helmet safety ranking system that could help guide cyclists in their helmet purchasing decisions. But beyond the ratings, the Hiper project also highlights how difficult comparing helmet safety results is because none of the third-party or official regulatory test standards like EN 1078 and the CPSC 1203 are conducted with the same methods – or sometimes even the same helmets.

“Current safety standards are simply pass/fail and only test direct impact sustained during straight-on [linear] head impact. However, evidence from previous studies shows that lasting brain damage occurs in more serious impacts or when the head undergoes rapid rotations during an impact,” Dr. Claire Baker, the lead author from Imperial’s Dyson School of Design Engineering, said in a press release.

A good head on your shoulders

The Hiper project is not the first one to provide third-party safety ratings for cycling helmets. The US-based Virginia Tech Helmet Lab, as well as Swedish insurance company Folksam, also release helmet rankings annually. But for cyclists, it’s not as simple as picking one of the top-rated helmets from these rankings.

Which head form is used in helmet testing is a much-contested topic, and the choice can have a substantial impact on the results. In the US, the CPSC uses the ISO/DIS 6220 headform, while EN testing uses the similar EN 960 headform; Virginia Tech and Hiper use other headforms still, with different weights, materials, and coefficients of friction, not to mention the question of the neck portion (rigid or flexible). A helmet that performs well in one test might do badly in another, undermining how much a consumer can actually benefit from looking at these ratings.

The ICL researchers tested all 30 helmets in a lab setting designed to mimic real-life head impact conditions as closely as possible. The ICL team used the new biofidelic Cellbond-CEN 2022 headform; according to the study, it is designed to represent the human head more accurately than other available headform models. All of the helmets were medium-sized because that is the size that fits the new headform used for the testing.


“Unlike traditional head forms, the Cellbond-CEN 2022 headform better matches the coefficient of friction (CoF) and moments of inertia (MoI) of a human head. This allows for a more precise assessment of helmet performance under oblique impact conditions – the impacts that strike the helmet at an angle rather than directly,” Dr. Baker explained in a press release published by ICL.

The Cellbond-CEN headform is said to have a realistic head and face geometry derived from data from a large human population and also features a small portion of the neck, which should ensure a more realistic interaction between the helmet strap and the neck.

Virginia Tech, on the other hand, uses the National Operating Committee on Standards for Athletic Equipment (NOCSAE) headform, which has a glycerine bladder to mimic the effect of brain motion, encased in a polymeric skull covered by a bonded urethane skin. Perhaps a little bizarrely, it also has one ear, making it asymmetrical. It does not have any neck to speak of, though.

Beyond those factors, there are a whole host of other differences ranging from weight to even the materials used (none of the head forms have any hair), which can impact how much the helmet moves against the dummy head. A recent study from June 2024 looked into the test methodology used in current cycling helmet standards and concluded that:

“Varying characteristics of head forms have shown to substantially influence the dynamic impact response. These distinctive features must therefore be tested. The development of head forms to measure intracranial mechanics may be an important tool for further understanding of brain injuries.”

The helmet industry doesn’t seem settled on which headform is best, but regardless of the model, the headform is what is used to measure the impacts. The Hiper team equipped theirs with a sensor package and a wireless transmitter, which recorded 400 data points within 20 milliseconds of each impact. That data was then used to evaluate the “head injury criteria.”

Beyond the head form

A second part of the testing protocol is the impact testing itself – it’s not only the headform that differs between helmet testing standards. At ICL, the helmets were tested for their response to controlled oblique impacts at various locations on the helmet/head. By using a drop-tower helmet test rig, the researchers were able to measure parameters such as linear and rotational acceleration, which could be used to assess the risk of skull fractures and diffuse brain injuries.

Standard helmet tests usually measure linear impacts but often neglect rotational forces. By contrast, all three independent third-party test groups also test for rotational impacts, although there are variations in their methods as well.


Hiper’s oblique impacts were conducted at a 45° angle, at a 6.5 m/s (23.4 km/h) impact speed, and at four locations on the helmet: front, rear, side, and front/side.

The group recorded drop speed, while sensors inside the headform determined peak linear acceleration (PLA), peak rotational acceleration (PRA), and peak rotational velocity (PRV). Together, this data helped the researchers calculate the overall risk of diffuse brain injuries via the Brain Injury Criterion (BrIC). They also determined the risk of skull fractures based on PLA.
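For readers curious how a rotational-velocity-based criterion works in practice, here is a minimal sketch of one common BrIC formulation, using the critical angular velocities published in the original BrIC literature (Takhounts et al., 2013). It is an illustration only; the Hiper team’s exact implementation, inputs, and constants may differ.

    # Minimal sketch: one common formulation of the Brain Injury Criterion (BrIC),
    # computed from peak rotational velocities about each anatomical axis (rad/s).
    # Critical values follow Takhounts et al. (2013); Hiper's exact method may differ.
    import math

    def bric(omega_x: float, omega_y: float, omega_z: float) -> float:
        crit_x, crit_y, crit_z = 66.25, 56.45, 42.87  # critical angular velocities, rad/s
        return math.sqrt((omega_x / crit_x) ** 2
                         + (omega_y / crit_y) ** 2
                         + (omega_z / crit_z) ** 2)

    # Hypothetical peak rotational velocities from a single drop test
    print(round(bric(20.0, 15.0, 10.0), 3))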

For each helmet, the team tested four different impact scenarios with three repeats of each scenario, giving 12 total impact tests per helmet.

“To do this in a way that doesn’t compromise the helmet from repeated impacts to similar areas of the helmet, we had to use 6 helmets with two tests on each (in line with methods taken in other literature and helmet testing procedures),” Dr. Baker explained to Escape Collective.

Hiper determined each helmet’s score from the average of the linear and rotational risks recorded for each impact location. This average was then multiplied by the likelihood of an impact in that area, based on data from 1,809 head injuries. The weighted risks for all locations were combined to calculate the helmet’s overall risk.

In the star rating system, five stars represent the lowest risk, while zero stars indicates the highest (none of the 30 helmets tested scored zero stars). To achieve a full five-star rating, a helmet needed an overall risk score of 0.1 or lower. These risk thresholds may be adjusted as more helmets are tested in the future.
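To make the scoring approach concrete, here is a small sketch of that exposure-weighted calculation in Python. The per-location risks and impact probabilities below are hypothetical placeholder values, not the study’s data; only the five-star threshold of 0.1 comes from the article.

    # Sketch of the exposure-weighted scoring described above (placeholder numbers).
    # Per-location risk = average of linear and rotational risk, weighted by how
    # often impacts hit that location, then summed into an overall risk.
    impact_probability = {"front": 0.35, "rear": 0.25, "side": 0.25, "front/side": 0.15}

    def overall_risk(location_risks):
        # location_risks maps location -> (linear_risk, rotational_risk)
        total = 0.0
        for location, (linear, rotational) in location_risks.items():
            total += impact_probability[location] * (linear + rotational) / 2
        return total

    risks = {"front": (0.20, 0.25), "rear": (0.15, 0.22),
             "side": (0.18, 0.30), "front/side": (0.21, 0.28)}
    score = overall_risk(risks)
    # Per the article, an overall risk of 0.1 or lower earns the full five stars
    print(round(score, 3), "five stars" if score <= 0.1 else "fewer than five stars")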


In comparison, Virginia Tech tests helmets through 12 “impact conditions,” including six locations and two velocities. It also notes that “locations are dispersed around the helmet and include two at the rim, a commonly impacted area in cyclist’s head impacts that are not considered in standards testing.”

Virginia Tech tests four samples of each helmet model, each subjected to one impact per location. Each of the twelve configurations is tested twice, which makes a total of 24 impacts per helmet model. Helmets are tested without visors, whereas Hiper tested its helmets with visors.

Which helmets got tested?

Despite the differences in third-party testing methods, all of the 30 helmets ICL chose for its test complied with the European standard EN1078. The selection included best-selling and popular helmets, as well as helmets with and without Mips (a technology designed to reduce rotational forces during impact).

Because the helmets were sourced from UK retailers, they did not have to comply with the US Consumer Product Safety Commission’s CPSC 1203 standard – though many of them did.

The EN and the CPSC standards don’t differ much in practice, apart from the CPSC standard subjecting helmets to slightly greater impact forces during testing. Both helmet standards specify the testing procedures and the outcomes a helmet must achieve to pass. The slight differences in the standards mean that some helmets might comply with the EN standards and be lighter and less bulky in profile, while helmets in the US are typically slightly heavier to account for the CPSC test’s higher impact forces. Generally, though, most helmet manufacturers aim to adhere to the highest standard across the markets with each of their designs.

Helmet prices in the ICL study ranged from £9.99 to £135, and as mentioned earlier, the researchers found no significant correlation between helmet price and protection levels – instead, performance varied widely independent of price, which led the researchers to suggest that consumers should not rely on price as an indicator of safety. They noted that even if you paid more, the linear, rotational, and overall risks were not significantly reduced – quite the contrary.

When it came to the higher-cost helmets, the study revealed a small link between increased helmet price and non-exposure-weighted overall risk. This means that some of the more expensive helmets had a higher risk of brain injury in the case of a crash – but not across all the measured impact areas. The most expensive helmet in the test cost £135 and scored an overall risk rating of 0.253. In comparison, the test’s best performer, the £50 Specialized Tactic Mips, scored an overall risk of 0.108. The second-most expensive helmet, the ABUS Gamechanger, scored an overall risk of 0.246.

According to the researchers, the worst-performing helmet, the Halfords Urban, scored 0.283, which in their calculations meant a wearer has 2.62 times the risk of brain injury in a crash compared to wearing the Tactic Mips. What the study doesn’t say is what the absolute risk of brain injury is for the Tactic-wearing rider in that crash (in any multiplier relationship, the base number matters significantly). Like test procedure and headform, that’s a subject of fierce debate in the scientific community, without firm data on what thresholds of force – and whether linear, rotational, or a combination – are required to produce a brain injury.
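A quick illustration of why that base number matters: the same 2.62x relative risk translates into very different absolute differences depending on the underlying probability. The baseline figures below are hypothetical, since the study does not report absolute injury probabilities.

    # Hypothetical baseline risks for the Tactic Mips wearer; the study gives none.
    relative_risk = 2.62
    for baseline in (0.005, 0.05, 0.20):
        worst_case = baseline * relative_risk
        print(f"baseline {baseline:.1%} -> worst-performing helmet approx. {worst_case:.1%}")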

What about weight and Mips?

The relationship between helmet weight, Mips, and protection was another focal point of the study. Contrary to the assumption that heavier helmets – which would often mean a more closed shell, less ventilation, or a visor – offer better protection, the research indicated that increased helmet mass was associated with higher brain injury risk when the impact is linear.

But even in testing the impact of weight, the results were not all straightforward and not necessarily directly linked to the weight alone. The study found that while higher linear risk was linked to increasing mass, increasing rotational risk was associated with a lower helmet mass. The heaviest helmet in the test, the BTwin 500, weighed 560 g and had a linear risk of 0.228 and a rotational risk of 0.266. For the lightest helmet, the 230 g Kask Protone, the linear risk was 0.19 and the rotational risk 0.315. Then again, another 230 g helmet in the test, Lazer’s Tonic Mips, recorded different figures again; whether that comes down to Mips’ approach to reducing rotational impact – the Kask instead relies on its own Rotational Impact WG11 Test – is something the study doesn’t specify.

On the topic of Mips, this study was the first to show that the system remains effective at mitigating rotational motion with a headform that is not the HIII (Hybrid III), which is commonly used in standard helmet testing. Though the researchers found that Mips reduced the overall injury risk by lowering rotational risk, it didn’t decrease the linear risk, which was lowest on a helmet with WaveCel technology. At the same time, that helmet had a high rotational risk.

This led the study to conclude that “these results show that it is vital to design helmets holistically to reduce both linear and rotational kinematics metrics to protect against a range of different head injury types caused by different mechanisms.”

Is this the new gold standard for helmets?

Given that the Hiper study has only covered 30 helmets thus far, it is not yet a definitive rating system, and it excludes many of the helmets that other third-party ratings have ranked highly. Virginia Tech, for example, has rated 241 helmets. Hiper also only tested helmets available in the UK, meaning some of the helmets listed may not even be sold elsewhere. The use of retailers Halfords and Decathlon for selecting popular helmets has a part to play in this, and a focus on performance-oriented road cycling could result in a very different selection.

For someone focused on performance cycling, these results might not provide much practical use. Look again, for example, at the Specialized Tactic Mips that scored the full five stars; it isn’t overly weighty at 380 g, but it does have a visor, and it’s hard to see an aero-focused rider choosing it over, say, the comparatively low-scoring Kask Protone.

The highly engineered, top-level helmets that boast both low weight and high price are often desired by cyclists because of their comfort, not merely their safety credentials – and after all, they all still comply with the minimum regulatory standards set by national governing bodies. The lower weight also doesn’t strain the neck over long distances, and more ventilation means your head won’t boil in hotter temperatures. Not to mention that while tests can associate a certain helmet with a low risk of brain injury, its shape might not work for your head, meaning wearing it can become a pain.

Some of the results from the two testing groups correlate well; the Specialized Tactic is Hiper’s best-scoring helmet and is one of the top five in the Virginia Tech database. Yet you can just as easily spot vast differences. The Bontrager Specter WaveCel, for example, has a 10.79 Virginia Tech rating (5/5 stars), while in Hiper testing the same helmet scored 2.74/5.

And though we can use these standards to get an idea of how helmets perform against an impact, the issue of proper helmet fitment remains (as Ronan detailed in his review of the Canyon Highbar helmet) and can have a huge impact on safety. The Hiper tests were performed on just one medium headform size; as the study acknowledges, while much of the adult population does wear a medium helmet, “future work should ensure that different helmet sizes are tested, promoting equitable research.”

It’s good to see more helmet testing being done, but these results should still be taken with a pinch of salt. There is still no standardised approach for conducting these tests repeatably, with different head sizes, in a commercial setting.

In the case of Hiper, the Road Safety Trust has extended its funding for three years so that the team can apply its testing and rating techniques to children’s helmets, as well as continue to test the wide range of adult helmets available to buy.
