CSA’s Distorted Rankings, Part 3: The ‘ugly little secret’ of CSA

Updated Jul 24, 2014

This is the third and final part of the CSA’s Distorted Rankings series, which is part of the broader CSA’s Data Trail series. Click here to see Part 1 of the series, and click here to see Part 2.


One initiative to improve Compliance Safety Accountability scoring was an April 30 draft report issued by the CSA Subcommittee of FMCSA’s Motor Carrier Safety Advisory Committee. Principal areas of recommendation are as follows:

Violation severity weights

The current weightings given to individual violations in the Safety Measurement System are point values on a scale of 1-10, with 10 the most severe.

A presentation to the subcommittee by Dave Madsen of the Volpe National Transportation Systems Center, a chief architect of the SMS, illustrated what he called the “ugly little secret” of the system: It implies “a level of precision that doesn’t exist.” Only broad groupings of violations, in FMCSA’s view, correlate to crash risk statistically. Severity weighting, Madsen said, might be better if simplified to a basic low, medium and high system.
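For illustration only, here is a minimal Python sketch of how the current 1-10 severity weights might be collapsed into the low, medium and high tiers Madsen described. The cut points used below (1-3, 4-7, 8-10) are assumptions for the example, not FMCSA’s actual groupings.

```python
# Hypothetical illustration: collapsing 1-10 SMS severity weights into
# three tiers. The cut points are assumed for the example and are not
# FMCSA's actual groupings.

def simplify_severity(weight: int) -> str:
    """Map a 1-10 severity weight to a low/medium/high tier."""
    if not 1 <= weight <= 10:
        raise ValueError("SMS severity weights run from 1 to 10")
    if weight <= 3:
        return "low"
    if weight <= 7:
        return "medium"
    return "high"

# Example: a weight-7 violation and a weight-5 violation land in the same
# tier, reflecting Madsen's point that fine-grained weights imply more
# precision than the data supports.
print(simplify_severity(7), simplify_severity(5))  # medium medium
```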

Tests of such a three-tiered system show that “the companies that move in and out” of alert status in the individual categories don’t change much, Madsen said. Nevertheless, the current system has the added problem of contributing to misperceptions about how the SMS works.

“Everyone’s focused on the individual violation and the points assigned to it. People think ‘points on licenses,’ and third parties take this stuff and create driver scores. … We don’t calculate driver scores,” at least not for public consumption, Madsen added. “In general, we’re trying to get patterns, and we’re losing the forest for the trees with what we have now.”

The CSA Subcommittee recommended such a modification of the weighting system, but its chief concern with the quality of the system went further. “FMCSA should explore further improving correlation of violation groupings within [Behavioral Analysis and Safety Improvement Categories] to crash risk,” its report reads. This would be in keeping with the ultimate goal of identifying unsafe carriers more effectively.

Subcommittee members also offered other ideas to test relative to violation severity weights – from an even simpler low-high system, with only two weights, to a more detailed examination of what enters the measurement system in the first place.

Carrier peer groupings in the BASICs

One CSA problem is that a carrier can experience a dramatic swing in its percentile ranking when it moves from one peer group to another. It’s a particularly hard reality for small carriers.

Madsen gave the MCSAC subcommittee the example of a carrier in the smallest peer group in the Hours of Service Compliance BASIC, which includes carriers getting five to 10 relevant inspections. Assume a carrier with 10 inspections and an absolute measure score of 2.39, putting it right on the 65th percentile in the BASIC, or at the level of FMCSA’s intervention alert threshold.

PODCAST: CSA’s golden triangle more like a scarlet letter? Click here to listen.

If that carrier then gets a clean inspection, which is typically advised as a way to improve BASIC scores, its absolute measure score falls to 2.25 in Madsen’s example. However, its percentile ranking in the Hours BASIC, its hours-performance face to the public, shoots up to 76 because it’s now being compared to the next-highest carrier peer group, those with 11-20 relevant inspections.
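A rough Python sketch of the mechanics behind that example follows. The peer-group measure values are invented purely for illustration; only the ranking logic mirrors what Madsen described.

```python
# Hypothetical sketch of how an SMS percentile ranking depends on the peer
# ("safety event") group a carrier is compared against. The peer measures
# below are invented for illustration only.

def percentile(measure: float, peer_measures: list[float]) -> float:
    """Share of peers with a lower (better) measure, expressed as a percentile.
    In SMS, a higher percentile means worse relative performance."""
    worse = sum(1 for m in peer_measures if m < measure)
    return 100.0 * worse / len(peer_measures)

# Invented measure distributions for two peer groups (not real SMS data):
group_5_to_10_inspections = [0.5, 1.0, 1.4, 1.8, 2.0, 2.2, 2.39, 2.6, 3.0, 4.0]
group_11_to_20_inspections = [0.3, 0.6, 0.9, 1.1, 1.3, 1.6, 1.9, 2.1, 2.25, 2.5]

# With 10 inspections and a measure of 2.39, the carrier ranks against the
# smaller group...
print(percentile(2.39, group_5_to_10_inspections))   # 60.0 in this toy data

# ...one clean inspection lowers its measure to 2.25 but moves it into the
# next group, where the same carrier ranks worse relative to its new peers.
print(percentile(2.25, group_11_to_20_inspections))  # 80.0, despite the better measure
```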

Madsen also presented on so-called “dynamic peer groups” – a concept FMCSA has considered to guard against such severe changes. The idea is that the lines separating peer groups are not set in stone, but are more fluid and narrowly defined.

Say a carrier has five inspections, he said. “In a dynamic safety event group, [that carrier is] compared to those with five, and I look to those immediately to the left and right” with four to six inspections as well.
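As a sketch of that idea, the snippet below assumes a carrier with N relevant inspections would be compared only to carriers with N-1, N or N+1 inspections; the carrier records are hypothetical.

```python
# Sketch of the "dynamic peer group" concept Madsen described, under the
# assumption that a carrier with N inspections is ranked only against
# carriers with N-1, N or N+1 inspections. Records are hypothetical
# (inspection_count, measure) pairs.

def dynamic_peers(carrier_inspections: int,
                  carriers: list[tuple[int, float]],
                  window: int = 1) -> list[float]:
    """Return measures of carriers whose inspection counts fall within
    `window` of the given carrier's count."""
    lo = carrier_inspections - window
    hi = carrier_inspections + window
    return [measure for count, measure in carriers if lo <= count <= hi]

# A carrier with five inspections is ranked only against carriers with
# four to six inspections, instead of a fixed 5-10 inspection bucket.
fleet = [(4, 1.2), (5, 0.8), (5, 2.0), (6, 1.5), (9, 3.1), (15, 0.4)]
print(dynamic_peers(5, fleet))  # [1.2, 0.8, 2.0, 1.5]: the 9- and 15-inspection carriers drop out
```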

The CSA Subcommittee adopted the recommendation to implement such a peer grouping system. It also offered these suggestions for testing:

Broadening how the compared peer group is defined beyond current exposure measures (i.e., number of inspections/power units/vehicle miles traveled). Other peer-grouping considerations could include characteristics of operations, routes, number of violations, geography of where carriers received inspections, etc.

Separating motorcoach operations from truck operations.

Achieving more uniform crash reporting from states.

Accounting for crash exposure in the Crash Indicator BASIC, not just vehicle miles traveled. In higher-traffic areas, crash risk is higher.