Human Factors in Barrier Management: Hard Truths and Challenges

This paper discusses some “hard truths” in the assurance of human performance in high-risk environments, drawing on insights from cognitive decision making, heuristics & biases, bowtie analyses, and weaknesses in the way human factors are considered in barrier management.

It’s said that human performance continues to be relied on as a control, yet organisations may have miscalibrated ideas about how far human performance can be relied upon when needed, or may overlook how, normally, human performance is the overwhelming reason for successful outcomes even in the face of poor systems & resources.

Thus, organisations may struggle to ensure:

a) that the performance of people that they rely on “can reasonably be expected to happen when and where it is needed” (p3)

b) that the controls they rely on are as robust as expected in the face of human performance variability.

Note this is a detailed paper – I can only scratch the surface and recommend reading the full version. I’ve included my own comments in square brackets [** ].

A common exhortation following investigations is that if only people had followed the procedure, the incident wouldn’t have happened. But this assertion relies on some implicit assumptions:

  • that the organisation actually has all of the procedures it needs
  • they are specific, accurate, clear and up-to-date [** and to add, sufficiently valued]
  • that people have the knowledge, skills and training to know what procedures to leverage and when
  • that they will accurately recognise the situations that call for each procedure
  • that they can carry them out under the conditions that exist at the time [** which procedures rarely recognise, e.g. what we may call a procedure-context vacuum]

It’s said that some hard truths about how people see, interpret and respond to the world are “hard” because they can be difficult and inconvenient to design for and manage; but being hard doesn’t make them less valid or less important to address.

Some hard truths include: (p4)

1. Human emotion, thought, performance and attitudes are highly situated – that is, influenced by the situation or context at the time

2. Design & layout of work systems, equipment interfaces and the environment influence how people sense and respond in the world

3. People optimise their performance, even if it may be riskier

4. People are not necessarily rational (e.g. System 1/System 2 thinking, the use of pattern matching and heuristics, and how sensitive people are to loss aversion).

Drawing on work from Daniel Kahneman, it’s highlighted that people generally optimise their performance such that they follow the path of least effort [** another reason to be vigilant removing frictions, e.g. clumsy process, systems, technology, environmental conditions etc].

Definitions

Next some useful definitions are covered.

Control means measures that are expected to be in place to prevent incidents.

Controls comprise barriers & safeguards. Barriers are “controls that are assessed as being sufficiently robust and reliable that they can be relied on as primary control measures against incidents” (p6). These can be passive or active, and can combine human elements, technology etc.

Safeguards are “controls that support and underpin the availability and performance of barriers but that cannot meet the standards of robustness or reliability to be relied on as a full barrier” (p7).

Further, a distinction is made between two types of human barriers.

1. Organisational barriers apply when the company explicitly prescribes, by means of rules, instructions and procedures, how decisions are to be taken and what is to be done. Little room for autonomy is intended here [** similar to what we may call action rules].

2. Operational barriers apply when there’s no specifically prescribed manner of deciding or acting, and the individual is given discretion to take appropriate action. This relies on operator skills and capabilities [** more akin to process and goal rules].
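To make this taxonomy concrete, here’s a minimal illustrative sketch in Python. This is my own construction, not from the paper; all class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ControlClass(Enum):
    BARRIER = auto()     # robust/reliable enough to be a primary control measure
    SAFEGUARD = auto()   # supports barriers; not robust enough to stand alone


class HumanBarrierType(Enum):
    ORGANISATIONAL = auto()  # explicitly prescribed rules, instructions, procedures
    OPERATIONAL = auto()     # discretion left to operator skills and judgement


@dataclass
class Control:
    name: str
    control_class: ControlClass
    # Only meaningful for controls that rely on human performance:
    human_barrier_type: Optional[HumanBarrierType] = None


# E.g. a step-by-step permit-to-work verification is organisational, while an
# operator's judgement call to shut down on an anomaly is operational.
permit_check = Control("Permit-to-work verification",
                       ControlClass.BARRIER, HumanBarrierType.ORGANISATIONAL)
```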

Criteria for robust controls

Several criteria must be met for a control to be classed as a full barrier. These are:

1. It must be specific to a single potentially hazardous event (specificity)

2. It must be independent of other protection layers (independence)

3. It can be counted on to do what it was designed to do (dependability)

4. It’s capable of being audited (auditability)

These make a lot of sense for engineering systems but need careful consideration for human performance.

For instance:

1. Assuring true independence with human performance is a major challenge. Factors like workload, fatigue, distraction, competency, resource constraints etc. can defeat multiple controls. Organisational factors can also influence control performance (incentivising certain outcomes, contractual arrangements, norms etc.).

2. Independence achieved by having double-checks by another person may also not be truly independent, since the check is affected by the above factors and more. E.g. “…the behaviour of an operator and a checker are not independent” (p7). [** or what we may call the fallacy of social redundancy]

These factors & more are said to be overlooked when deciding how to assure human performance and/or in investigations.
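As one way of operationalising the four criteria, here’s a small hypothetical checklist sketch. Again, this is my own illustration rather than McLeod’s method; the class, fields and function names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class BarrierAssessment:
    """The four criteria a control must meet to be classed as a full barrier."""
    specific: bool      # addresses a single potentially hazardous event
    independent: bool   # not defeated by failure of other protection layers
    dependable: bool    # can be counted on to do what it was designed to do
    auditable: bool     # its presence and performance can be audited


def qualifies_as_barrier(a: BarrierAssessment) -> bool:
    """A control failing any criterion should be treated as a safeguard,
    not a barrier. For human performance, independence is the hard one:
    fatigue, workload or norms can defeat operator and checker alike."""
    return all([a.specific, a.independent, a.dependable, a.auditable])


# A double-check by a second person may look independent on paper but shares
# the same fatigue, workload and incentive pressures as the first check:
double_check = BarrierAssessment(specific=True, independent=False,
                                 dependable=True, auditable=True)
assert not qualifies_as_barrier(double_check)  # treat as a safeguard instead
```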

According to McLeod, “judgements about the likely effectiveness of controls that rely on human performance means being clear about exactly what is intended, and what is expected of human performance for the control to be considered to meet the effectiveness criteria” (p8).

Intentions are said to be the things that can reasonably be expected to be within the scope of influence of people. These include the design of the work environment or equipment interfaces. E.g. if a control relies on someone opening or closing a valve, then a clear intention is needed that people will know which valve to operate and how/when/why to operate it, and the valve should be designed & labelled in a way that minimises the chance of people operating it out of sequence.

Drawing on guidance from the CCPS and CIEHF, it’s argued that most organisational measures should be treated as safeguards rather than barriers.

Safeguards would include local warnings/signs, the design & implementation of alarms, human-machine interfaces, job design and more. Organisational safeguards serve more to ensure that the barriers that are expected to function are not degraded or defeated by other factors [** escalation factors as per bow ties].

Quoting McLeod, “Safeguards cannot, and do not need, to provide the same level of risk reduction as barriers” (p8). Nevertheless, safeguards should still have “clear ownership, be capable of being audited, and be traceable to some elements of the organisation’s management system” (p8).

Importantly, a control may be a barrier in one situation but may be treated as a safeguard elsewhere in the organisation if the company is unable or unwilling to invest the resources to ensure that it functions to the necessary specifications.

Drawing again on insights from CIEHF, eight concerns with how human and organisational factors are handled in barrier models are highlighted:

  1. Top events are situated too far to the right, i.e. the events that are sought to be avoided sit too close to the consequences (losses, fatalities etc.).
  2. Too many barriers are identified, many of which don’t meet the accepted criteria
  3. Human and organisational factors are rarely incorporated into barrier models
  4. Ideas of cognition and complexity are rarely incorporated into the performance of barriers
  5. The gap between work-as-imagined and work-as-done rarely weighs into considerations
  6. “Human error” is frequently identified as a threat, with barriers identified to block the error from leading to the top event
  7. The implicit expectations of human performance are rarely made explicit
  8. Barrier models are often designed and implemented to the workforce “in a manner that does not properly support their operational use”.

One interesting point is how “human error” shouldn’t be identified as a threat in bowtie analyses since this creates a “misleading impression that the risk of human error is being adequately managed by barriers” (p10).

Examples are provided in the paper.

Further, it can promote 1) focusing attention on minimising human performance variability over recognising the real barriers and ensuring they are as robust as they can be, and 2) taking human performance factors out of their context.

Critically, treating people as a threat in the bowtie “also misses the opportunity to develop a deeper understanding of the ways people provide flexibility and adaptability and therefore contribute to system resilience” (p9). This view also reinforces a negative view of people as unreliable factors to be managed.

Instead, more focus should be directed towards understanding the performance requirements of the interactions of people and technology and what’s needed to ensure the inherent robustness of barriers and their escalation factors [** which I’d argue heavily involves learning from normal work and work analysis methods].

I’ve skipped a huge amount of interesting points to reach this stage, but it’s argued that barriers should cover at least seven considerations:

1. The performance the barrier is expected to deliver should be specific to the threat and situation

2. Who is involved in delivering the performance? E.g. who detects, who decides what needs to be done, who takes action? [** this is also covered under the IDDR framework, being Indication (signal), Detection of the indication, Diagnosis of the indication to determine actions, Response to put it right; see Bellamy 2014 for more info]

3. What information is needed for successful performance in the situation?

4. What decisions or judgements are likely to be involved?

5. What actions need to be taken and how will the operators know whether the actions have been successfully completed or receive feedback during the task?

6. What technical or non-technical guidance is to be followed?

7. The standard for successful performance of the barrier, which could include:

  • Max allowable time to detect an event to trigger the function
  • Accuracy of interpreting the event
  • Max allowable time to initiate a response
  • Max allowable time to complete a response
  • Min acceptable reliability, e.g. specificity vs sensitivity
  • Tolerance limits for acceptable performance.
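Pulling these seven considerations together, the expected human performance of a barrier might be captured as a structured specification. The sketch below is my own illustration under assumed field names and placeholder values, not a format from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class HumanBarrierSpec:
    """Illustrative specification covering the seven considerations
    for a barrier that relies on human performance."""
    threat: str                       # 1. specific threat and situation
    who_detects: str                  # 2. roles: who detects, decides, acts
    who_decides: str
    who_acts: str
    information_needed: list[str] = field(default_factory=list)    # 3.
    judgements_involved: list[str] = field(default_factory=list)   # 4.
    actions_and_feedback: list[str] = field(default_factory=list)  # 5.
    guidance_documents: list[str] = field(default_factory=list)    # 6.
    # 7. performance standard (times in seconds; all values are placeholders)
    max_time_to_detect_s: float = 60.0
    max_time_to_initiate_response_s: float = 120.0
    max_time_to_complete_response_s: float = 600.0
    min_acceptable_reliability: float = 0.99


# Hypothetical example: operator response to a high-level alarm.
spec = HumanBarrierSpec(
    threat="Tank overfill during road-tanker loading",
    who_detects="Control-room operator (high-level alarm)",
    who_decides="Control-room operator",
    who_acts="Outside operator closes inlet valve",
    information_needed=["Tank level trend", "Alarm priority"],
    max_time_to_detect_s=30.0,
)
```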

In all, it’s said that “People are nearly always a positive element in complex socio-technical systems. The objective should therefore be to strive to make people as reliable as possible. Organisations operating complex socio-technical systems should seek to ensure they have in place the necessary systems and support structures, and should design and operate their activities in ways that allows people to be as productive and adaptable as they can be”.

Author: McLeod, R. W. (2017). Human factors in barrier management: Hard truths and challenges. Process Safety and Environmental Protection, 110, 31-42.

Study link: https://doi.org/10.1016/j.psep.2017.01.012

Link to the LinkedIn article: https://www.linkedin.com/pulse/human-factors-barrier-management-hard-truths-ben-hutchinson
