Date: Tue 14 Sep 2021
Title: Fixing Common ERM Challenges
Is the faltering ERM program fixable? How to diagnose challenges in Enterprise Risk Management initiatives, and determine the best course of action.
The curious juxtaposition of need vs poor take-up, as explained in Episode 1.
Steps in Analyzing and Fixing Poor Take-Up in ERM Programs
1. Take a step back in order to analyze the situation.
2. What is the nature of the ERM mandate and your role in it?
3. What are the motives of senior executive regarding ERM?
4. What are the motives of staff regarding ERM?
5. Delicate matter of sorting out objective vs subjective reasons, and degree of personal investment.
6. Characteristics of an overly-elaborate program in its mid-life stage.
7. Specific fixes for improving the quality and compelling nature of risk information:
– goal of remediation: ensure relevance, utility and compelling nature of risk information
– cogent risk register using properly formed risk statements – review guidelines for planning; goal formulation; setting context; and risk ID facilitation
– review Likelihood and Consequence schemas with a view to simplifying, if appropriate
– granularity of risk analysis corresponds to context
– groups may not engage with text-based risk process (use more verbal, visual in daily stand-ups)
8. What about “opportunity”? Ref: Innovation.
9. What about other risk management sub-disciplines?
1. Return to first principles recommended.
2. Aim for information directing action to reduce uncertainty.
3. Simplify the program and focus on efficiency; integrate it with planning and management.
4. Review the principles of program success (Ep 15): make sure you’re not falling into common pitfalls.
“The result [of High Quality Risk Assessment] is a body of risk information that is fresh and revelatory, leading to problem solving. When that happens at your risk ID session, it is unmistakable. People see the logic of the method and acknowledge that it is working.”
Innovation: How Can My Organization Get Started? – free introductory course
[edited for clarity]
Episode 16: Fixing Common ERM Challenges. In this episode what we want to do is diagnose common challenges in the enterprise risk management initiative, and recommend courses of action to fix problems that commonly arise.
In the first part we’ll discuss the steps in analyzing poor take-up in the enterprise risk management program, and later on in this podcast we’ll look at the mid-life stage of an ERM program — the typical characteristics of a program that is midway along in its evolutionary development and, at that point, is typically overly elaborate.
[01:21] To start out, if you perceive at a certain point in your implementation that there’s really poor take-up and it’s just not working out as you had hoped, the first step is to take a step back and analyze the whole situation. That means that you have to take into account what the nature of the ERM mandate actually is; what your role in it is. And then you have to look at the motives that are driving the behaviour of senior executive and the employees themselves.
Now this approach makes sense, because if the nature of the ERM mandate in your organization is weak — if it’s just a notional idea, a peripheral effort, that sort of thing — and your own job description is not fully invested in the ERM mandate, but rather you’re doing the job off the side of your desk, then the amount of effort you’re going to invest in trying to fix the program will be measured accordingly, and of course weighed directly against opportunity cost.
[02:28] We started out the whole series in Ep 1 with this curious juxtaposition of the evident need for enterprise risk management as opposed to its relatively poor take-up and poor results shown in surveys. That strange juxtaposition or contradiction persists. That said, you know, all these decisions about what you’re going to do to fix the program are relative, and contingent upon the nature of the mandate and your job role, as I mentioned. The logic operating there is simply that the primary aim is to make the organization itself successful, by virtue of its own self-defined goals, and the priority is not necessarily to make a failing program successful. (It doesn’t preclude the possibility of circling back around and making the enterprise risk management program successful at a later date.)
It’s good to look first at the nature of senior leadership — what their attitudes are, what their motives are, what their thoughts are about the whole program. Now if they’re not supporting the program, it could be because their priorities have changed. They are simply shifting their focus onto other things that have more importance in their eyes — and I did mention that as one of the main reasons for program failure. Management direction can be rather fickle, and that doesn’t mean it’s wrong, but it can be quite variable and changeable; we need to recognize that.
Another possibility is that the senior executive people are interested (self-proclaimed), but they don’t seem to find time to participate, and that simply shows you that it’s not a high priority for them.
Probably what is underlying that is that they could believe in the whole program conceptually, but somehow they still feel that it’s not advantageous to report risk; it’s not really working for them; the results of the program are just not compelling. I think it’s likely that that will be the root issue; i.e., the fact that, so far, compelling results have not been demonstrated in the program. And if that’s the case then it is still fixable. You can try to improve the quality of the risk assessment being done, and I will address that a little later on in this podcast episode.
But so far, we just want to make sure that we’ve taken account of what the senior attitudes are; what their opinions are; and what their directions are with regard to fixing the program.
[05:05] The next thing to consider is the attitudes and motives on the part of managers and staff, the people who are actually charged with implementing risk identification on various projects and program areas.
If the ERM mandate is strong — they are supposed to be doing the work, but are not really complying or delivering good results — again, it could be a problem of method: so far, they haven’t been able to demonstrate to themselves anything compelling enough to motivate them to continue in the program. Keep in mind that this, again, is the likely root issue when people complain that they just don’t have the time to contribute to a program that, on the surface, has merits.
The root issue is that they’re too resistant to change. They are not yet convinced of the value of the new system. So at this stage of our analysis of program under-delivery it really is a question of being able to step back, as I say, interpret the business situation, discover the motives and understand the intentions of senior executive and staff, to try to sort out the root issues.
I think there are two broad categories of conclusions that you could arrive at in your analysis of program under-delivery. The first is that there are objective reasons for not continuing the program. That is, senior executive has simply changed direction, and does not contribute to or support the program in any meaningful way. They don’t allow the staff time to do risk assessment on their programs and projects, and they have other clear priorities that are being made quite evident. So these are all objective reasons — that is, behaviour or directives from the leadership that contradict or negate ERM altogether.
The other category is more subjective reasons for noncompliance or poor take-up: despite a strong ERM mandate and intellectual agreement with the merits of the program, people are not following through. And that is likely, as I said, because the methods are deficient or somehow faulty.
[07:11] This becomes a difficult call, a matter of judgment, and perhaps a rather delicate matter. You have to weigh the options and the reasons carefully. In the face of more objective reasons, perhaps a weak ERM mandate, and not much invested personally, you might wish to let the program go, or set it aside for a while.
On the other hand, if there’s a strong ERM mandate, and you’ve got a lot invested in it in terms of your job role, then you’ll likely want to proceed with some kind of remediation of the program.
Well, let’s assume now that the enterprise risk management program is something that is central, definitely worth saving, and something that you want to fix. This will especially be the case if the ERM program is midway in its evolutionary development — if it’s having a sort of midlife crisis. And I’ve seen this more and more in recent years with [ERM workshop] program participants, their comments, and how things are going back at the office. Typically, what’s happened is that the ERM program is already a few years old, and the difficulty is that it’s becoming burdensome. It’s becoming too clogged with process: too much paperwork, too much meeting time demanded, and so on.
So whether you are catching the problems of poor take-up at the initial stages, or you’re trying to remediate a program that’s already somewhat mature, I believe that my next comments on how to fix poor risk information will be relevant to your situation.
[08:36] So the goal in our remedial efforts is to ensure that the risk information that people develop is so relevant, so compelling, so useful for them and serves really well as the basis for important decision-making, that they just can’t do without it.
Go back and revisit your deliverable for risk identification sessions. Remember, we don’t want a statement of general conditions and trends. We don’t want lengthy and complex risk statements. We don’t want a rehash of familiar issues, in repetitive terms. No — what we want is a list of cause-and-effect statements, following the rules that we set out for risk statements. [Note: see Ep. 10].
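To make the cause-and-effect rule concrete, here is a minimal sketch of my own (not the author’s tooling; the field names and the rendering pattern are assumptions for illustration) showing how a register entry built around a properly formed risk statement differs from a vague statement of conditions:

```python
from dataclasses import dataclass

@dataclass
class RiskStatement:
    """One cause-and-effect entry for a risk register.

    Illustrative only: field names are assumed, not the author's
    template. The point is that each entry pairs a specific cause
    with a specific effect on a stated objective, rather than a
    general condition or trend.
    """
    cause: str   # the uncertain event or condition
    effect: str  # the consequence for a stated objective

    def as_text(self) -> str:
        # Render in a "because of X, Y may occur" pattern
        return f"Because of {self.cause}, {self.effect}."

# A vague condition ("supplier relations are strained") would not
# fit this shape; a properly formed statement does:
r = RiskStatement(
    cause="reliance on a single supplier for the control module",
    effect="a supplier outage may delay the Q3 launch by several weeks",
)
print(r.as_text())
```

The discipline the structure enforces — a named cause tied to a named effect on an objective — is what keeps the register from becoming a rehash of familiar issues.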
If this is done consistently, you’ll start to see results right away. It becomes evident that people appreciate this approach, that they see the logic, and that they acknowledge it’s working.
It is a matter of going back to first principles and making sure that you’re following the guidelines that I had set out for the earlier stages of this whole process.
I say that because it’s quite easy to have skipped something: making sure that the plans are well substantiated and researched, or that the goals and objectives are properly formulated and stated using the SMART acronym. Let’s say you’ve started risk assessment but didn’t do the context paper properly. One common fault, for example, is to put a long narrative of the program history in the context paper. It serves only to confuse them [session participants] as to the goal of the exercise.
Another very typical problem is that goals, as I said, lack specificity. Or, let’s say, corporate values are not really articulated — they’re not used as risk criteria.
Other problems could be procedural in nature. Let’s say, for example, you didn’t bother with the context paper, but started to convene risk ID sessions without preparing the ground first. You found that you wasted a lot of time during the meeting trying to set ground rules and “level-set” — i.e., set definitions, and all the rest of it. That could be a very easy reason for failure.
Another one is that the facilitator allows the discussion to go on far too long. [In this case] you’re not really facilitating; you’re not intervening at the appropriate spot to identify the risk, formulate a statement, and then keep the whole process moving.
[11:00] Now, this becomes a long list. I’ve got a table actually in my book. It’s called Process Elements and Quality Checks — 13 different points to see common faults and corresponding solutions for the risk identification and assessment process. All of these will serve to sharpen and improve the quality of the risk information that you develop.
It could be a matter for some experimentation. But, as I say, when you hit on the right method, when you get the right point of view, the right angle, it’s unmistakable. People start to say: “Okay, now I understand how to assess risk on this project.” It’s becoming clear how to develop a risk register that really delivers insightful information that you just didn’t have before. That’s accomplished by focusing on the uncertainty that is associated with intended actions, and making sure the risk statements are really tight, cogent, and consistently formulated.
Some people spend far too long trying to pin down the exact Likelihood and Consequence. Now the scales — the schema that you’re using for Likelihood and Consequence — could, in some cases, be simplified. And it might be to your advantage to do that, because you’ll get through it [the risk assessment session] faster. Keep in mind that you’re not assigning an absolute ranking to the risk. It’s just a matter of assigning relative ranking, so you know what to take action on, and what can be assigned a lower priority.
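As a sketch of the relative-ranking point (my own illustration, with assumed 3-point scales and made-up risks, not the author’s schema), coarse Likelihood × Consequence scores are enough to sort a register into action order:

```python
# Illustrative only: a simplified 3-point schema (1 = low,
# 2 = medium, 3 = high) for Likelihood and Consequence. The
# product gives a relative ranking for prioritization, not an
# absolute measure of the risk.
risks = [
    ("Key staff turnover",       2, 3),  # (name, likelihood, consequence)
    ("Vendor delivery slippage", 3, 2),
    ("Minor reporting errors",   3, 1),
    ("Data centre flood",        1, 3),
]

# Sort by the product, highest first: what to act on now vs. later.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, consequence in ranked:
    print(f"{likelihood * consequence}: {name}")
```

Notice that a coarse scale already separates the items needing action from the lower-priority ones; haggling over finer gradations rarely changes that ordering.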
One other conceptual difficulty is the granularity of the analysis. In a strategic plan risk assessment, you want to be focusing on strategic issues, the strategic risks, and keep it all at a certain level, and not descend into the level of detail which really belongs to operational program risk assessment.
[12:41] Another conceptual difficulty could be that the groups in question are not able to engage with the risk assessment process as you’ve set it out, even though they make genuine efforts, because their organizational culture — the way they think, the way they process information and so on — is just not aligned with your text-based process. I’ve seen this on the factory shop floor, for example, where risk assessment is much better done in quick stand-ups, [using] visual cues and signs and so forth, to assist the process.
Another question that arises is: “What about opportunity?” In other words, what happens if we identify possibilities for taking action that simply arise in the course of our doing risk assessment? In that case, you can’t take an opportunity and list it along with the risks in the risk register. Of course, you have to take it offline and assess it apart, as a possible side project.
Strictly speaking, opportunities should be sought out and developed in the context of a full-blown innovation program. And you can see a reference in the show notes to a free course that I give — an introductory course, actually, on innovation.
Also, another procedural point that tends to hang up risk identification sessions is to mix different disciplines or sub-disciplines in one analysis. It shouldn’t be done. In other words, if you identify hazard risk with regard to crisis and emergency planning, all of that belongs in a dedicated risk ID session for emergency planning and business continuity.
Similarly, if you identify a risk that has to do with, let’s say, the security of personnel who are subject to the possibility of violence, then that should be taken offline and convened in a separate session with the experts who know how to do a security review. So don’t try to mix specialized sub-disciplines within one risk identification session.
[14:48] In sum, I think we want to come to a point where we’ve got an ERM process — a risk identification and assessment process — that gives very concise and pointed information on how to proceed, how to take action; and that is not overly bureaucratic, that doesn’t add a great deal of extra time and effort to the whole management and planning practice. Rather, the risk methods are integrated with it.
The way to accomplish that is to return to first principles with regard to the process that I’m recommending, for planning, setting goals, setting up the high-quality risk assessment process, and doing it as efficiently as possible. We could also take a second look at the principles for program success that we covered in the last podcast episode, and make sure that we’re not falling into any of the pitfalls, the common reasons for program failure that have been identified in the literature.