HIGH FLYERS THINK TANK

Supported by:
University of Melbourne

Extreme Natural Hazards

University of Melbourne, Tuesday 30 October 2007

Dr Paul Barnes
Queensland University of Technology

Paul Barnes has a degree in environmental science and a PhD in risk and organisational analysis. He teaches and researches issues in risk and crisis management at the Queensland University of Technology in Brisbane. He has presented on risk and emergency management applied to critical infrastructure protection, organisational vulnerability and supply chain security around the world. He is a member of the Research Network for a Secure Australia and an EU evaluator in Risk Management and Governance Systems.

Anticipating vulnerability in infrastructure(s)

My task today is to examine infrastructure, and I will do this from a slightly different perspective. I am going to talk about anticipating vulnerabilities in infrastructure as a more focused way of looking at what we actually have to do across the related prevention, preparedness, response and recovery (PPRR) issues.



I will cover the following points: precursors to prevention, preparedness, response and recovery; the notion of the interconnected world of infrastructure, certainly in the urban settings that we find in this country and many other developed economies; the issue of vulnerability as possibly a more important focus of our analysis before we get to PPRR; and a slightly different take on anticipation from the papers you have heard earlier: in terms of knowing our systems of infrastructure, how can we anticipate where natural hazards and unusual climatic phenomena may impact?

In relation to storm surges and modelling impact zones, for example, if we can identify where we may have vulnerabilities in terms of how our urban centres are designed (or grow) and function, we may be able to act in advance of disturbances that have the potential to cause ongoing consequences that could have been prevented, or at least mitigated. In this sense, we might seek to anticipate the nature of the disturbances and consequent damage.

Moreover, there is also the notion of foresight and resilience – resilience in critical infrastructure or infrastructure that becomes critical, depending on the types of disturbances that we can model, anticipate, or even imagine.



As an important initial thought, we could ask, 'what are we attempting to prevent, prepare for, respond to and recover from?' Can we be sure that, with variations in climatic phenomena around the planet, we can always anticipate disaster events (as compared with identifying vulnerability within systems of infrastructure and subsequent losses)? Certainly, we can anticipate the likelihood of certain climatic events, but we cannot always predict the occurrence of emergency events whose consequences may cascade, expand and aggregate up to disaster proportions.

There is of course a deep generic sense of how to prepare, respond and recover embodied in the well-established protocols of essential services: fire and rescue, paramedic and other types of community safety-focused organisations. Certainly, when large disaster events occur there is a broader scale that includes state- and intrastate-level responses, as well as national-level responses in terms of disaster mitigation and recovery: bringing systems back 'online.'

However, if we look at infrastructure generally in the modern Australian context, it is a fact that most of our population resides in urban centres, with lesser proportions in urban/rural interface areas and rural areas. Australia has a focus on urban-centred living, and that is also where most of our infrastructure exists. So what we may need to consider is that, as Mike Tarrant has suggested in tracing EMA's long evolution up to recent times, we are possibly moving toward a new way of looking at the PPRR rubric. How can we change what we are doing, given that we have a whole range of potentially cascading, interacting hazards that may manifest as consequences, all of which are very difficult to anticipate? We must still attempt to anticipate them and invest in enhancing our response and recovery capacities.

So I am suggesting that before we get to this level of operation (ie, concentrating on PPRR) there is arguably a need to delve a little bit deeper in terms of where there are vulnerabilities within our infrastructure systems.

The other question that I would like to be considered, certainly by the breakout groups, is: who will implement the actions of PPRR? There are very well-established emergency management systems in Australia, from the local through to the regional, state and federal levels, but we are dealing with city- and urban-centred infrastructure, and if it is disturbed to the degree that severe consequences result, how will we respond at the appropriate level?



One of the elements from socio-technical systems and emergency management within that particular sphere, which I think is very useful to carry across to the natural hazards and natural disturbance area, is the notion that socio-technical systems fail in predictable stages: pre-crisis, a trigger event, the crisis itself, consequences and, hopefully, post-consequence learning.

The link between 'post-consequence' and 'learning' via the broken line shown in this slide is critical, not least because of the regularity with which this link fails. Take, for example, foot-and-mouth disease (FMD), which has a significant economic impact on agricultural ecosystems as well as on economies in general. England has proven a very good setting for failing to learn from instances of FMD and for providing other spectacular examples of socio-technical disasters, the mad cow disease disaster being one of them. The failure to embed 'learning' into remedial policy development and regulatory control has significant ramifications generally, at both a regional and a national level. Queensland has modelled the long-term effects of an FMD outbreak on the state economy. Worst-case modelling suggested that the state economy would recover to pre-FMD levels approximately 14 years after full eradication of the disease. Over such a period, many impacts across both the public and private sectors have the capacity to create learning opportunities and define related mitigating strategies.

So there can be a range of identifiable weaknesses, vulnerabilities you might say, that are usefully examined by this particular type of model and, by analogy, to examine options for infrastructure protection.

My focus here is on infrastructure and how various types of natural disturbances may influence systems of infrastructure. I will use a risk management tool to better understand these relationships.



In this conceptual risk framework, we have firstly, on the left-hand side, threats that may be internal or external to an infrastructure system; there may be known (or knowable) hazards.

There may be systemic triggers that allow those particular threats or hazards to manifest as an incident of interest. If we have a full understanding of vulnerability within those particular infrastructure systems, we may then be able to put control and mitigation strategies in place to limit the likelihood of the incident, and certainly decrease the consequences and the magnitude of the effects of those incidents.

If some sort of disturbance occurs that is not preventable, we should be able to learn from that experience and, through mitigating efforts and post-incident analysis, feed back lessons learned to increase the likelihood that we will block these disturbances, or anticipate their particular sources earlier, should they recur or should different types of disturbance occur.

If we have this focus on vulnerabilities within infrastructures, we can design in training/awareness programs, audit/inspection capacities, and the ability to redesign infrastructure systems. And we can have real-time learning, short-cycle learning from the pre and post situations, and we can make sure that our policy development and implementation of programs are focused, timely and effective.

Again, this is a conceptual risk framework. In the real world, those sorts of linkages sometimes do not work as well as we would hope. However, theoretically and in practical terms, this is one way of dealing with the vulnerability analysis before we get to the PPRR.
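A minimal sketch of this feedback loop, assuming a simple numerical treatment of likelihood and consequence (the class, control and system names below are illustrative placeholders rather than part of the framework itself), might look like this in Python:

    # Minimal sketch of the conceptual risk framework: controls limit the
    # likelihood and consequence of an incident, and post-incident analysis
    # feeds lessons back in as new or strengthened controls.
    from dataclasses import dataclass, field

    @dataclass
    class Control:
        name: str
        likelihood_reduction: float   # fraction of incident likelihood removed
        consequence_reduction: float  # fraction of consequence severity removed

    @dataclass
    class InfrastructureSystem:
        name: str
        controls: list = field(default_factory=list)

        def incident(self, base_likelihood: float, base_consequence: float):
            """Residual likelihood and consequence after the controls apply."""
            likelihood, consequence = base_likelihood, base_consequence
            for control in self.controls:
                likelihood *= 1.0 - control.likelihood_reduction
                consequence *= 1.0 - control.consequence_reduction
            return likelihood, consequence

        def learn(self, lesson: Control):
            """Post-incident analysis closes the loop by adding a control."""
            self.controls.append(lesson)

    grid = InfrastructureSystem("urban power grid",
                                controls=[Control("routine inspection", 0.2, 0.1)])
    print(grid.incident(0.3, 1.0))                       # before any learning
    grid.learn(Control("storm-surge hardening", 0.3, 0.4))
    print(grid.incident(0.3, 1.0))                       # after the feedback loop runs

The point of the sketch is simply that the 'learn' step closes the loop: each post-incident analysis should leave the system with stronger controls than it had before the disturbance.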



We know that urbanisation and infrastructure are critically linked – complex, dynamic socio-technical relationships. We know also that there are certain key types of infrastructure where disturbances will manifest. They include interconnectivity and telecommunications – we all rely on data and networks, in the commercial world and in our normal lives. In the event of significant loss-causing events, we are likely to see damage to commercial premises in city spaces, and to housing stock as well. Perturbation of power supplies, and of generation and transmission capacity, should also be considered, along with transport systems (road, air, water and rail).

Urbanised centres, maritime ports and airports, are critical infrastructures in their own right and are significant nodes within supply chains generating significant economic activity. Thus if we are attempting to enhance sustainable regional economies, it is logical to focus on locations where our vulnerabilities are known, where we might invest resources and apply scientific knowledge. One of the critical things that we know from previous disasters, certainly in the UK again, is that there is often a distance between scientific (technical) advisers and political decision-makers: especially in terms of the limitations of available knowledge.

In terms of looking at infrastructure, and protecting it, a focus on the interconnected systemic elements and how they may be made safer or more reliable is critical in translating what needs to be done now, or could be done in slower time, through to political leadership.



This is a picture taken in 1991, in the United Arab Emirates.



In a reasonably short time this locale was transformed into the scene you see here – so the notion of urbanisation, of city-building as the place where humans live, is evident: we see a building in 1991 and the same building, in a significantly different locale, in 2005.

The image conveys the often-rapid evolution of complex infrastructure systems in large urbanised city spaces. Certainly, this is a desert location, but we have had a massive investment and a massive set of built physical infrastructure that covers all of those issues that I alluded to in the 'Urbanisation and infrastructure' slide.

If we are looking at preparedness, response and recovery for large cityscapes, how will we know where to invest existing essential services, or meet future service needs, unless we understand the vulnerabilities inherent in complex, interdependent and dependent infrastructure systems? In these two slides we see a transition from a semi-desert location to a large conurbation with large, complex central business districts and all of the systems you would imagine in a modern metropolis – complex, interactive, interdependent systems.



For example, we could look at electrical power – the slide shows all of the dependent business and related ancillary systems that link into the provision and the maintenance of electricity supplies in modern infrastructure, within a modern urban and city space.



If we drill down to a deeper level, looking at the interdependencies between oil, water, electric power, telecommunications, natural gas and transportation, we find that there are complexities within this space that we have to look at before we can get to the overarching requirements of operationalising PPRR.
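To make those interdependencies concrete, the sectors can be treated as nodes in a directed dependency graph and the loss of one traced through to the others. The couplings in the sketch below are invented purely for illustration; the dependencies in any real city would have to be mapped empirically.

    # Illustrative dependency graph: an entry A -> [B, ...] means each B depends
    # on A, so losing A can cascade to B. The couplings are assumptions only.
    from collections import deque

    DEPENDENTS = {
        "electric power":     ["water", "telecommunications", "oil",
                               "natural gas", "transportation"],
        "telecommunications": ["electric power", "transportation"],
        "oil":                ["electric power", "transportation"],
        "natural gas":        ["electric power"],
        "water":              ["electric power"],
        "transportation":     ["oil"],
    }

    def cascade(initial_failure):
        """Return every sector reachable from the initial failure."""
        affected, queue = {initial_failure}, deque([initial_failure])
        while queue:
            sector = queue.popleft()
            for dependent in DEPENDENTS.get(sector, []):
                if dependent not in affected:
                    affected.add(dependent)
                    queue.append(dependent)
        return affected

    print(cascade("electric power"))  # here, loss of power reaches every sector

Even a toy graph like this makes the basic point: the consequence of losing one sector depends less on that sector in isolation than on the web of dependencies sitting behind it.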

We have a familiar range of natural hazards in Australia. Modelling suggests that we will have higher temperatures and an increased likelihood of flash floods and bushfires, with a range of impacts, certainly at urban/rural interfaces. We know that, historically, cities in Australia have been affected by bushfires.

We may find an increased likelihood of a whole range of these sorts of natural events affecting our infrastructures.



To look at low elevation issues in terms of coastal regions of Australia: Australia is identified, certainly in this source, as 'at risk'. I think that would not be an overstatement.

So we have a range of vulnerabilities that we should be focusing on. Certainly, the logical ones that are easy to identify may be more accessible in terms of our thinking and the clarity of advice to senior political leaders and decision-makers. What may be more problematic is the potential for unexpected linkages between interdependent infrastructure systems and subsystems that may be disturbed in unexpected ways, with significant consequences, by some of the natural events that we are looking at today.

For infrastructure, unexpected socio-technical interaction between components is important in addition to consideration of natural sources of disturbance, as mentioned earlier. The situation in highly dense urban spaces is possibly more complex, because we live in complex interdependent systems that may of themselves fall apart because they are complex, let alone from any natural disturbance source.



We are all familiar with the pattern depicted here: we live on the coast. We live in conurbations on the coast.



Why should we change our thinking, to focus more on vulnerability analysis and anticipating some of those sorts of problems?

You have probably heard a lot over recent years about critical infrastructure protection from a national security perspective. If our climatic systems are moving along the trajectories that they seem to be, all of our infrastructure may become critical, in the sense that we have to work out ways in which we can sustain and maintain it, because people rely on many of the types of services provided by those systems of infrastructure.

I will refer to something that came out of the 9/11 Report by the National Commission on Terrorist Attacks Upon the United States, on the attacks of 2001, at page 339. It identified four kinds of failure in governance with respect to the event itself: problems in imagination, policy, capabilities and management. I suspect, from my experience, that the first of those is the most critical – the failure to imagine what could happen, the failure to anticipate, the failure to appreciate that there are vulnerabilities in certain systems and that it may be better to prevent disturbances by anticipating them and investing in prevention.

I am not saying that PPRR needs to be put aside. Rather, I think there need to be some preliminary analytical considerations, as alluded to earlier. One of the critical issues is using imagination appropriately and investing scientific knowledge along that particular line.



One of the reasons why I think it is critical to have this change of thought may be explained by what I will show you here.

Imagine the impact of a technology that could be considered to be planned or unexpected, with the context in which that technology impacts on the world being either limited or dispersed.

Quadrant 1 may be surprise free – planned and limited contexts.

Quadrant 2, where our science – our knowledge – may be invested quite beneficially in different and unexpected ways, is again beneficial. There is not necessarily a downside to that.

In quadrant 3, however, we have an unexpected but limited focus of surprise. We have some 'externalities', as economists may call them – unexpected types of phenomena. We might have been able to prevent those if we had looked at some of the vulnerabilities and systemic factors: where the sources of disturbance may have come from and the space in which those consequences would manifest.

Quadrant 4, however, is a techno-contextual surprise, where things are out of the box, so to speak, and cannot necessarily be put back – again, new variant Creutzfeldt-Jakob disease, or mad cow disease, and again in the UK. But we have other phenomena very relevant to some of the topics that we will look at today. Effective anticipation – looking at vulnerabilities, looking at systems that are complex, and making sure we have data that are relevant and contextual – may allow us to avoid finding ourselves in quadrant 4.
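One reading of this two-by-two, with the axis assignment for quadrant 2 inferred by elimination from the descriptions above, can be written as a simple lookup (purely illustrative):

    # One reading of the quadrant model: the technology impact is planned or
    # unexpected, and the context of impact is limited or dispersed.
    # The assignment of quadrant 2 is inferred, not stated explicitly above.
    QUADRANTS = {
        ("planned",    "limited"):   1,  # surprise free
        ("planned",    "dispersed"): 2,  # knowledge applied beneficially in new contexts
        ("unexpected", "limited"):   3,  # contained 'externalities'
        ("unexpected", "dispersed"): 4,  # techno-contextual surprise
    }

    def classify(impact, context):
        return QUADRANTS[(impact, context)]

    print(classify("unexpected", "dispersed"))  # 4 - the case we most want to avoid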

I think this harks back to the experience of the 9/11 Report, where use of imagination and testing for the worst case and being contextual would have been useful.



There are many crisis-ready institutions that do a range of things that are relevant to what we are talking about here.

How can governments – state and federal – and institutions at each of those levels, along with the providers of technical information, enhance and promote this capacity to be adaptive in analysing vulnerabilities (certainly in infrastructure systems this seems to be critically needed), and allow the people who use these infrastructure systems to be more aware, and therefore to contribute to debates and respond to appropriate advice from government?

How can we anticipate, more readily, counter-intuitive triggers of disturbance that will impact on infrastructure?

How can we enhance planning methodologies at various levels of government, including the 'system of systems' concepts we find in large, complex infrastructure systems?

Moreover, how can we use those sorts of capacities to look at emergent threats, particularly before they manifest as harm?

It is somewhat difficult in many cases, but there is enough evidence in the various literatures, and in practice, to suggest that these things are eminently doable and should be invested in.



Some of the analytical frameworks that can be applied to infrastructure, as shown here, include engineering lifelines, biotic elements and socioeconomic lifelines – critical lenses through which we could look at city spaces and urbanised space. Such analysis would assist in defining where impacts occur and how severe those impacts will be, from a range of sources of disturbance, both naturally introduced and more technical.

Those analytical frameworks can be focused at the local, regional, state and national levels, and they can include the institutional capacities we currently have, as embodied in Emergency Management Australia – for example, in focusing on local disaster response. How can we analyse and look ahead so that we can invest in anticipating the incidents that are likely to occur, and the locations where vulnerability suggests we need the best capability?

We can model a range of these sorts of vulnerabilities through time, and given data feeds from various sources – Geoscience Australia, climatic-type sources of information – we can enhance our capacity to apply the PPRR rubric.



We can forecast. If we have data sets, and if we have applied imagination, we can anticipate into the future.



But equally, depending on the sorts of data we have and the way we are analysing them, we can back-cast – looking at what historical data would suggest to us – which gives us a greater capacity to prevent and respond as appropriate.
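As a toy illustration of the distinction between forecasting and back-casting, assuming nothing more than an invented annual event count and a simple linear trend:

    # Toy illustration of forecasting versus back-casting with a linear trend.
    # The event counts are invented for illustration only.
    import numpy as np

    years = np.arange(1997, 2008)                           # historical record
    events = np.array([3, 4, 4, 5, 6, 5, 7, 8, 8, 9, 10])   # e.g. severe-weather callouts

    slope, intercept = np.polyfit(years, events, 1)          # fit a simple trend

    def project(year):
        return slope * year + intercept

    print(round(project(2012), 1))  # forecast: extend the trend forward
    print(round(project(1990), 1))  # back-cast: what the trend implies about the past

In practice the models would be far richer than a straight line, but the symmetry is the point: the same data and analysis can be run forward to anticipate, or backward to test what history should have looked like.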



One of the elements of resilience related to critical infrastructure is the notion that resilient infrastructure keeps working, within certain parameters, through time when there is a disturbance. This draws on the resilience terminology of the Resilience Alliance – many of you from CSIRO would know the work of Brian Walker and others – in particular the notion of a threshold of change: parameters across and within which infrastructure systems may operate. There is a capacity for the infrastructure to fail, but we may be able, through vulnerability analysis, to identify thresholds of change – signs that systems of infrastructure are starting to be perturbed to the point where they cannot maintain their functional resilience.
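A minimal sketch of the threshold idea, assuming a single monitored performance parameter and an illustrative operating band:

    # Minimal sketch: flag when a monitored infrastructure parameter drifts
    # outside the band within which the system can absorb disturbance.
    # The band and the readings are illustrative assumptions.
    OPERATING_BAND = (0.7, 1.1)   # acceptable fraction of normal service level

    def breaches(readings, band=OPERATING_BAND):
        """Return the time steps at which the threshold of change is crossed."""
        low, high = band
        return [t for t, value in enumerate(readings) if not (low <= value <= high)]

    service_level = [1.0, 0.95, 0.9, 0.82, 0.74, 0.66, 0.58]  # disturbance takes hold
    print(breaches(service_level))  # -> [5, 6]: warning before functional resilience is lost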



Another way of looking at it is how we may fall into these deep, dark holes. How can we identify when systems of infrastructure are starting to be perturbed beyond recovery? Again, notions of vulnerability analysis, and of how interdependent systems relate to each other from that perspective, are needed. I think a lot of work is being done around Australia on these sorts of things, but enhanced funding may bring more results to the fore.



How can we apply vulnerability analysis to an infrastructure system and build in some sort of adaptive resilience, where, rather than fail totally, we have the capacity to fail safe – losing some functionality but maintaining degrees of output? So we have the option of retrofitting resilience into infrastructure systems; but what sort of retrofitting, and how much, will be determined by vulnerability analysis, based on anticipation of the nature of the disturbances that are likely to impact urbanised infrastructure.
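The 'fail safe rather than fail totally' idea can be sketched as a simple priority-based load-shedding rule; the loads, priorities and capacities below are illustrative assumptions only:

    # Sketch of graceful degradation: when capacity drops, shed the lowest-
    # priority loads first so that essential services keep running.
    LOADS = [  # (name, demand in MW, priority: lower number = more essential)
        ("hospitals and emergency services", 40, 1),
        ("water and sewerage pumping",       30, 1),
        ("traffic and rail signalling",      20, 2),
        ("commercial precincts",             60, 3),
        ("advertising and decorative load",  10, 4),
    ]

    def degrade_gracefully(available_capacity):
        """Serve the most essential loads that fit within remaining capacity."""
        served = []
        for name, demand, _priority in sorted(LOADS, key=lambda load: load[2]):
            if demand <= available_capacity:
                served.append(name)
                available_capacity -= demand
        return served

    # A storm cuts the normal 160 MW of demand-side capacity to 90 MW:
    # the essential loads are still served.
    print(degrade_gracefully(90))

What counts as 'essential', and how much headroom to retrofit, is exactly what the vulnerability analysis would have to determine.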



The difficulty of dealing with uncertainty was a feature of decision-making familiar to Herodotus in ancient Greece. I am not suggesting that we do what the Persians – anecdotally – would have done and go through that process. But the ability to look at alternatives, to look for vulnerabilities, to bring those vulnerabilities and weaknesses in our infrastructure systems into focus, and to model known sources of disturbance, certainly from an extreme natural hazards perspective, would be useful.

Discussion

Question – You used an example of the anticipation of an emergent threat. Can you give an example of a failure?

Paul Barnes – New York has systems of interconnected infrastructure, a large urbanised space, and West Nile virus manifested within that city space, eventually at an epidemic scale, propagated by mosquitoes with wild birds as the reservoir host. The reason for not acting early enough was a failure of communication between two types of scientists – animal health specialists and public health specialists.

The first signs of West Nile virus were in birds, including birds at a New York City zoo. The zoo's animal health people contacted the Centers for Disease Control. There was some reluctance, and some difficulty in convincing the epidemiologists and human virologists that there was a potential problem.

Now, if there had been the more appropriate focus that we now see in this country and in most parts of the world, where biosecurity issues are treated as critical – especially given that many known biological agents are zoonoses – we would probably have found a rapid link-up between those two types of medical practitioner, and there might have been a capacity to move into New York and do something a little sooner, rather than spraying the entire city to control the mosquitoes. And Jerry Hauer would probably still be the head of emergency management operations in New York City. The authorities had to come in at a very late stage and do something that upset most of the population of the city.