Citation: Mulligan, M and Douglas, C (2019) Documentation for the EcoActuary Model V2. www.policysupport.org/ecoactuary
King's College London and AmbioTEK CIC provide these systems without warranty of merchantability or fitness for a particular purpose. We shall not be liable for any consequential, incidental, indirect, special, punitive or exemplary damages resulting from the use of this software.
Note that input datasets are regularly updated in this system. Thus, citations for the data used are not listed here but in the Prepare data area of the PSS.
Eco:Actuary is a spatial policy support tool inspired by catastrophe models and designed for strategic planning aimed at risk mitigation. Eco:Actuary is a probabilistic flood model with global coverage. All the data required to run the model is included as standard; the model uses 148 input maps. However, users can also upload their own maps to replace some of the inputs if they wish. The model runs at 1 ha or 1 km spatial resolution. Eco:Actuary’s approach to catastrophe modelling is to first assess the risk of a hazard and then identify the spatial distribution and value of assets within 19 asset classes. Internationally consistent predictions are possible because the global datasets are primarily based on Earth Observation data and OpenStreetMap. Eco:Actuary has scenario functionality: potential scenarios can be run and the results compared with baseline runs. Potential scenarios include changes to climate, land use, green infrastructure and model parameters. The comparison of the two runs produces a map showing the percentage change in values between them, illustrating the impact of the scenario.
Eco:Actuary models pluvial and fluvial flood risk globally. Briefly, the model works as follows: rainfall events are generated probabilistically, the water is routed through the terrain, part of it is stored in the landscape, and the excess becomes runoff. The specifics of the model are described below. The model uses remotely sensed data as inputs to ensure a globally consistent methodology. All calculations occur on a per-pixel basis.
Rainfall event generation. As flood risk is assessed probabilistically, numerous precipitation events are modelled and a frequency distribution of flood risk created. The user specifies the number of events to be simulated; the default (and maximum on the shared public servers) is 1200. The spatial patterns and volumes of rainfall used as standard in the model are from Hijmans et al. (2005). The maximum intensity and volume measurements are scaled according to a Pareto (power-law) probability distribution to simulate the pattern of rainfall events for that region (i.e. relatively even rainfall or a few very large events). The default value is 1. Based on the above inputs, two independent series are generated: a series of rainfall intensities and a series of rainfall volumes. From these two series, the model creates the user-defined number of events. Each event occurs during a given month of the year, and all events are independent of each other. To make sure that the simulated rainfall events are appropriate for the study region, two random numbers between 0 and 1 are chosen. If the normalised monthly rainfall for a pixel is greater than the minimum but less than the maximum of these numbers, a rainfall event ‘occurs’ in that pixel.
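As a concrete illustration, the event-generation logic above can be sketched in Python. This is not the model's actual implementation: the function name, the uniform choice of month and the use of `paretovariate` for the two independent series are illustrative assumptions.

```python
import random

def generate_events(n_events, monthly_rain, pareto_shape=1.0, seed=42):
    """Illustrative sketch of Eco:Actuary-style event generation.

    monthly_rain: 12 monthly rainfall totals (mm) for one pixel.
    Returns (month, intensity, volume) tuples for events that
    'occur' in this pixel."""
    rng = random.Random(seed)
    annual = sum(monthly_rain)
    events = []
    for _ in range(n_events):
        month = rng.randrange(12)  # assumed: events assigned to months uniformly
        # Two independent Pareto-distributed series: intensity and volume.
        intensity = rng.paretovariate(pareto_shape)
        volume = rng.paretovariate(pareto_shape)
        # Occurrence test from the text: the normalised monthly rainfall
        # must fall between two uniform random numbers.
        a, b = rng.random(), rng.random()
        if min(a, b) < monthly_rain[month] / annual < max(a, b):
            events.append((month, intensity, volume))
    return events
```

With a fixed seed the same event series is reproduced on every call, which mirrors the model's behaviour of reusing the same events across scenarios that share Pareto parameters.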
Rainfall modelling. Rainfall is based on remotely sensed datasets for wind speed, wind direction, cloud frequency, sea-level pressure and precipitation, which are combined to create a rainfall dataset that accounts for the effects of wind. From this the following metrics are calculated: annual rainfall, the fraction of each month experiencing rain (in relation to the annual total) and the fraction of each month experiencing cloud (in relation to the annual total).
Rainfall infiltration. The maximum potential infiltration rate is calculated on the basis of global permeability data (see the Prepare data step for the current source), the opportunity time for infiltration to occur (which is proportional to slope), the beneficial effect of tree cover (i.e. vegetation slows rainfall and runoff, giving water a greater opportunity to infiltrate into the soil) and the negative effect of impermeable surfaces (i.e. rainfall cannot infiltrate). Potential infiltration excess is calculated on the basis of soil permeability, soil thickness/porosity, opportunity time and rainfall. As the storage capacity of the soil fills, the infiltration rate decreases. Soil thickness is calculated using a terrain accumulation index (akin to a topographic wetness index), which is set to 0 for impermeable surfaces (roads, urban, bare, water).
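The interacting controls on infiltration can be sketched as below. The documentation only states the direction of each effect, so the linear scalings chosen here are illustrative assumptions, not the model's equations.

```python
def infiltration_rate(base_rate_mm_hr, slope_deg, tree_frac, imperm_frac,
                      store_filled_frac):
    """Illustrative infiltration sketch (not the model's exact equation):
    flatter slopes give more opportunity time, tree cover helps,
    impermeable surfaces block infiltration, and the rate falls as the
    soil store fills."""
    opportunity = max(0.0, 1.0 - slope_deg / 90.0)   # assumed linear scaling
    veg_bonus = 1.0 + tree_frac                      # assumed form of tree effect
    rate = base_rate_mm_hr * opportunity * veg_bonus * (1.0 - imperm_frac)
    return rate * (1.0 - store_filled_frac)          # declines as soil fills
```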
Water routing and runoff. The model determines the route water will take through the landscape based on a digital elevation model (DEM), specifically the HydroSHEDS local drainage direction map (Lehner et al. 2008). Runoff is the precipitation falling on and travelling through the catchment minus the water stored in the various forms of green infrastructure (see Water storage, described immediately below). Runoff is indexed in two ways: infiltration excess and saturation excess. Infiltration excess is defined as the rain per rain day minus the infiltration for each month of each event, and is accumulated down the flow network. Saturation excess is calculated as the difference between monthly rainfall and monthly soil saturation after infiltration, which is then accumulated down the flow network.
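Accumulating excess down a flow network amounts to a topological-order summation over the drainage graph. A minimal array-based sketch with illustrative names (the model itself operates on the HydroSHEDS grid):

```python
def accumulate_downstream(excess, downstream):
    """Cumulate per-pixel excess down a flow network.

    excess[i]: local (infiltration or saturation) excess at pixel i.
    downstream[i]: index of the pixel it drains to (-1 = outlet)."""
    n = len(excess)
    total = list(excess)
    # Process pixels in topological order: headwaters first.
    indegree = [0] * n
    for d in downstream:
        if d >= 0:
            indegree[d] += 1
    queue = [i for i in range(n) if indegree[i] == 0]
    while queue:
        i = queue.pop()
        d = downstream[i]
        if d >= 0:
            total[d] += total[i]       # pass accumulated excess downstream
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return total
```

For a simple chain of three pixels draining 0 → 1 → 2, local excesses of 1, 2 and 3 accumulate to 1, 3 and 6 at the outlet.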
Water storage. Water is stored in various ways within the landscape: floodplains, waterbodies, wetlands, tree canopy, and soil.
Floodplains are calculated using the drainage direction map (Lehner et al. 2008) and elevation from SRTM data (Farr and Kobrick, 2000). Floodplains are defined as pixels which are both: i) connected to a downstream pixel with a Strahler stream order of greater than or equal to 3 and ii) have a gradient of less than or equal to 4.5 degrees (the difference in elevation between the land pixel and the river pixel is less than the cell size divided by 20; greater differences in pixel height are considered slopes rather than floodplains). The average water storage capacity of floodplains is assumed to be 0.5m after Yamazaki et al. (2011).
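The two-part floodplain rule can be expressed directly; the function signature is illustrative.

```python
def is_floodplain(stream_order, land_elev_m, river_elev_m, cell_size_m=1000):
    """Floodplain rule as described above: the pixel must connect downstream
    to a channel of Strahler order >= 3, and the elevation difference to
    that channel must be less than cell_size / 20 (the stated gradient
    threshold)."""
    if stream_order < 3:
        return False
    return (land_elev_m - river_elev_m) < cell_size_m / 20.0
```

At a 1 km cell size the threshold is a 50 m elevation difference; at 1 ha (100 m cells) it is 5 m.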
Waterbodies are identified, and their storage capacity calculated, using MODIS surface water frequency data and elevation data (Lehner et al. 2011), following the methods described in detail in Mulligan (2013). In short, waterbodies are defined as areas completely covered by water, and the volume of the waterbody is calculated using elevation data. Non-permanent waterbodies are considered to have lower total storage than those which are permanently wet. The waterbody (WB) volume intercept is 30.684 and the WB volume slope is 0.9578.
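One possible use of the stated intercept and slope is sketched below. Note that the functional form (a power law, as such area–volume regressions are commonly fitted in log space) and the permanence scaling are assumptions that should be checked against Mulligan (2013).

```python
# Regression constants as stated in the documentation.
WB_VOLUME_INTERCEPT = 30.684
WB_VOLUME_SLOPE = 0.9578

def waterbody_volume(area_ha, permanence_frac=1.0):
    """Estimate waterbody storage volume from surface area (assumed
    power-law form); non-permanent waterbodies (permanence_frac < 1)
    are scaled down, reflecting their lower total storage."""
    volume = WB_VOLUME_INTERCEPT * area_ha ** WB_VOLUME_SLOPE
    return volume * permanence_frac
```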
Wetlands are identified, and their storage capacity calculated, using MODIS surface water frequency data, following the methods described in detail in Mulligan (2013). This metric captures water storage at the edges of permanent water bodies and in seasonally drying wetlands. Based on the frequency of water cover, the index scales the depth of all wetlands between 5m (for pixels covered by water all the time) and 0m (for pixels that are rarely covered by water); a maximum wetland depth of 5m is used.
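The wetland depth scaling amounts to a simple interpolation between the two stated endpoints; the linear form is assumed from those endpoints.

```python
def wetland_depth_m(water_frequency, max_depth_m=5.0):
    """Scale wetland storage depth with surface-water frequency
    (0 = rarely wet, 1 = always wet), as described above. The linear
    form is an assumption consistent with the stated endpoints."""
    return max_depth_m * max(0.0, min(1.0, water_frequency))
```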
Tree canopy storage is estimated based on tree cover (Hansen et al. 2006), precipitation (Hijmans et al. 2005) and number of rain days (Cramer and Leemans, 2001). The potential water storage of trees is calculated on the assumption that a canopy stores 5mm of water per rain day: the mid-point of the range (0.1-9.1mm) across arid, temperate and tropical regions proposed by Davies-Barnard et al. (2014). 5mm is considered a representative average for a closed canopy. To account for differences in tree canopy density across the globe, the water storage values are scaled by the fraction of tree cover per pixel, obtained from DiMiceli et al. (2011). Therefore in more arid environments, with more sparsely distributed trees, storage capacity values will be lower. So as not to underestimate the storage capacity of trees, the loss of water to the atmosphere (i.e. evapotranspiration) needs to be considered. To do this, a rain per rain-day metric is calculated from the precipitation and rain-days datasets listed above. When the rain per rain-day metric is greater than the canopy storage capacity, the storage capacity of the vegetation is the limiting factor; if it is lower, the number of rain days is the limiting factor. Based on this logic, the limiting value is multiplied by the number of rain days per year to give the total annual storage capacity. This assumes that the canopy store empties completely to the atmosphere (evapotranspiration) between each rain day. The tree canopy storage calculation ignores any water storage by grasses and shrubs, since their storage capacity is limited and short-lived.
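The limiting-factor logic above reduces to taking, for each rain day, the lesser of the rain delivered and the canopy capacity. A sketch with illustrative parameter names:

```python
def annual_canopy_storage_mm(rain_per_rainday_mm, rain_days_per_year,
                             tree_cover_frac, canopy_capacity_mm=5.0):
    """Annual tree-canopy interception storage, following the logic above:
    per rain day the canopy stores the lesser of the rain delivered and
    its capacity (5 mm for a closed canopy), scaled by tree-cover
    fraction. The store is assumed to empty between rain days."""
    per_event = min(rain_per_rainday_mm, canopy_capacity_mm)
    return per_event * rain_days_per_year * tree_cover_frac
```

For example, 100 rain days of 10 mm each under full canopy give 500 mm per year (capacity-limited), while 100 rain days of 2 mm each under 50% cover give 100 mm (rain-limited).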
Soil storage is based on a topographic wetness index (Beven and Kirkby, 1979; Beven et al. 1984), soil depth and vegetation cover (Sexton et al. 2013; Hansen et al. 2006). Porous soil storage capacity ranges from 0m for impervious surfaces to 2m for maximum accessible soil (4m deep with 50% porosity). This is based on soil depth being scaled with the topographic wetness index for each pixel, with wetter areas thus having greater storage capacity. The topographic wetness index is commonly used to quantify the topographic control on hydrological processes; it is calculated from slope and upstream area and ranges between 0 and 40. The influence of vegetation on soil storage is also recognised and incorporated by adding 1mm of water storage per 1% of tree cover. For water bodies, bare rock, roads and urban areas, soil storage is considered inaccessible: for water bodies and bare rock, soil storage is set to 0; for roads, the fraction of the pixel occupied by the road is considered inaccessible; and for urban areas, 90% of the soil storage is considered inaccessible.
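These soil-storage rules can be sketched as follows; the surface-type labels and function signature are illustrative, and the linear scaling of depth with the wetness index is assumed from the stated 0–40 range and 0–2 m bounds.

```python
def soil_storage_m(twi, tree_cover_pct, surface="soil",
                   road_frac=0.0, max_storage_m=2.0, twi_max=40.0):
    """Sketch of the soil-storage rules above: depth scaled by the
    topographic wetness index (0-40) up to 2 m, plus 1 mm per 1% tree
    cover; zeroed for water and bare rock, reduced by the road-covered
    fraction for roads, and cut by 90% for urban pixels."""
    if surface in ("water", "bare_rock"):
        return 0.0
    storage = max_storage_m * min(twi, twi_max) / twi_max
    storage += 0.001 * tree_cover_pct      # 1 mm per 1% tree cover
    if surface == "road":
        storage *= (1.0 - road_frac)       # road fraction inaccessible
    elif surface == "urban":
        storage *= 0.10                    # 90% considered inaccessible
    return storage
```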
Flood Risk. We use two indices to quantify flood risk: potential flood risk and realised flood risk, which account for the flood risk before and after, respectively, the stores in the landscape have been considered.
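The relationship between the two indices can be sketched as follows; treating the landscape stores as a simple subtraction is an assumed reading of "before and after the stores have been considered".

```python
def flood_risk(accumulated_runoff_m3, landscape_storage_m3):
    """Hedged sketch of the two indices described above: potential flood
    risk before landscape stores are considered, and realised flood risk
    after the available storage has absorbed its share."""
    potential = accumulated_runoff_m3
    realised = max(0.0, accumulated_runoff_m3 - landscape_storage_m3)
    return potential, realised
```

When storage exceeds the accumulated runoff, the realised risk is zero while the potential risk is unchanged.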
Asset datasets can be uploaded; however, if suitable datasets are not available, Eco:Actuary has an in-built asset dataset in which assets have been identified by a combination of remote sensing and crowdsourcing. We assume that asset value is related to night-time brightness and building height, with brighter and taller buildings being of higher value. Using remotely sensed data, we combined night-time brightness and building height to create a new metric called the Eco:Actuary Urban Asset Index. Night-time brightness data was obtained from NASA’s VIIRS (Visible Infrared Imaging Radiometer Suite on the Joint Polar Satellite System) at a resolution of less than 500m, and building height from JAXA’s ALOS PALSAR (Advanced Land Observation Satellite, Phased Array type L-band Synthetic Aperture Radar) at a resolution of 30m. These data layers were normalised and combined on a 1 ha grid to create the new index, which can identify three types of urban assets (financial, commercial and residential), defined as follows: financial assets have a height of less than 20m and a brightness index of greater than 100; commercial a height of 5-20m and a brightness index of greater than 100; residential a height of less than 20m and a brightness of greater than 5. As the resolution of the Eco:Actuary Urban Asset Index is limited to 1 ha, an additional, non-remote-sensing approach to asset identification is required. This is done through crowdsourcing: we use OpenStreetMap (OSM) to characterise different types of assets. The individual assets identified using the Eco:Actuary Urban Asset Index and OSM are then aggregated into 19 asset classes: commercial, residential, industrial, cropland, pasture, airport, bridge, highway/road, hospital, hydropower plant, pipeline, power utility, railway, retail, school, telecoms, university, waste water plant and water treatment plant. The aggregation of individual assets into land use classes is appropriate for regional studies, i.e. meso-scale damage analysis (Pistrike et al. 2014).
We estimate the economic impact of flooding by considering its direct and indirect impacts on each of the 19 asset types. We do this in two ways, by estimating the Full Replacement Cost (FRC) and Loss of Function (LOF), assessed per pixel for each of the 19 asset types.
In Eco:Actuary, the full replacement cost (FRC) is defined as the cost of replacing the physical structure of the asset; no consideration is given to any contents of an asset. The lowest and highest full replacement costs are user-defined for each asset class (million USD per hectare). Within each asset category (with the exception of cropland and pasture), the dimmest asset (for methods refer to How are assets identified) is given the lowest valuation and the brightest asset the highest valuation. The valuation of intermediary assets is determined by their night-time brightness and how they scale in brightness relative to the lowest and highest value assets within the study region. To determine the full replacement cost, the valuation of the asset is multiplied by the area the asset covers (i.e. the fraction of each pixel) and then by the proportion of all individual assets within the study region represented by this asset category. FRC for cropland and pasture is calculated using a different method, because brightness is a poor valuation indicator for these assets: agricultural suitability is used instead of night-time lights, and mapped crop or pasture fraction is used instead of OSM tag frequency. In Eco:Actuary, the loss of function (LOF) is defined according to the underlying GDP for the country in which the asset is located. The LOF is therefore calculated as GDP multiplied by the user-defined LOF fraction, multiplied by the user-defined LOFP fraction, multiplied by the fraction of all assets, within the study region, represented by that asset class.
The loss of function period (LOFP) is the period of time it would take for the asset to be rebuilt, and therefore the period of time in which the asset is not functioning.
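The FRC and LOF calculations described above can be sketched as follows. The linear interpolation of value between the brightness extremes and the parameter names are illustrative assumptions, not the model's exact formulation.

```python
def full_replacement_cost(value_min, value_max, brightness, b_min, b_max,
                          area_frac, class_share):
    """FRC sketch: per-asset value interpolated between the user-defined
    minimum and maximum by night-time brightness (assumed linear), times
    the pixel fraction covered and the share of all assets in the region
    represented by this class."""
    t = (brightness - b_min) / (b_max - b_min) if b_max > b_min else 0.0
    value = value_min + t * (value_max - value_min)
    return value * area_frac * class_share

def loss_of_function(gdp, lof_fraction, lofp_fraction, class_share):
    """LOF as described: underlying GDP times the user-defined LOF
    fraction, the loss-of-function-period (LOFP) fraction, and the
    asset class share."""
    return gdp * lof_fraction * lofp_fraction * class_share
```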
Vulnerability. The vulnerability of the assets is expressed using damage functions defined in relation to the FRC and LOFP for each asset class. User-defined parameters specify the proportion of damage, relative to the FRC, experienced when the mean excess of the flood is 0 to 5 times, 5 to 10 times, or greater than 10 times the available storage in the landscape. The flood storage values used in these calculations represent the realised flood storage (i.e. taking into account the flood storage within the landscape).
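The three-band damage function can be written as a simple step function; the damage fractions shown here are placeholders for the user-defined values.

```python
def damage_fraction(flood_excess, available_storage,
                    d_low=0.1, d_mid=0.5, d_high=0.9):
    """Step damage function as described above: the damage fraction
    (of FRC) depends on the ratio of the flood's mean excess to the
    realised landscape storage. The three fractions are user-defined;
    the defaults here are placeholders."""
    if available_storage > 0:
        ratio = flood_excess / available_storage
    else:
        ratio = float("inf")   # no storage: treat as the worst band
    if ratio <= 5:
        return d_low
    if ratio <= 10:
        return d_mid
    return d_high
```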
One way to explore adaptation strategies within the Eco:Actuary interface is by altering the damage functions. By decreasing the damage that occurs at a given flood ratio, the effect of flood adaptation activities can be simulated.
There are four intervention simulation options available: climate, land use and cover, catastrophe model parameters and green infrastructure. The parameters specified for the Pareto distribution determine the ‘chance’ element in the modelling. To ensure that scenarios are not influenced by this stochastic element, and thus test only the variable of interest, the same events are used for any scenario in which the Pareto parameters do not change. If the goal of the scenario is to investigate how changes to precipitation affect flood risk, then the Pareto parameters must be adjusted. All scenario outputs are expressed as differences between the baseline and the scenario.
Create an Account
Select User Level
1) To select the area to be modelled you can either move the Google map or select the country or basin from the drop-down box (located beside ‘Step 1: Define area’ on the top right of the map).
To select via the map: move the Google map so that the pink crosshairs are within the tile you want modelled. Then select from the drop-down box whether you want the analysis run at 1 ha resolution (‘_Tiled 1 ha’) or 1 km resolution (‘_Tiled 1 km’).
To select via the drop-down box: scroll through and select from the list of countries and basins; all these runs are at 1 km resolution.
2) Provide a short name for the model run in the box ‘Run name’
3) Press the ‘Step 1: Define area’ button
1) Select ‘Step 2: Prepare data’ from the list on the left hand side of the webpage
2) If the region has never been modelled before, the following dialogue box appears. Click on ‘Build missing/upload map tiles’, and then follow the on-screen prompts.
3) Click on ‘Copy data to your workspace’.
4) Click on the symbol beside catastrophe model parameters
At this stage the parameters for the Hazard Ensemble, Asset Inventory, Damage Functions and Mitigation Infrastructure modules can be altered. Specific guidance on how to alter these parameters is provided here.
5) Scroll down to the bottom of the page and select ‘Check and Submit’ and then ‘Close Window’
Click on ‘Start simulation’. Depending on whether the model has been run for that area of the world before, the run will take anywhere from 10 to 60 minutes.
After the run is complete you can go to ‘Step 5: Results: maps’ to look at the various output maps. Alternatively, you can immediately run a simulation to explore the potential effects of changes to climate, policy and model parameters. To do this, proceed directly to Step 4.
There are four intervention simulation options available: climate change, land use and cover change, changes to catastrophe model parameters and green infrastructure. Select the scenario option of interest. Please note that ‘Change input maps’ is not available when logged in as a policy analyst or scientist.
The climate change scenarios functionality uses downscaled IPCC models or you can create your own simple scenario. A selection of IPCC models are available - see here for more details. Once you have selected the characteristics of your climate scenario, the system will build that scenario (this may take a few minutes).
You can explore land use and land cover changes by changing the percentage of forest and herbaceous cover in all or specific parts of the study region, or you can create new land cover types according to rules.
Click on the symbol next to ‘Edit catastrophe model parameters’ and then click on the symbol next to ‘Edit flood model parameters’.
Change the parameters of the model, as required, and select the ‘Check and Submit’ button. For full details of how to edit the different parameters click here.
Once the scenario has been developed the following options are available:
Before running the scenario you can compare the baseline and the scenario, if you want to see how the scenario compares to the baseline for temperature and precipitation at monthly timesteps. These data are available as maps, histograms and raw data. Otherwise, proceed directly to ‘Run scenario’.
To view the maps produced by the baseline run, click on the green and white grid icon next to the explanation of the map. Information to help interpret the maps is available here.
When a scenario is run, the resulting maps show the difference between the baseline and the scenario. To view the map click on the red arrows icon next to the explanation.
The catastrophe module is broken into four sections: an assessment of the hazard (hazard ensemble), an estimation of the monetary value of the assets (asset inventory), an assessment of the vulnerability of the assets (damage functions) and the storage capabilities of nature (green storage/green infrastructure).
You are given the option to edit the catastrophe model parameters at the end of ‘Step 2: Prepare Data’. The dialogue box below appears after you have prepared the data. Click on the symbol beside ‘Edit catastrophe model parameters’. You can now edit the various characteristics of the model.
This is where you can alter the precipitation data that drives the flood model and the number of precipitation events simulated. You can use the data available within the PSS, alter the characteristics of the precipitation events and/or upload your own data. Hover over the ‘?’ for specific information about what is required in each box.
This is where you can alter the monetary valuations associated with the 19 asset classes (listed in the table below). Assets are identified using the methods described here. If you are interested in producing a monetary estimate of risk and loss then the values should be changed from the default so that the monetary value is suitable for the region under study. For each asset class the following valuations are required (from left to right in the figure below):
This is where you can specify how much damage occurs to each asset category depending on flood height. Damage is expressed as a fraction of complete replacement of the asset. These values should be changed depending on the study site and the assets under consideration.
The values provided in this section are based on global literature and should not be changed unless the user is confident that the values do not reflect the study site. Details of how these values are derived and why they are used are available here. Change the ‘Weight in total’ if you want to preferentially weight (or ignore) certain stores.
The maps can be viewed within Eco:Actuary, in Google Earth or Google Maps, or downloaded for use in external GIS software. The data can also be viewed as a histogram and downloaded as an Excel spreadsheet. Hover over the icons below the map to determine their function.
The maps can be downloaded in a variety of formats for use in other GIS software, such as ArcGIS, Integrated Land and Water Information System (ILWIS), TerrSet and open-source software. Click on the download button below the map.
As a Scientist user you are able to download the maps as: arcascii, ilwis, geotiff and idrisi.
As a Policy user you are able to download the maps as: XXXXXXXX
The primary output of Eco:Actuary is maps (although data are also available as histograms and raw data). The variable of interest is represented spatially across the study site by colour, with differences in colour representing different values. For each map, the magnitude of the values corresponding to a particular colour will change, but the pattern, with one colour representing the lowest value and the scale shifting steadily towards the highest, remains the same.
First click on the map of interest
Then click on the Google Maps icon below the map
Move the crosshairs to the location of interest and then select ‘Query’ to determine the value at a specific location. The value will appear in the dialogue box to the left of ‘Query’ button.
First click on the map of interest
Then click on the Google Maps icon below the map
Move the crosshairs to the location of interest and select ‘Inputs’ to determine what data was used to create the value.
An icon of multiple transparent squares will appear next to the Inputs button. Click on this new icon to open a new window.
In the new window, click on ‘Show all’, or ‘Show’ for the variable of interest, in order to reveal the values of the input variables.