Categorisation of pathways by climate impact and computation of metadata indicators

Notebook sr15_2.0_categories_indicators

This notebook is based on Release 1.1 of the IAMC 1.5C Scenario Explorer and Data and refers to the published version of the IPCC Special Report on Global Warming of 1.5°C (SR15).


The notebook is run with pyam release 0.2.0.

The source code of this notebook is available on GitHub (release 2.0.1).


IPCC SR15 scenario assessment

Scenario categorization and indicators

This notebook assigns the categorization by warming outcome and computes a range of descriptive indicators for the scenario assessment of the IPCC's "Special Report on Global Warming of 1.5°C". It generates a sr15_metadata_indicators.xlsx spreadsheet, which is used by the other notebooks in this assessment for categorization and for extracting descriptive indicators.

Scenario ensemble data

The scenario data used in this analysis can be accessed and downloaded at https://data.ene.iiasa.ac.at/iamc-1.5c-explorer.

Bibliographic details of the scenario ensemble and all studies that contributed scenarios to the ensemble are included in this repository in Endnote (enl), Reference Manager (ris), and BibTeX (bib) formats.

This notebook is licensed under the Apache License, Version 2.0.

Please refer to the README for the recommended citation of the scenario ensemble and the notebooks in this repository.


Import dependencies and define general notebook settings

In [1]:
import math
import io
import yaml
import re
import pandas as pd
import numpy as np
from IPython.display import display

Introduction and tutorial for the pyam package

This notebook (and all other analysis notebooks in this repository) uses the pyam package, an open-source Python package for IAM scenario analysis and visualization (https://software.ene.iiasa.ac.at/pyam/).

For an introduction to the notation and features of the pyam package, please refer to this tutorial.
It will take you through the basic functions and options used here and provide further introduction and guidelines.
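The pyam package works on timeseries data in the IAMC format. To illustrate the notation used throughout this notebook, here is a minimal sketch of that layout using plain pandas (model, scenario, and value entries are purely illustrative):

```python
import pandas as pd

# The IAMC timeseries format: one row per model/scenario/region/variable/unit
# combination, one column per year. A pyam.IamDataFrame wraps data of exactly
# this shape; filtering (e.g. by scenario) reduces to the matching rows.
data = pd.DataFrame(
    [
        ["MODEL_A", "SCEN_1", "World", "Emissions|CO2", "Mt CO2/yr", 39000, 20000],
        ["MODEL_A", "SCEN_2", "World", "Emissions|CO2", "Mt CO2/yr", 39000, 45000],
    ],
    columns=["model", "scenario", "region", "variable", "unit", 2030, 2100],
)

# Analogue of `df.filter(scenario='SCEN_1')` in pyam
subset = data[data["scenario"] == "SCEN_1"]
print(subset[2100].iloc[0])
```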

In [2]:
import pyam
logger = pyam.logger()

Import Matplotlib and set figure layout defaults in line with SR1.5 guidelines

In [3]:
import matplotlib.pyplot as plt
plt.style.use('style_sr15.mplstyle')

Import scenario snapshot and define auxiliary dictionaries

This notebook only assigns indicators based on global timeseries data.

The dictionary meta_tables is used to collect definitions of categories and secondary scenario classification throughout this script. These definitions are exported to the metadata/categorization Excel workbook at the end of the script for completeness. The dictionary meta_docs collects definitions used for the documentation tags in the online scenario explorer.

The dictionary specs collects lists and the run control specifications to be exported to JSON and used by other notebooks for the SR1.5 scenario analysis.

The plotting_args dictionary assigns the default plotting arguments used in this notebook.
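Because specs only collects plain lists and dictionaries, it can be serialized directly to JSON for the other notebooks. A minimal sketch of that export step (the filename here is hypothetical, not the one used by this repository):

```python
import json

# `specs` collects plain lists/dicts, so a round-trip through JSON preserves it
specs = {
    "cats": ["Below 1.5C", "1.5C low overshoot"],
    "plotting_args": {"color": "category", "linewidth": 0.2},
}
with open("specs_example.json", "w") as f:
    json.dump(specs, f)

with open("specs_example.json") as f:
    restored = json.load(f)
print(restored["cats"][0])
```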

In [4]:
sr1p5 = pyam.IamDataFrame(data='../data/iamc15_scenario_data_world_r1.1.xlsx')
INFO:root:Reading `../data/iamc15_scenario_data_world_r1.1.xlsx`
In [5]:
meta_tables = {}
meta_docs = {}
In [6]:
specs = {}
In [7]:
plotting_args = {'color': 'category', 'linewidth': 0.2}
specs['plotting_args'] = plotting_args

Verify completeness of scenario submissions for key variables

Verify that every scenario except for Shell Sky and the historical reference scenarios reports CO2 Emissions in 2030.

In [8]:
sr1p5.require_variable(variable='Emissions|CO2', year=2030, exclude_on_fail=False)
INFO:root:3 scenarios do not include required variable `Emissions|CO2`
Out[8]:
model scenario
0 Reference CEDS
1 Reference IEA Energy Statistics (r2017)
2 Shell World Energy Model 2018 Sky

Check MAGICC postprocessing prior to categorization

Assign scenarios that could not be postprocessed by probabilistic MAGICC to respective categories:

  • data not available for full century
  • insufficient reporting of emission species
  • reference scenario
In [9]:
sr1p5.set_meta(name='category', meta= 'uncategorized')
In [10]:
reference = sr1p5.filter(model='Reference')
pd.DataFrame(index=reference.meta.index)
Out[10]:
model scenario
Reference CEDS
IEA Energy Statistics (r2017)
In [11]:
sr1p5.set_meta(meta='reference', name='category', index=reference)
In [12]:
no_climate_assessment = (
    sr1p5.filter(category='uncategorized').meta.index
    .difference(sr1p5.filter(year=2100, variable='Emissions|CO2').meta.index)
)
pd.DataFrame(index=no_climate_assessment)
Out[12]:
model scenario
GENeSYS-MOD 1.0 1.0
IEA Energy Technology Perspective Model 2017 B2DS
Shell World Energy Model 2018 Sky
In [13]:
sr1p5.set_meta(meta='no-climate-assessment', name='category', index=no_climate_assessment)

Categorization of scenarios

This section applies the categorization of scenarios as defined in Chapter 2 of the Special Report for the unique assignment of scenarios.

The category specification as agreed upon at LAM 3 in Malmö is repeated here for easier reference.

The term $P_{x°C}$ refers to the probability of exceeding warming of $x°C$ throughout the century in at least one year and $P_{x°C}(y)$ refers to the probability of exceedance in a specific year $y$.

| Categories | Subcategories | Probability to exceed warming threshold | Acronym | Color |
|---|---|---|---|---|
| Below 1.5°C | Below 1.5°C (I) | $P_{1.5°C} \leq 0.34$ | Below 1.5C (I) | xkcd:baby blue |
| | Below 1.5°C (II) | $0.34 < P_{1.5°C} \leq 0.50$ | Below 1.5C (II) | |
| 1.5°C return with low OS | Lower 1.5°C return with low OS | $0.50 < P_{1.5°C} \leq 0.67$ and $P_{1.5°C}(2100) \leq 0.34$ | (Lower) 1.5C low OS | xkcd:bluish |
| | Higher 1.5°C return with low OS | $0.50 < P_{1.5°C} \leq 0.67$ and $0.34 < P_{1.5°C}(2100) \leq 0.50$ | (Higher) 1.5C low OS | |
| 1.5°C return with high OS | Lower 1.5°C return with high OS | $0.67 < P_{1.5°C}$ and $P_{1.5°C}(2100) \leq 0.34$ | (Lower) 1.5C high OS | xkcd:darkish blue |
| | Higher 1.5°C return with high OS | $0.67 < P_{1.5°C}$ and $0.34 < P_{1.5°C}(2100) \leq 0.50$ | (Higher) 1.5C high OS | |
| Lower 2.0°C | | $P_{2.0°C} \leq 0.34$ (excluding above) | Lower 2C | xkcd:orange |
| Higher 2.0°C | | $0.34 < P_{2.0°C} \leq 0.50$ (excluding above) | Higher 2C | xkcd:red |
| Above 2.0°C | | $P_{2.0°C} > 0.50$ for at least 1 year | Above 2C | darkgrey |

Category definitions to Excel

The following dictionary repeats the category definitions from the table above and saves them as a pandas.DataFrame to a dictionary meta_tables. Throughout the notebook, this dictionary is used to collect definitions of categories and secondary scenario classification. These definitions are exported to the metadata/categorization Excel workbook at the end of the script for easy reference.

In [14]:
dct = {'Categories of scenarios': 
           ['Below 1.5°C', 
            '', 
            '1.5°C return with low overshoot',
            '',
            '1.5°C return with high overshoot',
            '',
            'Lower 2.0°C',
            'Higher 2.0°C',
            'Above 2.0°C'],
        'Subcategories': 
           ['Below 1.5°C (I)', 
            'Below 1.5°C (II)', 
            'Lower 1.5°C return with low overshoot',
            'Higher 1.5°C return with low overshoot',
            'Lower 1.5°C return with high overshoot',
            'Higher 1.5°C return with high overshoot',
            '',
            '',
            ''],
       'Criteria for assignment to category':
           ['P1.5°C ≤ 0.34',
            '0.34 < P1.5°C ≤ 0.50',
            '0.50 < P1.5°C ≤ 0.67 and P1.5°C(2100) ≤ 0.34',
            '0.50 < P1.5°C ≤ 0.67 and 0.34 < P1.5°C(2100) ≤ 0.50',
            '0.67 < P1.5°C and P1.5°C(2100) ≤ 0.34',
            '0.67 < P1.5°C and 0.34 < P1.5°C(2100) ≤ 0.50',
            'P2.0°C ≤ 0.34 (excluding above)',
            '0.34 < P2.0°C ≤ 0.50 (excluding above)',
            'P2.0°C > 0.50 during at least 1 year'
           ],
       'Acronym':
           ['Below 1.5C (I)',
            'Below 1.5C (II)',
            'Lower 1.5C low overshoot',
            'Higher 1.5C low overshoot',
            'Lower 1.5C high overshoot',
            'Higher 1.5C high overshoot',
            'Lower 2C',
            'Higher 2C',
            'Above 2C'],
        'Color':
           ['xkcd:baby blue',
            '',
            'xkcd:bluish',
            '',
            'xkcd:darkish blue',
            '',
            'xkcd:orange',
            'xkcd:red',
            'darkgrey']
      }
cols = ['Categories of scenarios', 'Subcategories', 'Criteria for assignment to category', 'Acronym', 'Color']
categories_doc = pd.DataFrame(dct)[cols]
meta_tables['categories'] = categories_doc
meta_docs['category'] = 'Categorization of scenarios by global warming impact'
meta_docs['subcategory'] = 'Sub-categorization of scenarios by global warming impact'
In [15]:
other_cats = ['no-climate-assessment', 'reference']

cats = ['Below 1.5C', '1.5C low overshoot', '1.5C high overshoot', 'Lower 2C', 'Higher 2C', 'Above 2C']
all_cats = cats + other_cats

subcats = dct['Acronym']
all_subcats = subcats + other_cats
In [16]:
specs['cats'] = cats
specs['all_cats'] = all_cats

specs['subcats'] = subcats
specs['all_subcats'] = all_subcats

Subcategory assignment

We first assign the subcategories, then aggregate those assignments to the main categories. The categories assigned above to indicate reasons for non-processing by MAGICC are copied over to the subcategories.

Keep in mind that setting a category will re-assign scenarios (in case they have already been assigned). So if you go back and forth in this notebook (i.e., do not execute the cells in order), make sure to reset the categorization.

In [17]:
def warming_exccedance_prob(x):
    return 'AR5 climate diagnostics|Temperature|Exceedance Probability|{} °C|MAGICC6'.format(x)

expected_warming = 'AR5 climate diagnostics|Temperature|Global Mean|MAGICC6|Expected value'
median_warming = 'AR5 climate diagnostics|Temperature|Global Mean|MAGICC6|MED'
In [18]:
sr1p5.set_meta(meta=sr1p5['category'], name='subcategory')
In [19]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='Below 1.5C (I)', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.34}},
                color='xkcd:baby blue')
INFO:root:No scenarios satisfy the criteria
In [20]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='Below 1.5C (II)', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.50}},
                color='xkcd:baby blue')
INFO:root:9 scenarios categorized as `subcategory: Below 1.5C (II)`

Categorizing by a variable using multiple filters (here: less than 66% probability of exceeding 1.5°C at any point during the century and less than 34% probability of exceeding that threshold in 2100) requires performing the assignment in three steps: first, categorize to an intermediate low OS category; second, assign the scenarios satisfying the 2100 criterion to the category in question; third, reset all scenarios still categorized as intermediate after the second step back to uncategorized.
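The three-step logic can be sketched with plain pandas on the scenario metadata (the notebook does this with pyam.categorize; column names and probability values here are illustrative, with p_peak the century-maximum exceedance probability and p_2100 the value in 2100):

```python
import pandas as pd

# Toy metadata: three scenarios below the peak threshold, differing in 2100
meta = pd.DataFrame(
    {"p_peak": [0.60, 0.60, 0.60], "p_2100": [0.30, 0.45, 0.60]},
    index=["s1", "s2", "s3"],
)
meta["subcategory"] = "uncategorized"

# Step 1: intermediate category for all scenarios below the peak threshold
meta.loc[meta["p_peak"] <= 0.67, "subcategory"] = "low overshoot"

# Step 2: refine the intermediate category by the end-of-century criterion
lo = meta["subcategory"] == "low overshoot"
meta.loc[lo & (meta["p_2100"] <= 0.34), "subcategory"] = "Lower 1.5C low overshoot"
lo = meta["subcategory"] == "low overshoot"
meta.loc[lo & (meta["p_2100"] <= 0.50), "subcategory"] = "Higher 1.5C low overshoot"

# Step 3: reset anything still in the intermediate category
meta.loc[meta["subcategory"] == "low overshoot", "subcategory"] = "uncategorized"
print(meta["subcategory"].tolist())
```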

In [21]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='low overshoot', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.67}})
INFO:root:55 scenarios categorized as `subcategory: low overshoot`
In [22]:
pyam.categorize(sr1p5, exclude=False, subcategory='low overshoot',
                value='Lower 1.5C low overshoot', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.34, 'year': 2100}},
                color='xkcd:bluish')
INFO:root:34 scenarios categorized as `subcategory: Lower 1.5C low overshoot`
In [23]:
pyam.categorize(sr1p5, exclude=False, subcategory='low overshoot',
                value='Higher 1.5C low overshoot', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.50, 'year': 2100}},
                color='xkcd:bluish')
INFO:root:10 scenarios categorized as `subcategory: Higher 1.5C low overshoot`

Display scenarios that satisfy the low overshoot criterion but are not assigned to Lower 1.5C low overshoot or Higher 1.5C low overshoot. Then, reset them to uncategorized.

In [24]:
sr1p5.filter(subcategory='low overshoot').meta
Out[24]:
exclude category subcategory
model scenario
AIM/CGE 2.0 ADVANCE_2020_WB2C False uncategorized low overshoot
SFCM_SSP2_EEEI_1p5Degree False uncategorized low overshoot
SFCM_SSP2_LifeStyle_1p5Degree False uncategorized low overshoot
SFCM_SSP2_Ref_1p5Degree False uncategorized low overshoot
SFCM_SSP2_ST_CCS_1p5Degree False uncategorized low overshoot
SFCM_SSP2_ST_bio_1p5Degree False uncategorized low overshoot
SFCM_SSP2_ST_nuclear_1p5Degree False uncategorized low overshoot
SFCM_SSP2_ST_solar_1p5Degree False uncategorized low overshoot
SFCM_SSP2_ST_wind_1p5Degree False uncategorized low overshoot
SFCM_SSP2_SupTech_1p5Degree False uncategorized low overshoot
SFCM_SSP2_combined_1p5Degree False uncategorized low overshoot
In [25]:
sr1p5.set_meta(meta='uncategorized', name='subcategory', index=sr1p5.filter(subcategory='low overshoot'))

Determine all scenarios with a probability to exceed 1.5°C greater than 66% in any year throughout the century. The function categorize() cannot be used for this selection, because it would check the criteria either for all years or for one particular year.
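The selection by any-year maximum reduces to a row-wise maximum over the timeseries; a minimal pandas sketch (scenario names and values are illustrative):

```python
import pandas as pd

# Toy exceedance-probability timeseries: one row per scenario, one column per year
ts = pd.DataFrame(
    {2050: [0.40, 0.70], 2100: [0.30, 0.20]},
    index=["scen_a", "scen_b"],
)

# A scenario qualifies if its probability tops 0.66 in *any* year,
# i.e. if the row-wise maximum exceeds the threshold
high_os = ts[ts.max(axis=1) > 0.66].index
print(list(high_os))
```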

In [26]:
df = sr1p5.filter(exclude=False, subcategory='uncategorized', variable=warming_exccedance_prob(1.5)).timeseries()
sr1p5.set_meta(meta='high overshoot', name='subcategory', 
               index=df[df.apply(lambda x: max(x), axis=1) > 0.66].index)
In [27]:
pyam.categorize(sr1p5, exclude=False, subcategory='high overshoot',
                value='Lower 1.5C high overshoot', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.34, 'year': 2100}},
                color='xkcd:darkish blue')
INFO:root:19 scenarios categorized as `subcategory: Lower 1.5C high overshoot`
In [28]:
pyam.categorize(sr1p5, exclude=False, subcategory='high overshoot',
                value='Higher 1.5C high overshoot', name='subcategory',
                criteria={warming_exccedance_prob(1.5): {'up': 0.50, 'year': 2100}},
                color='xkcd:darkish blue')
INFO:root:18 scenarios categorized as `subcategory: Higher 1.5C high overshoot`

Reset scenarios that satisfy the high overshoot criterion but are not assigned to Lower 1.5C high overshoot or Higher 1.5C high overshoot.

In [29]:
sr1p5.set_meta(meta='uncategorized', name='subcategory', index=sr1p5.filter(subcategory='high overshoot'))
In [30]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='Lower 2C', name='subcategory',
                criteria={warming_exccedance_prob(2.0): {'up': 0.34}},
                color='xkcd:orange')
INFO:root:74 scenarios categorized as `subcategory: Lower 2C`
In [31]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='Higher 2C', name='subcategory',
                criteria={warming_exccedance_prob(2.0): {'up': 0.50}},
                color='xkcd:red')
INFO:root:58 scenarios categorized as `subcategory: Higher 2C`
In [32]:
pyam.categorize(sr1p5, exclude=False, subcategory='uncategorized',
                value='Above 2C', name='subcategory',
                criteria={warming_exccedance_prob(2.0): {'up': 1.0}},
                color='darkgrey')
INFO:root:189 scenarios categorized as `subcategory: Above 2C`

Aggregation of subcategories to categories

In [33]:
rc = pyam.run_control()
def assign_rc_color_from_sub(cat, sub):
    rc.update({'color': {'category': {cat: rc['color']['subcategory'][sub]}}})
In [34]:
sr1p5.set_meta(meta='Below 1.5C', name='category',
               index=sr1p5.filter(subcategory=['Below 1.5C (I)', 'Below 1.5C (II)']).meta.index)
assign_rc_color_from_sub('Below 1.5C', 'Below 1.5C (II)')
In [35]:
sr1p5.set_meta(meta='1.5C low overshoot', name='category',
               index=sr1p5.filter(subcategory=['Lower 1.5C low overshoot', 'Higher 1.5C low overshoot']))
assign_rc_color_from_sub('1.5C low overshoot', 'Lower 1.5C low overshoot')
In [36]:
sr1p5.set_meta(meta='1.5C high overshoot', name='category',
               index=sr1p5.filter(subcategory=['Lower 1.5C high overshoot', 'Higher 1.5C high overshoot']))
assign_rc_color_from_sub('1.5C high overshoot', 'Lower 1.5C high overshoot')
In [37]:
cats_non15 = ['Lower 2C', 'Higher 2C', 'Above 2C']
df_2c = sr1p5.filter(subcategory=cats_non15)
sr1p5.set_meta(meta=df_2c['subcategory'], name='category')

for c in cats_non15:
    assign_rc_color_from_sub(c, c)

Additional assessment of categorization

Check whether there are any scenarios that return to 1.5°C by the end of the century and exceed the 2°C threshold with a likelihood higher than 34% or 50% (i.e., the Lower 2C or the Higher 2C categories, respectively). Scenarios categorized as 1.5C but with a higher-than-50% probability of exceeding 2°C at some point in the century may need to be considered separately in subsequent assessments.

In [38]:
cats_15 = ['Below 1.5C', '1.5C low overshoot', '1.5C high overshoot']
specs['cats_15'] = cats_15
In [39]:
cats_15_no_lo = ['Below 1.5C', '1.5C low overshoot']
specs['cats_15_no_lo'] = cats_15_no_lo
In [40]:
cats_2 = ['Lower 2C', 'Higher 2C']
specs['cats_2'] = cats_2
In [41]:
df = sr1p5.filter(exclude=False, category=cats_15, variable=warming_exccedance_prob(2.0)).timeseries()
ex_prob_2 = df.apply(lambda x: max(x))
In [42]:
if max(ex_prob_2) > 0.34:
    logger.warning('The following 1.5C-scenarios are not `Lower 2C` scenarios:')
    display(df[df.apply(lambda x: max(x), axis=1) > 0.34])
WARNING:root:The following 1.5C-scenarios are not `Lower 2C` scenarios:
2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 ... 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100
model scenario region variable unit
REMIND 1.7 ADVANCE_2030_Price1.5C World AR5 climate diagnostics|Temperature|Exceedance Probability|2.0 °C|MAGICC6 - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.168333 0.166667 0.16 0.158333 0.156667 0.15 0.145 0.143333 0.138333 0.136667

1 rows × 101 columns

In [43]:
if max(ex_prob_2) > 0.50:
    logger.warning('The following 1.5C-scenarios are not `2C` scenarios:')
    display(df[df.apply(lambda x: max(x), axis=1) > 0.50])

Counting and evaluation of scenario assignment categories

Count the number of scenarios assigned to each category.

This table is the basis for Tables 2.1 and 2.A.11 in the SR1.5.

In [44]:
lst = sr1p5.meta.groupby(['category', 'subcategory']).count()
(
    lst
    .reindex(all_cats, axis='index', level=0)
    .reindex(all_subcats, axis='index', level=1)
    .rename(columns={'exclude': 'count'})
)
Out[44]:
count
category subcategory
Below 1.5C Below 1.5C (II) 9
1.5C low overshoot Lower 1.5C low overshoot 34
Higher 1.5C low overshoot 10
1.5C high overshoot Lower 1.5C high overshoot 19
Higher 1.5C high overshoot 18
Lower 2C Lower 2C 74
Higher 2C Higher 2C 58
Above 2C Above 2C 189
no-climate-assessment no-climate-assessment 3
reference reference 2

Check whether any scenarios are still marked as uncategorized. This may be due to missing MAGICC postprocessing.

In [45]:
if any(sr1p5['category'] == 'uncategorized'):
    logger.warning('There are scenarios that are not yet categorized!')
    display(sr1p5.filter(category='uncategorized').meta)

Validation of Kyoto GHG emissions range (SAR-GWP100)

Check for all scenarios whether the aggregate Kyoto gases in 2010 are outside the range assessed by the Second Assessment Report (SAR) using the Global Warming Potential over 100 years (GWP100). These scenarios are excluded from some figures and tables in the assessment.

In [46]:
invalid_sar_gwp = sr1p5.validate(criteria={'Emissions|Kyoto Gases (SAR-GWP100)':
                                 {'lo': 44500, 'up': 53500, 'year':2010}}, exclude_on_fail=False)
INFO:root:30 of 3208986 data points to not satisfy the criteria
In [47]:
name='Kyoto-GHG|2010 (SAR)'
sr1p5.set_meta(meta='in range', name=name)
sr1p5.set_meta(meta='exclude', name=name, index=invalid_sar_gwp)

meta_docs[name] = 'Indicator whether 2010 Kyoto-GHG reported by the scenario (as assessed by IPCC SAR) are in the valid range'

Assignment of baseline scenarios

This section assigns a baseline reference for scenarios from selected model intercomparison projects and individual submissions.

In [48]:
def set_baseline_reference(x):
    m, s = (x.name[0], x.name[1])
    
    b = None
    if s.startswith('SSP') and not 'Baseline' in s:
        b = '{}Baseline'.format(s[0:5])
    if s.startswith('CD-LINKS') and not 'NoPolicy' in s:
        b = '{}NoPolicy'.format(s[0:9])
    if s.startswith('EMF33') and not 'Baseline' in s:
        b = '{}Baseline'.format(s[0:6])
    if s.startswith('ADVANCE') and not 'NoPolicy' in s:
        b = '{}NoPolicy'.format(s[0:8])
    if s.startswith('GEA') and not 'base' in s:
        b = '{}base'.format(s[0:8])
    if s.startswith('TERL') and not 'Baseline' in s:
        b = s.replace('15D', 'Baseline').replace('2D', 'Baseline')
    if s.startswith('SFCM') and not 'Baseline' in s:
        b = s.replace('1p5Degree', 'Baseline').replace('2Degree', 'Baseline')
    if s.startswith('CEMICS') and not s == 'CEMICS-Ref':
        b = 'CEMICS-Ref'
    if s.startswith('SMP') and not 'REF' in s:
        if s.endswith('Def') or s.endswith('regul'):
            b = 'SMP_REF_Def'
        else:
            b = 'SMP_REF_Sust'
    if s.startswith('DAC'):
        b = 'BAU'
    
    # check that baseline scenario exists for specific model `m`
    if (m, b) in sr1p5.meta.index:
        return b
    # else (or if scenario name not in list above), return None
    return None
In [49]:
name = 'baseline'
sr1p5.set_meta(sr1p5.meta.apply(set_baseline_reference, raw=True, axis=1), name)
meta_docs[name] = 'Name of the respective baseline (or reference/no-policy) scenario'

Assignment of marker scenarios

The following scenarios are used as markers throughout the analysis and visualization, cf. Figure 2.7 (SOD):

| Marker | Model & scenario name | Reference | Symbol |
|---|---|---|---|
| S1 | AIM/CGE 2.0 / SSP1-19 | Fujimori et al., 2017 | white square |
| S2 | MESSAGE-GLOBIOM 1.0 / SSP2-19 | Fricko et al., 2017 | yellow square |
| S5 | REMIND-MAgPIE 1.5 / SSP5-19 | Kriegler et al., 2017 | black square |
| LED | MESSAGEix-GLOBIOM 1.0 / LowEnergyDemand | Grubler et al., 2018 | white circle |
In [50]:
dct = {'Marker': 
           ['S1', 
            'S2', 
            'S5',
            'LED'],
        'Model and scenario name': 
           ['AIM/CGE 2.0 / SSP1-19', 
            'MESSAGE-GLOBIOM 1.0 / SSP2-19', 
            'REMIND-MAgPIE 1.5 / SSP5-19',
            'MESSAGEix-GLOBIOM 1.0 / LowEnergyDemand'],
       'Reference':
           ['Fujimori et al., 2017',
            'Fricko et al., 2017',
            'Kriegler et al., 2017',
            'Grubler et al., 2018'],
       'Symbol':
           ['white square',
            'yellow square',
            'black square',
            'white circle']
}
cols = ['Marker', 'Model and scenario name', 'Reference', 'Symbol']
markers_doc = pd.DataFrame(dct)[cols]
meta_tables['marker scenarios'] = markers_doc
meta_docs['marker'] = 'Illustrative pathways (marker scenarios)'
In [51]:
specs['marker'] = ['S1', 'S2', 'S5', 'LED']
In [52]:
sr1p5.set_meta('', 'marker')
rc.update({'marker': {'marker': {'': None}}})
In [53]:
m = 'S1'
sr1p5.set_meta(m, 'marker',
               sr1p5.filter(model='AIM/CGE 2.0', scenario='SSP1-19'))
rc.update({'marker': {'marker': {m: 's'}},
           'c': {'marker': {m: 'white'}},
           'edgecolors': {'marker': {m: 'black'}}}
         )
In [54]:
m = 'S2'
sr1p5.set_meta(m, 'marker',
               sr1p5.filter(model='MESSAGE-GLOBIOM 1.0', scenario='SSP2-19'))
rc.update({'marker': {'marker': {m: 's'}},
           'c': {'marker': {m: 'yellow'}},
           'edgecolors': {'marker': {m: 'black'}}})
In [55]:
m = 'S5'
sr1p5.set_meta(m, 'marker',
               sr1p5.filter(model='REMIND-MAgPIE 1.5', scenario='SSP5-19'))
rc.update({'marker': {'marker': {m: 's'}},
           'c': {'marker': {m: 'black'}},
           'edgecolors': {'marker': {m: 'black'}}})
In [56]:
m = 'LED'
sr1p5.set_meta(m, 'marker',
               sr1p5.filter(model='MESSAGEix-GLOBIOM 1.0', scenario='LowEnergyDemand'))
rc.update({'marker': {'marker': {m: 'o'}},
           'c': {'marker': {m: 'white'}},
           'edgecolors': {'marker': {m: 'black'}}})

Visual analysis of emission and temperature pathways by category

First, we plot all carbon dioxide emissions trajectories colored by category, followed by the CO2 emissions from the AFOLU sector. Then, we show the warming trajectories by category.

In [57]:
horizon = list(range(2000, 2020, 5)) + list(range(2020, 2101, 10))
df = sr1p5.filter(year=horizon)
In [58]:
df.filter(exclude=False, variable='Emissions|CO2').line_plot(**plotting_args, marker='marker')
Out[58]:
<matplotlib.axes._subplots.AxesSubplot at 0x129f20f0630>
In [59]:
df.filter(exclude=False, variable='Emissions|CO2|AFOLU').line_plot(**plotting_args, marker='marker')
Out[59]:
<matplotlib.axes._subplots.AxesSubplot at 0x1298f626208>
In [60]:
df.filter(exclude=False, variable=expected_warming).line_plot(**plotting_args, marker='marker')
Out[60]:
<matplotlib.axes._subplots.AxesSubplot at 0x129f57696a0>

Import scientific references and publication status

The following block reads in an Excel table with the details of the scientific references for each scenario.

The main cell of this section loops over all entries in this Excel table, filters for the relevant scenarios, and assigns a short reference and the publication status. If multiple references are relevant for a scenario, the references are compiled, and the 'highest' publication status is written to the metadata.
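The compilation of multiple references into a single metadata field can be sketched in plain Python, mirroring the append-with-semicolon logic of the loop below (the reference strings here are illustrative):

```python
# Start from the 'undefined' placeholder set above; each matching entry in the
# references table either replaces the placeholder or is appended with '; '
def append_reference(current, new):
    return new if current == 'undefined' else '{}; {}'.format(current, new)

ref = 'undefined'
for r in ['Fricko et al., 2017', 'Rogelj et al., 2018']:
    ref = append_reference(ref, r)
print(ref)
```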

In [61]:
ref_cols = ['project', 'model', 'scenario', 'reference', 'doi', 'bibliography']
In [62]:
sr1p5.set_meta('undefined', 'reference')
sr1p5.set_meta('unknown', 'project')
In [63]:
refs = pd.read_csv('../bibliography/scenario_references.csv', encoding='iso-8859-1')

_refs = {'index': []}
for i in ref_cols:
    _refs.update({i.title(): []})
In [64]:
for cols in refs.iterrows():
    c = cols[1]
    filters = {}
    
    # check that filters are defined
    if c.model is np.NaN and c.scenario is np.NaN:
        logger.warn('project `{}` on line {} has no filters assigned'
                    .format(c.project, cols[0]))
        continue

    # filter for scenarios to apply the project and publication tags
    filters = {}
    for i in ['model', 'scenario']:
        if c[i] is not np.NaN:
            if ";" in c[i]:
                filters.update({i: re.sub(";", "", c[i]).split()})
            else:
                filters.update({i: c[i]})
    
    df = sr1p5.filter(**filters)
    if df.scenarios().empty:
        logger.warn('no scenarios satisfy filters for project `{}` on line {} ({})'
                    .format(c.project, cols[0], filters))
        continue

    # write to meta-tables dictionary
    _refs['index'].append(cols[0])
    for i in ref_cols:
        _refs[i.title()].append(c[i])

    sr1p5.meta.loc[df.meta.index, 'project'] = c['project']

    for i in df.meta.index:
        r = c['reference']
        sr1p5.meta.loc[i, 'reference'] = r if sr1p5.meta.loc[i, 'reference'] == 'undefined' \
            else '{}; {}'.format(sr1p5.meta.loc[i, 'reference'], r)
In [65]:
cols = [i.title() for i in ref_cols]
meta_tables['references'] = pd.DataFrame(_refs)[cols]
meta_docs['reference'] = 'Scientific references'
meta_docs['project'] = 'Project identifier contributing the scenario'

Peak warming and indicator of median global warming peak-and-decline

Determine peak warming (relative to pre-industrial temperature) and end-of-century warming and add this to the scenario metadata. Then, compute the "peak-and-decline" indicator as the difference between peak warming and warming in 2100.

In [66]:
def peak_warming(x, return_year=False):
    peak = x[x == x.max()]
    if return_year:
        return peak.index[0]
    else:
        return float(max(peak))
In [67]:
median_temperature = sr1p5.filter(variable=median_warming).timeseries()
In [68]:
name = 'median warming at peak (MAGICC6)'
sr1p5.set_meta(median_temperature.apply(peak_warming, raw=False, axis=1), name)
meta_docs[name] = 'median warming above pre-industrial temperature at peak (°C) as computed by MAGICC6'
In [69]:
name = 'year of peak warming (MAGICC6)'
sr1p5.set_meta(median_temperature.apply(peak_warming, return_year=True, raw=False, axis=1), name)
meta_docs[name] = 'year of peak median warming as computed by MAGICC6'
In [70]:
name = 'median warming in 2100 (MAGICC6)'
sr1p5.set_meta(median_temperature[2100], name)
meta_docs[name] = 'median warming above pre-industrial temperature in 2100 (°C) as computed by MAGICC6'
In [71]:
name = 'median warming peak-and-decline (MAGICC6)'
peak_decline = sr1p5['median warming at peak (MAGICC6)'] - sr1p5['median warming in 2100 (MAGICC6)']
sr1p5.set_meta(peak_decline, name)
meta_docs[name] = 'median warming peak-and-decline from peak to temperature in 2100 (°C) as computed by MAGICC6'

Add median temperature at peak from 'FAIR' model diagnostics

In [72]:
median_temperature_fair = sr1p5.filter(variable='AR5 climate diagnostics|Temperature|Global Mean|FAIR|MED')\
    .timeseries()
In [73]:
name = 'median warming at peak (FAIR)'
sr1p5.set_meta(median_temperature_fair.apply(peak_warming, raw=False, axis=1), name)
meta_docs[name] = 'median warming above pre-industrial temperature at peak (°C) as computed by FAIR'
In [74]:
name = 'year of peak warming (FAIR)'
sr1p5.set_meta(median_temperature_fair.apply(peak_warming, return_year=True, raw=False, axis=1), name)
meta_docs[name] = 'year of peak median warming as computed by FAIR'
In [75]:
fig, ax = plt.subplots()
sr1p5.filter(category=cats).scatter(ax=ax,
                                    x='median warming at peak (MAGICC6)',
                                    y='median warming at peak (FAIR)', color='category')
ax.plot(ax.get_xlim(), ax.get_xlim())
Out[75]:
[<matplotlib.lines.Line2D at 0x129f00b1668>]
In [76]:
import matplotlib
matplotlib.__version__
Out[76]:
'3.0.3'
In [77]:
fig, ax = plt.subplots()
sr1p5.scatter(ax=ax, x='year of peak warming (MAGICC6)', y='year of peak warming (FAIR)', color='category')
ax.plot(ax.get_xlim(), ax.get_xlim())
Out[77]:
[<matplotlib.lines.Line2D at 0x1299554a128>]

Computation of threshold exceedance year and 'overshoot' year count

Determine the year when a scenario exceeds a specific temperature threshold, and for how many years the threshold is exceeded.

This section uses the function exceedance() to determine the exceedance and return years. The function overshoot_severity() computes the cumulative exceedance of the 1.5°C threshold (i.e., the sum of temperature-years above the threshold).

In [78]:
def exceedance(temperature, years, threshold):
    exceedance_yr = None
    return_yr = None
    overshoot_yr_count = None
    prev_temp = 0
    prev_yr = None

    for yr, curr_temp in zip(years, temperature):
        if np.isnan(curr_temp):
            continue
        
        if exceedance_yr is None and curr_temp > threshold:
            x = (curr_temp - prev_temp) / (yr - prev_yr) # temperature change per year
            exceedance_yr = prev_yr + int((threshold - prev_temp) / x) + 1 # add one because int() rounds down
        if exceedance_yr is not None and return_yr is None and curr_temp < threshold:
            x = (prev_temp - curr_temp) / (yr - prev_yr) # temperature change per year
            return_yr = prev_yr + int((prev_temp - threshold) / x) + 1
        prev_temp = curr_temp
        prev_yr = yr

    if return_yr is not None and exceedance_yr is not None:
        overshoot_yr_count = int(return_yr - exceedance_yr)
    if exceedance_yr is not None:
        exceedance_yr = int(exceedance_yr)
    if return_yr is not None:
        return_yr = int(return_yr)

    return [exceedance_yr, return_yr, overshoot_yr_count]
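The linear-interpolation step inside exceedance() can be worked through on a small example (the temperature values here are illustrative and chosen so the arithmetic is exact):

```python
# Median warming rises from 1.0°C in 2040 to 2.0°C in 2048, i.e. 0.125 °C/yr;
# the 1.5°C threshold is crossed at 2040 + (1.5 - 1.0)/0.125 = 2044.
# The function reports the first year *after* the crossing: int() rounds the
# interpolated offset down, then 1 is added.
prev_yr, prev_temp = 2040, 1.0
yr, curr_temp = 2048, 2.0
threshold = 1.5

x = (curr_temp - prev_temp) / (yr - prev_yr)          # temperature change per year
exceedance_yr = prev_yr + int((threshold - prev_temp) / x) + 1
print(exceedance_yr)
```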
In [79]:
exceedance_meta = median_temperature.apply(exceedance, axis=1, raw=True,
                                           years=median_temperature.columns, threshold=1.5)
In [80]:
name = 'exceedance year|1.5°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[0]), name)
meta_docs[name] = 'year in which the 1.5°C median warming threshold is exceeded'

name = 'return year|1.5°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[1]), name)
meta_docs[name] = 'year in which median warming returns below the 1.5°C threshold'

name = 'overshoot years|1.5°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[2]), name)
meta_docs[name] = 'number of years where 1.5°C median warming threshold is exceeded'
In [81]:
def overshoot_severity(x, meta):
    exceedance_yr = meta.loc[x.name[0:2]]['exceedance year|1.5°C']
    return_yr = meta.loc[x.name[0:2]]['return year|1.5°C'] - 1 
    # do not include the year in which median temperature returns to below 1.5°C
    if exceedance_yr > 0 and return_yr > 0:
        return pyam.cumulative(x, exceedance_yr, return_yr) - (return_yr - exceedance_yr + 1) * 1.5
In [82]:
name = 'exceedance severity|1.5°C'
sr1p5.set_meta(median_temperature.apply(overshoot_severity, axis=1, raw=False, meta=sr1p5.meta), name)
meta_docs[name] = 'sum of median temperature exceeding the 1.5°C threshold'
In [83]:
exceedance_meta = median_temperature.apply(exceedance, axis=1, raw=True,
                                           years=median_temperature.columns, threshold=2)
In [84]:
name = 'exceedance year|2.0°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[0]), name)
meta_docs[name] = 'year in which the 2.0°C median warming threshold is exceeded'

name = 'return year|2.0°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[1]), name)
meta_docs[name] = 'year in which median warming returns below the 2.0°C threshold'

name = 'overshoot years|2.0°C'
sr1p5.set_meta(exceedance_meta.apply(lambda x: x[2]), name)
meta_docs[name] = 'number of years where 2.0°C median warming threshold is exceeded'

Secondary categorization and meta-data assignment according to CO2 emissions

Defining the range for cumulative indicators and units

All cumulative indicators are computed over the time horizon 2016-2100 (including the year 2100 in every summation).

In [85]:
baseyear = 2016
lastyear = 2100
In [86]:
def filter_and_convert(variable):
    return (sr1p5
            .filter(variable=variable)
            .convert_unit({'Mt CO2/yr': ('Gt CO2/yr', 0.001)})
            .timeseries()
           )

unit = 'Gt CO2/yr'
cumulative_unit = 'Gt CO2'
In [87]:
co2 = filter_and_convert('Emissions|CO2')
In [88]:
name = 'minimum net CO2 emissions ({})'.format(unit)
sr1p5.set_meta(co2.apply(np.nanmin, axis=1), name)
meta_docs[name] = 'Minimum of net CO2 emissions over the century ({})'.format(unit)

Indicators from cumulative CO2 emissions over the entire century (2016-2100)

Compute the total cumulative CO2 emissions for secondary categorization of scenarios. Cumulative CO2 emissions are a first-order proxy for global mean temperature change. Emissions are interpolated linearly between years. The last_year value is included in the summation.

The function pyam.cumulative() (provided by the pyam package) aggregates timeseries values from first_year until last_year, including both the first and the last year in the total. The function assumes linear interpolation for years where no values are provided.

In [89]:
name = 'cumulative CO2 emissions ({}-{}, {})'.format(baseyear, lastyear, cumulative_unit)
sr1p5.set_meta(co2.apply(pyam.cumulative, raw=False, axis=1, first_year=baseyear, last_year=lastyear), name)
meta_docs[name] = 'Cumulative net CO2 emissions from {} until {} (including the last year, {})'.format(
    baseyear, lastyear, cumulative_unit)
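
For intuition, the summation convention described above (linear interpolation between reported years, both endpoint years included) can be emulated with plain numpy. This is a sketch of the convention only, not the pyam source, and the toy emission pathway is hypothetical.

```python
import numpy as np
import pandas as pd

def cumulative_sketch(x, first_year, last_year):
    """Sum a timeseries over [first_year, last_year], interpolating
    linearly to an annual grid and including both endpoint years."""
    annual = np.arange(first_year, last_year + 1)
    values = np.interp(annual, np.asarray(x.index, dtype=float), x.values)
    return values.sum()

# hypothetical pathway: 10 Gt CO2/yr in 2016, declining linearly to zero by 2020
toy = pd.Series([10.0, 0.0], index=[2016, 2020])
print(cumulative_sketch(toy, 2016, 2020))  # 25.0 (annual values 10, 7.5, 5, 2.5, 0)
```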
In [90]:
ccs = filter_and_convert('Carbon Sequestration|CCS')
In [91]:
cum_ccs_label = 'cumulative CCS ({}-{}, {})'.format(baseyear, lastyear, cumulative_unit)
sr1p5.set_meta(ccs.apply(pyam.cumulative, raw=False, axis=1, first_year=baseyear, last_year=lastyear), cum_ccs_label)
meta_docs[cum_ccs_label] = 'Cumulative carbon capture and sequestration from {} until {} (including the last year, {})'\
    .format(baseyear, lastyear, cumulative_unit)
In [92]:
beccs = filter_and_convert('Carbon Sequestration|CCS|Biomass')
In [93]:
cum_beccs_label = 'cumulative BECCS ({}-{}, {})'.format(baseyear, lastyear, cumulative_unit)
sr1p5.set_meta(beccs.apply(pyam.cumulative, raw=False, axis=1, first_year=baseyear, last_year=lastyear), cum_beccs_label)
meta_docs[cum_beccs_label] = 'Cumulative carbon capture and sequestration from bioenergy from {} until {} (including the last year, {})'.format(
    baseyear, lastyear, cumulative_unit)

Issue #9 requested adding the data for scenarios where timeseries data for bioenergy with CCS was not provided explicitly (and hence not captured by the computation above) but could be implicitly assessed from the CCS timeseries data.

In [94]:
filled_ccs = sr1p5.meta[sr1p5.meta[cum_ccs_label] == 0][cum_beccs_label]
In [95]:
sr1p5.set_meta(name=cum_beccs_label, meta=0, index=filled_ccs[filled_ccs.isna()].index)
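
The fill logic of the two cells above can be illustrated on a toy metadata table (hypothetical scenario names, plain pandas rather than the pyam set_meta() API): wherever cumulative CCS is reported as zero and cumulative BECCS is missing, BECCS is implied to be zero as well.

```python
import numpy as np
import pandas as pd

# toy metadata: 'scen_a' reports zero total CCS but no BECCS timeseries
meta = pd.DataFrame({'cumulative CCS': [0.0, 400.0, 120.0],
                     'cumulative BECCS': [np.nan, 250.0, np.nan]},
                    index=['scen_a', 'scen_b', 'scen_c'])

# fill BECCS with zero only where total CCS is zero; 'scen_c' stays NaN
# because it reports non-zero CCS, so its BECCS share is genuinely unknown
fill = (meta['cumulative CCS'] == 0) & meta['cumulative BECCS'].isna()
meta.loc[fill, 'cumulative BECCS'] = 0
print(meta)
```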
In [96]:
seq_lu = filter_and_convert('Carbon Sequestration|Land Use')
In [97]:
name = 'cumulative sequestration land-use ({}-{}, {})'.format(baseyear, lastyear, cumulative_unit)
sr1p5.set_meta(seq_lu.apply(pyam.cumulative, raw=False, axis=1, first_year=baseyear, last_year=lastyear), name)
meta_docs[name] = 'Cumulative carbon sequestration from land use from {} until {} (including the last year, {})'.format(
    baseyear, lastyear, cumulative_unit)

Cumulative CO2 emissions until peak warming

In [98]:
def get_from_meta_column(df, x, col):
    val = df.meta.loc[x.name[0:2], col]
    return val if val < np.inf else max(x.index)
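
The helper above falls back to the last year of the timeseries when the metadata value is infinite (e.g. a scenario that never reaches net-zero). A standalone sketch of that fallback with a toy (model, scenario) metadata table, hypothetical names throughout:

```python
import numpy as np
import pandas as pd

def get_from_meta_column_sketch(meta, x, col):
    # use the metadata value unless it is infinite, in which case
    # take the last year of the timeseries itself
    val = meta.loc[x.name[0:2], col]
    return val if val < np.inf else max(x.index)

meta = pd.DataFrame(
    {'year of netzero CO2 emissions': [2055.0, np.inf]},
    index=pd.MultiIndex.from_tuples([('model_a', 'scen_1'), ('model_a', 'scen_2')]))

# timeseries rows are named by (model, scenario, region), as in timeseries() output
x1 = pd.Series([5.0, -1.0], index=[2050, 2060], name=('model_a', 'scen_1', 'World'))
x2 = pd.Series([5.0, 2.0], index=[2050, 2060], name=('model_a', 'scen_2', 'World'))
print(get_from_meta_column_sketch(meta, x1, 'year of netzero CO2 emissions'))  # 2055.0
print(get_from_meta_column_sketch(meta, x2, 'year of netzero CO2 emissions'))  # 2060
```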
In [99]:
name = 'cumulative CO2 emissions ({} to peak warming, {})'.format(baseyear, cumulative_unit)
sr1p5.set_meta(co2.apply(lambda x: pyam.cumulative(x, first_year=baseyear,
                                                   last_year=get_from_meta_column(sr1p5, x,
                                                                                  'year of peak warming (MAGICC6)')),
                         raw=False, axis=1), name)
meta_docs[name] = 'cumulative net CO2 emissions from {} until the year of peak warming as computed by MAGICC6 (including the year of peak warming, {})'.format(
    baseyear, cumulative_unit)
In [100]:
(
    sr1p5
    .filter(category=cats)
    .scatter(x='cumulative CO2 emissions (2016 to peak warming, {})'.format(cumulative_unit),
             y='median warming at peak (MAGICC6)', color='category')
)
Out[100]:
<matplotlib.axes._subplots.AxesSubplot at 0x1298f1f2cc0>

Cumulative CO2 emissions until net-zero of total emissions

In [101]:
def year_of_net_zero(data, years, threshold):
    prev_val = 0
    prev_yr = np.nan

    for yr, val in zip(years, data):
        if np.isnan(val):
            continue
        
        if val < threshold:
            x = (val - prev_val) / (yr - prev_yr) # absolute change per year
            return prev_yr + int((threshold - prev_val) / x) + 1 # add one because int() rounds down
        
        prev_val = val
        prev_yr = yr
    return np.inf
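
The crossing-year logic can again be checked on toy data. The block restates year_of_net_zero() so it runs standalone; the emission pathways are illustrative only.

```python
import numpy as np

# Restatement of year_of_net_zero() above so this sketch runs standalone
def year_of_net_zero(data, years, threshold):
    prev_val = 0
    prev_yr = np.nan
    for yr, val in zip(years, data):
        if np.isnan(val):
            continue
        if val < threshold:
            # linear interpolation between the last year above and the first
            # year below the threshold; +1 because int() rounds down
            x = (val - prev_val) / (yr - prev_yr)
            return prev_yr + int((threshold - prev_val) / x) + 1
        prev_val = val
        prev_yr = yr
    return np.inf  # net-zero is never reached within the horizon

# toy pathway: emissions fall from 20 Gt in 2030 to -5 Gt in 2040,
# crossing zero between those two reported years
print(year_of_net_zero([35, 20, -5], [2020, 2030, 2040], threshold=0))  # 2039
# a pathway that stays positive throughout returns inf
print(year_of_net_zero([35, 30, 25], [2020, 2030, 2040], threshold=0))  # inf
```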
In [102]:
name = 'year of netzero CO2 emissions'
sr1p5.set_meta(co2.apply(year_of_net_zero, years=co2.columns, threshold=0, axis=1), name)
meta_docs[name] = 'year in which net CO2 emissions reach zero'
In [103]:
name = 'cumulative CO2 emissions ({} to netzero, {})'.format(baseyear, cumulative_unit)
sr1p5.set_meta(co2.apply(lambda x: pyam.cumulative(x, first_year=baseyear,
                                                   last_year=get_from_meta_column(sr1p5, x,
                                                                                  'year of netzero CO2 emissions')),
                         raw=False, axis=1), name)
meta_docs[name] = 'net CO2 emissions from {} until the year of netzero CO2 emissions (including the last year, {})'.format(
    baseyear, cumulative_unit)
In [104]:
name = 'warming at netzero (MAGICC6)'
sr1p5.set_meta(median_temperature.apply(lambda x: x[get_from_meta_column(sr1p5, x,
                                                                         'year of netzero CO2 emissions')],
                                        raw=False, axis=1), name)
meta_docs[name] = 'median warming above pre-industrial temperatures in the year of net-zero CO2 emissions (MAGICC6, °C)'
In [105]:
(
    sr1p5
    .scatter(x='cumulative CO2 emissions (2016 to netzero, {})'.format(cumulative_unit),
             y='warming at netzero (MAGICC6)', color='category')
)
Out[105]:
<matplotlib.axes._subplots.AxesSubplot at 0x129861ec390>
In [106]:
fig, ax = plt.subplots()
(
    sr1p5
    .scatter(ax=ax, x='cumulative CO2 emissions (2016 to peak warming, {})'.format(cumulative_unit),
             y='cumulative CO2 emissions (2016 to netzero, {})'.format(cumulative_unit),
             color='category')
)
ax.plot(ax.get_xlim(), ax.get_xlim())
Out[106]:
[<matplotlib.lines.Line2D at 0x12985e91a58>]
In [107]:
fig, ax = plt.subplots()
(
    sr1p5
    .scatter(ax=ax, x='median warming at peak (MAGICC6)',
             y='warming at netzero (MAGICC6)', color='category')
)
x = np.linspace(*ax.get_xlim())
ax.plot(x, x)
Out[107]:
[<matplotlib.lines.Line2D at 0x1298694a7b8>]
In [108]:
fig, ax = plt.subplots()
(
    sr1p5
    .scatter(ax=ax, x='median warming in 2100 (MAGICC6)',
             y='warming at netzero (MAGICC6)', color='category')
)
x = np.linspace(*ax.get_xlim())
ax.plot(x, x)
Out[108]:
[<matplotlib.lines.Line2D at 0x129865b93c8>]

Categorization and meta-data assignment according to final energy demand

Add a categorization column to the metadata based on final energy demand at the end of the century.

In [109]:
horizon = list(range(2000, 2020, 5)) + list(range(2020, 2101, 10))
df = sr1p5.filter(year=horizon)
In [110]:
fe_df = df.filter(variable='Final Energy')
fe_df.line_plot(**plotting_args, marker='marker')
Out[110]:
<matplotlib.axes._subplots.AxesSubplot at 0x12987112a90>