
Working with Leaf Area Index available through CDS to convert effective LAI to true LAI#

This notebook can be run on free online platforms, such as Binder, Kaggle and Colab, or it can be accessed from GitHub. The links to run this notebook in these environments are provided here, but please note that they are not supported by ECMWF.


Introduction#

Leaf Area Index (LAI) is a measure of the total one-sided leaf surface area per unit ground area. It serves as a proxy for surface properties that directly influence energy exchange, carbon uptake, and water cycling in ecosystems, and it plays a critical role in climate modeling and land surface monitoring. Without exact knowledge of the canopy structure, this true LAI is hard to retrieve with optical remote sensing, which leads to the concept of effective LAI. This notebook demonstrates the conversion from effective to true LAI using a clumping factor, and it propagates the uncertainty to the true LAI, taking into account many, but not all, additional sources of uncertainty.

Learning objectives 🎯#

In this Jupyter notebook, you will:

  • Understand the difference between effective and true Leaf Area Index, and why conversion may be required for your use case;

  • Learn how to set up and prepare data for a Leaf Area Index (LAI) conversion algorithm, and then execute the conversion process;

  • Access and utilize LAI-related data from the Climate Data Store (CDS);

  • Explore the complexities of LAI conversions and recognize the importance of careful interpretation of results;

  • Get an impression of the uncertainties involved in the computation of true LAI.

Methodology#

The retrieval of LAI from optical sensors has to make assumptions about the canopy structure, which modifies the signal retrieved by the satellite. Different distributions (clumping) of the same amount of one-sided leaf area per unit ground (the definition of LAI) lead to different optical signals. Some products use a priori information about the biome type; others, like TIP (used for C3S LAI v2 – v4 or higher), assume a turbid medium. The LAI retrieved under such assumptions is called an effective LAI. Arguably, depending on the degree of realism of the modelled canopy structure, there are differing degrees of effectiveness. The clumping parameter \(\Omega\) (sometimes \(\zeta\)) relates true LAI to effective LAI as (cf. Pinty et al. 2006 or Fang et al. 2019): \begin{equation} \textit{LAI}_\textit{eff}(\theta)=\Omega(\theta) \times \textit{LAI}. \end{equation} The dependence on the solar zenith angle \(\theta\) is included here only for completeness. C3S-TIP LAI is computed for diffuse isotropic illumination.
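
To get a feel for the relation above, the following minimal sketch converts an effective LAI into a true LAI for an assumed clumping index of 0.7, a value within the range reported by Chen et al. (2005); both numbers are purely illustrative:

# Minimal sketch of the relation above: LAI_eff = Omega * LAI, hence LAI = LAI_eff / Omega.
# Both values below are illustrative only.
omega = 0.7      # assumed clumping index (within the range of Chen et al.'s Table 3)
lai_eff = 2.1    # effective LAI, as retrieved under a turbid-medium assumption
lai_true = lai_eff / omega
print(lai_true)  # -> 3.0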

Depending on the application, it may or may not be meaningful to convert effective LAI to true LAI. Here, we use the Land Cover Class (LCC)-specific clumping indices derived by Chen et al. (2005) (also available here) over a period of 8 months (Nov. 1996 – June 1997), together with the C3S LCC product.

Computing the true LAI from effective LAI by dividing by the clumping index may seem trivial, but knowing the correct clumping index is not easy. It actually introduces a number of uncertainties, namely:

  1. uncertainty of the land cover classification

  2. clumping variation within same land cover class (different plant functional types, growth states &c.)

  3. seasonal variation of clumping

  4. mapping uncertainty between land cover classes

(1) is addressed by using the confusion matrix of the C3S LCC product (C3S LCC v2.1 Product user guide) and averaging (with probability weights) over the classes. This does not, however, account for temporal LCC changes, which have not yet found their way into the LCC product.

(2) is partly addressed by converting the range of Chen et al.’s clumping indices into an uncertainty, assuming that the reported minimum and maximum define the 97.7% quantiles, corresponding to 2 sigma of a Gaussian (a small worked example is given below).

(3) is neglected here, as are the reported tendencies. Also note that the clumping parameters of Chen et al. (2005) were not computed over a full year.

(4) is caused by the somewhat differing classification schemes used by Chen et al. (2005) and C3S LCC. It is partly accounted for by averaging over the corresponding clumping indices where multiple ‘Chen’-classes are assigned to one C3S LCC.
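
As a small worked example for point (2), consider Chen et al.’s class 1 (broadleaf evergreen trees), for which the table further below lists a minimum of 0.59, a maximum of 0.68 and a mean of 0.63. Treating the half-range as 2 sigma gives a 1-sigma uncertainty of about 0.02:

# Worked example of the range-to-sigma conversion for Chen et al.'s class 1.
ci_min, ci_max = 0.59, 0.68
ci_uncertainty = 0.5 * 0.5 * (ci_max - ci_min)  # half-range (taken as 2 sigma), halved again to get 1 sigma
print(ci_uncertainty)  # -> 0.0225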

Therefore, the presented methodology and its results should be interpreted with caution: they merely demonstrate an approach to investigate the magnitude of the difference and the effects of the various sources of uncertainty. A clumping index derived on the basis of plant functional types, including a phenological cycle, is expected to give superior results.

Prepare your environment#

The first thing we need to do is make sure you are ready to run the code while working through this notebook. There are a few simple steps to get yourself set up.

Set up CDSAPI and your credentials#

The code below will ensure that the cdsapi package is installed. If you have not set up your ~/.cdsapirc file with your credentials, you can replace None with the credentials found on the how to api page (you will need to log in to see your credentials).

!pip install -q cdsapi
# If you have already setup your .cdsapirc file you can leave this as None
cdsapi_key = None
cdsapi_url = None

(Install and) Import libraries#

We use xarray for handling netCDF data sets and numpy for the computations. OrderedDict is required to access dictionaries by index, and datetime is used for monitoring the performance of the individual cells. Enabling dask would be required for larger datasets, but it proved problematic to use a function of a dask array as an index into the pre-computed factors in the function convert_LAI. This may, however, be solvable.

#import dask # to be done
import xarray as xr
import numpy as np
from collections import OrderedDict
from datetime import datetime

Explore data#

Input data#

Set things up#

To actually run the conversion, definitions for the algorithm need to be made, data needs to be read, the pre-computation steps need to be called, and the output needs to be prepared. Before any data is downloaded, you will have a chance to check these settings in section ‘Aside: something to play around with’ further below.

Correspondence between land cover class systems#

We have implemented the land cover classification system (LCCS, cf. Appendix A of the C3S LCC v2.1 Product user guide) and its mapping to the class numbers used in Table 3 of Chen et al. (2005) as ordered dictionaries. This mapping is not authoritative and is open to revision by the informed reader. We put all the Chen classes and related variables into a Python class named Clumping to achieve a namespace separation. This class also contains an uncertainty estimate of the clumping index, derived from the range given in Chen et al.’s table in the way explained in the Methodology section above.

LCCS_codes = [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220 ]

LCCS_legend = OrderedDict(
    {
        0   : "No Data",
        10  : "Cropland, rainfed",
        20  : "Cropland, irrigated or post-flooding",
        30  : "Mosaic cropland (>50%) / natural vegetation (tree, shrub, herbaceous cover) (<50%)",
        40  : "Mosaic natural vegetation (tree, shrub, herbaceous cover) (>50%) / cropland (<50%)",
        50  : "Tree cover, broadleaved, evergreen, closed to open (>15%)",
        60  : "Tree cover, broadleaved, deciduous, closed to open (>15%)",
        70  : "Tree cover, needleleaved, evergreen, closed to open (>15%)",
        80  : "Tree cover, needleleaved, deciduous, closed to open (>15%)",
        90  : "Tree cover, mixed leaf type (broadleaved and needleleaved)",
        100 : "Mosaic tree and shrub (>50%) / herbaceous cover (<50%)",
        110 : "Mosaic herbaceous cover (>50%) / tree and shrub (<50%)",
        120 : "Shrubland",
        130 : "Grassland",
        140 : "Lichens and mosses",
        150 : "Sparse vegetation (tree, shrub, herbaceous cover) (<15%)",
        160 : "Tree cover, flooded, fresh or brackish water",
        170 : "Tree cover, flooded, saline water",
        180 : "Shrub or herbaceous cover, flooded, fresh/saline/brackish water",
        190 : "Urban areas",
        200 : "Bare areas",
        210 : "Water bodies",
        220 : "Permanent snow and ice"
    }
)

class Clumping:
#> Table 3 of Chen et al. (2005); https://doi.org/10.1016/j.rse.2005.05.003
#> The land cover classification does not fully match the FAO Land Cover
#> Classification System (LCCS).
    
    class_codes = [ 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
    legend = OrderedDict(
        {
            "1"  : "Tree Cover, broadleaf, evergreen",
            "2"  : "Tree Cover, broadleaf, deciduous, closed",
            "3"  : "Tree Cover, broadleaf, deciduous, open",
            "4"  : "Tree Cover, needleleaf, evergreen",
            "5"  : "Tree Cover, needleleaf, deciduous",
            "6"  : "Tree Cover, mixed leaf type",
            "7"  : "Tree Cover, regularly flooded, fresh water",
            "8"  : "Tree Cover, regularly flooded, saline water",
            "9"  : "Mosaic: Tree Cover / Other natural vegetation",
            "10" : "Tree Cover, burnt",
            "11" : "Shrub Cover, closed-open, evergreen",
            "12" : "Shrub Cover, closed-open, deciduous",
            "13" : "Herbaceous Cover, closed-open",
            "14" : "Sparse herbaceous or sparse shrub cover",
            "15" : "Reg. flooded shrub and/or herbaceous cover",
            "16" : "Cultivated and managed areas",
            "17" : "Mosaic: Cropland / Tree Cover / Natural veg",
            "18" : "Mosaic: Cropland / Shrub and/or grass cover",
            "19" : "Bare Areas" }
    )

#> This is the mapping from the LCCS to the classes used in Chen et al.; this
#> may require adjustment! E.g. a grassland class seems to be missing.
    class_of_LCCS = OrderedDict( # not mapped to: 10; unsure: 18, 19
        {
            10 : [16],
            20 : [15,16],
            30 : [17,18],
            40 : [9],
            50 : [1],
            60 : [2,3],
            70 : [4],
            80 : [5],
            90 : [6],
            100 : [9],
            110 : [9,13],
# Chen et al.: "The 'Shrub' and 'Grassland' classes were retained, as the modeling results for 
#               broadleaf trees can be extended to these classes [...]"
            120 : [1,11,12],
            130 : [1], 
            140 : [19],
            150 : [14],
            160 : [7],
            170 : [8],
            180 : [15],
            190 : [19],
            200 : [19],
            210 : [19],
            220 : [19]
        }
    )
# clumping index min, max, and mean from Chen et al.'s table:
    ci_min = np.array([0.59,	0.59,	0.62,	0.55,	0.60,	0.58,	0.61,
                       0.65,	0.64,	0.65,	0.62,	0.62,	0.64,	0.67,
                       0.68,	0.63,	0.64,	0.65,	0.75])
    
    ci_max = np.array([0.68,	0.79,	0.78,	0.68,	0.77,	0.79,	0.69,
                       0.79,	0.82,	0.86,	0.80,	0.80,	0.83,	0.84,
                       0.85,	0.83,	0.76,	0.81,	0.99])

    ci_mean = np.array([0.63,	0.69,	0.70,	0.62,	0.68,	0.69,	0.65,
                        0.72,	0.72,	0.75,	0.71,	0.71,	0.74,	0.75,
                        0.77,	0.73,	0.70,	0.73,	0.87])

# For the uncertainty, we assume that the distribution is uniform above
# and below the mean, respectively, thus making the mean also the
# median. We then assume that the reported max and min values are the 97.7%-quantiles,
# which correspond to 2 sigma of a Gaussian distribution. The average over both sides
# is taken to account for cases where the values are not symmetric. Since no information
# about the distribution is available in Chen et al. (2005), the clumping
# index uncertainty is modelled as a Gaussian here.
    ci_uncertainty = 0.5 * 0.5 * (ci_max - ci_min) # 1-sigma; (max-mean)+(mean-min) = (max-min)
# tendencies from Table 3, not used:
    ci_d_NL = np.array([-0.006, -0.019, -0.005, -0.012, -0.033, -0.024, np.nan, np.nan, -0.013, -0.036,
                        -0.020, -0.016, -0.016, -0.019, -0.026, -0.018, -0.011, -0.018, -0.032])
    ci_d_EQ = np.array([-0.004, -0.001,  0.007, -0.017, np.nan, -0.018, -0.002, -0.006,  0.008, np.nan,
                        -0.010,  0.009,  0.003,  0.008,  0.004, -0.006, -0.004,  0.001, -0.03 ])
    ci_d_SL = np.array([ 0.024,  0.021,  0.025,  0.009, np.nan,  0.011, np.nan, np.nan, np.nan, np.nan,
                         0.024,  0.022,  0.026,  0.024,  0.024,  0.026,  0.024,  0.026,  0.027])

Here, we provide the confusion matrices, copied from the C3S LCC v2.1 Product quality assessment report for Sentinel-3 OLCI and SLSTR (PQAR):

# Confusion matrices of LCC, from C3S ICDR Land Cover Product Quality
# Assessment Report (D5.2.2_PQAR_ICDR_LC_v2.1.x_PRODUCTS_v1.0)
cm_2016 = np.array([
    [ 121,	30,	0,	0,	2,	1,	0,	0,	0,	0,	0,	5,	14,	0,	2,	0,	0,	1,	0,	1,	0,	0  ],
    [ 9,	24,	0,	0,	0,	2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 9,	0,	0,	0,	4,	1,	0,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 8,	1,	0,	0,	0,	7,	0,	0,	0,	0,	0,	3,	7,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 3,	0,	0,	0,	199,	15,	3,	0,	3,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 1,	0,	0,	0,	6,	72,	1,	8,	16,	0,	0,	13,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	10,	3,	53,	2,	16,	0,	0,	2,	0,	0,	4,	0,	0,	0,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	0,	3,	23,	3,	0,	0,	3,	3,	0,	4,	0,	0,	0,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	2,	1,	1,	14,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 2,	0,	0,	0,	8,	10,	3,	1,	0,	0,	0,	6,	5,	0,	2,	0,	0,	1,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	0,	0,	0,	1,	0,	0,	0,	0  ],
    [ 8,	0,	0,	0,	7,	19,	0,	1,	0,	0,	0,	105,	21,	0,	8,	0,	0,	0,	0,	1,	0,	0  ],
    [ 8,	3,	0,	0,	0,	0,	0,	0,	1,	0,	0,	19,	64,	1,	12,	0,	0,	1,	1,	25,	1,	1  ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0  ],
    [ 5,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	23,	22,	3,	25,	0,	0,	0,	0,	11,	0,	0  ],
    [ 0,	0,	0,	0,	6,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	6,	0,	1,	1,	0,	4,	0,	0,	0,	0  ],
    [ 0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	3,	0,	0,	0  ],
    [ 2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	13,	0,	0,	1,	0,	61,	0,	0  ],
    [ 0,	1,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	57,	0  ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0  ]
])

cm_2017 = np.array([
    [ 121,	30,	0,	0,	2,	1,	0,	0,	0,	0,	0,	5,	14,	0,	2,	0,	0,	1,	0,	1,	0,	0 ],
    [ 9,	24,	0,	0,	0,	2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 9,	0,	0,	0,	4,	1,	0,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	1,	0,	0,	0,	7,	0,	0,	0,	0,	0,	3,	7,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	199,	15,	3,	0,	3,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 1,	0,	0,	0,	6,	72,	1,	8,	16,	0,	0,	13,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	10,	3,	54,	2,	16,	0,	0,	2,	0,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	3,	23,	3,	0,	0,	3,	3,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	2,	1,	1,	14,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 2,	0,	0,	0,	8,	10,	3,	1,	0,	0,	0,	6,	5,	0,	2,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 8,	0,	0,	0,	7,	19,	0,	1,	0,	0,	0,	105,	21,	0,	8,	0,	0,	0,	0,	1,	0,	0 ],
    [ 8,	3,	0,	0,	0,	0,	0,	0,	1,	0,	0,	19,	64,	1,	12,	0,	0,	1,	1,	25,	1,	1 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 5,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	23,	22,	3,	25,	0,	0,	0,	0,	11,	0,	0 ],
    [ 0,	0,	0,	0,	6,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	6,	0,	1,	1,	0,	4,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	3,	0,	0,	0 ],
    [ 2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	13,	0,	0,	1,	0,	61,	0,	0 ],
    [ 0,	1,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	57,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ]
])

cm_2018 = np.array([
    [ 119,	30,	0,	0,	2,	1,	0,	0,	0,	0,	0,	5,	14,	0,	2,	0,	0,	1,	0,	1,	0,	0 ],
    [ 9,	24,	0,	0,	0,	2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 9,	0,	0,	0,	4,	1,	0,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	1,	0,	0,	0,	7,	0,	0,	0,	0,	0,	3,	7,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	197,	14,	3,	0,	3,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 1,	0,	0,	0,	6,	71,	1,	8,	16,	0,	0,	13,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	10,	3,	53,	2,	16,	0,	0,	2,	0,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	3,	23,	3,	0,	0,	2,	3,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	2,	1,	1,	14,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 2,	0,	0,	0,	8,	10,	3,	1,	0,	0,	0,	6,	5,	0,	2,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 8,	0,	0,	0,	7,	19,	0,	1,	0,	0,	0,	105,	21,	0,	8,	0,	0,	0,	0,	1,	0,	0 ],
    [ 8,	3,	0,	0,	0,	0,	0,	0,	1,	0,	0,	19,	64,	1,	14,	0,	0,	1,	1,	25,	1,	1 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 5,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	23,	22,	3,	24,	0,	0,	0,	0,	12,	0,	0 ],
    [ 0,	0,	0,	0,	6,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	6,	0,	1,	1,	0,	4,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	3,	0,	0,	0 ],
    [ 2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	12,	0,	0,	1,	0,	60,	0,	0 ],
    [ 0,	1,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	57,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ]
])

cm_2019 = np.array([
    [ 119,	30,	0,	0,	2,	1,	0,	0,	0,	0,	0,	5,	14,	0,	2,	0,	0,	1,	0,	1,	0,	0 ],
    [ 9,	24,	0,	0,	0,	2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	0,	0,	0,	4,	1,	0,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	1,	0,	0,	0,	7,	0,	0,	0,	0,	0,	2,	7,	0,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	198,	14,	3,	0,	2,	0,	0,	3,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 1,	0,	0,	0,	6,	71,	1,	8,	16,	0,	0,	12,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	10,	3,	53,	2,	16,	0,	0,	2,	0,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	2,	23,	3,	0,	0,	2,	3,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	2,	1,	1,	14,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	8,	10,	2,	1,	0,	0,	0,	5,	5,	0,	2,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 7,	0,	0,	0,	7,	19,	2,	1,	0,	0,	0,	105,	21,	0,	8,	0,	0,	0,	0,	1,	0,	0 ],
    [ 9,	3,	0,	0,	0,	0,	0,	0,	1,	0,	0,	19,	64,	1,	14,	0,	0,	1,	1,	25,	1,	1 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 5,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	23,	22,	3,	23,	0,	0,	0,	0,	12,	0,	0 ],
    [ 0,	0,	0,	0,	6,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	6,	0,	1,	1,	0,	4,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	3,	0,	0,	0 ],
    [ 2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	12,	0,	0,	1,	0,	60,	0,	0 ],
    [ 0,	1,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	57,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ]
])

cm_2020 = np.array([
    [ 119,	30,	0,	0,	2,	1,	0,	0,	0,	0,	0,	5,	14,	0,	2,	0,	0,	1,	0,	1,	0,	0 ],
    [ 9,	24,	0,	0,	0,	2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	0,	0,	0,	4,	1,	0,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 8,	1,	0,	0,	0,	7,	0,	0,	0,	0,	0,	2,	7,	0,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	197,	14,	3,	0,	2,	0,	0,	3,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 1,	0,	0,	0,	6,	71,	1,	8,	16,	0,	0,	12,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	10,	3,	53,	2,	16,	0,	0,	2,	0,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	2,	23,	3,	0,	0,	2,	2,	0,	4,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	2,	1,	1,	14,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 3,	0,	0,	0,	8,	10,	2,	1,	0,	0,	0,	5,	5,	0,	2,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	2,	3,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 7,	0,	0,	0,	7,	19,	2,	1,	0,	0,	0,	105,	22,	0,	8,	0,	0,	0,	0,	1,	0,	0 ],
    [ 9,	3,	0,	0,	0,	0,	0,	0,	1,	0,	0,	19,	64,	1,	14,	0,	0,	1,	1,	26,	1,	1 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	2,	1,	0,	0,	0,	0,	0,	0,	0 ],
    [ 5,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	23,	22,	3,	23,	0,	0,	0,	0,	12,	0,	0 ],
    [ 0,	0,	0,	0,	6,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	6,	0,	1,	1,	0,	4,	0,	0,	0,	0 ],
    [ 0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	3,	0,	0,	0 ],
    [ 2,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	1,	3,	0,	12,	0,	0,	0,	0,	59,	0,	0 ],
    [ 0,	1,	0,	0,	0,	0,	0,	0,	1,	0,	0,	0,	0,	0,	0,	0,	0,	1,	0,	0,	57,	0 ],
    [ 0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0,	0 ]
])

In order to convert the entries of the confusion matrices from absolute values into probabilities, we define the following routine:

def abs_to_prob(confusion_matrix_LCC):
    # Convert absolute numbers into probabilities:
    confusion_matrix_prob_LCC = np.empty([len(confusion_matrix_LCC),len(confusion_matrix_LCC)])
    for irow in range(len(confusion_matrix_LCC)):
        rowsum = np.sum( confusion_matrix_LCC[irow][:] )
        for icol in range(len(confusion_matrix_LCC[irow])):
            if ( rowsum > 0 ):
                confusion_matrix_prob_LCC[irow][icol] = ( confusion_matrix_LCC[irow][icol] / rowsum )
            else:
                confusion_matrix_prob_LCC[irow][icol] = 0.
        # debug:print(irow,np.sum(confusion_matrix_prob_LCC[irow]))
    return confusion_matrix_prob_LCC

We can then pre-compute the factors required in the conversion and in the uncertainty propagation, since they are a time-independent function of the LCC. The LAIs of all possible ‘Chen’-classes related to one LCC by the confusion matrix and by mapping ambiguity are averaged, weighted by probability, to get one probability-weighted LAI per C3S LCC type: \begin{equation} \textit{LAI}(\textit{LAI}_\textit{eff},\textrm{LCC}) = \frac{1}{n}\sum_{i=1}^n p_i(\textrm{LCC}) \times \frac{ \textit{LAI}_\textit{eff} }{ \Omega_i(\textrm{LCC}) }, \end{equation} where \(i\) is an index running over all \(n\) ‘Chen’-classes confused with one C3S LCC.

Since the factor \(\textit{LAI}_\textit{eff}\) is constant in the sum, the sum can be pre-computed into a conversion factor: \begin{equation} \texttt{f\_LAI\_cl}(\textrm{LCC}) = \frac{1}{n} \sum_{i=1}^n \frac{ p_i(\textrm{LCC}) }{ \Omega_i(\textrm{LCC}) }. \end{equation} With this factor, the conversion becomes \begin{equation} \textit{LAI}(\textit{LAI}_\textit{eff},\textrm{LCC}) = \texttt{f\_LAI\_cl}(\textrm{LCC}) \times \textit{LAI}_\textit{eff}. \end{equation}

For the uncertainty propagation, this means: \begin{equation} \sigma_{\textit{LAI}}(\textit{LAI}_\textit{eff},\textrm{LCC})^2 = \left( \sigma_{\textit{LAI}_\textit{eff}} \times \texttt{f\_LAI\_cl} \right)^2 + \left( \sigma_{\Omega(\textrm{LCC})} \times \textit{LAI}_\textit{eff} \times \left(-\frac{1}{n}\right)\sum_{i=1}^n \frac{ p_i(\textrm{LCC}) }{ \Omega_i(\textrm{LCC})^2 } \right)^2. \end{equation} We also pre-compute the LAI-independent part of the factor in the second term of the above equation as \begin{equation} \texttt{f\_LAI\_unc2\_cl}(\textrm{LCC}) = \left( \sigma_{\Omega(\textrm{LCC})} \times \left(-\frac{1}{n}\right)\sum_{i=1}^n \frac{ p_i(\textrm{LCC}) }{ \Omega_i(\textrm{LCC})^2 } \right)^2. \end{equation} In the computation of the uncertainty, we are treating the uncertainty of effective LAI and the uncertainty of the clumping factor as independent.
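
Before defining the full routines, the formulas above can be illustrated with a toy example: a single hypothetical LCC that maps with probability 1 to two ‘Chen’-classes. All numbers here are made up for the illustration; the real factors are computed from the confusion matrices and Chen et al.’s table in pre_compute_factors below:

import numpy as np

# Toy illustration of the pre-computed factors for one hypothetical LCC that
# maps (with probability 1) to two Chen et al. classes. Illustrative values only.
p = 1.0                            # confusion probability of the (single) confused LCC
omega = np.array([0.69, 0.70])     # clumping index means of the mapped Chen classes
sigma_omega = 0.03                 # assumed clumping index uncertainty (1 sigma)
n = len(omega)

f_LAI_cl = p / n * np.sum(1.0 / omega)                              # conversion factor
f_LAI_unc2_cl = (sigma_omega * p / n * np.sum(1.0 / omega**2))**2   # squared clumping term

LAI_eff, sigma_LAI_eff = 2.0, 0.2  # example effective LAI and its uncertainty
LAI = f_LAI_cl * LAI_eff
sigma_LAI = np.sqrt(f_LAI_unc2_cl * LAI_eff**2 + (f_LAI_cl * sigma_LAI_eff)**2)
print(round(LAI, 2), round(sigma_LAI, 2))  # -> 2.88 0.31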

def LCC_to_ix(LCC):
    # convert a Land Cover Class to an array index
    return(int)((LCC-10)/10)

def vLCC_to_ix(LCC):
    # convert a scalar LCC or array of LCC's to an array index
    mLCC = np.asarray(LCC)
    scalar_input = False
    if mLCC.ndim == 0:
        mLCC = mLCC[None]  # Makes x 1D
        scalar_input = True
    if scalar_input:
        return np.squeeze((int)((mLCC-10)/10+0.5))
    return ((LCC-10)/10).round().astype(dtype=np.int8)
    
   
def pre_compute_factors(confusion_matrix_prob_LCC):
    #
    # Pre-computation of factors for LAI and uncertainty computation
    #
    # Accounts for four sources of uncertainty:
    # (a) Uncertainty of the effective LAI
    # (b) Uncertainty caused by mis-classification of the Land Cover Class
    # (c) Uncertainty of the clumping factors
    # (d) Mapping ambiguity between C3S LCC and Chen et al.'s classes
    #
    # Intermediate "true" LAI uncertainty factor from clumping uncertainty:
    f_LAI_unc_cl = np.zeros([len(LCCS_codes),len(Clumping.class_codes)]) 
    f_LAI_unc2_cl = np.zeros([len(LCCS_codes)])
    # Intermediate "true" LAI factor:
    f_LAI_cl = np.zeros([len(LCCS_codes)])
    for iLCC in range(len(LCCS_codes)):
        if iLCC != LCC_to_ix(LCCS_codes[iLCC]):
            raise "dask limitation requires functional dependence between LCC code and array index."       
        # Compute variance caused by potential mis-classification
        for icol in range(len(confusion_matrix_prob_LCC[iLCC])):
            if ( confusion_matrix_prob_LCC[iLCC][icol] > 0. ):
                # Loop over list of mapped clumping classes
                class_list = Clumping.class_of_LCCS[LCCS_codes[icol]]
                nclasses = len(class_list)
                print("LCC",LCCS_codes[iLCC],"could be LCC",
                      LCCS_codes[icol],"with probability",int(1000*confusion_matrix_prob_LCC[iLCC][icol])/1000,"which maps to Chen et al. class(es)",
                      class_list, "(",nclasses,")") 
                if ( nclasses <= 0 ):
                    raise Exception("unmapped class")
                for ct in class_list: # mapping ambiguity between LCCS and Chen classes, no uncertainty assigned
                    iclump = Clumping.class_codes.index(ct)
                    # Accumulate LAI factor:
                    f_LAI_cl[iLCC] += ( confusion_matrix_prob_LCC[iLCC][icol] /
                                      Clumping.ci_mean[iclump] ) / nclasses 
                    # Accumulate uncertainty factor per clumping class; negative sign from derivative arbitrary (squared away in next step), but
                    # would matter if uncertainty sources were treated as correlated (extra terms):
                    f_LAI_unc_cl[iLCC,iclump] -=  (
                        Clumping.ci_uncertainty[iclump] *
                        confusion_matrix_prob_LCC[iLCC][icol] /
                        Clumping.ci_mean[iclump]**2 ) /  nclasses
        f_LAI_unc2_cl[iLCC] = np.sum( f_LAI_unc_cl[iLCC,:]**2 )
    return f_LAI_cl,f_LAI_unc2_cl

In the actual computation, we can then make use of these factors, limiting the number of operations per pixel to the required minimum.

def convert_LAI(LAI_eff, LAI_eff_uncertainty, LCC_type):
    #
    # returns the "true" LAI and its uncertainty after
    # 
    # - conversion from effective to "true" LAI using associated clumping factor
    # - propagation of uncertainty
    #
    iLCC = vLCC_to_ix(LCC_type-10)
    LAI = f_LAI_cl[iLCC] * LAI_eff
    LAI_uncertainty = np.sqrt( ( f_LAI_unc2_cl[iLCC] * LAI_eff**2 ) +
                               ( f_LAI_cl[iLCC] * LAI_eff_uncertainty )**2 )
    LAI_uncertainty = LAI_uncertainty.rename(LAI_eff_uncertainty.name)
    LAI_uncertainty.attrs['units'] = LAI_eff_uncertainty.attrs['units'] # forward unit
    return LAI.where(np.isfinite(LAI_eff)),LAI_uncertainty.where(np.isfinite(LAI_eff))

Preparing for conversion#

To actually run the conversion, data needs to be read, the pre-computation steps need to be called, and the output needs to be prepared. Here, we start the set-up by combining the confusion matrices of all years. They are quite similar, and because only a limited number of cases is available, their sum is thought to give more robust statistics for the estimation of the confusion probabilities.

# Set-up
confusion_matrix_prob_LCC = abs_to_prob(cm_2016+cm_2017+cm_2018+cm_2019+cm_2020)
f_LAI_cl,f_LAI_unc2_cl = pre_compute_factors(confusion_matrix_prob_LCC)
LCC 10 could be LCC 10 with probability 0.681 which maps to Chen et al. class(es) [16] ( 1 )
LCC 10 could be LCC 20 with probability 0.17 which maps to Chen et al. class(es) [15, 16] ( 2 )
LCC 10 could be LCC 50 with probability 0.011 which maps to Chen et al. class(es) [1] ( 1 )
LCC 10 could be LCC 60 with probability 0.005 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 10 could be LCC 120 with probability 0.028 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 10 could be LCC 130 with probability 0.079 which maps to Chen et al. class(es) [1] ( 1 )
LCC 10 could be LCC 150 with probability 0.011 which maps to Chen et al. class(es) [14] ( 1 )
LCC 10 could be LCC 180 with probability 0.005 which maps to Chen et al. class(es) [15] ( 1 )
LCC 10 could be LCC 200 with probability 0.005 which maps to Chen et al. class(es) [19] ( 1 )
LCC 20 could be LCC 10 with probability 0.257 which maps to Chen et al. class(es) [16] ( 1 )
LCC 20 could be LCC 20 with probability 0.685 which maps to Chen et al. class(es) [15, 16] ( 2 )
LCC 20 could be LCC 60 with probability 0.057 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 30 could be LCC 10 with probability 0.462 which maps to Chen et al. class(es) [16] ( 1 )
LCC 30 could be LCC 50 with probability 0.215 which maps to Chen et al. class(es) [1] ( 1 )
LCC 30 could be LCC 60 with probability 0.053 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 30 could be LCC 120 with probability 0.107 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 30 could be LCC 130 with probability 0.161 which maps to Chen et al. class(es) [1] ( 1 )
LCC 40 could be LCC 10 with probability 0.307 which maps to Chen et al. class(es) [16] ( 1 )
LCC 40 could be LCC 20 with probability 0.038 which maps to Chen et al. class(es) [15, 16] ( 2 )
LCC 40 could be LCC 60 with probability 0.269 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 40 could be LCC 120 with probability 0.1 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 40 could be LCC 130 with probability 0.269 which maps to Chen et al. class(es) [1] ( 1 )
LCC 40 could be LCC 150 with probability 0.015 which maps to Chen et al. class(es) [14] ( 1 )
LCC 50 could be LCC 10 with probability 0.013 which maps to Chen et al. class(es) [16] ( 1 )
LCC 50 could be LCC 50 with probability 0.882 which maps to Chen et al. class(es) [1] ( 1 )
LCC 50 could be LCC 60 with probability 0.064 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 50 could be LCC 70 with probability 0.013 which maps to Chen et al. class(es) [4] ( 1 )
LCC 50 could be LCC 90 with probability 0.011 which maps to Chen et al. class(es) [6] ( 1 )
LCC 50 could be LCC 120 with probability 0.01 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 50 could be LCC 130 with probability 0.004 which maps to Chen et al. class(es) [1] ( 1 )
LCC 60 could be LCC 10 with probability 0.008 which maps to Chen et al. class(es) [16] ( 1 )
LCC 60 could be LCC 50 with probability 0.051 which maps to Chen et al. class(es) [1] ( 1 )
LCC 60 could be LCC 60 with probability 0.615 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 60 could be LCC 70 with probability 0.008 which maps to Chen et al. class(es) [4] ( 1 )
LCC 60 could be LCC 80 with probability 0.068 which maps to Chen et al. class(es) [5] ( 1 )
LCC 60 could be LCC 90 with probability 0.137 which maps to Chen et al. class(es) [6] ( 1 )
LCC 60 could be LCC 120 with probability 0.108 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 70 could be LCC 50 with probability 0.11 which maps to Chen et al. class(es) [1] ( 1 )
LCC 70 could be LCC 60 with probability 0.033 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 70 could be LCC 70 with probability 0.589 which maps to Chen et al. class(es) [4] ( 1 )
LCC 70 could be LCC 80 with probability 0.022 which maps to Chen et al. class(es) [5] ( 1 )
LCC 70 could be LCC 90 with probability 0.177 which maps to Chen et al. class(es) [6] ( 1 )
LCC 70 could be LCC 120 with probability 0.022 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 70 could be LCC 150 with probability 0.044 which maps to Chen et al. class(es) [14] ( 1 )
LCC 80 could be LCC 70 with probability 0.068 which maps to Chen et al. class(es) [4] ( 1 )
LCC 80 could be LCC 80 with probability 0.608 which maps to Chen et al. class(es) [5] ( 1 )
LCC 80 could be LCC 90 with probability 0.079 which maps to Chen et al. class(es) [6] ( 1 )
LCC 80 could be LCC 120 with probability 0.063 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 80 could be LCC 130 with probability 0.074 which maps to Chen et al. class(es) [1] ( 1 )
LCC 80 could be LCC 150 with probability 0.105 which maps to Chen et al. class(es) [14] ( 1 )
LCC 90 could be LCC 60 with probability 0.111 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 90 could be LCC 70 with probability 0.055 which maps to Chen et al. class(es) [4] ( 1 )
LCC 90 could be LCC 80 with probability 0.055 which maps to Chen et al. class(es) [5] ( 1 )
LCC 90 could be LCC 90 with probability 0.777 which maps to Chen et al. class(es) [6] ( 1 )
LCC 100 could be LCC 10 with probability 0.063 which maps to Chen et al. class(es) [16] ( 1 )
LCC 100 could be LCC 50 with probability 0.212 which maps to Chen et al. class(es) [1] ( 1 )
LCC 100 could be LCC 60 with probability 0.265 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 100 could be LCC 70 with probability 0.069 which maps to Chen et al. class(es) [4] ( 1 )
LCC 100 could be LCC 80 with probability 0.026 which maps to Chen et al. class(es) [5] ( 1 )
LCC 100 could be LCC 120 with probability 0.148 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 100 could be LCC 130 with probability 0.132 which maps to Chen et al. class(es) [1] ( 1 )
LCC 100 could be LCC 150 with probability 0.053 which maps to Chen et al. class(es) [14] ( 1 )
LCC 100 could be LCC 180 with probability 0.026 which maps to Chen et al. class(es) [15] ( 1 )
LCC 110 could be LCC 70 with probability 0.096 which maps to Chen et al. class(es) [4] ( 1 )
LCC 110 could be LCC 120 with probability 0.258 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 110 could be LCC 130 with probability 0.483 which maps to Chen et al. class(es) [1] ( 1 )
LCC 110 could be LCC 180 with probability 0.161 which maps to Chen et al. class(es) [15] ( 1 )
LCC 120 could be LCC 10 with probability 0.044 which maps to Chen et al. class(es) [16] ( 1 )
LCC 120 could be LCC 50 with probability 0.041 which maps to Chen et al. class(es) [1] ( 1 )
LCC 120 could be LCC 60 with probability 0.111 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 120 could be LCC 70 with probability 0.004 which maps to Chen et al. class(es) [4] ( 1 )
LCC 120 could be LCC 80 with probability 0.005 which maps to Chen et al. class(es) [5] ( 1 )
LCC 120 could be LCC 120 with probability 0.615 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 120 could be LCC 130 with probability 0.124 which maps to Chen et al. class(es) [1] ( 1 )
LCC 120 could be LCC 150 with probability 0.046 which maps to Chen et al. class(es) [14] ( 1 )
LCC 120 could be LCC 200 with probability 0.005 which maps to Chen et al. class(es) [19] ( 1 )
LCC 130 could be LCC 10 with probability 0.06 which maps to Chen et al. class(es) [16] ( 1 )
LCC 130 could be LCC 20 with probability 0.021 which maps to Chen et al. class(es) [15, 16] ( 2 )
LCC 130 could be LCC 90 with probability 0.007 which maps to Chen et al. class(es) [6] ( 1 )
LCC 130 could be LCC 120 with probability 0.136 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 130 could be LCC 130 with probability 0.461 which maps to Chen et al. class(es) [1] ( 1 )
LCC 130 could be LCC 140 with probability 0.007 which maps to Chen et al. class(es) [19] ( 1 )
LCC 130 could be LCC 150 with probability 0.095 which maps to Chen et al. class(es) [14] ( 1 )
LCC 130 could be LCC 180 with probability 0.007 which maps to Chen et al. class(es) [15] ( 1 )
LCC 130 could be LCC 190 with probability 0.007 which maps to Chen et al. class(es) [19] ( 1 )
LCC 130 could be LCC 200 with probability 0.181 which maps to Chen et al. class(es) [19] ( 1 )
LCC 130 could be LCC 210 with probability 0.007 which maps to Chen et al. class(es) [19] ( 1 )
LCC 130 could be LCC 220 with probability 0.007 which maps to Chen et al. class(es) [19] ( 1 )
LCC 140 could be LCC 140 with probability 0.666 which maps to Chen et al. class(es) [19] ( 1 )
LCC 140 could be LCC 150 with probability 0.333 which maps to Chen et al. class(es) [14] ( 1 )
LCC 150 could be LCC 10 with probability 0.056 which maps to Chen et al. class(es) [16] ( 1 )
LCC 150 could be LCC 120 with probability 0.259 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 150 could be LCC 130 with probability 0.248 which maps to Chen et al. class(es) [1] ( 1 )
LCC 150 could be LCC 140 with probability 0.033 which maps to Chen et al. class(es) [19] ( 1 )
LCC 150 could be LCC 150 with probability 0.27 which maps to Chen et al. class(es) [14] ( 1 )
LCC 150 could be LCC 200 with probability 0.13 which maps to Chen et al. class(es) [19] ( 1 )
LCC 160 could be LCC 50 with probability 1.0 which maps to Chen et al. class(es) [1] ( 1 )
LCC 170 could be LCC 180 with probability 1.0 which maps to Chen et al. class(es) [15] ( 1 )
LCC 180 could be LCC 130 with probability 0.5 which maps to Chen et al. class(es) [1] ( 1 )
LCC 180 could be LCC 150 with probability 0.083 which maps to Chen et al. class(es) [14] ( 1 )
LCC 180 could be LCC 160 with probability 0.083 which maps to Chen et al. class(es) [7] ( 1 )
LCC 180 could be LCC 180 with probability 0.333 which maps to Chen et al. class(es) [15] ( 1 )
LCC 190 could be LCC 60 with probability 0.25 which maps to Chen et al. class(es) [2, 3] ( 2 )
LCC 190 could be LCC 190 with probability 0.75 which maps to Chen et al. class(es) [19] ( 1 )
LCC 200 could be LCC 10 with probability 0.025 which maps to Chen et al. class(es) [16] ( 1 )
LCC 200 could be LCC 120 with probability 0.012 which maps to Chen et al. class(es) [1, 11, 12] ( 3 )
LCC 200 could be LCC 130 with probability 0.037 which maps to Chen et al. class(es) [1] ( 1 )
LCC 200 could be LCC 150 with probability 0.156 which maps to Chen et al. class(es) [14] ( 1 )
LCC 200 could be LCC 180 with probability 0.01 which maps to Chen et al. class(es) [15] ( 1 )
LCC 200 could be LCC 200 with probability 0.758 which maps to Chen et al. class(es) [19] ( 1 )
LCC 210 could be LCC 20 with probability 0.016 which maps to Chen et al. class(es) [15, 16] ( 2 )
LCC 210 could be LCC 90 with probability 0.016 which maps to Chen et al. class(es) [6] ( 1 )
LCC 210 could be LCC 180 with probability 0.003 which maps to Chen et al. class(es) [15] ( 1 )
LCC 210 could be LCC 210 with probability 0.962 which maps to Chen et al. class(es) [19] ( 1 )

Aside: something to play around with#

Feel free to vary the inputs in the following cell, as long as all arrays have the same length. This should give you an idea of the magnitude of the change in LAI and its uncertainty caused by the conversion for different LCCs.

# Output
LAI = np.empty([6])
LAI_uncertainty = np.empty([6])
# Input
LAI_eff = xr.DataArray([1.,2.,3.,1.,2.,3.],name='LAI',attrs={'units':'m2.m-2'})
LAI_eff_uncertainty = xr.DataArray([0.2,0.2,0.2,0.2,0.2,0.2],name='LAI_ERR',attrs={'units':'m2.m-2'})
LCC_type = xr.DataArray([160,160,160, 120, 120, 120],name='LCCS_type')
# Actual conversion:
LAI, LAI_uncertainty = convert_LAI(LAI_eff, LAI_eff_uncertainty, LCC_type)
print("LCC type",LCC_type)
for cLCC in LCC_type:
    print(cLCC," : ",LCCS_legend[(int)(cLCC.compute())])
print("LAI_eff :",LAI_eff,"\tLAI_eff_unc : ",LAI_eff_uncertainty)
print("LAI     :",LAI    ,"\tLAI_unc     : ",LAI_uncertainty    )
LCC type <xarray.DataArray 'LCCS_type' (dim_0: 6)> Size: 48B
array([160, 160, 160, 120, 120, 120])
Dimensions without coordinates: dim_0
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(160)  :  Tree cover, flooded, fresh or brackish water
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(160)  :  Tree cover, flooded, fresh or brackish water
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(160)  :  Tree cover, flooded, fresh or brackish water
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(120)  :  Shrubland
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(120)  :  Shrubland
<xarray.DataArray 'LCCS_type' ()> Size: 8B
array(120)  :  Shrubland
LAI_eff : <xarray.DataArray 'LAI' (dim_0: 6)> Size: 48B
array([1., 2., 3., 1., 2., 3.])
Dimensions without coordinates: dim_0
Attributes:
    units:    m2.m-2 	LAI_eff_unc :  <xarray.DataArray 'LAI_ERR' (dim_0: 6)> Size: 48B
array([0.2, 0.2, 0.2, 0.2, 0.2, 0.2])
Dimensions without coordinates: dim_0
Attributes:
    units:    m2.m-2
LAI     : <xarray.DataArray 'LAI' (dim_0: 6)> Size: 48B
array([1.40312771, 2.80625543, 4.20938314, 1.51246069, 3.02492137,
       4.53738206])
Dimensions without coordinates: dim_0
Attributes:
    units:    m2.m-2 	LAI_unc     :  <xarray.DataArray 'LAI_ERR' (dim_0: 6)> Size: 48B
array([0.28257368, 0.28833913, 0.29770019, 0.30473558, 0.31136893,
       0.3221211 ])
Dimensions without coordinates: dim_0
Attributes:
    units:    m2.m-2

Select the data#

We will use the effective LAI, its flags and uncertainty, as well as the C3S LCC data from the CDS.

Having selected the correct dataset, we now need to specify what parameters we are interested in. These parameters can all be selected in the “Download data” tab. In this tab a form appears in which we will select the following parameters to download:

Parameters of data to download
  • Format: tgz

  • Variable: LAI

  • Satellite: Sentinel 3

  • Sensor: OLCI and SLSTR

  • Horizontal resolution: 300m

  • Product version: v4

  • Year: 2019

  • Month: 04

  • Nominal day: 10

  • Geographical area: 60, 0, 40, 20

At the end of the download form, select “Show API request”. This will reveal a block of code, which you can simply copy and paste into a cell of your Jupyter Notebook (see cell below). Having copied the API request into the cell below, running it will retrieve and download the requested data into your local directory.

Warning

Please remember to accept the terms and conditions of the dataset, at the bottom of the CDS download form!

Download the data#

Our code defines a Python subroutine get_data, which downloads LAI and fAPAR data as a gzipped tar archive from the CDS and writes it to the file given by the target argument of get_data. The data is kept in a file to avoid repeated downloading when the notebook is re-run and to avoid holding all the data in memory. This comes at the cost of duplicating the data on disk in the subsequent extraction step. Currently, there seems to be no way to avoid this, because the CDS always delivers compressed archives when the data is ordered as a file.

For this tutorial, we are downloading effective LAI V4 (a global product) based on Sentinel-3 (OLCI and SLSTR) data with a spatial resolution of 300 m for April 10, 2019, subset to the area 60°N–40°N, 0°E–20°E specified above.

def get_data(dataset, request, target):
    import cdsapi
    import os.path
    if os.path.isfile(target):
        print("target",target,"already exists.")
    else:
        client = cdsapi.Client(key = cdsapi_key, url = cdsapi_url)
        client.retrieve(dataset,request,target) #.download()
            
starttime = datetime.now()
dataset = 'satellite-lai-fapar'
request = {
    'format': 'tgz',
    'variable': ['lai'],
    'satellite': ['sentinel_3'],
    'sensor': 'olci_and_slstr',
    'horizontal_resolution': ['300m'],
    'product_version': 'v4',
    'year': ['2019'],
    'month': ['04'],
    'nominal_day': ['10'],
    'area': [60, 0, 40, 20]
}

laifile = 'laidata.tgz'
get_data(dataset,request,target=laifile)
print('got LAI data,       elapsed:',datetime.now()-starttime)
2025-10-17 12:19:38,226 INFO Request ID is bd0b7862-b2d4-4f5f-a7d0-64f8a6c4ac90
2025-10-17 12:19:38,351 INFO status has been updated to accepted
2025-10-17 12:19:52,107 INFO status has been updated to running
2025-10-17 12:20:28,611 INFO status has been updated to successful
got LAI data,       elapsed: 0:01:06.051862

A similar step is required for the LCC data, to get the corresponding land cover classes for 2019. The resolution of this product is 300 m.

starttime = datetime.now()
dataset = "satellite-land-cover"
request = {
    'format': 'tgz',
    'variable': 'all',
    'year': ['2019'],
    'version': ['v2_1_1'],
    'area': [60, 0, 40, 20]
}
lccfile = 'lccdata.tgz'
get_data(dataset,request,target=lccfile)
print('got LCC data,       elapsed:',datetime.now()-starttime)
2025-10-17 12:21:59,555 INFO [2025-07-04T00:00:00] Due to a transition between project phases, there are changes to the timeline of this dataset updates, which are usually on an annual basis with a one year delay: 2023 and 2024 data updates are now expected during 2026. Please watch the [forum](https://forum.ecmwf.int/c/announcements/5) for future announcements.
2025-10-17 12:21:59,556 INFO Request ID is c24c4716-f904-4103-83b9-d833375b20ed
2025-10-17 12:22:01,624 INFO status has been updated to accepted
2025-10-17 12:22:10,145 INFO status has been updated to running
2025-10-17 12:22:15,298 INFO status has been updated to successful
got LCC data,       elapsed: 0:00:28.258244

Unpack the data#

The .tgz file used in the transfer from the CDS is like a tightly packed box of information (a compressed archive). To use the data inside, we first need to unpack and decompress it.

def unpack_data(file):
    import tarfile
    import os.path
    tf = tarfile.open(name=file,mode='r')
    print('opened tar file,  elapsed:',datetime.now()-starttime)
    tf.list()
    print('listing,     elapsed:',datetime.now()-starttime)
    # just extract what is not present:
    for xfile in tf:
        if not os.path.isfile(xfile.name):
            print('extracting ',xfile.name)
            tf.extract(member=xfile.name,path='.') # uses current working directory        
        else:
            print('present    ',xfile.name)
    return tf

starttime = datetime.now()
lai_tarfileinfo = unpack_data(laifile)
lcc_tarfileinfo = unpack_data(lccfile)
print('unpacked data,    elapsed:',datetime.now()-starttime)
opened tar file,  elapsed: 0:00:00.001783
?rw-r--r-- root/root  111877524 2025-10-16 11:06:11 c3s_LAI_20190410000000_GLOBE_SENTINEL3_V4.0.1.area-subset.60.20.40.0.nc 
listing,     elapsed: 0:00:00.071985
extracting  c3s_LAI_20190410000000_GLOBE_SENTINEL3_V4.0.1.area-subset.60.20.40.0.nc
opened tar file,  elapsed: 0:00:00.201828
?rw-r--r-- root/root   54252498 2025-10-16 11:14:50 C3S-LC-L4-LCCS-Map-300m-P1Y-2019-v2.1.1.area-subset.60.20.40.0.nc 
listing,     elapsed: 0:00:00.242124
extracting  C3S-LC-L4-LCCS-Map-300m-P1Y-2019-v2.1.1.area-subset.60.20.40.0.nc
unpacked data,    elapsed: 0:00:00.315786

Inspect the data#

The data are prepared for reading by passing the file names to xarray dataset objects, called laifiledata and lccfiledata here. The commented-out code lines are alternatives that may be used when the computations run with dask – which is currently not the case because of the limitation mentioned in section ‘(Install and) Import libraries’ above.

starttime = datetime.now()
varname = 'LAI' 
uncname = varname + '_ERR' # name of uncertainty layer
# Extract the file names containing `varname` from `tarfileinfo`
inputfiles = [] # start with empty list
for xfile in lai_tarfileinfo:
    if varname.casefold() in xfile.name.casefold():
        inputfiles.append(xfile.name)
# For dask: Give the list to an `xarray` multi-file object
#dask.config.set({"array.slicing.split_large_chunks": True})
#laifiledata = xr.open_mfdataset(inputfiles,chunks='auto',parallel=True)
#laifiledata = xr.open_dataset(inputfiles[0],chunks='auto')
# Without dask:
laifiledata = xr.load_dataset(inputfiles[0])
lccname = 'lccs_class'
inputfiles = [] # start with empty list
for xfile in lcc_tarfileinfo:
    if 'LCCS-Map'.casefold() in xfile.name.casefold():
        inputfiles.append(xfile.name)
#lccfiledata = xr.open_mfdataset(inputfiles,chunks='auto',parallel=True)
#lccfiledata = xr.open_dataset(inputfiles[0],chunks='auto')
lccfiledata = xr.load_dataset(inputfiles[0])

#
print('set up input files:',datetime.now()-starttime)
set up input files: 0:00:03.422162

Apply quality flags#

TIP LAI and fAPAR come with a set of flags containing information about the retrieval and its quality. They are stored as individual bits in the layer retrieval_flag. We use the hexadecimal representation 0x1C1 of the bit pattern 111000001 here to exclude cells with the conditions obs_is_fillvalue, tip_untrusted, obs_unusable, and obs_inconsistent (see the v4.0 Product user guide for reference). In the end, we must not forget to define the units of the result:

def apply_flags(data,fielddict):
    import numpy as np
    func     = lambda val, flags : np.where( (np.bitwise_and(flags.astype('uint32'),0x1C1) == 0x0 ), val, np.nan )
    units = data[fielddict['variable']].attrs['units']
    clean_data = xr.apply_ufunc(func,\
                                data[fielddict['variable']],\
                                data[fielddict['flags']],\
                                dask="allowed",dask_gufunc_kwargs={'allow_rechunk':True})
    # Set units of result:
    clean_data.attrs['units'] = units
    return clean_data

starttime = datetime.now()
laifiledata[varname] = apply_flags(laifiledata,{'variable':varname,'flags':'retrieval_flag'})
laifiledata[uncname] = apply_flags(laifiledata,{'variable':uncname,'flags':'retrieval_flag'})
print('applied flags,  elapsed:',datetime.now()-starttime)
applied flags,  elapsed: 0:00:00.300890

Run the conversion#

Now that the data and the set-up are ready, it is time to run the conversion. Note that the datasets to be combined are not defined on the same grid, even though both use a global regular grid with a spatial resolution of approximately 300 m at the equator. Therefore, a nearest-neighbour interpolation is nested into the call of the conversion function.

starttime = datetime.now()
# Test output to check objects
print (laifiledata[varname])
print (laifiledata[uncname])
lccfiledata['time'] = laifiledata['time'] # to align time dimension
print(lccfiledata[lccname])
print(lccfiledata[lccname].\
                         sel(lat=slice(laifiledata.lat.max(),laifiledata.lat.min())).\
                         interp(coords={'lon':laifiledata.lon,'lat':laifiledata.lat},method='nearest',assume_sorted=False))
# Define conversion as lambda function to apply in dask delayed computation
conversion_func = lambda var, unc, lcc : convert_LAI(LAI_eff=var, LAI_eff_uncertainty=unc, LCC_type=lcc)
# Set up the conversion
#LAI  = dask.array.apply_gufunc(conversion_func,
#                      '(i,j),(i,j),(i,j)->(i,j)',
#                      laifiledata[varname],
#                      laifiledata[uncname],
#                      lccfiledata[lccname].\
#                         sel(lat=slice(laifiledata.lat.min(),laifiledata.lat.max())).\
#                         interp(coords={'lon':laifiledata.lon,'lat':laifiledata.lat},method='nearest',assume_sorted=True)
#                      )
#LAI, LAIu  = xr.apply_ufunc(conversion_func,
#                      laifiledata[varname],
#                      laifiledata[uncname],
#                      lccfiledata[lccname].\
#                         sel(lat=slice(laifiledata.lat.max(),laifiledata.lat.min())).\
#                         interp(coords={'lon':laifiledata.lon,'lat':laifiledata.lat},method='nearest',assume_sorted=False),
#                      dask="allowed")
LAI = convert_LAI(LAI_eff = laifiledata[varname],
                   LAI_eff_uncertainty = laifiledata[uncname],
                   LCC_type = lccfiledata[lccname].\
                         sel(lat=slice(laifiledata.lat.max(),laifiledata.lat.min())).\
                         interp(coords={'lon':laifiledata.lon,'lat':laifiledata.lat},method='nearest',assume_sorted=False))

print('set up computation,  elapsed:',datetime.now()-starttime)
<xarray.DataArray 'LAI' (time: 1, lat: 6720, lon: 6720)> Size: 181MB
array([[[       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        ...,
        [0.75202197, 0.64169085, 0.6520677 , ..., 0.8435831 ,
         0.9189684 , 0.55409735],
        [0.6748054 , 0.61712193,        nan, ..., 0.8974515 ,
         0.8507554 , 0.916069  ],
        [0.6218526 , 0.58675414, 0.6010987 , ..., 0.8415993 ,
         0.55287653, 0.61162823]]], shape=(1, 6720, 6720), dtype=float32)
Coordinates:
  * time     (time) datetime64[ns] 8B 2019-04-10
  * lon      (lon) float64 54kB 8.185e-10 0.002976 0.005952 ... 19.99 19.99 20.0
  * lat      (lat) float64 54kB 60.0 59.99 59.99 59.99 ... 40.01 40.01 40.0 40.0
Attributes:
    units:    m2.m-2
<xarray.DataArray 'LAI_ERR' (time: 1, lat: 6720, lon: 6720)> Size: 181MB
array([[[       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        ...,
        [0.61208606, 0.5263238 , 0.5374637 , ..., 0.84816116,
         0.9334656 , 0.5267816 ],
        [0.5716466 , 0.528155  ,        nan, ..., 0.9446055 ,
         0.7863574 , 0.8448039 ],
        [0.55272394, 0.51960933, 0.5835495 , ..., 0.7639249 ,
         0.49641386, 0.441935  ]]], shape=(1, 6720, 6720), dtype=float32)
Coordinates:
  * time     (time) datetime64[ns] 8B 2019-04-10
  * lon      (lon) float64 54kB 8.185e-10 0.002976 0.005952 ... 19.99 19.99 20.0
  * lat      (lat) float64 54kB 60.0 59.99 59.99 59.99 ... 40.01 40.01 40.0 40.0
Attributes:
    units:    m2.m-2
<xarray.DataArray 'lccs_class' (time: 1, lat: 7200, lon: 7200)> Size: 52MB
array([[[210, 210, 210, ..., 210, 210, 210],
        [210, 210, 210, ..., 210, 210, 210],
        [210, 210, 210, ..., 210, 210, 210],
        ...,
        [ 12,  12,  12, ...,  12,  40,  40],
        [ 12,  12,  12, ...,  12,  12,  30],
        [ 12,  12,  12, ...,  12,  12,  12]]],
      shape=(1, 7200, 7200), dtype=uint8)
Coordinates:
  * lat      (lat) float64 58kB 60.0 60.0 59.99 59.99 ... 40.01 40.01 40.0 40.0
  * lon      (lon) float64 58kB 0.001389 0.004167 0.006944 ... 19.99 20.0 20.0
  * time     (time) datetime64[ns] 8B 2019-04-10
Attributes:
    standard_name:        land_cover_lccs
    flag_colors:          #ffff64 #ffff64 #ffff00 #aaf0f0 #dcf064 #c8c864 #00...
    long_name:            Land cover class defined in LCCS
    valid_min:            1
    valid_max:            220
    ancillary_variables:  processed_flag current_pixel_state observation_coun...
    flag_meanings:        no_data cropland_rainfed cropland_rainfed_herbaceou...
    flag_values:          [  0  10  11  12  20  30  40  50  60  61  62  70  7...
<xarray.DataArray 'lccs_class' (time: 1, lat: 6720, lon: 6720)> Size: 361MB
array([[[ nan,  nan,  nan, ...,  nan,  nan,  nan],
        [ nan, 210., 210., ..., 210., 210., 210.],
        [ nan, 210., 210., ..., 210., 210.,  70.],
        ...,
        [ nan,  12.,  12., ...,  11.,  12.,  40.],
        [ nan,  12.,  12., ...,  10.,  12.,  12.],
        [ nan,  nan,  nan, ...,  nan,  nan,  nan]]], shape=(1, 6720, 6720))
Coordinates:
  * time     (time) datetime64[ns] 8B 2019-04-10
  * lon      (lon) float64 54kB 8.185e-10 0.002976 0.005952 ... 19.99 19.99 20.0
  * lat      (lat) float64 54kB 60.0 59.99 59.99 59.99 ... 40.01 40.01 40.0 40.0
Attributes:
    standard_name:        land_cover_lccs
    flag_colors:          #ffff64 #ffff64 #ffff00 #aaf0f0 #dcf064 #c8c864 #00...
    long_name:            Land cover class defined in LCCS
    valid_min:            1
    valid_max:            220
    ancillary_variables:  processed_flag current_pixel_state observation_coun...
    flag_meanings:        no_data cropland_rainfed cropland_rainfed_herbaceou...
    flag_values:          [  0  10  11  12  20  30  40  50  60  61  62  70  7...
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/xarray/core/duck_array_ops.py:237: RuntimeWarning: invalid value encountered in cast
  return data.astype(dtype, **kwargs)
set up computation,  elapsed: 0:00:02.454319

There may be a “RuntimeWarning: invalid value encountered in cast” caused by missing values in the input data; it seems safe to ignore.

With dask, the computation could actually be delayed until the data values are used, and large datasets could be processed in parallel. Here, we run an output method, which would start the sliced computations:

print(LAI)
#
# Output to file triggers the delayed computation:
#
def write_result(output,outfile):
    import os.path
    if os.path.isfile(outfile):
        print("file",outfile,"already exists. Writing skipped.")
    else:
        print("Writing to ",outfile)
        output.to_netcdf(outfile,mode='w',encoding={varname:{"zlib": True, "complevel": 4,},uncname:{"zlib": True, "complevel": 4}})
    return

starttime = datetime.now()
output = xr.merge(LAI)
outfile = varname + '-clumped.nc'
write_result(output,outfile)
output.close()
print('After writing output :',datetime.now()-starttime)
(<xarray.DataArray 'LAI' (time: 1, lat: 6720, lon: 6720)> Size: 361MB
array([[[       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        ...,
        [1.04131915, 0.        , 0.        , ..., 0.        ,
         0.        , 0.8122886 ],
        [0.93439795, 0.        ,        nan, ..., 0.        ,
         0.        , 0.        ],
        [0.86107457, 0.81247404, 0.83233686, ..., 1.16535618,
         0.76556396, 0.846917  ]]], shape=(1, 6720, 6720))
Coordinates:
  * time     (time) datetime64[ns] 8B 2019-04-10
  * lon      (lon) float64 54kB 8.185e-10 0.002976 0.005952 ... 19.99 19.99 20.0
  * lat      (lat) float64 54kB 60.0 59.99 59.99 59.99 ... 40.01 40.01 40.0 40.0
Attributes:
    units:    m2.m-2, <xarray.DataArray 'LAI_ERR' (time: 1, lat: 6720, lon: 6720)> Size: 361MB
array([[[       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        [       nan,        nan,        nan, ...,        nan,
                nan,        nan],
        ...,
        [0.84930186, 0.        , 0.        , ..., 0.        ,
         0.        , 0.77273374],
        [0.79306436, 0.        ,        nan, ..., 0.        ,
         0.        , 0.        ],
        [0.76667871, 0.72075493, 0.80921031, ..., 1.05955842,
         0.68854762, 0.61354773]]], shape=(1, 6720, 6720))
Coordinates:
  * time     (time) datetime64[ns] 8B 2019-04-10
  * lon      (lon) float64 54kB 8.185e-10 0.002976 0.005952 ... 19.99 19.99 20.0
  * lat      (lat) float64 54kB 60.0 59.99 59.99 59.99 ... 40.01 40.01 40.0 40.0
Attributes:
    units:    m2.m-2)
Writing to  LAI-clumped.nc
After writing output : 0:00:09.189088

Output data#

The notebook leaves the following files behind:

  • laidata.tgz – the compressed LAI archive downloaded from the CDS

  • lccdata.tgz – the compressed LCC archive downloaded from the CDS

  • c3s_LAI_20190410000000_GLOBE_SENTINEL3_V4.0.1.area-subset.60.20.40.0.nc – the extracted effective LAI input file

  • C3S-LC-L4-LCCS-Map-300m-P1Y-2019-v2.1.1.area-subset.60.20.40.0.nc – the extracted land cover input file

  • LAI-clumped.nc – the converted (“true”) LAI and its uncertainty

Take home messages 📌#

  • The Climate Data Store contains many useful LAI and LAI-related datasets.

  • The distinction between effective and true LAI is very important – conversion may be needed to ensure accurate representation of vegetation properties for your use case.

  • Successfully converting LAI data requires careful preparation and an understanding of the input data structure, formats and the uncertainties introduced by the process.

  • LAI conversions are complex. While they may appear trivial, they involve assumptions that can impact results, so it’s important to interpret outputs cautiously and be aware of potential uncertainties.