New catalogues of dense temporal data will soon dominate the global EO archives. However, there has been little exploration of deep learning techniques that leverage this new temporal dimension at scale. In particular, existing approaches have struggled to combine the strengths of different sensors to make use of all available information. RapidAI4EO, an H2020-funded project, is making substantial progress towards our two overall goals: 1) creating a spatio-temporal machine learning (ML) training dataset for land monitoring applications and 2) developing a solution for higher-frequency updates of the CORINE Land Cover (CLC) inventory. Today, the CORINE inventory is Europe's key dataset for land use and land cover monitoring, underpinning various EU policies in the domain of the environment, but also in agriculture, transport, and spatial planning. It was launched in 1985 (reference year 1990), and since 2000 it has been updated every six years in an extensive process involving many actors across Europe. We launched RapidAI4EO with the firm belief that by combining Sentinel-2 with the higher-resolution, higher-frequency observations of PlanetScope and the power of machine learning, we can contribute to increasing the update frequency of CORINE and demonstrate the added value of integrating state-of-the-art technology into the process.
For 500,000 locations all over Europe, the final ML training set, planned for release in July 2022, will consist of Sentinel-2 as well as Planet Fusion yearly time series generated from daily PlanetScope imagery. Planet Fusion provides cloud-free, very-high-resolution, temporally consistent, radiometrically robust, harmonized, and sensor-agnostic surface reflectance feeds, synergizing inputs from both public and private sensor sources. For our initial ML experiments, we created a CLC-annotated dataset for 50,000 of those 500,000 locations. This first dataset enabled us to perform early-stage experiments aimed at gaining insights before the production of the entire corpus, in particular with respect to the dataset sampling, our land cover classification approaches, and our change detection models.
Last month, we presented experimental results addressing three main research questions at an invited session of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). First, we studied how well we can classify the CLC classes using Sentinel-2 imagery. In this context, we analyzed the different land cover ontologies proposed by the Level 1 (L1) to Level 3 (L3) CLC classes. Second, we evaluated whether we can enhance the performance of the classification models using Planet Fusion data (without considering any further temporal information). Third, we analyzed the impact of the temporal frequency of the satellite observations, i.e., whether models trained with time series of satellite images allow us to further enhance the CLC classification.
Land Cover Classification with Neural Networks
In our first experiment, we studied the performance of a state-of-the-art classification model, a ResNet-50 CNN, across the land cover ontologies from Level 1 (L1) to Level 3 (L3). L1, for instance, contains the five classes artificial surfaces, agricultural areas, forest and semi-natural areas, wetlands, and water bodies. These classes are refined in L2 and refined further in L3.
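Since CLC class codes are hierarchical three-digit codes (the first digit identifies the L1 class, the first two digits the L2 class), labels can be aggregated from a finer level to a coarser one simply by truncating the code. A small illustrative Python sketch:

```python
# CLC codes are hierarchical: 211 (non-irrigated arable land) belongs
# to 21 (arable land) at L2 and to 2 (agricultural areas) at L1.
def aggregate(clc_code: str, level: int) -> str:
    """Truncate a Level-3 CLC code to the requested level (1-3)."""
    assert level in (1, 2, 3) and len(clc_code) == 3
    return clc_code[:level]

assert aggregate("211", 1) == "2"   # agricultural areas
assert aggregate("211", 2) == "21"  # arable land
```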
Figure 1. Mono-Temporal (Multi-Label) LULC Classification with Sentinel-2 Imagery
We used an adapted version of the ResNet-50 model that can incorporate multi-spectral satellite imagery (by changing the first layer) and evaluated the models based on the computed F1 score. The classification results in Figure 1 show that the classification model reaches an F1 score of 65.66% in the L3 scenario and up to 74.24% in the L2 scenario. As we aggregate the classes to the five-class land cover task, the performance increases to 89.50% in the L1 scenario.
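To illustrate the first-layer adaptation, here is a minimal PyTorch/torchvision sketch. The band count, class count, patch size, and the multi-label loss are our assumptions for illustration, not the project's exact training code:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_BANDS = 13    # assumption: all Sentinel-2 spectral bands
NUM_CLASSES = 44  # assumption: the 44 CLC Level-3 classes

model = resnet50(weights=None)
# Swap the 3-channel RGB stem for a multi-spectral one.
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)
# Multi-label head: one independent logit per CLC class.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.BCEWithLogitsLoss()  # per-class sigmoid, not softmax
x = torch.randn(8, NUM_BANDS, 120, 120)            # dummy patch batch
y = torch.randint(0, 2, (8, NUM_CLASSES)).float()  # dummy multi-labels
loss = criterion(model(x), y)
```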
It is understandable that the classification results differ based on the number of classes, particularly with our first dataset covering only 50,000 patches and with classes that are semantically close and thus challenging to distinguish. This applies, for instance, to classes whose classification benefits from temporal information, such as the agricultural classes or the CLC class that covers burned areas.
Comparison of Sentinel-2 and Planet Fusion Imagery
In our second experiment, we studied to what extent we can enhance the performance of the classification models using Planet Fusion imagery. To answer this question, we once again used a ResNet-50 CNN model. For this experiment, we relied on monthly images from the Planet Fusion dataset. Please note that the CNN model was not designed to learn temporal dependencies.
Comparing the results of the models trained with Sentinel-2 and with Planet Fusion imagery in Figure 2, we can clearly see that in both the L3 and the L2 scenarios, Planet Fusion imagery allows the model to classify the images with an F1 score around 3% higher than that of the model trained only with Sentinel-2 imagery.
Figure 2. LULC Classification Comparison of Sentinel-2 and Planet Fusion (PF) Imagery
Temporal Models
To study the impact of the temporal data, we compared the proposed CNN model with an LSTM model trained on ResNet-50 features. This allows us to assess whether the temporal signal from the satellites can further improve the CLC classification. For the following comparison, we used monthly samples covering all of 2018 from both the Sentinel-2 and the Planet Fusion imagery.
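Schematically, such a model embeds each monthly image with the ResNet-50 backbone and then runs an LSTM over the resulting feature sequence. Below is a minimal PyTorch sketch under our assumptions (a single-layer LSTM classifying from the last hidden state), not the exact architecture used:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TemporalClassifier(nn.Module):
    def __init__(self, num_bands: int, num_classes: int, hidden: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.conv1 = nn.Conv2d(num_bands, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()  # keep the 2048-d features
        self.backbone = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):  # x: (batch, months, bands, H, W)
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))  # (batch * months, 2048)
        feats = feats.view(b, t, -1)            # (batch, months, 2048)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])               # multi-label logits
```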
The results in Figure 3 show that including temporal information in our model increases performance by an additional 3%. We can thus conclude that the proposed LSTM model is superior to the non-temporal CNN model.
Figure 3. Multi-Temporal LULC Classification with Planet Fusion Imagery
When we look at the per-class F1 scores shown in Figure 4, we can identify the classes in which the LSTM model substantially outperforms the CNN model. For instance, we see a strong difference for the mine, dump and construction sites class, for artificial, non-agricultural vegetated areas, and for the agricultural class permanent crops.
Figure 4. Multi-Temporal LULC Classification with Planet Fusion Imagery. F1-Score per Class.
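For reference, both the aggregate and the per-class F1 scores of a multi-label classifier can be computed with scikit-learn once the per-class probabilities are thresholded. A minimal sketch with synthetic data; the 0.5 threshold and the micro averaging are assumptions, not necessarily the exact evaluation protocol:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 15))  # e.g., the 15 CLC L2 classes
y_prob = rng.random(size=(100, 15))          # model output probabilities
y_pred = (y_prob >= 0.5).astype(int)         # assumed decision threshold

print("micro F1:    ", f1_score(y_true, y_pred, average="micro"))
print("per-class F1:", f1_score(y_true, y_pred, average=None))
```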
Change Detection Models
The trained networks can be used in the development of change detection models. The baseline model, which demonstrates the potential of this approach, compares two images of the same area of interest (AOI): if the multi-label prediction of the model changes between the two images, a change is flagged. Figure 5 shows two examples of detected changes. In the first scenario (top row), we see the construction of industrial buildings detected in Turkey in 2019. In the second scenario, in the Czech Republic (bottom row), the model detected the logging of forests in 2018.
Figure 5. Change Detection with the Supervised LULC Classification Models
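This baseline can be expressed in a few lines: run the classifier on co-located patches from two dates and flag a change whenever the predicted label sets differ. A hedged sketch (the 0.5 decision threshold and the function signature are our assumptions):

```python
import torch

@torch.no_grad()
def detect_change(model, patch_t0, patch_t1, threshold=0.5):
    """Flag a change if the predicted label set differs between two dates.

    patch_t0 / patch_t1: (bands, H, W) tensors covering the same AOI.
    """
    model.eval()
    labels_t0 = torch.sigmoid(model(patch_t0.unsqueeze(0))) >= threshold
    labels_t1 = torch.sigmoid(model(patch_t1.unsqueeze(0))) >= threshold
    return not torch.equal(labels_t0, labels_t1)
```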
In the coming months, we will develop advanced approaches to further enhance our LULC classification and our change detection based on Sentinel-2 and Planet Fusion imagery. In this context, we will develop both unsupervised and supervised approaches and generate heat maps of changes, allowing us to support the CLC update process with a demonstration deployed on the DIAS platform.