Projects


PhenoRob – Plant Growth Modeling

Plant growth modeling is of crucial importance for wide areas of agriculture, biology, and breeding. Our goal is to develop image-based growth models that, for example, depict the future phenotype of a plant as an image. We are working on making these images look as close as possible to real sensor data, so that further analyses can be carried out on them. In addition, we aim to include other growth-influencing factors, such as seed density, irrigation, fertilization, and mixed-cropping environments, to ultimately enable image-based growth simulations. Methodologically, our models are based on generative adversarial networks and transformer architectures.
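
As a rough illustration of the generative approach, the sketch below (assuming PyTorch; the module layout and the conditioning factors are hypothetical and not our actual architecture) shows a conditional generator that maps an image of the current growth stage plus a vector of growth factors to an image of a predicted future stage.

    import torch
    import torch.nn as nn

    class GrowthGenerator(nn.Module):
        # Hypothetical encoder-decoder generator: maps an image of the current
        # growth stage plus a vector of growth factors (e.g. days ahead, seed
        # density, irrigation level) to an image of a predicted future stage.
        def __init__(self, cond_dim=4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128 + cond_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, img, cond):
            z = self.encoder(img)
            # broadcast the conditioning vector over the spatial bottleneck
            c = cond[:, :, None, None].expand(-1, -1, z.size(2), z.size(3))
            return self.decoder(torch.cat([z, c], dim=1))

    g = GrowthGenerator(cond_dim=4)
    img = torch.randn(2, 3, 128, 128)   # current-stage RGB patches
    cond = torch.randn(2, 4)            # normalized growth factors
    future = g(img, cond)               # (2, 3, 128, 128) predicted images

In adversarial training, a discriminator would additionally judge whether the generated images can be told apart from real sensor data, which is what pushes the outputs toward a realistic appearance.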

Research focus:
Lukas Drees

Research partners:
PhenoRob
ZALF
TUM – Computer Vision Research Group


EDDY – Multi-Modal Neural Networks for Identification, Tracking and Classification of Ocean Eddies

The detection of mesoscale ocean eddies, the ‘weather of the ocean’, is one example of a possible application of multi-modal neural networks. Eddies transport water mass, heat, salt, and carbon, and have been identified as hot spots of biological activity. Monitoring them is therefore of interest to marine biologists and fisheries, among others. Identifying and tracking eddies is challenging due to their dynamic spatio-temporal behaviour, and the two problems have so far been treated as isolated tasks rather than a joint one, which can lead to suboptimal overall results. In our research, we build on recent advances in deep learning to develop a state-of-the-art multi-modal neural network that tackles this challenge by joining identification and tracking into one task.
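
The sketch below illustrates the fusion idea in a minimal form (assuming PyTorch; the choice of modalities, channel sizes, and class layout are assumptions for illustration, not the actual EDDY architecture): one encoder per input modality, a shared fused representation, and a per-pixel head for eddy identification.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        )

    class MultiModalEddyNet(nn.Module):
        # Hypothetical two-branch network: one encoder per modality, fused
        # features feed a per-pixel head that labels cyclonic / anticyclonic
        # eddies vs. background.
        def __init__(self, n_classes=3):
            super().__init__()
            self.sla_enc = conv_block(1, 32)   # sea level anomaly branch
            self.sst_enc = conv_block(1, 32)   # sea surface temperature branch
            self.fuse = conv_block(64, 64)     # shared representation
            self.head = nn.Conv2d(64, n_classes, 1)

        def forward(self, sla, sst):
            f = torch.cat([self.sla_enc(sla), self.sst_enc(sst)], dim=1)
            return self.head(self.fuse(f))     # per-pixel class logits

    net = MultiModalEddyNet()
    sla = torch.randn(1, 1, 128, 128)
    sst = torch.randn(1, 1, 128, 128)
    logits = net(sla, sst)   # (1, 3, 128, 128)

Tracking can then be posed on top of the same shared features by associating detections across consecutive time steps, rather than as a separate post-processing stage.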

Research focus:
Eike Bolmer

Research partners:
Prof. Dr.-Ing. Jürgen Kusche, University of Bonn
PD Dr.-Ing. habil. Luciana Fenoglio-Marc, University of Bonn
Dr. Adili Abulaitijiang, Technical University of Denmark
Dr. Sophie Stolzenberger, University of Bonn

  • [PDF] K. Franz, R. Roscher, A. Milioto, S. Wenzel, and J. Kusche, “Ocean eddy identification and tracking using neural networks,” in Proc. of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2018.
    [Bibtex]
    @InProceedings{Franz2018,
    author = {Franz, Katharina and Roscher, Ribana and Milioto, Andres and Wenzel, Susanne and Kusche, J{\"u}rgen},
    title = {Ocean Eddy Identification and Tracking using Neural Networks},
    booktitle = {Proc. of the IEEE International Geoscience and Remote Sensing Symposium ({IGARSS})},
    year = {2018},
    url = {https://arxiv.org/pdf/1803.07436.pdf},
    }


OPTIKO – Optimization of cauliflower cultivation by monitoring with UAVs and machine learning

In the OPTIKO project, we address the optimization of cauliflower cultivation using image data from unmanned aerial vehicles (UAVs) and its automatic analysis with machine learning (ML) methods. Our overall goal is to analyze the development and growth of cauliflower to support the farmers’ decision-making.

Cauliflower is a high-value crop that must meet strict quality criteria; it could be called the diva of crops. Each plant develops differently, which makes harvest estimation difficult. Typically, farmers and agricultural advisors monitor fields regularly through spot checks of individual plants, but these checks do not give an overview of the whole field. Abiotic and biotic stress factors must also be monitored throughout the growing season to ensure healthy growth. For monitoring entire fields, remote sensing can help.

Harvesting of the cauliflower heads (the part familiar from the supermarket) is done by hand because of the great variability in plant development. Since the head is hidden by the leaf canopy, each plant must be touched by hand to determine the head’s size, making it difficult to estimate when to harvest. For sales reasons, each cauliflower must be harvested within a window of about one week, when the head is of sufficient size but not yet overripe. Several passes over the field are therefore necessary, which increases the labor intensity and, consequently, the costs for farmers.

In the project, the entire growing period of several cauliflower fields is surveyed using RGB and multispectral UAV images. Additionally, we georeference the data so that we can derive a fixed coordinate as well as a time series for each individual plant from the images. ML helps us detect the plants and determine their phenotypic traits, such as the developmental stage and the plant and head size. Furthermore, ML supports the automatic identification of stress throughout the field, helping the farmer optimize fertilization and pesticide use. Finally, estimates of the harvest window and head size support the farmer’s decision-making and facilitate their work.
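
As a minimal sketch of how georeferenced detections can be turned into per-plant time series (standard library only; the data layout and the distance threshold are assumptions for illustration): since the plants do not move, detections from different flight dates that fall within a small radius of the same position are assigned to one plant.

    import math

    def build_time_series(detections_by_date, max_dist=0.10):
        # detections_by_date: {date: [(easting, northing, traits), ...]}.
        # Plants do not move, so detections within max_dist metres of a
        # reference position are assigned to the same plant's time series.
        plants = []  # [(ref_easting, ref_northing, {date: traits})]
        for date in sorted(detections_by_date):
            for e, n, traits in detections_by_date[date]:
                for ref_e, ref_n, series in plants:
                    if math.hypot(e - ref_e, n - ref_n) <= max_dist:
                        series[date] = traits
                        break
                else:
                    plants.append((e, n, {date: traits}))
        return plants

    obs = {
        "2021-06-01": [(363200.41, 5621100.10, {"stage": "early"})],
        "2021-06-08": [(363200.43, 5621100.12, {"stage": "heading"})],
    }
    print(build_time_series(obs))  # one plant with a two-entry time series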

Research focus:
Jana Kierdorf

Research partners:
Forschungszentrum Jülich
JB Hyperspectral Devices UG
Schwarz Gemüse- & Erdbeerbau

  • [PDF] [DOI] J. Kierdorf, L. V. Junker-Frohn, M. Delaney, M. D. Olave, A. Burkart, H. Jaenicke, O. Muller, U. Rascher, and R. Roscher, “GrowliFlower: an image time-series dataset for growth analysis of cauliflower,” Journal of Field Robotics, 2022.
    [Bibtex]
    @article{kierdorf2022growliflower,
    title={GrowliFlower: An image time-series dataset for GROWth analysis of cauLIFLOWER},
    author={Kierdorf, Jana and Junker-Frohn, Laura Verena and Delaney, Mike and Olave, Mariele Donoso and Burkart, Andreas and Jaenicke, Hannah and Muller, Onno and Rascher, Uwe and Roscher, Ribana},
    journal={Journal of Field Robotics},
    year={2022},
    publisher={Wiley Online Library},
    url={https://doi.org/10.1002/rob.22122},
    doi={10.1002/rob.22122}
    }


KI:STE and MapInWild: Exploring Wilderness Using Explainable Machine Learning in Satellite Imagery

Wilderness areas offer important ecological and social benefits, and therefore warrant monitoring and preservation. Yet, what makes a place “wild” is vaguely defined, making the detection and monitoring of wilderness areas via remote sensing techniques a challenging task.

We explore the characteristics and appearance of the vague concept of wilderness via multispectral satellite imagery. To this end, we apply a novel explainable machine learning technique to a dataset curated for investigating wild and anthropogenic areas in Fennoscandia. Our dataset contains Sentinel-2 images of 1) protected areas whose purpose is to preserve and retain the natural character of the landscape and 2) anthropogenic areas consisting of artificial and agricultural landscapes.

With our technique, we predict continuous, detailed, and high-resolution sensitivity maps for unseen remote sensing data with respect to wild and anthropogenic characteristics. Our neural network provides an interpretable activation space in which regions are semantically arranged according to wild and anthropogenic characteristics and certain land cover classes. This increases confidence in the method and allows for new explanations of the investigated concept.
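
For illustration only, the following sketch (assuming PyTorch; this is generic occlusion-based sensitivity, not the method developed in the project) shows one simple way a sensitivity map can be computed: occlude image patches one at a time and record how much a model’s wilderness score changes.

    import torch

    def occlusion_sensitivity(model, image, patch=16, stride=16):
        # Generic occlusion sensitivity: slide a blank patch over the image
        # and record how much the 'wilderness' score drops at each position.
        # Assumes `model` maps a (1, C, H, W) tensor to logits whose index
        # [0, 0] is the wilderness score.
        model.eval()
        with torch.no_grad():
            base = model(image.unsqueeze(0))[0, 0].item()
            _, h, w = image.shape
            heat = torch.zeros(h // stride, w // stride)
            for i in range(0, h - patch + 1, stride):
                for j in range(0, w - patch + 1, stride):
                    occluded = image.clone()
                    occluded[:, i:i + patch, j:j + patch] = 0.0
                    score = model(occluded.unsqueeze(0))[0, 0].item()
                    heat[i // stride, j // stride] = base - score
        return heat  # high values: regions whose removal lowers the score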

Our model advances explainable machine learning for remote sensing, offers opportunities for comprehensive analyses of existing wilderness, and has practical relevance for conservation efforts.

Research focus:
Timo Stomberg

Research partners:
Immanuel Weber
Prof. Dr.-Ing. Michael Schmitt, Technical University of Munich
KI:STE: AI Strategy for Earth System Data

  • [PDF] [DOI] T. Stomberg, I. Weber, M. Schmitt, and R. Roscher, “jUngle-Net: using explainable machine learning to gain new insights into the appearance of wilderness in satellite imagery,” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 3, pp. 317–324, 2021.
    [Bibtex]
    @article{stomberg2021jungle,
    title={jUngle-Net: Using Explainable Machine Learning to Gain New Insights into the Appearance of Wilderness in Satellite Imagery},
    author={Stomberg, T and Weber, I and Schmitt, M and Roscher, R},
    journal={ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    volume={3},
    pages={317--324},
    year={2021},
    publisher={Copernicus GmbH},
    doi={10.5194/isprs-annals-V-3-2021-317-2021},
    url={https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-3-2021/317/2021/},
    }


DETECT – Land use and land cover reconstruction

While several continental regions on Earth are getting wetter, others are drying out, not only in terms of precipitation but also as measured by increases or decreases in surface water, water stored in the soils and the plant root zone, and groundwater. Observations, however, do not support a simple dry-gets-drier, wet-gets-wetter logic, and existing climate models fail to sufficiently explain the observed patterns of hydrological change.

The central hypothesis of DETECT is that, next to known local effects, human land management and changes in land and water use have altered the regional atmospheric circulation and the related water transports. These changes in the spatial patterns of the water balance have, it is hypothesized, created and amplified imbalances that lead to excessive drying or wetting in more remote regions.

The remote sensing group contributes to DETECT through project B03, “Deep learning for satellite-based land use and land cover reconstruction”. The goal of this project is to determine land use and land cover from optical satellite data for specific points in time (as a snapshot) or for longer periods of time (e.g., one season). For this purpose, deep neural networks will be developed that take into account the specific biogeographical characteristics of the regions of interest in order to ensure a high generalization capability. Furthermore, spatiotemporal data gaps will be closed to improve the data foundation for the developed methods, and data and model uncertainties will be quantified for the derived land use and land cover maps.
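
As one common way to quantify model uncertainty for such maps, the sketch below shows Monte-Carlo dropout (assuming PyTorch; whether project B03 uses this particular technique is an assumption): dropout is kept active at test time, several softmax maps are sampled, and their mean and per-pixel entropy serve as the land cover map and its uncertainty.

    import torch
    import torch.nn as nn

    def mc_dropout_lulc(model, image, n_samples=20):
        # Keep only dropout stochastic at test time; BatchNorm etc. stay in
        # eval mode. Assumes `model` maps (1, C, H, W) to per-pixel logits
        # of shape (1, n_classes, H, W).
        model.eval()
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()
        with torch.no_grad():
            probs = torch.stack([
                torch.softmax(model(image.unsqueeze(0)), dim=1)
                for _ in range(n_samples)
            ]).mean(dim=0)                        # mean over MC samples
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        return probs.argmax(dim=1), entropy       # LULC map, uncertainty map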

Research focus:
Johannes Leonhardt

Research partners:
DETECT