Big Spatial Data Analytics

What we do

We live in a time when vast amounts of spatial data are generated by technical sensors, social media users, and volunteers via crowdsourcing. Processing such large datasets poses complex challenges due to their sheer volume and semantic complexity. We help by rendering these datasets usable for your application, while always taking the spatial context into account. Our long experience, gathered in numerous research projects, enables us to serve as an interface between technology and its application. Based on your specific needs, we develop processes and tools for assessing the quality of heterogeneous Web 2.0 data and enriching them, applying innovative methods from spatial data mining and deep learning.

ohsome.org

OpenStreetMap data are continuously amended, updated, and corrected. All changes are saved to ensure full traceability. The ohsome analysis platform enables easy access to the full history of OpenStreetMap – worldwide and precise to the second. In this way, all OpenStreetMap elements ever recorded, including their historical versions, can be reconstructed and analyzed. One important objective is the improved assessment of the quality and usability of OpenStreetMap for your individual application.

Big Data Technology

We deploy big data technologies and cluster computing to enable parallel data processing on scalable server infrastructure.

Integration through API

Our programming interfaces can be integrated into a wide range of systems, enabling users to implement their own customized analyses based on the OpenStreetMap history.
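For example, a typical analysis might query yearly building counts for a bounding box from the public ohsome API (api.ohsome.org). The following sketch uses Python and the requests library; the endpoint and parameter names follow the published ohsome API v1, but the filter, bounding box, and time range are placeholder values to be adapted to your own use case.

```python
import requests

OHSOME_API = "https://api.ohsome.org/v1/elements/count"

def building_counts(bbox, time_range):
    """Query the ohsome API for the number of OSM building features
    in `bbox` at each timestamp of `time_range` (ISO-8601 intervals)."""
    params = {
        "bboxes": bbox,                      # "lon_min,lat_min,lon_max,lat_max"
        "time": time_range,                  # e.g. "2010-01-01/2020-01-01/P1Y"
        "filter": "building=* and geometry:polygon",
    }
    response = requests.post(OHSOME_API, data=params, timeout=60)
    response.raise_for_status()
    return response.json()["result"]         # list of {"timestamp": ..., "value": ...}

if __name__ == "__main__":
    # Example: central Heidelberg, yearly snapshots (placeholder values)
    for row in building_counts("8.65,49.39,8.72,49.43", "2010-01-01/2020-01-01/P1Y"):
        print(row["timestamp"], int(row["value"]))
```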

Intrinsic Quality Assessment

Based on the ohsome platform, we provide intrinsic quality indicators that effectively support data quality assessment for a wide range of applications.
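One example of an intrinsic indicator is saturation: if the element count for an area has stopped growing, mapping there is likely close to complete. The sketch below computes such a saturation measure from a count time series (for instance the yearly building counts retrieved above); the indicator and the sample numbers are purely illustrative and not an official ohsome quality metric.

```python
def saturation_indicator(counts, recent=3):
    """Intrinsic completeness proxy based on the saturation of an
    OSM element-count time series (e.g. yearly building counts).

    `counts` is a chronologically ordered list of numbers. The indicator
    is the relative growth over the last `recent` intervals: values close
    to 0 suggest the count curve has flattened out (mapping may be close
    to complete), larger values suggest the area is still being mapped.
    """
    if len(counts) <= recent or counts[-1] == 0:
        return None  # not enough history to judge
    growth = counts[-1] - counts[-1 - recent]
    return growth / counts[-1]

# Hypothetical yearly building counts for one district
yearly = [120, 480, 1450, 2890, 3400, 3520, 3555, 3560]
print(f"relative growth over last 3 years: {saturation_indicator(yearly):.2%}")
```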

DeepVGI

DeepVGI (Deep Learning with Volunteered Geographic Information) connects user-generated geodata with machine learning. Learning algorithms are already used successfully in the field of geoinformation; however, the required training data are often scarce, particularly for rural areas and developing countries. Volunteered geographic information fills this gap and helps us optimize the automated identification of buildings in satellite imagery.
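As a simplified sketch of the basic setup – not the actual DeepVGI architecture – the following PyTorch model classifies satellite image tiles as containing buildings or not. In practice, the training labels for such a model would be derived from volunteered data such as OSM building footprints or MapSwipe votes.

```python
import torch
import torch.nn as nn

class BuildingTileClassifier(nn.Module):
    """Tiny CNN that labels a satellite image tile as 'contains buildings'
    or 'no buildings' (illustrative architecture only)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
        )
        self.classifier = nn.Linear(64, 1)                 # logit: building / no building

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = BuildingTileClassifier()
    dummy_batch = torch.randn(4, 3, 256, 256)  # four RGB tiles of 256x256 pixels
    logits = model(dummy_batch)
    print(logits.shape)                        # torch.Size([4, 1])
```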

Wealth of Information

We use OpenStreetMap and MapSwipe data because of their abundance and comprehensiveness, which make them an ideal foundation for improving the accuracy of machine learning algorithms. From these data we extract the relevant information and gain a better understanding of spatial structures and processes.

Crowd and Machine

Automating the information-gathering parts of an analysis supports users and frees human analytical skills for tasks that cannot be automated. The resulting savings in time and resources are particularly relevant in time-critical situations, such as mapping destroyed infrastructure in the aftermath of a disaster.

Improve Data Quality

Machine learning also offers another way to analyze geodata quality. The automatically generated results help us to better understand user-generated geodata and to quickly and reliably identify areas with “good” and “poor” geodata.
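A simple way to operationalize this is to compare model predictions with the current OSM data per tile: where the model confidently detects buildings but OSM contains none, the data are likely incomplete. The sketch below illustrates the idea; the tile identifiers, probabilities, and thresholds are hypothetical and not taken from the DeepVGI publications.

```python
def flag_missing_buildings(tiles):
    """Flag tiles where a trained model predicts buildings but OSM has none,
    i.e. candidate areas with 'poor' (incomplete) geodata.

    `tiles` maps a tile id to (model_probability, osm_building_count).
    """
    return [
        tile_id
        for tile_id, (p_building, osm_count) in tiles.items()
        if p_building >= 0.9 and osm_count == 0
    ]

# Hypothetical example input (zoom-x-y tile ids)
tiles = {
    "18-137430-91245": (0.97, 0),   # model is confident, OSM empty -> likely missing data
    "18-137431-91245": (0.95, 42),  # model and OSM agree
    "18-137432-91245": (0.12, 0),   # probably no buildings at all
}
print(flag_missing_buildings(tiles))  # ['18-137430-91245']
```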

Additional Services

The Team

Michael Auer

Research Associate

Sascha Fendrich

Postdoctoral Researcher

Fabian Kowatsch

Research Associate

Lukas Loos

Research Associate

Martin Raifer

Research Associate

Rafael Troilo

Research Associate / System Administration