AI-assisted mapping workflows increasingly rely on human editors to validate and correct automatically generated features. However, the specific actions humans perform when interacting with AI-generated map data remain unquantified. This lack of empirical understanding contributes to ambiguity around the human role in the loop and fuels speculation about responsibility, trust, and data quality in AI-assisted mapping pipelines. What we do know is that when an editor moves a node, they are correcting for specific types of "error" inherent in the AI's data.
This project seeks to design a controlled experiment to measure how humans modify AI-generated road geometries at the atomic level. Using road data produced by state-of-the-art AI models, participants edit features in a controlled JOSM environment, enabling direct comparison between AI-only and human-edited geometries.
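As a rough illustration of what "atomic level" comparison could look like, the sketch below classifies the differences between an AI-generated geometry and its human-edited version into node deletions, additions, and moves. The node-ID-based matching, the dictionary data structure, and the tolerance threshold are all illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch: classifying atomic edits between an AI-generated
# road geometry and its human-edited counterpart. Node IDs and the simple
# id-based matching are illustrative assumptions.

def classify_edits(ai_nodes, edited_nodes, tol=1e-7):
    """Compare two {node_id: (lon, lat)} dicts and count atomic actions."""
    ai_ids, ed_ids = set(ai_nodes), set(edited_nodes)
    common = ai_ids & ed_ids
    moved = sum(
        1 for nid in common
        if max(abs(a - b) for a, b in zip(ai_nodes[nid], edited_nodes[nid])) > tol
    )
    return {
        "deleted": len(ai_ids - ed_ids),    # nodes the editor removed
        "added": len(ed_ids - ai_ids),      # nodes the editor inserted
        "moved": moved,                     # nodes displaced beyond tolerance
        "unchanged": len(common) - moved,
    }

# Toy example: the editor nudged node 2, dropped node 3, and added node 4.
ai = {1: (13.40, 52.52), 2: (13.41, 52.52), 3: (13.42, 52.53)}
human = {1: (13.40, 52.52), 2: (13.4105, 52.5201), 4: (13.43, 52.53)}
print(classify_edits(ai, human))
# → {'deleted': 1, 'added': 1, 'moved': 1, 'unchanged': 1}
```

Aggregating such counts across many edit sessions is one plausible way to turn raw editing behaviour into the quality indicators discussed below.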
We will use data from this exercise to answer two main questions:
- What exactly does the human in the loop do?
- How can we translate these human actions into quality indicators to improve the current validation workflow?
Register for the event and complete the survey to help us assess your mapping experience.