Context: Why “GeoAI” fails without standards
When we say “AI + geospatial,” many people imagine a chatbot that can talk about maps. However, most operational use cases require more than narration: they require machine-actionable access to distributed geospatial resources to build reliable maps. In practice, GeoAI applications rarely fail for lack of data or model capability; they fail because the interoperability, traceability, and machine readability needed to discover, request, and process the available data are missing:
- Data is distributed across different services and formats.
- Queries are often not “AI-enabled” (too little metadata, too few guardrails, unclear costs/granularity).
- Results are difficult to reproduce or audit – especially in crisis situations where trust, provenance, access control, and context are crucial.
The OGC AI-DGGS pilot project for disaster management addressed this issue precisely: not “AI as a demo,” but AI as an orchestrator for interoperable geoservices – with a clear focus on standards, implementability, and real-world integration challenges.
What we built and demonstrated in the pilot
In the pilot, we used DGGS (Discrete Global Grid Systems) as a common “spatial language” to consistently reference and analyze heterogeneous disaster data. A Discrete Global Grid System is a way to represent the Earth as a hierarchical grid of (mostly) equal-area cells, assigning every location a consistent cell ID so diverse geospatial datasets can align, aggregate, and be queried uniformly across scales.
Core idea
DGGS cells are to geospatial AI what tokens are to language models: stable, hierarchical, machine-readable. This facilitates aggregation, multi-resolution analyses, and the combination of different data sources. A DGGS server allows querying data by cell ID. DGGS servers that use the same underlying DGGRS (Discrete Global Grid Reference System) provide data for the same real-world area when they receive the same cell ID. This turns a computationally expensive spatial query into something as cheap as a simple index lookup. Thanks to the hierarchical structure of DGGS cells, this applies to all areas, regardless of size. The following figure illustrates these concepts. The first row illustrates the idea of organizing the entire Earth into an equal-area, hierarchical set of grids, often using hexagonal cells. The second row illustrates how DGGS handles spatial references and diverse data efficiently and helps AI-enable the data.
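To make the “cell ID as spatial token” idea concrete, here is a minimal sketch using the open-source h3-py library (one particular DGGRS implementation; the v4 API is assumed). It quantizes a coordinate into a cell ID, walks the hierarchy, and then uses the ID as a plain dictionary key, which is all a join between datasets then requires.

```python
# Minimal sketch of "cell IDs as spatial tokens", using h3-py (v4 API assumed).
# Any DGGRS with stable, hierarchical identifiers works the same way.
import h3

# An illustrative point in the Red River valley.
lat, lon = 49.89, -97.14

# Quantize the point into a cell ID at refinement level 7.
cell = h3.latlng_to_cell(lat, lon, 7)

# The hierarchy gives coarser and finer views of the same location.
parent = h3.cell_to_parent(cell, 5)       # coarser cell containing `cell`
children = h3.cell_to_children(cell, 9)   # finer cells within `cell`

# With a stable ID, joining datasets is a dictionary lookup,
# not a geometric intersection.
flood_depth_m = {cell: 1.8}               # toy per-cell attribute
print(cell, parent, len(children), flood_depth_m.get(cell))
```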
Architecture in a sentence
We have linked several independent DGGS data servers and AI clients so they behave like an interoperable analysis engine. We also implemented a Common Operating Picture (COP) layer for context, trust, and sharing of “what applies when to whom.”
What ran together interoperably (high level)
- Multiple DGGS/DGGRS implementations, covering different Discrete Global Grid Reference Systems (i.e., specific grid reference systems with particular cell shapes, indexing, resolution schemes, and rules): H3, A5, various ISEA variants, etc.
- Multiple server implementations (different technology stacks and operated by different providers)
- Multiple AI clients and agent-based workflows. Instead of standard AI interactions, in which a large language model (LLM) simply predicts the next word in a sentence, the clients in this pilot interacted with the servers using Retrieval-Augmented Generation (RAG) and the Model Context Protocol (MCP) to reduce hallucinations.
- OWS Context as a transfer mechanism for situation assessment + security/trust/provenance. OGC OWS Context is an OGC standard originally developed for packaging and sharing a map “session” as a document. It bundles view settings (area, layers, styles) and links to OGC web services so others can reproduce the same context across clients.
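To illustrate what “interacting with the servers via MCP” can look like in code, here is a hypothetical sketch that exposes a DGGS zone query as an MCP tool using the FastMCP helper from the MCP Python SDK; the tool name, parameters, and stubbed backend are illustrative assumptions, not pilot deliverables.

```python
# Hypothetical sketch: exposing a DGGS query as an MCP tool so an agent calls it
# with explicit, machine-checkable parameters instead of free-form text.
# Uses the FastMCP helper from the MCP Python SDK; all names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dggs-tools")

@mcp.tool()
def get_zone_statistics(collection: str, zone_id: str, variable: str) -> dict:
    """Return aggregated values of `variable` for one DGGS zone of `collection`."""
    # A real deployment would forward this call to a DGGS data server;
    # a stub keeps the sketch self-contained.
    return {"collection": collection, "zone": zone_id,
            "variable": variable, "value": None, "source": "stub"}

if __name__ == "__main__":
    mcp.run()
```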
Why is this important?
What emerged from the demos and discussions is that the bottleneck isn’t the language model. Instead, it’s the interface between models and geospatial infrastructure. To answer even “simple” operational questions (e.g., Which areas will flood, who is exposed, and which roads are impacted?), an agent must first discover which datasets and endpoints exist, understand constraints like resolution limits, supported query patterns, and expected cost, interpret the semantics of fields and grid identifiers so it combines the right things, and carry provenance (sources, timestamps, uncertainty) so results are auditable and trustworthy.
This matters because it shows that “AI reasoning” in geospatial does not primarily scale through model training, but through standardized, tool-like interfaces, machine-readable metadata, and reproducible service chains.
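As a sketch of the discovery step described above, the snippet below lists the collections and queryables offered by an OGC API endpoint; the base URL is hypothetical, and the paths follow common OGC API - Features building blocks rather than the specific pilot servers.

```python
# Illustrative discovery step for an agent: which collections exist and
# which fields can be filtered on. The base URL is hypothetical.
import requests

BASE = "https://example.org/ogcapi"  # hypothetical endpoint
HEADERS = {"Accept": "application/json"}

collections = requests.get(f"{BASE}/collections", headers=HEADERS, timeout=30).json()

for coll in collections.get("collections", []):
    cid = coll["id"]
    # Queryables tell the agent which properties a filter may reference.
    q = requests.get(f"{BASE}/collections/{cid}/queryables",
                     headers=HEADERS, timeout=30).json()
    print(cid, coll.get("title"), sorted(q.get("properties", {})))
```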
The most important findings
This pilot made it clearer than ever: DGGS can serve as a shared spatial language that lets multiple independent systems behave like one analysis fabric, while AI clients orchestrate standards-based queries instead of improvising spatial inputs.
In practical terms, participants demonstrated cross-implementation interoperability (multiple DGGS servers and DGGRSs) and showed that this is already usable for real workflows: discover data, select the right resolution, combine layers, and produce map-ready indicators for decision support (with Manitoba flooding as the anchor scenario).
Equally important, the discussions in the pilot converged on a core lesson for “agentic” GeoAI: reliability does not primarily scale through more model training, but through standardized, tool-like interfaces, machine-readable metadata, and reproducible service chains.
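The cell-as-join-key pattern behind the Manitoba-style workflow can be shown with toy numbers: once flood depth and population are quantized to the same DGGRS, an exposure indicator is a per-cell merge rather than a geometric overlay. All cell IDs and values below are invented for illustration.

```python
# Toy illustration of combining layers cell-wise once both are quantized
# to the same DGGRS. Cell IDs and values are invented for illustration.
flood_depth_m = {"871f1d489ffffff": 1.8, "871f1d48bffffff": 0.4}
population = {"871f1d489ffffff": 1200, "871f1d48bffffff": 300,
              "871f1d4c3ffffff": 950}

def exposed(depth_m: float, threshold_m: float = 0.5) -> bool:
    """Toy exposure rule: flooded above half a metre."""
    return depth_m >= threshold_m

# Per-cell exposure indicator: a dictionary merge, not a spatial overlay.
exposure = {cell: population.get(cell, 0)
            for cell, depth in flood_depth_m.items() if exposed(depth)}

print(exposure)                              # {'871f1d489ffffff': 1200}
print("people exposed:", sum(exposure.values()))
```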
Four frictions that we need to address specifically
What we learned the hard way: four frictions we must address and standardize next.
- Geometric alignment friction (datum/model differences) – e.g., when authalic/spherical assumptions meet WGS84/ellipsoidal expectations; if parameters aren’t explicit, clients start reverse-engineering and risk errors.
- Performance & sparsity friction (high-res EO is sparse) – disaster workflows trigger many iterative calls; gaps (“data holes”) plus overfetching can kill latency/cost stability unless responses/encodings are designed for sparsity.
- Topology / “stacking paradox” friction (sub-zones and overlaps) – some grid designs break naïve “parent = exact union of children” assumptions; clients must distinguish partition vs cover and avoid double-counting (see the sketch after this list).
- Registry/metadata friction (same label, different parameters) – similar names can hide different parameterizations or ID variants; an authoritative registry and cross-references are necessary for reliable interoperability.
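The topology friction can be made tangible with h3-py (v4 API assumed): because H3 children only approximately cover their parent, a point’s fine-resolution cell can roll up to a different coarse cell than the one the point itself falls into. The region and resolutions below are arbitrary illustration values.

```python
# Sketch of the "stacking paradox" in H3: children only approximately cover
# their parent, so a point's fine cell may have a different coarse ancestor
# than the coarse cell that contains the point. h3-py v4 API assumed.
import random
import h3

random.seed(0)
n, mismatches = 10_000, 0
for _ in range(n):
    lat = random.uniform(49.0, 51.0)     # illustrative region
    lon = random.uniform(-98.0, -96.0)
    fine = h3.latlng_to_cell(lat, lon, 9)
    coarse = h3.latlng_to_cell(lat, lon, 5)
    if h3.cell_to_parent(fine, 5) != coarse:
        mismatches += 1

print(f"{mismatches} of {n} points roll up to a different coarse cell")
# A client that assumes "parent = exact union of children" would assign such
# points to the wrong coarse zone, or count them twice when mixing levels.
```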
AI readiness: What OGC standards must now deliver
The pilot has shown that “AI-ready” does not mean “chat interface,” but rather an interface and ecosystem that reliably enable agentic use—with clear semantics, machine-readable constraints, and reproducible results.
AI readiness requires:
- Tool-ability: Endpoints must be described in a way that allows agents to use them robustly.
- Machine-readable metadata: queryables, limits, cost indicators, uncertainties, resolution/granularity (a hypothetical sketch follows this list).
- Guardrails: Protection against overfetching, incorrect resolution selection, and uncontrolled costs.
- Reproducibility: Queries and results must be reproducible – especially for situation assessments.
- Trust & security: Context, identity, provenance, access policies.
The discussion on the further development of OWS Context was particularly relevant here: OWS Context can serve as a basis for transferring a common operating picture between organizations, but it must be updated to meet today’s requirements (services/workflows, dynamic events, security/classification, AI-RAG/agent pipelines).
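For orientation, here is a deliberately simplified, GeoJSON-flavored sketch of the kind of COP hand-off document such an evolution could carry. It is loosely modeled on the OWS Context GeoJSON encoding, but the offering code and the trust/provenance block are hypothetical additions of the sort the pilot discussed, not part of the current standard.

```python
# Deliberately simplified COP hand-off sketch in the spirit of OWS Context.
# Loosely modeled on the GeoJSON encoding; the offering code and "trust" block
# are hypothetical, illustrating pilot discussion points rather than the standard.
cop_document = {
    "type": "FeatureCollection",
    "properties": {
        "title": "Manitoba flood COP (illustrative)",
        "updated": "2024-05-01T06:00:00Z",
    },
    "features": [{
        "type": "Feature",
        "geometry": None,
        "properties": {
            "title": "Flood depth forecast",
            "offerings": [{
                "code": "https://example.org/spec/dggs-data",   # hypothetical offering type
                "operations": [{"method": "GET",
                                "href": "https://example.org/ogcapi/collections/flood_depth"}],
            }],
            # Hypothetical trust/provenance block of the kind discussed in the pilot:
            "trust": {"classification": "public",
                      "provenance": "hydrodynamic model run",
                      "valid_until": "2024-05-01T18:00:00Z"},
        },
    }],
}
```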
Benchmarks & implementability: Why “DGGRS choice” is not neutral
The pilot surfaced an important, practical point during the implementation of the various DGGRSs: DGGRS implementations do not perform or operate identically. Benchmarks and optimizations (including format/compression) were discussed in the pilot; among other things, it was pointed out that individual systems can be significantly slower or faster in certain operations.
Takeaway: Interoperability does not mean everything is equally fast, but standards should make capabilities, expected costs, and suitable options transparent.
Roadmap: Six concrete steps we derive from the pilot
Very specific standardization and community tasks can be derived from the pilot:
- OGC DGGS/DGGRS Registry: parameterization, datum references, indexing schemes, cross-references.
- H3 Best Practice: clear guidance on model/data interpretation and alignment expectations.
- Better query mechanics: clearer queryables, more robust patterns for bounding/selection/aggregation; optional “on-ramp” for non-DGGS clients (e.g., geometry-first request that resolves to DGGS on the server side).
- Temporal gridding as a first-class topic: DGGS is not just “space”; disasters always play out in space and time.
- Operationalize COP + Trust/Provenance (IPT): Further develop OWS Context into a machine-readable situation picture container including security/policy/access/provenance.
- Analytical Extensions: clear catalog of which analytics should be available in a standardized “cell-wise” manner (aggregation, zonal stats, indices, etc.).
Invitation: How you can contribute as an implementer or member
We want to bring the pilot results to the community and turn them into prioritized, implementable building blocks.
If you are an OGC member:
- Get involved in the Agora discussion on registries, best practices, and queryables. In Agora, you will find more detailed articles discussing the pilot results and offering practical insights on how to use DGGS most efficiently.
- Share real-world “frictions” from your implementation
If you are an implementer:
- Check your parameterization against other libraries/servers.
- Provide feedback on: “What metadata does an agent really need?”
- Join the OGC community to learn the details of DGGS and help shape the next generation of standards that AI-enable data.
If you help shape standards:
- Help define conformance tests that reveal precisely these frictions.
Conclusion: “Real Friction, Real Fix”
The pilot has shown that we are close to bringing “AI for geospatial data” from the demo level to reliable, interoperable practice—but only if we standardize the real frictions: registry, alignment, sparsity-compatible encodings, machine-readable metadata, trust/context.
This is the real opportunity: standards make AI accountable.
If you want to work on registry/best practices, COP/OWS context evolution, or AI-ready metadata: [Contact / Agora thread / Project page link].
Appendix A
DGGS vs DGGRS
A DGGRS (Discrete Global Grid Reference System) is a complete, operational spatial reference system combining three components:
- DGGH (Discrete Global Grid Hierarchy): The hierarchical tessellation of Earth’s surface into zones at successive refinement levels
- ZIRS (Zone Identifier Reference System): A scheme for uniquely naming and addressing each zone
- Deterministic sub-zone ordering: A standardized sequence for organizing child zones within parent zones, enabling optimized data encoding
In essence, a DGGRS is a ready-to-use system for referencing and organizing geospatial data on a global grid, whereas a DGGS is the broader integrated software framework that may implement one or more DGGRS alongside quantization functions, query capabilities, and interoperability tools.
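To tie this distinction back to the registry item on the roadmap, the sketch below records the three DGGRS components as a small descriptor; the field names are hypothetical and simply mirror the list above, including an explicit datum field to avoid the alignment friction discussed earlier.

```python
# Hypothetical descriptor mirroring the three DGGRS components listed above;
# field names are illustrative, not a registry schema.
from dataclasses import dataclass

@dataclass
class DGGRSDescriptor:
    name: str             # e.g., an ISEA variant or H3
    hierarchy: str        # DGGH: tessellation and refinement rule
    zone_id_scheme: str   # ZIRS: how zones are named and addressed
    subzone_order: str    # deterministic ordering of child zones
    datum: str            # stated explicitly to avoid alignment surprises

example = DGGRSDescriptor(
    name="ISEA3H (illustrative)",
    hierarchy="icosahedral hexagonal tessellation, aperture 3",
    zone_id_scheme="textual zone identifiers",
    subzone_order="deterministic child ordering (illustrative)",
    datum="authalic sphere vs. WGS84 ellipsoid must be stated",
)
print(example)
```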