The “All or Nothing” Myth of Interoperability

Interoperability isn’t about perfection—it’s about progress. Learn how incremental, machine-readable standards unlock real ROI and reduce integration costs.

Think you need one big, perfect standard to achieve interoperability? Think again.

The truth is: interoperability doesn’t require perfection—it requires progress.

Too often, organizations fall into the trap of thinking that if a standard can’t do everything, it’s not worth doing anything. Custom systems dominate because they’re seen as faster or more tailored, and teams defer standardization for “later”—once the future magically simplifies. But waiting for the perfect, universal standard is not only unrealistic, it’s unnecessary.

The real value of standards lies in their return on investment (ROI)—and that ROI isn’t all-or-nothing. Even partial standardization of specific aspects can unlock major benefits. Some elements of a system may yield high ROI from standardization, while others may not yet be worth the effort. Interoperability is not a binary state—it’s an optimization process.

This is especially true in complex systems like supply chains, analytical workflows, and digital twins. These ecosystems involve many actors, technologies, and data sources. Interoperability in such systems is best approached incrementally—by standardizing specific components of how systems interact over time.

In the beginning, most system interactions rely on human-readable descriptions—manuals, specs, emails, and informal agreements. These help people understand each other, but they don’t help machines. Without standardized, machine-readable structures, systems can’t exchange data efficiently, and automation becomes costly or impossible.

So how do we move from that fragmented starting point to a fully interoperable ecosystem?

We break it down. Step by step.

Step 1: Descriptive Text without Standardization

At the outset, data and processes are typically expressed through natural language documentation, such as manuals, specifications, or informal agreements between parties. These documents provide semantic guidance but do not enable automated data processing or seamless system integration. The absence of machine-readable standards introduces ambiguity, making data exchange and automation costly, inefficient, and error-prone.

Example:

  • A sensor reading may be described in text as: “Temperature at location X is measured in Celsius and recorded every 10 minutes.”

  • However, without standardized data models, each system might record or represent this data differently.
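To make the ambiguity concrete, here is a hypothetical sketch (both payloads and all field names are invented) of how two systems might record the very same reading:

System A:

{
  "temp_c": 21.4,
  "site": "X",
  "freq_min": 10
}

System B:

{
  "temperature": "21.4 °C",
  "locationId": "X",
  "sampleInterval": "10m"
}

Reconciling the two requires a human to read both sets of documentation and hand-write a mapping: precisely the costly, error-prone work that standardization aims to remove.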

Step 2: Incremental Standardization of Specific Aspects

To optimize interoperability, standards can be incrementally introduced, focusing on specific aspects of system transactions. This approach reduces disruption while progressively improving system efficiency. Incremental standardization often starts by addressing:

  1. Data models: Define standard data structures and semantics for common information types.

  2. Exchange formats: Establish syntactic formats (e.g., JSON, XML) for data exchange.

  3. Reference vocabularies: Provide agreed-upon definitions for key terms (e.g., “temperature,” “sensor,” “location”).

  4. Protocol bindings: Specify the technical mechanisms for data transfer (e.g., HTTP APIs).

  5. Conformance criteria: Ensure predictable behavior across different implementations.

At each stage, the system becomes more interoperable, allowing for more efficient data exchange and less dependency on manual processing.
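As a minimal sketch of the first two items in the list above (the schema is illustrative, not drawn from any published standard), a community might agree on a JSON Schema for the sensor reading:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Observation",
  "type": "object",
  "required": ["observedProperty", "unit", "interval", "location"],
  "properties": {
    "observedProperty": { "type": "string" },
    "unit": { "type": "string", "enum": ["Celsius"] },
    "interval": { "type": "string", "description": "ISO 8601 duration, e.g. PT10M" },
    "location": { "type": "string", "format": "uri" }
  }
}

Item 5 then follows almost for free: every implementation can validate incoming payloads against this one schema before accepting them, which is what makes behavior predictable across systems.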

Step 3: Actionable Machine-Readable Standards

The optimization target is to achieve actionable machine-readable standards, where:

  • Semantics are explicit: Machines can interpret the meaning of exchanged data.

  • Data structures are predictable: Systems can consume the format directly, without first poring over documentation and then developing and testing bespoke transformations to produce the inputs they need.

  • Transformations are reusable: Extract/transform mechanisms can be tested and shared when both the source and target follow standards, allowing different domains and applications to maintain different views where necessary.

  • Actions can be automated: Processes like data ingestion, validation, and integration can occur without human intervention.

In the earlier example, an interoperable standard might now express the temperature reading as:

{
  "@context": "https://example.org/context.jsonld",
  "@type": "Observation",
  "observedProperty": "temperature",
  "unit": "Celsius",
  "interval": "PT10M",
  "location": "urn:location:someIdentifier"
}

In this example, “@context” maps each element to a unique identifier, which can in turn be used to retrieve a detailed description. The structure is both machine-readable and standardized, enabling seamless data integration between different systems.
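To see what a machine actually does with that “@context”, here is a minimal Python sketch using the pyld library (pip install PyLD); the vocabulary URIs are invented for illustration, and a real deployment would publish the context at the “@context” URL so every consumer resolves the same mappings:

# Minimal JSON-LD expansion sketch; vocabulary URIs are hypothetical.
from pyld import jsonld

context = {
    "Observation": "https://example.org/vocab#Observation",
    "observedProperty": "https://example.org/vocab#observedProperty",
    "unit": "https://example.org/vocab#unit",
    "interval": "https://example.org/vocab#interval",
    "location": "https://example.org/vocab#location",
}

doc = {
    "@context": context,
    "@type": "Observation",
    "observedProperty": "temperature",
    "unit": "Celsius",
    "interval": "PT10M",
    "location": "urn:location:someIdentifier",
}

# Expansion replaces each short key with its globally unique identifier,
# so any JSON-LD-aware system can interpret the data without custom mapping code.
print(jsonld.expand(doc))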

Note: this example leverages existing data exchange standards to link typical schemas, APIs, semantic models, and vocabularies together for the first time. This provides a game-changing opportunity to optimize system interoperability through augmentation with machine-readable annotations (at any point in a data supply chain), rather than wholesale re-engineering around particular standardized data structures.

Step 4: Recursive Optimization

As interoperability improves, further optimizations become possible by standardizing:

  • Processes (e.g., procedures for data collection, validation, and reporting).

  • Governance models (e.g., access controls, data licensing).

  • Inference and automation rules (e.g., automated generation of insights).

The result is a system where data, processes, and actions are increasingly standardized, resulting in a large-scale reduction in “transaction costs” and integration complexity.
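As a hypothetical illustration of the last item above (the rule, the threshold, and the "value" field are all invented for this sketch), once observations share a standard structure, a single automation rule can act on data from every source without per-source code:

# Hypothetical automation rule over standardized observations.
# Because every source uses the same structure, one rule covers all of them.
def flag_out_of_range(observation: dict, low: float = -40.0, high: float = 60.0) -> bool:
    """Return True if a Celsius reading falls outside a plausible range."""
    if observation.get("unit") != "Celsius":
        return False  # a fuller rule would convert other units first
    return not (low <= observation["value"] <= high)

readings = [
    {"@type": "Observation", "unit": "Celsius", "value": 21.4},
    {"@type": "Observation", "unit": "Celsius", "value": 187.0},  # likely sensor fault
]

alerts = [r for r in readings if flag_out_of_range(r)]
print(f"{len(alerts)} reading(s) flagged for review")

The point is not the rule itself but that it is written once against the standard, rather than once per data source.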

Optimization Outcome

The optimization process is inherently incremental because it balances:

  • Preserving existing data and practices.

  • Introducing standardization in manageable increments.

  • Progressively reducing ambiguity and increasing automation.

The final optimization goal is a system where standards are actionable by machines, minimizing the need for custom integration work and maximizing interoperability across heterogeneous systems. This is achieved through a staged migration from descriptive text to formalized, actionable standards that incrementally cover the full spectrum of system transactions.

Step 5: Economies of Participation

Description:

At this stage, organizations recognize that the value of interoperability is tightly coupled to the availability and reuse of shared components — such as vocabularies, APIs, schemas, and governance models — across a community or ecosystem.

Core Tension:

Organizations must balance:

  • The benefits of interoperability (e.g., automation, integration, data reuse),

  • Against the real implementation costs, which are:

    • Lowered when shared standards and tools already exist.
    • Much higher when these need to be developed in isolation.

Key Insight:

Widespread participation in shared standards creates positive network effects: the more entities that adopt and contribute, the more reuse is possible, and the lower the cost for each new participant. Conversely, if an actor must implement everything from scratch (e.g., vocabularies, reference services, governance rules), the barrier to entry may outweigh perceived benefits, especially in the short term.

Optimization Outcome:

  • Strategic Collaboration Becomes Essential: Success now depends not only on internal optimization but also on external alignment — choosing when to adopt, adapt, or contribute to shared assets.

  • Sustainability Through Reuse: The ecosystem’s maturity is reflected in the availability of high-quality, maintained, and trusted resources that reduce implementation overhead for newcomers.

  • Investment Decisions Become Context-Dependent: Participants weigh short-term costs against long-term gains based on the maturity and traction of shared infrastructures.

Where to Go from Here

Interoperability isn’t achieved overnight. It’s a process—one that starts small, builds gradually, and delivers real value at every stage. By moving from loosely defined descriptions to machine-readable, standards-based systems, organizations reduce friction, improve scalability, and lay the foundation for more intelligent, automated collaboration. These basic ideas are the technical pillars of the geospatial ecosystem discussed in our previous blog, “The Shift That’s Reshaping Geospatial—and Why It Matters Now.”

We can also explore how to make existing and new standards more valuable by demonstrating how they can work together to optimize systems, rather than focusing too narrowly on the implementation of specific components.

You don’t need to wait for the perfect moment—or the perfect standard. The sooner you begin, the sooner you see results.

Join the conversation.

Whether you’re standardizing your first data model or designing systems that support global integration, your experience matters. Share your use cases, your challenges, and your goals—so we can shape practical, scalable standards together.

Contact us at innovation@ogc.org or get involved through our working groups to pursue stepwise interoperability enhancements for your own systems and the systems they connect with.

This blog is part of our “10 Ideas in 10 Weeks” series, highlighting bold ideas and real-world innovation across the OGC community. 

Follow us on LinkedIn for more stories about the people, projects, and standards shaping the future of geospatial.