API Infrastructure Development Direction

This post outlines a general design for building a full suite of API interfaces for ICON. The intention is to get community feedback on the design and offer alternative paths.

Background:

ICON is currently accessible only via a JSON-RPC interface, with ICON 2.0 bringing websocket support. Other blockchains offer alternative APIs that greatly enhance developer capabilities, such as GraphQL and REST, along with middleware connectors to feed data into message queues and other systems. The components laid out here will bring ICON to feature parity with the top blockchain ecosystems.

Ultimately, developers will have access to the following endpoints:

  1. JSON-RPC and websockets (same as solidwallet.io)
  2. Event registration and broadcasting tools (to be run on prem)
  3. REST API with OpenAPI spec
  4. GraphQL API

These endpoints will give developers the interfaces that the vast majority of Web2 applications are built around.

To build these APIs, we are proposing the following components:

  1. Redundant set of globally distributed endpoints
  2. Realtime data ingestor and middleware broadcaster
  3. Kafka backend
  4. GraphQL API
  5. REST API
  6. Caching layer

These components are listed in this order because each builds on the last. Their architecture and purposes are laid out below.

Redundant set of globally distributed endpoints

For endpoints to be reliable, they need to be deployed in multiple regions with failover capabilities. We have submitted a grant to build these endpoints, which will come in two flavors. The first is a community deployment based on VMs, intended for anyone who needs their own local node without production features. The second is a Kubernetes-based deployment that will be used for the main endpoints and will be the focus of development for additional features.

We are starting by building a reliable set of endpoints, and the code base will be integrated into the Kubernetes deployment along the way. Ultimately, Kubernetes is the right path for these endpoints, which is why it will be the deployment of record.

Stream data producer and event registration and broadcasting service

Events that come off the blockchain need a way to be ingested by middleware layers. Ethereum has a tool called Eventeum that this layer aims to replicate. Our implementation, as laid out in our grant submission, will be decoupled and consist of two services: a stream data producer and an event registration and broadcasting service.

The stream data producer is a real-time data ingestion agent that extracts all blocks by polling a node at a specified interval (~100ms). Blocks, transactions, transaction logs, and receipts are then parsed and forwarded to a variety of configurable backends such as Postgres and Kafka. The implementation will be an extension of icon-etl with streaming improvements, following the best practices of blockchain-etl, of which we are members.
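A minimal sketch of the polling loop described above, assuming a local node URL and a pluggable `send` backend hook (both placeholders, not the final icon-etl design). It builds an `icx_getBlockByHeight` request, flattens the response into block and transaction records, and forwards them:

```python
import json
import time
from urllib.request import Request, urlopen

# Assumed node URL and interval -- adjust for your deployment.
NODE_URL = "http://localhost:9000/api/v3"
POLL_INTERVAL = 0.1  # ~100ms, as described above


def build_request(height):
    """JSON-RPC payload for icx_getBlockByHeight (height is hex-encoded)."""
    return {
        "jsonrpc": "2.0",
        "id": height,
        "method": "icx_getBlockByHeight",
        "params": {"height": hex(height)},
    }


def flatten_block(block):
    """Split a raw block into (block record, transaction records) ready
    to be forwarded to a configurable backend such as Kafka or Postgres."""
    txs = block.get("confirmed_transaction_list", [])
    block_record = {
        "height": block["height"],
        "hash": block["block_hash"],
        "tx_count": len(txs),
    }
    return block_record, txs


def poll_forever(send, start_height=0):
    """Poll the node at a fixed interval and forward parsed records
    via `send`, a caller-supplied backend hook (e.g. a Kafka producer)."""
    height = start_height
    while True:
        req = Request(
            NODE_URL,
            data=json.dumps(build_request(height)).encode(),
            headers={"Content-Type": "application/json"},
        )
        result = json.load(urlopen(req))["result"]
        send(*flatten_block(result))
        height += 1
        time.sleep(POLL_INTERVAL)
```

The real producer would also need receipt and log extraction plus retry/backfill handling; this only illustrates the poll-parse-forward shape.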

The event registration and broadcasting service will mimic the functionality of Eventeum but leverage Kafka as the message queue that feeds broadcasts. With Eventeum, events are registered via a REST API and broadcast to middleware layers configured via a config file. In ICON, we will likewise expose a REST API, which will inform the creation of the topics from which broadcasts are processed. Eventeum, when set up in HA, duplicates broadcasts; our architecture will not, and will instead aim for “exactly-once” delivery, so that even after a service failure, events can be picked back up and processed.
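One way the registration API can support exactly-once processing is to derive Kafka topic names deterministically from the registration itself, so a restarted service resumes the same topics rather than creating duplicates. A sketch under that assumption (the naming scheme and registry shape are illustrative, not a committed design):

```python
import hashlib


def topic_for_registration(contract_address, event_signature):
    """Derive a deterministic Kafka topic name for a registered event.
    Deterministic names let a restarted consumer resume the same topic,
    one prerequisite for exactly-once processing."""
    digest = hashlib.sha256(
        f"{contract_address}:{event_signature}".encode()
    ).hexdigest()[:12]
    # Kafka topic names allow [a-zA-Z0-9._-]; a short digest keeps them legal.
    return f"icon.events.{digest}"


def register_event(registry, contract_address, event_signature):
    """Idempotent registration: the same (contract, event) pair always maps
    to the same topic, so re-registering after a failure creates nothing new."""
    topic = topic_for_registration(contract_address, event_signature)
    registry.setdefault(topic, {
        "contract": contract_address,
        "event": event_signature,
    })
    return topic
```

In the actual service this would sit behind the REST API, with Kafka's transactional producer handling the delivery-side half of exactly-once.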

REST API

The event streaming service will populate a Postgres DB from which a REST API can be exposed. The API will have an OpenAPI spec out of the box, with this spec serving as the source of truth from which the other APIs are derived. Ethereum has recently settled on OpenAPI as its base spec, with Infura aiming to support REST APIs in the future. REST queries allow classical caching mechanisms to be implemented and let range queries be expressed more compactly than looped / batched JSON-RPC.
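To make the compactness point concrete, here is a sketch comparing a hypothetical REST range endpoint (the `/blocks?from=&to=` path is an assumption, not a published spec) with the JSON-RPC equivalent, which needs one request object per block:

```python
def rest_range_query(base_url, start, end):
    """One REST call covers a whole block range (endpoint path illustrative)."""
    return f"{base_url}/blocks?from={start}&to={end}"


def jsonrpc_batch(start, end):
    """The batched JSON-RPC equivalent: one request object per block."""
    return [
        {
            "jsonrpc": "2.0",
            "id": h,
            "method": "icx_getBlockByHeight",
            "params": {"height": hex(h)},
        }
        for h in range(start, end + 1)
    ]
```

A 100-block range is a single GET URL in REST, but a 100-element POST body over JSON-RPC; the GET form is also what makes classical HTTP caching applicable.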

GraphQL API

As with the REST API, a GraphQL API can easily be built once you have a Postgres database with the relevant data. There are many server implementations, from Hasura to Apollo, but both work by reading the information schema from Postgres to inform the construction of the API. Custom GraphQL endpoints and graphs should then be maintained as a community. Once we expose a GraphQL API, customizations at the query level will become the main burden of development.
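As an illustration of what query-level work looks like, here is a Hasura-style GraphQL query wrapped in a small Python payload builder. The table and field names (`transactions`, `tx_hash`, etc.) are assumptions about the eventual schema, not a published one:

```python
# Illustrative GraphQL query against a hypothetical Hasura-generated schema.
QUERY = """
query RecentTransactions($limit: Int!) {
  transactions(order_by: {block_height: desc}, limit: $limit) {
    tx_hash
    from_address
    to_address
    value
  }
}
"""


def build_payload(limit=10):
    """Build the JSON body a GraphQL server expects: query + variables.
    A Hasura-style server maps this to SQL over the Postgres schema."""
    return {"query": QUERY, "variables": {"limit": limit}}
```

The payload would be POSTed to the server's `/v1/graphql` endpoint; the point is that once the database exists, the server generates the schema and the community's effort shifts to curating queries like this one.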

Caching layer

Caching is a complicated topic for JSON-RPC, since all requests are POST requests, which don't play nicely with generic replacement policies. To build a reliable cache, the request body must be inspected so that each request can be routed to the appropriate cache. This can be done once optimized backends have been populated to route requests to. It will require a specialized proxy such as jsonrpc-proxy, as classical API gateways don't support JSON-RPC transcoding.
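A minimal sketch of the body inspection involved: the JSON-RPC method (plus its params) decides whether a response is immutable and therefore cacheable. The method lists here are assumptions for illustration, not a final policy:

```python
import json

# Methods returning immutable data (historical blocks/txs) can be cached;
# tip-of-chain and write methods must always pass through. Assumed lists.
CACHEABLE = {"icx_getBlockByHeight", "icx_getTransactionByHash",
             "icx_getTransactionResult"}
NEVER_CACHE = {"icx_sendTransaction", "icx_getLastBlock"}


def cache_key(body):
    """Return a cache key for an immutable request, or None to pass through.
    Since every JSON-RPC call is a POST to the same URL, the key must come
    from the body, not the request line."""
    method = body.get("method")
    if method in NEVER_CACHE or method not in CACHEABLE:
        return None
    # Params make the key unique: the same method with different params
    # must map to different cache entries. sort_keys normalizes ordering.
    params = json.dumps(body.get("params", {}), sort_keys=True)
    return f"{method}:{params}"
```

A proxy in front of the endpoints would compute this key, serve hits from the cache, and forward misses and `None` requests to the node.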

The discussion gets long and complicated from here and will be picked up in later posts. For now, it is safe to say that caching is (1) most easily addressed once modules of the above components have been developed, and (2) a later problem we will deal with once we really need it. APIs such as REST and GraphQL, which in the Web2 world enjoy several orders of magnitude more adoption than JSON-RPC, will hopefully become the main ways users interact with the blockchain because of their ease of use and caching capabilities. That should reduce the need to query directly via JSON-RPC, and hence the demands on optimizing this interface.

Why Lay This All Out?

The above would be a major initiative requiring full community buy-in and multiple collaborating parties to support. These are developer tools to which no business model can be directly ascribed but which, by the same token, will empower all development on ICON. If this is a direction the community feels we should go, we at Insight have submitted proposals to start on the first two components (Autoscaling Endpoints and Event Based Architecture), but would love to collaborate / modify per community feedback on a design pattern that gets us these features.
