Event-Based Architecture
December 21st - January 21st
We began this period by upgrading the icon-etl package to support streaming data into our backend services. To support multiple backends, we implemented configurable outputs that send data to Postgres or Kafka, with potential support for Google Pub/Sub. The event-based architecture is built on Kafka: data flow now starts from icon-etl, is sent to multiple Kafka topics, streamed into MongoDB, and then made accessible from various APIs. All of these components are currently working for the blocks and transactions REST APIs. For event logs, we are building a stream processor to serialize event streams so that they can be inserted into the database and queried from a REST API.
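The configurable-output idea can be sketched as follows. This is a minimal, illustrative model, not the actual icon-etl implementation: class names, the item shape, and the per-type routing config are all assumptions, and an in-memory sink stands in for the real Kafka and Postgres sinks.

```python
# Sketch of a configurable output layer: ETL items are routed by type
# to whichever sinks (Kafka, Postgres, ...) the config enables.
# All names here are illustrative, not the real icon-etl API.

class InMemorySink:
    """Stand-in for a Kafka or Postgres sink."""
    def __init__(self):
        self.items = []

    def export(self, item):
        self.items.append(item)

class ConfigurableExporter:
    def __init__(self, sinks_by_type):
        # e.g. {"block": [kafka_sink], "transaction": [kafka_sink, pg_sink]}
        self.sinks_by_type = sinks_by_type

    def export_item(self, item):
        for sink in self.sinks_by_type.get(item["type"], []):
            sink.export(item)

block_sink = InMemorySink()
tx_sink = InMemorySink()
exporter = ConfigurableExporter({"block": [block_sink], "transaction": [tx_sink]})
exporter.export_item({"type": "block", "number": 41000000})
exporter.export_item({"type": "transaction", "hash": "0xabc..."})
```

Swapping `InMemorySink` for a Kafka producer that publishes each item type to its own topic gives the topology described above, with MongoDB consumers on the other side of the topics.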
Project Completion Percentage
Remaining Time to Completion
Expected Results for the Next Period
A fully functional event streaming architecture and backend microservices configured with a REST API. The user workflow begins by filling out a config file and running docker-compose to bring up all the components. The user then configures a broadcaster either through the config file or by registering the broadcaster with a POST/PUT request. Event registrations and broadcasters are persisted in a Postgres database for easy failure recovery. Initially we will support HTTP and Kafka topic broadcasters, which can then be connected to a broad range of backends or downstream stream processors.
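A broadcaster registration might look like the following. The endpoint path and field names are hypothetical placeholders for the final API, shown only to illustrate the shape of the workflow:

```python
import json

# Hypothetical broadcaster registration payload -- the endpoint path and
# field names are assumptions, not the final API.
broadcaster = {
    "id": "my-http-broadcaster",
    "type": "http",                        # "http" or "kafka" initially
    "endpoint": "https://example.com/hook",
}

# POSTing this body to a path such as /broadcasters would persist the
# broadcaster in Postgres, so it survives a service restart.
body = json.dumps(broadcaster)
```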
Once the broadcaster has been configured, the user can submit another POST/PUT request to register specific events to flow to that broadcaster. Events can be block, transaction, or event log events, and can be filtered on logical fields per the following expected workflows:
- Stream all blocks
- Broadcast at a specific block height
- Stream all transactions
- Broadcast on any transaction from either an individual wallet or contract address, or a combination of addresses
- Stream all event logs or filtered on a given contract
- Broadcast events from a contract based on a given indexed event keyword
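The filtering workflows above can be sketched as a simple predicate over incoming events. The registration schema and field names here are assumptions for illustration, not the final spec:

```python
# Illustrative event-registration filters for the workflows above.
# Field names are assumptions; the real registration schema may differ.

def matches(registration, event):
    """Return True if an event satisfies a registration's filters."""
    if registration["event_type"] != event["type"]:
        return False
    filters = registration.get("filters", {})
    # Broadcast at a specific block height.
    if "block_height" in filters and event.get("height") != filters["block_height"]:
        return False
    # Broadcast on transactions touching any of the given addresses.
    if "addresses" in filters and not (
        {event.get("from"), event.get("to")} & set(filters["addresses"])
    ):
        return False
    # Broadcast events matching an indexed event keyword.
    if "keyword" in filters and filters["keyword"] not in event.get("indexed", []):
        return False
    return True

registration = {
    "event_type": "transaction",
    "filters": {"addresses": ["hx1234"]},
    "broadcaster_id": "my-http-broadcaster",
}
tx = {"type": "transaction", "from": "hx1234", "to": "hx9999"}
matches(registration, tx)  # True: the "from" address is registered
```

An empty `filters` object would correspond to the "stream all" workflows, since no condition can fail.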
On top of the event registration architecture, we are also building REST, WebSocket, and GraphQL endpoints. The REST endpoints are mostly built, with WebSocket support coming next on each exposed path (/logs, etc.). The REST endpoints have an OpenAPI spec with rendered API docs at the /docs endpoint. In the coming weeks we will decide whether to integrate GraphQL support on top of the existing MongoDB backend or to build it as a standalone service fronted by a GraphQL engine such as Hasura and backed by Postgres with streaming updates.
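A query against one of the REST paths might look like the following. The query parameter names and the contract address are hypothetical, chosen only to show the kind of filtering the /logs endpoint is meant to support:

```python
from urllib.parse import urlencode

# Hypothetical /logs query -- parameter names and the address are
# assumptions, not the final OpenAPI spec.
params = {
    "address": "cx0000000000000000000000000000000000000000",  # contract filter
    "from_block": 41000000,
    "to_block": 41000100,
}
url = "http://localhost:8000/logs?" + urlencode(params)
```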
While we expect most of the functionality to be in place next month, a full project handoff is unlikely, as further testing will be needed along with padding for unforeseen issues. Project execution has been accelerated relative to the original scope thanks to an additional engineer being allocated to the project, along with other internal organizational changes to be announced at a later date.