[Grant Application] ICON Fault-Tolerant Cluster

ICON Fault-Tolerant Cluster

Project category

Development

Project description

We always strive to implement industry best practices. A team of our experienced DevOps engineers works around the clock, ensuring that all systems are up and running smoothly.

Everstake is going to create a public fault-tolerant cluster for the ICON blockchain.

The mainnet cluster will consist of 4 full archive nodes behind a load balancer that splits traffic between them. If one node disconnects, the balancer automatically switches to the others, providing fault tolerance.

By leveraging backup power supplies, data backups, monitoring, and alerting systems, we deliver high uptime and reliability without a single point of failure.
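A failover setup like the one described above could be sketched in HAProxy roughly as follows. This is a minimal illustration, not the actual deployment: hostnames, ports, and check intervals are placeholders.

```haproxy
# Illustrative HAProxy config: 4 archive nodes behind one frontend.
# Hostnames/ports are placeholders, not the real Everstake endpoints.
frontend icon_rpc
    bind *:9000
    default_backend icon_nodes

backend icon_nodes
    balance roundrobin
    # A node that fails 3 consecutive checks is taken out of rotation
    # and re-added after 2 successful checks.
    default-server inter 5s fall 3 rise 2
    server node1 archive1.example.com:9000 check
    server node2 archive2.example.com:9000 check
    server node3 archive3.example.com:9000 check
    server node4 archive4.example.com:9000 check
```

With this shape, traffic keeps flowing as long as at least one backend passes its health checks, which is the fault-tolerance property the paragraph above describes.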

The Problem

  • Developers waste time managing infrastructure
  • A lack of reliable public RPC nodes
  • The risk of centralization and dependency
  • The risk of DDoS attacks

Solution

  • Optimize responses
  • Involve multiple node operators
  • Unite mid-priced hardware into a cluster

What to expect?

  • 100% open source
  • Decentralized RPC cluster
  • Each node will be cross-backed up by the others
  • Protection of archive nodes from malicious requests, plus basic DDoS protection
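Basic DDoS protection at the edge is not detailed in the proposal; one common approach is per-IP rate limiting in nginx. A minimal sketch, where all zone names, rates, and addresses are illustrative assumptions:

```nginx
# Illustrative nginx rate limiting for basic DDoS protection.
# Zone sizes and rates are placeholders, not tuned production values.
limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=20r/s;

upstream icon_upstream {
    server 127.0.0.1:9000;
}

server {
    listen 8080;

    location /api/v3 {
        # Allow short bursts, reject sustained floods with 429.
        limit_req zone=rpc_limit burst=40 nodelay;
        limit_req_status 429;
        proxy_pass http://icon_upstream;
    }
}
```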

As recognized active community members and proven ICON builders, we are hoping to be supported by the ICON Foundation in order to continue the following activities:

  • Public infrastructure
  • Decentralization
  • Dapp-community building

Project Duration

2 months

Project Milestones

Milestone #0 - DELIVERED

  1. 2 upstream archive nodes hosted on Hetzner

  2. 2 nodes by Everstake

  3. 2 HTTP load balancers hosted on Hetzner in different datacenters

  4. Testnet dedicated server

  5. Testnet: 2 nodes

  6. Grafana/Prometheus data processing

  7. HAProxy TCP/IP balancer

  8. Research, reporting, and fix support for several security bugs

Team and Resources

  1. 2 bare metal dedicated servers
  2. 1 full-time middle-level DevOps engineer
  3. 2 part-time middle-level DevOps engineers
  4. 1 part-time senior-level developer
  5. 2 part-time middle-level developers
  6. 1 part-time project manager

Milestone #1

  1. Expand geographical presence in North America, Europe, and Asia
  2. Provide research to find optimal (price/configuration) hardware
  3. Fix problems or issues in the current setup
  4. Use more reliable, but more expensive, cloud hosting (Google/Amazon) as a backup

Implementation

  1. Failover cloud instances
     • Amazon (AWS) services as the priority
     • Google Cloud services

  2. Multiple upstreams in different geographical locations, preferably at Cloudflare cross-link points of presence:
     • London
     • Frankfurt
     • Singapore
     • Tokyo
     • US West
     • US East

  3. Cloudflare Pro
     • DNS balancer
     • Route optimizer
     • API integration
     • nginx
     • haproxy
     • Disabled cache to test real-time performance
     • Customized stress-test and latency-test software
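The stress-test and latency-test software above is not specified further in the proposal. As a hedged sketch of what the reporting side might look like, the helper below summarizes collected latency samples with a simple nearest-rank percentile; the function name and thresholds are illustrative assumptions, not the actual tooling.

```python
# Illustrative sketch (not the actual Everstake tooling): summarize
# latency samples, e.g. collected by timing JSON-RPC requests.
def latency_summary(samples_ms):
    """Return min/median/p95/max over a list of latency samples in ms."""
    if not samples_ms:
        raise ValueError("no samples")
    s = sorted(samples_ms)
    n = len(s)

    def percentile(p):
        # Nearest-rank percentile over the sorted samples.
        k = max(0, min(n - 1, int(round(p / 100.0 * (n - 1)))))
        return s[k]

    return {
        "min": s[0],
        "median": percentile(50),
        "p95": percentile(95),
        "max": s[-1],
    }
```

A stress tester would feed this with per-request timings and compare the p95 across balancer configurations.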

Team and Resources

  1. Google/Amazon cloud VMs
  2. 1 full-time middle-level DevOps engineer
  3. 1 part-time senior-level developer
  4. 2 part-time middle-level developers
  5. 1 part-time project manager

Costs

DevOps: 100 hours, ($45 per hour); $4500

Middle Developer: 30 hours, ($45 per hour); $1350

Senior Developer: 20 hours, ($55 per hour); $1100

Project Manager: 15 hours, ($30 per hour); $450

The total budget for Milestone #1: $7400

Milestone #2

Goals

  1. Collect all logs
  2. Determine slow requests
  3. Catch failed requests
  4. Determine the most popular requests
  5. Comprehensive analysis tools
  6. Activity visualization

Implementation

  1. Standardized logs format
  2. Deploy Elasticsearch cluster
  3. Deploy Logstash
  4. Deploy Kibana or Grafana
  5. Configure data pipeline
  6. Configure log processing
  7. Multiple node operators
  8. haproxy - TCP/HTTP failover with multiple upstream
  9. nginx - HTTP cache server
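Determining slow requests from balancer logs (goal #2 above) could be sketched as follows. The log line format shown is a hypothetical nginx-style access log with a request-time field, not the project's actual pipeline, and the threshold is an assumption.

```python
import re

# Hypothetical log line format: "<method> <path> <status> <request_time_s>"
# (an assumed format for illustration, not the project's actual logs).
LOG_RE = re.compile(r"^(?P<method>\S+) (?P<path>\S+) (?P<status>\d{3}) (?P<rt>[\d.]+)$")

def slow_requests(lines, threshold_s=0.5):
    """Return (path, request_time) pairs for requests slower than threshold."""
    slow = []
    for line in lines:
        m = LOG_RE.match(line.strip())
        if not m:
            continue  # skip unparseable lines instead of failing the pipeline
        rt = float(m.group("rt"))
        if rt > threshold_s:
            slow.append((m.group("path"), rt))
    return slow
```

In the proposed stack, the same extraction would be done by Logstash feeding Elasticsearch, with Kibana/Grafana for the visualization goals listed above.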

Team and Resources

  1. 2 full-time middle-level DevOps engineers
  2. 1 part-time senior-level developer
  3. 2 part-time middle-level developers
  4. 1 part-time project manager

Costs

DevOps: 120 hours, ($45 per hour); $5400

Middle Developer: 30 hours, ($45 per hour); $1350

Senior Developer: 25 hours, ($55 per hour); $1375

Project Manager: 15 hours, ($30 per hour); $450

The total budget for Milestone #2: $8575

Funding Amount Requested

$15,975
Everstake will cover 35% of the costs from P-Rep rewards.

Total project budget: $15,975 - 35% ≈ $10,000

Official team name

Everstake P-Rep

Contact information

Email: inbox@everstake.one

Telegram: @everstake_chat / @bo_opryshko

Public address

hx8e6dcffdf06f850af5d372ac96389135e17d56d3

Please check the questions below and leave a reply or edit your proposal

  • Overall, we need more information to understand this project.
    • need more description
    • need the entire architecture
    • Need more details on the solution that you suggested (It’s too ambiguous)
  • What’s the exact meaning of the full archive nodes? Do you mean it is a citizen node?
  • How will the cluster be grouped, and how will node fault tolerance work?
  • How do you do a health-check of each node?
  • How will the RPC cluster be set up?
  • Are you planning to defend DDoS using Cloudflare?
  • What’s the exact role of the multiple node operators?
  • What’s the plan to use the data you’ve collected?
  • Is Everstake going to keep running and managing all that infra and logs?
  • What cache data does Nginx deliver? Can you do caching on a payload basis?

Hello, thanks for the questions!

We have to discuss this with our DevOps team and CTO and will get back to you with all the answers.


We deeply believe that the delivery of a truly decentralized and geographically distributed RPC cluster will add enormous value to the ICON Ecosystem. Icon cluster will become a fault-tolerant, censorship resistant gateway into the Icon ecosystem. The main goal of the project is to build a sustainable and useful infrastructure, which can be maintained by the community.

[Diagram 1]

[Diagram 2]

As recognized active community members and proven ICON builders, we are hoping to be supported by the ICON Foundation in order to continue the following activity:

Public infrastructure
It is impossible to reduce the cost of operating a free public RPC endpoint without reducing the quality of the service. With a constantly growing ICON community and high expectations for speed and RPC availability, this level of quality is something users are used to and take for granted. However, we know from previous experience that there were no other RPC endpoints with similar characteristics. Creating a dedicated and sustainable node operation will allow us to continue to provide a free, first-class RPC service.

What’s the exact meaning of the full archive nodes? Do you mean it is a citizen node?

Not exactly; it’s more like a full node to which any operator, user, or dapp that needs a node can connect.

How will the cluster be grouped and how will the fault-tolerance of the node be worked?

The cluster will be distributed across different datacenters, different hosting providers, and different node operators.

How do you do a health-check of each node?

HAProxy with a custom checking script: we check the head block on-chain for every node in the cluster and exclude any lagging node from the cluster.
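The head-block check described here could look roughly like the sketch below: keep only the nodes within some lag tolerance of the highest block height seen across the cluster. The function name and the lag threshold are illustrative assumptions, not the actual script.

```python
# Illustrative sketch of the head-block health check (not the real script).
def healthy_nodes(heights, max_lag=2):
    """Given {node: last_block_height}, keep nodes within max_lag blocks
    of the highest height in the cluster; exclude lagging nodes."""
    if not heights:
        return []
    head = max(heights.values())
    return [node for node, h in sorted(heights.items()) if head - h <= max_lag]
```

HAProxy would then mark the excluded nodes as down (e.g. via an external agent check) until they catch up.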

How to set the RPC cluster?

What exactly do you mean by this question?

Are you planning to defend DDoS using Cloudflare?

Yes.

What’s the exact role of the multiple node operators?

Decentralization: each node operator supports their own node as part of the whole cluster.

What’s the plan to use the data you’ve collected?

The data will be used to collect statistics on the most frequent requests. In the future, we could add all this info to https://iconvotemonitor.com/

Is Everstake going to keep running and managing all that infra and logs?

Sure; for logs we are planning to use an Elasticsearch cluster.

What cache data does Nginx deliver? Can you do caching on a payload basis?

Yes; before that, we will collect request statistics to determine which requests to cache.
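Since JSON-RPC routes every call through a single URI, cacheability has to be decided from the request payload. A minimal sketch of such a read/write split is below; the method lists use real ICON JSON-RPC v3 method names, but the split itself is an assumption for illustration, not the project's actual policy.

```python
import json

# Illustrative read/write split for ICON JSON-RPC payloads. The method
# sets are an assumption for illustration, not an exhaustive policy.
READ_METHODS = {"icx_getBalance", "icx_getLastBlock", "icx_call",
                "icx_getTransactionResult"}
WRITE_METHODS = {"icx_sendTransaction"}

def is_cacheable(payload_bytes):
    """A request is a caching candidate only if it is a known read call."""
    try:
        payload = json.loads(payload_bytes)
    except ValueError:
        return False
    if not isinstance(payload, dict):
        return False
    return payload.get("method") in READ_METHODS
```

Collected request statistics would then decide which of the read methods are frequent enough to be worth caching.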

To discuss everything in more detail, we can schedule a call with our teams

Hey guys, since this is an issue we have and are looking for solutions to, I was following this. I don’t want to get into the citizen node vs. P-Rep node difference, since the only real difference is having the key to sign created blocks and being in the communication cluster. So what you state as a Full Node is exactly the role of a citizen node. If your citizen nodes will sync from the Everstake P-Rep node instead of the main endpoint every node syncs from, that’s a great plus.

Hi,

I am glad to see others are thinking about this issue. We have been doing research on how to deliver a good caching solution and have been working towards getting a grant from another blockchain to build a prototype. I have a few questions about your approach though and would be happy to share notes on how we’re thinking of approaching this problem and what I have been able to glean from Infura’s implementation.

  • Can you describe how you are doing the caching? JSON-RPC is generally very difficult to cache. Are you doing any traffic steering into your cache or is there going to be a generic replacement policy?
  • What is the “Google Reserve node” in the diagram and how is traffic directed to it from cloudflare? Also what function is it playing next to the API node?
  • Do you have any plans to implement autoscaling, and have you implemented the health check yet? If not, we have this mostly automated.
  • Is all of this going to be manually deployed? Can you describe what kind of automation tools or scripts you will create and how people might collaborate / use your work.

We have a lot of this stuff automated with Terraform on AWS, GCP, and Azure but have not worked with Hetzner. I’d be concerned that the nodes on Hetzner will not be able to sync a couple of months from now due to chain growth unless they use block storage. For these types of archive nodes, IOPS is the most important factor, which makes instances with attached NVMe volumes the most favorable.

Anyways, hope my questions don’t come off wrong, as I think the community generally needs to put all our heads together to figure out a solution to this. Happy to collab on parts, as we have a lot of code from another one-click deployment project we did for Polkadot that has many of the same components, less the caching layer.

Thank you for your answer above! Here are the following questions:

  • What’s the relationship between diagram 1 and diagram 2?
  • All nodes should have a heartbeat checker? Does the heartbeat checker cover all the nodes in the cluster?
  • What’s the purpose of the Google Reserve Node?
  • Are you planning to set up servers or use Cloudflare?
  • What’s the plan if Cloudflare goes down?
  • Generally, Nginx supports only URI-based cache keys, and all requests go to the same URL, “/api/v3/” - are you planning to develop a separate plugin?
    • Need to break down the payload and separate it into Read (getBalance) and Write (sendTransaction) properties
  • Due to the extra network hops (Cloudflare => Haproxy => Nginx), the overall response time can increase - do you have a solution for that?
  • What’s the caching strategy? Caching may not be meaningful due to low hit rate and more upstream may be required

Hi, thanks for the questions!

We have to discuss this with our CTO and will get back to you with all the answers.

  • What’s the relationship between diagram 1 and diagram 2?

Different stages/milestones, diagram 2 shows the latest architecture.

  • All nodes should have a heartbeat checker? Does the heartbeat checker cover all the nodes in the cluster?

The heartbeat checker covers the full nodes; we can additionally install Zabbix monitoring for the balancers.

  • What’s the purpose of the Google Reserve Node?

Google Cloud is the best location for serving projects that are themselves hosted on GC. Moreover, it’s one of the most reliable DCs.

  • Are you planning to set up servers or use Cloudflare?

CF will be used to manage DNS and for DDoS protection, if any is needed; it depends on the situation, the request types, and min/max response times.

  • What’s the plan if the Cloudflare is stuck?

Obviously, we have our own balancers as well.

  • Generally, Nginx supports only URI-based cache keys, and all requests go to the same URL, “/api/v3/” - are you planning to develop a separate plugin?

No, we are going to use the native nginx cache.
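The native nginx proxy cache can key on the request body, which is one way to cache POST-only JSON-RPC traffic behind a single URI. A hedged sketch of that approach is below; paths, sizes, TTLs, and upstream addresses are illustrative placeholders, not the actual configuration.

```nginx
# Illustrative nginx cache config keyed on the JSON-RPC request body.
# Paths, sizes, and TTLs are placeholders, not production values.
proxy_cache_path /var/cache/nginx/icon levels=1:2 keys_zone=icon_rpc:10m
                 max_size=1g inactive=10m;

upstream icon_upstream {
    server 127.0.0.1:9000;
}

server {
    listen 8080;

    location /api/v3 {
        proxy_cache icon_rpc;
        proxy_cache_methods POST;                    # cache POST responses
        proxy_cache_key "$request_uri|$request_body";
        proxy_cache_valid 200 5s;                    # short TTL for chain data
        client_max_body_size 16k;                    # bound the cache key size
        proxy_pass http://icon_upstream;
    }
}
```

Short TTLs matter here because even read responses (balances, last block) change as the chain advances.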

  • Due to the extra network hops (Cloudflare => Haproxy => Nginx), the overall response time can increase - do you have a solution for that?

Haproxy => Nginx will be located in the same DC, so 99.99% of the time there is near-zero latency between them.
Cloudflare has hundreds of servers around the world, and we are installing our infrastructure at CF points of presence.

  • What’s the caching strategy? Caching may not be meaningful due to low hit rate and more upstream may be required

We don’t have an answer right now; we need to analyze logs in production in order to develop a proper caching strategy.

Initial Review Result Comments

Review Result

Reject

Review Comments

Thank you for all the effort you’ve put into this proposal. This is definitely a valuable project and very good for our community. However, we reviewed a similar proposal from the Insight team and unfortunately decided to support that project instead. We think you can revisit this project when our community needs more clusters in the future.

Thank you for your comments, noted.