A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | CONTACT DETAILS | NODE VALIDATOR EXPERIENCE | EMERGENCY RESPONSE & MONITORING | SERVER INFRASTRUCTURE & SECURITY | CONTRIBUTIONS TO SIFCHAIN | |||||||||||||||
2 | Sifchain Address | Contact Handles | What other blockchains do you *currently* run nodes for? | Please share your experience operating Cosmos-SDK/Tendermint-based blockchains. | What kind of emergency response plans do you have? | You have an uptime below 90% during the past one thousand blocks. To restore stability, what measures will you take? | Your validator node is unstable due to a sudden rise in network transactions. What measures will you take? | Do you have 24/7 node monitoring tools? If so, could you please briefly describe them? | How are you alerted when your validator node has a problem? | Where are you going to operate your machine? | Please share the security measures you use for your validator nodes. | How long are you planning to run your validator node? | In what kind of situation will you stop operating your node? | What can you contribute to the Sifchain community aside from what we've asked of you? | Any other important notes you want to add about yourself and/or your node operations. | |||||
3 | Jhongotz#6364 | Hopr | Yes | Home | Long | No profit | ||||||||||||||
4 | sif14tm9600fx088jw55gypcwkwh04j34e9jp68t8r (sifvaloper14tm9600fx088jw55gypcwkwh04j34e9jgc0p8n) | Danil Ushakov#5735 | Solana, Stafi, Darwinia, Oasis (Tendermint), Akash (Tendermint), Certik (Tendermint), Avalanche, Pocket (Tendermint) | I'm Danil Ushakov (nickname), a professional blockchain dev, and I run validator nodes on many blockchains. I have infrastructure ready for rapid deployment of new testnet or mainnet nodes in the most popular ecosystems (Polkadot Substrate and Cosmos Tendermint). My experience: - Participation in mainnets as a validator: Solana, Pocket, Stafi, Darwinia, Oasis, Akash, Certik, Avalanche. - Participation in testnets as a validator: Oasis, Avalanche, Nucypher, KEEP, Casper Labs, Pocket, Akash, Plasm Network, Near, Solana, Darwinia, Joystream, Bluzelle, Matic, Celer, Elrond, Stegos, Certik, Stafi, Phala, Bifrost, Crust, Sifchain, CODA/MINA, Orai. I use the moniker "ushakov" for my nodes. Special award "Zaki Prize" in the KEEP testnet from Zaki Manian (Cosmos): https://blog.keep.network/1-million-keep-awarded-as-playing-for-keeps-wraps-up-its-first-month-8091c8fb4765 Successful attack in the Akash testnet (under the nickname devurandom, as a member of the Russian Validator Club team): https://akash.network/blog/the-akashian-challenge-phase-1-eridani-release-announcement/ Medium: https://medium.com/@novysf/the-outcome-from-akash-testnet-zero-fee-transaction-attack-5fd4aaa68d97 Detailed testing report on the TBTC dApp, KEEP Network (in cooperation): https://medium.com/@ushakovdanil/btc-dapp-keep-network-testing-report-at-ropsten-ethereum-network-ecefa037cea0 My professional skills: - Installation, configuration, and monitoring of validating nodes in PoS blockchains: 3 years of experience. - Programming and Linux server administration: 7 years of experience. Knowledge: Ecosystems: Cosmos (Tendermint), Polkadot (Substrate). Programming: C/C++, Bash, Python, Solidity. GitHub: https://github.com/k0kk0k | I use a backup node that runs in parallel on a separate server and is fully synchronized (hot standby) | In general, I have not yet had such a situation on other blockchains, but I am ready to solve a similar problem quickly. Action plan: I will switch the validator to the secondary server/node. On the primary server, I will upgrade the CPU/RAM/network, depending on the reason for skipping blocks. | The server with the validator node has a large reserve of network bandwidth (1Gbps) | I collect runtime metrics with Prometheus+Grafana. I also use alerts in Grafana for critical parameters (block height, RAM/CPU/SSD usage) that send notification messages to my Telegram and email. | Telegram message / email | Data Center | I use a sentry node to protect the validator node from flooding/direct access. Remote access to the validator node is performed using SSH keys. | at least 5 years | infrastructure maintenance costs exceed income for 3 months | I am a member of the Russian validator club, and I can provide technical support to users | ||||||
5 | sif1fydtkmm7u6smh4rvae70n8y79lg6sarrqqz366 | stakely.io#9147 | Mainnet: Certik, Stafi, Marlin, Fantom. Testnet: Crypto.com, Nym, Bluzelle, and others | We run nodes as operators at stakely.io; see our website. We have experience with Tendermint, sentry nodes, signing nodes, etc. We were selected as a genesis node for Certik and are still running. | We configure online monitoring alerts for system status, node health, and events such as being jailed. We always run a backup node (not signing, but syncing) for mainnet nodes. We can rotate our nodes easily in case of disaster by configuring the private node key; we use a script for this. | Review memory and CPU usage; it could require a service restart. Check the network for too many connections (the max_open_connections parameter). | Raise the maximum number of transactions in the mempool with the size parameter and restart. | Prometheus + Grafana, and UptimeRobot with specific queries that we publish on a webserver port. We use SSL + basic auth to access statistics. | Email + Telegram bot. Several people are notified. | Data Center | TLS + firewalls + sentry nodes + private-key-only SSH access, no passwords allowed | Years | several months in which the node doesn't cover hardware costs | I have pacobits.medium.com. I can take the project to the Spanish-speaking community. | ||||||
6 | sif1q4zknsmnzjtn569mmttufauwpgh5j9rc78dzcx | Alibaba#0280 | Oasis Mainnet, Solana Mainnet, the Graph Mainnet, Certik Mainnet, Dock testnet, Crypto.com testnet, Kira testnet, Casper testnet, Robonomics testnet, Bitsong testnet | Being in crypto since 2018, I have taken part in a number of testnet and mainnet launches, contributed to communities, and helped successfully spin up various projects, like, but not limited to: Sero testnet and mainnet, QuarkChain, NKN, etc. I was interested in PoW and then in mid-2019 moved to PoS and took part in many testnets, like Elrond (now participating in the pre-mainnet beta test), CELO testnet, Oasis (I'm a winner of the testnet, received the rewards, and am running a pre-mainnet validator with this stake), Solana (I'm a mainnet-beta qualified validator, with stake received from the development team), Regen (I received points for the testnet), Desmos, and a number of Cosmos and Polkadot projects. I'm interested in running new and innovative validating projects, especially if there is some interest for early adopters and validators -- being able to run a mainnet validator with your Foundation's votes/support and compensation is one of the best approaches I've seen, with proven efficiency. | alerts, back-up servers, sentry nodes as applicable | sync from the blockchain backup I usually keep | balancing the load, using only powerful dedicated servers with redundant capabilities | Yes, Telegram bots, Grafana+Prometheus alerts | Telegram, email | Data Center | DDoS protection, Firewalls, VPN, UFW and endpoint encryption if needed | As long as required | The profitability of the node | |||||||
7 | sif1jdrz7y4u8e9ang8d2tfjgwwl4r72yp8ry8hdn4 | lukfm#2154 | Plasm, HOPR, Akash, Agoric, Concordium, Edgeware | 1 year of running nodes... | backups/mirroring | first of all, changing the server or Internet provider (depending on the situation) | immediate upgrade | I have my own hardware control system; I am a miner with 7 years of experience. I have backup internet channels, I can reboot hardware by a telephone call, etc. | I will write a script that will monitor the node in the dashboard and send an SMS if something is wrong | my mining farm | I have my own VPN servers; I think that's enough | depends on payback; years is good | if costs are higher than income for more than 3 months | part of the structure =) | ||||||
8 | sif1madlfff36f0dtq0tzcjfstfkzt80cu83f6sgv0 | OranG3#1415 | Mainnet: Certik, Bluzelle, Akash, Avalanche, Nucypher. Testnet: Desmos, Mina Protocol, Near, xx network, Bitsong, Concordium. | In fact, the validators in the Cosmos network are my favorite =). As you may have noted above, most of my nodes are currently in the Cosmos network, but I also like other networks, because it promotes development and learning something new, without which I cannot live :-D | I am always reachable on Telegram or Discord. I also want to note that we run nodes together: I monitor during the day, my friend at night, so we are always in touch 24/7. There have never been situations that presented us with unsolvable problems. | In my experience in Cosmos networks, this has not yet happened, because I have 2-3 sentry nodes running in the mainnet for all nodes. But if such a situation still occurs, I will check for possible problems with other validators in Discord, check the connectivity of my node, the remaining space on the server, and the operation of my sentry nodes. | I always allocate servers for my nodes with a large reserve of resources, so I have not had such problems yet. I think that with stable operation of the network as a whole, the technical characteristics of the server are most important in this situation. | I connect all my nodes to https://www.netdata.cloud/, which I configure for notifications via a Telegram bot. The service is very flexible and allows setting up fine-grained notifications. I also use Zabbix, which, if configured correctly, lets me monitor the node's performance in a timely manner ;-) | I think I answered this question in the previous sentence =) | Data Center | Good passwords and simple things like fail2ban and a well-configured UFW help me a lot :-D However, I always listen to the recommendations from the creators of the project. | I am here for the long term, as long as the project exists, and judging by the professionalism, I think your project will always exist. I will be on your team! | Only in one case, if the project ceases to exist =( | I'm awesome at making cool memes :-D | I currently validate in the testnet, my moniker is Orang3club: https://blockexplorer-testnet.sifchain.finance/account/sif1madlfff36f0dtq0tzcjfstfkzt80cu83f6sgv0 | |||||
9 | sif1yxu747s6wj49q7rf7cav206mqcv2tjlqw2ays5 | Photon#4219 | NEAR, CertiK, Matic | Our mainnet validators for CertiK and Matic use the Cosmos SDK. We are also betanet/testnet validators for Nym and Sifchain, which also use Cosmos. Our mainnet validators can be found here: https://www.majlovesreg.one/blockchain/validators. We are very comfortable with the Cosmos SDK and have already created many monitoring, mitigation, and automation scripts for Cosmos. | Bash scripts for monitoring, alerting, and automation of tasks. Another node in a different datacenter, syncing but not validating. | Before uptime falls to 90%, the scripts should have already restarted the `sifnoded` service. If it still keeps failing, the scripts will stop the validator and start the backup node using the validator keys. Logs will be analyzed for the source of the issue. If the backup still fails, a message will be sent to validator chat groups (e.g. Discord) to see why the chain is failing. | Create a new instance with a higher capacity from the latest backup image, depending on the bottleneck (e.g. CPU, RAM, or both). This should take only 3 to 10 minutes depending on the disk size. Update `~/.sifnoded/data` from the running validator to the new instance. Stop the old instance and run the new higher-capacity instance. | We use internal (running in the same machine) and external (calling APIs) Bash scripts to monitor, and more importantly, mitigate the situation. Alerts are set for cases needing human intervention, such as when mitigations did not succeed. The Bash scripts have worked very well in monitoring and mitigation on the other chains we validate on. | Email and Telegram bot. | Data Center | Latest best practices on hardened configurations for exposed services (e.g. SSH), a complete firewall allowing only the required exposure (UFW), DDoS protection from the datacenter provider (GCP, AWS, Hetzner). The OS running the nodes is updated every hour to patch any security vulnerabilities. Total sandbox: the OS runs only the minimal software needed to operate the node, and the instance is dedicated only to the node software. This minimizes the attack surface on the validator. | As long as it will be possible. | When Sifchain becomes a dead project. | We are guild leaders for the NEAR Protocol. We can launch educational campaigns on Sifchain and blockchain in general. | We are longtime Linux sysadmins focused on security and automation. We have just recently turned our attention to blockchain and are in the process of building up our validator portfolio. | |||||
10 | sif1d7vvatc4rqwqf99z90rqf6dlc484p02qjsv2h9 | whataday2day#1271 | Solana, Mina Protocol, Akash Network, Plasm, Crust, Spacemesh, NYM, Robonomics | https://github.com/c29r3/cosmos-discord-faucet https://github.com/c29r3/sentinel-tx-stress-test https://medium.com/@novysf/the-outcome-from-akash-testnet-zero-fee-transaction-attack-5fd4aaa68d97 | I set up monitoring and alerting for all mainnet deployments. There is always a backup node to which I can quickly move the validator. Snapshot sharing system - https://github.com/c29r3/cosmos-snapshots | First, we need to find out the reason for such low uptime and then take measures to eliminate it: check logs, view monitoring data (Grafana), check configuration files. For example, if the problem is a lack of performance on the dedicated server, you need to transfer the validator to another server. If the problem is on the provider's side, it is necessary to change the provider. If there is a problem with the software version, you can switch to another version | Increase ulimit, increase the size of the mempool. If it's garbage transactions with a fee of 0, I'll put a fee limit in app.toml. If resources are scarce, I will move to a more powerful server | Monitoring system: Grafana, Prometheus, Node exporter, custom node exporters | Alerting via Telegram (Prometheus) | Data Center | RPC, REST = localhost. Access to Prometheus metrics is limited to a whitelist of IP addresses. Hetzner DDoS protection. I use containerization as an additional isolation layer. Each project gets a separate user. Fail2ban. I don't use passwords for authorization, only SSH keys | A validator is a business. Any business makes sense as long as it is cost-effective | Lack of profit over a long time interval | Now I'm writing a Telegram bot that will track validators, missed blocks, validator status, new delegations, and so on | I am a technical ambassador for Mina Protocol. I know 2 programming languages (Python, Golang) | |||||
11 | sif1dlhmeawzx72ll7x88k9e8llf2ggrz7u9yk28ch | slayer_hellraiser#5093 | Plasm, Certik, Solana, Oasis, Keep, Subsocial, Regen | Great experience | monitoring and a backup server | restarting the node, and upgrading its hardware if needed | Upgrade it | Zabbix monitoring | Telegram bot | Data Center | all of them | 12+ months | If the profit from it decreases to zero | |||||||
12 | sif1cx05z2y0kwg0t6kf2pvkqnzepq95eg0nfyug0v | B0ban#4075 | Graph, Kusama, Stafi, Darwinia, IPCI, Robonomics, Sifchain, Clover, Plasm, Crypto.com | I participated in test networks as a validator for Secret Network, Sifchain, and Crypto.com | I have set up automatic notification and restart of the node in case of problems; there are also monitoring tools | First, I will look at the system load, check the amount of free space on the server and the server's uptime. Then I will see how stable the blockchain is for all validators. If everything is fine, I will restart the node. If this does not help, I will think about moving to another server | I will increase the capacity of the validator so that it can cope with the load | tools like Prometheus and Zabbix | Zabbix and a Telegram bot | Data Center | All the time while the project is running | when the project stops working | anything you want | experienced validator | ||||||
13 | sif100ezpz94hxqzph3cfxejpaq2gkkrsfn523aqhp | TommyWesley#5661 | Casperlabs, Certik, Skale, Mobilecoin, Coda(Mina), Nucypher | yes, grafana | Data Center | |||||||||||||||
14 | sif15cacq7v7us0ykrqp0t79f6tgulrtkzw4mjhx4e | Tyler34214#4119 | AVA, Matic, Near, Bluzelle, Plasm, Dusty, Spacemesh | Cloud | about 6-24 months | |||||||||||||||
15 | sif1rjylaeyxeswfdnhcpuzh3v60faapyjlge3lp3h | Kusama, Regen, Certik, PolyMath, Agoric, Dusty | Investigation of this problem | performance measures | Grafana, Zabbix | Data Center | 9+ months | ||||||
16 | sif1hfwhl3hxpe8ddrl09059zzd4x9kpwrfsr9nf44 | FredrikMalmqvist#9370 | Scale, Solana, Pocket Network, NuCypher, KEEP Network | Data Center | ||||||||||||||||
17 | sif1jd6e4zqhq7jl57rkdf0mpkrazfrmz945gq94w5 | Olai Olsen#0267 | Dusty, Solana, Plasm, Oasis, witnet, Acala, CasperLabs | Data Center | firewall, non-standard SSH port, SSH keys, etc. | I have some experience participating in testnets (many, all in the last 2 years) | ||||||
18 | sif1eksamtmtx7vtta0m2m9rlzc47s046we4yeka53 | OliGarr#8360 | Centrifuge, Certik, KAVA, Stafi, Mobilecoin, meter, Near | Cloud | ||||||||||||||||
19 | sif1cuqsgxw32tc3t9u2vxnkak2auyxl8aq2r46ffa | Stafi, Kusama, Casperlabs, Certik, Pocket, Akash | Data Center | |||||||||||||||||
20 | sif1ft8ejg5qq4rxfexnlegdp36u5xejxftg3ufh7d | SimonHugo#7540 | Matic, Nest, Oasis, Plasm, Solana, Mina, Skale, Casper | Data Center | ||||||||||||||||
21 | sif1n7g5vm6vgmegtkp8g9jx9j2sfc2dt0mndd3ekm | VilliamSivertsen#1052 | Openlibra, Pocket, Nucypher, Matic, Regen, Potecoin, Mobilecoin, Akash | Cloud | ||||||||||||||||
22 | sif1xgph2jw93ugmrlmz0rrah6lr6papxlkd7htzlq | JanLiamNilsson#7070 | Dock, Pocket, Spacemesh, Plasm, Dusty, Certik, Casperlabs | Data Center | >12 months | |||||||||||||||
23 | sif104lk9kwvk7zhqqzstjtjug3tj8wvzqt2cltfad | Launo.Osku.Arttu#5333 | NYMTech, Sentinel, Skale, Dusty, Plasm, Certik, KAVA, MINA | Cloud | ||||||||||||||||
24 | sif1hy84ze2uam6s7p82wxudd9qqedfedprn3naywk | Solana, NU, Mina, Certik, BlockStack, Near | Data Center | |||||||||||||||||
25 | sif1sykaa5wz4t0p9dq620uhlshpyeslnpqtdydk4k | Certik, KAVA, AKash, Spacemesh, Marlin | Data Center | |||||||||||||||||
26 | sif1uq7nkerr8jkvtnc6nntml2yu068ardlvmn2afz | vanyok_kangaroo#0512 | Hopr testnet | I have little operating experience; I'm still learning. | stop and restart the node, restart the server | maybe I'll move to another server, or ask in Discord | stop and restart the node, restart the server, ask in Discord | I look at the statistics in the explorer, several times a day | I don't have such tools | Cloud | I have not used a firewall for the Sifchain testnet, but I am going to configure ufw in the future. | minimum three months, maximum a couple of years | personal emergencies when I am unable to support the node | I can tell the Russian-speaking community about the progress of the project's development | ||||||
27 | sif16t3g9kl26e58r4dd49qc0p70yqex4rymg76nus | OkunDima#7060 | Darwinia, Plasm, Certik, akash, casperlabs, Skale, XX | Data Center | ||||||||||||||||
28 | sif1ast0jm7uhp2y2xuns7f0wqh958xyqyhw9cvdek | CELO, KAVA, COSMOS, IRIS, MATIC, Polkadot, Kusama, NEAR, The Graph, SECRET-Network, Centrifuge, STAFI, PLASM Network, CERTIK, SOLANA | Our entry into the blockchain world as a validator took place via the Cosmos blockchain and some Gaia testnets. We have expanded our experience with Cosmos SDK-based test networks such as REGEN NETWORK, KAVA, BITSONG, IRIS, Ki-Chain, and Certik. We are now happy to be part of the Sifchain project and to accompany it into the mainnet. | In every project we have at least one fully synchronized hot backup node with priv_validator_key.json and node_key.json in a special folder. | I will switch to the hot backup server. | First set a higher gas price, then consider whether to switch to a larger server model. | Because we don't have time to look at Grafana dashboards any more, we have the relevant parameters monitored by a Python script and reported via a Telegram bot. | Telegram bot, and SMS in case the Telegram bot fails | Data Center | We try to use hardware that meets the respective project requirements. VMs are usually used for testnets. For mainnets we use half or full dedicated servers, as a validator node and one or two sentry nodes per project. We set up our own private network between the validator and sentry nodes and connect it to the public chain. We change the SSH port, only connect via authorized SSH keys, and mostly use ufw or iptables as a firewall. | As long as the project exists | when it becomes uneconomical and our expenses exceed income | we can help by building a German-speaking community | |||||||
29 | sif18upctdzw8kdsuk54z0ze7f4vv89wrqdjkuh8a0 | bestlet#9508 | Bluzelle, Akash Network, Certik, Mina | I've successfully participated in testnets and mainnets of various Cosmos-based chains | I can set up backup nodes and solutions | I'll change hosting to a more powerful one and unjail the validator if it gets jailed | This is unlikely, but I'll change hosting to a more powerful one | I have Prometheus / Grafana monitoring with alerting via Telegram and SMS | I'm alerted by Prometheus / Grafana via Telegram and SMS | Data Center | Hosting is protected from DDoS and by a firewall | forever | if the cost of maintaining the node is higher than the profit | I can translate manuals into Slavic languages | I've participated in all your past testnets and hope to be a successful mainnet validator | |||||
30 | sif1wvq855v7adaflq3e8l36l30n72y5msg48eud0m | aliefaisala#3118 | Currently I am a Genesis member and Technical Ambassador of Mina Protocol, and a validator for Agoric, Nymtech, HOPR, Stafi, Dock, Meter DeFi, Near Protocol, Marlin Protocol, and lastly Pirl. | Operating a Cosmos-SDK chain is simpler than other blockchain frameworks, like Polkadot's Substrate. | I will set up monitoring and alert tools for my node. | Check the logs and make sure my node is not jailed. If jailed, I need to unjail the node. | Check the CPU and memory usage, and upgrade the machine if needed. | I can use Prometheus with Grafana dashboards to monitor my node. | Prometheus periodically extracts values into a time-series database, and it will alert me when my validator node has a problem. | Cloud | DDoS protection and Firewall. | I will keep maintaining and helping the Sifchain network as long as the team rewards me for playing the validator role. | if I cannot pay my Cloud bill. | I can help the community and help newcomers learn more about Sifchain | I am a professional validator and have been working on a lot of other blockchain projects, so I hope I can help the Sifchain network and get selected as a mainnet validator. | |||||
31 | sif1un6rzuv5gpeul673jrgxvl4fr58wrw3jskmajc | DmitrievSerg#4760 | Near, meter, Certik, Skale, XX Network, Celo, AVA etc. | Data Center | ||||||||||||||||
32 | sif1fmsl5zcfhrlwlul4gvyht62vkdmu3eklqjqa0r | concordium, dock, near, marlin, akash, agoric, nucypher, matic, pocket | Cloud | |||||||||||||||||
33 | sif17kvern0jcm73uzaxy86e0rzpmyn66cnwhwgyxu | gladvlad#5053 | BlockStack, Kusama, Certik, Keep, Vega, Skale, Crypto.com, XX | Data Center | ||||||||||||||||
34 | sif1squnxw2ts0uyn8rqx66mwjw8pzyq90452fqvlg | RamazKar#8592 | Certik, Centrifuge, Acala, Kusama, Matic, Pocket, Elrond, Celo | Data Center | ||||||||||||||||
35 | sif1wvrpeykzrvzpmzr32lhcd2606wmwrhw950q8kl | kantartu#4838 | NYM, Near, Certik, Stafi, Akash, Concordium, KAVA | Data Center | ||||||||||||||||
36 | sif15kt23267r6hteqpeds84qdjs3dgzzq2dplflvj | UteGulshat#6116 | AKash, KEep, Nucypher, Certik, Centrifuge, Mina | Cloud | ||||||||||||||||
37 | sif1aqetrprwmjljqq0kxumgsl7u52emkgc8gd7cpv | John88#2911 | Elixxir, Certik, Bitsong, AVA, Bluzelle, Concordium, Mina | Big experience, I know many commands. Bluzelle, Certik, and Bitsong are Cosmos networks | quickly changing the server if needed, backups of keys and the DB, using services (not screen or tmux) | First I must understand the problem through the logs and check the version, then look into usage of CPU, memory, and disk. If I don't find mistakes, I go to Discord and ask other users. I must also create a new server from zero in parallel! | Must choose a better VPS with better hardware! | Grafana, UptimeRobot | yes | Data Center | Non-root user, UFW enabled, SSH access to the server, VPN. Also in Cosmos we can create sentry nodes! | 2 years minimum | Stop, no; paused, maybe: illness, maybe some home troubles | Videos for validators and delegators! | I was a paid validator - it was very interesting and cool! Amazed by Peggy! | |||||
38 | sif18hlpn5kvpcn5wml5cx3s4uquyqm7ter49q8gug | Stafi, Certik, Darwinia, Near, Meter, Bluzelle, Akash, POcket, Agoric, Desmos | Cloud | |||||||||||||||||
39 | sif1n69c52shtqqlfxk6utltyegahtpeu3hha9r8tj | Bagi#7045 | mina | Home | ||||||||||||||||
40 | At the moment, our focus is only Sifchain. But we're running nodes on several testnets (Desmos, Regen Network) scheduled to launch in 2021. | Our engineers are on standby 24/7. We have 2+ years of node operation experience, including Cosmos Hub and Irisnet. We use multiple public sentry nodes with different cloud providers, multi-cloud and multi-region. We use an additional DDoS protection layer (firewall and load balancer). Our objectives are stability, speed, scalability, and security. | To respond to emergencies, we always prepare reserve nodes and use snapshots to restore our node. Our primary focus is to avoid double-signing, and our emergency measures are validated through our team's internal testnet. | First of all, we would check the hardware status of our validator and sentry nodes, for instance, CPU, memory, disk I/O, network I/O, etc. Then, we will check the state of our nodes. Below are a few questions we would ask ourselves. - Are there too many tx in the mempool? If so, adjust the mempool size or increase the transaction fee. - How's the peer status? If there's a shortage, increase the number of peers, or contact other validators who are geographically adjacent to us and connect private peers. The above-mentioned are rudimentary measures. Depending on the situation, we will take additional actions. | Increase the mempool size or temporarily increase the tx fee. If transactions continue to grow, we will upgrade our hardware. | We use two proprietary monitoring tools. The first one runs within our node and checks the server's status. It has a self-reboot function along with an alerting system. The second one runs on an independent server and externally checks the node status. We use Prometheus and Grafana too. Thus, we utilize a three-layer node monitoring system. | VoIP voice call, automated Discord, and Telegram messages | Some nodes will be on IDC, and the majority will run on the cloud. | We use all the above-mentioned security measures. However, due to latency problems, we may not use an HSM. Furthermore, we have strict server access policies. We use a multi-layer access control system, which deters hackers from causing damage even if they access our machine via SSH. | Our general policy is two years. After this period, our team re-evaluates the project and decides whether we will keep operating the validator node. We check the development progress against the project's roadmap. We consider the economic incentives, whether the fees are enough to cover server and security costs, etc. Please note that the actual operation period may vary based on other factors. | We will shut down our node if the project violates local law or the law of the project's domiciled jurisdiction. Also, if there is no development activity, we will consider halting our validator node. | We are still working on onboarding experienced validators to the Sifchain community. Besides that, we are in talks with the Sifchain Korea community manager to come up with some promotions. | We are very bullish on Sifchain and glad to contribute to the project. | |||||||
41 | sif17rug7q4hwnn9840zqm4al2wmhlfl4ymhvl6s76 | OlgaSidir#1109 | Kava, Oasis, Stafi, Dusty, Plasm, Darwinia, some others | Data Center | ||||||||||||||||
42 | sif1eksamtmtx7vtta0m2m9rlzc47s046we4yeka53 | Pocket, Centrifuge, Bluzelle, Vega, Nest Protocol, Certik | Cloud | |||||||||||||||||
43 | sif1ll3uula8mcgk57p4fwkx99vztytr5kfntsf4uc | AntonMatsul#8820 | Certik, Pocket, Keep, Nucypher, Skale, Matic | Data Center | ||||||||||||||||
44 | sif16kevvjla75ncfcrehyus9hlh7ufm5mpa0l2lmf | 3DC, Certik, Matic, XX, Skale, witnet, NYM | Data Center | |||||||||||||||||
45 | sif1t2vr9n7mc6n43zh42wpv2v633tzplpyjv0ymqw | MazulSveta#4282 | meter, near, matic, kava, nucypher, certik, witnet | Cloud | ||||||||||||||||
46 | sif1f6aradyumq7p6gdzynuxlu2xyskft8mr99vnd9 | haroondilshad#6890 | ZRX | The experience has been mostly smooth. The community is always a great help. | I am a developer and run the nodes with other developer friends of mine. We have a DevOps guy on the team as well, and everyone is equipped with the knowledge to keep the nodes running. | First off I'll check if this is an issue with my server and reach out to the community. If it's a problem with just my node, I'll check the netsec groups and make sure my node can reach the internet and other peers. I'll ensure my scripts and daemons don't crash and personally monitor the node for an extended period of time to ensure it's able to validate new blocks | I use Managed Instance Groups (MIGs) which auto-scale based on the load. | At the moment I use Stackdriver for cloud monitoring, which gives me useful analytics and helps me with assessment logging and diagnostics should a problem present itself. | I get Email, Slack and webhook notifications should my validator go down | Cloud | Verifiable integrity with secure and measured boot, Firewall, no open ports, Live migration and patching | I have long-term plans | If there are some unavoidable personal struggles I might stop operating the node | I can help new validators with their questions | ||||||
47 | sif1d5rn577vuue3mq33dakwupkhuvp4wqrl0agqve | evoitenko#9994 | Acala, darwinia, certik, witnet, celo, matic | Cloud | ||||||||||||||||
48 | my address: sif1qyyp6svg083ss7pl0kr4rkzzptxf5zm67vrysn | Wetez#9950 | Cosmos, Kava, Akash, IRISnet, Tezos, Polkadot, Kim | We are Wetez; we developed a wallet when Cosmos went online | 1. Analyze the cause; if it is a server problem, replace it with a better server. 2. If the process is interrupted, a daemon needs to be added. 3. Add monitoring; if block synchronization is not timely, an alarm will be issued | 1. Improve the network 2. Use SSD disks | 1. The cloud platform monitors the server, e.g. CPU, memory, network, etc. 2. Monitoring the difference between the local block height and the explorer's block height | If the server or node is abnormal, we will send out an SMS alarm | Cloud | 1. Use highly secure platforms such as AWS and Aliyun 2. Control which ports are open to the outside world 3. Configure sentry nodes | When the profit cannot cover the cost for a long time | Community operation and promotion | Wetez is the most professional team in the PoS (Proof of Stake) field. | |||||||
49 | sif1hmt5reavj4dwn89r4tx8uha34gcjy6lehyfpyu | silas#1155 | iris, mina, thegraph, matic, oasis, nucypher | I have participated in the Iris testnet Bifrost and completed all of the missions. I have also developed a service based on the Cosmos SDK. | a security response plan and a node health response plan | checking for problems with the VPS, finding the reason the node is down. Monitoring the CPU, RAM, SSD, and port status. | Monitor the CPU, RAM, SSD, and network status. Find bottlenecks that limit transaction processing | SeaLion: a cloud-based Linux server monitoring tool. It monitors all server indicators through a unified dashboard. It only takes a few minutes to complete the setup, and it has an instant alarm function, so that when a problem occurs you can quickly receive notifications and daily data summaries. | Emails and Telegram push | Cloud | Firewalls: close unused ports. SSH: only whitelisted IPs with a private key can connect to the shell. DDoS protection: use tcp_syncookies, limit tcp_synack_retries, and raise tcp_max_syn_backlog. | all the time, and the same on the mainnet | the reward is less than the node cost | organize meetups | I'm from solidstake (solidstake.net), and we have bought tokens from wsifchain (though not with the address I've provided). We are proficient in node operation and maintenance and are committed to supporting the ecosystem of the public chain | |||||
50 | sif14cn9sneeyu3q42486qlynwev5svwr9akl2aa3c | UdenLee#4949 | Ethereum, Polkadot, FusionNetwork, IOST | I have studied and used some Cosmos-based blockchain projects, such as Binance Chain, Dfinance, and IRIS | Build a monitoring service or website to watch my node and ensure it is running well. In an emergency I'll receive a warning email and fix it right away, and if it is a bug in the core code or a blockchain error, I will report it to the team or community. | Check the error, fix the problem, discuss it in the Discord and add an issue on the sifnode GitHub project if needed, and make sure the node runs well for a long time. | Check whether the server resources are enough; if not, upgrade the server CPU, memory, or bandwidth. If that doesn't work, ask for help in Discord until it is solved. | Yes, I wrote a watching service based on Node.js to monitor whether a Linux service or port is available. It hasn't been open-sourced, it just runs on my local server. It is very simple: a schedule that executes every minute, calls a Linux command to check whether the service is running well, and reports errors via email or IM robot hooks. | I will receive the email, robot message, even an SMS; it won't go quiet until I fix it. | Cloud | I will close unnecessary ports, usually just leave port 22 open, forbid root login, use SSH key login instead of passwords, even limit the IPs. Use TLS, and put the domain behind Cloudflare. | From the start of sifnode to the end of the world | When I upgrade the node, upgrade the server, or fix that connection error (the error appeared frequently and I haven't confirmed the cause; I just upgrade the code, download the JSON file, update peers, and restart, and it works) | Contribute code to Sifchain open source, fix bugs if I can, answer the community's questions, help other developers build their nodes, develop some fun little dapps. | Is the number of validator nodes limited, or do we need to compete for a slot? | |||||
51 | sif16266grdv67vc8qsrn32sq2clvl7fpt7a0nna2h | NadyaVas#5798 | Regen network, Keep Network, Certik, Pocket, Nest, Skale | Data Center | ||||||||||||||||
52 | sif1hy9cgqnnssgq060hzl8zd7v8wavhx3halfh42r | Anna1242#2262 | Nucypher, Mina, Certik, Keep network, xx network. | I currently run a node on Certik, which is Cosmos-based. Previously I also participated in testnets such as Bluzelle, Bitsong, and Akash. | I have a Telegram bot set up that alerts me if something goes wrong with my server. I also use a sentry node to protect my validator. | I will eliminate the cause of the problem; it will most likely be associated with a server outage. | I will move the node to a more powerful server, so that such problems do not happen in the future. | I use Zabbix. | I have a bot set up that notifies me of such problems via Telegram. | Data Center | I will have the port open only for sentry node access. Also, the necessary protection for the server will be provided by data center employees. | I'm interested in the project, so as long as it takes. | I won't stop it =) | |||||||
53 | sif12yx7gk3dku33872zazy2e890ew03fnv5yyttzn | haroondilshad#6890 | LISK | None yet | Multiple redundant VMs running in different geo origins. A team in multiple time zones, so someone will always be available to do rescue work. | Check my VMs' health, rummage through logs and monitoring dashboards to make sure everything looks alright. Check netsec groups, firewalls for outgoing connections, resource usage of the VMs, check to see if HEAD is at the latest in the repo, reach out to the community to see if someone else already faced and solved the recurring issues. Manually monitor the VMs for some time. | Check to ensure my node's resources will suffice; I already have auto-scaling Shielded VMs running on Google Cloud which will come in handy. | I've used Google Stackdriver in the past. | Email, Slack and webhook notifications | Cloud | Built-in DDoS protection, Google Shielded VMs, firewall locked down, verifiable integrity with secure and measured boot, trusted UEFI firmware, live migration and patching | Long term | Unavoidable personal issues | Help new joiners with node setups and sysadmin, create PRs in sifnode repos for some features/bugfixes | I'm a Software Engineer by profession currently working in the IoT division of a renowned equipment manufacturer. My team has a DevOps Engineer and Full-stack Engineers based across 3 continents, and we're all technology enthusiasts | |||||
54 | sif1hu8tpf6fmhh4kyj0stqpheh39043wclap7askt | dolphintwo#5196 | we provide a PoS pool | Validator (dolphintwo) on IRIS; ops for Cosmos, Kava, Terra, OKExChain | hot standby for all nodes, sentry nodes set up | check resource consumption and network status, move to the backup node | Improve the hardware configuration | Zabbix to check node status; alerts will email me | email and DingTalk | Cloud | firewalls only allow the program; VPN for logging in to the host | always | the machine cost cannot be covered | community promotion | operator experience on more than 40 blockchains | |||||
55 | Serhip#8971 | Mina protocol | ||||||||||||||||||
56 | sif1uu9m35eev3sg5rs9ry3d6244q7qxw6f3yhgn33 | Audi#4222 | AVA, Concordium, Bluzelle, Near, XXnetwork, Idena, Casperlabs | About one year of experience | Finding mistakes in logs, setting up the service or docker run, setting up the config | Understanding the problem, changing hardware, finding mistakes in the setup | I will search for the problem and ask in Discord | yes, Grafana, Zabbix | on Telegram and email | Home | VPN with a MikroTik firewall, plus Ubuntu UFW | long term | Force majeure situations | I can try anything that is needed | Let's make the world better | |||||
57 | sif19luwftvvl3fdu0m8ehpfyzcxe3ge0w627lje2u | StakeTab#5707 | Solana, Mina, Crust, Avalanche, HOPR | Big experience in Bluzelle, Desmos, Crypto-com | Launching the Sentry node. | I'll move to a more powerful server. | Increase ulimit | Grafana monitoring. | Alerts on Grafana. | Cloud dedicated server | Firewall and VPN | As much as you need | There is no situation in which I stop operating my node | I create high-quality guides. Examples made for other projects: https://icohigh.gitbook.io/mina-node-testnet/ https://icohigh.gitbook.io/keep-nodes/ | Also i am a staking provider staketab.com | |||||
58 | sif1dqjc7mr8cpug4w0l36k20lrxd4pv68avv76r27 | Wanderer#1042 | Oasis, Solana, Crust, Robonomics, the Graph, Nucypher (now stopped), Regen (testnet), Bitsong, Desmos, Sentinel, Certik (testnet closed) | I have extensive experience running Cosmos networks, since 2019 | alerts, notifications, backup | hot-swap backup | scaling up servers | yes, Telegram, self-made scripts, and Grafana tools | Telegram, email | Data Center | DDoS protection, firewall, VPN for sentries | long term as needed | not expected | tbc | I'm a well-experienced validator, known in various testnets and mainnets. I hope to receive Sifchain foundation delegation votes and to foster decentralization. Delegators are welcome, as the nodes are 100% reliable and ready. | |||||
59 | sif1lqhyug025nlsutrccckmnlm33r349cuqlu2fze | haroondilshad#6890 | Lisk | None yet | Multiple redundant VMs running in different geo origins. A team in multiple time zones, so someone will always be available to do rescue work. | Check my VMs' health, rummage through logs and monitoring dashboards to make sure everything looks alright. Check netsec groups, firewalls for outgoing connections, resource usage of the VMs, check to see if HEAD is at the latest in the repo, reach out to the community to see if someone else already faced and solved the recurring issues. Manually monitor the VMs for some time. | I've used Google Stackdriver in the past. | Email, Slack and webhook notifications | Cloud | Built-in DDoS protection, Google Shielded VMs, firewall locked down, verifiable integrity with secure and measured boot, trusted UEFI firmware, live migration and patching | Long term | Personal reasons | Can be an active member of the community, trying to help newcomers get up and running and answer their questions | I'm a Software Engineer by profession currently working in the IoT division of a renowned equipment manufacturer. My team has a DevOps Engineer and Full-stack Engineers based across 3 continents, and we're all technology enthusiasts | ||||||
60 | sif1ll8rk9nc2htvn9tgrk4jpayd6gjc8l2uguhdwn | Alexandr0#3398 | Oasis, Certik, Akash, Solana | a reserve node running in parallel with the main node | upgrade the server (probably CPU and network issues) | switch to the reserve node | I use the Grafana monitoring tool with email alarms | Data Center | at least 1 year | high operating costs that are not covered by validation rewards | ||||||||||
61 | sif18amckc3fhslgl4kly62h3rw5y0q59ew59dw7pe | Max999#8995 | Solana, Avalanche | Hot reserve server with a fully synced running node | Upgrade network and CPU | Switch to the reserve node | Custom bash script that verifies block height and sends an alarm if it differs from the actual block height | Sends email and Telegram messages | Data Center | 3-5 years | I don't plan to do this | |||||||||
62 | sif1nhz30nvsvu3elruxwtueusmdzfxxvjc46amn32 | Zorian#0336 | MAINNETS: Solana, Pocket, Stafi, Darwinia. Participated in testnets: Oasis, Nucypher, KEEP, Casper Labs, Bluzelle, Matic, Celer, Elrond, Stafi, CODA/MINA, Orai | Very experienced | I have an emergency plan | Upgrade server | Upgrade server | Custom monitoring scripts | Email, Telegram, SMS | Data Center | The server provider has DDoS protection | At least 1+ year | In case of no delegations for a long period (3-4 months) | |||||||
63 | sif159p7d04xeuxz4l4letvnsfq6ax79fv0p7k9lgt | galector#7182 | AVAX, SOLANA, STAFI | Data Center | 3-5years | no plans | ||||||||||||||
64 | sif1hj828qezp2scwqgwve8rxqgrnyhp0js6qfx2mu | MuMuOp#1045 | Oasis, Akash | I participated in many tendermint-based blockchain testnets: Oasis, Pocket, Certik, Matic, Bluzelle, Akash | Double reservation server, RAID 5 SSD | Upgrade server | Zabbix | email, sms | Data Center | one year + | - | |||||||||
65 | sif1kyx9g0nrtkt378mltstvatw3eddckye489u | dotslash | None | |||||||||||||||||
66 | sif10ps0r8zym8f9pw4euvs9qgqxjrwyzpyugcvvgc | akshayalenchery#5574 | Casper | 1 year | Monit along with email notifiers | 1. Debug the cause of failure - cloud vs. service 2. Check logs for crash points 3. Add automated restart scripts (Monit already done) For other factors causing a low rate (high/spiking traffic) etc., there will be a different approach | Ideally the blockchain should support Docker Swarm/Kubernetes. In the current system: spawn multiple instances of the service and load-balance using nginx / some other reverse proxy (redundancy). Go for additional independent nodes (we are currently running it like this) | Currently Monit raises an alarm if the service goes down - continue with the same. Alternatively, if Docker is brought into the scene, we will go for a Swarmpit / Traefik deployment with automated management and updates | Email / Discord message | Cloud | - AWS Firewall - Rotating keys - SSH-only access from the org VPN | As long as the chain is alive | Cost of operation mismatch | Architecture, code reviews, modules | Ideal workflow: we take care of infrastructure, security, and uptime and serve as a validator node. Staking is something users can do to increase our voting power while we scale as per the transactions and add more nodes | |||||
67 | sif1vqysw7rv9lf6ryk8xx3t2t2r9pmc5nhmryvhdl | Perfect Stake#5940 | Celo, Kusama, Band, Solana, Crypto.com, Certik, Plasm, Centrifuge | PerfectStake = A reliable staking service provider Our operations consist of dedicated servers located in the specialized and highly qualified well-known data centers around the world, using only the highly secure networking and role-model infrastructure setups with the full regular back-up using the N + M scheme, controlled by the external independent watchdog services. Our team core value is delivering the maximum reliability and security for the various blockchain projects coupled with the maximum transparency for important and respectful delegators. We're operating in many cosmos-based projects and have strong experience. Please refer to: https://perfectstake.com/ | back-up, hot-swap nodes, alarms and tier 1 data centers | back-up and hot-swapping | running the back-up validator | telegram, monitoring tools (prometheus+alarms) | email, sms, telegram message | Data Center | DDOS protection by default, UFW. | as long as required | to be determined at a later stage | can post the twitter reposts and promote the project as applicable | Perfect Stake PoS history started in 2019 from the various testnets and now Perfect Stake supports many of these networks in mainnet. Our team credo is high demand to ourselves and accountability in delivering the maximum reliability and security for the various blockchain projects, with the maximum transparency for our important and respectful delegators. Web: https://perfectstake.com — Twitter: @perfectstake — Email: perfectstake@gmail.com — Github: https://github.com/perfectstake — Keybase: perfectstake | |||||
68 | sif14xuftjs6k3hecsw3g63nelkwr3jff300wq6z6z | beny1234#4111 | Oasis, NuCypher, IRISnet | IRISnet | we are professionals | monitor and restart | change the VPS | yes, shell scripts and systemd | Cloud | SSH keys | forever | having too few delegations | we have a community of hundreds of people | no | |||||
69 | sif1chky4sqt0nqegd2cz8c2gtchtnw9ctu2f546fl | Breather#0955 | Solana, Robonomics, Bitsong, Oasis, and others | yes, I have strong experience | I have 100% uptime and a backup | alerts | backup | yes; email and Telegram alerts | as soon as possible | Data Center | DDoS protection, firewalls, VPN | as long as I receive the delegations from the Sifchain foundation | the level of delegations | |||||||
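
Several of the responses above describe the same pattern: watch the node's block height and push an alert to Telegram or email when it stops advancing (custom bash scripts, Grafana alert rules, etc.). Below is a minimal sketch of that idea, not any respondent's actual tooling. It assumes a Tendermint/Cosmos-SDK node exposing RPC on the default local port 26657; the Telegram bot token and chat ID are hypothetical placeholders.

```python
# Minimal block-height watchdog sketch (illustration only, not a respondent's script).
# Assumptions: Tendermint-style RPC at localhost:26657 (the default), and a
# hypothetical Telegram bot token / chat id used for alerting.
import json
import time
import urllib.parse
import urllib.request

RPC_STATUS_URL = "http://localhost:26657/status"  # default Tendermint RPC endpoint
TELEGRAM_TOKEN = "<bot-token>"                    # hypothetical placeholder
TELEGRAM_CHAT_ID = "<chat-id>"                    # hypothetical placeholder
POLL_SECONDS = 60                                 # how often to check the node


def latest_height() -> int:
    """Read the node's latest block height from the /status RPC call."""
    with urllib.request.urlopen(RPC_STATUS_URL, timeout=10) as resp:
        status = json.load(resp)
    return int(status["result"]["sync_info"]["latest_block_height"])


def send_telegram(text: str) -> None:
    """Send an alert message through the Telegram Bot API."""
    url = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": TELEGRAM_CHAT_ID, "text": text}).encode()
    urllib.request.urlopen(url, data=data, timeout=10)


def notify(text: str) -> None:
    """Best-effort alert; never let a broken alert channel kill the watchdog loop."""
    try:
        send_telegram(text)
    except Exception:
        pass


def main() -> None:
    previous = None
    while True:
        try:
            current = latest_height()
            if previous is not None and current <= previous:
                # Height did not advance since the last poll: the node may be stalled.
                notify(f"Validator alert: block height stuck at {current}")
            previous = current
        except Exception as exc:
            # An unreachable RPC usually means the node process itself is down.
            notify(f"Validator alert: RPC check failed: {exc}")
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()
```

In practice most respondents layer this kind of check on top of Prometheus/Grafana alerting rather than running it standalone; the script only illustrates the alerting loop they describe.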
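The questionnaire's sub-90%-uptime scenario can also be measured directly against the chain, which is effectively what the Prometheus/Grafana setups above do with exported metrics. The rough, unoptimized sketch below shows the underlying check: count how many of the last N blocks carry your validator's signature. It makes the same assumptions as the previous sketch (local Tendermint RPC, v0.33+ field names where commits expose a `signatures` list), and the validator's hex consensus address, as found in the `address` field of priv_validator_key.json, is a placeholder you would fill in. Real tooling would read exported metrics instead of polling each block one by one.

```python
# Sketch: estimate a validator's signing rate over the last N blocks, i.e. the
# "uptime below 90% during the past one thousand blocks" condition in the questionnaire.
# Assumptions: Tendermint RPC (v0.33+ field names) at localhost:26657; the validator's
# hex consensus address is a placeholder taken from priv_validator_key.json.
import json
import urllib.request

RPC = "http://localhost:26657"
VALIDATOR_HEX_ADDRESS = "<YOUR-VALIDATOR-CONSENSUS-ADDRESS>"  # hypothetical placeholder
WINDOW = 1000  # number of recent blocks to inspect


def rpc(path: str) -> dict:
    """Call a Tendermint RPC path and return the 'result' object."""
    with urllib.request.urlopen(RPC + path, timeout=10) as resp:
        return json.load(resp)["result"]


def signing_rate(window: int = WINDOW) -> float:
    """Fraction of the last `window` blocks signed by VALIDATOR_HEX_ADDRESS."""
    latest = int(rpc("/status")["sync_info"]["latest_block_height"])
    start = max(2, latest - window + 1)
    signed = 0
    total = 0
    for height in range(start, latest + 1):
        # The last_commit of block H holds the signatures collected for block H-1.
        commit = rpc(f"/block?height={height}")["block"]["last_commit"]
        total += 1
        if any(sig.get("validator_address") == VALIDATOR_HEX_ADDRESS
               for sig in commit["signatures"]):
            signed += 1
    return signed / total if total else 0.0


if __name__ == "__main__":
    rate = signing_rate()
    print(f"Signed {rate:.1%} of the last {WINDOW} blocks")
    if rate < 0.9:
        print("Below the 90% threshold: time to fail over to the backup node or investigate.")
```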