As announced in this post, we are close to having a full deployment of the P2P network for Hive Engine nodes. This will allow anyone to set up their own Hive Engine node, participate in our consensus/verification mechanism, and earn witness rewards for doing so. At the same time, any participating node can then be used for querying sidechain data.
You may recall our previous work towards this effort, which stalled when the Steem/Hive split occurred. At the time, the testnet had healthy participation, and while the witness mechanism was working, the hashes between participants did not match.
We are establishing a second run of the P2P testnet which will run on mainnet transactions, but with the witness contracts deployed internally. This will not affect the primary node and will allow us to do a dry run / dress rehearsal before deploying P2P fully.
The changes made substantially resolve the hash calculation issues that plagued the previous testnet, and we are already operating a 3-node version of this test, which has been used to fix several other bugs. Based on these results, we expect to run this testnet for 2 weeks before deploying fully.
The timeline is as follows:
- We will have the new version of the core node tagged for release and deployed to the primary node, where we will take a snapshot of the DB. This is scheduled for the evening of 1/18 EST.
- We will then launch the second public "testnet" described above and allow anyone to join and make sure that it behaves as expected. Instructions can be found below.
- After two weeks of stable runs and verifying the data, we will then proceed to launch the P2P network for real.
A significant reason for the hash discrepancies during the previous testnet was that the node was not robust to abnormal termination and could corrupt the database if not exited cleanly. This was especially the case when pm2 was used with the wrong parameters (by default, pm2 kills all forked processes almost immediately, without allowing a clean exit).
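For anyone running a node under pm2, the relevant setting is `kill_timeout`, which controls how long pm2 waits after sending SIGINT before force-killing the process. Below is an illustrative ecosystem config; the app name and entry script are assumptions, not the actual Hive Engine settings:

```javascript
// ecosystem.config.js -- illustrative pm2 configuration.
// 'hive-engine-node' and 'app.js' are placeholder names, not the
// project's real config. The important part is kill_timeout: pm2's
// default grace period is short, so a node interrupted mid-block
// may not get to finish its database writes.
module.exports = {
  apps: [{
    name: 'hive-engine-node',
    script: 'app.js',
    // Allow up to 30 seconds between SIGINT and SIGKILL so the node
    // can finish processing the current block and close the DB cleanly.
    kill_timeout: 30000,
  }],
};
```

The process itself must also handle SIGINT and flush its state before exiting; the timeout only gives it the opportunity to do so.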
My recent work eliminates that possibility by guaranteeing that any updates to the database while processing a block are committed on an all-or-nothing basis. My backup node with these changes has tailed the primary node, replicating its hashes and verifying data consistency, so I am confident that a second run of the testnet will be a lot smoother. The same test also surfaced bugs in how the primary node handles hash updates, and detected cases where the data had been tampered with, resulting in hash differences.
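The all-or-nothing idea can be sketched with a minimal in-memory model (illustrative only; the real node relies on its database's transaction support, and `applyBlock` is a hypothetical name, not a function from the codebase):

```javascript
// Minimal sketch of all-or-nothing block application. Every update for
// a block is made against a staged copy, and the live state is only
// replaced once the whole block has been processed without error.
function applyBlock(state, operations) {
  // Work on a copy so a failure mid-block leaves `state` untouched.
  const staged = { ...state };
  for (const op of operations) {
    op(staged); // any operation may throw
  }
  // Commit: every operation succeeded, so publish the staged state.
  return staged;
}

// Usage: the second operation throws, so no partial update survives.
const state = { balance: 100 };
let result = state;
try {
  result = applyBlock(state, [
    (s) => { s.balance -= 30; },
    (s) => { throw new Error('bad op'); },
  ]);
} catch (e) {
  // Block rejected; `state` still holds the pre-block values.
}
// state.balance is still 100
```

If the process is killed partway through, the staged changes are simply lost and the committed state remains consistent, which is why restart-after-crash no longer corrupts the database.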
This post will be updated with instructions soon. Please indicate your interest in joining the public test in the Hive Engine Discord, or DM me at eonwarped#2295.