The Decentralization Of Hive Infrastructure

in LeoFinance • 5 months ago

For the last 6 months or so, most users were not impressed by the coding that was taking place on the Hive blockchain. We read the posts put up by @blocktrades each week, ones that make little sense to most of us. Even the parts we grasp are about as inspiring as watching paint dry.

That said, we are starting to see the results. The idea behind optimization and all the steps taken is to make the blockchain run more efficiently. This lowers the cost of operating the chain, which in turn helps decentralize the infrastructure.


We are starting to see some of the results in a couple of posts that appeared over the last few days.

The first is by @apshamilton. In it, he talks about the cost of setting up an API node for around $750. He also goes on to describe setting it up at home.

We then see another one put up by @techcoderx. He goes through, in a video, the details of how to set up a Hive API node on one's residential internet.

With all that is taking place with Big Tech, this is an idea worthy of exploration. The key is to remove as many points of attack as possible. Nodes running on major data centers could be hit, especially if they are mostly with one company.

However, we also do not want to put the entire blockchain on residential Internet service. In the first article, @themarkymark brings up a vapid point.


The problem is that, if the Internet goes down, the replay time takes a number of days. This only grows as the blockchain gets longer.

We also see the situation where, in addition to Internet, power outages could be a problem depending upon where one lives. Ironically, in many areas, power is more variable than Internet (people in California and Florida know this). Of course, this could be mitigated somewhat with battery backup systems as well as if a home is powered by solar.

Few fall under this category so most are dependent upon the centralized grid for their power.

Decentralization Requires A Mixture

In the quest for decentralization, we are better served with more API nodes for people to choose from. Having a mixture is the best approach and, I think, that is the point that @apshamilton is making.

Certainly the "backbone" of the blockchain has to be run on servers with the major data centers. They excel at uptime and have multitudes of backups to prevent loss of either Internet or power.

Nevertheless, having a number of nodes running on residential Internet services is not a bad thing. It helps to decentralize things while giving users more choices. We also see frontends like Peakd implementing automated node switching so if one is down, a node that is operating will be utilized.

Another avenue to pursue is for owners of businesses with commercial Internet services to set up nodes in their offices. Commercial Internet is likely to be a higher priority for providers than residential service, with the possibility of backup power generation depending upon the type of business.

This also opens the door for different applications to run their own nodes. They can set up what is needed for themselves. It allows them to have greater control over what their users experience albeit not being exempt from the same issues.

Ultimately, it comes down to this: how do we decentralize while also maintaining reliability? There are always issues and trade-offs when trying to maximize multiple characteristics. It is like the old decentralization-scalability-security debate: how do you get all 3? So far, you do not.


We obviously need nodes that have as much uptime as possible. Thus, the data center servers are the best option. This is going to provide the reliability that is required, especially as the blockchain grows. We cannot have most nodes down for 3-4 days while a replay is taking place.

Of course, having a number that are run outside the centralized data farms is not a bad idea. These will be up the majority of the time but can have a few times throughout the year when they are down. Having dozens of nodes that fit into this category could serve as a nice auxiliary to the core node system that is in place.

As always, this is all a process. We are not going to wake up one day and suddenly say, "hey, we are decentralized". It is a step-by-step grind, with each activity, hopefully, working in that direction.

The key is the trend that is in place. Hive's infrastructure only went live 10 months ago. We have only been at the process a short time. Sure, we have the legacy from the other chain yet our start was truly in March.

For the most part, the general direction is becoming obvious. As we grow, more is taking place. This includes UIs, witnesses, and API nodes. It is not happening overnight yet it is taking place.

Each idea is worthy of pursuit. It is great to see people setting up these nodes and someone like Marky providing the counterpoints and areas of vulnerability. That is what is required.

We have a lot of room to grow. After all, Ethereum 2.0 already has over 30,000 validators testing that network. Hive is a bit smaller but still moving in that direction.

Like always, we just keep forging ahead each day.

If you found this article informative, please give an upvote and rehive.

gif by @doze


logo by @st8z

Posted Using LeoFinance Beta


In my opinion, Hive could be operated now entirely off home-based nodes (except for the issue of decentralized hosting of images and video, but work is being done here).

Traffic can easily be distributed across multiple API nodes using "high availability" software such as haproxy, and this software can also detect when individual API nodes are failing and can't respond to traffic. This is free software we use now to distribute traffic across multiple internal nodes (we don't do this for performance reasons, but for redundancy, and to make it easier for us to do upgrades without causing an outage during the upgrade time).

Not every home-based API node would have the network bandwidth to run an haproxy node in this setup, but I'm certain that many internet offerings here in the US, and most of the western world, can support a node with the necessary bandwidth. Anyone running such an haproxy node in this scenario would also need to make sure they have an "unlimited data plan" to avoid excessive fees by their ISP.
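To illustrate the pattern blocktrades describes, a minimal haproxy configuration sketch might look something like the following. The addresses, port, and health-check path here are placeholders for illustration, not a documented Hive setup:

```
# Sketch: spread JSON-RPC traffic across several Hive API nodes and
# automatically stop sending traffic to any backend that fails checks.
frontend hive_api
    bind *:8080
    default_backend hive_nodes

backend hive_nodes
    balance roundrobin
    # Probe each node; mark it down after 3 failed checks, up after 2 good ones
    option httpchk GET /health
    server node1 10.0.0.11:8091 check fall 3 rise 2
    server node2 10.0.0.12:8091 check fall 3 rise 2
```

The `check fall/rise` thresholds are what let haproxy quietly route around a home node that drops off its residential connection, then bring it back once it recovers.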

Is there potential for Hive to be sharded like Ethereum is moving towards? I wasn't sure if Hive already was or not. Thanks!

Hive can be sharded if necessary; it's a generalized database technique. It's not sharded now, mainly because it's not needed at this point.

I have 1Gbit Ethernet yet I have a 1TB data cap. A witness node uses about 250-300GB of traffic/month from my observations, an API node far more than that.

I also live in an area where we get snow and occasional bad weather that can knock out the power for 1 minute to as long as 4 days.

More and more ISPs are implementing data caps. Comcast just announced they will be enforcing data caps for everyone; previously it was only in certain states.

While 99% of the time, my connection is fast and reliable, I avoid hosting anything off it due to these issues.

One of the issues I have been noticing lately is that not all API nodes are running the latest version or functioning correctly, but to HAProxy they would return a healthy status and would continue to serve traffic.

Comcast, my home provider, just recently instituted data caps this month, but you can "upgrade" your plan back to unlimited for $30/month.

I think it's $100 for me; I have to look. I don't think we have caps just yet, not till next month I think. I pay for 1Gbit Internet and have the same cap as someone with 50Mbps.

but to HAProxy they would return a healthy status and would continue to serve traffic.

No, that's not necessarily true, and it's not true the way we use it. Haproxy allows you to define your health check.

Do you have it set up to find issues like Anyx's node running an old version? That caused a lot of grief for a few days.

It's not set up that way now, because we use it to check the health of our own hiveminds, and we keep them at least functionally similar at all times (although we sometimes run different code on them simultaneously, to see how they behave differently, especially when testing optimizations).

There are more advanced things that could be done though, such as having the haproxy instance that is distributing load cross-check the responses from different nodes for API calls that should return the same answer from each node. But I think that's overkill for now.
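That cross-checking idea is not something haproxy does out of the box; it would have to live in a separate checker process. A minimal Python sketch of the comparison logic, with hypothetical node URLs and a hypothetical helper name, might look like:

```python
import json

def cross_check(responses):
    """Compare JSON-RPC responses from several API nodes for the same call.

    `responses` maps node URL -> parsed JSON response. Returns the set of
    nodes whose 'result' disagrees with the majority answer.
    """
    # Serialize each result deterministically so nested dicts compare reliably.
    canonical = {
        node: json.dumps(resp.get("result"), sort_keys=True)
        for node, resp in responses.items()
    }
    # Count identical answers and take the most common one as the majority.
    counts = {}
    for answer in canonical.values():
        counts[answer] = counts.get(answer, 0) + 1
    majority = max(counts, key=counts.get)
    # Flag every node that deviates from the majority answer.
    return {node for node, answer in canonical.items() if answer != majority}
```

A checker like this could, for example, flag a node whose `head_block_number` lags the others, catching the "old version, still reports healthy" failure mode discussed above.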

But is this needed? IMO, because we pay witnesses, servers don't need to be super cheap. At least the chain can't die from that angle.

I'm not that technical, but how many resources would some premade smart contracts consume on-chain?

Some examples:

Loan Hive to HBD
W-... for onchain exchange ( would be an awesome dex IMO)

I think some basic smart contracts on Hive would add more value in use cases and token value. Also for RCs.

Servers need to be cheap if you want resistance against infrastructure attacks by large infrastructure providers.

I agree with you.

If I remember right, Steemit pays around 250k a month for multiple nodes. It seems really crazy to me what is possible.

I think now is the time to add more applications to Hive, because over time computing power and internet connections are getting cheaper too. So time should work in Hive's favor.

With a partner, we had been running a witness until March on the other blockchain. It was a server we kept at my partner's location in the factory he owns, with a climate-controlled environment and a backup generator. We also had a backup server there with a second internet connection in case one went down. This was in a country with one of the best and cheapest internet services in the world, so the second connection was never used.

The thing that made us give it up was the cost. As we had a separate contract for the machines, we could see how much we were spending on infrastructure versus what was coming in. A lot of long nights due to failed HFs, and being somewhere around the 70th position, made us decide to unplug the witness server and rent the server to other firms. What I want to say is that we might need to change the payout distribution so that we have more people coming in.

Losing money to run a server or a node is not what most people want to do, so with a more linear distribution we might get more people to run the much-needed machines. For this, the top 20 would need to agree to a huge pay cut.

Posted Using LeoFinance Beta

A witness node is really quite cheap to run nowadays on Hive. IMO, the biggest real "cost" at this point is just the skilled labor to update the software and deal with occasional hardware problems, etc.

Running an API node that can serve traffic is a little more costly than just running a witness, but still pretty cheap to run now, and even in that case I'd argue that it is the personal labor costs that dominate.

Thank you for the comment. I agree with you that the labor is the most costly and time-intensive part. Can we somehow further incentivize those who are running API nodes? I would definitely support a proposal for this. I would write it, but I think a node runner should write it, at least with the sum needed to cover the cost.

Would it be beneficial to make the reward curve for witnesses a little different, so that the block production distribution changes? Or do you see a higher risk that more blocks would be missed by lower witnesses?

I hope that my comments are not seen as criticism, as I think a lot of people are investing a good amount of resources and time into development, and I think that 99% of the people in here want Hive to moon!

Posted Using LeoFinance Beta

At the moment, I think we have an abundance of API nodes, relative to traffic levels. So I think there's enough incentivization now for at least top witnesses to run API nodes. Not all of them do, but again, it's not really needed for all of them to do so now, and they can always make contributions in other ways.

Now the above paragraph is mostly about "now". It's hard to say how things may look in the future, and we may be able to decentralize the infrastructure on a much more massive scale in the future, in a way that would make it less difficult to operate a node in the system, and so that we place less reliance on individual nodes. I think a sort of ideal state would be where the entire system decentralizes down to the level of individual users who all collectively support the infrastructure, and I think it's achievable in the long run.

I don't think it makes sense to change the reward curve for block producers right now, mostly because the witnesses act as elected representatives for consensus changes, and it gets too difficult for voters to track the motivations and opinions of these representatives when too many witnesses are involved in this process.

That, in and of itself, doesn't necessarily mean that we couldn't change the reward curve (as was done once before), but leave the relative number of blocks produced among the top 20 witnesses the same (the relative amount of blocks produced is really important, because that's what drives the forking logic in the case of consensus disagreements). And in that case, it wouldn't even likely change the number of missed blocks, unless we assume that top witnesses would simply stop caring about missed blocks because the financial penalties for missing a block were just so small.

But as far as I can tell, there's no major issue here right now, so with many tasks to take on in the development of Hive, I think we can probably class this one in the category of "if it ain't broke, don't fix it".

Thank you for the time invested in the reply. It underlined again my belief, from our past interactions online and offline, that you have not only the technical know-how but also the social competence to see different points of view and discuss them openly. And I admit that, while there are resources for now, we need more users, and that is everyone's duty.

And in that case, it wouldn't even likely change the number of missed blocks, unless we assume that top witnesses would simply stop caring about missed blocks because the financial penalties for missing a block was just so small.

This explanation is really good and makes a lot of sense.

"if it ain't broke, don't fix it"

This is also fully understandable, as there are other things to focus on, like further development, which novices like me see best through the updates that keep appearing.

Posted Using LeoFinance Beta

The problem could be that we do not understand what blocktrades is doing, but maybe that is only for those of us who come here just to publish and exchange our content. What we do know is that the technology works very well, very, very well.
That said, setting up and installing a node in our house seems quite complicated; it really is very, very technical for a simple mortal.
Well, a few years ago we would never have imagined that a node going down or something like that would affect us, or that we would have a problem if the internet went out. Yet look at today: the electricity and the internet go down and it is capable of stopping the world, since so many supply chains depend on networks anchored to a node.
Imagine how things have changed. Let's hope that, even if we never reach the ETH network in its growth and development, at least we will achieve something; we are already on the way.

Posted Using LeoFinance Beta

Definitely. Setting up a node, either at home or on a server farm, is not for everyone. Few of us have the ability to do that.

Posted Using LeoFinance Beta

Exactly, it is quite complicated and technical, and on top of that there are some costs that, my God... I recently found out that a graphics card needed for some of those things costs around US $900, something that is not available to everyone. But hey, we are still in this world, even if we are coming from the base without much chance of climbing so high. At least for now we are here, learning a lot from this wonderful world.

Posted Using LeoFinance Beta

. . . a few years ago we would never have imagined that if a node fell or something like that, it would affect us or that if the internet went we would have a problem and see today . . .

So much has changed so fast, and for the better. Like going from Beta to Stable! It's a sight to behold! The more widely distributed and decentralized the network, the lesser the potential impact of any given individual failure, which, in truly widely distributed conditions, really does only amount to a proverbial "blip" as the entire ecosystem continues forward without so much as noticing an isolated error message (as far as the network as a whole is concerned). No worries about those individual failures at all! And the economic viability is decided on the individual level too! It's a complete win-win. A wonder to behold! Did I say that already? 😃

Just got to get the specs and HowTo docs out there so interested people can give it a go - I'd recommend it if only for the learning experience!

You are absolutely right. As a learning experience it would be wonderful for those of us who are not involved in this complicated world. Or perhaps it is not so complicated; you just have to dedicate a little time and a couple of readings here and there that let us follow the steps for configuring and coding a node. In the future, we hope that with massive decentralization and web 3.0 the world will be a place where the digital world and the real world converge. For now most of us live in the latter, but little by little we are joining this new order.
Thanks for the encouragement to keep learning these fairly technical things that only require a little effort and dedication.

Posted Using LeoFinance Beta


Thanks sir.

Thank you for your engagement on this post, you have received ENGAGE tokens.

Finding a compromise between reliability and scalability is tough, especially since everyone will be running the same software but with different specs, internet connections, and other factors. There is no easy answer to building a decentralized system with so many varying factors, so trial and error will be required. I do think there should be some way to redirect traffic if a node has issues or goes down. For example, we could periodically check how well a node performs a certain set of actions and, if it fails, flag the node for review.

Posted Using LeoFinance Beta

A while back we added a capability to hive-js (one of the primary libraries used to develop frontends) for it to detect when a node starts failing and allow it to fail over to other nodes. The frontends nowadays also allow you to set which set of nodes you want it to be able to fail over to, and the order in which that should take place. Still, there's probably more that can be done to improve the failover process.
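The failover behavior described here can be sketched in a few lines. This is illustrative Python, not the actual hive-js API; the function names are invented for the sketch:

```python
def call_with_failover(nodes, make_call):
    """Try each API node in the user's configured order until one succeeds.

    `nodes` is the ordered failover list; `make_call` performs the actual
    request against one node and raises on failure. Both names are
    hypothetical, standing in for a real client library's internals.
    """
    last_error = None
    for node in nodes:
        try:
            # First responsive node in the list wins; the order encodes trust.
            return node, make_call(node)
        except Exception as err:  # a real client would catch only network errors
            last_error = err
    raise RuntimeError(f"all configured nodes failed: {last_error}")
```

The key design point is the same one the frontends expose: the user-supplied ordering of `nodes` decides both which node is preferred and where traffic goes when it fails.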

Thanks for letting me know. I have seen a few things fail from time to time. When I was on Leofinance, sometimes it didn't update correctly (I saw it update correctly in PeakD and Ecency). Even when changing the node I was viewing on Leofinance, it would not change, so it might not be a node problem but a Leofinance problem.

Besides that error, there are occasional problems with fetching delegations, and Hive Engine issues. Sometimes they last longer, and other times they are only around for about 5 minutes.

Posted Using LeoFinance Beta

Not sure why it's not switching on Leofinance, but one possibility is that they don't have it configured to auto-failover to other nodes (each frontend can configure which API nodes it wants to "trust").

Hive Engine issues are outside the scope of what hivemind does, as it is a distinct 2nd-layer service with separate infrastructure established for it. I believe they're currently working on decentralizing that infrastructure and making it more robust against single-point failures.

The more nodes we have the better, as we become less reliant on what we already have. This is needed going forward, as I read somewhere that we could scale up to 6 or 10x what we have already (can't remember exactly) with what we have in place. This is obviously not enough, so more infrastructure is required, and it seems to be constantly being added.

Posted Using LeoFinance Beta

At this point, the software is about 10x more scalable than it was. But also note that is 10x over its previous capability, not over what it's currently serving. So its current scalability is much more than 10x current traffic.

Thank you for clearing that up as it didn't quite make sense what I was reading and this does. Good to know this as we are going to need it one of these days and thank you for replying.

Posted Using LeoFinance Beta

Strong community, strong foundation to build on. Cool to be on Hive at this time.

I looked into setting up a Hive node, and found the exact quote you have about home internet and decided to drop the idea.

Posted Using LeoFinance Beta

Working for a school district myself, I have been kicking around the idea of running a node or something similar on one of my servers that has spare processing power available. My biggest issue is I am not sure how the intermediate school district that handles our Internet would appreciate the extra traffic. Additionally, since Hive is a "censorship" free blockchain, we could get into some hot water over the content on the chain being stored on our servers. All of that being said, I am a little surprised with the growing number of blockchain groups at the University level, there aren't more higher education institutions running nodes. Maybe they just need to be approached about it. I know in the past universities and colleges have been more than happy to share open source FTP repositories on their servers for the public to access. You can still find many Linux distros available for download on collegiate servers across the country.

Posted Using LeoFinance Beta

I almost did a similar thing to what you suggested in 2017-2018. I worked for the university's "student" IT department and had access to the servers and hundreds (maybe thousands?) of computers. I was talking with some supervisors, who were on board with the idea, and we found out that the university had a policy stating that non-university nodes were not allowed to be run, and that any crypto mined or money made was subject to university confiscation and possible expulsion/firing of the individual(s) involved.

I know a school district is going to approach this differently than a university would. I would absolutely get approval prior to going through with it. I'd also imagine you would be able to secure the stored information, which could help address their concerns about the stored content.

Posted Using LeoFinance Beta

That is very interesting. I will have to go dig through our policies to see if we have anything in place. I manage all of it and do most of the policy stuff, but we do have some boilerplate items at the district level that might apply here and that I should probably be aware of.

Posted Using LeoFinance Beta

LOL, I am not sure that using the school district's equipment for a Hive node is the best idea or that it would be looked at favorably. I would believe they do not want personal stuff running on their servers.

I am not sure most universities want anything that is censorship resistant. They talk about free speech but are the first to shut it down.

Posted Using LeoFinance Beta

Maybe UC Berkeley? They have always been a little crazy there :) I run the tech department so quite honestly, no one would really ever know unless we got flagged for the traffic or the content. As far as resources go, I have the hardware to spare. As I think about it more, it is probably something I will not do, but I'd love to be able to one day. It would be cool to have a more active role in the blockchain.

Posted Using LeoFinance Beta

We also see the situation where, in addition to Internet, power outages could be a problem depending upon where one lives. Ironically, in many areas, power is more variable than Internet (people in California and Florida know this). Of course, this could be mitigated somewhat with battery backup systems as well as if a home is powered by solar.

I have neither stable internet nor constant power. So even though I am interested in running a node, it would literally be impossible for me unless I get a generator plus better internet (which is a big problem since I live in a rural area). So the $750 isn't really accurate for all members willing to set up a node.

Posted Using LeoFinance Beta

Compared to 5 or 6 months ago, things have really improved in a broader sense.

This is essential not only for us, but also for anyone who is considering getting on that same boat.

Posted Using LeoFinance Beta

Hi @wiseagent

Sorry for interrupting. Could I ask you to check out your discord DM?

Little by little the bees build the hive.
Providing a node is different from being a witness, right?
Node providers won't add blocks to the chain but just make part of the frontend available for Hive users.

Posted Using LeoFinance Beta

There are API nodes and witness nodes.

Some of the witnesses run both, but they serve different functions. API nodes are what the apps tie into, providing the data they need.

Witness nodes validate the blocks the blockchain produces.

Posted Using LeoFinance Beta

Bang, I did it again... I just rehived your post!
Week 40 of my contest just started; you can now check the winners of the previous week!

Sorry, out of BEER, please retry later...

It is really amazing how complex the universe of this blockchain is. We only enjoy the sharing of ideas and texts, images, videos... but the technical aspects sound kind of complex, for example the nodes on residential internet. I don't know how important internet stability is, but some places in the world don't have optimal service.
Well, there is a lot to learn about.
Thank you for this information!

Fantastic!

brings up a vapid point.

I think you meant to put Valid! ;)

good post ! :)

It's amazing how deep and a bit complicated the world of blockchain is, for those of us who only enjoy sharing texts and ideas. Thank you very much for this information.

At some point, I'd imagine hobbyist nodes with minimum specs aren't going to cut it anymore for some applications within this ecosystem, especially where DPoS is concerned. It's good to see at least some bare metal out there, but just because the software can run on a set of hardware doesn't mean it will run well.

I also shudder to think how much infrastructure is running on the same VPS providers, or on hardware that the operator can barely afford to maintain and lifecycle. You could drop a datacenter-grade server in a colo facility where half a dozen ISPs meet; I'm talking redundancy, high availability, etc. At some level of growth, you'd think these would become major considerations. I.e., it doesn't matter how popular you are as a node operator if it takes 1200ms of latency to reach your node, or if your 5400 RPM drives aren't quite cutting it anymore. It should be a huge consideration with DPoS, and I'm sure it eventually will be if it isn't already.