Xx network Economic Tweaks - Realtime Failure Deductions

A lot can be done to improve latency, and I have found the connection between node and gateway to be important (network peering and VPN is fun!).
I have never experienced a power outage, so with a synchronous internet connection that is "somewhat" dedicated (set limits in the router), rewarding high-performance nodes makes a lot of sense to me.
I actually stopped using the machines for other stuff on the side, like I did for a while; this made performance take a noticeable, if not immediately obvious, leap. I deactivated Syncthing and made sure no aggressive anti-virus or rootkit scans were running, which shaved several small percentage points off my timeouts.
Sure, this economic tweak can be seen as punishing low- or medium-performing nodes, and we can all agree that "earning less coin" is not as good as "earning more coin"… but this is a decentralized, high-performance project, and it needs incentives that reward performance and quality of service.
Please activate this tweak.

This may be a coincidence, but in recent days (possibly as long as two weeks), we've had a small number (maybe 3-4) of home-based nodes visibly impact the entire network. Two of them have been having frequent CUDA failures, impacting realtime. I took this screenshot just now; the second-worst node (12.54%) is finally offline, but the other three are happily cMixing.

[image: nodes with high realtime failure rates]

There are also issues with high precomp failure rates among these (but also other) nodes.

[image: nodes with high precomp failure rates]

So maybe it's one of those days, but you can see that realtime is more impacted by home-based nodes than by cloud-based nodes (I checked the ISP column; none are hyperscaler-based). Maybe there are other instances where hyperscaler nodes have more impact, so this may be anecdotal.

But in any case, the result today is that everyone is suffering and there’s nothing we can do.

I have a longer post about this elsewhere that is not directly related to realtime failures, so I'll just say that another suggestion I made to the XX Team there is to consider how to wind down the Multiplier Program, because these problems show it's not just one unlucky node. Today we lost 20% of throughput; it's like being attacked, except that it's probably our own validators.

[image: network throughput drop]

I could make a better argument for this with the help of Excel, but it is my intuition that it's going to be very hard to fix realtime failures as long as fixed and generous subsidies remain in place. Chilling knocks you out for a short while, but as long as you've been around for 6 months or longer, it's very likely that you can easily get reelected because you'll again get that 100'000+ xx multiplier.

The other scenario is that a validator can have a persistently annoying realtime (or precomp) failure rate of, say, 3.8% and still make a decent ROI using elevated commission rates, thanks to the multiplier and the ease of getting elected.

Today it's realtime cMix rounds, tomorrow it could be slow gateway nodes, and then we'll have another discussion about gateway network and database performance. So I'd suggest considering a realtime penalty and an aggregate Multiplier Decay Factor that would impact a node's multiplier, so that nodes that perform poorly over time (weeks) lose the Team Multiplier sooner than the rest. For example, the worst 10% of nodes would lose it in 4-5 months, an average node in 8, and the best in 10. Or deduct that realtime penalty from both the multiplier and cMix earnings to make the reelection of bad nodes gradually more difficult.
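To make the decay idea concrete, here is a minimal sketch of the schedule I have in mind. The function names, percentile cut-offs, weekly evaluation, and linear drawdown are all my own hypothetical choices for illustration, not anything taken from the xx chain runtime:

```python
# Hypothetical sketch of a Multiplier Decay Factor. Nothing here comes from
# the xx network codebase; names, thresholds, and the linear drawdown are
# illustrative only. The idea: the worse a node's trailing failure-rate
# percentile, the faster its Team Multiplier drains to zero.

WEEKS_PER_MONTH = 4.33


def weekly_deduction(failure_percentile: float) -> float:
    """Fraction of the ORIGINAL Team Multiplier removed each week.

    failure_percentile: node's rank by realtime/precomp failure rate over a
    trailing multi-week window (0.0 = best performer, 1.0 = worst).
    """
    if failure_percentile >= 0.90:        # worst 10% of nodes
        months_to_zero = 4.5              # multiplier gone in ~4-5 months
    elif failure_percentile >= 0.50:      # average nodes
        months_to_zero = 8.0
    else:                                 # best-performing nodes
        months_to_zero = 10.0
    return 1.0 / (months_to_zero * WEEKS_PER_MONTH)


def decayed_multiplier(original: float, weeks_elapsed: int,
                       failure_percentile: float) -> float:
    """Remaining Team Multiplier after linear weekly decay, floored at zero.

    For simplicity this assumes the node stays in the same percentile band
    for the whole period; in practice the band would be re-evaluated weekly.
    """
    remaining = original * (1.0 - weeks_elapsed * weekly_deduction(failure_percentile))
    return max(remaining, 0.0)


if __name__ == "__main__":
    # Compare three nodes starting from the same (hypothetical) 100'000 xx multiplier.
    for label, pct in [("best", 0.10), ("average", 0.60), ("worst 10%", 0.95)]:
        left = decayed_multiplier(100_000.0, weeks_elapsed=16, failure_percentile=pct)
        print(f"{label:>10}: {left:>9,.0f} xx multiplier left after 16 weeks")
```

In practice the percentile band would be recomputed every week from fresh failure data, and the same deduction could be mirrored onto cMix earnings; the exact numbers matter far less to me than having some schedule like this at all.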
