Author: Markus Hinrichs
Editor: Randall Roland
The release of Leap 3.2 was announced in this round table. Furthermore, the State Database's fundamental issue, its extremely high RAM usage, was discussed, and potential short- and long-term fixes, along with their trade-offs, were explored.
At the conclusion of the meeting, the ENF gave a brief summary of the proposed Net Plugin enhancements, and the agenda for the next round table was decided: which statistics the Prometheus Exporter planned for Leap 4.0 should expose for the black box that is nodeos. 13 participants joined the round table this time.
Software development is the main topic of discussion during the weekly EOS Node Operator Round Table sessions. Developers, Block Producers, blockchain engineers, and community members who want to learn more about the EOS development process all benefit from the knowledge these sessions provide.
Frequent sharing and interaction help an ecosystem grow healthily and organically. The EOS Network Foundation has received favorable comments on its progress from BPs and developers. The EOS community knows that its voice is now heard and appreciated, and, last but not least, community concerns are being addressed.
Summary of the Antelope Leap Updates on the way
from Stephen Diesel (ENF, Product Manager of Leap)
| UPDATES | RELEASE TIMEFRAME |
| --- | --- |
| Leap 3.2 final release | released on GitHub |
| System contract updates | on the way |
| Release of DUNE | December 2022 |
At the beginning of the meeting, Stephen announced that Leap 3.2 has been released on GitHub; it is not a consensus upgrade and is therefore optional.
Brian Hazzard agreed to be available in various channels for any questions regarding the upgrade. Next week there will be an update to the Net Plugin Enhancements document, as the last meeting gathered great ideas and defined some new potential features for the backlog.
State Database Trimming Reframed: Too Much RAM Used for Storing State
Michael from EOSUSA could not attend the meeting, but he had suggested its topic: State Database Trimming. The participants agreed that the real problem is that too much RAM is being used. According to Stephen, an RFP is being drafted with the goal of researching the RAM issue.
Trade-off: Performance vs. RAM Size
Kevin Heifner posed the main question: "How much performance are you willing to trade off for RAM size? Are you willing to trade off one block production cycle for loading data into RAM (like a warming block)?" However, this solution carries a high risk of spam.
There is simply a huge demand for RAM that never seems to be met. At the moment, 128 GB of RAM are needed to run the WAX state database without problems, and typical machines do not have room for much more RAM. Hardware that combines strong CPUs with plenty of room for RAM is hard to find; perhaps workstations built for graphic design and animation could meet future requirements.
Short- and long-term opportunities defined in the meeting (quoted)
Short term opportunities
An RFP is being drafted by the Antelope coalition to research this issue.
Make Heap mode startup and shutdown faster
Is there an opportunity to make the tmpfs more out of the box?
Leaving account queries disabled can save on RAM (opportunity for node operator configuration, already possible; see the sketch after this list)
Store account queries to disk (maybe 4 GB of RAM in savings here for roughly 14M accounts on WAX; probably not worth it)
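The tmpfs and account-queries items above are plain node-operator configuration that is possible today. Below is a minimal sketch of what that can look like, assuming a Linux host and the default nodeos data directory layout; the mount path, tmpfs size, and option values are illustrative examples, not recommendations.

```
# Sketch only (assumed Linux host and default nodeos data-dir layout; the
# path and size are illustrative). Keeping the chainbase state files on a
# tmpfs (RAM-backed) mount keeps the default "mapped" mode in memory without
# heap mode's slow startup and shutdown.
sudo mount -t tmpfs -o size=160G tmpfs /path/to/data-dir/state

# Related config.ini options (chain_plugin; verify names and defaults
# against `nodeos --help` for your build):
#
#   database-map-mode      = mapped    # or "heap" / "locked"
#   chain-state-db-size-mb = 131072    # upper bound for the state database
#   account-queries        = false     # leaving this off saves RAM
```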
Long term opportunities
Can smart contracts be incentivized to specify RAM vs Disk storage?
Hardware vendors need to start offering very fast CPU cores with high amounts of RAM
P2P Improvements (Net Plugin) (by Brian Hazzard)
After a brief inquiry from the host, Daniel Keyes, as to whether he could give a short summary of the specific Net Plugin enhancement proposals recently discussed internally, Brian Hazzard quickly touched on the following points and offered to present them in more detail at the next meeting.
Lighter validation could be performed for blocks that are being relayed; this could save time and make relaying faster.
It would be possible to codify that once a block is full (in terms of the CPU time used to build it), it is broadcast and work starts on the next block.
Peers could be auto-configured to optimize latency (BPs do this manually at the moment):
optimize for the producer schedule (which BP comes before, which after?)
optimize in terms of latency
Next week's agenda
A discussion, proposed by Matthew from EOS Nation, on what data should be included in the Prometheus exporter planned for Leap 4.0:
nodeos is like a black box; many node operators have no idea what is happening inside it. There is a request to expose some statistics about it.
Attendees of the node meeting are encouraged to bring their Prometheus wish lists to the next meeting.
Participants (13) of this round table:
Randall Roland | EOSsupport.io
Dario | EOSsupport.io
Kevin Heifner | OCI
Matt Witherspoon | ENF
Brian Hazzard
Jannis | Rakeden
Max Cho | KOREOS
Daniel Keyes | EOS Nation
Stephen Diesel | ENF
Matthew Darwin | EOS Nation
Corvin Meyer auf der Heide | liquiid.io
Patrick Burns | Aloha EOS
Ross Dold | EOSphere
Sources & References
GitHub: Antelope Leap 3.2.0 RC1
Image Credits
Banner by EOS Support Graphics