This repository has been archived by the owner on Oct 16, 2023. It is now read-only.
The code effectively projects the stakes and stakers for the network over the next year. For each day in that year, it fetches information on all of the active stakes; when the network is large, this is exceptionally computationally expensive.
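The cost described above can be sketched as follows. This is a minimal illustration, not the crawler's actual code; `get_active_stakes` is a hypothetical callable standing in for the on-chain query:

```python
from datetime import date, timedelta

def project_stakes(get_active_stakes, start: date, days: int = 365):
    """Project total locked tokens for each day over the coming year.

    ``get_active_stakes`` is a hypothetical stand-in for the on-chain
    query; it returns the stakes active on a given day.
    """
    projection = {}
    for offset in range(days):
        day = start + timedelta(days=offset)
        # One full pass over every active stake *per day*: with ~1800
        # nodes this means 365 x N stake lookups per crawl round.
        stakes = get_active_stakes(day)
        projection[day] = sum(stake["locked"] for stake in stakes)
    return projection
```

The nested iteration (days x stakes) is what makes each scraping round so slow as the network grows.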
Simply commenting it out improved our data collection round time to about 1 minute (down from sometimes 15 minutes 😱) and improved stability; the node count was about 1,800 nodes.
crawler_1 | Scraping Round #613 ========================
crawler_1 | ✓ ... Current Period
crawler_1 | ✓ ... Date/Time of Next Period [0s]
crawler_1 | ✓ ... Latest Teacher [0s]
crawler_1 | ✓ ... Previous Fleet States [0s]
crawler_1 | ✓ ... Network Event Details [0s]
crawler_1 | ✓ ... Known Node Details [0s]
crawler_1 | ✓ ... Known Nodes [31.0s]
crawler_1 | ✓ ... Staker Confirmation Status [31.0s]
crawler_1 | ✓ ... Global Network Locked Tokens
crawler_1 | ✓ ... Top Stakes [0s]
crawler_1 | Scraping round completed (duration 0:01:02).
If we do want to keep that functionality, how can it be optimized?
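One possible direction, sketched here under the assumption that each stake carries its own end date: fetch the active stakes once per round and derive the per-day projection locally, replacing 365 per-day queries with a single snapshot. The `stakes` field names below are hypothetical:

```python
from datetime import date, timedelta

def project_stakes_single_pass(stakes, start: date, days: int = 365):
    """Compute a per-day projection from a single stake snapshot.

    ``stakes`` is a hypothetical list of dicts with ``locked`` amounts
    and ``end_date`` fields; one fetch replaces the per-day queries.
    """
    projection = {start + timedelta(days=d): 0 for d in range(days)}
    for stake in stakes:
        for day in projection:
            # A stake contributes to every projected day until it expires.
            if day <= stake["end_date"]:
                projection[day] += stake["locked"]
    return projection
```

The iteration count is the same in the worst case, but the expensive part (network/contract calls) drops from days x stakes to a single fetch.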
derekpierre changed the title from "Optimization work for the Crawler/Dashboard functionality" to "Optimization work done by the Crawler/Dashboard" on Oct 30, 2020
derekpierre changed the title from "Optimization work done by the Crawler/Dashboard" to "Optimization of work done by the Crawler/Dashboard" on Oct 30, 2020
I think we don't need this functionality, or more accurately, we don't need it at the same frequency as the main crawler loop (which is mainly discovery loop work). For this specific case, a secondary loop that runs daily would be more than enough.
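A secondary loop like the one suggested above could be sketched with a plain daemon thread; this is a minimal illustration, not the crawler's actual scheduling mechanism, and `task` is whatever expensive projection work gets moved out of the main loop:

```python
import threading
import time

def run_periodically(task, interval_seconds: float):
    """Run ``task`` on its own timer, independent of the main crawl loop.

    A minimal sketch: the expensive stake projection could live here
    and run once per day instead of once per scraping round.
    """
    def loop():
        while True:
            task()
            time.sleep(interval_seconds)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```

For a daily cadence, `interval_seconds` would be 86400; the daemon flag ensures the loop dies with the main process.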