
feat(API): Split gas component cron jobs up into different files #1372

Merged · 17 commits into master · Jan 14, 2025

Conversation

nicholaspai (Member) commented Jan 14, 2025

Updating the gas costs and gas prices in the same cron job makes the cache resets slower due to the memory constraints of a single file, and when updating the gas price every 5s, this makes a difference.

This PR splits out the gas cost updates, which run on 10s and 30s intervals for L1 data fees and native gas costs respectively, from the gas price updates so that the gas price updates can be closer to 5s. Currently it's anywhere from 5s to 10s.

Additionally, this PR adds try-catches around the `await cache.set()` calls so that a failure due to an upstream RPC call (e.g. I've seen this happen a lot with linea_estimateGas calls) doesn't take down the cron job for the full run. A minimal sketch of this pattern is shown below.

Finally, there was an issue with the current cron job where all combinations of chainId+outputToken were repeated many times, leading to tons of wasteful computation. This PR uses `Set`s to make sure costs are computed once for each destinationChain+outputToken combination.
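A minimal sketch of the try-catch pattern described above (`safeCacheSet` and its parameters are illustrative stand-ins, not the PR's actual helpers):

```ts
// Wrap each cache update so one failed upstream RPC call doesn't abort the
// whole cron run. Names here are hypothetical.
async function safeCacheSet(
  cache: { set: () => Promise<void> },
  label: string
): Promise<void> {
  try {
    await cache.set();
  } catch (err) {
    // e.g. a linea_estimateGas failure: log it and move on so the
    // remaining cache updates in this run still happen.
    console.error(`Failed to update cache for ${label}:`, err);
  }
}
```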


const gasCost = await gasCostCache.get();
if (utils.chainIsOPStack(chainId)) {
const cache = getCachedOpStackL1DataFee(depositArgs, gasCost);
try {
nicholaspai (Member, Author):

This try-catch is added around the `cache.set()` call.

if (diff >= maxDurationSec * 1000) {
break;
}
try {
nicholaspai (Member, Author):

This try-catch is added around the `cache.set()` call.

// The above promises are supposed to complete after maxDurationSec seconds, but there are so many of them
// (one per route) that the last one can run quite a bit longer, so force the function to stop after maxDurationSec
// so that the serverless function can exit successfully.
await Promise.race([cacheUpdatePromises, utils.delay(maxDurationSec)]);
nicholaspai (Member, Author):

Force the many promises to stop after 60s. There are so many of them that this Promise.all wasn't completing in under 90s before, even though each promise was designed to finish in 60s.
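A hedged sketch of the Promise.race pattern being described (the `delay` helper mirrors the `utils.delay` referenced in the snippet; the handler's real promises and duration are assumed):

```ts
// Resolve after the given number of seconds, mirroring utils.delay.
const delay = (seconds: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, seconds * 1000));

async function runWithDeadline(
  cacheUpdatePromises: Promise<unknown>, // e.g. a Promise.all over all routes
  maxDurationSec: number
): Promise<void> {
  // Whichever settles first wins: either every update finishes in time, or
  // the deadline fires and the serverless function can exit successfully.
  await Promise.race([cacheUpdatePromises, delay(maxDurationSec)]);
}
```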

// Set lower than TTL in getCachedNativeGasCost. This should rarely change so we should just make sure
// we keep this cache warm.
const updateNativeGasCostIntervalsSecPerChain = {
default: 20,
nicholaspai (Member, Author):

Lowered this to 20s from 30s. Since there are so many routes to update, the real latency of updating this per route is quite a bit higher (5-10s) than whatever is set here. We want to make sure this cache gets reset once per run so that it stays warm. The TTL is 120s, so one update per run should be totally fine.
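For illustration, the interval/TTL relationship described here might look like the following (the constant name and `Record` shape are assumptions, not the PR's actual code):

```ts
// The update interval must stay below the cache TTL so the entry is
// refreshed before it expires; both values here follow the comment above.
const NATIVE_GAS_COST_TTL_SEC = 120;

const updateNativeGasCostIntervalsSecPerChain: Record<string, number> = {
  default: 20, // well under the 120s TTL, so one update per run keeps the cache warm
};
```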

nicholaspai changed the title from "feat(API): Split part of cron-cache-gas-prices into cron-cache-gas-costs" to "feat(API): Split gas component cron jobs up into different files" on Jan 14, 2025

const cacheUpdatePromise = Promise.all(
mainnetChains
.filter((chain) => utils.chainIsOPStack(chain.chainId))
nicholaspai (Member, Author):

Added this filter here since we don't need to update L1 data fees for non-OP-stack chains.

availableRoutes
.filter(({ destinationChainId }) => destinationChainId === chainId)
.forEach(({ destinationToken }) => {
if (!destinationTokens.has(destinationToken)) {
nicholaspai (Member, Author):

Using a `Set` here cuts down computation a lot for duplicate routes.
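A self-contained sketch of the Set-based deduplication (the `Route` shape and function name are assumptions based on the snippet above):

```ts
type Route = { destinationChainId: number; destinationToken: string };

// Collect each destination token at most once per chain, so duplicate
// routes don't trigger duplicate gas cost computations.
function uniqueDestinationTokens(
  availableRoutes: Route[],
  chainId: number
): Set<string> {
  const destinationTokens = new Set<string>();
  availableRoutes
    .filter(({ destinationChainId }) => destinationChainId === chainId)
    .forEach(({ destinationToken }) => destinationTokens.add(destinationToken));
  return destinationTokens;
}
```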

}
try {
await cache.set();
updateCounts[chainId][outputTokenAddress]++;
nicholaspai (Member, Author):

This updateCounts map was a big help in debugging: it showed I was sending tons of duplicate queries for chainId+outputToken combinations.
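A sketch of what such a debugging counter can look like (the nested-map shape is an assumption):

```ts
// Count cache updates per chainId + outputToken; if any count exceeds 1 in
// a single run, that combination is being recomputed wastefully.
const updateCounts: Record<number, Record<string, number>> = {};

function recordUpdate(chainId: number, outputTokenAddress: string): void {
  updateCounts[chainId] ??= {};
  updateCounts[chainId][outputTokenAddress] =
    (updateCounts[chainId][outputTokenAddress] ?? 0) + 1;
}
```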

dohaki (Contributor) left a comment:

This makes sense to me

nicholaspai merged commit b42d17f into master on Jan 14, 2025 · 9 checks passed