Reduce type checking overhead #4815
Comments
Tagging @ChrisDodd

I am wondering if we should say that, from some point in the compilation (ideally from the initial type check, but a later point may be needed), the types in the … I believe most passes don't actually need …

There are some other features of … This way we may need to occasionally run the read-only type checker to validate the …
This was the original design: once type inferencing runs, the …

Don't we already use …?

There's also …
After the refmap changes in the frontend, `TypeInference` is the pass that is called most often. It can operate in two modes: read-write and read-only. For one downstream case I noticed that it was invoked 76 times at the top level, and only a few of those invocations were read-write. Each of these sweeps over the input can take up to ~300 ms, meaning that we are spending ~20 seconds in total just on type checking. Given that overall compilation time is ~70 seconds, the type-checking share is quite large. In addition, type checking is run on a subtree during each method call resolution, etc.
While it is possible to adorn expressions with inferred types, the passes are not yet prepared for this and require a `TypeMap` to operate.

One possible way to improve things is to make type checking (read-only type inference) a proper `Inspector`. Despite the name, type checking is still a `Transform`, so it clones the whole IR just to drop the result when nothing was changed. This causes heavy malloc traffic and increased garbage-collection pressure, and `Transform` logic is more complex than that of `Inspector`. This would require some refactoring to allow code reuse between the `Transform` and `Inspector` modes.

Experiments show that we can expect improvements in the 5-10% range of frontend/midend time.