I sent a PR for documentation purposes. I'm not planning to merge it for now.

It may be something very specific to the Remote codebase, or at least I am not seeing any significant improvements over main when compiling my work project: compile times are pretty much the same, with maybe slightly faster test loading (9.8s vs 10.3s in …).

@michallepicki I could not measure such a large difference in Livebook either. My measurements tell me that this is faster because it avoids the single-core bottleneck of the code server, so the more modules and the more cores, the more likely this will yield positive results. The overall goal is to expose this as an option.
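
One way to probe that scaling claim (a rough benchmark sketch of my own, not from the PR) is to hand a batch of generated modules to `Kernel.ParallelCompiler.compile/1`, the same entry point Mix uses, and time it under different scheduler counts (for example via `elixir --erl "+S N"`). The `BenchMod*` module and file names below are made up:

```elixir
# Rough harness: write 200 trivial modules to disk and time a parallel
# compile of the whole batch.
files =
  for i <- 1..200 do
    path = Path.join(System.tmp_dir!(), "bench_mod_#{i}.ex")

    File.write!(path, """
    defmodule BenchMod#{i} do
      def hello, do: :world
    end
    """)

    path
  end

# Kernel.ParallelCompiler is the entry point Mix itself uses.
{micros, {:ok, _modules, _warnings}} =
  :timer.tc(fn -> Kernel.ParallelCompiler.compile(files) end)

IO.puts("compiled #{length(files)} modules in #{div(micros, 1000)}ms")
```

If the code server really is the serialization point, the timing should plateau as schedulers increase on main but keep scaling on this branch.
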
Have you measured the memory usage of this change? It seems to use significantly more memory than the current behaviour, at least with the project I'm trying. When I compile the project inside an Ubuntu VM with 16 GiB of memory, the OOM killer always reaps the compilation. In htop I see memory usage of up to 21 GiB, whereas the current behaviour barely uses 500 MiB.
Tested with the latest OTP 27 and 28 releases.
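
One rough way to narrow this down (my own sketch, not something from the thread) is to sample the VM's allocation counters while the project compiles. Note that `:erlang.memory(:total)` only covers BEAM-managed memory, while the OOM killer reacts to OS-level RSS, so `htop` remains the ground truth for the 21 GiB figure:

```elixir
defmodule MemPeak do
  # Polls total BEAM-allocated memory every 100ms and prints each new
  # high-water mark in MiB.
  def watch(peak \\ 0) do
    total = :erlang.memory(:total)

    if total > peak do
      IO.puts("peak BEAM memory: #{div(total, 1024 * 1024)} MiB")
    end

    Process.sleep(100)
    watch(max(peak, total))
  end
end

# Sample in the background while the compilation runs in this VM.
spawn(fn -> MemPeak.watch() end)
```

Dropping this into a small `.exs` prelude before kicking off the compile gives a crude high-water mark to compare across branches.
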
(Force-pushed from 732e701 to 3bfb201.)

@exception-one I did not track memory usage before, but I just ran some experiments now and I could not see anything meaningful. I also wouldn't expect it to change so drastically, but perhaps it is unrelated to this commit? Could you please try again with the latest? In the latest, you have to explicitly opt in to interpreted modules in your mix.exs. Then you should be able to compare both modes directly and see whether it is related to interpreted modules or not.

💚 💙 💜 💛 ❤️

With a sample size of one on my local machine, this took us from 43s to 34s. Overall, we've shaved nearly half our compile time from 1.19 to latest Elixir. Thanks much! (Measured with …)

This explores a different approach to executing module definitions, using the evaluator (interpreted) rather than the compiler (compiled). I tried this a long time ago and it was never faster, but I assume a combination of the JIT and other optimizations has made it viable.
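
As a loose, expression-level analogy of the two strategies (my own sketch; the PR itself applies this at the level of module bodies): the evaluator walks the AST directly, while the compiler first turns the source into a BEAM module and then calls into it. `TmpSum` is a throwaway name:

```elixir
source = "Enum.sum(1..10)"

# Evaluator: the expression's AST is walked directly, no bytecode emitted.
{result, _binding} = Code.eval_string(source)

# Compiler: the same expression is compiled into a .beam module first and
# then invoked as an ordinary function call.
[{mod, _beam}] = Code.compile_string("""
defmodule TmpSum do
  def run, do: #{source}
end
""")

IO.inspect({result, mod.run()})
# => {55, 55}
```
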
To try it out, you have to explicitly opt-in to interpreted module in your
mix.exs:Note this does not change the generated artefact in any way. Each function in the module is still compiled and optimized within the generated
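
The actual configuration is elided above, so the option below is invented purely for illustration; the real key name lives in the PR diff. A minimal `mix.exs` along these lines, with the hypothetical `:interpret_modules` flag passed through the standard `elixirc_options` project key:

```elixir
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      # :interpret_modules is a made-up stand-in for the PR's real
      # opt-in flag, shown only to illustrate where it would go.
      elixirc_options: [interpret_modules: true],
      deps: []
    ]
  end
end
```
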
Early experiments are very promising. This mode is 5x faster for Remote's codebase compared to 1.19 and 3x faster than main (which already has other improvements: …).
Next steps: