- Issue created by @fjgarlin
- 🇪🇸Spain fjgarlin
We should have a sort of before/after to see if it makes any difference.
- First commit to issue fork.
- Merge request !358#3524189 Use fastzip to reduce artifact cache time → (Open) created by jonathan1055
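For context, "fastzip" here refers to GitLab Runner's FF_USE_FASTZIP feature flag, which switches artifact/cache archiving and extraction to a faster archiver and can be set as an ordinary CI/CD variable. A minimal sketch of the kind of change such an MR makes (the variable names are GitLab's documented ones; the exact placement in the templates and the compression-level values are assumptions):

```yaml
# .gitlab-ci.yml sketch: opt every job into the fastzip archiver.
variables:
  FF_USE_FASTZIP: "true"
  # Optional speed/size trade-off; per GitLab's docs these levels only take
  # effect with fastzip enabled ("fastest", "fast", "default", "slow", "slowest").
  ARTIFACT_COMPRESSION_LEVEL: "fast"
  CACHE_COMPRESSION_LEVEL: "fast"
```

On its own the flag changes how quickly archives are built and unpacked; it would not be expected to change the size of the downloaded .zip unless the compression level is also lowered.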
- 🇬🇧United Kingdom jonathan1055
Two "before" runs
https://git.drupalcode.org/issue/scheduler-3445052/-/pipelines/496804
Composer runtime 1 min 27, artifact downloaded .zip was 158MB, unzipped 104,610 items, 717MB
https://git.drupalcode.org/issue/scheduler-3445052/-/pipelines/496884
Composer runtime 1 min 42, artifact downloaded .zip same as above (not surprisingly)
Using this MR
https://git.drupalcode.org/issue/scheduler-3445052/-/pipelines/496898
Composer runtime 1 min 45, artifact downloaded .zip was the same 158MB (unzipped same as above)
Maybe I misunderstood what the effect of this MR would be. Do I need to look at the logs of the subsequent jobs that use the composer artifact?
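If the flag is taking effect, the gain should appear (if anywhere) in the composer job's artifact upload step and in the "Downloading artifacts" step of each job that consumes the artifact, rather than in the artifact size. One way to A/B test it without touching the runner is to scope the flag to a single consuming job; a hedged sketch, where the job name and script are placeholders rather than actual template jobs:

```yaml
# Sketch: enable fastzip for one artifact-consuming job only, then compare its
# artifact download/extract timing against an identical run without the flag.
# "example-lint-job" and its script are placeholders, not template job names.
example-lint-job:
  variables:
    FF_USE_FASTZIP: "true"
  script:
    - vendor/bin/phpcs --version
```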
- 🇪🇸Spain fjgarlin
I think the difference should be noticed in the jobs using the artifacts.
But... I'm confused. It seems that the flag needs to be enabled on the runner (according to the documentation and this issue), as the default value seems to be "false".
But we haven't made any change to the actual runner configuration in over a year (other than the regular updates).
So my confusion actually comes from the core issue 📌 Stop trying to cache node_modules in gitlab jobs (Active), where it says that it "just works": https://www.drupal.org/project/drupal/issues/3521894#comment-16090831. Maybe it's somehow enabled via the GitLab admin UI (I don't have access to this).
I want to say that, given that this MR is so simple, we should probably go ahead and merge it anyway. There are no negative effects, and if (or when) the flag is enabled, we should notice some gains.
RTBC.
- 🇬🇧United Kingdom jonathan1055
I think the difference should be noticed in the jobs using the artifacts.
On the small sample here, all of the linting jobs actually took longer in runtime in the 'after' pipeline.
- 🇪🇸Spain fjgarlin
In that case, let's close this for now and focus on the other issues.
Maybe we can try enabling this feature in the GitLab runner in the future, but for now we're trying to keep changes to a minimum, and this is not a real problem that we are facing right now.
Thanks for the MR and the investigation.
- 🇬🇧United Kingdom jonathan1055
The sample was too small (with too much variation) to be conclusive, so I don't want you to think I am stating that this change did nothing. But it will need more controlled testing and more runs to get better average times when we pick it up in the future.
- 🇪🇸Spain fjgarlin
No worries at all. I know it was a small sample size, but as it requires coordination and possibly changes to the runner, it can wait.