- Issue created by @heyyo
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Thanks so much for the profiling!
Yep, the BE is known to be slow with many components. We were already aware; @kristen pol provided me with a sample page on Monday too.
The BE is 0% optimized. 32 million function calls already say as much :D That also means that it will be easy to make it significantly faster.
(It might be hard to make it fast enough for all scenarios, but faster than today is clearly trivial.) “Premature optimization is …” and all that :)
I'm SUPER stoked to see that the bulk of that time is going to shape matching and everything that calls it, because that's literally the area I'm working on next! Specifically: [later phase] Support matching `{type: array, …}` prop shapes Postponed. Which is exactly why not having optimized this makes sense: it's not yet feature-complete!
- 🇬🇧United Kingdom longwave UK
I had a hunch that Split model values into resolved and raw Active would largely fix this, and it seems correct: tested with xb_demo, there is a 230x speed improvement in `::clientModelToInput()`, which is by far the largest single section of the flamegraph.
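For context, here is a minimal sketch of why splitting the client model into resolved and raw values takes shape matching out of this hot path. The class, method and array-key names below are assumed for illustration only and do not mirror Experience Builder's actual code:

```php
<?php

declare(strict_types=1);

// Hypothetical sketch only; names and data layout are assumptions.
final class ClientModelConverter {

  /**
   * Before: shape matching runs for every prop, on every conversion.
   */
  public function clientModelToInputNaive(array $clientModel): array {
    $input = [];
    foreach ($clientModel as $propName => $rawValue) {
      // Expensive: compares the raw value against every known prop shape.
      $shape = $this->matchShape($rawValue);
      $input[$propName] = $this->resolveAgainstShape($rawValue, $shape);
    }
    return $input;
  }

  /**
   * After: the model already stores raw ("source") and resolved values
   * separately, so conversion becomes a cheap lookup.
   */
  public function clientModelToInputSplit(array $clientModel): array {
    // Assumed layout: ['source' => [...], 'resolved' => [...]].
    return $clientModel['resolved'];
  }

  private function matchShape(mixed $value): string {
    // Stand-in for the real (far more involved) shape matching.
    return match (TRUE) {
      is_array($value) => 'array',
      is_int($value), is_float($value) => 'number',
      default => 'string',
    };
  }

  private function resolveAgainstShape(mixed $value, string $shape): mixed {
    // Stand-in for resolving a raw value according to its matched shape.
    return $shape === 'array' ? array_values($value) : $value;
  }

}
```

In other words, the expensive matching happens once when values are stored rather than on every model-to-input conversion.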
- 🇮🇱Israel heyyo Jerusalem
Wow, 230x faster, impressive! I will have to try this!
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Split model values into resolved and raw Active is in!