Right now there are two ways that performance QA is handled in core:
1. Someone mentions performance in an issue thread and adds a "Needs profiling" tag; a long time goes by, and eventually one of the handful of people with the know-how and inclination profiles the patch - most of the time the patch turns out to be fine. For example, the Twig conversions for D8, which have for the most part been do-able by novices, have been blocked on profiling for months and months, despite heroic efforts by a few individuals and DrupalCon sprinters.
2. Nobody says anything about performance, the patch goes in, and much later somebody testing something unrelated discovers a nasty performance regression (see how https://drupal.org/node/2005970#comment-7691393 turned into https://drupal.org/node/2051847 for an example).
It's pretty haphazard either way. Speed is a killer feature and Drupal is slow - the community needs to show that we seriously care about monitoring and ensuring the performance and scalability of our code base, or we'll never earn that "Enterprise software" badge for ourselves.
We have the hardware and scripts available to automate this stuff; if we can put together a plan and execute it, things could be way more awesome than they are now.
h3. Proposed resolution
Broadly, a list of what we need is:
- Hardware to run the site instances we want to profile - virtual hardware won't be good enough, as the results won't be reliable enough
- xhprof scripts that handle git branches and collating stats - I'm aware of decent offerings put together by @Fabianx, @Cottser and @msonnabaum but I'm sure there are others
- A list of "standard scenarios" that we can use as simulations/proxies for "real world" websites (eg. front page with 50 nodes with up to 4 comments each)
- A "schedule" to run our profiling on, daily? hourly? once for every patch?
- A way of displaying/reporting our results.
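As an illustration of the "standard scenarios" idea, a scenario could be rebuilt reproducibly with drush and the devel module. This is only a sketch: it assumes drush is on $PATH with a configured database, and the exact devel_generate command name varies between Drush/devel versions.

```shell
#!/bin/bash
# Sketch: rebuild one "standard scenario" site from scratch.
# Assumes drush on $PATH and a configured settings/database; the
# devel_generate command name varies between Drush/devel versions.
set -e

SCENARIO_NODES=50      # nodes for the front-page scenario
SCENARIO_COMMENTS=4    # maximum comments per node

build_scenario() {
  drush site-install standard -y      # fresh install
  drush dl -y devel                   # download the devel module
  drush en -y devel devel_generate    # enable content generation
  # Generate 50 nodes with up to 4 comments each
  drush generate-content "$SCENARIO_NODES" "$SCENARIO_COMMENTS" -y
}

# Call build_scenario on the profiling box before each benchmark run.
```

Keeping each scenario as a script like this means every profiling run starts from an identical, reproducible site state.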
h3. Possible proof-of-concept
On a _dedicated_ server, frequently (in a loop):
* Get new code
* Get the list of new commits
* For every new commit:
  * rm -rf sites/default
  * Reset the branch to 8.x
  * Run drush site-install ...
  * Run drush dl -y devel
  * Create 50 nodes and 4 comments via devel
  * Clean up /tmp/*.xhprof to start from a clean state
  * Run (for example) xhprof-kit's benchmark-branch.sh 8.x
  * Run a summary report of XHProfCli over /tmp/
  * Move the data to data/commitid/runs/ (to compare later)
  * Put the summary report on an HTML page (for now)
This could even be done retrospectively, producing a graph of how performance has changed over time.
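The loop described above could be sketched as a shell script along these lines. This is illustrative only: the $REPO and $DATA paths, the five-minute interval, and the assumption that xhprof-kit's benchmark-branch.sh is on $PATH are all placeholders, not settled choices.

```shell
#!/bin/bash
# Sketch of the proof-of-concept profiling loop; paths are illustrative.
set -e

REPO=${REPO:-/var/www/drupal}       # Drupal git checkout (assumed path)
DATA=${DATA:-/var/profiling/data}   # where per-commit results are kept
BRANCH=8.x

# Commits on origin/$BRANCH that have no results directory yet.
list_new_commits() {
  cd "$REPO"
  git fetch origin "$BRANCH"
  for c in $(git rev-list --reverse "HEAD..origin/$BRANCH"); do
    [ -d "$DATA/$c" ] || echo "$c"
  done
}

profile_commit() {
  local commit=$1
  cd "$REPO"
  rm -rf sites/default                 # throw away the old site
  git checkout -f "$commit"            # reset the tree to this commit
  drush site-install standard -y
  drush dl -y devel
  drush en -y devel devel_generate
  drush generate-content 50 4 -y       # 50 nodes, up to 4 comments each
  rm -f /tmp/*.xhprof                  # start from a clean XHProf state
  benchmark-branch.sh "$BRANCH"        # xhprof-kit (assumed on $PATH)
  mkdir -p "$DATA/$commit/runs"
  mv /tmp/*.xhprof "$DATA/$commit/runs/"   # keep raw runs for comparison
}

main() {
  while true; do
    for c in $(list_new_commits); do
      profile_commit "$c"
    done
    sleep 300   # "frequently": every five minutes here
  done
}

# On the dedicated server, start the loop with: main
```

Because results land in data/commitid/runs/, the same script could be pointed at older commits to backfill the historical graph.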