- @mondrake opened merge request.
- Status changed to Needs review (6:48pm 7 March 2023)
- 🇮🇹Italy mondrake 🇮🇹
Rebased after 📌 Introduce TestRun objects and refactor TestDatabase to be testable (Fixed), and changed to the MR workflow.
- 🇮🇹Italy mondrake 🇮🇹
@longwave and I discussed on Slack whether working on this issue still makes sense given the upcoming move to GitLab testing. We leaned towards no, but would like to get more feedback.
For sure it improves the console log output by showing execution time, passes, skips, incompletes, etc.; see the latest MR test log: https://dispatcher.drupalci.org/job/drupal_patches/171125/consoleFull
- 🇺🇸United States smustgrave
Would also say no, but it depends on when exactly that switch will happen.
- Status changed to Closed: won't fix (9:42pm 10 March 2023)
- Status changed to Needs work (11:56am 1 August 2024)
- 🇬🇧United Kingdom catch
We're relying on run-tests.sh in GitLab CI, and we didn't get very far trying to switch to paratest, so this is still relevant.
- 🇬🇧United Kingdom alexpott 🇪🇺🌍
@catch but now that we have the reports in GitLab CI we are finally seeing which tests are skipped. You can see this in the GitLab reports. Unfortunately it does not tell you why, but it is more information.
- 🇮🇹Italy mondrake 🇮🇹
Opened a new branch, and just tried to merge the old MR with 11.x. It will fail badly...
- 🇬🇧United Kingdom catch
@alexpott right, but we still print the run-tests.sh output in the job logs, and it'd be better if it matched the test reports in GitLab. I end up going to the job logs about 95% of the time to see what failed.
- 🇮🇹Italy mondrake 🇮🇹
In PHPUnit 10+ the JUnit log does not distinguish between skipped and incomplete tests; both are reported as 'skipped'. Also, there is no specific logic around risky tests.
We can simplify here accordingly.
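For illustration only, a minimal, hypothetical test class showing the two markers that become indistinguishable in the JUnit XML log (both `markTestSkipped()` and `markTestIncomplete()` are real PHPUnit API; the class and reasons are made up):

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical test class, for illustration only.
class JunitLogExampleTest extends TestCase {

  public function testOnSomeOtherDriver(): void {
    // Recorded as <skipped/> in the JUnit XML log.
    $this->markTestSkipped('Not applicable here.');
  }

  public function testStillBeingWritten(): void {
    // In PHPUnit 10+ this is also recorded as <skipped/> in the JUnit XML
    // log, so the two outcomes cannot be told apart from the log alone.
    $this->markTestIncomplete('Assertions still to be added.');
  }

}
```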
- Status changed to Needs review (6:57pm 3 August 2024)
- 🇮🇹Italy mondrake 🇮🇹
This is an issue where the review makes more sense when tests fail :)
What's being output currently by run-tests.sh in the 'Detailed test results' section is very simpletest-ish, and possibly needs reconsideration.
1) The 'Group' column for PHPUnit tests always says 'Other', so it's useless. I replaced it with the execution time of the individual test case.
2) The 'Function' column for PHPUnit tests holds the name of the test, including the name of the dataset provided by a data provider. I renamed the column to 'test'.
3) All reported lines add an empty line; it's meant to carry a message, but in PHPUnit passed and skipped test cases never have one. So here we skip printing anything if the message is missing (see the sketch below).
Also, in HEAD right now, if a PHPUnit test run ends with an 'error' exit code, we get a duplicate line reported for the same test class. Fixed here.
Also fixed: if the PHPUnit runner fails with failures/errors, we lose the details of which test cases passed/failed/errored.
Reviews, based on the output of the MR, are appreciated.
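A rough sketch of the message guard from point 3 (hypothetical function and parameter names, not the actual run-tests.sh code):

```php
<?php

// Hypothetical sketch: print a result row, and a message line only when the
// test case actually carries a message. Passed and skipped PHPUnit test
// cases never have one, so no empty line is emitted for them.
function print_result_row(string $test, string $status, float $time, string $message = ''): void {
  printf("%-60s %-10s %8.3fs\n", $test, $status, $time);
  if ($message !== '') {
    printf("    %s\n", $message);
  }
}
```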
- 🇮🇹Italy mondrake 🇮🇹
Rebased after commit of 📌 Show test run time by class in run-tests.sh output (Fixed).
The Needs Review Queue Bot tested this issue. It fails the Drupal core commit checks. Therefore, this issue status is now "Needs work".
This does not mean that the patch necessarily needs to be re-rolled or the MR rebased. Read the Issue Summary, the issue tags and the latest discussion here to determine what needs to be done.
Consult the Drupal Contributor Guide to find step-by-step guides for working with issues.
- 🇺🇸United States mile23 Seattle, WA
This is all great.
Do we need child issues to fix existing skipped tests that were previously ignored?
- 🇮🇹Italy mondrake 🇮🇹
@mile23 the tests were not ignored, actually. They were reported as if they passed, which is the original reason for this issue - and a cause of confusion.
PHPUnit tests can be marked as 'skipped' or 'incomplete' in the test code, but both end up as 'skipped' in the JUnit log - that's a limitation of the JUnit format AFAICS.
I do not think there's a need for generic followups to unskip tests - that's very much dependent on the use case and should be decided case by case. For example, database-driver-specific kernel tests are prepared for all drivers at all times, but those for drivers other than the currently running database get skipped. By design.
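For illustration, a minimal hypothetical kernel test using the common pattern for driver-specific skipping (`Database::getConnection()->databaseType()` is real core API; the test class and skip reason are made up):

```php
<?php

use Drupal\Core\Database\Database;
use Drupal\KernelTests\KernelTestBase;

// Hypothetical driver-specific kernel test: it is discovered and prepared
// under every database driver, but skips itself unless MySQL is the one
// actually running the test suite.
class MySqlOnlyExampleTest extends KernelTestBase {

  protected function setUp(): void {
    parent::setUp();
    if (Database::getConnection()->databaseType() !== 'mysql') {
      $this->markTestSkipped('This test only runs on the MySQL driver.');
    }
  }

}
```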
- 🇺🇸United States mile23 Seattle, WA
OK, I guess I kind of misspoke... I meant that maybe there are skipped tests that would otherwise fail, or that would cause CI to report failures only because we've made the skipped and incomplete tests visible.
Like, in one of those 'after' images, we see a block test marked red. Do we need to file an issue to fix that test?
I guess if the CI is passing on the PR for this issue then we can decide what to do about them after the PR is merged.
- 🇮🇹Italy mondrake 🇮🇹
in one of those 'after' images, we see a block test marked red. Do we need to file an issue to fix that test?
Aha 😀
Those screenshots actually captured a random test failure… those won’t go away with this MR 😀