Allow run-tests.sh to report skipped/risky/incomplete PHPUnit-based tests

Created on 28 August 2017, over 7 years ago
Updated 7 March 2023, almost 2 years ago

Problem/Motivation

This is almost a bug, but I'll call it a task.

run-tests.sh can run PHPUnit-based tests.

However, it can't reflect that a test was skipped.

This is due to two factors:

1) simpletest (and thus run-tests.sh) doesn't have a concept of a skipped test.

2) When a PHPUnit test (< v.5) is skipped, only the JUnit report gives a clue, which is that the test doesn't pass, fail, or error out. (PHPUnit v.6's JUnit output will tell us that a skip occurred.)

What run-tests.sh does with this is turn it into a pass, because nothing failed or errored out.

Since functional-javascript tests are skipped when PhantomJS is not available, this means that tests run using run-tests.sh could look like they're passing when in fact they did not run.

Unfortunately, there doesn't seem to be an easy way to turn skipped tests into fails, without adding a test listener.

Thus, we should modify run-tests.sh to look for the special case of skipped tests, so that users will know that their PhantomJS-based tests did not pass because they didn't run.

Since we're doing that, we should work on risky and incomplete status, as well.
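The gap described above can be illustrated with a minimal sketch (not Drupal's actual code; the function name and the sample report are invented): in a JUnit XML report, a skipped test case carries a `<skipped/>` child element, so a consumer that only checks for `<failure>` and `<error>` — as run-tests.sh effectively did — silently counts skips as passes.

```python
# Hypothetical sketch, not Drupal code: counting outcomes from a JUnit XML
# report, treating <skipped/> as its own status instead of a pass.
import xml.etree.ElementTree as ET


def count_outcomes(junit_xml: str) -> dict:
    """Return pass/fail/error/skipped counts from a JUnit XML document."""
    root = ET.fromstring(junit_xml)
    counts = {'pass': 0, 'fail': 0, 'error': 0, 'skipped': 0}
    for case in root.iter('testcase'):
        if case.find('failure') is not None:
            counts['fail'] += 1
        elif case.find('error') is not None:
            counts['error'] += 1
        elif case.find('skipped') is not None:
            counts['skipped'] += 1
        else:
            counts['pass'] += 1
    return counts


sample = """
<testsuite name="Example" tests="3">
  <testcase name="testA" time="0.01"/>
  <testcase name="testB" time="0.00"><skipped/></testcase>
  <testcase name="testC" time="0.02"><failure>boom</failure></testcase>
</testsuite>
"""
print(count_outcomes(sample))
# {'pass': 1, 'fail': 1, 'error': 0, 'skipped': 1}
```

Checking `<skipped/>` explicitly, rather than inferring "pass" from the absence of failures, is the behavior this issue asks run-tests.sh to adopt.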

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes

🐛 Bug report
Status

Needs work

Version

10.1

Component
PHPUnit 

Last updated 1 day ago

Created by

🇺🇸United States mile23 Seattle, WA

  • Needs tests

    The change is currently missing an automated test that fails when run with the original code, and succeeds when the bug has been fixed.

  • Needs issue summary update

    Issue summaries save everyone time if they are kept up-to-date. See Update issue summary task instructions.


Merge Requests

Comments & Activities


  • @mondrake opened merge request.
  • Status changed to Needs review almost 2 years ago
  • 🇮🇹Italy mondrake 🇮🇹

    Rebased after 📌 Introduce TestRun objects and refactor TestDatabase to be testable Fixed and changed to MR workflow.

  • 🇮🇹Italy mondrake 🇮🇹

    @longwave and I discussed on Slack whether working on this issue still makes sense given the upcoming move to GitLab testing. We leaned towards no, but would like to get more feedback.

    It certainly improves the console log output by showing execution time, passes, skips, incompletes, etc.; see the latest MR test log https://dispatcher.drupalci.org/job/drupal_patches/171125/consoleFull

  • 🇬🇧United Kingdom alexpott 🇪🇺🌍

    I'd lean towards a no too.

  • 🇺🇸United States smustgrave

    Would also say no but depends when exactly that switch will happen?

  • Status changed to Closed: won't fix almost 2 years ago
  • 🇮🇹Italy mondrake 🇮🇹

    Four leaning towards no amounts to at least one full no…

  • Status changed to Needs work 5 months ago
  • 🇬🇧United Kingdom catch

    We're relying on run-tests.sh in gitlab CI, and didn't get very far trying to switch to paratest, so this is still relevant.

  • 🇬🇧United Kingdom alexpott 🇪🇺🌍

    @catch but now we have the reports in gitlab CI we are finally seeing which tests are skipped. You can see this in the gitlab reports. Unfortunately it does not tell you why but it is more information.

  • Merge request !9027Closes #2905007 → (Open) created by mondrake
  • 🇮🇹Italy mondrake 🇮🇹

    Opened a new branch, just tried to merge the old MR with 11.x. Will fail badly...

  • Pipeline finished with Failed
    5 months ago
    Total: 259s
    #240792
  • Pipeline finished with Failed
    5 months ago
    Total: 278s
    #240797
  • Pipeline finished with Failed
    5 months ago
    #240805
  • Pipeline finished with Failed
    5 months ago
    Total: 254s
    #240998
  • Pipeline finished with Failed
    5 months ago
    Total: 657s
    #241005
  • 🇬🇧United Kingdom catch

    @alexpott right but we still print the run-tests.sh output in the job logs, and it'd be better if it matched the test reports in gitlab. I always end up going to the jobs about 95% of the time to see what failed.

  • Pipeline finished with Failed
    5 months ago
    Total: 545s
    #242062
  • Pipeline finished with Failed
    5 months ago
    Total: 629s
    #242123
  • Pipeline finished with Failed
    5 months ago
    Total: 438s
    #242244
  • 🇮🇹Italy mondrake 🇮🇹

    In PHPUnit 10+ the JUnit log does not distinguish between skipped and incomplete tests; both are reported as 'skipped'. Also, there is no specific logic around risky tests.

    We can simplify here accordingly.
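As an illustration of the limitation described above (an invented fragment, not actual Drupal output): a test ended via markTestSkipped() and one ended via markTestIncomplete() come out identically in the PHPUnit 10+ JUnit log, so a consumer of the XML cannot tell them apart.

```xml
<!-- Illustrative fragment; class/test names and timings are invented. -->
<testsuite name="Drupal\Tests\Example\ExampleTest" tests="2">
  <!-- This test called markTestSkipped(): -->
  <testcase name="testRequiresOtherDriver" time="0.001">
    <skipped/>
  </testcase>
  <!-- This test called markTestIncomplete(), yet looks the same: -->
  <testcase name="testNotFinishedYet" time="0.001">
    <skipped/>
  </testcase>
</testsuite>
```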

  • 🇮🇹Italy mondrake 🇮🇹
  • Pipeline finished with Failed
    5 months ago
    Total: 189s
    #242585
  • Pipeline finished with Failed
    5 months ago
    Total: 586s
    #242590
  • Pipeline finished with Failed
    5 months ago
    Total: 177s
    #242901
  • Pipeline finished with Failed
    5 months ago
    Total: 705s
    #242909
  • Status changed to Needs review 5 months ago
  • 🇮🇹Italy mondrake 🇮🇹

    This is an issue where the review makes more sense when tests fail :)

    What's being output currently by run-tests.sh in the 'Detailed test results' section is very simpletest-ish, and possibly needs reconsideration.

    1) the 'Group' column in PHPUnit always reads 'Other', so it's useless. I replaced it with the execution time of the single test case.
    2) the 'Function' column in PHPUnit is the name of the test, including the name of the dataset provided by a data provider. I renamed the column to 'Test'.
    3) every reported line adds an empty line meant to carry a message, but in PHPUnit passed and skipped test cases never have one. So here we skip printing anything when the message is missing.

    Also, in HEAD right now, if a PHPUnit test run ends with an 'error' exit code, a duplicate line is reported for the same test class. Fixed here.
    Also fixed: when the PHPUnit runner fails with failures/errors, we were missing the details of which test cases passed/failed/errored.

    Reviews, based on the output of the MR, appreciated.

  • 🇮🇹Italy mondrake 🇮🇹

    Rebased after commit of 📌 Show test run time by class in run-tests.sh output Fixed .

  • Pipeline finished with Failed
    4 months ago
    Total: 673s
    #253916
  • Pipeline finished with Failed
    4 months ago
    Total: 463s
    #266348
  • Pipeline finished with Failed
    4 months ago
    Total: 448s
    #266817
  • Pipeline finished with Failed
    4 months ago
    Total: 864s
    #267189
  • Pipeline finished with Canceled
    4 months ago
    Total: 560s
    #267239
  • Pipeline finished with Success
    4 months ago
    Total: 593s
    #267249
  • 🇮🇹Italy mondrake 🇮🇹

    OK, fully green now.

  • Pipeline finished with Failed
    3 months ago
    Total: 631s
    #289549
  • Pipeline finished with Success
    2 months ago
    Total: 411s
    #305553
  • The Needs Review Queue Bot tested this issue. It fails the Drupal core commit checks. Therefore, this issue status is now "Needs work".

    This does not mean that the patch necessarily needs to be re-rolled or the MR rebased. Read the Issue Summary, the issue tags and the latest discussion here to determine what needs to be done.

    Consult the Drupal Contributor Guide to find step-by-step guides for working with issues.

  • Pipeline finished with Failed
    2 months ago
    Total: 582s
    #306602
  • Pipeline finished with Failed
    about 2 months ago
    Total: 1014s
    #326711
  • 🇮🇹Italy mondrake 🇮🇹

    rebased

  • Pipeline finished with Failed
    about 2 months ago
    Total: 445s
    #327118
  • Pipeline finished with Failed
    about 2 months ago
    Total: 704s
    #327171
  • Pipeline finished with Running
    about 2 months ago
    Total: 693s
    #327211
  • Pipeline finished with Success
    about 2 months ago
    Total: 1031s
    #328135
  • Pipeline finished with Failed
    about 1 month ago
    Total: 285s
    #336561
  • 🇮🇹Italy mondrake 🇮🇹

    rebased

  • 🇮🇹Italy mondrake 🇮🇹

    adding before/after screenshots

  • Pipeline finished with Success
    about 1 month ago
    Total: 6468s
    #336677
  • 🇺🇸United States mile23 Seattle, WA

    This is all great.

    Do we need child issues to fix existing skipped tests that were previously ignored?

  • 🇮🇹Italy mondrake 🇮🇹

    @mile23 the tests were not ignored, actually. They were reported as if they passed, which is the original reason for this issue - and a cause of confusion.

    PHPUnit tests can be marked as ‘skipped’ or ‘ignored’ in the test code, but both end up as ‘skipped’ in the JUnit log - that’s a JUnit limitation AFAICS.

    I do not think there's a need for generic follow-ups to unskip tests - that's very much dependent on the use case and should be decided case by case. For example, database-driver-specific kernel tests are prepared for all drivers every time, but those for drivers other than the currently running database get skipped. By design.

  • 🇺🇸United States mile23 Seattle, WA

    OK, I guess I kind of misspoke... I meant that maybe there are skipped tests that would otherwise fail, or would cause CI to report a failure, only because we've made the skipped and incomplete tests visible.

    Like, in one of those 'after' images, we see a block test marked red. Do we need to file an issue to fix that test?

    I guess if the CI is passing on the PR for this issue then we can decide what to do about them after the PR is merged.

  • 🇮🇹Italy mondrake 🇮🇹

    in one of those 'after' images, we see a block test marked red. Do we need to file an issue to fix that test?

    Aha 😀

    Those screenshots actually captured a random test failure… those won’t go away with this MR 😀
