Allow run-tests.sh to report skipped/incomplete PHPUnit-based tests

Created on 28 August 2017, about 7 years ago
Updated 28 August 2024, 22 days ago

Problem/Motivation

This is almost a bug, but I'll call it a task.

run-tests.sh can run PHPUnit-based tests.

However, it can't reflect that a test was skipped.

This is due to two factors:

1) simpletest (and thus run-tests.sh) doesn't have a concept of a skipped test.

2) When a PHPUnit (< v.5) test is skipped, only the JUnit report gives a clue, which is that the test neither passes, fails, nor errors out. (PHPUnit v.6's JUnit output does tell us that a skip occurred.)

run-tests.sh turns this into a pass, because nothing failed or errored out.
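The detection gap can be sketched with a short, hypothetical parser over a JUnit XML report (the XML sample is invented for illustration, not real run-tests.sh output). A `<testcase>` element with no child element is a pass, so a runner that only checks for `<failure>` and `<error>` silently counts skips as passes unless it also looks for `<skipped>`:

```python
import xml.etree.ElementTree as ET

# Invented minimal JUnit report: one pass, one failure, one skip.
JUNIT_XML = """
<testsuite name="ExampleTest" tests="3">
  <testcase name="testPasses" time="0.01"/>
  <testcase name="testFails" time="0.02">
    <failure message="assertion failed"/>
  </testcase>
  <testcase name="testSkipped" time="0.00">
    <skipped/>
  </testcase>
</testsuite>
"""

def tally(xml_text):
    """Classify each <testcase> by the child element it carries, if any."""
    counts = {"pass": 0, "fail": 0, "error": 0, "skipped": 0}
    for case in ET.fromstring(xml_text).iter("testcase"):
        if case.find("failure") is not None:
            counts["fail"] += 1
        elif case.find("error") is not None:
            counts["error"] += 1
        elif case.find("skipped") is not None:
            counts["skipped"] += 1
        else:
            # No child element: this is how a pass appears in JUnit XML.
            counts["pass"] += 1
    return counts

print(tally(JUNIT_XML))  # {'pass': 1, 'fail': 1, 'error': 0, 'skipped': 1}
```

Drop the `<skipped>` branch and the skipped test falls through to the pass bucket, which is exactly the behaviour described above.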

Since functional-JavaScript tests are skipped when PhantomJS is not available, tests run using run-tests.sh can look like they are passing when in fact they never ran.

Unfortunately, there doesn't seem to be an easy way to turn skipped tests into failures without adding a test listener.

Thus, we should modify run-tests.sh to look for the special case of skipped tests, so that users know their PhantomJS-based tests did not pass because they did not run.

While we're at it, we should handle the risky and incomplete statuses as well.

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes

🐛 Bug report
Status

Needs review

Version

11.0 🔥

Component
PHPUnit 

Last updated 10 minutes ago

Created by

🇺🇸United States Mile23 Seattle, WA

  • Needs tests

    The change is currently missing an automated test that fails when run with the original code, and succeeds when the bug has been fixed.

  • Needs issue summary update

    Issue summaries save everyone time if they are kept up-to-date. See Update issue summary task instructions.


Merge Requests

Comments & Activities


  • @mondrake opened merge request.
  • Status changed to Needs review over 1 year ago
  • 🇮🇹Italy mondrake 🇮🇹

    Rebased after 📌 Introduce TestRun objects and refactor TestDatabase to be testable Fixed and changed to MR workflow.

  • 🇮🇹Italy mondrake 🇮🇹

    @longwave and I discussed on Slack whether working on this issue still makes sense given the upcoming move to GitLab testing. We leaned towards no, but would like to get more feedback.

    For sure it improves the console log output by showing execution time, passes, skips, incomplete etc., see latest MR test log https://dispatcher.drupalci.org/job/drupal_patches/171125/consoleFull

  • 🇬🇧United Kingdom alexpott 🇪🇺🌍

    I'd lean towards a no too.

  • 🇺🇸United States smustgrave

    Would also say no but depends when exactly that switch will happen?

  • Status changed to Closed: won't fix over 1 year ago
  • 🇮🇹Italy mondrake 🇮🇹

    Four leaning-no opinions add up to at least a full no…

  • Status changed to Needs work about 2 months ago
  • 🇬🇧United Kingdom catch

    We're relying on run-tests.sh in gitlab CI, and didn't get very far trying to switch to paratest, so this is still relevant.

  • 🇬🇧United Kingdom alexpott 🇪🇺🌍

    @catch but now that we have the reports in GitLab CI, we are finally seeing which tests are skipped. Unfortunately the reports do not tell you why, but it is more information.

  • Merge request !9027Closes #2905007 → (Open) created by mondrake
  • 🇮🇹Italy mondrake 🇮🇹

    Opened a new branch, just tried to merge the old MR with 11.x. Will fail badly...

  • Pipeline finished with Failed
    about 2 months ago
    Total: 259s
    #240792
  • Pipeline finished with Failed
    about 2 months ago
    Total: 278s
    #240797
  • Pipeline finished with Failed
    about 2 months ago
    #240805
  • Pipeline finished with Failed
    about 2 months ago
    Total: 254s
    #240998
  • Pipeline finished with Failed
    about 2 months ago
    Total: 657s
    #241005
  • 🇬🇧United Kingdom catch

    @alexpott right but we still print the run-tests.sh output in the job logs, and it'd be better if it matched the test reports in GitLab. I end up going to the jobs about 95% of the time to see what failed.

  • Pipeline finished with Failed
    about 2 months ago
    Total: 545s
    #242062
  • Pipeline finished with Failed
    about 2 months ago
    Total: 629s
    #242123
  • Pipeline finished with Failed
    about 2 months ago
    Total: 438s
    #242244
  • 🇮🇹Italy mondrake 🇮🇹

    In PHPUnit 10+ the JUnit log does not distinguish between skipped and incomplete tests; both are reported as 'skipped'. Also, there is no specific logic around risky tests.

    We can simplify here accordingly.
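A hedged sketch of that simplification, in Python rather than the MR's actual PHP (names and the XML sample are illustrative): assuming PHPUnit 10+ reports both `markTestSkipped()` and `markTestIncomplete()` as a `<skipped>` element in the JUnit log, a status mapper only needs one bucket for both.

```python
import xml.etree.ElementTree as ET

# Invented sample: under PHPUnit 10+, skipped and incomplete tests
# both surface as a <skipped> child in the JUnit XML.
JUNIT_XML = """
<testsuite name="ExampleTest" tests="2">
  <testcase name="testSkipped"><skipped/></testcase>
  <testcase name="testIncomplete"><skipped/></testcase>
</testsuite>
"""

def status(case):
    """Map a <testcase> element to a run-tests.sh-style status string."""
    if case.find("failure") is not None:
        return "fail"
    if case.find("error") is not None:
        return "error"
    # One bucket: the log gives no way to tell a skip from an incomplete.
    if case.find("skipped") is not None:
        return "skipped"
    return "pass"

statuses = [status(c) for c in ET.fromstring(JUNIT_XML).iter("testcase")]
print(statuses)  # ['skipped', 'skipped']
```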

  • Pipeline finished with Failed
    about 2 months ago
    Total: 189s
    #242585
  • Pipeline finished with Failed
    about 2 months ago
    Total: 586s
    #242590
  • Pipeline finished with Failed
    about 2 months ago
    Total: 177s
    #242901
  • Pipeline finished with Failed
    about 2 months ago
    Total: 705s
    #242909
  • Status changed to Needs review about 2 months ago
  • 🇮🇹Italy mondrake 🇮🇹

    This is an issue where the review makes more sense when tests fail :)

    What's being output currently by run-tests.sh in the 'Detailed test results' section is very simpletest-ish, and possibly needs reconsideration.

    1) the 'Group' column for PHPUnit tests always says 'Other', so it's useless. I replaced it with the execution time of the single test case.
    2) the 'Function' column for PHPUnit tests is the name of the test, including the name of the dataset provided by a data provider. I renamed the column to 'Test'.
    3) every reported line adds an extra line - it's meant to carry a message, but in PHPUnit passed and skipped test cases never have one. So here we skip printing anything when the message is missing.

    Also, in HEAD right now, if a PHPUnit test run ends with an 'error' exit code, we get a duplicate line reported for the same test class. Fixed here.
    Also fixed: if the PHPUnit runner fails with failures/errors, we lose the details of the test cases that passed/failed/errored.

    Reviews, based on the output of the MR, appreciated.
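The three output changes described in the comment above can be sketched as a single formatting helper. This is an illustrative Python mock-up, not the MR's actual PHP code; the column layout and names are assumptions:

```python
def format_result(status, test, time_s, message=""):
    """One detailed-results line: status, execution time, then the test
    name (which may include a data-provider dataset name). The message
    line is appended only when a message actually exists, instead of
    always emitting an empty line."""
    line = f"{status:8} {time_s:>7.3f}s {test}"
    if message:
        line += f"\n    {message}"
    return line

# A pass has no message, so no empty trailing line is produced.
print(format_result("pass", "testFoo with data set #1", 0.123))
# A failure carries its assertion message on an indented second line.
print(format_result("fail", "testBar", 1.502, "assertion failed"))
```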

  • 🇮🇹Italy mondrake 🇮🇹

    Rebased after commit of 📌 Show test run time by class in run-tests.sh output Fixed .

  • Pipeline finished with Failed
    about 1 month ago
    Total: 673s
    #253916
  • Pipeline finished with Failed
    23 days ago
    Total: 463s
    #266348
  • Pipeline finished with Failed
    22 days ago
    Total: 448s
    #266817
  • Pipeline finished with Failed
    22 days ago
    Total: 864s
    #267189
  • Pipeline finished with Canceled
    22 days ago
    Total: 560s
    #267239
  • Pipeline finished with Success
    22 days ago
    Total: 593s
    #267249
  • 🇮🇹Italy mondrake 🇮🇹

    OK, fully green now.
