Account created on 22 June 2009, over 15 years ago

Recent comments

🇨🇦Canada mparker17 UTC-4

I've reviewed the code changes in the merge request, and it looks good to me.

I ran some manual tests (documented below) of the basic functionality on the code in !4 and Drupal 11, and I was able to get it to work.

Here's what I did to test !4...

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone --branch '2.x' https://git.drupalcode.org/project/masquerade_field.git && cd masquerade_field/ - clone the project
    2. git remote add masquerade_field-3495033 git@git.drupal.org:issue/masquerade_field-3495033.git && git fetch masquerade_field-3495033 - add a remote for the issue fork
    3. git checkout -b '3495033--drupal-11' --track masquerade_field-3495033/'3495033--drupal-11' - switch to the branch for merge request !4
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=masquerade-field - create a ddev project for testing the module
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. Go to https://masquerade-field.ddev.site/core/install.php in a browser. Confirm that I saw the Drupal 11.1.0 installer. Install the site with the "Standard" install profile.
    4. Go to /admin/people/create and create a user with the Content editor role. I gave mine the username test_editor.
    5. Go to /admin/people/create and create a user with the Authenticated role. I gave mine the username test_user.
  4. Set up the module for testing:
    1. Go to /admin/modules and install masquerade_field and its dependencies
    2. Go to /admin/people/permissions, grant the Content editor role the following permissions:
      • Masquerade Field -> Edit the masquerade field
      • Masquerade Field -> View any masquerade field
      • Masquerade Field -> View own masquerade field
      • User -> View user information

      ... click Save permissions

  5. Test the basic module functions in Drupal 11:
    1. Log out of the administrator account, and log in as the test_editor Content editor user created earlier.
    2. Go to the edit tab for the test_editor account (on my test site, it was at /user/2/edit). You see a Masquerade as field.
    3. Enter test_editor into the Masquerade as autocomplete, and select it from the autocomplete drop-down (on my test site, that made it say test_editor (2)).
    4. Click Save at the bottom of the page.
    5. You see the error message User test_editor cannot masquerade as itself.
    6. Enter test_user into the Masquerade as autocomplete, and select it from the autocomplete drop-down (on my test site, that made it say test_user (3))
    7. Click Save at the bottom of the page.
    8. You see the status message The changes have been saved.
    9. Click the View tab at the top of the page.
    10. You see a Masquerade as field, with a link to masquerade as test_user.
    11. Click the test_user link to masquerade as the test_user.
    12. You see the status message You are now masquerading as test_user.
    13. Click the Unmasquerade link in the site header. If you can't see it, you may need to click the Menu button in the top-right corner of the page. If you're not using the Olivero theme, the Unmasquerade link may be located elsewhere.
    14. You see the status message You are no longer masquerading as test_user.
  6. Make sure there are no errors being logged:
    1. Log out of the test_editor user created earlier; and log in as the administrator again
    2. Go to /admin/reports/dblog to ensure our setup/testing did not result in any error messages related to this module.
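As an aside, the "add a remote for the issue fork, then track its branch" pattern in step 2 can be exercised in isolation. The sketch below simulates it with a throwaway local repository standing in for both drupalcode.org and the issue fork (so no network access or drupal.org account is needed); the remote and branch names match the steps above, but everything else is made up for illustration:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# A throwaway repository standing in for the issue fork on drupalcode.org:
git init -q fork-origin && cd fork-origin
git -c user.name=test -c user.email=test@example.com commit -q --allow-empty -m 'init'
git branch '3495033--drupal-11'     # stands in for the issue-fork branch
cd "$tmp"
# Clone "the project", add the fork as a second remote, and track its branch:
git clone -q fork-origin masquerade_field && cd masquerade_field
git remote add masquerade_field-3495033 "$tmp/fork-origin"
git fetch -q masquerade_field-3495033
git checkout -q -b '3495033--drupal-11' --track masquerade_field-3495033/'3495033--drupal-11'
git rev-parse --abbrev-ref HEAD     # prints the checked-out branch name
```

In the real steps the fork remote points at a different repository than origin; here both point at the same local path purely to keep the simulation self-contained.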
🇨🇦Canada mparker17 UTC-4

@cb_govcms, good catch, thank you!

@nghua, thanks for fixing the issues raised by myself and @cb_govcms. (I don't have permission to mark the thread that @cb_govcms started as "resolved" however)

I've code-reviewed the updated merge request, and I'm happy with it. I also re-ran my test case from earlier on a freshly-cloned copy of the merge request, and it works for me.

I'm going to mark this as Reviewed and Tested by the Community. Thanks everyone!

🇨🇦Canada mparker17 UTC-4

Added a short version of instructions for setting up ElasticSearch Connector 8.0.x

🇨🇦Canada mparker17 UTC-4

Awesome, thanks @heddn!

🇨🇦Canada mparker17 UTC-4

+1 to RTBC: I have code-reviewed and manually tested the code in merge request !3, and I'm satisfied with it.

Here is what I did to manually test !3:

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone --branch '3.0.x' https://git.drupalcode.org/project/cloudfront_cache_path_invalidate.git && cd cloudfront_cache_path_invalidate - clone the project
    2. git remote add cloudfront_cache_path_invalidate-3429249 https://git.drupalcode.org/issue/cloudfront_cache_path_invalidate-3429249.git && git fetch cloudfront_cache_path_invalidate-3429249 - add a remote for the issue fork
    3. git checkout -b 'project-update-bot-only' --track cloudfront_cache_path_invalidate-3429249/'project-update-bot-only' - switch to the branch for merge request !3
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=cloudfront-cache-path-invalidate
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. Go to https://cloudfront-cache-path-invalidate.ddev.site/core/install.php in a browser. Confirm that I saw the Drupal 11.1.0 installer. Install the site with the "Standard" install profile
    4. Go to /admin/people/create and create a user with the Content editor role. I gave mine the username test_editor
  4. Set up the module for testing:
    1. Go to /admin/modules; enable the cloudfront_cache_path_invalidate module and all its dependencies.
    2. Go to /admin/people/permissions, grant the Content editor role the following permissions:
      • Cloudfront Cache Path Invalidate -> Use Cloudfront Cache Invalidate Form

      ... click Save permissions

    3. Edit web/sites/default/settings.php, adding (and filling in values for) the following lines for the website that is cached with CloudFront, as directed in README.md:
      $settings['aws.distributionid'] = '';
      $settings['aws.region'] = '';
      $settings['s3fs.access_key'] = '';
      $settings['s3fs.secret_key'] = '';
  5. Test the basic module functions in Drupal 11:
    1. Log out of the administrator account, and log in as the test_editor user I created earlier
    2. In another tab, open a page on the website that is cached with CloudFront (I will call this $TEST_PAGE_URL below; I picked a page that doesn't get much traffic). Open my browser's Developer Tools' Network console. Refresh $TEST_PAGE_URL until I see an HTTP Response Header that looks like x-cache: Hit from cloudfront
    3. In another tab, log into my AWS Cloudfront UI, go to the distribution for the website that is cached with CloudFront, and then go to the Invalidations tab for that distribution. Take note of the most-recent Invalidation's Date.
    4. Back in my Drupal 11 test site, go to /admin/config/services/cloudfront-invalidate-url. I see a Cloudfront Cache Setting form.
    5. In the URL to invalidate Cloudfront cache textarea, enter the path component of $TEST_PAGE_URL (e.g.: /node/123 or /path/to/url/alias), take note of the current date/time, then click Invalidate Cloudfront Cache. I see the status message Cloudfront URL Cache invalidation is in progress.
    6. Switch to the tab with my AWS Cloudfront UI for my Distribution. Refresh the Invalidations tab for that distribution. I see a new Invalidation (i.e.: different from the previous invalidation I noted earlier). The new Invalidation's Date is the date/time at which I clicked the Invalidate Cloudfront Cache button in the previous step. The new Invalidation's Object paths match the path I entered in the previous step. The new Invalidation's Status is Completed.
    7. Switch to the tab where I have $TEST_PAGE_URL open. Open my browser's Developer Tools' Network console. Refresh $TEST_PAGE_URL: I now see an HTTP Response Header that looks like x-cache: Miss from cloudfront.
  6. Make sure there are no errors being logged:
    1. Log out of the test_editor user created earlier; and log in as the administrator again
    2. Go to /admin/reports/dblog to ensure our setup/testing did not result in any error messages related to this module.
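Incidentally, the x-cache check in step 5.2 can also be done from a terminal instead of the browser's Network console. Against a live site you would run curl -sI "$TEST_PAGE_URL" and grep the output; the sketch below uses made-up sample headers in place of a live response so it can be run anywhere:

```shell
# Hypothetical response headers standing in for a live CloudFront response.
# Against the real site: curl -sI "$TEST_PAGE_URL" | grep -i '^x-cache:'
sample_headers='HTTP/2 200
content-type: text/html; charset=utf-8
x-cache: Hit from cloudfront
via: 1.1 0123456789abcdef.cloudfront.net (CloudFront)'
printf '%s\n' "$sample_headers" | grep -i '^x-cache:'
```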
🇨🇦Canada mparker17 UTC-4

I've tried, but as mentioned in #7, my D7-to-D10 site migration is complete and my D7 site has been retired, so I have no way to test whether it still works the way it did 10 months ago.

🇨🇦Canada mparker17 UTC-4

I have code-reviewed and manually tested the code in merge request !34, and I'm satisfied with it.

Here is what I did to manually test !34:

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone --branch '2.0.x' https://git.drupalcode.org/project/openid_connect_windows_aad.git && cd openid_connect_windows_aad - clone the project
    2. git remote add openid_connect_windows_aad-3485376 https://git.drupalcode.org/issue/openid_connect_windows_aad-3485376.git && git fetch openid_connect_windows_aad-3485376 - add a remote for the issue fork
    3. git checkout -b '3485376-drupal-11-support' --track openid_connect_windows_aad-3485376/'3485376-drupal-11-support' - switch to the branch for merge request !34
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=openid-connect-windows-aad
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. Patch openid_connect with #3486049-6: 'Settings' option not accessible to fix a bug unrelated to this module: cd web/modules/contrib/openid_connect && curl -OL https://www.drupal.org/files/issues/2024-11-11/openid_connect-3486049-6.patch && patch -p1 < openid_connect-3486049-6.patch && cd -
    4. Go to https://openid-connect-windows-aad.ddev.site/core/install.php in a browser. Confirm that I saw the Drupal 11.0.9 installer. Install the site with the "Standard" install profile
  4. Set up the module for testing:
    1. Go to /admin/modules and enable the openid_connect_windows_aad module and its dependencies
    2. Go to /admin/config/people/openid-connect/settings and set the following options: (if you get an error when visiting this URL, don't forget to patch openid_connect with #3486049-6: 'Settings' option not accessible to fix the bug in that module)
      1. Save user claims on every login = (checked)
      2. Override registration settings = (checked)
      3. OpenID buttons display in user login form = Above
      4. Advanced -> Automatically connect existing users = (checked) (warning: you probably don't want to use this setting on a real site, but it's good enough to test openid_connect_windows_aad's basic functionality in D11)

      ... then click Save configuration.

    3. Go to /admin/config/people/accounts, and set Who can register accounts? to Visitors, then click Save configuration. (warning: you probably don't want to use this setting on a real site, but it's good enough to test openid_connect_windows_aad's basic functionality in D11)
    4. Set up a Microsoft Entra ID app configuration as described in this module's documentation, i.e.:
      1. Go to https://portal.azure.com and log in if needed. Go to the hamburger menu -> All services. Under Identity, click Microsoft Entra ID
      2. Go to Add -> App registration:
        • Name = openid-connect-windows-aad-drupal11-test
        • Supported account types = (whatever makes sense for your use case)
        • Redirect URI:
          • Select a platform = Web
          • Redirect URI = https://openid-connect-windows-aad.ddev.site/openid-connect/azure_oidc_d11_test

        ... then click Register.

      3. Go to https://portal.azure.com again. Go to the hamburger menu -> All services. Under Identity, click Microsoft Entra ID
      4. In the Microsoft Entra ID sidebar, go to Manage -> App registrations and click openid-connect-windows-aad-drupal11-test
      5. In the openid-connect-windows-aad-drupal11-test app's sidebar, go to Manage -> Certificates & secrets. In the main area of the page, under Client secrets, click New client secret. Set Description = testd11 and Expires = 90 days (3 months). Click Add. Copy the Value.
      6. In Drupal, go to /admin/config/system/keys/add, enter:
        • Key name = oidc_entra_app_key
        • Key type = Encryption
        • Key size = Other
        • Custom key size = 320
        • Key provider = Configuration
        • [Key] Base64-encoded = TRUE
        • Key value = (paste the client secret you created in the previous step)
        • [Value] Base64-encoded = FALSE

        ... click Save. You see the message The key oidc_entra_app_key has been added.

      7. In the Azure Portal, in the openid-connect-windows-aad-drupal11-test app's sidebar, click Overview. Under Essentials, copy the Application (client) ID.
      8. In Drupal, go to /admin/config/people/openid-connect/add/windows_aad. You see a Add OpenID Connect client form. Enter:
        • Name = azure_oidc_d11_test
        • Client ID = (paste the client ID you copied in the previous step)

        ... don't submit the form yet...

      9. In the Azure Portal, still on the openid-connect-windows-aad-drupal11-test app's Overview page, click Endpoints at the top. An Endpoints sidebar opens:
        • Copy OAuth 2.0 authorization endpoint (v2) to a temporary file
        • Copy OAuth 2.0 token endpoint (v2) to a temporary file
      10. In Drupal, on the Add OpenID Connect client form:
        • Allowed domains = (the scheme and authority part of the OAuth 2.0 authorization endpoint (v2), e.g.: https://login.microsoftonline.com)
        • Authorization endpoint = (paste the OAuth 2.0 authorization endpoint (v2) you copied in the previous step)
        • Token endpoint = (paste the OAuth 2.0 token endpoint (v2) you copied in the previous step)
        • End session endpoint = (leave blank)
        • Map user's AD groups to Drupal roles = (unchecked)
        • User info endpoint configuration = Alternate or no user endpoint
        • Alternate UserInfo endpoint = (leave blank)
        • Use Graph API otherMails property for email address = (unchecked)
        • Update email address in user profile = (unchecked)
        • Hide missing email address warning = (unchecked)
        • Subject key = sub
        • Check that the Redirect URL matches the Redirect URI you entered when setting up the Entra ID App (e.g.: https://openid-connect-windows-aad.ddev.site/openid-connect/azure_oidc_d11_test)

        ... click Create OpenID Connect client. You see the message OpenID Connect client azure_oidc_d11_test has been added.

    5. In Drupal, go to /admin/config/development/performance and click Clear all caches
  5. Test the basic module functions in Drupal 11:
    1. Log out from the administrator account.
    2. Go to /user/login. You should see a Log in with azure_oidc_d11_test button above the Username and Password fields.
    3. Click the Log in with azure_oidc_d11_test button and authenticate with your Microsoft credentials. You are logged in.
  6. Make sure there are no errors being logged:
    1. Log out of the test_editor user created earlier; and log in as the administrator again
    2. Go to /admin/reports/dblog to ensure our setup/testing did not result in any error messages related to this module.
🇨🇦Canada mparker17 UTC-4

+1 to RTBC.

I have code-reviewed and manually tested the code in merge request !11, and I'm satisfied with it.

Here's what I did to manually test !11:

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone --branch '1.x' https://git.drupalcode.org/project/field_protect.git && cd field_protect - clone the project
    2. git remote add field_protect-3430524 https://git.drupalcode.org/issue/field_protect-3430524.git && git fetch field_protect-3430524 - add a remote for the issue fork
    3. git checkout -b 'project-update-bot-only' --track field_protect-3430524/'project-update-bot-only' - switch to the branch for merge request !11
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=field-protect - create a ddev project for testing the module
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. Go to https://field-protect.ddev.site/core/install.php in a browser. Confirm that I saw the Drupal 11.0.9 installer. Install the site with the "Standard" install profile.
    4. Go to /admin/people/create and create a user with the Content editor role. I gave mine the username test_editor.
  4. Set up the module for testing:
    1. Go to /admin/modules and install field_protect and its dependencies
    2. Go to /admin/people/permissions, grant the Content editor role the following permissions:
      • Field Protect -> Remember field unlock

      ... click Save permissions.

    3. Go to /admin/structure/types/manage/page/form-display. Click the gear icon in the Title row. Check Field Protect: Protect from accidental changes. In Field Protect: Message, enter Changing the title may result in the URL alias changing!, then click Update. You see the warning message You have unsaved changes. Click Save. You see the status message Your settings have been saved. The Title row now shows Field protected against accidental changes 🔒 and, indented, Changing the title may result in the URL alias changing!.
  5. Test the module's basic functionality:
    1. Log out of the administrator account and log in as the test_editor user you created earlier.
    2. Go to /node/add/page. Set Title = Lorem and Body = Ipsum. Click Save. You see the status message Basic page Lorem has been created.
    3. Click Edit in the primary tabs. You see an Edit Basic page Lorem form. The Title field is greyed out with an Unlock field link.
    4. Click the Unlock field link. You see a modal that looks like:

      Please confirm before you continue

      This field is locked

      You can edit this field, but you need to understand the implications. This field is protected with the following message:

      Changing the title may result in the URL alias changing!

      Are you sure you want to proceed?

      ... you see Cancel and Unlock buttons.

    5. Click the Cancel button in the modal. The modal disappears, and the field remains locked.
    6. Click the Unlock field link. You see the same modal from earlier.
    7. Click the Unlock button in the modal. The title field is now editable again. Set Title = Dolor. Click Save. You see the status message Basic page Dolor has been updated.
🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch 3431824-automated-drupal-11 to hidden.

🇨🇦Canada mparker17 UTC-4

I have code-reviewed and manually tested the code in merge request !5, and I'm satisfied with it, so I'm marking it as RTBC.

Here's what I did to test !5:

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone --branch '8.x-1.x' https://git.drupalcode.org/project/masquerade_log.git && cd masquerade_log - clone the project
    2. git remote add masquerade_log-3431824 git@git.drupal.org:issue/masquerade_log-3431824.git && git fetch masquerade_log-3431824 - add a remote for the issue fork
    3. git checkout -b 'project-update-bot-only' --track masquerade_log-3431824/'project-update-bot-only' - switch to the branch for merge request !5
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=masquerade-log - create a ddev project for testing the module
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. Go to /core/install.php in a browser. Confirm that I saw the Drupal 11.0.9 installer. Install the site with the "Standard" install profile.
    4. Go to /admin/people/create and create a user with the Content editor role. I gave mine the username test_editor.
    5. Go to /admin/people/create and create a user with the Authenticated user role. I gave mine the username test_user.
  4. Set up the module for testing:
    1. Go to https://masquerade-log.ddev.site/admin/modules and install masquerade_log and its dependencies
    2. Go to /admin/people/permissions, grant the Content editor role the following permissions:
      • Masquerade -> Masquerade as Authenticated user

      ... click Save permissions.

    3. Go to /admin/people/permissions, grant the Authenticated user role the following permissions:
      • Toolbar -> Use the toolbar

      ... click Save permissions.

  5. Test the module's basic functionality:
    1. Log out of the administrator account, and log in as the test_editor user you created earlier.
    2. Go to /masquerade - you see a Masquerade form.
    3. In the Masquerade as... box, enter test_user and click Switch.
    4. You switch to the new user account (you can confirm this in the toolbar). Note: when I tested this, it took me to the /masquerade form and I got an "access denied", because the test_user account isn't allowed to masquerade... but this is a problem with the masquerade module, not masquerade_log.
    5. Click Unmasquerade in the toolbar to unmasquerade. You see the status message You are no longer masquerading as test_user.
    6. Log out of the test_editor user you created earlier. Log in as the administrator again.
    7. Go to /admin/reports/dblog. You see a list of Recent log messages. You see the following log messages:
      • Type = masquerade; Message = User test_editor stopped masquerading as test_user.
      • Type = masquerade; Message = User test_editor masqueraded as test_user.
🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch project-update-bot-only to active.

🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch project-update-bot-only to hidden.

🇨🇦Canada mparker17 UTC-4

I've tested the basic functionality of the module at commit 5901169 from merge request !22 on a local D11 site, and it works for me, so +1 to RTBC from me!

(for the sake of brevity, I'm leaving out my test steps, but let me know if you'd like me to post them)

🇨🇦Canada mparker17 UTC-4

Okay... when I tried this a second time on a different computer, my test case of the basic module functions worked perfectly. As far as I'm concerned, this means the module is working correctly.

@shashi_shekhar_18oct, if I could trouble you to change the core_version_requirement line in modules/dropzonejs/media_bulk_upload_dropzonejs.info.yml from core_version_requirement: '>=10.2' to core_version_requirement: ^10.2 || ^11 as you did before, then I'll do one last code review/test, then mark this as RTBC.

Thank you very much in advance!

🇨🇦Canada mparker17 UTC-4

Taking a look at the code in merge request !17, I found the following things that will need to be fixed:

  1. There's still an instance of the old core_version_requirement: '>=10.2' in modules/dropzonejs/media_bulk_upload_dropzonejs.info.yml

... so I'm marking this as "Needs work".

I also couldn't get !17 to work on D11 when running manual tests. I'm not yet sure if this is something that I'm doing wrong, or an actual problem with D11 compatibility. I'm going to test again on a different computer to see if my setup on this one is broken.

Here's what I did to test !17...

  1. Install ddev (I tested on version 1.24.1)
  2. Clone the module, issue fork, and branch:
    1. git clone https://git.drupalcode.org/project/media_bulk_upload.git && cd media_bulk_upload - clone the project
    2. git remote add media_bulk_upload-3431856 git@git.drupal.org:issue/media_bulk_upload-3431856.git && git fetch media_bulk_upload-3431856 - add a remote for the issue fork
    3. git checkout -b '3431856-d11_ready' --track media_bulk_upload-3431856/'3431856-d11_ready' - switch to the branch for merge request !17
  3. Set up a test site for the module with the ddev/ddev-drupal-contrib plugin:
    1. ddev config --project-type=drupal --docroot=web --php-version=8.3 --corepack-enable --project-name=media-bulk-upload - create a ddev project for testing the module
    2. ddev add-on get ddev/ddev-drupal-contrib && ddev start && ddev poser && ddev symlink-project - run the setup steps for the ddev/ddev-drupal-contrib addon
    3. mkdir -p web/libraries/dropzone && curl -L -o web/libraries/dropzone/dropzone.min.js https://unpkg.com/dropzone@5/dist/min/dropzone.min.js - quickly download and install the latest 5.x version of dropzonejs
    4. mkdir -p temp-documents - create a folder outside the webroot to hold the bulk-uploaded documents before the media entities are fully created
    5. Go to https://media-bulk-upload.ddev.site/core/install.php in a browser. Confirm that I saw the Drupal 11.0.9 installer. Install the site with the "Standard" install profile.
    6. Go to /admin/people/create and create a user with the Content editor role. I gave mine the username test_editor.
  4. Set up the module for testing:
    1. Go to /admin/modules and install media_bulk_upload and its dependencies
    2. Go to /admin/modules and install dropzonejs and its dependencies
    3. Go to /admin/config/media/media-bulk-config/add, and enter:
      • Label = File
      • Media types = Document
      • Form mode = - None -
      • Upload location = ../temp-documents (I tried various other values for this, see below)

      ... then click Save. You see the status message "Created the File Media Bulk Config."

    4. Go to /admin/people/permissions/module/media_bulk_upload, grant the Content editor role the following permissions:
      • dropzonejs -> Dropzone upload files
      • Help -> Use help pages
      • Media Bulk Upload -> File : Use upload form
      • Media Bulk Upload -> Configure Media Upload form
  5. If you don't already have a bunch of files to bulk-upload for testing, you can generate 9 ".doc" files with the following command in sh or bash: cd ~/Desktop && mkdir test_media_bulk_upload && cd test_media_bulk_upload && for planet in Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune Pluto; do echo $planet > $planet.doc ; done (if you use some other shell, your mileage may vary)
  6. Test the basic module functions in Drupal 11:
    1. Log out of the administrator account, and log in as the test_editor Content editor user created earlier.
    2. Go to /admin/help/media_bulk_upload. Confirm that I can see a help page.
    3. Go to /admin/config/media/media-bulk-config. Confirm that I can see a "Bulk upload media" configuration page.
    4. Click Edit next to the File Media Bulk Config I created earlier. Confirm that I can see an "Edit File" form. Click "Save" without making any changes. Confirm that I can see the status message "Saved the File Media Bulk Config."
    5. Go to /media/bulk-upload/file. Confirm that I can see a "Multiple upload" form.
    6. Try bulk-uploading some of the files I generated earlier.
      • Expected behavior: the files were successfully uploaded
      • Actual behavior: I got an error, The file could not be uploaded because the destination "../temp-documents/" is invalid
        • I tried various other things for Upload location in the configuration... /var/www/html/temp-documents, sites/default/files/temp-documents, etc. These were reflected in the error message, suggesting they were properly saved; but I got the same error in each case.
  7. Make sure there are no errors being logged:
    1. Log out of the test_editor user created earlier; and log in as the administrator again
    2. Go to /admin/reports/dblog to ensure our setup/testing did not result in any error messages related to this module.
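For what it's worth, the test-file generation one-liner in step 5 can also be written as a plain POSIX sh loop; this variant writes into a temporary directory from mktemp rather than ~/Desktop, but is otherwise equivalent:

```shell
# Generate 9 small ".doc" test files for bulk-upload testing, one per planet name:
dir=$(mktemp -d)
for planet in Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune Pluto; do
  printf '%s\n' "$planet" > "$dir/$planet.doc"
done
ls "$dir" | wc -l   # counts the generated files
```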
🇨🇦Canada mparker17 UTC-4

@shashi_shekhar_18oct, thank you very much!

Quick process note: as a general rule, you shouldn't mark your own changes as RTBC ("it works on my machine" is a cliché because it happens to all of us, regardless of our experience-level!).

So I'm moving it back to "Needs review" for now... but I have time to code review now, test on my own site, and if everything works, mark it as RTBC!

Thanks again!

🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch 3431856-automated-drupal-11 to hidden.

🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch project-update-bot-only to hidden.

🇨🇦Canada mparker17 UTC-4

There are, apparently, 3 branches, 3 merge requests, and 3 patches associated with this issue. Both !16 and !17 had been updated recently when I wrote this comment. I code-reviewed both branches.

!17:

  1. adds a valid-but-unconventional core_version_requirement: '>=10.2',
  2. adds a .gitlab-ci.yml (even though that wasn't part of the issue scope),
  3. src/Form/MediaBulkUploadForm.php is missing a newline at the end of the file,
  4. MediaTypeManager::getTargetFieldMaxSize() has a change that looks like:
    -return !empty($targetFieldSettings['max_filesize']) ? $targetFieldSettings['max_filesize'] : (string) format_size(Environment::getUploadMaxSize());
    +return !empty($targetFieldSettings['max_filesize']) ? $targetFieldSettings['max_filesize'] : (string) DeprecationHelper::backwardsCompatibleCall(\Drupal::VERSION, '10.2.0', fn() => ByteSizeMarkup::create(Environment::getUploadMaxSize()), fn() => format_size(Environment::getUploadMaxSize()));

    ... in this change, I like the use of \Drupal\Component\Utility\DeprecationHelper, but it was missing a fallback value that is present in !16 (i.e.: ?? 0, see below)

!16 is identical to !17, except for the following things:

  1. !16 has a more-conventional core_version_requirement - which, notably, does not reflect the change in MediaTypeManager::getTargetFieldMaxSize() below,
  2. !16 does not add a .gitlab-ci.yml (in keeping with the issue scope),
  3. src/Form/MediaBulkUploadForm.php has the newline at the end of the file,
  4. MediaTypeManager::getTargetFieldMaxSize() has a change that looks like:
    -return !empty($targetFieldSettings['max_filesize']) ? $targetFieldSettings['max_filesize'] : (string) format_size(Environment::getUploadMaxSize());
    +return !empty($targetFieldSettings['max_filesize']) ? $targetFieldSettings['max_filesize'] : (string) ByteSizeMarkup::create(Environment::getUploadMaxSize() ?? 0);

    ... I like the fallback (?? 0), but it is missing the use of \Drupal\Component\Utility\DeprecationHelper, which means it would drop support for 10.2, as ByteSizeMarkup doesn't exist in 10.2.

My recommendation is that we...

  1. Hide all the files associated with the ticket to reduce confusion (I will do this)
  2. Close !13 and !16 (I will do this)
  3. Change the core_version_requirement in !17 to something like core_version_requirement: ^10.2 || ^11
  4. Add the newline to the end of src/Form/MediaBulkUploadForm.php in !17
  5. Delete the .gitlab-ci.yml in !17
  6. Add the fallback (i.e.: ?? 0) to the MediaTypeManager::getTargetFieldMaxSize() return statement from !16.

... if someone who isn't me could take on the last 4 tasks, then I can test again and re-mark this as RTBC if it all works. My client wants this module updated to Drupal 11, so I have been assigned some time to help out in the next month or so.

🇨🇦Canada mparker17 UTC-4

I don't understand what needs to be done here. I'm going to mark it as Closed (outdated) for now, but feel free to reopen this issue if you disagree.

🇨🇦Canada mparker17 UTC-4

Elasticsearch 8 does support field collapsing and field denormalization; so it's still worth doing this.

However, the Search API Grouping module is only compatible with Drupal core 8 and 9 (both of which are unsupported); and it (currently) has a hard dependency on search_api_solr. So I'll move this to the 8.0.x branch, but mark it as postponed for now.

🇨🇦Canada mparker17 UTC-4

I've created a merge request, but I'm not sure if it's the correct solution, because I'm not sure what depends on the hidden field_name field that's set here. The only other instance of that string in that file is in the defaultConfiguration() function.

🇨🇦Canada mparker17 UTC-4

> The patch needs to be moved into a merge request.

Done!

***

> Finally, it would be nice to have test coverage for this. I'm not sure how you would test in this case, however.

I'm also not sure how to add test coverage for this change.

🇨🇦Canada mparker17 UTC-4

I've tested this patch on two different sites, and it works for me!

🇨🇦Canada mparker17 UTC-4

Yay, success. Thanks for your patience!

I'll create a new release with this change tomorrow!

🇨🇦Canada mparker17 UTC-4

Let's merge into 2.0.x first, then 1.0.x afterwards.

🇨🇦Canada mparker17 UTC-4

Awesome! Moving to RTBC. I'll review and merge shortly. Thanks everyone!

🇨🇦Canada mparker17 UTC-4

The patch didn't apply so I had to re-do the changes. Let's see if this improves the speed that the tests run.

🇨🇦Canada mparker17 UTC-4

Going to mark this as "Needs review" so I can get feedback!

In Highlighting support (leverage Elasticsearch highlighting) (Needs review), I'm thinking of adding a test that runs against the Elasticsearch environment in CI: if I am successful, I may port it here... but I would appreciate more eyes on the patch in the meantime.

🇨🇦Canada mparker17 UTC-4

Updated the issue summary, for parity with search_api_opensearch's issue Support OpenSearch server highlighting (Active).

Moving to Needs work, because I think I'd like to add another test, this time against the Elasticsearch environment in CI.

🇨🇦Canada mparker17 UTC-4

I've created merge request !73. Reviews are welcome!

This seems like something we could also contribute to Search API OpenSearch!

🇨🇦Canada mparker17 UTC-4

I've created a patch for Elasticsearch 8 that leverages Search API's API a little better and supports a handful more options. Let's see if we can get it into the 8.0.x branch first, then backport it.

🇨🇦Canada mparker17 UTC-4

@it-cru, awesome, thank you very much!

This patch looks great!

But, I think this is a common-enough scenario that we should have a test to make sure that this issue doesn't regress (that's why I'm marking it as "Needs work" and adding the "Needs tests" tag).

@it-cru, do you feel comfortable writing a test? (I am happy to help if you'd like!) If not, then I would be happy to write the test.

I think your change makes the error go away because there are some cases where $this->entityTypeManager->getStorage($entity_type_id)->load($id) returns NULL (e.g.: I could see this happening if an entity was deleted from Drupal, but the corresponding entry in the Elasticsearch index hadn't been deleted yet).

Because PHP interprets NULL as "false-y", the subsequent code (i.e.: $datasource->getItemId($entity->getTypedData())) isn't run (and doesn't throw an error) after your change.
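To illustrate the guard, here is a hypothetical Python sketch (FakeStorage and collect_item_ids are invented names; they only mirror the load()-returns-NULL case described above):

```python
# Hypothetical sketch of the guard: load() may return None for entities that
# were deleted in Drupal but not yet removed from the search index, so we
# skip them instead of calling a method on None.

class FakeStorage:
    """Stand-in for an entity storage whose load() can return None."""
    def __init__(self, entities):
        self._entities = entities

    def load(self, entity_id):
        return self._entities.get(entity_id)  # None when the entity is gone

def collect_item_ids(storage, ids, get_item_id):
    item_ids = []
    for entity_id in ids:
        entity = storage.load(entity_id)
        if entity:  # guard: skip stale index entries instead of erroring
            item_ids.append(get_item_id(entity))
    return item_ids
```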

🇨🇦Canada mparker17 UTC-4

@kim.pepper: to be honest, I haven't tested it with large data sets, so I don't know for sure.

Both operations in my proof-of-concept took only a few milliseconds, but they're also only working with 3-4 pieces of very very simple data.

That being said, I would assume that the OpenSearch _reindex operation would be faster than what we have to do now, which is to: (a) clear the index, and (b) walk through all the content in Drupal and re-post it into the now-empty index in OpenSearch. I would expect it to be faster because...

  1. the _reindex operation only involves one system (OpenSearch, vs. what we have to do now with MySQL+PHP+OpenSearch); and;
  2. the data doesn't have to be transformed during the _reindex operation (OpenSearch internal format -> OpenSearch internal format; vs. what we have to do now with RDBMS internal -> SQL result -> (network) -> PHP memory -> JSON -> (network) -> OpenSearch internal)

🇨🇦Canada mparker17 UTC-4

Briefly, I found this worked on Elasticsearch 7 and OpenSearch 2, so I created Support Index Aliases and zero downtime mapping updates Active in the Search API OpenSearch queue

🇨🇦Canada mparker17 UTC-4

(copy of comment from #3248665-8: Support Aliases API and zero downtime mapping updates, lightly edited to mention the version of OpenSearch I used and to use an OpenSearch URL in the variables; note that the request syntax is unchanged and therefore might not take full advantage of OpenSearch-specific features)

I've done some prototyping with PHPStorm's HTTP Client and OpenSearch 2.17.1. If you have an IntelliJ IDE you can add all the code snippets below to a .http file, modify the variables, and try it out for yourself... but I'm going to break it up so I can explain what each section does.

A brief note on these listings: the JSON shown is used in the request bodies. For the sake of brevity, I am not showing the response bodies, but you can run the requests yourself if you want to see the results.
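If you don't have an IntelliJ IDE, each request block below is just an HTTP method, a URL, and an optional JSON body, so it translates directly to curl. As a hypothetical illustration (not part of any module), a minimal Python sketch that splits one block into those parts:

```python
import json

def parse_http_block(block):
    """Parse a single PHPStorm-style .http request block into (method, url, body).

    Minimal sketch: ignores '###' comment lines and header lines, and treats
    anything after the first blank line as a JSON body.
    """
    lines = [l for l in block.strip().splitlines() if not l.startswith("###")]
    method, url = lines[0].split(None, 1)
    body = None
    if "" in lines:
        raw = "\n".join(lines[lines.index("") + 1:]).strip()
        if raw:
            body = json.loads(raw)
    return method, url, body
```

Each resulting (method, url, body) tuple maps onto a curl invocation like `curl -X METHOD 'URL' -H 'Content-Type: application/json' -d 'BODY'` once you substitute the `{{host}}` and `{{index}}` variables.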

The following code sets up some variables we will use throughout the demo... you'll probably want to modify them for your environment...

### Variables
@host = https://opensearch:9200
@index = zerodowntime

I usually start by running a connection test to see if everything's okay (which is only useful for this demo).

### Connection test
GET {{host}}/_cluster/health

Set up an index for the first time

Now, let's pretend that we're creating an index in the Search API settings...

We start by setting up the indexes that will store the data.

### Index setup: Reserve the green index namespace
PUT {{host}}/{{index}}_green

### Index setup: Reserve the blue index namespace
PUT {{host}}/{{index}}_blue

Let's arbitrarily pick the "green" index to start using (i.e.: the "active" index)...

### Index setup: Close the green index so we can set mappings
POST {{host}}/{{index}}_green/_close

### Index setup: Set mappings on the green index
PUT {{host}}/{{index}}_green/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "keyword",
            "ignore_above": 256
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

Next, we create an alias that points to the "green" index...

### Alias setup: Open the green index so we can set an alias
POST {{host}}/{{index}}_green/_open

### Alias setup: Create an alias for _green
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "add": {
                "index": "{{index}}_green",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

Normal usage 1

Now, let's use the index normally with the original configuration... I'm assuming "normal" usage is creating documents (i.e.: with Search API's tracker) and searching (i.e.: with a Search API front-end of some kind).

### Usage: Add Data 1 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Ansible for DevOps", "author": "Jeff Geerling", "release_date": "2011-01-01", "page_count": 452}

### Usage: Add Data 2 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "The Design of Everyday Things", "author": "Don Norman", "release_date": "2013-01-01", "page_count": 180}

### Usage: Add Data 3 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Drupal 8 Module Development", "author": "Daniel Sipos", "release_date": "2017-01-01", "page_count": 547}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the active index alias (pointing to green) for Data 1: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

Changing field mappings 1

Now, let's say an administrator changes some field settings that would normally require reindexing all the data (in this case, "author" changes from type Keyword to type Text)...

We start by deleting and re-creating the inactive (blue) index, then set the new mappings on it (note that we don't strictly have to delete and re-create the "blue" index before setting mappings in this particular case because we didn't set any mappings on it during the setup... but if we had used it before — as we do with "green" below in the "Changing field mappings 2" section — then we would have to delete and re-create it).

### Change settings: Delete the blue index
DELETE {{host}}/{{index}}_blue

### Change settings: Create the blue index
PUT {{host}}/{{index}}_blue

### Change settings: Close the blue index so we can set mappings
POST {{host}}/{{index}}_blue/_close

### Change settings: Set new mappings on the blue index
POST {{host}}/{{index}}_blue/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

### Change settings: Open the blue index for reindexing
POST {{host}}/{{index}}_blue/_open

Now we can reindex from the old-active index to the new-active index

### Change settings: Reindex data from green to blue
POST {{host}}/_reindex
Content-Type: application/json

{
  "source": {
    "index": "{{index}}_green"
  },
  "dest": {
    "index": "{{index}}_blue"
  }
}

Now we can update the alias...

### Change settings: Update the (active) index alias to point to the blue index
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_green",
                "alias": "{{index}}"
            }
        },
        {
            "add": {
                "index": "{{index}}_blue",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

### Change settings: Close the (now-inactive) green index for usage
POST {{host}}/{{index}}_green/_close

Normal usage 2

Now, let's use the index normally with the new configuration...

### Usage: Add Data 4 into the active index via its alias (pointing to blue)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Linux Kernel in a Nutshell", "author": "Greg Kroah-Hartman", "release_date": "2007-01-01", "page_count": 182}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the active index via its alias (pointing to blue) for Data 1: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

### Usage: Query the active index via its alias (pointing to blue) for Data 4: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "author": "kroah"
        }
    }
}

Changing field mappings 2

Now, let's say an administrator changes some more field settings that — again — would require reindexing all the data (in this case, we change "author" from Text back to Keyword)...

We start by deleting and re-creating the inactive (green) index, then set the new mappings on it (note that, this time, we must delete the green index first, otherwise we will get an error).

### Change settings: Delete the (inactive) green index
DELETE {{host}}/{{index}}_green

### Change settings: Re-create the green index
PUT {{host}}/{{index}}_green

Set the new mappings, and reindex to green again...

### Change settings: Close the green index so we can set mappings
POST {{host}}/{{index}}_green/_close

### Change settings: Set new mappings on the green index
POST {{host}}/{{index}}_green/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "keyword",
            "ignore_above": 256
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

### Change settings: Open the green index for reindexing
POST {{host}}/{{index}}_green/_open

### Change settings: Reindex data from blue to green
POST {{host}}/_reindex
Content-Type: application/json

{
  "source": {
    "index": "{{index}}_blue"
  },
  "dest": {
    "index": "{{index}}_green"
  }
}

### Change settings: Update the alias to point to the green index
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_blue",
                "alias": "{{index}}"
            }
        },
        {
            "add": {
                "index": "{{index}}_green",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

### Change settings: Close the (now-inactive) blue index for usage
POST {{host}}/{{index}}_blue/_close

Normal usage 3

Now, let's use the index normally with the new-new configuration...

### Usage: Add Data 5 into the index alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Drupal 7 Module Development", "author": "Matt Butcher", "release_date": "2010-01-01", "page_count": 394}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the index alias (pointing to green) for Data 2: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

### Usage: Query the index alias (pointing to green) for Data 4: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "author": "kroah"
        }
    }
}

### Usage: Query the index alias (pointing to green) for Data 3 and Data 5: expect 2 results
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "Drupal"
        }
    }
}

Deleting the index

If you want to re-run this test, then you'll have to clean up the alias and both indexes afterwards.

Search API indexes also get deleted sometimes; we can use the same procedure when that happens too...

### Teardown: Delete the alias
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_green",
                "alias": "{{index}}"
            }
        }
    ]
}

### Teardown: Delete the blue index
DELETE {{host}}/{{index}}_blue

### Teardown: Delete the green index
DELETE {{host}}/{{index}}_green

🇨🇦Canada mparker17 UTC-4

I've done some prototyping with PHPStorm's HTTP Client and Elasticsearch 8.10.2. If you have an IntelliJ IDE you can add all the code snippets below to a .http file, modify the variables, and try it out for yourself... but I'm going to break it up so I can explain what each section does.

A brief note on these listings: the JSON shown is used in the request bodies. For the sake of brevity, I am not showing the response bodies, but you can run the requests yourself if you want to see the results.

The following code sets up some variables we will use throughout the demo... you'll probably want to modify them for your environment...

### Variables
@host = https://elasticsearch:9200
@index = zerodowntime

I usually start by running a connection test to see if everything's okay (which is only useful for this demo).

### Connection test
GET {{host}}/_cluster/health

Set up an index for the first time

Now, let's pretend that we're creating an index in the Search API settings...

We start by setting up the indexes that will store the data.

### Index setup: Reserve the green index namespace
PUT {{host}}/{{index}}_green

### Index setup: Reserve the blue index namespace
PUT {{host}}/{{index}}_blue

Let's arbitrarily pick the "green" index to start using (i.e.: the "active" index)...

### Index setup: Close the green index so we can set mappings
POST {{host}}/{{index}}_green/_close

### Index setup: Set mappings on the green index
PUT {{host}}/{{index}}_green/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "keyword",
            "ignore_above": 256
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

Next, we create an alias that points to the "green" index...

### Alias setup: Open the green index so we can set an alias
POST {{host}}/{{index}}_green/_open

### Alias setup: Create an alias for _green
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "add": {
                "index": "{{index}}_green",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

Normal usage 1

Now, let's use the index normally with the original configuration... I'm assuming "normal" usage is creating documents (i.e.: with Search API's tracker) and searching (i.e.: with a Search API front-end of some kind).

### Usage: Add Data 1 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Ansible for DevOps", "author": "Jeff Geerling", "release_date": "2011-01-01", "page_count": 452}

### Usage: Add Data 2 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "The Design of Everyday Things", "author": "Don Norman", "release_date": "2013-01-01", "page_count": 180}

### Usage: Add Data 3 into the active index via its alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Drupal 8 Module Development", "author": "Daniel Sipos", "release_date": "2017-01-01", "page_count": 547}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the active index alias (pointing to green) for Data 1: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

Changing field mappings 1

Now, let's say an administrator changes some field settings that would normally require reindexing all the data (in this case, "author" changes from type Keyword to type Text)...

We start by deleting and re-creating the inactive (blue) index, then set the new mappings on it (note that we don't strictly have to delete and re-create the "blue" index before setting mappings in this particular case because we didn't set any mappings on it during the setup... but if we had used it before — as we do with "green" below in the "Changing field mappings 2" section — then we would have to delete and re-create it).

### Change settings: Delete the blue index
DELETE {{host}}/{{index}}_blue

### Change settings: Create the blue index
PUT {{host}}/{{index}}_blue

### Change settings: Close the blue index so we can set mappings
POST {{host}}/{{index}}_blue/_close

### Change settings: Set new mappings on the blue index
POST {{host}}/{{index}}_blue/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

### Change settings: Open the blue index for reindexing
POST {{host}}/{{index}}_blue/_open

Now we can reindex from the old-active index to the new-active index

### Change settings: Reindex data from green to blue
POST {{host}}/_reindex
Content-Type: application/json

{
  "source": {
    "index": "{{index}}_green"
  },
  "dest": {
    "index": "{{index}}_blue"
  }
}

Now we can update the alias...

### Change settings: Update the (active) index alias to point to the blue index
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_green",
                "alias": "{{index}}"
            }
        },
        {
            "add": {
                "index": "{{index}}_blue",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

### Change settings: Close the (now-inactive) green index for usage
POST {{host}}/{{index}}_green/_close

Normal usage 2

Now, let's use the index normally with the new configuration...

### Usage: Add Data 4 into the active index via its alias (pointing to blue)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Linux Kernel in a Nutshell", "author": "Greg Kroah-Hartman", "release_date": "2007-01-01", "page_count": 182}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the active index via its alias (pointing to blue) for Data 1: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

### Usage: Query the active index via its alias (pointing to blue) for Data 4: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "author": "kroah"
        }
    }
}

Changing field mappings 2

Now, let's say an administrator changes some more field settings that — again — would require reindexing all the data (in this case, we change "author" from Text back to Keyword)...

We start by deleting and re-creating the inactive (green) index, then set the new mappings on it (note that, this time, we must delete the green index first, otherwise we will get an error).

### Change settings: Delete the (inactive) green index
DELETE {{host}}/{{index}}_green

### Change settings: Re-create the green index
PUT {{host}}/{{index}}_green

Set the new mappings, and reindex to green again...

### Change settings: Close the green index so we can set mappings
POST {{host}}/{{index}}_green/_close

### Change settings: Set new mappings on the green index
POST {{host}}/{{index}}_green/_mappings
Content-Type: application/json

{
    "properties": {
        "name": {
            "type": "text",
            "fields": {
                "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                },
                "suggest": {
                    "type": "completion"
                }
            }
        },
        "author": {
            "type": "keyword",
            "ignore_above": 256
        },
        "release_date": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_second"
        },
        "page_count": {"type": "integer"}
    }
}

### Change settings: Open the green index for reindexing
POST {{host}}/{{index}}_green/_open

### Change settings: Reindex data from blue to green
POST {{host}}/_reindex
Content-Type: application/json

{
  "source": {
    "index": "{{index}}_blue"
  },
  "dest": {
    "index": "{{index}}_green"
  }
}

### Change settings: Update the alias to point to the green index
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_blue",
                "alias": "{{index}}"
            }
        },
        {
            "add": {
                "index": "{{index}}_green",
                "alias": "{{index}}",
                "is_write_index": true
            }
        }
    ]
}

### Change settings: Close the (now-inactive) blue index for usage
POST {{host}}/{{index}}_blue/_close

Normal usage 3

Now, let's use the index normally with the new-new configuration...

### Usage: Add Data 5 into the index alias (pointing to green)
POST {{host}}/{{index}}/_doc
Content-Type: application/json

{"name": "Drupal 7 Module Development", "author": "Matt Butcher", "release_date": "2010-01-01", "page_count": 394}

### Test-only usage: Flush data after writing documents
POST {{host}}/{{index}}/_flush

### Usage: Query the index alias (pointing to green) for Data 2: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "DevOps"
        }
    }
}

### Usage: Query the index alias (pointing to green) for Data 4: expect 1 result
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "author": "kroah"
        }
    }
}

### Usage: Query the index alias (pointing to green) for Data 3 and Data 5: expect 2 results
GET {{host}}/{{index}}/_search
Content-Type: application/json

{
    "query": {
        "match": {
            "name": "Drupal"
        }
    }
}

Deleting the index

If you want to re-run this test, then you'll have to clean up the alias and both indexes afterwards.

Search API indexes also get deleted sometimes; we can use the same procedure when that happens too...

### Teardown: Delete the alias
POST {{host}}/_aliases
Content-Type: application/json

{
    "actions": [
        {
            "remove": {
                "index": "{{index}}_green",
                "alias": "{{index}}"
            }
        }
    ]
}

### Teardown: Delete the blue index
DELETE {{host}}/{{index}}_blue

### Teardown: Delete the green index
DELETE {{host}}/{{index}}_green

🇨🇦Canada mparker17 UTC-4

This also seems to be working for me again.

The Elasticsearch incident report is still open, so I'm going to leave this ticket open.

🇨🇦Canada mparker17 UTC-4

@rodrigoaguilera, this happened before I became a maintainer, so I'm not sure exactly.

Looks like it was deleted from the 8.x-5.x branch about a month after it was added...

$ git checkout 8.x-5.x
Switched to branch '8.x-5.x'
Your branch is up to date with 'origin/8.x-5.x'.

$ git log -- 'modules/elasticsearch_connector_devel/elasticsearch_connector_devel.info'
commit f33acbf882fe95e63b4f92eebae2f276d8447bf7
Author: Nikolay Ignatov
Date:   2014-05-26 23:53:55 +0300
 
    Clearing some 7.x-1.x code and adding some initial yml file.
 
commit 58873ed84103aea8cfa299db2b9c0549ad098e23
Author: Nikolay Ignatov
Date:   2014-04-12 12:49:58 +0300
 
    Fixing issue #2218455. Adding a debugger module.

... it also looks like it was never ported to 8.x-6.x or 8.x-7.x...

⋯/elasticsearch_connector on  8.x-5.x via 🐘 
$ git checkout 8.x-6.x
Switched to branch '8.x-6.x'
Your branch is up to date with 'origin/8.x-6.x'.

$ git log -- 'modules/elasticsearch_connector_devel/elasticsearch_connector_devel.info'

$ git checkout 8.x-7.x
Switched to branch '8.x-7.x'
Your branch is up to date with 'origin/8.x-7.x'.

$ git log -- 'modules/elasticsearch_connector_devel/elasticsearch_connector_devel.info'

$

It's not exactly the same, but there is an "Enable debugging mode: log ElasticSearch network traffic" option when you're configuring a server in 8.0.x, which will log information about queries and responses to Drupal's logger mechanism: https://git.drupalcode.org/project/elasticsearch_connector/-/blob/8.0.x/... - see 📌 Logging should be configurable (Fixed) for more information.

Does this "Enable debugging mode: log ElasticSearch network traffic" option satisfy your needs?

🇨🇦Canada mparker17 UTC-4

This is blocking my work as well. Thank you for the links to the upstream incident report and forum post.

I apologize, but I do not work at Elasticsearch B.V. (the company that controls the upstream library that is breaking), and — as far as I'm aware — none of my elasticsearch_connector co-maintainers work at Elasticsearch B.V. either.

Given that none of the Drupal module maintainers work at the company that controls the upstream library that is breaking, and given that @zuernbernhard already reached out to Elasticsearch B.V. through a back-channel, I am not aware of anything else that the Drupal module maintainers can do to move this issue forward at this time (other than reaching out to Elasticsearch B.V. through our own back-channels, and closing this issue when we see that the issue has been resolved upstream).

@zuernbernhard, given that you filed this issue, was there anything else that you would like the Drupal module maintainers and/or Drupal community to do to resolve this situation?

🇨🇦Canada mparker17 UTC-4

I've been working on writing change records for the differences between 8.x-7.x and 8.0.x, but currently they're still in draft: https://www.drupal.org/list-changes/elasticsearch_connector/drafts

Thanks for your patience with me!

🇨🇦Canada mparker17 UTC-4

I feel like this needs tests, but I'm not sure where to add them, because it's not clear to me which tests cover \Drupal\jsonapi_search_api_facets\Plugin\facets\facet_source\JsonApiFacetsDeriver::getDerivativeDefinitions().

Still, I would appreciate a review.

🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch 3482559-missing-server-when-deriving-facets to active.

🇨🇦Canada mparker17 UTC-4

mparker17 changed the visibility of the branch 3482559-missing-server-when-deriving-facets to hidden.

🇨🇦Canada mparker17 UTC-4

Done! I'll put this issue back for review!

I was hesitant to drop support for 10.2 before its support is officially dropped, because one of the organizations that is paying for my contributions to ckeditor_abbreviation is still on 10.2... but my client's D10.3/D11 upgrade plans are progressing nicely, so as long as ckeditor_abbreviation-4.0.x remains supported while 10.2 remains supported (i.e.: until the end of 2024), then it should be fine (and as a co-maintainer, I'm happy to continue supporting that branch until the end of 2024).

But, I agree with your reasoning - it will be much easier to support Drupal 10.3 and 11 going forward.

Regarding the upgrade status job...

My understanding is that the module that the job runs to generate the report gets updated frequently so that it is aware of APIs that are deprecated in D11 and removed/changed in D12.

So - in theory - this job is supposed to notify us as soon as an API that we are using becomes deprecated, i.e.: giving us lots of time to find a solution/workaround (or push back in the core issue queues). In practice, however, its introductory blog post doesn't really have much information on how we're supposed to use it once it is set up.

On the one hand, it seems strange to say that ckeditor_abbreviation supports D12 (i.e.: by changing our core_version_requirement to ^10.3 || ^11 || ^12) before D12 even has an alpha release. But, on the other hand, seeing the test failure because of our core_version_requirement makes it easy to get into the habit of ignoring the output from that job (i.e.: because it always fails), leading us to miss when an API that we actually use is removed/changed.

So... what could we do?

  1. We could leave it disabled until a D12 alpha version comes out, then re-enable it and update our core_version_requirement at the same time... but if we do that, instead of fixing old API usage gradually over the next ~2 years, those problems will pile up and we won't notice them until we re-enable the job after the first D12 alpha comes out. And this approach would make it hard for people to test ckeditor_abbreviation on D12 in the meantime.
    • Note that I have disabled the job for now, so we've already started on this path.
  2. We could mark the 5.0.x branch as compatible with D12, so that the upgrade_status report is successful... but if we do that, then we're not really being entirely truthful about our D12 support (i.e.: we're only supporting it on a best-effort basis, i.e.: only fixing D12 support when we notice an issue in the report and we have time to fix it).
  3. We could create a 6.0.x branch that is compatible with D12, enable the upgrade_status job on that branch, and leave it disabled on the 5.0.x branch... but if we do that, we will have to remember to merge our own ckeditor_abbreviation issues to both branches (and there is a chance that people might be more hesitant to contribute a fix to a 6.0.x branch that they don't use yet).

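For what it's worth, option 2 would amount to a one-line change in the module's info file. A sketch of what that might look like (assuming the file is ckeditor_abbreviation.info.yml; the name and type keys here are illustrative placeholders, only the core_version_requirement line is the actual proposal):

```yaml
# Sketch of ckeditor_abbreviation.info.yml; name/type are placeholders.
name: CKEditor Abbreviation
type: module
# Option 2: declare D12 compatibility before a D12 alpha exists.
core_version_requirement: ^10.3 || ^11 || ^12
```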
🇨🇦Canada mparker17 UTC-4

Ready for review again!

🇨🇦Canada mparker17 UTC-4

Apologies! I thought the text referred to the popup. I'll fix it right away! Thanks!

🇨🇦Canada mparker17 UTC-4

Added Simplify composer.json Active as a nice-to-have.

🇨🇦Canada mparker17 UTC-4

Tests pass; merging.

🇨🇦Canada mparker17 UTC-4

Added a merge request; waiting to see if tests will pass.

🇨🇦Canada mparker17 UTC-4

Huh, I can see the definition in the annotation, and it looks like this was moved to the Fixed status but somehow got re-opened?

I'm going to mark this as "Fixed" again.

🇨🇦Canada mparker17 UTC-4

Some background information... according to #2708461: Port to Drupal 8/9, the original porting work was done by @bircher (and others) in @bircher's fork of the module on GitHub... the path had already been changed in that initial port to D9, and because I hadn't used the D7 version when I became a maintainer of the module and finished the port, I didn't notice the change in behavior from D7.

@DomoSapiens, from the issue title and description you wrote, it sounds like you thought the module was broken because the list of feedback messages was missing. Because the new page is working (I just wrote automated tests to make certain!) - albeit at a different path - I am going to mark this as "Closed (works as designed)", in part so that we can move forward with making a stable release.

However, if you strongly feel that the list of feedback messages should be at /admin/reports/feedback instead of its new location, then I'd encourage you to open a separate Feature Request to ask that we change it (it's pretty easy to do so - but if you could add any justification, that would help the ~26 sites using a 3.x version (i.e.: using the new path) understand why we are changing it for them).

Thanks for your patience with me!

🇨🇦Canada mparker17 UTC-4

The list of feedback messages - at /admin/reports/feedback in the 7.x.2.x release series - is available at /admin/content/feedback_message in the 3.x release series.
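For anyone looking for where the list lives now, the 3.x route presumably looks something along these lines (a sketch only; the route name, list builder, title, and permission are my assumptions, not copied from the module's actual routing file):

```yaml
# Hypothetical sketch of the 3.x route; name, title, and permission are assumptions.
entity.feedback_message.collection:
  path: '/admin/content/feedback_message'
  defaults:
    _entity_list: 'feedback_message'
    _title: 'Feedback messages'
  requirements:
    _permission: 'administer feedback message entities'
```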

I'll write a change record.

🇨🇦Canada mparker17 UTC-4

Marking as fixed.

🇨🇦Canada mparker17 UTC-4

Adding Add tests for views component Active as a release-blocker because it's the only part of the module missing tests.

Adding Add database updates test Active as a release-blocker because it's useful to have a baseline before a release.

Also Add automated tests Active is done now, yay!

🇨🇦Canada mparker17 UTC-4

Adding the Needs tests tag.

🇨🇦Canada mparker17 UTC-4

I think it might be worth splitting the remaining tests into separate issues, and merging this one, so I'm going to do that.

🇨🇦Canada mparker17 UTC-4

Merged the Message Listing test into the CRUD message tests; it kinda makes more sense there.

Added Create/Read/Update/Delete Message-Type tests.

The CRUD message-type tests briefly broke because of 🐛 Feedback message type entity doesn't declare its success_message property Active , but after merging that, they are working again.

🇨🇦Canada mparker17 UTC-4

Added Create/Read/Update/Delete tests.

🇨🇦Canada mparker17 UTC-4

Added a test of the list functionality.

🇨🇦Canada mparker17 UTC-4

This looks good to me now. I've merged it. Thanks everyone!

🇨🇦Canada mparker17 UTC-4

While writing tests in Add automated tests Active , I ran into this problem as well.

I daresay calling it access feedback message list would be even more clear.

FWIW, this wasn't proposed in the patch, and it wasn't explicitly stated in any comments either: I think a new permission to access the list of feedback messages is better than replacing the administer feedback message entities permission, because administer feedback message entities is used in \Drupal\feedback\FeedbackMessageAccessControlHandler::checkFieldAccess() to control field access - and it is plausible for a site admin to want some users to see a list of feedback messages without letting them modify which fields are on feedback messages.
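To illustrate, the new permission could be declared along these lines in the module's permissions file (a sketch; the machine name and wording are assumptions pending the naming discussion above):

```yaml
# Hypothetical addition to feedback.permissions.yml; name and wording are assumptions.
access feedback message list:
  title: 'Access the feedback message list'
  description: 'View the list of feedback messages without being able to administer them or their fields.'
```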

🇨🇦Canada mparker17 UTC-4

mparker17 made their first commit to this issue’s fork.

🇨🇦Canada mparker17 UTC-4

Let's move this to "Needs work".

🇨🇦Canada mparker17 UTC-4

Have a test of basic functionality in place.

Updated the list of remaining tasks to include other tests we need.
