Account created on 15 September 2006, about 19 years ago
  • Systems and Software Architect at Chromatic 

Recent comments

🇺🇸United States apotek

Thank you @catch for your affirmation. I'll take a look and see if I can put together a doable refactor of this.

🇺🇸United States apotek

apotek created an issue.

🇺🇸United States apotek

This feels like it should be a core Media issue.

🇺🇸United States apotek

apotek created an issue.

🇺🇸United States apotek

@moshe-weitzman, reporting back that after making changes to the constructor of a Hook and then running /core/rebuild.php, the changes are picked up by Drupal, but not when running `drush cache:rebuild`. So there seems to be a delta between core/rebuild.php in Drupal 11.2.5 and Drush 13's `cache:rebuild` command.

🇺🇸United States apotek

@moshe Thank you for the helpful nudge. I will try your suggestion and report back both here and in the Drush issue I created.

🇺🇸United States apotek

Closing this. This seems to be a bug with Drush, not Drupal.

If I use `admin/config/development/performance` to clear all caches, autowiring picks up the changes immediately, without requiring a deployment.

There seems to be something in the "Clear all caches" callback at admin/config/development/performance that has not been added to Drush's `cache:rebuild`.

🇺🇸United States apotek

apotek created an issue.

🇺🇸United States apotek

While the patch from comment #52 may no longer apply successfully to core dev, it does apply cleanly to 11.2.4 and works there.

🇺🇸United States apotek

Incredibly, Dave's now six-year-old patch from comment #11 still applies cleanly to 11.2.4 and continues to fix the issue.

🇺🇸United States apotek

Patch in #96 applies cleanly to 11.2.5 and works well.

🇺🇸United States apotek

Patch #86 applies cleanly to 11.2.4 and works well.

🇺🇸United States apotek

> It seems like maybe the module needs to be updated to generate config that uses CaffeineCache instead, which is the new version supported in 9. (Or to not specify cache and let it select the default, which is CaffeineCache.)

Interestingly, the search_api_solr/jumpstart/solr9 config sets contain the correct solrconfig_query.xml for Solr 9, but the button to download the config set does not use it! It provides the Solr 7/8 version instead, despite correctly identifying my Solr server as version 9! And if I go to the files tab in the module and look for the solrconfig_query.xml file, I get, yes, the 7/8 version, not the 9 version.

Very odd.

🇺🇸United States apotek

There is a QueueServiceInterface::emptyQueue() that would do what the requester has reasonably asked, but it doesn't look like the queue implements that interface. The only way to do what the requester asks currently is to use deleteQueue() + createQueue(), which seems a little brutal.
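For anyone landing here, a sketch of that workaround (the wrapper function is hypothetical; deleteQueue() and createQueue() are the methods mentioned above):

```php
<?php

// Empty a queue by dropping and recreating it. "Brutal" because any
// in-flight items and per-queue configuration can be lost in between.
function empty_queue_the_hard_way(object $queueService, string $queueName): void {
  $queueService->deleteQueue($queueName);
  $queueService->createQueue($queueName);
}
```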

🇺🇸United States apotek

Like many others here, I don't think forcing users to turn off purging on cache rebuilds only by disabling the entire module (and thus all its services) is the right approach at all. Almost all other modules that connect to an external service provide a way to turn off production functionality when a setting is set or an environment variable is unconfigured. One shouldn't have to install Config Split and set up different configurations just to ensure that purges aren't triggered in testing and dev, where they only lead to errors.

Nor should we force developers to set up a Fastly service just for QA or just for dev. That's a lot of extra cost, and very few developers get to make purchasing decisions anyway.

Here's another use case. I have written a Drush command that depends on some of the services defined in the Fastly module. I need my custom Drush command available in dev for testing, which means the Fastly module needs to be enabled in dev. If I disable the Fastly module, the code that depends on its services fails. In fact, I can't even deploy the site if the Fastly module is disabled, due to missing service dependencies.

Having a Fastly purge ON/OFF switch makes sense, makes everybody's life easier, and is necessary in cases where other code depends on the Fastly module. It is not hard to implement, and patches have been contributed. The resistance to this feature is really strange.
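A minimal sketch of what such a switch could look like (the `fastly_purge_disabled` setting name and the helper are hypothetical, not part of the module today):

```php
<?php

use Drupal\Core\Site\Settings;

/**
 * In dev/QA, settings.local.php would contain:
 *   $settings['fastly_purge_disabled'] = TRUE;
 * The purger service would consult this before any remote call.
 */
function fastly_purge_allowed(): bool {
  // The module stays enabled and its services stay defined; only the
  // outbound purge requests are skipped outside production.
  return !Settings::get('fastly_purge_disabled', FALSE);
}
```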

🇺🇸United States apotek

This set of changes has been tested and the module works under Drupal 11.

🇺🇸United States apotek

I'm running into an issue with Drush that is making it difficult to test these changes with Drush 13. Everything works fine on Drush 12.5.3.

🇺🇸United States apotek

> Drush 11 is not recommended for Drupal 10 and will be EOL in Nov 2023.

Let's drop backwards compatibility with Drush 11.

🇺🇸United States apotek

apotek created an issue.

🇺🇸United States apotek

So, in conclusion.

1. The hook that runs database updates is called `hook_update_N()` and lives in the .install(!) file.
2. The hook that runs right after those updates, and before anything else is done, is called `hook_post_update_NAME()` and lives in a MODULE.post_update.php file, not in the .install file where its sibling `hook_update_N()` lives.

(No, developers, `hook_post_update_NAME()` does _not_ run after the deployment, even though it is named as though it might.)

3. Instead of simply providing a way to run steps truly after a deployment, as requested, the solution is... a new requirement! You must install Drush and be able to use Drush in your environment. Or you have to do a separate step in the UI.
4. The name of the hook that the Drush project decided to use to run events _after_ a deployment is :drumroll: ... `hook_deploy_NAME()`, and it lives in a MODULE.deploy.php file! I kid you not.
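For reference, a minimal sketch of where each of these actually lives (module and function names are hypothetical):

```php
<?php

// mymodule.install: hook_update_N() runs database/schema updates,
// executed first by update.php or `drush updatedb`.
function mymodule_update_10001() {
}

// mymodule.post_update.php: hook_post_update_NAME() runs after all
// hook_update_N() implementations, but still before config import.
function mymodule_post_update_fix_permissions() {
}

// mymodule.deploy.php: hook_deploy_NAME() is Drush-only, run by
// `drush deploy` after database updates and config import.
function mymodule_deploy_reindex_content() {
}
```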

After 111 comments and many rejected patches, this is where we are.

🇺🇸United States apotek

Just wanted to say that this tutorial is excellent. Thorough, readable, and with a complete code example to iterate on. Thank you very much.

🇺🇸United States apotek

Just reporting in that we are seeing this frequent visitor in our logs too.

🇺🇸United States apotek

On one of our client sites, core/install.php takes about seven seconds to load. We just had a site outage this morning because someone was hammering that URL until our client's PHP-FPM pool was exhausted.

The page returns cache headers that make the response uncacheable by a CDN, so repeated requests go to origin every time.

So while you are absolutely right that the page is not a security risk, it can easily be abused for DDoS purposes. Someone could bring your site down with a relatively low request rate of one or two requests per second (too low for your rate limiter or WAF to think anything is wrong).

So I absolutely recommend that, in a public environment, requests to this endpoint be blocked, or that the file be removed. Some options:

1. When you build your deployment artifact, script the removal of that file.

2. Or use a Composer install hook to remove that file after Composer installs your dependencies.
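For option 2, a minimal sketch in composer.json (assuming a `web/` docroot; adjust the path to match yours):

```json
{
    "scripts": {
        "post-install-cmd": [
            "rm -f web/core/install.php"
        ]
    }
}
```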

🇺🇸United States apotek

apotek created an issue.

🇺🇸United States apotek

The merge request provided does not actually implement the logging. Given the complexity (and enormity) of the request parameters, it is highly unlikely that whatever the maintainers chose to log would suit everyone.

The better option here, then, is to dispatch a response event and allow users of this module to write an EventSubscriber that listens for the API response event and logs whatever data they wish, as they see fit.
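A minimal sketch of the consumer side (the event name and the event object's API are hypothetical, since the module would first need to dispatch such an event):

```php
<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Logs API responses; each site decides which fields matter to it.
 */
class ApiResponseLogSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    // 'orange_dam.api_response' is a hypothetical event name.
    return ['orange_dam.api_response' => 'onApiResponse'];
  }

  public function onApiResponse(object $event): void {
    \Drupal::logger('mymodule')->info('API response received: @status', [
      '@status' => $event->getResponse()->getStatusCode(),
    ]);
  }
}
```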

🇺🇸United States apotek

A change request has been logged against the orange-dam-php library.

🇺🇸United States apotek

Thank you @fjgarlin and @jonathan1055 for digging into the issue with me. I appreciate it.

As a module maintainer, I do appreciate the gitlab CI automations, but given the ad hoc and voluntary nature of module maintenance, it would be good if this tooling defaulted to advisory notices (as agreed in this thread) for anything the module has not yet claimed or delivered upon.

As a maintainer, I do want my phpstan and phpcs to fail for all versions for which I claim compatibility in my existing code. But I do not want them to fail for versions for which I have not begun the update effort and for which I am not claiming compatibility.

I also note that since Drupal 11 is out, "next major" would be Drupal 12. Or am I misunderstanding what "next major" signifies?

Thank you again for your contributions. I just wanted to offer some feedback in the issue itself as history. Thank you for #3471235 too.

🇺🇸United States apotek

Thank you for the great work! This will make module maintenance much clearer!

🇺🇸United States apotek

With the update to migrate_source_queue, this is working in Drupal 10 and is non-breaking in Drupal 11.

🇺🇸United States apotek

Thank you for making the update @dieterholvoet! You have fixed the issue we were having with orange_dam module.

🇺🇸United States apotek

I see that @dieterholvoet has made a new release (1.0.2) of migrate_source_queue!

I'll update and see how testing proceeds.

🇺🇸United States apotek

Would we be able to merge this? The merge request looks straightforward and we could bump the compatibility up to Drupal 11.

🇺🇸United States apotek

Hi fjgarlin,

Thank you for the response and clarification. It was helpful to note: "This job depends on the "composer (next major)" being successful".

This makes sense, but the thing is that the composer (next major) job is allowed to fail without failing the pipeline, while this job marks a big red X when it fails. If this job depends on the success of the first (composer) job, then there is some dissonance here: if the first job is allowed to fail, this job should also be allowed to fail.

It looks like I will have to disable my CI. It was working fine before, but it is now marking merge requests as failing despite there being no discernible issues in the versions the module has demonstrated compatibility with.

Overall, it seems odd to fail a module for not having Drupal 11 compatibility when it doesn't even _claim_ to be compatible yet. If the module claimed Drupal 11 compatibility and yet failed the composer (next major) or phpstan (next major) jobs, that would make sense. But in this case, these two jobs introduce requirements the module does not claim, which means they are _advisory_ and should not be failing merge requests.

The specific run is here: https://git.drupalcode.org/project/orange_dam/-/jobs/2803541

🇺🇸United States apotek

Unfortunately, this did not get ported to the 8.x-1.7 release, and there is no stable 2.x release. So while this is closed, it isn't really fixed for any sites that require stable releases.

Is there a plan to make a 2.x release? As more and more people update to PHP 8.2/8.3 (Acquia is moving everyone onto those versions now), this issue will only get more widespread.

🇺🇸United States apotek

Gloria in excelsis mundi!!! Thank you to everyone for this epic improvement.

🇺🇸United States apotek

I reviewed and approved, though I did, like others, also feel that fixing the default scheme is a bit out of scope here.

Noting also that it has been a year since any maintainer closed or merged an issue on this project. So while this is good work, we might be out of luck getting it into the project; thank you to everyone who put up patches. I did write a long time ago asking about co-maintaining, but understandably did not hear back. Perhaps someone on this thread has the stature in the community to make the same offer, so we can address some of the issues that are beginning to stretch this module now that 10.3 is out.

🇺🇸United States apotek

Just reporting back, in case anyone else is experiencing this issue:

The status check fails in two odd cases.

Case 1: The exported config set and the installed config set are identical except for the order of some of the properties on the XML nodes. The properties and their values are identical, just in a different order. Functionally the XML is identical, but the status report still triggers a warning, which is unwarranted in this case.
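For illustration (a made-up example in real Solr schema syntax), these two lines are functionally identical XML, but a line-based comparison flags them as different:

```xml
<fieldType name="text" class="solr.TextField"/>
<fieldType class="solr.TextField" name="text"/>
```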

Case 2: In my QA environment, I have configured the Solr server to symlink /var/solr/data/mycore/conf to a mounted solr-conf directory. The contents of that mounted directory consist exactly and exclusively of the config set I exported from that very server before rebuilding it: I downloaded the config set from the QA server, expanded the zip file into my solr-conf directory, and rebuilt my QA Solr server, and I still get the status report warning. If I shell into the QA Solr server and compare the files in /var/solr/data/mycore/conf to the files in the downloaded zip, they are identical.
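For anyone wanting to reproduce the comparison (paths assumed from the description above):

```
unzip config.zip -d /tmp/exported-conf
diff -ru /tmp/exported-conf /var/solr/data/mycore/conf
```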

I can only conclude that the comparison in this function is a little off, and I advise people to ignore the warning if they have already made a good-faith effort to resolve it by retrieving a fresh config.zip.

🇺🇸United States apotek

This was handled in https://www.drupal.org/project/orange_dam/issues/3441899, "Make OrangeDamApi::getKeywords a pure get function" (now fixed).

🇺🇸United States apotek

Already done. Though we could start planning version 3 :)

🇺🇸United States apotek

This patch seems to work as advertised. I don't know why this issue was moved from Needs review to Needs work without feedback. What is the patch lacking? Can that be made explicit? Putting this back to Needs review.

🇺🇸United States apotek

I am also confused by this. hook_install() and hook_uninstall() are documented as the way to install custom entities or custom fields on entities (and to remove them), yet we are being told *not* to make such changes in these hooks during a config:import, which is how most sites install and uninstall modules these days (rather than via explicit Drush commands or the web GUI). If so, it seems we are unable to have our custom fields or entities installed as part of a site deployment.

Or are hook_uninstall() and hook_install() run twice: once during config:import and once during some other part of the deployment process?

I am very confused here, as this requirement not to make changes to entities in hook_install() seems to run counter to the whole point of the hook.

🇺🇸United States apotek

Full migration tests were run. Everything checked out.

🇺🇸United States apotek

Thank you @markdorison and @rajan-kumar2026. In addition to the suggestions Mark is making, I think the point of this ticket is to figure out whether or not we want to support Drush 12. Currently, our Drush commands are not compatible with Drush 12. So if we are going to add

"drush/drush": "^11 || ^12"

we are basically saying we support Drush 12 when we _don't_ support Drush 12 (yet).

If we are going to add `^12`, we will have to rewrite a lot of code in this module so that it can still output to the console. It's a big job.
