I would say this should be a config dependency: https://www.drupal.org/docs/drupal-apis/configuration-api/configuration-... →
Correct me if I'm wrong, but shouldn't Simple XML Sitemap keep track of dependencies to other modules when a plugin is added to its configuration?
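For illustration, this is roughly what such a dependency could look like in exported configuration. The excerpt and module name are hypothetical, not taken from Simple XML Sitemap's actual config schema:

```yaml
# Hypothetical excerpt of a sitemap variant's configuration. When a
# UrlGenerator plugin provided by another module is referenced, that
# module should end up in the config's dependency list, so Drupal can
# react (e.g. clean up the reference) when the module is uninstalled.
url_generators:
  - my_custom_generator
dependencies:
  module:
    - my_custom_module
```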
🙌 Excellent. I'll see if I can take a crack at that soon.
I ran into what appears to be the same issue, and I think it is mostly unrelated to migration. There is an easy workaround: remove the plugin reference from the configuration. Nevertheless, it can have some nasty side effects when disabling a module that contains UrlGenerator plugins that are in use. I added some steps to reproduce.
I *assume* the OP was having this situation too. If not, my apologies. Let me know, then I'll create a different issue. Of course, the same could happen if the "custom" plugin got its plugin ID changed, which is basically the same thing; the configured plugin is no longer available.
I just found 📌 Support OOP hooks Active and noticed there that currently most hooks are implemented using the OO class_resolver pattern, which makes the official OO hooks conversion much easier. So we should at least change that, possibly do it all in one go if 📌 Support OOP hooks Active lands first.
Ah, I had not noticed this issue before. I added a hook_cron implementation in ✨ Add a cleanup function Active and did not use this pattern because of the Drupal 9 support (although I may have used some PHP features that rule out D9, not sure🫢). Will probably depend on the sequence of merging what needs to be adapted to what.
As expected, this didn't go flawlessly, I ran into out of memory exceptions.
- I added a batching operation, deleting 100 log entries at a time, with a cache clear for the entity storage in between.
- I added a time limit, so the cron hook will not run longer than 10 seconds.
Typically, I would say this is enough for most setups. Maybe it is worth mentioning somewhere in the documentation that if you consistently get warnings that the time limit was exceeded, you need to either increase the frequency at which the cron hook is run (maybe with a reference to Ultimate Cron) or reduce the number of log entries created.
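The two safeguards can be sketched like this. This is an illustrative JavaScript model of the logic, not the module's actual PHP; the function name, defaults, and entry shape are all made up:

```javascript
// Illustrative model of the cron cleanup: delete stale entries in
// batches of 100 and stop once a time budget is spent, so a single
// cron run stays bounded in both memory and duration.
function cleanup(entries, cutoff, { batchSize = 100, timeLimitMs = 10000, now = Date.now } = {}) {
  const start = now();
  while (now() - start < timeLimitMs) {
    const batch = entries.filter((e) => e.created < cutoff).slice(0, batchSize);
    if (batch.length === 0) {
      return true; // Finished: nothing older than the cutoff remains.
    }
    for (const entry of batch) {
      entries.splice(entries.indexOf(entry), 1); // Stand-in for entity deletion.
    }
    // The real implementation clears the entity storage cache between
    // batches here, to keep memory usage flat.
  }
  return false; // Time limit reached; warn and resume on the next cron run.
}
```

A run that keeps hitting the time limit (returning `false`) run after run is the signal mentioned above: cron needs to run more often, or fewer log entries need to be created.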
Added a quick implementation using Cursor. I am not convinced that the "just delete everything older than x" approach through the entity interface is a great idea. Especially if there is a large backlog, I don't think it will finish in a reasonable amount of time. But maybe I'm mistaken; it needs a little more testing (like with a prod database from my example project :) ).
See https://www.drupal.org/node/3082634 → . What this issue seems to need is:
- An updated change record
- A call to Drupal.deprecationError() referring to said change record
- A follow-up issue to actually delete the functionality in Drupal 12
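For the Drupal.deprecationError() bullet, the pattern would look roughly like the sketch below. The function names are placeholders, and a minimal stand-in for the Drupal global is included so the snippet is self-contained; on a real site, deprecationError() comes from core/misc/drupal.js.

```javascript
// Minimal stand-in for the Drupal global; in a real site this comes
// from core/misc/drupal.js, which already provides deprecationError().
const Drupal = {
  deprecationError({ message }) {
    console.warn(`[Deprecation] ${message}`);
  },
  someReplacementFunction(value) {
    return value; // Placeholder for the non-deprecated code path.
  },
};

// The deprecated function keeps working, but emits a warning pointing
// at the change record before delegating to the replacement.
Drupal.someDeprecatedFunction = function (...args) {
  Drupal.deprecationError({
    message:
      'Drupal.someDeprecatedFunction() is deprecated and will be removed in Drupal 12. See the change record for the replacement.',
  });
  return Drupal.someReplacementFunction(...args);
};
```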
Updated issue summary.
Adding the issue that this MR is currently rolling back.
@steven jones Thanks for that clarification. Our use case does not rely on the file being managed, so your proposal sounds fine.
Strictly speaking, if this is a security vulnerability, it should have been reported through https://security.drupal.org. The impact is fairly small, although depending on site styling an "empty" section might still have some impact on a page, I suppose. It may still be best to report it there, just to be on the safe side; perhaps the security team has means to remove public references to the issue (like the Git commits, too).
Our use case is pretty simple: the file needs to be available at a specified location, under a specific, descriptive name. We are actually generating the file in the private file location, although I could imagine the need to put it somewhere else. I'm not sure what you mean by "I wonder about leaving the generation of the file alone, and then at the end of the Drush command, it can simply copy the file to the new location." Do you mean to keep the different name out of the scope of this Drush command and leave it to the caller to move the file afterwards? In our case, that would make things more complicated, as we currently have a single line in our crontab to generate the file in a specified location using this command. I suppose it is possible to do that all in a single line in a crontab, but I'd also say that allowing the user to specify a file name for the export file is a very reasonable feature to have.
#34 (i.e. https://www.drupal.org/files/issues/2020-08-05/2887450-34_0.patch → ) is a very old patch... Are your patches based on the patch or the merge request?
I've hidden MR 5 and 48. 5 was superseded by 14, as mentioned in this MR comment. 48 seems to be a completely different approach to the issue and doesn't do what the issue title says ("do not reimport existing entities"). Judging by the title, it first removes existing entities and then reimports the entities from default content, which is definitely a legitimate scenario, but something else entirely. I think it probably warrants its own issue (which should probably be postponed on this one, to keep things manageable).
eelkeblok → changed the visibility of the branch 2698425-delete-existing-n-import to hidden.
eelkeblok → changed the visibility of the branch 2.0.x to hidden.
eelkeblok → created an issue.
Looks like this was resolved in a different way in the latest release ( 3.0.3 → ).
@claudiu.cristea would you maybe assign some credit for this issue? It is valuable for me and my company. Thanks in advance.
eelkeblok → changed the visibility of the branch 2928564-add-stagefileproxy-stream to active.
eelkeblok → changed the visibility of the branch 2928564-add-stagefileproxy-stream to hidden.
WRT the origin_path setting, there seems to be some doubt as to whether support for private files belongs in this issue. I would say no. There already is a lot of value in converting only public files to using stream wrappers (e.g. the use case with responsive images, where the responsive image module tries to open the physical file to get information on its size), and adding private file support into the mix complicates this issue. I think my next task will be removing support for private files from the merge request. Maybe we should consider re-opening #2879046: Compability with private files → and postponing it on this?
Going to call this, a bit opportunistically, a bug, because that would mean I can still include it in the 2.1.x branch.
Forgot to actually merge.
eelkeblok → created an issue.
The question is, would it be possible to devise something to support the batch that is potentially built by the hook? If so, that would also open up an opportunity to remove the code that reimplements the core cancellation methods.
Does indeed resolve the error.
Ah, no, it was meant to say I adjusted the MR. Fair enough to get some tests.
Can you elaborate on the reason this needs work? AFAICT the merge request is mergeable.
avpaderno → credited eelkeblok → .
Thanks. The latest commit fails its tests, but that is not due to these changes.
It seems the GitLab job for the next major Drupal version (i.e. 11) is actually failing. I did not dive in too deep to see what is going wrong, but it seems a pretty fundamental problem for this particular issue :)
I put a reference to the Backdrop version on the project page. I'll close this issue as fixed.
This project is no longer maintained. Closing.
Drupal 7 has reached end of life. Closing.
This project is no longer maintained. Closing.
Drupal 7 has reached end of life. Closing.
Drupal 7 has reached end of life. Closing.
Drupal 7 has reached end of life. Closing.
Drupal 7 has reached end of life. Closing.
Drupal 7 has reached end of life and this module is no longer maintained. Closing.
eelkeblok → created an issue.
That should do it. I added some tests in the process.
I think this is a duplicate of 🐛 Permissions not checked for "Si Prepublish" and "Si Recheck" task links Active .
eelkeblok → created an issue.
We applied this in our project and it does the job. Seems good to go to me.
eelkeblok → created an issue.
eelkeblok → created an issue.
I've been looking into an issue with this too, but to me it looked like the theme JavaScript was previously running before the module's JS, whereas now it runs after (I say "looked like" because I didn't dig deeper into that particular issue; it already took me long enough to get this far, and other maintenance issues also took plenty of time).
I would think the latter is the correct order; first the module JS, then the theme JS. In my case, this caused problems, but in my view that means my project was (is) relying on a buggy behaviour of core. Maybe this is the case for the TS as well. However, without more details, I don't think we can determine one way or the other.
FWIW, I released a D11-compatible version of Whoops.
What would you suggest is the correct asset order? I am not quite sure what I'm looking for yet. Maybe illustrate the problem with some screenshots?
@ludo.r Can you please update the merge request? Often, it is enough to merge in the (updated) target branch. If the patch no longer applies, chances are you need to resolve some merge conflicts (which you probably had to do to get an updated patch anyway).
If you have a suggestion as to the formulation of that part of the README, merge requests are welcome. Fundamentally, the WYSIWYG/RTE editor doesn't have much to do with what this module does. However, removing support for the drupal-entity tags from the input filter (and from the editor configuration, when one is linked to the input filter) will mean the editor no longer recognizes the tags, and may filter out the unknown tags; it would be critical not to save the resulting content, because that would wipe out the old tags before the module gets a chance to convert them. Also, it would be good to confirm at the HTML level (e.g. by looking in the database) whether the tags were removed, or "just" not converted. You may have found another case of the module failing to recognize a certain flavour of entity embed tags.
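For context, an embed tag as stored in the field value has roughly this shape (the attribute values here are placeholders, and the exact set of data attributes varies per flavour of embed). Comparing the raw field value in the database against this shape tells you whether the tags were actually removed, or merely left unconverted:

```html
<!-- Placeholder values; display/view-mode attributes differ between
     entity embed variants. -->
<drupal-entity data-entity-type="media"
               data-entity-uuid="..."></drupal-entity>
```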
Disclaimer: this is just a theory about what might have happened.
eelkeblok → made their first commit to this issue’s fork.
Heard about this only minutes ago when listening to the last Talking Drupal 😊 I have maintainer rights, but unfortunately I do not seem to have permission to add additional maintainers. Maybe I can be of help getting stuff merged, though.
@zenimagine To answer your question, I'd say it should be fine to update to 4.0, since you inadvertently were on 4 already, having installed 3.0.6. Can you set this issue to "Fixed" if your questions are answered?
@claudiu.cristea Yeah, I'm a big fan of visual Git clients, basically for reasons such as this. It becomes trivially obvious when you are always confronted with the commit graph, whereas this is a very easy mistake to make when just coming at it from the command line (not saying it is impossible to happen, of course; one can "mis-click" as well 😊).
https://www.drupal.org/project/private_message/releases/3.0.7 →
Thanks @claudiu.cristea
Part of the confusion might be 🐛 3.0.6 was released from the 4.0.x branch Active .
eelkeblok → created an issue.
Applied the review points from @penyaskito and created a new MR because the old one's target was 3.0.x.
Tests should now be green 🤞
The last hurdle was the assertion on the derived image being created. For some reason, the test couldn't assert the existence of the file, even though manual testing seems to work fine. Since we are basically depending on core doing its thing after we've downloaded the original, I opted to remove the check for the style image.
I restructured the branch a little because it seems some previous merge conflicts were not resolved optimally. I rebased the previous commits on that merge.
Thanks. The customer was already in the process of moving to GTM, so this was a nice extra nudge :)
It is unclear to me whether !61 is just a rebase or whether there are other differences to !38. I updated !38 by merging in 3.2.x and resolving conflicts.
Actually, there already is a method getRawText() that will do nicely.
eelkeblok → created an issue.
I updated MR !3 with a fix for the problem because I needed a solution for the time being. Leaving this at Needs work pending feedback on whether we need an additional place to set these dimensions, or whether we should instead find where the dimensions are applied in the cases that do work and adjust that. It seems we are now introducing some duplicate code that could have similar bugs. I tried to find out how it works in other situations, but couldn't figure it out in the limited time I had available.
Here's a file that can be used to reproduce the problem mentioned above. It works with the default 400-pixel-wide cropping thumbnail. I didn't dive into the exact mechanics of how the width ends up being larger than the available width, but it obviously is some sort of rounding error. Intuitively, I'd say a simple round should be enough to get it back to 400 instead of 400-point-something; I can't imagine the rounding error would be large enough to get it up to 401. The alternative would be to use floor, but that would work out to 399 (as long as we're sticking to the default cropping preview for the example) in a significant number of cases.
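A minimal illustration of the round-versus-floor trade-off (the numbers are made up, not taken from the actual reproduction case):

```javascript
// Scaling math can leave the crop width a hair above or below the
// 400px preview width due to floating point error.
const slightlyWide = 400 + 1e-9;   // "400-point-something"
const slightlyNarrow = 400 - 1e-9; // error in the other direction

// Math.round() recovers 400 in both directions; the error would have
// to reach a full 0.5px before it could round up to 401.
const rounded = [Math.round(slightlyWide), Math.round(slightlyNarrow)];

// Math.floor() only works for the too-wide case; the too-narrow case
// collapses to 399, which is the objection to using floor.
const floored = [Math.floor(slightlyWide), Math.floor(slightlyNarrow)];
```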
Also, I am wondering why we need extra lines of code to pass this information to the cropper; it would seem this is already happening somewhere else, so isn't it possible to leverage that in this particular case? The solution seems a bit "smelly", but maybe I am misunderstanding the problem.
I've found this patch can actually also run into a floating point issue; on certain image sizes, the dimensions of the crop area may work out to be slightly larger than the preview image used for the cropping, in which case Cropper will decide to show the default crop area. I'll see if I can work up a reproduction case and an updated patch.
When granting credit, please also include @idebr at iO since he actually found this issue.