- Issue created by @khiminrm
- Status changed to Needs review
5 months ago 10:56am 9 July 2024 - Status changed to Closed: works as designed
5 months ago 8:01pm 9 July 2024 - 🇬🇧United Kingdom jonathanshaw Stroud, UK
I don't agree this is a problem.
The database backend here is deliberately set up so that the same job can be queued many times over its lifetime, but at any one time only one instance of that job can be in the queue.
I think this is correct, because it's likely the more common requirement. And anyway, it'd be a serious BC break to change it now.
The simple solution is to make a custom backend for your project extending the provided one.
We could in theory provide an API for this by allowing the job plugin annotation to define which states the backend should consider for duplicates. Whether the maintainer would support this added complexity, I'm unsure.
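The dedup rule described above can be sketched in a few lines. This is a hypothetical illustration, not the module's actual code: the state names, `has_duplicate`, and the in-memory job list are all assumptions made for the sketch.

```python
# Hypothetical sketch of the rule described above: a job with the same
# fingerprint may be enqueued many times over its lifetime, but only one
# copy may be in the queue at any given moment.

# States in which a job counts as "already in the queue" (assumed names).
ACTIVE_STATES = {"queued", "processing"}

def has_duplicate(jobs, fingerprint):
    """Return True if a job with this fingerprint is still active."""
    return any(
        job["fingerprint"] == fingerprint and job["state"] in ACTIVE_STATES
        for job in jobs
    )

def enqueue(jobs, fingerprint, payload):
    """Enqueue the job unless an active duplicate already exists."""
    if has_duplicate(jobs, fingerprint):
        return False  # rejected as a duplicate
    jobs.append({"fingerprint": fingerprint, "state": "queued", "payload": payload})
    return True
```

Under this rule a job whose state is "success" or "failure" no longer blocks requeueing, which matches the intent described above.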
- Status changed to Active
4 months ago 6:25am 10 July 2024 - 🇮🇱Israel jsacksick
Calling this a BC break is a stretch; this was released as part of an RC release, which isn't even visible on the project page.
I think the job type should be responsible for making the fingerprint unique. I agree with bojanz's take from https://www.drupal.org/project/advancedqueue/issues/2918866#comment-1567... (✨ Add support for unique jobs): a fingerprint should be unique, and it should be the job type's responsibility to make sure it's unique in case you'd want to queue the same job multiple times.
TBH, the unique jobs support is a feature I was expecting specifically for Commerce Recurring. If we can't even leverage it there without defining a custom backend, we may as well revert the whole feature, as it is useless in its current form...
- 🇮🇱Israel jsacksick
Coming back to this again, I've been discussing this with Bojan on Slack. I understand the original intent, and merging the MR would prevent a job from ever being requeued.
That said, I think the problem we have to fix, either in AQ or in Commerce Recurring, is ensuring we don't requeue the job if it didn't permanently fail... In other words, requeuing should only be allowed after the maximum number of retries (i.e. after a permanent failure).
- 🇷🇸Serbia bojanz
The original code has the right intent: we can't make the fingerprint unique across job statuses, because that would mean that once a job completes, it can never be requeued again. Same with failure: if we gave up on a job, it needs to be requeueable.
However, that doesn't cover failing-but-retrying jobs: if a job is retrying, we should not allow a duplicate to be queued. I think that's what we missed.
- 🇮🇱Israel jsacksick
@khiminrm: I see that the recurring jobs aren't configured to "retry"... So why is it a problem to have duplicate jobs? Once the job has failed, its state is "failure" and it shouldn't be reprocessed again, unless it's manually requeued (which shouldn't be the case)?
I thought what you were complaining about was the fact that the job would be retried, and we were still somehow queuing duplicate jobs... I actually agree with @jonathanshaw, we should probably close this...
- 🇬🇧United Kingdom jonathanshaw Stroud, UK
> if a job is retrying, we should not allow a duplicate to be queued.
Agreed. But the database backend sets the state of a failed job back to 'queued' if it should be retried, so duplicates will not be created. I'm not sure yet that we've missed anything.
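The retry behaviour described here can be illustrated with a small sketch. Again this is hypothetical code with assumed state names, not the module's implementation: the point is that a retryable failure puts the job back into the "queued" state, so a state-based duplicate check that treats "queued" as active keeps blocking duplicates for the whole retry window.

```python
# Hypothetical sketch: a retryable failure resets the job to "queued",
# so a duplicate check over active states continues to cover the job
# while retries are pending. Only a permanent failure frees the
# fingerprint for requeueing.

ACTIVE_STATES = {"queued", "processing"}  # assumed state names

def handle_failure(job, max_retries):
    """Requeue a failed job until retries are exhausted, then mark it failed."""
    if job["retries"] < max_retries:
        job["retries"] += 1
        job["state"] = "queued"   # retry: still blocks duplicates
    else:
        job["state"] = "failure"  # permanent failure: requeueable again

def is_blocked(job, fingerprint):
    """Would this job block a new job with the same fingerprint?"""
    return job["fingerprint"] == fingerprint and job["state"] in ACTIVE_STATES
```

On this reading, duplicates can only appear if something other than the backend's own retry path requeues the job before it has permanently failed.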
- Status changed to Closed: works as designed
4 months ago 3:48pm 11 July 2024