- Netherlands megachriz
To reduce the chance of regressions, I've decided to narrow the scope of FetcherResult::checkFile() throws \RuntimeException, which is never caught (Needs review). The fix in that issue no longer aborts the import process on cron runs when fetching a source fails with a 404. Instead, Feeds will retry over and over again. That's not great, but I don't want imports that experience a temporary hiccup to fail; in such cases, I think it is good that Feeds retries. However, Feeds should not keep retrying forever, as it does now. So this issue is still valid.
The issue does need an issue summary update, because some side effects reported in it are no longer true:
- When a feed gets unlocked, any queue items related to that feed now get removed first, so multiple imports for the same feed should no longer run simultaneously.
- Feeds no longer unlocks a feed after 12 hours have passed. It only unlocks a feed if it thinks the import is no longer running (by checking the number of items on the queue). If it detects that an import could still be running, the lock gets extended instead, and a warning about the extension is logged.
Again, what is still an issue is that Feeds would keep trying to fetch a resource that results in a 404 forever, once on every cron run, and we should somehow limit the number of retries. I attempted to do that in Add DelayedRequeueException feature to feeds queue - D9.1 (Active), but that became quite complicated.
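For illustration only, here is a minimal sketch of what such a retry limit could look like, counting consecutive fetch failures per feed in Drupal's State API. The function names, the state key, and the default limit of 5 are assumptions made up for this example; this is not how Feeds currently behaves, nor a proposed patch.

```php
<?php

/**
 * Hypothetical sketch: decide whether a feed should be fetched again.
 *
 * The function name, state key and default limit are made up for this
 * example; Feeds itself does not implement such a counter.
 */
function mymodule_should_retry_fetch(string $feed_id, int $max_retries = 5): bool {
  $state = \Drupal::state();
  $failures = (int) $state->get('mymodule.fetch_failures.' . $feed_id, 0);

  if ($failures >= $max_retries) {
    // Stop retrying on every cron run once the limit is reached.
    \Drupal::logger('mymodule')->warning('Skipping fetch for feed @id after @count failed attempts.', [
      '@id' => $feed_id,
      '@count' => $failures,
    ]);
    return FALSE;
  }
  return TRUE;
}

/**
 * Record a failed fetch (for example a 404) so retries can be limited.
 */
function mymodule_record_fetch_failure(string $feed_id): void {
  $state = \Drupal::state();
  $key = 'mymodule.fetch_failures.' . $feed_id;
  $state->set($key, (int) $state->get($key, 0) + 1);
}
```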
- Germany demonde
I have a feed that, after 2 months of running without problems, started showing
Lock feeds_feed:2 got extended.
and stays in this lock forever. For my project this is extremely problematic, because it is not possible to check this manually. If I get it right, this is due to this change: https://www.drupal.org/node/3363755
If there are good reasons to extend a lock automatically, but also scenarios where this is very problematic, maybe a good solution would be to make this an option in the Feeds settings:
Automatically extend locks.
for each feed in its settings, or something like that.
- United States chrisgross
This just happened to me for the first time, as well. There definitely needs to be a way to configure whether locks stay in place or not, and perhaps configure the expiration time. I just had a mission-critical feed stay locked for a week before anyone noticed (that might sound contradictory, but it's not).
- Netherlands megachriz
@chrisgross
Agreed that this needs some work. There is currently only a message in the logs about the lock getting extended. Perhaps there indeed needs to be a maximum lock time in the form of a config option. Do note that the lock only gets extended if Feeds thinks that the import is still active, so perhaps that detection mechanism could be fine-tuned too. The lock gets extended when:
- There are still tasks for the feed in question on the queue.
- There was progress being made in the last hour.
So I think that the underlying issue is actually that there are tasks on the queue that get stuck. There could be different reasons for this, as described in Processing of failed queues never finish, feeds import gets stuck (Active). Fine-tuning the detection mechanism would then come down to: is there a way for Feeds to know whether the tasks on the queue got stuck?
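To make those two conditions concrete, here is a minimal, illustrative sketch of such an "is the import still active?" check. The queue name, the state key for the progress timestamp, and the one-hour window are assumptions for this example; the actual logic lives in Feeds' lock handling and may differ in detail.

```php
<?php

/**
 * Hypothetical sketch of the check that decides between unlocking a feed
 * and extending its lock. Queue name and state key are assumptions.
 */
function mymodule_import_appears_active(int $feed_id): bool {
  // Condition 1: there are still tasks for this feed on the queue.
  // The queue name used here is illustrative only.
  $queue = \Drupal::queue('feeds_feed_refresh:' . $feed_id);
  $has_pending_tasks = $queue->numberOfItems() > 0;

  // Condition 2: there was progress in the last hour. The timestamp is
  // assumed to be updated elsewhere whenever a queue task completes.
  $last_progress = (int) \Drupal::state()->get('mymodule.last_progress.' . $feed_id, 0);
  $recent_progress = (\Drupal::time()->getRequestTime() - $last_progress) < 3600;

  // Only treat the import as active (and extend the lock) if both hold.
  return $has_pending_tasks && $recent_progress;
}
```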
A config option for a maximum lock time would nevertheless be useful. I just don't know what it should default to. If it defaults to 'Never', then you may only think about setting the option once you stumble upon this issue. If it defaults to '12 hours', that could be a problem for people running very large, long imports: after 12 hours they would find that their import suddenly aborted. I think Feeds cannot know at the start whether it has been fed a source that is going to take a long time, because that depends not only on the size of the source, but also on how often cron runs.
- Germany demonde
I just don't know what it should default to.
I think it should default to whatever has the smallest effect on users. So the best would probably be to use a big time frame like 12 hours. Few users will have a feed running longer than that, and I would assume that those users read the settings description carefully and adjust the setting to their needs.
Besides, I have the problem that I cannot tell from the log messages why my feed hangs. Probably the remote server had downtime after the feeds run was initialized, but this is just a guess.
- Netherlands megachriz
I think it should default to whatever has the smallest effect on users. So the best would probably be to use a big time frame like 12 hours. Few users will have a feed running longer than that, and I would assume that those users read the settings description carefully and adjust the setting to their needs.
Alright, that sounds reasonable. The issue reported in the issue summary looks to be more about Feeds never stopping its fetch retries when it encounters a 404. While that is a possible cause of a feed getting stuck, with the result that locks keep getting extended, maybe we should handle implementing lock settings in a separate issue?
- Netherlands megachriz
I created a new issue: Add a setting for limiting the amount of times a lock may be extended (Active).
If someone wants to take a first step implementing a bit of it, that would be appreciated.
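As a rough starting point only, a capped lock extension could look something like the sketch below. The function name, state key, log message, and default cap are all assumptions for illustration; the setting proposed in that issue may end up with a different design.

```php
<?php

/**
 * Hypothetical sketch: extend a feed's lock only a limited number of times.
 *
 * Names and the default cap are illustrative; see the linked issue for the
 * actual proposal.
 */
function mymodule_maybe_extend_lock(int $feed_id, int $max_extensions = 24): bool {
  $state = \Drupal::state();
  $key = 'mymodule.lock_extensions.' . $feed_id;
  $count = (int) $state->get($key, 0);

  if ($count >= $max_extensions) {
    // Give up: do not extend the lock again, so the feed can be unlocked.
    \Drupal::logger('mymodule')->error('Not extending lock for feed @id after @count extensions.', [
      '@id' => $feed_id,
      '@count' => $count,
    ]);
    $state->delete($key);
    return FALSE;
  }

  // Otherwise count this extension and allow it.
  $state->set($key, $count + 1);
  return TRUE;
}
```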