Feed fetch errors don't seem to be handled gracefully

Created on 27 September 2022, about 2 years ago
Updated 23 May 2024, 6 months ago

Problem/Motivation

If you have a feed configured to fetch from a remote URL and that URL starts returning a 404, for example, then over time Feeds ends up requesting that URL many times on every cron run.

Steps to reproduce

For example:

  • On day 1, you set up a feed to: https://example.com/feed
  • It imports fine for a few days.
  • On day 4, https://example.com/feed starts returning a 404. Feeds enqueues a feeds_feed_refresh: task for the feed; processing this queue item fails, but the item is kept in the queue.
  • 12 hours later, feeds_cron runs, spots that the feed is still enqueued, and resets the feed.
  • On the next cron run, Feeds enqueues a new feeds_feed_refresh: task for the feed.
  • The default queue implementation does garbage collection (\Drupal\Core\Queue\DatabaseQueue::garbageCollection), which makes the stale first queue item for this feed fetch claimable again.
  • There are now two feeds_feed_refresh: items that will each make an outbound HTTP request per cron run (a quick way to verify this build-up is sketched after this list).
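
A quick way to confirm this build-up of duplicate items is to count the queued refresh tasks per queue name, for example via drush php:eval. This is only a rough sketch, assuming the default database queue backend and the feeds_feed_refresh:FEED_TYPE queue naming used above:

  // Count queued refresh tasks per queue name to spot duplicates piling up.
  $names = \Drupal::database()->select('queue', 'q')
    ->fields('q', ['name'])
    ->condition('name', 'feeds_feed_refresh:%', 'LIKE')
    ->execute()
    ->fetchCol();
  // Prints how many items currently sit in each feeds_feed_refresh queue.
  print_r(array_count_values($names));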

Now imagine that situation persisting for a couple of years: each reset cycle leaves one more stale item behind, so at roughly two cycles per day that adds up, with the right settings, to well over a thousand fetches of the 404-ing HTTP feed on each cron run. Eeek.

Proposed resolution

It seems to me that if Feeds assumes in feeds_cron that the feed update has failed after 12 hours, then it should also take care of removing any stale queue items. Alternatively, it should not leave the fetcher error unhandled: it could catch it and essentially return 'success' for the purposes of the queue (a rough sketch of that option follows). But it's your module, so over to you :)
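
For illustration only, here is a minimal sketch of the second option, using a generic cron queue worker rather than the actual Feeds classes (the worker ID, class name and data keys are made up for the example): the fetcher error is caught and logged, and processItem() returns normally, so the queue counts the item as processed and deletes it instead of leaving it behind.

  namespace Drupal\example\Plugin\QueueWorker;

  use Drupal\Core\Queue\QueueWorkerBase;
  use GuzzleHttp\Exception\RequestException;

  /**
   * Hypothetical worker, for illustration only.
   *
   * @QueueWorker(
   *   id = "example_feed_refresh",
   *   title = @Translation("Example feed refresh"),
   *   cron = {"time" = 30}
   * )
   */
  class ExampleFeedRefresh extends QueueWorkerBase {

    public function processItem($data) {
      try {
        $response = \Drupal::httpClient()->get($data['url']);
      }
      catch (RequestException $e) {
        // Treat the fetch error as handled: log it and return normally, so
        // the queue deletes the item instead of leaving it behind to retry.
        \Drupal::logger('example')->warning('Fetching @url failed: @message', [
          '@url' => $data['url'],
          '@message' => $e->getMessage(),
        ]);
        return;
      }
      // Otherwise hand $response over to parsing/processing as usual.
    }

  }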

Remaining tasks

Decide what to do!

User interface changes

None.

API changes

Unknown at this point.

Data model changes

None.

πŸ› Bug report
Status

Active

Version

3.0

Component

Code

Created by

🇬🇧 United Kingdom steven jones

  • Needs issue summary update


Comments & Activities

  • 🇳🇱 Netherlands megachriz

    To make the chance of regressions smaller, I've decided to reduce the scope of 🐛 FetcherResult::checkFile() throws \RuntimeException, which is never caught (Needs review). The fix in that issue no longer aborts the import process on cron runs if fetching a source fails with a 404. Instead, Feeds will retry over and over again. That's not great, but I don't want imports that experience a temporary hiccup to fail; in such cases I think it is good that Feeds retries. However, Feeds should not attempt to retry forever as it does now, so this issue is still valid.

    The issue does need an issue summary update, because some side effects reported in it are no longer true:

    • When a feed gets unlocked, any queue items related to that feed now get removed first. So multiple imports for the same feed should no longer run simultaneously.
    • Feeds no longer unlocks a feed after 12 hours have passed. It unlocks a feed only if Feeds thinks the import is no longer running (by checking the number of items on the queue). If it detects that an import could still be running, the lock gets extended instead, and a warning about the lock getting extended is logged.

    Again, what is still an issue is that Feeds will keep trying to fetch a resource that results in a 404 forever, once on every cron run, and we should somehow limit the number of retries. An attempt was made to do that in 📌 Add DelayedRequeueException feature to feeds queue - D9.1 (Active), but that became quite complicated. A rough sketch of the delay idea follows below.
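
    For illustration of the delay idea only (not the approach from that issue): since Drupal 9.1, a queue worker can throw \Drupal\Core\Queue\DelayedRequeueException so that cron holds the item back for a while instead of retrying it on every run, provided the queue backend supports delaying (the default database queue does). Using the same hypothetical worker as sketched under "Proposed resolution" above, with an assumed one-hour delay:

      use Drupal\Core\Queue\DelayedRequeueException;
      use GuzzleHttp\Exception\ClientException;

      public function processItem($data) {
        try {
          $response = \Drupal::httpClient()->get($data['url']);
        }
        catch (ClientException $e) {
          // Space the retries out: ask cron to hold this item back for an
          // hour instead of retrying it on the very next cron run. Capping
          // the total number of retries would additionally require persisting
          // a failure count somewhere, which is where it gets complicated.
          throw new DelayedRequeueException(3600);
        }
        // Otherwise continue with parsing/processing as usual.
      }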

  • 🇩🇪 Germany demonde

    I have a feed that, after 2 months of running without problems, started reporting "Lock feeds_feed:2 got extended." and stays in this lock forever. For my project this is extremely problematic, because it is not possible to check this manually.

    If I get it right, this is due to this change: https://www.drupal.org/node/3363755

    If there are good reasons to extend a lock automatically, but also scenarios where that is very problematic, maybe a good solution is to make this an option in the Feeds settings.

    Something like an "Automatically extend locks" setting for each feed, or similar.

  • 🇺🇸 United States chrisgross

    This just happened to me for the first time, as well. There definitely needs to be a way to configure whether locks stay in place or not, and perhaps configure the expiration time. I just had a mission-critical feed stay locked for a week before anyone noticed (that might sound contradictory, but it's not).

  • 🇳🇱 Netherlands megachriz

    @chrisgross
    Agreed that this needs some work. There is now only a message in the logs about the lock getting extended. Perhaps there indeed needs to be a maximum lock time in the form of a config option.

    Do note that the lock only gets extended if Feeds thinks that the import is still active. So perhaps that detection mechanism could be fine-tuned too. The lock gets extended when (see the sketch after this list):

    • There are still tasks for the feed in question on the queue.
    • There was progress being made in the last hour.
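
    As a rough illustration only (not the actual Feeds code; the queue name pattern, the $last_progress variable and the way the two signals are combined are assumptions), the check described above could look something like this:

      // Decide whether to extend the lock or treat the import as done/stuck.
      $queue = \Drupal::queue('feeds_feed_refresh:' . $feed->bundle());
      $has_pending_tasks = $queue->numberOfItems() > 0;
      $recent_progress = (\Drupal::time()->getRequestTime() - $last_progress) < 3600;

      if ($has_pending_tasks && $recent_progress) {
        // Looks like the import is still running: extend the lock and log a warning.
      }
      else {
        // Assume the import is no longer running and unlock the feed.
      }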

    So I think that the underlying issue is actually that there are tasks on the queue that are getting stuck. There could be different reasons for this, as said in 🐛 Processing of failed queues never finish, feeds import gets stuck (Active). Fine-tuning the detection mechanism would then mean: is there a way for Feeds to know whether the tasks on the queue got stuck?

    But a config option for a maximum lock time would nevertheless be useful. I just don't know what it should default to. If it is set to 'Never' by default, then you may only think about setting the option when you stumble upon this issue. If it is set to '12 hours', then that could be a problem for people running very large, long imports, who would only find out after 12 hours that their import suddenly aborted. I think Feeds cannot know at the start whether it is being fed a source that is going to take a long time, because that depends not only on the size of the source, but also on how often cron runs.

  • 🇩🇪 Germany demonde

    I just don't know what it should default to.

    I think it should default to whatever has the smallest effect on users. So the best would probably be to use a big time frame like 12 hours. There will be few users that have a feed running longer than that, and I would assume that those users read the settings description carefully and change the settings to their needs.

    Besides, I have the problem that I cannot tell from the log messages why my feed hangs. Probably the remote server has downtime after the Feeds run is initialized, but this is just a guess.

  • 🇳🇱 Netherlands megachriz

    I think it should default to whatever has the smallest effect on users. So the best would probably be to use a big time frame like 12 hours. There will be few users that have a feed running longer than that, and I would assume that those users read the settings description carefully and change the settings to their needs.

    Alright, that sounds reasonable. The issue reported in the issue summary looks to be more about the fact that Feeds never stops retrying the fetch when it encounters a 404. While that is a possible cause for a feed getting stuck (and, as a result, for locks getting extended over and over), maybe we should handle implementing lock settings in a separate issue?

  • 🇳🇱 Netherlands megachriz

    I created ✨ Add a setting for limiting the amount of times a lock may be extended (Active).
    If someone wants to take a first step implementing a bit of it, that would be appreciated. 🙂
