Cron doesn't process queues

Created on 24 March 2023, over 1 year ago

Problem/Motivation

The hook_cron() in this module doesn't actually "process" anything. All it does is continually enqueue items — the same items, over and over. So by the time they are processed with Drush, Drush is working through an ever-growing pile of duplicates, which is completely pointless. The fix is to never let an item remain in the queue between cron runs, for any reason. A queue arguably isn't even the right mechanism here, but that's a separate discussion (concurrency, or simply a loop, would do). To fix this, simply empty the queue immediately after it has been filled.

As a test, I was able to empty the queue using the code below. It is a modified hook_cron() from this module, shown here only as an example for this post; in our setup the same logic actually lives in a custom module (which, in my opinion, shouldn't have been necessary in the first place):

/**
 * Implements hook_cron().
 */
function warmer_cron() {
  HookImplementations::enqueueWarmers();

  // Process the queue items immediately so nothing lingers until the next run.
  $queue = \Drupal::service('queue')->get('warmer');
  $queue_worker = \Drupal::service('plugin.manager.queue_worker')->createInstance('warmer');
  while ($item = $queue->claimItem()) {
    $queue_worker->processItem($item->data);
    $queue->deleteItem($item);
  }
}

Steps to reproduce

Just install the module and force-run the "warmer" cron job.
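To observe the duplication, you can compare the queue depth across a cron run using core's queue and cron services. This is a rough sketch, not part of the module; it assumes a bootstrapped Drupal site (e.g. run via `drush php:script`):

```php
<?php

// Sketch: inspect the 'warmer' queue depth before and after a cron run.
// Assumes a fully bootstrapped Drupal site.
$queue = \Drupal::service('queue')->get('warmer');
$before = $queue->numberOfItems();

// Trigger cron, which calls warmer_cron() and enqueues the warmers again.
\Drupal::service('cron')->run();

$after = $queue->numberOfItems();
// If previously queued items are never processed, the depth keeps growing
// with every cron run instead of returning to zero.
printf("Queue depth: %d before cron, %d after.\n", $before, $after);
```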

Proposed resolution

Add an option on the settings page that tells hook_cron() to immediately process the items in the queue.

Remaining tasks

User interface changes

API changes

Data model changes

✨ Feature request
Status

Postponed: needs info

Version

2.0

Component

Code

Created by

πŸ‡ΊπŸ‡ΈUnited States crystaldawn


Comments & Activities

  • Issue created by @crystaldawn
  • Status changed to Postponed: needs info over 1 year ago
  • e0ipso Can Picafort

    Perhaps something else on your site is interacting with the queues. I would expect an issue as severe as the one described here to have raised much more attention among the other 14k+ sites.

    The queue worker plugin is configured to be run by cron (separately from the hook_cron), which does not seem to be happening in your case.
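    For context, the mechanism being described here is Drupal core's queue-worker cron integration: a @QueueWorker plugin that declares a `cron` key in its annotation is drained automatically by core's cron for up to the declared number of seconds per run. Below is a minimal illustrative sketch of that pattern — the class name, namespace, and time limit are assumptions for illustration, not the module's actual code:

```php
<?php

namespace Drupal\warmer\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Illustrative queue worker. Because the annotation declares a "cron" key,
 * Drupal core's cron will claim and process items from this queue for at
 * most the given number of seconds on every cron run.
 *
 * @QueueWorker(
 *   id = "warmer",
 *   title = @Translation("Warmer"),
 *   cron = {"time" = 30}
 * )
 */
class WarmerQueueWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // Warm the items referenced by $data (implementation depends on the module).
  }

}
```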

  • πŸ‡ΊπŸ‡ΈUnited States crystaldawn

    Erm, where in the README does it mention anything about a "worker" plugin? It doesn't, and I don't even know what you're referring to if it isn't the Drush command provided by the module. The only things the README ever mentions are the enqueue cron job that comes with the module itself, and using Drush to actually run the queues by putting Drush on a server cron service. It makes no mention of a cron job that runs a "worker" plugin, which would be what keeps the queues from filling up with duplicates.

    This sounds more like a documentation bug, because I absolutely followed the README exactly as written. Your comment suggests you don't fully understand the problem I'm describing, which is duplication. Whether it's a bug or a feature request is debatable, so I filed it as a feature request. The worker you're referring to must be the Drush command, which isn't a plugin but a Drush command, since you say it runs separately from hook_cron. The process you describe, where enqueueing and running the queue happen in separate processes, is exactly the queue duplication waiting to bite you, and that's no good. We also don't want to use Drush to run the queue processing at all: we want the cron job itself to enqueue and then immediately start working the queue, to reduce the chance of duplicates (assuming processing finishes before the next enqueue happens via cron). That's why we had to add our own cron job running the code shown above, which could have been avoided by adding the option I mentioned in the original post (the feature request).

    So I still see the feature request as valid, unless you are referring to a feature that immediately clears the queue created by the cron job (hook_cron), which is where the duplication is coming from. We can see it in our network and server load stats, and it is significant, since we run 1200+ sites (Acquia Site Factory multisite). If this were a single site on its own little server somewhere, it probably wouldn't be a problem, but duplicate entries absolutely must be avoided at this scale. I have also seen duplication problems mentioned in other issues in this queue, so this isn't a new problem; it seems to be known.
