Fastly purge gets initiated without api_key value, causes 404 errors

Created on 12 September 2017, about 7 years ago
Updated 25 April 2024, 7 months ago

Our local instances have no api_key or service_id, but running drush entup when a new entity type needs to be installed triggers a series of 404 errors. (On the production site the values are provided via settings.php.)

Is it possible to disable these purges unless a valid api_key and service_id are provided? (Or is there a clean way to disable them on local environments?)
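
For context, this is roughly how production gets its values via settings.php; a minimal sketch, where the FASTLY_* environment variable names are our own convention, not anything the module defines:

```php
<?php
// settings.php (production only): override the fastly.settings config
// object. FASTLY_API_KEY / FASTLY_SERVICE_ID are illustrative names.
$config['fastly.settings']['api_key'] = getenv('FASTLY_API_KEY');
$config['fastly.settings']['service_id'] = getenv('FASTLY_SERVICE_ID');
```

Local and staging environments simply don't set these, which is why the config export above shows empty strings there.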

UPDATE: It's not just drush entup -- Drupal's error log also shows a large number of 'critical' Fastly 404 purge errors.

There are a series of errors, taking the form:

Client error: `POST https://api.fastly.com/service//purge` resulted in a `404 Not Found`    [error]
response:
<h1>Not Found</h1>

Unable to purge key(s) xxx from Fastly. Purge Method: soft.    [error]

The entity is successfully created, and subsequent runs of drush entup complete without any errors, but our build process starts from a database snapshot, so it will keep hitting the error until the underlying change reaches production. (Which is tricky, since our staging build fails because of this error...)

I think this might be changed behaviour since 8.x-3.3 or 3.4 -- we've certainly created new entity types without seeing this error in the past.
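
For illustration, the kind of guard I'm asking about would look something like this. This is hypothetical code, not the module's actual implementation:

```php
<?php
// Hypothetical guard (not the module's current code): skip the purge
// entirely when no usable credentials are configured.
$config = \Drupal::config('fastly.settings');
if (empty($config->get('api_key')) || empty($config->get('service_id'))) {
  // With an empty service_id the purge URL becomes
  // https://api.fastly.com/service//purge, which 404s, so bail out.
  return;
}
```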

Notes:

The required entity update is from AMP Metadata:

The following updates are pending:

amp_metadata entity type : 
  The AMP Metadata entity type needs to be installed. Do you wish to run all pending updates? (y/n): y

Local settings:

$ drush cget fastly.settings --include-overridden
api_key: ''
service_id: ''
purge_method: soft
stale_while_revalidate_value: 604800
stale_if_error: true
stale_if_error_value: 604800

Production settings:

$ drush @master cget fastly.settings --include-overridden
api_key: 0ecXXXXXXXXXXXX6ab
service_id: 6FnXXXXXXXXXXXXhpU
purge_method: soft
stale_while_revalidate_value: 604800
stale_if_error: true
stale_if_error_value: 604800
πŸ› Bug report
Status

RTBC

Version

4.0

Component

Code

Created by

🇬🇧 United Kingdom jrsouth


Comments & Activities

Not all content is available!

It's likely this issue predates Contrib.social: some issue and comment data are missing.

  • 🇺🇸 United States bvoynick

    A use case where I'm seeing this come up is importing a snapshot of a production environment elsewhere. I just had a content restoration process on a pre-production environment killed during its drush deploy, because its runtime exceeded 15 minutes due to the large number of POSTs it was attempting to chew through. (The API key for this site is provided by environment variable, in Production only, and not stored in the Drupal database. Hence, even though I am using a production database with the processors enabled, there is nevertheless no API key in the context of restoring that database in this other environment.)

    The site is set up so that `drush cim` / `drush deploy` will indeed uninstall all processors, and in fact all Purge & Fastly suite modules, when in a non-Production environment. But module installation state is in the database. When importing a Prod database elsewhere, the modules will have to be actively uninstalled during configuration import. Simply uninstalling these modules is triggering these POSTs, and resulting 404s, due to the lack of service ID and API key.

  • 🇬🇧 United Kingdom jrsouth

    @bvoynick It's not the cleanest, but we've been using the below to manually invalidate the purge credentials in non-production deployments (mostly local environments).

    drush sset fastly.state.valid_purge_credentials FALSE

    May be of use in your case, perhaps run before the drush cim?

    (...Might be of no use at all!)
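
    A rough sketch of how this sits in our non-production deploy script; the surrounding steps are just our setup, not a requirement:

    ```shell
    #!/bin/sh
    set -e
    # Mark the Fastly purge credentials as invalid so purge POSTs are skipped.
    drush sset fastly.state.valid_purge_credentials FALSE
    # Config import (and any uninstalls it triggers) now runs without 404ing purges.
    drush cim -y
    ```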

  • 🇨🇭 Switzerland Lukas von Blarer

    Could we get this committed?

  • Status changed to RTBC 7 months ago
  • 🇩🇰 Denmark arnested

    This is a good and useful improvement.

    I have tested the code in our environments and reviewed the patch. All good!
