Some of this might be needed for ✨ Add download zip of archives specific dataset id Active
This adds both a deploy hook that should generate all the "current" archives and a drush command that does the same.
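For reference, here is a minimal sketch of those two entry points, assuming the generation logic lives behind a hypothetical example_archiver.generator service. The hook, command, class, and service names are illustrative only, not the module's actual API.

```php
<?php
// example_archiver.deploy.php (hypothetical): a deploy hook run once via `drush deploy:hook`.
function example_archiver_deploy_generate_current_archives(): void {
  \Drupal::service('example_archiver.generator')->generateCurrentArchives();
}
```

```php
<?php
// src/Commands/ExampleArchiverCommands.php (hypothetical): an annotation-style Drush
// command that triggers the same generation on demand.
use Drush\Commands\DrushCommands;

class ExampleArchiverCommands extends DrushCommands {

  /**
   * Regenerates all "current" archives.
   *
   * @command example_archiver:generate-current
   */
  public function generateCurrent(): void {
    \Drupal::service('example_archiver.generator')->generateCurrentArchives();
    $this->logger()->success('Current archives regenerated.');
  }

}
```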
This is now complete. Files will move correctly from local to AWS S3 remote if the connections are established.
I agree that we can't have it both ways. A submodule either needs to be treated like another package or not be treated like a separate package at all. Currently it is being treated somewhere in between.
Current Situation
composer require 'drupal/projectA_submoduleG' results in the parent module being included using the parent module's composer.json. submoduleG's composer.json is ignored. So the parent module must declare all dependencies of both the parent and any optional submodules, or there will be mayhem. The result is inefficiency.
Solution One
Composer has no awareness of the submodules at all. composer require 'drupal/projectA_submoduleG' would result in: Could not find 'drupal/projectA_submoduleG', did you mean 'drupal/projectA'?
Solution Two
Treat submodules like distinct packages. composer require 'drupal/projectA_submoduleG' would result in the parent module being downloaded, using the composer.json within projectA_submoduleG and also the composer.json in projectA, because projectA is a dependency of projectA_submoduleG.
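To illustrate Solution Two, the submodule would carry its own composer.json that requires the parent. The package names and constraints below are placeholders, not real packages.

```json
{
    "name": "drupal/projectA_submoduleG",
    "description": "Illustrative placeholder only, not a real package.",
    "type": "drupal-module",
    "require": {
        "drupal/projectA": "^1.0",
        "drupal/some_optional_dependency": "^2.0"
    }
}
```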
MR 45 is going to need to be merged and, unfortunately, tagged and released in order to bump the version of Flysystem from v1 to v3. Then I can continue work on getting remote file processing working with the new version of Flysystem.
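For context on the v3 API that the bump enables, a minimal Flysystem 3 write to S3 looks roughly like this. The bucket, region, and paths are placeholders, and this is not the module's actual code.

```php
<?php
// Minimal Flysystem v3 + AWS S3 sketch; credentials, bucket, and paths are placeholders.
use Aws\S3\S3Client;
use League\Flysystem\AwsS3V3\AwsS3V3Adapter;
use League\Flysystem\Filesystem;

$client = new S3Client([
  'version' => 'latest',
  'region' => 'us-east-1',
]);

$filesystem = new Filesystem(new AwsS3V3Adapter($client, 'example-bucket'));

// Stream the local archive to the remote so large zips are not read fully into memory.
$stream = fopen('/tmp/archives/current/example.zip', 'rb');
$filesystem->writeStream('archives/current/example.zip', $stream);
if (is_resource($stream)) {
  fclose($stream);
}
```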
I released part of this in
https://www.drupal.org/project/dkan_dataset_archiver/releases/1.0.0-alpha16
However, there is still more needed before the file sync can be called working.
This is creating files on AWS. Unfortunately they are all empty. But they carry the right names and the right paths. So the solution is close.
I am merging this PR now so that I can include what I have in a release, because the previous release had a pretty significant bug.
Closing this as it has to be site specific and is not within the scope of this module.
That does help a bunch. I see in the screenshot that it is actually happening during the validation run by the cron job used to generate the audit report. So the admin/config location it is being attributed to is quite likely just noise, since cron is running it.
The other possibility is that there is a config form field that is an image field, and the validation is accidentally running into issues with it.
Instead of a salt generated at install and saved, which then has to be constantly retrieved, could something like the full server name and file path be used? That way it would be calculated instantly rather than looked up.
It would be unique per environment/instance.
Anybody trying to reproduce it would find it easier to just compare hashes of the full library file when comparing signatures.
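A rough sketch of both ideas, in plain PHP. The paths and the choice of sha256 are assumptions, not what the module currently does.

```php
<?php
// Sketch only: an instance-specific salt derived from environment values, so it can be
// recomputed on demand instead of being stored at install time and looked up later.
$salt = hash('sha256', gethostname() . DRUPAL_ROOT);

// And for verification, hashing the full library file gives a signature that anyone
// can reproduce and compare directly. The library path here is a placeholder.
$library_signature = hash_file('sha256', DRUPAL_ROOT . '/libraries/example/example.js');
```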
Ooh I like this. It reminds me of all the doors at the beginning of an episode of Get Smart.
Thank you for submitting this.
Thank you for reporting this cndexter.
I am not clear on what is going on. The validation should only run during a save or preview, so I do not understand why it would be doing anything at /admin/config. This makes me wonder whether it is being correctly attributed to that path.
Does it appear in the log after visiting or refreshing the page at /admin/config?
Or do you perhaps have any kind of form visible in an overlay or something at /admin/config?
I can make that function more defensive, which would make the error go away, but I think something bigger is at play here that I want to understand first.
There are two primary ways for data dictionaries to be connected in DKAN:
- The describedBy field can provide a URL to a JSON file that is a dictionary.
- They can be created by hand in the UI.
It is not clear to me how the UI-generated dictionaries get attached to datasets, so for now I will focus on the primary method: pulling them from the dataset metadata's describedBy field.
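As a sketch of that primary method, assuming the dataset metadata is already decoded into an array in the DCAT-US / Project Open Data shape (where a distribution may carry a describedBy URL), the lookup could look something like this. The function name is hypothetical.

```php
<?php
// Illustrative only: walk the dataset's distributions, take the first describedBy URL,
// and fetch the JSON data dictionary it points at.
function example_load_data_dictionary(array $dataset): ?array {
  foreach ($dataset['distribution'] ?? [] as $distribution) {
    if (!empty($distribution['describedBy'])) {
      $response = \Drupal::httpClient()->get($distribution['describedBy']);
      return json_decode((string) $response->getBody(), TRUE);
    }
  }
  return NULL;
}
```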
Chris said Webber is spelled with one 'b', but what he meant to say is that the credit for cwebber should go to cosmicdreams.
This has been completed. The "current" API endpoint now exists.
API access is now based on the permission "access dataset archive api".
What shows up within the API also depends on the view permissions of the access levels.
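A minimal sketch of that permission gate, assuming a custom access callback on the API routes. The class name is hypothetical; only the permission string comes from the work described here, and the per-archive access-level filtering would happen separately.

```php
<?php
// Hypothetical access callback: deny the API routes unless the account holds the
// module's permission; individual archives are still filtered by access-level
// view permissions elsewhere.
use Drupal\Core\Access\AccessResult;
use Drupal\Core\Session\AccountInterface;

class ArchiveApiAccess {

  public function access(AccountInterface $account): AccessResult {
    return AccessResult::allowedIfHasPermission($account, 'access dataset archive api');
  }

}
```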
Oops, just saw it was merged. Any future MRs will get tugboat preview environments.
No need to change anything. The tugboat env goes away after 10 days of no use. Closing the PR and reopening it a minute later should revive it.
Tugboat previews will start working again when this is done. Right now they are broken because of the docroot/web discrepancy in the patch.
This needs a setting, too, to enable current aggregation.
This is needed for the current functionality on PDC.
The solution is something like this (see the sketch after this list):
- As part of the theme aggregation, the theme name is kept and used to trigger a current aggregation.
- It will trigger its own cache tag invalidation.
- It will load all published datasets and add them to a zip.
- Of course, there will have to be a separate aggregation of private datasets.
- A manifest should also be included.
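Here is a minimal sketch of that flow, assuming datasets are nodes and using a hypothetical cache tag. The entity type, bundle, file layout, and names are all assumptions, not the module's actual implementation.

```php
<?php
// Sketch only: load published datasets, write them plus a manifest into one zip,
// and invalidate a cache tag so anything rendering the archive list rebuilds.
use Drupal\Core\Cache\Cache;

function example_generate_current_archive(string $destination): void {
  $storage = \Drupal::entityTypeManager()->getStorage('node');
  $ids = $storage->getQuery()
    ->condition('type', 'data')
    ->condition('status', 1)
    ->accessCheck(FALSE)
    ->execute();

  $zip = new \ZipArchive();
  $zip->open($destination, \ZipArchive::CREATE | \ZipArchive::OVERWRITE);

  $manifest = [];
  foreach ($storage->loadMultiple($ids) as $node) {
    // Hypothetical: serialize each published dataset's metadata into the zip.
    $json = json_encode(['title' => $node->label()], JSON_PRETTY_PRINT);
    $zip->addFromString($node->uuid() . '.json', $json);
    $manifest[] = $node->uuid();
  }
  $zip->addFromString('manifest.json', json_encode($manifest, JSON_PRETTY_PRINT));
  $zip->close();

  Cache::invalidateTags(['example_current_archive']);
}
```

Private datasets would go through the same steps but into a separate, access-controlled destination.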
This MR also included work for ✨ Archives marked private should be private Active
This was pretty involved to account for multiple options for handling private files. But I think I have it working.
Revised this after ✨ Remove settings for different retention periods Active was merged.
Archive timing is corrected to be in the final hour of the year. This also gets the annual archive process working.
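For illustration, a final-hour-of-the-year check could look like the following; the function name and the use of UTC are assumptions about how the annual run might be gated in cron.

```php
<?php
// Hypothetical helper: TRUE only during the last hour of December 31 (UTC).
function example_is_final_hour_of_year(): bool {
  $now = new \DateTimeImmutable('now', new \DateTimeZone('UTC'));
  $final_hour = new \DateTimeImmutable($now->format('Y') . '-12-31 23:00:00', new \DateTimeZone('UTC'));
  return $now >= $final_hour;
}
```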
This may be too specific to a given site. It may have to be a custom batch operation.
This was completed as part of the work on ✨ Add api endpoint for topic archives Active
This issue should only be closed after this module and all its dependencies have stable releases.