EntityViewsData is incredibly crusty and hasn't had much change in years, and a cursory search shows that this code is still here:
if ($revisionable) {
  $revision_table = $this->entityType->getRevisionTable() ?: $this->entityType->id() . '_revision';
}
It means that tests needing additional alterations will have to fiddle with the trait, aliasing its alter() method.
I think it's a simpler and clearer pattern if the trait supplies a non-interface method alterTimeService(), and all services implement alter() to call it.
The alter() should not be in a trait. Tests may need to perform other alterations to the service container.
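A minimal sketch of that pattern, where every class, trait, and service name is made up for illustration and a plain array stands in for the real service container:

```php
<?php

// Hypothetical trait: supplies a non-interface helper only.
trait MockTimeTrait {

  protected function alterTimeService(array &$container): void {
    $container['datetime.time'] = 'mocked_time_service';
  }

}

// Hypothetical service provider: implements alter() itself, delegating
// to the trait, so a test is free to make further container changes.
class TimeTestServiceProvider {

  use MockTimeTrait;

  public function alter(array &$container): void {
    $this->alterTimeService($container);
    // A test-specific provider can perform other alterations here.
  }

}

$container = [];
$provider = new TimeTestServiceProvider();
$provider->alter($container);
assert($container['datetime.time'] === 'mocked_time_service');
echo "alter() delegated to alterTimeService()\n";
```

Because alter() lives on each service rather than in the trait, no aliasing gymnastics are needed when a test wants to alter more than the time service.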
> However, it seems like it would do a bad job at handling multiple logged errors, because we could only rethrow one.
I think that's a good thing! The first one fails the test, you fix it, then you get another one, and you fix that one. This is the same way that test failures work -- you only see the first one.
I'm concerned there could be other places where migrate messages are misused, but let's fix this as a novice issue.
Found it:
// A shared table contains rows for entities where the field is empty
// (since other fields stored in the same table might not be empty), thus
// the only columns that can be 'not null' are those for required
// properties of required fields. For now, we only hardcode 'not null' to a
// few "entity keys", in order to keep their indexes optimized.
// @todo Fix this in https://www.drupal.org/node/2841291.
$not_null_keys = $this->entityType->getKeys();
// Label and the 'revision_translation_affected' fields are not necessarily
// required.
unset($not_null_keys['label'], $not_null_keys['revision_translation_affected']);
// Because entity ID and revision ID are both serial fields in the base and
// revision table respectively, the revision ID is not known yet, when
// inserting data into the base table. Instead the revision ID in the base
// table is updated after the data has been inserted into the revision
// table. For this reason the revision ID field cannot be marked as NOT
// NULL.
if ($table_name == $base_table) {
  unset($not_null_keys['revision']);
}
Urgghh. Spooky side-effects FTL :(
We probably need to add 'changed' to the keys that are unset here.
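In other words, something like this (a sketch only, with a stand-in array for the entity keys):

```php
<?php

// Stand-in for $this->entityType->getKeys() on an entity type that
// declares a 'changed' key; the actual keys vary per entity type.
$not_null_keys = [
  'id' => 'tid',
  'revision' => 'revision_id',
  'label' => 'name',
  'changed' => 'changed',
];

// The existing unsets, plus 'changed', so the column stays nullable.
unset($not_null_keys['label'], $not_null_keys['revision_translation_affected'], $not_null_keys['changed']);

assert(!isset($not_null_keys['changed']));
echo "changed key no longer forced to NOT NULL\n";
```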
And this is a spooky side-effect of entity keys which AFAIK isn't documented at all!
Totally tripped us up on 🐛 entities implementing EntityChangedInterface should have a 'changed' entity key.
Ok I can confirm that with the MR, the 'changed' field on taxonomy terms doesn't get an ALLOW NULL, and without the MR, it does.
I have NO idea how!!!
I can't reproduce the test failure locally -- this gets in the way: #3552984: MigrateTestBase::display() causes a TypeError if a test fails a migration.
> Unless our custom endpoint includes the JSON:API data, in which case we're no longer using vanilla JSON:API at all!
And if we do that, then I would seriously consider moving away from bundle channels, as having to make a channel for every bundle is a pain. Bundle-based endpoints are part of JSON:API's opinionated design, which works for using JSON:API in general, but isn't really useful or helpful for Entity Share.
Wow, that's some impressive test failures!
AFAIK nothing is looking for the 'changed' entity key, since, again AFAIK, we're inventing it in this MR. And yet, things fall over dramatically:
> ├ SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'changed' cannot be null: INSERT INTO "test36480953taxonomy_term_field_data" ("tid", "revision_id", "vid", "langcode", "status", "name", "description__value", "description__format", "weight", "changed", "default_langcode", "revision_translation_affected") VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6, :db_insert_placeholder_7, :db_insert_placeholder_8, :db_insert_placeholder_9, :db_insert_placeholder_10, :db_insert_placeholder_11); Array
@mradcliffe I don't understand what the IS is missing.
> We should look at other core implementations of the method to find other exceptions that they might throw, and write documentation that covers those reasons.
What else needs to be said?
The text at the top of https://www.drupal.org/docs/develop/standards/obsolete-php/obsolete-api-...
> The standards have moved to GitLab pages, Drupal coding standards.
links to the wrong place: https://project.pages.drupalcode.org/coding_standards/composer/package-n...
> This seems as desired? I assume that the various other tests edited in the MR were edited to fix them after they were broken by the MR?
Yup. The tests have been updated to work with the new factory service, so running tests without the new code will fail.
I might have a better approach for #108.
It's possible to store an exception and throw it later:
<?php

function sut() {
  throw new \Exception();
}

class Catcher {

  protected $e;

  public function catching() {
    try {
      sut();
    }
    catch (\Exception $e) {
      // Store the exception instead of letting it propagate.
      $this->e = $e;
    }
  }

  public function check() {
    // Rethrow the stored exception later, when asked.
    throw $this->e;
  }

}

$c = new Catcher();
$c->catching();
$c->check();
This fails on the thrown exception in check(), but the error reported makes it look like it happened in catching().
This means that our logger could throw an exception at the point it wants to say things have gone wrong, but catch and store it, and then when asked by the test whether everything is OK, throw that exception.
You'd then get the backtrace of what actually caused the problem, which otherwise could have been swallowed up by something like the batch system or cron.
Do we really need tests for this? It's going to need an entire test module with bad config just for this test.
Nitpicking my own decisions: should it be Mock singular rather than Mocks plural in the namespace?
Please don't use AI to summarise the issue. It just adds a huge amount of noise and nothing useful.
There's no need for a test for documentation.
Are you using an AI to write your comments? Your comment reads quite a bit like what an AI bot would say. I'd rather not interact with AI, and I advise against using it to make the patch.
> Also for JSON:API Extras field enhancer plugins, no more need to alter JSON:API output to provide data for the client to search for referenced blocks
It's true, we could totally ditch the need for JSON:API enhancers.
But then that would be a big change in the philosophy of the module, that it uses JSON:API as its transfer method. We'd be saying that we use JSON:API for the content, but custom endpoints for relationships.
It would then mean twice as many requests, since for each entity we'd need to call first the custom relationships endpoint, and then the JSON:API endpoint for the content. Unless our custom endpoint includes the JSON:API data, in which case we're no longer using vanilla JSON:API at all!
> Server website will have already prepared the JSON:API urls to search directly.
The server would need plugins to do that for the various things -- books, embedded, aliases, etc etc. So we'd either need a new family of plugins, or use processor plugins both server- and client-side. In either case we'd need some sort of config on the server that mirrors the import config on the client.
> $this->handlers = [];
There's nothing else in EntityTypeManager which does this, so still needed.
> Mind blown - I must have missed this in the instructions
:)
Are there places where extra instructions could be added, or made clearer? Please file a docs issue if so!
Is the MR still needed?
> In import config I have this check box selected "Embedded entity"
You also need to enable the JSONAPI enhancer for that field.
I don't think it's a good idea for a tests issue to be making changes in non-test code as well. Those probably need their own issues.
> AdminToolbarToolsConstants
It looks like this is only used for tests? It makes more sense to put this trait in tests/src, or in a test base class.
Also, what are the functional tests checking? Could this be done with Kernel tests instead? We presumably already have Functional tests that test the menu works correctly. Here we just care that certain conditions produce certain menu item plugins.
What would be the benefit of gathering dependencies on the server rather than on the client?
Here's an updated patch for 3.6.2 for people using this with Composer patches.
> Applying this patch on both 3.3.0 and dev of admin_toolbar gives me the following error after clearing caches. Drupal 9.5.3 and Feeds Beta 3.
I've just tried installing drupal/feeds (3.0.0), and the MR here doesn't crash.
I don't get an 'Add Feed' menu item though -- that is maybe because Feeds is adding a collection menu link itself? Needs further debugging.
This is also still postponed!
> I couldn't use issue fork, since this code needs to be applied on top 2985168: Allow media items to be edited in a modal when using the field widget .
You can pull the branch from another MR into your fork, and then put your branch on top of it.
+++ b/core/modules/path_alias/src/PathAliasViewsData.php
@@ -0,0 +1,90 @@
+ 'relationship' => [
+ 'base' => 'node_field_data',
+ 'base field' => 'nid',
+ 'id' => 'standard',
I don't understand how this is going to work!
I've made a contrib module which adds Views support: https://www.drupal.org/project/path_alias_views
> If we mess with the file cache, we're inviting ourselves to enter a world of pain.
The file cache issue is something PHP is going to have to solve at some point -- the appearance of a new package which gives meaning to a hitherto dormant attribute is a problem that all composed PHP applications will have.
> I also think the third option is the most "obvious" one, and therefore best for DX.
I think it's got the worst DX!
> I also like that it allows different plugin types to decide if they want to support additional properties or not.
The lack of consistency is also bad DX.
Yeah, you've put your commits and the MR on the wrong branch.
Leave 4.0.x as is.
Put your commits on the feature branch which you've created -- computed_field-3353839/3353839-show-computed-field
> if the URL is not reachable, an UnexpectedValueException is thrown, causing a WSOD.
A WSOD is a bug! :)
I can't find it now, but I'm sure we had a documentation standard that said that code comments should not refer to fixed bugs. Code is not history, it exists in the present. So comments that say 'code is like this to fix bug 12345' should not be present.
Now that I think about it, while I got the requirements right, I got the process for handling it wrong.
What we want is to not have comments for closed issues in code.
But checking this is a one-time task. Once those are gone, it's done.
To prevent this in future, we need a CI job which looks for code comments that mention the *current issue the MR being tested is for*.
That is probably something that should be a specific CI job, rather than done by Coder.
@cmlara has explained on slack that it's so that $allow_invalid could have data to return.
So this works as designed, but it might be an idea to document this so it doesn't catch people out.
If they hadn't named their interface method now(), this would be an interesting idea! But having
$request_time->now()->getTimestamp();
$current_time->now()->getTimestamp();
is not good DX and is asking for bugs.
While I agree that using PSR interfaces is a good thing, I don't like PSR-20. For years now, Drupal has had the two concepts of request time and current time, which are often not the same. PSR-20 only has the current time, and furthermore, calling it 'now' is ambiguous when there's a need for the request time as well. It seems rather lacking for PHP applications, so I'm not sure what its intended use case is.
So we should adopt it, yes, but I think it's best if we keep our own Time service as the main API in Drupal, with its clear semantic method names.
PSR-20 only has a now() method, it does not have a concept of 'request time' like Drupal does.
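One way to square the two, sketched below: keep the Time service's semantic method names as the main API and treat now() as a thin PSR-20 adapter. The ClockInterface is redeclared locally so the snippet is self-contained (normally it comes from the psr/clock package), and this Time class is an illustration, not core's actual implementation:

```php
<?php

// PSR-20 defines exactly one method.
interface ClockInterface {
  public function now(): \DateTimeImmutable;
}

// Hypothetical sketch: Drupal-style semantic methods plus a PSR-20 shim.
class Time implements ClockInterface {

  public function __construct(protected int $requestTime) {}

  // The two distinct Drupal concepts, with unambiguous names.
  public function getRequestTime(): int {
    return $this->requestTime;
  }

  public function getCurrentTime(): int {
    return time();
  }

  // PSR-20 only covers "current time"; request time has no equivalent.
  public function now(): \DateTimeImmutable {
    return (new \DateTimeImmutable())->setTimestamp($this->getCurrentTime());
  }

}

$time = new Time(1000);
assert($time->getRequestTime() === 1000);
assert($time->now() instanceof \DateTimeImmutable);
echo "ok\n";
```

Consumers that only need a clock can type-hint ClockInterface, while Drupal code keeps the explicit request/current distinction.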
Those issues linked in #27 don't get in the way of this.
> I'm confused about the code comments referring to "current time" and the implementation referring to "request time". Which one are we mocking?
Good point, will tweak.
> If media entity is embedded in textarea directly like this
You need to escape your tags -- we can't see them.
Have you set up the entity_share_embedded_entities enhancer on your fields?
Setting to minor.
With entities such as redirects which don't have a changed field, this just results in an empty column in the pull form.
Might cause log errors on the server though!
I wonder if instead of fixing/changing the Skip Imported processor, we should do this in a new SkipExisting processor.
It would be semantically clearer, and also mean we don't change existing behaviour (even if I do think it's buggy).
Thoughts?
Add details about unpublished entities and entity types with no view permission.
Ah I hadn't thought of using a $ at the end!
In which case, we can maybe close this as not necessary.
Hi!
Thanks for the MR.
If you understand OAuth, could you have a look at fixing the tests too, please? See 🐛 AuthenticationOAuthTest is broken. It would be really great to get those passing so we can be confident in committing this.
Ok, so the question is:
Why do we do
$this->cache[$cid]->expire = $this->time->getRequestTime() - 1;
instead of
unset($this->cache[$cid]);
Is there a performance reason?
The save works -- there is a 'field.storage.entity_test.field_test_text' row in {config} after the save() call.
So it's the attempt to load it during the save of the field config that goes wrong.
Ok so I think the problem is that with a request time of 0, lines like this in MemoryBackend:
$this->cache[$cid]->expire = $this->time->getRequestTime() - 1;
make the expiry equal to CACHE_PERMANENT, which is defined as -1.
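The arithmetic in isolation, with a stand-in constant for the core one:

```php
<?php

// Stand-in for CacheBackendInterface::CACHE_PERMANENT.
const CACHE_PERMANENT = -1;

// With the mocked request time of 0, the MemoryBackend invalidation line
//   $this->cache[$cid]->expire = $this->time->getRequestTime() - 1;
// computes:
$request_time = 0;
$expire = $request_time - 1;

// The item that was meant to be expired is now marked permanent instead.
assert($expire === CACHE_PERMANENT);
echo "expire = $expire, same as CACHE_PERMANENT\n";
```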
But still -- why should incorrectly making cached items permanent cause saving to fail?
The way to reproduce this is by running Drupal\KernelTests\Core\Entity\EntityValidationTest::testEntityChangedConstraintOnConcurrentMultilingualEditing with the MR from 📌 FIx TODO in Drupal\entity_test\Plugin\Field\FieldType\ChangedTestItem and with TestMockedTime::$requestTime changed to have a default value of 0.
The only things I can see that call the request time during the config entity save are to do with MemoryBackend.
But surely nothing that happens in a cache backend should influence saving?! If the cache doesn't work properly, then either things aren't cached, or things fail to load from cache. But saving shouldn't be affected!
Doesn't Coder have sniffs for documentation problems as well as for code?
Let's see if any other tests are relying on the sleeping field type.