Account created on 20 November 2009, about 15 years ago
  • Media technologist at WebCoo 
Recent comments

🇳🇱Netherlands megachriz

Sorry for the late reply, I somehow did not get an email notification about this issue.

This is probably caused by the "minimum-stability" setting in the composer.json file in your Drupal root folder. By default, this setting is set to "stable". Because of that, Composer won't download the Tamper module (which is a dependency of Feeds Tamper), because Tamper has no stable release yet.

To fix that:

  • Either set "minimum-stability" to "alpha" in the composer.json of your Drupal root folder (because Tamper is currently in alpha);
  • Or composer require the Tamper module as well with the following command:
    composer require 'drupal/tamper:^1.0@alpha'
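For illustration, the first option is a one-line change in the composer.json of your Drupal root folder (fragment only; your actual file contains more keys):

```json
{
    "minimum-stability": "alpha",
    "prefer-stable": true
}
```

With "prefer-stable" set to true, Composer still prefers stable releases where available and only falls back to alpha for packages, like Tamper, that have none.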
    

I hope that this helps!

🇳🇱Netherlands megachriz

I've checked again. I added code to support base URLs that do not include the scheme. Previously, "example.com" was marked as incorrect, because parse_url() does not see that as a URL that contains a domain name. With the regular expression /^([^\/]+\.[a-z]+)\//i, the code can accept "example.com" as a base URL. Test cases have been added for this, and it is checked that "www.example.com", "example.com/cat" and "example.com/foo.bar" are correctly parsed as well.
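As a standalone illustration (not the module's actual code) of why parse_url() alone is not enough here:

```php
<?php

// parse_url() treats a scheme-less URL as a path only: no host is detected.
$parts = parse_url('example.com/cat');
var_dump(isset($parts['host'])); // bool(false)

// The regular expression mentioned above extracts the domain part instead.
if (preg_match('/^([^\/]+\.[a-z]+)\//i', 'www.example.com/foo.bar', $matches)) {
  echo $matches[1], "\n"; // www.example.com
}
```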

There is one remaining unresolved thread: there is an inconsistency with other Tamper plugins in that this one tries to convert the incoming data to a string, where other plugins would throw a TamperException. I'm not sure if that is a problem or if it is worth letting this issue hang because of it. In my opinion, at least in the context of Feeds, it is a good thing if the Tamper plugin tries to be "friendly" and just sees if it can use the data. Perhaps the other Tamper plugins need to be changed to become friendlier too, or maybe there should be a strict mode for when it is important that the source data is of the correct data type. Changing other Tamper plugins is out of scope for this issue, however.

I think that this is ready to go, although it is possible that a small change is needed after Improve handling of empty data Active is resolved.

🇳🇱Netherlands megachriz

Tests are passing! This looks ready for a new review!

Highlights of the changes:

  • Column selection: if a field has more than one 'column', you can select a column. For example, a link field has "uri" and "title"; a body field has "value", "summary" and "format".
  • Selecting a bundle is optional. A field can be chosen without needing to select a bundle first. Selecting a bundle does narrow down the list of available fields.
  • The list of entity types is grouped by provider (usually the module defining them), making it easier to find the one you need.
  • Made updating the form via AJAX work outside of the Feeds Tamper context.
  • Lots of automated tests.

@hepabolu
Thanks for your contribution 🙂. I see that your plugin has one feature that looks useful that Entity Finder currently doesn't have: a return field. Entity Finder now always returns the entity ID, but being able to select a different field to return sounds very useful.
I'm not sure yet if we should add it to Entity Finder right now (since this issue has been open for a very long time already) or if it would be better to add it in a follow-up. Do you want to help add that feature here?

🇳🇱Netherlands megachriz

I'm not sure, but the url for the image on the edit page looks wrong.

Instead of
example.com/system/files/styles/medium/http/artifact-info%3A8888/sites/default/files/feeds/auctions/images/06-09-19/1_2.jpg.webp?itok=fHyV7p0l
I would expect something like this to be the url:
example.com/system/files/styles/medium/private/feeds/auctions/images/06-09-19/1_2.jpg.webp?itok=fHyV7p0l
Or
/sites/default/files/styles/medium/public/feeds/auctions/images/06-09-19/1_2.jpg.webp?itok=fHyV7p0l

Perhaps it is an issue caused by Add a mapping target to media field Needs review ?

🇳🇱Netherlands megachriz

@trickfun
Your issue looks like the one reported in #3261011: Wrong CSV parser lines when feed is using CRON on large files . It could be related to cron being run very often, or perhaps to multiple Feeds tasks running in parallel.

🇳🇱Netherlands megachriz

Closed 🐛 Find replace (multiline) adding a carriage return to end of replacements Active as a duplicate.

Are you able to make progress on this, @brooke_heaton?

🇳🇱Netherlands megachriz

Thanks for reporting!

This should be fixed in the Tamper module and the issue has already been reported there:
🐛 Multiline Find/Replace Adds Carriage Return Active

@brooke_heaton said in Slack that he wants to have a go at fixing the issue.

🇳🇱Netherlands megachriz

@ptmkenny
Thanks for testing! What do you think of the changes that I made for Aggregate, WordCount, Math and KeywordFilter? For example, do you think it is okay that the Math plugin handles an empty value as if it were a zero? This ensures that the Math plugin produces a number. It does still error if the plugin is configured to divide by the source value, since dividing by zero is not allowed.

🇳🇱Netherlands megachriz

Tests are passing again. 🙂

🇳🇱Netherlands megachriz

@trickfun
Interesting, though if I use the Directory fetcher instead of File Upload with the feed type provided earlier, put the file 'data.csv' in sites/default/files/csv-import-test, set "Server file or directory path" on the feed form to "public://csv-import-test", and then run the import in the UI, all 600 items get imported. I tried it again using the file with 250,000 items and imported that in the background. After a few cron runs, a few thousand items had been imported. I did test with only one file in that directory, however. Do you use more than one file?

In order to debug this, it would be helpful if you:

  1. Can produce this issue on a clean install;
  2. Provide the feed type configuration that you used on that clean install;
  3. Provide the file(s) that you tried to import. This may also be a sample file, if the issue can be reproduced with that.
🇳🇱Netherlands megachriz

@trickfun
I generated a CSV file with 1 header row and 600 data rows (see attached) and could import the complete file just fine. I tried this via file upload, running the import in the UI. And I also tried running the import in the background and then running cron. All 600 rows got imported as nodes.

I also tried to import a file with 250,000 items using the "Import in background" option and on the command line I ran cron 5 times using drush core-cron. The import was not completed yet, but already more than 5000 nodes were imported.

See the attached feed type for the configuration that I used; line_limit is set to 100. line_limit really only affects the number of items parsed at a time and is not a maximum for the number of items that you can import. At least, that is not the intention of that setting.

So there must be something else happening that causes the import to not continue after 200 items. Perhaps there is a bug in Feeds that only occurs in very specific circumstances? Or perhaps there is a bug in another module that disrupts the import process?

Can you try to reproduce your issue on a clean install?

🇳🇱Netherlands megachriz

For your information: in the latest dev of Tamper, json_decode will now return an array instead of an object. Further fixes for the encode plugin are planned in Create separate Decode-plugin RTBC .
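For reference, the difference comes down to PHP's second ($associative) argument to json_decode():

```php
<?php

$json = '{"name": "Tamper"}';

var_dump(json_decode($json));       // stdClass object (previous behavior)
var_dump(json_decode($json, TRUE)); // associative array (new behavior)
```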

🇳🇱Netherlands megachriz

I think this is indeed a bug in the Tamper module. The encode plugin says that it accepts an array, but then passes that to json_decode() which only accepts a string.

Planned to be fixed in Create separate Decode-plugin RTBC . Feel free to share your thoughts there.

🇳🇱Netherlands megachriz

@smustgrave
Setting default descriptions was postponed to a follow-up, to reduce the complexity of this issue and to increase the likelihood of getting this done (less chance of merge conflicts). Does it then still make sense to set them to empty strings here? The code ensures that getDescription() returns an empty string when the property is not set. Or should each property of a config entity always be set?
Also, the update tests made things more complicated when rerolling patches.

🇳🇱Netherlands megachriz

Since the latest patch no longer applied on Drupal 10.4.0, I've "rerolled" the patch into an MR, with the following changes:

  • Default descriptions are removed. @rkoller created follow-ups Create default role descriptions Active and Create additional default role descriptions unique to the demo_umami profile Active for this earlier.
  • And because of that also config updates and their tests are removed.
  • Made the $description property in the Role class protected instead of public.
  • Added more strict typing for getDescription() and setDescription().
  • Updated RoleListBuilderTest as per #93.
  • Changed the description of the description field to "Displays on the Roles page", following the suggestion from #88.
  • Moved tests for setting/getting description to a new Unit test called RoleTest.

Also attached a patch for Drupal 10.4.0, with the changes to tests removed. The MR is based on Drupal 11.1.x.

Hopefully, it passes tests.

🇳🇱Netherlands megachriz

The merge conflict should be resolved. Conflict was caused by additional test methods in EncodeTest.

🇳🇱Netherlands megachriz

This is pretty straightforward and the additional test coverage fails without the changes, so I merged it.

🇳🇱Netherlands megachriz

📌 Encode plugin: only call function from the available options Active is now merged.

From #20:

the method multiple() should return TRUE if the result is an array. The plugin 'Required' does this as well. The Keyword Filter plugin does not, but I think that is a bug. I will create a new issue for this later.

Created 🐛 Keyword Filter should report that multivalued data remains multivalued Active for this.

🇳🇱Netherlands megachriz

It would be worthwhile to also check if Improve handling of empty data Active could solve some issues. I would love some feedback there on the latest changes.

The issue with the solution here is that it results in Tampers getting skipped, and that could be bad (see #10). Though letting the import crash is probably worse. I hope to address this issue in more depth in a few months, after I've gotten the Tamper module into the beta stage.

🇳🇱Netherlands megachriz

I've added a table to the issue summary that gives an overview of all modes and whether or not they can work with arrays.

Keeping it in a single plugin is doable

Adding a separate Decode plugin and then deprecating the decode options of the Encode plugin sounds like a nice idea. But at the same time, I think we could fix the issue in the Encode plugin itself by just checking whether we receive an array or a string. If it is an array, we could just iterate across all values. The Keyword Filter plugin does this as well. This would be less disruptive for existing users of the plugin.

Return value can be multiple values sometimes

Additionally, the plugin does need to report whether the result is multiple values or not. 'Unserialize', 'JSON decode' and 'YAML decode' can return multiple values, so for these modes the method multiple() should return TRUE if the result is an array. The plugin 'Required' does this as well. The Keyword Filter plugin does not, but I think that is a bug. I will create a new issue for this later.

Test coverage

If we keep it all in a single plugin, we need additional test coverage to ensure that each mode can handle multiple values. We also need test coverage for the method multiple() returning the right value: TRUE when an array is the result, FALSE otherwise. I've added tasks for these to the remaining tasks list in the issue summary.

Since 📌 Encode plugin: only call function from the available options Active touches the same code, I think it would be better to first finalize that one and then work on a fix for this one. Test coverage could already be written, however; I think I'll make a start on that.

Let me know your thoughts! :)

🇳🇱Netherlands megachriz

@sourabhsisodia
This looks related to 🐛 Literal "_none" value saved for select fields Active , but this issue is not a duplicate of that one. This issue is about a situation where the option "_none" is not defined. That other issue is about literally saving "_none" in the database when that option is selected.

🇳🇱Netherlands megachriz

I've added a possible fix for the case from #11.

🇳🇱Netherlands megachriz

To address #11, I've added test coverage for using "_none" as a value for text fields. And the tests are failing. If you, for example, enter "_none" for the field "Family", an empty value gets saved for that field. If the name field is required, you get the more cryptic error message "This value should not be null.".

🇳🇱Netherlands megachriz

megachriz made their first commit to this issue’s fork.

🇳🇱Netherlands megachriz

Thanks for reporting! I've moved this issue to the Tamper project, because it would need to be fixed there.

🇳🇱Netherlands megachriz

I'm not sure yet if adding all feeds as menu items in the admin content menu is a good idea. That list could potentially become large, because multiple feeds of the same type can exist.

However, in the D7 version of Feeds, we did have "feeds" listed in the admin content menu, but that worked a bit differently. For importers that used a "standalone import form", you could access existing feeds this way, because you would only have one feed per importer in this case. But for importers attached to a content type, you would be redirected to a node/add page instead, because multiple feeds per importer could exist. In the modern Drupal version of Feeds the "standalone import form" concept no longer exists; instead we have feeds and feed types, which is conceptually closer to the "importers attached to a content type" concept.

I think it would be cool if /feed would redirect to /admin/content/feed.

🇳🇱Netherlands megachriz

For the modern Drupal version, you no longer have Feeds Tamper plugins; you have Tamper plugins instead. These plugins live in the Tamper project. Feeds Tamper is now a bridge between the Feeds and Tamper modules. There is no step-by-step guide for how to write your own Tamper plugin, but there are plugins in the Tamper project that could perhaps be used as examples.

In short:

  • Tamper plugins are recommended to be in the namespace \Drupal\mymodule\Plugin\Tamper.
  • Each plugin extends \Drupal\tamper\TamperBase.
  • If your Tamper plugin requires configuration, implement at least defaultConfiguration(), buildConfigurationForm() and submitConfigurationForm(), and add config schema for the plugin. Implement validateConfigurationForm() if needed.

  • Implement tamper() to apply the transformation. Return the transformed data.
  • If you would like the plugin to be added to the Tamper module, a unit test that extends \Drupal\Tests\tamper\Unit\Plugin\Tamper\TamperPluginTestBase and a functional test that extends \Drupal\Tests\tamper\Functional\Plugin\Tamper\TamperPluginTestBase are required. (With both base classes you already get some test coverage for the plugin, such as whether the methods return data of the correct type, whether the form for adding a Tamper plugin instance can be submitted without errors and whether the config schema is correct.)
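Putting those steps together, a minimal plugin could look roughly like the sketch below. This is a hypothetical example (the module name, plugin id and the exact tamper() signature are illustrative); check an existing plugin in the Tamper project for the authoritative shapes:

```php
<?php

namespace Drupal\mymodule\Plugin\Tamper;

use Drupal\tamper\TamperableItemInterface;
use Drupal\tamper\TamperBase;

/**
 * Hypothetical example plugin that uppercases text.
 *
 * @Tamper(
 *   id = "mymodule_uppercase",
 *   label = @Translation("Uppercase"),
 *   description = @Translation("Converts text to uppercase."),
 *   category = "Text"
 * )
 */
class Uppercase extends TamperBase {

  /**
   * {@inheritdoc}
   */
  public function tamper($data, TamperableItemInterface $item = NULL) {
    // Return the transformed data.
    return mb_strtoupper((string) $data);
  }

}
```

Since this plugin needs no configuration, the configuration form methods from the base class can be left as they are.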

Does this help? If not, feel free to reopen the issue.

🇳🇱Netherlands megachriz

I've made the following changes:

  • Tests testWithNullValue() and testWithEmptyString() are moved to the base class and tests for certain plugins are overridden as necessary.
  • Instead of assertEquals() for testing with null values or empty strings, assertSame() is used.
  • Empty value handling for the Aggregate plugin is added. It returns NULL in case of an empty array, unless the used function is 'count' or 'sum'.
  • Empty value handling for the WordCount plugin is added. NULL is returned in case of a null value. In case of an empty string, the number of words should be zero. The WordCount actually returned 1 in this case, so additional code is added to fix that.
  • The Math plugin now treats null and empty strings as zero. Test coverage is added for empty data.
  • The Keyword Filter doesn't abort in case of empty values, but it does cast the input data to a string.
🇳🇱Netherlands megachriz

This is well on its way! Good work!

  1. I've gone through all Tamper plugins that we have and I think that Aggregate and WordCount (relatively new plugins) should also do something about handling empty data.
  2. The Math plugin throws an exception on an empty string. Perhaps we need to return an empty string here too?
  3. For test coverage, I think it would be better if testing with null and an empty string is added to TamperPluginTestBase instead. This way, we'll see how each plugin reacts on that input and we can override the tests for plugins as necessary. I'm working on this one.
  4. Some plugins treat the integer 0 (like ArrayFilter) as empty and return it as is. Other plugins mark it as invalid input data. More of this below.
  5. The Keyword Filter plugin treats the integer 0 as empty. I think that this is wrong. You may want to filter out items whose value is zero. I wonder if Keyword Filter should abort early at all.
  6. FindReplace and FindReplaceRegex accept integers and floats as valid input, but FindReplaceMultiline doesn't. Maybe fixing that is out of scope for this issue.
  7. We may need additional tests for how plugins deal with 0 (as integer or float) and with boolean FALSE. But we could handle that as well in a follow-up, as handling that too makes the scope of this issue bigger.
🇳🇱Netherlands megachriz

Can you share the editable file of this design? I might want the circles to be green in case that looks good with the design. See also 📌 Project Browser: Create logo for Tamper Needs review .

🇳🇱Netherlands megachriz

Can you share the editable file of this logo? I think I would like the image a bit bigger and I would like the circle to be green, as I think the color green communicates that the object is now "good", as in you managed to transform the data the way you wanted. I'm also still considering my own design, but I don't know if that fits with the proposed logo for Feeds Tamper: 📌 Create logo for Feeds Tamper for Project Browser Initiative Active .

🇳🇱Netherlands megachriz

The tests look like they need work. I do indeed get a random test failure.

🇳🇱Netherlands megachriz

I see that RedirectTest is failing. That's unrelated to this issue; it's caused by the signature of createUser() having changed in Drupal. It now expects a list of permissions as the first parameter.

🇳🇱Netherlands megachriz

In the MR I provided a fix and tests. Some code for the tests is copied from the Commerce Abandoned Cart tests. I did try to programmatically create an order by reusing code from RedirectTest, but I ran into the issue that the path /cart did not exist, probably related to the code that fakes a request.

I hope that the tests pass. I just noticed that there might be a random test failure happening.

🇳🇱Netherlands megachriz

I updated the logo for Feeds Extensible Parsers, based on the new Feeds logo.

I did have the idea to include something from the following too, but I wasn't sure yet how to nicely incorporate them.

Xpath:

QueryPath:

🇳🇱Netherlands megachriz

I'm still using the patch from #79. I tried to add test coverage for it some time ago, but I haven't yet found a way to write a test that fails without the fix. The test might have failed locally at the time.

🇳🇱Netherlands megachriz

The change is failing tests.

🇳🇱Netherlands megachriz

This is still an issue. You cannot rely on routing when requesting the available checkout steps outside the checkout process.

Use case: display the checkout steps on the /cart page. Multiple carts can exist on this page, so there is no order in the route.

🇳🇱Netherlands megachriz

@jacobupal
Well, there is more config in Drupal that you cannot import if other config does not exist. And you could still technically apply the workaround if you remove the dependency line from the config file.

🇳🇱Netherlands megachriz

Alright, thanks for reporting back! Closing this issue.

🇳🇱Netherlands megachriz

@jacobupal
Prevented. If a feed type had a dependency on the feeds_item field, you could not import the feed type configuration when the feeds_item field does not exist. So in that case you would also need to import the feeds_item field configuration.

🇳🇱Netherlands megachriz

@jacobupal
The feed type indeed cannot work correctly if you don't have the feeds_item field. So perhaps the bug here is that the feed type doesn't declare a dependency on the feeds_item field.

🇳🇱Netherlands megachriz

Can you provide an export of your feed type configuration and a sample source file?

Also, I think that this issue belongs to Feeds Extensible Parsers?

🇳🇱Netherlands megachriz

That's odd, I don't see getFromAddress() being declared twice in this file:
https://git.drupalcode.org/project/commerce_abandoned_carts/-/blob/2.1.x...

Also, when I download 2.1.x-dev module manually and search for "getFromAddress", it only appears in the file AbandonedCartMail.php.

I also don't see protected $time; twice in AbandonedCartMail.php.

Do you perhaps have patches applied to the module? Or maybe you have two copies of this module on the system?

🇳🇱Netherlands megachriz

@batigolix and I looked at this issue yesterday at the DICTU Drupal Developers Day in Assen.
@batigolix successfully imported a media image on a node. There were some errors, however:

  • When mapping to a media field first and then checking out the code from this issue, he got an error for getSummary().
  • The first import test resulted in a SQL error related to language (language cannot be NULL). When the language on the target was then changed to "English", the import went fine. It could be that there's an issue with creating a Media entity when the language is not specified. I did see that in his configuration the language on the target was set to an empty string, so perhaps that is related to the error.

@batigolix is working on a test, but ran into the error "filename does not exist", which I did not have an explanation for right away.

🇳🇱Netherlands megachriz

@arccoss and I looked at this issue yesterday at the DICTU Drupal Developers Day in Assen.
@arccoss confirmed that MR !89 fixes the issue for images. We also found out why the test is currently failing: the call to Drupal\feeds\Feeds\Target\File::getMimeSubtypeFromUrl() doesn't work in a Kernel test. The method uses cURL, and since a Kernel test only has PHP and a database, but no site accessible via HTTP, trying to retrieve something from a URL like "http://localhost/modules/custom/feeds/tests/resources/assets/attersee_no..." results in a 404 HTML document. That is why the test said the extension is html: that is what getMimeSubtypeFromUrl() gets from a 404.

A way to deal with this and keep the Kernel test is to somehow override the cURL part of Drupal\feeds\Feeds\Target\File::getMimeSubtypeFromUrl() in the test. Here are some thoughts about what we could do:

  • The method could be split into two parts: one that does the cURL call and one that does the "finfo" part.
  • In FileTest.php and in ImageTest.php we add a class that extends Drupal\feeds\Feeds\Target\File and Drupal\feeds\Feeds\Target\Image respectively and change FileTest::getTargetPluginClass() and ImageTest::getTargetPluginClass() to return the newly created classes. In these classes a method would need to be overridden:
    • getMimeSubtypeFromUrl() could be completely overridden, returning just an extension string.
    • Or just the cURL part would get overridden, but that method override must be capable of returning something similar to what cURL can do, and I don't know enough about cURL to know if that's doable.
  • Another thought: instead of using cURL directly here, replace the logic with a service. I'm not sure if the same result could be achieved using Guzzle?
  • I believe there's also a possibility that cURL is not available, and perhaps the code should account for that situation as well. At least in the D7 version of Feeds there was a check for this in the function http_request_use_curl(). Maybe we can avoid such a check if we can replace the usage of cURL with a service, as said in the point above.

I also wondered why there is another MR here. It seems to take another approach, but there's no clear explanation of why it exists.
@imash Can you explain why you opened another MR? Did the other MR not work, or does your MR solve something that is not covered by the other one?

🇳🇱Netherlands megachriz

I see that the latest patch includes other changes that were made after rc2 (not related to this issue), like coding standard fixes. So it makes sense that the latest patch won't apply on rc3.

🇳🇱Netherlands megachriz

I think that you need to update Coder module, because coding standard rules changed recently. The changes in the MR now introduce coding standard issues instead of fixing them.

See also the following commits:
https://git.drupalcode.org/project/floating_block/-/commit/e63b808c79208... (floating_block_test.css)
https://git.drupalcode.org/project/floating_block/-/commit/d10c07dacdef5... (HelperTest.php)

🇳🇱Netherlands megachriz

@rob230
I wasn't aware that this issue caused a WSOD; I thought it only generated some errors on the feed page.

Thanks for creating the patch, I can see that applying the diff could be troublesome because of the binary file. I hadn't tried applying the diff with Composer, I only tried it like this:

curl https://git.drupalcode.org/project/feeds/-/merge_requests/199.diff | git apply
curl https://git.drupalcode.org/project/feeds/-/merge_requests/205.diff | git apply

I didn't think about that for Composer it could cause trouble.

While I think this issue doesn't affect everyone who updates Feeds (it only affects sites that have unfinished imports at the time of updating), good call on creating a new release. I can do that this Thursday.

🇳🇱Netherlands megachriz

I see that on a site where I had this issue I'm still using the solution in #7.

I do plan to try to update that site to as many D11-compatible releases as possible, so perhaps I can test the changes from this issue along with it. But no promises: I noticed that when I tried composer update just now, quite a few patches could no longer be applied. Whether there will be any time left to look at this issue depends on how much time updating those patches takes.

🇳🇱Netherlands megachriz

Yes, this is possible for all config.

Example for overriding configuration for Commerce Abandoned Carts in settings.php:

$config['commerce_abandoned_carts.settings']['testmode'] = TRUE;
$config['commerce_abandoned_carts.settings']['testmode_email'] = 'foo@example.com';
🇳🇱Netherlands megachriz

Thanks for merging!

Yes, I've seen the Drush issue. I have worked on Drush commands in the past, but I'm not up to date yet with the latest API and best practices. I could try to write tests for the commands, however, if I can find enough time for it.

🇳🇱Netherlands megachriz

On the Commerce Abandoned Carts settings page you can configure a test mail address. Or you can disable test mode. Doing one of these should resolve this issue.

If it doesn't resolve your issue, feel free to reopen this issue.

🇳🇱Netherlands megachriz

Because tests are failing, I added a task to the list that says "Try to adjust the code or the tests so that they pass.".

🇳🇱Netherlands megachriz

I added a list of tasks to the issue summary that could help with fixing this issue.

🇳🇱Netherlands megachriz

I added a list of tasks to the issue summary that could help with fixing this issue.

🇳🇱Netherlands megachriz

@rudi teschner
Cool, thanks for testing. I merged the code!

🇳🇱Netherlands megachriz

It sounds like this could be related to or a duplicate of Add support for remote file systems for http downloaded data Active .

The duplicate entry issue (for the feeds_clean_list table) could be a completely different issue, however.

🇳🇱Netherlands megachriz

It's interesting that a MacBook going into sleep mode could cause the import to hang. For imports in the UI that makes sense (if the webserver is running on the MacBook), but I would expect that with imports on cron, the import would eventually continue.

I have a Mac too. It would be interesting to run a large import, put the Mac into sleep mode at the exact moment that cron is running, and then see if that makes the import hang.

🇳🇱Netherlands megachriz

I've updated the issue summary and added a list of tasks that remain to be done.

🇳🇱Netherlands megachriz

If you can figure out why it hangs and resolve that issue, then you could restart the import from scratch. But if it hangs once and you don't get that issue resolved, it will likely hang again on a restart of the import. So in that situation I would choose to remove the items that were already imported from the CSV file.

🇳🇱Netherlands megachriz

The provided fix in the MR should be fixing this issue. 🙂

It would be great if you can test it, I plan to merge it this Thursday.

🇳🇱Netherlands megachriz
  1. I don't have experience importing a CSV file with that many lines, but I would choose to import in the background and let the import be done in chunks using cron.
  2. I don't know. There could be a bug (either in Feeds or another module) causing the import to stop. If this is the case, you should be able to find something about it in the server logs (the error may not be logged by Drupal). Another possibility is that the server thinks "This process runs for way too long, I'm going to stop it". Or a lack of memory. These are probably reported in the server logs too.
  3. If you unlock the feed and then restart the import, the import will start from the beginning. If you have configured a CSV column as unique, Feeds will skip items it has already imported, but it will still go through each item in the file in order to check that. Say the import hangs at 2000 items and you restart the import; then for the first 2000 items of the CSV file, Feeds will check if they are already imported, see that this is the case, and not import them again. But just doing those checks can also take a long time.
  4. Yes, setting cron to run once an hour would work, but I would try to run cron more often. If, for example, Feeds managed to import 500 items per cron run, it would take 648 hours to import all 324,000 items. That's almost a month. If you want to stop the cron import, unlock the feed.
  5. An import running in the UI (where you see a progress bar) stops shortly after you close the browser (it would just finish only the last chunk it was busy with). An import running on cron does not depend on the browser. You can restart the import by unlocking the feed and then start the import again.
  6. Yes, an import running on cron does not depend on the browser.

Since I don't know what makes the import hang, I cannot guarantee that the import of all 324000 items will be successful when imported during cron. If it happens to hang, I would first check the server logs to see if there's any information what made it hang. Then I would check how many items were already imported and remove that many items from the CSV file and try again.
You could see that the import hangs if the amount of imported items stays the same after a cron run.

By installing the Queue UI module, you can inspect/monitor the import tasks that are scheduled to run. If the same task is retried over and over again, then the import hangs. (One day I hope to add functionality to Feeds that would detect that the same task is retried over and over again, so that it can warn the user that something went wrong during the import - or maybe even skip the task so it can continue doing the rest of the import.)

🇳🇱Netherlands megachriz

@anybody
Thanks for your compliments! 😊
I'm pleased as well that I could get this one finished. Up to the next thing!

🇳🇱Netherlands megachriz

The "Delete" button on a feed, deletes the feed itself. Not the import items.

🇳🇱Netherlands megachriz

Ah, I see you just updated the issue summary. Feel free to add/update the documentation. :)

🇳🇱Netherlands megachriz
  1. Active checkbox: you can configure feed types to import sources regularly. This is called "Periodic import" (see image). Only feeds that are active are used for periodic import. So when deactivating it, this particular feed will no longer be imported regularly, only when you click "Import" or "Import in background".
  2. When an import for a feed starts, the feed gets locked. This prevents another import from running for it at the same time. If two imports for the same feed ran simultaneously, that could cause issues; for example, earlier imported items could get deleted that should not be (if you have configured Feeds to delete previously imported items that are no longer in the feed). You would unlock the feed if you believe the import got stuck; you can then restart it. When unlocking, Feeds cleans up the metadata of the import that did not finish.
  3. If you want to start the import over, first unlock the feed. Then you can delete all imported items.
  4. "Delete items" only deletes items that were created or updated by this feed; it doesn't touch items that only other feeds created or updated. It is technically possible to configure feed types so that two feeds update the same content. Say feed 1 creates items A, B and C, and feed 2 creates items D and E and updates item C. Then "Delete items" on feed 1 would delete items A, B and C, and "Delete items" on feed 2 would delete items C, D and E. Both would delete item C, because both "touched" item C.

A tip for importing large files: I think it is a better idea to import these in the background (using the "Import in background" button). The import then runs in chunks during cron runs, so the chance that it hangs is smaller, because it doesn't depend on the browser being kept open. It can still hang or get stuck, however, for example when a fatal PHP error occurs in the process, or when the server shuts down. Or perhaps when running module updates, because module files can temporarily be removed during an update, which could also cause fatal PHP errors.
Import in background does require cron to be configured. Per cron run, the import process runs for about a minute, so I can imagine 324,000 lines would take quite a large number of cron runs too.
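As an illustration of configuring cron, a crontab entry that triggers Drupal cron every five minutes could look like the sketch below. The site URL and cron key are placeholders; the real cron URL is shown on your site at /admin/config/system/cron, and the Drush path depends on your installation:

```shell
# Trigger Drupal cron every 5 minutes via the site's cron URL.
# <cron-key> is a placeholder for the key shown at /admin/config/system/cron.
*/5 * * * * curl --silent https://example.com/cron/<cron-key> > /dev/null

# Or, if Drush is available on the server (path is an assumption):
# */5 * * * * cd /path/to/drupal && vendor/bin/drush cron
```

Running cron this often only matters while a large background import is in progress; for small sites, hourly cron is usually fine.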

I hope this answers your questions. Feel free to reopen this issue if you have more questions. :)

🇳🇱Netherlands megachriz

I don't understand your issue. I tried to reproduce it in the following way:

  1. Created a feed type with the HTTP Fetcher (download from URL) and the CSV parser.
  2. Mapped "title" to the node title and "body" to the body field.
  3. Created a feed; I noticed that the delimiter was set to , (comma) and changed it to ; (semicolon).
  4. Imported the following source:
    title;body
    Foo;Bar

And the content imported fine.

So perhaps there's a specific plugin or module that disrupts the process?

🇳🇱Netherlands megachriz

I would like the strokes of the bear to be as thick as on the Feeds logo:

🇳🇱Netherlands megachriz

This is the logo we added to the project. The quality was set to 90 and the file size came down to 20 KB.

The following command was used to downsize the PNG from the afdesign export:
    pngquant Feeds.png --force --quality 90

🇳🇱Netherlands megachriz

I provided a possible fix in the MR. I did not test it yet though. I'm still working on the automated tests.

If you want to apply the code to Feeds 8.x-3.0-rc2, you also need to apply the code from 🐛 Error: Serialization of 'CurlHandle' is not allowed in serialize() Active.

🇳🇱Netherlands megachriz

Because an issue related to this one has now been opened whose fix would overlap with this one's, I merged the code. Feel free to reopen this issue if it didn't completely fix the problem for you.

The related issue: 🐛 Error: Call to undefined method Drupal\feeds\State::messenger() Active

🇳🇱Netherlands megachriz

Thanks, I hope I can review this one soon.
