Recent comments

Globally, this issue should be fixed in TMGMT itself: the option comes from the TMGMT core module, and it's perfectly valid to check this checkbox while submitting. If you do so with suggestions selected, then TMGMT should add all selected suggestions to every TMGMT Job in the queue. Worth creating a ticket there, I believe.

We decided to unlock this checkbox but leave it enabled by default. This way our logic is unchanged, but it's still possible to submit TMGMT Jobs one by one if needed.

Hi.

It isn't really needed.

This was done intentionally because Smartling has its own concept of Jobs: something that includes files. So, when you submit content to multiple languages, multiple TMGMT Jobs (files) are created (one file per language), and all those files must be added to the newly created or existing Smartling job. That's why we decided to check the `Submit all N translation jobs with the same settings` option by default: we want all the files to be submitted to a single Smartling job.

But now that you mention suggestions, I would probably agree that it makes sense not to check it by default, and instead let users add suggested TMGMT Job items, if they exist, for the second and subsequent TMGMT Jobs in the queue.

But this issue applies to TMGMT itself: it needs to provide a UI where users can add all suggestions to the relevant TMGMT Jobs and only then submit them. Right now it's a tradeoff between clicking through the TMGMT Jobs queue and clicking once but losing suggestions.

Will merge soon.

Hi,

1. I'm not sure it's a good idea to override a TMGMT Job Item's item id/type, even though it's a cloned object.
2. I initially thought that `\Drupal::entityTypeManager()->getStorage($usage->layout_entity_type)->load($usage->layout_entity_id);` would return a node or another content entity, but it returns the actual layout entity. That entity only knows it can work with nodes of some type; it has no information about the exact node (id).

> Always pulling context from the local environment seems to make more sense

That makes sense for queue items generated on the local env, but not for data generated on another env. Context still works if you submit content from your local env. You can always remove context-related queue items from the queue table during your db sync process, I think.
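Something like this, for illustration, assuming the default database queue backend. The queue name below is a guess; check the `name` column of your `queue` table for the real one:

```php
<?php

// A sketch only: drop context-related items from Drupal's default
// database queue during a DB sync. The queue name is hypothetical;
// inspect the `name` column of your `queue` table for the real one.
$queue_name = 'tmgmt_smartling_context_upload';
\Drupal::database()->delete('queue')
  ->condition('name', $queue_name)
  ->execute();
```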

> Instead of postponing all the data collection for context until a cron job, what if this was all done at time of job submission?

This was done through the queue specifically to avoid slowing down the translation request process (which is already quite slow due to the number of API calls). In theory it can be done right away: you can decorate `RequestTranslationSubscriber` and override `onUploadRequest` so that it uploads immediately instead of queueing. But if you have a TMGMT Job with hundreds of TMGMT Job Items, you will upload those contexts sequentially and hit the max execution timeout every time. Otherwise it would require rewriting the whole submission process as a Drupal batch operation.
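If someone really wants the immediate upload, a rough sketch of that swap is below. The service id and the `ImmediateUploadSubscriber` class are assumptions here; check tmgmt_smartling's services.yml for the real id:

```php
<?php

namespace Drupal\mymodule;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;

/**
 * Swaps the upload subscriber for one that uploads context immediately.
 */
class MymoduleServiceProvider extends ServiceProviderBase {

  public function alter(ContainerBuilder $container) {
    // Hypothetical service id: look up the real one in
    // tmgmt_smartling.services.yml.
    $id = 'tmgmt_smartling.request_translation_subscriber';
    if ($container->hasDefinition($id)) {
      // ImmediateUploadSubscriber would extend RequestTranslationSubscriber
      // and override onUploadRequest() to upload instead of queueing.
      $container->getDefinition($id)->setClass(ImmediateUploadSubscriber::class);
    }
  }

}
```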

> You wouldn't even need to worry about running any of this context user logic at all. Because the logged in user can see his/her content.

In your particular case, yes. But I can imagine a separate user with a special role that can only submit context for translation (different clients have different flows). So, again, this user-switching logic was implemented with the intention that clients create a separate user with the special context permissions assigned.

> We almost want to do the context gathering at time of "add to cart", not at time of job submission.

Context is assigned to a Smartling source file. I think that when content is added to a cart, those TMGMT Jobs don't have translators assigned yet, so we can't get a filename yet. At least that's what I see for now.

To be honest, this looks like a patch for TMGMT itself; why a separate module?

Thanks @Flow1990. This will be included in the next release.

I ran functional tests with this change. Unfortunately, the context-related tests failed (testContextSendingByCron and some others).

I'm going to postpone this issue since you are not hitting this scenario anymore. I'll get back to it once I have capacity.

Thanks. Right now I'm working on another feature and have already refactored this place (config is not a needed dependency in this service, so it will be removed). The next release will cover this.

> After debugging things, it is entirely around the fact that impersonation of another user via their session and a session cookie generated for that user doesn't work until we save the session.

That doesn't sound right: it works in our test envs and for other clients too. Please share your Drupal core version and I will debug on that exact core.

P.S. Just curious: does masquerade work on your env? We use exactly the same approach as that module does.

As I understand it, you include inline blocks as TMGMT "suggested" items. TMGMT creates a TMGMT Job item per requested entity and packs them into a TMGMT Job. Our context feature generates a URL from the entity attached to the TMGMT Job item. There is no relation like "this block/inline block/etc. is placed on the entity (node or something else) from another TMGMT Job item in this TMGMT Job". The connector can't guess which entity it needs to pick to render blocks; it just renders the entity from the job item by whatever link it provides.

It's also not clear which entity to pick, because as I understand it, an inline block can be placed on multiple entities.

However, you can use hook_tmgmt_smartling_context_url_alter and implement your own rules for overriding the context URL. Only you know which page to choose to render the block as context.
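Something like this sketch: the hook name comes from the connector, but the parameter list here is an assumption, so check the module's .api.php file for the exact signature:

```php
<?php

use Drupal\Core\Url;

/**
 * Implements hook_tmgmt_smartling_context_url_alter().
 *
 * A sketch only: parameters are assumed, verify them against
 * tmgmt_smartling's .api.php.
 */
function mymodule_tmgmt_smartling_context_url_alter(&$url, $job_item) {
  // For inline blocks, render context on a node that is known to place
  // the block. The node id here is purely illustrative.
  if ($job_item->getItemType() === 'block_content') {
    $url = Url::fromRoute('entity.node.canonical', ['node' => 42], ['absolute' => TRUE])
      ->toString();
  }
}
```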

I wouldn't consider this an issue. It doesn't affect your translation process in any way; you just get error logs, which is expected because you run prod data processing on a completely different environment.

Looking at the issue from another angle: say we implement this change. Then you would get a URL from prod, "switch" the domain to your local one, grab this context, and upload it to a completely different Smartling project (because you need different projects per env). The annoying error log would disappear, but the whole process would make no sense.

What is your process for submitting content? Context is generated when you submit content from Drupal to Smartling. Why is it important to submit content from prod/live but submit context from a local/dev env? I just want to understand your case.

If you need to test context, just submit content from your local/dev env; it will generate a link to your env. If your local/dev env points to the same Smartling projects as prod/live (and if your Smartling bucket names, shown at the bottom of the provider config page, are the same on those envs, which is likely since you copy the database), then you have two problems:
1. Files with different content but the same names get pushed to Smartling, overriding each other's content (the Smartling project may contain content from the local/dev env mixed with prod/live content).
2. Your local/dev instance may run cron and download translated content, so prod/live never downloads it because it was already downloaded on the other instance.

Make sure your local/dev env uses a different Smartling project. I mention this because your flow with context sounds like this case.

> If we grab the DB from the prod site and pull it down to localdev, the absolute URL won't work. Is there any way to generate the absolute URLs when processing the queue item?

We could rewrite URL generation and do it while processing the queue item, but to me that sounds like a misuse of the connector. I don't see a reason/benefit in doing this, because submitting content and its related context must happen from the same env.

Did you try enabling the "Context silent user authentication" option? If enabled, it skips calling the user login/logout hooks when impersonating. It might be that some SAML/SSO module hooks into those actions and overrides the session.

Hi @heddn.

Thanks. I have a few questions though:
1. You've mentioned (in emails or in a support ticket) that it doesn't work only for some specific users. Does that mean that on your Drupal instance you have SAML/SSO modules installed that serve only some users, so context user impersonation fails only in those cases? I would like to reproduce and debug the issue before merging this branch; I'm not sure where exactly the session gets dropped/overridden.
2. From our logs I see you've tried the context debug module on your instances (dev and prod, I believe). On prod, a request to `/global/en/admin/content/block/N` gives a 403 response code with a `1006*****` response body, which looks Cloudflare firewall-related. It sounds like the request for grabbing context didn't even hit the Drupal instance. I just want to clarify whether this is related or whether you reconfigured something on your side. Otherwise, it's not clear how it works for one Drupal user and not for another.

Btw, this "switcher" is based on the masquerade module, so masquerade might be affected by the issue as well, I believe.

Hi, sorry for the delay.

> Not quite, i am looking to track that say "Text 1" in that test is the same even if "Text 2" were added as a second value in the multivalue field and even if the "Text 1" value switched position from delta 0 to delta 1.

So, basically, you want a rule like "values are the same if the values themselves match, regardless of their positions in multi-value fields"?

Sounds like you could implement an extended field comparator service and implement any rule you need by overriding the DefaultFieldComparator::compareFieldValues method: sort the values however you need and only then run compareFieldValues from the parent, something like the sketch below. Unless I totally misunderstood the whole idea.
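Here is roughly what I mean; the namespace and exact signature are assumptions, so verify them against DefaultFieldComparator in your installed changed_fields version:

```php
<?php

namespace Drupal\mymodule;

use Drupal\changed_fields\Plugin\FieldComparator\DefaultFieldComparator;

/**
 * Treats multi-value fields as unchanged when only item order differs.
 */
class OrderInsensitiveFieldComparator extends DefaultFieldComparator {

  public function compareFieldValues($old_value, $new_value) {
    // Normalize item order first, so moving "Text 1" from delta 0 to
    // delta 1 is not reported as a change.
    sort($old_value);
    sort($new_value);
    return parent::compareFieldValues($old_value, $new_value);
  }

}
```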

> argh, to build on it getComparableProperties() in Drupal\changed_fields\FieldComparatorPluginManager available through the plugin.manager.changed_fields.field_comparator would have to be a public function or otherwise available.

getComparableProperties is only used to define which properties to look for in a given field definition. If you need to add support for a custom field, implement getDefaultComparableProperties, check the field type there, and return an array of custom properties. If you need to alter/extend the list of comparable properties of supported field types, you could implement the extendComparableProperties method in your extended field comparator and do whatever you need there; it's called from getComparableProperties.
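A sketch of both options; again, the signatures are assumptions based on the description above, so check DefaultFieldComparator in your version:

```php
<?php

namespace Drupal\mymodule;

use Drupal\changed_fields\Plugin\FieldComparator\DefaultFieldComparator;
use Drupal\Core\Field\FieldDefinitionInterface;

/**
 * Adds and extends comparable properties for some field types.
 */
class MyFieldComparator extends DefaultFieldComparator {

  protected function getDefaultComparableProperties(FieldDefinitionInterface $definition) {
    // Support a hypothetical custom field type.
    if ($definition->getType() === 'my_custom_field') {
      return ['value', 'format'];
    }
    return parent::getDefaultComparableProperties($definition);
  }

  protected function extendComparableProperties(FieldDefinitionInterface $definition, array $properties) {
    // Additionally compare the alt text of image fields.
    if ($definition->getType() === 'image') {
      $properties[] = 'alt';
    }
    return $properties;
  }

}
```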

This "merge train" thing is strange: the build starts before the merge and again after the merge, even though a build with exactly the same diff has already passed. Also, in this case the credit was not given to your account; not sure why.

Oh, nice catch. This only happens when the Smartling provider is not configured, I believe. I tested everything but this case.

Hi.

Interesting. Just to confirm: is this because you have the `cl_server` module installed, which decorates the file generator service?

The warning message was moved to the status page (implemented in hook_requirements) in https://www.drupal.org/project/tmgmt_smartling/issues/3450538 ("Add possibility to disable cron / queue warnings", fixed).

Closing with `works as designed` status, since the Smartling module needs a cron handler to process the download queue.

Got rid of the request subscriber that showed cron-related warning messages on all pages for users with the "see smartling messages" permission, and moved the messages to a requirements hook that renders them on the status page, the proper place for such messages, I believe.
Also moved the hook_requirements implementation from the .module file to the .install file.

thanks

Hi bceyssens,

This happens inside Smartling's api-sdk-php lib during the upload-and-match context sync call. It means the Smartling context API processes context asynchronously; we poll the async process for 15 seconds and, if it hasn't completed, throw an exception.

But in your case I do see a lot of `Async operation is not completed after 15s seconds.` log records (from the `94e2dcf74` Smartling project), yet I don't see corresponding HTTP requests to the context API for polling async processes. For example, for `2024-06-26 08:08:43.820 - 2024-06-26 08:12:03.370` I see 7 such errors from the `reynaers-acc` host, but no `/context-api/v2/projects/94e2dcf74/processes/...` HTTP requests for that period. Could you please confirm whether you see those requests from your side? Are they blocked?

Hi @vipul tulse, this is by design. The method you've mentioned retrieves page HTML for the visual context, and the CURLOPT_SSL_VERIFYHOST option is only applied to the curl options when context_skip_host_verifying is enabled in the connector settings.

Moreover, we explicitly tell the user when to enable this setting and warn them to do so only on dev envs:

> If checked, curl won't verify host during retrieving context (CURLOPT_SSL_VERIFYHOST = 0). Use only for developing and testing purposes on NON PRODUCTION environments.

Hi guys, thank you for your work here, but the module already supports D10 (the 8.x-9.x branch). Closing this for now.

As for == vs >=: both have pros and cons. The == logic is straightforward but too verbose. On the other hand, the >= logic is flexible enough (you can ignore a bunch of logs with only one rule defined) but it's too flexible :) (there might be collisions between rules, though honestly that's not a real problem, as we could implement "first/last matching rule wins" logic). So, both options are good enough; it's just a question of taste. Personally, I would prefer the == logic, as it's simple, straightforward, and doesn't require handling collisions or extra considerations for severity levels.

> This is no longer the case. Since you are changing the visibility scope in LoggerChannelTestable.

Oh, thank you. Fixed.

> Another thing I notice is that there is no use of the Splat operator in core yet. There is a relevant issue here: #2551661: Speed up Drupal with splat (variadics) which is quite old and the reasons that blocked the process are no longer valid today.

According to this post, PHP 7 is required since Drupal 8.7.x, so the splat operator is no longer an issue in core. It should be safe to use it in this patch, since the patch is against 8.7.x.

Hi, I know you were working on Drupal 8.6.0; it is great to see the new release!

I just want to draw some attention to this ticket. I suggest marking it RTBC in order to gather more reviews/attention here.

> syslog and syslog_test module are no longer required as far I can see.

No, they are required, as we test cases with two loggers present. Take a look at the full_match assertions, for example: we ignore messages from the LoggingTest logger for the debug level and for the channel_a channel, and that's why there is the `'debug_messages_count' => 3` assertion (two messages for channel_b and one for channel_a).

> There are a few coding standards warnings reported too:

Done. Also, it's odd that the test bot doesn't fail the build when there are coding standards issues. A developer sees a green build; how would they know there are issues? I didn't even think to open the build console while it was green.

I don't really understand why we need this additional test. It seems we've covered all the cases in the Drupal\Tests\system\Kernel\Logging\LoggingTest test.

1. Got it. Done.
2. I hadn't noticed this method. Done.
3. Done.
4. Done.
5. Oh, that was exactly why I didn't use data providers: I didn't realize we can set meaningful keys for the cases array :) Now I see the full power of data providers :D Done. (A sketch of the trick is below.)
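For anyone else reading, named data sets look like this. Everything here is illustrative, not the actual test:

```php
<?php

namespace Drupal\Tests\system\Unit\Logging;

use Drupal\Tests\UnitTestCase;

/**
 * Illustrative only: shows named data sets, not the real test class.
 */
class NamedDataSetsExampleTest extends UnitTestCase {

  /**
   * The array keys become the reported data set names on failure.
   */
  public function ignoredLogsProvider(): array {
    return [
      'wildcard level ignores every severity' => ['*', TRUE],
      'exact level ignores only itself' => ['error', FALSE],
    ];
  }

  /**
   * @dataProvider ignoredLogsProvider
   */
  public function testIsIgnored(string $level, bool $expected): void {
    // Trivial stand-in for the real isIgnoredLog() logic.
    $is_ignored = ($level === '*');
    $this->assertSame($expected, $is_ignored);
  }

}
```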

I didn't find any occurrences of array[string]bool in core, and I'm not sure it's the correct way to express an array with string keys and bool values.

Thank you for the review!

> This is still not addressed. I think we should use the LogLevel::EMERGENCY, LogLevel::ERROR, etc instead.

Done.

> I think we could add a check to verify there something defined in $this->ignoreLogs.

Done.

> the code uses the numeric RfcLogLevel::* constants, so we can not really use the strings or the LogLevel::*

Internally the code uses the numeric RfcLogLevel constants, but in settings the user defines rules with the LogLevel string constants. I'm not sure why we can't use the string constants here.

> Also testLogRecursionProtection() should probably use the RfcLogLevel::* constants instead of hardcoding 0 to 7, I guess.

Done.

> since this is new code for 8.7 which won't be backported to 8.6, AIUI we can require PHP7 syntax, probably taking advantage of the ?? null-coalescing syntax instead of using isset() or empty().

Done.
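For example (variable names illustrative):

```php
// Before, PHP 5 style:
$rules = isset($settings['ignore_logs']) ? $settings['ignore_logs'] : [];

// After, PHP 7 null coalescing:
$rules = $settings['ignore_logs'] ?? [];
```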

> Since the constructor is called 2 or 3 times in each execution, it could be interesting to move the code that reads the settings from the constructor outside. For example directly into isIgnoredLog. That the first time checks if the settings are initialized, and if they are not, load them from there.

Done.
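Roughly this shape; a sketch of the lazy initialization only, not the actual patch, with property and signature names illustrative:

```php
<?php

use Drupal\Core\Site\Settings;

// Inside the patched LoggerChannel:

/**
 * Decides whether the current log record should be ignored.
 */
protected function isIgnoredLog($level, $message, array $context = []) {
  // Read the rules lazily on first use instead of in the constructor,
  // which runs several times per request.
  if ($this->ignoreLogs === NULL) {
    $this->ignoreLogs = Settings::get('ignore_logs', []);
  }
  // ... match $level / channel / logger against $this->ignoreLogs ...
  return FALSE;
}
```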

Hi, @dagmar and @fgm.

Thanks for the feedback and for fixing the sniffer issues. What are the next steps for this patch? Are there other stages to go through before it gets into the codebase?

You're right. I think using isset() instead of in_array() will keep the performance impact small. Additionally, not evaluating conditions in advance will help as well.

I've rewritten the patch to address all the requirements/suggestions/review issues (I believe :)):

  • Switched the ">" logic to "=" logic
  • Rewrote the integration tests
  • Added unit tests (a data provider is used because this approach seems to be the accepted one throughout the unit tests)
  • Renamed the setting to the suggested ignore_logs
  • Added a new method, LoggerChannel::isIgnoredLog(), which returns TRUE when the current log record should be ignored

But @dagmar, if we choose

$settings['ignore_logs'] = [
  [
    'channel' => 'page not found',
    'level' => '*',
    'logger' => 'Drupal\dblog\Logger\DbLog',
  ],
  [
    'channel' => 'php',
    'level' => 'info',
    'logger' => '*',
  ],
];

then we have to admit that we would need to iterate through all the settings on every log() call.

Mm... ok, agreed, let's change the > logic to = logic. By "all options" do you mean something like this: "channel->level->*", "channel->*->loggers", "*->level->loggers", "channel->*->*", "*->*->loggers", "*->level->*" and "*->*->*" (probably just "*")? I just want to make sure we've discussed everything before implementing. A sketch of how the lookups could stay cheap is below.
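Something like this inside isIgnoredLog(), with the key format and names illustrative only:

```php
<?php

// Build every channel/level/logger combination and test each against a
// prebuilt keyed map, so every check is a constant-time isset() instead
// of iterating over all the configured rules.
$combinations = [
  "$channel|$level|$logger",
  "$channel|$level|*",
  "$channel|*|$logger",
  "*|$level|$logger",
  "$channel|*|*",
  "*|$level|*",
  "*|*|$logger",
  "*|*|*",
];
foreach ($combinations as $key) {
  if (isset($this->ignoreLogs[$key])) {
    return TRUE;
  }
}
return FALSE;
```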

Hi @dagmar

I improved the description and fixed the linter-reported issues. As for log_filters, I would like to suggest logs_suppression instead. What do you think?

I avoided using a data provider intentionally, because I think it has downsides. What if you use a data provider with many input sets and the test fails? The only info you get is "Test failed on data set N" (or something close to that), which is bad because you don't know what that data set means or what it tests. With separate tests you see "thatLongTestNameMethodWhichTestsSomething failed", and that error message gives some context about the failure. Moreover, the test would remain just as big with data providers, because you would still need to describe your inputs and outputs in one big array.

I might be 99% wrong about all of this; I just wanted to hear your take. If using data providers is common practice, then of course I will rewrite the tests.

Refactored:

  • The filtering mapping is now a setting in settings.php instead of a config object
  • Added tests

Since we don't use a config object here, maybe it would be better to patch the existing LoggerChannel instead of defining a new one and overriding the service definition? I mean, LoggerChannel no longer depends on a module's config.

Hi, @dagmar

> In my opinion, I think is better to expose this as a config setting in settings.php.

Oh, OK. Even better to have $filtering_mapping defined as a PHP array.

> This could be refactored to exit the function sooner, instead of having so many nested foreach and ifs.

Yes, this looks terrible for now. I've started writing unit tests and found some issues with the current implementation. I will work on this.

> Here you could check if the settings are defined to alter the logger.

Got it, thanks.

> Finally, there is one way to test the syslog functionality. Take a look to #2874069: Make syslog testable to know how initial tests were implemented.

Yes, I've looked at the syslog tests and took them as a base for mine.

Hi, @fgm

> Regarding testing, #2874069: Make syslog testable is a start, but a good next step would be to have some SyslogInterface for this wrapping implementation, one level up from LoggerInterface, allowing a simpler spy mechanism than the existing kerneltest which needs to touch the filesystem, probably going one level down to an actual in-memory unit test. This could be done as part of the test for this patch, and possibly taken from there back to the syslog tests.

I will think about this, thank you. Probably I will implement tests in the syslog style first and then refactor them as you suggest.

Thank you, guys, for the feedback. I'll continue working on this.

Hi. I've faced the same need and found this thread; it's pretty informative and provides some good ideas. But @fgm, it seems your idea of re-assigning the parent of the logger channels won't work, because LoggerChannelFactory::get() only creates instances of the LoggerChannel class (`$instance = new LoggerChannel($channel);`).

So, to get this working, you need to redefine the whole chain: logger factory -> logger channel. I suggest an even easier approach: don't change the parents of all the channels, but change the class of logger.channel_base instead, along the lines of the sketch below.
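For example; FilteringLoggerChannel is hypothetical here, and as noted above, channels created at runtime by the factory's get() would still need the factory redefined:

```php
<?php

namespace Drupal\mymodule;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;

/**
 * Makes every channel defined with parent: logger.channel_base use the
 * filtering channel class.
 */
class MymoduleServiceProvider extends ServiceProviderBase {

  public function alter(ContainerBuilder $container) {
    // FilteringLoggerChannel would extend LoggerChannel and apply the
    // filtering mapping in log().
    $container->getDefinition('logger.channel_base')
      ->setClass(FilteringLoggerChannel::class);
  }

}
```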

I didn't find a better approach than patching the system module and implementing the logger channel there. It could be done directly in LoggerChannel, but I found it weird for a class from Drupal\Core (lib) to know anything about a module's config.

Please take a look at this patch. With it applied, you can set up a filtering mapping per channel and even per logger. For example:

my_channel:
  warning:
    - Drupal\dblog\Logger\DbLog
  debug:
    - Drupal\syslog\Logger\SysLog

In this example, DbLog will log records with level warning and higher, while SysLog will log every record. These rules apply only to the my_channel channel.
