Using an edge ngram filter highlights the whole word instead of ngrams

Created on 10 March 2023
Updated 6 July 2023

Problem/Motivation

Edge n-grams split a word into chunks of characters and are useful for autocomplete and similar features. Elasticsearch/OpenSearch provide both an edge n-gram filter and an edge n-gram tokenizer. The module's custom edge_ngram_analyzer currently uses an edge_ngram filter with a standard tokenizer. The standard tokenizer splits text up into words. When requesting highlights from OpenSearch (I'm using the bodybuilder.js library), the current setup returns a highlight spanning the entire word, even if the query matched only a chunk of it. For example, if the request is "Marou" the highlighted excerpt that is returned is "<em>Maroubra</em>" when it should be "<em>Marou</em>bra". I've attached screenshots of the current and expected behavior (after I made some changes).
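
For context, here is a minimal sketch of the kind of request involved (the index and field names are illustrative; the real query is built with bodybuilder.js):

GET /my-index/_search
{
  "query": {
    "match": { "title": "Marou" }
  },
  "highlight": {
    "fields": { "title": {} }
  }
}

With the current filter-based analyzer this returns "<em>Maroubra</em>"; with a tokenizer-based analyzer it returns "<em>Marou</em>bra".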

Steps to reproduce

N/A

Proposed resolution

Change EdgeNgram and Ngram to use tokenizers instead of filters. Right now I'm working around this using the AlterSettingsEvent, but it should be changed in the EdgeNgram plugin itself: instead of using a filter, we should use an edge_ngram tokenizer. I've attached screenshots of my current settings.
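
As a rough sketch, the tokenizer-based settings I'm applying via the event look like this (the analyzer/tokenizer names and gram sizes are from my setup, not necessarily what the plugin should generate):

PUT /my-index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "edge_ngram_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "edge_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "edge_ngram_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  }
}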

Remaining tasks

N/A

User interface changes

N/A

API changes

N/A

Data model changes

N/A

πŸ› Bug report
Status

Closed: won't fix

Version

2.0

Component

Code

Created by

achap πŸ‡¦πŸ‡Ί


Comments & Activities

  • Issue created by @achap
  • Status changed to Postponed: needs info
  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    Seems like a reasonable change. Are you able to submit a PR?

  • Assigned to achap
  • achap πŸ‡¦πŸ‡Ί

    Yeah I can do it when I get some free time.

  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    Can you take a look at ✨ Add a search_as_you_type data type (Fixed) to see if that is a better fit for your case?

  • achap πŸ‡¦πŸ‡Ί

    Thanks for putting that together. From what I'm seeing, it actually has the same issue as the original edge n-gram implementation, i.e. it's highlighting the entire word rather than the n-grams themselves. Not sure why that is, based on the docs: https://opensearch.org/docs/latest/search-plugins/searching-data/highlight/

  • @achap opened merge request.
  • Status changed to Needs review
  • achap πŸ‡¦πŸ‡Ί

    Switching from a filter to a tokenizer is working for me for edge n-grams. I guess the two plugins can co-exist?

  • Status changed to Postponed: needs info
  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    Yeah they can both exist.

    I wonder if you can get the same results with search_as_you_type by just playing with the highlighter options? https://www.elastic.co/guide/en/elasticsearch/reference/current/highligh...
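
    For example, the highlight section of the search request can take options along these lines (field name and values are illustrative):

    "highlight": {
      "fields": {
        "title": {
          "type": "unified",
          "fragment_size": 150,
          "number_of_fragments": 1
        }
      }
    }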

  • achap πŸ‡¦πŸ‡Ί

    I have previously played around with those settings on the edge n-gram field (before using a custom tokenizer) and they didn't appear to do anything, but I haven't had a chance to try them out yet for search_as_you_type. I imagine it's caused by the same issue, i.e. search_as_you_type is probably using the standard tokenizer, which splits tokens on word boundaries rather than individual characters.

    This SO question appears to solve it in the same way for the search_as_you_type implementation (with an edge n-gram tokenizer): https://stackoverflow.com/questions/59677406/how-do-i-get-elasticsearch-to-highlight-a-partial-word-from-a-search-as-you-type

    According to https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html#analysis-tokenizers, a tokenizer is, among other things, responsible for recording:

    • Order or position of each term (used for phrase and word proximity queries)
    • Start and end character offsets of the original word which the term represents (used for highlighting search snippets).
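
    For reference, the token listings below were produced with the _analyze API, along these lines (the index name is from my local setup, and the analyzer name is whichever custom analyzer the field uses):

    GET /my-index/_analyze
    {
      "analyzer": "edge_ngram_analyzer",
      "text": "This is a title"
    }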

    If I analyze a title field that is using the custom edge ngram tokenizer I get the following token information for the sentence "This is a title":

    {
      "tokens" : [
        {
          "token" : "t",
          "start_offset" : 0,
          "end_offset" : 1,
          "type" : "word",
          "position" : 0
        },
        {
          "token" : "th",
          "start_offset" : 0,
          "end_offset" : 2,
          "type" : "word",
          "position" : 1
        },
        {
          "token" : "thi",
          "start_offset" : 0,
          "end_offset" : 3,
          "type" : "word",
          "position" : 2
        },
        {
          "token" : "this",
          "start_offset" : 0,
          "end_offset" : 4,
          "type" : "word",
          "position" : 3
        },
        {
          "token" : "i",
          "start_offset" : 5,
          "end_offset" : 6,
          "type" : "word",
          "position" : 4
        },
        {
          "token" : "is",
          "start_offset" : 5,
          "end_offset" : 7,
          "type" : "word",
          "position" : 5
        },
        {
          "token" : "a",
          "start_offset" : 8,
          "end_offset" : 9,
          "type" : "word",
          "position" : 6
        },
        {
          "token" : "t",
          "start_offset" : 10,
          "end_offset" : 11,
          "type" : "word",
          "position" : 7
        },
        {
          "token" : "ti",
          "start_offset" : 10,
          "end_offset" : 12,
          "type" : "word",
          "position" : 8
        },
        {
          "token" : "tit",
          "start_offset" : 10,
          "end_offset" : 13,
          "type" : "word",
          "position" : 9
        },
        {
          "token" : "titl",
          "start_offset" : 10,
          "end_offset" : 14,
          "type" : "word",
          "position" : 10
        },
        {
          "token" : "title",
          "start_offset" : 10,
          "end_offset" : 15,
          "type" : "word",
          "position" : 11
        }
      ]
    }
    

    If I analyze a search_as_you_type field I get the following information:

    {
      "tokens" : [
        {
          "token" : "this",
          "start_offset" : 0,
          "end_offset" : 4,
          "type" : "<ALPHANUM>",
          "position" : 0
        },
        {
          "token" : "is",
          "start_offset" : 5,
          "end_offset" : 7,
          "type" : "<ALPHANUM>",
          "position" : 1
        },
        {
          "token" : "a",
          "start_offset" : 8,
          "end_offset" : 9,
          "type" : "<ALPHANUM>",
          "position" : 2
        },
        {
          "token" : "title",
          "start_offset" : 10,
          "end_offset" : 15,
          "type" : "<ALPHANUM>",
          "position" : 3
        }
      ]
    }
    

    So if the offset information is used for highlighting, that explains why only the edge_ngram tokenizer is working as expected.

  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    OK. Makes sense. Now we just need to decide whether highlighting whole words or tokens should be the default.

  • achap πŸ‡¦πŸ‡Ί

    Sorry for not replying, got a bit sidetracked :D I've been using this patch in production without issues for a while now. In terms of which one should be the default, something to consider is index size and performance. I don't have any hard data to back this up, but I'd guess tokenizing every character is a lot more expensive than tokenizing every word. So maybe, because of that and to preserve backwards compatibility, it makes sense to keep the filter as the default and add the tokenizer as a new plugin?

  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    I'm inclined to push people towards the search_as_you_type approach rather than getting into specific tokenizers and analyzers etc. If people want to build their own custom solutions they can do that.

  • achap πŸ‡¦πŸ‡Ί

    No worries I will move this patch into our own codebase :)

  • Status changed to Closed: won't fix
  • πŸ‡¦πŸ‡Ί kim.pepper πŸ„β€β™‚οΈ Sydney, Australia

    OK cool. I'll close this for now then.
