Add option to TextSplitter to return individual sentences. Add general SaT model support. #93
Conversation
jrobble
left a comment
@jrobble reviewed 4 of 4 files at r1, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @hhuangMITRE)
a discussion (no related file):
Mention SaT here:
# To hold spaCy, WtP, and other potential sentence detection models in cache
Mention SaT here:
log.warning(
"Invalid model setting '%s'. Only `cpu` and `cuda` "
"(or `gpu`) WtP model options available at this time. "
"Defaulting to `cpu` mode.", model_setting)
Mention SaT in install.sh and LICENSE.
detection/nlp_text_splitter/nlp_text_splitter/__init__.py line 83 at r1 (raw file):
self._update_wtp_model(model_name, model_setting, default_lang)
self.split = self._split_wtp
log.info("Setup WtP model: %s", model_name)
Generally, 'f' strings are preferred since they keep the variable name inline with the text. It makes things easier to read.
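For illustration, a minimal comparison (the variable name and message here are made up, not taken from the PR):

```python
model_name = "sat-3l"

# %-style formatting keeps the value separate from the text:
msg_percent = "Setup WtP model: %s" % model_name

# f-strings keep the variable name inline, which reads more naturally:
msg_fstring = f"Setup WtP model: {model_name}"

assert msg_percent == msg_fstring
```

One caveat worth noting: for `log.info(...)` calls specifically, passing the format string and arguments separately defers formatting until the record is actually emitted, so some linters prefer the %-style form inside logging calls.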
detection/nlp_text_splitter/tests/test_text_splitter.py line 68 at r1 (raw file):
self.assertEqual(2, len(actual))
self.assertEqual('Hello, what is your name? ', actual[0])
self.assertEqual('My name is John.', actual[1])
These asserts are the same as in the test_sat_basic_sentence_split test above. I would feel better if we could prove that the different splitting behaviors return different results.
detection/nlp_text_splitter/tests/test_text_splitter.py line 104 at r1 (raw file):
500, len, self.sat_model, split_mode=SplitMode.SENTENCE))
Formatting nitpick: Move split_mode to next line.
detection/nlp_text_splitter/tests/test_text_splitter.py line 106 at r1 (raw file):
self.sat_model, split_mode=SplitMode.SENTENCE))
self.assertEqual(input_text, ''.join(actual))
self.assertEqual(2, len(actual))
These asserts are the same as the ones above. I would feel better if we could prove that the different splitting behaviors return different results.
hhuangMITRE
left a comment
Reviewable status: 0 of 8 files reviewed, 4 unresolved discussions (waiting on @hhuangMITRE and @jrobble)
a discussion (no related file):
Previously, jrobble (Jeff Robble) wrote…
Mention SaT here:
# To hold spaCy, WtP, and other potential sentence detection models in cache

Mention SaT here:

log.warning(
    "Invalid model setting '%s'. Only `cpu` and `cuda` "
    "(or `gpu`) WtP model options available at this time. "
    "Defaulting to `cpu` mode.", model_setting)

Mention SaT in install.sh and LICENSE.
Done.
detection/nlp_text_splitter/nlp_text_splitter/__init__.py line 83 at r1 (raw file):
Previously, jrobble (Jeff Robble) wrote…
Generally, 'f' strings are preferred since they keep the variable name inline with the text. It makes things easier to read.
Updated, thanks!
detection/nlp_text_splitter/tests/test_text_splitter.py line 68 at r1 (raw file):
Previously, jrobble (Jeff Robble) wrote…
These asserts are the same as in the test_sat_basic_sentence_split test above. I would feel better if we could prove that the different splitting behaviors return different results.
I've added the new test cases. There are also some new differences in translation, which I've added to the other PR.
detection/nlp_text_splitter/tests/test_text_splitter.py line 104 at r1 (raw file):
Previously, jrobble (Jeff Robble) wrote…
Formatting nitpick: Move split_mode to the next line.
Done!
detection/nlp_text_splitter/tests/test_text_splitter.py line 106 at r1 (raw file):
Previously, jrobble (Jeff Robble) wrote…
These asserts are the same as the ones above. I would feel better if we could prove that the different splitting behaviors return different results.
I've tweaked the test; right now SaT seems more sensitive to splitting.
…sentence-mode-sat-model-update
Issues:
Related PRs:
Summary:
This PR updates nlp_text_splitter to add SaT (Segment any Text, https://github.com/segment-any-text/wtpsplit) model support, newline handling, sentence splitting options, and preferred-limit chunking.
Before this PR, only WtP and spaCy models were available for text segmentation. There was also a single general segmentation strategy: estimate a sentence break near a hard size limit, then walk back to the nearest sentence boundary to generate the largest possible chunk. This works well for components with a large character or token text limit; however, it may create issues for components where a smaller text limit is needed (where possible).
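That walk-back strategy can be sketched roughly as follows (the function name, the representation of sentence boundaries as a sorted list of character end offsets, and the character-based limit are all illustrative assumptions, not the actual PR code):

```python
def chunk_text(text, boundaries, hard_limit):
    """Split text into chunks no longer than hard_limit characters,
    walking back from the limit to the nearest sentence boundary.
    `boundaries` is a sorted list of offsets where sentences end."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + hard_limit, len(text))
        if end < len(text):
            # Walk back to the last sentence boundary inside the window;
            # if there is none, fall through to a hard mid-word cut.
            candidates = [b for b in boundaries if start < b <= end]
            if candidates:
                end = candidates[-1]
        chunks.append(text[start:end])
        start = end
    return chunks
```

With a 30-character hard limit and boundaries after each sentence, `"Hello, what is your name? My name is John."` comes back as two chunks that rejoin to the original text.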
This update adds:
SaT model support.
Newline normalization so line breaks can be treated as spaces, removed, or preserved depending on the estimated language/script.
New options for chunk vs individual sentence splitting.
A new preferred/soft limit so users can try to generate smaller chunks while still respecting a hard text limit.
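The preferred/soft limit idea could look something like the sketch below. This is one plausible interpretation, not the PR's actual implementation: I assume sentence boundaries are given as end offsets, that a boundary under the soft limit is preferred, and that otherwise the first boundary under the hard limit (i.e., the smallest chunk that still ends on a sentence) is taken before resorting to a hard cut.

```python
def chunk_with_soft_limit(text, boundaries, soft_limit, hard_limit):
    """Chunk text preferring chunks under soft_limit characters,
    while never exceeding hard_limit. `boundaries` is a sorted list
    of offsets where sentences end."""
    chunks, start = [], 0
    while start < len(text):
        soft_end = min(start + soft_limit, len(text))
        hard_end = min(start + hard_limit, len(text))
        within_soft = [b for b in boundaries if start < b <= soft_end]
        if within_soft:
            end = within_soft[-1]   # largest chunk under the soft limit
        else:
            within_hard = [b for b in boundaries if start < b <= hard_end]
            # No boundary under the soft limit: take the first boundary
            # under the hard limit, or fall back to a hard cut.
            end = within_hard[0] if within_hard else hard_end
        chunks.append(text[start:end])
        start = end
    return chunks
```

The point of the two limits is that a component can ask for small chunks (soft) without ever receiving one that violates its real constraint (hard).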
Along the way, an improved breakpoint alignment function was added so sentence boundaries are computed against the original text, as SaT and WtP appeared to remove extraneous whitespace while splitting. Additional changes were made to handle edge cases for empty outputs, zero-length text, and mid-word splits.
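A rough sketch of the alignment idea, assuming the splitter only strips surrounding whitespace (the function name and offset representation are illustrative, not the actual PR code):

```python
def align_sentences(original, sentences):
    """Map each (possibly whitespace-stripped) sentence returned by a
    splitter back to its end offset in the original text, so chunk
    boundaries can be computed against the original characters."""
    offsets = []
    cursor = 0
    for sent in sentences:
        sent = sent.strip()
        # Locate the stripped sentence in the original, starting from
        # where the previous sentence ended.
        idx = original.index(sent, cursor)
        cursor = idx + len(sent)
        offsets.append(cursor)
    return offsets
```

Computing boundaries this way means any whitespace the model dropped (e.g., a double space between sentences) stays attached to the original text rather than silently disappearing from the chunks.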
Finally, support for Flores/NLLB language codes in WtpLanguageSettings is being transferred over to this PR.