
OLS-2629: Vale checks for modules in about assembly#106325

Merged
rh-tokeefe merged 1 commit into openshift:lightspeed-docs-main from rh-tokeefe:OLS-2629
Feb 18, 2026

Conversation

@rh-tokeefe
Contributor

@rh-tokeefe rh-tokeefe commented Feb 10, 2026

Affects:
lightspeed-main
lightspeed-docs-1.0

PR must be CP'd back to the lightspeed-docs-1.0 branch.

Issue:
https://issues.redhat.com/browse/OLS-2629

Link to docs preview:

QE review:

  • QE has approved this change.

Additional information:

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 10, 2026
@openshift-ci-robot

openshift-ci-robot commented Feb 10, 2026

@rh-tokeefe: This pull request references OLS-2629 which is a valid jira issue.


In response to this:

Version(s):

Issue:

Link to docs preview:

QE review:

  • QE has approved this change.

Additional information:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Feb 10, 2026
@ocpdocs-previewbot

ocpdocs-previewbot commented Feb 10, 2026

🤖 Wed Feb 18 16:12:50 - Prow CI generated the docs preview:

https://106325--ocpdocs-pr.netlify.app/openshift-lightspeed/latest/about/ols-about-openshift-lightspeed.html

@openshift-ci openshift-ci bot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 11, 2026
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 12, 2026
@openshift-ci openshift-ci bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Feb 12, 2026
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 13, 2026
@openshift-ci-robot

openshift-ci-robot commented Feb 13, 2026

@rh-tokeefe: This pull request references OLS-2629 which is a valid jira issue.


In response to this:

Affects:
lightspeed-main
lightspeed-docs-1.0

PR must be CP'd back to the lightspeed-docs-1.0 branch.

Issue:
https://issues.redhat.com/browse/OLS-2629

Link to docs preview:

QE review:

  • QE has approved this change.

Additional information:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Contributor

@gabriel-rh gabriel-rh left a comment


LGTM - some minor comments

[role="_abstract"]

{ols-long} supports operation in disconnected environments that do not have full internet access.
{ols-long} works in disconnected clusters that do not without full internet access.
Contributor


missing word here

Contributor Author


done

@@ -43,7 +42,8 @@ spec:
feedbackDisabled: true # <1>
Contributor


I think you need to remove the # <1> etc here

Contributor Author


Thank you! Done
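For readers outside the diff view: the `# <1>` tokens in the hunk above are AsciiDoc callout markers, which only render correctly when a matching numbered callout list follows the code block; once the callout list is removed from the module, the markers must go too. A hedged sketch of the cleaned-up sample (only `spec:` and `feedbackDisabled` appear in the quoted hunk; any surrounding structure is illustrative):

```yaml
# Excerpt of the YAML sample as it might read after this fix.
# The AsciiDoc callout marker ("# <1>") is dropped because the module
# no longer carries a callout list to pair with it.
spec:
  feedbackDisabled: true
```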

{ols-long} collects opt-in user feedback from the virtual assistant interface to analyze response accuracy and improve Service quality.

If you submit feedback, the feedback score (thumbs up or down), text feedback (if entered), your query, and the large language model (LLM) provider response are stored and sent to {red-hat} on the same schedule as transcript collection. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to {red-hat}. {red-hat} will not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receive the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.
Contributor


Suggested change
If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receive the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.
If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receives the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.

Contributor Author


done

{ols-long} supports Software as a Service (SaaS) and self-hosted large language model (LLM) providers that meet defined authentication requirements.

The LLM is a type of machine learning model that interprets and generates human-like language. When an LLM is used with a virtual assistant, the LLM can accurately interpret questions and provide helpful answers in a conversational manner. The {ols-long} Service must have access to an LLM provider.
The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.
Contributor


Suggested change
The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.
The LLM is a type of machine learning model that interprets and generates human-like language. When you use an LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to an LLM provider.

Contributor Author


Understood. Discussed in Slack. Using ISG.

The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.

The Service does not provide an LLM for you, so you must configure the LLM prior to installing the {ols-long} Operator.
The Service does not provide a LLM for you, so you must configure the LLM before installing the {ols-long} Operator.
Contributor


a or an?

Contributor Author


done

@openshift-ci

openshift-ci bot commented Feb 18, 2026

@rh-tokeefe: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@rh-tokeefe rh-tokeefe merged commit 9703cdf into openshift:lightspeed-docs-main Feb 18, 2026
2 checks passed
@rh-tokeefe
Contributor Author

/cherrypick lightspeed-docs-1.0

@openshift-cherrypick-robot

@rh-tokeefe: new pull request created: #106908


In response to this:

/cherrypick lightspeed-docs-1.0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.


6 participants