OLS-2629: Vale checks for modules in about assembly #106325
rh-tokeefe merged 1 commit into openshift:lightspeed-docs-main from
Conversation
@rh-tokeefe: This pull request references OLS-2629, which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
🤖 Wed Feb 18 16:12:50 - Prow CI generated the docs preview:
gabriel-rh left a comment:

LGTM - some minor comments
[role="_abstract"]
- {ols-long} supports operation in disconnected environments that do not have full internet access.
+ {ols-long} works in disconnected clusters that do not have full internet access.
@@ -43,7 +42,8 @@ spec:
  feedbackDisabled: true # <1>

I think you need to remove the `# <1>` etc here
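Applying that comment would simply drop the AsciiDoc callout marker from the YAML fragment. A minimal sketch of the result, based only on the two lines visible in the diff context (surrounding fields are elided, and the exact field path is assumed from that context):

```yaml
# Sketch only: the fragment from the diff with the "# <1>" callout
# marker removed, as the review comment suggests. Other fields elided.
spec:
  feedbackDisabled: true
```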
{ols-long} collects opt-in user feedback from the virtual assistant interface to analyze response accuracy and improve Service quality.
- If you submit feedback, the feedback score (thumbs up or down), text feedback (if entered), your query, and the large language model (LLM) provider response are stored and sent to {red-hat} on the same schedule as transcript collection. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to {red-hat}. {red-hat} will not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
+ If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receive the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.
Suggested change:
- If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receive the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.
+ If you submit feedback, {red-hat} stores and receives your feedback score, text, and query. {red-hat} also receives the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.
{ols-long} supports Software as a Service (SaaS) and self-hosted large language model (LLM) providers that meet defined authentication requirements.
- The LLM is a type of machine learning model that interprets and generates human-like language. When an LLM is used with a virtual assistant, the LLM can accurately interpret questions and provide helpful answers in a conversational manner. The {ols-long} Service must have access to an LLM provider.
+ The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.

Suggested change:
- The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.
+ The LLM is a type of machine learning model that interprets and generates human-like language. When you use an LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to an LLM provider.
Understood. Discussed in Slack. Using ISG.
The LLM is a type of machine learning model that interprets and generates human-like language. When you use a LLM with a virtual assistant, the LLM can accurately interpret questions and offers helpful answers in a conversational manner. The {ols-long} Service must have access to a LLM provider.
- The Service does not provide an LLM for you, so you must configure the LLM prior to installing the {ols-long} Operator.
+ The Service does not provide a LLM for you, so you must configure the LLM before installing the {ols-long} Operator.
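Because the Service does not ship an LLM, provider access is typically wired up with credentials before installing the Operator. A hedged sketch of what that could look like as a Kubernetes Secret; the secret name, namespace, and key below are illustrative assumptions, not taken from this PR or the product docs:

```yaml
# Illustrative only: the secret name, namespace, and key are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: credentials
  namespace: openshift-lightspeed
type: Opaque
stringData:
  apitoken: <your-llm-provider-api-key>  # replace with your provider's key
```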
@rh-tokeefe: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/cherrypick lightspeed-docs-1.0
@rh-tokeefe: new pull request created: #106908

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Affects:
lightspeed-main
lightspeed-docs-1.0
The PR must be cherry-picked back to the lightspeed-docs-1.0 branch.
Issue:
https://issues.redhat.com/browse/OLS-2629
Link to docs preview:
QE review:
Additional information: