Merged
35 changes: 9 additions & 26 deletions about/ols-about-openshift-lightspeed.adoc
@@ -17,54 +17,37 @@ include::modules/ols-openshift-requirements.adoc[leveloffset=+1]

include::modules/ols-minimum-cluster-resource-requirements.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/remote-health-monitoring-with-connected-clusters#about-remote-health-monitoring[About remote health monitoring]

include::modules/ols-large-language-model-requirements.adoc[leveloffset=+1]

//Xavier wanted to remove vLLM until further testing is performed.
//include::modules/ols-about-openshift-ai-vllm.adoc[leveloffset=+2]

include::modules/ols-openshift-lightspeed-fips-support.adoc[leveloffset=+1]

include::modules/ols-supported-platforms.adoc[leveloffset=+1]

include::modules/ols-about-running-openshift-lightspeed-in-disconnected-mode.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/disconnected_environments/mirroring-in-disconnected-environments[Mirroring in disconnected environments]

include::modules/ols-about-data-use.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../configure/ols-configuring-openshift-lightspeed.adoc#ols-filtering-and-redacting-information_ols-configuring-openshift-lightspeed[Filtering and redacting information]

include::modules/ols-about-data-telemetry-transcript-and-feedback-collection.adoc[leveloffset=+1]

include::modules/ols-remote-health-monitoring-overview.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
// OpenShift Lightspeed is a layered product that publishes using the standalone doc approach. Links to core OCP have to use links instead of xref.
* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/support/remote-health-monitoring-with-connected-clusters#about-remote-health-monitoring[About remote health monitoring]

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/support/remote-health-monitoring-with-connected-clusters#opting-out-remote-health-reporting[Opting out of remote health reporting]

include::modules/ols-transcript-collection-overview.adoc[leveloffset=+2]

include::modules/ols-feedback-collection-overview.adoc[leveloffset=+2]

include::modules/ols-disabling-data-collection-operator.adoc[leveloffset=+2]

//The following resources are for the assembly, and link within the standalone doc set
[id="additional-resources_{context}"]
== Additional resources

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/remote-health-monitoring-with-connected-clusters#about-remote-health-monitoring[About remote health monitoring]

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/disconnected_environments/mirroring-in-disconnected-environments[Mirroring in disconnected environments]

* xref:../configure/ols-configuring-openshift-lightspeed.adoc#ols-filtering-and-redacting-information_ols-configuring-openshift-lightspeed[Filtering and redacting information]

* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/support/remote-health-monitoring-with-connected-clusters#opting-out-remote-health-reporting[Opting out of remote health reporting]

* xref:../configure/ols-configuring-openshift-lightspeed.adoc#ols-creating-the-credentials-secret-using-web-console_ols-configuring-openshift-lightspeed[Creating the credential secret using the web console]

* xref:../configure/ols-configuring-openshift-lightspeed.adoc#ols-creating-the-credentials-secret-using-cli_ols-configuring-openshift-lightspeed[Creating the credential secret using the CLI]
@@ -1,14 +1,14 @@
// This module is used in the following assemblies:
// about/ols-about-openshift-lightspeed.adoc
// Module included in the following assemblies:
// * about/ols-about-openshift-lightspeed

:_mod-docs-content-type: CONCEPT
[id="ols-about-data-telemetry-transcript-and-feedback-collection_{context}"]
= About data, telemetry, transcript, and feedback collection

[role="_abstract"]

{ols-long} processes natural-language messages and cluster metadata through a redaction layer before transmitting the data to your configured large language model (LLM) provider.
{ols-long} sends your messages and cluster data through a redaction layer. It does this to clean the data before it goes to the LLM.

Do not enter any information into the {ols-long} user interface that you do not want sent to the LLM provider.
Do not enter anything into the {ols-long} interface that you want to keep private from the LLM.

The transcript recording data uses the {red-hat} Insights system back-end and is subject to the same access restrictions and other security policies described in link:https://www.redhat.com/en/technologies/management/insights/data-application-security[Red Hat Insights data and application security].
The transcript recording data uses the {red-hat} Insights system. It follows the same security rules and access limits as that system. You can learn more in the link:https://www.redhat.com/en/technologies/management/insights/data-application-security[Red Hat Insights security guide].
8 changes: 4 additions & 4 deletions modules/ols-about-data-use.adoc
@@ -7,10 +7,10 @@

[role="_abstract"]

{ols-long} enriches user chat messages with cluster and environment context before sending them to the configured large language model (LLM) provider for response generation.
{ols-long} adds cluster and environment details to your messages. Then, it sends this data to the large language model (LLM) to get an answer.

{ols-long} has limited capabilities to filter or redact the data and information you provide to the LLM. Do not enter data and information into the {ols-long} interface that you do not want to send to the LLM provider.
{ols-long} has limited ability to filter or hide the data you send to the LLM. Do not enter any information into the interface that you want to keep private from the LLM.

By sending transcripts or feedback to Red{nbsp}Hat you agree that Red{nbsp}Hat can use the data for quality assurance purposes. The transcript recording data uses the back-end of the Red{nbsp}Hat{nbsp}Insights system, and is subject to the same access restrictions and other security policies.
When you send transcripts or feedback to Red{nbsp}Hat, you agree that Red{nbsp}Hat can use the data to improve the Service. The transcript recording data uses the Red{nbsp}Hat{nbsp}Insights system. It follows the same security rules and access limits as that system.

You can email mailto:openshift-lightspeed-alpha@redhat.com[Red Hat] and request that your data be deleted.
You can email mailto:openshift-lightspeed-alpha@redhat.com[Red Hat] and ask us to delete your data.
4 changes: 2 additions & 2 deletions modules/ols-about-product-coverage.adoc
@@ -7,12 +7,12 @@

[role="_abstract"]

{ols-official} provides answers to questions by generating responses derived directly from official {ocp-product-title} documentation.
{ols-official} answers your questions by using information from official {ocp-product-title} documentation.

[id="product-exceptions_{context}"]
== Product exceptions

The {ocp-product-title} product documentation does not include information about all products in the {red-hat} portfolio. As a result, the {ols-official} Service uses the large language model (LLM) you provide to produce output for the following products or components:
The {ocp-product-title} documentation does not cover every {red-hat} product. Because of this, {ols-long} uses your large language model (LLM) to create answers for these products:

* {builds-title}
* {cluster-security-title}
@@ -7,11 +7,11 @@

[role="_abstract"]

{ols-long} supports operation in disconnected environments that do not have full internet access.
{ols-long} works in disconnected clusters without full internet access.

In a disconnected environment, you must mirror the required container images into the environment. For more information, see "Mirroring in disconnected environments" in the {ocp-product-title} product documentation.
In a disconnected cluster, you must mirror the container images you need. For more help, see "Mirroring in disconnected environments" in the {ocp-product-title} documentation.

[NOTE]
====
When you mirror the images in a disconnected environment, you must list the {ols-long} Operator when you use the `oc mirror` command.
When you mirror images in a disconnected cluster, list the {ols-long} Operator with the `oc mirror` command.
====
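
When you build the image set for `oc mirror`, the Operator package appears in the `operators` list of the `ImageSetConfiguration` file. The following is a minimal sketch, not a complete configuration: the catalog index version and the `lightspeed-operator` package name are assumptions, so verify both against your cluster version and the catalog contents before mirroring.

[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
    packages:
    - name: lightspeed-operator # Assumed package name; verify it in your catalog
----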
14 changes: 7 additions & 7 deletions modules/ols-disabling-data-collection-operator.adoc
@@ -9,7 +9,7 @@

Disable data collection for {ols-short} by updating the telemetry settings in the `OLSConfig` custom resource (CR) file settings.

By default, {ols-long} collects information about the questions you ask and the feedback you provide on the answers that the Service generates.
By default, {ols-long} collects information about the questions you ask and the feedback you offer on the answers that the Service generates.

.Prerequisites

@@ -28,9 +28,8 @@ By default, {ols-long} collects information about the questions you ask and the
$ oc edit olsconfig cluster
----

. Modify the `spec.ols.userDataCollection` field to disable data collection for the {ols-long} CR.
. Change the `spec.ols.userDataCollection` field to disable data collection for the {ols-long} CR.
+
.Example `OLSConfig` CR
[source,yaml]
----
apiVersion: ols.openshift.io/v1alpha1
@@ -40,10 +39,11 @@ metadata:
spec:
ols:
userDataCollection:
feedbackDisabled: true # <1>
transcriptsDisabled: true # <2>
feedbackDisabled: true
transcriptsDisabled: true
----
<1> Specify whether to disable the feedback collection.
<2> Specify whether to disable the transcript collection.
* `spec.ols.userDataCollection.feedbackDisabled` specifies if the Service collects your feedback.

* `spec.ols.userDataCollection.transcriptsDisabled` specifies if the Service collects your chat log transcripts.

. Save the file.
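
After you save the file, you can confirm that the change was applied. The following check is a sketch that assumes the field names shown in the example CR:

[source,terminal]
----
$ oc get olsconfig cluster -o jsonpath='{.spec.ols.userDataCollection}'
----

The output shows the current values of `feedbackDisabled` and `transcriptsDisabled`.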
8 changes: 4 additions & 4 deletions modules/ols-feedback-collection-overview.adoc
@@ -1,5 +1,5 @@
// This module is used in the following assemblies:
// about/ols-about-openshift-lightspeed.adoc
// Module included in the following assemblies:
// * lightspeed-docs-main/about/ols-about-openshift-lightspeed.adoc

:_mod-docs-content-type: CONCEPT
[id="ols-feedback-collection-overview_{context}"]
@@ -9,6 +9,6 @@

{ols-long} collects opt-in user feedback from the virtual assistant interface to analyze response accuracy and improve Service quality.

If you submit feedback, the feedback score (thumbs up or down), text feedback (if entered), your query, and the large language model (LLM) provider response are stored and sent to {red-hat} on the same schedule as transcript collection. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to {red-hat}. {red-hat} will not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
If you submit feedback, {red-hat} receives and stores your feedback score (thumbs up or down), text feedback, and query. {red-hat} also receives the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, {red-hat} receives only the filtered data. {red-hat} does not see the original data. {ols-long} hides your data before the system logs it.

Feedback is associated with the cluster from which it originated, and {red-hat} can attribute specific clusters to specific customer accounts. Feedback does not contain any information about which user submitted the feedback, and feedback cannot be tied to any individual user.
Your feedback stays associated with the cluster where it began. {red-hat} can match these clusters to specific customer accounts. This feedback does not contain any user details, and {red-hat} cannot link the feedback to any specific person.
26 changes: 13 additions & 13 deletions modules/ols-large-language-model-requirements.adoc
@@ -1,18 +1,18 @@
// This module is used in the following assemblies:

// * about/ols-about-openshift-lightspeed.adoc
// Module included in the following assemblies:
// * lightspeed-docs-main/about/ols-about-openshift-lightspeed.adoc

:_mod-docs-content-type: CONCEPT
[id="ols-large-language-model-requirements"]
[id="ols-large-language-model-requirements_{context}"]
= Large language model (LLM) requirements
:context: ols-large-language-model-requirements

[role="_abstract"]

{ols-long} supports Software as a Service (SaaS) and self-hosted large language model (LLM) providers that meet defined authentication requirements.

The LLM is a type of machine learning model that interprets and generates human-like language. When an LLM is used with a virtual assistant, the LLM can accurately interpret questions and provide helpful answers in a conversational manner. The {ols-long} Service must have access to an LLM provider.
The LLM is a type of machine learning model that interprets and generates human-like language. When you use the LLM with a virtual assistant, the LLM can accurately interpret questions and offer helpful answers in a conversational manner. The {ols-long} Service must have access to the LLM provider.

The Service does not provide an LLM for you, so you must configure the LLM prior to installing the {ols-long} Operator.
The Service does not provide the LLM for you, so you must configure the LLM before installing the {ols-long} Operator.

[NOTE]
====
@@ -32,7 +32,7 @@ If you want to self-host a model, you can use {rhoai} or {rhelai} as your model
[id="ibm-watsonx_{context}"]
== {watsonx}

To use {watsonx} with {ols-official}, you need an account with link:https://www.ibm.com/products/watsonx-ai[IBM Cloud watsonx]. For more information, see the link:https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=wx[Documentation for IBM watsonx as a Service].
To use {watsonx} with {ols-official}, you need an account with link:https://www.ibm.com/products/watsonx-ai[{ibm-cloud-title} watsonx]. For more information, see the link:https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=wx[Documentation for IBM watsonx as a Service].

[id="open-ai_{context}"]
== OpenAI
@@ -47,19 +47,19 @@ To use {azure-official} with {ols-official}, you need access to link:https://azu
[id="rhelai_{context}"]
== {rhelai}

{rhelai} is OpenAI API-compatible, and is configured in a similar manner as the OpenAI provider.
{rhelai} is OpenAI API-compatible, and you configure {rhelai} in a similar manner as the OpenAI provider.

You can configure {rhelai} as the LLM provider.

Because the {rhel} is in a different environment than the {ols-long} deployment, the model deployment must allow access using a secure connection. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/building_your_rhel_ai_environment/index#creating_secure_endpoint[Optional: Allowing access to a model from a secure endpoint].
Because the {rhel} is in a different environment than the {ols-long} deployment, the model deployment must allow access by using a secure connection. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/building_your_rhel_ai_environment/index#creating_secure_endpoint[Optional: Allowing access to a model from a secure endpoint].
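
Because the {rhelai} endpoint is OpenAI API-compatible, you can test the secure connection before you configure the Service. The following is a sketch only: the endpoint URL, certificate file, and token are placeholders that depend on your deployment.

[source,terminal]
----
$ curl --cacert ca.crt \
  -H "Authorization: Bearer <api-token>" \
  https://<model-endpoint>/v1/models
----

A successful response lists the models that the endpoint serves.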

{ols-long} version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting an LLM with {rhelai}, you can use vLLM Server as the inference engine.
{ols-long} version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting the LLM with {rhelai}, you can use vLLM Server as the inference engine.

[id="rhoai_{context}"]
== {rhoai}

{rhoai} is OpenAI API-compatible, and is configured largely the same as the OpenAI provider.
{rhoai} is OpenAI API-compatible, and you configure {rhoai} largely the same as the OpenAI provider.

You must deploy an LLM on the {rhoai} single-model serving platform that uses the virtual large language model (vLLM) runtime. If the model deployment resides in a different {ocp-short-name} environment than the {ols-long} deployment, include a route to expose the model deployment outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].
You must deploy the LLM on the {rhoai} single-model serving platform that uses the virtual large language model (vLLM) runtime. If the model deployment runs in a different {ocp-short-name} environment than the {ols-long} deployment, include a route to expose the model deployment outside the cluster. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#about-the-single-model-serving-platform_serving-large-models[About the single-model serving platform].

{ols-long} version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting an LLM with {rhoai}, you can use vLLM Server as the inference engine.
{ols-long} version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting the LLM with {rhoai}, you can use vLLM Server as the inference engine.
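
The provider connection details go into the `OLSConfig` custom resource. The following stanza is a hypothetical sketch: the `type` value, the secret name, and the field layout are assumptions, so confirm the exact schema in the {ols-long} configuration documentation before using it.

[source,yaml]
----
spec:
  llm:
    providers:
    - name: my_rhoai_provider # Hypothetical provider name
      type: rhoai_vllm       # Assumed provider type value
      url: https://<model-route>/v1
      credentialsSecretRef:
        name: credentials    # Assumed secret containing the API token
      models:
      - name: <model-name>
----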
9 changes: 4 additions & 5 deletions modules/ols-minimum-cluster-resource-requirements.adoc
@@ -1,14 +1,13 @@
// This module is used in the following assemblies:

// * about/ols-about-openshift-lightspeed.adoc
// Module included in the following assemblies:
// * lightspeed-docs-main/about/ols-about-openshift-lightspeed.adoc

:_mod-docs-content-type: REFERENCE
[id="ols-cluster-minimum-resource-requirements_{context}"]
= Cluster resource requirements

[role="_abstract"]

Ensure that {ols-long} has sufficient CPU, memory, and storage allocations to maintain Service performance and cluster stability without impacting other cluster workloads.
Ensure that {ols-long} has enough CPU, memory, and storage allocations to support Service performance and cluster stability without impacting other cluster workloads.

[cols="1,1,1,1"]
|===
@@ -19,7 +18,7 @@ Ensure that {ols-long} has sufficient CPU, memory, and storage allocations to ma
| 1 GB
| 4 Gi

| Postgres database
| PostgreSQL database
| 0.3
| 300 Mi
| 2 Gi
4 changes: 2 additions & 2 deletions modules/ols-openshift-lightspeed-fips-support.adoc
@@ -3,11 +3,11 @@

:_mod-docs-content-type: CONCEPT
[id="openshift-lightspeed-fips-support_{context}"]
= {ols-long} FIPS support
= {ols-long} Federal Information Processing Standards (FIPS) support

[role="_abstract"]

{ols-official} supports Federal Information Processing Standards (FIPS) and can be deployed on {ocp-short-name} clusters running in FIPS mode.
{ols-official} supports Federal Information Processing Standards (FIPS). You can run {ols-official} on {ocp-short-name} clusters that use FIPS mode.

FIPS is a set of publicly announced standards developed by the National Institute of Standards and Technology (NIST), a part of the U.S. Department of Commerce. The primary purpose of FIPS is to ensure the security and interoperability of computer systems used by U.S. federal government agencies and their associated contractors.

2 changes: 1 addition & 1 deletion modules/ols-openshift-lightspeed-overview.adoc
@@ -7,4 +7,4 @@

[role="_abstract"]

Use {ols-official} to troubleshoot and manage {ocp-short-name} clusters through a natural-language virtual assistant in the web console.
Use {ols-official} to troubleshoot and manage your {ocp-short-name} clusters. You interact with the virtual assistant in plain English directly in the {ocp-short-name} web console.