Installation
The deployment of the Orchestrator involves multiple independent components, each with its own installation process. In an OpenShift cluster, the Red Hat Catalog provides an operator that can handle the installation for you. This installation process is modular, as the CRD exposes various flags that allow you to control which components to install. For vanilla Kubernetes, there is a Helm chart that installs the Orchestrator components.
The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows and Backstage, integrated with orchestrator plugins for workflow invocation, monitoring, and control.
In addition to the Orchestrator deployment, we offer several workflows (linked below) that can be deployed using their respective installation methods.
1 - RBAC
The RBAC policies for RHDH Orchestrator plugins v1.6 are listed here
2 - Disconnected Environment
To install the Orchestrator and its required components in a disconnected environment, you need to mirror the container images and NPM packages.
Please ensure the images are added using either ImageDigestMirrorSet or ImageTagMirrorSet, depending on the format of their values.
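As a minimal sketch, an ImageDigestMirrorSet pointing digest-referenced repositories at an internal mirror could look like the following. The mirror host (mirror.registry.example.com), the resource name, and the single entry shown are placeholders; add one entry per repository listed in the sections below:
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: orchestrator-image-mirrors   # placeholder name
spec:
  imageDigestMirrors:
    - source: registry.redhat.io/rhdh            # repository namespace to mirror
      mirrors:
        - mirror.registry.example.com/rhdh       # internal mirror location
EOF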
Images for a disconnected environment
The following images need to be added to the image registry:
Recommendation:
When fetching the list of required images, ensure that you are using the latest version of the bundle operator when appropriate. This helps avoid missing or outdated image references.
RHDH Operator:
registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:8729c21dc4b6e1339ed29bf87e2e2054c8802f401a029ebb1f397408f3656664
registry.redhat.io/rhdh/rhdh-operator-bundle@sha256:f2d99c68895d8e99cfd132c78bc39be5f2d860737f6e7d2520167404880ed865
registry.redhat.io/rhdh/rhdh-rhel9-operator@sha256:2f72c8706af43c0fbf8afc82d1925c77887aa7c3c3b1cb28f698bc4e4241ed4d
registry.redhat.io/rhel9/postgresql-15@sha256:ddf4827c9093a0ec93b5b4f4fd31b009c7811c38a406187400ab448579036c6c
OpenShift Serverless Operator:
registry.access.redhat.com/ubi8/nodejs-20-minimal@sha256:a2a7e399aaf09a48c28f40820da16709b62aee6f2bc703116b9345fab5830861
registry.access.redhat.com/ubi8/openjdk-21@sha256:441897a1f691c7d4b3a67bb3e0fea83e18352214264cb383fd057bbbd5ed863c
registry.access.redhat.com/ubi8/python-39@sha256:27e795fd6b1b77de70d1dc73a65e4c790650748a9cfda138fdbd194b3d6eea3d
registry.redhat.io/openshift-serverless-1/kn-backstage-plugins-eventmesh-rhel8@sha256:69b70200170a2d399ce143dca9aff5fede2d37a74040dc5ddf2206deadc9a33f
registry.redhat.io/openshift-serverless-1/kn-client-cli-artifacts-rhel8@sha256:d8e04e8d46ecec005504652b8cb4ead29452a6a89e47d568df0a24971240e9d9
registry.redhat.io/openshift-serverless-1/kn-client-kn-rhel8@sha256:989cb97cf626ae8637b32d519802250d208f466a5d6ff05d6bab105b978c976a
registry.redhat.io/openshift-serverless-1/kn-ekb-dispatcher-rhel8@sha256:4cb73eedb5c7841bff08ba5e55a48fde37ed9a0921fb88b381eaa7422fe2b00d
registry.redhat.io/openshift-serverless-1/kn-ekb-kafka-controller-rhel8@sha256:4fa519b1d4ef7f0219bae21febe73012ca261c12b3c08a9732088b7dfe37f65a
registry.redhat.io/openshift-serverless-1/kn-ekb-post-install-rhel8@sha256:402956ddf4f8da30aa234cf1d151b02f1bef29de604cad2441d65584117a3912
registry.redhat.io/openshift-serverless-1/kn-ekb-receiver-rhel8@sha256:bd48166615c132dd95a3792a6c610b1d977bad7c126a5532c47330ad3899e1ef
registry.redhat.io/openshift-serverless-1/kn-ekb-webhook-kafka-rhel8@sha256:7a4ffa3ae32dc289917b9a9c7c5ca251dc8586ba64719a126164656eecfeef14
registry.redhat.io/openshift-serverless-1/kn-eventing-apiserver-receive-adapter-rhel8@sha256:8ebbf3cd6a980896e03dc4818dede80856743c24a551d9c399f9b65c0816e2b3
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-controller-rhel8@sha256:b3c9b5db3db34f454a86a81b87843934a5b8e5960cf1fa446650a35b7c2b1778
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-dispatcher-rhel8@sha256:97adc8d4ab32770e00a2ae0096d45d9cd0c053a99292202bc24e6e9a60d92970
registry.redhat.io/openshift-serverless-1/kn-eventing-controller-rhel8@sha256:d6aff2e731bd8fa4f8a472ab2b6cb08103e0ba04ba353918484813864d89c082
registry.redhat.io/openshift-serverless-1/kn-eventing-filter-rhel8@sha256:e348715064edc914fd45071cb2e5e0e967bd26ce0542372a833a4ede78bf2822
registry.redhat.io/openshift-serverless-1/kn-eventing-ingress-rhel8@sha256:4519eba6fa2a6c6c10f0d97992c1e911ea1ce4cf00ac9025b9b334671b0d1e14
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-ddb-streams-source-rhel8@sha256:6e2272266a877c42350c6e92bd9d97e407160de8bc29c1ab472786409548f69d
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-s3-sink-rhel8@sha256:a6649ecd10ea7e3cca8d254a4a4a203d585cf1a485532fcb8f77053422ab0405
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-s3-source-rhel8@sha256:ac8fad706d8e47118572a5c99f669b337962920498fd4c31796e2e707f8ff11e
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sns-sink-rhel8@sha256:e0b8f3759beb0a01314c3e6f9a165d286ac7e0e5ed9533df30209f873d3e8787
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sqs-sink-rhel8@sha256:7fc8171b21af336f5c512d0f484e363d0d32f6f11211621f572827cf71bf4cf6
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sqs-source-rhel8@sha256:925b30dbcc13075348fa35ad8e28abad88b1e632e45ff76bcd40dcacf1eaf5c1
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-log-sink-rhel8@sha256:c4641ac936196229a6dc035194799d24493eaa45cc3e0b21d79a9704860d2028
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-timer-source-rhel8@sha256:3c054f0fbbeb1428b8d88927d6b219bf5ba8c744434ebc4013351ad6494540a3
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-transform-jsonata-rhel8@sha256:1451bcf5004a32a6a183836ebf3f5c0af397da6c8d176a36bcc750c726e1f408
registry.redhat.io/openshift-serverless-1/kn-eventing-istio-controller-rhel8@sha256:a39bc62f77a5303f286e43bc8c47bb0452ad6f44228efc3e8d54798b5aaeb4d6
registry.redhat.io/openshift-serverless-1/kn-eventing-jobsink-rhel8@sha256:2553b7302376ec89216934b783e9db8122693f74b428a41e94c5ec7ffc48a414
registry.redhat.io/openshift-serverless-1/kn-eventing-migrate-rhel8@sha256:6538bbb2a59b31e03d2e74e93db81b15647308812f2354d6868680d8b48a706c
registry.redhat.io/openshift-serverless-1/kn-eventing-mtchannel-broker-rhel8@sha256:65c7c98a65f09ff01ef875d505be153bad54213bf6c3210fecee238e45887b0b
registry.redhat.io/openshift-serverless-1/kn-eventing-mtping-rhel8@sha256:887f33ae9c7d8e52764b3af4a78898769cd52eb47e6e9913fe71d7e890d9816a
registry.redhat.io/openshift-serverless-1/kn-eventing-webhook-rhel8@sha256:4a2924e282a3612e00de4bfee5a8c963c9b65b962a4c7d72f999bd493026f92a
registry.redhat.io/openshift-serverless-1/kn-plugin-event-sender-rhel8@sha256:f7795088777ea84fc6180b81b6131962944e34918e2c06671033a1a572581773
registry.redhat.io/openshift-serverless-1/kn-plugin-func-func-util-rhel8@sha256:b0eb1f0b2f180afb207186267601665f2979c4cf21a0e434e7601123e3826716
registry.redhat.io/openshift-serverless-1/kn-serving-activator-rhel8@sha256:4cf5431ee984d7cb7e6a87504e151a31130e18f1448d1eca56fbc294ee3020e4
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-hpa-rhel8@sha256:f55ccbe4baf5829f98eb4fe7f802165d9209fe34dc8854a4eef70e471dcc1f97
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-rhel8@sha256:0e273607b7d8ee6e2e542e02a2f6cfb04c144d4b70cf1fbc58d1041e26d283ab
registry.redhat.io/openshift-serverless-1/kn-serving-controller-rhel8@sha256:fdf01c170795da9598007bddf34c74e4a2b6d4c10ac2a0ad7010f30c8eb84149
registry.redhat.io/openshift-serverless-1/kn-serving-queue-rhel8@sha256:be27abd8e30d0e9b0245d5d99800290231aa246931bdbf65a757eac49f7d9ad9
registry.redhat.io/openshift-serverless-1/kn-serving-storage-version-migration-rhel8@sha256:dafcf4ee3a5836f2744e786fafd2911264a6f043d7cf17bf8cdf7b75ab9b3ff6
registry.redhat.io/openshift-serverless-1/kn-serving-webhook-rhel8@sha256:6dfc77b18f5f03fbc918f33ab5916344b546085e3cd57632d71ddb73022b5222
registry.redhat.io/openshift-serverless-1/net-istio-controller-rhel8@sha256:06100687f4d3b193fe289b45046d11bf5439f296f0c9b1e62fe16ed8624ae251
registry.redhat.io/openshift-serverless-1/net-istio-webhook-rhel8@sha256:6939d0ec31480dbfa172783d2531f6497c38dd18b0cbcc1597413e7dd49a4d62
registry.redhat.io/openshift-serverless-1/net-kourier-kourier-rhel8@sha256:1b3f3be13ff69f520ace648989ae7053b26a872af3c2baade05adfc8513f2afd
registry.redhat.io/openshift-serverless-1/serverless-ingress-rhel8@sha256:db94f6b64ac3e618c0dad70032ad3e723122d2dd566dd4099cd5f81e3f28ae8e
registry.redhat.io/openshift-serverless-1/serverless-kn-operator-rhel8@sha256:dd788378be08cd5de076fe6fe7255ec21486697197f9390c0f8afc6be0901150
registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8@sha256:5b7aba60fba1db136c893ecdd34aa592f6079564457b6bff183218ea29f1aae1
registry.redhat.io/openshift-serverless-1/serverless-openshift-kn-rhel8-operator@sha256:9d89f51d04418acaeb36c3c0c9d6917ea29ca1d5b39df05a80da19318ea2c51c
registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:8ee57a44b1fc799fd8565eb339955773bd9beedcbf46f68628ee0bd4abf26515
registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:92a83b201580d29aec7ee85ccc2984576c4a364b849e504225888d6f1fb9b0d2
registry.redhat.io/rhel8/buildah@sha256:3d505d9c0f5d4cd5a4ec03b8d038656c6cdbdf5191e00ce6388f7e0e4d2f1b74
registry.redhat.io/openshift-serverless-1/serverless-operator-bundle@sha256:2d675f8bf31b0cfb64503ee72e082183b7b11979d65eb636fc83f4f3a25fa5d0
OpenShift Serverless Logic Operator:
gcr.io/kaniko-project/warmer:v1.9.0
gcr.io/kaniko-project/executor:v1.9.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-db-migrator-tool-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-rhel8-operator@sha256:8d3682448ebdac3aeabb2d23842b7e67a252b95f959c408af805037f9728fd3c
registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4564ca3dc5bac80d6faddaf94c817fbbc270698a9399d8a21ee1005d85ceda56
registry.redhat.io/openshift-serverless-1/logic-operator-bundle@sha256:5fff2717f7b08df2c90a2be7bfb36c27e13be188d23546497ed9ce266f1c03f4
Orchestrator Operator:
registry.redhat.io/rhdh-orchestrator-dev-preview-beta/controller-rhel9-operator@sha256:32e556fe067074d1f0ef0eb1f5483f62cc63d31a04c5fb2dcaea657a6471c081
registry.redhat.io/rhdh-orchestrator-dev-preview-beta/orchestrator-operator-bundle@sha256:266366306f3977ae74e1ce3d06856a709d888163bf7423b6b941adfeb8ded6c2
Note:
If you encounter issues pulling images due to an invalid GPG signature, consider updating the /etc/containers/policy.json
file to reference the appropriate beta GPG key.
For example, you can use:
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
This may be required when working with pre-release or beta images signed with a different key than the default.
NPM packages for a disconnected environment
The packages required for the Orchestrator can be downloaded as tgz files from:
Or by using NPM packages from https://npm.registry.redhat.com, e.g.:
npm pack "@redhat/backstage-plugin-orchestrator@1.6.0" --registry=https://npm.registry.redhat.com
npm pack "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.6.0" --registry=https://npm.registry.redhat.com
npm pack "@redhat/backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.6.0" --registry=https://npm.registry.redhat.com
npm pack "@redhat/backstage-plugin-orchestrator-form-widgets@1.6.0" --registry=https://npm.registry.redhat.com
For maintainers
The images on this page were listed using the following set of commands, based on each of the operator bundle images:
RHDH
The RHDH bundle version should match the one used by the Orchestrator operator, as referenced by the [rhdhSubscriptionStartingCSV attribute](https://github.com/rhdhorchestrator/orchestrator-go-operator/blob/main/internal/controller/rhdh/backstage.go#L31).
The list of images was obtained by:
bash <<'EOF'
set -euo pipefail
IMG="registry.redhat.io/rhdh/rhdh-operator-bundle:1.6.1"
DIR="local-manifests-rhdh"
CSV="$DIR/rhdh-operator.clusterserviceversion.yaml"
podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')
podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null
yq e '
.spec.install.spec.deployments[].spec.template.spec.containers[].image,
.spec.install.spec.deployments[].spec.template.spec.containers[].env[]
| select(.name | test("^RELATED_IMAGE_")).value
' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF
OpenShift Serverless
The list of images was obtained by:
IMG=registry.redhat.io/openshift-serverless-1/serverless-operator-bundle:1.36.0
podman run --rm --entrypoint bash "$IMG" -c "cat /manifests/serverless-operator.clusterserviceversion.yaml" | yq '.spec.relatedImages[].image' | sort | uniq
podman pull "$IMG"
podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}'
OpenShift Serverless Logic
podman create --name temp-container registry.redhat.io/openshift-serverless-1/logic-operator-bundle:1.36.0-8
podman cp temp-container:/manifests ./local-manifests-osl
podman rm temp-container
yq -r '.data."controllers_cfg.yaml" | from_yaml | .. | select(tag == "!!str") | select(test("^.*\\/.*:.*$"))' ./local-manifests-osl/logic-operator-rhel8-controllers-config_v1_configmap.yaml
yq -r '.. | select(has("image")) | .image' ./local-manifests-osl/logic-operator-rhel8.clusterserviceversion.yaml
Orchestrator
The list of images was obtained by:
bash <<'EOF'
set -euo pipefail
IMG="registry.redhat.io/rhdh-orchestrator-dev-preview-beta/orchestrator-operator-bundle:1.6-1751040440"
DIR="local-manifests-orchestrator"
CSV="$DIR/orchestrator-operator.clusterserviceversion.yaml"
podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')
podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null
yq e '.spec.install.spec.deployments[].spec.template.spec.containers[].image' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF
3 - Orchestrator CRD Versions
The following table shows the list of supported Orchestrator Operator versions with their compatible CRD version.
| Orchestrator Operator Version | CRD Version |
| --- | --- |
| 1.3 | v1alpha1 |
| 1.4 | v1alpha2 |
| 1.5 | v1alpha3 |
| 1.6 | v1alpha3 |
3.1 - CRD Version v1alpha3
The Go-based operator was introduced in Orchestrator 1.5, since the Helm-based operator is currently in maintenance mode. Along with major changes to the CRD, the v1alpha3 version of the Orchestrator CRD was introduced; it is not backward compatible. In version 1.6, the CRD field structure changed completely, with most fields removed, renamed, or restructured.
To see more information about the CRD fields, check out the
full Parameter list.
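If the operator is already installed on a cluster, the same field documentation can also be browsed from the CRD itself, for example (the resource name may need to be fully qualified if it is ambiguous on your cluster):
oc explain orchestrator.spec --recursive
oc explain orchestrator.spec.rhdh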
The following Orchestrator CR is a sample of the v1alpha3 API version.
apiVersion: rhdh.redhat.com/v1alpha3
kind: Orchestrator
metadata:
labels:
app.kubernetes.io/name: orchestrator-sample
name: orchestrator-sample
spec:
serverlessLogic:
installOperator: true # Determines whether to install the ServerlessLogic operator. Defaults to True. Optional
serverless:
installOperator: true # Determines whether to install the Serverless operator. Defaults to True. Optional
rhdh:
installOperator: true # Determines whether the RHDH operator should be installed. This determines the deployment of the RHDH instance. Defaults to False. Optional
devMode: true # Determines whether to enable the guest provider in RHDH. This should be used for development purposes ONLY and should not be enabled in production. Defaults to False. Optional
name: "my-rhdh" # Name of RHDH CR, whether existing or to be installed. Required
namespace: "rhdh" # Namespace of RHDH Instance, whether existing or to be installed. Required
plugins:
notificationsEmail:
enabled: false # Determines whether to install the Notifications Email plugin. Requires setting of hostname and credentials in backstage secret. The secret, backstage-backend-auth-secret, is created as a pre-requisite. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
port: 587 # SMTP server port. Defaults to 587. Optional
sender: "" # Email address of the Sender. Defaults to empty string. Optional
replyTo: "" # Reply-to email address. Defaults to empty string. Optional
postgres:
name: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
namespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
authSecret:
name: "sonataflow-psql-postgresql" # Name of existing secret to use for PostgreSQL credentials. Required
userKey: postgres-username # Name of key in existing secret to use for PostgreSQL credentials. Required
passwordKey: postgres-password # Name of key in existing secret to use for PostgreSQL credentials. Required
database: sonataflow # Name of existing database instance used by data index and job service. Required
platform: # Contains the configuration for the infrastructure services required for the Orchestrator to serve workflows by leveraging the OpenShift Serverless and OpenShift Serverless Logic capabilities.
namespace: "sonataflow-infra"
resources:
requests:
memory: "64Mi" # Defines the Memory resource limits. Optional
cpu: "250m" # Defines the CPU resource limits. Optional
limits:
memory: "1Gi" # Defines the Memory resource limits. Optional
cpu: "500m" # Defines the CPU resource limits. Optional
eventing:
broker: { }
# To enable eventing communication with an existing broker, populate the following fields:
# broker:
# name: "my-knative" # Name of existing Broker instance.
# namespace: "knative" # Namespace of existing Broker instance.
monitoring:
enabled: false # Determines whether to enable monitoring for platform. Optional
tekton:
enabled: false # Determines whether to create the Tekton pipeline and install the Tekton plugin on RHDH. Defaults to false. Optional
argocd:
enabled: false # Determines whether to install the ArgoCD plugin and create the orchestrator AppProject. Defaults to False. Optional
namespace: "orchestrator-gitops" # Namespace where the ArgoCD operator is installed and watching for argoapp CR instances. Optional
Migrating to the v1alpha3 CRD version involves upgrading the operator. Please follow
the Operator Upgrade documentation
3.2 - CRD Version v1alpha2
The v1alpha2 version of the Orchestrator CRD was introduced in Orchestrator 1.4 and is currently supported.
New Fields
In OSL 1.35, these new features are introduced:
- Support for Workflow Monitoring
- Support for Knative Eventing
Hence, the CRD schema is extended to allow the user to configure these features:
- orchestrator.sonataflowPlatform.monitoring.enabled
- orchestrator.sonataflowPlatform.eventing.broker.name
- orchestrator.sonataflowPlatform.eventing.broker.namespace
Deleted Fields
In RHDH 1.4, the notifications and signals plugins are part of the RHDH image and no longer need to be configured by the user.
Hence, these plugin fields are removed from the CRD schema:
- rhdhPlugins.notifications.package
- rhdhPlugins.notifications.integrity
- rhdhPlugins.notificationsBackend.package
- rhdhPlugins.notificationsBackend.integrity
- rhdhPlugins.signals.package
- rhdhPlugins.signals.integrity
- rhdhPlugins.signalsBackend.package
- rhdhPlugins.signalsBackend.integrity
- rhdhPlugins.notificationsEmail.package
- rhdhPlugins.notificationsEmail.integrity
Renamed Fields
For consistency of the subscription configuration in the CRD, these fields were renamed:
- sonataFlowOperator.subscription.source
- serverlessOperator.subscription.source
The following Orchestrator CR is a sample of the v1alpha2 API version.
apiVersion: rhdh.redhat.com/v1alpha2
kind: Orchestrator
metadata:
name: orchestrator-sample
spec:
sonataFlowOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install Sonataflow
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless-logic # namespace where the operator should be deployed
channel: alpha # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: logic-operator-rhel8 # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: logic-operator-rhel8.v1.35.0 # The initial version of the operator
serverlessOperator:
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless # namespace where the operator should be deployed
channel: stable # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: serverless-operator # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: serverless-operator.v1.35.0 # The initial version of the operator
rhdhOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install RHDH
enabled: true # whether the operator should be deployed by the chart
enableGuestProvider: true # whether to enable guest provider
secretRef:
name: backstage-backend-auth-secret # name of the secret that contains the credentials for the plugin to establish a communication channel with the Kubernetes API, ArgoCD, GitHub servers and SMTP mail server.
backstage:
backendSecret: BACKEND_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the Backstage backend secret. Defaults to 'BACKEND_SECRET'. It's required.
github: # GitHub specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with GitHub.
token: GITHUB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by GitHub. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITHUB_TOKEN', empty for not available.
clientId: GITHUB_CLIENT_ID # Key in the secret with name defined in the 'name' field that contains the value of the client ID that you generated on GitHub, for GitHub authentication (requires GitHub App). Defaults to 'GITHUB_CLIENT_ID', empty for not available.
clientSecret: GITHUB_CLIENT_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the client secret tied to the generated client ID. Defaults to 'GITHUB_CLIENT_SECRET', empty for not available.
gitlab: # Gitlab specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with Gitlab.
host: GITLAB_HOST # Key in the secret with name defined in the 'name' field that contains the value of the Gitlab host's name. Defaults to 'GITLAB_HOST', empty for not available.
token: GITLAB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by Gitlab. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITLAB_TOKEN', empty for not available.
k8s: # Kubernetes specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with the Kubernetes API Server.
clusterToken: K8S_CLUSTER_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the Kubernetes API bearer token used for authentication. Defaults to 'K8S_CLUSTER_TOKEN', empty for not available.
clusterUrl: K8S_CLUSTER_URL # Key in the secret with name defined in the 'name' field that contains the value of the API URL of the kubernetes cluster. Defaults to 'K8S_CLUSTER_URL', empty for not available.
argocd: # ArgoCD specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with ArgoCD. Note that ArgoCD must be deployed beforehand and the argocd.enabled field must be set to true as well.
url: ARGOCD_URL # Key in the secret with name defined in the 'name' field that contains the value of the URL of the ArgoCD API server. Defaults to 'ARGOCD_URL', empty for not available.
username: ARGOCD_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username to login to ArgoCD. Defaults to 'ARGOCD_USERNAME', empty for not available.
password: ARGOCD_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password to authenticate to ArgoCD. Defaults to 'ARGOCD_PASSWORD', empty for not available.
notificationsEmail:
hostname: NOTIFICATIONS_EMAIL_HOSTNAME # Key in the secret with name defined in the 'name' field that contains the value of the hostname of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_HOSTNAME', empty for not available.
username: NOTIFICATIONS_EMAIL_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_USERNAME', empty for not available.
password: NOTIFICATIONS_EMAIL_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_PASSWORD', empty for not available.
subscription:
namespace: rhdh-operator # namespace where the operator should be deployed
channel: fast-1.4 # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: rhdh # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: "" # The initial version of the operator
targetNamespace: rhdh-operator # the target namespace for the backstage CR in which RHDH instance is created
rhdhPlugins: # RHDH plugins required for the Orchestrator
npmRegistry: "https://npm.registry.redhat.com" # The NPM registry is already defined in the container, but sometimes the registry needs to be modified to use different versions of the plugin, for example: staging (https://npm.stage.registry.redhat.com) or development repositories
scope: "https://github.com/rhdhorchestrator/orchestrator-plugins-internal-release/releases/download/1.4.0-rc.7"
orchestrator:
package: "backstage-plugin-orchestrator-1.4.0-rc.7.tgz"
integrity: sha512-Vclb+TIL8cEtf9G2nx0UJ+kMJnCGZuYG/Xcw0Otdo/fZGuynnoCaAZ6rHnt4PR6LerekHYWNUbzM3X+AVj5cwg==
orchestratorBackend:
package: "backstage-plugin-orchestrator-backend-dynamic-1.4.0-rc.7.tgz"
integrity: sha512-bxD0Au2V9BeUMcZBfNYrPSQ161vmZyKwm6Yik5keZZ09tenkc8fNjipwJsWVFQCDcAOOxdBAE0ibgHtddl3NKw==
notificationsEmail:
enabled: false # whether to install the notifications email plugin. requires setting of hostname and credentials in backstage secret to enable. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
port: 587 # SMTP server port
sender: "" # the email sender address
replyTo: "" # reply-to address
postgres:
serviceName: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
serviceNamespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
authSecret:
name: "sonataflow-psql-postgresql" # name of existing secret to use for PostgreSQL credentials.
userKey: postgres-username # name of key in existing secret to use for PostgreSQL credentials.
passwordKey: postgres-password # name of key in existing secret to use for PostgreSQL credentials.
database: sonataflow # existing database instance used by data index and job service
orchestrator:
namespace: "sonataflow-infra" # Namespace where sonataflow's workflows run. The value is captured when running the setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `sonataflow-infra`.
sonataflowPlatform:
monitoring:
enabled: true # whether to enable monitoring
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
eventing:
broker:
name: "my-knative" # Name of existing Broker instance. Optional
namespace: "knative" # Namespace of existing Broker instance. Optional
tekton:
enabled: false # whether to create the Tekton pipeline resources
argocd:
enabled: false # whether to install the ArgoCD plugin and create the orchestrator AppProject
namespace: "" # Defines the namespace where the orchestrator's instance of ArgoCD is deployed. The value is captured when running setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `orchestrator-gitops` in the setup.sh script.
networkPolicy:
rhdhNamespace: "rhdh-operator" # Namespace of existing RHDH instance
4 - Requirements
Operators
The Orchestrator runtime/deployment is made of two main parts: the OpenShift Serverless Logic operator and the RHDH operator.
OpenShift Serverless Logic operator requirements
OpenShift Serverless Logic operator resource requirements are described in the OpenShift Serverless Logic Installation Requirements. This is mainly relevant for local environment settings.
The operator deploys a Data Index service and a Jobs service.
These are the recommended minimum resource requirements for their pods:
Data Index pod:
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 250m
memory: 64Mi
Jobs pod:
resources:
limits:
cpu: 200m
memory: 1Gi
requests:
cpu: 100m
memory: 1Gi
The resources for these pods are controlled by a CR of type SonataFlowPlatform. There is one such CR in the sonataflow-infra namespace.
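For example, to inspect the currently applied values on a cluster, you can look at that CR directly. The namespace and CR name below match the defaults used elsewhere in this guide; the exact field layout is owned by the OpenShift Serverless Logic operator:
oc -n sonataflow-infra get sonataflowplatform sonataflow-platform -o yaml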
RHDH operator requirements
The requirements for RHDH operator and its components are described here
Workflows
Each workflow has its own logic and therefore different resource requirements that are influenced by its specific logic.
Here are some metrics for the workflows we provide. For each workflow, the following values are listed: CPU at idle, CPU at peak (during execution), and memory.
- greeting workflow
- cpu idle: 4m
- cpu peak: 12m
- memory: 300 MB
- mtv-plan workflow
- cpu idle: 4m
- cpu peak: 130m
- memory: 300 MB
How to evaluate resource requirements for your workflow
Locate the workflow pod in the OCP Console. There is a Metrics tab where you'll find the CPU and memory usage. Execute the workflow a few times; it does not matter whether it succeeds, as long as all the states are executed. You can then read the peak usage (during execution) and the idle usage (after a few executions).
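Alternatively, assuming the cluster metrics stack is available, the same values can be sampled from the CLI. Replace <workflow-namespace> with the namespace where the workflow pod runs:
# Show current CPU and memory usage of the workflow pods
oc adm top pod -n <workflow-namespace>
# Run it during and after a few executions to capture peak and idle usage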
5 - Orchestrator on OpenShift
Installing the Orchestrator is facilitated through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components.
The Orchestrator is based on SonataFlow and the Serverless Workflow technologies to design and manage the workflows.
The Orchestrator plugins are deployed on a Red Hat Developer Hub
instance, which serves as the frontend.
When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.
To utilize Backstage capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.
Orchestrator Documentation
For comprehensive documentation on the Orchestrator, please
visit https://www.rhdhorchestrator.io.
Installing the Orchestrator Go Operator
Deploy the Orchestrator solution suite in an OCP cluster using the Orchestrator operator.
The operator installs the following components onto the target OpenShift cluster:
- RHDH (Red Hat Developer Hub) Backstage
- OpenShift Serverless Logic Operator (with Data-Index and Job Service)
- OpenShift Serverless Operator
- Knative Eventing
- Knative Serving
- (Optional) An ArgoCD project named orchestrator. Requires a pre-installed ArgoCD/OpenShift GitOps instance in the cluster. Disabled by default.
- (Optional) Tekton tasks and build pipeline. Requires a pre-installed Tekton/OpenShift Pipelines instance in the cluster. Disabled by default.
Important Note for ARM64 Architecture Users
Note that as of November 6, 2023, the OpenShift Serverless Operator is based on RHEL 8 images, which are not supported on the ARM64 architecture. Consequently, deploying this operator on an OpenShift Local cluster on MacBook laptops with M1/M2 chips is not supported.
Prerequisites
- Logged in to a Red Hat OpenShift Container Platform (version 4.14+) cluster as a cluster administrator.
- OpenShift CLI (oc) is installed.
- Operator Lifecycle Manager (OLM) has been installed in your cluster.
- Your cluster has a default storage class provisioned.
- A GitHub API Token - to import items into the catalog, ensure you have a GITHUB_TOKEN with the necessary permissions as detailed here.
  - For a classic token, include the following permissions:
    - repo (all)
    - admin:org (read:org)
    - user (read:user, user:email)
    - workflow (all) - required for using the software templates for creating workflows in GitHub
  - For a fine-grained token:
    - Repository permissions: Read access to metadata, Read and Write access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
    - Organization permissions: Read access to members, Read and Write access to organization administration, organization hooks, organization projects, and organization secrets.
⚠️ Warning: Skipping these steps will prevent the Orchestrator from functioning properly.
Deployment with GitOps
If you plan to deploy in a GitOps environment, make sure you have installed the ArgoCD/Red Hat OpenShift GitOps and the Tekton/Red Hat OpenShift Pipelines operators by following these instructions.
The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These
templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and
generate workflow K8s custom resources. Furthermore, ArgoCD is utilized to monitor any changes made to the workflow
repository and to automatically trigger the Tekton pipelines as needed.
ArgoCD/OpenShift GitOps operator
- Ensure at least one instance of ArgoCD exists in the designated namespace (referenced by the ARGOCD_NAMESPACE environment variable). Example here.
- Validated API is argoproj.io/v1alpha1/AppProject.
Tekton/OpenShift Pipelines operator
- Validated APIs are tekton.dev/v1beta1/Task and tekton.dev/v1/Pipeline.
- Requires ArgoCD to be installed, since the manifests are deployed in the same namespace as the ArgoCD instance.
Remember to enable argocd in your CR instance.
Detailed Installation Guide
From OperatorHub
- Deploying the PostgreSQL reference implementation
  - If you do not have a PostgreSQL instance in your cluster, you can deploy the PostgreSQL reference implementation by following the steps here.
  - If you already have PostgreSQL running in your cluster, ensure that the default settings in the PostgreSQL values file match the postgres field provided in the Orchestrator CR file.
- Install Orchestrator operator
- Go to OperatorHub in your OpenShift Console.
- Search for and install the Orchestrator Operator.
- Run the Setup Script
- Follow the steps in the Running the Setup Script section to download and execute the
setup.sh script, which initializes the RHDH environment.
- Create an Orchestrator instance
- Once the Orchestrator Operator is installed, navigate to Installed Operators.
- Select Orchestrator Operator.
- Click on Create Instance to deploy an Orchestrator instance.
- Verify resources and wait until they are running
From the console, run the following command to get the necessary wait commands:
oc describe orchestrator orchestrator-sample -n openshift-operators | grep -A 10 "Run the following commands to wait until the services are ready:"
The command will return an output similar to the one below, which lists several oc wait commands. This depends on
your specific cluster.
oc wait -n openshift-serverless deploy/knative-openshift --for=condition=Available --timeout=5m
oc wait -n knative-eventing knativeeventing/knative-eventing --for=condition=Ready --timeout=5m
oc wait -n knative-serving knativeserving/knative-serving --for=condition=Ready --timeout=5m
oc wait -n openshift-serverless-logic deploy/logic-operator-rhel8-controller-manager --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra sonataflowplatform/sonataflow-platform --for=condition=Succeed --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-jobs-service --for=condition=Available --timeout=5m
oc get networkpolicy -n sonataflow-infra
Copy and execute each command from the output in your terminal. These commands ensure that all necessary services
and resources in your OpenShift environment are available and running correctly.
If any service does not become available, verify the logs for that service or
consult troubleshooting steps.
Manual Installation
Deploy the PostgreSQL reference implementation for persistence support in SonataFlow by following these instructions.
Create a namespace for the Orchestrator solution:
oc new-project orchestrator
Run the Setup Script
- Follow the steps in the Running the Setup Script section to download and execute the
setup.sh script, which initializes the RHDH environment.
Use the following manifest to install the operator in an OCP cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: orchestrator-operator
namespace: openshift-operators
spec:
channel: stable
installPlanApproval: Automatic
name: orchestrator-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
Run the following commands to determine when the installation is completed:
wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.6/hack/wait_for_operator_installed.sh -O /tmp/wait_for_operator_installed.sh && chmod u+x /tmp/wait_for_operator_installed.sh && /tmp/wait_for_operator_installed.sh
During the installation process, the Orchestrator Operator creates the sub-component operators: the RHDH operator, the OpenShift Serverless operator, and the OpenShift Serverless Logic operator. Furthermore, it creates the necessary CRs and resources needed for the Orchestrator to function properly.
Apply the Orchestrator custom resource (CR) on the cluster to create an instance of RHDH and the resources of the OpenShift Serverless Operator and OpenShift Serverless Logic Operator. Make any changes to the CR before applying it, or test the default Orchestrator CR:
oc apply -n orchestrator -f https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/refs/heads/main/config/samples/_v1alpha3_orchestrator.yaml
Note: After the first reconciliation of the Orchestrator CR, changes to some of the fields in the CR may not be
propagated/reconciled to the intended resource. For example, changing the platform.resources.requests
field in
the Orchestrator CR will not have any effect on the running instance of the SonataFlowPlatform (SFP) resource.
For the sake of simplicity, that is the current design and may be revisited in the near future. Please refer to
the CRD Parameter List
to know which fields can be reconciled.
Running The Setup Script
The setup.sh script simplifies the initialization of the RHDH environment by creating the required authentication secret
and labeling GitOps namespaces based on the cluster configuration.
Create a namespace for the RHDH instance. This namespace is predefined as the default in both the setup.sh script and
the Orchestrator CR but can be overridden if needed.
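For example, to create the default namespace expected by the sample CR and the setup script (a sketch; adjust the name if you override the default):
oc new-project rhdh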
Download the setup script from the GitHub repository and run it to create the RHDH secret and label the GitOps namespaces:
wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.6/hack/setup.sh -O /tmp/setup.sh && chmod u+x /tmp/setup.sh
Run the script:
/tmp/setup.sh --use-default
NOTE: If you don’t want to use the default values, omit the --use-default
and the script will prompt you for
input.
The contents will vary depending on the configuration in the cluster. The following list details all the keys that can
appear in the secret:
- BACKEND_SECRET: Value is randomly generated at script execution. This is the only mandatory key required to be in the secret for the RHDH Operator to start.
- K8S_CLUSTER_URL: The URL of the Kubernetes cluster, obtained dynamically using oc whoami --show-server.
- K8S_CLUSTER_TOKEN: The value is obtained dynamically based on the provided namespace and service account.
- GITHUB_TOKEN: This value is prompted from the user during script execution and is not predefined.
- GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET: The values of these fields are used to authenticate against GitHub. For more information open this link.
- GITLAB_HOST and GITLAB_TOKEN: The values of these fields are used to authenticate against GitLab.
- ARGOCD_URL: This value is dynamically obtained based on the first ArgoCD instance available.
- ARGOCD_USERNAME: Default value is set to admin.
- ARGOCD_PASSWORD: This value is dynamically obtained based on the ArgoCD instance available.
Keys will not be added to the secret if they have no associated values. For instance, when deploying in a cluster without the GitOps operators, the ARGOCD_URL, ARGOCD_USERNAME and ARGOCD_PASSWORD keys will be omitted from the secret.
Sample of a secret created in a GitOps environment:
$> oc get secret -n rhdh -o yaml backstage-backend-auth-secret
apiVersion: v1
data:
ARGOCD_PASSWORD: ...
ARGOCD_URL: ...
ARGOCD_USERNAME: ...
BACKEND_SECRET: ...
GITHUB_TOKEN: ...
K8S_CLUSTER_TOKEN: ...
K8S_CLUSTER_URL: ...
kind: Secret
metadata:
creationTimestamp: "2024-05-07T22:22:59Z"
name: backstage-backend-auth-secret
namespace: rhdh-operator
resourceVersion: "4402773"
uid: 2042e741-346e-4f0e-9d15-1b5492bb9916
type: Opaque
Enabling Monitoring for Workflows
If you want to enable monitoring for workflows, you should enable it in the Orchestrator CR as follows:
apiVersion: rhdh.redhat.com/v1alpha3
kind: Orchestrator
metadata:
name: ...
spec:
...
platform:
...
monitoring:
enabled: true
...
After the CR is deployed, follow
the instructions to deploy
Prometheus, Grafana and the sample Grafana dashboard.
Using Knative eventing communication
To enable eventing communication between the different components (Data Index, Job Service and Workflows), a broker should be used. Kafka is a good candidate as it fulfills the reliability need. You can find the list of available brokers for Knative here: https://knative.dev/docs/eventing/brokers/broker-types/
Alternatively, an in-memory broker could also be used; however, it is not recommended for production purposes.
Follow these instructions to set up the Knative broker communication.
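As a minimal, non-production sketch, an in-memory (channel-based) broker matching the names used in the v1alpha3 CR sample above could be created as follows (assumes the target namespace does not exist yet):
oc new-project knative
oc create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: my-knative   # matches spec.platform.eventing.broker.name in the CR sample
  namespace: knative # matches spec.platform.eventing.broker.namespace
EOF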
Proxy configuration
Your Backstage instance might be configured to work with a proxy. In that case you need to tell Backstage to bypass the proxy for requests to the workflow namespaces and the sonataflow namespace (sonataflow-infra). You need to add these namespaces to the NO_PROXY environment variable, e.g. NO_PROXY=current-value-of-no-proxy,.sonataflow-infra,.my-workflow-namespace. Note the . before the namespace name.
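A minimal sketch, assuming the RHDH Backstage deployment is named backstage-my-rhdh and runs in the rhdh namespace (both are assumptions based on the sample CR; adjust to your instance) and that the proxy settings are plain environment variables on the deployment:
# Read the current value, then append the namespaces (note the leading dots)
CURRENT_NO_PROXY=$(oc -n rhdh get deploy backstage-my-rhdh -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="NO_PROXY")].value}')
oc -n rhdh set env deploy/backstage-my-rhdh NO_PROXY="${CURRENT_NO_PROXY},.sonataflow-infra,.my-workflow-namespace"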
Additional Workflow Namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g., sonataflow-infra),
several essential steps must be followed:
Allow Traffic from the Workflow Namespace:
To allow Sonataflow services to accept traffic from workflows, either create an additional network policy or update
the existing policy with the new workflow namespace.
Create Additional Network Policy
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-external-workflows-to-sonataflow-infra
# Namespace where network policies are deployed
namespace: sonataflow-infra
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
# Allow Sonataflow services to communicate with new/additional workflow namespace.
kubernetes.io/metadata.name: <new-workflow-namespace>
EOF
Alternatively - Update Existing Network Policy
oc -n sonataflow-infra patch networkpolicy allow-rhdh-to-sonataflow-and-workflows --type='json' \
-p='[
{
"op": "add",
"path": "/spec/ingress/0/from/-",
"value": {
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name": <new-workflow-namespace>
}
}
}
}]'
Identify the RHDH Namespace:
Retrieve the namespace where RHDH is running by executing:
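For example, when RHDH was installed through the RHDH operator's Backstage CR (a sketch; adjust if your instance was deployed differently):
oc get backstage -A
# Store the namespace for use in the manifests below, e.g. for the first instance found:
RHDH_NAMESPACE=$(oc get backstage -A -o jsonpath='{.items[0].metadata.namespace}')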
Store the namespace value in $RHDH_NAMESPACE for use in the Network Policy manifest below.
Identify the Sonataflow Services Namespace:
Check the namespace where Sonataflow services are deployed:
oc get sonataflowclusterplatform -A
If there is no cluster platform, check for a namespace-specific platform:
oc get sonataflowplatform -A
Store the namespace value in $WORKFLOW_NAMESPACE.
Set Up a Network Policy:
Configure a network policy to allow traffic only between RHDH, Knative, Sonataflow services, and workflows.
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-rhdh-to-sonataflow-and-workflows
namespace: $ADDITIONAL_NAMESPACE
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
# Allows traffic from pods in the RHDH namespace.
kubernetes.io/metadata.name: $RHDH_NAMESPACE
- namespaceSelector:
matchLabels:
# Allow traffic from pods in the Workflow namespace.
kubernetes.io/metadata.name: $WORKFLOW_NAMESPACE
- namespaceSelector:
matchLabels:
# Allows traffic from pods in the K-Native Eventing namespace.
kubernetes.io/metadata.name: knative-eventing
- namespaceSelector:
matchLabels:
# Allows traffic from pods in the K-Native Serving namespace.
kubernetes.io/metadata.name: knative-serving
EOF
To allow unrestricted communication between all pods within the workflow's namespace, create the allow-intra-namespace network policy.
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-intra-namespace
namespace: $ADDITIONAL_NAMESPACE
spec:
# Apply this policy to all pods in the namespace
podSelector: {}
# Specify policy type as 'Ingress' to control incoming traffic rules
policyTypes:
- Ingress
ingress:
- from:
# Allow ingress from any pod within the same namespace
- podSelector: {}
EOF
Ensure Persistence for the Workflow:
If persistence is required, follow these steps:
By following these steps, the workflow will have the necessary credentials to access PostgreSQL and will correctly
reference the service in a different namespace.
GitOps environment
See the
dedicated document
Deploying PostgreSQL reference implementation
See here
ArgoCD and workflow namespace
If you manually created the workflow namespaces (e.g., $WORKFLOW_NAMESPACE
), run this command to add the required
label that allows ArgoCD to deploy instances there:
oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE
Workflow installation
Follow Workflows Installation
Cleanup
/!\ Before removing the Orchestrator, make sure you have first removed any installed workflows. Otherwise the deletion may hang in a terminating state.
To remove the operator, first remove the operand resources.
Run:
oc delete namespace orchestrator
to delete the Orchestrator CR. This should remove the OSL, Serverless and RHDH operators, and the Sonataflow CRs.
To clean up the rest of the resources, run:
oc delete namespace sonataflow-infra rhdh
If you want to remove Knative-related resources, you may also run:
oc get crd -o name | grep -e knative | xargs oc delete
To remove the operator from the cluster, delete the subscription:
oc delete subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators
Note that the CRDs created during the installation process will remain in the cluster.
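For example, to list the leftover CRDs (the name patterns below are assumptions based on the operators installed in this guide), and optionally delete them in the same way as the Knative example above:
oc get crd -o name | grep -E 'sonataflow|rhdh|orchestrator'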
Compatibility Matrix between Orchestrator Operator and Dependencies
| Orchestrator Operator | RHDH | OSL | Serverless |
| --- | --- | --- | --- |
| Orchestrator 1.6.0 | 1.6.0 | 1.36.0 | 1.36.0 |
Compatibility Matrix for Orchestrator Plugins
6 - Orchestrator on Kubernetes
The following guide is for installing on a Kubernetes cluster. It is well tested and working in CI with a kind installation.
Here’s a kind configuration that is easy to work with (the apiserver port is static, so the kubeconfig is always the same):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
apiServerAddress: "127.0.0.1"
apiServerPort: 16443
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
- |
kind: KubeletConfiguration
localStorageCapacityIsolation: true
extraPortMappings:
- containerPort: 80
hostPort: 9090
protocol: TCP
- containerPort: 443
hostPort: 9443
protocol: TCP
- role: worker
Save this file as kind-config.yaml, and now run:
kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
The cluster should be up and running with the Contour ingress controller installed, so localhost:9090 will direct the traffic to Backstage, thanks to the ingress created by the Helm chart on port 80.
Orchestrator-k8s helm chart
This chart will install the Orchestrator and all its dependencies on Kubernetes.
THIS CHART IS NOT SUITED FOR PRODUCTION PURPOSES; you should only use it for development or testing purposes.
The chart deploys:
Usage
helm repo add orchestrator https://rhdhorchestrator.github.io/orchestrator-helm-chart
helm install orchestrator orchestrator/orchestrator-k8s
Configuration
All of the backstage app-config is derived from the values.yaml.
Secrets as env vars:
To use secrets as env vars, like the one used for the notifications, see charts/Orchestrator-k8s/templates/secret.yaml.
Every key in that secret will be available in the app-config for resolution.
Development
git clone https://github.com/rhdhorchestrator/orchestrator-helm-chart
cd orchestrator-helm-chart/charts/orchestrator-k8s
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
helm repo add postgresql https://charts.bitnami.com/bitnami
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
helm repo add workflows https://rhdhorchestrator.io/serverless-workflows-config
helm dependencies build
helm install orchestrator .
The output should look like this:
$ helm install orchestrator .
Release "orchestrator" has been upgraded. Happy Helming!
NAME: orchestrator
LAST DEPLOYED: Tue Sep 19 18:19:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
This chart will install RHDH-backstage(RHDH upstream) + Serverless Workflows.
To get RHDH's route location:
$ oc get route orchestrator-white-backstage -o jsonpath='https://{ .spec.host }{"\n"}'
To get the serverless workflow operator status:
$ oc get deploy -n sonataflow-operator-system
To get the serverless workflows status:
$ oc get sf
The chart notes will provide more information on:
- the route location of Backstage
- the SonataFlow operator status
- the deployment status of the SonataFlow workflows
7 - Orchestrator on existing RHDH instance
When RHDH is already installed and in use, reinstalling it is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:
- Utilize the Orchestrator operator to install the requisite components, such as the OpenShift Serverless Logic Operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
- Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
- Import the Orchestrator software templates into the Backstage catalog.
Prerequisites
- RHDH is already deployed with a running Backstage instance.
- Software templates for workflows require a GitHub provider to be configured.
- Ensure that a PostgreSQL database is available and that you have credentials to manage the tablespace (optional).
- For your convenience, a reference implementation is provided.
- If you already have a PostgreSQL database installed, please refer to this note regarding default settings.
In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.
The installation steps are detailed here.
8 - Workflows
In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.
8.1 - Deploy From Helm Repository
Orchestrator Workflows Helm Repository
This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.
The repository includes a variety of serverless workflows, such as:
- Greeting: A basic example workflow to demonstrate functionality.
- Migration Toolkit for Application Analysis (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing the applications.
- Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.
- …
Usage
Prerequisites
To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository
Installation
helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows
View available workflows on the Helm repository:
helm search repo orchestrator-workflows
The expected result should look like (with different versions):
NAME CHART VERSION APP VERSION DESCRIPTION
orchestrator-workflows/greeting 0.4.2 1.16.0 A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube 0.2.16 1.16.0 A Helm chart to deploy the move2kube workflow.
orchestrator-workflows/mta 0.2.16 1.16.0 A Helm chart for MTA serverless workflow
orchestrator-workflows/workflows 0.2.24 1.16.0 A Helm chart for serverless workflows
...
You can install the workflows by following their respective README.
Installing workflows in additional namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.
Version Compatibility
The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it.
The list below outlines the compatibility between the workflows and Orchestrator versions:
| Workflows | Chart Version | Orchestrator Operator Version |
| --- | --- | --- |
| move2kube | 1.6.x | 1.6.x |
| create-ocp-project | 1.6.x | 1.6.x |
| request-vm-cnv | 1.6.x | 1.6.x |
| modify-vm-resources | 1.6.x | 1.6.x |
| mta-v7 | 1.6.x | 1.6.x |
| mtv-migration | 1.6.x | 1.6.x |
| mtv-plan | 1.6.x | 1.6.x |
| move2kube | 1.5.x | 1.5.x |
| create-ocp-project | 1.5.x | 1.5.x |
| request-vm-cnv | 1.5.x | 1.5.x |
| modify-vm-resources | 1.5.x | 1.5.x |
| mta-v7 | 1.5.x | 1.5.x |
| mtv-migration | 1.5.x | 1.5.x |
| mtv-plan | 1.5.x | 1.5.x |
Helm index
https://www.rhdhorchestrator.io/serverless-workflows/index.yaml