Documentation
Orchestrator
Choose a section from the list below. For an introduction to the Orchestrator, check the Quick Start.
Release notes
New Features
1 - Quick Start
Quickstart Guide
This quickstart guide will help you install the Orchestrator using the Helm-based operator and execute a sample workflow through the Red Hat Developer Hub orchestrator plugin UI.
Install Orchestrator:
Follow the installation instructions for Orchestrator.
Install a sample workflow:
Follow the installation instructions for the greetings workflow.
Access Red Hat Developer Hub:
Open your web browser and navigate to the Red Hat Developer Hub application. Retrieve the URL using the following OpenShift CLI command.
oc get route backstage-backstage -n rhdh-operator -o jsonpath='{.spec.host}'
Make sure the route is accessible to you locally.
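For example, to print a clickable URL from the same route (this assumes the rhdh-operator namespace used in the command above):
echo "https://$(oc get route backstage-backstage -n rhdh-operator -o jsonpath='{.spec.host}')"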
Login to Backstage
Login to Backstage with the Guest account.
Navigate to Orchestrator:
Navigate to the Orchestrator page by clicking on the Orchestrator icon in the left navigation menu.

Execute Greeting Workflow:
Click on the ‘Execute’ button in the ACTIONS column of the Greeting workflow.
The ‘Run workflow’ page will open. Click ‘Next step’ and then ‘Run’

Monitor Workflow Status:
Wait for the status of the Greeting workflow execution to become Completed. This may take a moment.

2 - Architecture
The Orchestrator architecture comprises several integral components, each contributing to the seamless execution and management of workflows. Illustrated below is a breakdown of these components:
- Red Hat Developer Hub: Serving as the primary interface, Backstage fulfills multiple roles:
- Orchestrator Plugins: Both frontend and backend plugins are instrumental in presenting deployed workflows for execution and monitoring.
- Notifications Plugin: Employs notifications to inform users or groups about workflow events.
- OpenShift Serverless Logic Operator: This controller manages the Sonataflow custom resource (CR), where each CR denotes a deployed workflow.
- Sonataflow Runtime/Workflow Application: As a deployed workflow, Sonataflow Runtime is currently managed as a Kubernetes (K8s) deployment by the operator. It operates as an HTTP server, catering to requests for executing workflow instances. Within the Orchestrator deployment, each Sonataflow CR corresponds to a singular workflow. However, outside this scope, Sonataflow Runtime can handle multiple workflows. Interaction with Sonataflow Runtime for workflow execution is facilitated by the Orchestrator backend plugin.
- Data Index Service: This serves as a repository for workflow definitions, instances, and their associated jobs. It exposes a GraphQL API, utilized by the Orchestrator backend plugin to retrieve workflow definitions and instances.
- Job Service: Dedicated to orchestrating scheduled tasks for workflows.
- OpenShift Serverless: This operator furnishes serverless capabilities essential for workflow communication. It employs Knative eventing to interface with the Data Index service and leverages Knative functions to introduce more intricate logic to workflows.
- OpenShift AMQ Streams (Strimzi/Kafka): While not presently integrated into the deployment’s current iteration, this operator is crucial for ensuring the reliability of the eventing system.
- KeyCloak: Responsible for authentication and security services within applications. While not installed by the Orchestrator operator, it is essential for enhancing security measures.
- PostgreSQL Server - Utilized for storing both Sonataflow information and Backstage data, PostgreSQL Server provides a robust and reliable database solution essential for data persistence within the Orchestrator ecosystem.

3 - Core Concepts
3.1 - Workflow Types
🚨 Deprecation Notice 🚨
In the next Orchestrator release, Workflow Types will be retired. All workflows will act as infrastructure workflows, and no workflow will act as an assessment workflow.
The Orchestrator features two primary workflow categories:
- Infrastructure workflows: focus on automating infrastructure-related tasks
- Assessment workflows: focus on evaluating and analyzing data to suggest suitable infrastructure workflow options for subsequent execution
Infrastructure workflow
In the Orchestrator, an infrastructure workflow is a workflow that executes a sequence of operations based on optional user input and generates optional output, without requiring further action.
To define this type, developers need to include the following annotation in the workflow definition file:
annotations:
- "workflow-type/infrastructure"
The Orchestrator plugin utilizes this metadata to facilitate the processing and visualization of infrastructure workflow inputs and outputs within the user interface.
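For illustration, below is a minimal sketch (not a complete, runnable workflow) of where the annotation sits in a serverless workflow definition file; the id, name, and states values are placeholders:
id: my-infrastructure-workflow          # placeholder workflow id
version: "1.0"
specVersion: "0.8"
name: My Infrastructure Workflow        # placeholder display name
annotations:
  - "workflow-type/infrastructure"      # marks this workflow as an infrastructure workflow
states: []                              # a real workflow defines its states here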
Examples:
Assessment workflow
In the Orchestrator, an assessment is akin to an infrastructure workflow that concludes with a recommended course of action.
Upon completion, the assessment yields a workflowOptions object, which lists the infrastructure workflows deemed suitable based on the evaluation of the user’s inputs.
To define this type, developers must include the following annotation in the workflow definition file:
annotations:
- "workflow-type/assessment"
The Orchestrator plugin utilizes this metadata to facilitate the processing and visualization of assessment workflow inputs and outputs within the user interface.
This includes generating links to initiate infrastructure workflows from the list of recommended options, enabling seamless execution and integration.
The workflowOptions object must possess six essential attributes with specific types, including lists that can be empty or contain objects with id and name properties, similar to the currentVersion attribute. See an example in the code snippet below.
It is the assessment workflow developer’s responsibility to ensure that the provided workflow id in each workflowOptions attribute exists and is available in the environment.
{
  "workflowOptions": {
    "currentVersion": {
      "id": "_AN_INFRASTRUCTURE_WORKFLOW_ID_",
      "name": "_AN_INFRASTRUCTURE_WORKFLOW_NAME_"
    },
    "newOptions": [],
    "otherOptions": [],
    "upgradeOptions": [],
    "migrationOptions": [
      {
        "id": "_ANOTHER_INFRASTRUCTURE_WORKFLOW_ID_",
        "name": "_ANOTHER_INFRASTRUCTURE_WORKFLOW_NAME_"
      }
    ],
    "continuationOptions": []
  }
}
Examples:
Note
If the aforementioned annotation is missing from the workflow definition file, the Orchestrator plugin will default to treating the workflow as an infrastructure workflow, without considering its output.
To avoid unexpected behavior, it is strongly advised to always include the annotation and explicitly specify the workflow type.
4 - Installation
The deployment of the Orchestrator involves multiple independent components, each with its own installation process. On an OpenShift cluster, the Red Hat Catalog provides an operator that handles the installation for you. This installation is modular: the CRD exposes various flags that allow you to control which components to install. For vanilla Kubernetes, a Helm chart installs the Orchestrator components.
The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows and Backstage, integrated with orchestrator plugins for workflow invocation, monitoring, and control.
In addition to the Orchestrator deployment, we offer several workflows (linked below) that can be deployed using their respective installation methods.
4.1 - RBAC
The RBAC policies for RHDH Orchestrator plugins v1.5 are listed here
4.2 - Disconnected Environment
To install the Orchestrator and its required components in a disconnected environment, the container images and NPM packages must be mirrored.
Please ensure the images are added using either an ImageDigestMirrorSet or an ImageTagMirrorSet, depending on the format of their values (digest or tag).
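For reference, a minimal ImageDigestMirrorSet might look like the following sketch; the mirror registry hostname is a placeholder that must be replaced with your own mirror, and the source entries are only examples:
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: orchestrator-digest-mirrors
spec:
  imageDigestMirrors:
    - source: registry.redhat.io/rhdh                       # example source repository
      mirrors:
        - mirror.registry.example.com/rhdh                  # placeholder mirror registry
    - source: registry.redhat.io/openshift-serverless-1
      mirrors:
        - mirror.registry.example.com/openshift-serverless-1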
Images for a disconnected environment
The following images need to be added to the image registry:
Recommendation:
When fetching the list of required images, ensure that you are using the latest version of each operator bundle. This helps avoid missing or outdated image references.
RHDH Operator:
registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:9fd11a4551da42349809bbf34eb54c3b0ca8a3884d556593656af79e72786c01
registry.redhat.io/rhdh/rhdh-operator-bundle@sha256:c870eb3d17807a9d04011df5244ea39db66af76aefd0af68244c95ed8322d8b5
registry.redhat.io/rhdh/rhdh-rhel9-operator@sha256:df9204cfad16b43ff00385609ef4e99a292c033cb56be6ac76108cd0e0cfcb4b
registry.redhat.io/rhel9/postgresql-15@sha256:450a3c82d66f0642eee81fc3b19f8cf01fbc18b8e9dbbd2268ca1f471898db2f
OpenShift Serverless Operator:
registry.access.redhat.com/ubi8/nodejs-20-minimal@sha256:a2a7e399aaf09a48c28f40820da16709b62aee6f2bc703116b9345fab5830861
registry.access.redhat.com/ubi8/openjdk-21@sha256:441897a1f691c7d4b3a67bb3e0fea83e18352214264cb383fd057bbbd5ed863c
registry.access.redhat.com/ubi8/python-39@sha256:27e795fd6b1b77de70d1dc73a65e4c790650748a9cfda138fdbd194b3d6eea3d
registry.redhat.io/openshift-serverless-1/kn-backstage-plugins-eventmesh-rhel8@sha256:77665d8683230256122e60c3ec0496e80543675f39944c70415266ee5cffd080
registry.redhat.io/openshift-serverless-1/kn-client-cli-artifacts-rhel8@sha256:f983be49897be59dba1275f36bdd83f648663ee904e4f242599e9269fc354fd7
registry.redhat.io/openshift-serverless-1/kn-client-kn-rhel8@sha256:d21cc7e094aa46ba7f6ea717a3d7927da489024a46a6c1224c0b3c5834dcb7a6
registry.redhat.io/openshift-serverless-1/kn-ekb-dispatcher-rhel8@sha256:9cab1c37aae66e949a5d65614258394f566f2066dd20b5de5a8ebc3a4dd17e4c
registry.redhat.io/openshift-serverless-1/kn-ekb-kafka-controller-rhel8@sha256:e7dbf060ee40b252f884283d80fe63655ded5229e821f7af9e940582e969fc01
registry.redhat.io/openshift-serverless-1/kn-ekb-post-install-rhel8@sha256:097e7891a85779880b3e64edb2cb1579f17bc902a17d2aa0c1ef91aeb088f5f1
registry.redhat.io/openshift-serverless-1/kn-ekb-receiver-rhel8@sha256:207a1c3d7bf18a56ab8fd69255beeac6581a97576665e8b79f93df74da911285
registry.redhat.io/openshift-serverless-1/kn-ekb-webhook-kafka-rhel8@sha256:cafb9dcc4059b3bc740180cd8fb171bdad44b4d72365708d31f86327a29b9ec5
registry.redhat.io/openshift-serverless-1/kn-eventing-apiserver-receive-adapter-rhel8@sha256:ec3c038d2baf7ff915a2c5ee90c41fb065a9310ccee473f0a39d55de632293e3
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-controller-rhel8@sha256:2c2912c0ba2499b0ba193fcc33360145696f6cfe9bf576afc1eac1180f50b08d
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-dispatcher-rhel8@sha256:4d7ecfae62161eff86b02d1285ca9896983727ec318b0d29f0b749c4eba31226
registry.redhat.io/openshift-serverless-1/kn-eventing-controller-rhel8@sha256:1b4856760983e14f50028ab3d361bb6cd0120f0be6c76b586f2b42f5507c3f63
registry.redhat.io/openshift-serverless-1/kn-eventing-filter-rhel8@sha256:cec64e69a3a1c10bc2b48b06a5dd6a0ddd8b993840bbf1ac7881d79fc854bc91
registry.redhat.io/openshift-serverless-1/kn-eventing-ingress-rhel8@sha256:7e6049da45969fa3f766d2a542960b170097b2087cad15f5bba7345d8cdc0dad
registry.redhat.io/openshift-serverless-1/kn-eventing-istio-controller-rhel8@sha256:d14fd8abf4e8640dbde210f567dd36866fe5f0f814a768a181edcb56a8e7f35b
registry.redhat.io/openshift-serverless-1/kn-eventing-jobsink-rhel8@sha256:8ecea4b6af28fe8c7f8bfcc433c007555deb8b7def7c326867b04833c524565d
registry.redhat.io/openshift-serverless-1/kn-eventing-migrate-rhel8@sha256:e408db39c541a46ebf7ff1162fe6f81f6df1fe4eeed4461165d4cb1979c63d27
registry.redhat.io/openshift-serverless-1/kn-eventing-mtchannel-broker-rhel8@sha256:2685917be6a6843c0d82bddf19f9368c39c107dae1fd1d4cb2e69d1aa87588ec
registry.redhat.io/openshift-serverless-1/kn-eventing-mtping-rhel8@sha256:c5a5b6bc4fdb861133fd106f324cc4a904c6c6a32cabc6203efc578d8f46bbf4
registry.redhat.io/openshift-serverless-1/kn-eventing-webhook-rhel8@sha256:efe2d60e777918df9271f5512e4722f8cf667fe1a59ee937e093224f66bc8cbf
registry.redhat.io/openshift-serverless-1/kn-plugin-event-sender-rhel8@sha256:08f0b4151edd6d777e2944c6364612a5599e5a775e5150a76676a45f753c2e23
registry.redhat.io/openshift-serverless-1/kn-plugin-func-func-util-rhel8@sha256:01e0ab5c8203ef0ca39b4e9df8fd1a8c2769ef84fce7fecefc8e8858315e71ca
registry.redhat.io/openshift-serverless-1/kn-serving-activator-rhel8@sha256:3892eadbaa6aba6d79d6fe2a88662c851650f7c7be81797b2fc91d0593a763d1
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-hpa-rhel8@sha256:6b30d3f6d77a6e74d4df5a9d2c1b057cdc7ebbbf810213bc0a97590e741bae1c
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-rhel8@sha256:00777fa53883f25061ebe171b0d47025d27acd39582a619565e9167288321952
registry.redhat.io/openshift-serverless-1/kn-serving-controller-rhel8@sha256:41a21fdc683183422ebb29707d81eca96d7ca119d01f369b9defbaea94c09939
registry.redhat.io/openshift-serverless-1/kn-serving-queue-rhel8@sha256:bd464d68e283ce6c48ae904010991b491b738ada5a419f044bf71fd48326005b
registry.redhat.io/openshift-serverless-1/kn-serving-storage-version-migration-rhel8@sha256:de87597265ee5ac26db4458a251d00a5ec1b5cd0bfff4854284070fdadddb7ab
registry.redhat.io/openshift-serverless-1/kn-serving-webhook-rhel8@sha256:eb33e874b5a7c051db91cd6a63223aabd987988558ad34b34477bee592ceb3ab
registry.redhat.io/openshift-serverless-1/net-istio-controller-rhel8@sha256:ec77d44271ba3d86af6cbbeb70f20a720d30d1b75e93ac5e1024790448edf1dd
registry.redhat.io/openshift-serverless-1/net-istio-webhook-rhel8@sha256:07074f52b5fb1f2eb302854dce1ed5b81c665ed843f9453fc35a5ebcb1a36696
registry.redhat.io/openshift-serverless-1/net-kourier-kourier-rhel8@sha256:e5f1111791ffff7978fe175f3e3af61a431c08d8eea4457363c66d66596364d8
registry.redhat.io/openshift-serverless-1/serverless-ingress-rhel8@sha256:3d1ab23c9ce119144536dd9a9b80c12bf2bb8e5f308d9c9c6c5b48c41f4aa89e
registry.redhat.io/openshift-serverless-1/serverless-kn-operator-rhel8@sha256:78cb34062730b3926a465f0665475f0172a683d7204423ec89d32289f5ee329d
registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8@sha256:119fbc185f167f3866dbb5b135efc4ee787728c2e47dd1d2d66b76dc5c43609e
registry.redhat.io/openshift-serverless-1/serverless-openshift-kn-rhel8-operator@sha256:0f763b740cc1b614cf354c40f3dc17050e849b4cbf3a35cdb0537c2897d44c95
registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:b30d60cd458133430d4c92bf84911e03cecd02f60e88a58d1c6c003543cf833a
registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:3fcd8e2bf0bcb8ff8c93a87af2c59a3bcae7be8792f9d3236c9b5bbd9b6db3b2
registry.redhat.io/rhel8/buildah@sha256:3d505d9c0f5d4cd5a4ec03b8d038656c6cdbdf5191e00ce6388f7e0e4d2f1b74
registry.redhat.io/source-to-image/source-to-image-rhel8@sha256:6a6025914296a62fdf2092c3a40011bd9b966a6806b094d51eec5e1bd5026ef4
registry.redhat.io/openshift-serverless-1/serverless-operator-bundle@sha256:93b945eb2361b07bc86d67a9a7d77a0301a0bad876c83a9a64af2cfb86c83bff
OpenShift Serverless Logic Operator:
registry.redhat.io/openshift-serverless-1/logic-operator-bundle@sha256:a1d1995b2b178a1242d41f1e8df4382d14317623ac05b91bf6be971f0ac5a227
registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.35.0
registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.35.0
registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4564ca3dc5bac80d6faddaf94c817fbbc270698a9399d8a21ee1005d85ceda56
registry.redhat.io/openshift-serverless-1/logic-rhel8-operator@sha256:203043ca27819f7d039fd361d0816d5a16d6b860ff19d737b07968ddfba3d2cd
registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4564ca3dc5bac80d6faddaf94c817fbbc270698a9399d8a21ee1005d85ceda56
registry.redhat.io/openshift4/ose-cli:latest
gcr.io/kaniko-project/warmer:v1.9.0
gcr.io/kaniko-project/executor:v1.9.0
Orchestrator Operator:
registry.redhat.io/rhdh-orchestrator-dev-preview-beta/controller-rhel9-operator@sha256:ea42a1a593af9433ac74e58269c7e0705a08dbfa8bd78fba69429283a307131a
registry.redhat.io/rhdh-orchestrator-dev-preview-beta/orchestrator-operator-bundle@sha256:0a9e5d2626b4306c57659dbb90e160f1c01d96054dcac37f0975500d2c22d9c7
Note:
If you encounter issues pulling images due to an invalid GPG signature, consider updating the /etc/containers/policy.json file to reference the appropriate beta GPG key.
For example, you can use: /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
This may be required when working with pre-release or beta images signed with a different key than the default.
NPM packages for a disconnected environment
The packages required for the Orchestrator can be downloaded as tgz files from:
Alternatively, download the NPM packages from https://npm.registry.redhat.com, e.g. by running:
npm pack "@redhat/backstage-plugin-orchestrator@1.5.1" --registry=https://npm.registry.redhat.com
npm pack "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.5.1" --registry=https://npm.registry.redhat.com
npm pack "@redhat/backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.5.1" --registry=https://npm.registry.redhat.com
For maintainers
The images on this page were listed using the following set of commands, based on each of the operator bundle images:
RHDH
The list of images was obtained by:
bash <<'EOF'
set -euo pipefail
IMG="registry.redhat.io/rhdh/rhdh-operator-bundle:1.5.1"
DIR="local-manifests-rhdh"
CSV="$DIR/rhdh-operator.clusterserviceversion.yaml"
podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')
podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null
yq e '
.spec.install.spec.deployments[].spec.template.spec.containers[].image,
.spec.install.spec.deployments[].spec.template.spec.containers[].env[]
| select(.name | test("^RELATED_IMAGE_")).value
' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF
OpenShift Serverless
The list of images was obtained by:
IMG=registry.redhat.io/openshift-serverless-1/serverless-operator-bundle:1.35.0
podman run --rm --entrypoint bash "$IMG" -c "cat /manifests/serverless-operator.clusterserviceversion.yaml" | yq '.spec.relatedImages[].image' | sort | uniq
podman pull "$IMG"
podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}'
OpenShift Serverless Logic
podman create --name temp-container registry.redhat.io/openshift-serverless-1/logic-operator-bundle:1.35.0-5
podman cp temp-container:/manifests ./local-manifests-osl
podman rm temp-container
yq -r '.data."controllers_cfg.yaml" | from_yaml | .. | select(tag == "!!str") | select(test("^.*\\/.*:.*$"))' ./local-manifests-osl/logic-operator-rhel8-controllers-config_v1_configmap.yaml
yq -r '.. | select(has("image")) | .image' ./local-manifests-osl/logic-operator-rhel8.clusterserviceversion.yaml
Orchestrator
The list of images was obtained by:
bash <<'EOF'
set -euo pipefail
IMG="registry.redhat.io/rhdh-orchestrator-dev-preview-beta/orchestrator-operator-bundle:1.5-1744669755"
DIR="local-manifests-orchestrator"
CSV="$DIR/orchestrator-operator.clusterserviceversion.yaml"
podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')
podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null
yq e '.spec.install.spec.deployments[].spec.template.spec.containers[].image' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF
4.3 - Orchestrator CRD Versions
The following table shows the list of supported Orchestrator Operator versions with their compatible CRD version.
Orchestrator Operator Version | CRD Version
------------------------------|------------
1.3                           | v1alpha1
1.4                           | v1alpha2
1.5                           | v1alpha3
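To check which CRD versions are served on your cluster, one option is to inspect the CRD directly; the CRD name orchestrators.rhdh.redhat.com is inferred from the apiVersion and kind used in the samples below:
oc get crd orchestrators.rhdh.redhat.com -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}{.served}{"\n"}{end}'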
4.3.1 - CRD Version v1alpha3
The Go-based operator was introduced in Orchestrator 1.5, since the Helm-based operator is now in maintenance mode.
Along with major changes to the CRD, the v1alpha3 version of the Orchestrator CRD was introduced; it is not backward compatible.
In this version the CRD field structure has completely changed, with most fields either removed, renamed, or restructured.
For more information about the CRD fields, check out the full Parameter list.
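The same fields can also be browsed on a cluster where the operator is installed, for example:
oc explain orchestrator.spec --api-version=rhdh.redhat.com/v1alpha3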
The following Orchestrator CR is a sample of the v1alpha3 API version.
apiVersion: rhdh.redhat.com/v1alpha3
kind: Orchestrator
metadata:
  labels:
    app.kubernetes.io/name: orchestrator-sample
  name: orchestrator-sample
spec:
  serverlessLogic:
    installOperator: true # Determines whether to install the ServerlessLogic operator. Defaults to True. Optional
  serverless:
    installOperator: true # Determines whether to install the Serverless operator. Defaults to True. Optional
  rhdh:
    installOperator: true # Determines whether the RHDH operator should be installed. This determines the deployment of the RHDH instance. Defaults to False. Optional
    devMode: true # Determines whether to enable the guest provider in RHDH. This should be used for development purposes ONLY and should not be enabled in production. Defaults to False. Optional
    name: "my-rhdh" # Name of RHDH CR, whether existing or to be installed. Required
    namespace: "rhdh" # Namespace of RHDH Instance, whether existing or to be installed. Required
    plugins:
      notificationsEmail:
        enabled: false # Determines whether to install the Notifications Email plugin. Requires setting of hostname and credentials in backstage secret. The secret, backstage-backend-auth-secret, is created as a pre-requisite. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
        port: 587 # SMTP server port. Defaults to 587. Optional
        sender: "" # Email address of the Sender. Defaults to empty string. Optional
        replyTo: "" # Email address of the Recipient. Defaults to empty string. Optional
  postgres:
    name: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
    namespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
    authSecret:
      name: "sonataflow-psql-postgresql" # Name of existing secret to use for PostgreSQL credentials. Required
      userKey: postgres-username # Name of key in existing secret to use for PostgreSQL credentials. Required
      passwordKey: postgres-password # Name of key in existing secret to use for PostgreSQL credentials. Required
    database: sonataflow # Name of existing database instance used by data index and job service. Required
  platform: # Contains the configuration for the infrastructure services required for the Orchestrator to serve workflows by leveraging the OpenShift Serverless and OpenShift Serverless Logic capabilities.
    namespace: "sonataflow-infra"
    resources:
      requests:
        memory: "64Mi" # Defines the Memory resource limits. Optional
        cpu: "250m" # Defines the CPU resource limits. Optional
      limits:
        memory: "1Gi" # Defines the Memory resource limits. Optional
        cpu: "500m" # Defines the CPU resource limits. Optional
    eventing:
      broker: { }
      # To enable eventing communication with an existing broker, populate the following fields:
      # broker:
      #   name: "my-knative" # Name of existing Broker instance.
      #   namespace: "knative" # Namespace of existing Broker instance.
    monitoring:
      enabled: false # Determines whether to enable monitoring for platform. Optional
  tekton:
    enabled: false # Determines whether to create the Tekton pipeline and install the Tekton plugin on RHDH. Defaults to false. Optional
  argocd:
    enabled: false # Determines whether to install the ArgoCD plugin and create the orchestrator AppProject. Defaults to False. Optional
    namespace: "orchestrator-gitops" # Namespace where the ArgoCD operator is installed and watching for argoapp CR instances. Optional
Migrating to the v1alpha3 CRD version involves upgrading the operator. Please follow the Operator Upgrade documentation.
4.3.2 - CRD Version v1alpha1
The v1alpha1 version of the Orchestrator CRD is supported only by Orchestrator 1.3.
It is deprecated and not compatible with later Orchestrator versions.
The following Orchestrator CR is a sample of the v1alpha1 API version.
apiVersion: rhdh.redhat.com/v1alpha1
kind: Orchestrator
metadata:
name: orchestrator-sample
spec:
sonataFlowOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install Sonataflow
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless-logic # namespace where the operator should be deployed
channel: alpha # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: logic-operator-rhel8 # name of the operator package
sourceName: redhat-operators # name of the catalog source
startingCSV: logic-operator-rhel8.v1.34.0 # The initial version of the operator
serverlessOperator:
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless # namespace where the operator should be deployed
channel: stable # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: serverless-operator # name of the operator package
sourceName: redhat-operators # name of the catalog source
rhdhOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install RHDH
enabled: true # whether the operator should be deployed by the chart
enableGuestProvider: false # whether to enable guest provider
secretRef:
name: backstage-backend-auth-secret # name of the secret that contains the credentials for the plugin to establish a communication channel with the Kubernetes API, ArgoCD, GitHub servers and SMTP mail server.
backstage:
backendSecret: BACKEND_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the Backstage backend secret. Defaults to 'BACKEND_SECRET'. It's required.
github: #GitHub specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with GitHub.
token: GITHUB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by GitHub. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITHUB_TOKEN', empty for not available.
clientId: GITHUB_CLIENT_ID # Key in the secret with name defined in the 'name' field that contains the value of the client ID that you generated on GitHub, for GitHub authentication (requires GitHub App). Defaults to 'GITHUB_CLIENT_ID', empty for not available.
clientSecret: GITHUB_CLIENT_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the client secret tied to the generated client ID. Defaults to 'GITHUB_CLIENT_SECRET', empty for not available.
k8s: # Kubernetes specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with the Kubernetes API Server.
clusterToken: K8S_CLUSTER_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the Kubernetes API bearer token used for authentication. Defaults to 'K8S_CLUSTER_TOKEN', empty for not available.
clusterUrl: K8S_CLUSTER_URL # Key in the secret with name defined in the 'name' field that contains the value of the API URL of the kubernetes cluster. Defaults to 'K8S_CLUSTER_URL', empty for not available.
argocd: # ArgoCD specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with ArgoCD. Note that ArgoCD must be deployed beforehand and the argocd.enabled field must be set to true as well.
url: ARGOCD_URL # Key in the secret with name defined in the 'name' field that contains the value of the URL of the ArgoCD API server. Defaults to 'ARGOCD_URL', empty for not available.
username: ARGOCD_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username to login to ArgoCD. Defaults to 'ARGOCD_USERNAME', empty for not available.
password: ARGOCD_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password to authenticate to ArgoCD. Defaults to 'ARGOCD_PASSWORD', empty for not available.
notificationsEmail:
hostname: NOTIFICATIONS_EMAIL_HOSTNAME # Key in the secret with name defined in the 'name' field that contains the value of the hostname of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_HOSTNAME', empty for not available.
username: NOTIFICATIONS_EMAIL_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_USERNAME', empty for not available.
password: NOTIFICATIONS_EMAIL_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_PASSWORD', empty for not available.
subscription:
namespace: rhdh-operator # namespace where the operator should be deployed
channel: fast-1.3 # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: rhdh # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: "" # The initial version of the operator
targetNamespace: rhdh-operator # the target namespace for the backstage CR in which RHDH instance is created
rhdhPlugins: # RHDH plugins required for the Orchestrator
npmRegistry: "https://npm.registry.redhat.com" # NPM registry is defined already in the container, but sometimes the registry need to be modified to use different versions of the plugin, for example: staging(https://npm.stage.registry.redhat.com) or development repositories
scope: "@redhat"
orchestrator:
package: "backstage-plugin-orchestrator@1.3.0"
integrity: sha512-A/twx1SOOGDQjglLzOxQikKO0XOdPP1jh2lj9Y/92bLox8mT+eaZpub8YLwR2mb7LsUIUImg+U6VnKwoAV9ATA==
orchestratorBackend:
package: "backstage-plugin-orchestrator-backend-dynamic@1.3.0"
integrity: sha512-Th5vmwyhHyhURwQo28++PPHTvxGSFScSHPJyofIdE5gTAb87ncyfyBkipSDq7fwj4L8CQTXa4YP6A2EkHW1npg==
notifications:
package: "plugin-notifications-dynamic@1.3.0"
integrity: sha512-iYLgIy0YdP/CdTLol07Fncmo9n0J8PdIZseiwAyUt9RFJzKIXmoi2CpQLPKMx36lEgPYUlT0rFO81Ie2CSis4Q==
notificationsBackend:
package: "plugin-notifications-backend-dynamic@1.3.0"
integrity: sha512-Pw9Op/Q+1MctmLiVvQ3M+89tkbWkw8Lw0VfcwyGSMiHpK/Xql1TrSFtThtLlymRgeCSBgxHYhh3MUusNQX08VA==
signals:
package: "plugin-signals-dynamic@1.3.0"
integrity: sha512-+E8XeTXcG5oy+aNImGj/MY0dvEkP7XAsu4xuZjmAqOHyVfiIi0jnP/QDz8XMbD1IjCimbr/DMUZdjmzQiD0hSQ==
signalsBackend:
package: "plugin-signals-backend-dynamic@1.3.0"
integrity: sha512-5Bl6C+idPXtquQxMZW+bjRMcOfFYcKxcGZZFv2ITkPVeY2zzxQnAz3vYHnbvKRSwlQxjIyRXY6YgITGHXWT0nw==
notificationsEmail:
enabled: false # whether to install the notifications email plugin. requires setting of hostname and credentials in backstage secret to enable. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
package: "plugin-notifications-backend-module-email-dynamic@1.3.0"
integrity: sha512-sm7yRoO6Nkk3B7+AWKb10maIrb2YBNSiqQaWmFDVg2G9cbDoWr9wigqqeQ32+b6o2FenfNWg8xKY6PPyZGh8BA==
port: 587 # SMTP server port
sender: "" # the email sender address
replyTo: "" # reply-to address
postgres:
serviceName: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
serviceNamespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
authSecret:
name: "sonataflow-psql-postgresql" # name of existing secret to use for PostgreSQL credentials.
userKey: postgres-username # name of key in existing secret to use for PostgreSQL credentials.
passwordKey: postgres-password # name of key in existing secret to use for PostgreSQL credentials.
database: sonataflow # existing database instance used by data index and job service
orchestrator:
namespace: "sonataflow-infra" # Namespace where sonataflow's workflows run. The value is captured when running the setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `sonataflow-infra`.
sonataflowPlatform:
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
tekton:
enabled: false # whether to create the Tekton pipeline resources
argocd:
enabled: false # whether to install the ArgoCD plugin and create the orchestrator AppProject
namespace: "" # Defines the namespace where the orchestrator's instance of ArgoCD is deployed. The value is captured when running setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `orchestrator-gitops` in the setup.sh script.
networkPolicy:
rhdhNamespace: "rhdh-operator" # Namespace of existing RHDH instance
4.3.3 - CRD Version v1alpha2
The v1alpha2 version of the Orchestrator CRD was introduced in Orchestrator 1.4 and is currently supported.
New Fields
In OSL 1.35, these new features are introduced:
- Support for Workflow Monitoring
- Support for Knative Eventing
Hence, the CRD schema extends to allow configuration for these features by the user.
- orchestrator.sonataflowPlatform.monitoring.enabled
- orchestrator.sonataflowPlatform.eventing.broker.name
- orchestrator.sonataflowPlatform.eventing.broker.namespace
Deleted Fields
In RHDH 1.4, the notifications and signals plugins are part of the RHDH image and no longer need to be configured by the user.
Hence, these plugin fields have been removed from the CRD schema.
- rhdhPlugins.notifications.package
- rhdhPlugins.notifications.integrity
- rhdhPlugins.notificationsBackend.package
- rhdhPlugins.notificationsBackend.integrity
- rhdhPlugins.signals.package
- rhdhPlugins.signals.integrity
- rhdhPlugins.signalsBackend.package
- rhdhPlugins.signalsBackend.integrity
- rhdhPlugins.notificationsEmail.package
- rhdhPlugins.notificationsEmail.integrity
Renamed Fields
For consistency of the subscription configuration in the CRD, these fields were renamed (previously sourceName):
- sonataFlowOperator.subscription.source
- serverlessOperator.subscription.source
The following Orchestrator CR is a sample of the v1alpha2 API version.
apiVersion: rhdh.redhat.com/v1alpha2
kind: Orchestrator
metadata:
name: orchestrator-sample
spec:
sonataFlowOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install Sonataflow
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless-logic # namespace where the operator should be deployed
channel: alpha # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: logic-operator-rhel8 # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: logic-operator-rhel8.v1.35.0 # The initial version of the operator
serverlessOperator:
enabled: true # whether the operator should be deployed by the chart
subscription:
namespace: openshift-serverless # namespace where the operator should be deployed
channel: stable # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: serverless-operator # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: serverless-operator.v1.35.0 # The initial version of the operator
rhdhOperator:
isReleaseCandidate: false # Indicates RC builds should be used by the chart to install RHDH
enabled: true # whether the operator should be deployed by the chart
enableGuestProvider: true # whether to enable guest provider
secretRef:
name: backstage-backend-auth-secret # name of the secret that contains the credentials for the plugin to establish a communication channel with the Kubernetes API, ArgoCD, GitHub servers and SMTP mail server.
backstage:
backendSecret: BACKEND_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the Backstage backend secret. Defaults to 'BACKEND_SECRET'. It's required.
github: # GitHub specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with GitHub.
token: GITHUB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by GitHub. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITHUB_TOKEN', empty for not available.
clientId: GITHUB_CLIENT_ID # Key in the secret with name defined in the 'name' field that contains the value of the client ID that you generated on GitHub, for GitHub authentication (requires GitHub App). Defaults to 'GITHUB_CLIENT_ID', empty for not available.
clientSecret: GITHUB_CLIENT_SECRET # Key in the secret with name defined in the 'name' field that contains the value of the client secret tied to the generated client ID. Defaults to 'GITHUB_CLIENT_SECRET', empty for not available.
gitlab: # Gitlab specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with Gitlab.
host: GITLAB_HOST # Key in the secret with name defined in the 'name' field that contains the value of Gitlab Host's name. Defaults to 'GITHUB_HOST', empty for not available.
token: GITLAB_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the authentication token as expected by Gitlab. Required for importing resource to the catalog, launching software templates and more. Defaults to 'GITLAB_TOKEN', empty for not available.
k8s: # Kubernetes specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with the Kubernetes API Server.
clusterToken: K8S_CLUSTER_TOKEN # Key in the secret with name defined in the 'name' field that contains the value of the Kubernetes API bearer token used for authentication. Defaults to 'K8S_CLUSTER_TOKEN', empty for not available.
clusterUrl: K8S_CLUSTER_URL # Key in the secret with name defined in the 'name' field that contains the value of the API URL of the kubernetes cluster. Defaults to 'K8S_CLUSTER_URL', empty for not available.
argocd: # ArgoCD specific configuration fields that are injected to the backstage instance to allow the plugin to communicate with ArgoCD. Note that ArgoCD must be deployed beforehand and the argocd.enabled field must be set to true as well.
url: ARGOCD_URL # Key in the secret with name defined in the 'name' field that contains the value of the URL of the ArgoCD API server. Defaults to 'ARGOCD_URL', empty for not available.
username: ARGOCD_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username to login to ArgoCD. Defaults to 'ARGOCD_USERNAME', empty for not available.
password: ARGOCD_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password to authenticate to ArgoCD. Defaults to 'ARGOCD_PASSWORD', empty for not available.
notificationsEmail:
hostname: NOTIFICATIONS_EMAIL_HOSTNAME # Key in the secret with name defined in the 'name' field that contains the value of the hostname of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_HOSTNAME', empty for not available.
username: NOTIFICATIONS_EMAIL_USERNAME # Key in the secret with name defined in the 'name' field that contains the value of the username of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_USERNAME', empty for not available.
password: NOTIFICATIONS_EMAIL_PASSWORD # Key in the secret with name defined in the 'name' field that contains the value of the password of the SMTP server for the notifications plugin. Defaults to 'NOTIFICATIONS_EMAIL_PASSWORD', empty for not available.
subscription:
namespace: rhdh-operator # namespace where the operator should be deployed
channel: fast-1.4 # channel of an operator package to subscribe to
installPlanApproval: Automatic # whether the update should be installed automatically
name: rhdh # name of the operator package
source: redhat-operators # name of the catalog source
startingCSV: "" # The initial version of the operator
targetNamespace: rhdh-operator # the target namespace for the backstage CR in which RHDH instance is created
rhdhPlugins: # RHDH plugins required for the Orchestrator
npmRegistry: "https://npm.registry.redhat.com" # NPM registry is defined already in the container, but sometimes the registry need to be modified to use different versions of the plugin, for example: staging(https://npm.stage.registry.redhat.com) or development repositories
scope: "https://github.com/rhdhorchestrator/orchestrator-plugins-internal-release/releases/download/1.4.0-rc.7"
orchestrator:
package: "backstage-plugin-orchestrator-1.4.0-rc.7.tgz"
integrity: sha512-Vclb+TIL8cEtf9G2nx0UJ+kMJnCGZuYG/Xcw0Otdo/fZGuynnoCaAZ6rHnt4PR6LerekHYWNUbzM3X+AVj5cwg==
orchestratorBackend:
package: "backstage-plugin-orchestrator-backend-dynamic-1.4.0-rc.7.tgz"
integrity: sha512-bxD0Au2V9BeUMcZBfNYrPSQ161vmZyKwm6Yik5keZZ09tenkc8fNjipwJsWVFQCDcAOOxdBAE0ibgHtddl3NKw==
notificationsEmail:
enabled: false # whether to install the notifications email plugin. requires setting of hostname and credentials in backstage secret to enable. See value backstage-backend-auth-secret. See plugin configuration at https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts
port: 587 # SMTP server port
sender: "" # the email sender address
replyTo: "" # reply-to address
postgres:
serviceName: "sonataflow-psql-postgresql" # The name of the Postgres DB service to be used by platform services. Cannot be empty.
serviceNamespace: "sonataflow-infra" # The namespace of the Postgres DB service to be used by platform services.
authSecret:
name: "sonataflow-psql-postgresql" # name of existing secret to use for PostgreSQL credentials.
userKey: postgres-username # name of key in existing secret to use for PostgreSQL credentials.
passwordKey: postgres-password # name of key in existing secret to use for PostgreSQL credentials.
database: sonataflow # existing database instance used by data index and job service
orchestrator:
namespace: "sonataflow-infra" # Namespace where sonataflow's workflows run. The value is captured when running the setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `sonataflow-infra`.
sonataflowPlatform:
monitoring:
enabled: true # whether to enable monitoring
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
eventing:
broker:
name: "my-knative" # Name of existing Broker instance. Optional
namespace: "knative" # Namespace of existing Broker instance. Optional
tekton:
enabled: false # whether to create the Tekton pipeline resources
argocd:
enabled: false # whether to install the ArgoCD plugin and create the orchestrator AppProject
namespace: "" # Defines the namespace where the orchestrator's instance of ArgoCD is deployed. The value is captured when running setup.sh script and stored as a label in the selected namespace. User can override the value by populating this field. Defaults to `orchestrator-gitops` in the setup.sh script.
networkPolicy:
rhdhNamespace: "rhdh-operator" # Namespace of existing RHDH instance
4.4 - Requirements
Operators
The Orchestrator runtime/deployment is made of two main parts: the OpenShift Serverless Logic operator and the RHDH operator.
OpenShift Serverless Logic operator requirements
The OpenShift Serverless Logic operator resource requirements are described in the OpenShift Serverless Logic Installation Requirements. This is mainly relevant for local environment settings.
The operator deploys a Data Index service and a Jobs service.
These are the recommended minimum resource requirements for their pods:
Data Index pod:
resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 250m
    memory: 64Mi
Jobs pod:
resources:
  limits:
    cpu: 200m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi
The resources for these pods are controlled by a CR of type SonataFlowPlatform. There is one such CR in the sonataflow-infra namespace.
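To inspect the values currently applied, you can view that CR; the sonataflow-platform name used here matches the wait commands shown later in this documentation:
oc get sonataflowplatform sonataflow-platform -n sonataflow-infra -o yaml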
RHDH operator requirements
The requirements for RHDH operator and its components are described here
Workflows
Each workflow has its own logic and therefore its own resource requirements, which are influenced by that logic.
Here are some metrics for the workflows we provide. For each workflow, the following values are listed: CPU at idle, CPU at peak (during execution), and memory.
- greeting workflow
  - cpu idle: 4m
  - cpu peak: 12m
  - memory: 300 MB
- mtv-plan workflow
  - cpu idle: 4m
  - cpu peak: 130m
  - memory: 300 MB
How to evaluate resource requirements for your workflow
Locate the workflow pod in the OCP Console; the Metrics tab shows its CPU and memory usage. Execute the workflow a few times; it does not matter whether it succeeds, as long as all the states are executed. You can then read the peak usage (during execution) and the idle usage (after the executions finish).
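If you prefer the CLI over the console, a rough equivalent is shown below; replace the namespace placeholder with the namespace where your workflow pod runs:
oc adm top pod -n <workflow-namespace> --containers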
4.5 - Orchestrator on OpenShift
Installing the Orchestrator is facilitated through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components.
The Orchestrator is based on the SonataFlow and the Serverless Workflow technologies to design and manage the workflows.
The Orchestrator plugins are deployed on a Red Hat Developer Hub
instance, which serves as the frontend.
When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.
To utilize Backstage capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.
Orchestrator Documentation
For comprehensive documentation on the Orchestrator, please
visit https://www.rhdhorchestrator.io.
Installing the Orchestrator Go Operator
Deploy the Orchestrator solution suite in an OCP cluster using the Orchestrator operator.
The operator installs the following components onto the target OpenShift cluster:
- RHDH (Red Hat Developer Hub) Backstage
- OpenShift Serverless Logic Operator (with Data-Index and Job Service)
- OpenShift Serverless Operator
- Knative Eventing
- Knative Serving
- (Optional) An ArgoCD project named orchestrator. Requires a pre-installed ArgoCD/OpenShift GitOps instance in the cluster. Disabled by default.
- (Optional) Tekton tasks and build pipeline. Requires a pre-installed Tekton/OpenShift Pipelines instance in the cluster. Disabled by default.
Important Note for ARM64 Architecture Users
Note that as of November 6, 2023, the OpenShift Serverless Operator is based on RHEL 8 images, which are not supported on the ARM64 architecture. Consequently, deployment of this operator on an OpenShift Local cluster on MacBook laptops with M1/M2 chips is not supported.
Prerequisites
- Logged in to a Red Hat OpenShift Container Platform (version 4.14+) cluster as a cluster administrator.
- OpenShift CLI (oc) is installed.
- Operator Lifecycle Manager (OLM) has been installed in your cluster.
- Your cluster has a default storage class provisioned.
- A GitHub API Token - to import items into the catalog, ensure you have a GITHUB_TOKEN with the necessary permissions as detailed here.
  - For a classic token, include the following permissions:
    - repo (all)
    - admin:org (read:org)
    - user (read:user, user:email)
    - workflow (all) - required for using the software templates for creating workflows in GitHub
  - For a fine-grained token:
    - Repository permissions: Read access to metadata, Read and Write access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
    - Organization permissions: Read access to members, Read and Write access to organization administration, organization hooks, organization projects, and organization secrets.
⚠️ Warning: Skipping these steps will prevent the Orchestrator from functioning properly.
Deployment with GitOps
If you plan to deploy in a GitOps environment, make sure you have installed the ArgoCD/Red Hat OpenShift GitOps and the Tekton/Red Hat OpenShift Pipelines operators by following these instructions.
The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These
templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and
generate workflow K8s custom resources. Furthermore, ArgoCD is utilized to monitor any changes made to the workflow
repository and to automatically trigger the Tekton pipelines as needed.
ArgoCD/OpenShift GitOps operator
- Ensure at least one instance of ArgoCD exists in the designated namespace (referenced by the ARGOCD_NAMESPACE environment variable). Example here.
- Validated API is argoproj.io/v1alpha1/AppProject
Tekton/OpenShift Pipelines operator
- Validated APIs are tekton.dev/v1beta1/Task and tekton.dev/v1/Pipeline
- Requires ArgoCD installed, since the manifests are deployed in the same namespace as the ArgoCD instance.
Remember to enable argocd in your CR instance.
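For example, with the v1alpha3 CRD this uses the argocd fields shown in the sample CR earlier in this documentation:
spec:
  argocd:
    enabled: true
    namespace: "orchestrator-gitops"   # namespace watched by your ArgoCD/OpenShift GitOps instance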
Detailed Installation Guide
From OperatorHub
- Deploying PostgreSQL reference implementation
  - If you do not have a PostgreSQL instance in your cluster, you can deploy the PostgreSQL reference implementation by following the steps here.
  - If you already have PostgreSQL running in your cluster, ensure that the default settings in the PostgreSQL values file match the postgres field provided in the Orchestrator CR file.
- Install Orchestrator operator
- Go to OperatorHub in your OpenShift Console.
- Search for and install the Orchestrator Operator.
- Run the Setup Script
  - Follow the steps in the Running the Setup Script section to download and execute the setup.sh script, which initializes the RHDH environment.
- Create an Orchestrator instance
- Once the Orchestrator Operator is installed, navigate to Installed Operators.
- Select Orchestrator Operator.
- Click on Create Instance to deploy an Orchestrator instance.
- Verify resources and wait until they are running
From the console, run the following command to get the necessary wait commands:
oc describe orchestrator orchestrator-sample -n openshift-operators | grep -A 10 "Run the following commands to wait until the services are ready:"
The command will return an output similar to the one below, which lists several oc wait commands. The exact list depends on your specific cluster.
oc wait -n openshift-serverless deploy/knative-openshift --for=condition=Available --timeout=5m
oc wait -n knative-eventing knativeeventing/knative-eventing --for=condition=Ready --timeout=5m
oc wait -n knative-serving knativeserving/knative-serving --for=condition=Ready --timeout=5m
oc wait -n openshift-serverless-logic deploy/logic-operator-rhel8-controller-manager --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra sonataflowplatform/sonataflow-platform --for=condition=Succeed --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-data-index-service --for=condition=Available --timeout=5m
oc wait -n sonataflow-infra deploy/sonataflow-platform-jobs-service --for=condition=Available --timeout=5m
oc get networkpolicy -n sonataflow-infra
Copy and execute each command from the output in your terminal. These commands ensure that all necessary services
and resources in your OpenShift environment are available and running correctly.
If any service does not become available, verify the logs for that service or
consult troubleshooting steps.
Manual Installation
Deploy the PostgreSQL reference implementation for persistence support in SonataFlow by following these instructions.
Create a namespace for the Orchestrator solution:
oc new-project orchestrator
Run the Setup Script
- Follow the steps in the Running the Setup Script section to download and execute the setup.sh script, which initializes the RHDH environment.
Use the following manifest to install the operator in an OCP cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: orchestrator-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: orchestrator-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
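Assuming the manifest above is saved to a file (the file name here is arbitrary), apply it with:
oc apply -f orchestrator-operator-subscription.yaml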
Run the following commands to determine when the installation is completed:
wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.5/hack/wait_for_operator_installed.sh -O /tmp/wait_for_operator_installed.sh && chmod u+x /tmp/wait_for_operator_installed.sh && /tmp/wait_for_operator_installed.sh
During the installation process, the Orchestrator Operator creates and monitors the lifecycle of the sub-component operators: the RHDH operator, the OpenShift Serverless operator, and the OpenShift Serverless Logic operator. Furthermore, it creates the CRs and resources needed for the Orchestrator to function properly.
Please refer to the troubleshooting section for known issues with the operator resources.
Apply the Orchestrator custom resource (CR) on the cluster to create an instance of RHDH and the resources of the OpenShift Serverless Operator and OpenShift Serverless Logic Operator.
Make any changes to the CR before applying it, or test the default Orchestrator CR:
oc apply -n orchestrator -f https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/refs/heads/release-1.5/config/samples/_v1alpha3_orchestrator.yaml
Note: After the first reconciliation of the Orchestrator CR, changes to some of the fields in the CR may not be propagated/reconciled to the intended resource. For example, changing the platform.resources.requests field in the Orchestrator CR will not have any effect on the running instance of the SonataFlowPlatform (SFP) resource.
For simplicity's sake, that is the current design and it may be revisited in the near future. Please refer to the CRD Parameter List to know which fields can be reconciled.
Running The Setup Script
The setup.sh script simplifies the initialization of the RHDH environment by creating the required authentication secret
and labeling GitOps namespaces based on the cluster configuration.
Create a namespace for the RHDH instance. This namespace is predefined as the default in both the setup.sh script and
the Orchestrator CR but can be overridden if needed.
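For example, assuming the default namespace name rhdh used in the sample Orchestrator CR:
oc new-project rhdh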
Download the setup script from the GitHub repository and run it to create the RHDH secret and label the GitOps namespaces:
wget https://raw.githubusercontent.com/rhdhorchestrator/orchestrator-go-operator/release-1.5/hack/setup.sh -O /tmp/setup.sh && chmod u+x /tmp/setup.sh
Run the script:
/tmp/setup.sh --use-default
NOTE: If you don’t want to use the default values, omit --use-default and the script will prompt you for input.
The contents will vary depending on the configuration in the cluster. The following list details all the keys that can appear in the secret:
- BACKEND_SECRET: Value is randomly generated at script execution. This is the only mandatory key required to be in the secret for the RHDH Operator to start.
- K8S_CLUSTER_URL: The URL of the Kubernetes cluster is obtained dynamically using oc whoami --show-server.
- K8S_CLUSTER_TOKEN: The value is obtained dynamically based on the provided namespace and service account.
- GITHUB_TOKEN: This value is prompted from the user during script execution and is not predefined.
- GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET: The value for both these fields are used to authenticate against GitHub. For more information open this link.
- GITLAB_HOST and GITLAB_TOKEN: The value for both these fields are used to authenticate against GitLab.
- ARGOCD_URL: This value is dynamically obtained based on the first ArgoCD instance available.
- ARGOCD_USERNAME: Default value is set to admin.
- ARGOCD_PASSWORD: This value is dynamically obtained based on the ArgoCD instance available.
Keys will not be added to the secret if they have no values associated. So for instance, when deploying in a cluster without the GitOps operators, the ARGOCD_URL, ARGOCD_USERNAME and ARGOCD_PASSWORD keys will be omitted from the secret.
Sample of a secret created in a GitOps environment:
$> oc get secret -n rhdh -o yaml backstage-backend-auth-secret
apiVersion: v1
data:
  ARGOCD_PASSWORD: ...
  ARGOCD_URL: ...
  ARGOCD_USERNAME: ...
  BACKEND_SECRET: ...
  GITHUB_TOKEN: ...
  K8S_CLUSTER_TOKEN: ...
  K8S_CLUSTER_URL: ...
kind: Secret
metadata:
  creationTimestamp: "2024-05-07T22:22:59Z"
  name: backstage-backend-auth-secret
  namespace: rhdh-operator
  resourceVersion: "4402773"
  uid: 2042e741-346e-4f0e-9d15-1b5492bb9916
type: Opaque
Enabling Monitoring for Workflows
To enable monitoring for workflows, set the following in the Orchestrator CR:
apiVersion: rhdh.redhat.com/v1alpha3
kind: Orchestrator
metadata:
  name: ...
spec:
  ...
  platform:
    ...
    monitoring:
      enabled: true
  ...
After the CR is deployed, follow the instructions to deploy Prometheus, Grafana and the sample Grafana dashboard.
Using Knative eventing communication
To enable eventing communication between the different components (Data Index, Job Service and Workflows), a broker should be used. Kafka is a good candidate as it fulfills the reliability need. The list of available brokers for Knative can be found here: https://knative.dev/docs/eventing/brokers/broker-types/
Alternatively, an in-memory broker can be used, but it is not recommended for production purposes.
Follow these instructions to set up the Knative broker communication.
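For illustration only, a minimal sketch of a Kafka-backed Knative Broker in the sonataflow-infra namespace might look like the following; the broker class annotation and the kafka-broker-config ConfigMap are assumptions based on the Knative Kafka broker documentation, so follow the linked instructions for the supported setup:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: kafka-broker
  namespace: sonataflow-infra
  annotations:
    # Assumes the Knative Kafka broker implementation is installed on the cluster.
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing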
Proxy configuration
Your Backstage instance might be configured to work with a proxy. In that case you need to tell Backstage to bypass the proxy for requests to the workflow namespaces and the Sonataflow namespace (sonataflow-infra). Add these namespaces to the NO_PROXY environment variable, e.g. NO_PROXY=<current-value-of-no-proxy>,.sonataflow-infra,.my-workflow-namespace. Note the . before each namespace name.
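As a sketch, assuming the RHDH Deployment is named backstage-backstage and runs in the rhdh namespace (adjust both to your installation), the variable could be extended with:
oc -n rhdh set env deployment/backstage-backstage \
  NO_PROXY="<current-value-of-no-proxy>,.sonataflow-infra,.my-workflow-namespace"
Note that if the Deployment is managed by an operator, the change may be reverted on reconciliation and may need to be made in the corresponding CR instead.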
Additional Workflow Namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g., sonataflow-infra),
several essential steps must be followed:
Allow Traffic from the Workflow Namespace:
To allow Sonataflow services to accept traffic from workflows, either create an additional network policy or update
the existing policy with the new workflow namespace.
Create Additional Network Policy
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-workflows-to-sonataflow-infra
  # Namespace where network policies are deployed
  namespace: sonataflow-infra
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # Allow Sonataflow services to accept traffic from the new/additional workflow namespace.
              kubernetes.io/metadata.name: <new-workflow-namespace>
EOF
Alternatively - Update Existing Network Policy
oc -n sonataflow-infra patch networkpolicy allow-rhdh-to-sonataflow-and-workflows --type='json' \
  -p='[
    {
      "op": "add",
      "path": "/spec/ingress/0/from/-",
      "value": {
        "namespaceSelector": {
          "matchLabels": {
            "kubernetes.io/metadata.name": "<new-workflow-namespace>"
          }
        }
      }
    }]'
Identify the RHDH Namespace:
Retrieve the namespace where RHDH is running and store its value in $RHDH_NAMESPACE; it is used in the network policy manifest below.
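One way to find it, assuming RHDH was installed through the RHDH operator and exposes a Backstage CR (a sketch, adjust to your installation):
RHDH_NAMESPACE=$(oc get backstage -A -o jsonpath='{.items[0].metadata.namespace}')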
Identify the Sonataflow Services Namespace:
Check the namespace where Sonataflow services are deployed:
oc get sonataflowclusterplatform -A
If there is no cluster platform, check for a namespace-specific platform:
oc get sonataflowplatform -A
Store the namespace value in $WORKFLOW_NAMESPACE.
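For example, assuming a single SonataFlowPlatform exists on the cluster, the value can be captured with a sketch like:
WORKFLOW_NAMESPACE=$(oc get sonataflowplatform -A -o jsonpath='{.items[0].metadata.namespace}')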
Set Up a Network Policy:
Configure a network policy to allow traffic only between RHDH, Knative, Sonataflow services, and workflows.
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rhdh-to-sonataflow-and-workflows
  namespace: $ADDITIONAL_NAMESPACE
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # Allows traffic from pods in the RHDH namespace.
              kubernetes.io/metadata.name: $RHDH_NAMESPACE
        - namespaceSelector:
            matchLabels:
              # Allows traffic from pods in the workflow namespace.
              kubernetes.io/metadata.name: $WORKFLOW_NAMESPACE
        - namespaceSelector:
            matchLabels:
              # Allows traffic from pods in the Knative Eventing namespace.
              kubernetes.io/metadata.name: knative-eventing
        - namespaceSelector:
            matchLabels:
              # Allows traffic from pods in the Knative Serving namespace.
              kubernetes.io/metadata.name: knative-serving
EOF
To allow unrestricted communication between all pods within the workflow’s namespace, create the allow-intra-namespace network policy.
oc create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace
  namespace: $ADDITIONAL_NAMESPACE
spec:
  # Apply this policy to all pods in the namespace
  podSelector: {}
  # Specify policy type as 'Ingress' to control incoming traffic rules
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allow ingress from any pod within the same namespace
        - podSelector: {}
EOF
Ensure Persistence for the Workflow:
If persistence is required, the workflow must be given credentials for the PostgreSQL instance and a reference to its service, which may live in a different namespace (see the sketch below). With that in place, the workflow will have the necessary credentials to access PostgreSQL and will correctly reference the service in a different namespace.
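As an illustration only, a SonataFlow workflow CR can point at a PostgreSQL instance in another namespace through its persistence section; the secret, service and namespace names below are placeholders and must match your PostgreSQL deployment:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: my-workflow
  namespace: my-workflow-namespace
spec:
  persistence:
    postgresql:
      secretRef:
        name: sonataflow-psql-postgresql   # placeholder secret holding the DB credentials
        userKey: postgres-username
        passwordKey: postgres-password
      serviceRef:
        name: sonataflow-psql-postgresql   # placeholder PostgreSQL Service name
        namespace: postgres-namespace      # namespace where PostgreSQL runs
        databaseName: sonataflow
  # ... rest of the workflow spec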
GitOps environment
See the dedicated document.
Deploying PostgreSQL reference implementation
See here
ArgoCD and workflow namespace
If you manually created the workflow namespaces (e.g., $WORKFLOW_NAMESPACE), run this command to add the required label that allows ArgoCD to deploy instances there:
oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE
Workflow installation
Follow Workflows Installation
Cleanup
/!\ Before removing the Orchestrator, make sure you have first removed any installed workflows. Otherwise the deletion may hang in a terminating state.
To remove the operator, first remove the operand resources.
Run:
oc delete namespace orchestrator
to delete the Orchestrator CR. This removes the OSL, Serverless and RHDH operators, and the Sonataflow CRs.
To clean up the rest of the resources, run:
oc delete namespace sonataflow-infra rhdh
If you want to remove Knative related resources, you may also run:
oc get crd -o name | grep -e knative | xargs oc delete
To remove the operator from the cluster, delete the subscription:
oc delete subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators
Note that the CRDs created during the installation process will remain in the cluster.
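If you also want to remove those leftover CRDs, a sketch mirroring the Knative cleanup above is shown below; review the matched names before deleting, as this is destructive:
oc get crd -o name | grep -e sonataflow -e rhdh | xargs oc delete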
Compatibility Matrix between Orchestrator Operator and Dependencies
| Orchestrator Operator | RHDH | OSL | Serverless |
|---|---|---|---|
| Orchestrator 1.5.0 | 1.5.1 | 1.35.0 | 1.35.0 |
Compatibility Matrix for Orchestrator Plugins
Troubleshooting/Known Issue
Zip bomb detected with Orchestrator Plugin
Currently, there is a known issue with the RHDH pod starting up due to the size of the orchestrator plugin. The error Zip bomb detected in backstage-plugin-orchestrator-1.5.0 will be seen, and this can be resolved by increasing the MAX_ENTRY_SIZE of the initContainer which downloads the plugins. This will be resolved in the next operator release.
More information can be found here.
To fix this issue, run the following patch command within the RHDH instance namespace:
oc -n <rhdh-namespace> patch backstage <rhdh-name> --type=merge -p '{
  "spec": {
    "deployment": {
      "patch": {
        "spec": {
          "template": {
            "spec": {
              "initContainers": [
                {
                  "name": "install-dynamic-plugins",
                  "env": [
                    {
                      "name": "MAX_ENTRY_SIZE",
                      "value": "30000000"
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }
  }
}'
4.6 - Orchestrator on Kubernetes
The following guide is for installing on a Kubernetes cluster. It is well tested and working in CI with a kind installation.
Here’s a kind configuration that is easy to work with (the apiserver port is static, so the kubeconfig is always the same)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 16443
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
      - |
        kind: KubeletConfiguration
        localStorageCapacityIsolation: true
    extraPortMappings:
      - containerPort: 80
        hostPort: 9090
        protocol: TCP
      - containerPort: 443
        hostPort: 9443
        protocol: TCP
  - role: worker
Save this file as kind-config.yaml, and now run:
kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
The cluster should be up and running with the Contour ingress controller installed, so localhost:9090 will direct traffic to Backstage because of the ingress created by the Helm chart on port 80.
Orchestrator-k8s helm chart
This chart will install the Orchestrator and all its dependencies on kubernetes.
THIS CHART IS NOT SUITED FOR PRODUCTION PURPOSES; use it only for development or testing.
The chart deploys:
Usage
helm repo add orchestrator https://rhdhorchestrator.github.io/orchestrator-helm-chart
helm install orchestrator orchestrator/orchestrator-k8s
Configuration
All of the backstage app-config is derived from the values.yaml.
Secrets as env vars:
To use secrets as env vars, like the one used for the notifications, see charts/Orchestrator-k8s/templates/secret.yaml.
Every key in that secret will be available in the app-config for resolution.
Development
git clone https://github.com/rhdhorchestrator/orchestrator-helm-chart
cd orchestrator-helm-chart/charts/orchestrator-k8s
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
helm repo add postgresql https://charts.bitnami.com/bitnami
helm repo add redhat-developer https://redhat-developer.github.io/rhdh-chart
helm repo add workflows https://rhdhorchestrator.io/serverless-workflows-config
helm dependencies build
helm install orchestrator .
The output should look like this:
$ helm install orchestrator .
Release "orchestrator" has been upgraded. Happy Helming!
NAME: orchestrator
LAST DEPLOYED: Tue Sep 19 18:19:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
This chart will install RHDH-backstage(RHDH upstream) + Serverless Workflows.
To get RHDH's route location:
$ oc get route orchestrator-white-backstage -o jsonpath='https://{ .spec.host }{"\n"}'
To get the serverless workflow operator status:
$ oc get deploy -n sonataflow-operator-system
To get the serverless workflows status:
$ oc get sf
The chart notes will provide more information on:
- route location of backstage
- the sonata operator status
- the sonata workflow deployed status
4.7 - Orchestrator on existing RHDH instance
When RHDH is already installed and in use, reinstalling it is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:
- Utilize the Orchestrator operator to install the requisite components, such as the OpenShift Serverless Logic Operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
- Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
- Import the Orchestrator software templates into the Backstage catalog.
Prerequisites
- RHDH is already deployed with a running Backstage instance.
- Software templates for workflows require a GitHub provider to be configured.
- Ensure that a PostgreSQL database is available and that you have credentials to manage the tablespace (optional).
- For your convenience, a reference implementation is provided.
- If you already have a PostgreSQL database installed, please refer to this note regarding default settings.
In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.
The installation steps are detailed here.
4.8 - Workflows
In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.
4.8.1 - Deploy From Helm Repository
Orchestrator Workflows Helm Repository
This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.
The repository includes a variety of serverless workflows, such as:
- Greeting: A basic example workflow to demonstrate functionality.
- Migration Toolkit for Application Analysis (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing the applications.
- Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.
- …
Usage
Prerequisites
To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository
Installation
helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows
View available workflows on the Helm repository:
helm search repo orchestrator-workflows
The expected result should look like (with different versions):
NAME CHART VERSION APP VERSION DESCRIPTION
orchestrator-workflows/greeting 0.4.2 1.16.0 A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube 0.2.16 1.16.0 A Helm chart to deploy the move2kube workflow.
orchestrator-workflows/mta 0.2.16 1.16.0 A Helm chart for MTA serverless workflow
orchestrator-workflows/workflows 0.2.24 1.16.0 A Helm chart for serverless workflows
...
You can install the workflows by following their respective README files.
Installing workflows in additional namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.
Version Compatibility
The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it.
The list below outlines the compatibility between the workflows and Orchestrator versions:
| Workflows | Chart Version | Orchestrator Operator Version |
|---|---|---|
| move2kube | 1.5.x | 1.5.x |
| create-ocp-project | 1.5.x | 1.5.x |
| request-vm-cnv | 1.5.x | 1.5.x |
| modify-vm-resources | 1.5.x | 1.5.x |
| mta-v7 | 1.5.x | 1.5.x |
| mtv-migration | 1.5.x | 1.5.x |
| mtv-plan | 1.5.x | 1.5.x |
Helm index
https://www.rhdhorchestrator.io/serverless-workflows/index.yaml
5 - Serverless Workflows
A serverless workflow in Orchestrator refers to a sequence of operations that run in response to user input (optional) and produce output (optional) without requiring any ongoing management of the underlying infrastructure. The workflow is executed automatically, and frees users from having to manage or provision servers. This simplifies the process by allowing the focus to remain on the logic of the workflow, while the infrastructure dynamically adapts to handle the execution.
5.1 - Assessment
5.1.1 - MTA Analysis
MTA - migration analysis workflow
Synopsis
This workflow is an assessment workflow type that invokes an application analysis workflow using MTA and, if the analysis is considered successful, returns a reference to the move2kube workflow to run next.
Users are encouraged to use this workflow as a self-service alternative to interacting with the MTA UI. Instead of running a mass migration of projects from a managed place, project stakeholders can use this workflow (or automation) to regularly check the cloud-readiness of their code.
Workflow application configuration
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value |
|---|---|---|---|
| BACKSTAGE_NOTIFICATIONS_URL | The backstage server URL for notifications | ✅ | |
| NOTIFICATIONS_BEARER_TOKEN | The authorization bearer token to use to send notifications | ✅ | |
| MTA_URL | The MTA Hub server URL | ✅ | |
- repositoryUrl [mandatory] - the git repo URL to examine
- recipients [mandatory] - a list of recipients for the notification in the format of user:<namespace>/<username> or group:<namespace>/<groupname>, i.e. user:default/jsmith.
Output
- On completion, the workflow returns an options structure in the exit state of the workflow (also named variables in SonataFlow) linking to the move2kube workflow that will generate k8s manifests for container deployment.
- When the workflow completes there should be a report link on the exit state of the workflow (also named variables in SonataFlow). Currently this works with MTA version 6.2.x; in the future 7.x version the report link will be removed or made optional, and instead of an HTML report the workflow will use a machine-friendly JSON file.
Dependencies
Runtime configuration
| key | default | description |
|---|---|---|
| mta.url | http://mta-ui.openshift-mta.svc.cluster.local:8080 | Endpoint (with protocol and port) for MTA |
| quarkus.rest-client.mta_json.url | ${mta.url}/hub | MTA hub API |
| quarkus.rest-client.notifications.url | ${BACKSTAGE_NOTIFICATIONS_URL:http://backstage-backstage.rhdh-operator/api/notifications/} | Backstage notifications URL |
| quarkus.rest-client.mta_json.auth.basicAuth.username | username | Username for the MTA API |
| quarkus.rest-client.mta_json.auth.basicAuth.password | password | Password for the MTA API |
All the configuration items are in ./application.properties.
For running and testing the workflow refer to mta testing.
Workflow Diagram

Installation
See official installation guide
5.2 - Infrastructure
5.2.1 - Simple Escalation
Simple escalation workflow
An escalation workflow integrated with Atlassian JIRA using SonataFlow.
Prerequisite
- Access to a Jira server (URL, user and API token)
- Access to an OpenShift cluster with the admin role
Workflow diagram

Note:
The value of the .jiraIssue.fields.status.statusCategory.key field is the one to use to identify when the done status is reached; all the other similar fields are subject to translation to the configured language and cannot be used for a consistent check.
Application configuration
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value |
|---|---|---|---|
| JIRA_URL | The Jira server URL | ✅ | |
| JIRA_USERNAME | The Jira server username | ✅ | |
| JIRA_API_TOKEN | The Jira API token | ✅ | |
| JIRA_PROJECT | The key of the Jira project where the escalation issue is created | ❌ | TEST |
| JIRA_ISSUE_TYPE | The ID of the Jira issue type to be created | ✅ | |
| OCP_API_SERVER_URL | The OpenShift API server URL | ✅ | |
| OCP_API_SERVER_TOKEN | The OpenShift API server token | ✅ | |
| ESCALATION_TIMEOUT_SECONDS | The number of seconds to wait before triggering the escalation request, after the issue has been created | ❌ | 60 |
| POLLING_PERIODICITY (1) | The polling periodicity of the issue state checker, according to ISO 8601 duration format | ❌ | PT6S |
(1) This is still hardcoded as PT5S while waiting for a fix to KOGITO-9811.
How to run
Example of POST to trigger the flow (see input schema in ocp-onboarding-schema.json):
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/ticket-escalation -d '{"namespace": "_YOUR_NAMESPACE_"}'
Tips:
- Visit Workflow Instances
- Visit the [Data Index Query Service](http://localhost:8080/q/graphql-ui/)
5.2.2 - Move2Kube
Move2kube (m2k) workflow
Context
This workflow is using https://move2kube.konveyor.io/ to migrate the existing code contained in a git repository to a K8s/OCP platform.
Once the transformation is over, move2kube provides a zip file containing the transformed repo.
Design diagram

Workflow

Note that if an error occurs during the migration planning, there is no feedback given by the move2kube instance API. To overcome this, we defined a maximum number of retries (move2kube_get_plan_max_retries) to execute while getting the plan before exiting with an error. By default the value is set to 10, and it can be overridden with the environment variable MOVE2KUBE_GET_PLAN_MAX_RETRIES.
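As a sketch of how the variable could be overridden on the deployed workflow, assuming the workflow is managed by a SonataFlow CR named m2k and that the CR exposes podTemplate environment variables:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: m2k
spec:
  podTemplate:
    container:
      env:
        - name: MOVE2KUBE_GET_PLAN_MAX_RETRIES
          value: "20"   # override the default of 10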
Workflow application configuration
Move2kube workflow
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value |
|---|---|---|---|
| MOVE2KUBE_URL | The move2kube instance server URL | ✅ | |
| BACKSTAGE_NOTIFICATIONS_URL | The backstage server URL for notifications | ✅ | |
| NOTIFICATIONS_BEARER_TOKEN | The authorization bearer token to use to send notifications | ✅ | |
| MOVE2KUBE_GET_PLAN_MAX_RETRIES | The amount of retries to get the plan before failing the workflow | ❌ | 10 |
m2k-func serverless function
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value |
|---|---|---|---|
| MOVE2KUBE_API | The move2kube instance server URL | ✅ | |
| SSH_PRIV_KEY_PATH | The absolute path to the SSH private key | ✅ | |
| BROKER_URL | The knative broker URL | ✅ | |
| LOG_LEVEL | The log level | ❌ | INFO |
Components
The use case has the following components:
- m2k: the Sonataflow resource representing the workflow. A matching Deployment is created by the SonataFlow operator.
- m2k-save-transformation-func: the Knative Service resource that holds the service retrieving the move2kube instance output and saving it to the git repository. A matching Deployment is created by Knative.
- move2kube instance: the Deployment running the move2kube instance.
- Knative Triggers:
  - m2k-save-transformation-event: event sent by the m2k workflow that will trigger the execution of m2k-save-transformation-func.
  - transformation-saved-trigger-m2k: event sent by m2k-save-transformation-func if/once the move2kube output is successfully saved to the git repository.
  - error-trigger-m2k: event sent by m2k-save-transformation-func if an error occurs while saving the move2kube output to the git repository.
- The Knative Broker named default, which links the components together.
Installation
See official installation guide
Usage
- Create a workspace and a project under it in your move2kube instance.
  - You can reach your move2kube instance by running:
    oc -n sonataflow-infra get routes
    Sample output:
    NAME              HOST/PORT                                                                         PATH   SERVICES        PORT    TERMINATION   WILDCARD
    move2kube-route   move2kube-route-sonataflow-infra.apps.cluster-c68jb.dynamic.redhatworkshops.io           move2kube-svc   <all>   edge          None
- Go to the backstage instance. To get its URL, you can run:
    oc -n rhdh-operator get routes
  Sample output:
    NAME                  HOST/PORT                                                                          PATH   SERVICES              PORT           TERMINATION     WILDCARD
    backstage-backstage   backstage-backstage-rhdh-operator.apps.cluster-c68jb.dynamic.redhatworkshops.io   /      backstage-backstage   http-backend   edge/Redirect   None
- Go to the Orchestrator page.
- Click on the Move2Kube workflow and then click the run button on the top right of the page.
- In the repositoryURL field, put the URL of your git project.
- In the sourceBranch field, put the name of the branch holding the project you want to transform.
- In the targetBranch field, put the name of the branch in which you want the move2kube output to be persisted. If the branch exists, the workflow will fail.
- In the workspaceId field, put the ID of the move2kube instance workspace to use for the transformation. Use the ID of the workspace created at the first step, e.g. a46b802d-511c-4097-a5cb-76c892b48d71.
- In the projectId field, put the ID of the move2kube instance project under the previous workspace to use for the transformation. Use the ID of the project created at the first step, e.g. 9c7f8914-0b63-4985-8696-d46c17ba4ebe.
- Then click on nextStep.
- Click on run to trigger the execution.
- Once a new transformation has started and is waiting for your input, you will receive a notification with a link to the Q&A.
- Once you have completed the Q&A, the process will continue and the output of the transformation will be saved in your git repository; you will receive a notification to inform you of the completion of the workflow.
  - You can now clone the repository and check out the output branch to deploy your manifests to your cluster. You can check the move2kube documentation if you need guidance on how to deploy the generated artifacts.
5.3 - Development
Serverless-Workflows
This repository contains multiple workflows. Each workflow is represented by a directory in the project. Below is a table listing all available workflows:
| Workflow Name | Description |
|---|---|
| create-ocp-project | Sets up an OpenShift Container Platform (OCP) project. |
| escalation | Demos workflow ticket escalation. |
| greeting | Sample greeting workflow. |
| modify-vm-resources | Modifies resources allocated to virtual machines. |
| move2kube | Workflow for Move2Kube tasks and transformation. |
| mta-v7.x | Migration toolkit for applications, version 7.x. |
| mtv-migration | Migration tasks using Migration Toolkit for Virtualization (MTV). |
| mtv-plan | Planning workflows for Migration Toolkit for Virtualization. |
| request-vm-cnv | Requests and provisions VMs using Container Native Virtualization (CNV). |
Here is the layout of directories per workflow. Each folder contains at least:
- application.properties - the configuration items specific to the workflow app itself.
- ${workflow}.sw.yaml - the serverless workflow definition, written with respect to the best practices.
- specs/ - optional folder with OpenAPI specs if the flow needs them.
All .svg files can be ignored; there is no real functional use for them in deployment and all of them are created by the VSCode extension.
Every workflow has a matching container image pushed to quay.io by a GitHub workflow in the form of quay.io/orchestrator/serverless-workflow-${workflow}.
Current image statuses:
After image publishing, a GitHub action generates the Kubernetes manifests and pushes a PR to the workflows helm chart repo under a directory matching the workflow name. This repo is used to deploy the workflows to an environment with the Sonataflow operator running.
How to introduce a new workflow
Follow these steps to successfully add a new workflow:
1. Create a folder under the root with the name of the flow, e.g. /onboarding.
2. Copy application.properties and onboarding.sw.yaml into that folder.
3. Create a GitHub workflow file .github/workflows/${workflow}.yaml that will call the main workflow (see greeting.yaml).
4. Create a pull request, but don’t merge it yet.
5. Send a pull request to the serverless-workflows-config repository to add a sub-chart under the path charts/workflows/charts/onboarding. You can copy the greeting sub-chart directory and files.
6. Create a PR to the serverless-workflows-config repository and make sure it is merged.
7. Now the PR from step 4 can be merged, and an automatic PR will be created with the generated manifests. Review and merge.
See Continuous Integration with make for implementation details of the CI pipeline.
Builder image
There are two builder images under the ./pipeline folder:
- workflow-builder-dev.Dockerfile - references the nightly build image from docker.io/apache/incubator-kie-sonataflow-builder:main, which doesn’t require any authorization.
- workflow-builder.Dockerfile - references the OpenShift Serverless Logic builder image from registry.redhat.io, which requires authorization.
  - To use this Dockerfile locally, you must be logged in to registry.redhat.io. To get access to that registry:
    - Get tokens here. Once logged in to podman, you should be able to pull the image.
    - Verify pulling the image here.
Note on CI:
For every PR merged in the workflow directory, a GitHub Action runs an image build to generate manifests, and a new PR is automatically generated in the serverless-workflows-config repository. The credentials used by the build process are defined as organization level secret, and the content is from a token on the helm repo with an expiry period of 60 days. Currently only the repo owner (rgolangh) can recreate the token. This should be revised.
5.4 - Workflow Examples
Our Orchestrator Serverless Workflow Examples repository, located at GitHub, provides a collection of sample workflows designed to help you explore and understand how to build serverless workflows using Orchestrator. These examples showcase a range of use cases, demonstrating how workflows can be developed, tested, and executed based on various inputs and conditions.
Please note that this repository is intended for development and testing purposes only. It serves as a reference for developers looking to create custom workflows and experiment with serverless orchestration concepts. These examples are not optimized for production environments and should be used to guide your own development processes.
5.5 - Troubleshooting
Troubleshooting Guide
This document provides solutions to common problems encountered with serverless workflows.
Table of Contents
- HTTP Errors
- Workflow Errors
- Configuration Problems
- Performance Issues
- Error Messages
- Network Problems
- Common Scenarios
- Contact Support
HTTP Errors
Many workflow operations are REST requests to REST endpoints. If an HTTP error occurs, the workflow fails and the HTTP code and message are displayed. Here is an example of the error in the UI.
Please use the HTTP codes documentation to understand the meaning of such errors.
Here are some examples:
- 409: usually indicates that we are trying to update or create a resource that already exists, e.g. K8S/OCP resources.
- 401: unauthorized access. A token, password or username might be wrong or expired.
Workflow Errors
Problem: Workflow execution fails
Solution:
- Examine the container log of the workflow
oc logs my-workflow-xy73lj
Problem: Workflow is not listed by the orchestrator plugin
Solution:
Examine the container status and logs
oc get pods my-workflow-xy73lj
oc logs my-workflow-xy73lj
Most probably the Data index service was unready when the workflow started.
Typically this is what the log shows:
2024-07-24 21:10:20,837 ERROR [org.kie.kog.eve.pro.ReactiveMessagingEventPublisher] (main) Error while creating event to topic kogito-processdefinitions-events for event ProcessDefinitionDataEvent {specVersion=1.0, id='586e5273-33b9-4e90-8df6-76b972575b57', source=http://mtaanalysis.default/MTAAnalysis, type='ProcessDefinitionEvent', time=2024-07-24T21:10:20.658694165Z, subject='null', dataContentType='application/json', dataSchema=null, data=org.kie.kogito.event.process.ProcessDefinitionEventBody@7de147e9, kogitoProcessInstanceId='null', kogitoRootProcessInstanceId='null', kogitoProcessId='MTAAnalysis', kogitoRootProcessId='null', kogitoAddons='null', kogitoIdentity='null', extensionAttributes={kogitoprocid=MTAAnalysis}}: java.util.concurrent.CompletionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.default/10.96.15.153:80
Check if you use a cluster-wide platform:
$ oc get sonataflowclusterplatforms.sonataflow.org
cluster-platform
If you have one, as in the example output, use the sonataflow-infra namespace when you look for the Sonataflow services.
Make sure the Data Index is ready, and restart the workflow - notice the sonataflow-infra namespace usage:
$ oc get pods -l sonataflow.org/service=sonataflow-platform-data-index-service -n sonataflow-infra
NAME READY STATUS RESTARTS AGE
sonataflow-platform-data-index-service-546f59f89f-b7548 1/1 Running 0 11kh
$ oc rollout restart deployment my-workflow
Problem: Workflow is failing to reach an HTTPS endpoint because it can’t verify it
Solution:
- If this happens, you need to load the additional CA cert into the running workflow container. To do so, please follow this guide from the SonataFlow guides site: https://sonataflow.org/serverlessworkflow/main/cloud/operator/add-custom-ca-to-a-workflow-pod.html
Configuration Problems
Problem: Workflow installed in a different namespace than Sonataflow services fails to start
Solution:
When deploying a workflow in a namespace other than the one where Sonataflow services are running (e.g., sonataflow-infra), there are essential steps to follow to enable persistence and connectivity for the workflow. See the following steps.
- Ensure the PostgreSQL pod has fully started. If the PostgreSQL pod is still initializing, allow additional time for it to become fully operational before expecting the DataIndex and JobService pods to connect.
- Verify network policies if the PostgreSQL server is in a different namespace. If the PostgreSQL server is deployed in a separate namespace from the Sonataflow services (e.g., not in the sonataflow-infra namespace), ensure that network policies in the PostgreSQL namespace allow ingress from the Sonataflow services namespace (e.g., sonataflow-infra). Without appropriate ingress rules, network policies may prevent the DataIndex and JobService pods from connecting to the database. A sketch of such a policy is shown below.
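A minimal sketch of such an ingress rule, assuming PostgreSQL runs in a namespace called postgres-namespace (adjust the names and labels to your environment):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sonataflow-services-to-postgres
  namespace: postgres-namespace
spec:
  # Applies to all pods in the PostgreSQL namespace
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # Allow ingress from the Sonataflow services namespace
              kubernetes.io/metadata.name: sonataflow-infra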
5.6 - Configuration
5.6.1 - Configure workflow for token propagation
By default, the RHDH Orchestrator plugin adds headers for each token in the ‘authTokens’ field of the POST request that is used to trigger a workflow execution. Those headers will be in the following format: X-Authorization-{provider}: {token}.
This allows the user identity to be propagated to the third parties and external services called by the workflow.
To do so, a set of properties must be set in the workflow application.properties file.
Prerequisites
- Having a Keycloak instance running with a client
- Having RHDH with the latest version of the Orchestrator plugins
- Having a workflow that uses an OpenAPI spec file to send REST requests to a service. Using a custom REST function within the workflow will not propagate the token; propagation is only possible when using an OpenAPI specification file.
Build
When building the workflow’s image, you will need to make sure the following extensions are present in the QUARKUS_EXTENSIONS:
- io.quarkus:quarkus-oidc-client-filter # needed for propagation
- io.quarkus:quarkus-oidc # needed for the token validity check, thus giving access to $WORKFLOW.identity
See https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/scripts/build.sh#L180 to see how we do it.
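For illustration, assuming the builder image accepts a QUARKUS_EXTENSIONS build argument (as the Dockerfiles mentioned in the Development section do), the extensions could be added at build time roughly like this:
podman build \
  --build-arg QUARKUS_EXTENSIONS="io.quarkus:quarkus-oidc-client-filter,io.quarkus:quarkus-oidc" \
  -f workflow-builder.Dockerfile .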
Configuration
Oauth2
- In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you’re interested in. All endpoints may use the same security scheme if configured globally. E.g.:
components:
  securitySchemes:
    BearerToken:
      type: oauth2
      flows:
        clientCredentials:
          tokenUrl: http://<keycloak>/realms/<yourRealm>/protocol/openid-connect/token
          scopes: {}
      description: Bearer Token authentication
- In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>
# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service
# Properties for propagation
quarkus.oidc-client.BearerToken.auth-server-url=${auth-server-url}
quarkus.oidc-client.BearerToken.token-path=${auth-server-url}/protocol/openid-connect/token
quarkus.oidc-client.BearerToken.discovery-enabled=false
quarkus.oidc-client.BearerToken.client-id=${client-id}
quarkus.oidc-client.BearerToken.grant.type=client
quarkus.oidc-client.BearerToken.credentials.client-secret.method=basic
quarkus.oidc-client.BearerToken.credentials.client-secret.value=${client-secret}
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>
With:
- spec_file_yaml_or_json: the name of the spec file, normalized with _ as separator. E.g. if the file name is simple-server.yaml, the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
- security_scheme: the name of the security scheme for which to propagate the token located in the header defined by the header-name property. In our example it would be BearerToken.
- provider: the name of the expected provider from which the token comes. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.
- keycloak: the URL of the running Keycloak instance.
- yourRealm: the name of the realm to use.
- client ID: the ID of the Keycloak client to use to authenticate against the Keycloak instance.
See https://sonataflow.org/serverlessworkflow/latest/security/authention-support-for-openapi-services.html#ref-authorization-token-propagation and https://quarkus.io/guides/security-openid-connect-client-reference#token-propagation-rest for more information about token propagation.
Setting the quarkus.oidc.* properties enforces the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.
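For illustration, once the quarkus.oidc.* properties are set, a sketch of an expression function that stores the caller identity in the workflow state could look like this (the function name is only an example):
functions:
  - name: getIdentity
    type: expression
    # Copies the authenticated caller's identity into the workflow data
    operation: '.identity=$WORKFLOW.identity'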
Bearer token
- In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you’re interested in. All endpoints may use the same security scheme if configured globally. E.g.:
components:
  securitySchemes:
    SimpleBearerToken:
      type: http
      scheme: bearer
- In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>
# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>
With:
- spec_file_yaml_or_json: the name of the spec file, normalized with _ as separator. E.g. if the file name is simple-server.yaml, the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
- security_scheme: the name of the security scheme for which to propagate the token located in the header defined by the header-name property. In our example it would be SimpleBearerToken.
- provider: the name of the expected provider from which the token comes. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.
Setting the quarkus.oidc.* properties enforces the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.
Basic auth
Basic auth token propagation is not currently supported.
A pull request has been opened to add support for it: https://github.com/quarkiverse/quarkus-openapi-generator/pull/1078
With Basic auth, the $WORKFLOW.identity is not available.
Instead, you could access the header directly via $WORKFLOW.headers.X-Authorization-{provider} and decode it:
functions:
  - name: getIdentity
    type: expression
    operation: '.identity=($WORKFLOW.headers["x-authorization-basic"] | @base64d | split(":")[0])' # mind the lower case!
You can see a full example here: https://github.com/rhdhorchestrator/workflow-token-propagation-example.
5.7 - Best Practices
Best practices when creating a workflow
A workflow should be developed in accordance with the guidelines outlined in the Serverless Workflow definitions documentation.
This document provides a summary of several additional rules and recommendations to ensure smooth integration with other applications, such as the Backstage Orchestrator UI.
Workflow output schema
To effectively display the results of the workflow and any optional outputs generated by the user interface, or to facilitate the chaining of workflow executions, it is important for a workflow to deliver its output data in a recognized structured format as defined by the WorkflowResult schema.
The output meant for further processing should be placed under the data.result property.
id: my-workflow
version: "1.0"
specVersion: "0.8"
name: My Workflow
start: ImmediatelyEnd
extensions:
  - extensionid: workflow-output-schema
    outputSchema: schemas/workflow-output-schema.json
states:
  - name: ImmediatelyEnd
    type: inject
    data:
      result:
        message: A human-readable description of the successful status. Or an error.
        outputs:
          - key: Foo Bar human readable name which will be shown in the UI
            value: Example string value produced on the output. This might be an input for a next workflow.
        nextWorkflows:
          - id: my-next-workflow-id
            name: Next workflow name suggested if this is an assessment workflow. Human readable, its text does not need to match the true workflow name.
    end: true
Then the schemas/workflow-output-schema.json can look like this (referencing the WorkflowResult schema):
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "WorkflowResult",
  "description": "Schema of workflow output",
  "type": "object",
  "properties": {
    "result": {
      "$ref": "shared/schemas/workflow-result-schema.json",
      "type": "object"
    }
  }
}
6 - Plugins
6.1 - Notifications Plugin
How to get started with the notifications and signals
The Backstage Notifications System provides a way for plugins and external services to send notifications to Backstage users.
These notifications are displayed in the dedicated page of the Backstage frontend UI or by frontend plugins per specific scenarios.
Additionally, notifications can be sent to external channels (like email) via “processors” implemented within plugins.
Upstream documentation can be found in:
Frontend
Notifications are messages sent to either individual users or groups. They are not intended for inter-process communication of any kind.
To list and manage notifications, choose Notifications from the left-side menu.
There are two basic types of notifications:
- Broadcast: Messages sent to all users of Backstage.
- Entity: Messages delivered to specific listed entities from the Catalog, such as Users or Groups.

Backend
The backend plugin provides the backend application for reading and writing notifications.
Authentication
The Notifications are primarily meant to be sent by backend plugins. In such a flow, the authentication is shared among them.
To let external systems (like a workflow) create new notifications by sending POST requests to the Notifications REST API, authentication needs to be properly configured via the backend.auth.externalAccess property of the app-config.
Refer to the service-to-service auth documentation for more details, focusing on the Static Tokens section as the simplest setup option.
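For example, a static token entry in the app-config could look like the sketch below; the environment variable name and subject are placeholders to adapt to your setup:
backend:
  auth:
    externalAccess:
      - type: static
        options:
          # Token presented by the external caller (e.g. a workflow) in the Authorization header
          token: ${EXTERNAL_CALLER_TOKEN}
          subject: external-workflow-notifications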
Creating a notification by external services
An example request for creating a broadcast notification can look like:
curl -X POST https://[BACKSTAGE_BACKEND]/api/notifications -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_BASE64_SHARED_KEY_TOKEN" -d '{"recipients":{"type":"broadcast"},"payload": {"title": "Title of broadcast message","link": "http://foo.com/bar","severity": "high","topic": "The topic"}}'
Configuration
Configuration of the dynamic plugins is in the dynamic-plugins-rhdh ConfigMap created by the Helm chart during installation.
Frontend configuration
Usually there is no need to change the defaults, but small tweaks can be done on the props section:
frontend:
  redhat.plugin-notifications:
    dynamicRoutes:
      - importName: NotificationsPage
        menuItem:
          config:
            props:
              titleCounterEnabled: true
              webNotificationsEnabled: false
          importName: NotificationsSidebarItem
        path: /notifications
Backend configuration
Except setting authentication for external callers, there is no special plugin configuration needed.
Forward to Email
It is possible to forward notification content to an email address. In order to do that, you must add the Email Processor Module to your Backstage backend.
Configuration
Configuration options can be found in plugin’s documentation.
Example configuration:
pluginConfig:
  notifications:
    processors:
      email:
        filter:
          minSeverity: low
          maxSeverity: critical
          excludedTopics: []
        broadcastConfig:
          receiver: config # or none or users
          receiverEmails:
            - foo@company.com
            - bar@company.com
        cache:
          ttl:
            days: 1
        concurrencyLimit: 10
        replyTo: email@company.com
        sender: email@company.com
        transportConfig:
          hostname: your.smtp.host.com
          password: a-password
          username: a-smtp-username
          port: 25
          secure: false
          transport: smtp
Ignoring unwanted notifications
The configuration of the module explains how to configure filters. Filters are used to ignore notifications that should not be forwarded to email. The supported filters include minimum/maximum severity and list of excluded topics.
User notifications
Each user notification has a list of recipients. A recipient is an entity in the Backstage catalog. The notification will be sent to the email addresses of the recipients.
Broadcast notifications
In broadcast notifications, there are no recipients; the notifications are delivered to all users.
The module’s configuration supports a few options for broadcast notifications:
- Ignoring broadcast notifications to be forwarded
- Sending to predefined address list only
- Sending to all users whose catalog entity has an email