1 - Documentation

Orchestrator

Choose a section from the list below. For an introduction to the Orchestrator, check the Quick Start.

1.1 - Quick Start

Quickstart Guide

This quickstart guide will help you install Orchestrator via Red Hat Developer Hub (RHDH) and execute a sample workflow through the Orchestrator plugin on the RHDH UI.

  1. Install Orchestrator via RHDH: Choose one of the following installation methods:

  2. Install a sample workflow: Follow the installation instructions for the greeting workflow.

  3. Access Red Hat Developer Hub: Open your web browser and navigate to the Red Hat Developer Hub application. Retrieve the URL using the following OpenShift CLI command.

    oc get route backstage-backstage -n rhdh-operator -o jsonpath='{.spec.host}'
    

    Make sure the route is accessible to you locally.

  4. Login to Backstage: Log in to Backstage with the Guest account.

  5. Navigate to Orchestrator: Navigate to the Orchestrator page by clicking on the Orchestrator icon in the left navigation menu.

  6. Execute Greeting Workflow: Click on the ‘Execute’ button in the ACTIONS column of the Greeting workflow. The ‘Run workflow’ page will open. Click ‘Next step’ and then ‘Run’.

  7. Monitor Workflow Status: Wait for the status of the Greeting workflow execution to become Completed. This may take a moment.

1.2 - Architecture

The Orchestrator architecture comprises several integral components, each contributing to the seamless execution and management of workflows. Illustrated below is a breakdown of these components:

  • Red Hat Developer Hub: Serving as the primary interface, Backstage fulfills multiple roles:
    • Orchestrator Plugins: Both frontend and backend plugins are instrumental in presenting deployed workflows for execution and monitoring.
    • Notifications Plugin: Employs notifications to inform users or groups about workflow events.
  • OpenShift Serverless Logic Operator: This controller manages the Sonataflow custom resource (CR), where each CR denotes a deployed workflow.
  • Sonataflow Runtime/Workflow Application: As a deployed workflow, Sonataflow Runtime is currently managed as a Kubernetes (K8s) deployment by the operator. It operates as an HTTP server, catering to requests for executing workflow instances. Within the Orchestrator deployment, each Sonataflow CR corresponds to a singular workflow. However, outside this scope, Sonataflow Runtime can handle multiple workflows. Interaction with Sonataflow Runtime for workflow execution is facilitated by the Orchestrator backend plugin.
  • Data Index Service: This serves as a repository for workflow definitions, instances, and their associated jobs. It exposes a GraphQL API, utilized by the Orchestrator backend plugin to retrieve workflow definitions and instances.
  • Job Service: Dedicated to orchestrating scheduled tasks for workflows.
  • OpenShift Serverless: This operator furnishes serverless capabilities essential for workflow communication. It employs Knative eventing to interface with the Data Index service and leverages Knative functions to introduce more intricate logic to workflows.
  • OpenShift AMQ Streams (Strimzi/Kafka): While not integrated into the deployment’s current iteration, this operator is crucial for ensuring the reliability of the eventing system.
  • Keycloak: Responsible for authentication and security services within applications. While not installed by the Orchestrator operator, it is essential for enhancing security measures.
  • PostgreSQL Server: Stores both Sonataflow information and Backstage data, providing a robust and reliable database solution for data persistence within the Orchestrator ecosystem.
(Figures: Architecture Context Diagram, Architecture Container Diagram, Architecture Diagram)

1.3 - Installation

In previous Orchestrator versions (<1.6), the Orchestrator operator either triggered an RHDH operator installation or connected to a pre-existing RHDH installation. As of RHDH/Orchestrator 1.7, that is no longer the case: the RHDH operator is responsible for installing the Orchestrator resources, and the Orchestrator will cease to exist as a standalone operator.

Installation Methods

RHDH Operator

RHDH Helm Chart

Workflows

In addition to the Orchestrator deployment, we offer several workflows that can be deployed using their respective installation methods.

1.3.1 - Installation via RHDH Operator

The RHDH Operator provides the most streamlined way to install and configure the Orchestrator plugin on OpenShift clusters. This method handles all infrastructure requirements and plugin configuration automatically.

To install Orchestrator via the RHDH operator, please follow the instructions here.

1.3.2 - Installation via RHDH Helm Charts

For environments where the RHDH Operator is not available, or to have more control on the deployment, you can install the Orchestrator plugin using Helm charts.

To install Orchestrator via the RHDH Helm chart, please follow the instructions here.

1.3.3 - RBAC

The RBAC policies for RHDH Orchestrator plugins v1.7 are listed here.

1.3.4 - Disconnected Environment

To install the Orchestrator and its required components in a disconnected environment, you must mirror the required images and NPM packages. Please ensure the images are added using either ImageDigestMirrorSet or ImageTagMirrorSet, depending on whether they are referenced by digest or by tag.
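
For reference, here is a minimal ImageDigestMirrorSet sketch for digest-based references; the mirror registry host is a placeholder and the source entries only cover two of the repositories listed below, so adjust both to your environment:

apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: orchestrator-digest-mirrors
spec:
  imageDigestMirrors:
    # Pull RHDH images (referenced by digest) from the local mirror registry
    - source: registry.redhat.io/rhdh
      mirrors:
        - mirror.registry.example.com/rhdh
    # Pull OpenShift Serverless images from the local mirror registry
    - source: registry.redhat.io/openshift-serverless-1
      mirrors:
        - mirror.registry.example.com/openshift-serverless-1

Images referenced by tag (for example the logic-* images below) need an equivalent ImageTagMirrorSet.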

Images for a disconnected environment

The following images need to be added to the image registry:

Recommendation:
When fetching the list of required images, ensure that you are using the latest version of the operator bundle when appropriate. This helps avoid missing or outdated image references.

RHDH Operator:

registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:62f32e50727c5006766a34daa5b6472b649cd9894f3fe9543b8ecc67e6760e8e
registry.redhat.io/rhdh/rhdh-operator-bundle@sha256:f743970668c72ff357ebfc910ddd5110c95a39754862d74d31e108c5993c5ace
registry.redhat.io/rhdh/rhdh-rhel9-operator@sha256:d413b6ee53b6271644b9c5d2fc1dc266212b33e419380a6dddbf4613280dfd7a
registry.redhat.io/rhel9/postgresql-15@sha256:4d707fc04f13c271b455f7b56c1fda9e232a62214ffc6213c02e41177dd4a13f

OpenShift Serverless Operator:

registry.access.redhat.com/ubi8/nodejs-20-minimal@sha256:a2a7e399aaf09a48c28f40820da16709b62aee6f2bc703116b9345fab5830861
registry.access.redhat.com/ubi8/openjdk-21@sha256:441897a1f691c7d4b3a67bb3e0fea83e18352214264cb383fd057bbbd5ed863c
registry.access.redhat.com/ubi8/python-39@sha256:27e795fd6b1b77de70d1dc73a65e4c790650748a9cfda138fdbd194b3d6eea3d
registry.redhat.io/openshift-serverless-1/kn-backstage-plugins-eventmesh-rhel8@sha256:69b70200170a2d399ce143dca9aff5fede2d37a74040dc5ddf2206deadc9a33f
registry.redhat.io/openshift-serverless-1/kn-client-cli-artifacts-rhel8@sha256:d8e04e8d46ecec005504652b8cb4ead29452a6a89e47d568df0a24971240e9d9
registry.redhat.io/openshift-serverless-1/kn-client-kn-rhel8@sha256:989cb97cf626ae8637b32d519802250d208f466a5d6ff05d6bab105b978c976a
registry.redhat.io/openshift-serverless-1/kn-ekb-dispatcher-rhel8@sha256:4cb73eedb5c7841bff08ba5e55a48fde37ed9a0921fb88b381eaa7422fe2b00d
registry.redhat.io/openshift-serverless-1/kn-ekb-kafka-controller-rhel8@sha256:4fa519b1d4ef7f0219bae21febe73012ca261c12b3c08a9732088b7dfe37f65a
registry.redhat.io/openshift-serverless-1/kn-ekb-post-install-rhel8@sha256:402956ddf4f8da30aa234cf1d151b02f1bef29de604cad2441d65584117a3912
registry.redhat.io/openshift-serverless-1/kn-ekb-receiver-rhel8@sha256:bd48166615c132dd95a3792a6c610b1d977bad7c126a5532c47330ad3899e1ef
registry.redhat.io/openshift-serverless-1/kn-ekb-webhook-kafka-rhel8@sha256:7a4ffa3ae32dc289917b9a9c7c5ca251dc8586ba64719a126164656eecfeef14
registry.redhat.io/openshift-serverless-1/kn-eventing-apiserver-receive-adapter-rhel8@sha256:8ebbf3cd6a980896e03dc4818dede80856743c24a551d9c399f9b65c0816e2b3
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-controller-rhel8@sha256:b3c9b5db3db34f454a86a81b87843934a5b8e5960cf1fa446650a35b7c2b1778
registry.redhat.io/openshift-serverless-1/kn-eventing-channel-dispatcher-rhel8@sha256:97adc8d4ab32770e00a2ae0096d45d9cd0c053a99292202bc24e6e9a60d92970
registry.redhat.io/openshift-serverless-1/kn-eventing-controller-rhel8@sha256:d6aff2e731bd8fa4f8a472ab2b6cb08103e0ba04ba353918484813864d89c082
registry.redhat.io/openshift-serverless-1/kn-eventing-filter-rhel8@sha256:e348715064edc914fd45071cb2e5e0e967bd26ce0542372a833a4ede78bf2822
registry.redhat.io/openshift-serverless-1/kn-eventing-ingress-rhel8@sha256:4519eba6fa2a6c6c10f0d97992c1e911ea1ce4cf00ac9025b9b334671b0d1e14
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-ddb-streams-source-rhel8@sha256:6e2272266a877c42350c6e92bd9d97e407160de8bc29c1ab472786409548f69d
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-s3-sink-rhel8@sha256:a6649ecd10ea7e3cca8d254a4a4a203d585cf1a485532fcb8f77053422ab0405
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-s3-source-rhel8@sha256:ac8fad706d8e47118572a5c99f669b337962920498fd4c31796e2e707f8ff11e
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sns-sink-rhel8@sha256:e0b8f3759beb0a01314c3e6f9a165d286ac7e0e5ed9533df30209f873d3e8787
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sqs-sink-rhel8@sha256:7fc8171b21af336f5c512d0f484e363d0d32f6f11211621f572827cf71bf4cf6
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-aws-sqs-source-rhel8@sha256:925b30dbcc13075348fa35ad8e28abad88b1e632e45ff76bcd40dcacf1eaf5c1
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-log-sink-rhel8@sha256:c4641ac936196229a6dc035194799d24493eaa45cc3e0b21d79a9704860d2028
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-timer-source-rhel8@sha256:3c054f0fbbeb1428b8d88927d6b219bf5ba8c744434ebc4013351ad6494540a3
registry.redhat.io/openshift-serverless-1/kn-eventing-integrations-transform-jsonata-rhel8@sha256:1451bcf5004a32a6a183836ebf3f5c0af397da6c8d176a36bcc750c726e1f408
registry.redhat.io/openshift-serverless-1/kn-eventing-istio-controller-rhel8@sha256:a39bc62f77a5303f286e43bc8c47bb0452ad6f44228efc3e8d54798b5aaeb4d6
registry.redhat.io/openshift-serverless-1/kn-eventing-jobsink-rhel8@sha256:2553b7302376ec89216934b783e9db8122693f74b428a41e94c5ec7ffc48a414
registry.redhat.io/openshift-serverless-1/kn-eventing-migrate-rhel8@sha256:6538bbb2a59b31e03d2e74e93db81b15647308812f2354d6868680d8b48a706c
registry.redhat.io/openshift-serverless-1/kn-eventing-mtchannel-broker-rhel8@sha256:65c7c98a65f09ff01ef875d505be153bad54213bf6c3210fecee238e45887b0b
registry.redhat.io/openshift-serverless-1/kn-eventing-mtping-rhel8@sha256:887f33ae9c7d8e52764b3af4a78898769cd52eb47e6e9913fe71d7e890d9816a
registry.redhat.io/openshift-serverless-1/kn-eventing-webhook-rhel8@sha256:4a2924e282a3612e00de4bfee5a8c963c9b65b962a4c7d72f999bd493026f92a
registry.redhat.io/openshift-serverless-1/kn-plugin-event-sender-rhel8@sha256:f7795088777ea84fc6180b81b6131962944e34918e2c06671033a1a572581773
registry.redhat.io/openshift-serverless-1/kn-plugin-func-func-util-rhel8@sha256:b0eb1f0b2f180afb207186267601665f2979c4cf21a0e434e7601123e3826716
registry.redhat.io/openshift-serverless-1/kn-serving-activator-rhel8@sha256:4cf5431ee984d7cb7e6a87504e151a31130e18f1448d1eca56fbc294ee3020e4
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-hpa-rhel8@sha256:f55ccbe4baf5829f98eb4fe7f802165d9209fe34dc8854a4eef70e471dcc1f97
registry.redhat.io/openshift-serverless-1/kn-serving-autoscaler-rhel8@sha256:0e273607b7d8ee6e2e542e02a2f6cfb04c144d4b70cf1fbc58d1041e26d283ab
registry.redhat.io/openshift-serverless-1/kn-serving-controller-rhel8@sha256:fdf01c170795da9598007bddf34c74e4a2b6d4c10ac2a0ad7010f30c8eb84149
registry.redhat.io/openshift-serverless-1/kn-serving-queue-rhel8@sha256:be27abd8e30d0e9b0245d5d99800290231aa246931bdbf65a757eac49f7d9ad9
registry.redhat.io/openshift-serverless-1/kn-serving-storage-version-migration-rhel8@sha256:dafcf4ee3a5836f2744e786fafd2911264a6f043d7cf17bf8cdf7b75ab9b3ff6
registry.redhat.io/openshift-serverless-1/kn-serving-webhook-rhel8@sha256:6dfc77b18f5f03fbc918f33ab5916344b546085e3cd57632d71ddb73022b5222
registry.redhat.io/openshift-serverless-1/net-istio-controller-rhel8@sha256:06100687f4d3b193fe289b45046d11bf5439f296f0c9b1e62fe16ed8624ae251
registry.redhat.io/openshift-serverless-1/net-istio-webhook-rhel8@sha256:6939d0ec31480dbfa172783d2531f6497c38dd18b0cbcc1597413e7dd49a4d62
registry.redhat.io/openshift-serverless-1/net-kourier-kourier-rhel8@sha256:1b3f3be13ff69f520ace648989ae7053b26a872af3c2baade05adfc8513f2afd
registry.redhat.io/openshift-serverless-1/serverless-ingress-rhel8@sha256:db94f6b64ac3e618c0dad70032ad3e723122d2dd566dd4099cd5f81e3f28ae8e
registry.redhat.io/openshift-serverless-1/serverless-kn-operator-rhel8@sha256:dd788378be08cd5de076fe6fe7255ec21486697197f9390c0f8afc6be0901150
registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8@sha256:5b7aba60fba1db136c893ecdd34aa592f6079564457b6bff183218ea29f1aae1
registry.redhat.io/openshift-serverless-1/serverless-openshift-kn-rhel8-operator@sha256:9d89f51d04418acaeb36c3c0c9d6917ea29ca1d5b39df05a80da19318ea2c51c
registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:8ee57a44b1fc799fd8565eb339955773bd9beedcbf46f68628ee0bd4abf26515
registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:92a83b201580d29aec7ee85ccc2984576c4a364b849e504225888d6f1fb9b0d2
registry.redhat.io/rhel8/buildah@sha256:3d505d9c0f5d4cd5a4ec03b8d038656c6cdbdf5191e00ce6388f7e0e4d2f1b74
registry.redhat.io/openshift-serverless-1/serverless-operator-bundle@sha256:2d675f8bf31b0cfb64503ee72e082183b7b11979d65eb636fc83f4f3a25fa5d0

OpenShift Serverless Logic Operator:

gcr.io/kaniko-project/warmer:v1.9.0
gcr.io/kaniko-project/executor:v1.9.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-db-migrator-tool-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.36.0
registry.redhat.io/openshift-serverless-1/logic-rhel8-operator@sha256:8d3682448ebdac3aeabb2d23842b7e67a252b95f959c408af805037f9728fd3c
registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4564ca3dc5bac80d6faddaf94c817fbbc270698a9399d8a21ee1005d85ceda56
registry.redhat.io/openshift-serverless-1/logic-operator-bundle@sha256:5fff2717f7b08df2c90a2be7bfb36c27e13be188d23546497ed9ce266f1c03f4

Note:
If you encounter issues pulling images due to an invalid GPG signature, consider updating the /etc/containers/policy.json file to reference the appropriate beta GPG key.
For example, you can use:
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
This may be required when working with pre-release or beta images signed with a different key than the default.

NPM packages for a disconnected environment

The packages required for the Orchestrator can be downloaded as tgz files from:

Alternatively, fetch the NPM packages from https://npm.registry.redhat.com, e.g.:

  npm pack "@redhat/backstage-plugin-orchestrator@1.7.1" --registry=https://npm.registry.redhat.com
  npm pack "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.7.1" --registry=https://npm.registry.redhat.com
  npm pack "@redhat/backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.7.1" --registry=https://npm.registry.redhat.com
  npm pack "@redhat/backstage-plugin-orchestrator-form-widgets@1.7.1" --registry=https://npm.registry.redhat.com

For maintainers

The images on this page were listed using the following sets of commands, based on each of the operator bundle images:

RHDH

The RHDH bundle version should match the one used by the Orchestrator operator, as referenced by the rhdhSubscriptionStartingCSV attribute (https://github.com/rhdhorchestrator/orchestrator-go-operator/blob/main/internal/controller/rhdh/backstage.go#L31).

The list of images was obtained by:

bash <<'EOF'
set -euo pipefail

IMG="registry.redhat.io/rhdh/rhdh-operator-bundle:1.7.1"
DIR="local-manifests-rhdh"
CSV="$DIR/rhdh-operator.clusterserviceversion.yaml"

podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')

podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null

yq e '
  .spec.install.spec.deployments[].spec.template.spec.containers[].image,
  .spec.install.spec.deployments[].spec.template.spec.containers[].env[]
  | select(.name | test("^RELATED_IMAGE_")).value
' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF

OpenShift Serverless

The list of images was obtained by:

IMG=registry.redhat.io/openshift-serverless-1/serverless-operator-bundle:1.36.0
podman run --rm --entrypoint bash "$IMG" -c "cat /manifests/serverless-operator.clusterserviceversion.yaml" | yq '.spec.relatedImages[].image' | sort | uniq
podman pull "$IMG"
podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}'

OpenShift Serverless Logic

podman create --name temp-container registry.redhat.io/openshift-serverless-1/logic-operator-bundle:1.36.0-8
podman cp temp-container:/manifests ./local-manifests-osl
podman rm temp-container
yq -r '.data."controllers_cfg.yaml" | from_yaml | .. | select(tag == "!!str") | select(test("^.*\\/.*:.*$"))' ./local-manifests-osl/logic-operator-rhel8-controllers-config_v1_configmap.yaml
yq -r '.. | select(has("image")) | .image' ./local-manifests-osl/logic-operator-rhel8.clusterserviceversion.yaml

Orchestrator

The list of images was obtained by:

bash <<'EOF'
set -euo pipefail

IMG="registry.redhat.io/rhdh-orchestrator-dev-preview-beta/orchestrator-operator-bundle:1.6-1751040440"
DIR="local-manifests-orchestrator"
CSV="$DIR/orchestrator-operator.clusterserviceversion.yaml"

podman pull "$IMG" --quiet >/dev/null 2>&1
BUNDLE_DIGEST=$(podman image inspect "$IMG" --format '{{ index .RepoDigests 0 }}')

podman create --name temp "$IMG" > /dev/null
podman cp temp:/manifests "$DIR"
podman rm temp > /dev/null

yq e '.spec.install.spec.deployments[].spec.template.spec.containers[].image' "$CSV" | cat - <(echo "$BUNDLE_DIGEST") | sort -u
EOF

1.3.5 - Requirements

Operators

The Orchestrator runtime/deployment relies on the OpenShift Serverless Logic operator.

OpenShift Serverless Logic operator requirements

OpenShift Serverless Logic operator resource requirements are described in the OpenShift Serverless Logic Installation Requirements; these apply mainly to local environment settings.
The operator deploys a Data Index service and a Jobs service. The recommended minimum resource requirements for their pods are:
Data Index pod:

resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 250m
    memory: 64Mi

Jobs pod:

resources:
  limits:
    cpu: 200m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi

The resources for these pods are controlled by a CR of type SonataFlowPlatform. There is one such CR in the sonataflow-infra namespace.
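
For example, the Data Index and Jobs resources above could be set on that CR roughly as follows; this is a sketch assuming the services.*.podTemplate.container.resources field layout, so verify the exact structure against the SonataFlowPlatform CRD of your operator version:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: sonataflow-infra
spec:
  services:
    dataIndex:
      podTemplate:
        container:
          # Recommended minimum resources for the Data Index pod
          resources:
            requests:
              cpu: 250m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 1Gi
    jobService:
      podTemplate:
        container:
          # Recommended minimum resources for the Jobs pod
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
            limits:
              cpu: 200m
              memory: 1Gi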

Workflows

Each workflow has its own logic and therefore its own resource requirements.
Here are some metrics for the workflows we provide. For each workflow, the following fields are listed: CPU idle, CPU peak (during execution), and memory.

  • greeting workflow
    • cpu idle: 4m
    • cpu peak: 12m
    • memory: 300 MB
  • mtv-plan workflow
    • cpu idle: 4m
    • cpu peak: 130m
    • memory: 300 MB

How to evaluate resource requirements for your workflow

Locate the workflow pod in the OCP Console and open its Metrics tab, where you will find the CPU and memory usage. Execute the workflow a few times; it does not matter whether it succeeds, as long as all the states are executed. You can then read the peak usage (during execution) and the idle usage (after the executions).
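
If you prefer the CLI to the console, the same numbers can be sampled with oc adm top, assuming cluster metrics are available; the pod name and namespace below are placeholders:

# Sample current CPU and memory usage of the workflow pod; run it while the
# workflow is executing and again once it is idle
oc adm top pod my-workflow-xy73lj -n sonataflow-infra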

1.3.6 - Workflows

In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.

1.3.6.1 - Deploy From Helm Repository

Orchestrator Workflows Helm Repository

This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.

The repository includes a variety of serverless workflows, such as:

  • Greeting: A basic example workflow to demonstrate functionality.
  • Migration Toolkit for Applications (MTA) Analysis: This workflow evaluates applications to determine potential risks and the associated costs of containerizing the applications.
  • Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.

Usage

Prerequisites

To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository.

Installation

helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows

View available workflows on the Helm repository:

helm search repo orchestrator-workflows

The expected output should look similar to the following (versions may differ):

NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                      
orchestrator-workflows/greeting 	0.4.2        	1.16.0     	A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube	0.2.16       	1.16.0     	A Helm chart to deploy the move2kube workflow.   
orchestrator-workflows/mta      	0.2.16       	1.16.0     	A Helm chart for MTA serverless workflow         
orchestrator-workflows/workflows	0.2.24       	1.16.0     	A Helm chart for serverless workflows
...

You can install the workflows by following their respective README files.
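
For example, the greeting workflow from the search output above could be installed like this; the target namespace is an assumption, so use whatever namespace its README requires:

helm install greeting orchestrator-workflows/greeting -n sonataflow-infra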

Installing workflows in additional namespaces

When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.

Version Compatibility

The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it. The list below outlines the compatibility between the workflows and Orchestrator versions:

Workflows              Chart Version    Orchestrator Operator Version
move2kube              1.6.x            1.6.x
create-ocp-project     1.6.x            1.6.x
request-vm-cnv         1.6.x            1.6.x
modify-vm-resources    1.6.x            1.6.x
mta-v7                 1.6.x            1.6.x
mtv-migration          1.6.x            1.6.x
mtv-plan               1.6.x            1.6.x
move2kube              1.5.x            1.5.x
create-ocp-project     1.5.x            1.5.x
request-vm-cnv         1.5.x            1.5.x
modify-vm-resources    1.5.x            1.5.x
mta-v7                 1.5.x            1.5.x
mtv-migration          1.5.x            1.5.x
mtv-plan               1.5.x            1.5.x

Helm index

https://www.rhdhorchestrator.io/serverless-workflows/index.yaml

1.4 - Serverless Workflows

A serverless workflow in Orchestrator refers to a sequence of operations that run in response to user input (optional) and produce output (optional) without requiring any ongoing management of the underlying infrastructure. The workflow is executed automatically, and frees users from having to manage or provision servers. This simplifies the process by allowing the focus to remain on the logic of the workflow, while the infrastructure dynamically adapts to handle the execution.

1.4.1 - Workflows

1.4.1.1 - MTA Analysis

MTA - migration analysis workflow

Synopsis

This workflow invokes an application analysis workflow using MTA. After the analysis is done, and if it is considered successful, you can continue to the move2kube workflow.

Users are encouraged to use this workflow as a self-service alternative to interacting with the MTA UI. Instead of running a mass migration of projects from a managed place, project stakeholders can use this workflow (or automation) to regularly check the cloud-readiness of their code.

Workflow application configuration

Application properties can be initialized from environment variables before running the application:

Environment variable           Description                                                     Mandatory   Default value
BACKSTAGE_NOTIFICATIONS_URL    The backstage server URL for notifications
NOTIFICATIONS_BEARER_TOKEN     The authorization bearer token to use to send notifications
MTA_URL                        The MTA Hub server URL

Inputs

  • repositoryUrl [mandatory] - the git repo url to examine
  • recipients [mandatory] - A list of recipients for the notification in the format of user:<namespace>/<username> or group:<namespace>/<groupname>, e.g. user:default/jsmith.
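
As an illustration, an input object that matches these two fields might look like the following; the values are placeholders:

{
  "repositoryUrl": "https://github.com/my-org/my-app.git",
  "recipients": ["user:default/jsmith"]
}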

Output

When the workflow completes, a report link should be available on the exit state of the workflow (also named variables in SonataFlow). Currently this works with MTA version 6.2.x; in the future 7.x version, the report link will be removed or made optional, and the workflow will use a machine-friendly JSON file instead of an HTML report.

Dependencies

  • MTA version 6.2.x or Konveyor 0.2.x

    • For OpenShift, install MTA using the OperatorHub; search for MTA. Documentation is here.
    • For Kubernetes, install Konveyor with OLM:
      kubectl create -f https://operatorhub.io/install/konveyor-0.2/konveyor-operator.yaml
      

Runtime configuration

key                                                     default                                                                                       description
mta.url                                                 http://mta-ui.openshift-mta.svc.cluster.local:8080                                            Endpoint (with protocol and port) for MTA
quarkus.rest-client.mta_json.url                        ${mta.url}/hub                                                                                MTA hub API
quarkus.rest-client.notifications.url                   ${BACKSTAGE_NOTIFICATIONS_URL:http://backstage-backstage.rhdh-operator/api/notifications/}   Backstage notification URL
quarkus.rest-client.mta_json.auth.basicAuth.username    username                                                                                      Username for the MTA API
quarkus.rest-client.mta_json.auth.basicAuth.password    password                                                                                      Password for the MTA API

All the configuration items are in ./application.properties.
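
Put together, the entries from the table above appear in application.properties roughly as follows; the basic-auth values are the placeholder defaults and must be replaced for a real MTA instance:

mta.url=http://mta-ui.openshift-mta.svc.cluster.local:8080
quarkus.rest-client.mta_json.url=${mta.url}/hub
quarkus.rest-client.mta_json.auth.basicAuth.username=username
quarkus.rest-client.mta_json.auth.basicAuth.password=password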

For running and testing the workflow refer to mta testing.

Workflow Diagram

mta workflow diagram

Installation

See official installation guide

1.4.1.2 - Simple Escalation

Simple escalation workflow

An escalation workflow integrated with Atlassian JIRA using SonataFlow.

Prerequisite

  • Access to a Jira server (URL, user and API token)
  • Access to an OpenShift cluster with admin Role

Workflow diagram

Escalation workflow diagram

Note: The value of the .jiraIssue.fields.status.statusCategory.key field is the one to use to identify when the done status is reached; all the other similar fields are subject to translation to the configured language and cannot be used for a consistent check.

Application configuration

Application properties can be initialized from environment variables before running the application:

Environment variable          Description                                                                                                Mandatory   Default value
JIRA_URL                      The Jira server URL
JIRA_USERNAME                 The Jira server username
JIRA_API_TOKEN                The Jira API Token
JIRA_PROJECT                  The key of the Jira project where the escalation issue is created                                                      TEST
JIRA_ISSUE_TYPE               The ID of the Jira issue type to be created
OCP_API_SERVER_URL            The OpenShift API Server URL
OCP_API_SERVER_TOKEN          The OpenShift API Server Token
ESCALATION_TIMEOUT_SECONDS    The number of seconds to wait before triggering the escalation request, after the issue has been created              60
POLLING_PERIODICITY (1)       The polling periodicity of the issue state checker, according to ISO 8601 duration format                              PT6S

(1) This is still hardcoded as PT5S while waiting for a fix to KOGITO-9811

How to run

mvn clean quarkus:dev

Example of POST to trigger the flow (see input schema in ocp-onboarding-schema.json):

curl -XPOST -H "Content-Type: application/json" http://localhost:8080/ticket-escalation -d '{"namespace": "_YOUR_NAMESPACE_"}'

Tips:

  • Visit Workflow Instances
  • Visit the Data Index Query Service at http://localhost:8080/q/graphql-ui/

1.4.1.3 - Move2Kube

Move2kube (m2k) workflow

Context

This workflow uses https://move2kube.konveyor.io/ to migrate existing code contained in a git repository to a K8s/OCP platform.

Once the transformation is over, move2kube provides a zip file containing the transformed repo.

Design diagram

(Diagrams: sequence diagram, design diagram)

Workflow

(Figure: m2k workflow diagram)

Note that if an error occurs during the migration planning, the move2kube instance API gives no feedback. To overcome this, we defined a maximum number of retries (move2kube_get_plan_max_retries) when fetching the plan before exiting with an error. By default the value is set to 10, and it can be overridden with the environment variable MOVE2KUBE_GET_PLAN_MAX_RETRIES.
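
For a quick test, the limit could be raised on the running deployment as shown below; the deployment name m2k and the sonataflow-infra namespace are assumptions based on the components described later, and since the SonataFlow operator reconciles the Deployment, a durable change belongs in the workflow's SonataFlow CR rather than in the Deployment itself:

# Temporarily raise the retry limit on the workflow deployment
oc set env deployment/m2k MOVE2KUBE_GET_PLAN_MAX_RETRIES=20 -n sonataflow-infra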

Workflow application configuration

Move2kube workflow

Application properties can be initialized from environment variables before running the application:

Environment variable              Description                                                           Mandatory   Default value
MOVE2KUBE_URL                     The move2kube instance server URL
BACKSTAGE_NOTIFICATIONS_URL       The backstage server URL for notifications
NOTIFICATIONS_BEARER_TOKEN        The authorization bearer token to use to send notifications
MOVE2KUBE_GET_PLAN_MAX_RETRIES    The amount of retries to get the plan before failing the workflow                10

m2k-func serverless function

Application properties can be initialized from environment variables before running the application:

Environment variable    Description                                  Mandatory   Default value
MOVE2KUBE_API           The move2kube instance server URL
SSH_PRIV_KEY_PATH       The absolute path to the SSH private key
BROKER_URL              The knative broker URL
LOG_LEVEL               The log level                                            INFO

Components

The use case has the following components:

  1. m2k: the Sonataflow resource representing the workflow. A matching Deployment is created by the SonataFlow operator.
  2. m2k-save-transformation-func: the Knative Service resource that holds the service retrieving the move2kube instance output and saving it to the git repository. A matching Deployment is created by Knative.
  3. move2kube instance: the Deployment running the move2kube instance.
  4. Knative Triggers:
    1. m2k-save-transformation-event: event sent by the m2k workflow that triggers the execution of m2k-save-transformation-func.
    2. transformation-saved-trigger-m2k: event sent by m2k-save-transformation-func once the move2kube output is successfully saved to the git repository.
    3. error-trigger-m2k: event sent by m2k-save-transformation-func if an error occurs while saving the move2kube output to the git repository.
  5. The Knative Broker named default, which links the components together.

Installation

See official installation guide

Usage

  1. Create a workspace and a project under it in your move2kube instance
    • you can reach your move2kube instance by running
    oc -n sonataflow-infra get routes
    
    Sample output:
    NAME                                   HOST/PORT                                                                                             PATH   SERVICES                                 PORT    TERMINATION   WILDCARD
    move2kube-route                        move2kube-route-sonataflow-infra.apps.cluster-c68jb.dynamic.redhatworkshops.io                               move2kube-svc                            <all>   edge          None
    
  2. Go to the backstage instance.

To get it, you can run

oc -n rhdh-operator get routes

Sample output:

NAME                  HOST/PORT                                                                            PATH   SERVICES              PORT           TERMINATION     WILDCARD
backstage-backstage   backstage-backstage-rhdh-operator.apps.cluster-c68jb.dynamic.redhatworkshops.io   /      backstage-backstage   http-backend   edge/Redirect   None
  3. Go to the Orchestrator page.

  4. Click on the Move2Kube workflow and then click the run button on the top right of the page.

  5. In the repositoryURL field, put the URL of your git project.

  6. In the sourceBranch field, put the name of the branch holding the project you want to transform.

    • e.g.: main
  7. In the targetBranch field, put the name of the branch in which you want the move2kube output to be persisted. If the branch already exists, the workflow will fail.

    • e.g.: move2kube-output
  8. In the workspaceId field, put the ID of the move2kube instance workspace to use for the transformation. Use the ID of the workspace created at step 1.

    • e.g.: a46b802d-511c-4097-a5cb-76c892b48d71
  9. In the projectId field, put the ID of the move2kube instance project under the previous workspace to use for the transformation. Use the ID of the project created at step 1.

    • e.g.: 9c7f8914-0b63-4985-8696-d46c17ba4ebe
  10. Then click on nextStep.

  11. Click on run to trigger the execution.

  12. Once a new transformation has started and is waiting for your input, you will receive a notification with a link to the Q&A.

  13. Once you have completed the Q&A, the process will continue and the output of the transformation will be saved in your git repository; you will receive a notification informing you of the completion of the workflow.

    • You can now clone the repository and check out the output branch to deploy your manifests to your cluster! You can check the move2kube documentation if you need guidance on how to deploy the generated artifacts.

1.4.2 - Development

Serverless-Workflows

This repository contains multiple workflows. Each workflow is represented by a directory in the project. Below is a table listing all available workflows:

Workflow Name          Description
create-ocp-project     Sets up an OpenShift Container Platform (OCP) project.
escalation             Demos workflow ticket escalation.
greeting               Sample greeting workflow.
modify-vm-resources    Modifies resources allocated to virtual machines.
move2kube              Workflow for Move2Kube tasks and transformation.
mta-v7.x               Migration toolkit for applications, version 7.x.
mtv-migration          Migration tasks using Migration Toolkit for Virtualization (MTV).
request-vm-cnv         Requests and provisions VMs using Container Native Virtualization (CNV).

Each workflow is organized in its own directory, containing the following components:

  • application.properties — Contains configuration properties specific to the workflow application.
  • ${workflow}.sw.yaml — The Serverless Workflow definition, authored according to recommended best practices.
  • specs/ (optional) — Directory for OpenAPI specifications used by the workflow, if applicable.
  • schemas/ (optional) — Directory containing input and output data schemas relevant to the workflow execution.

Each workflow is built into a container image and published to Quay.io via GitHub Actions. The image naming convention follows:

quay.io/orchestrator/serverless-workflow-${workflow}

Current image statuses:

After the container image is published, a GitHub Action automatically generates the corresponding Kubernetes manifests and submits a pull request to this repository. The manifests are placed under the deploy/charts directory, in a subdirectory named after the workflow. This Helm chart structure is intended for deploying the workflow to environments where the SonataFlow Operator is installed and running. The resulting Helm charts are then published to the configured Helm repository for consumption at https://rhdhorchestrator.io/serverless-workflows

How to introduce a new workflow

Follow these steps to successfully add a new workflow:

  1. Create a folder under the root with the name of the workflow, e.g. /onboarding
  2. Copy application.properties and onboarding.sw.yaml into that folder
  3. Create a GitHub workflow file .github/workflows/${workflow}.yaml that will call the main workflow (e.g. greeting.yaml)
  4. Create a pull request, but don’t merge it yet.
  5. Send a pull request to add a sub-chart under the path deploy/charts/<WORKFLOW_ID>, e.g. deploy/charts/onboarding.
  6. Now the PR from step 4 can be merged, and an automatic PR will be created with the generated manifests. Review and merge.

See Continuous Integration with make for implementation details of the CI pipeline.

Builder image

workflow-builder-dev.Dockerfile references the OpenShift Serverless Logic builder image from registry.redhat.io, which requires authorization.

  • To use this Dockerfile locally, you must be logged in to registry.redhat.io. To get access to that registry, follow:
    1. Get tokens here. Once logged in to Podman, you should be able to pull the image.
    2. Verify pulling the image here

Note on CI: For every PR merged in the workflow directory, a GitHub Action runs an image build to generate manifests, and a new PR is automatically generated in this repository. The credentials used by the build process are defined as an organization-level secret, and the content is a token on the Helm repo with an expiry period of 60 days.

Using Helm Charts

Some of the workflows in this repository are released as Helm charts. To view available workflows in dev mode or prod mode use:

helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows
helm search repo orchestrator-workflows --devel

The instructions for installing each workflow can be found in the docs.

1.4.3 - Workflow Examples

Our Orchestrator Serverless Workflow Examples repository, located at GitHub, provides a collection of sample workflows designed to help you explore and understand how to build serverless workflows using Orchestrator. These examples showcase a range of use cases, demonstrating how workflows can be developed, tested, and executed based on various inputs and conditions.

Please note that this repository is intended for development and testing purposes only. It serves as a reference for developers looking to create custom workflows and experiment with serverless orchestration concepts. These examples are not optimized for production environments and should be used to guide your own development processes.

1.4.4 - Troubleshooting

Troubleshooting Guide

This document provides solutions to common problems encountered with serverless workflows.

Table of Contents

  1. HTTP Errors
  2. Workflow Errors
  3. Configuration Problems
  4. Workflow not showing in RHDH UI

HTTP Errors

Many workflow operations are REST requests to REST endpoints. If an HTTP error occurs, the workflow will fail and the HTTP code and message will be displayed. Here is an example of the error in the UI. Please refer to the HTTP codes documentation to understand the meaning of such errors. Here are some examples:

  • 409. Usually indicates that we are trying to update or create a resource that already exists. E.g. K8S/OCP resources.
  • 401. Unauthorized access. A token, password or username might be wrong or expired.

Workflow Errors

Problem: Workflow execution fails

Solution:

  1. Examine the container log of the workflow
        oc logs my-workflow-xy73lj
    

Problem: Workflow is not listed by the orchestrator plugin

Solution:

  1. Examine the container status and logs

        oc get pods my-workflow-xy73lj
        oc logs my-workflow-xy73lj
    
  2. Most probably the Data Index service was not ready when the workflow started. Typically this is what the log shows:

        2024-07-24 21:10:20,837 ERROR [org.kie.kog.eve.pro.ReactiveMessagingEventPublisher] (main) Error while creating event to topic kogito-processdefinitions-events for event ProcessDefinitionDataEvent {specVersion=1.0, id='586e5273-33b9-4e90-8df6-76b972575b57', source=http://mtaanalysis.default/MTAAnalysis, type='ProcessDefinitionEvent', time=2024-07-24T21:10:20.658694165Z, subject='null', dataContentType='application/json', dataSchema=null, data=org.kie.kogito.event.process.ProcessDefinitionEventBody@7de147e9, kogitoProcessInstanceId='null', kogitoRootProcessInstanceId='null', kogitoProcessId='MTAAnalysis', kogitoRootProcessId='null', kogitoAddons='null', kogitoIdentity='null', extensionAttributes={kogitoprocid=MTAAnalysis}}: java.util.concurrent.CompletionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.default/10.96.15.153:80
    
  3. Check if you use a cluster-wide platform:

       $ oc get sonataflowclusterplatforms.sonataflow.org
       cluster-platform
    

    If you do, as in the example output, then use the sonataflow-infra namespace when you look for the SonataFlow services.

    Make sure the Data Index is ready, and restart the workflow - notice the sonataflow-infra namespace usage:

        $ oc get pods -l sonataflow.org/service=sonataflow-platform-data-index-service -n sonataflow-infra
        NAME                                                      READY   STATUS    RESTARTS   AGE
        sonataflow-platform-data-index-service-546f59f89f-b7548   1/1     Running   0          11kh
    
        $ oc rollout restart deployment my-workflow
    

Problem: Workflow is failing to reach an HTTPS endpoint because it can’t verify it

  • REST actions performed by the workflow can fail the SSL certificate check if the target endpoint is signed with a CA which is not available to the workflow. The error in the workflow pod log usually looks like this:

    ```console
        sun.security.provider.certpath.SunCertPathBuilderException - unable to find valid certification path to requested target
    ```
    

Solution:

  1. If this happens, you need to load the additional CA cert into the running workflow container. To do so, please follow this guide from the SonataFlow guides site: https://sonataflow.org/serverlessworkflow/main/cloud/operator/add-custom-ca-to-a-workflow-pod.html

Configuration Problems

Problem: Workflow installed in a different namespace than Sonataflow services fails to start

Solution: When deploying a workflow in a namespace other than the one where Sonataflow services are running (e.g., sonataflow-infra), there are essential steps to follow to enable persistence and connectivity for the workflow. See the Additional Workflow Namespaces section referenced earlier for the steps.

Problem: sonataflow-platform-data-index-service pods can’t connect to the database on startup

  1. Ensure PostgreSQL Pod has Fully Started
    If the PostgreSQL pod is still initializing, allow additional time for it to become fully operational before expecting the DataIndex and JobService pods to connect.
  2. Verify network policies if PostgreSQL Server is in a different namespace
    If PostgreSQL Server is deployed in a separate namespace from Sonataflow services (e.g., not in sonataflow-infra namespace), ensure that network policies in the PostgreSQL namespace allow ingress from the Sonataflow services namespace (e.g., sonataflow-infra). Without appropriate ingress rules, network policies may prevent the DataIndex and JobService pods from connecting to the database.

Workflow not showing in RHDH UI

Problem: Workflows are not showing up in the RHDH Orchestrator UI

  1. Ensure the workflow uses the gitops profile
    In the RHDH Orchestrator UI, only the workflows using the gitops profile are shown. Make sure the workflow definition and the SonataFlow manifests use this profile.

  2. Ensure the workflow’s pod has started and is ready
    The first thing a workflow does when it starts is create a schema for itself in the database (given persistence is enabled) and then register itself with the Data Index. Until it has successfully registered with the Data Index, the workflow’s pod will not be ready.

  3. Ensure the workflow’s pod can reach the Data Index
    Connect to the workflow’s pod and try to send the following request to the Data Index:

curl -g -k  -X POST  -H "Content-Type: application/json" \
                    -d '{"query":"query{ ProcessDefinitions  { id, serviceUrl, endpoint } }"}' \
                    http://sonataflow-platform-data-index-service.sonataflow-infra/graphql

Use the service of the Data Index and its namespace as defined in your environment. Here sonataflow-platform-data-index-service is the service name and sonataflow-infra the namespace in which it is deployed.

Do the same from the RHDH pod and also make sure the workflow is reachable:

curl http://<workflow-service>.<workflow-namespace>/management/processes
  4. Ensure the Orchestrator is trying to fetch the workflow
    In the logs of the RHDH pod, you should see log messages similar to
{"level":"\u001b[32minfo\u001b[39m","message":"fetchWorkflowInfos() called: http://sonataflow-platform-data-index-service.sonataflow-infra","plugin":"orchestrator","service":"backstage","span_id":"fca4ab29f0a7aef9","timestamp":"2025-08-04 17:58:26","trace_flags":"01","trace_id":"5408d4b06373ff8fb34769083ef771dd"}

Notice the "plugin":"orchestrator" that can help filtering the messages.

  5. Ensure the Data Index properties are set in the workflow’s managed-props ConfigMap
    Make sure to have:
kogito.data-index.health-enabled = true
kogito.data-index.url = http://sonataflow-platform-data-index-service.sonataflow-infra
...
mp.messaging.outgoing.kogito-processdefinitions-events.url = http://sonataflow-platform-data-index-service.sonataflow-infra/definitions
mp.messaging.outgoing.kogito-processinstances-events.url = http://sonataflow-platform-data-index-service.sonataflow-infra/processes

Those should be set automatically by the OSL operator when the Data Index service is enabled. You should have similar properties for the Job Service.

  6. Ensure the Workflow is registered in the Data Index
    To check that, you may connect to the database used by the Data Index and run the following from the PSQL instance’s pod:
$ PGPASSWORD=<psql password> psql -h localhost -p 5432 -U <user> -d sonataflow

sonataflow=# SET search_path TO "sonataflow-platform-data-index-service";
sonataflow=# select id, name from definitions;

You should see the workflows registered in the Data Index.

  7. Ensure Data Index and Job Services are enabled
    If the Data Index and the Job Services are not enabled in the SonataFlowPlatform, then the Orchestrator plugin cannot fetch the available workflows. Make sure you have:
services:
    dataIndex:
      enabled: true
      ...
    jobService:
      enabled: true
      ...

If not, manually edit the SonataFlowPlatform instance. This should trigger the re-creation of the workflow’s related manifests.

You should now make sure the properties are correctly set in the managed-props ConfigMap of the workflow.

  8. Ensure the RBAC permissions are set correctly
    See RBAC documentation for detailed permission configuration.

To see if there is a permission issue, set the log level to DEBUG; see https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.6/html/monitoring_and_logging/assembly-monitoring-and-logging-with-aws_assembly-rhdh-observability#configuring-the-application-log-level-by-using-the-operator_assembly-rhdh-observability

1.4.5 - Configuration

1.4.5.1 - Configure workflow for token propagation

By default, the RHDH Orchestrator plugin adds headers for each token in the ‘authTokens’ field of the POST request that is used to trigger a workflow execution. Those headers will be in the following format: X-Authorization-{provider}: {token}. This allows the user identity to be propagated to the third parties and external services called by the workflow. To do so, a set of properties must be set in the workflow application.properties file.

Prerequisites

  • Having a Keycloak instance running with a client
  • Having RHDH with the latest version of the Orchestrator plugins
  • Having a workflow that uses an OpenAPI spec file to send REST requests to a service. Using a custom REST function within the workflow will not propagate the token; token propagation is only possible when using an OpenAPI specification file.

Build

When building the workflow’s image, you will need to make sure the following extensions are present in the QUARKUS_EXTENSION:

  • io.quarkus:quarkus-oidc-client-filter # needed for propagation
  • io.quarkus:quarkus-oidc # needed for token validity check, thus accessing $WORKFLOW.identity

See https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/scripts/build.sh#L180 to see how we do it.

Configuration

By default, the workflow is not persisting the request headers in the database. Therefore, any token in the header will be lost if the workflow flushes its context (e.g: sleeps, goes idle, is resumed, …) as the headers will not be restored to the context from the database.

By setting the property kogito.persistence.headers.enabled to true in the application.properties file, or in the ConfigMap representing it on the cluster, the workflow will persist the headers. This enables the workflow to keep using the token from the headers even after it was interrupted and restored.

You can exclude headers from being persisted using kogito.persistence.headers.excluded. See https://sonataflow.org/serverlessworkflow/main/core/configuration-properties.html and/or https://sonataflow.org/serverlessworkflow/main/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.html#ref-postgresql-persistence-configuration for more information.
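
Concretely, the two properties discussed above could appear in application.properties as follows; the excluded header names are only an example:

kogito.persistence.headers.enabled=true
kogito.persistence.headers.excluded=Cookie,Set-Cookie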

Oauth2

  1. In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you’re interested in. All endpoints may use the same security scheme if configured globally, e.g.:
components:
  securitySchemes:
    BearerToken:
     type: oauth2
     flows:
       clientCredentials:
         tokenUrl: http://<keycloak>/realms/<yourRealm>/protocol/openid-connect/token
         scopes: {}
     description: Bearer Token authentication
  2. In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>

# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service

# Properties for propagation
quarkus.oidc-client.BearerToken.auth-server-url=${auth-server-url}
quarkus.oidc-client.BearerToken.token-path=${auth-server-url}/protocol/openid-connect/token
quarkus.oidc-client.BearerToken.discovery-enabled=false
quarkus.oidc-client.BearerToken.client-id=${client-id}
quarkus.oidc-client.BearerToken.grant.type=client
quarkus.oidc-client.BearerToken.credentials.client-secret.method=basic
quarkus.oidc-client.BearerToken.credentials.client-secret.value=${client-secret}

quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>

With:

  • spec_file_yaml_or_json: the name of the spec file configured with _ as separator. E.g: if the file name is simple-server.yaml the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
  • security_scheme: the name of the security scheme for which to propagate the token located in the header defined by the header-name property. In our example it would be BearerToken.
  • provider: the name of the expected provider from which the token comes. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.
  • keycloak: the URL of the running Keycloak instance.
  • yourRealm: the name of the realm to use.
  • client ID: the ID of the Keycloak client to use to authenticate against the Keycloak instance.

See https://sonataflow.org/serverlessworkflow/latest/security/authention-support-for-openapi-services.html#ref-authorization-token-propagation and https://quarkus.io/guides/security-openid-connect-client-reference#token-propagation-rest for more information about token propagation.

Setting the quarkus.oidc.* properties will enforce the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.
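
As a small illustration, once the OIDC check succeeds the identity can be captured with an expression function in the workflow definition, similar to the basic-auth example later on this page; the function and property names here are arbitrary:

functions:
  - name: getIdentity
    type: expression
    # Store the authenticated user's identity in the workflow data
    operation: '.identity=$WORKFLOW.identity'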

Bearer token

  1. In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you’re interested in. All endpoints may use the same security scheme if configured globally, e.g.:
components:
  securitySchemes:
    SimpleBearerToken:
     type: http
     scheme: bearer
  2. In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>

# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service

quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>

With:

  • spec_file_yaml_or_json: the name of the spec file configured with _ as separator. E.g: if the file name is simple-server.yaml the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
  • security_scheme: the name of the security scheme for which to propagate the token located in the header defined by the header-name property. In our example it would be SimpleBearerToken.
  • provider: the name of the expected provider from which the token comes. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.

Setting the quarkus.oidc.* properties will enforce the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.

Basic auth

Basic auth token propagation is not currently supported. A pull request has been opened to add support for it: https://github.com/quarkiverse/quarkus-openapi-generator/pull/1078

With Basic auth, the $WORKFLOW.identity is not available.

Instead you could access the header directly: $WORKFLOW.headers.X-Authorization-{provider} and decode it:

functions:
- name: getIdentity
  type: expression
  operation: '.identity=($WORKFLOW.headers["x-authorization-basic"] | @base64d | split(":")[0])' # mind the lower case!!

You can see a full example here: https://github.com/rhdhorchestrator/workflow-token-propagation-example.

Configuring OIDC properties at SonataFlowPlatform level (Cluster-wide OIDC configuration)

This short guide shows how to inject the Quarkus OIDC settings once at platform‑scope so that all present and future workflows automatically authenticate incoming requests and expose $WORKFLOW.identity.

Prerequisites

  • Namespace where the workflows run
  • Keycloak Realm URL
  • Client‑ID
  • Client‑secret

This guide assumes that the workflows and the platform are installed in the sonataflow-infra namespace.

export TARGET_NS='sonataflow-infra' # target namespace of workflows and sonataflowplatform CR

Keep the client secret in a Secrets vault; don’t embed it as clear‑text in the CR.

Create the supporting objects

  1. Secret: holds the confidential client secret

e.g

oc create secret generic oidc-client-secret \
  -n $TARGET_NS \
  --from-literal=cred=swf-client-secret  # This is a sample value. You need to replace it with actual value.

Patch the SonataFlowPlatform CR

  1. Create patch.yaml (or paste inline):

For example:

# Replace all the values below with your actual values.
spec:
  properties:
    flow:
    - name: quarkus.oidc.auth-server-url
      value: https://keycloak-host/realms/dev
    - name: quarkus.oidc.client-id
      value: swf-client
    - name: quarkus.oidc.token.header
      value: X-Authorization
    - name: quarkus.oidc.token.issuer
      value: any
    - name: quarkus.oidc.credentials.secret
      valueFrom:
        secretKeyRef:
          key: cred
          name: oidc-client-secret
  2. Apply the patch:

For example:

oc patch sonataflowplatform <Platform CR name> \
  -n $TARGET_NS \
  --type merge \
  -p "$(cat patch.yaml)"

Wait a few seconds for the operator's reconcile loop to apply the changes.

Verify the managed properties

For example:

oc get sonataflowplatform <Platform CR name> -n $TARGET_NS -o yaml

You should see all five quarkus.oidc.* properties under spec.properties.flow.

Restart running workflow deployments once so Quarkus reloads the file:

For example:

oc rollout restart deployment -l sonataflow.org/workflow -n $TARGET_NS

1.4.6 - Best Practices

Best practices when creating a workflow

A workflow should be developed in accordance with the guidelines outlined in the Serverless Workflow definitions documentation.

This document provides a summary of several additional rules and recommendations to ensure smooth integration with other applications, such as the Backstage Orchestrator UI.

Workflow output schema

To effectively display the results of the workflow and any optional outputs in the user interface, or to facilitate chaining of workflow executions, it is important for a workflow to deliver its output data in a recognized structured format, as defined by the WorkflowResult schema.

The output intended for further processing should be placed under the data.result property.

id: my-workflow
version: "1.0"
specVersion: "0.8"
name: My Workflow
start: ImmediatelyEnd
extensions:
  - extensionid: workflow-output-schema
    outputSchema: schemas/workflow-output-schema.json
states:
  - name: ImmediatelyEnd
    type: inject
    data:
      result:
        message: A human-readable description of the successful status. Or an error.
        outputs:
          - key: Foo Bar human readable name which will be shown in the UI
            value: Example string value produced on the output. This might be an input for a next workflow.
        nextWorkflows:
          - id: my-next-workflow-id
            name: Next workflow name suggested if this is an assessment workflow. Human readable, its text does not need to match the true workflow name.
    end: true

The schemas/workflow-output-schema.json file can then look like the following (referencing the WorkflowResult schema):

{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "WorkflowResult",
    "description": "Schema of workflow output",
    "type": "object",
    "properties": {
        "result": {
            "$ref": "shared/schemas/workflow-result-schema.json",
            "type": "object"
        }
    }
}
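
For orientation only, here is a minimal sketch of what the referenced shared/schemas/workflow-result-schema.json could define, based solely on the fields used in the inject state above (message, outputs with key/value pairs, and nextWorkflows with id/name); the authoritative schema is the workflow-result-schema.json shipped with the Orchestrator:

{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "WorkflowResultSketch",
    "description": "Illustrative subset of the shared workflow result schema",
    "type": "object",
    "properties": {
        "message": { "type": "string" },
        "outputs": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "key": { "type": "string" },
                    "value": { "type": "string" }
                }
            }
        },
        "nextWorkflows": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": { "type": "string" },
                    "name": { "type": "string" }
                }
            }
        }
    }
}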

1.5 - Plugins

1.5.1 - Notifications Plugin

How to get started with the notifications and signals

The Backstage Notifications System provides a way for plugins and external services to send notifications to Backstage users.

These notifications are displayed on a dedicated page of the Backstage frontend UI, or by frontend plugins in specific scenarios.

Additionally, notifications can be sent to external channels (like email) via “processors” implemented within plugins.

Upstream documentation can be found in:

Frontend

Notifications are messages sent to either individual users or groups. They are not intended for inter-process communication of any kind.

To list and manage notifications, choose Notifications from the left-side menu.

There are two basic types of notifications:

  • Broadcast: Messages sent to all users of Backstage.
  • Entity: Messages delivered to specific listed entities from the Catalog, such as Users or Groups.

Frontend UI

Backend

The backend plugin provides the backend application for reading and writing notifications.

Authentication

Notifications are primarily meant to be sent by backend plugins. In that flow, authentication is shared among the plugins.

To let external systems (like a Workflow) create new notifications by sending POST requests to the Notification REST API, authentication needs to be properly configured by setting the backend.auth.externalAccess property of the app-config.

Refer to the service-to-service auth documentation for more details, focusing on the Static Tokens section as the simplest setup option.
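
As a minimal sketch of the static-token option (the environment variable and subject names below are placeholders chosen for illustration), the app-config could contain:

backend:
  auth:
    externalAccess:
      - type: static
        options:
          # Shared secret the external caller presents as "Authorization: Bearer <token>"
          token: ${NOTIFICATIONS_EXTERNAL_TOKEN}
          # Free-form identifier of the caller, useful for auditing
          subject: orchestrator-workflows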

Creating a notification by external services

An example request for creating a broadcast notification can look like:

curl -X POST https://[BACKSTAGE_BACKEND]/api/notifications -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_BASE64_SHARED_KEY_TOKEN" -d '{"recipients":{"type":"broadcast"},"payload": {"title": "Title of broadcast message","link": "http://foo.com/bar","severity": "high","topic": "The topic"}}'
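
Similarly, for illustration (the entity reference below is a placeholder), a notification targeted at specific catalog entities uses the entity recipient type:

curl -X POST https://[BACKSTAGE_BACKEND]/api/notifications -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_BASE64_SHARED_KEY_TOKEN" -d '{"recipients":{"type":"entity","entityRef":"user:default/jdoe"},"payload": {"title": "Title of user message","link": "http://foo.com/bar","severity": "high","topic": "The topic"}}'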

Configuration

Configuration of the dynamic plugins is in the dynamic-plugins-rhdh ConfigMap created by the Helm chart during installation.

Frontend configuration

Usually there is no need to change the defaults, but small tweaks can be made in the props section:

            frontend:
              redhat.plugin-notifications:
                dynamicRoutes:
                  - importName: NotificationsPage
                    menuItem:
                      config:
                        props:
                          titleCounterEnabled: true
                          webNotificationsEnabled: false
                      importName: NotificationsSidebarItem
                    path: /notifications

Backend configuration

Apart from setting authentication for external callers, no special plugin configuration is needed.

Forward to Email

It is possible to forward notification content to an email address. To do that, you must add the Email Processor Module to your Backstage backend (see the sketch below).
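
In an RHDH dynamic-plugins configuration this could look roughly like the snippet below; the package path is an assumption and may differ between RHDH versions, so verify it against the dynamic plugins catalog of your installation:

plugins:
  # Path is illustrative; check your RHDH dynamic plugins catalog for the exact value.
  - package: ./dynamic-plugins/dist/backstage-plugin-notifications-backend-module-email-dynamic
    disabled: false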

Configuration

Configuration options can be found in the plugin’s documentation.

Example configuration:

      pluginConfig:
        notifications:
          processors:
            email:
              filter:
                minSeverity: low
                maxSeverity: critical
                excludedTopics: []
              broadcastConfig:
                receiver: config # or none or users
                receiverEmails:
                  - foo@company.com
                  - bar@company.com
              cache:
                ttl:
                  days: 1
              concurrencyLimit: 10
              replyTo: email@company.com
              sender: email@company.com
              transportConfig:
                hostname: your.smtp.host.com
                password: a-password
                username: a-smtp-username
                port: 25
                secure: false
                transport: smtp

Ignoring unwanted notifications

The module’s configuration options include filters. Filters are used to ignore notifications that should not be forwarded to email. Supported filters include minimum/maximum severity and a list of excluded topics.

User notifications

Each user notification has a list of recipients. A recipient is an entity in the Backstage catalog. The notification is sent to the email addresses of the recipients.

Broadcast notifications

Broadcast notifications have no recipients; they are delivered to all users.

The module’s configuration supports a few options for broadcast notifications:

  • Ignoring broadcast notifications so they are not forwarded
  • Sending to a predefined address list only
  • Sending to all users whose catalog entity has an email

1.5.2 - Orchestrator Plugin

Orchestrator plugins are now installed by RHDH but are disabled by default.

Orchestrator GitHub documentation