Orchestrator: Workflows for Backstage
- 1: Documentation
 - 1.1: Quick Start
 - 1.2: Architecture
 - 1.3: Installation
 - 1.3.1: Installation via RHDH Operator
 - 1.3.2: Installation via RHDH Helm Charts
 - 1.3.3: RBAC
 - 1.3.4: Requirements
 - 1.3.5: Workflows
 - 1.3.5.1: Deploy From Helm Repository
 - 1.4: Serverless Workflows
 - 1.4.1: Workflows
 - 1.4.1.1: MTA Analysis
 - 1.4.1.2: Simple Escalation
 - 1.4.1.3: Move2Kube
 - 1.4.2: Development
 - 1.4.3: Workflow Examples
 - 1.4.4: Troubleshooting
 - 1.4.5: Configuration
 - 1.4.5.1: Make workflow able to use authentication request on run
 - 1.4.5.2: Configure workflow for token exchange
 - 1.4.5.3: Configure workflow for token propagation
 - 1.4.6: Best Practices
 - 1.5: Plugins
 - 1.5.1: Notifications Plugin
 - 1.5.2: Orchestrator Plugin
 
1 - Documentation
Orchestrator
Choose a section from the list below. For Orchestrator introduction, check the Quick Start.
1.1 - Quick Start
Quickstart Guide
This quickstart guide will help you install Orchestrator via Red Hat Developer Hub (RHDH) and execute a sample workflow through the Orchestrator plugin on the RHDH UI.
Install Orchestrator via RHDH: Choose one of the following installation methods:
- Installation via RHDH Operator - Recommended for most users
 - Installation via RHDH Chart - For environments where the RHDH Operator is not available
 
Install a sample workflow: Follow the installation instructions for the greetings workflow.
Access Red Hat Developer Hub: Open your web browser and navigate to the Red Hat Developer Hub application. Retrieve the URL using the following OpenShift CLI command.
oc get route backstage-backstage -n rhdh-operator -o jsonpath='{.spec.host}'
Make sure the route is accessible to you locally.
Login to Backstage: Log in to Backstage with the Guest account.
Navigate to Orchestrator: Navigate to the Orchestrator page by clicking on the Orchestrator icon in the left navigation menu.

Execute Greeting Workflow: Click on the ‘Execute’ button in the ACTIONS column of the Greeting workflow.
The ‘Run workflow’ page will open. Click ‘Next step’ and then ‘Run’

Monitor Workflow Status: Wait for the status of the Greeting workflow execution to become Completed. This may take a moment.

1.2 - Architecture
The Orchestrator architecture comprises several integral components, each contributing to the seamless execution and management of workflows. Illustrated below is a breakdown of these components:
- Red Hat Developer Hub: Serving as the primary interface, Backstage fulfills multiple roles:
- Orchestrator Plugins: Both frontend and backend plugins are instrumental in presenting deployed workflows for execution and monitoring.
 - Notifications Plugin: Employs notifications to inform users or groups about workflow events.
 
 - OpenShift Serverless Logic Operator: This controller manages the Sonataflow custom resource (CR), where each CR denotes a deployed workflow.
 - Sonataflow Runtime/Workflow Application: As a deployed workflow, Sonataflow Runtime is currently managed as a Kubernetes (K8s) deployment by the operator. It operates as an HTTP server, catering to requests for executing workflow instances. Within the Orchestrator deployment, each Sonataflow CR corresponds to a singular workflow. However, outside this scope, Sonataflow Runtime can handle multiple workflows. Interaction with Sonataflow Runtime for workflow execution is facilitated by the Orchestrator backend plugin.
 - Data Index Service: This serves as a repository for workflow definitions, instances, and their associated jobs. It exposes a GraphQL API, utilized by the Orchestrator backend plugin to retrieve workflow definitions and instances.
 - Job Service: Dedicated to orchestrating scheduled tasks for workflows.
 - OpenShift Serverless: This operator furnishes serverless capabilities essential for workflow communication. It employs Knative eventing to interface with the Data Index service and leverages Knative functions to introduce more intricate logic to workflows.
 - OpenShift AMQ Streams (Strimzi/Kafka): While not presently integrated into the deployment’s current iteration, this operator is crucial for ensuring the reliability of the eventing system.
 - KeyCloak: Responsible for authentication and security services within applications. While not installed by the Orchestrator operator, it is essential for enhancing security measures.
 - PostgreSQL Server - Utilized for storing both Sonataflow information and Backstage data, PostgreSQL Server provides a robust and reliable database solution essential for data persistence within the Orchestrator ecosystem.
 

1.3 - Installation
In previous Orchestrator versions (&lt;1.6), an RHDH operator installation was triggered by the Orchestrator operator, or a pre-existing RHDH installation was connected. As of RHDH/Orchestrator 1.7, that is no longer the case: the RHDH operator is responsible for installing the Orchestrator resources, and Orchestrator ceases to exist as a standalone operator.
Installation Methods
RHDH Operator
- Installation via RHDH Operator - Complete setup using the RHDH Operator
 
RHDH Helm Chart
- Installation via RHDH Chart - Use Helm charts to install Orchestrator
 
Workflows
In addition to the Orchestrator deployment, we offer several workflows that can be deployed using their respective installation methods.
1.3.1 - Installation via RHDH Operator
The RHDH Operator provides the most streamlined way to install and configure the Orchestrator plugin on OpenShift clusters. This method handles all infrastructure requirements and plugin configuration automatically.
To install Orchestrator via the RHDH operator, please follow the instructions here
1.3.2 - Installation via RHDH Helm Charts
For environments where the RHDH Operator is not available, or to have more control over the deployment, you can install the Orchestrator plugin using Helm charts.
To install Orchestrator via the RHDH Helm chart, please follow the instructions here.
1.3.3 - RBAC
The RBAC policies for RHDH Orchestrator plugins v1.7 are listed here
1.3.4 - Requirements
Operators
The Orchestrator runtime/deployment is reliant on OpenShift Serverless Logic operator.
OpenShift Serverless Logic operator requirements
OpenShift Serverless Logic operator resource requirements are described in the OpenShift Serverless Logic Installation Requirements. This is mainly relevant for local environment settings.
The operator deploys a Data Index service and a Jobs service.
These are the recommended minimum resource requirements for their pods:
Data Index pod:
resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 250m
    memory: 64Mi
Jobs pod:
resources:
  limits:
    cpu: 200m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi
The resources for these pods are controlled by a CR of type SonataFlowPlatform. There is one such CR in the sonataflow-infra namespace.
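For reference, the sketch below shows how these resources could be set in the SonataFlowPlatform CR. This is a minimal illustration only: the CR name and namespace are assumptions based on the default installation, and the exact field layout should be verified against the operator version you run.
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform        # assumed CR name
  namespace: sonataflow-infra
spec:
  services:
    dataIndex:
      enabled: true
      podTemplate:
        container:
          resources:
            requests:
              cpu: 250m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 1Gi
    jobService:
      enabled: true
      podTemplate:
        container:
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
            limits:
              cpu: 200m
              memory: 1Gi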
Workflows
Each workflow has its own logic and therefore its own resource requirements.
Here are some metrics for the workflows we provide. For each workflow, the following fields are listed: CPU idle, CPU peak (during execution), and memory.
- greeting workflow
- cpu idle: 4m
 - cpu peak: 12m
 - memory: 300 Mb
 
 - mtv-plan workflow
- cpu idle: 4m
 - cpu peak: 130m
 - memory: 300 Mb
 
 
How to evaluate resource requirements for your workflow
Locate the workflow pod in the OCP Console and open its Metrics tab to view CPU and memory usage. Execute the workflow a few times; it does not matter whether it succeeds, as long as all the states are executed. You can then read the peak usage (during execution) and the idle usage (after a few executions).
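If you prefer the CLI, a quick way to sample the same numbers is sketched below. The namespace and the sonataflow.org/workflow-app label value are assumptions and should be adjusted to match your workflow deployment.
oc adm top pod -n sonataflow-infra -l sonataflow.org/workflow-app=greeting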
1.3.5 - Workflows
In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user’s requirements. These workflows can be installed through a Helm chart.
1.3.5.1 - Deploy From Helm Repository
Orchestrator Workflows Helm Repository
This repository serves as a Helm chart repository for deploying serverless workflows with the Sonataflow Operator. It encompasses a collection of pre-defined workflows, each tailored to specific use cases. These workflows have undergone thorough testing and validation through Continuous Integration (CI) processes and are organized according to their chart versions.
The repository includes a variety of serverless workflows, such as:
- Greeting: A basic example workflow to demonstrate functionality.
 - Migration Toolkit for Application Analysis (MTA): This workflow evaluates applications to determine potential risks and the associated costs of containerizing the applications.
 - Move2Kube: Designed to facilitate the transition of an application to Kubernetes (K8s) environments.
 - …
 
Usage
Prerequisites
To utilize the workflows contained in this repository, the Orchestrator Deployment must be installed on your OpenShift Container Platform (OCP) cluster. For detailed instructions on installing the Orchestrator, please visit the Orchestrator Helm Based Operator Repository
Installation
helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows
View available workflows on the Helm repository:
helm search repo orchestrator-workflows
The expected result should look similar to the following (versions may differ):
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                      
orchestrator-workflows/greeting 	0.4.2        	1.16.0     	A Helm chart for the greeting serverless workflow
orchestrator-workflows/move2kube	0.2.16       	1.16.0     	A Helm chart to deploy the move2kube workflow.   
orchestrator-workflows/mta      	0.2.16       	1.16.0     	A Helm chart for MTA serverless workflow         
orchestrator-workflows/workflows	0.2.24       	1.16.0     	A Helm chart for serverless workflows
...
You can install the workflows by following their respective README.
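For illustration, installing the greeting workflow chart could look like the following. The release name and target namespace are assumptions; follow the workflow's README for the exact values used in your environment.
helm install greeting orchestrator-workflows/greeting -n sonataflow-infra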
Installing workflows in additional namespaces
When deploying a workflow in a namespace different from where Sonataflow services are running (e.g. sonataflow-infra), there are essential steps to follow. For detailed instructions, see the Additional Workflow Namespaces section.
Version Compatibility
The workflows rely on components included in the Orchestrator Operator. Therefore, it is crucial to match the workflow version with the corresponding Orchestrator version that supports it. The list below outlines the compatibility between the workflows and Orchestrator versions:
| Workflows | Chart Version | Orchestrator Operator Version | 
|---|---|---|
| move2kube | 1.6.x | 1.6.x | 
| create-ocp-project | 1.6.x | 1.6.x | 
| request-vm-cnv | 1.6.x | 1.6.x | 
| modify-vm-resources | 1.6.x | 1.6.x | 
| mta-v7 | 1.6.x | 1.6.x | 
| mtv-migration | 1.6.x | 1.6.x | 
| mtv-plan | 1.6.x | 1.6.x | 
| move2kube | 1.5.x | 1.5.x | 
| create-ocp-project | 1.5.x | 1.5.x | 
| request-vm-cnv | 1.5.x | 1.5.x | 
| modify-vm-resources | 1.5.x | 1.5.x | 
| mta-v7 | 1.5.x | 1.5.x | 
| mtv-migration | 1.5.x | 1.5.x | 
| mtv-plan | 1.5.x | 1.5.x | 
Helm index
https://www.rhdhorchestrator.io/serverless-workflows/index.yaml
1.4 - Serverless Workflows
A serverless workflow in Orchestrator refers to a sequence of operations that run in response to user input (optional) and produce output (optional) without requiring any ongoing management of the underlying infrastructure. The workflow is executed automatically, and frees users from having to manage or provision servers. This simplifies the process by allowing the focus to remain on the logic of the workflow, while the infrastructure dynamically adapts to handle the execution.
1.4.1 - Workflows
1.4.1.1 - MTA Analysis
MTA - migration analysis workflow
Synopsis
This workflow invokes an application analysis using MTA. If the analysis is considered successful, you can continue to the move2kube workflow.
Users are encouraged to use this workflow as a self-service alternative to interacting with the MTA UI. Instead of running a mass migration of projects from a managed place, project stakeholders can use this workflow (or automation) to regularly check the cloud-readiness compatibility of their code.
Workflow application configuration
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value | 
|---|---|---|---|
| BACKSTAGE_NOTIFICATIONS_URL | The backstage server URL for notifications | ✅ | |
| NOTIFICATIONS_BEARER_TOKEN | The authorization bearer token to use to send notifications | ✅ | |
| MTA_URL | The MTA Hub server URL | ✅ | |
Inputs
- repositoryUrl [mandatory] - the Git repository URL to examine
- recipients [mandatory] - a list of recipients for the notification in the format user:&lt;namespace&gt;/&lt;username&gt; or group:&lt;namespace&gt;/&lt;groupname&gt;, e.g. user:default/jsmith
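A minimal example input payload, with illustrative values only, could look like:
{
  "repositoryUrl": "https://github.com/example/app.git",
  "recipients": ["user:default/jsmith"]
}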
Output
When the workflow completes, there should be a report link on the exit state of the workflow (also called variables in SonataFlow). Currently this works with MTA version 6.2.x; in the future 7.x version the report link will be removed or made optional, and instead of an HTML report the workflow will use a machine-friendly JSON file.
Dependencies
MTA version 6.2.x or Konveyor 0.2.x
- For OpenShift, install MTA using the OperatorHub; search for MTA. Documentation is here.
 - For Kubernetes, install Konveyor with OLM:
kubectl create -f https://operatorhub.io/install/konveyor-0.2/konveyor-operator.yaml 
Runtime configuration
| key | default | description | 
|---|---|---|
| mta.url | http://mta-ui.openshift-mta.svc.cluster.local:8080 | Endpoint (with protocol and port) for MTA | 
| quarkus.rest-client.mta_json.url | ${mta.url}/hub | MTA hub api | 
| quarkus.rest-client.notifications.url | ${BACKSTAGE_NOTIFICATIONS_URL:http://backstage-backstage.rhdh-operator/api/notifications/} | Backstage notification url | 
| quarkus.rest-client.mta_json.auth.basicAuth.username | username | Username for the MTA api | 
| quarkus.rest-client.mta_json.auth.basicAuth.password | password | Password for the MTA api | 
All the configuration items are in ./application.properties.
For running and testing the workflow refer to mta testing.
Workflow Diagram
Installation
1.4.1.2 - Simple Escalation
Simple escalation workflow
An escalation workflow integrated with Atlassian JIRA using SonataFlow.
Prerequisite
- Access to a Jira server (URL, user and API token)
 - Access to an OpenShift cluster with admin role
Workflow diagram
Note:
The value of the .jiraIssue.fields.status.statusCategory.key field is the one to use to identify when the done status is reached; all the other similar fields are subject to translation to the configured language and cannot be used for a consistent check.
Application configuration
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value | 
|---|---|---|---|
| JIRA_URL | The Jira server URL | ✅ | |
| JIRA_USERNAME | The Jira server username | ✅ | |
| JIRA_API_TOKEN | The Jira API Token | ✅ | |
| JIRA_PROJECT | The key of the Jira project where the escalation issue is created | ❌ | TEST |
| JIRA_ISSUE_TYPE | The ID of the Jira issue type to be created | ✅ | |
| OCP_API_SERVER_URL | The OpenShift API Server URL | ✅ | |
| OCP_API_SERVER_TOKEN | The OpenShift API Server Token | ✅ | |
| ESCALATION_TIMEOUT_SECONDS | The number of seconds to wait before triggering the escalation request, after the issue has been created | ❌ | 60 |
| POLLING_PERIODICITY(1) | The polling periodicity of the issue state checker, according to ISO 8601 duration format | ❌ | PT6S |
(1) This is still hardcoded as PT5S while waiting for a fix to KOGITO-9811
How to run
mvn clean quarkus:dev
Example of POST to trigger the flow (see input schema in ocp-onboarding-schema.json):
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/ticket-escalation -d '{"namespace": "_YOUR_NAMESPACE_"}'
Tips:
- Visit Workflow Instances
 - Visit the Data Index Query Service at http://localhost:8080/q/graphql-ui/
 
1.4.1.3 - Move2Kube
Move2kube (m2k) workflow
Context
This workflow uses https://move2kube.konveyor.io/ to migrate existing code contained in a git repository to a K8s/OCP platform.
Once the transformation is over, move2kube provides a zip file containing the transformed repository.
Design diagram
Workflow
Note that if an error occurs during the migration planning, the move2kube instance API gives no feedback. To overcome this, we defined a maximum number of retries (move2kube_get_plan_max_retries) when fetching the plan before exiting with an error. By default the value is set to 10 and it can be overridden with the environment variable MOVE2KUBE_GET_PLAN_MAX_RETRIES.
Workflow application configuration
Move2kube workflow
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value | 
|---|---|---|---|
| MOVE2KUBE_URL | The move2kube instance server URL | ✅ | |
| BACKSTAGE_NOTIFICATIONS_URL | The backstage server URL for notifications | ✅ | |
| NOTIFICATIONS_BEARER_TOKEN | The authorization bearer token to use to send notifications | ✅ | |
| MOVE2KUBE_GET_PLAN_MAX_RETRIES | The amount of retries to get the plan before failing the workflow | ❌ | 10 |
m2k-func serverless function
Application properties can be initialized from environment variables before running the application:
| Environment variable | Description | Mandatory | Default value | 
|---|---|---|---|
| MOVE2KUBE_API | The move2kube instance server URL | ✅ | |
| SSH_PRIV_KEY_PATH | The absolute path to the SSH private key | ✅ | |
| BROKER_URL | The knative broker URL | ✅ | |
| LOG_LEVEL | The log level | ❌ | INFO |
Components
The use case has the following components:
- m2k: the Sonataflow resource representing the workflow. A matching Deployment is created by the SonataFlow operator.
- m2k-save-transformation-func: the Knative Service resource that holds the service retrieving the move2kube instance output and saving it to the git repository. A matching Deployment is created by the Knative deployment.
- move2kube instance: the Deployment running the move2kube instance.
- Knative Triggers:
 - m2k-save-transformation-event: event sent by the m2k workflow that triggers the execution of m2k-save-transformation-func.
 - transformation-saved-trigger-m2k: event sent by m2k-save-transformation-func if/once the move2kube output is successfully saved to the git repository.
 - error-trigger-m2k: event sent by m2k-save-transformation-func if an error occurs while saving the move2kube output to the git repository.
- The Knative Broker named default, which links the components together.
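To verify these components on a cluster, a quick check could look like the following sketch; the sonataflow-infra namespace is an assumption based on the default installation.
oc -n sonataflow-infra get sonataflows,ksvc,brokers.eventing.knative.dev,triggers.eventing.knative.dev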
Installation
See official installation guide
Usage
- Create a workspace and a project under it in your move2kube instance.
 - You can reach your move2kube instance by running:
oc -n sonataflow-infra get routes
Sample output:
NAME              HOST/PORT                                                                         PATH   SERVICES        PORT    TERMINATION   WILDCARD
move2kube-route   move2kube-route-sonataflow-infra.apps.cluster-c68jb.dynamic.redhatworkshops.io           move2kube-svc   <all>   edge          None
 - For more information, please refer to https://move2kube.konveyor.io/tutorials/ui
 
 - Go to the backstage instance.
 
To get it, you can run
oc -n rhdh-operator get routes
Sample output:
NAME                  HOST/PORT                                                                            PATH   SERVICES              PORT           TERMINATION     WILDCARD
backstage-backstage   backstage-backstage-rhdh-operator.apps.cluster-c68jb.dynamic.redhatworkshops.io   /      backstage-backstage   http-backend   edge/Redirect   None
- Go to the Orchestrator page.
- Click on Move2Kube workflow and then click the run button on the top right of the page.
- In the repositoryURL field, put the URL of your git project.
- In the sourceBranch field, put the name of the branch holding the project you want to transform, e.g. main.
- In the targetBranch field, put the name of the branch in which you want the move2kube output to be persisted, e.g. move2kube-output. If the branch exists, the workflow will fail.
- In the workspaceId field, put the ID of the move2kube instance workspace to use for the transformation, e.g. a46b802d-511c-4097-a5cb-76c892b48d71. Use the ID of the workspace created at the first step.
- In the projectId field, put the ID of the move2kube instance project under the previous workspace to use for the transformation, e.g. 9c7f8914-0b63-4985-8696-d46c17ba4ebe. Use the ID of the project created at the first step.
- Then click on nextStep.
- Click on run to trigger the execution.
- Once a new transformation has started and is waiting for your input, you will receive a notification with a link to the Q&A.
 - For more information about what to expect and how to answer the Q&A, please visit the official move2kube documentation.
- Once you have completed the Q&A, the process will continue and the output of the transformation will be saved in your git repository; you will receive a notification to inform you of the completion of the workflow.
- You can now clone the repository and checkout the output branch to deploy your manifests to your cluster! You can check the move2kube documentation if you need guidance on how to deploy the generated artifacts.
 
1.4.2 - Development
Serverless-Workflows
This repository contains multiple workflows. Each workflow is represented by a directory in the project. Below is a table listing all available workflows:
| Workflow Name | Description | 
|---|---|
| create-ocp-project | Sets up an OpenShift Container Platform (OCP) project. |
| escalation | Demos workflow ticket escalation. |
| greeting | Sample greeting workflow. |
| modify-vm-resources | Modifies resources allocated to virtual machines. |
| move2kube | Workflow for Move2Kube tasks and transformation. |
| mta-v7.x | Migration toolkit for applications, version 7.x. |
| mtv-migration | Migration tasks using Migration Toolkit for Virtualization (MTV). |
| request-vm-cnv | Requests and provisions VMs using Container Native Virtualization (CNV). |
Each workflow is organized in its own directory, containing the following components:
- application.properties — Contains configuration properties specific to the workflow application.
- ${workflow}.sw.yaml — The Serverless Workflow definition, authored according to recommended best practices.
- specs/ (optional) — Directory for OpenAPI specifications used by the workflow, if applicable.
- schemas/ (optional) — Directory containing input and output data schemas relevant to the workflow execution.
Each workflow is built into a container image and published to Quay.io via GitHub Actions. The image naming convention follows:
quay.io/orchestrator/serverless-workflow-${workflow}
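For example, pulling the greeting workflow image locally could look like this; the latest tag is an assumption, so check the Quay repository for the available tags.
podman pull quay.io/orchestrator/serverless-workflow-greeting:latest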
Current image statuses:
- https://quay.io/repository/orchestrator/serverless-workflow-mta-v7.x
 - https://quay.io/repository/orchestrator/serverless-workflow-m2k
 - https://quay.io/repository/orchestrator/serverless-workflow-greeting
 - https://quay.io/repository/orchestrator/serverless-workflow-escalation
 
After the container image is published, a GitHub Action automatically generates the corresponding Kubernetes manifests and submits a pull request to this repository. The manifests are placed under the deploy/charts directory, in a subdirectory named after the workflow. This Helm chart structure is intended for deploying the workflow to environments where the SonataFlow Operator is installed and running. The resulting Helm charts are then published to the configured Helm repository for consumption at https://rhdhorchestrator.io/serverless-workflows
How to introduce a new workflow
Follow these steps to successfully add a new workflow:
- Create a folder under the root with the name of the workflow, e.g. /onboarding.
- Copy application.properties and onboarding.sw.yaml into that folder.
- Create a GitHub workflow file .github/workflows/${workflow}.yaml that will call the main workflow (e.g. greeting.yaml).
- Create a pull request but don't merge yet.
- Send a pull request to add a sub-chart under the path deploy/charts/<WORKFLOW_ID>, e.g. deploy/charts/onboarding.
- Now the PR from step 4 can be merged and an automatic PR will be created with the generated manifests. Review and merge.
See Continuous Integration with make for implementation details of the CI pipeline.
Builder image
workflow-builder-dev.Dockerfile - references OpenShift Serverless Logic builder image from registry.redhat.io which requires authorization.
- To use this Dockerfile locally, you must be logged in to registry.redhat.io. To get access to that registry, follow:
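As a sketch, once you have credentials (or a registry service account) for the Red Hat registry, logging in locally typically looks like:
podman login registry.redhat.io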
Note on CI: For every PR merged in the workflow directory, a GitHub Action runs an image build to generate manifests, and a new PR is automatically generated in this repository. The credentials used by the build process are defined as an organization-level secret, and the content is from a token on the helm repo with an expiry period of 60 days.
Using Helm Charts
Some of the workflows in this repository are released as Helm charts. To view available workflows in dev mode or prod mode use:
helm repo add orchestrator-workflows https://rhdhorchestrator.io/serverless-workflows
helm search repo orchestrator-workflows --devel
The instructions for installing each workflow can be found in the docs.
1.4.3 - Workflow Examples
Our Orchestrator Serverless Workflow Examples repository, located at GitHub, provides a collection of sample workflows designed to help you explore and understand how to build serverless workflows using Orchestrator. These examples showcase a range of use cases, demonstrating how workflows can be developed, tested, and executed based on various inputs and conditions.
Please note that this repository is intended for development and testing purposes only. It serves as a reference for developers looking to create custom workflows and experiment with serverless orchestration concepts. These examples are not optimized for production environments and should be used to guide your own development processes.
1.4.4 - Troubleshooting
Troubleshooting Guide
This document provides solutions to common problems encountered with serverless workflows.
Table of Contents
- HTTP Errors
 - Workflow Errors
 - Configuration Problems
 - Workflow not showing in RHDH UI
 - Maven Mirror
 
HTTP Errors
Many workflow operations are REST requests to REST endpoints. If an HTTP error occurs then the workflow will fail and the HTTP code and message will be displayed. Here is an example of the error in the UI. Please use HTTP codes documentation for understanding the meaning of such errors. Here are some examples:
- 409: Usually indicates that we are trying to update or create a resource that already exists, e.g. K8s/OCP resources.
- 401: Unauthorized access. A token, password or username might be wrong or expired.
Workflow Errors
Problem: Workflow execution fails
Solution:
- Examine the container log of the workflow
oc logs my-workflow-xy73lj 
Problem: Workflow is not listed by the orchestrator plugin
Solution:
Examine the container status and logs
oc get pods my-workflow-xy73lj
oc logs my-workflow-xy73lj
Most probably the Data Index service was unready when the workflow started. Typically this is what the log shows:
2024-07-24 21:10:20,837 ERROR [org.kie.kog.eve.pro.ReactiveMessagingEventPublisher] (main) Error while creating event to topic kogito-processdefinitions-events for event ProcessDefinitionDataEvent {specVersion=1.0, id='586e5273-33b9-4e90-8df6-76b972575b57', source=http://mtaanalysis.default/MTAAnalysis, type='ProcessDefinitionEvent', time=2024-07-24T21:10:20.658694165Z, subject='null', dataContentType='application/json', dataSchema=null, data=org.kie.kogito.event.process.ProcessDefinitionEventBody@7de147e9, kogitoProcessInstanceId='null', kogitoRootProcessInstanceId='null', kogitoProcessId='MTAAnalysis', kogitoRootProcessId='null', kogitoAddons='null', kogitoIdentity='null', extensionAttributes={kogitoprocid=MTAAnalysis}}: java.util.concurrent.CompletionException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: sonataflow-platform-data-index-service.default/10.96.15.153:80
Check if you use a cluster-wide platform:
$ oc get sonataflowclusterplatforms.sonataflow.org cluster-platform
If you have one, like in the example output, then use the namespace sonataflow-infra when you look for the sonataflow services.
Make sure the Data Index is ready, and restart the workflow - notice the sonataflow-infra namespace usage:
$ oc get pods -l sonataflow.org/service=sonataflow-platform-data-index-service -n sonataflow-infra
NAME                                                       READY   STATUS    RESTARTS   AGE
sonataflow-platform-data-index-service-546f59f89f-b7548    1/1     Running   0          11kh

$ oc rollout restart deployment my-workflow
Problem: Workflow is failing to reach an HTTPS endpoint because it can’t verify it
REST actions performed by the workflow can fail the SSL certificate check if the target endpoint is signed with a CA which is not available to the workflow. The error in the workflow pod log usually looks like this:
sun.security.provider.certpath.SunCertPathBuilderException - unable to find valid certification path to requested target
Solution:
- If this happens, then we need to load the additional CA cert into the running workflow container. To do so, please follow this guide from the SonataFlow guides site: https://sonataflow.org/serverlessworkflow/main/cloud/operator/add-custom-ca-to-a-workflow-pod.html
 
Configuration Problems
Problem: Workflow installed in a different namespace than Sonataflow services fails to start
Solution:
When deploying a workflow in a namespace other than the one where Sonataflow services are running (e.g., sonataflow-infra), there are essential steps to follow to enable persistence and connectivity for the workflow. See the following steps.
Problem: sonataflow-platform-data-index-service pods can’t connect to the database on startup
- Ensure the PostgreSQL pod has fully started
If the PostgreSQL pod is still initializing, allow additional time for it to become fully operational before expecting the DataIndex and JobService pods to connect.
- Verify network policies if the PostgreSQL Server is in a different namespace
If the PostgreSQL Server is deployed in a separate namespace from Sonataflow services (e.g., not in the sonataflow-infra namespace), ensure that network policies in the PostgreSQL namespace allow ingress from the Sonataflow services namespace (e.g., sonataflow-infra). Without appropriate ingress rules, network policies may prevent the DataIndex and JobService pods from connecting to the database.
Workflow not showing in RHDH UI
Problem: Workflows are not showing up in the RHDH Orchestrator UI
- Ensure the workflow uses the gitops profile
In the RHDH Orchestrator UI, only the workflows using the gitops profile are shown. Make sure the workflow definition and the sonataflow manifests are using this profile.
- Ensure the workflow's pod has started and is ready
The first thing a workflow does when it starts is to create a schema for itself in the database (if persistence is enabled) and then register itself to the Data Index. Until it has successfully registered to the Data Index, the workflow's pod will not be ready.
- Ensure the workflow's pod can reach the Data Index
Connect to the workflow's pod and try to send the following request to the Data Index:
curl -g -k  -X POST  -H "Content-Type: application/json" \
                    -d '{"query":"query{ ProcessDefinitions  { id, serviceUrl, endpoint } }"}' \
                    http://sonataflow-platform-data-index-service.sonataflow-infra/graphql
Use the service of the Data Index and its namespace as defined in your environment.
Here sonataflow-platform-data-index-service is the service name and sonataflow-infra the namespace in which it is deployed.
Do the same from the RHDH pod and also make sure the workflow is reachable:
curl http://<workflow-service>.<workflow-namespace>/management/processes
- Ensure the Orchestrator is trying to fetch the workflow
In the logs of the RHDH pod, you should see log messages similar to:
{"level":"\u001b[32minfo\u001b[39m","message":"fetchWorkflowInfos() called: http://sonataflow-platform-data-index-service.sonataflow-infra","plugin":"orchestrator","service":"backstage","span_id":"fca4ab29f0a7aef9","timestamp":"2025-08-04 17:58:26","trace_flags":"01","trace_id":"5408d4b06373ff8fb34769083ef771dd"}
Notice the "plugin":"orchestrator" that can help filtering the messages.
- Ensure the Data Index properties are set in the managed-props ConfigMap of the workflow
Make sure to have:
kogito.data-index.health-enabled = true
kogito.data-index.url = http://sonataflow-platform-data-index-service.sonataflow-infra
...
mp.messaging.outgoing.kogito-processdefinitions-events.url = http://sonataflow-platform-data-index-service.sonataflow-infra/definitions
mp.messaging.outgoing.kogito-processinstances-events.url = http://sonataflow-platform-data-index-service.sonataflow-infra/processes
Those should be set automatically by the OSL operator when the Data Index service is enabled. You should have similar properties for the Job Service.
- Ensure the Workflow is registered in the Data Index
To check that, you may connect to the database used by the Data Index and run the following from the PSQL instance’s pod: 
$ PGPASSWORD=<psql password> psql -h localhost -p 5432 -U <user> -d sonataflow
sonataflow=# SET search_path TO "sonataflow-platform-data-index-service";
sonataflow=# select id, name from definitions;
You should see the workflows registered to the Data Index.
- Ensure Data Index and Job Services are enabled
If the Data Index and the Job Services are not enabled in the SonataFlowPlatform, then the Orchestrator plugin cannot fetch the available workflows. Make sure to have:
services:
    dataIndex:
      enabled: true
      ...
    jobService:
      enabled: true
      ...
If not, manually edit the SonataFlowPlatform instance. This should trigger the re-creation of the workflow's related manifests.
You should now make sure the properties are correctly set in the managed-props ConfigMap of the workflow.
- Ensure the RBAC permissions are set correctly
See RBAC documentation for detailed permission configuration. 
To see if there is a permission issue, set the log level to DEBUG; see https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.6/html/monitoring_and_logging/assembly-monitoring-and-logging-with-aws_assembly-rhdh-observability#configuring-the-application-log-level-by-using-the-operator_assembly-rhdh-observability
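As a sketch of the operator-based approach described in that document, the RHDH log level can usually be raised by adding a LOG_LEVEL environment variable to the Backstage CR; the exact CR and field names should be checked against your RHDH version.
spec:
  application:
    extraEnvs:
      envs:
        - name: LOG_LEVEL
          value: debug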
Maven Mirror
If you need to build a workflow's image but you are behind a proxy that does not allow access to Maven repositories on the internet, you may need to specify your own Maven mirror, reachable from your network.
To do so, set the MAVEN_MIRROR_URL environment variable to your own Maven repository. This environment variable must be set within the Dockerfile you use to build the image, which uses the logic-swf-builder image or any custom image as its base image.
Depending on your setup, it may resemble the following:
ARG BUILDER_IMAGE
ARG RUNTIME_IMAGE
FROM ${BUILDER_IMAGE} AS builder
...
...
# Setting maven mirror
ENV MAVEN_MIRROR_URL=<your repository>
...
...
RUN /home/kogito/launch/build-app.sh ./resources
#=============================
# Runtime
#=============================
FROM ${RUNTIME_IMAGE}
...
...
1.4.5 - Configuration
1.4.5.1 - Make workflow able to use authentication request on run
Starting with RHDH 1.7.3, the Orchestrator plugin lets you specify, in the workflow's data input schema file, the required login for the workflow to work against external services.
Prerequisites
- Keycloak or another OIDC provider that supports OAuth 2.0 Token Exchange
 - A workflow calling an OpenAPI client generated from an OpenAPI specification file using an oauth2 security scheme
Configure data input schemas property
To enable the RHDH Orchestrator plugin to dynamically prompt for authentication, set the authSetup property and add each required authentication provider under authTokenDescriptors:
"authSetup": {
    "type": "string",
    "ui:widget": "AuthRequester",
    "ui:props": {
        "authTokenDescriptors": [
            {
                "provider": "oidc",
                "customProviderApiId": "internal.auth.oidc",
                "tokenType": "oauth"
            }
        ]
    }
}
In the above example, we are making sure that upon execution RHDH will request the user to log in to the OIDC provider configured in RHDH.
In the workflow, you will then be able to use the token via the header X-Authorization-Oidc. See the Token Propagation and Token Exchange pages for examples of using this header.
You can specify other providers such as github or gitlab; the header's name is formatted as follows: X-Authorization-<provider>.
1.4.5.2 - Configure workflow for token exchange
Token Exchange lets a workflow swap the incoming end‑user token for a new access token tailored to a downstream OpenAPI‑secured service. Use it when you must not forward the original token or when workflows run long enough that the original token may expire.
See the upstream reference for full details: Token Exchange for OpenAPI services in SonataFlow (https://sonataflow.org/serverlessworkflow/main/security/token-exchange-for-openapi-services.html).
Prerequisites
- Keycloak or another OIDC provider that supports OAuth 2.0 Token Exchange
 - A workflow calling an OpenAPI client generated from an OpenAPI specification file using an oauth2 security scheme
Build
When building the workflow image, ensure the following extensions are present (e.g., via QUARKUS_EXTENSION for the internal builder):
- io.quarkus:quarkus-oidc-client-filter
 - org.kie:kie-addons-quarkus-token-exchange
 
Optional for persistence of exchanged tokens:
- org.kie:kogito-quarkus-serverless-workflow-jdbc-token-persistence
 
See https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/scripts/build.sh#L180 to see how we do it.
Configuration
1) Define an OAuth2 security scheme in your OpenAPI specification file
The OpenAPI operation(s) you call must be secured by an oauth2 scheme. The OIDC client name is derived from this scheme name (sanitized by replacing non‑alphanumerics with _).
Example:
openapi: 3.0.3
paths:
  /secured:
    get:
      operationId: callService
      responses:
        "200": { description: OK }
      security:
        - service-oauth: []
components:
  securitySchemes:
    service-oauth:
      type: oauth2
      flows:
        clientCredentials:
          authorizationUrl: https://<idp>/realms/<realm>/protocol/openid-connect/auth
          tokenUrl: https://<idp>/realms/<realm>/protocol/openid-connect/token
          scopes: {}
2) Configure application.properties
Replace placeholders with your values.
# Base URL for the generated client (service id is the sanitized OpenAPI file id)
quarkus.rest-client.<service_id>.url=http://<downstream-service>
# Enable Token Exchange for this auth name (sanitized from OpenAPI scheme name)
sonataflow.security.auth.<auth_name>.token-exchange.enabled=true
# Proactive refresh and monitor (optional, default values are shown below)
sonataflow.security.auth.<auth_name>.token-exchange.proactive-refresh-seconds=300
sonataflow.security.auth.token-exchange.monitor-rate-seconds=60
# Ensure the incoming user token is available to the workflow service
# If the end-user token is sent in a custom header (e.g., from RHDH Orchestrator plugin),
# point Quarkus to that header so the SecurityIdentity is established.
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any
# OIDC client for Token Exchange corresponding to the auth scheme (name sanitized)
quarkus.oidc-client.<auth_name>.discovery-enabled=false
quarkus.oidc-client.<auth_name>.auth-server-url=${auth-server-url}/protocol/openid-connect/auth
quarkus.oidc-client.<auth_name>.token-path=${auth-server-url}/protocol/openid-connect/token
quarkus.oidc-client.<auth_name>.client-id=${client-id}
quarkus.oidc-client.<auth_name>.grant.type=exchange
quarkus.oidc-client.<auth_name>.credentials.client-secret.method=basic
quarkus.oidc-client.<auth_name>.credentials.client-secret.value=${client-secret}
# Persist inbound headers so tokens survive wait/resume and restarts (recommended)
kogito.persistence.headers.enabled=true
With:
- service_id: sanitized OpenAPI file id (e.g., simple-server.yaml -> simple_server_yaml).
- auth_name: sanitized OpenAPI oauth2 security scheme (e.g., service-oauth -> service_oauth).
- provider: the RHDH provider name; the Orchestrator plugin sends user tokens as X-Authorization-{provider}: {token}.
Configuration reference
- SonataFlow Token Exchange guide: Token Exchange for OpenAPI services
 - SonataFlow configuration properties (headers persistence): Core configuration properties
 - Quarkus OIDC Client: OpenID Connect (OIDC) client
 - Quarkus OIDC Client Filter (REST Client): REST Client OIDC client filter
 - Quarkiverse OpenAPI Generator client configuration: Client configuration
 - Token propagation with REST Client (contrast with exchange): Token propagation
 
3) Interplay with token propagation
Do not enable token propagation for the same <auth_name> if you need token exchange. If both are enabled, propagation takes precedence and no exchange is performed.
If you do need propagation in another specification file that has a similar scheme name, configure it per scheme name at the specification level:
quarkus.openapi-generator.<another_service_id>.auth.<auth_name>.token-propagation=true
quarkus.openapi-generator.<another_service_id>.auth.<auth_name>.header-name=X-Authorization-<provider>
Caching and persistence
When enabled, exchanged tokens are cached per process instance and auth name, with proactive refresh before expiry. By default, an in‑memory cache is used. To persist cache entries, add the JDBC persistence extension in the QUARKUS_EXTENSIONS when building the image: see https://sonataflow.org/serverlessworkflow/latest/cloud/operator/build-and-deploy-workflows.html#passing-build-arguments-to-internal-workflow-builder
For local debug/dev, you can add it in your local pom.xml file:
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kogito-quarkus-serverless-workflow-jdbc-token-persistence</artifactId>
  <!-- Configure a JDBC DataSource as usual -->
  <!-- The extension provides a CDI TokenCacheRepository implementation -->
</dependency>
Examples
Invoke workflow with an Authorization header
curl -X POST \
  http://localhost:8080/<workflow_id> \
  -H "Authorization: Bearer $USER_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"input":"value"}'
Invoke workflow with RHDH custom header
If your client sends the token via the Orchestrator plugin header, and you set quarkus.oidc.token.header=X-Authorization-<provider>:
curl -X POST \
  http://localhost:8080/<workflow_id> \
  -H "X-Authorization-<provider>: Bearer $USER_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"input":"value"}'
Notes
- Security scheme names are global in an OpenAPI specification file; all operations using the same scheme share the same OIDC client and token-exchange configuration.
- Prefer enabling kogito.persistence.headers.enabled=true for long-running workflows so the incoming token is available after wait/resume or restarts.
- For a general comparison and configuration of forwarding the original token, see the token propagation guide in this folder.
 
Configuring OIDC properties at SonataFlowPlatform level (Cluster‑wide OIDC configuration)
To avoid duplication, follow the platform‑level OIDC setup here: Configuring OIDC properties at SonataFlowPlatform level. This centralizes incoming request authentication and $WORKFLOW.identity. For Token Exchange, you still need to enable it per auth scheme using sonataflow.security.auth.<auth_name>.token-exchange.enabled=true.
1.4.5.3 - Configure workflow for token propagation
By default, the RHDH Orchestrator plugin adds headers for each token in the ‘authTokens’ field of the POST request that is used to trigger a workflow execution. Those headers will be in the following format: X-Authorization-{provider}: {token}.
This allows the user identity to be propagated to the third parties and externals services called by the workflow.
To do so, a set of properties must be set in the workflow application.properties file.
Prerequisites
- Having a Keycloak instance running with a client
 - Having RHDH with the latest version of the Orchestrator plugins
 - Having a workflow that uses an OpenAPI spec file to send REST requests to a service. Using a custom REST function within the workflow will not propagate the token; token propagation is only possible when using an OpenAPI specification file.
 
Build
When building the workflow’s image, you will need to make sure the following extensions are present in the QUARKUS_EXTENSION:
- io.quarkus:quarkus-oidc-client-filter # needed for propagation
 - io.quarkus:quarkus-oidc # needed for the token validity check, thus enabling access to $WORKFLOW.identity
 
See https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/scripts/build.sh#L180 to see how we do it.
Configuration
Openshift Serverless Logic (OSL) / SonataFlow related
By default, the workflow is not persisting the request headers in the database. Therefore, any token in the header will be lost if the workflow flushes its context (e.g: sleeps, goes idle, is resumed, …) as the headers will not be restored to the context from the database.
By setting the property kogito.persistence.headers.enabled to true in the application.properties file or in the ConfigMap representing it on the cluster, the workflow will persist the headers. This enables the workflow to keep using the token from the headers even after it was interrupted and restored.
You can exclude headers from being persisted using kogito.persistence.headers.excluded. See https://sonataflow.org/serverlessworkflow/main/core/configuration-properties.html and/or https://sonataflow.org/serverlessworkflow/main/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.html#ref-postgresql-persistence-configuration for more information.
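A small sketch of these two properties in application.properties; the excluded header names below are illustrative only.
kogito.persistence.headers.enabled=true
kogito.persistence.headers.excluded=cookie,x-internal-trace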
Security related
Oauth2
- In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you're interested in. All endpoints may use the same security scheme if configured globally, e.g.:
 
components:
  securitySchemes:
    BearerToken:
     type: oauth2
     flows:
       clientCredentials:
         tokenUrl: http://<keycloak>/realms/<yourRealm>/protocol/openid-connect/token
         scopes: {}
     description: Bearer Token authentication
- In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>
# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service
# Properties for propagation
quarkus.oidc-client.BearerToken.auth-server-url=${auth-server-url}
quarkus.oidc-client.BearerToken.token-path=${auth-server-url}/protocol/openid-connect/token
quarkus.oidc-client.BearerToken.discovery-enabled=false
quarkus.oidc-client.BearerToken.client-id=${client-id}
quarkus.oidc-client.BearerToken.grant.type=client
quarkus.oidc-client.BearerToken.credentials.client-secret.method=basic
quarkus.oidc-client.BearerToken.credentials.client-secret.value=${client-secret}
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>
With:
- spec_file_yaml_or_json: the name of the spec file, normalized with _ as separator. E.g. if the file name is simple-server.yaml, the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
- security_scheme: the name of the security scheme that propagates the token located in the header defined by the header-name property. In our example it would be BearerToken.
- provider: the name of the expected provider the token comes from. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.
- keycloak: the URL of the running Keycloak instance.
- yourRealm: the name of the realm to use.
- client ID: the ID of the Keycloak client to use to authenticate against the Keycloak instance.
See https://sonataflow.org/serverlessworkflow/latest/security/authention-support-for-openapi-services.html#ref-authorization-token-propagation and https://quarkus.io/guides/security-openid-connect-client-reference#token-propagation-rest for more information about token propagation.
Setting the quarkus.oidc.* properties will enforce the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.
Bearer token
- In the OpenAPI spec file(s) where you want to propagate the incoming token, define the security scheme used by the endpoints you're interested in. All endpoints may use the same security scheme if configured globally, e.g.:
 
components:
  securitySchemes:
    SimpleBearerToken:
     type: http
     scheme: bearer
- In the application.properties of your workflow, for each security scheme, add the following:
auth-server-url=https://<keycloak>/realms/<yourRealm>
client-id=<client ID>
client-secret=<client secret>
# Properties to check for identity, needed to use $WORKFLOW.identity within the workflow
quarkus.oidc.auth-server-url=${auth-server-url}
quarkus.oidc.client-id=${client-id}
quarkus.oidc.credentials.secret=${client-secret}
quarkus.oidc.token.header=X-Authorization-<provider>
quarkus.oidc.token.issuer=any # needed in case the auth server url is not the same as the one configured; e.g: localhost VS the k8S service
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.token-propagation=true
quarkus.openapi-generator.<spec_file_yaml_or_json>.auth.<security_scheme>.header-name=X-Authorization-<provider>
With:
- spec_file_yaml_or_json: the name of the spec file, normalized with _ as separator. E.g. if the file name is simple-server.yaml, the normalized property name will be simple_server_yaml. This should be the same for every security scheme defined in the file.
- security_scheme: the name of the security scheme that propagates the token located in the header defined by the header-name property. In our example it would be SimpleBearerToken.
- provider: the name of the expected provider the token comes from. As explained above, for each provider in RHDH, the Orchestrator plugin adds a header with the format X-Authorization-{provider}: {token}.
Setting the quarkus.oidc.* properties will enforce the token validity check against the OIDC provider. Once successful, you will be able to use $WORKFLOW.identity in the workflow definition in order to get the identity of the user. See https://quarkus.io/guides/security-oidc-bearer-token-authentication and https://quarkus.io/guides/security-oidc-bearer-token-authentication-tutorial for more information.
Basic auth
Basic auth token propagation is not currently supported. A pull request has been opened to add support for it: https://github.com/quarkiverse/quarkus-openapi-generator/pull/1078
With Basic auth, the $WORKFLOW.identity is not available.
Instead you could access the header directly: $WORKFLOW.headers.X-Authorization-{provider} and decode it:
functions:
- name: getIdentity
  type: expression
  operation: '.identity=($WORKFLOW.headers["x-authorization-basic"] | @base64d | split(":")[0])' # mind the lower case!!
You can see a full example here: https://github.com/rhdhorchestrator/workflow-token-propagation-example.
Configuring OIDC properties at SonataFlowPlatform level (Cluster-wide OIDC configuration)
This short guide shows how to inject the Quarkus OIDC settings once at platform‑scope so that all present and future workflows automatically authenticate incoming requests and expose $WORKFLOW.identity.
Prerequisites
- Namespace where the workflows run
 - Keycloak Realm URL
 - Client‑ID
 - Client‑secret
 
This guide assumes that the workflows and the platform are installed in the sonataflow-infra namespace.
export TARGET_NS='sonataflow-infra' # target namespace of workflows and sonataflowplatform CR
Keep the client secret in a Secrets vault; don’t embed it as clear‑text in the CR.
Create the supporting objects
- Secret: holds the confidential client secret
 
e.g
oc create secret generic oidc-client-secret \
  -n $TARGET_NS \
  --from-literal=cred=swf-client-secret  # This is a sample value. You need to replace it with actual value.
Patch the SonataFlowPlatform CR
- Create patch.yaml (or paste inline):
 
e.g
#### All the values below need to be replaced by actual values.
spec:
  properties:
    flow:
    - name: quarkus.oidc.auth-server-url
      value: https://keycloak-host/realms/dev
    - name: quarkus.oidc.client-id
      value: swf-client
    - name: quarkus.oidc.token.header
      value: X-Authorization
    - name: quarkus.oidc.token.issuer
      value: any
    - name: quarkus.oidc.credentials.secret
      valueFrom:
        secretKeyRef:
          key: cred
          name: oidc-client-secret
- Apply the patch:
 
e.g
oc patch sonataflowplatform <Platform CR name> \
  -n $TARGET_NS \
  --type merge \
  -p "$(cat patch.yaml)"
Wait a few seconds for the operator reconcile loop.
Verify the managed properties
e.g
oc get sonataflowplatform <Platform CR name> -n $TARGET_NS -o yaml
You should see all five keys.
Restart running workflow deployments once so Quarkus reloads the file:
e.g
oc rollout restart deployment -l sonataflow.org/workflow -n $TARGET_NS
1.4.6 - Best Practices
Best practices when creating a workflow
A workflow should be developed in accordance with the guidelines outlined in the Serverless Workflow definitions documentation.
This document provides a summary of several additional rules and recommendations to ensure smooth integration with other applications, such as the Backstage Orchestrator UI.
Workflow output schema
To effectively display the results of the workflow and any optional outputs generated by the user interface, or to facilitate the chaining of workflow executions, it is important for a workflow to deliver its output data in a recognized structured format as defined by the WorkflowResult schema.
The output meant for further processing should be placed under the data.result property.
id: my-workflow
version: "1.0"
specVersion: "0.8"
name: My Workflow
start: ImmediatelyEnd
extensions:
  - extensionid: workflow-output-schema
    outputSchema: schemas/workflow-output-schema.json
states:
  - name: ImmediatelyEnd
    type: inject
    data:
      result:
        message: A human-readable description of the successful status. Or an error.
        outputs:
          - key: Foo Bar human readable name which will be shown in the UI
            value: Example string value produced on the output. This might be an input for a next workflow.
        nextWorkflows:
          - id: my-next-workflow-id
            name: Next workflow name suggested if this is an assessment workflow. Human readable, its text does not need to match the true workflow name.
    end: true
Then the schemas/workflow-output-schema.json can look like (referencing the WorkflowResult schema):
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "WorkflowResult",
    "description": "Schema of workflow output",
    "type": "object",
    "properties": {
        "result": {
            "$ref": "shared/schemas/workflow-result-schema.json",
            "type": "object"
        }
    }
}
1.5 - Plugins
1.5.1 - Notifications Plugin
The Backstage Notifications System provides a way for plugins and external services to send notifications to Backstage users.
These notifications are displayed in the dedicated page of the Backstage frontend UI or by frontend plugins per specific scenarios.
Additionally, notifications can be sent to external channels (like email) via “processors” implemented within plugins.
Upstream documentation can be found in:
Frontend
Notifications are messages sent to either individual users or groups. They are not intended for inter-process communication of any kind.
To list and manage, choose Notifications from the left-side menu item.
There are two basic types of notifications:
- Broadcast: Messages sent to all users of Backstage.
 - Entity: Messages delivered to specific listed entities from the Catalog, such as Users or Groups.
 

Backend
The backend plugin provides the backend application for reading and writing notifications.
Authentication
The Notifications are primarily meant to be sent by backend plugins. In such a flow, the authentication is shared among them.
To let external systems (like a Workflow) create new notifications by sending POST requests to the Notification REST API, authentication needs to be properly configured by setting the backend.auth.externalAccess property of the app-config.
Refer to the service-to-service auth documentation for more details, focusing on the Static Tokens section as the simplest setup option.
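As a minimal sketch of the static-token option, the app-config could contain something like the following; the environment variable name and subject are placeholders, not values mandated by the Orchestrator.
backend:
  auth:
    externalAccess:
      - type: static
        options:
          token: ${NOTIFICATIONS_EXTERNAL_TOKEN}   # assumed env var holding the shared token
          subject: orchestrator-workflows          # illustrative subject name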
Creating a notification by external services
An example request for creating a broadcast notification can look like:
curl -X POST https://[BACKSTAGE_BACKEND]/api/notifications -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_BASE64_SHARED_KEY_TOKEN" -d '{"recipients":{"type":"broadcast"},"payload": {"title": "Title of broadcast message","link": "http://foo.com/bar","severity": "high","topic": "The topic"}}'
Configuration
Configuration of the dynamic plugins is in the dynamic-plugins-rhdh ConfigMap created by the Helm chart during installation.
Frontend configuration
Usually there is no need to change the defaults, but small tweaks can be made in the props section:
            frontend:
              redhat.plugin-notifications:
                dynamicRoutes:
                  - importName: NotificationsPage
                    menuItem:
                      config:
                        props:
                          titleCounterEnabled: true
                          webNotificationsEnabled: false
                      importName: NotificationsSidebarItem
                    path: /notifications
Backend configuration
Apart from setting authentication for external callers, there is no special plugin configuration needed.
Forward to Email
It is possible to forward notification content to an email address. To do that, you must add the Email Processor Module to your Backstage backend.
Configuration
Configuration options can be found in the plugin's documentation.
Example configuration:
      pluginConfig:
        notifications:
          processors:
            email:
              filter:
                minSeverity: low
                maxSeverity: critical
                excludedTopics: []
              broadcastConfig:
                receiver: config # or none or users
                receiverEmails:
                  - foo@company.com
                  - bar@company.com
              cache:
                ttl:
                  days: 1
              concurrencyLimit: 10
              replyTo: email@company.com
              sender: email@company.com
              transportConfig:
                hostname: your.smtp.host.com
                password: a-password
                username: a-smtp-username
                port: 25
                secure: false
                transport: smtp
Ignoring unwanted notifications
The configuration of the module explains how to configure filters. Filters are used to ignore notifications that should not be forwarded to email. The supported filters include minimum/maximum severity and a list of excluded topics.
User notifications
Each user notification has a list of recipients. A recipient is an entity in the Backstage catalog. The notification will be sent to the email addresses of the recipients.
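For illustration, a request targeting a specific catalog entity could follow the same pattern as the broadcast example above, with an entity recipient instead; the entity reference and payload values are illustrative.
curl -X POST https://[BACKSTAGE_BACKEND]/api/notifications -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_BASE64_SHARED_KEY_TOKEN" -d '{"recipients":{"type":"entity","entityRef":"user:default/jsmith"},"payload": {"title": "Notification for a user","severity": "normal","topic": "The topic"}}'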
Broadcast notifications
In broadcast notifications we do not have recipients; the notifications are delivered to all users.
The module’s configuration supports a few options for broadcast notifications:
- Ignoring broadcast notifications to be forwarded
 - Sending to predefined address list only
 - Sending to all users whose catalog entity has an email
 
1.5.2 - Orchestrator Plugin
Orchestrator plugins are now installed by RHDH, but are disabled by default.