Customer Q&A: Common Questions About Orchestrator

Frequently asked questions and answers from our customer interactions about Orchestrator

This document is the Orchestrator Q&A collection: customer-submitted questions along with detailed answers.

Getting Started & Overview

Q: How mature is the solution Orchestrator? Is it still cutting edge technology or already used by other customers?

A: The Orchestrator was GAed in RHDH 1.5. The current version is 1.6, and it will be merged into RHDH in 1.7; at that point there will be one unified operator that supports both RHDH and the Orchestrator facilities. Other customers already use the Orchestrator in production, some at large scale with multiple thousands of users, and others are currently onboarding.

Q: What is the difference between a simple Workflow (kn workflow create) vs. a Quarkus app workflow (kn workflow quarkus create)?

A: The main difference is the project layout. kn workflow quarkus create creates a project in the Quarkus layout, meaning it includes a Maven project and the workflow resources are placed in src/main/resources. During development, the developer can use the Quarkus CLI or Maven to run the workflow, add additional dependencies to the pom file (e.g. if needed for specific configuration), and generally has more flexibility. This setup uses the local Maven repository to download all of the resources, so the development loop is shorter.

kn workflow create uses the flat layout, which contains only the workflow resources; running it uses a devmode image, so the workflow is built and run inside a container.

Q: What is a workflow? Is it just an app exposing a REST API?

A: In SonataFlow, a workflow is a declarative description of a sequence of steps—also called states—used to orchestrate services, functions, and logic. These workflows are defined in YAML or JSON following the Serverless Workflow Specification.

Technically, when deployed (e.g., using Quarkus in SonataFlow), the workflow is exposed as a REST endpoint. But:

  • The workflow itself is not the implementation of the logic—it’s the orchestration definition of what happens, in what order, and under what conditions.
  • The engine handles the workflow execution, state transitions, and interactions with external services.

Workflows can also be triggered by CloudEvents, when the workflow application is subscribed to a Kafka topic or to a Knative broker. At development time there is no code to implement: the workflow is defined using the spec, and the code is generated by the SonataFlow tools at build time (or at runtime in development mode). There is therefore no fixed set of APIs to implement, as this is internal to SonataFlow. However, each workflow exposes a set of APIs that can be viewed at development time in the devtools dashboard, using the swagger-ui extension.
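
For illustration, here is a minimal sketch of a workflow definition in YAML: a hypothetical "greeting" workflow using the built-in sysout custom function (the names are ours, not from a product sample):

```yaml
# Hypothetical minimal workflow: one operation state that prints a message
id: greeting
version: "1.0"
specVersion: "0.8"
name: Greeting Workflow
start: Greet
functions:
  - name: printMessage
    type: custom
    operation: sysout        # built-in SonataFlow custom function for logging
states:
  - name: Greet
    type: operation
    actions:
      - functionRef:
          refName: printMessage
          arguments:
            message: '"Hello from a workflow"'
    end: true
```

Once deployed, starting an instance of this workflow is a REST call to its endpoint (e.g. POST /greeting).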

Adding custom code to be called in the context of a workflow is also supported as part of the workflow definition. See more here: https://sonataflow.org/serverlessworkflow/latest/core/custom-functions-support.html

Q: What possibilities do we have to develop workflows? SonataFlow YAML and Quarkus only?

A: SonataFlow is an implementation of the serverless workflow spec: https://sonataflow.org/serverlessworkflow/main/core/cncf-serverless-workflow-specification-support.html Workflows can be written in YAML or JSON format. The build process then uses the sonataflow-builder image to generate a Quarkus application from the workflow definition (and schemas/specs). The development tools for workflows are described in the getting started guide: https://sonataflow.org/serverlessworkflow/main/getting-started/getting-familiar-with-our-tooling.html

Another option is to use the Quarkus CLI or mvn quarkus:dev.

For writing workflows there is a VSCode extension that offers code completion and renders the diagram. For workflows that rely on extended Orchestrator features, such as the form widgets, it is preferable to use rhdh-local or a local Backstage with Quarkus.

Containers which include the workflow can then be built and deployed using the GitOps profile: https://sonataflow.org/serverlessworkflow/main/cloud/operator/deployment-profile.html

Q: What are these products, and what are their corresponding Red Hat product names? Apache KIE, Kogito, SonataFlow, Drools/jBPM

A: Apache KIE (Knowledge is Everything): https://kie.apache.org/

It is the Apache Software Foundation umbrella under which, for example, the serverless workflow execution engine, the operator, etc. are developed.

Includes sub-projects like SonataFlow (serverless workflows), Drools (business rules), jBPM (business automation), and many cross-project components.

All these projects are open source, and developed and maintained following the Apache Software Foundation governance recommendations and licensing.

Companies like Red Hat, IBM, and others use this work as the building blocks for delivering their branded products.

Note: sometimes we don’t have a 1:1 correspondence between a sub-project and a GitHub repository.

Kogito: It is the core technology upon which cloud-native business automation, rules execution, serverless workflows, the Data Index, etc. are built. For example, the Drools sub-project holds mostly everything related to business rules and their execution, and is focused on that, while Kogito adds, for example, the ability to execute business rules via REST endpoints.

SonataFlow: It is the sub-project dedicated to developing our vision of the Serverless Workflow Specification. Community documentation: https://sonataflow.org/serverlessworkflow/main/index.html

Pointers to the repositories:

Drools: Is the sub-project dedicated to business rules execution.

jBPM: Is the sub-project dedicated to business automation (BPM).

Correspondence with Red Hat products: SonataFlow is productized as part of OpenShift Serverless (the OpenShift Serverless Logic Operator), which the Orchestrator builds on.


Development & Testing

Q: What is a good way to verify OpenAPI specs and workflows?

A: We usually suggest running locally and solving the errors as they come, if any do. There is no sandbox for testing OpenAPI specs with a workflow.

Q: What local test setup do you recommend?

A: For testing the interaction between workflows and RHDH there are a few options:

  1. Use rhdh-local. This setup runs RHDH locally in a container, and SonataFlow in its own container that points at the local development environment.

  2. Use the Orchestrator plugins development environment from a stable branch, e.g.: https://github.com/redhat-developer/rhdh-plugins/tree/orchestrator-1.6/workspaces/orchestrator#run-locally-from-this-repo This also runs SonataFlow locally: https://github.com/redhat-developer/rhdh-plugins/tree/orchestrator-1.6/workspaces/orchestrator#devmode-local-configuration

Q: How can I find which version of a workflow a specific run executed?

A: ProcessInstances information is stored in the Data Index, so it is only available there, via its GraphQL endpoint.

To get the version of a specific instance, GraphQL supports filters, e.g.:

```bash
curl -s -X POST 'http://data-index-sonataflow-infra.apps.example.lab.eng.tlv2.redhat.com/graphql/' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "query { ProcessInstances(where: { id: { equal: \"xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" } }) { id processId state version } }"
  }' | jq
```

The response:

```json
{
  "data": {
    "ProcessInstances": [
      {
        "id": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "processId": "dynamic-course-select",
        "state": "COMPLETED",
        "version": "1.0"
      }
    ]
  }
}
```

Q: What do you recommend for testing workflows? If we have longer workflows, how can we test individual parts?

A: In the serverless-workflows repository there is CI for testing some of its workflows, e.g.: https://github.com/rhdhorchestrator/serverless-workflows/blob/main/.github/workflows/mta-v7.x-e2e.yaml As a first stage, it deploys SonataFlow on a cluster with the dependencies required for the test. Next, it deploys the workflow and configures it for the testing environment. Once the workflow is ready to be tested, we issue a REST call to invoke it and verify the result is the expected one: https://github.com/rhdhorchestrator/serverless-workflows/blob/main/e2e/mta-v7.x-vm.sh

A workflow is executed from a single endpoint and represents a flow that depends on previous states. There is no API to invoke only part of a workflow.

In addition to end-to-end testing, when working with Quarkus-based workflow projects you might also consider adding JUnit tests. While not testing the final target ecosystem, it can be useful to have an initial set of tests that validates the workflow in a more manageable environment. The drawback is that when your workflow accesses external services, unless you have instances available, you might need to mock them. On the other hand, these mocks can easily be programmed to return the exact set of values you need for testing.

You can find some simple example here:

And more elaborated cases that we use in product here:


Building & Deployment

Q: How can we build an image for a YAML only workflow (kn workflow create)?

A: The build.sh script was extended to support a non-Quarkus layout using the flag -S or --non-quarkus.

Q: How to configure image pull credentials? Must the ServiceAccount be configured for registry access?

A: An image pull secret can be defined globally for the entire cluster rather than per image. Please see: https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/images/managing-images#images-update-global-pull-secret_using-image-pull-secrets

If there is a need to patch a resource after it was generated by the script, consider other methods such as Kustomize, or add a patch command to edit the resource.

Q: Which workflows become part of the workflow deployment? Every file named *.sw.yaml or *.sw.json under src/main/resources?

A: The kn-workflow gen-manifest CLI (which generates the manifests) expects only one sw.yaml or sw.json file. If there is more than a single file, the CLI fails with:

❌ ERROR: generating manifests: ❌ ERROR: multiple SonataFlow definition files found

The reason is that the main workflow is translated into a SonataFlow CR, which represents a single workflow.

If additional subflows are needed, they can be placed in their own directory and referenced by the kn-workflow gen-manifest CLI using the --subflows-dir flag.

In that case, the subflows have to be placed under src/main/resources/subflows.

Q: How can I lifecycle workflow / templates if there is a new version available?

A: It really depends on how the workflow is designed and implemented. A few options:

  • Declarative configuration synced via GitOps might be best if PRs are issued to the projects using the template. The responsible developer team or app owners should implement and test the change. Tools like renovatebot or dependabot might help here.
  • Workflows implemented in an imperative approach might need "update workflows" that carry the changes from one version to the next. Examples could be database changes, patches to new API versions, etc.

In any case, with RHDH and the Orchestrator you should be equipped to analyse which app is installed where, in which version, and then plan the next steps.

Q: Can we use the Quarkus extension for building workflow images, or does the build.sh script do more than just building the image in the standard way?

A: The build.sh script isn’t a must. It offers an opinionated method to build workflows with specific Quarkus extensions that are recommended for production, and it enables persistence.

The core part of building a workflow is using the openshift-serverless-logic builder image, as referenced in the build script here and here.

To build the workflow image it is enough to run the following (with all or part of the extensions, depending on your needs):

```bash
podman build \
  -f ../../orchestrator-demo/docker/osl.Dockerfile \
  --tag $TARGET_IMAGE \
  --platform linux/amd64 \
  --ulimit nofile=4096:4096 \
  --build-arg QUARKUS_EXTENSIONS="\
    org.kie:kie-addons-quarkus-persistence-jdbc:9.102.0.redhat-00005,\
    io.quarkus:quarkus-jdbc-postgresql:3.8.6.redhat-00004,\
    io.quarkus:quarkus-agroal:3.8.6.redhat-00004,\
    io.quarkus:quarkus-smallrye-reactive-messaging-kafka" \
  --build-arg MAVEN_ARGS_APPEND="\
    -DmaxYamlCodePoints=35000000 \
    -Dkogito.persistence.type=jdbc \
    -Dquarkus.datasource.db-kind=postgresql \
    -Dkogito.persistence.proto.marshaller=false" \
  .
```

Please note that these flags were correct at the time of writing and might change in the future.

There is a blog post that explains this in detail.

Q: What are best practices for building and deploying workflows?

Extended Q: When we implement our own workflow like in this example https://github.com/rhdhorchestrator/serverless-workflows/tree/main/workflows/experimentals/cluster-onboarding, how do we build and deploy such a workflow? Is there no JSON/YAML workflow specification involved when we write the workflow as code? Could we add a workflow spec and run other steps or would we install the code workflow and call it from another YAML workflow?

A: A better reference for learning about the build and deploy resource would be this repository: https://github.com/rhdhorchestrator/orchestrator-demo/

For instance, this example: https://github.com/rhdhorchestrator/orchestrator-demo/tree/main/02_advanced#building-the-workflow explains how to use a script to build the workflow, generate its manifests, and deploy it to the cluster.

There is no need to write any code other than:

  • The workflow definition (yaml / json)
  • The data input schema (json)
  • The data output schema (json) - common for all workflows in the context of the Orchestrator
  • The application.properties - for configuration
  • The secret.properties - for sensitive information
  • The required spec files for interacting with the external services

This is also described in a two-part blog series: one using a script, and the other using the build tools without the script.

Q: How can I build a workflow in the Orchestrator?

A: The Orchestrator is built on OpenShift Serverless and SonataFlow / Kogito. The Orchestrator “hooks” into the SonataFlow platform and can display, start, and show the output of workflows. Additionally, the Orchestrator plugin provides a custom template action that can be used to start workflows.

You cannot build workflows in the Orchestrator itself. The idea behind it is that an RHDH user (typically a developer) wants to use these flows, e.g. to provision external resources, or uses them implicitly in a template and can then view their status. (By the way, the Orchestrator also supports the RHDH RBAC model, so you can ensure, for example, that only users/groups that can see certain templates can also see the corresponding workflows.) To work with this, you need to know that such a workflow (completely container-native, of course) always runs in a pod, is accessible via a service, and receives its schemas, properties, etc. via ConfigMaps, for example.

In the simplest case (though not suitable for production), if the SonataFlow operator (or OpenShift Serverless Logic Operator) is installed, you can simply create a SonataFlow Custom Resource.
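
As a rough sketch (not a production setup; the field names follow the community docs, and the flow shown is the hypothetical greeting example from above):

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/profile: dev   # dev profile: quick iteration, not registered in the Data Index
spec:
  flow:
    start: Greet
    functions:
      - name: printMessage
        type: custom
        operation: sysout
    states:
      - name: Greet
        type: operation
        actions:
          - functionRef:
              refName: printMessage
              arguments:
                message: '"Hello from a workflow"'
        end: true
```

Applying this CR lets the operator build and deploy the workflow as described below.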

Examples can be viewed here:

What then happens (in the background, through the operator):

  • A build with a BuildConfig is started; the standard builder image, and thus the standard runtime image, can be adapted. By default these are Maven/JDK and Quarkus images.
  • When the build is finished, the workflow is started and the corresponding service and even a route are created (the route, i.e. the expose, can be suppressed).
  • There is also a development profile. CAUTION: workflows in DevMode are not entered into the so-called “Data Index Service” and are therefore not visible in RHDH. You can test them and play with them, but not interact with them via the RHDH Orchestrator. https://kiegroup.github.io/kogito-docs/serverlessworkflow/latest/cloud/operator/developing-workflows.html

Authentication & Security

Q: How do you pass credentials to functions which are defined as OpenAPI spec?

A: The workflow uses a token which is defined in RHDH. This is supported by https://backstage.io/docs/auth/service-to-service-auth/#access-restrictions

In the RHDH ConfigMap there is the corresponding section:

```yaml
backend:
  auth:
    externalAccess:
      - type: static
        options:
          token: ${BACKEND_SECRET}
          subject: orchestrator
```

The value for BACKEND_SECRET is taken from an RHDH secret, e.g. backstage-backend-auth-secret (if installed by the Orchestrator operator).

In the workflow, the secret is referenced from application.properties:

```properties
quarkus.rest-client.scaffolder_openapi_yaml.url=${RHDH_URL}
quarkus.openapi-generator.scaffolder_openapi_yaml.auth.BearerToken.bearer-token=${SCAFFOLDER_BEARER_TOKEN}
```

The first property points to RHDH_URL (use the internal service rather than the OpenShift route, to avoid certificate issues).

The second property takes its value from the BACKEND_SECRET.

The application.properties file is translated into the ConfigMap whose name ends with -props. The value for the variable is taken from secret.properties, which is translated into a Secret.


Integration & APIs

Q: How can I add Quarkus extensions to workflows?

A: When running the workflow locally using Maven, the extensions are added to pom.xml (either directly or via quarkus ext add). When the workflow image is built, the QUARKUS_EXTENSIONS env var is used to add additional dependencies to the build process.

e.g.:

```bash
QUARKUS_EXTENSIONS="io.quarkus:quarkus-agroal,io.quarkus:quarkus-jdbc-mysql" ~/projects/orchestrator-demo/scripts/build.sh --image=registry.internal/workflows/dbsetup:latest -P
```

Q: About catalog-info.yaml and Developer Hub entity model: If we want to bring in our own asset data, do we need to make sure it is exported into the specific Developer Hub format (yaml syntax)?

A: Developer Hub implements the default Backstage entity model, which many plugins also use as a baseline to identify resources (https://backstage.io/docs/features/software-catalog/system-model/). Red Hat’s suggestion is to use the standard model, which ensures the best possible compatibility and future readiness. The simplest way is therefore to extract the needed data and transform/format it into the default YAML syntax. If you need to integrate systems that already have their own data model and act as a single source of truth, another option is to bring them in programmatically, for example via a custom catalog entity provider.

Q: How can I tell in the workflow which OpenAPI Spec should use which Rest Client?

A: See https://sonataflow.org/serverlessworkflow/latest/service-orchestration/orchestration-of-openapi-based-services.html#proc-configuring-openapi-service-endpoint-url

To configure the endpoints, you must use the sanitized OpenAPI specification file name as the REST client configuration key. The configuration key must be set as a valid environment variable. For example, a file named subtraction.yaml has the configuration key subtraction_yaml. For more information about how to sanitize file names, see Environment Variables Mapping Rules.

All of this configuration originates in Quarkus, e.g. for the rest-client: https://quarkus.io/guides/rest-client#create-the-configuration

Each spec file represents a system. In the workflow, the functions section defines the APIs of that service by referencing the spec. In application.properties we map each spec file to the URL of its service and its authentication (see the application.properties example in the Authentication & Security section).

Q: How do we call the Scaffolder from a workflow? Is there an example somewhere?

A: Here is an example for running a software template from a workflow: https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/05_software_template_hello_world/workflow/src/main/resources/workflow.sw.yaml

This is the scaffolder OpenAPI spec: https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/05_software_template_hello_world/workflow/src/main/resources/specs/scaffolder-openapi.yaml

Please note that the arguments for the software template aren’t part of the API; they should be provided according to the software template definition, e.g. https://github.com/rhdhorchestrator/orchestrator-demo/blob/main/05_software_template_hello_world/workflow/src/main/resources/workflow.sw.yaml

One option for capturing the exact list of arguments is to look at the template definition. Another is to invoke the software template from the browser with the developer tools open (before invocation), switch to the Network tab, and “copy as cURL”: this shows exactly how the software template was called via the API, and the call is made the same way from a workflow. The input values will have to be defined in the input schema of the workflow (unless we want to compute them in-workflow rather than have the user provide them).
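
For orientation, this is roughly how a workflow declares a function from such a spec (a sketch; the operationId shown is hypothetical - take the real one from the spec file):

```yaml
functions:
  - name: runScaffolderTemplate
    # <path to spec>#<operationId>, per the Serverless Workflow OpenAPI function syntax
    operation: specs/scaffolder-openapi.yaml#createTask
```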

Q: How to integrate own CustomActions?

A: Since RHDH is not rebuilt, we cannot make code changes directly; custom actions are delivered as dynamic plugins, just like custom plugins. The question aims at creating resources for which there is no template action yet - in that case, you would rather go the way of an Orchestrator workflow, which you can address via a template action.

For example:

```yaml
- id: trigger-workflow
  name: Execute Orchestrator Workflow
  action: orchestrator:run
  input:
    workflowId: ${{ parameters.workflowId }}
    parameters: ${{ parameters.workflowParameters }}
    waitForCompletion: true
    timeout: 300
```

But both are possible - many roads lead to Rome (the Orchestrator’s template action also comes from the Orchestrator backend plugin).

If there is a backend plugin in Backstage that supports the action and exposes a REST API for it, the workflow can call it, either by creating an OpenAPI spec for it or by using the rest function option. If there isn’t one, the action can only be reached by invoking a software template that calls it. This is exactly how we invoke software templates and send notifications from workflows, using the scaffolder-backend plugin and the notifications-backend plugin. In short: if there is a backend plugin with an exposed API, the workflow should be able to call it.


UI & User Experience

Q: How to use the Notification Plugin?

A: You also need to enable the backstage-plugin-notifications-backend-module-email-dynamic plugin. See more details about the email plugin here: https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.6/html/dynamic_plugins_reference/con-preinstalled-dynamic-plugins#rhdh-tech-preview-plugins

and here:

The user/group entity in backstage must have an email address set. See https://backstage.io/docs/features/software-catalog/descriptor-format/#kind-user
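
Enabling the email module happens in the dynamic plugins configuration. A minimal sketch (the package path follows the RHDH preinstalled-plugins convention - verify it against your RHDH version):

```yaml
# Excerpt from the dynamic plugins ConfigMap (e.g. dynamic-plugins-rhdh)
plugins:
  - package: ./dynamic-plugins/dist/backstage-plugin-notifications-backend-module-email-dynamic
    disabled: false
```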

Q: What UI widgets can I use? How do I populate the User/Owner in the widget?

A: The documentation for this is here: https://github.com/redhat-developer/rhdh-plugins/blob/main/workspaces/orchestrator/docs/orchestratorFormWidgets.md

It is an additional plugin that provides the extensible UI capabilities (part of the Orchestrator). The current released version is 1.6.0, and we are currently working on 1.6.1 to fix minor issues.

Here is an example of populating the User field. The Backstage catalog backend OpenAPI spec can be fetched with:

```bash
curl -s -k 'https://{backstage-url}/api/catalog/openapi.json' \
  -H 'Authorization: Bearer...'
```

From that spec you can learn how to fetch entities; e.g., to fetch Users or Groups use: /api/catalog/entities?filter=kind=user,kind=group

From the data input schema use this as the fetch:url value:

"$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
      "selectUser": {
        "type": "string",
        "title": "Please enter user name. Start typing for autocompletion",
        "ui:widget": "ActiveTextInput",
        "ui:props": {
          "fetch:url": "$${{backend.baseUrl}}/api/catalog/entities?fields=metadata.name&filter=kind=user,kind=group",
          "fetch:response:value": "$map($, function($v) { $lowercase($v.kind) & ':' & $v.metadata.namespace & '/' & $v.metadata.name })",
          "fetch:response:autocomplete": "$map($, function($v) { $lowercase($v.kind) & ':' & $v.metadata.namespace & '/' & $v.metadata.name })",
          "fetch:method": "GET",
          "fetch:headers": {
            "Authorization": "Bearer $${{identityApi.token}}"
          }
        }
      }
    }

Example can be found here: https://github.com/rhdhorchestrator/serverless-workflows/commit/d92916902b16427a936cb9abf7ebfa4c9f229c46

Parsing the response from the fetch:url is done using JSONata. This is a useful tool for evaluating the expressions: https://try.jsonata.org/


Subflows

Q: Is it possible to create modularized workflows and call these as a sub-workflow from another workflow?

A: The serverless workflow specification supports subflows - https://github.com/serverlessworkflow/specification/blob/0.8.x/specification.md#subflow-action.

Each workflow can include many subflows, built into the same image as the main workflow.

Subflows are internal to the workflow and are not exposed the way the main workflow is. The definition itself can be shared between multiple workflows at build time, but at runtime the subflows aren’t exposed.
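
A sketch of how a parent workflow calls a subflow by its workflow id (the ids here are hypothetical):

```yaml
# Action in the parent workflow; 'child-flow' must be built into the same image
states:
  - name: RunChild
    type: operation
    actions:
      - name: callChildFlow
        subFlowRef:
          workflowId: child-flow
          version: "1.0"
    end: true
```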

Q: Do all subflows have to be in the same deployment? Can we deploy workflows separately? How would then the subFlowRef be resolved or how would we call other workflows?

A: A deployment represents a single workflow. To call subflows from the main workflow, all subflows need to be built into the same image as the calling workflow.

Subflows aren’t shown in the Data Index, and are therefore not shown in the Orchestrator plugin. Please take a look here for a more detailed example of subflows.

Q: How do the subflows work in this example?

Extended Q: How are workflowA and workflowB started in the subflow example https://github.com/apache/incubator-kie-kogito-examples/blob/main/serverless-workflow-examples/serverless-workflow-subflows-event/src/main/resources/master.sw.json ?

In the master workflow, there are only two states: “setup” and “waitForEvents”. ‘Wait for events’ implies that we only wait for something. So where are workflowA and workflowB started?

How are events related to subworkflows? Are events required? Is the call to setup synchronous?

A: From the README (https://github.com/apache/incubator-kie-kogito-examples/tree/main/serverless-workflow-examples/serverless-workflow-subflows-event): this example illustrates how to trigger workflows manually with additional parameters calculated by an initial workflow. The workflow responsible for setting up the parameters is executed as the start state. Then, all possible workflows that might be instantiated with those parameters are registered using an event state. The exclusive property is set to false, ensuring that the process instance remains active until all possible workflows have been executed.

So when the workflow is started, it sets things up and then waits for events. workflowA and workflowB are “started” upon receiving their corresponding events, defined by the eventRefs field. Until both events are received, the workflow keeps waiting for the remaining ones.
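
Schematically, the waiting state looks roughly like this (a YAML sketch of the pattern used by the JSON example; event and workflow names are illustrative):

```yaml
states:
  - name: waitForEvents
    type: event
    exclusive: false            # the state stays active until all listed events are received
    onEvents:
      - eventRefs: [startWorkflowA]
        actions:
          - subFlowRef: workflowA
      - eventRefs: [startWorkflowB]
        actions:
          - subFlowRef: workflowB
    end: true
```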


Architecture & Infrastructure

Q: What is a SonataFlow, SonataPlatform, SonataClusterPlatform, etc.?

A: The SonataFlow operator (upstream name of the OpenShift Serverless Logic Operator) defines the following custom resource definitions:

  1. SonataFlow - the resource that defines the workflow and its profile - https://sonataflow.org/serverlessworkflow/main/cloud/operator/deployment-profile.html
  2. SonataFlowPlatform - a singleton per namespace, used to configure the workflows and to manage and configure the shared services. It supports configuration for eventing, persistence, and monitoring. See https://sonataflow.org/serverlessworkflow/latest/cloud/operator/supporting-services.html#deploy-supporting-services
  3. SonataFlowClusterPlatform - the cluster-wide equivalent of SonataFlowPlatform. While SonataFlowPlatform applies to a single namespace, the SonataFlowClusterPlatform resource provides a shared configuration across multiple namespaces in an OpenShift cluster. Find more here: https://sonataflow.org/serverlessworkflow/latest/cloud/operator/supporting-services.html#cluster-scoped-eventing-system-configuration

Q: Are there central parts which are shared between all workflows (apps)? E.g. data index? How do workflows communicate/connect to it?

A: Yes, SonataFlow defines several shared infrastructure components that serve all deployed workflows across the system. These shared services are crucial for execution, observability, and coordination.

  1. The Data Index Service (sonataflow-data-index)
  2. The SonataFlow Jobs Service, responsible for time-based actions

The communication between workflows and the shared services is done via CloudEvents. Based on the platform-level configuration (shared by all workflows, either in a single namespace or globally), the operator adds the properties required for each workflow to interact with the services. See more about the supported configuration at https://sonataflow.org/serverlessworkflow/main/use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.html#job-service-quarkus-extensions
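
The shared services are enabled on the platform resource. A minimal sketch (the field names follow the community docs; the values are illustrative):

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: sonataflow-infra
spec:
  services:
    dataIndex:
      enabled: true     # deploys the Data Index for workflows in this namespace
    jobService:
      enabled: true     # deploys the Jobs Service for timers and timeouts
```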

More about the supporting services can be found here: https://sonataflow.org/serverlessworkflow/latest/cloud/operator/supporting-services.html

Q: How to handle operator created resources?

A: Manage the inputs (the Backstage CR and referenced ConfigMaps), not the operator’s generated outputs. Create your own ConfigMaps for app-config.yaml (and auth fragments), reference them from the Backstage CR, and keep both the CR and those ConfigMaps under Git. Do not hand-edit the operator’s default app-config-* ConfigMaps; the operator may recreate or override them on restart. https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.5/html-single/configuring_red_hat_developer_hub/index.xml

Application config

  • Create a ConfigMap (e.g., my-rhdh-app-config) that contains your app-config.yaml.
  • Reference it from the Backstage CR: spec.application.appConfig.configMaps.
  • Keep both the ConfigMap manifest and the Backstage CR in your Git repo (and sync with Argo CD). This is the supported way to customize RHDH without changing operator-owned ConfigMaps.

Auth and other fragments

If you split auth/config into separate fragments (e.g., an “auth” ConfigMap), add it to the same list in the Backstage CR (spec.application.appConfig.configMaps) so the operator mounts all of them. (The official examples show adding additional ConfigMaps there.)

Dynamic plugin configuration (including Orchestrator UI)

Store dynamic plugin settings in a ConfigMap (commonly dynamic-plugins-rhdh) and reference it from the Backstage CR via spec.application.dynamicPluginsConfigMapName. Keep it in Git.

Extra files & secrets

If you need to mount additional files beyond app-config.yaml (for example, CA bundles, RBAC policies, or other files), use spec.application.extraFiles in the Backstage CR - again, declared as ConfigMaps/Secrets that you version in Git. A sketch tying these together follows below.
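
A minimal sketch of a Backstage CR referencing user-owned ConfigMaps (the names here are hypothetical, and the apiVersion varies between operator versions):

```yaml
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: my-rhdh
spec:
  application:
    appConfig:
      configMaps:
        - name: my-rhdh-app-config   # your app-config.yaml
        - name: my-rhdh-auth         # optional auth fragment
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
```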

The Orchestrator plugin explicitly recommends the GitOps approach (deployment with GitOps): https://www.rhdhorchestrator.io/main/docs/installation/orchestrator/


Advanced Topics & Troubleshooting

Q: How to implement custom logic for DB capability?

A: Via a Java module, or via connectors with Camel K.

Q: How do we properly uninstall a workflow?

A: In SonataFlow, cleanup of workflows and their runs is done via the DB, to preserve auditing information. Therefore, even removing the workflow resources (SonataFlow CR, ConfigMaps, Secrets) from the cluster will not remove their appearance in the Data Index. This information is required for auditing purposes, e.g. maintaining the history of workflow runs.

In Orchestrator 1.6 the workflow will be shown as unavailable. In Orchestrator 1.7 deleted workflows will be filtered from the UI - tracked by this issue.

Q: How to most easily implement prod DB approval process?

A: Here is an example scaffolding template which generates two git repos (source code and *-gitops for Kubernetes manifests), following Red Hat best practice: https://github.com/idp-team/software-templates/tree/master/scaffolder-templates/quarkus-web-template

Q: How to add custom Java code / classes?

A: Follow this example: https://sonataflow.org/serverlessworkflow/main/core/custom-functions-support.html#con-func-java

The class needs to be defined with a fully qualified name (i.e., have a package defined that matches the one referenced from the workflow). In addition, the class needs to be annotated with @ApplicationScoped or @Dependent so that CDI can load it dynamically.
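
On the workflow side, such a class is referenced as a custom function. A sketch (the class and method names are hypothetical):

```yaml
functions:
  - name: myJavaFunction
    type: custom
    # service:java:<fully qualified class>::<method>, per the custom-functions docs
    operation: service:java:com.example.MyService::myMethod
```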

Q: How do I implement my own logic - with Quarkus apps, or can I use my own images?

A:

  1. The general workflow logic should be described in YAML - then you can also view it in the Orchestrator, for example (or visualize it with other tools).
  2. In your image, you can do whatever you want. Here are a few examples: https://github.com/rhdhorchestrator/serverless-workflows/tree/main/workflows/experimentals

If you don’t want to leave it to the operator which image it uses for building and for runtime, there is a blog post with explanations and links to scripts, etc. These build everything together, giving you full control, and you can also build your own functions, which you then call from your YAML flow: https://www.rhdhorchestrator.io/blog/building-and-deploying-workflows/

Note: of course, you can also build the entire logic as an application, pack it into your image, and then just call “magic_kicks_in_here” from your flow - but then that’s no longer really a workflow. The boundaries are fluid, but ideally you should keep the logic in the workflow and expose your code only as functions: https://www.rhdhorchestrator.io/1.6/docs/


Template for New Q&A Entries

When adding new questions, use this template for consistency:

<details>
<summary><strong>Q: [Your question here in clear, concise language]</strong></summary>

**A:** [Your detailed answer here]

Key points:

- Point 1
- Point 2
- Point 3

```yaml
# Include relevant code/configuration examples
apiVersion: example/v1
kind: Example
metadata:
  name: sample
```

</details>

Contributing to this Q&A

Found a question that’s not covered? Please:

  1. Check existing questions first
  2. Follow the template format above
  3. Include practical examples where applicable
  4. Link to relevant documentation
  5. Test any code snippets before adding them