
Hosted Control Plane and tenant networking

Official documentation: Not yet available

Tested with:

Component Version
OpenShift v4.21.9
OpenShift Virt v4.21.0

ToDos

  • Add a custom endpointPublishingStrategy
  • Find a solution for the NodePort chicken-and-egg problem of the external API load balancer
  • Check WebUI bug: the ingress domain is wrong

Overview

Challenge: run a hosted cluster in a different tenant network segment/VLAN without opening wide access from the tenant segment to the management segment.

Additional requirement: the hub cluster must not have any address or network connection in the tenant network segment. It is only allowed to place virtual machines into that segment.

The worker nodes of the hosted cluster are easy to solve: just connect them to the tenant network segment (important: DHCP is required).

Exposing the hosted control plane components into the tenant network segment is more challenging. The following components have to be considered:

  • API Server
  • OAuth
  • Konnectivity
  • Ignition

Here is a list of possible exposing options (servicePublishingStrategy) for these components:

  • API Server
      • LoadBalancer (recommended, K8s Service type LoadBalancer)
      • NodePort* (not for production)
  • OAuth
      • Route/Ingress (default)
      • NodePort* (not for production)
  • Konnectivity
      • Route/Ingress (default)
      • LoadBalancer (K8s Service type LoadBalancer)
      • NodePort* (not for production)
  • Ignition
      • Route/Ingress (default)
      • NodePort* (not for production)

For our proof of concept we expose the components as follows:

    • API Server: LoadBalancer
    • OAuth: Route/Ingress, via a dedicated router shard
    • Konnectivity: Route/Ingress, via a dedicated router shard
    • Ignition: Route/Ingress, via a dedicated router shard

    Exposing components via a router/ingress shard

    The idea behind the dedicated router/ingress shard is to expose it into the tenant network segment, serving only the hosted cluster components.

    In front of the router/ingress shard sits an external load balancer (for example F5 BIG-IP, NetScaler, ...) that has access to the management network segment and exposes the router shard into the tenant network segment.

    Proof of concept environment overview

    Router between Mgmt and Tenant-A

    A VyOS router acts as router and firewall. It does not allow traffic between the Mgmt and Tenant-A networks, except for DNS and the gateway, and provides the direct internet connection.

    VyOS config commands
    set firewall group address-group ALLOWED-IPS address '10.32.96.1'
    set firewall group address-group ALLOWED-IPS address '10.32.96.31'
    set firewall group address-group ALLOWED-IPS address '10.32.111.254'
    set firewall ipv4 forward filter rule 49 action 'accept'
    set firewall ipv4 forward filter rule 49 description 'Allow IPs'
    set firewall ipv4 forward filter rule 49 destination group address-group 'ALLOWED-IPS'
    set firewall ipv4 forward filter rule 50 action 'drop'
    set firewall ipv4 forward filter rule 50 description 'Drop entire coe lab'
    set firewall ipv4 forward filter rule 50 destination address '10.32.96.0/20'
    
    set interfaces ethernet eth0 address 'dhcp'
    set interfaces ethernet eth1 address '192.168.203.1/24'
    
    set nat source rule 100 outbound-interface name 'eth0'
    set nat source rule 100 source address '192.168.203.0/24'
    set nat source rule 100 translation address 'masquerade'
    set service dhcp-server listen-interface 'eth1'
    set service dhcp-server shared-network-name coe-2003 authoritative
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option default-router '192.168.203.1'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option name-server '10.32.96.1'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 start '192.168.203.100'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 stop '192.168.203.200'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 subnet-id '1'
    set service ssh
    set system host-name 'router-2003'
    set system name-server '10.32.96.1'
    set system name-server '10.32.96.31'
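
The forward-filter logic above can be sanity-checked with a small model (the addresses and the 10.32.96.0/20 range are the ones from this config; this is only an illustration of the rule ordering, not anything VyOS executes):

```python
import ipaddress

# Values from the VyOS config above.
ALLOWED_IPS = {"10.32.96.1", "10.32.96.31", "10.32.111.254"}   # rule 49: accept
DROPPED_NET = ipaddress.ip_network("10.32.96.0/20")            # rule 50: drop

def forwarded(dst: str) -> bool:
    """Mirror the rule order: rule 49 accepts the listed IPs, rule 50 drops
    the lab range, everything else (e.g. the internet via NAT) is forwarded."""
    if dst in ALLOWED_IPS:
        return True
    if ipaddress.ip_address(dst) in DROPPED_NET:
        return False
    return True

print(forwarded("10.32.96.1"))     # DNS server: True
print(forwarded("10.32.96.50"))    # any other mgmt host: False
print(forwarded("8.8.8.8"))        # internet: True
```

Note that 10.32.111.254 sits inside the dropped 10.32.96.0/20 range, which is exactly why it has to be listed in ALLOWED-IPS before the drop rule.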
    

    Ingress Sharding

    Ingress Controller
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: tenant-a
      namespace: openshift-ingress-operator
    spec:
      domain: tenant-a.coe.muc.redhat.com
    
      endpointPublishingStrategy:
        type: NodePortService  # published via NodePorts, fronted by the external load balancer
      namespaceSelector:
        matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values:
              - ingress-test        # namespace used for testing the shard
              - clusters-tenant-a   # hosted control plane namespace (<clusters namespace>-<cluster name>)
    
    % oc get svc -n openshift-ingress router-nodeport-tenant-a
    NAME                       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
    router-nodeport-tenant-a   NodePort   172.30.141.209   <none>        80:32460/TCP,443:32488/TCP,1936:32095/TCP   106s
    

    The ingress sharding load balancer is a RHEL 9 system with HAProxy.

    • Install HAProxy: dnf install haproxy
    • Configure SELinux: setsebool -P haproxy_connect_any 1
    • Apply the example haproxy.cfg below (don't forget to update the NodePorts)
    • Enable and start HAProxy: systemctl enable --now haproxy
    HAProxy config
    global
      log         127.0.0.1 local2
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      daemon
    defaults
      mode                    http
      log                     global
      option                  dontlognull
      option http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
    
    listen ingress-router-443
      bind *:443
      mode tcp
      balance source
      server ucs-blade-server-5 10.32.96.105:32488 check inter 1s
      server ucs-blade-server-6 10.32.96.106:32488 check inter 1s        
      server ucs-blade-server-7 10.32.96.107:32488 check inter 1s
      server ucs-blade-server-8 10.32.96.108:32488 check inter 1s
    
    listen ingress-router-80
      bind *:80
      mode tcp
      balance source
      server ucs-blade-server-5 10.32.96.105:32460 check inter 1s
      server ucs-blade-server-6 10.32.96.106:32460 check inter 1s        
      server ucs-blade-server-7 10.32.96.107:32460 check inter 1s
      server ucs-blade-server-8 10.32.96.108:32460 check inter 1s
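
The two NodePorts (32460/32488) only exist once the router-nodeport-tenant-a Service has been created, so the server lines above have to be regenerated whenever the ports change. A minimal sketch of that generation, using the values from this PoC (in practice the NodePorts would be read from the Service):

```python
# PoC values from this page.
NODES = {
    "ucs-blade-server-5": "10.32.96.105",
    "ucs-blade-server-6": "10.32.96.106",
    "ucs-blade-server-7": "10.32.96.107",
    "ucs-blade-server-8": "10.32.96.108",
}
NODEPORTS = {443: 32488, 80: 32460}  # frontend port -> NodePort

def server_lines(frontend_port: int) -> list[str]:
    """Render the HAProxy 'server' lines for one listen section."""
    nodeport = NODEPORTS[frontend_port]
    return [f"  server {name} {ip}:{nodeport} check inter 1s"
            for name, ip in NODES.items()]

print("\n".join(server_lines(443)))
```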
    

    Add DNS Records

    konnectivity.tenant-a.coe.muc.redhat.com.       IN A 192.168.203.111
    oauth.tenant-a.coe.muc.redhat.com.              IN A 192.168.203.111
    ignition.tenant-a.coe.muc.redhat.com.           IN A 192.168.203.111
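
    The API server hostname (api.tenant-a.coe.muc.redhat.com) is published via the LoadBalancer strategy rather than the router shard, so it needs its own record pointing at whatever address fronts that Service; the target below is a placeholder, not a value from this PoC:

```
api.tenant-a.coe.muc.redhat.com.                IN A <api-load-balancer-ip>
```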
    

    Start the hosted control plane and NodePool

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: 'tenant-a'
      namespace: 'clusters'
      labels:
        "cluster.open-cluster-management.io/clusterset": 'default'
    spec:
      configuration:
        ingress:
          appsDomain: apps.tenant-a.coe.muc.redhat.com
          domain: ''
          loadBalancer:
            platform:
              type: ''
      channel: fast-4.21
      etcd:
        managed:
          storage:
            persistentVolume:
              size: 8Gi
            type: PersistentVolume
        managementType: Managed
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
      pullSecret:
        name: pullsecret-cluster-tenant-a
      sshKey:
        name: sshkey-cluster-tenant-a
      networking:
        clusterNetwork:
          - cidr: 10.132.0.0/14
        serviceNetwork:
          - cidr: 172.31.0.0/16
        networkType: OVNKubernetes
      controllerAvailabilityPolicy: SingleReplica
      infrastructureAvailabilityPolicy: SingleReplica
      platform:
        type: KubeVirt
        kubevirt:
          baseDomainPassthrough: false
      infraID: 'tenant-a'
      services:
        - service: APIServer
          servicePublishingStrategy:
            type: LoadBalancer
            loadBalancer:
              hostname: api.tenant-a.coe.muc.redhat.com
        - service: OAuthServer
          servicePublishingStrategy:
            type: Route
            route:
              hostname: oauth.tenant-a.coe.muc.redhat.com
        - service: OIDC
          servicePublishingStrategy:
            type: Route
        - service: Konnectivity
          servicePublishingStrategy:
            type: Route
            route:
              hostname: konnectivity.tenant-a.coe.muc.redhat.com
        - service: Ignition
          servicePublishingStrategy:
            type: Route
            route:
              hostname: ignition.tenant-a.coe.muc.redhat.com
    
    ---
    apiVersion: hypershift.openshift.io/v1beta1
    kind: NodePool
    metadata:
      name: 'tenant-a'
      namespace: 'clusters'
    spec:
      arch: amd64
      clusterName: 'tenant-a'
      replicas: 2
      management:
        autoRepair: false
        upgradeType: Replace
      platform:
        type: KubeVirt
        kubevirt:
          compute:
            cores: 2
            memory: 8Gi
          rootVolume:
            type: Persistent
            persistent:
              size: 32Gi
          additionalNetworks:
          - name: default/cudn-localnet1-2003  # tenant network (localnet) attachment
          attachDefaultNetwork: false          # do not attach the VMs to the mgmt cluster's pod network
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
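
One detail worth double-checking in the HostedCluster manifest above: every Route hostname has to live under the shard domain (tenant-a.coe.muc.redhat.com), so that the DNS records added earlier resolve them to the shard's external load balancer. A trivial sanity check:

```python
# Hostnames from the HostedCluster services section above.
SHARD_DOMAIN = "tenant-a.coe.muc.redhat.com"
ROUTE_HOSTNAMES = [
    "oauth.tenant-a.coe.muc.redhat.com",
    "konnectivity.tenant-a.coe.muc.redhat.com",
    "ignition.tenant-a.coe.muc.redhat.com",
]

for host in ROUTE_HOSTNAMES:
    # every Route hostname must be a direct child of the shard domain
    assert host.endswith("." + SHARD_DOMAIN), f"{host} is outside {SHARD_DOMAIN}"
print("OK: all route hostnames are under the shard domain")
```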
    

    Deploy an external load balancer for the hosted cluster's ingress

    The load balancer for the hosted cluster's ingress is again a RHEL 9 system with HAProxy, set up the same way as the sharding load balancer above.

    • Install HAProxy: dnf install haproxy
    • Configure SELinux: setsebool -P haproxy_connect_any 1
    • Apply the example haproxy.cfg below (don't forget to update the NodePorts)
    • Enable and start HAProxy: systemctl enable --now haproxy
    HAProxy config
    global
      log         127.0.0.1 local2
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      daemon
    defaults
      mode                    http
      log                     global
      option                  dontlognull
      option http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
    
    listen ingress-router-443
      bind *:443
      mode tcp
      balance source
      server tenant-a-gngj5-mfwp6 192.168.203.101:30190 check inter 1s
      server tenant-a-gngj5-rrbmv 192.168.203.102:30190 check inter 1s        
    
    listen ingress-router-80
      bind *:80
      mode tcp
      balance source
      server tenant-a-gngj5-mfwp6 192.168.203.101:30282 check inter 1s
      server tenant-a-gngj5-rrbmv 192.168.203.102:30282 check inter 1s        
    

    2026-05-04 (updated), 2026-05-01 (created) · Contributors: Robert Bohne