From e7a12d5f41e86705bbb0236dd15a2f8faadfb2eb Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Mon, 26 Jan 2026 16:21:43 +0100 Subject: [PATCH 1/4] Initial pass at ingress-bgp docs --- content/patterns/ingress-mesh-bgp/_index.adoc | 33 +++ .../imbgp-getting-started.adoc | 21 ++ modules/imbgp-about.adoc | 70 ++++++ modules/imbgp-architecture.adoc | 126 ++++++++++ modules/imbgp-deploying.adoc | 220 ++++++++++++++++++ static/images/ingress-mesh-bgp/network.svg | 4 + 6 files changed, 474 insertions(+) create mode 100644 content/patterns/ingress-mesh-bgp/_index.adoc create mode 100644 content/patterns/ingress-mesh-bgp/imbgp-getting-started.adoc create mode 100644 modules/imbgp-about.adoc create mode 100644 modules/imbgp-architecture.adoc create mode 100644 modules/imbgp-deploying.adoc create mode 100644 static/images/ingress-mesh-bgp/network.svg diff --git a/content/patterns/ingress-mesh-bgp/_index.adoc b/content/patterns/ingress-mesh-bgp/_index.adoc new file mode 100644 index 000000000..40034de41 --- /dev/null +++ b/content/patterns/ingress-mesh-bgp/_index.adoc @@ -0,0 +1,33 @@ +--- +title: Ingress Mesh BGP +date: 2025-01-26 +tier: sandbox +summary: This pattern demonstrates multi-cluster service mesh networking using MetalLB with BGP and Red Hat Service Interconnect (Skupper) for cross-cluster connectivity. 
+rh_products: +- Red Hat OpenShift Container Platform +- Red Hat Advanced Cluster Management +- Red Hat Service Interconnect +industries: +- General +- Telecommunications +aliases: /ingress-mesh-bgp/ +# pattern_logo: ingress-mesh-bgp.png # TODO: Create pattern logo +links: + github: https://github.com/validatedpatterns/ingress-mesh-bgp + install: imbgp-getting-started + bugs: https://github.com/validatedpatterns/ingress-mesh-bgp/issues +ci: ingressmeshbgp +--- +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/imbgp-about.adoc[leveloffset=+1] + +include::modules/imbgp-architecture.adoc[leveloffset=+1] + +[id="next-steps_imbgp-index"] +== Next steps + +* link:imbgp-getting-started[Deploy the Ingress Mesh BGP pattern]. diff --git a/content/patterns/ingress-mesh-bgp/imbgp-getting-started.adoc b/content/patterns/ingress-mesh-bgp/imbgp-getting-started.adoc new file mode 100644 index 000000000..a61bbdc2a --- /dev/null +++ b/content/patterns/ingress-mesh-bgp/imbgp-getting-started.adoc @@ -0,0 +1,21 @@ +--- +title: Getting started +weight: 10 +aliases: /ingress-mesh-bgp/imbgp-getting-started/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/imbgp-deploying.adoc[leveloffset=1] + +[id="next-steps_imbgp-getting-started"] +== Next steps + +After the pattern is deployed and working correctly, you can: + +* Verify the BGP routing is functioning correctly by checking the routing table on the core router +* Test the anycast service by accessing it from the client VM +* Explore the Red Hat Service Interconnect (Skupper) console to see the cross-cluster connectivity diff --git a/modules/imbgp-about.adoc b/modules/imbgp-about.adoc new file mode 100644 index 000000000..d65d2c362 --- /dev/null +++ b/modules/imbgp-about.adoc @@ -0,0 +1,70 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="about-ingress-mesh-bgp-pattern"] += About the Ingress 
Mesh BGP pattern + +Use case:: + +* Deploy multi-cluster applications with unified ingress using BGP-based load balancing. +* Enable anycast IP addressing for seamless failover between OpenShift clusters. +* Connect services across clusters using Red Hat Service Interconnect (Skupper) for secure east-west traffic. +* Demonstrate enterprise-grade BGP routing integration with Kubernetes/OpenShift environments. ++ +[NOTE] +==== +This pattern is designed for AWS environments and simulates a datacenter-like BGP network topology using EC2 instances as routing infrastructure. +==== + +Background:: +Modern distributed applications often span multiple clusters for high availability, geographic distribution, or workload isolation. Traditional ingress solutions provide per-cluster access, but organizations need unified entry points that can intelligently route traffic across clusters. + +This pattern addresses these requirements by combining: + +* *MetalLB with BGP mode* - Provides load balancer services that advertise routes via BGP, enabling anycast addressing where the same IP is reachable through multiple clusters. +* *Red Hat Service Interconnect (Skupper)* - Creates a secure application-layer virtual network connecting services across clusters without requiring VPN or special network configurations. +* *FRRouting (FRR)* - Industry-standard routing software running on EC2 instances that acts as the BGP peering infrastructure, simulating top-of-rack (TOR) switches and core routers. 
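+
+As an illustration of how the MetalLB piece fits together, the following sketch shows the kind of custom resources involved on the west cluster. It is not taken from the pattern's charts: the resource names, namespace, and peer address are assumptions, while the ASNs and anycast range match the topology described in this pattern.
+
+[source,yaml]
+----
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+  name: west-tor                # illustrative name
+  namespace: metallb-system
+spec:
+  myASN: 65001                  # MetalLB ASN on the west cluster
+  peerASN: 64001                # west TOR router ASN
+  peerAddress: 192.168.12.100   # west TOR address (assumed)
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: anycast-pool            # illustrative name
+  namespace: metallb-system
+spec:
+  addresses:
+  - 192.168.155.0/24            # anycast range advertised by both clusters
+---
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+  name: anycast-adv             # illustrative name
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+  - anycast-pool
+----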
+ +[id="about-solution-imbgp"] +== About the solution + +This pattern deploys a complete multi-cluster networking demonstration on AWS that includes: + +* Two OpenShift clusters designated as "west" (hub) and "east" (spoke) +* A simulated routing infrastructure with FRR-based routers +* MetalLB configured in BGP mode on both clusters +* Red Hat Service Interconnect linking services between clusters +* A hello-world application deployed across both clusters demonstrating the connectivity + +The solution uses ECMP (Equal-Cost Multi-Path) routing to distribute traffic across both clusters when accessing the anycast IP address. + +.Benefits of the Ingress Mesh BGP pattern: + +* *Unified service access* - Single IP address reaches services on multiple clusters +* *Automatic failover* - BGP route withdrawal provides fast failover when a cluster becomes unavailable +* *Secure cross-cluster communication* - Skupper encrypts all inter-cluster traffic using mutual TLS +* *No network infrastructure changes* - Skupper works over existing networks without VPNs or firewall changes +* *GitOps-driven deployment* - All components are deployed and managed through ArgoCD + +[id="about-technology-imbgp"] +== About the technology + +The following technologies are used in this solution: + +https://www.redhat.com/en/technologies/cloud-computing/openshift/try-it[Red Hat OpenShift Platform]:: +An enterprise-ready Kubernetes container platform built for an open hybrid cloud strategy. It provides a consistent application platform to manage hybrid cloud, public cloud, and edge deployments. + +https://www.redhat.com/en/technologies/management/advanced-cluster-management[Red Hat Advanced Cluster Management for Kubernetes]:: +Controls clusters and applications from a single console, with built-in security policies. Extends the value of Red Hat OpenShift by deploying apps, managing multiple clusters, and enforcing policies across multiple clusters at scale. 
+ +https://www.redhat.com/en/technologies/cloud-computing/service-interconnect[Red Hat Service Interconnect]:: +Based on the open source Skupper project, Red Hat Service Interconnect enables secure communication between services across different environments without requiring VPN infrastructure or special firewall rules. It creates a virtual application network that works at Layer 7. + +https://metallb.universe.tf/[MetalLB]:: +A load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. In this pattern, MetalLB operates in BGP mode to advertise service IPs as routes to the upstream network infrastructure. + +https://frrouting.org/[FRRouting (FRR)]:: +A free and open source Internet routing protocol suite for Linux and Unix platforms. It implements BGP, OSPF, RIP, and other protocols. In this pattern, FRR runs on EC2 instances to simulate datacenter routing infrastructure. + +Kubernetes Gateway API:: +The next generation of Kubernetes Ingress, providing a more expressive and extensible API for managing traffic into and within a cluster. Red Hat Service Interconnect uses Gateway API for exposing services. diff --git a/modules/imbgp-architecture.adoc b/modules/imbgp-architecture.adoc new file mode 100644 index 000000000..c05b25b07 --- /dev/null +++ b/modules/imbgp-architecture.adoc @@ -0,0 +1,126 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="architecture-ingress-mesh-bgp"] += Architecture + +The Ingress Mesh BGP pattern demonstrates a multi-cluster networking architecture that combines BGP-based anycast ingress with service mesh connectivity. The architecture simulates an enterprise network topology on AWS. 
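+
+The simulated fabric boils down to plain FRR BGP configuration. The following sketch approximates what the core router's config might look like; the ASNs follow the topology tables in this document, while the neighbor addresses and the ECMP setting are assumptions rather than the pattern's actual configuration.
+
+[source,text]
+----
+router bgp 64666
+ neighbor 192.168.12.100 remote-as 64001   ! west TOR (address assumed)
+ neighbor 192.168.16.100 remote-as 64002   ! east TOR (address assumed)
+ address-family ipv4 unicast
+  maximum-paths 2                          ! ECMP across both TOR paths
+ exit-address-family
+----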
+ +[id="network-topology-imbgp"] +== Network topology + +The pattern creates the following network components on AWS: + +.Network topology diagram +image::ingress-mesh-bgp/network.svg[Network Topology,700] + +=== VPCs and subnets + +The pattern provisions several VPCs to simulate separate network segments: + +* *Client-Core VPC (192.168.8.0/24)* - Contains the client VM and core router +* *Core-West TOR VPC (192.168.12.0/24)* - Connects the core router to the west cluster's top-of-rack router +* *Core-East TOR VPC (192.168.16.0/24)* - Connects the core router to the east cluster's top-of-rack router +* *West Workers VPC (10.0.0.0/16)* - The west OpenShift cluster's VPC +* *East Workers VPC (10.1.0.0/16)* - The east OpenShift cluster's VPC + +=== Routing infrastructure + +The pattern deploys EC2 instances running FRRouting to create a simulated datacenter network: + +[cols="1,1,2"] +|=== +|Component |ASN |Description + +|Core Router +|64666 +|Central router that peers with both TOR routers and advertises client network routes + +|West TOR +|64001 +|Top-of-rack router for the west cluster, peers with core and west OpenShift workers + +|East TOR +|64002 +|Top-of-rack router for the east cluster, peers with core and east OpenShift workers + +|West OpenShift (MetalLB) +|65001 +|MetalLB speakers on west cluster workers, peer with west TOR + +|East OpenShift (MetalLB) +|65002 +|MetalLB speakers on east cluster workers, peer with east TOR +|=== + +=== Anycast addressing + +Both clusters advertise the same anycast IP range (192.168.155.0/24) via BGP. When a client accesses an anycast IP: + +. The core router receives BGP advertisements from both TOR routers for the anycast range +. ECMP routing distributes traffic across both paths +. Requests reach either the west or east cluster based on the routing decision +. 
If one cluster becomes unavailable, BGP route withdrawal automatically redirects traffic to the remaining cluster + +[id="cluster-components-imbgp"] +== Cluster components + +=== West cluster (Hub) + +The west cluster acts as the management hub and includes: + +* *Red Hat Advanced Cluster Management* - Manages the east cluster as a spoke +* *HashiCorp Vault* - Centralized secrets management +* *External Secrets Operator* - Synchronizes secrets from Vault to Kubernetes +* *MetalLB* - Provides BGP-advertised load balancer services (ASN 65001) +* *Red Hat Service Interconnect (Skupper)* - Hosts the Skupper site with link access enabled +* *Hello-world application* - Frontend component of the demo application + +=== East cluster (Spoke) + +The east cluster is a managed spoke that includes: + +* *External Secrets Operator* - Retrieves secrets from the hub's Vault +* *MetalLB* - Provides BGP-advertised load balancer services (ASN 65002) +* *Red Hat Service Interconnect (Skupper)* - Connects back to the west cluster's Skupper site +* *Hello-world application* - Backend component of the demo application + +[id="service-interconnect-imbgp"] +== Red Hat Service Interconnect + +Red Hat Service Interconnect (based on Skupper) creates a virtual application network between the clusters: + +* The west cluster hosts a Skupper site with `linkAccess: default`, allowing other sites to connect +* The east cluster establishes a link to the west cluster using a pre-shared access token +* Services exposed through Skupper listeners become accessible across both clusters +* All traffic between sites is encrypted using mutual TLS + +The pattern uses the Skupper v2 API with the following components: + +* *Site* - Defines the Skupper installation in each namespace +* *Listener* - Exposes a service to the Skupper network +* *Connector* - Connects a local workload to a Skupper-exposed service +* *AccessGrant/AccessToken* - Manages secure connection between sites + +[id="gitops-structure-imbgp"] 
+== GitOps structure + +The pattern follows the Validated Patterns framework: + +---- +ingress-mesh-bgp/ +├── values-global.yaml # Global configuration +├── values-west.yaml # West (hub) cluster configuration +├── values-east.yaml # East (spoke) cluster configuration +├── charts/ +│ ├── all/ +│ │ ├── hello-world/ # Demo application +│ │ ├── metallb/ # MetalLB configuration +│ │ └── rhsi/ # Skupper configuration for west +│ └── east-site/ +│ └── rhsi-east/ # Skupper configuration for east +└── ansible/ + └── playbooks/ # Infrastructure automation +---- + +ArgoCD manages the deployment of all components, with Red Hat Advanced Cluster Management distributing configurations to the appropriate clusters. diff --git a/modules/imbgp-deploying.adoc b/modules/imbgp-deploying.adoc new file mode 100644 index 000000000..0a64cfdf8 --- /dev/null +++ b/modules/imbgp-deploying.adoc @@ -0,0 +1,220 @@ +:_content-type: PROCEDURE +:imagesdir: ../../../images + +[id="deploying-imbgp-pattern"] += Deploying the Ingress Mesh BGP pattern + +The Ingress Mesh BGP pattern demonstrates multi-cluster networking with BGP-based load balancing and service mesh connectivity. Deploying this pattern requires: + +* Two OpenShift clusters on AWS in the same region +* A BGP routing infrastructure (created by the pattern's Ansible automation) +* The Validated Patterns framework + +[NOTE] +==== +This pattern is designed specifically for AWS and requires the ability to create EC2 instances for the routing infrastructure. +==== + +.Prerequisites + +* Two OpenShift clusters on AWS: +** One cluster designated as "west" (hub) +** One cluster designated as "east" (spoke) +** To create OpenShift clusters, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console] and select *OpenShift -> Red Hat OpenShift Container Platform -> Create cluster*. 
+** See the https://github.com/validatedpatterns/ingress-mesh-bgp/tree/main/docs/install-configs[install-configs folder] in the repository for example `install-config.yaml` files.
+
+* Both clusters must have a dynamic `StorageClass` for `PersistentVolumes`. Verify this by running:
++
+[source,terminal]
+----
+$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
+----
+
+* AWS credentials configured for Ansible with permissions to:
+** Create and manage EC2 instances
+** Create and manage VPCs and subnets
+** Create and manage security groups
+
+* An SSH key pair for accessing the EC2 instances (default: `~/.ssh/id_rsa.pub`)
+
+* https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies], including:
+** `git`
+** `podman` or `docker`
+** the `oc` CLI
+** `ansible` with the required collections
+
+.Procedure
+
+. Fork the https://github.com/validatedpatterns/ingress-mesh-bgp[ingress-mesh-bgp] repository on GitHub.
+
+. Clone the forked repository:
++
+[source,terminal]
+----
+$ git clone git@github.com:<your-github-username>/ingress-mesh-bgp.git
+$ cd ingress-mesh-bgp
+----
+
+. Set up the upstream remote:
++
+[source,terminal]
+----
+$ git remote add -f upstream git@github.com:validatedpatterns/ingress-mesh-bgp.git
+----
+
+. Create a working branch:
++
+[source,terminal]
+----
+$ git checkout -b my-branch main
+$ git push -u origin my-branch
+----
+
+. Deploy the west (hub) and east (spoke) OpenShift clusters on AWS.
++
+[NOTE]
+====
+For simplicity, the pattern defaults to a single Availability Zone. See the example install-config files in `docs/install-configs/` for reference.
+====
+
+. Set the environment variables pointing to your cluster kubeconfig files:
++
+[source,terminal]
+----
+$ export WESTCONFIG=~/west-hub/auth/kubeconfig
+$ export EASTCONFIG=~/east-spoke/auth/kubeconfig
+----
+
+. Deploy the BGP routing infrastructure:
++
+[source,terminal]
+----
+$ make bgp-routing
+----
++
+This Ansible playbook creates:
+
+* A client VM for testing
+* A core router (FRR) with BGP ASN 64666
+* A west TOR router (FRR) with BGP ASN 64001
+* An east TOR router (FRR) with BGP ASN 64002
+* VPC peering between all components
++
+The command also generates `/tmp/launch_tmux.sh`, which starts a tmux session with SSH access to all EC2 instances.
+
+. If you used custom `install-config.yaml` files, update the MetalLB peer addresses (this is usually not needed):
+.. Edit `values-west.yaml` and update `metal.peerAddress` to match your west cluster's TOR router IP.
+.. Edit `values-east.yaml` and update `metal.peerAddress` to match your east cluster's TOR router IP.
+.. Commit and push the changes to your branch.
+
+. Install the pattern on the west (hub) cluster:
++
+[source,terminal]
+----
+$ ./pattern.sh make install
+----
++
+This command:
+
+* Installs the Validated Patterns Operator
+* Deploys ArgoCD and configures GitOps
+* Installs Red Hat Advanced Cluster Management
+* Deploys MetalLB with its BGP configuration
+* Deploys Red Hat Service Interconnect (Skupper)
+* Deploys the hello-world frontend application
+
+. Wait for all applications to synchronize in the Hub ArgoCD instance (accessible via the nine-box menu). All applications should show a "Healthy" and "Synced" status.
+
+. Import the east (spoke) cluster into the management hub:
++
+[source,terminal]
+----
+$ make import
+----
++
+This registers the east cluster with Red Hat Advanced Cluster Management, which then deploys the spoke components automatically.
+
+.Verification
+
+At this point, both clusters are fully configured.
+
+. Run the generated tmux launcher to access the EC2 VMs:
++
+[source,terminal]
+----
+$ /tmp/launch_tmux.sh
+----
+
+. On the client VM, verify connectivity to the anycast IP for the hello world application.
The domain `apps.mcg-hub.aws.validatedpatterns.io` in the response indicates that the chosen route went through the west TOR switch and on to the hub cluster:
++
+[source,terminal]
+----
+$ curl http://192.168.155.151/hello
+Hello World!
+
+Pod is running on Local Cluster Domain 'apps.mcg-hub.aws.validatedpatterns.io'
+
+Hub Cluster domain is 'apps.mcg-hub.aws.validatedpatterns.io'
+
+ + +---- + +. On the core router, verify ECMP routing is configured: ++ +[source,terminal] +---- +$ ip r +---- ++ +.Expected output ++ +[source,terminal] +---- +default via 192.168.8.1 dev enX0 proto dhcp src 192.168.8.100 metric 100 +192.168.8.0/24 dev enX0 proto kernel scope link src 192.168.8.100 metric 100 +192.168.12.0/24 dev enX1 proto kernel scope link src 192.168.12.200 metric 101 +192.168.16.0/24 dev enX2 proto kernel scope link src 192.168.16.200 metric 101 +192.168.155.150 nhid 46 proto bgp metric 20 + nexthop via 192.168.12.100 dev enX1 weight 1 + nexthop via 192.168.16.100 dev enX2 weight 1 +192.168.155.151 nhid 46 proto bgp metric 20 + nexthop via 192.168.12.100 dev enX1 weight 1 + nexthop via 192.168.16.100 dev enX2 weight 1 +---- ++ +The output shows that routes to the anycast IPs (192.168.155.150 and 192.168.155.151) have multiple next-hops, indicating ECMP is working. + +. Check the BGP peering status on any FRR router: ++ +[source,terminal] +---- +$ sudo vtysh -c "show bgp summary" +---- + +[id="cleanup-imbgp"] +== Cleaning up + +To destroy the routing infrastructure: + +[WARNING] +==== +Always clean up the BGP routing infrastructure *before* destroying the OpenShift clusters. +==== + +[source,terminal] +---- +$ make bgp-routing-cleanup +---- + +After the routing infrastructure is removed, you can safely destroy the OpenShift clusters. diff --git a/static/images/ingress-mesh-bgp/network.svg b/static/images/ingress-mesh-bgp/network.svg new file mode 100644 index 000000000..0a5a30dea --- /dev/null +++ b/static/images/ingress-mesh-bgp/network.svg @@ -0,0 +1,4 @@ + + + +
Core ASN 64666
Core ASN 6...
enX0 - .100
enX0 - .100
enX0 - .10
enX0 - .10
enX1 - .200
enX1 - .200
enX2 - .200
enX2 - .200
enX0 - .100
enX0 - .100
enX0 - .100
enX0 - .100
West TOR
ASN
64001
West TOR...
East TOR
ASN 64002
East TOR...
West OCP
ASN 65001
West OCP...
East OCP
ASN 65002
East OCP...
enX1
enX1
enX1
enX1
Client-Core VPC
Client-Core VPC
Core-West Tor VPC
Core-West Tor VPC
Core-East TOR VPC
Core-East TOR VPC
West Workers VPC
West Workers VPC
East Workers VPC
East Workers VPC
192.168.8.0/24
192.168.8.0/24
192.168.12.0/24
192.168.12.0/24
192.168.16.0/24
192.168.16.0/24
Anycast Range 192.168.155.0/24
Anycast Range 192.16...
10.1.0.0/16 
10.1.0.0/1...
10.0.0.0/16
10.0.0.0/16
Text is not SVG - cannot display
\ No newline at end of file From e29a70a757e97aba4e078d72fb24abcca96eba68 Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Wed, 25 Feb 2026 09:23:49 +0100 Subject: [PATCH 2/4] Address review feedback Addressing Ryan's comments in the PR --- modules/imbgp-about.adoc | 8 +++++--- modules/imbgp-architecture.adoc | 23 +++++++++++++++++++++++ 2 files changed, 28 insertions(+), 3 deletions(-) diff --git a/modules/imbgp-about.adoc b/modules/imbgp-about.adoc index d65d2c362..62c52ccec 100644 --- a/modules/imbgp-about.adoc +++ b/modules/imbgp-about.adoc @@ -22,6 +22,7 @@ Modern distributed applications often span multiple clusters for high availabili This pattern addresses these requirements by combining: * *MetalLB with BGP mode* - Provides load balancer services that advertise routes via BGP, enabling anycast addressing where the same IP is reachable through multiple clusters. +* *Gateway API* - Delivers L4 and L7 service routing via next generation Ingress, Load Balancing, and Service Mesh APIs. Provides GatewayClasses (infrastructure) and Gateways (operations) so application developers can create routes (HTTPRoute, GRPCRoute, etc.) for their services. * *Red Hat Service Interconnect (Skupper)* - Creates a secure application-layer virtual network connecting services across clusters without requiring VPN or special network configurations. * *FRRouting (FRR)* - Industry-standard routing software running on EC2 instances that acts as the BGP peering infrastructure, simulating top-of-rack (TOR) switches and core routers. 
@@ -30,7 +31,7 @@ This pattern addresses these requirements by combining: This pattern deploys a complete multi-cluster networking demonstration on AWS that includes: -* Two OpenShift clusters designated as "west" (hub) and "east" (spoke) +* Two OpenShift clusters, designated as "west" (ACM hub) and "east" (spoke) * A simulated routing infrastructure with FRR-based routers * MetalLB configured in BGP mode on both clusters * Red Hat Service Interconnect linking services between clusters @@ -45,6 +46,7 @@ The solution uses ECMP (Equal-Cost Multi-Path) routing to distribute traffic acr * *Secure cross-cluster communication* - Skupper encrypts all inter-cluster traffic using mutual TLS * *No network infrastructure changes* - Skupper works over existing networks without VPNs or firewall changes * *GitOps-driven deployment* - All components are deployed and managed through ArgoCD +* *Application owner autonomy* - App owners can describe their own routes on approved gateways and gatewayclasses without relying on network infrastructure teams [id="about-technology-imbgp"] == About the technology @@ -66,5 +68,5 @@ A load-balancer implementation for bare metal Kubernetes clusters, using standar https://frrouting.org/[FRRouting (FRR)]:: A free and open source Internet routing protocol suite for Linux and Unix platforms. It implements BGP, OSPF, RIP, and other protocols. In this pattern, FRR runs on EC2 instances to simulate datacenter routing infrastructure. -Kubernetes Gateway API:: -The next generation of Kubernetes Ingress, providing a more expressive and extensible API for managing traffic into and within a cluster. Red Hat Service Interconnect uses Gateway API for exposing services. +https://gateway-api.sigs.k8s.io/[Kubernetes Gateway API]:: +The next generation of Kubernetes Ingress, providing a more expressive and extensible API for managing traffic into and within a cluster. 
Gateway API is the intermediary layer between BGP and Skupper — traffic arriving via BGP passes through Gateway API for routing before reaching Skupper for inter-cluster communication. It provides GatewayClasses for infrastructure providers, Gateways for operations teams, and Routes (HTTPRoute, GRPCRoute, etc.) for application developers, enabling self-service routing without relying on network infrastructure teams. diff --git a/modules/imbgp-architecture.adoc b/modules/imbgp-architecture.adoc index c05b25b07..0551736eb 100644 --- a/modules/imbgp-architecture.adoc +++ b/modules/imbgp-architecture.adoc @@ -73,6 +73,7 @@ The west cluster acts as the management hub and includes: * *HashiCorp Vault* - Centralized secrets management * *External Secrets Operator* - Synchronizes secrets from Vault to Kubernetes * *MetalLB* - Provides BGP-advertised load balancer services (ASN 65001) +* *Gateway API* - Routes incoming traffic to appropriate services, providing the intermediary layer between BGP ingress and Skupper * *Red Hat Service Interconnect (Skupper)* - Hosts the Skupper site with link access enabled * *Hello-world application* - Frontend component of the demo application @@ -82,9 +83,31 @@ The east cluster is a managed spoke that includes: * *External Secrets Operator* - Retrieves secrets from the hub's Vault * *MetalLB* - Provides BGP-advertised load balancer services (ASN 65002) +* *Gateway API* - Routes incoming traffic to appropriate services, providing the intermediary layer between BGP ingress and Skupper * *Red Hat Service Interconnect (Skupper)* - Connects back to the west cluster's Skupper site * *Hello-world application* - Backend component of the demo application +[id="metallb-imbgp"] +== MetalLB + +MetalLB provides load balancer services on bare metal and cloud environments where cloud-native load balancers are not available or not suitable: + +* Each cluster runs MetalLB in BGP mode with a unique ASN (65001 for west, 65002 for east) +* MetalLB speakers 
on worker nodes peer with the local TOR router and advertise service IPs via BGP +* Both clusters advertise the same anycast IP range (192.168.155.0/24), enabling ECMP routing from the core +* When a cluster becomes unavailable, its BGP routes are withdrawn and traffic is automatically redirected to the remaining cluster + +[id="gateway-api-imbgp"] +== Gateway API + +Gateway API provides the L4/L7 routing layer between BGP ingress and application services: + +* *GatewayClass* - Defined by infrastructure providers to describe the type of gateway infrastructure available +* *Gateway* - Created by operations teams to instantiate a gateway from a GatewayClass, defining listeners and allowed routes +* *HTTPRoute / GRPCRoute* - Created by application developers to describe how traffic should be routed to their services + +Gateway API is the intermediary step between BGP and Skupper. Traffic arriving at a cluster via BGP-advertised anycast IPs passes through Gateway API for service routing before reaching Skupper for inter-cluster communication. This separation of concerns allows application developers to define their own routing rules on approved gateways without relying on network infrastructure teams. + [id="service-interconnect-imbgp"] == Red Hat Service Interconnect From c059df769b6efe13f8bd5f750ba954aa25965363 Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Wed, 25 Feb 2026 09:41:44 +0100 Subject: [PATCH 3/4] Add a blurb in the about section about Gateway API --- modules/imbgp-about.adoc | 3 +++ 1 file changed, 3 insertions(+) diff --git a/modules/imbgp-about.adoc b/modules/imbgp-about.adoc index 62c52ccec..515d4e058 100644 --- a/modules/imbgp-about.adoc +++ b/modules/imbgp-about.adoc @@ -9,6 +9,7 @@ Use case:: * Deploy multi-cluster applications with unified ingress using BGP-based load balancing. * Enable anycast IP addressing for seamless failover between OpenShift clusters. 
* Connect services across clusters using Red Hat Service Interconnect (Skupper) for secure east-west traffic. +* Leverage Kubernetes Gateway API to give application developers self-service control over service routing without depending on network infrastructure teams. * Demonstrate enterprise-grade BGP routing integration with Kubernetes/OpenShift environments. + [NOTE] @@ -26,6 +27,8 @@ This pattern addresses these requirements by combining: * *Red Hat Service Interconnect (Skupper)* - Creates a secure application-layer virtual network connecting services across clusters without requiring VPN or special network configurations. * *FRRouting (FRR)* - Industry-standard routing software running on EC2 instances that acts as the BGP peering infrastructure, simulating top-of-rack (TOR) switches and core routers. +Gateway API plays a central role in this architecture. It is the intermediary layer between BGP and Skupper: traffic arriving at a cluster via BGP-advertised anycast IPs is routed through Gateway API down to the appropriate Skupper site, where Skupper handles inter-cluster routing for sparse deployments or services not locally available. Gateway API also separates concerns across organizational roles — infrastructure teams define GatewayClasses, operations teams create Gateways, and application developers independently manage their own Routes (HTTPRoute, GRPCRoute, etc.) for both simple routing and more advanced mesh-like routing scenarios. 
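+
+To make the role separation concrete, the following is a hedged sketch of the kind of Gateway and HTTPRoute that an operations team and an application developer might create. The resource names, the GatewayClass name, and the backend service are illustrative assumptions, not the pattern's actual manifests.
+
+[source,yaml]
+----
+apiVersion: gateway.networking.k8s.io/v1
+kind: Gateway
+metadata:
+  name: ingress-gw                  # created by the operations team (illustrative)
+spec:
+  gatewayClassName: example-class   # defined by the infrastructure provider (assumed)
+  listeners:
+  - name: http
+    protocol: HTTP
+    port: 80
+---
+apiVersion: gateway.networking.k8s.io/v1
+kind: HTTPRoute
+metadata:
+  name: hello-world-route           # created by the application developer (illustrative)
+spec:
+  parentRefs:
+  - name: ingress-gw
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: /hello
+    backendRefs:
+    - name: hello-world             # backend service name assumed
+      port: 8080
+----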
+ [id="about-solution-imbgp"] == About the solution From 13ed30fdbaf30e35889cdc104391d94ebe606478 Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Fri, 27 Feb 2026 08:15:23 +0100 Subject: [PATCH 4/4] Add architecture explaining the client path --- modules/imbgp-architecture.adoc | 3 + .../ingress-mesh-bgp/ingress-path.drawio | 196 ++++++++++++++++++ .../images/ingress-mesh-bgp/ingress-path.svg | 4 + 3 files changed, 203 insertions(+) create mode 100644 static/images/ingress-mesh-bgp/ingress-path.drawio create mode 100644 static/images/ingress-mesh-bgp/ingress-path.svg diff --git a/modules/imbgp-architecture.adoc b/modules/imbgp-architecture.adoc index 0551736eb..88dfd0d35 100644 --- a/modules/imbgp-architecture.adoc +++ b/modules/imbgp-architecture.adoc @@ -57,6 +57,9 @@ The pattern deploys EC2 instances running FRRouting to create a simulated datace Both clusters advertise the same anycast IP range (192.168.155.0/24) via BGP. When a client accesses an anycast IP: +.Client path to services via anycast and BGP +image::ingress-mesh-bgp/ingress-path.svg[Client Ingress Path,700] + . The core router receives BGP advertisements from both TOR routers for the anycast range . ECMP routing distributes traffic across both paths . 
Requests reach either the west or east cluster based on the routing decision diff --git a/static/images/ingress-mesh-bgp/ingress-path.drawio b/static/images/ingress-mesh-bgp/ingress-path.drawio new file mode 100644 index 000000000..21980dcdc --- /dev/null +++ b/static/images/ingress-mesh-bgp/ingress-path.drawio @@ -0,0 +1,196 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/images/ingress-mesh-bgp/ingress-path.svg b/static/images/ingress-mesh-bgp/ingress-path.svg new file mode 100644 index 000000000..3003065bf --- /dev/null +++ b/static/images/ingress-mesh-bgp/ingress-path.svg @@ -0,0 +1,4 @@ + + + +
Core Network Fabric
Clients

West Cluster
East Cluster
TOR West
TOR East
Client requests foo.bar/app1
foo.bar maps to anycast IP 192.168.155.151
Ingress Route
foo.bar
Red Hat Service Interconnect
Service for /app1
Red Hat Service Interconnect
Client requests foo.bar/app2
foo.bar maps to anycast IP 192.168.155.151
Service for /app2
Ingress Route
foo.bar
Client requests foo.bar/app1
foo.bar maps to anycast IP 192.168.155.151
\ No newline at end of file