This guide provides instructions for creating backend service-based external passthrough Network Load Balancers that load-balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic. You can use such a configuration if you want to load-balance traffic that is using IP protocols other than TCP or UDP. Target pool-based external passthrough Network Load Balancers don't support this capability.
To configure an external passthrough Network Load Balancer for IP protocols other than TCP or UDP, you create a forwarding rule with protocol set to L3_DEFAULT. This forwarding rule points to a backend service with protocol set to UNSPECIFIED.
In this example, we use two external passthrough Network Load Balancers to distribute traffic across backend VMs in two zonal managed instance groups in the us-central1 region. Both load balancers receive traffic at the same external IP address.
One load balancer has a forwarding rule with protocol TCP and port 80, and the other load balancer has a forwarding rule with protocol L3_DEFAULT. TCP traffic arriving at the IP address on port 80 is handled by the TCP forwarding rule. All other traffic that does not match the TCP-specific forwarding rule is handled by the L3_DEFAULT forwarding rule.
This scenario distributes traffic across healthy instances. To support this, you create TCP health checks to ensure that traffic is sent only to healthy instances.
The external passthrough Network Load Balancer is a regional load balancer. All load balancer components must be in the same region.
Before you begin
Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud CLI overview. You can find commands related to load balancing in the API and gcloud reference.
If you haven't run the gcloud CLI previously, first run the gcloud init command to authenticate.
This guide assumes that you are familiar with bash.
Set up the network and subnets
The example on this page uses a custom mode VPC network named lb-network. You can use an auto mode VPC network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.
IPv6 traffic also requires a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, we set the subnet's ipv6-access-type parameter to EXTERNAL. This means new VMs on this subnet can be assigned both external IPv4 addresses and external IPv6 addresses. The forwarding rules can also be assigned both external IPv4 addresses and external IPv6 addresses.
The backends and the load balancer components used for this example are located in this region and subnet:
- Region: us-central1
- Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed-size (/64) IPv6 CIDR block.
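The subnet's primary range drives all backend VM addressing in this example. As a quick, self-contained illustration of what membership in 10.1.2.0/24 means, here is a hypothetical bash helper (not a required step in this guide):

```shell
# Hypothetical helper: test whether an IPv4 address is inside 10.1.2.0/24.
in_subnet() {
  local ip=$1 net=10.1.2.0 bits=24
  ip_to_int() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
  }
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_subnet 10.1.2.57 && echo "10.1.2.57 is in 10.1.2.0/24"
in_subnet 10.1.3.9 || echo "10.1.3.9 is not in 10.1.2.0/24"
```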
To create the example network and subnet, follow these steps.
Console
To support both IPv4 and IPv6 traffic, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
Enter a Name of lb-network.
In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, configure the following fields and click Done:
- Name: lb-subnet
- Region: us-central1
- IP stack type: IPv4 and IPv6 (dual-stack)
- IPv4 range: 10.1.2.0/24
  Although you can configure an IPv4 range of addresses for the subnet, you cannot choose the range of the IPv6 addresses for the subnet. Google provides a fixed-size (/64) IPv6 CIDR block.
- IPv6 access type: External
Click Create.
To support IPv4 traffic only, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
Enter a Name of lb-network.
In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, configure the following fields and click Done:
- Name: lb-subnet
- Region: us-central1
- IP stack type: IPv4 (single-stack)
- IPv4 range: 10.1.2.0/24
Click Create.
gcloud
Create the custom mode VPC network:
gcloud compute networks create lb-network \
    --subnet-mode=custom
Within the lb-network network, create a subnet for backends in the us-central1 region.
For both IPv4 and IPv6 traffic, use the following command to create a dual-stack subnet:
gcloud compute networks subnets create lb-subnet \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=EXTERNAL \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-central1
For IPv4 traffic only, use the following command:
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-central1
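Optionally, you can verify the resulting subnet configuration. This is a sketch of a verification step, not part of the required setup; it assumes the gcloud resource fields stackType and ipv6AccessType:

```shell
# Optional verification: print the subnet's stack type and IPv6 access type.
# For the dual-stack subnet, expect IPV4_IPV6 and EXTERNAL.
gcloud compute networks subnets describe lb-subnet \
    --region=us-central1 \
    --format="value(stackType, ipv6AccessType)"
```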
Create the zonal managed instance groups
For this load balancing scenario, you create two Compute Engine zonal managed instance groups and install an Apache web server on each instance.
To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, EXTERNAL) from the subnet. For more details about IPv6 requirements, see the External passthrough Network Load Balancer overview: Forwarding rules.
To use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
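If you take the update route for existing VMs, the command looks like the following sketch, where VM_NAME and the zone are placeholders for your own values:

```shell
# Sketch: convert an existing VM's default network interface to dual-stack.
# VM_NAME is a placeholder; the VM's subnet must already be dual-stack.
gcloud compute instances network-interfaces update VM_NAME \
    --zone=us-central1-a \
    --stack-type=IPV4_IPV6
```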
Instances that participate as backend VMs for external passthrough Network Load Balancers must run the appropriate Linux guest environment, Windows guest environment, or other processes that provide equivalent capability.
Create the instance group for TCP traffic on port 80
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter ig-us-template-tcp-80.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Expand the Advanced options section.
Expand the Management section, and then copy the following script into the Startup script field.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Expand the Networking section, and then specify the following:
- For Network tags, add network-lb-tcp-80.
- For Network interfaces, click the default interface and configure the following fields:
  - Network: lb-network
  - Subnetwork: lb-subnet
Click Create.
Create a managed instance group. Go to the Instance groups page in the Google Cloud console.
- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For the Name, enter ig-us-tcp-80.
- Under Location, select Single zone.
- For the Region, select us-central1.
- For the Zone, select us-central1-a.
- Under Instance template, select ig-us-template-tcp-80.
- Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
Click Create.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.
To handle both IPv4 and IPv6 traffic, use the following command:
gcloud compute instance-templates create ig-us-template-tcp-80 \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=network-lb-tcp-80 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
Or, if you want to handle IPv4 traffic only, use the following command:
gcloud compute instance-templates create ig-us-template-tcp-80 \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=network-lb-tcp-80 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.
gcloud compute instance-groups managed create ig-us-tcp-80 \
    --zone us-central1-a \
    --size 2 \
    --template ig-us-template-tcp-80
Create the instance group for TCP on port 8080, UDP, ESP, and ICMP traffic
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For the Name, enter ig-us-template-l3-default.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Expand the Advanced options section.
Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
systemctl restart apache2
Expand the Networking section, and then specify the following:
- For Network tags, add network-lb-l3-default.
- For Network interfaces, click the default interface and configure the following fields:
  - Network: lb-network
  - Subnetwork: lb-subnet
Click Create.
Create a managed instance group. Go to the Instance groups page in the Google Cloud console.
- Click Create instance group.
- Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For the Name, enter ig-us-l3-default.
- Under Location, select Single zone.
- For the Region, select us-central1.
- For the Zone, select us-central1-c.
- Under Instance template, select ig-us-template-l3-default.
- Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
Click Create.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command. The startup script also configures the Apache server to listen on port 8080 instead of port 80.
To handle both IPv4 and IPv6 traffic, use the following command:
gcloud compute instance-templates create ig-us-template-l3-default \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=network-lb-l3-default \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
      systemctl restart apache2'
Or, if you want to handle IPv4 traffic only, use the following command.
gcloud compute instance-templates create ig-us-template-l3-default \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=network-lb-l3-default \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
      systemctl restart apache2'
Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.
gcloud compute instance-groups managed create ig-us-l3-default \
    --zone us-central1-c \
    --size 2 \
    --template ig-us-template-l3-default
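The only difference between this startup script and the one used for the first instance group is the sed line that rewrites Apache's ports.conf. The following self-contained sketch reproduces that substitution on a sample file so you can see its effect:

```shell
# Reproduce the startup script's port change on a sample Apache ports.conf.
# /tmp/ports.conf is a stand-in for /etc/apache2/ports.conf on a backend VM.
printf 'Listen 80\n' > /tmp/ports.conf
sed -i -re 's/^Listen 80$/Listen 8080/g' /tmp/ports.conf
cat /tmp/ports.conf
```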
Configure firewall rules
Create the following firewall rules:
- Firewall rules that allow external TCP traffic to reach backend instances in the ig-us-tcp-80 instance group on port 80 (using target tag network-lb-tcp-80). Create separate firewall rules to allow IPv4 and IPv6 traffic.
- Firewall rules that allow other external traffic (TCP on port 8080, UDP, ESP, and ICMP) to reach backend instances in the ig-us-l3-default instance group (using target tag network-lb-l3-default). Create separate firewall rules to allow IPv4 and IPv6 traffic.
This example creates firewall rules that allow traffic from all source ranges to reach your backend instances on the configured ports. If you want to create separate firewall rules specifically for the health check probes, use the source IP address ranges documented in the Health checks overview: Probe IP ranges and firewall rules.
Console
- In the Google Cloud console, go to the Firewall policies page.
- To allow IPv4 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule:
  - Click Create firewall rule.
  - Enter a Name of allow-network-lb-tcp-80-ipv4.
  - Select the Network that the firewall rule applies to (Default).
  - Under Targets, select Specified target tags.
  - In the Target tags field, enter network-lb-tcp-80.
  - Set Source filter to IPv4 ranges.
  - Set the Source IPv4 ranges to 0.0.0.0/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
  - Under Protocols and ports, select Specified protocols and ports. Then select the TCP checkbox and enter 80.
  - Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
- To allow IPv4 UDP, ESP, and ICMP traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule:
  - Click Create firewall rule.
  - Enter a Name of allow-network-lb-l3-default-ipv4.
  - Select the Network that the firewall rule applies to (Default).
  - Under Targets, select Specified target tags.
  - In the Target tags field, enter network-lb-l3-default.
  - Set Source filter to IPv4 ranges.
  - Set the Source IPv4 ranges to 0.0.0.0/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
  - Under Protocols and ports, select Specified protocols and ports:
    - Select the TCP checkbox and enter 8080.
    - Select the UDP checkbox.
    - Select the Other checkbox and enter esp, icmp.
  - Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
- To allow IPv6 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule:
  - Click Create firewall rule.
  - Enter a Name of allow-network-lb-tcp-80-ipv6.
  - Select the Network that the firewall rule applies to (Default).
  - Under Targets, select Specified target tags.
  - In the Target tags field, enter network-lb-tcp-80.
  - Set Source filter to IPv6 ranges.
  - Set the Source IPv6 ranges to ::/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
  - Under Protocols and ports, select Specified protocols and ports. Select the TCP checkbox and enter 80.
  - Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
- To allow IPv6 UDP, ESP, and ICMPv6 traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.
  - Click Create firewall rule.
  - Enter a Name of allow-network-lb-l3-default-ipv6.
  - Select the Network that the firewall rule applies to (Default).
  - Under Targets, select Specified target tags.
  - In the Target tags field, enter network-lb-l3-default.
  - Set Source filter to IPv6 ranges.
  - Set the Source IPv6 ranges to ::/0, which allows traffic from any source. This allows both external traffic and health check probes to reach the backend instances.
  - Under Protocols and ports, select Specified protocols and ports:
    - Select the TCP checkbox and enter 8080.
    - Select the UDP checkbox.
    - Select the Other checkbox and enter esp, 58.
  - Click Create. It might take a moment for the Console to display the new firewall rule, or you might have to click Refresh to see the rule.
gcloud
To allow IPv4 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.
gcloud compute firewall-rules create allow-network-lb-tcp-80-ipv4 \
    --network=lb-network \
    --target-tags network-lb-tcp-80 \
    --allow tcp:80 \
    --source-ranges=0.0.0.0/0
To allow IPv4 UDP, ESP, and ICMP traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.
gcloud compute firewall-rules create allow-network-lb-l3-default-ipv4 \
    --network=lb-network \
    --target-tags network-lb-l3-default \
    --allow tcp:8080,udp,esp,icmp \
    --source-ranges=0.0.0.0/0
To allow IPv6 TCP traffic to reach backends in the ig-us-tcp-80 instance group, create the following firewall rule.
gcloud compute firewall-rules create allow-network-lb-tcp-80-ipv6 \
    --network=lb-network \
    --target-tags network-lb-tcp-80 \
    --allow tcp:80 \
    --source-ranges=::/0
To allow IPv6 UDP, ESP, and ICMPv6 traffic to reach backends in the ig-us-l3-default instance group, create the following firewall rule. This firewall rule also allows TCP health check probes to reach the instances on port 8080.
gcloud compute firewall-rules create allow-network-lb-l3-default-ipv6 \
    --network=lb-network \
    --target-tags network-lb-l3-default \
    --allow tcp:8080,udp,esp,58 \
    --source-ranges=::/0
Configure the load balancers
Next, set up two load balancers. Configure both load balancers to use the same external IP address for their forwarding rules: one load balancer handles TCP traffic on port 80, and the other handles TCP traffic on port 8080 as well as UDP, ESP, and ICMP traffic.
When you configure a load balancer, your backend VM instances receive packets that are destined for the static external IP address you configure. If you are using an image provided by Compute Engine, your instances are automatically configured to handle this IP address. If you are using any other image, you must configure this address as an alias on eth0 or as a loopback on each instance.
To set up the two load balancers, use the following instructions.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- Click Configure.
Basic configuration
- In the Name field, enter the name backend-service-tcp-80 for the new load balancer.
- In the Region list, select us-central1.
Backend configuration
- Click Backend configuration.
- On the Backend configuration page, make the following changes:
- In the New Backend section, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
- In the Instance group list, select ig-us-tcp-80, and then click Done.
- In the Health check list, click Create a health check, and then enter the following information:
  - Name: tcp-health-check-80
  - Protocol: TCP
  - Port: 80
- Click Save.
- Verify that there is a blue checkmark next to Backend configuration before continuing.
Frontend configuration
- Click Frontend configuration.
- In the Name field, enter forwarding-rule-tcp-80.
- To handle IPv4 traffic, use the following steps:
  - For IP version, select IPv4.
  - In the IP address list, select Create IP address.
  - In the Name field, enter network-lb-ipv4.
  - Click Reserve.
  - For Ports, choose Single. In the Port number field, enter 80.
  - Click Done.
To handle IPv6 traffic, use the following steps:
- For IP version, select IPv6.
- For Subnetwork, select lb-subnet.
- In the IPv6 range list, select Create IP address.
- In the Name field, enter network-lb-ipv6.
- Click Reserve.
- For Ports, choose Single. In the Port number field, enter 80.
- Click Done.
A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
Click Create.
On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.
Create the second load balancer
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- Click Configure.
Basic configuration
- In the Name field, enter the name backend-service-l3-default for the new load balancer.
- In the Region list, select us-central1.
Backend configuration
- Click Backend configuration.
- On the Backend configuration page, make the following changes:
- In the New Backend section, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
- In the Instance group list, select ig-us-l3-default, and then click Done.
- In the Protocols list, select L3 (Multiple protocols).
- In the Health check list, click Create a health check, and then enter the following information:
  - Name: tcp-health-check-8080
  - Protocol: TCP
  - Port: 8080
- Click Save.
- Verify that there is a blue checkmark next to Backend configuration before continuing.
Frontend configuration
- Click Frontend configuration.
- In the Name field, enter forwarding-rule-l3-default.
- To handle IPv4 traffic, use the following steps:
  - For IP version, select IPv4.
  - In the IP address list, select Create IP address.
  - In the Name field, enter network-lb-ipv4.
  - Click Reserve.
  - In the Protocol list, select L3 (Multiple protocols).
  - For Ports, choose All.
  - Click Done.
To handle IPv6 traffic, use the following steps:
- For IP version, select IPv6.
- For Subnetwork, select lb-subnet.
- In the IPv6 range list, select Create IP address.
- In the Name field, enter network-lb-ipv6.
- Click Reserve.
- In the Protocol field, select L3 (Multiple protocols).
- For Ports, select All.
- Click Done.
A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
Click Create.
On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.
gcloud
Reserve a static external IP address.
For IPv4 traffic: Create a static external IP address for your load balancers.
gcloud compute addresses create network-lb-ipv4 \
    --region us-central1
For IPv6 traffic: Create a static external IPv6 address range for your load balancers. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.
gcloud compute addresses create network-lb-ipv6 \
    --region us-central1 \
    --subnet lb-subnet \
    --ip-version IPV6 \
    --endpoint-type NETLB
Create a TCP health check for port 80. This health check is used to verify the health of backends in the ig-us-tcp-80 instance group.
gcloud compute health-checks create tcp tcp-health-check-80 \
    --region us-central1 \
    --port 80
Create a TCP health check for port 8080. This health check is used to verify the health of backends in the ig-us-l3-default instance group.
gcloud compute health-checks create tcp tcp-health-check-8080 \
    --region us-central1 \
    --port 8080
Create the first load balancer for TCP traffic on port 80.
Create a backend service with the protocol set to TCP.
gcloud compute backend-services create backend-service-tcp-80 \
    --protocol TCP \
    --health-checks tcp-health-check-80 \
    --health-checks-region us-central1 \
    --region us-central1
Add the backend instance group to the backend service.
gcloud compute backend-services add-backend backend-service-tcp-80 \
    --instance-group ig-us-tcp-80 \
    --instance-group-zone us-central1-a \
    --region us-central1
For IPv4 traffic: Create a forwarding rule to route incoming TCP traffic on port 80 to the backend service. TCP is the default forwarding rule protocol and does not need to be set explicitly. Use the IP address reserved in step 1 as the static external IP address of the load balancer.
gcloud compute forwarding-rules create forwarding-rule-tcp-80 \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports 80 \
    --address network-lb-ipv4 \
    --backend-service backend-service-tcp-80
For IPv6 traffic: Create a forwarding rule to route incoming TCP traffic on port 80 to the backend service. TCP is the default forwarding rule protocol and does not need to be set explicitly. Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.
gcloud compute forwarding-rules create forwarding-rule-tcp-80 \
    --load-balancing-scheme external \
    --region us-central1 \
    --network-tier PREMIUM \
    --ip-version IPV6 \
    --subnet lb-subnet \
    --address network-lb-ipv6 \
    --ports 80 \
    --backend-service backend-service-tcp-80
Create the second load balancer for TCP on port 8080, UDP, ESP, and ICMP traffic.
Create a backend service with the protocol set to UNSPECIFIED.
gcloud compute backend-services create backend-service-l3-default \
    --protocol UNSPECIFIED \
    --health-checks tcp-health-check-8080 \
    --health-checks-region us-central1 \
    --region us-central1
Add the backend instance group to the backend service.
gcloud compute backend-services add-backend backend-service-l3-default \
    --instance-group ig-us-l3-default \
    --instance-group-zone us-central1-c \
    --region us-central1
For IPv4 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all remaining supported IP protocol traffic (TCP on port 8080, UDP, ESP, and ICMP). All ports must be configured with L3_DEFAULT forwarding rules. Use the same external IPv4 address that you used for the previous load balancer.
gcloud compute forwarding-rules create forwarding-rule-l3-default \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports all \
    --ip-protocol L3_DEFAULT \
    --address network-lb-ipv4 \
    --backend-service backend-service-l3-default
For IPv6 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all remaining supported IP protocol traffic (TCP on port 8080, UDP, ESP, and ICMP). All ports must be configured with L3_DEFAULT forwarding rules. Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.
gcloud compute forwarding-rules create forwarding-rule-l3-default \
    --load-balancing-scheme external \
    --region us-central1 \
    --network-tier PREMIUM \
    --ip-version IPV6 \
    --subnet lb-subnet \
    --address network-lb-ipv6 \
    --ports all \
    --ip-protocol L3_DEFAULT \
    --backend-service backend-service-l3-default
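Optionally, you can list both forwarding rules side by side to confirm that they share one IPv4 address but differ in protocol. This is a verification sketch, not a required step:

```shell
# Optional verification: both rules should show the same IPAddress,
# with IPProtocol TCP for one rule and L3_DEFAULT for the other.
gcloud compute forwarding-rules list \
    --filter="region:us-central1" \
    --format="table(name, IPAddress, IPProtocol, portRange)"
```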
Test the load balancer
Now that the load balancing service is configured, you can start sending traffic to the load balancer's external IP address and watch traffic get distributed to the backend instances.
Look up the load balancer's external IP address
Console
- On the Advanced load balancing page, go to the Forwarding Rules tab.
- Locate the forwarding rules used by the load balancer.
- In the IP Address column, note the external IP address listed for each IPv4 and IPv6 forwarding rule.
gcloud: IPv4
Enter the following command to view the external IP address of the forwarding rule used by the load balancer.
gcloud compute forwarding-rules describe forwarding-rule-tcp-80 \
    --region us-central1
This example uses the same IP address for both IPv4 forwarding rules, so using forwarding-rule-l3-default will also work.
gcloud: IPv6
Enter the following command to view the external IPv6 address of the forwarding-rule-tcp-80 forwarding rule used by the load balancer.
gcloud compute forwarding-rules describe forwarding-rule-tcp-80 \
    --region us-central1
This example uses the same IP address for both IPv6 forwarding rules, so using forwarding-rule-l3-default will also work.
Send traffic to the load balancer
This procedure sends external traffic to the load balancer. Run the following tests to ensure that TCP traffic on port 80 is being load-balanced by the ig-us-tcp-80 instance group while all other traffic (TCP on port 8080, UDP, ESP, and ICMP) is being handled by the ig-us-l3-default instance group.
Verifying behavior with TCP requests on port 80
Make web requests (over TCP on port 80) to the load balancer using curl to contact its IP address.
From clients with IPv4 connectivity, run the following command:
$ while true; do curl -m1 IP_ADDRESS; done
From clients with IPv6 connectivity, run the following command:
$ while true; do curl -m1 http://IPV6_ADDRESS; done
For example, if the assigned IPv6 address is [2001:db8:1:1:1:1:1:1/96], the command should look like:
$ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]; done
Note the text returned by the curl command. The name of the backend VM generating the response is displayed in that text; for example: Page served from: VM_NAME. Responses should come from instances in the ig-us-tcp-80 instance group only.
If your response is initially unsuccessful, you might need to wait approximately 30 seconds for the configuration to be fully loaded and for your instances to be marked healthy before trying again.
Verifying behavior with TCP requests on port 8080
Make web requests (over TCP on port 8080) to the load balancer using curl to contact its IP address.

From clients with IPv4 connectivity, run the following command:

$ while true; do curl -m1 IPV4_ADDRESS:8080; done

From clients with IPv6 connectivity, run the following command:

$ while true; do curl -m1 http://[IPV6_ADDRESS]:8080; done
For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command should look like:

$ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]:8080; done
Note the text returned by the curl
command. Responses should come from
instances in the ig-us-l3-default
instance group only.
This shows that any traffic sent to the load balancer's IP address at port
8080
is being handled by backends in the ig-us-l3-default
instance
group only.
Verifying behavior with ICMP requests
To verify behavior with ICMP traffic, you capture output from the tcpdump command to confirm that only backend VMs in the ig-us-l3-default instance group are handling ICMP requests sent to the load balancer.
SSH to the backend VMs.
In the Google Cloud console, go to the VM instances page.
In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
Run the following command to use tcpdump to start listening for ICMP traffic.

sudo tcpdump icmp -w ~/icmpcapture.pcap -s0 -c 10000

The command prints a line like the following and keeps capturing:

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
Leave the SSH window open.
Repeat steps 1 and 2 for all four backend VMs.
Make ICMP requests to the load balancer.
To test the IPv4 responses, use ping to contact the load balancer's IPv4 address.

ping IPV4_ADDRESS

To test the IPv6 responses, use ping6 to contact the load balancer's IPv6 address.

ping6 IPV6_ADDRESS
For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command should look like:

ping6 2001:db8:1:1:1:1:1:1
Go back to each VM's open SSH window and stop the tcpdump capture command. You can use Ctrl+C to do this.

For each VM, check the output of the tcpdump command in the icmpcapture.pcap file.

sudo tcpdump -r ~/icmpcapture.pcap -n
For backend VMs in the ig-us-l3-default instance group, you should see entries like the following:

reading from file /home/[user-directory]/icmpcapture.pcap, link-type EN10MB (Ethernet)
22:13:07.814486 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 1, length 64
22:13:07.814513 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 1, length 64
22:13:08.816150 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 2, length 64
22:13:08.816175 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 2, length 64
22:13:09.817536 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 3, length 64
22:13:09.817560 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 3, length 64
...
For backend VMs in the ig-us-tcp-80 instance group, you should see that no packets were captured, and the output should contain only the header line:

reading from file /home/[user-directory]/icmpcapture.pcap, link-type EN10MB (Ethernet)
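To turn the capture inspection into a quick pass/fail check, you can count echo requests and replies in the tcpdump listing. The following is a minimal sketch over a hypothetical two-line capture excerpt; in practice, pipe the output of `sudo tcpdump -r ~/icmpcapture.pcap -n` into the same grep commands. On the ig-us-l3-default backends the request count should be nonzero; on the ig-us-tcp-80 backends it should be zero.

```shell
# Hypothetical excerpt of `tcpdump -r ~/icmpcapture.pcap -n` output; in
# practice, substitute the real command's output.
capture='22:13:07.814486 IP 35.230.115.24 > 35.193.84.93: ICMP echo request, id 1995, seq 1, length 64
22:13:07.814513 IP 35.193.84.93 > 35.230.115.24: ICMP echo reply, id 1995, seq 1, length 64'

# Count requests and replies; a zero request count means this backend
# received no ICMP traffic from the load balancer.
requests=$(printf '%s\n' "$capture" | grep -c 'ICMP echo request')
replies=$(printf '%s\n' "$capture" | grep -c 'ICMP echo reply')
echo "requests=$requests replies=$replies"
```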
Additional configuration options
Create an IPv6 forwarding rule with BYOIP
The load balancer created in the previous steps has forwarding rules whose IP version is IPv4 or IPv6. This section provides instructions for creating an IPv6 forwarding rule with bring your own IP (BYOIP) addresses.
Bring your own IP addresses lets you provision and use your own public IPv6 addresses for Google Cloud resources. For more information, see Bring your own IP addresses.
Before you start configuring an IPv6 forwarding rule with BYOIP addresses, you must complete the following steps:
- Create a public advertised IPv6 prefix
- Create public delegated prefixes
- Create IPv6 sub-prefixes
- Announce the prefix
To create a new forwarding rule, follow these steps:
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit.
- Click Frontend configuration.
- Click Add frontend IP and port.
- In the New Frontend IP and port section, specify the following:
- The Protocol is TCP.
- In the IP version field, select IPv6.
- In the Source of IPv6 range field, select BYOIP.
- In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
- In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
- In the Ports field, enter a port number.
- Click Done.
- Click Add frontend IP and port.
- In the New Frontend IP and port section, specify the following:
- The Protocol is L3 (Multiple protocols).
- In the IP version field, select IPv6.
- In the Source of IPv6 range field, select BYOIP.
- In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
- In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
- In the Ports field, select All.
- Click Done.
- Click Update.
Google Cloud CLI
Create the forwarding rule by using the gcloud compute forwarding-rules create command:

gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol L3_DEFAULT \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME
Create the forwarding rule by using the gcloud compute forwarding-rules create command:

gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol PROTOCOL \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME
Replace the following:
- FWD_RULE_NAME: the name of the forwarding rule
- REGION_A: the region for the forwarding rule
- IPV6_CIDR_RANGE: the IPv6 address range that the forwarding rule serves. The IPv6 address range must adhere to the IPv6 sub-prefix specifications.
- BACKEND_SERVICE: the name of the backend service
- PDP_NAME: the name of the public delegated prefix. The public delegated prefix must be a sub-prefix in EXTERNAL_IPV6_FORWARDING_RULE_CREATION mode.
What's next
- To configure an external passthrough Network Load Balancer with zonal NEG backends that let you forward packets to non-nic0 network interfaces of VM instances, see Set up an external passthrough Network Load Balancer with zonal NEGs.
- For information on how external passthrough Network Load Balancers work with backend services, see Backend service-based external passthrough Network Load Balancer overview.
- To learn how to transition an external passthrough Network Load Balancer from a target pool backend to a regional backend service, see Migrate external passthrough Network Load Balancers from target pools to backend services.
- To configure advanced network DDoS protection for an external passthrough Network Load Balancer by using Google Cloud Armor, see Configure advanced network DDoS protection.
- To delete resources, see Cleaning up the load balancer setup.