Deploy a Server Load Balancer (SLB) with Terraform
This blog post demonstrates how a Server Load Balancer (SLB) configuration in front of the EmployeeDB application can be deployed automatically with Terraform. This functionality is typically used by application and DevOps teams as part of a Continuous Integration / Continuous Delivery (CI/CD) pipeline to automatically deploy an application into the Dev, Test or Production stage.

The picture above shows an overview of the demo setup and the systems involved.
Demo Setup
The picture below shows the demo setup in detail, with all involved systems and IP addresses.

Verify the EmployeeDB Application deployment
In preparation for this demo, the EmployeeDB application has already been deployed on the backend servers employeedb1, employeedb2 and employeedb3, along with the PostgreSQL database employeedb-sql. Let’s verify that they are up and running.

As you can see in the picture above, each of the three deployments can be reached individually by its IP address on HTTP port 8080. Be aware that the backend traffic in this configuration is not TLS/SSL encrypted, as we are doing TLS/SSL offloading on the FortiADC to save resources on the backend servers.
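If you want to repeat this check from the command line, the Spring Boot Actuator health endpoint used later in this post can also be queried on each backend directly:

=> curl http://10.1.1.211:8080/actuator/health
=> curl http://10.1.1.212:8080/actuator/health
=> curl http://10.1.1.213:8080/actuator/health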
Protect Secrets in a Terraform Secrets File (secrets.tfvars)
It is best practice to keep all Terraform configuration files free of critical information such as user credentials, certificates and certificate keys, as these files are typically stored in a version control system such as Git where everyone is able to see them. Therefore, we are using a secrets file (secrets.tfvars) in the user’s home directory, readable only by its owner, to protect secrets.
=> cat /home/fortinet/.terraform/secrets.tfvars
----------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_hostname = "10.2.1.3"
fortiadc_token    = "7dfbe8d9128551908acf6e860f6e9b40"
----------------------------------------------------------------------------------------------------------------------------------------------------------
Restrict the file permissions so that only the owner can access the file:
=> chmod 600 $HOME/.terraform/secrets.tfvars
=> ls -la $HOME/.terraform/secrets.tfvars
----------------------------------------------------------------------------------------------------------------------------------------------------------
-rw------- 1 fortinet fortinet 83 Dec  1 19:27 /home/fortinet/.terraform/secrets.tfvars
----------------------------------------------------------------------------------------------------------------------------------------------------------
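As an alternative to a secrets file, Terraform also reads variables from environment variables prefixed with TF_VAR_, which avoids writing secrets to disk at all. A hypothetical example with a placeholder token:

=> export TF_VAR_fortiadc_hostname="10.2.1.3"
=> export TF_VAR_fortiadc_token="<your-api-token>"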
Create the Terraform Configuration Files
The Terraform configuration files are written in HCL (HashiCorp Configuration Language) and define the target infrastructure. Combined, all .tf files form the complete configuration.
=> cat /home/fortinet/FortiADC_slb_employeedb/provider_variables.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
#############################################
# FortiADC provider variables
#############################################

variable "fortiadc_hostname" {
  description = "FortiADC management IP or hostname"
  type        = string
}

variable "fortiadc_token" {
  description = "FortiADC API token"
  type        = string
  sensitive   = true
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
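The provider configuration itself (typically in a main.tf, which is not listed here) consumes these variables. Based on the fortinetdev/fortiadc provider referenced in the init output further below, it presumably looks similar to this sketch – treat the exact argument names as an assumption:

----------------------------------------------------------------------------------------------------------------------------------------------------------
# main.tf (sketch, not part of the listings in this post)
terraform {
  required_providers {
    fortiadc = {
      source  = "fortinetdev/fortiadc"
      version = "1.2.0"
    }
  }
}

provider "fortiadc" {
  hostname = var.fortiadc_hostname   # e.g. 10.2.1.3
  token    = var.fortiadc_token      # API token from secrets.tfvars
}
----------------------------------------------------------------------------------------------------------------------------------------------------------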
The Terraform variables file variables.tf contains all the definitions, names, IP addresses, etc. required for the configuration. The same Terraform configuration files could be reused with a different variables.tf for another application.
=> cat /home/fortinet/FortiADC_slb_employeedb/variables.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
#############################################
# EmployeeDB SSL Load Balancer – Variables
#############################################

# -----------------------------
# FortiADC connection (already used by main.tf)
# -----------------------------
# variable "fortiadc_hostname" { ... }   <-- keep your existing ones
# variable "fortiadc_token" { ... }

# -----------------------------
# Real servers (from Ansible real_servers)
# -----------------------------
variable "real_servers" {
  description = "EmployeeDB real servers behind FortiADC"
  type = map(object({
    ip     = string
    port   = number
    status = string
    weight = number
    id     = number
  }))

  default = {
    rs_employeedb1 = {
      ip     = "10.1.1.211"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 1
    }
    rs_employeedb2 = {
      ip     = "10.1.1.212"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 2
    }
    rs_employeedb3 = {
      ip     = "10.1.1.213"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 3
    }
  }
}

# -----------------------------
# Pool / VS configuration
# -----------------------------
variable "pool_name" {
  description = "Real server pool name (Ansible: pool_name)"
  type        = string
  default     = "employeedb"
}

variable "vs_name" {
  description = "Virtual server name (Ansible: virtual_server_name)"
  type        = string
  default     = "ws-employeedb-fad-vs"
}

variable "vs_address" {
  description = "Virtual server IP (Ansible: virtual_server_ip)"
  type        = string
  default     = "10.2.1.115"
}

variable "vs_interface" {
  description = "Virtual server interface (Ansible: virtual_server_interface)"
  type        = string
  default     = "port1"
}

variable "vs_port" {
  description = "Virtual server port (Ansible: virtual_server_port)"
  type        = number
  default     = 443
}

# -----------------------------
# Health check (Ansible: LBHC_HTTP_200)
# We assume LBHC_HTTP_200 already exists on the ADC.
# -----------------------------
variable "health_check_list" {
  description = "Health check list attached to the pool"
  type        = string
  default     = "LBHC_HTTP_200"
}

# -----------------------------
# Certificate + SSL profile
# -----------------------------
variable "cert_name" {
  description = "Name of the CertKey object (Ansible: employeedb-ssl)"
  type        = string
  default     = "employeedb-ssl"
}

variable "cert_path" {
  description = "Path to certificate file (Ansible: ssl_cert)"
  type        = string
  default     = "/home/fortinet/cert/fortidemo/k3s-apps-external.crt"
}

variable "key_path" {
  description = "Path to key file (Ansible: ssl_key)"
  type        = string
  default     = "/home/fortinet/cert/fortidemo/k3s-apps-external.key"
}

variable "cert_group" {
  description = "Local certificate group name (Ansible: local_cert_group)"
  type        = string
  default     = "EMPLOYEEDB_CERT_GROUP"
}

variable "client_ssl_profile" {
  description = "Client SSL profile name (Ansible: client_ssl_profile)"
  type        = string
  default     = "LB_CLIENT_SSL_EMPLOYEEDB"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
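Note that only values actually referenced via var.* in the resource files take effect; in the listings below, for example, the virtual server uses var.vs_name and var.vs_address, while the pool name is hard-coded. With that in mind, reusing the same .tf files for a second application could be as simple as overriding the defaults with an application-specific tfvars file (hypothetical values) passed via an additional -var-file argument:

----------------------------------------------------------------------------------------------------------------------------------------------------------
# otherapp.tfvars (hypothetical)
vs_name    = "ws-otherapp-fad-vs"
vs_address = "10.2.1.116"

real_servers = {
  rs_otherapp1 = { ip = "10.1.1.221", port = 8080, status = "enable", weight = 1, id = 1 }
}
----------------------------------------------------------------------------------------------------------------------------------------------------------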
The next file defines the certificate and certificate group as well as the client SSL profile.
=> cat /home/fortinet/FortiADC_slb_employeedb/employeedb_certificate.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
########################################################################
# Local certificate/key upload (Ansible: fadcos_system_certificate_local_upload)
########################################################################
resource "fortiadc_system_certificate_local_upload" "local_upload" {
  mkey   = var.cert_name   # employeedb-ssl
  type   = "CertKey"
  upload = "text"
  vdom   = "root"

  # Use local file paths (Terraform provider reads them on the FortiADC box)
  cert = var.cert_path   # /home/fortinet/cert/fortidemo/k3s-apps-external.crt
  key  = var.key_path    # /home/fortinet/cert/fortidemo/k3s-apps-external.key
}

########################################################################
# Local certificate group (Ansible: fadcos_local_cert_group action=add_group)
########################################################################
resource "fortiadc_system_certificate_local_cert_group" "cert_local_group" {
  mkey = var.cert_group   # EMPLOYEEDB_CERT_GROUP
  vdom = "root"
}

########################################################################
# Group member (Ansible: fadcos_local_cert_group action=add_member)
########################################################################
resource "fortiadc_system_certificate_local_cert_group_child_group_member" "cert_local_group_member" {
  pkey = fortiadc_system_certificate_local_cert_group.cert_local_group.mkey
  mkey = "1"   # Member index; keep it simple
  vdom = "root"

  local_cert = fortiadc_system_certificate_local_upload.local_upload.mkey

  depends_on = [
    fortiadc_system_certificate_local_upload.local_upload,
    fortiadc_system_certificate_local_cert_group.cert_local_group,
  ]
}

########################################################################
# Client SSL Profile (Ansible: fadcos_client_ssl_profile)
########################################################################
resource "fortiadc_load_balance_client_ssl_profile" "client_ssl" {
  mkey = var.client_ssl_profile   # LB_CLIENT_SSL_EMPLOYEEDB

  # Map of Ansible parameters -> Terraform fields
  backend_customized_ssl_ciphers_flag = "enable"
  backend_ssl_ocsp_stapling_support   = "disable"
  backend_ssl_allowed_versions        = "sslv3 tlsv1.0 tlsv1.1 tlsv1.2"
  backend_ssl_sni_forward             = "disable"

  # Not explicit in Ansible, but usually required as 'cv1'
  client_certificate_verify      = "cv1"
  client_certificate_verify_mode = "required"

  client_sni_required                    = "disable"
  customized_ssl_ciphers_flag            = "disable"
  forward_proxy                          = "disable"
  forward_proxy_local_signing_ca         = "SSLPROXY_LOCAL_CA"
  http_forward_client_certificate        = "disable"
  http_forward_client_certificate_header = "X-Client-Cert"
  local_certificate_group                = var.cert_group

  reject_ocsp_stapling_with_missing_nextupdate = "disable"

  ssl_allowed_versions       = "tlsv1.1 tlsv1.2"
  ssl_dh_param_size          = "1024bit"
  ssl_dynamic_record_sizing  = "disable"
  ssl_renegotiate_period     = "0"
  ssl_renegotiate_size       = "0"
  ssl_renegotiation          = "disable"
  ssl_renegotiation_interval = "-1"
  ssl_secure_renegotiation   = "require"
  ssl_session_cache_flag     = "enable"
  use_tls_tickets            = "enable"

  depends_on = [
    fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member
  ]
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
This file configures the health check definition used to verify the availability and status of the real servers.
=> cat /home/fortinet/FortiADC_slb_employeedb/employeedb_health_check.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
########################################################################
# HTTP 200 Health Check (LBHC_HTTP_200)
########################################################################
resource "fortiadc_system_health_check" "http_200" {
  mkey = "LBHC_HTTP_200"

  # HTTP health check – equivalent to your Ansible playbook
  type           = "http"
  dest_addr_type = "ipv4"

  # What to send to the server
  send_string = "/"

  # Expected HTTP status code
  status_code = "200"

  # Timing / retries (tune as you like)
  interval   = "5"
  timeout    = "4"
  up_retry   = "5"
  down_retry = "5"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
This configuration file contains the server load balancer settings for the EmployeeDB application.
=> cat /home/fortinet/FortiADC_slb_employeedb/employeedb_load_balancer.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
########################################################################
# Real servers (Ansible: fadcos_real_server)
########################################################################
resource "fortiadc_load_balance_real_server" "rs" {
  for_each = var.real_servers

  mkey        = each.key
  address     = each.value.ip
  server_type = "static"
  type        = "ip"
  status      = each.value.status

  sdn_addr_private = "disable"
}

########################################################################
# Real server pool (Ansible: fadcos_real_server_pool)
########################################################################
resource "fortiadc_load_balance_pool" "pool" {
  mkey = "employeedb"
  vdom = "root"

  pool_type = "ipv4"
  type      = "static"

  health_check              = "enable"
  health_check_list         = "LBHC_HTTP_200"
  health_check_relationship = "AND"

  # make sure the health check exists before the pool is created
  depends_on = [
    fortiadc_system_health_check.http_200
  ]
}

########################################################################
# Pool members (Ansible: fadcos_real_server_pool_member)
########################################################################
resource "fortiadc_load_balance_pool_child_pool_member" "member" {
  for_each = var.real_servers

  pkey = fortiadc_load_balance_pool.pool.mkey   # employeedb
  mkey = tostring(each.value.id)                # 1 / 2 / 3

  port   = tostring(each.value.port)     # 8080
  status = each.value.status
  weight = tostring(each.value.weight)

  real_server_id = each.key   # rs_employeedb1, etc.
  cookie         = "rs${each.value.id}"
  ssl            = "disable"

  # sensible defaults
  health_check_inherit = "enable"
  rs_profile_inherit   = "enable"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
And this file configures the FortiADC virtual server and its IP address.
=> cat /home/fortinet/FortiADC_slb_employeedb/employeedb_virtual_server.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
########################################################################
# HTTPS Virtual Server (Ansible: fadcos_virtual_server)
########################################################################
resource "fortiadc_load_balance_virtual_server" "employeedb_vs_l7" {
  mkey = var.vs_name   # ws-employeedb-fad-vs
  type = "l7-load-balance"
  vdom = "root"

  addr_type = "ipv4"
  address   = var.vs_address          # 10.2.1.115
  interface = var.vs_interface        # port1
  port      = tostring(var.vs_port)   # 443

  profile = "LB_PROF_HTTPS"
  method  = "LB_METHOD_ROUND_ROBIN"
  pool    = fortiadc_load_balance_pool.pool.mkey

  client_ssl_profile = fortiadc_load_balance_client_ssl_profile.client_ssl.mkey

  status = "enable"

  # Optional extras – leave defaults if you like
  http2https            = "disable"
  connection_limit      = "0"
  connection_rate_limit = "0"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
Terraform – Initialize Directory, Create Plan and Execute
Now that we have all the required configuration files, we can initialize Terraform.
=> terraform -chdir=$HOME/FortiADC_slb_employeedb init
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Initializing the backend...
Initializing provider plugins...
- Finding fortinetdev/fortiadc versions matching "1.2.0"...
- Installing fortinetdev/fortiadc v1.2.0...
- Installed fortinetdev/fortiadc v1.2.0 (signed by a HashiCorp partner, key ID 325239133A112044)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://developer.hashicorp.com/terraform/cli/plugins/signing
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Create and store the plan as employeedb.plan for later reference
=> terraform -chdir=$HOME/FortiADC_slb_employeedb plan \
-var-file=$HOME/.terraform/secrets.tfvars \
-out=employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# fortiadc_load_balance_client_ssl_profile.client_ssl will be created
+ resource "fortiadc_load_balance_client_ssl_profile" "client_ssl" {
+ backend_certificate_verify = (known after apply)
+ backend_ciphers_tlsv13 = (known after apply)
+ backend_customized_ssl_ciphers = (known after apply)
+ backend_customized_ssl_ciphers_flag = "enable"
+ backend_ssl_allowed_versions = "sslv3 tlsv1.0 tlsv1.1 tlsv1.2"
+ backend_ssl_ciphers = (known after apply)
+ backend_ssl_ocsp_stapling_support = "disable"
+ backend_ssl_sni_forward = "disable"
+ client_certificate_verify = "cv1"
+ client_certificate_verify_mode = "required"
+ client_sni_required = "disable"
+ customized_ssl_ciphers = (known after apply)
+ customized_ssl_ciphers_flag = "disable"
+ forward_proxy = "disable"
+ forward_proxy_certificate_caching = (known after apply)
+ forward_proxy_intermediate_ca_group = (known after apply)
+ forward_proxy_local_signing_ca = "SSLPROXY_LOCAL_CA"
+ http_forward_client_certificate = "disable"
+ http_forward_client_certificate_header = "X-Client-Cert"
+ id = (known after apply)
+ local_certificate_group = "EMPLOYEEDB_CERT_GROUP"
+ mkey = "LB_CLIENT_SSL_EMPLOYEEDB"
+ reject_ocsp_stapling_with_missing_nextupdate = "disable"
+ rfc7919_comply = (known after apply)
+ ssl_allowed_versions = "tlsv1.1 tlsv1.2"
+ ssl_ciphers = (known after apply)
+ ssl_ciphers_tlsv13 = (known after apply)
+ ssl_dh_param_size = "1024bit"
+ ssl_dynamic_record_sizing = "disable"
+ ssl_renegotiate_period = "0"
+ ssl_renegotiate_size = "0"
+ ssl_renegotiation = "disable"
+ ssl_renegotiation_interval = "-1"
+ ssl_secure_renegotiation = "require"
+ ssl_session_cache_flag = "enable"
+ supported_groups = (known after apply)
+ use_tls_tickets = "enable"
}
# fortiadc_load_balance_pool.pool will be created
+ resource "fortiadc_load_balance_pool" "pool" {
+ direct_route_ip = (known after apply)
+ direct_route_ip6 = (known after apply)
+ direct_route_mode = (known after apply)
+ health_check = "enable"
+ health_check_list = "LBHC_HTTP_200"
+ health_check_relationship = "AND"
+ id = (known after apply)
+ mkey = "employeedb"
+ pool_type = "ipv4"
+ rs_profile = (known after apply)
+ sdn_addr_private = (known after apply)
+ sdn_connector = (known after apply)
+ service = (known after apply)
+ type = "static"
+ vdom = "root"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs1"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "1"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb1"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs2"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "2"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb2"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs3"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "3"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb3"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb1"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.211"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb1"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb2"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.212"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb2"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb3"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.213"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb3"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_virtual_server.employeedb_vs_l7 will be created
+ resource "fortiadc_load_balance_virtual_server" "employeedb_vs_l7" {
+ addr_type = "ipv4"
+ address = "10.2.1.115"
+ address6 = (known after apply)
+ adfs_published_service = (known after apply)
+ alone = (known after apply)
+ auth_policy = (known after apply)
+ av_profile = (known after apply)
+ azure_lb_backend = (known after apply)
+ captcha_profile = (known after apply)
+ client_ssl_profile = "LB_CLIENT_SSL_EMPLOYEEDB"
+ clone_pool = (known after apply)
+ clone_traffic_type = (known after apply)
+ comments = (known after apply)
+ connection_limit = "0"
+ connection_pool = (known after apply)
+ connection_rate_limit = "0"
+ content_rewriting = (known after apply)
+ content_rewriting_list = (known after apply)
+ content_routing = (known after apply)
+ content_routing_list = (known after apply)
+ domain_name = (known after apply)
+ dos_profile = (known after apply)
+ error_msg = (known after apply)
+ error_page = (known after apply)
+ fortiview = (known after apply)
+ host_name = (known after apply)
+ http2https = "disable"
+ http2https_port = (known after apply)
+ id = (known after apply)
+ interface = "port1"
+ ips_profile = (known after apply)
+ l2_exception_list = (known after apply)
+ method = "LB_METHOD_ROUND_ROBIN"
+ mkey = "ws-employeedb-fad-vs"
+ one_click_gslb_server = (known after apply)
+ packet_fwd_method = (known after apply)
+ pagespeed = (known after apply)
+ persistence = (known after apply)
+ pool = "employeedb"
+ port = "443"
+ profile = "LB_PROF_HTTPS"
+ protocol = (known after apply)
+ public_ip = (known after apply)
+ public_ip6 = (known after apply)
+ public_ip_type = (known after apply)
+ schedule_list = (known after apply)
+ schedule_pool_list = (known after apply)
+ scripting_flag = (known after apply)
+ scripting_list = (known after apply)
+ source_pool_list = (known after apply)
+ ssl_mirror = (known after apply)
+ ssl_mirror_intf = (known after apply)
+ status = "enable"
+ stream_scripting_flag = (known after apply)
+ stream_scripting_list = (known after apply)
+ traffic_group = (known after apply)
+ traffic_log = (known after apply)
+ trans_rate_limit = (known after apply)
+ type = "l7-load-balance"
+ use_azure_lb_backend_ip = (known after apply)
+ vdom = "root"
+ waf_profile = (known after apply)
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ wccp = (known after apply)
+ ztna_profile = (known after apply)
}
# fortiadc_system_certificate_local_cert_group.cert_local_group will be created
+ resource "fortiadc_system_certificate_local_cert_group" "cert_local_group" {
+ id = (known after apply)
+ mkey = "EMPLOYEEDB_CERT_GROUP"
+ vdom = "root"
}
# fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member will be created
+ resource "fortiadc_system_certificate_local_cert_group_child_group_member" "cert_local_group_member" {
+ default = (known after apply)
+ extra_intermediate_cag = (known after apply)
+ extra_local_cert = (known after apply)
+ extra_ocsp_stapling = (known after apply)
+ id = (known after apply)
+ intermediate_cag = (known after apply)
+ local_cert = "employeedb-ssl"
+ mkey = "1"
+ ocsp_stapling = (known after apply)
+ pkey = "EMPLOYEEDB_CERT_GROUP"
+ vdom = "root"
}
# fortiadc_system_certificate_local_upload.local_upload will be created
+ resource "fortiadc_system_certificate_local_upload" "local_upload" {
+ cert = "/home/fortinet/cert/fortidemo/k3s-apps-external.crt"
+ id = (known after apply)
+ key = "/home/fortinet/cert/fortidemo/k3s-apps-external.key"
+ mkey = "employeedb-ssl"
+ type = "CertKey"
+ upload = "text"
+ vdom = "root"
}
# fortiadc_system_health_check.http_200 will be created
+ resource "fortiadc_system_health_check" "http_200" {
+ acct_appid = (known after apply)
+ addr_type = (known after apply)
+ agent_type = (known after apply)
+ allow_ssl_version = (known after apply)
+ attribute = (known after apply)
+ auth_appid = (known after apply)
+ basedn = (known after apply)
+ binddn = (known after apply)
+ column = (known after apply)
+ community = (known after apply)
+ compare_type = (known after apply)
+ connect_string = (known after apply)
+ connect_type = (known after apply)
+ counter_value = (known after apply)
+ cpu = (known after apply)
+ cpu_weight = (known after apply)
+ database = (known after apply)
+ dest_addr = (known after apply)
+ dest_addr6 = (known after apply)
+ dest_addr_type = "ipv4"
+ disk = (known after apply)
+ disk_weight = (known after apply)
+ domain_name = (known after apply)
+ down_retry = "5"
+ file = (known after apply)
+ filter = (known after apply)
+ folder = (known after apply)
+ host_addr = (known after apply)
+ host_addr6 = (known after apply)
+ host_ip6_addr = (known after apply)
+ host_ip_addr = (known after apply)
+ hostname = (known after apply)
+ http_connect = (known after apply)
+ http_extra_string = (known after apply)
+ http_version = (known after apply)
+ id = (known after apply)
+ interval = "5"
+ local_cert = (known after apply)
+ match_type = (known after apply)
+ mem = (known after apply)
+ mem_weight = (known after apply)
+ method_type = (known after apply)
+ mkey = "LBHC_HTTP_200"
+ mssql_column = (known after apply)
+ mssql_receive_string = (known after apply)
+ mssql_row = (known after apply)
+ mssql_send_string = (known after apply)
+ mysql_server_type = (known after apply)
+ nas_ip = (known after apply)
+ oid = (known after apply)
+ oracle_receive_string = (known after apply)
+ oracle_send_string = (known after apply)
+ origin_host = (known after apply)
+ origin_realm = (known after apply)
+ passive = (known after apply)
+ password = (known after apply)
+ port = (known after apply)
+ product_name = (known after apply)
+ pwd_type = (known after apply)
+ radius_reject = (known after apply)
+ receive_string = (known after apply)
+ remote_host = (known after apply)
+ remote_password = (known after apply)
+ remote_port = (known after apply)
+ remote_username = (known after apply)
+ row = (known after apply)
+ rtsp_describe_url = (known after apply)
+ rtsp_method_type = (known after apply)
+ script = (known after apply)
+ secret_key = (known after apply)
+ send_string = "/"
+ service_name = (known after apply)
+ sid = (known after apply)
+ sip_request_type = (known after apply)
+ ssl_ciphers = (known after apply)
+ status_code = "200"
+ string_value = (known after apply)
+ timeout = "4"
+ type = "http"
+ up_retry = "5"
+ username = (known after apply)
+ value_type = (known after apply)
+ vendor_id = (known after apply)
+ version = (known after apply)
}
Plan: 13 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────
Saved the plan to: employeedb.plan
To perform exactly these actions, run the following command to apply:
terraform apply "employeedb.plan"
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
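The saved plan can be inspected again at any time before applying it:

=> terraform -chdir=$HOME/FortiADC_slb_employeedb show employeedb.plan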
Finally, execute the plan. Since the saved plan file already embeds the variable values and requires no interactive approval, it is applied directly:
=> terraform -chdir=$HOME/FortiADC_slb_employeedb apply employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# fortiadc_load_balance_client_ssl_profile.client_ssl will be created
+ resource "fortiadc_load_balance_client_ssl_profile" "client_ssl" {
+ backend_certificate_verify = (known after apply)
+ backend_ciphers_tlsv13 = (known after apply)
+ backend_customized_ssl_ciphers = (known after apply)
+ backend_customized_ssl_ciphers_flag = "enable"
+ backend_ssl_allowed_versions = "sslv3 tlsv1.0 tlsv1.1 tlsv1.2"
+ backend_ssl_ciphers = (known after apply)
+ backend_ssl_ocsp_stapling_support = "disable"
+ backend_ssl_sni_forward = "disable"
+ client_certificate_verify = "cv1"
+ client_certificate_verify_mode = "required"
+ client_sni_required = "disable"
+ customized_ssl_ciphers = (known after apply)
+ customized_ssl_ciphers_flag = "disable"
+ forward_proxy = "disable"
+ forward_proxy_certificate_caching = (known after apply)
+ forward_proxy_intermediate_ca_group = (known after apply)
+ forward_proxy_local_signing_ca = "SSLPROXY_LOCAL_CA"
+ http_forward_client_certificate = "disable"
+ http_forward_client_certificate_header = "X-Client-Cert"
+ id = (known after apply)
+ local_certificate_group = "EMPLOYEEDB_CERT_GROUP"
+ mkey = "LB_CLIENT_SSL_EMPLOYEEDB"
+ reject_ocsp_stapling_with_missing_nextupdate = "disable"
+ rfc7919_comply = (known after apply)
+ ssl_allowed_versions = "tlsv1.1 tlsv1.2"
+ ssl_ciphers = (known after apply)
+ ssl_ciphers_tlsv13 = (known after apply)
+ ssl_dh_param_size = "1024bit"
+ ssl_dynamic_record_sizing = "disable"
+ ssl_renegotiate_period = "0"
+ ssl_renegotiate_size = "0"
+ ssl_renegotiation = "disable"
+ ssl_renegotiation_interval = "-1"
+ ssl_secure_renegotiation = "require"
+ ssl_session_cache_flag = "enable"
+ supported_groups = (known after apply)
+ use_tls_tickets = "enable"
}
# fortiadc_load_balance_pool.pool will be created
+ resource "fortiadc_load_balance_pool" "pool" {
+ direct_route_ip = (known after apply)
+ direct_route_ip6 = (known after apply)
+ direct_route_mode = (known after apply)
+ health_check = "enable"
+ health_check_list = "LBHC_HTTP_200"
+ health_check_relationship = "AND"
+ id = (known after apply)
+ mkey = "employeedb"
+ pool_type = "ipv4"
+ rs_profile = (known after apply)
+ sdn_addr_private = (known after apply)
+ sdn_connector = (known after apply)
+ service = (known after apply)
+ type = "static"
+ vdom = "root"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs1"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "1"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb1"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs2"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "2"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb2"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"] will be created
+ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
+ address = (known after apply)
+ address6 = (known after apply)
+ backup = (known after apply)
+ connection_rate_limit = (known after apply)
+ connlimit = (known after apply)
+ cookie = "rs3"
+ hc_status = (known after apply)
+ health_check_inherit = "enable"
+ host = (known after apply)
+ id = (known after apply)
+ m_health_check = (known after apply)
+ m_health_check_list = (known after apply)
+ m_health_check_relationship = (known after apply)
+ mkey = "3"
+ modify_host = (known after apply)
+ mssql_read_only = (known after apply)
+ mysql_group_id = (known after apply)
+ mysql_read_only = (known after apply)
+ pkey = "employeedb"
+ port = "8080"
+ proxy_protocol = (known after apply)
+ real_server_id = "rs_employeedb3"
+ recover = (known after apply)
+ rs_profile = (known after apply)
+ rs_profile_inherit = "enable"
+ server_name = (known after apply)
+ ssl = "disable"
+ status = "enable"
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ weight = "1"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb1"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.211"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb1"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb2"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.212"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb2"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_real_server.rs["rs_employeedb3"] will be created
+ resource "fortiadc_load_balance_real_server" "rs" {
+ address = "10.1.1.213"
+ address6 = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ mkey = "rs_employeedb3"
+ sdn_addr_private = "disable"
+ sdn_connector = (known after apply)
+ server_type = "static"
+ status = "enable"
+ type = "ip"
}
# fortiadc_load_balance_virtual_server.employeedb_vs_l7 will be created
+ resource "fortiadc_load_balance_virtual_server" "employeedb_vs_l7" {
+ addr_type = "ipv4"
+ address = "10.2.1.115"
+ address6 = (known after apply)
+ adfs_published_service = (known after apply)
+ alone = (known after apply)
+ auth_policy = (known after apply)
+ av_profile = (known after apply)
+ azure_lb_backend = (known after apply)
+ captcha_profile = (known after apply)
+ client_ssl_profile = "LB_CLIENT_SSL_EMPLOYEEDB"
+ clone_pool = (known after apply)
+ clone_traffic_type = (known after apply)
+ comments = (known after apply)
+ connection_limit = "0"
+ connection_pool = (known after apply)
+ connection_rate_limit = "0"
+ content_rewriting = (known after apply)
+ content_rewriting_list = (known after apply)
+ content_routing = (known after apply)
+ content_routing_list = (known after apply)
+ domain_name = (known after apply)
+ dos_profile = (known after apply)
+ error_msg = (known after apply)
+ error_page = (known after apply)
+ fortiview = (known after apply)
+ host_name = (known after apply)
+ http2https = "disable"
+ http2https_port = (known after apply)
+ id = (known after apply)
+ interface = "port1"
+ ips_profile = (known after apply)
+ l2_exception_list = (known after apply)
+ method = "LB_METHOD_ROUND_ROBIN"
+ mkey = "ws-employeedb-fad-vs"
+ one_click_gslb_server = (known after apply)
+ packet_fwd_method = (known after apply)
+ pagespeed = (known after apply)
+ persistence = (known after apply)
+ pool = "employeedb"
+ port = "443"
+ profile = "LB_PROF_HTTPS"
+ protocol = (known after apply)
+ public_ip = (known after apply)
+ public_ip6 = (known after apply)
+ public_ip_type = (known after apply)
+ schedule_list = (known after apply)
+ schedule_pool_list = (known after apply)
+ scripting_flag = (known after apply)
+ scripting_list = (known after apply)
+ source_pool_list = (known after apply)
+ ssl_mirror = (known after apply)
+ ssl_mirror_intf = (known after apply)
+ status = "enable"
+ stream_scripting_flag = (known after apply)
+ stream_scripting_list = (known after apply)
+ traffic_group = (known after apply)
+ traffic_log = (known after apply)
+ trans_rate_limit = (known after apply)
+ type = "l7-load-balance"
+ use_azure_lb_backend_ip = (known after apply)
+ vdom = "root"
+ waf_profile = (known after apply)
+ warmrate = (known after apply)
+ warmup = (known after apply)
+ wccp = (known after apply)
+ ztna_profile = (known after apply)
}
# fortiadc_system_certificate_local_cert_group.cert_local_group will be created
+ resource "fortiadc_system_certificate_local_cert_group" "cert_local_group" {
+ id = (known after apply)
+ mkey = "EMPLOYEEDB_CERT_GROUP"
+ vdom = "root"
}
# fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member will be created
+ resource "fortiadc_system_certificate_local_cert_group_child_group_member" "cert_local_group_member" {
+ default = (known after apply)
+ extra_intermediate_cag = (known after apply)
+ extra_local_cert = (known after apply)
+ extra_ocsp_stapling = (known after apply)
+ id = (known after apply)
+ intermediate_cag = (known after apply)
+ local_cert = "employeedb-ssl"
+ mkey = "1"
+ ocsp_stapling = (known after apply)
+ pkey = "EMPLOYEEDB_CERT_GROUP"
+ vdom = "root"
}
# fortiadc_system_certificate_local_upload.local_upload will be created
+ resource "fortiadc_system_certificate_local_upload" "local_upload" {
+ cert = "/home/fortinet/cert/fortidemo/k3s-apps-external.crt"
+ id = (known after apply)
+ key = "/home/fortinet/cert/fortidemo/k3s-apps-external.key"
+ mkey = "employeedb-ssl"
+ type = "CertKey"
+ upload = "text"
+ vdom = "root"
}
# fortiadc_system_health_check.http_200 will be created
+ resource "fortiadc_system_health_check" "http_200" {
+ acct_appid = (known after apply)
+ addr_type = (known after apply)
+ agent_type = (known after apply)
+ allow_ssl_version = (known after apply)
+ attribute = (known after apply)
+ auth_appid = (known after apply)
+ basedn = (known after apply)
+ binddn = (known after apply)
+ column = (known after apply)
+ community = (known after apply)
+ compare_type = (known after apply)
+ connect_string = (known after apply)
+ connect_type = (known after apply)
+ counter_value = (known after apply)
+ cpu = (known after apply)
+ cpu_weight = (known after apply)
+ database = (known after apply)
+ dest_addr = (known after apply)
+ dest_addr6 = (known after apply)
+ dest_addr_type = "ipv4"
+ disk = (known after apply)
+ disk_weight = (known after apply)
+ domain_name = (known after apply)
+ down_retry = "5"
+ file = (known after apply)
+ filter = (known after apply)
+ folder = (known after apply)
+ host_addr = (known after apply)
+ host_addr6 = (known after apply)
+ host_ip6_addr = (known after apply)
+ host_ip_addr = (known after apply)
+ hostname = (known after apply)
+ http_connect = (known after apply)
+ http_extra_string = (known after apply)
+ http_version = (known after apply)
+ id = (known after apply)
+ interval = "5"
+ local_cert = (known after apply)
+ match_type = (known after apply)
+ mem = (known after apply)
+ mem_weight = (known after apply)
+ method_type = (known after apply)
+ mkey = "LBHC_HTTP_200"
+ mssql_column = (known after apply)
+ mssql_receive_string = (known after apply)
+ mssql_row = (known after apply)
+ mssql_send_string = (known after apply)
+ mysql_server_type = (known after apply)
+ nas_ip = (known after apply)
+ oid = (known after apply)
+ oracle_receive_string = (known after apply)
+ oracle_send_string = (known after apply)
+ origin_host = (known after apply)
+ origin_realm = (known after apply)
+ passive = (known after apply)
+ password = (known after apply)
+ port = (known after apply)
+ product_name = (known after apply)
+ pwd_type = (known after apply)
+ radius_reject = (known after apply)
+ receive_string = (known after apply)
+ remote_host = (known after apply)
+ remote_password = (known after apply)
+ remote_port = (known after apply)
+ remote_username = (known after apply)
+ row = (known after apply)
+ rtsp_describe_url = (known after apply)
+ rtsp_method_type = (known after apply)
+ script = (known after apply)
+ secret_key = (known after apply)
+ send_string = "/"
+ service_name = (known after apply)
+ sid = (known after apply)
+ sip_request_type = (known after apply)
+ ssl_ciphers = (known after apply)
+ status_code = "200"
+ string_value = (known after apply)
+ timeout = "4"
+ type = "http"
+ up_retry = "5"
+ username = (known after apply)
+ value_type = (known after apply)
+ vendor_id = (known after apply)
+ version = (known after apply)
}
Plan: 13 to add, 0 to change, 0 to destroy.
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Creating...
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Creating...
fortiadc_system_certificate_local_cert_group.cert_local_group: Creating...
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Creating...
fortiadc_system_certificate_local_upload.local_upload: Creating...
fortiadc_system_health_check.http_200: Creating...
fortiadc_system_certificate_local_cert_group.cert_local_group: Creation complete after 0s [id=EMPLOYEEDB_CERT_GROUP]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Creation complete after 0s [id=rs_employeedb1]
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Creation complete after 0s [id=rs_employeedb2]
fortiadc_system_certificate_local_upload.local_upload: Creation complete after 0s [id=employeedb-ssl]
fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member: Creating...
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Creation complete after 0s [id=rs_employeedb3]
fortiadc_system_health_check.http_200: Creation complete after 0s [id=LBHC_HTTP_200]
fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member: Creation complete after 0s [id=EMPLOYEEDB_CERT_GROUP_1]
fortiadc_load_balance_pool.pool: Creating...
fortiadc_load_balance_client_ssl_profile.client_ssl: Creating...
fortiadc_load_balance_pool.pool: Creation complete after 0s [id=employeedb]
fortiadc_load_balance_client_ssl_profile.client_ssl: Creation complete after 0s [id=LB_CLIENT_SSL_EMPLOYEEDB]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Creating...
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Creating...
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"]: Creating...
fortiadc_load_balance_virtual_server.employeedb_vs_l7: Creating...
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Creation complete after 0s [id=employeedb_3]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"]: Creation complete after 1s [id=employeedb_1]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Creation complete after 1s [id=employeedb_2]
fortiadc_load_balance_virtual_server.employeedb_vs_l7: Creation complete after 1s [id=ws-employeedb-fad-vs]
Apply complete! Resources: 13 added, 0 changed, 0 destroyed.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
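To make the most important values visible after an apply, an optional outputs.tf could be added. This is not part of the original demo; a minimal sketch using the resource attributes shown in the plan above:

----------------------------------------------------------------------------------------------------------------------------------------------------------
# outputs.tf (optional sketch, not part of the demo)
output "virtual_server_address" {
  description = "External IP of the EmployeeDB virtual server"
  value       = fortiadc_load_balance_virtual_server.employeedb_vs_l7.address
}

output "pool_name" {
  description = "Real server pool serving the virtual server"
  value       = fortiadc_load_balance_pool.pool.mkey
}
----------------------------------------------------------------------------------------------------------------------------------------------------------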
Verify the EmployeeDB application deployment
After the Terraform deployment has completed successfully, we can now verify that the EmployeeDB application is working correctly. The following command connects to the Java Spring Boot Actuator interface to verify the application’s health.
=> curl https://employeedb.apps.fortidemo.net/actuator/health 2>/dev/null | jq -r .
----------------------------------------------------------------------------------------------------------------------------------------------------------
{
  "status": "UP"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
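Since TLS is terminated on the FortiADC, we can also inspect the certificate that the virtual server presents, for example with openssl:

=> openssl s_client -connect 10.2.1.115:443 -servername employeedb.apps.fortidemo.net </dev/null 2>/dev/null | openssl x509 -noout -subject -dates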
Now let’s look at the EmployeeDB application in the web browser.

Verify FortiADC Configuration
In the FortiADC UI we can clearly see that the virtual server ws-employeedb-fad-vs has been created with the IP address 10.2.1.115.

The detailed configuration shows that the Terraform configuration created a Layer 7 load balancer.


The information in the ‘General’ tab shows the load balancer profile LB_PROF_HTTPS, our health check configuration, the client SSL profile containing the SSL/TLS certificate group with the certificate, and the newly created real server pool employeedb.


The client SSL profile LB_CLIENT_SSL_EMPLOYEEDB defines the supported SSL ciphers and SSL/TLS versions, as well as the certificate group EMPLOYEEDB_CERT_GROUP containing the SSL/TLS certificate shown below.

In the picture below, you can see the definition of the just-created real server pool employeedb with the three real servers employeedb1, employeedb2 and employeedb3 that act as the EmployeeDB application backend servers.

Test Load Balancing behavior
In the following section we test the load balancing behavior. Currently, Round Robin is selected as the load balancing method. Let’s generate some traffic again and watch it being balanced across the real servers. To create the load, we use the following script.
#!/bin/bash
# ============================================================================================
# File: ........: genTrafficDocker.sh
# Demo Package .: fortiadc-slb-employdb-ansible
# Language .....: bash
# Author .......: Sacha Dubois
# --------------------------------------------------------------------------------------------
# Category .....: Ansible
# Description ..: Generates load on the actuator/health endpoint and counts entries in the logs
# ============================================================================================

# Baseline: current number of /actuator requests in each backend's container log
st1=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fortinet@10.1.1.211 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")
st2=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fortinet@10.1.1.212 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")
st3=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fortinet@10.1.1.213 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")

ls1=0; ls2=0; ls3=0; cnt=1
while [ $cnt -le 10 ]; do
  # Send one request through the virtual server
  curl https://${APPNAME}.${DOMAIN}/actuator/health > /dev/null 2>&1

  # Count how many requests each backend has seen so far
  cn1=$(ssh fortinet@10.1.1.211 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")
  cn2=$(ssh fortinet@10.1.1.212 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")
  cn3=$(ssh fortinet@10.1.1.213 -n "docker logs edb" 2>/dev/null | grep -c "GET \"/actuator")

  let tt1=cn1-st1
  let tt2=cn2-st2
  let tt3=cn3-st3

  tx1=$(printf "%02d\n" $tt1)
  tx2=$(printf "%02d\n" $tt2)
  tx3=$(printf "%02d\n" $tt3)
  hdr=$(printf "%03d\n" $cnt)

  echo -e " [${hdr}] https://${APPNAME}.${DOMAIN}/actuator/health   10.1.1.211 [$tx1] 10.1.1.212 [$tx2] 10.1.1.213 [$tx3]"

  ls1=$tt1; ls2=$tt2; ls3=$tt3
  let cnt=cnt+1
  sleep 2
done
=> ./genTrafficDocker.sh
-----------------------------------------------------------------------------------------------------------------
[001] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [00] 10.1.1.213 [00]
[002] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [01] 10.1.1.213 [00]
[003] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [01] 10.1.1.213 [01]
[004] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [02] 10.1.1.212 [01] 10.1.1.213 [01]
[005] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [02] 10.1.1.212 [02] 10.1.1.213 [01]
[006] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [02] 10.1.1.212 [02] 10.1.1.213 [02]
[007] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [03] 10.1.1.212 [02] 10.1.1.213 [02]
[008] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [03] 10.1.1.212 [03] 10.1.1.213 [02]
[009] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [03] 10.1.1.212 [03] 10.1.1.213 [03]
[010] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [04] 10.1.1.212 [03] 10.1.1.213 [03]
-----------------------------------------------------------------------------------------------------------------
The test shows that the requests are evenly distributed over the three real servers. Now we are going to modify the configuration so that the members get the following traffic weights: Member-1: 1, Member-2: 3 and Member-3: 5. With weighted round robin, out of every nine requests, Member-1 should then receive one, Member-2 three and Member-3 five. The updated variables.tf is shown below.
=> cat /home/fortinet/FortiADC_slb_employeedb/variables.tf
----------------------------------------------------------------------------------------------------------------------------------------------------------
#############################################
# EmployeeDB SSL Load Balancer – Variables
#############################################

# -----------------------------
# FortiADC connection (already used by main.tf)
# -----------------------------
# variable "fortiadc_hostname" { ... }   <-- keep your existing ones
# variable "fortiadc_token" { ... }

# -----------------------------
# Real servers (from Ansible real_servers)
# -----------------------------
variable "real_servers" {
  description = "EmployeeDB real servers behind FortiADC"
  type = map(object({
    ip     = string
    port   = number
    status = string
    weight = number
    id     = number
  }))

  default = {
    rs_employeedb1 = {
      ip     = "10.1.1.211"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 1
    }
    rs_employeedb2 = {
      ip     = "10.1.1.212"
      port   = 8080
      status = "enable"
      weight = 3
      id     = 2
    }
    rs_employeedb3 = {
      ip     = "10.1.1.213"
      port   = 8080
      status = "enable"
      weight = 5
      id     = 3
    }
  }
}

# -----------------------------
# Pool / VS configuration
# -----------------------------
variable "pool_name" {
  description = "Real server pool name (Ansible: pool_name)"
  type        = string
  default     = "employeedb"
}

variable "vs_name" {
  description = "Virtual server name (Ansible: virtual_server_name)"
  type        = string
  default     = "ws-employeedb-fad-vs"
}

variable "vs_address" {
  description = "Virtual server IP (Ansible: virtual_server_ip)"
  type        = string
  default     = "10.2.1.115"
}

variable "vs_interface" {
  description = "Virtual server interface (Ansible: virtual_server_interface)"
  type        = string
  default     = "port1"
}

variable "vs_port" {
  description = "Virtual server port (Ansible: virtual_server_port)"
  type        = number
  default     = 443
}

# -----------------------------
# Health check (Ansible: LBHC_HTTP_200)
# We assume LBHC_HTTP_200 already exists on the ADC.
# -----------------------------
variable "health_check_list" {
  description = "Health check list attached to the pool"
  type        = string
  default     = "LBHC_HTTP_200"
}

# -----------------------------
# Certificate + SSL profile
# -----------------------------
variable "cert_name" {
  description = "Name of the CertKey object (Ansible: employeedb-ssl)"
  type        = string
  default     = "employeedb-ssl"
}

variable "cert_path" {
  description = "Path to certificate file (Ansible: ssl_cert)"
  type        = string
  default     = "/home/fortinet/cert/fortidemo/k3s-apps-external.crt"
}

variable "key_path" {
  description = "Path to key file (Ansible: ssl_key)"
  type        = string
  default     = "/home/fortinet/cert/fortidemo/k3s-apps-external.key"
}

variable "cert_group" {
  description = "Local certificate group name (Ansible: local_cert_group)"
  type        = string
  default     = "EMPLOYEEDB_CERT_GROUP"
}

variable "client_ssl_profile" {
  description = "Client SSL profile name (Ansible: client_ssl_profile)"
  type        = string
  default     = "LB_CLIENT_SSL_EMPLOYEEDB"
}
----------------------------------------------------------------------------------------------------------------------------------------------------------
Recreate the Plan
=> terraform -chdir=$HOME/FortiADC_slb_employeedb plan \
-var-file=$HOME/.terraform/secrets.tfvars \
-out=employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Refreshing state... [id=rs_employeedb2]
fortiadc_system_certificate_local_upload.local_upload: Refreshing state... [id=employeedb-ssl]
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Refreshing state... [id=rs_employeedb3]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Refreshing state... [id=rs_employeedb1]
fortiadc_system_certificate_local_cert_group.cert_local_group: Refreshing state... [id=EMPLOYEEDB_CERT_GROUP]
fortiadc_system_health_check.http_200: Refreshing state... [id=LBHC_HTTP_200]
fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member: Refreshing state... [id=EMPLOYEEDB_CERT_GROUP_1]
fortiadc_load_balance_pool.pool: Refreshing state... [id=employeedb]
fortiadc_load_balance_client_ssl_profile.client_ssl: Refreshing state... [id=LB_CLIENT_SSL_EMPLOYEEDB]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Refreshing state... [id=employeedb_2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Refreshing state... [id=employeedb_3]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"]: Refreshing state... [id=employeedb_1]
fortiadc_load_balance_virtual_server.employeedb_vs_l7: Refreshing state... [id=ws-employeedb-fad-vs]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# fortiadc_load_balance_client_ssl_profile.client_ssl will be updated in-place
~ resource "fortiadc_load_balance_client_ssl_profile" "client_ssl" {
+ client_certificate_verify = "cv1"
+ forward_proxy_local_signing_ca = "SSLPROXY_LOCAL_CA"
id = "LB_CLIENT_SSL_EMPLOYEEDB"
# (34 unchanged attributes hidden)
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"] will be updated in-place
~ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
id = "employeedb_2"
~ weight = "1" -> "3"
# (29 unchanged attributes hidden)
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"] will be updated in-place
~ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
id = "employeedb_3"
~ weight = "1" -> "5"
# (29 unchanged attributes hidden)
}
Plan: 0 to add, 3 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────
Saved the plan to: employeedb.plan
To perform exactly these actions, run the following command to apply:
terraform apply "employeedb.plan"
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
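Before applying, the saved plan can be reviewed once more with terraform show, which renders the plan file in human-readable form (or as JSON with -json, useful for automated policy checks in a CI/CD pipeline):
=> terraform -chdir=$HOME/FortiADC_slb_employeedb show employeedb.plan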
Apply the Infrastructure Changes
When applying a saved plan file, Terraform takes all values from the plan itself, so no -var-file or -auto-approve flags are needed (saved plans are applied without a confirmation prompt).
=> terraform -chdir=$HOME/FortiADC_slb_employeedb apply employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_load_balance_client_ssl_profile.client_ssl: Modifying... [id=LB_CLIENT_SSL_EMPLOYEEDB]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Modifying... [id=employeedb_3]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Modifying... [id=employeedb_2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Modifications complete after 0s [id=employeedb_3]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Modifications complete after 0s [id=employeedb_2]
fortiadc_load_balance_client_ssl_profile.client_ssl: Modifications complete after 0s [id=LB_CLIENT_SSL_EMPLOYEEDB]
Apply complete! Resources: 0 added, 3 changed, 0 destroyed.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
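If you want to double-check the applied weight without opening the FortiADC GUI, the value can also be read back from the Terraform state; a minimal sketch for Member-3, whose output should include the updated weight of 5:
=> terraform -chdir=$HOME/FortiADC_slb_employeedb state show \
   'fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]'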
Let’s generate some traffic again and watch the balancing across the Real Servers with the modified weighting. As we can see in the output below, Member-1 receives only a single request, while Member-3 receives the largest share of the traffic.
=> ./genTrafficDocker.sh
-----------------------------------------------------------------------------------------------------------------
[001] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [00] 10.1.1.213 [01]
[002] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [01] 10.1.1.213 [01]
[003] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [01] 10.1.1.213 [02]
[004] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [02] 10.1.1.213 [02]
[005] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [02] 10.1.1.213 [03]
[006] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [03] 10.1.1.213 [03]
[007] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [03] 10.1.1.213 [04]
[008] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [00] 10.1.1.212 [03] 10.1.1.213 [05]
[009] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [03] 10.1.1.213 [05]
[010] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [03] 10.1.1.213 [06]
-----------------------------------------------------------------------------------------------------------------
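This matches the configured 1:3:5 weighting: the weights sum to 9, so across the 10 requests we expect roughly 10 × 1/9 ≈ 1 request for Member-1, 10 × 3/9 ≈ 3 for Member-2 and 10 × 5/9 ≈ 6 for Member-3, which is exactly the 1/3/6 split in the final line above.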
Next we modify the configuration so that Member-2 is disabled. This is useful when a Real Server needs to be taken down for maintenance and the requests should be distributed to the remaining Servers instead.
=> cat /home/fortinet/FortiADC_slb_employeedb/variables.tf
------------------------------------------------------------------------------------------------------------------------------------------------------
#############################################
# EmployeeDB SSL Load Balancer – Variables
#############################################

# -----------------------------
# Real servers (from Ansible real_servers)
# Only the real_servers defaults change here;
# the remaining variables are identical to the
# version shown above.
# -----------------------------
variable "real_servers" {
  description = "EmployeeDB real servers behind FortiADC"
  type = map(object({
    ip     = string
    port   = number
    status = string
    weight = number
    id     = number
  }))

  default = {
    rs_employeedb1 = {
      ip     = "10.1.1.211"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 1
    }
    rs_employeedb2 = {
      ip     = "10.1.1.212"
      port   = 8080
      status = "disable"
      weight = 1
      id     = 2
    }
    rs_employeedb3 = {
      ip     = "10.1.1.213"
      port   = 8080
      status = "enable"
      weight = 1
      id     = 3
    }
  }
}
------------------------------------------------------------------------------------------------------------------------------------------------------
To roll out the updated Real Server Pool Members, let's regenerate the Terraform plan.
=> terraform -chdir=$HOME/FortiADC_slb_employeedb plan \
-var-file=$HOME/.terraform/secrets.tfvars \
-out=employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_system_certificate_local_cert_group.cert_local_group: Refreshing state... [id=EMPLOYEEDB_CERT_GROUP]
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Refreshing state... [id=rs_employeedb3]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Refreshing state... [id=rs_employeedb1]
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Refreshing state... [id=rs_employeedb2]
fortiadc_system_certificate_local_upload.local_upload: Refreshing state... [id=employeedb-ssl]
fortiadc_system_health_check.http_200: Refreshing state... [id=LBHC_HTTP_200]
fortiadc_system_certificate_local_cert_group_child_group_member.cert_local_group_member: Refreshing state... [id=EMPLOYEEDB_CERT_GROUP_1]
fortiadc_load_balance_client_ssl_profile.client_ssl: Refreshing state... [id=LB_CLIENT_SSL_EMPLOYEEDB]
fortiadc_load_balance_pool.pool: Refreshing state... [id=employeedb]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Refreshing state... [id=employeedb_2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb1"]: Refreshing state... [id=employeedb_1]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Refreshing state... [id=employeedb_3]
fortiadc_load_balance_virtual_server.employeedb_vs_l7: Refreshing state... [id=ws-employeedb-fad-vs]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# fortiadc_load_balance_client_ssl_profile.client_ssl will be updated in-place
~ resource "fortiadc_load_balance_client_ssl_profile" "client_ssl" {
+ client_certificate_verify = "cv1"
+ forward_proxy_local_signing_ca = "SSLPROXY_LOCAL_CA"
id = "LB_CLIENT_SSL_EMPLOYEEDB"
# (34 unchanged attributes hidden)
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"] will be updated in-place
~ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
id = "employeedb_2"
~ status = "enable" -> "disable"
~ weight = "3" -> "1"
# (28 unchanged attributes hidden)
}
# fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"] will be updated in-place
~ resource "fortiadc_load_balance_pool_child_pool_member" "member" {
id = "employeedb_3"
~ weight = "5" -> "1"
# (29 unchanged attributes hidden)
}
# fortiadc_load_balance_real_server.rs["rs_employeedb2"] will be updated in-place
~ resource "fortiadc_load_balance_real_server" "rs" {
id = "rs_employeedb2"
~ status = "enable" -> "disable"
# (9 unchanged attributes hidden)
}
Plan: 0 to add, 4 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────
Saved the plan to: employeedb.plan
To perform exactly these actions, run the following command to apply:
terraform apply "employeedb.plan"
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
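Note that the plan once again picks up the two client SSL profile attributes in addition to the Pool Member changes, because the provider keeps detecting drift on that resource. If a run really must be restricted to specific resources, Terraform's -target option can limit the plan accordingly; a sketch (targeted plans are intended for exceptional situations, so use them with care):
=> terraform -chdir=$HOME/FortiADC_slb_employeedb plan \
   -var-file=$HOME/.terraform/secrets.tfvars \
   -target='fortiadc_load_balance_pool_child_pool_member.member' \
   -out=employeedb.plan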
Apply the Infrastructure Changes
=> terraform -chdir=$HOME/FortiADC_slb_employeedb apply employeedb.plan
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Modifying... [id=rs_employeedb2]
fortiadc_load_balance_client_ssl_profile.client_ssl: Modifying... [id=LB_CLIENT_SSL_EMPLOYEEDB]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Modifying... [id=employeedb_2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Modifying... [id=employeedb_3]
fortiadc_load_balance_real_server.rs["rs_employeedb2"]: Modifications complete after 0s [id=rs_employeedb2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb2"]: Modifications complete after 0s [id=employeedb_2]
fortiadc_load_balance_pool_child_pool_member.member["rs_employeedb3"]: Modifications complete after 1s [id=employeedb_3]
fortiadc_load_balance_client_ssl_profile.client_ssl: Modifications complete after 1s [id=LB_CLIENT_SSL_EMPLOYEEDB]
Apply complete! Resources: 0 added, 4 changed, 0 destroyed.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Let’s generate some traffic again and watch the balancing. As expected, Member-2 no longer receives any requests, and the traffic now alternates evenly between Member-1 and Member-3.
=> ./genTrafficDocker.sh
-----------------------------------------------------------------------------------------------------------------
[001] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [00] 10.1.1.213 [00]
[002] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [01] 10.1.1.212 [00] 10.1.1.213 [01]
[003] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [02] 10.1.1.212 [00] 10.1.1.213 [01]
[004] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [02] 10.1.1.212 [00] 10.1.1.213 [02]
[005] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [03] 10.1.1.212 [00] 10.1.1.213 [02]
[006] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [03] 10.1.1.212 [00] 10.1.1.213 [03]
[007] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [04] 10.1.1.212 [00] 10.1.1.213 [03]
[008] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [04] 10.1.1.212 [00] 10.1.1.213 [04]
[009] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [05] 10.1.1.212 [00] 10.1.1.213 [04]
[010] https://employeedb.apps.fortidemo.net/actuator/health 10.1.1.211 [05] 10.1.1.212 [00] 10.1.1.213 [05]
-----------------------------------------------------------------------------------------------------------------
Cleaning Up
To complete this demo and clean up, we remove the Load Balancer configuration from the FortiADC.
=> terraform -chdir=$HOME/FortiADC_slb_employeedb destroy \
-var-file=$HOME/.terraform/secrets.tfvars \
-auto-approve
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Refreshing state... [id=rs_employeedb3]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Refreshing state... [id=rs_employeedb1]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# fortiadc_load_balance_real_server.rs["rs_employeedb1"] will be destroyed
- resource "fortiadc_load_balance_real_server" "rs" {
- address = "10.1.1.211" -> null
- address6 = "::" -> null
- id = "rs_employeedb1" -> null
- mkey = "rs_employeedb1" -> null
- sdn_addr_private = "disable" -> null
- server_type = "static" -> null
- status = "enable" -> null
- type = "ip" -> null
# (3 unchanged attributes hidden)
}
# fortiadc_load_balance_real_server.rs["rs_employeedb3"] will be destroyed
- resource "fortiadc_load_balance_real_server" "rs" {
- address = "10.1.1.213" -> null
- address6 = "::" -> null
- id = "rs_employeedb3" -> null
- mkey = "rs_employeedb3" -> null
- sdn_addr_private = "disable" -> null
- server_type = "static" -> null
- status = "enable" -> null
- type = "ip" -> null
# (3 unchanged attributes hidden)
}
Plan: 0 to add, 0 to change, 2 to destroy.
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Destroying... [id=rs_employeedb3]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Destroying... [id=rs_employeedb1]
fortiadc_load_balance_real_server.rs["rs_employeedb1"]: Destruction complete after 0s
fortiadc_load_balance_real_server.rs["rs_employeedb3"]: Destruction complete after 0s
Destroy complete! Resources: 2 destroyed.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
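As a final check, we can list what is left in the Terraform state; after a successful destroy the command returns no output:
=> terraform -chdir=$HOME/FortiADC_slb_employeedb state list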