Create a Subnet with Two Web Server VMs
Create two VM instances in the usual way, each running a web server, and attach them to the same subnet.
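As a rough sketch, assuming an image, flavor, and key that exist in your environment (the names below are placeholders, and the network ID is the ``admin_internal_net`` ID used later in this walkthrough), booting two instances that each serve a trivial page on port 80 could look like this:
```
# Illustrative only -- image, flavor and key names are assumptions.
cat > web-init.sh <<'EOF'
#!/bin/bash
# Cloud-init user-data: install Apache and serve the instance hostname on port 80.
apt-get update && apt-get install -y apache2
hostname > /var/www/html/index.html
EOF

openstack server create --image ubuntu-16.04 --flavor m1.small \
  --key-name mykey --nic net-id=2a165d79-9fdc-486b-8a7b-8db076ebee20 \
  --user-data web-init.sh web-1

openstack server create --image ubuntu-16.04 --flavor m1.small \
  --key-name mykey --nic net-id=2a165d79-9fdc-486b-8a7b-8db076ebee20 \
  --user-data web-init.sh web-2
```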
Install LBaaSv2 with the HAProxy driver
Install the lbaasv2-agent
Install the neutron-lbaasv2-agent package on all controllers:
```
sudo apt-get install neutron-lbaasv2-agent
```
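The LBaaSv2 service plugin code itself ships in the python-neutron-lbaas package. On Ubuntu it is normally pulled in as a dependency of the agent, but if neutron-server runs on a node that does not get the agent, install it there as well (this is an assumption about your packaging; names vary by distribution and release):
```
sudo apt-get install python-neutron-lbaas
```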
Configure the lbaas_agent.ini file
Update /etc/neutron/lbaas_agent.ini on all controllers so that it contains the following:
```
[DEFAULT]
verbose = False
debug = False
periodic_interval = 10
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
[haproxy]
user_group = nogroup
send_gratuitous_arp = 3
```
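The ``interface_driver`` above assumes Open vSwitch. If your deployment uses Linux bridge instead, set ``interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver``.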
Configure neutron.conf
On all controllers, apply the following diff to /etc/neutron/neutron.conf:
```
--- /etc/neutron/neutron.conf.old 2016-12-19 23:20:35.987369011 +0000
+++ /etc/neutron/neutron.conf 2016-12-19 20:02:23.735785952 +0000
@@ -30,6 +30,7 @@
# The service plugins Neutron will use (list value)
#service_plugins =
+service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
# The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be
# used. The others will be randomly generated. (string value)
@@ -1361,3 +1362,8 @@
# Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. (string value)
#ciphers = <None>
+
+[service_providers]
+service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```
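If you prefer not to hand-edit the file, here is a minimal sketch of the same change using crudini (assuming it is installed):
```
sudo crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins \
  neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
sudo crudini --set /etc/neutron/neutron.conf service_providers service_provider \
  LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```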
Restart the Neutron services
```
sudo service neutron-server restart
sudo service neutron-lbaasv2-agent restart
```
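Once the services are back, confirm the new agent has registered with neutron-server; the exact output varies, but a ``Loadbalancerv2 agent`` entry with an alive ``:-)`` status should show up:
```
neutron agent-list | grep -i loadbalancer
```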
Configure a LoadBalancer
Create the Loadbalancer
First, determine the ID of the subnet containing the hosts you want to load balance across.
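(If you already know the network name, ``neutron subnet-list`` shows the subnet IDs and CIDRs directly; the session below arrives at the same ID via the server and network listings instead.)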
```
root@node-2:~# openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.35, 10.109.3.170",
"ID": "5afe10cb-d302-4c3d-923c-0ed01cf3f8f5",
"Name": "lbtest"
},
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.33, 10.109.3.168",
"ID": "5773ebae-0882-460d-a464-1f97876a6db6",
"Name": "ex-2ofp-5e5rdjgonxds-z3s44d7kaxjf-server-hak6j4smwxag"
},
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.32, 10.109.3.167",
"ID": "f5752b90-7f24-4127-8b45-90e694455f5a",
"Name": "ex-2ofp-5z22nlq5cuyz-ju4nl35q5nvt-server-z4nje5cpxjsk"
}
]root@node-2:~# openstack network list --json
usage: openstack network list [-h] [-f {csv,json,table,value,yaml}]
[-c COLUMN] [--max-width ] [--noindent]
[--quote {all,minimal,none,nonnumeric}]
[--external] [--long]
openstack network list: error: unrecognized arguments: --json
root@node-2:~# openstack network list -f json
[
{
"Subnets": "449a4a98-5bae-42ed-a4cd-a2a24bd27a6b",
"ID": "9f178fd6-8914-426d-9f5e-d4ad9a073484",
"Name": "admin_floating_net"
},
{
"Subnets": "139cc698-f079-4c46-ac0f-6364ad3238d5",
"ID": "2a165d79-9fdc-486b-8a7b-8db076ebee20",
"Name": "admin_internal_net"
}
]root@node-2:~#
root@node-2:~# neutron lbaas-loadbalancer-create --name test-lb 139cc698-f079-4c46-ac0f-6364ad3238d5
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| id | 33fcb82e-ab1d-4c71-90b4-6ce04998b993 |
| listeners | |
| name | test-lb |
| operating_status | OFFLINE |
| pools | |
| provider | haproxy |
| provisioning_status | PENDING_CREATE |
| tenant_id | 9388b4bab91e4ac8a8cb96877df6af40 |
| vip_address | 10.109.4.41 |
| vip_port_id | 458fac9d-755e-4c10-ba54-2186076059a4 |
| vip_subnet_id | 139cc698-f079-4c46-ac0f-6364ad3238d5 |
+---------------------+--------------------------------------+
root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| id | 33fcb82e-ab1d-4c71-90b4-6ce04998b993 |
| listeners | |
| name | test-lb |
| operating_status | ONLINE |
| pools | |
| provider | haproxy |
| provisioning_status | ACTIVE |
| tenant_id | 9388b4bab91e4ac8a8cb96877df6af40 |
| vip_address | 10.109.4.41 |
| vip_port_id | 458fac9d-755e-4c10-ba54-2186076059a4 |
| vip_subnet_id | 139cc698-f079-4c46-ac0f-6364ad3238d5 |
+---------------------+--------------------------------------+
```
Notice that the operating status is _ONLINE_ but there are **no listeners**. Listeners are the ports you want the load balancer to listen on and balance traffic for; for example, HTTP on port 80.
The Mitaka LBaaS documentation says you should be able to ping the ``vip_address`` at this point. With OVS you cannot, because the LBaaS network namespace has not been created yet. This is what it looks like right now:
```
root@node-2:~# ip netns ls
qdhcp-2a165d79-9fdc-486b-8a7b-8db076ebee20
haproxy
vrouter
```
To get the ip netns namespace created, add a listener to the load balancer object.
```
root@node-2:~# neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb --protocol HTTP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | |
| default_tls_container_ref | |
| description | |
| id | 9ea6d06e-a214-4369-a37c-4e612883c76b |
| loadbalancers | {"id": "33fcb82e-ab1d-4c71-90b4-6ce04998b993"} |
| name | test-lb-http |
| protocol | HTTP |
| protocol_port | 80 |
| sni_container_refs | |
| tenant_id | 9388b4bab91e4ac8a8cb96877df6af40 |
+---------------------------+------------------------------------------------+
root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| id | 33fcb82e-ab1d-4c71-90b4-6ce04998b993 |
| listeners | {"id": "9ea6d06e-a214-4369-a37c-4e612883c76b"} |
| name | test-lb |
| operating_status | ONLINE |
| pools | |
| provider | haproxy |
| provisioning_status | ACTIVE |
| tenant_id | 9388b4bab91e4ac8a8cb96877df6af40 |
| vip_address | 10.109.4.41 |
| vip_port_id | 458fac9d-755e-4c10-ba54-2186076059a4 |
| vip_subnet_id | 139cc698-f079-4c46-ac0f-6364ad3238d5 |
+---------------------+------------------------------------------------+
root@node-2:~# ip netns list
qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993
qdhcp-2a165d79-9fdc-486b-8a7b-8db076ebee20
haproxy
vrouter
```
Notice above that the ``qlbaas-xxxx`` namespace is named after the ID of the load balancer, in this case ``33fcb82e-ab1d-4c71-90b4-6ce04998b993``.
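If you want to see what the agent built inside that namespace, it can be inspected like any other network namespace. A quick sketch (the ID is this environment's load balancer; the second command should show the haproxy process the agent spawned):
```
sudo ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 ip addr show
sudo ip netns pids qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 | xargs -r ps -fp
```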
Now it is possible to ping a test VM from within the load balancer namespace. This is different from what the Mitaka docs describe.
```
root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 netshow l3
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
Name Speed MTU Mode Summary
-- -------------- ------- ----- ------------ ------------------------
UP lo N/A 65536 Loopback IP: 127.0.0.1/8, ::1/128
UP tap458fac9d-75 N/A 1500 Interface/L3 IP: 10.109.4.41/24
root@node-2:~# openstack server list -f json
[
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.35, 10.109.3.170",
"ID": "5afe10cb-d302-4c3d-923c-0ed01cf3f8f5",
"Name": "lbtest"
},
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.33, 10.109.3.168",
"ID": "5773ebae-0882-460d-a464-1f97876a6db6",
"Name": "ex-2ofp-5e5rdjgonxds-z3s44d7kaxjf-server-hak6j4smwxag"
},
{
"Status": "ACTIVE",
"Networks": "admin_internal_net=10.109.4.32, 10.109.3.167",
"ID": "f5752b90-7f24-4127-8b45-90e694455f5a",
"Name": "ex-2ofp-5z22nlq5cuyz-ju4nl35q5nvt-server-z4nje5cpxjsk"
}
]root@node-2:~#
root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 ping -c4 10.109.4.35
PING 10.109.4.35 (10.109.4.35) 56(84) bytes of data.
64 bytes from 10.109.4.35: icmp_seq=1 ttl=64 time=4.47 ms
64 bytes from 10.109.4.35: icmp_seq=2 ttl=64 time=1.17 ms
64 bytes from 10.109.4.35: icmp_seq=3 ttl=64 time=1.08 ms
64 bytes from 10.109.4.35: icmp_seq=4 ttl=64 time=0.911 ms
--- 10.109.4.35 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.911/1.910/4.477/1.484 ms
```
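From here the remaining LBaaSv2 objects follow the same pattern. As a sketch (the subnet ID and member addresses are the ones from this environment; adjust as needed), a pool, two members, and a health monitor round out the listener, after which the VIP can be exercised from inside the namespace:
```
neutron lbaas-pool-create --name test-lb-pool --listener test-lb-http \
  --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet 139cc698-f079-4c46-ac0f-6364ad3238d5 \
  --address 10.109.4.35 --protocol-port 80 test-lb-pool
neutron lbaas-member-create --subnet 139cc698-f079-4c46-ac0f-6364ad3238d5 \
  --address 10.109.4.33 --protocol-port 80 test-lb-pool
neutron lbaas-healthmonitor-create --type HTTP --delay 5 --timeout 2 \
  --max-retries 3 --pool test-lb-pool
# Quick check against the VIP from inside the namespace
# (assumes the member VMs are actually answering on port 80):
ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl http://10.109.4.41/
```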