
Testbed Infrastructure Deployment

Each OpenStack installation will have the following nodes, with their respective configurations:

vm02 (controller and compute node):

  • vdb 100G for Ceph
  • vdc 100G for Cinder (LVM)
  • vdd 100G for Swift

vm03 (compute node):

  • vdb 100G for Ceph
  • vdc 100G for Swift

vm04 (Heat + Ceilometer management server):

  • vdb 500G for MongoDB
  • vdc 100G for Swift

Prerequisites

Step 1: Network interface configuration
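
This step is environment-specific, so no exact values are given here. Purely as an illustrative sketch (the interface name and netmask are assumptions; 10.0.0.11 matches the controller management address used later in this guide), a static management interface on Ubuntu 12.04 would be configured in /etc/network/interfaces with something like:

auto eth1
iface eth1 inet static
    address 10.0.0.11
    netmask 255.255.255.0

and brought up with:

# ifup eth1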

Step 2: Install Network Time Protocol (NTP)

# apt-get install -y ntp
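
The default configuration is usually fine on the controller. On the other nodes, it is common to synchronize against the controller instead of public pools; as a sketch (the server entry is an assumption based on the controller hostname used throughout this guide), replace the default server lines in /etc/ntp.conf with:

server controller iburst

and restart the daemon:

# service ntp restart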

Step 3: Controller setup

Install MySQL:

# apt-get install python-mysqldb mysql-server

Note: When you install the server package, you are prompted for the root password for the database. Choose a strong password and remember it.

Edit the /etc/mysql/my.cnf file:

Under the [mysqld] section, set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network:

[mysqld]
...
bind-address = $CONTROLLER_MNGT_IP

Under the [mysqld] section, set the following keys to enable InnoDB, UTF-8 character set, and UTF-8 collation by default:

[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart the MySQL service to apply the changes:

# service mysql restart
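
One quick way to confirm that MySQL is now listening on the management address rather than localhost:

# netstat -ntlp | grep 3306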

You must delete the anonymous users that are created when the database is first started. Otherwise, database connection problems occur when you follow the instructions in this guide. To do this, use the mysql_secure_installation command. Note that if mysql_secure_installation fails, you might need to run mysql_install_db first:

# mysql_install_db
# mysql_secure_installation

This command presents a number of options for you to secure your database installation. Respond yes to all prompts unless you have a good reason to do otherwise.

On all nodes other than the controller node, install the MySQL Python library:

# apt-get install python-mysqldb

Install OpenStack packages on all nodes

Install the Ubuntu Cloud Archive for Icehouse:

# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse

Update the package database and upgrade your system:

# apt-get update
# apt-get dist-upgrade

If you intend to use OpenStack Networking with Ubuntu 12.04, you should install a backported Linux kernel to improve the stability of your system. This installation is not needed if you intend to use the legacy networking service.

Install the Ubuntu 13.10 backported kernel:

# apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy

Reboot the system for all changes to take effect:

# reboot

Install the message broker service

On the controller node:

# apt-get install rabbitmq-server

Replace RABBIT_PASS with a suitable password.

# rabbitmqctl change_password guest RABBIT_PASS
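
You can confirm that the broker is running before moving on:

# rabbitmqctl status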

Install the Identity Service

On the controller node:

Install the OpenStack Identity Service on the controller node, together with python-keystoneclient (which is a dependency):

# apt-get install keystone

The Identity Service uses a database to store information. Specify the location of the database in the configuration file. In this guide, we use a MySQL database on the controller node with the username keystone. Replace KEYSTONE_DBPASS with a suitable password for the database user.

Edit /etc/keystone/keystone.conf and change the [database] section:

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
...

By default, the Ubuntu packages create a SQLite database. Delete the keystone.db file created in the /var/lib/keystone/ directory so that it does not get used by mistake:

# rm /var/lib/keystone/keystone.db

Use the password that you set previously to log in as root. Create a keystone database user:

$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

Create the database tables for the Identity Service:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

Define an authorization token to use as a shared secret between the Identity Service and other OpenStack services. Use openssl to generate a random token and store it in the configuration file:

# openssl rand -hex 10
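
As a small convenience sketch, capture the token in a shell variable so the same value can be pasted into keystone.conf below and exported as OS_SERVICE_TOKEN later:

# ADMIN_TOKEN=$(openssl rand -hex 10)
# echo $ADMIN_TOKEN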

Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing ADMIN_TOKEN with the results of the command:

[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
...

Configure the log directory. Edit the /etc/keystone/keystone.conf file and update the [DEFAULT] section:

[DEFAULT]
...
log_dir = /var/log/keystone

Restart the Identity Service:

# service keystone restart

By default, the Identity Service stores expired tokens in the database indefinitely. While potentially useful for auditing in production environments, the accumulation of expired tokens will considerably increase database size and may decrease service performance, particularly in test environments with limited resources. We recommend configuring a periodic task using cron to purge expired tokens hourly.

Run the following command to purge expired tokens every hour and log the output to /var/log/keystone/keystone-tokenflush.log:

# (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone
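
Verify that the job was registered:

# crontab -l -u keystone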

Define users, tenants, and roles

Before running keystone client commands, export the authorization token you generated above (in place of ADMIN_TOKEN) and the administrative endpoint of the Identity Service:

$ export OS_SERVICE_TOKEN=ADMIN_TOKEN
$ export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

Create an administrative user

Follow these steps to create an administrative user, role, and tenant. You will use this account for administrative interaction with the OpenStack cloud.

By default, the Identity Service creates a special _member_ role. The OpenStack dashboard automatically grants access to users with this role. You will give the admin user access to this role in addition to the admin role.

Note: Any role that you create must map to roles specified in the policy.json file included with each OpenStack service. The default policy file for most services grants administrative access to the admin role.

Create the admin user:

$ keystone user-create --name=admin --pass=ADMIN_PASS --email=ADMIN_EMAIL

Replace ADMIN_PASS with a secure password and replace ADMIN_EMAIL with an email address to associate with the account.

Create the admin role:

$ keystone role-create --name=admin

Create the admin tenant:

$ keystone tenant-create --name=admin --description="Admin Tenant"

You must now link the admin user, admin role, and admin tenant together using the user-role-add command:

$ keystone user-role-add --user=admin --tenant=admin --role=admin

Link the admin user, member role, and admin tenant:

$ keystone user-role-add --user=admin --role=_member_ --tenant=admin

Create a service tenant

OpenStack services also require a username, tenant, and role to access other OpenStack services. In a basic installation, OpenStack services typically share a single tenant named service.

You will create additional usernames and roles under this tenant as you install and configure each service.

Create the service tenant:

$ keystone tenant-create --name=service --description="Service Tenant"

Register the Identity Service and its endpoint

root@group0vm1:~# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 68683d6ffd7d49859dd9f7fe2fd12be7 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
root@group0vm1:~# keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://10.10.10.3:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | 0c34c6e6fd5f411a9e349eeca1c9b3db |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://10.10.10.3:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | 68683d6ffd7d49859dd9f7fe2fd12be7 |
+-------------+----------------------------------+

Verify the Identity Service installation

To verify that the Identity Service is installed and configured correctly, clear the values in the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:

$ unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

These variables, which were used to bootstrap the administrative user and register the Identity Service, are no longer needed.

You can now use regular user name-based authentication.

Request an authentication token by using the admin user and the password you chose for that user:

$ keystone --os-username=admin --os-password=ADMIN_PASS \
  --os-auth-url=http://controller:35357/v2.0 token-get

In response, you receive a token paired with your user ID. This verifies that the Identity Service is running on the expected endpoint and that your user account is established with the expected credentials.

Verify that authorization behaves as expected. To do so, request authorization on a tenant:

$ keystone --os-username=admin --os-password=ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
  token-get

In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and the tenant exists as expected.

You can also set your --os-* variables in your environment to simplify command-line usage. Set up an admin-openrc.sh file with the admin credentials and admin endpoint:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

Source this file to read in the environment variables:

$ source admin-openrc.sh

Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:

$ keystone token-get

The command returns a token and the ID of the specified tenant. This verifies that you have configured your environment variables correctly.


Create openrc.sh files

Similarly, create a demo-openrc.sh file for the demo user, replacing DEMO_PASS with that user's password:

export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0
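
Note that the demo user and tenant are not created elsewhere in this guide. A minimal sketch of creating them, mirroring the admin example above (the names, DEMO_PASS, and the e-mail address are assumptions):

$ keystone tenant-create --name=demo --description="Demo Tenant"
$ keystone user-create --name=demo --pass=DEMO_PASS [email protected]
$ keystone user-role-add --user=demo --role=_member_ --tenant=demo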

Image Service (controller node)

Install the Image Service on the controller node:

# apt-get install glance python-glanceclient

The Image Service stores information about images in a database. The examples in this guide use the MySQL database that is used by other OpenStack services.

Configure the location of the database. The Image Service provides the glance-api and glance-registry services, each with its own configuration file. You must update both configuration files throughout this section. Replace GLANCE_DBPASS with your Image Service database password.

Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and edit the [database] section of each file:

[database]
connection = mysql://glance:GLANCE_DBPASS@controller/glance

Configure the Image Service to use the message broker:

Edit the /etc/glance/glance-api.conf file and add the following keys to the [DEFAULT] section:

Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

By default, the Ubuntu packages create an SQLite database. Delete the glance.sqlite file (if it exists) created in the /var/lib/glance/ directory so that it does not get used by mistake:

# rm /var/lib/glance/glance.sqlite

Use the password you created to log in as root and create a glance database user:

$ mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';

Create the database tables for the Image Service:

# su -s /bin/sh -c "glance-manage db_sync" glance

Create a glance user that the Image Service can use to authenticate with the Identity service. Choose a password and specify an email address for the glance user. Use the service tenant and give the user the admin role:

$ keystone user-create --name=glance --pass=GLANCE_PASS \
   [email protected]
$ keystone user-role-add --user=glance --tenant=service --role=admin

Configure the Image Service to use the Identity Service for authentication.

Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.

Add or modify the following keys under the [keystone_authtoken] section:

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

Modify the following key under the [paste_deploy] section:

[paste_deploy]
...
flavor = keystone

Register the Image Service with the Identity service so that other OpenStack services can locate it. Register the service and create the endpoint:

$ keystone service-create --name=glance --type=image \
  --description="OpenStack Image Service"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://controller:9292 \
  --internalurl=http://controller:9292 \
  --adminurl=http://controller:9292

Restart the glance service with its new settings:

# service glance-registry restart
# service glance-api restart
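
To verify the Image Service, upload a test image. The cirros-0.3.2-x86_64 image that appears in the Compute verification later in this guide can be obtained and registered as follows (the download URL reflects the usual CirrOS mirror and is an assumption):

$ wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
$ glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
  --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
$ glance image-list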

Compute service (controller node)

Install the Compute packages necessary for the controller node.

# apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth \
  nova-novncproxy nova-scheduler python-novaclient

Compute stores information in a database. In this guide, we use a MySQL database on the controller node. Configure Compute with the database location and credentials. Replace NOVA_DBPASS with the password for the database that you will create in a later step.

Edit the [database] section in the /etc/nova/nova.conf file, adding it if necessary, to modify this key:

[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova

Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the management interface IP address of the controller node:

Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:

[DEFAULT]
...
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11

By default, the Ubuntu packages create an SQLite database. Delete the nova.sqlite file created in the /var/lib/nova/ directory so that it does not get used by mistake:

# rm /var/lib/nova/nova.sqlite

Use the password you created previously to log in as root. Create a nova database user:

$ mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

Create the Compute service tables:

# su -s /bin/sh -c "nova-manage db sync" nova

Create a nova user that Compute uses to authenticate with the Identity Service. Use the service tenant and give the user the admin role:

$ keystone user-create --name=nova --pass=NOVA_PASS [email protected]
$ keystone user-role-add --user=nova --tenant=service --role=admin

Configure Compute to use these credentials with the Identity Service running on the controller. Replace NOVA_PASS with your Compute password.

Edit the [DEFAULT] section in the /etc/nova/nova.conf file to add this key:

[DEFAULT]
...
auth_strategy = keystone

Add these keys to the [keystone_authtoken] section:

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

You must register Compute with the Identity Service so that other OpenStack services can locate it. Register the service and specify the endpoint:

$ keystone service-create --name=nova --type=compute \
  --description="OpenStack Compute"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
  --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
  --adminurl=http://controller:8774/v2/%\(tenant_id\)s

Restart Compute services:

# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart

To verify your configuration, list available images:

$ nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

Networking (Neutron)

Prerequisites

Before you configure OpenStack Networking (neutron), you must create a database and Identity service credentials, including a user and a service entry.

Connect to the database as the root user, create the neutron database, and grant the proper access to it:

Replace NEUTRON_DBPASS with a suitable password.

$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';

Create Identity service credentials for Networking:

Create the neutron user:

Replace NEUTRON_PASS with a suitable password and [email protected] with a suitable e-mail address.

$ keystone user-create --name neutron --pass NEUTRON_PASS --email [email protected]

Link the neutron user to the service tenant and admin role:

$ keystone user-role-add --user neutron --tenant service --role admin

Create the neutron service:

$ keystone service-create --name neutron --type network --description "OpenStack Networking"

Create the service endpoint:

$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ network / {print $2}') \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696

Install the Networking components (controller node)

# apt-get install -y neutron-server neutron-plugin-ml2

Configure the Networking server component

The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.

Configure Networking to use the database:

Edit the /etc/neutron/neutron.conf file and add the following key to the [database] section:

Replace NEUTRON_DBPASS with the password you chose for the database.

[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure Networking to use the Identity service for authentication:

Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:

[DEFAULT]
...
auth_strategy = keystone

Add the following keys to the [keystone_authtoken] section:

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Configure Networking to use the message broker:

Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:

Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.

[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

Configure Networking to notify Compute about network topology changes:

Replace SERVICE_TENANT_ID with the service tenant identifier (id) in the Identity service and NOVA_PASS with the password you chose for the nova user in the Identity service.

Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:

[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = SERVICE_TENANT_ID
nova_admin_password = NOVA_PASS
nova_admin_auth_url = http://controller:35357/v2.0

Note: To obtain the service tenant identifier (id):

$ source admin-openrc.sh
$ keystone tenant-get service
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | f727b5ec2ceb4d71bad86dfc414449bf |
|     name    |             service              |
+-------------+----------------------------------+
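
To capture the id directly into a shell variable (a sketch; the awk pattern assumes the table layout shown above):

$ SERVICE_TENANT_ID=$(keystone tenant-get service | awk '/ id / {print $4}')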

Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:

Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

Add the following keys to the [ml2] section:

[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

Add the following key to the [ml2_type_gre] section:

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

Add the [securitygroup] section and the following keys to it:

[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Configure Compute to use Networking

By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.

Edit the /etc/nova/nova.conf and add the following keys to the [DEFAULT] section:

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

Note: By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

Finalize installation

Restart the Compute services:

# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart

Restart the Networking service:

# service neutron-server restart

Network node setup

Before configuring the Networking agents on the network node, enable the required kernel networking parameters. Edit /etc/sysctl.conf to contain the following:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Implement the changes:

# sysctl -p

To install the Networking components

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
  neutron-l3-agent neutron-dhcp-agent

To configure the Layer-3 (L3) agent

The Layer-3 (L3) agent provides routing services for instance virtual networks.

Edit the /etc/neutron/l3_agent.ini file and add the following keys to the [DEFAULT] section:

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.

To configure the DHCP agent

The DHCP agent provides DHCP services for instance virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and add the following keys to the [DEFAULT] section:

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.

To configure the metadata agent

The metadata agent provides configuration information such as credentials for remote access to instances.

Edit the /etc/neutron/metadata_agent.ini file and add the following keys to the [DEFAULT] section:

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace METADATA_SECRET with a suitable secret for the metadata proxy.

[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

On the controller node, edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section to enable the metadata proxy:

Replace METADATA_SECRET with the secret you chose for the metadata proxy.

[DEFAULT]
...
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET

On the controller node, restart the Compute API service:

# service nova-api restart

To configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.

Add the [ovs] section and the following keys to it:

Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node.

[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True

To configure the Open vSwitch (OVS) service

The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.

Restart the OVS service:

# service openvswitch-switch restart

Add the integration bridge:

# ovs-vsctl add-br br-int

Add the external bridge:

# ovs-vsctl add-br br-ex

Add a port to the external bridge that connects to the physical external network interface:

Replace INTERFACE_NAME with the actual interface name; in our case, eth0.

# ovs-vsctl add-port br-ex INTERFACE_NAME
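
You can confirm the resulting bridge and port layout:

# ovs-vsctl show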

Note: Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.

To temporarily disable GRO on the external network interface while testing your environment:

# ethtool -K INTERFACE_NAME gro off

To finalize the installation

Restart the Networking services:

# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
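
Finally, with the admin credentials sourced on the controller, verify that all agents registered with the Networking service:

$ source admin-openrc.sh
$ neutron agent-list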