OpenStack TripleO Installation with Ceph and HA
Create 10 VMs, each with 2 NICs:
- 1 Director node: 2 CPUs, 8 GB RAM, 40 GB disk
- 3 Controller nodes: 4 CPUs, 32 GB RAM, 100 GB disk
- 3 Compute nodes: 4 CPUs, 16 GB RAM, 100 GB disk
- 3 Ceph nodes: 4 CPUs, 16 GB RAM, one 60 GB disk and one 100 GB disk
Network Layout:
Contents
- 1 Undercloud Installation
- 1.1 Create SSL certificate
- 1.2 Download images
- 2 Register Nodes
- 3 Introspect Nodes
- 4 Tagging Nodes
- 5 Defining the Root Disk for Ceph Storage Nodes
- 6 Enabling Ceph Storage in the Overcloud
- 7 Formatting Ceph Storage Node Disks to GPT
- 8 Deploy Nodes
- 9 Sources
Undercloud Installation
Enable the TripleO repositories (Newton, with the Ceph repo):

sudo yum install -y https://trunk.rdoproject.org/centos7/current/python2-tripleo-repos-0.0.1-0.20180418175107.ef4e12e.el7.centos.noarch.rpm
sudo -E tripleo-repos -b newton current ceph
Then install the TripleO client and ceph-ansible, and copy the sample undercloud configuration:

sudo yum install -y python-tripleoclient
sudo yum install -y ceph-ansible
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
Edit ~/undercloud.conf; the relevant settings for this lab are:

[DEFAULT]
local_ip = 192.168.126.1/24
network_gateway = 192.168.126.1
undercloud_public_vip = 172.17.31.51   ## change to 192.168.126.2 if you access from the same network 192.168.126.0/24
undercloud_admin_vip = 192.168.126.3
undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem
local_interface = eth1
network_cidr = 192.168.126.0/24
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
inspection_iprange = 192.168.126.160,192.168.126.199
enable_ui = true
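Before running the install, it is worth a quick sanity check that the interface named in local_interface actually exists on the director VM (eth1 in this example):

ip link show eth1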
Create SSL certificate
To create the SSL certificate for the undercloud, follow this link:
Install the undercloud:
openstack undercloud install
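When the install completes it writes admin credentials to ~/stackrc (the default location). A minimal check that the undercloud services came up:

source ~/stackrc
openstack catalog list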
Download images
[stack@undercloud ~]$ sudo wget https://images.rdoproject.org/newton/delorean/current-tripleo/overcloud-full.tar --no-check-certificate
[stack@undercloud ~]$ sudo wget https://images.rdoproject.org/newton/delorean/current-tripleo/ironic-python-agent.tar --no-check-certificate
[stack@undercloud ~]$ mkdir ~/images
[stack@undercloud ~]$ tar -xpvf ironic-python-agent.tar -C ~/images/
[stack@undercloud ~]$ tar -xpvf overcloud-full.tar -C ~/images/
[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack overcloud image upload --image-path ~/images/
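To confirm the upload worked, list the images in Glance; you should typically see the deploy kernel/ramdisk and the overcloud-full images:

openstack image list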
Register Nodes
Create instackenv.json; below is an example:
{ "nodes": [ { "name": "comp-0", "mac": [ "00:0c:29:8e:30:23" ], "pm_type": "fake_pxe" }, { "name": "ctrl-0", "mac": [ "00:0c:29:7d:80:3d" ], "pm_type": "fake_pxe" }, { "name": "ceph-0", "mac": [ "00:0c:29:7d:80:3e" ], "pm_type": "fake_pxe" } ] }
Edit the /etc/ironic/ironic.conf file and add fake_pxe to the enabled_drivers option to enable this driver. Restart the baremetal services after editing the file:
$ sudo systemctl restart openstack-ironic-api openstack-ironic-conductor
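To confirm the change took effect, list the enabled drivers and look for fake_pxe (using the same ironic CLI used later in this guide):

ironic driver-list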
Then register the nodes to the undercloud:
openstack overcloud node import instackenv.json
Introspect Nodes
To introspect the nodes and make them available, run:
openstack overcloud node introspect --all-manageable --provide
After running the above command, immediately start the VMs so they boot from the network. Wait until the baremetal power state shows "power off", then restart the VMs. Check the node status again, and shut down the VMs once the provisioning state changes to "available" and the power state is "power off".
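A convenient way to follow the power and provisioning states during introspection is to poll the node list from another shell, for example:

source ~/stackrc
watch -n 10 openstack baremetal node list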
Tagging Nodes
Assign a profile to each node; below is an example:
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| 04dd0d86-b35b-432c-8cff-63be658fecbe | ctrl-0     | None          | power off   | available          | False       |
| d0ec8c83-afbf-46c4-8599-33542851c84a | comp-0     | None          | power off   | available          | False       |
| d283732e-1c83-4914-b724-0d4915aef8f1 | ceph-0     | None          | power off   | available          | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+

openstack baremetal node set --property capabilities='profile:compute,boot_option:local' d0ec8c83-afbf-46c4-8599-33542851c84a
openstack baremetal node set --property capabilities='profile:control,boot_option:local' 04dd0d86-b35b-432c-8cff-63be658fecbe
openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' d283732e-1c83-4914-b724-0d4915aef8f1
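To verify the tagging, the profile-to-node mapping can be listed (the command below is provided by tripleoclient in this release, as far as I know):

openstack overcloud profiles list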
Defining the Root Disk for Ceph Storage Nodes
First, collect a copy of each node’s hardware information that the director obtained from the introspection. This information is stored in the OpenStack Object Storage server (swift). Download this information to a new directory:
[undercloud ~]$ mkdir swift-data
[undercloud ~]$ cd swift-data
[undercloud ~]$ export SWIFT_PASSWORD=`sudo crudini --get /etc/ironic-inspector/inspector.conf swift password`
[undercloud ~]$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do swift -U service:ironic -K $SWIFT_PASSWORD download ironic-inspector inspector_data-$node; done
Check the disk information for each node. The following command displays each node ID and the disk information:
[undercloud ~]$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do echo "NODE: $node" ; cat inspector_data-$node | jq '.inventory.disks' ; echo "-----" ; done
Take note of the serial number of disk 1 from the output above, for example "61866da04f37fc001ea4e31e121cfb45"; we need to set disk 1 as the root disk. Run the following command to do so:
ironic node-update 15fc0edc-eb8d-4c7f-8dc0-a2a25d5e09e3 add properties/root_device='{"serial": "61866da04f37fc001ea4e31e121cfb45"}'
15fc0edc-eb8d-4c7f-8dc0-a2a25d5e09e3 is a node ID.
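If it is not obvious which serial belongs to which disk, a short jq filter over the downloaded introspection data prints only the name, size and serial of every disk per node (same loop pattern as above, shown here as a sketch):

for node in $(ironic node-list | grep -v UUID | awk '{print $2}'); do
  echo "NODE: $node"
  jq '.inventory.disks[] | {name, size, serial}' inspector_data-$node
  echo "-----"
done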
Enabling Ceph Storage in the Overcloud
Copy the sample YAML config to /home/stack/templates:
cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
Edit the content of storage-environment.yaml like this:
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-client.yaml
  # OS::TripleO::Services::CephRgw: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-rgw.yaml
  # OS::TripleO::Services::SwiftProxy: OS::Heat::None
  # OS::TripleO::Services::SwiftStorage: OS::Heat::None
  # OS::TripleO::Services::SwiftRingBuilder: OS::Heat::None
  # OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/puppet/services/pacemaker/cinder-backup.yaml
  OS::TripleO::NodeUserData: /home/stack/templates/wipe-disks.yaml

parameter_defaults:

  #### BACKEND SELECTION ####

  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  #CinderBackupBackend: swift
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd

  #### CINDER NFS SETTINGS ####

  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''

  #### GLANCE NFS SETTINGS ####

  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'

  #### CEPH SETTINGS ####

  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.

  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb': {}
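Indentation mistakes in this file only surface at deploy time, so a quick YAML syntax check is worthwhile (a minimal sketch, assuming PyYAML is installed on the undercloud, which it normally is):

python -c "import yaml; yaml.safe_load(open('/home/stack/templates/storage-environment.yaml'))"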
Formatting Ceph Storage Node Disks to GPT
The Ceph Storage OSDs and journal partitions require GPT disk labels. We need to convert the disks to GPT before installing the Ceph OSDs. Create ~/templates/wipe-disks.yaml with this content:
heat_template_version: 2014-10-16

description: >
  Wipe and convert all disks to GPT (except the disk containing the root file system)

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: wipe_disk}

  wipe_disk:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: wipe-disk.sh}

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
and ~/templates/wipe-disk.sh (the file name must match the get_file reference above):
#!/bin/bash
if [[ `hostname` = *"ceph"* ]]
then
  echo "Number of disks detected: $(lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}' | wc -l)"
  for DEVICE in `lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}'`
  do
    ROOTFOUND=0
    echo "Checking /dev/$DEVICE..."
    echo "Number of partitions on /dev/$DEVICE: $(expr $(lsblk -n /dev/$DEVICE | awk '{print $7}' | wc -l) - 1)"
    for MOUNTS in `lsblk -n /dev/$DEVICE | awk '{print $7}'`
    do
      if [ "$MOUNTS" = "/" ]
      then
        ROOTFOUND=1
      fi
    done
    if [ $ROOTFOUND = 0 ]
    then
      echo "Root not found in /dev/${DEVICE}"
      echo "Wiping disk /dev/${DEVICE}"
      sgdisk -Z /dev/${DEVICE}
      sgdisk -g /dev/${DEVICE}
    else
      echo "Root found in /dev/${DEVICE}"
    fi
  done
fi
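Once the overcloud is up, you can confirm on a Ceph node that the data disk ended up with a GPT label, for example with sgdisk's print option:

sudo sgdisk -p /dev/sdb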
Deploy Nodes
Define some parameters for the deployment environment. First, create the file ~/templates/environment.yaml:
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3
  CephStorageCount: 3
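These counts must be matched by nodes tagged with the corresponding profiles; the undercloud install normally creates matching flavors (control, compute, ceph-storage), which can be checked with:

openstack flavor list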
Now, run the following command to install overcloud:
openstack overcloud deploy --templates --libvirt-type qemu \
  --ntp-server 0.id.pool.ntp.org \
  -e /home/stack/templates/environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /home/stack/templates/storage-environment.yaml \
  --log-file overcloud_deployment.log
We need the --libvirt-type qemu option because the nodes run as VMs under VMware/oVirt. Add the --dry-run option to validate the environment files without deploying.
Wait a couple of minutes until the following command shows output like the one below, then start the VMs.
[stack@lab-dir ~]$ openstack baremetal node list
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
| 04dd0d86-b35b-432c-8cff-63be658fecbe | ctrl-0     | 64c64fe0-7919-4475-b3e4-02ac11a085ba | power on    | wait call-back     | False       |
| d0ec8c83-afbf-46c4-8599-33542851c84a | comp-0     | 512553bb-5c6b-4bcc-b146-cae8b13d00fb | power on    | wait call-back     | False       |
+--------------------------------------+------------+--------------------------------------+-------------+--------------------+-------------+
During deployment, the VMs will be powered off at some point. Start them again immediately when that happens.
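Deployment progress can also be followed from another shell on the director; the Heat stack and its resources show what is currently being created (a minimal sketch):

source ~/stackrc
openstack stack list
openstack stack resource list overcloud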
Sources
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/red_hat_ceph_storage_for_the_overcloud/creation#Configuring_Ceph_Storage_Cluster_Settings
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-performing_tasks_after_overcloud_creation#sect-Fencing_the_Controller_Nodes