Configuring OpenStack Victoria with the Zun Container Service on Ubuntu 20.04

I had wanted to try Zun for a long time, but the installation kept failing. After seven or eight failed attempts, this one finally worked and I got a Victoria-release OpenStack up and running. Everything below is based on the official documentation plus my own experience. Some parts involve a lot of configuration and are only briefly noted, but anyone who has installed OpenStack before should be able to follow along. If you build this inside VMware virtual machines, it is best to enable the virtualization-related options for the VMs.

Preparation

1. Configure the network interfaces

Ubuntu 20.04 Server uses netplan for network configuration, so the procedure differs from older releases.

Check the YAML file under /etc/netplan/, for example 00-installer-config.yaml:

# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: true
    ens34:
      dhcp4: false
  version: 2

The first NIC is the management interface. Here it obtained its address via DHCP; it is better to give it a static IP so the address cannot change (see the netplan documentation for the exact syntax, and mind the YAML indentation). The second NIC is the provider interface and does not need an IP address, so simply disable DHCP on it.
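
As a sketch of what a static management address could look like (the interface names, addresses, gateway and DNS server below are placeholders for my lab; adjust them to your environment and keep the YAML indentation intact):

# /etc/netplan/00-installer-config.yaml -- example only
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.216.135/24]
      gateway4: 192.168.216.2
      nameservers:
        addresses: [192.168.216.2]
    ens34:
      dhcp4: false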

After finishing the changes, run sudo netplan apply to apply the new configuration.

2. Configure name resolution

Add entries like the following to /etc/hosts to map each node's hostname to its IP address:

192.168.216.135	controller
192.168.216.134 compute1

3. Configure NTP

Controller node

apt install chrony

vim /etc/chrony/chrony.conf

Add an upstream NTP server (optional; replace NTP_SERVER) and allow the other nodes to reach this one (replace 10.0.0.0/24 with your management subnet):

server NTP_SERVER iburst
allow 10.0.0.0/24

Restart the service:

service chrony restart

Other nodes

apt install chrony

vim /etc/chrony/chrony.conf

Remove all other server/pool entries and add controller as the only NTP server:

server controller iburst

Restart the service:

service chrony restart

Verify operation

chronyc sources

On the controller node you should see the upstream NTP servers, while the other nodes should only see controller. In both cases exactly one source must be marked with *; if there is none, check the chrony configuration and the firewall settings.
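
As an extra, optional check, chronyc can also report whether the local clock is actually synchronising; the Reference ID should point at the expected server:

chronyc tracking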

4. Configure the OpenStack packages

OpenStack releases move quickly, so on Ubuntu 20.04 an older release such as Victoria has to be installed from the Ubuntu Cloud Archive.

Enable the archive on all nodes:

add-apt-repository cloud-archive:victoria

As a test, install the client (it can be installed on every node):

apt install python3-openstackclient

5. SQL database

All of the following steps are done on the controller node only:

apt install mariadb-server python3-pymysql

Create and edit the configuration file:

touch /etc/mysql/mariadb.conf.d/99-openstack.cnf
vim /etc/mysql/mariadb.conf.d/99-openstack.cnf

Set bind-address to the controller's management interface IP address:

[mysqld]
bind-address = 10.0.0.11

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the service:

service mysql restart

Optional:

mysql_secure_installation

6. Message queue

Configure this on the controller node only.

apt install rabbitmq-server

# needs root or sudo; replace 123456 with your own password
rabbitmqctl add_user openstack 123456

# needs root or sudo
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
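
As a quick sanity check (not in the official guide), list the user and its permissions to confirm the two commands above took effect:

rabbitmqctl list_users
rabbitmqctl list_permissions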

7. Memcached

Configure this on the controller node only.

apt install memcached python3-memcache

Change the listen addresses:

vim /etc/memcached.conf

# replace 192.168.216.135 with your management interface IP address
-l 127.0.0.1,192.168.216.135,controller

Restart the service:

service memcached restart
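
To confirm the new listen addresses took effect, port 11211 should now be bound on the loopback and management addresses; a simple check with ss:

ss -tlnp | grep 11211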

8. Etcd

Configure this on the controller node only.

apt install etcd

Edit the configuration file, replacing the addresses with the controller's management IP address:

vim /etc/default/etcd

ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://192.168.216.135:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.216.135:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.135:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.216.135:2379"

Enable and restart the service:

systemctl enable etcd
systemctl restart etcd
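
A minimal health check, assuming the example controller address used above (replace it with your own management IP):

ETCDCTL_API=3 etcdctl --endpoints=http://192.168.216.135:2379 endpoint health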

Identity Service

Configure this on the controller node only.

mysql

MariaDB [(none)]>

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

exit
# replace 123456 with your own password
apt install keystone
vim /etc/keystone/keystone.conf

[database]
# replace 123456; remove any other connection options in this section
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# replace 123456
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
vim /etc/apache2/apache2.conf

# configure ServerName; add the line if it does not exist
ServerName controller
service apache2 restart
touch admin-openrc
vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
. admin-openrc
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
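
With admin-openrc still sourced, a quick way to verify that the Identity service answers requests is to ask for a token; a table with an expiration time, token ID, project ID and user ID indicates success:

openstack token issue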

Image Service

Configure this on the controller node only.

mysql

MariaDB [(none)]>

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';

exit
# replace 123456 with your own password
. admin-openrc

# enter the password for the glance user when prompted
openstack user create --domain default --password-prompt glance

openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
apt install glance

vim /etc/glance/glance-api.conf
[database]
# replace 123456; remove any other connection options in this section
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
su -s /bin/sh -c "glance-manage db_sync" glance

service glance-api restart

Verify operation:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public

glance image-list

Placement Service

Configure this on the controller node only.

mysql

MariaDB [(none)]>

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';

exit
# replace 123456 with your own password
. admin-openrc

# enter the password for the placement user when prompted
openstack user create --domain default --password-prompt placement

openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
apt install placement-api

vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:123456@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123456
su -s /bin/sh -c "placement-manage db sync" placement

service apache2 restart

Verify operation:

. admin-openrc
placement-status upgrade check

Compute Service

controller node

mysql

MariaDB [(none)]>

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

exit
# replace 123456 with your own password
. admin-openrc
openstack user create --domain default --password-prompt nova

openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

apt install nova-api nova-conductor nova-novncproxy nova-scheduler

vim /etc/nova/nova.conf
[DEFAULT]
# remove the log_dir option from the [DEFAULT] section; set my_ip to the controller's management IP address and replace 123456
transport_url = rabbit://openstack:123456@controller:5672/
my_ip = 10.0.0.11

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

compute node

apt install nova-compute

vim /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
# replace controller with the controller's management IP if the hostname cannot be resolved by the user's web browser

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

Optional:

If you are deploying OpenStack inside a VM, or your compute node does not support hardware acceleration, you should perform the following steps.

egrep -c '(vmx|svm)' /proc/cpuinfo

If the result above is zero, the node does not support hardware acceleration and libvirt must be switched to QEMU:

vim /etc/nova/nova-compute.conf

[libvirt]
# ...
virt_type = qemu

Restart the service:

service nova-compute restart

Back to the controller node:

. admin-openrc

openstack compute service list --service nova-compute

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
#Discover compute hosts

Verify operation

. admin-openrc
openstack compute service list
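
The install guide's verification also includes a check of the cells and the Placement API; each check should report Success:

nova-status upgrade check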

Networking Service

Make sure you have completed the network-interface preparation above. The following steps deploy only the Linux bridge mechanism with the VXLAN (self-service) network type. If you plan to deploy a dedicated network node, these steps may not suit your topology.

controller node

mysql

MariaDB [(none)]>

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

exit
. admin-openrc

openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Configure Self-service networks:

apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

service nova-api restart
service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

compute node

apt install neutron-linuxbridge-agent

vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

#remove any connection options under [database]
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service nova-compute restart

service neutron-linuxbridge-agent restart
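
With both nodes configured, the agents can be verified from the controller; the Linux bridge agents on both hosts plus the DHCP, metadata and L3 agents should all report as alive:

. admin-openrc
openstack network agent list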

Dashboard

Only on the controller node:

apt install openstack-dashboard

vim /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "controller"

# add the following line; do not edit the ALLOWED_HOSTS parameter under the Ubuntu configuration section
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "Asia/Shanghai"

Personal change:

DEFAULT_THEME = 'default'

Add the following line to /etc/apache2/conf-available/openstack-dashboard.conf if not included:

WSGIApplicationGroup %{GLOBAL}
systemctl reload apache2.service

systemctl restart apache2.service

Verify operation

Access the dashboard using a web browser at http://controller/horizon.

Zun

controller node

mysql

MariaDB [(none)]>

CREATE DATABASE zun;
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY '123456';

exit
. admin-openrc

openstack user create --domain default --password-prompt zun

openstack role add --project service --user zun admin
openstack service create --name zun --description "Container Service" container
openstack endpoint create --region RegionOne container public http://controller:9517/v1
openstack endpoint create --region RegionOne container internal http://controller:9517/v1
openstack endpoint create --region RegionOne container admin http://controller:9517/v1
groupadd --system zun

useradd --home-dir "/var/lib/zun" --create-home --system --shell /bin/false -g zun zun

mkdir -p /etc/zun
chown zun:zun /etc/zun
apt-get install python3-pip git
cd /var/lib/zun
git clone -b stable/victoria https://opendev.org/openstack/zun.git
chown -R zun:zun zun
cd zun
pip3 install -r requirements.txt
python3 setup.py install

# pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# git log
# git config --global --add safe.directory /var/lib/zun/zun
su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
su -s /bin/sh -c "cp etc/zun/api-paste.ini /etc/zun" zun

vim /etc/zun/zun.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller

[api]
host_ip = 10.0.0.11
port = 9517

[database]
connection = mysql+pymysql://zun:123456@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[oslo_messaging_notifications]
driver = messaging

[websocket_proxy]
wsproxy_host = 10.0.0.11
wsproxy_port = 6784
base_url = ws://controller:6784/
# better to replace controller with the controller's management IP address (and set host_ip / wsproxy_host above to that same address)
su -s /bin/sh -c "zun-db-manage upgrade" zun

touch /etc/systemd/system/zun-api.service
vim /etc/systemd/system/zun-api.service
[Unit]
Description = OpenStack Container Service API

[Service]
ExecStart = /usr/local/bin/zun-api
User = zun

[Install]
WantedBy = multi-user.target
touch /etc/systemd/system/zun-wsproxy.service
vim /etc/systemd/system/zun-wsproxy.service
[Unit]
Description = OpenStack Container Service Websocket Proxy

[Service]
ExecStart = /usr/local/bin/zun-wsproxy
User = zun

[Install]
WantedBy = multi-user.target
systemctl enable zun-api
systemctl enable zun-wsproxy

systemctl start zun-api
systemctl start zun-wsproxy

systemctl status zun-api
systemctl status zun-wsproxy
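
If both units report active, one more quick check is that zun-api and the websocket proxy are actually listening on the ports configured above (9517 and 6784):

ss -tlnp | grep -E '9517|6784'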

compute node

Before you install and configure Zun, you must have Docker and kuryr-libnetwork installed properly on the compute node, and etcd installed properly on the controller node.

1. Docker

# Add Docker's official GPG key:
apt update
apt install ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
# http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg
chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
# List the available versions:
apt-cache madison docker-ce | awk '{ print $3 }'

# example output from a 22.04 host; on Ubuntu 20.04 the versions end in ~ubuntu.20.04~focal instead
5:24.0.0-1~ubuntu.22.04~jammy
5:23.0.6-1~ubuntu.22.04~jammy
...
# pick a version string from the madison output (use a ~ubuntu.20.04~focal build on Ubuntu 20.04)
VERSION_STRING=5:24.0.0-1~ubuntu.22.04~jammy
apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin

# apt-get install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal containerd.io docker-buildx-plugin docker-compose-plugin
# A lower version is better here: newer Docker releases have dropped the --cluster-store option used later in this guide and may fail to run with this setup.

2. kuryr-libnetwork

First go back to the controller node:

. admin-openrc
openstack user create --domain default --password-prompt kuryr

openstack role add --project service --user kuryr admin

Then return to the compute node:

apt-get install python3-pip git
groupadd --system kuryr
useradd --home-dir "/var/lib/kuryr" --create-home --system --shell /bin/false -g kuryr kuryr

mkdir -p /etc/kuryr
chown kuryr:kuryr /etc/kuryr
cd /var/lib/kuryr
git clone -b stable/victoria https://opendev.org/openstack/kuryr-libnetwork.git
chown -R kuryr:kuryr kuryr-libnetwork
cd kuryr-libnetwork
pip3 install -r requirements.txt
python3 setup.py install

# pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# git log
# git config --global --add safe.directory /var/lib/kuryr/kuryr-libnetwork
su -s /bin/sh -c "./tools/generate_config_file_samples.sh" kuryr
su -s /bin/sh -c "cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf" kuryr

vim /etc/kuryr/kuryr.conf
[DEFAULT]
bindir = /usr/local/libexec/kuryr

[neutron]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
username = kuryr
user_domain_name = default
password = 123456
project_name = service
project_domain_name = default
auth_type = password
touch /etc/systemd/system/kuryr-libnetwork.service
vim /etc/systemd/system/kuryr-libnetwork.service
[Unit]
Description = Kuryr-libnetwork - Docker network plugin for Neutron

[Service]
ExecStart = /usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf
CapabilityBoundingSet = CAP_NET_ADMIN

[Install]
WantedBy = multi-user.target
systemctl enable kuryr-libnetwork
systemctl start kuryr-libnetwork

systemctl restart docker

3. Zun

groupadd --system zun
useradd --home-dir "/var/lib/zun" --create-home --system --shell /bin/false -g zun zun
mkdir -p /etc/zun
chown zun:zun /etc/zun
mkdir -p /etc/cni/net.d
chown zun:zun /etc/cni/net.d
apt-get install python3-pip git numactl
cd /var/lib/zun
git clone -b stable/victoria https://opendev.org/openstack/zun.git
chown -R zun:zun zun
cd zun/
pip3 install -r requirements.txt
python3 setup.py install

# pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# git log
# git config --global --add safe.directory /var/lib/zun/zun
su -s /bin/sh -c "oslo-config-generator --config-file etc/zun/zun-config-generator.conf" zun
su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
su -s /bin/sh -c "cp etc/zun/rootwrap.conf /etc/zun/rootwrap.conf" zun
su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
su -s /bin/sh -c "cp etc/zun/rootwrap.d/* /etc/zun/rootwrap.d/" zun
su -s /bin/sh -c "cp etc/cni/net.d/* /etc/cni/net.d/" zun
echo "zun ALL=(root) NOPASSWD: /usr/local/bin/zun-rootwrap /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap
vim /etc/zun/zun.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
state_path = /var/lib/zun

[database]
connection = mysql+pymysql://zun:123456@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri= http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = 123456
username = zun
auth_url = http://controller:5000
auth_type = password

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

(Optional) If you want to run both containers and Nova instances on this compute node, set host_shared_with_nova in the [compute] section:

[compute]
host_shared_with_nova = true
# chown zun:zun /etc/zun/zun.conf

mkdir -p /etc/systemd/system/docker.service.d
touch /etc/systemd/system/docker.service.d/docker.conf
vim /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379

# better to replace compute1 and controller with IP addresses, especially compute1
systemctl daemon-reload
systemctl restart docker
vim /etc/kuryr/kuryr.conf
[DEFAULT]
...
capability_scope = global
process_external_connectivity = False
systemctl restart kuryr-libnetwork
containerd config default > /etc/containerd/config.toml

Get the zun group ID:

getent group zun | cut -d: -f3

Replace gid with the zun group ID obtained above:

vim /etc/containerd/config.toml

[grpc]
...
gid = ZUN_GROUP_ID

# chown zun:zun /etc/containerd/config.toml

systemctl restart containerd
mkdir -p /opt/cni/bin
cd ~
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz \
| tar -C /opt/cni/bin -xzvf - ./loopback
install -o zun -m 0555 -D /usr/local/bin/zun-cni /opt/cni/bin/zun-cni
touch /etc/systemd/system/zun-compute.service
vim /etc/systemd/system/zun-compute.service
[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target
touch /etc/systemd/system/zun-cni-daemon.service
vim /etc/systemd/system/zun-cni-daemon.service
[Unit]
Description = OpenStack Container Service CNI daemon

[Service]
ExecStart = /usr/local/bin/zun-cni-daemon
User = zun

[Install]
WantedBy = multi-user.target

zun-compute has a protobuf version requirement; I ran into the following error:

2024-03-15 05:55:19.487 13441 CRITICAL zun [-] Unhandled error: TypeError: Descriptors cannot be created directly.
Mar 15 05:55:19 compute1 zun-compute[13441]: If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
Mar 15 05:55:19 compute1 zun-compute[13441]: If you cannot immediately regenerate your protos, some other possible workarounds are:
Mar 15 05:55:19 compute1 zun-compute[13441]: 1. Downgrade the protobuf package to 3.20.x or lower.
Mar 15 05:55:19 compute1 zun-compute[13441]: 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
Mar 15 05:55:19 compute1 zun-compute[13441]: More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

I chose to downgrade protobuf to 3.20.3:

pip3 uninstall protobuf
pip3 install -v protobuf==3.20.3

#pip3 install -v protobuf==3.20.3 -i https://pypi.tuna.tsinghua.edu.cn/simple
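
To confirm that the interpreter now picks up the downgraded package, a quick check:

python3 -c "import google.protobuf; print(google.protobuf.__version__)"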

Finally:

systemctl enable zun-compute
systemctl start zun-compute

systemctl enable zun-cni-daemon
systemctl start zun-cni-daemon

systemctl status zun-compute
systemctl status zun-cni-daemon

Others

The following package needs to be installed on any node that will use the Zun CLI:

pip3 install python-zunclient

# test
# . admin-openrc
# openstack appcontainer service list

Zun-ui

git clone -b stable/victoria https://github.com/openstack/zun-ui
# git clone -b stable/victoria https://opendev.org/openstack/zun-ui.git
cd zun-ui/
pip3 install .
# pip3 install . -i https://pypi.tuna.tsinghua.edu.cn/simple
cp zun_ui/enabled/* /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/
python3 /usr/share/openstack-dashboard/manage.py collectstatic

This step is listed as optional, but in my test the UI did not work until I ran it:

python3 /usr/share/openstack-dashboard/manage.py compress
systemctl reload apache2.service
systemctl restart apache2.service

Others

Use an image from Glance

Zun pulls images from Docker Hub by default. Here is how to use an image from Glance instead.

docker pull cirros
docker save cirros | openstack image create cirros --public --container-format docker --disk-format raw
zun run --image-driver glance cirros ping -c 4 8.8.8.8

Run a container in privileged mode

Zun does not allow creating privileged containers by default. To enable this feature we need to create an oslo.policy file. To quote the OpenStack documentation: "Each OpenStack service, Identity, Compute, Networking, and so on, has its own role-based access policies. They determine which user can access which objects in which way, and are defined in the service's policy.yaml file."

On the controller node:

mkdir /etc/zun/policy.d
touch /etc/zun/policy.d/privileged.yaml
vim /etc/zun/policy.d/privileged.yaml

Add the following line:

"container:create:privileged" : "rule:context_is_admin"
# "container:create:privileged" : ""

This allows only an admin to create privileged containers. Setting the rule to an empty string (the commented-out alternative) would let everyone create privileged containers, which is quite dangerous.

Edit the zun.conf:

vim /etc/zun/zun.conf
[oslo_policy]
policy_dirs = /etc/zun/policy.d/

Restart service:

systemctl restart zun-api.service

Currently it is not possible to create a privileged container from the Zun UI; only the Zun CLI can create and start one. To use the CLI, make sure the python-zunclient package is installed.

For example:

zun create --name test --interactive cirros --privileged
zun start test
or
zun run --name test --interactive --image-driver glance --net network=test-net1,v4-fixed-ip=10.1.1.121 --hostname r1 --privileged cirros_docker /bin/sh

To stop and delete the container:

zun stop test
zun delete test

Here is the help output of zun run:

usage: zun run [-n <name>] [--cpu <cpu>] [-m <memory>] [-e <KEY=VALUE>] [--workdir <workdir>] [--label <KEY=VALUE>]
[--image-pull-policy <policy>] [-i] [--image-driver <image_driver>] [--hint <key=value>]
[--net <network=network, port=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr>] [--mount <mount>]
[--runtime <runtime>] [--hostname <hostname>] [--disk <disk>] [--availability-zone <availability_zone>]
[--auto-heal] [--privileged] [--healthcheck <cmd=command,interval=time,retries=integer,timeout=time>]
[--registry <registry>] [--host <host>] [--entrypoint <entrypoint>]
[--security-group <security-group> | -p <port>] [--auto-remove | --restart <restart>]
<image> ...

Run a command in a new container.

Positional arguments:
<image> name or ID of the image
<command> Send command to the container

Optional arguments:
-n <name>, --name <name>
name of the container
--cpu <cpu> The number of virtual cpus.
-m <memory>, --memory <memory>
The container memory size in MiB
-e <KEY=VALUE>, --environment <KEY=VALUE>
The environment variables
--workdir <workdir> The working directory for commands to run in
--label <KEY=VALUE> Adds a map of labels to a container. May be used multiple times.
--image-pull-policy <policy>
The policy which determines if the image should be pulled prior to starting the container. It
can have following values: "ifnotpresent": only pull the image if it does not already exist on
the node. "always": Always pull the image from repository."never": never pull the image
-i, --interactive Keep STDIN open even if not attached, allocate a pseudo-TTY
--image-driver <image_driver>
The image driver to use to pull container image. It can have following values: "docker": pull
the image from Docker Hub. "glance": pull the image from Glance.
--hint <key=value> The key-value pair(s) for scheduler to select host. The format of this parameter is
"key=value[,key=value]". May be used multiple times.
--net <network=network, port=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr>
Create network enpoints for the container. network: attach container to the specified neutron
networks. port: attach container to the neutron port with this UUID. v4-fixed-ip: IPv4 fixed
address for container. v6-fixed-ip: IPv6 fixed address for container.
--mount <mount> A dictionary to configure volumes mounted inside the container.
--runtime <runtime> The runtime to use for this container. It can have value "runc" or any other custom runtime.
--hostname <hostname>
Container hostname
--disk <disk> The disk size in GiB for per container.
--availability-zone <availability_zone>
The availability zone of the container.
--auto-heal The flag of healing non-existent container in docker.
--privileged Give extended privileges to this container
--healthcheck <cmd=command,interval=time,retries=integer,timeout=time>
Specify a test cmd to perform to check that the containeris healthy. cmd: Command to run to
check health. interval: Time between running the check (s|m|h) (default 0s). retries:
Consecutive failures needed to report unhealthy. timeout: Maximum time to allow one check to
run (s|m|h) (default 0s).
--registry <registry>
The container image registry ID or name
--host <host> Requested host to run containers. Admin only by default.(Supported by API versions 1.39 or
above)
--entrypoint <entrypoint>
The entrypoint which overwrites the default ENTRYPOINT of the image. (Supported by API
versions 1.40 or above)
--security-group <security-group>
The name of security group for the container. May be used multiple times.
-p <port>, --expose-port <port>
Expose container port(s) to outside (format: <port>[/<protocol>])
--auto-remove Automatically remove the container when it exits
--restart <restart> Restart policy to apply when a container exits(no, on-failure[:max-retry], always, unless-
stopped)

Attachments

When I tried even older OpenStack releases, I found that some of the Zun code no longer has a stable branch and can only be found under tags, and those attempts never installed successfully. So here are the upstream source archives of the versions I did manage to deploy.

zun-stable-victoria.tar.gz

zun-ui-stable-victoria.tar.gz

kuryr-libnetwork-stable-victoria.tar.gz