CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
exit # replace 123456 with your own password
apt install keystone
vim /etc/keystone/keystone.conf
[database]
# replace 123456 with your password; remove any other connection option
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone
. admin-openrc
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
Image Service
Only the controller node needs to be configured.
mysql
MariaDB [(none)]>
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
exit # replace 123456 with your own password
. admin-openrc
# enter the password when prompted
openstack user create --domain default --password-prompt glance
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';
exit # replace 123456 with your own password
. admin-openrc
# enter the password when prompted
openstack user create --domain default --password-prompt placement
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
exit # replace 123456 with your own password
. admin-openrc
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
[DEFAULT]
# remove the log_dir option from this section; replace my_ip with your management IP address
transport_url = rabbit://openstack:123456@controller:5672/
my_ip = 10.0.0.11
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
If you are deploying OpenStack on a VM, or your compute node does not support hardware acceleration, perform the following steps.
egrep -c '(vmx|svm)' /proc/cpuinfo
If the result above is zero:
vim /etc/nova/nova-compute.conf
[libvirt]
# ...
virt_type = qemu
Restart the service:
service nova-compute restart
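The acceleration check and the qemu fallback above can be combined into a single sketch (assumes a Linux host with /proc/cpuinfo, as in the check above; the echoed hints are illustrative, not commands):

```shell
# Decide virt_type from the CPU virtualization flags (vmx = Intel, svm = AMD).
if [ "$(grep -E -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
  echo "no hardware acceleration: set virt_type = qemu in /etc/nova/nova-compute.conf"
else
  echo "hardware acceleration present: the default virt_type = kvm works"
fi
```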
Back to the controller node:
. admin-openrc
openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova  # discover compute hosts
Verify operation
. admin-openrc
openstack compute service list
Networking Service
Make sure that you have completed the network interface preparation. The following steps deploy only the linuxbridge mechanism with the VXLAN network type. If you have a dedicated network node to deploy, these steps may not be suitable for you.
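For reference, the ML2 options matching a linuxbridge + VXLAN (self-service) layout usually look like the sketch below, following the upstream Neutron install layout. The file path and values (notably vni_ranges) are typical defaults, not taken from this guide; adapt them to your environment:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 1:1000
```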
controller node
mysql
MariaDB [(none)]>
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
exit
. admin-openrc
openstack user create --domain default --password-prompt neutron
service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
Dashboard
Add the following line to /etc/apache2/conf-available/openstack-dashboard.conf if it is not already included:
WSGIApplicationGroup %{GLOBAL}
systemctl reload apache2.service
systemctl restart apache2.service
Verify operation
Access the dashboard using a web browser at http://controller/horizon.
Zun
controller node
mysql
MariaDB [(none)]>
CREATE DATABASE zun;
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY '123456';
exit
. admin-openrc
openstack user create --domain default --password-prompt zun
systemctl status zun-api
systemctl status zun-wsproxy
compute node
Before you install and configure Zun, you must have Docker and Kuryr-libnetwork installed properly on the compute node, and Etcd installed properly on the controller node.
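Before continuing, you can sanity-check what is already present on the compute node. This is a sketch: `kuryr-server` is an assumed name for Kuryr-libnetwork's entry point, so adjust it to your installation:

```shell
# Report whether the prerequisite binaries are on PATH (kuryr-server is an
# assumed entry-point name; your install may differ).
for bin in docker kuryr-server; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: MISSING - install it before configuring Zun"
  fi
done
```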
apt-get install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal containerd.io docker-buildx-plugin docker-compose-plugin
# pin this older version: newer Docker releases behave differently here and fail to run
2. Kuryr-libnetwork
First go back to the controller node:
. admin-openrc
openstack user create --domain default --password-prompt kuryr
openstack role add --project service --user kuryr admin
touch /etc/systemd/system/zun-compute.service
vim /etc/systemd/system/zun-compute.service
[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target
touch /etc/systemd/system/zun-cni-daemon.service
vim /etc/systemd/system/zun-cni-daemon.service
[Unit]
Description = OpenStack Container Service CNI daemon

[Service]
ExecStart = /usr/local/bin/zun-cni-daemon
User = zun

[Install]
WantedBy = multi-user.target
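Newly written unit files are not picked up until systemd re-reads them. The guide does not show this step explicitly, but standard systemd usage would be:

```shell
systemctl daemon-reload
systemctl enable --now zun-compute.service zun-cni-daemon.service
systemctl status zun-compute.service zun-cni-daemon.service
```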
zun-compute has a version requirement on protobuf; with too new a protobuf, startup fails with:
2024-03-15 05:55:19.487 13441 CRITICAL zun [-] Unhandled error: TypeError: Descriptors cannot be created directly.
Mar 15 05:55:19 compute1 zun-compute[13441]: If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
Mar 15 05:55:19 compute1 zun-compute[13441]: If you cannot immediately regenerate your protos, some other possible workarounds are:
Mar 15 05:55:19 compute1 zun-compute[13441]: 1. Downgrade the protobuf package to 3.20.x or lower.
Mar 15 05:55:19 compute1 zun-compute[13441]: 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
Mar 15 05:55:19 compute1 zun-compute[13441]: More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
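The log itself lists the two workarounds. A sketch of both (the pinned version below is one example of "3.20.x or lower", not a value from this guide):

```shell
# Option 1: downgrade protobuf as the traceback suggests (example pin):
#   pip3 install 'protobuf==3.20.3'
# Option 2: keep the new protobuf but force the pure-Python parser (much slower):
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
echo "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=$PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"
```

If you choose option 2, persist it for the service, e.g. with `Environment=PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python` under [Service] in zun-compute.service.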
Zun pulls images from Docker Hub by default. Here is how to use an image from Glance instead:
docker pull cirros
docker save cirros | openstack image create cirros --public --container-format docker --disk-format raw
zun run --image-driver glance cirros ping -c 4 8.8.8.8
Run container under privileged mode
Zun doesn't support creating privileged containers by default. To enable this feature, we need to create an oslo.policy file. Quote: "Each OpenStack service, Identity, Compute, Networking, and so on, has its own role-based access policies. They determine which user can access which objects in which way, and are defined in the service's policy.yaml file."
On the controller node:
mkdir /etc/zun/policy.d
touch /etc/zun/policy.d/privileged.yaml
vim /etc/zun/policy.d/privileged.yaml
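The guide does not show the file's contents. A minimal example, assuming `container:create:privileged` is the policy name for this action in your Zun release (check the Zun policy reference for your version):

```yaml
# Only admin may create privileged containers; "" (empty string) would allow everyone.
"container:create:privileged": "rule:context_is_admin"
```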
This allows only the admin to create a privileged container. Setting the rule to an empty string means everyone could create containers in privileged mode, which is quite dangerous.
Edit the zun.conf:
vim /etc/zun/zun.conf
[oslo_policy]
policy_dirs = /etc/zun/policy.d/
Restart service:
systemctl restart zun-api.service
Currently, it is not possible to create a privileged container from the Zun UI; only the Zun CLI can create and start one. To use the Zun CLI, make sure the python-zunclient package is installed.
For example:
zun create --name test --interactive cirros --privileged
zun start test
# or
zun run --name test --interactive --image-driver glance --net network=test-net1,v4-fixed-ip=10.1.1.121 --hostname r1 --privileged cirros_docker /bin/sh
Positional arguments:
  <image>                 name or ID of the image
  <command>               Send command to the container

Optional arguments:
  -n <name>, --name <name>
                          name of the container
  --cpu <cpu>             The number of virtual cpus.
  -m <memory>, --memory <memory>
                          The container memory size in MiB
  -e <KEY=VALUE>, --environment <KEY=VALUE>
                          The environment variables
  --workdir <workdir>     The working directory for commands to run in
  --label <KEY=VALUE>     Adds a map of labels to a container. May be used
                          multiple times.
  --image-pull-policy <policy>
                          The policy which determines if the image should be
                          pulled prior to starting the container. It can have
                          the following values: "ifnotpresent": only pull the
                          image if it does not already exist on the node.
                          "always": always pull the image from the repository.
                          "never": never pull the image
  -i, --interactive       Keep STDIN open even if not attached, allocate a
                          pseudo-TTY
  --image-driver <image_driver>
                          The image driver to use to pull the container image.
                          It can have the following values: "docker": pull the
                          image from Docker Hub. "glance": pull the image from
                          Glance.
  --hint <key=value>      The key-value pair(s) for the scheduler to select a
                          host. The format of this parameter is
                          "key=value[,key=value]". May be used multiple times.
  --net <network=network,port=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr>
                          Create network endpoints for the container. network:
                          attach container to the specified neutron networks.
                          port: attach container to the neutron port with this
                          UUID. v4-fixed-ip: IPv4 fixed address for container.
                          v6-fixed-ip: IPv6 fixed address for container.
  --mount <mount>         A dictionary to configure volumes mounted inside the
                          container.
  --runtime <runtime>     The runtime to use for this container. It can have
                          the value "runc" or any other custom runtime.
  --hostname <hostname>   Container hostname
  --disk <disk>           The disk size in GiB per container.
  --availability-zone <availability_zone>
                          The availability zone of the container.
  --auto-heal             The flag of healing a non-existent container in
                          docker.
  --privileged            Give extended privileges to this container
  --healthcheck <cmd=command,interval=time,retries=integer,timeout=time>
                          Specify a test cmd to perform to check that the
                          container is healthy. cmd: Command to run to check
                          health. interval: Time between running the check
                          (s|m|h) (default 0s). retries: Consecutive failures
                          needed to report unhealthy. timeout: Maximum time to
                          allow one check to run (s|m|h) (default 0s).
  --registry <registry>   The container image registry ID or name
  --host <host>           Requested host to run containers. Admin only by
                          default. (Supported by API versions 1.39 or above)
  --entrypoint <entrypoint>
                          The entrypoint which overwrites the default
                          ENTRYPOINT of the image. (Supported by API versions
                          1.40 or above)
  --security-group <security-group>
                          The name of the security group for the container.
                          May be used multiple times.
  -p <port>, --expose-port <port>
                          Expose container port(s) to outside (format:
                          <port>[/<protocol>])
  --auto-remove           Automatically remove the container when it exits
  --restart <restart>     Restart policy to apply when a container exits (no,
                          on-failure[:max-retry], always, unless-stopped)