
[toc]

Deploying OpenStack Rocky on CentOS 7

Official documentation

Password reference for the Rocky release

| Password name | Description |
| --- | --- |
| Database password (no variable used) | Root password for the database |
| ADMIN_PASS | Password of user admin |
| CINDER_DBPASS | Database password for the Block Storage service |
| CINDER_PASS | Password of Block Storage service user cinder |
| DASH_DBPASS | Database password for the dashboard |
| DEMO_PASS | Password of user demo |
| GLANCE_DBPASS | Database password for the Image service |
| GLANCE_PASS | Password of Image service user glance |
| KEYSTONE_DBPASS | Database password for the Identity service |
| METADATA_SECRET | Secret for the metadata proxy |
| NEUTRON_DBPASS | Database password for the Networking service |
| NEUTRON_PASS | Password of Networking service user neutron |
| NOVA_DBPASS | Database password for the Compute service |
| NOVA_PASS | Password of Compute service user nova |
| PLACEMENT_PASS | Password of the Placement service user placement |
| RABBIT_PASS | Password of RabbitMQ user openstack |

Lab environment

| Role | IP | Hostname | Default gateway | Hardware | Virtualization | Firewall/SELinux | OS | Kernel |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Controller | 172.30.100.4/16 | openstack-controller | 172.30.255.253 | 4 vCPU, 16 GB RAM, 40 GB + 100 GB disks | enabled | disabled | CentOS 7.6 | 3.10.0-957.21.3.el7.x86_64 |
| Compute 1 | 172.30.100.5/16 | openstack-compute01 | 172.30.255.253 | 4 vCPU, 16 GB RAM, 40 GB + 100 GB disks | enabled | disabled | CentOS 7.6 | 3.10.0-957.21.3.el7.x86_64 |
| Block storage 1 | 172.30.100.6/16 | openstack-block01 | 172.30.255.253 | 4 vCPU, 16 GB RAM, 40 GB + 100 GB disks | enabled | disabled | CentOS 7.6 | 3.10.0-957.21.3.el7.x86_64 |
| Object storage 1 | 172.30.100.7/16 | openstack-object01 | 172.30.255.253 | 4 vCPU, 16 GB RAM, 40 GB + 50 GB + 50 GB disks | enabled | disabled | CentOS 7.6 | 3.10.0-957.21.3.el7.x86_64 |
| Object storage 2 | 172.30.100.8/16 | openstack-object02 | 172.30.255.253 | 4 vCPU, 16 GB RAM, 40 GB + 50 GB + 50 GB disks | enabled | disabled | CentOS 7.6 | 3.10.0-957.21.3.el7.x86_64 |

1. Build a local yum repository from the Rocky RPM packages

The Rocky offline packages have been uploaded to Baidu Netdisk (百度网盘); extraction code: 4gam

Note

Because the official Rocky yum repository has changed, some packages fail to install from it on CentOS 7. This guide therefore installs from a local yum repository built from the offline packages.

1.1 Install the createrepo command

Run on the controller node

yum -y install createrepo 

1.2 Create the repository

Run on the controller node

# run against the offline package directory uploaded to /opt (see 1.5)
createrepo /opt/openstack-rocky

1.3 Install nginx

Run on the controller node

yum -y install nginx
systemctl enable nginx && systemctl start nginx

1.4 Configure nginx

Run on the controller node

# Edit the nginx configuration. The yum-installed nginx serves /usr/share/nginx/html by default;
# here a dedicated virtual host listening on port 88 is used instead
cat > /etc/nginx/conf.d/openstack-rocky.repo.conf <<EOF
server {
    listen 88;
    root /opt;
    location /openstack-rocky {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
EOF
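
As a quick sanity check (an addition to the original steps), validate the configuration, reload nginx, and confirm the repository directory is being served:

nginx -t && systemctl reload nginx
curl -s http://172.30.100.4:88/openstack-rocky/ | head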

1.5 Create the local yum repo file

Run on all nodes

# Point to the repo served by the controller; the offline packages were uploaded to /opt on the controller, in a directory named openstack-rocky
cat >/etc/yum.repos.d/openstack-rocky.repo <<EOF
[openstack]
name=openstack
baseurl=http://172.30.100.4:88/openstack-rocky
enabled=1
gpgcheck=0
EOF

1.6 Build the local yum cache

Run on all nodes

yum makecache

2. Base environment configuration

Official documentation for the base environment

2.1 Disable the firewall and SELinux

Run on all nodes

# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
# Temporary change (effective immediately, lost on reboot)
setenforce 0

# Permanent change; takes effect after a reboot
sed -i '7s/enforcing/disabled/' /etc/selinux/config

2.2 Configure hosts resolution

Run on all nodes

cat >> /etc/hosts << EOF
172.30.100.4 openstack-controller
172.30.100.5 openstack-compute01
172.30.100.6 openstack-block01
172.30.100.7 openstack-object01
172.30.100.8 openstack-object02
EOF

2.3 Configure the chrony service so that all nodes keep consistent time

2.3.1 Install chrony

Run on all nodes

yum -y install chrony

2.3.2 Edit the configuration file /etc/chrony.conf

Run on the controller node

sed -i.bak '3,6d' /etc/chrony.conf && \
sed -i '3cserver ntp1.aliyun.com iburst' /etc/chrony.conf && \
sed -i '23callow 172.30.0.0/16' /etc/chrony.conf
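
The sed commands delete the four default server lines and insert new directives; assuming the stock CentOS 7.6 chrony.conf layout, the effective added lines are:

server ntp1.aliyun.com iburst
allow 172.30.0.0/16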

Run on the compute, storage, and object nodes

sed -i '3,6d' /etc/chrony.conf && \
sed -i '3cserver openstack-controller iburst' /etc/chrony.conf

2.3.3 Start the service and enable it at boot

Run on all nodes

systemctl enable chronyd && systemctl start chronyd

2.3.4 Verify

Run on the controller node

$ chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 120.25.115.20 2 6 37 29 +43us[ -830us] +/- 22ms

Run on the compute, storage, and object nodes

$ chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? openstack-controller 3 6 200 50 +1319ms[+1319ms] +/- 14.4s

2.4 Install the OpenStack client

Run on all nodes

This guide installs from the local repository, so the official repo package centos-release-openstack-rocky is not needed

# yum -y install centos-release-openstack-rocky
yum -y install python-openstackclient

At this point, the base environment on all nodes is configured!

3. Controller node base services installation

3.1 Install and configure the MariaDB database

3.1.1 Install packages

yum -y install mariadb mariadb-server python2-PyMySQL

3.1.2 Edit the configuration file

cat > /etc/my.cnf.d/openstack.cnf <<EOF
[mysqld]
bind-address = 172.30.100.4

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

3.1.3 Start MariaDB and enable it at boot

systemctl enable mariadb && systemctl start mariadb

3.1.4 Secure the database installation

$ mysql_secure_installation
Enter current password for root (enter for none): # No password yet; just press Enter
Set root password? [Y/n] n # Do not set a root password
Remove anonymous users? [Y/n] y # Remove anonymous users
Disallow root login remotely? [Y/n] y # Disallow remote root login
Remove test database and access to it? [Y/n] y # Remove the test database
Reload privilege tables now? [Y/n] y # Reload the privilege tables

3.2 Install the RabbitMQ message queue

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node.

RabbitMQ listens on two ports:

tcp/5672: the RabbitMQ service port

tcp/25672: used for communication between RabbitMQ nodes

3.2.1 Install packages

yum -y install rabbitmq-server

3.2.2 Start RabbitMQ and enable it at boot

systemctl enable rabbitmq-server && systemctl start rabbitmq-server
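
To confirm both listeners are up (an added check, not in the original steps):

ss -lntp | grep -E '5672|25672'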

3.2.3 Add the openstack user

$ rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"

3.2.4 Grant the openstack user configure, write, and read permissions

The three ".*" patterns grant the configure, write, and read permissions, in that order

$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"

3.2.5 Enable the RabbitMQ management plugin; once enabled it listens on tcp/15672

This is a web management UI; the default username and password are both guest

$ rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
amqp_client
cowlib
cowboy
rabbitmq_web_dispatch
rabbitmq_management_agent
rabbitmq_management

Applying plugin configuration to rabbit@openstack-controller... started 6 plugins.

3.3 安装memcached

认证服务认证缓存使用Memcached缓存令牌。缓存服务memecached运行在控制节点。在生产部署中,我们推荐联合启用防火墙、认证和加密保证它的安全。

memcache监听 tcp/udp 11211端口

3.3.1 Install packages

yum -y install memcached python-memcached

3.3.2 Edit the configuration file

Configure the service to use the management IP address of the controller node, so that other nodes can reach it over the management network:

sed -i.bak '/OPTIONS/c OPTIONS="-l 127.0.0.1,::1,openstack-controller"' /etc/sysconfig/memcached 

The modified file looks like this:

$ cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,openstack-controller"

3.3.3 Start memcached and enable it at boot

systemctl enable memcached && systemctl start memcached

3.4 Install etcd

OpenStack services may use etcd, a distributed reliable key-value store, for distributed key locking, storing configuration, keeping track of service liveness, and other scenarios.

The etcd service runs on the controller node.

etcd serves external clients on port 2379, while member-to-member (peer) communication uses port 2380

3.4.1 Install packages

yum -y install etcd

3.4.2 Edit the configuration file

Edit the /etc/etcd/etcd.conf file and set ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to the controller node's management IP address, to enable access by other nodes via the management network:

export CONTROLLER_IP=172.30.100.4
cat > /etc/etcd/etcd.conf << EOF
# [Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://${CONTROLLER_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${CONTROLLER_IP}:2379"
ETCD_NAME="openstack-controller"

# [Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${CONTROLLER_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://${CONTROLLER_IP}:2379"
ETCD_INITIAL_CLUSTER="openstack-controller=http://${CONTROLLER_IP}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

3.4.3 Start etcd and enable it at boot

systemctl enable etcd && systemctl restart etcd
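
A quick health check (an added step; assumes the v2 etcdctl bundled with the CentOS 7 etcd package):

etcdctl --endpoints=http://172.30.100.4:2379 cluster-health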

At this point, the controller node base services are installed!

4. Installing the Identity service (keystone) on the controller node

Official documentation for installing and configuring keystone in Rocky

The keystone identity service provides authentication, authorization, and a service catalog

Authentication: username and password

Authorization: access delegation, e.g. the way some technical sites (Juejin, CSDN) let you log in through WeChat or QQ authorization

Service catalog: like an address book; to reach OpenStack's image, network, storage, and other services you only need to ask keystone, rather than remembering each service's address separately

Every service installed from here on must be registered with keystone

4.1 Create the keystone database and grant privileges

mysql -e "CREATE DATABASE keystone;"
mysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';"

4.2 Install and configure keystone

  • keystone is accessed through Apache

  • mod_wsgi connects Apache to the Python application

  • listens on port 5000

4.2.1 Install packages

yum -y install openstack-keystone httpd mod_wsgi openstack-utils.noarch

4.2.2 Edit the configuration file /etc/keystone/keystone.conf

# In the [database] section, configure database access:
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

# In the [token] section, configure the Fernet token provider
[token]
provider = fernet

Apply the changes with the following commands:

\cp /etc/keystone/keystone.conf{,.bak}
grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@openstack-controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

MD5 checksum of the file:

md5sum /etc/keystone/keystone.conf
e12c017255f580f414e3693bd4ccaa1a /etc/keystone/keystone.conf

4.2.3 Populate the Identity service database

This command switches to the keystone user with /bin/sh as the shell and runs the command given after -c

su -s /bin/sh -c "keystone-manage db_sync" keystone

The previous step created the tables; if the following command returns a table count, the sync succeeded:

$ mysql keystone -e "show tables;"|wc -l
45

4.2.4 Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

4.2.5 Bootstrap the Identity service

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://openstack-controller:5000/v3/ \
--bootstrap-internal-url http://openstack-controller:5000/v3/ \
--bootstrap-public-url http://openstack-controller:5000/v3/ \
--bootstrap-region-id RegionOne

4.2.6 Configure the Apache HTTP server

Edit the /etc/httpd/conf/httpd.conf file and set the ServerName option to the controller node

4.2.6.1 Edit the file
\cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i.bak -e '96cServerName openstack-controller' -e '/^Listen/c Listen 8080' /etc/httpd/conf/httpd.conf

MD5 checksum of the file:

$ md5sum /etc/httpd/conf/httpd.conf
812165839ec4f2e87a31c1ff2ba423aa /etc/httpd/conf/httpd.conf
4.2.6.2 Create a symlink to the keystone config
ln -sf /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

4.2.7 Start Apache and enable it at boot

systemctl enable httpd && systemctl start httpd
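
To confirm keystone is answering (an added check, not in the official steps), query the version endpoint:

curl -s http://openstack-controller:5000/v3 | python -m json.tool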

4.2.8 Configure the administrative account

The bootstrap step created the admin account with password ADMIN_PASS; export the following variables to use it

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://openstack-controller:5000/v3
export OS_IDENTITY_API_VERSION=3

4.3 Create a domain, projects, users, and roles

4.3.1 Create a domain

$ openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | An Example Domain |
| enabled | True |
| id | ab6f853144384043a5dd648c154d0efe |
| name | example |
| tags | [] |
+-------------+----------------------------------+

4.3.2 Create the service project

# The service project will later hold the OpenStack system users glance, nova, and neutron
$ openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | f6696bc9511043ae9ec72d1c31a494f3 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+

4.3.3 Regular (non-admin) tasks should use an unprivileged project and user

For example, this guide creates the myproject project and the myuser user

4.3.3.1 Create the myproject project
$ openstack project create --domain default \
--description "Demo Project" myproject
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 5b9ccd294c364cc68747df85f9598c89 |
| is_domain | False |
| name | myproject |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
4.3.3.2 Create the myuser user

The password is set to MYUSER_PASS

Choose either interactive or non-interactive password entry

Non-interactive:

$ openstack user create --domain default \
--password MYUSER_PASS myuser
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | f7985ae93ad24f7784a5ea3e1f22109a |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive:

openstack user create --domain default \
--password-prompt myuser
4.3.3.3 Create the myrole role
$ openstack role create myrole
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 9cb289f07a6d4bd6898dd863d616b164 |
| name | myrole |
+-----------+----------------------------------+
4.3.3.4 Add the myrole role to the myproject project and myuser user
openstack role add --project myproject --user myuser myrole

4.3.4 Verify

4.3.4.1 Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables
unset OS_AUTH_URL OS_PASSWORD
4.3.4.2 Request an authentication token as the admin user

The password is ADMIN_PASS

$ openstack --os-auth-url http://openstack-controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2021-11-04T03:32:38+0000 |
| id | gAAAAABhg0ZG2yKJ00Myq8FQ33ibR6tN3I1Fu2xXJRN17usVIVPHiVJ2eJYQviKz9HeKWKEjmH_MLaWeiDZcW3QBQGjnT_Mbe9EEKqHSXKBJxo2etnI_kPCvxRoLPGE-XbevIWW6DYmsJqCJr32TdUG5wysC12ZbSWyVp25qyX_BKl_8KGSXXyM |
| project_id | a6c250532966417cae11b1dfb5f0f6cc |
| user_id | 20b791e627a741ed8b21e41027638986 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
4.3.4.3 Request an authentication token as the myuser user created above

The password is MYUSER_PASS

$ openstack --os-auth-url http://openstack-controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2021-11-04T03:35:19+0000 |
| id | gAAAAABhg0bnYFRoP2qjzgTRmT7lojzV3WO9GkYv6qFu5Nhx9_WbhIV6EDfNBbuJa7EHjmfz5BvYAza9J6wC6ZF36_nHfVPVkq3xO4E7fHNTa914q79UKTkpikR2i5NfPNgo1FqeIa0snUQ2M2-JSqteLCZxLMYZRTa_ckdV12i9OTle5_-6wk8 |
| project_id | a150277718cd41439adbf88bbac6d1fe |
| user_id | d7d0fdf398d949038719c5f0c22fc379 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

4.4 Create OpenStack client environment scripts

4.4.1 Create the scripts

Create and edit the admin-openrc file with the following content; here it is placed under /opt

cat > /opt/admin-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://openstack-controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

Create and edit the demo-openrc file with the following content

cat > /opt/demo-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://openstack-controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

4.4.2 Use the scripts

4.4.2.1 Source the admin-openrc file to populate environment variables with the Identity service location and the admin project and user credentials
source /opt/admin-openrc
4.4.2.2 Request an authentication token

Note that expires is in UTC, eight hours behind China (UTC+8); use timedatectl to check the time and time zone. The default token lifetime is one hour

$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2021-11-04T03:47:51+0000 |
| id | gAAAAABhg0nXv43sRRJ5ahS0P2z86nPoZGz7g-Y2v3jcLhW-QM5eTIj_39ncEktjGu1R1SAOM9cqMpmOHF26j8ur7L26fYJ8gyNoA-JC51ZWesc5mnr1FapD0dxqCmteL22RmA5gRtzjC5qHfbn_RjVNe-AjBNSL_OmtAEdr-kY5B2IO7kvt7ko |
| project_id | a6c250532966417cae11b1dfb5f0f6cc |
| user_id | 20b791e627a741ed8b21e41027638986 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

At this point, the Identity service (keystone) is installed on the controller node!

5. Installing the Image service (glance) on the controller node

Official documentation for installing and configuring glance in Rocky

The OpenStack Image service includes the following components:

glance-api: accepts Image API calls for image discovery, retrieval, and storage

glance-registry: stores, processes, and retrieves image metadata (properties), such as size and type

glance listens on two ports:

glance-api 9292

glance-registry 9191

5.1 Create the glance database and grant privileges

mysql -e "CREATE DATABASE glance;"
mysql -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';"

5.2 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc

5.3 Create the service credentials

5.3.1 Create the glance user

The password is set to GLANCE_PASS

Choose either interactive or non-interactive password entry

Non-interactive:

$ openstack user create --domain default --password GLANCE_PASS glance
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 36a88bb288464126837ebc19758bead6 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive:

openstack user create --domain default --password-prompt glance

5.3.2 Add the admin role to the glance user and service project

openstack role add --project service --user glance admin

5.3.3 Create the glance service entity

$ openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | ce5a424428d640c9adec06865d211916 |
| name | glance |
| type | image |
+-------------+----------------------------------+

5.3.4 Create the Image service API endpoints

$ openstack endpoint create --region RegionOne \
image public http://openstack-controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | bed29b8924114eee8b427f7a83f2cd64 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ce5a424428d640c9adec06865d211916 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
image internal http://openstack-controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 94f84d946e6f4463af82041caf2877b5 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ce5a424428d640c9adec06865d211916 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
image admin http://openstack-controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 16e947838d7948e6a0ec7feb7910b415 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ce5a424428d640c9adec06865d211916 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+

To delete an API endpoint, use openstack endpoint delete <endpoint-id>

Use openstack endpoint list to find the endpoint-id, then delete by id; see the sketch below
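
For example, cleaning up a mistyped endpoint might look like this (the id shown is illustrative, taken from the output above):

openstack endpoint list --service glance
openstack endpoint delete bed29b8924114eee8b427f7a83f2cd64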

5.4 Install and configure components

5.4.1 Install packages

yum -y install openstack-glance

5.4.2 Edit the /etc/glance/glance-api.conf file and complete the following actions

1. In the [database] section, configure database access
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@openstack-controller/glance

2. In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access
[keystone_authtoken]
# ...
www_authenticate_uri = http://openstack-controller:5000
auth_url = http://openstack-controller:5000
memcached_servers = openstack-controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

3. In the [glance_store] section, configure the local file system store and the location of image files
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Apply the changes with the following commands:

\cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@openstack-controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://openstack-controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://openstack-controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers openstack-controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

MD5 checksum of the file:

$ md5sum /etc/glance/glance-api.conf
768bce1167f1545fb55115ad7e4fe3ff /etc/glance/glance-api.conf

5.4.3 Edit the /etc/glance/glance-registry.conf file and complete the following actions

1. In the [database] section, configure database access
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@openstack-controller/glance

2. In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access
[keystone_authtoken]
# ...
www_authenticate_uri = http://openstack-controller:5000
auth_url = http://openstack-controller:5000
memcached_servers = openstack-controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

Apply the changes with the following commands:

\cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@openstack-controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://openstack-controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://openstack-controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers openstack-controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

MD5 checksum of the file:

$ md5sum /etc/glance/glance-registry.conf
ca0383d969bf7d1e9125b836769c9a2e /etc/glance/glance-registry.conf

5.4.4 Populate the database

Ignore any deprecation messages in this output

$ su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1352: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
expire_on_commit=expire_on_commit, _conf=conf)
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_expand02, current revision(s): rocky_expand02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01
INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_contract02, current revision(s): rocky_contract02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.

A non-zero table count confirms success:

$ mysql glance -e "show tables;" | wc -l
16

5.4.5 Start the glance services and enable them at boot

systemctl enable openstack-glance-api openstack-glance-registry
systemctl start openstack-glance-api openstack-glance-registry

5.4.6 Verify operation

5.4.6.1 Source the admin credentials to gain access to admin-only CLI commands
source /opt/admin-openrc
5.4.6.2 Download a test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
5.4.6.3 Upload the image

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it

To delete an image, use glance image-delete <image-id>

Be sure to check the size field in the output; if it is 0, the upload failed

$ openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2021-11-04T03:56:55Z |
| disk_format | qcow2 |
| file | /v2/images/ff6ea9e3-e409-41e1-a871-daf3f8ebfb9e/file |
| id | ff6ea9e3-e409-41e1-a871-daf3f8ebfb9e |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | a6c250532966417cae11b1dfb5f0f6cc |
| properties | os_hash_algo='sha512', os_hash_value='6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e2161b5b5186106570c17a9e58b64dd39390617cd5a350f78', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2021-11-04T03:56:56Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
5.4.7 Confirm the upload and verify the attributes
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 94c96aab-d0b3-4340-835c-9a97108d0554 | cirros | active |
+--------------------------------------+--------+--------+
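
To read the size field non-interactively (an added sketch using openstackclient's standard formatting flags -f/-c):

openstack image show cirros -f value -c size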

At this point, the Image service (glance) is installed on the controller node!


6. Installing the Compute service (nova) on the controller and compute nodes

Nova-related services

| Service | Purpose |
| --- | --- |
| nova-api | Accepts and responds to end-user Compute API calls. Supports the OpenStack Compute API. Enforces some policies and initiates most orchestration activities, such as running an instance |
| nova-api-metadata | Accepts metadata requests from instances. Generally used when running nova-network in multi-host mode |
| nova-compute | A worker daemon that creates and terminates virtual machine instances through hypervisor APIs |
| nova-placement-api | Tracks the inventory and usage of each provider |
| nova-scheduler | Takes virtual machine instance requests from the queue and determines on which compute server host they run |
| nova-conductor | Mediates interactions between nova-compute and the database, eliminating direct database access by nova-compute. Scales horizontally, but do not deploy it on nodes where nova-compute runs |
| nova-consoleauth | Authorizes tokens for users that console proxies provide (see nova-novncproxy and nova-xvpvncproxy). Must be running for console proxies to work; both proxy types can run against a single nova-consoleauth service in a cluster configuration. Deprecated in Rocky and slated for removal |
| nova-novncproxy | Provides a proxy for accessing running instances through a VNC connection. Supports the browser-based noVNC client |
| nova-spicehtml5proxy | Provides a proxy for accessing running instances through a SPICE connection. Supports the browser-based HTML5 client |
| nova-xvpvncproxy | Provides a proxy for accessing running instances through a VNC connection. Supports the OpenStack-specific Java client |

Install and configure the controller node

Official documentation for installing and configuring nova on the controller node in Rocky

6.1 Create the nova, nova_api, nova_cell0, and placement databases and grant privileges

mysql -e "CREATE DATABASE nova_api;"
mysql -e "CREATE DATABASE nova;"
mysql -e "CREATE DATABASE nova_cell0;"
mysql -e "CREATE DATABASE placement;"
mysql -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';"

6.2 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc

6.3 Create the Compute service credentials

6.3.1 Create the nova user

The password is set to NOVA_PASS

Choose either interactive or non-interactive password entry

Non-interactive:

$ openstack user create --domain default \
--password NOVA_PASS nova
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | ebe9b1934a2e4c8ca9c177af647851b1 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive:

openstack user create --domain default --password-prompt nova

6.3.2 Add the admin role to the nova user

openstack role add --project service --user nova admin

6.3.3 Create the nova service entity

$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 412f485718f44759b6c3cd46b1d624e6 |
| name | nova |
| type | compute |
+-------------+----------------------------------+

6.3.4 Create the Compute API service endpoints

$ openstack endpoint create --region RegionOne \
compute public http://openstack-controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cc0a7c21acd0450998760841dd9a11c0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 412f485718f44759b6c3cd46b1d624e6 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
compute internal http://openstack-controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 69acbdd4f0114a339f8b62d9118ce137 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 412f485718f44759b6c3cd46b1d624e6 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
compute admin http://openstack-controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 5382d617406a4dba8280dc375dd53329 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 412f485718f44759b6c3cd46b1d624e6 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+

6.3.5 Create the placement service user

Note

The password is set to PLACEMENT_PASS

Choose either interactive or non-interactive password entry

Non-interactive:

$ openstack user create --domain default --password PLACEMENT_PASS placement
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 5ab24083149e4adf978c43439b87c982 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive:

openstack user create --domain default --password-prompt placement

6.3.6 Add the placement user to the service project with the admin role

openstack role add --project service --user placement admin

6.3.7 Create the Placement API entry in the service catalog

$ openstack service create --name placement \
--description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 274104c9a16f4b728bd7f484d3c54d3e |
| name | placement |
| type | placement |
+-------------+----------------------------------+

6.3.8 Create the Placement API service endpoints

$ openstack endpoint create --region RegionOne \
placement public http://openstack-controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c3afd275f71a4406a701d16ad24aa325 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 274104c9a16f4b728bd7f484d3c54d3e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
placement internal http://openstack-controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2ddc86a3b46d45489ebbedbd54fc3c0c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 274104c9a16f4b728bd7f484d3c54d3e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
placement admin http://openstack-controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d0714a417aa44c0180d59be843e1d40d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 274104c9a16f4b728bd7f484d3c54d3e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+

6.4 Install and configure components

6.4.1 Install packages

yum -y install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api

6.4.2 Edit the /etc/nova/nova.conf file and complete the following actions

1. In the [DEFAULT] section, enable only the compute and metadata APIs
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

2. In the [api_database], [database], and [placement_database] sections, configure database access
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

3. In the [DEFAULT] section, configure RabbitMQ message queue access
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

4. In the [api] and [keystone_authtoken] sections, configure Identity service access
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

5. In the [DEFAULT] section, set the my_ip option to the management interface IP address of the controller node
[DEFAULT]
# ...
my_ip = 10.0.0.11

6. In the [DEFAULT] section, enable support for the Networking service
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver driver

7. In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip

8. In the [glance] section, configure the location of the Image service API
[glance]
# ...
api_servers = http://controller:9292

9. In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

10. In the [placement] section, configure the Placement API
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

Apply the changes with the following commands:

\cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@openstack-controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.30.100.4
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@openstack-controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@openstack-controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@openstack-controller/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://openstack-controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers openstack-controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://openstack-controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://openstack-controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

MD5 checksum of the file:

$ md5sum /etc/nova/nova.conf
44436fe1f334fdfdf0b5efdbf4250e94 /etc/nova/nova.conf

6.4.3 Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf

6.4.3.1 Back up the file and append the content

A blank line must precede the appended content or the file is malformed (two blank lines are added here: the first for correctness, the second for tidy formatting, i.e. one blank line between stanzas)

\cp /etc/httpd/conf.d/00-nova-placement-api.conf{,.bak}
cat >> /etc/httpd/conf.d/00-nova-placement-api.conf <<EOF


<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
EOF

MD5 checksum of the file:

$ md5sum /etc/httpd/conf.d/00-nova-placement-api.conf
4b31341049e863449951b0c76fe17bde /etc/httpd/conf.d/00-nova-placement-api.conf

6.4.3.2 Restart httpd

systemctl restart httpd

6.4.4 Populate the databases, ignoring the output

6.4.4.1 Populate the databases
# Populate the nova_api and placement databases
$ su -s /bin/sh -c "nova-manage api_db sync" nova

# Register the cell0 database
$ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

# Create the cell1 cell
$ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
536383cb-03e4-48bb-bb77-4eeb1bfb9d80

# Populate the nova database
$ su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
6.4.4.2 Verify that nova cell0 and cell1 are registered correctly
$ su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | 536383cb-03e4-48bb-bb77-4eeb1bfb9d80 | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova | False |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

6.4.5 Start the Compute services and configure them to start at system boot

nova-consoleauth is deprecated since 18.0.0 (Rocky) and will be removed in a future release. Console proxies should be deployed per cell. If performing a fresh install (rather than an upgrade), you may not need to install nova-consoleauth; see workarounds.enable_consoleauth for details

systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

After installation, the noVNC console is available at 172.30.100.4:6080

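To confirm the controller-side nova services registered correctly (an added check), list them with the admin credentials loaded:

openstack compute service list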


Install and configure a compute node

Official documentation for installing and configuring nova on a compute node in Rocky

6.5 Install and configure components

6.5.1 Install packages

yum -y install openstack-nova-compute openstack-utils

6.5.2 Edit the /etc/nova/nova.conf file and complete the following actions

1. In the [DEFAULT] section, enable only the compute and metadata APIs
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

2. In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

3. In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

4. In the [DEFAULT] section, set the my_ip option to the IP address of the management network interface on the compute node (my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS)
[DEFAULT]
# ...
my_ip = 10.0.0.31

5. In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

6. In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

7. In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292

8. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

9. In the [placement] section, configure the Placement API:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

Apply the changes with the following commands (note that the hostname is openstack-controller, matching the hosts file; the bare name controller from the official docs would not resolve in this environment):

\cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@openstack-controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.30.100.5
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://openstack-controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers openstack-controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://openstack-controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://openstack-controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://openstack-controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

MD5 checksum of the file:

$ md5sum /etc/nova/nova.conf
fa0ddace12aaa14c6bcfe86b70efac24 /etc/nova/nova.conf

6.5.3 Determine whether your compute node supports hardware acceleration for virtual machines

$ egrep -c '(vmx|svm)' /proc/cpuinfo
2

If this command returns a value of 1 or greater, your compute node supports hardware acceleration, which typically requires no additional configuration. If it returns 0, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

6.5.4 Start the Compute service and its dependencies, and configure them to start automatically at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Verify operation; run on the controller node

6.6 Verify operation of the Compute service

6.6.1 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc

6.6.2 List the service components to verify the successful launch and registration of each process

$ openstack compute service list --service nova-compute
+----+--------------+---------------------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------------------+------+---------+-------+----------------------------+
| 6 | nova-compute | openstack-compute01 | nova | enabled | up | 2021-11-04T07:29:47.000000 |
+----+--------------+---------------------+------+---------+-------+----------------------------+

6.6.3 Discover compute hosts

$ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 536383cb-03e4-48bb-bb77-4eeb1bfb9d80
Checking host mapping for compute host 'compute1': 83452da0-a693-4860-bcd8-028743169f0f
Creating host mapping for compute host 'compute1': 83452da0-a693-4860-bcd8-028743169f0f
Found 1 unmapped computes in cell: 536383cb-03e4-48bb-bb77-4eeb1bfb9d80
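
When more compute nodes are added later, this discover command must be run again. Alternatively (per the official install guide), the scheduler can discover hosts periodically; a sketch using the same openstack-config helper as above:

openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300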

At this point, the Compute service (nova) is installed on the controller and compute nodes!

7. Installing the Networking service (neutron) on the controller and compute nodes

Neutron-related services

| Service | Description |
| --- | --- |
| neutron-server | Listens on port 9696; the API service that accepts and responds to external network management requests |
| neutron-linuxbridge-agent | Creates the bridge interfaces |
| neutron-dhcp-agent | Allocates IP addresses |
| neutron-metadata-agent | Works with the nova metadata API to implement instance customization |
| L3-agent | Implements layer-3 (network-layer) networking |

Install and configure the controller node

Official documentation for installing and configuring neutron on the controller node in Rocky

7.1 Create the neutron database and grant privileges

mysql -e "CREATE DATABASE neutron;"
mysql -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';"

7.2 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc 

7.3 Create the service credentials

7.3.1 Create the neutron user

The password is set to NEUTRON_PASS

Choose either interactive or non-interactive password entry

Non-interactive:

$ openstack user create --domain default --password NEUTRON_PASS neutron
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 014a7629fb0548899be31c87494e1156 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive:

openstack user create --domain default --password-prompt neutron

7.3.2 Add the admin role to the neutron user

openstack role add --project service --user neutron admin

7.3.3 Create the neutron service entity

$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 9e74ccbdaa85421894cf61c97f355dc7 |
| name | neutron |
| type | network |
+-------------+----------------------------------+

7.4 Create the Networking service API endpoints

$ openstack endpoint create --region RegionOne \
network public http://openstack-controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | abe8c37741934ade89308da46501ea03 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9e74ccbdaa85421894cf61c97f355dc7 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
network internal http://openstack-controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0c32f6cb44a74ec5b653ba79153e3d68 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9e74ccbdaa85421894cf61c97f355dc7 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
network admin http://openstack-controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d2c77ff079c94591bc8ea0b4e51be936 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9e74ccbdaa85421894cf61c97f355dc7 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+


7.5 Configure networking options

The official documentation on the two networking options:

You can deploy the Networking service using one of two architectures, represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses; only the admin or another privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks. The demo or another unprivileged user can manage self-service networks, including the routers that provide connectivity between self-service and provider networks. Additionally, floating IP addresses provide connectivity to instances on self-service networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN include additional headers that increase overhead and decrease the space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service automatically provides the correct MTU value to instances via DHCP. However, some cloud images do not use DHCP or ignore the DHCP MTU option, and require configuration using metadata or a script.

Networking option 1: provider networks

Networking option 2: self-service networks

After completing whichever option you choose, return here to configure the metadata agent

This guide uses networking option 1

7.5.1 Install packages

yum -y install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables

7.5.2 Edit the /etc/neutron/neutron.conf file and complete the following actions

1. In the [database] section, configure database access
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

2. In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =

3. In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

4. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

5. In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

6. In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Apply the changes with the following commands (again using openstack-controller rather than the bare controller from the docs, to match the hosts file):

\cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@openstack-controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@openstack-controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://openstack-controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://openstack-controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers openstack-controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

File md5 checksum

$ md5sum /etc/neutron/neutron.conf
1c4b4339f83596fa6bfdbec7a622a35e /etc/neutron/neutron.conf
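
The md5 value depends on exact key order and whitespace, so a mismatch is not necessarily an error. Individual values can be spot-checked instead; openstack-config also supports --get:

$ openstack-config --get /etc/neutron/neutron.conf database connection
mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron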

7.6 Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions

The ML2 plug-in uses the Linux bridge mechanism to build Layer 2 (bridging and switching) virtual networking infrastructure for instances.

1. In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan

2. In the [ml2] section, disable self-service networks:
[ml2]
# ...
tenant_network_types =

3. In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
# ...
mechanism_drivers = linuxbridge

4. In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security

5. In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider

6. In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true

Apply the changes with the following commands

\cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

File md5 checksum

$ md5sum /etc/neutron/plugins/ml2/ml2_conf.ini
eb38c10cfd26c1cc308a050c9a5d8aa1 /etc/neutron/plugins/ml2/ml2_conf.ini

7.7 Configure the Linux bridge agent

7.7.1 Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions

The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

1. In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. Here it is eth0.

2. In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false

3. In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, the br_netfilter kernel module typically needs to be loaded. Check your operating system's documentation for additional details on enabling this module.

Apply the changes with the following commands

\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

File md5 checksum

$ md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
794b19995c83e2fc0c3fd42f506904f1 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

7.7.2 Enable bridge filter support in the Linux kernel (set the sysctl values to 1)

Write the following into /etc/sysctl.d/openstack-rocky-bridge.conf

cat >> /etc/sysctl.d/openstack-rocky-bridge.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Run the following command to make it take effect

$ modprobe br_netfilter && sysctl -p /etc/sysctl.d/openstack-rocky-bridge.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
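
Note that modprobe does not persist across reboots. A minimal sketch to load the module automatically at boot, using systemd's modules-load.d mechanism available on CentOS 7:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf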

7.8 Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Apply the changes with the following commands

\cp /etc/neutron/dhcp_agent.ini{,.bak} 
grep '^[a-Z\[]' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

File md5 checksum

$ md5sum /etc/neutron/dhcp_agent.ini
33a1e93e1853796070d5da0773496665 /etc/neutron/dhcp_agent.ini

7.9 Configure the metadata agent

The metadata agent provides configuration information, such as credentials, to instances.

Edit the /etc/neutron/metadata_agent.ini file and complete the following actions

In the [DEFAULT] section, configure the metadata host and the shared secret:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret for the metadata proxy.

Apply the changes with the following commands

\cp /etc/neutron/metadata_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

File md5 checksum

$ md5sum /etc/neutron/metadata_agent.ini
e8b90a011b94fece31d33edfd8bc72b6 /etc/neutron/metadata_agent.ini

7.10 Configure Compute to use Networking

Edit the /etc/nova/nova.conf file and perform the following actions

In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:

[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

Apply the changes with the following commands

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

File md5 checksum

$ md5sum /etc/nova/nova.conf
81feca9d18ee91397cc973d455bfa271 /etc/nova/nova.conf
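
The METADATA_SECRET configured here must match the value set in /etc/neutron/metadata_agent.ini in section 7.9. An optional consistency check:

grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini /etc/nova/nova.conf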

7.11 Finalize installation

7.11.1 Create a symbolic link

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
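
An optional check that the link exists and points at the right file:

ls -l /etc/neutron/plugin.ini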

7.11.2 Populate the database; output ending with OK indicates success

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
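
As an optional sanity check, in the same style as the checks used for the other services in this guide, the neutron database should now contain tables (the exact count varies by release):

mysql neutron -e "show tables" | wc -l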

7.11.3 Restart the Compute API service

systemctl restart openstack-nova-api.service

7.11.4 Start the Networking services and configure them to start when the system boots

Of the two networking options in the official documentation, option 1 is chosen here.

systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

7.11.5 Verify

# Output like the following after starting the services indicates success; every entry in the alive column is a smiley
$ neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| beffcac6-745e-449f-bad8-7f2e4fa973f2 | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
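
As the warning above notes, the neutron CLI is deprecated; the same check can be run with the openstack client, which is also used for verification later in this guide:

openstack network agent list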

Install and configure the compute node

Official documentation for installing and configuring the Rocky Networking service (neutron) on a compute node

7.12 Install the packages

yum -y install openstack-neutron-linuxbridge ebtables ipset openstack-utils

7.13 Configure the common component

Edit the /etc/neutron/neutron.conf file and complete the following actions

1. In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

2. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

3. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Apply the changes with the following commands

\cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

File md5 checksum

$ md5sum /etc/neutron/neutron.conf
9c47ffb59b23516b59e7de84a39bcbe8 /etc/neutron/neutron.conf

7.14 Configure networking options

7.14.1 Configure the Linux bridge agent

The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

7.14.1.1 Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions

1. In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. Here it is eth0.

2. In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false

3. In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Apply the changes with the following commands

\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak} 
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

File md5 checksum

$ md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
794b19995c83e2fc0c3fd42f506904f1 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

7.14.1.2 Enable bridge filter support in the Linux kernel (set the sysctl values to 1)

Write the following into /etc/sysctl.d/openstack-rocky-bridge.conf

cat >> /etc/sysctl.d/openstack-rocky-bridge.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Run the following command to make it take effect (the br_netfilter module can be made persistent across reboots in the same way as on the controller node; see 7.7.2)

$ modprobe br_netfilter && sysctl -p /etc/sysctl.d/openstack-rocky-bridge.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

7.14.2 Configure Compute to use Networking

Edit the /etc/nova/nova.conf file and complete the following actions

In the [neutron] section, configure access parameters:
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

Apply the changes with the following commands

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS

File md5 checksum

$ md5sum /etc/nova/nova.conf
9b96b21ae709f89c96cc559018ba7058 /etc/nova/nova.conf

7.15 Finalize installation

7.15.1 Restart the Compute service

systemctl restart openstack-nova-compute.service

7.15.2 Start the Linux bridge agent and configure it to start when the system boots

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

7.16 Verify

Run on the controller node

The output should show three agents on the controller node and one agent on each compute node

$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 42cfe05b-0a9c-40ce-8f99-06ba76938c50 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
| 749cb43f-a5db-4918-a3f5-8765e92e851c | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| 856ecf5f-6018-4ac4-a66b-f6f88784db0e | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| b5c2309c-fefc-46d0-b98e-37b05861095c | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

At this point, the Networking service (neutron) installation on the controller and compute nodes is complete!!!

8. Install the horizon web Dashboard on the controller node

Official documentation for installing and configuring the Rocky horizon dashboard on the controller node

Official documentation for the horizon plugin registry

8.1 Install the package

yum -y install openstack-dashboard

8.2 Edit the /etc/openstack-dashboard/local_settings file and complete the following actions

Because some content was deleted and re-pasted, the line numbers below are not exact, but they are off by no more than 3 lines

1. Configure the dashboard to use OpenStack services on the controller node:
Line 184: OPENSTACK_HOST = "127.0.0.1"
Change to: OPENSTACK_HOST = "controller"

2. Allow hosts to access the dashboard:
ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production. For more information, see https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts.
Line 38: ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
Change to: ALLOWED_HOSTS = ['*', ]

3. Configure the memcached session storage service:
Around line 161, add a SESSION_ENGINE line above CACHES and change the block to the following:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

4. Enable the Identity API version 3:
Line 187, no change needed:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

5. Enable support for domains:
Line 75: #OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
Change to: OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

6. Configure the API versions:
Line 64, originally commented out:
#OPENSTACK_API_VERSIONS = {
# "data-processing": 1.1,
# "identity": 3,
# "image": 2,
# "volume": 2,
# "compute": 2,
#}

Change to the following:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

7. Configure Default as the default domain for users created via the dashboard:
Line 95: #OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
Change to: OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

8. Configure user as the default role for users created via the dashboard:
Line 186: OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
Change to: OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

9. If you chose networking option 1, disable support for Layer 3 networking services:
Line 324:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_fip_topology_check': True,

Change to the following:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,

10. Configure the time zone:
Line 467: TIME_ZONE = "TIME_ZONE"
Change to: TIME_ZONE = "Asia/Shanghai"

Back up the file first

\cp /etc/openstack-dashboard/local_settings{,.bak}

Using a cat <<EOF heredoc may cause formatting problems because the file content is so long; instead, open the file with vi and paste the content in. Do not use vim here, as it introduces formatting errors

Content of the /etc/openstack-dashboard/local_settings file

# -*- coding: utf-8 -*-

import os

from django.utils.translation import ugettext_lazy as _


from openstack_dashboard.settings import HORIZON_CONFIG

DEBUG = False

# This setting controls whether or not compression is enabled. Disabling
# compression makes Horizon considerably slower, but makes it much easier
# to debug JS and CSS changes
#COMPRESS_ENABLED = not DEBUG

# This setting controls whether compression happens on the fly, or offline
# with `python manage.py compress`
# See https://django-compressor.readthedocs.io/en/latest/usage/#offline-compression
# for more information
#COMPRESS_OFFLINE = not DEBUG

# WEBROOT is the location relative to Webserver root
# should end with a slash.
WEBROOT = '/dashboard/'
#LOGIN_URL = WEBROOT + 'auth/login/'
#LOGOUT_URL = WEBROOT + 'auth/logout/'
#
# LOGIN_REDIRECT_URL can be used as an alternative for
# HORIZON_CONFIG.user_home, if user_home is not set.
# Do not set it to '/home/', as this will cause circular redirect loop
#LOGIN_REDIRECT_URL = WEBROOT

# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS = ['*', ]

# Set SSL proxy settings:
# Pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#secure-proxy-ssl-header
#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True

# The absolute path to the directory where message files are collected.
# The message file must have a .json file extension. When the user logins to
# horizon, the message files collected are processed and displayed to the user.
#MESSAGES_PATH=None

# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# Versions specified here should be integers or floats, not strings.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be 2.0 or 3.
# Minimum compute version to get the instance locked status is 2.9.
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}

# Set this to True if running on a multi-domain model. When this is enabled, it
# will require the user to enter the Domain name in addition to the username
# for login.
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Set this to True if you want available domains displayed as a dropdown menu
# on the login screen. It is strongly advised NOT to enable this for public
# clouds, as advertising enabled domains to unauthenticated customers
# irresponsibly exposes private information. This should only be used for
# private clouds where the dashboard sits behind a corporate firewall.
#OPENSTACK_KEYSTONE_DOMAIN_DROPDOWN = False

# If OPENSTACK_KEYSTONE_DOMAIN_DROPDOWN is enabled, this option can be used to
# set the available domains to choose from. This is a list of pairs whose first
# value is the domain name and the second is the display name.
#OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
# ('Default', 'Default'),
#)

# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
# NOTE: This value must be the name of the default domain, NOT the ID.
# Also, you will most likely have a value in the keystone policy file like this
# "cloud_admin": "rule:admin_required and domain_id:<your domain id>"
# This value must be the name of the domain whose ID is specified there.
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# Set this to True to enable panels that provide the ability for users to
# manage Identity Providers (IdPs) and establish a set of rules to map
# federation protocol attributes to Identity API attributes.
# This extension requires v3.0+ of the Identity API.
#OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False

# Set Console type:
# valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL", "MKS"
# or None. Set to None explicitly if you want to deactivate the console.
#CONSOLE_TYPE = "AUTO"

# Toggle showing the openrc file for Keystone V2.
# If set to false the link will be removed from the user dropdown menu
# and the API Access page
#SHOW_KEYSTONE_V2_RC = True

# If provided, a "Report Bug" link will be displayed in the site header
# which links to the value of this setting (ideally a URL containing
# information on how to report issues).
#HORIZON_CONFIG["bug_url"] = "http://bug-report.example.com"

# Show backdrop element outside the modal, do not close the modal
# after clicking on backdrop.
#HORIZON_CONFIG["modal_backdrop"] = "static"

# Specify a regular expression to validate user passwords.
#HORIZON_CONFIG["password_validator"] = {
# "regex": '.*',
# "help_text": _("Your password does not meet the requirements."),
#}

# Turn off browser autocompletion for forms including the login form and
# the database creation workflow if so desired.
#HORIZON_CONFIG["password_autocomplete"] = "off"

# Setting this to True will disable the reveal button for password fields,
# including on the login form.
#HORIZON_CONFIG["disable_password_reveal"] = False

LOCAL_PATH = '/tmp'

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, e.i. regardless of the
# amount of Python WSGI workers (if used behind Apache+mod_wsgi): However,
# there may be situations where you would want to set this explicitly, e.g.
# when multiple dashboard instances are distributed on different machines
# (usually behind a load-balancer). Either you have to make sure that a session
# gets all requests routed to the same dashboard instance or you set the same
# SECRET_KEY for all of them.
SECRET_KEY='f9ed41e34c2b04178998'

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHES to something like
#CACHES = {
# 'default': {
# 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
# 'LOCATION': '127.0.0.1:11211',
# },
#}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
#EMAIL_HOST = 'smtp.my-company.com'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = 'djangomail'
#EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
#AVAILABLE_REGIONS = [
# ('http://cluster1.example.com:5000/v3', 'cluster1'),
# ('http://cluster2.example.com:5000/v3', 'cluster2'),
#]

OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# For setting the default service region on a per-endpoint basis. Note that the
# default value for this setting is {}, and below is just an example of how it
# should be specified.
# A key of '*' is an optional global default if no other key matches.
#DEFAULT_SERVICE_REGIONS = {
# '*': 'RegionOne'
# OPENSTACK_KEYSTONE_URL: 'RegionTwo'
#}

# Enables keystone web single-sign-on if set to True.
#WEBSSO_ENABLED = False

# Authentication mechanism to be selected as default.
# The value must be a key from WEBSSO_CHOICES.
#WEBSSO_INITIAL_CHOICE = "credentials"

# The list of authentication mechanisms which include keystone
# federation protocols and identity provider/federation protocol
# mapping keys (WEBSSO_IDP_MAPPING). Current supported protocol
# IDs are 'saml2' and 'oidc' which represent SAML 2.0, OpenID
# Connect respectively.
# Do not remove the mandatory credentials mechanism.
# Note: The last two tuples are sample mapping keys to a identity provider
# and federation protocol combination (WEBSSO_IDP_MAPPING).
#WEBSSO_CHOICES = (
# ("credentials", _("Keystone Credentials")),
# ("oidc", _("OpenID Connect")),
# ("saml2", _("Security Assertion Markup Language")),
# ("acme_oidc", "ACME - OpenID Connect"),
# ("acme_saml2", "ACME - SAML2"),
#)

# A dictionary of specific identity provider and federation protocol
# combinations. From the selected authentication mechanism, the value
# will be looked up as keys in the dictionary. If a match is found,
# it will redirect the user to a identity provider and federation protocol
# specific WebSSO endpoint in keystone, otherwise it will use the value
# as the protocol_id when redirecting to the WebSSO by protocol endpoint.
# NOTE: The value is expected to be a tuple formatted as: (<idp_id>, <protocol_id>).
#WEBSSO_IDP_MAPPING = {
# "acme_oidc": ("acme", "oidc"),
# "acme_saml2": ("acme", "saml2"),
#}

# If set this URL will be used for web single-sign-on authentication
# instead of OPENSTACK_KEYSTONE_URL. This is needed in the deployment
# scenarios where network segmentation is used per security requirement.
# In this case, the controllers are not reachable from public network.
# Therefore, user's browser will not be able to access OPENSTACK_KEYSTONE_URL
# if it is set to the internal endpoint.
#WEBSSO_KEYSTONE_URL = "http://keystone-public.example.com/v3"

# The Keystone Provider drop down uses Keystone to Keystone federation
# to switch between Keystone service providers.
# Set display name for Identity Provider (dropdown display name)
#KEYSTONE_PROVIDER_IDP_NAME = "Local Keystone"
# This id is used for only for comparison with the service provider IDs. This ID
# should not match any service provider IDs.
#KEYSTONE_PROVIDER_IDP_ID = "localkeystone"

# Disable SSL certificate checks (useful for self-signed certificates):
#OPENSTACK_SSL_NO_VERIFY = True

# The CA certificate to use to verify SSL connections
#OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True,
}

# Setting this to True, will add a new "Retrieve Password" action on instance,
# allowing Admin session password retrieval/decryption.
#OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False

# The Launch Instance user experience has been significantly enhanced.
# You can choose whether to enable the new launch instance experience,
# the legacy experience, or both. The legacy experience will be removed
# in a future release, but is available as a temporary backup setting to ensure
# compatibility with existing deployments. Further development will not be
# done on the legacy experience. Please report any problems with the new
# experience via the Launchpad tracking system.
#
# Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
# determine the experience to enable. Set them both to true to enable
# both.
#LAUNCH_INSTANCE_LEGACY_ENABLED = True
#LAUNCH_INSTANCE_NG_ENABLED = False

# A dictionary of settings which can be used to provide the default values for
# properties found in the Launch Instance modal.
#LAUNCH_INSTANCE_DEFAULTS = {
# 'config_drive': False,
# 'enable_scheduler_hints': True,
# 'disable_image': False,
# 'disable_instance_snapshot': False,
# 'disable_volume': False,
# 'disable_volume_snapshot': False,
# 'create_volume': True,
#}

# The Xen Hypervisor has the ability to set the mount point for volumes
# attached to instances (other Hypervisors currently do not). Setting
# can_set_mount_point to True will add the option to set the mount point
# from the UI.
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
'can_set_password': False,
'requires_keypair': False,
'enable_quotas': True
}

# This settings controls whether IP addresses of servers are retrieved from
# neutron in the project instance table. Setting this to ``False`` may mitigate
# a performance issue in the project instance table in large deployments.
#OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES = True

# The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
# services provided by cinder that is not exposed by its extension API.
OPENSTACK_CINDER_FEATURES = {
'enable_backup': False,
}

# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
# services provided by neutron. Options currently available are load
# balancer service, security groups, quotas, VPN service.
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,

# Default dns servers you would like to use when a subnet is
# created. This is only a default, users can still choose a different
# list of dns servers when creating a new subnet.
# The entries below are examples only, and are not appropriate for
# real deployments
# 'default_dns_nameservers': ["8.8.8.8", "8.8.4.4", "208.67.222.222"],

# Set which provider network types are supported. Only the network types
# in this list will be available to choose from when creating a network.
# Network types include local, flat, vlan, gre, vxlan and geneve.
# 'supported_provider_types': ['*'],

# You can configure available segmentation ID range per network type
# in your deployment.
# 'segmentation_id_range': {
# 'vlan': [1024, 2048],
# 'vxlan': [4094, 65536],
# },

# You can define additional provider network types here.
# 'extra_provider_types': {
# 'awesome_type': {
# 'display_name': 'Awesome New Type',
# 'require_physical_network': False,
# 'require_segmentation_id': True,
# }
# },

# Set which VNIC types are supported for port binding. Only the VNIC
# types in this list will be available to choose from when creating a
# port.
# VNIC types include 'normal', 'direct', 'direct-physical', 'macvtap',
# 'baremetal' and 'virtio-forwarder'
# Set to empty list or None to disable VNIC type selection.
'supported_vnic_types': ['*'],

# Set list of available physical networks to be selected in the physical
# network field on the admin create network modal. If it's set to an empty
# list, the field will be a regular input field.
# e.g. ['default', 'test']
'physical_networks': [],

}

# The OPENSTACK_HEAT_STACK settings can be used to disable password
# field required while launching the stack.
OPENSTACK_HEAT_STACK = {
'enable_user_pass': True,
}

# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
# in the OpenStack Dashboard related to the Image service, such as the list
# of supported image formats.
#OPENSTACK_IMAGE_BACKEND = {
# 'image_formats': [
# ('', _('Select format')),
# ('aki', _('AKI - Amazon Kernel Image')),
# ('ami', _('AMI - Amazon Machine Image')),
# ('ari', _('ARI - Amazon Ramdisk Image')),
# ('docker', _('Docker')),
# ('iso', _('ISO - Optical Disk Image')),
# ('ova', _('OVA - Open Virtual Appliance')),
# ('qcow2', _('QCOW2 - QEMU Emulator')),
# ('raw', _('Raw')),
# ('vdi', _('VDI - Virtual Disk Image')),
# ('vhd', _('VHD - Virtual Hard Disk')),
# ('vhdx', _('VHDX - Large Virtual Hard Disk')),
# ('vmdk', _('VMDK - Virtual Machine Disk')),
# ],
#}

# The IMAGE_CUSTOM_PROPERTY_TITLES settings is used to customize the titles for
# image custom property attributes that appear on image detail pages.
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type"),
}

# The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
# custom properties should not be displayed in the Image Custom Properties
# table.
IMAGE_RESERVED_CUSTOM_PROPERTIES = []

# Set to 'legacy' or 'direct' to allow users to upload images to glance via
# Horizon server. When enabled, a file form field will appear on the create
# image form. If set to 'off', there will be no file form field on the create
# image form. See documentation for deployment considerations.
#HORIZON_IMAGES_UPLOAD_MODE = 'legacy'

# Allow a location to be set when creating or updating Glance images.
# If using Glance V2, this value should be False unless the Glance
# configuration and policies allow setting locations.
#IMAGES_ALLOW_LOCATION = False

# A dictionary of default settings for create image modal.
#CREATE_IMAGE_DEFAULTS = {
# 'image_visibility': "public",
#}

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is None. This
# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
#SECONDARY_ENDPOINT_TYPE = None

# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20

# The size of chunk in bytes for downloading objects from Swift
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024

# The default number of lines displayed for instance console log.
INSTANCE_LOG_LENGTH = 35

# Specify a maximum number of items to display in a dropdown.
DROPDOWN_MAX_ITEMS = 30

# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "Asia/Shanghai"

# When launching an instance, the menu of available flavors is
# sorted by RAM usage, ascending. If you would like a different sort order,
# you can provide another flavor attribute as sorting key. Alternatively, you
# can provide a custom callback method to use for sorting. You can also provide
# a flag for reverse sort. For more info, see
# http://docs.python.org/2/library/functions.html#sorted
#CREATE_INSTANCE_FLAVOR_SORT = {
# 'key': 'name',
# # or
# 'key': my_awesome_callback_method,
# 'reverse': False,
#}

# Set this to True to display an 'Admin Password' field on the Change Password
# form to verify that it is indeed the admin logged-in who wants to change
# the password.
#ENFORCE_PASSWORD_CHECK = False

# Modules that provide /auth routes that can be used to handle different types
# of user authentication. Add auth plugins that require extra route handling to
# this list.
#AUTHENTICATION_URLS = [
# 'openstack_auth.urls',
#]

# The Horizon Policy Enforcement engine uses these values to load per service
# policy rule files. The content of these files should match the files the
# OpenStack services are using to determine role based access control in the
# target installation.

# Path to directory containing policy.json files
POLICY_FILES_PATH = '/etc/openstack-dashboard'

# Map of local copy of service policy files.
# Please insure that your identity policy file matches the one being used on
# your keystone servers. There is an alternate policy file that may be used
# in the Keystone v3 multi-domain case, policy.v3cloudsample.json.
# This file is not included in the Horizon repository by default but can be
# found at
# http://git.openstack.org/cgit/openstack/keystone/tree/etc/ \
# policy.v3cloudsample.json
# Having matching policy files on the Horizon and Keystone servers is essential
# for normal operation. This holds true for all services and their policy files.
#POLICY_FILES = {
# 'identity': 'keystone_policy.json',
# 'compute': 'nova_policy.json',
# 'volume': 'cinder_policy.json',
# 'image': 'glance_policy.json',
# 'network': 'neutron_policy.json',
#}

# Change this patch to the appropriate list of tuples containing
# a key, label and static directory containing two files:
# _variables.scss and _styles.scss
#AVAILABLE_THEMES = [
# ('default', 'Default', 'themes/default'),
# ('material', 'Material', 'themes/material'),
#]

LOGGING = {
'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,
# If apache2 mod_wsgi is used to deploy OpenStack dashboard
# timestamp is output by mod_wsgi. If WSGI framework you use does not
# output timestamp for logging, add %(asctime)s in the following
# format definitions.
'formatters': {
'console': {
'format': '%(levelname)s %(name)s %(message)s'
},
'operation': {
# The format of "%(message)s" is defined by
# OPERATION_LOG_OPTIONS['format']
'format': '%(message)s'
},
},
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'logging.NullHandler',
},
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'console',
},
'operation': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'operation',
},
},
'loggers': {
'horizon': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'horizon.operation_log': {
'handlers': ['operation'],
'level': 'INFO',
'propagate': False,
},
'openstack_dashboard': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'cinderclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'keystoneauth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'glanceclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'neutronclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'swiftclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'oslo_policy': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_auth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'requests': {
'handlers': ['null'],
'propagate': False,
},
'urllib3': {
'handlers': ['null'],
'propagate': False,
},
'chardet.charsetprober': {
'handlers': ['null'],
'propagate': False,
},
'iso8601': {
'handlers': ['null'],
'propagate': False,
},
'scss': {
'handlers': ['null'],
'propagate': False,
},
},
}

# 'direction' should not be specified for all_tcp/udp/icmp.
# It is specified in the form.
SECURITY_GROUP_RULES = {
'all_tcp': {
'name': _('All TCP'),
'ip_protocol': 'tcp',
'from_port': '1',
'to_port': '65535',
},
'all_udp': {
'name': _('All UDP'),
'ip_protocol': 'udp',
'from_port': '1',
'to_port': '65535',
},
'all_icmp': {
'name': _('All ICMP'),
'ip_protocol': 'icmp',
'from_port': '-1',
'to_port': '-1',
},
'ssh': {
'name': 'SSH',
'ip_protocol': 'tcp',
'from_port': '22',
'to_port': '22',
},
'smtp': {
'name': 'SMTP',
'ip_protocol': 'tcp',
'from_port': '25',
'to_port': '25',
},
'dns': {
'name': 'DNS',
'ip_protocol': 'tcp',
'from_port': '53',
'to_port': '53',
},
'http': {
'name': 'HTTP',
'ip_protocol': 'tcp',
'from_port': '80',
'to_port': '80',
},
'pop3': {
'name': 'POP3',
'ip_protocol': 'tcp',
'from_port': '110',
'to_port': '110',
},
'imap': {
'name': 'IMAP',
'ip_protocol': 'tcp',
'from_port': '143',
'to_port': '143',
},
'ldap': {
'name': 'LDAP',
'ip_protocol': 'tcp',
'from_port': '389',
'to_port': '389',
},
'https': {
'name': 'HTTPS',
'ip_protocol': 'tcp',
'from_port': '443',
'to_port': '443',
},
'smtps': {
'name': 'SMTPS',
'ip_protocol': 'tcp',
'from_port': '465',
'to_port': '465',
},
'imaps': {
'name': 'IMAPS',
'ip_protocol': 'tcp',
'from_port': '993',
'to_port': '993',
},
'pop3s': {
'name': 'POP3S',
'ip_protocol': 'tcp',
'from_port': '995',
'to_port': '995',
},
'ms_sql': {
'name': 'MS SQL',
'ip_protocol': 'tcp',
'from_port': '1433',
'to_port': '1433',
},
'mysql': {
'name': 'MYSQL',
'ip_protocol': 'tcp',
'from_port': '3306',
'to_port': '3306',
},
'rdp': {
'name': 'RDP',
'ip_protocol': 'tcp',
'from_port': '3389',
'to_port': '3389',
},
}

# Deprecation Notice:
#
# The setting FLAVOR_EXTRA_KEYS has been deprecated.
# Please load extra spec metadata into the Glance Metadata Definition Catalog.
#
# The sample quota definitions can be found in:
# <glance_source>/etc/metadefs/compute-quota.json
#
# The metadata definition catalog supports CLI and API:
# $glance --os-image-api-version 2 help md-namespace-import
# $glance-manage db_load_metadefs <directory_with_definition_files>
#
# See Metadata Definitions on:
# https://docs.openstack.org/glance/latest/user/glancemetadefcatalogapi.html

# The hash algorithm to use for authentication tokens. This must
# match the hash algorithm that the identity server and the
# auth_token middleware are using. Allowed values are the
# algorithms supported by Python's hashlib library.
#OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'

# AngularJS requires some settings to be made available to
# the client side. Some settings are required by in-tree / built-in horizon
# features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
# form of ['SETTING_1','SETTING_2'], etc.
#
# You may remove settings from this list for security purposes, but do so at
# the risk of breaking a built-in horizon feature. These settings are required
# for horizon to function properly. Only remove them if you know what you
# are doing. These settings may in the future be moved to be defined within
# the enabled panel configuration.
# You should not add settings to this list for out of tree extensions.
# See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
'LAUNCH_INSTANCE_DEFAULTS',
'OPENSTACK_IMAGE_FORMATS',
'OPENSTACK_KEYSTONE_BACKEND',
'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN',
'CREATE_IMAGE_DEFAULTS',
'ENFORCE_PASSWORD_CHECK']

# Additional settings can be made available to the client side for
# extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
# !! Please use extreme caution as the settings are transferred via HTTP/S
# and are not encrypted on the browser. This is an experimental API and
# may be deprecated in the future without notice.
#REST_API_ADDITIONAL_SETTINGS = []

# DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
# within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
# Scripting (XFS) vulnerability, so this option allows extra security hardening
# where iframes are not used in deployment. Default setting is True.
# For more information see:
# http://tinyurl.com/anticlickjack
#DISALLOW_IFRAME_EMBED = True

# Help URL can be made available for the client. To provide a help URL, edit the
# following attribute to the URL of your choice.
#HORIZON_CONFIG["help_url"] = "http://openstack.mycompany.org"

# Settings for OperationLogMiddleware
# OPERATION_LOG_ENABLED is flag to use the function to log an operation on
# Horizon.
# mask_targets is arrangement for appointing a target to mask.
# method_targets is arrangement of HTTP method to output log.
# format is the log contents.
#OPERATION_LOG_ENABLED = False
#OPERATION_LOG_OPTIONS = {
# 'mask_fields': ['password'],
# 'target_methods': ['POST'],
# 'ignored_urls': ['/js/', '/static/', '^/api/'],
# 'format': ("[%(client_ip)s] [%(domain_name)s]"
# " [%(domain_id)s] [%(project_name)s]"
# " [%(project_id)s] [%(user_name)s] [%(user_id)s] [%(request_scheme)s]"
# " [%(referer_url)s] [%(request_url)s] [%(message)s] [%(method)s]"
# " [%(http_status)s] [%(param)s]"),
#}

# The default date range in the Overview panel meters - either <today> minus N
# days (if the value is integer N), or from the beginning of the current month
# until today (if set to None). This setting should be used to limit the amount
# of data fetched by default when rendering the Overview panel.
#OVERVIEW_DAYS_RANGE = 1

# To allow operators to require users provide a search criteria first
# before loading any data into the views, set the following dict
# attributes to True in each one of the panels you want to enable this feature.
# Follow the convention <dashboard>.<view>
#FILTER_DATA_FIRST = {
# 'admin.instances': False,
# 'admin.images': False,
# 'admin.networks': False,
# 'admin.routers': False,
# 'admin.volumes': False,
# 'identity.users': False,
# 'identity.projects': False,
# 'identity.groups': False,
# 'identity.roles': False
#}

# Dict used to restrict user private subnet cidr range.
# An empty list means that user input will not be restricted
# for a corresponding IP version. By default, there is
# no restriction for IPv4 or IPv6. To restrict
# user private subnet cidr range set ALLOWED_PRIVATE_SUBNET_CIDR
# to something like
#ALLOWED_PRIVATE_SUBNET_CIDR = {
# 'ipv4': ['10.0.0.0/8', '192.168.0.0/16'],
# 'ipv6': ['fc00::/7']
#}
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}

# Projects and users can have extra attributes as defined by keystone v3.
# Horizon has the ability to display these extra attributes via this setting.
# If you'd like to display extra data in the project or user tables, set the
# corresponding dict key to the attribute name, followed by the display name.
# For more information, see horizon's customization
# (https://docs.openstack.org/horizon/latest/configuration/customizing.html#horizon-customization-module-overrides)
#PROJECT_TABLE_EXTRA_INFO = {
# 'phone_num': _('Phone Number'),
#}
#USER_TABLE_EXTRA_INFO = {
# 'phone_num': _('Phone Number'),
#}

# Password will have an expiration date when using keystone v3 and enabling the
# feature.
# This setting allows you to set the number of days that the user will be alerted
# prior to the password expiration.
# Once the password expires keystone will deny the access and users must
# contact an admin to change their password.
#PASSWORD_EXPIRES_WARNING_THRESHOLD_DAYS = 0

File md5 checksum

$ md5sum /etc/openstack-dashboard/local_settings
0e53f197affdd94c9e25a4f6f7fdf14b /etc/openstack-dashboard/local_settings

8.3 Modify the httpd configuration file; otherwise accessing the dashboard later returns a 500 error

8.3.1 Edit /etc/httpd/conf.d/openstack-dashboard.conf

sed -i.bak  '3aWSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf

8.3.2 Restart httpd and memcached

systemctl restart httpd.service memcached.service

8.4 Log in to the dashboard

172.30.100.4:8080/dashboard

Domain: default

Username: admin

Password: ADMIN_PASS

(Screenshot: dashboard login page)

Home screen after logging in

(Screenshot: dashboard overview page after login)

If login fails with an error like the following

(Screenshot: login error message)

Solution

Perform the following on the node where the dashboard is installed

1. Modify the configuration file /etc/openstack-dashboard/local_settings:
Change SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
to SESSION_ENGINE = 'django.contrib.sessions.backends.file'

2. Restart httpd
systemctl restart httpd

9. Install the Block Storage service (cinder) on the controller and storage nodes

Official documentation for installing and configuring Rocky Block Storage (cinder)

Official description of the cinder service

This section describes how to install and configure the storage node for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device. The instructions use /dev/sdb, but you can substitute a different value for your particular node.

The service provisions logical volumes on this device using the LVM driver and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.

cinder services

Service | Description
cinder-volume | Provides storage space; supports LVM, NFS, GlusterFS, Ceph, and other backends
cinder-api | Receives external API requests
cinder-scheduler | Scheduler; decides which cinder-volume provides the storage space
cinder-backup | Backs up created volumes

Install and configure the controller node

Official documentation for installing and configuring the Rocky Block Storage service (cinder) on the controller node

9.1 Create the cinder database and grant privileges

mysql -e "CREATE DATABASE cinder;"
mysql -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';"
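
An optional check that the database and grants were created (using the same passwordless local mysql access as above):

mysql -e "SHOW DATABASES LIKE 'cinder';"
mysql -e "SHOW GRANTS FOR 'cinder'@'localhost';"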

9.2 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc

9.3 Create the service credentials

9.3.1 Create the cinder user

Set the password to CINDER_PASS

Choose either the interactive or the non-interactive way of setting the password

Setting the password non-interactively

$ openstack user create --domain default --password CINDER_PASS cinder
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 65b5343859e6409994d007f2de30570b |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Setting the password interactively

openstack user create --domain default --password-prompt cinder

9.3.2 Add the admin role to the cinder user

openstack role add --project service --user cinder admin

9.3.3 Create the cinderv2 and cinderv3 service entities

The Block Storage service requires two service entities

$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | efcdff53520142d2ac6b0953cf532340 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+

$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 910b223e07244359ae0480d579a0231a |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+

9.4 Create the Block Storage service API endpoints

Create the cinderv2 endpoints

$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | dcd7a491205f4c8a8a1af94fbd95d452 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | efcdff53520142d2ac6b0953cf532340 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e044f276dd624336b5c4bb51aa343a55 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | efcdff53520142d2ac6b0953cf532340 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e9c888a00e354a2cab3035b70b9e6c30 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | efcdff53520142d2ac6b0953cf532340 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

Create the cinderv3 endpoints

$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4f818560bac745dfa493dc53b3106cc3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 910b223e07244359ae0480d579a0231a |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e102e7b28f5f47409e4e231af0af4776 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 910b223e07244359ae0480d579a0231a |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4585dc9397b041b7a1a45d063d515dcb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 910b223e07244359ae0480d579a0231a |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

9.5 Install and configure the components

9.5.1 Install the packages

yum -y install openstack-cinder

9.5.2 Edit the /etc/cinder/cinder.conf file and complete the following actions

1. In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

2. In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

3. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS

4. In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
[DEFAULT]
# ...
my_ip = 10.0.0.11

5. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

Apply the changes with the following commands

\cp /etc/cinder/cinder.conf{,.bak}
grep '^[a-Z\[]' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 172.30.100.4
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

MD5 checksum of the resulting file

$ md5sum /etc/cinder/cinder.conf
a5023d5b6df47ce8d186d5d32623c076 /etc/cinder/cinder.conf

9.5.3 Sync the database

$ su -s /bin/sh -c "cinder-manage db sync" cinder
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT"

If the query lists tables, the sync succeeded

$ mysql cinder -e "show tables"|wc -l
36

9.6 Configure Compute to use Block Storage

Edit the /etc/nova/nova.conf file and add the following to it

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

MD5 checksum of the resulting file

$ md5sum /etc/nova/nova.conf
606a18d1be80cd7e0a57150ca0e5040f /etc/nova/nova.conf

9.7 Finalize installation

9.7.1 Restart the Compute API service

systemctl restart openstack-nova-api.service

9.7.2 Start the Block Storage services and configure them to start at boot

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9.8 Verify operation

The listing below is returned by cinder-api; cinder-scheduler should show a State of up

$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2020-05-29T10:58:08.000000 | - |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
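The unified client exposes the same data; an equivalent check (an extra not in the official steps) is shown below. A cinder-volume entry will only appear after the storage node is configured later on.

openstack volume service list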

Install and configure a storage node

Official Rocky documentation: installing and configuring cinder on the storage node

9.9 Prerequisites

9.9.1 Install the packages

yum -y install lvm2 device-mapper-persistent-data openstack-utils.noarch

9.9.2 Start the LVM metadata service and configure it to start at boot

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

9.9.3 Create the LVM physical volume /dev/vdb

If you are working in a virtual machine, attach a data disk first; the command echo "- - -" >/sys/class/scsi_host/host0/scan triggers a hot rescan (the host number is not necessarily 0; it may be 1, 2, 3, and so on)

View the newly added disk

$ fdisk -l /dev/vdb

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Create the physical volume (PV)

$ pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created.

9.9.4 Create the LVM volume group cinder-volumes

$ vgcreate cinder-volumes /dev/vdb
Volume group "cinder-volumes" successfully created

9.9.5 A note on LVM device scanning

Only instances can access Block Storage volumes, but the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems with both the underlying operating system volumes and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group

⚠️⚠️⚠️

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/vda device contains the operating system:

filter = [ "a/vda/", "a/vdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/vda device contains the operating system:

filter = [ "a/vda/", "r/.*/"]

Because these virtual machines were installed with LVM on the system disk, the filter must also accept the system disk /dev/vda

Edit the /etc/lvm/lvm.conf file and complete the following actions

In the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices:
devices {
...
filter = [ "a/vdb/", "r/.*/"]

Each item in the filter array begins with a (accept) or r (reject) and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can test filters with the vgs -vvvv command

Apply the changes with the following commands

\cp /etc/lvm/lvm.conf{,.bak} 
egrep -v '^$|#' /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf
sed -i '/^devices/a\\tfilter = [ "a/vda/", "a/vdb/", "r/.*/"]' /etc/lvm/lvm.conf

MD5 checksum of the resulting file

$ md5sum /etc/lvm/lvm.conf
572157ddd9d8b095ac37e89f4d1e603a /etc/lvm/lvm.conf
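To confirm the new filter behaves as intended, a quick check (assuming the vda/vdb layout above):

# vgs -vvvv logs its filter decisions to stderr; vda (system) and vdb (cinder-volumes) should be accepted
vgs -vvvv 2>&1 | grep -i filter | head
# the cinder-volumes volume group must still be visible
vgs cinder-volumes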

9.10 Install and configure components

9.10.1 Install the packages

targetcli provides the iSCSI target tooling

yum -y install openstack-cinder targetcli python-keystone

9.10.2 Edit the /etc/cinder/cinder.conf file and complete the following actions

In the [DEFAULT] section of the configuration file, the official guide simply names its LVM backend lvm. The name is arbitrary, but every name listed in enabled_backends must have its own matching section. For example:

# the [DEFAULT] section looks like this
[DEFAULT]
enabled_backends = lvm

# and each name defined in [DEFAULT] maps one-to-one to a section like this
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver # driver
volume_group = cinder-volumes # volume group name
iscsi_protocol = iscsi
iscsi_helper = lioadm


With multiple LVM backends, [DEFAULT] lists them all, and each listed name needs its own matching section, as below (a sketch of selecting between them with volume types follows the example):
[DEFAULT]
enabled_backends = lvm,lvm2,lvm3
# backend names are arbitrary; here we use sata for ordinary disks and ssd for solid-state disks
[DEFAULT]
enabled_backends = sata,ssd

[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata

[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd
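With multiple backends such as sata and ssd above, clients choose between them through volume types. A minimal sketch of that mapping, assuming the [ssd] backend above (the type and volume names here are illustrative):

# create a volume type and bind it to the volume_backend_name declared in cinder.conf
openstack volume type create ssd
openstack volume type set --property volume_backend_name=ssd ssd
# volumes created with this type are scheduled onto the [ssd] backend
openstack volume create --type ssd --size 10 test-ssd-vol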
1. In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

2. In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

3. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS

4. In the [DEFAULT] section, configure the my_ip option: replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

5. In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If the [lvm] section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

6. In the [DEFAULT] section, enable the LVM backend. Backend names are arbitrary; as an example, this guide uses the name of the driver as the name of the backend.
[DEFAULT]
# ...
enabled_backends = lvm

7. In the [DEFAULT] section, configure the location of the Image service API:
[DEFAULT]
# ...
glance_api_servers = http://controller:9292

8. In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

Apply the changes with the following commands

\cp /etc/cinder/cinder.conf{,.bak}
grep '^[a-Z\[]' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 172.30.100.6
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

MD5 checksum of the resulting file

$ md5sum /etc/cinder/cinder.conf
c88237e48f728cbe389b57c75b2be155 /etc/cinder/cinder.conf

9.10.3 Start the Block Storage volume service and its dependencies, and configure them to start at boot

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
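Back on the controller, the new backend should register shortly; re-running the earlier check should now also list a cinder-volume service (host suffixed with @lvm) in state up:

cinder service-list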

At this point, the cinder Block Storage service is installed on the controller and storage nodes!!!

10. Install the swift Object Storage service on the controller and object nodes

OpenStack Object Storage is a multi-tenant object storage system. It is highly scalable and can manage large amounts of unstructured data at low cost through a RESTful HTTP API.

Swift components

Name: Description

Proxy server (swift-proxy-server): accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It also serves file or container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcached.
Account server (swift-account-server): manages accounts defined with Object Storage.
Container server (swift-container-server): manages the mapping of containers, or folders, within Object Storage.
Object server (swift-object-server): manages actual objects, such as files, on the storage nodes.
Various periodic processes: perform housekeeping tasks on the large data store; the replication services ensure consistency and availability across the cluster, and other periodic processes include auditors, updaters, and reapers.
WSGI middleware: handles authentication, usually via OpenStack Identity.
swift client: lets users submit commands to the REST API through a command-line client authorized as an admin, reseller, or swift user.
swift-init: script that initializes the building of ring files, takes daemon names as parameters, and offers commands. Documented at https://docs.openstack.org/swift/latest/admin_guide.html#managing-services
swift-recon: CLI tool used to retrieve various metrics and telemetry about a cluster, as collected by the swift-recon middleware.
swift-ring-builder: storage ring build and rebalance utility. Documented at https://docs.openstack.org/swift/latest/admin_guide.html#managing-the-rings

Install and configure the controller node

Official Rocky documentation: installing and configuring swift on the controller node

10.1 Create the Identity service credentials

10.1.1 Source the admin credentials to gain access to admin-only CLI commands

source /opt/admin-openrc

10.1.2 Create the swift user

The password is set to SWIFT_PASS

Choose either the interactive or the non-interactive way of setting the password

Non-interactive

$ openstack user create --domain default --password SWIFT_PASS swift
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 312ab4f320434d30b76c9486463e2dea |
| name | swift |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Interactive

openstack user create --domain default --password-prompt swift

10.1.3 Add the admin role to the swift user

openstack role add --project service --user swift admin

10.1.4 Create the swift service entity

$ openstack service create --name swift \
--description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| enabled | True |
| id | a41ace8ca3bb42ec92e27a29503828e7 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+

10.2 Create the Object Storage service API endpoints

$ openstack endpoint create --region RegionOne \
object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
+--------------+-----------------------------------------------+
| Field | Value |
+--------------+-----------------------------------------------+
| enabled | True |
| id | 25063822e1c947f38189af370d97a0c2 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a41ace8ca3bb42ec92e27a29503828e7 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+--------------+-----------------------------------------------+

$ openstack endpoint create --region RegionOne \
object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
+--------------+-----------------------------------------------+
| Field | Value |
+--------------+-----------------------------------------------+
| enabled | True |
| id | 6f36da03e9344d84906f98a31a09ebe9 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a41ace8ca3bb42ec92e27a29503828e7 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+--------------+-----------------------------------------------+

$ openstack endpoint create --region RegionOne \
object-store admin http://controller:8080/v1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ffe6a96aeb224d8abe3e0c3de6c9e072 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a41ace8ca3bb42ec92e27a29503828e7 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1 |
+--------------+----------------------------------+

10.3 Install and configure components

10.3.1 Install the packages

yum -y install openstack-swift-proxy python-swiftclient \
python-keystoneclient python-keystonemiddleware memcached openstack-utils.noarch

10.3.2 Obtain the proxy service configuration file from the Object Storage source repository

curl -o /etc/swift/proxy-server.conf https://opendev.org/openstack/swift/raw/branch/stable/rocky/etc/proxy-server.conf-sample 

10.3.3 Edit the /etc/swift/proxy-server.conf file and complete the following actions

Watch out: the official documentation has a trap here. Port 35357 is no longer used after the Queens release, yet the document still shows auth_url = http://controller:35357. Use port 5000 instead, otherwise the swift verification later fails with a 500 error

1. In the [DEFAULT] section, configure the bind port, user, and configuration directory:
[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift

2. In the [pipeline:main] section, remove the tempurl and tempauth modules and add the authtoken and keystoneauth modules. Do not change the order of the modules!!!
For more information on other modules that enable additional features, see
https://docs.openstack.org/swift/latest/deployment_guide.html

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

3. In the [app:proxy-server] section, enable automatic account creation:
[app:proxy-server]
use = egg:swift#proxy
...
account_autocreate = True

4. In the [filter:keystoneauth] section, configure the operator roles:
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user

5. In the [filter:authtoken] section, configure Identity service access:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True

6. In the [filter:cache] section, configure the memcached location:
[filter:cache]
use = egg:swift#memcache
...
memcache_servers = controller:11211

Apply the changes with the following commands

\cp /etc/swift/proxy-server.conf{,.bak}
grep '^[a-Z\[]' /etc/swift/proxy-server.conf.bak > /etc/swift/proxy-server.conf
openstack-config --set /etc/swift/proxy-server.conf DEFAULT bind_port 8080
openstack-config --set /etc/swift/proxy-server.conf DEFAULT user swift
openstack-config --set /etc/swift/proxy-server.conf DEFAULT swift_dir /etc/swift
openstack-config --set /etc/swift/proxy-server.conf app:proxy-server use egg:swift#proxy
openstack-config --set /etc/swift/proxy-server.conf app:proxy-server account_autocreate True
openstack-config --set /etc/swift/proxy-server.conf filter:keystoneauth use egg:swift#keystoneauth
openstack-config --set /etc/swift/proxy-server.conf filter:keystoneauth operator_roles admin,user
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken paste.filter_factory keystonemiddleware.auth_token:filter_factory
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_url http://controller:5000
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken memcached_servers controller:11211
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_type password
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken project_domain_id default
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken user_domain_id default
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken project_name service
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken username swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken password SWIFT_PASS
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken delay_auth_decision True
openstack-config --set /etc/swift/proxy-server.conf filter:cache use egg:swift#memcache
openstack-config --set /etc/swift/proxy-server.conf filter:cache memcache_servers controller:11211
sed -i '/^pipeline/c pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' /etc/swift/proxy-server.conf

Install and configure the object nodes

Official Rocky documentation: installing and configuring swift on the storage nodes

This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. The instructions use /dev/sdb and /dev/sdc, but you can substitute different values for your particular nodes (here, the virtio disks vdb and vdc).

Although Object Storage supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS. For more information on horizontally scaling your environment, see the Deployment Guide

This section applies to Red Hat Enterprise Linux 7 and CentOS 7

10.4 Prerequisites

10.4.1 Install the packages

yum -y install xfsprogs rsync

10.4.2 Format the /dev/vdb and /dev/vdc devices as XFS

These virtual machines expose virtio disks, so vdb/vdc is used here instead of the official sdb/sdc; the names must match the device names added to the rings later.

mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc

10.4.3 Create the mount point directory structure

mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc

10.4.4 Edit the /etc/fstab file and add the following to it

cp /etc/fstab{,.bak}
cat >> /etc/fstab <<EOF
/dev/vdb /srv/node/vdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/vdc /srv/node/vdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
EOF

10.4.5 Mount the devices

mount /srv/node/vdb
mount /srv/node/vdc
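A quick sanity check that both devices mounted (paths assume the vdb/vdc layout used here):

df -h /srv/node/vdb /srv/node/vdc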

10.4.6 Create or edit the /etc/rsyncd.conf file to contain the following

The rsync service requires no authentication, so consider running it on a private network in production environments.

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node

On object01

\cp /etc/rsyncd.conf{,.bak}
cat > /etc/rsyncd.conf <<EOF
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.30.100.7

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
EOF

On object02

\cp /etc/rsyncd.conf{,.bak}
cat > /etc/rsyncd.conf <<EOF
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.30.100.8

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
EOF
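10.4.7 Start the rsyncd service and configure it to start at boot

The official guide includes this step on both object nodes; without it, account/container/object replication between the nodes cannot run:

systemctl enable rsyncd.service
systemctl start rsyncd.service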

10.5 Install and configure components

Perform the following steps on each object node

10.5.1 Install the packages

yum -y install openstack-swift-account openstack-swift-container \
openstack-swift-object openstack-utils.noarch

10.5.2 Obtain the account, container, and object service configuration files from the Object Storage source repository

curl -o /etc/swift/account-server.conf https://opendev.org/openstack/swift/raw/branch/stable/rocky/etc/account-server.conf-sample

curl -o /etc/swift/container-server.conf https://opendev.org/openstack/swift/raw/branch/stable/rocky/etc/container-server.conf-sample

curl -o /etc/swift/object-server.conf https://opendev.org/openstack/swift/raw/branch/stable/rocky/etc/object-server.conf-sample

10.5.3 Edit the /etc/swift/account-server.conf file and complete the following actions

1. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory. Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

2. In the [pipeline:main] section, enable the appropriate modules.
For more information on other modules that enable additional features, see
https://docs.openstack.org/swift/latest/deployment_guide.html
[pipeline:main]
pipeline = healthcheck recon account-server

3. In the [filter:recon] section, configure the recon (metering) cache directory:
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift

Apply the changes with the following commands

export IP=`ip a s eth0| awk -F '[ /]+' 'NR==3{print $3}'`
\cp /etc/swift/account-server.conf{,.bak}
grep '^[a-Z\[]' /etc/swift/account-server.conf.bak > /etc/swift/account-server.conf
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip ${IP}
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_port 6202
openstack-config --set /etc/swift/account-server.conf DEFAULT user swift
openstack-config --set /etc/swift/account-server.conf DEFAULT swift_dir /etc/swift
openstack-config --set /etc/swift/account-server.conf DEFAULT devices /srv/node
openstack-config --set /etc/swift/account-server.conf DEFAULT mount_check True
openstack-config --set /etc/swift/account-server.conf filter:recon use egg:swift#recon
openstack-config --set /etc/swift/account-server.conf filter:recon recon_cache_path /var/cache/swift

10.5.4 Edit the /etc/swift/container-server.conf file and complete the following actions

1. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory.
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

2. In the [pipeline:main] section, enable the appropriate modules.
For more information on other modules that enable additional features, see
https://docs.openstack.org/swift/latest/deployment_guide.html
[pipeline:main]
pipeline = healthcheck recon container-server

3. In the [filter:recon] section, configure the recon (metering) cache directory:
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift

Apply the changes with the following commands

export IP=`ip a s eth0| awk -F '[ /]+' 'NR==3{print $3}'`
\cp /etc/swift/container-server.conf{,.bak}
grep '^[a-Z\[]' /etc/swift/container-server.conf.bak > /etc/swift/container-server.conf
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip ${IP}
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_port 6201
openstack-config --set /etc/swift/container-server.conf DEFAULT user swift
openstack-config --set /etc/swift/container-server.conf DEFAULT swift_dir /etc/swift
openstack-config --set /etc/swift/container-server.conf DEFAULT devices /srv/node
openstack-config --set /etc/swift/container-server.conf DEFAULT mount_check True
openstack-config --set /etc/swift/container-server.conf filter:recon use egg:swift#recon
openstack-config --set /etc/swift/container-server.conf filter:recon recon_cache_path /var/cache/swift

10.5.5 Edit the /etc/swift/object-server.conf file and complete the following actions

1. In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory.
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

2. In the [pipeline:main] section, enable the appropriate modules.
For more information on other modules that enable additional features, see
https://docs.openstack.org/swift/latest/deployment_guide.html
[pipeline:main]
pipeline = healthcheck recon object-server

3. In the [filter:recon] section, configure the recon (metering) cache and lock directories:
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

Apply the changes with the following commands

export IP=`ip a s eth0| awk -F '[ /]+' 'NR==3{print $3}'`
\cp /etc/swift/object-server.conf{,.bak}
grep '^[a-Z\[]' /etc/swift/object-server.conf.bak > /etc/swift/object-server.conf
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip ${IP}
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_port 6200
openstack-config --set /etc/swift/object-server.conf DEFAULT user swift
openstack-config --set /etc/swift/object-server.conf DEFAULT swift_dir /etc/swift
openstack-config --set /etc/swift/object-server.conf DEFAULT devices /srv/node
openstack-config --set /etc/swift/object-server.conf DEFAULT mount_check True
openstack-config --set /etc/swift/object-server.conf filter:recon use egg:swift#recon
openstack-config --set /etc/swift/object-server.conf filter:recon recon_cache_path /var/cache/swift
openstack-config --set /etc/swift/object-server.conf filter:recon recon_lock_path /var/lock

10.5.6 Ensure proper ownership of the mount point directory structure

chown -R swift:swift /srv/node

10.5.7 Create the recon directory and ensure proper ownership of it

mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

10.6 Create and distribute the initial rings

Perform the following steps on the controller node

Before starting the Object Storage services, you must create the initial account, container, and object rings. The ring builder creates configuration files that each node uses to determine and deploy the storage architecture. For simplicity, this guide uses one region and two zones with 2^10 (1024) maximum partitions, 3 replicas of each object, and a minimum of 1 hour between moving a partition more than once. For Object Storage, a partition indicates a directory on a storage device rather than a conventional partition table. For more information, see the Deployment Guide

10.7 Create the account ring

The account server uses the account ring to maintain lists of containers

10.7.1 Change to the /etc/swift directory

cd /etc/swift

10.7.2 Create the base account.builder file

swift-ring-builder account.builder create 10 3 1

10.7.3 Add each storage node to the ring

Official example

swift-ring-builder account.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
--device DEVICE_NAME --weight DEVICE_WEIGHT

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that same node. For example, the first storage node from the installation section, using the /dev/sdb storage device with a weight of 100

Repeat this command for each storage device on each storage node. In the example architecture, the command is run in the following four variants

$ swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6202 --device vdb --weight 100
Device d0r1z1-172.30.100.7:6202R172.30.100.7:6202/vdb_"" with 100.0 weight got id 0

$ swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6202 --device vdc --weight 100
Device d1r1z1-172.30.100.7:6202R172.30.100.7:6202/vdc_"" with 100.0 weight got id 1

$ swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6202 --device vdb --weight 100
Device d2r1z2-172.30.100.8:6202R172.30.100.8:6202/vdb_"" with 100.0 weight got id 2

$ swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6202 --device vdc --weight 100
Device d3r1z2-172.30.100.8:6202R172.30.100.8:6202/vdc_"" with 100.0 weight got id 3

10.7.4 Verify the ring contents

$ swift-ring-builder account.builder
account.builder, build version 4, id 2738119c3e3c47b199313d7ad28f17cd
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file account.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 172.30.100.7:6202 172.30.100.7:6202 vdb 100.00 0 -100.00
1 1 1 172.30.100.7:6202 172.30.100.7:6202 vdc 100.00 0 -100.00
2 1 2 172.30.100.8:6202 172.30.100.8:6202 vdb 100.00 0 -100.00
3 1 2 172.30.100.8:6202 172.30.100.8:6202 vdc 100.00 0 -100.00

10.7.5 Rebalance the ring

$ swift-ring-builder account.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

10.8 Create the container ring

The container server uses the container ring to maintain lists of objects. However, it does not track object locations

10.8.1 Change to the /etc/swift directory

cd /etc/swift

10.8.2 Create the base container.builder file

swift-ring-builder container.builder create 10 3 1

10.8.3 Add each storage node to the ring

Official example

swift-ring-builder container.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME --weight DEVICE_WEIGHT

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that same node. For example, the first storage node from the installation section, using the /dev/sdb storage device with a weight of 100

Repeat this command for each storage device on each storage node. In the example architecture, the command is run in the following four variants

$ swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6201 --device vdb --weight 100
Device d0r1z1-172.30.100.7:6201R172.30.100.7:6201/vdb_"" with 100.0 weight got id 0

$ swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6201 --device vdc --weight 100
Device d1r1z1-172.30.100.7:6201R172.30.100.7:6201/vdc_"" with 100.0 weight got id 1

$ swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6201 --device vdb --weight 100
Device d2r1z2-172.30.100.8:6201R172.30.100.8:6201/vdb_"" with 100.0 weight got id 2

$ swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6201 --device vdc --weight 100
Device d3r1z2-172.30.100.8:6201R172.30.100.8:6201/vdc_"" with 100.0 weight got id 3

10.8.4 Verify the ring contents

$ swift-ring-builder container.builder
container.builder, build version 4, id c50b252ff09548ba8e9f1639516cd6b1
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file container.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 172.30.100.7:6201 172.30.100.7:6201 vdb 100.00 0 -100.00
1 1 1 172.30.100.7:6201 172.30.100.7:6201 vdc 100.00 0 -100.00
2 1 2 172.30.100.8:6201 172.30.100.8:6201 vdb 100.00 0 -100.00
3 1 2 172.30.100.8:6201 172.30.100.8:6201 vdc 100.00 0 -100.00

10.8.5 Rebalance the ring

$ swift-ring-builder container.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

10.9 Create the object ring

The object server uses the object ring to maintain lists of object locations on local devices

10.9.1 Change to the /etc/swift directory

cd /etc/swift

10.9.2 Create the base object.builder file

swift-ring-builder object.builder create 10 3 1

10.9.3 Add each storage node to the ring

Official example

swift-ring-builder object.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 --device DEVICE_NAME --weight DEVICE_WEIGHT

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that same node. For example, the first storage node from the installation section, using the /dev/sdb storage device with a weight of 100

Repeat this command for each storage device on each storage node. In the example architecture, the command is run in the following four variants

$ swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6200 --device vdb --weight 100
Device d0r1z1-172.30.100.7:6200R172.30.100.7:6200/vdb_"" with 100.0 weight got id 0

$ swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 172.30.100.7 --port 6200 --device vdc --weight 100
Device d1r1z1-172.30.100.7:6200R172.30.100.7:6200/vdc_"" with 100.0 weight got id 1

$ swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6200 --device vdb --weight 100
Device d2r1z2-172.30.100.8:6200R172.30.100.8:6200/vdb_"" with 100.0 weight got id 2

$ swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 172.30.100.8 --port 6200 --device vdc --weight 100
Device d3r1z2-172.30.100.8:6200R172.30.100.8:6200/vdc_"" with 100.0 weight got id 3

10.9.4 Verify the ring contents

$ swift-ring-builder object.builder
object.builder, build version 4, id 0a4d3c0a65aa4d3d9a0d9407c99312c6
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file object.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 172.30.100.7:6200 172.30.100.7:6200 vdb 100.00 0 -100.00
1 1 1 172.30.100.7:6200 172.30.100.7:6200 vdc 100.00 0 -100.00
2 1 2 172.30.100.8:6200 172.30.100.8:6200 vdb 100.00 0 -100.00
3 1 2 172.30.100.8:6200 172.30.100.8:6200 vdc 100.00 0 -100.00

10.9.5 Rebalance the ring

$ swift-ring-builder object.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

10.10 Distribute the ring configuration files

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and on any other nodes running the proxy service

scp /etc/swift/{account.ring.gz,container.ring.gz,object.ring.gz} object01:/etc/swift

scp /etc/swift/{account.ring.gz,container.ring.gz,object.ring.gz} object02:/etc/swift
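The ring files on every node must be byte-identical; the checksums make that easy to confirm (assumes the same SSH access used by scp above):

md5sum /etc/swift/*.ring.gz
ssh object01 md5sum /etc/swift/*.ring.gz
ssh object02 md5sum /etc/swift/*.ring.gz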

10.11 Finalize installation

Official Rocky documentation: finalizing the swift installation

10.11.1 Obtain the /etc/swift/swift.conf file from the Object Storage source repository

⚠️ Perform this step on the controller node

Original contents of /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = %SWIFT_HASH_PATH_SUFFIX%

Run the command

curl -o /etc/swift/swift.conf \
https://opendev.org/openstack/swift/raw/branch/stable/rocky/etc/swift.conf-sample

10.11.2 Edit the /etc/swift/swift.conf file and complete the following actions

⚠️ Perform this step on the controller node

1. In the [swift-hash] section, configure the hash path prefix and suffix for your environment. Replace HASH_PATH_PREFIX and HASH_PATH_SUFFIX with unique values; keep these values secret and do not change or lose them
[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX

2. In the [storage-policy:0] section, configure the default storage policy:
[storage-policy:0]
...
name = Policy-0
default = yes

Apply the changes with the following commands

\cp /etc/swift/swift.conf{,.bak}
grep '^[a-Z\[]' /etc/swift/swift.conf.bak > /etc/swift/swift.conf
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix HASH_PATH_SUFFIX
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix HASH_PATH_PREFIX
openstack-config --set /etc/swift/swift.conf storage-policy:0 name Policy-0
openstack-config --set /etc/swift/swift.conf storage-policy:0 default yes
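HASH_PATH_SUFFIX and HASH_PATH_PREFIX above are placeholders; one common way to generate unique secret values for them (an assumption, not part of the official steps):

# run twice and substitute the two values into the openstack-config commands above
openssl rand -hex 10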

10.11.3 Copy the swift.conf file to the /etc/swift directory on each storage node and on any other nodes running the proxy service

⚠️ Perform this step on the controller node

scp /etc/swift/swift.conf object01:/etc/swift
scp /etc/swift/swift.conf object02:/etc/swift

10.11.4 On all nodes, ensure proper ownership of the configuration directory

Run on the controller node and on both object storage nodes

chown -R root:swift /etc/swift

10.11.5 On the controller node and any other nodes running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot

⚠️ Perform this step on the controller node

systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
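The healthcheck middleware configured in the proxy pipeline earlier provides a cheap liveness probe; it should answer OK:

curl http://controller:8080/healthcheck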

10.11.6 On the storage nodes, start the Object Storage services and configure them to start at boot

Run on both object storage nodes

systemctl enable openstack-swift-account \
openstack-swift-account-auditor \
openstack-swift-account-reaper \
openstack-swift-account-replicator \
openstack-swift-container \
openstack-swift-container-auditor \
openstack-swift-container-replicator \
openstack-swift-container-updater \
openstack-swift-object \
openstack-swift-object-auditor \
openstack-swift-object-replicator \
openstack-swift-object-updater

systemctl start openstack-swift-account \
openstack-swift-account-auditor \
openstack-swift-account-reaper \
openstack-swift-account-replicator \
openstack-swift-container \
openstack-swift-container-auditor \
openstack-swift-container-replicator \
openstack-swift-container-updater \
openstack-swift-object \
openstack-swift-object-auditor \
openstack-swift-object-replicator \
openstack-swift-object-updater
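A quick way to confirm that all twelve daemons came up on each object node:

systemctl list-units 'openstack-swift-*' --state=running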

10.12 Verify operation

Official Rocky documentation: verifying swift operation

Perform the following steps on the controller node

⚠️⚠️⚠️

If you are using Red Hat Enterprise Linux 7 or CentOS 7 and one or more of these steps do not work, check the /var/log/audit/audit.log file for SELinux messages indicating denial of actions for the swift processes. If present, change the security context of the /srv/node directory to the lowest security level (s0) of the swift_data_t type, object_r role, and system_u user:

chcon -R system_u:object_r:swift_data_t:s0 /srv/node

10.12.1 Source credentials

⚠️⚠️⚠️ For reasons unknown (possibly a mistake on my part somewhere), sourcing the demo credentials as the official documentation instructs makes the swift stat command fail with a 403 permission error, so the admin credentials are loaded here instead

source /opt/admin-openrc

10.12.2 Show the service status

$ swift stat
Account: AUTH_a8cb8e52e5a44288b2ac1a216195ee10
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1590683616.82847
X-Timestamp: 1590683616.82847
X-Trans-Id: tx85844a798ed34d998b692-005ecfe7e0
Content-Type: text/plain; charset=utf-8
X-Openstack-Request-Id: tx85844a798ed34d998b692-005ecfe7e0

10.12.3 Create the container1 container

$ openstack container create container1
+---------------------------------------+------------+------------------------------------+
| account | container | x-trans-id |
+---------------------------------------+------------+------------------------------------+
| AUTH_a8cb8e52e5a44288b2ac1a216195ee10 | container1 | tx3e166ae00ded4264a0dbe-005ecfea29 |
+---------------------------------------+------------+------------------------------------+

10.12.4 Upload a test file to the container1 container

# create a test file first
echo test >/tmp/test.txt

$ openstack object create container1 /tmp/test.txt
+---------------+------------+----------------------------------+
| object | container | etag |
+---------------+------------+----------------------------------+
| /tmp/test.txt | container1 | d8e8fca2dc0f896fd7cb4cb0031ba249 |
+---------------+------------+----------------------------------+

10.12.5 List the files in the container1 container

$ openstack object list container1
+---------------+
| Name |
+---------------+
| /tmp/test.txt |
+---------------+

10.12.6 Download the test file from the container1 container

# first delete the local test file /tmp/test.txt
rm /tmp/test.txt

# then download it again; if the download succeeds and the content is unchanged, everything works
openstack object save container1 /tmp/test.txt
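Since the etag reported at upload time is the MD5 of the object, the round trip can be verified directly:

# should print d8e8fca2dc0f896fd7cb4cb0031ba249, matching the etag above
md5sum /tmp/test.txt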

At this point, the swift Object Storage service installation is complete

For more Rocky services, refer to the official documentation

11. Install and configure the backup service (optional)

Install and configure the backup service. For simplicity, this configuration uses the Block Storage node and the Object Storage (swift) driver, and therefore depends on the Object Storage service

Before installing and configuring the backup service, you must install and configure a storage node

Perform the following steps on the Block Storage node

11.1 Install the packages

yum -y install openstack-cinder

11.2 Edit the /etc/cinder/cinder.conf file and complete the following actions

In the [DEFAULT] section, configure backup options:

[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL

# apply with the following commands
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.swift
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_swift_url http://controller:8080/v1/AUTH_ee435c972a7a476cadbd2c9ad782c6f0

The AUTH_ suffix of the public URL is project-specific, so your value will differ from the one above; use the URL returned by your own catalog

Replace SWIFT_URL with the URL of the Object Storage service. You can find the URL by showing the object-store API endpoints; on the controller node, run openstack catalog show object-store

$ openstack catalog show object-store
+-----------+-----------------------------------------------------------------------------+
| Field | Value |
+-----------+-----------------------------------------------------------------------------+
| endpoints | RegionOne |
| | admin: http://controller:8080/v1 |
| | RegionOne |
| | public: http://controller:8080/v1/AUTH_a8cb8e52e5a44288b2ac1a216195ee10 |
| | RegionOne |
| | internal: http://controller:8080/v1/AUTH_a8cb8e52e5a44288b2ac1a216195ee10 |
| | |
| id | b5169189845a4fb1b80fe1ab06584ffc |
| name | swift |
| type | object-store |
+-----------+-----------------------------------------------------------------------------+

11.3 Start the Block Storage backup service and configure it to start at boot

systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service
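A minimal smoke test, assuming an existing volume (the names vol1 and vol1-backup are hypothetical):

openstack volume backup create --name vol1-backup vol1
openstack volume backup list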

At this point, the Block Storage backup service installation on the storage node is complete!!!
