360SDN.COM

OpenStack Queens Deployment and Installation in Detail

Source: 2019-04-19 11:13:37

I. Deployment Environment
Hardware requirements
 
Operating system:
 
CentOS 7
Kernel version:
 
[root@controller ~]# uname -m
x86_64
[root@controller ~]# uname -r
3.10.0-862.3.2.el7.x86_64
Note: this deployment manually builds a community OpenStack Queens environment on three nodes.
 
II. OpenStack Overview
The OpenStack project is an open-source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features.

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services. Each service offers an application programming interface (API) that facilitates this integration.

This guide walks new OpenStack users with sufficient Linux experience through deploying the major OpenStack services step by step, using a functional example architecture. It builds a minimal environment intended for learning only.
 
III. OpenStack Architecture Overview
1. Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
 
 
2. Logical architecture
The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:

To design, deploy, and configure OpenStack, learners must understand the logical architecture.
As shown in the conceptual architecture, OpenStack consists of several independent parts, named OpenStack services. All services authenticate through the Keystone identity service.

Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.

Internally, an OpenStack service is composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them, and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.
For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring an OpenStack cloud, you can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.

Users can access OpenStack via the web-based user interface implemented by the Horizon dashboard, via command-line clients, and by issuing API requests through tools such as browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.
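As an illustration of those REST calls, the body of a Keystone v3 password-authentication request can be built and inspected locally before sending it with curl. The "controller" host, admin user, and 123456 password below are the values used later in this guide; the endpoint is only reachable once Keystone is installed, so the curl command is shown commented out.

```shell
# Build the v3 password-auth request body that every client uses under the hood.
cat > /tmp/keystone-auth.json <<'EOF'
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": {"name": "Default"},
          "password": "123456"
        }
      }
    },
    "scope": {
      "project": {"name": "admin", "domain": {"name": "Default"}}
    }
  }
}
EOF
# Validate the JSON locally; on a live deployment the token comes back in the
# X-Subject-Token response header.
python3 -m json.tool /tmp/keystone-auth.json > /dev/null && echo "request body OK"
# curl -si -H "Content-Type: application/json" \
#   -d @/tmp/keystone-auth.json http://controller:5000/v3/auth/tokens
```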
 
IV. OpenStack Service Deployment
Prerequisites (run the following commands on all nodes)
1. Virtual machines

controller   4c+8g+100g   172.16.14.224   NAT internet access
compute      2c+4g+100g   172.16.14.225   NAT internet access
cinder       2c+4g+100g   172.16.14.226   NAT internet access
2. Set hostnames and configure name resolution: edit /etc/hosts and add the following entries
 
vi /etc/hosts

172.16.14.224 controller openstack-controller.com
172.16.14.225 compute openstack-compute.com
172.16.14.226 cinder openstack-cinder.com
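A quick sanity check of those entries can be scripted. The snippet below parses a copy of the three lines (written to a temporary file so it runs anywhere) and prints the short-name-to-IP mapping:

```shell
# Offline copy of the /etc/hosts entries from this guide.
cat > /tmp/hosts.snippet <<'EOF'
172.16.14.224 controller openstack-controller.com
172.16.14.225 compute openstack-compute.com
172.16.14.226 cinder openstack-cinder.com
EOF
for h in controller compute cinder; do
  # Print "name -> IP" for each short hostname.
  awk -v h="$h" '$2 == h {print h, "->", $1}' /tmp/hosts.snippet
done
```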
 
3. Disable the firewall and SELinux
 
[root@controller ~]# vi /etc/selinux/config 
 
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@controller ~]# systemctl disable firewalld
[root@controller ~]# systemctl stop firewalld
 
4. Verify network connectivity
On the controller node:

[root@controller ~]# ping openstack.org
On the compute node:

[root@compute ~]# ping openstack.org
[root@compute ~]# ping controller
5. Configure the Aliyun yum repository

Back up the existing repo file:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Download the replacement:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
or:
 
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
6. Install the NTP service (all nodes)

## controller node ##
Install the package:

yum install chrony -y
Edit /etc/chrony.conf to configure the upstream time source and the clients allowed to sync from this node:

server time.windows.com iburst  ## upstream source the controller syncs from
allow 172.16.14.0/24            ## subnet allowed to sync time from this node
Enable the NTP service at boot and start it:
 
systemctl enable chronyd.service
systemctl start chronyd.service
## other nodes ##
Install the package:

yum install chrony -y
Point all other nodes at the controller for time synchronization:

vi /etc/chrony.conf
server controller iburst
Restart the NTP service:

systemctl restart chronyd.service
Verify the clock synchronization service.

On the controller node, run:

chronyc sources

An asterisk (*) in the MS column indicates the server NTP is currently synchronized to.
On the other nodes, run chronyc sources as well.

Note: in day-to-day operations, clock drift is a common problem and can lead to cluster split-brain.
 
OpenStack package installation and configuration
Note: unless otherwise stated, perform the following steps on all nodes.
1. Install the OpenStack repository (Queens release)

yum install centos-release-openstack-queens -y
2. Upgrade the packages on all nodes

yum upgrade
3. Install the OpenStack client

yum install python-openstackclient -y
4. Install openstack-selinux

yum install openstack-selinux -y
Install the database (controller node)
Most OpenStack services use a SQL database to store information; the database typically runs on the controller node. This guide uses MariaDB (MySQL-compatible).

Install the packages:

yum install mariadb mariadb-server python2-PyMySQL -y
Edit /etc/my.cnf.d/mariadb-server.cnf and complete the following:
 
[root@controller ~]# vi /etc/my.cnf.d/mariadb-server.cnf
 
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
 
# this is read by the standalone daemon and embedded servers
[server]
 
# this is only for the mysqld standalone daemon
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mysqld/mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 172.16.14.224
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Note: bind-address must be the controller node's management IP.

Enable the service at boot and start it:

systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script.
 
[root@controller ~]# mysql_secure_installation
 
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
 
In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none): 
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] 
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] 
 ... Success!
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] 
 ... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] 
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] 
 ... Success!
Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
 
Thanks for using MariaDB!
Install and configure RabbitMQ on the controller node
1. Install the message queue component

yum install rabbitmq-server -y
2. Enable the service at boot and start it

systemctl enable rabbitmq-server.service;systemctl start rabbitmq-server.service
3. Add the openstack user

rabbitmqctl add_user openstack openstack

4. Configure permissions for the openstack user

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Test logging in to the web UI:
http://172.16.14.224:15672, with openstack/openstack. Note: the web UI is provided by the rabbitmq_management plugin; if the page is unreachable, enable it with rabbitmq-plugins enable rabbitmq_management and restart the service.
 
 
Install the Memcached cache service (controller node)
Note: the Identity service uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend securing it with a combination of firewalling, authentication, and encryption.

1. Install the components

yum install memcached python-memcached -y
2. Edit /etc/sysconfig/memcached

vi /etc/sysconfig/memcached

OPTIONS="-l 172.16.14.224,::1,controller"
3. Enable the service at boot and start it

systemctl enable memcached.service;systemctl start memcached.service
Check the memcached port:
[root@openstack-controller ~]# netstat -anltp | grep memcache
tcp    0   0 172.16.14.224:11211   0.0.0.0:*   LISTEN   14940/memcached
tcp    0   0 127.0.0.1:11211       0.0.0.0:*   LISTEN   14940/memcached
tcp6   0   0 ::1:11211             :::*        LISTEN   14940/memcached
 
 
Install the etcd service (controller)
1. Install the package

yum install etcd -y
2. Edit /etc/etcd/etcd.conf, setting the ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS options to the controller's management IP:

vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.14.224:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.14.224:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.14.224:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.14.224:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.14.224:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
 
3. Enable the service at boot and start it

systemctl enable etcd;systemctl start etcd
Install the Keystone identity service (controller)


Generate a random value to use as the administration token in the initial configuration:
 
openssl rand -hex 10
 
admin_token = 8e79c25cae896e43449b
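The literal token above is only an example; each deployment should generate its own value. A minimal sketch of the step:

```shell
# 10 random bytes rendered as 20 hex characters, suitable for admin_token.
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "admin_token = $ADMIN_TOKEN"
```

Keep the generated value; it is pasted into keystone.conf in the next step.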
 
1. Create the keystone database and grant access
 
mysql -uroot -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
2. Install and configure the components

yum install openstack-keystone httpd mod_wsgi -y
3. Edit the keystone configuration file /etc/keystone/keystone.conf

In the ``[DEFAULT]`` section, define the value of the initial administration token:
 
[DEFAULT]
 
...
 
admin_token = ADMIN_TOKEN
 
Replace ``ADMIN_TOKEN`` with the random value generated in the previous step. In the [database] section, configure database access; in the [token] section, configure the Fernet token provider:
 
[database]
 
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
 
provider = fernet
4. Populate the keystone database

su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the Fernet key repositories
 
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6. Bootstrap the Identity service
 
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the Apache HTTP server
1. Edit /etc/httpd/conf/httpd.conf and set the ServerName option
 
ServerName controller
2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file
 
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
3. Enable the service at boot and start it
 
systemctl enable httpd.service;systemctl start httpd.service
4. Configure the environment variables
 
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Create a domain, projects, users, and roles
1. Create a domain

[root@controller ~]# openstack domain create --description "Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Domain                           |
| enabled     | True                             |
| id          | 199658b1d0234c3cb8785c944aa05780 |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+
2. Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 03e700ff43e44b29b97365bac6c7d723 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
3. Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 61f8c9005ca84477b5bdbf485be1a546 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
4. Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa794c034a53472c827a94e6a6ad12c1 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
5. Create the user role

[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 15ea413279a74770b79630b75932a596 |
| name      | user                             |
+-----------+----------------------------------+
6. Add the user role to the demo project and user

openstack role add --project demo --user demo user
Note: this command returns no output on success.

Verify operations
1. Unset the environment variables
 
unset OS_AUTH_URL OS_PASSWORD
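Why these two variables: with OS_AUTH_URL and OS_PASSWORD cleared, the client must take the auth URL from the command line and prompt for the password, which is exactly what the verification below exercises. A minimal illustration of the effect (values from this guide):

```shell
export OS_AUTH_URL=http://controller:35357/v3
export OS_PASSWORD=123456
unset OS_AUTH_URL OS_PASSWORD
# Both are now empty, so the openstack client can no longer read them
# from the environment.
[ -z "$OS_AUTH_URL" ] && [ -z "$OS_PASSWORD" ] && echo "credentials cleared"
```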
2. Request an authentication token as the admin user
 
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
[root@controller ~]#  openstack --os-auth-url http://controller:35357/v3 \
>   --os-project-domain-name Default --os-user-domain-name Default \
>   --os-project-name admin --os-username admin token issue
Password: 
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2018-06-25T07:45:18+0000                                                                                                                                                                |
| id         | gAAAAABawH_-ke3POs9LLzpEEH3Wziuk6VlQmNZCtxlDovLaSmg_-dOOUSDWsF-gw9we4QvcHzdO5Ahc3eEdDl6sIztZ60QQTG3x5Kbt_75EbWCZsBa2HkybZ-nJYuN4o3tQugse2BDcs8HF7bT1pAtoW0UM29RQNlCMdvx9jfcIT4EBit1SMKM |
| project_id | 4205b649750d4ea68ff5bea73de0faae                                                                                                                                                        |
| user_id    | 475b31138acc4cc5bb42ca64af418963                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
3. Request an authentication token as the demo user
 
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
>   --os-project-domain-name Default --os-user-domain-name Default \
>   --os-project-name demo --os-username demo token issue
Password: 
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2018-06-25T07:45:58+0000                                                                                                                                                                |
| id         | gAAAAABawIAmwGuiyDMjhqTmkwgDi0hKyj55WCDaMdPvyr4H8ZJbBNt7cUTtQ2AEHdP8Z_PRB4RI0uiJIvtOoMI0DUmMrKsmZU5G95tKY4y-kXPvvqdd8_JdUvQN4MgCStb-ZZ3OpNwN6500C891M8DTA6W1pWR8julBNaFrEQdlllhreOfdLc4 |
| project_id | 61f8c9005ca84477b5bdbf485be1a546                                                                                                                                                        |
| user_id    | fa794c034a53472c827a94e6a6ad12c1                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create OpenStack client environment scripts
1. Create the admin-openrc script
 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create the demo-openrc script
 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
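Sourcing such a script simply exports the OS_* variables that the openstack client reads; this can be checked without a running cloud (the file is written to /tmp here purely for the demonstration):

```shell
# Write the demo credentials file from this guide and source it.
cat > /tmp/demo-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
. /tmp/demo-openrc
# List what the client will see:
env | grep '^OS_' | sort
```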
3. Use the scripts to request an authentication token
 
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2018-06-25T08:17:29+0000                                                                                                                                                                |
| id         | gAAAAABawIeJ0z-3R2ltY6ublCGqZX80AIi4tQUxqEpw0xvPsFP9BLV8ALNsB2B7bsVivGB14KvhUncdoRl_G2ng5BtzVKAfzHyB-OxwiXeqAttkpQsuLCDKRHd3l-K6wRdaDqfNm-D1QjhtFoxHOTotOcjtujBHF12uP49TjJtl1Rrd6uVDk0g |
| project_id | 4205b649750d4ea68ff5bea73de0faae                                                                                                                                                        |
| user_id    | 475b31138acc4cc5bb42ca64af418963                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Install the Glance image service (controller)
1. Create the glance database and grant access
 
mysql -uroot -p
 
CREATE DATABASE glance;
 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY '123456';
2. Source the admin credentials and create the service credentials

. admin-openrc
Create the glance user:
 
 
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | dd2363d365624c998dfd788b13e1282b |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Add the admin role to the glance user and the service project:

openstack role add --project service --user glance admin
Note: this command returns no output on success.

Create the glance service entity:
 
[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 5927e22c745449869ff75b193ed7d7c6 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
3. Create the Image service API endpoints
 
[root@controller ~]# openstack endpoint create --region RegionOne  image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0822449bf80f4f6897be5e3240b6bfcc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne  image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f18ae583441b4d118526571cdc204d8a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79eadf7829274b1b9beb2bfb6be91992 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
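The three endpoint-creation commands differ only in the interface argument. A small loop (printing, not executing, the commands) makes the pattern explicit and is easy to adapt for the other services in this guide:

```shell
# Print the three endpoint-creation commands for the image service.
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done
```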
Install and configure the components
1. Install the packages

yum install openstack-glance -y
2. Edit /etc/glance/glance-api.conf
 
[database]
 
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
 
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
 
flavor = keystone
[glance_store]
 
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3. Edit /etc/glance/glance-registry.conf
 
[database]
 
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
 
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
 
flavor = keystone
4. Populate the Image service database, then enable and start the services
 
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service  openstack-glance-registry.service
Verify operations
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.
For more information about how to download and build images, see the OpenStack Virtual Machine Image Guide: https://docs.openstack.org/image-guide/
For information about how to manage images, see the OpenStack End User Guide: https://docs.openstack.org/queens/user/

1. Source the admin credentials and download the image

 . admin-openrc

 wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
2. Upload the image
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility, so that all projects can access it:
 
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img  --disk-format qcow2 --container-format bare  --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2018-05-23T08:00:05Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/916faa2b-e292-46e0-bfe4-0f535069a1a0/file |
| id               | 916faa2b-e292-46e0-bfe4-0f535069a1a0                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 4205b649750d4ea68ff5bea73de0faae                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-05-23T08:00:06Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
3. List the uploaded image
 
 
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+
Note: for the full set of glance configuration options, see https://docs.openstack.org/glance/queens/configuration/index.html

Install and configure the Compute service on the controller node
1. Create the nova_api, nova, and nova_cell0 databases
 
mysql -uroot -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
Grant access to the databases:
 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
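The same pair of GRANT statements recurs for every database in this guide. A small generator (a convenience sketch, using the '123456' password from this guide) prints them all so they can be pasted into the mysql prompt:

```shell
# For each service database, grant to its service user at localhost and '%'.
# Note: nova_api and nova_cell0 both belong to the 'nova' user, hence ${db%%_*}.
for db in keystone glance nova nova_api nova_cell0; do
  user=${db%%_*}
  for host in localhost '%'; do
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s' IDENTIFIED BY '123456';\n" \
      "$db" "$user" "$host"
  done
done
```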
2. Create the nova user
 
[root@controller ~]# . admin-openrc
 
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8e72103f5cc645669870a630ffb25065 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
3. Add the admin role to the nova user
 
openstack role add --project service --user nova admin
4. Create the nova service entity
 
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 9f8f8d8cb8e542b09694bee6016cc67c |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
5. Create the Compute API service endpoints
 
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | cf260d5a56344c728840e2696f44f9bc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
 
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f308f29a78e04b888c7418e78c3d6a6d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 022d96fa78de4b73b6212c09f13d05be |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
Create a placement service user:
 
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa239565fef14492ba18a649deaa6f3c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
6. Add the admin role to the placement user in the service project
 
openstack role add --project service --user placement admin
7. Create the Placement API service entry in the service catalog
 
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 32bb1968c08747ccb14f6e4a20cd509e |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
8. Create the Placement API service endpoints
 
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b856962188484f4ba6fad500b26b00ee |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
 
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 62e5a3d82a994f048a8bb8ddd1adc959 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
 
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f12f81ff7b72416aa5d035b8b8cc2605 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
Install and configure the components
1. Install the packages
 
 yum install openstack-nova-api openstack-nova-conductor  openstack-nova-console openstack-nova-novncproxy  openstack-nova-scheduler openstack-nova-placement-api
2. Edit /etc/nova/nova.conf
 
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 172.16.14.224
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
 
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
 
connection = mysql+pymysql://nova:123456@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Due to a bug in the package, add the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf
 
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
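This workaround can be scripted idempotently so re-running it does not duplicate the block; a sketch using a temp file in place of /etc/httpd/conf.d/00-nova-placement-api.conf (the Apache 2.2 branch is omitted here for brevity — use the full block above on a real host):

```shell
# Sketch: append the access block only if it is not already present.
# A temp file stands in for /etc/httpd/conf.d/00-nova-placement-api.conf.
conf=$(mktemp)
block='<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
</Directory>'
grep -q '<Directory /usr/bin>' "$conf" || printf '%s\n' "$block" >> "$conf"
grep -q '<Directory /usr/bin>' "$conf" || printf '%s\n' "$block" >> "$conf"  # second run is a no-op
grep -c 'Require all granted' "$conf"
```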
4. Restart the httpd service
 
systemctl restart httpd
5. Sync the nova-api database
 
 su -s /bin/sh -c "nova-manage api_db sync" nova
The database sync reports an error:
 
 
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Traceback (most recent call last):
  File "/usr/bin/nova-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1597, in main
    config.parse_args(sys.argv)
  File "/usr/lib/python2.7/site-packages/nova/config.py", line 52, in parse_args
    default_config_files=default_config_files)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in __call__
    else sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts
    return self._parse_config_files()
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3183, in _parse_config_files
    ConfigParser._parse_file(config_file, namespace)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file
    raise ConfigFileParseError(pe.filename, str(pe))
oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/nova/nova.conf: at /etc/nova/nova.conf:8, No ':' or '=' found in assignment: '/etc/nova/nova.conf'
Per the error message, comment out line 8 of /etc/nova/nova.conf (a stray line the parser cannot interpret as a key = value assignment), which resolves the failure:
 
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
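The fix above (commenting out the line number that oslo_config reports) can be done with sed; a sketch against a temp file standing in for /etc/nova/nova.conf — adjust the line number to whatever your traceback reports:

```shell
# Sketch: comment out the offending line by number, then confirm.
conf=$(mktemp)
printf '%s\n' '[DEFAULT]' 'enabled_apis = osapi_compute,metadata' '/etc/nova/nova.conf' > "$conf"
bad_line=3   # the traceback in this deployment pointed at line 8
sed -i "${bad_line}s/^/#/" "$conf"
grep -n '^#' "$conf"
```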
6. Register the cell0 database
 
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
7. Create the cell1 cell
 
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
6c689e8c-3e13-4e6d-974c-c2e4e22e510b
8. Sync the nova database
 
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
9. Verify that the nova, cell0, and cell1 databases are registered correctly
 
 
[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 6c689e8c-3e13-4e6d-974c-c2e4e22e510b | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
10. Enable the services to start at boot, and start them
 
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service
Install and configure the compute node service
1. Install the package
 
yum install openstack-nova-compute
2. Edit /etc/nova/nova.conf
 
[DEFAULT]
 
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 172.16.14.225
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Enable the services to start at boot, and start them
 
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Note: if the nova-compute service fails to start, check /var/log/nova/nova-compute.log; you may see an error like the following
 
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver [req-094dd87e-7c1a-4346-a684-7401249caf0c - - - - -] Unable to load the virtuali
zation driver: ImportError: Class NoopFirewallDrive cannot be found (['Traceback (most recent call last):\n', '  File "/usr/lib/pyth
on2.7/site-packages/oslo_utils/importutils.py", line 32, in import_class\n    return getattr(sys.modules[mod_str], class_str)\n', "A
ttributeError: 'module' object has no attribute 'NoopFirewallDrive'\n"])
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver Traceback (most recent call last):
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver   File "/usr/lib/python2.7/site-packages/nova/virt/driver.py", line 1693, in loa
d_compute_driver
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver     virtapi)
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver   File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in
 import_object
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver     return import_class(import_str)(*args, **kwargs)
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 360,
 in __init__
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver     host=self._host)
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver   File "/usr/lib/python2.7/site-packages/nova/virt/firewall.py", line 34, in loa
d_driver
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver     fw_class = importutils.import_class(CONF.firewall_driver or default)
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver   File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 36, in
 import_class
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver     traceback.format_exception(*sys.exc_info())))
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver ImportError: Class NoopFirewallDrive cannot be found (['Traceback (most recent c
all last):\n', '  File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 32, in import_class\n    return getattr(sy
s.modules[mod_str], class_str)\n', "AttributeError: 'module' object has no attribute 'NoopFirewallDrive'\n"])
2018-05-25 14:25:46.317 2300 ERROR nova.virt.driver 
 
 
 
The traceback shows the configured class name is misspelled: NoopFirewallDrive is missing the trailing "r". Correct /etc/nova/nova.conf as follows:
 
firewall_driver = nova.virt.firewall.NoopFirewallDriver
 
compute_driver = libvirt.LibvirtDriver
 
virt_type = qemu
 
After this change, the compute service restarts successfully.
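Typos like this in driver paths can be caught before a restart; a sketch that checks and corrects the spelling in a temp copy (standing in for /etc/nova/nova.conf):

```shell
# Sketch: detect and fix a misspelled firewall_driver class path.
conf=$(mktemp)
echo 'firewall_driver = nova.virt.firewall.NoopFirewallDrive' > "$conf"
# The correct class ends in "Driver"; repair the value if it does not.
if ! grep -Eq '^firewall_driver *=.*NoopFirewallDriver$' "$conf"; then
    sed -i 's/NoopFirewallDrive$/NoopFirewallDriver/' "$conf"
fi
grep '^firewall_driver' "$conf"
```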
 
4. Add the compute node to the cell database (on controller)
Verify how many compute nodes are registered in the database
 
[root@controller ~]# . admin-openrc
 
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
|  8 | nova-compute | compute | nova | enabled | up    | 2018-04-01T22:24:14.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
5. Discover the compute hosts
 
 
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Found 1 unmapped computes in cell: 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Checking host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd
Creating host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd
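Running discover_hosts is needed each time a new compute node is added. Alternatively, nova can discover hosts periodically; a hedged fragment for the controller's /etc/nova/nova.conf (this option exists in Queens; 300 seconds is just an example interval):

```ini
[scheduler]
# Look for unmapped compute hosts in cells every 300 seconds
discover_hosts_in_cells_interval = 300
```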
Verify Compute service operation on the controller node
1. List the service components
 
[root@controller ~]# . admin-openrc
 
[root@controller ~]# openstack compute service list
+----+------------------+----------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host           | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller     | internal | enabled | up    | 2018-04-01T22:25:29.000000 |
|  2 | nova-conductor   | controller     | internal | enabled | up    | 2018-04-01T22:25:33.000000 |
|  3 | nova-scheduler   | controller     | internal | enabled | up    | 2018-04-01T22:25:30.000000 |
|  6 | nova-conductor   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:55.000000 |
|  7 | nova-scheduler   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:59.000000 |
|  8 | nova-compute     | compute        | nova     | enabled | up    | 2018-04-01T22:25:34.000000 |
|  9 | nova-consoleauth | ansible-server | internal | enabled | up    | 2018-04-01T22:25:57.000000 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
2. List the API endpoints in the Identity service to verify connectivity with it:
 
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
3. List images
[root@controller ~]# openstack image list
+--------------------------------------+-------------------+--------+
| ID                                   | Name              | Status |
+--------------------------------------+-------------------+--------+
| 77721f3d-f353-4c1f-9f1d-bba32c859d4b | CentOS-6.8-x86_64 | active |
| c469bbb9-72cd-4e56-a9fa-c6ae3b1bc3f0 | cirros            | active |
+--------------------------------------+-------------------+--------+
 
4. Check that the cells and placement APIs are working
 
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
Nova reference: https://docs.openstack.org/nova/queens/admin/index.html
 
Install and configure Neutron networking on the controller node
1. Create the neutron database and grant privileges
 
mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'   IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'   IDENTIFIED BY '123456';
2. Create the service credentials
 
. admin-openrc
openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user
 
openstack role add --project service --user neutron admin
Create the neutron service
 
openstack service create --name neutron   --description "OpenStack Networking" network
3. Create the network service endpoints
 
openstack endpoint create --region RegionOne  network public http://controller:9696
openstack endpoint create --region RegionOne  network internal http://controller:9696
openstack endpoint create --region RegionOne  network admin http://controller:9696
Configure networking (controller node)
1. Install the components
 
yum install openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables
2. Configure the server component; edit /etc/neutron/neutron.conf
 
[database]
 
connection = mysql+pymysql://neutron:123456@controller/neutron
[DEFAULT]
 
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
 
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
 
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
 
lock_path = /var/lib/neutron/tmp
Configure the ML2 (layer 2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini
 
[ml2]
 
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
 
flat_networks = provider
[securitygroup]
 
enable_ipset = true
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
 
[linux_bridge]
physical_interface_mappings = provider:ens6f0
[vxlan]
enable_vxlan = false
[securitygroup]
 
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini
 
[DEFAULT]
 
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini
 
[DEFAULT]
 
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
 
[neutron]
 
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
 
Finalize the installation
1. Create a symbolic link for the plug-in
 
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Sync the database
 
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
3. Restart the Compute API service
 
systemctl restart openstack-nova-api.service
4. Enable the networking services to start at boot, and start them
 
systemctl enable neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
systemctl start neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
Configure networking on the compute node
1. Install the components
 
yum install openstack-neutron-linuxbridge ebtables ipset
2. Configure the common component
 
Edit /etc/neutron/neutron.conf
 
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure networking
1. Configure the Linux bridge agent; edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
 
[linux_bridge]
 
physical_interface_mappings = provider:ens06777728
[vxlan]
enable_vxlan = false
[securitygroup]
 
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Compute service on this node to use Networking
Edit /etc/nova/nova.conf
 
[neutron]
 
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Finalize the installation
1. Restart the Compute service
 
systemctl restart openstack-nova-compute.service
2. Enable the Linux bridge agent to start at boot, and start it
 
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Install the Horizon service on the controller node
1. Install the package
 
yum install openstack-dashboard -y
Edit /etc/openstack-dashboard/local_settings
 
OPENSTACK_HOST = "172.16.14.224"
 
ALLOWED_HOSTS = ['*',]
Configure memcached session storage
 
 
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
 
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '172.16.14.224:11211',
    }
}
Enable the Identity API version 3
 
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains
 
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure user as the default role for users created through the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Configure the API versions
 
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
 
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
 
OPENSTACK_NEUTRON_NETWORK = {
 
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
 
TIME_ZONE = "Asia/Shanghai"
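Since local_settings is plain Python source, a quick syntax check before restarting httpd catches stray quotes or commas early; a sketch against a temp file standing in for /etc/openstack-dashboard/local_settings (on CentOS 7 the interpreter may be `python` rather than `python3`):

```shell
# Sketch: byte-compile a stand-in for local_settings to catch
# syntax errors before they take the dashboard down.
f=$(mktemp /tmp/local_settings_XXXXXX.py)
cat > "$f" <<'EOF'
OPENSTACK_HOST = "172.16.14.224"
ALLOWED_HOSTS = ['*',]
TIME_ZONE = "Asia/Shanghai"
EOF
result=$(python3 -m py_compile "$f" && echo "syntax OK")
echo "$result"
```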
2. Finalize the installation: restart the web server and the session storage service
 
systemctl restart httpd.service memcached.service
In a browser, open http://172.16.14.224/dashboard to access the OpenStack web UI, then log in with:
 
Domain: default
User: admin
Password: 123456
 
 
Install and configure the Cinder service on the controller node
 
1. Create the Cinder database and grant privileges
 
mysql -uroot -p
 
CREATE DATABASE cinder;
 
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
 
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'  IDENTIFIED BY '123456';
2. Source the admin user's environment variables and create the service credentials
 
. admin-openrc
Create the cinder user
 
 
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | dd2363d365624c998dfd788b13e1282b |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Add the admin role to the cinder user in the service project
 
openstack role add --project service --user cinder admin
Note: this command produces no output.
 
Create the cinder service entities
 
[root@controller ~]# openstack service create --name cinder  --description "OpenStack Block Store" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Store            |
| enabled     | True                             |
| id          | 5927e22c745449869ff75b193ed7d7c6 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2  --description "OpenStack Block Store" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Store            |
| enabled     | True                             |
| id          | ebedgte22c745449869ff75b193ed7d7 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv3  --description "OpenStack Block Store" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Store            |
| enabled     | True                             |
| id          | 34dfgte22c745449869ff75b193w6d7u |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
3. Create the cinder service API endpoints
 
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 03fa2c90153546c295bf30ca86b1344b        |
  | interface    | public                                  |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
 
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 94f684395d1b41068c70e4ecb11364b2        |
  | interface    | internal                                |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
 
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
  +--------------+-----------------------------------------+
  | Field        | Value                                   |
  +--------------+-----------------------------------------+
  | enabled      | True                                    |
  | id           | 4511c28a0f9840c78bacb25f10f62c98        |
  | interface    | admin                                   |
  | region       | RegionOne                               |
  | region_id    | RegionOne                               |
  | service_id   | ab3bbbef780845a1a283490d281e7fda        |
  | service_name | cinder                                  |
  | service_type | volume                                  |
  | url          | http://controller:8776/v1/%(tenant_id)s |
  +--------------+-----------------------------------------+
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 513e73819e14460fb904163f41ef3759        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
 
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 6436a8a23d014cfdb69c586eff146a32        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
 
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | e652cf84dd334f359ae9b045a2c91d96        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(tenant_id\)s
 
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(tenant_id\)s
 
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(tenant_id\)s
 
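After registering the endpoints, it is worth double-checking that each volume service exposes all three interfaces. A minimal check, assuming the admin credentials are already loaded (output depends on your environment):

```shell
# Each of volumev2 and volumev3 should list public, internal and admin rows.
openstack endpoint list --service volumev2
openstack endpoint list --service volumev3
```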
 
Install and configure the components
1. Install the packages
 
yum install openstack-cinder -y 
2. Edit the /etc/cinder/cinder.conf file
 
In the [database] section, configure database access:
 
[database]
...
connection = mysql+pymysql://cinder:123456@controller/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access. (Queens uses transport_url; the older rpc_backend and [oslo_messaging_rabbit] options are deprecated.)
 
[DEFAULT]
...
transport_url = rabbit://openstack:openstack@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
 
[DEFAULT]
...
auth_strategy = keystone
 
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
Here 123456 is the password chosen for the cinder user in the Identity service; replace it with your own.
 
 
Note
Comment out or remove any other options in the [keystone_authtoken] section.
 
In the [DEFAULT] section, set my_ip to the IP address of the management interface on the controller node:
 
[DEFAULT]
...
my_ip = 172.16.14.224
In the [oslo_concurrency] section, configure the lock path:
 
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Populate the Block Storage database:
 
# su -s /bin/sh -c "cinder-manage db sync" cinder
 
Note
Ignore any deprecation messages in this output.
 
Configure the compute node to use Block Storage
Edit the /etc/nova/nova.conf file and add the following to it:
 
[cinder]
os_region_name = RegionOne
Finalize the installation
Restart the Compute API service:
 
# systemctl restart openstack-nova-api.service
Start the Block Storage services and configure them to start at boot:
 
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
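To confirm the controller-side Block Storage services came up, the volume service components can be listed from the controller. A quick check, assuming the admin credentials file is named admin-openrc as elsewhere in this guide:

```shell
# cinder-scheduler should report State "up" for the controller host.
. admin-openrc
openstack volume service list
```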
Install and configure the Cinder service on the Cinder node
This section describes how to install and configure the storage node for the Block Storage service. For simplicity, this configuration uses a single storage node with an empty local block storage device.
 
The service provisions logical volumes on this device using the LVM driver and serves them to instances over iSCSI. You can follow these instructions with minor modifications to scale your environment horizontally with additional storage nodes.
 
1. Install the supporting packages
 
Install LVM:
 
yum install lvm2 device-mapper-persistent-data -y
Enable and start the LVM metadata service:
 
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
2. Create the LVM physical volume on /dev/sdb
 
[root@cinder ~]# pvcreate /dev/sdb
Device /dev/sdb not found (or ignored by filtering).
Solution (1):
 
Edit /etc/lvm/lvm.conf, find the global_filter line, and set it as follows:
 
   global_filter = [ "a|.*/|","a|sdb1|"]
Solution (2):
 
When you reach the step of creating the LVM physical volume:
 
pvcreate /dev/sdb
you hit the error below:
Device /dev/sdb not found (or ignored by filtering).
This happens because the disk has only been attached, not yet partitioned. Partition it as follows:
fdisk /dev/sdb
#Type in the followings:
n
p
1
ENTER
ENTER
t
8e
w
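The interactive keystrokes above can also be fed to fdisk non-interactively, which is handy when preparing several storage nodes. A sketch, assuming the new disk really is /dev/sdb (the pipe to fdisk is left commented out because it is destructive):

```shell
# The same keystrokes as in the dialogue above: new (n) primary (p)
# partition 1, two ENTERs to accept the default start/end sectors,
# type (t) 8e = Linux LVM, then write (w).
FDISK_INPUT='n\np\n1\n\n\nt\n8e\nw\n'
# Destructive -- uncomment only when /dev/sdb is really the new, empty disk:
# printf "$FDISK_INPUT" | fdisk /dev/sdb
printf "$FDISK_INPUT"
```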
Where does /dev/sdb come from? Since this setup runs on a virtual machine, it is a second hard disk added through the hypervisor.
 
After adding the disk, run
fdisk -l
to confirm that the new disk is visible.
 
Run the fdisk sequence shown above, then execute pvcreate again; the problem is solved.
 
[root@cinder ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
3. Create the cinder-volumes volume group
 
[root@cinder ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
4. Install and configure the components
Install the packages:
 
yum install openstack-cinder targetcli python-keystone -y
Edit /etc/cinder/cinder.conf:
 
 
[DEFAULT]
 
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 172.16.14.226
enabled_backends = lvm
glance_api_servers = http://controller:9292
 
[database]
 
connection = mysql+pymysql://cinder:123456@controller/cinder
 
[keystone_authtoken]
 
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
 
In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If the [lvm] section does not exist, create it:
 
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
 
[oslo_concurrency]
 
lock_path = /var/lib/cinder/tmp
Enable and start the storage services:
 
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
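At this point an end-to-end smoke test can be run from the controller: create a small volume and check that it reaches the available state. A sketch (the volume name is illustrative; requires the demo or admin credentials):

```shell
# Create a 1 GB volume on the LVM backend; in "openstack volume list"
# its Status should move from "creating" to "available".
. demo-openrc
openstack volume create --size 1 demo-volume
openstack volume list
```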
5. Log in to the Dashboard
The community Queens web interface presents three top-level panels:
Project
Admin
Identity
 
 
 
6. Upload an image
1. Upload the stock ISO image to the controller node
  (for example with the lrzsz tool)
 
 Image: CentOS-6.8-x86_64-bin-DVD1.iso
 
2. Upload the stock ISO image as qcow2
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --public --file /usr/CentOS-6.8-x86_64-bin-DVD1.iso CentOS-6.8-x86_64
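Note that the command above uploads the ISO bytes unchanged; --disk-format only labels them as qcow2. To actually convert the image first, a sketch with qemu-img (the output file name is illustrative):

```shell
# Convert the raw ISO to qcow2, then upload the converted file instead.
qemu-img convert -f raw -O qcow2 \
    /usr/CentOS-6.8-x86_64-bin-DVD1.iso /usr/CentOS-6.8-x86_64.qcow2
openstack image create --disk-format qcow2 --container-format bare \
    --public --file /usr/CentOS-6.8-x86_64.qcow2 CentOS-6.8-x86_64
```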
 
 
3. View the created image
[root@controller keystone]# openstack image list
 
 
 
7. Creating a virtual machine
1. Create the provider network
 . admin-openrc
openstack network create  --share --external  --provider-physical-network provider  --provider-network-type flat provider
 
 
Parameters:
--share allows all projects to use the virtual network
--external marks the virtual network as external; use --internal instead for an internal network
--provider-physical-network provider and --provider-network-type flat attach it to the flat provider network
 
2. Create a subnet
 openstack subnet create --network provider  --allocation-pool start=172.16.14.230,end=172.16.14.240 --dns-nameserver 114.114.114.114 --gateway 172.16.14.1 --subnet-range 172.16.14.0/24 provider 
 
 
3. Create flavors
[root@controller keystone]# openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano
[root@controller keystone]# openstack flavor create --id 10000001 --vcpus 2 --ram 1024 --disk 20 wyh01
 
 
 
4. Generate a key pair on the controller node; the public key must be added to the Compute service before launching an instance
. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub liukey
 
 
5. Add security group rules: allow ICMP (ping)
openstack security group rule create --proto icmp default
 
6. Allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default
 
 
 
 
The same ICMP rule can also be added to specific security groups by ID:
 
[root@controller keystone]#  openstack security group rule create --proto icmp 9eb8b598-6371-468b-867a-b68f0d525d0b
 
[root@controller keystone]#  openstack security group rule create --proto icmp 0de89933-4af2-4bf9-b3bf-4efdb1295f67
 
 
 
 
 
7. List flavors
openstack flavor list
 
 
8. List available images
[root@controller keystone]# openstack image list
 
 
 
9. List networks
[root@controller keystone]# openstack network list
 
 
10. List security groups
[root@controller keystone]#  openstack security group list
 
 
11. Create the virtual machine
[root@controller keystone]# openstack server create --flavor wyh01 --image CentOS-6.8-x86_64 --nic net-id=aef7799c-ae84-472b-80ce-3e608ef49688 --security-group 0de89933-4af2-4bf9-b3bf-4efdb1295f67 --key-name liukey provider-instance  
[root@controller keystone]# openstack server create --flavor wyh01 --image CentOS-6.8-x86_64 --nic net-id=aef7799c-ae84-472b-80ce-3e608ef49688 --security-group 9eb8b598-6371-468b-867a-b68f0d525d0b --key-name liukey provider-instance 
 
12. Check the instance status
[root@controller keystone]# openstack server list
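Once the instance reports ACTIVE, its console can be opened in a browser via the noVNC URL (the instance name matches the one created above):

```shell
# Print the browser-accessible VNC console URL for the instance.
openstack console url show provider-instance
```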
 
 
 
 
Log in to the dashboard to verify:
 
http://172.16.14.224/dashboard/project/
 
 
 
 
 
Write-up completed at 18:59 on May 26, 2018.
 


 
Source: https://blog.csdn.net/kim_weir/article/details/80463131
