Table of Contents

  • Base environment
    • Docker
      • Latest version
      • 19.03
    • Etcd
  • Kuryr-libnetwork
    • Controller node
    • Compute node
    • Verification
      • Errors
        • Error response from daemon: legacy plugin: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused
        • ERROR kuryr_libnetwork.controllers [-] ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
      • Re-verification
        • Errors
  • Zun
    • Controller node
      • Basics
      • Installation and configuration
        • Error: There must be at least one plugin active
        • Continuing
    • Compute node
    • Verification
    • Creating a container
    • Private Docker registry
      • Creating the registry
      • Pushing images
      • Exporting and importing images
      • Container test
      • Container deletion
    • Errors
      • image could not be found
      • permission deny
      • Image not found

Configuring Zun requires a few supporting components and packages, such as kuryr-libnetwork and Docker.

Base environment

Docker

Latest version

Install a few required dependencies:

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the repository:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install:

yum install -y docker-ce docker-ce-cli containerd.io

Start the services and enable them at boot:

systemctl start docker containerd.service
systemctl enable docker containerd.service

19.03

好像最近的docker版本升级了,变成了20版本,然后kuryr就用不成了。之前用的19.03没问题,就换成这个,安装这个版本的。
通过本地包安装

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.14-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-19.03.14-3.el7.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.3.9-3.1.el7.x86_64.rpm
yum localinstall -y docker-ce-cli-19.03.14-3.el7.x86_64.rpm
yum localinstall  -y  containerd.io-1.3.9-3.1.el7.x86_64.rpm
yum localinstall  -y  docker-ce-19.03.14-3.el7.x86_64.rpm
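To keep yum from silently upgrading Docker back to 20.x later, the installed versions can optionally be pinned. A minimal sketch, assuming the yum-plugin-versionlock package is available in your repositories:

yum install -y yum-plugin-versionlock
# lock the currently installed docker-ce, docker-ce-cli and containerd.io versions
yum versionlock add docker-ce docker-ce-cli containerd.io
yum versionlock list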

Etcd

Etcd for RHEL and CentOS
Install it directly:

yum install -y etcd

Edit the configuration file /etc/etcd/etcd.conf:

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.104:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.104:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.104:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.104:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the service and enable it at boot:

systemctl enable etcd
systemctl start etcd
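A quick sanity check against the client endpoint configured above should report a healthy cluster (etcdctl v2 syntax, the default for the CentOS 7 etcd package):

etcdctl --endpoints=http://192.168.1.104:2379 cluster-health
# member ... is healthy: got healthy result from http://192.168.1.104:2379
# cluster is healthy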

Kuryr-libnetwork

Controller node

Install and configure controller node
Create the user:

[root@controller OpenStack (keystone_admin)]#openstack user create --domain default --password-prompt kuryr
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | df33445232c345978e520490244e6769 |
| name                | kuryr                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the role:

openstack role add --project service --user kuryr admin

Compute node

Install and configure a compute node for Ubuntu
Create the user:

groupadd --system kuryr
useradd --home-dir "/var/lib/kuryr" --create-home --system --shell /bin/false -g kuryr kuryr

Create the directory and set ownership:

mkdir -p /etc/kuryr
chown kuryr:kuryr /etc/kuryr

Install EPEL, switch it to a faster mirror, then install pip:

yum install -y epel-release
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache
yum install -y python-pip

Download kuryr-libnetwork and copy it to /var/lib/kuryr. Since I couldn't download it from there directly, I fetched it ahead of time.

# Direct download
cd /var/lib/kuryr
git clone -b stable/stein https://git.openstack.org/openstack/kuryr-libnetwork.git
# Or continue from the local archive
tar -zxvf kuryr-libnetwork-stable_stein.tar.gz
cp -r kuryr-libnetwork /var/lib/kuryr/kuryr-libnetwork
cd /var/lib/kuryr/
chown -R kuryr:kuryr kuryr-libnetwork
cd kuryr-libnetwork/
git init

Now switch the pip index as well. Create ~/.pip/pip.conf:

[global]
index-url=http://mirrors.aliyun.com/pypi/simple/

[install]
trusted-host=mirrors.aliyun.com

Install kuryr-libnetwork's dependencies, then install it:

pip install -r requirements.txt
python setup.py install

At this point it errored out:

Marker evaluation failed, see the following error.  For more information see: http://docs.openstack.org/pbr/latest/user/using.html#environment-markers
ERROR:root:Error parsing
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr
    attrs = util.cfg_to_args(path, dist.script_args)
  File "/usr/lib/python2.7/site-packages/pbr/util.py", line 258, in cfg_to_args
    kwargs = setup_cfg_to_setup_kwargs(config, script_args)
  File "/usr/lib/python2.7/site-packages/pbr/util.py", line 456, in setup_cfg_to_setup_kwargs
    if pkg_resources.evaluate_marker('(%s)' % env_marker):
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1364, in evaluate_marker
    return interpret(parser.expr(text).totuple(1)[1])
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1342, in interpret
    return op(nodelist)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1307, in atom
    return interpret(nodelist[2])
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1342, in interpret
    return op(nodelist)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1324, in comparison
    raise SyntaxError(repr(cop)+" operator not allowed in environment markers")
SyntaxError: '<' operator not allowed in environment markers
error in setup command: Error parsing /var/lib/kuryr/kuryr-libnetwork/setup.cfg: SyntaxError: '<' operator not allowed in environment markers

The setuptools version seems to be too old; upgrade it:

pip install --upgrade pip
pip install --upgrade setuptools

Then rerun python setup.py install.
Copy the configuration files:

su -s /bin/sh -c "./tools/generate_config_file_samples.sh" kuryr
su -s /bin/sh -c "cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf" kuryr

Edit the configuration file /etc/kuryr/kuryr.conf. For the first option, the official docs use the /usr/local/libexec directory, but my system has content under /usr/libexec, so I'll try the system path first; conveniently, it's also the default.

[DEFAULT]
bindir = /usr/libexec/kuryr

[neutron]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
username = kuryr
user_domain_name = Default
password = kuryr
project_name = service
project_domain_name = Default
auth_type = password

Create the service file /etc/systemd/system/kuryr-libnetwork.service. The official docs use /usr/local/bin/kuryr-server, but there's nothing there on my system; /usr/bin/kuryr-server exists, so I'll use the latter for now.

[Unit]
Description = Kuryr-libnetwork - Docker network plugin for Neutron

[Service]
ExecStart = /usr/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf
CapabilityBoundingSet = CAP_NET_ADMIN

[Install]
WantedBy = multi-user.target

Start the service, enable it at boot, and restart Docker:

systemctl enable kuryr-libnetwork
systemctl start kuryr-libnetwork
systemctl restart docker
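Before moving on, you can poke the plugin endpoint by hand. kuryr listens on port 23750, and Plugin.Activate is the same call dockerd makes when it probes a remote plugin; the expected reply below is an assumption based on the libnetwork remote-plugin protocol:

curl -X POST http://127.0.0.1:23750/Plugin.Activate
# {"Implements": ["NetworkDriver", "IpamDriver"]}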

Verification

At this point you can already create networks with kuryr-libnetwork; just choose kuryr as the driver. There are a few pitfalls, though.

Errors

Error response from daemon: legacy plugin: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused

docker network create with kuryr
Run:

docker network create --driver kuryr --ipam-driver kuryr --subnet 10.10.0.0/16 --gateway=10.10.0.1 test_net

It errors out:

Error response from daemon: legacy plugin: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused

Check docker.service with systemctl status docker.service:

Sep 17 21:31:16 controller dockerd[30029]: time="2020-09-17T21:31:16.642992083+08:00" level=warning msg="Unable to connect to plugin: 127.0.0.1:23750/Plugin.Activate: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused, retrying in 1s"
Sep 17 21:31:17 controller dockerd[30029]: time="2020-09-17T21:31:17.643707810+08:00" level=warning msg="Unable to connect to plugin: 127.0.0.1:23750/Plugin.Activate: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused, retrying in 2s"
Sep 17 21:31:19 controller dockerd[30029]: time="2020-09-17T21:31:19.644635277+08:00" level=warning msg="Unable to connect to plugin: 127.0.0.1:23750/Plugin.Activate: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused, retrying in 4s"
Sep 17 21:31:23 controller dockerd[30029]: time="2020-09-17T21:31:23.645450218+08:00" level=warning msg="Unable to connect to plugin: 127.0.0.1:23750/Plugin.Activate: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused, retrying in 8s"
Sep 17 21:31:31 controller dockerd[30029]: time="2020-09-17T21:31:31.646711235+08:00" level=error msg="Handler for POST /v1.40/networks/create returned error: legacy plugin: Post http://127.0.0.1:23750/Plugin.Activate: dial tcp 127.0.0.1:23750: connect: connection refused"

So the port isn't open. Checking kuryr shows the service is failed. Strange that it failed without making more noise:

systemctl status kuryr-libnetwork.service
● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron
   Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2020-09-17 21:50:35 CST; 1min 55s ago
  Process: 34251 ExecStart=/usr/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE)
 Main PID: 34251 (code=exited, status=1/FAILURE)

Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr     self.auth_ref = self.get_auth_ref(session)
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 206, in get_auth_ref
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr     self._plugin = self._do_create_plugin(session)
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 161, in _do_create_plugin
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr     'auth_url is correct. %s' % e)
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to http://controller:35357: HTTPConnectionPool(host='controller', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7eb017a2d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Sep 17 21:50:35 controller kuryr-server[34251]: 2020-09-17 21:50:35.015 34251 ERROR kuryr
Sep 17 21:50:35 controller systemd[1]: kuryr-libnetwork.service: main process exited, code=exited, status=1/FAILURE
Sep 17 21:50:35 controller systemd[1]: Unit kuryr-libnetwork.service entered failed state.
Sep 17 21:50:35 controller systemd[1]: kuryr-libnetwork.service failed.

It looks like an auth problem. Fix auth_url in the [neutron] section of the configuration file /etc/kuryr/kuryr.conf:

auth_url = http://controller:5000

Then restart with systemctl restart kuryr-libnetwork.service and check the status again; it's healthy now.

ERROR kuryr_libnetwork.controllers [-] ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)

Next I created containers on the kuryr network. They ran, but had no connectivity at all: nothing could be pinged, not even the gateway. In OpenStack the network port showed as DOWN, so it was clearly unusable. Checking the log /var/log/kuryr/kuryr-server.log, this error shows up every time a container is created:

2020-11-04 11:23:31.404 11296 ERROR kuryr_libnetwork.controllers [-] ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)

I'm using Open vSwitch; kuryr apparently lacks permission on db.sock, so it can't talk to the OVS database and the final step never completes.
The fix is to grant it permission:

chmod 777 /var/run/openvswitch/db.sock

After that it works.
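chmod 777 is a blunt instrument. A narrower grant would be an ACL for just the account that needs the socket; a sketch, assuming it's the kuryr user that needs access, and noting the ACL disappears whenever Open vSwitch recreates db.sock:

# give only the kuryr user read/write on the OVS database socket
setfacl -m u:kuryr:rw /var/run/openvswitch/db.sock
getfacl /var/run/openvswitch/db.sock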

Re-verification

Create a kuryr network:

[root@controller bin (keystone_admin)]#docker network create --driver kuryr --ipam-driver kuryr --subnet 10.10.0.0/16 --gateway=10.10.0.1 test_net
f7d2309acb8866510fbc8f4a12302e3b23ab88cb68b2639067dd82b2db9b2e13
[root@controller bin (keystone_admin)]#docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5eb775de5898        bridge              bridge              local
f25e70052513        host                host                local
817f32f0e961        none                null                local
f7d2309acb88        test_net            kuryr               local
[root@controller bin (keystone_admin)]#docker network rm test_net
test_net
[root@controller bin (keystone_admin)]#docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5eb775de5898        bridge              bridge              local
f25e70052513        host                host                local
817f32f0e961        none                null                local

Then create a container in Docker and see whether it works; if it behaves normally, the network should be fine.
Try creating a container.
Pull the image:

docker pull cirros

Create the container:

[root@controller kuryr-libnetwork (keystone_admin)]#docker create --name 111 --net test_net cirros
df05dab0c409ebec4eee3805a5ea17b05822f5d090f5faddbb124d4eda91cd89
[root@controller kuryr-libnetwork (keystone_admin)]#docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
df05dab0c409        cirros              "/sbin/init"        4 seconds ago       Created                                 111

Start the container:

docker start df0

If nothing goes wrong it should start.
Run a few commands inside it, e.g. check the IP:

[root@controller kuryr-libnetwork (keystone_admin)]#docker exec df0 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:59:E3:6B:A1
          inet addr:10.10.3.22  Bcast:10.10.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:91 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1488 (1.4 KiB)  TX bytes:4909 (4.7 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5766 (5.6 KiB)  TX bytes:5766 (5.6 KiB)

Then exec in and check whether it has connectivity.

Errors

It errors out, roughly saying it needs a unicode object rather than a plain string; the types don't match. Testing confirms it: with a unicode string the error goes away, but there's no obvious way to force unicode here.

Error response from daemon: failed to create endpoint test_docker on network test_net: NetworkDriver.CreateEndpoint: '10.10.0.0/16' does not appear to be an IPv4 or IPv6 network. Did you pass in a bytes (str in Python 2) instead of a unicode object?

This turned out to be a version problem: the installed Docker was version 20, whereas I had been on 19 before. Switching back to 19.03 fixed it.
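A quick way to confirm which daemon version you actually ended up with:

docker version --format '{{.Server.Version}}'
# 19.03.14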

Zun

Controller node

Install and configure controller node

Basics

Create the database and grant privileges:

CREATE DATABASE zun;
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' IDENTIFIED BY 'zun';
GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' IDENTIFIED BY 'zun';

Create the user and add the role:

[root@controller OpenStack (keystone_admin)]#openstack user create --domain default --password-prompt zun
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | cbbbd9a8cdb34e0c9e68d6908e5768d3 |
| name                | zun                              |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

openstack role add --project service --user zun admin

Create the service entity:

[root@controller OpenStack (keystone_admin)]#openstack service create --name zun --description "Container Service" container
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Container Service                |
| enabled     | True                             |
| id          | 88207c4d1785436898476a821ecc9100 |
| name        | zun                              |
| type        | container                        |
+-------------+----------------------------------+

Create the service endpoints:

[root@controller OpenStack (keystone_admin)]#openstack endpoint create --region RegionOne container public http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7405f9ce180c40dcba847b79cee825a3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88207c4d1785436898476a821ecc9100 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+
[root@controller OpenStack (keystone_admin)]#openstack endpoint create --region RegionOne container admin http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 56c39d65b7064fb2bd230d7eeb05ff07 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88207c4d1785436898476a821ecc9100 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+
[root@controller OpenStack (keystone_admin)]#openstack endpoint create --region RegionOne container internal http://controller:9517/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 38f89951d19645b78be8ba10717d7440 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 88207c4d1785436898476a821ecc9100 |
| service_name | zun                              |
| service_type | container                        |
| url          | http://controller:9517/v1        |
+--------------+----------------------------------+
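It doesn't hurt to confirm all three endpoints registered correctly:

openstack endpoint list --service container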

Installation and configuration

Create the system user:

groupadd --system zun
useradd --home-dir "/var/lib/zun" --create-home --system --shell /bin/false -g zun zun

Create the directory and set ownership:

mkdir -p /etc/zun
chown zun:zun /etc/zun

Install some dependencies:

yum install python-pip git python-devel libffi-devel gcc openssl-devel -y

Same as before: I copy the source over directly, since I had downloaded zun separately (GitHub is too slow). Copy it into place and set ownership.

# cd /var/lib/zun
# git clone -b stable/stein https://git.openstack.org/openstack/zun.git
tar -zvxf zun-stable_stein.tar.gz
cp -r zun /var/lib/zun/zun
cd /var/lib/zun
chown -R zun:zun zun
cd zun
git init
pip install -r requirements.txt
python setup.py install

This errored out:

Attempting uninstall: PyYAML
    Found existing installation: PyYAML 3.10
ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

How to upgrade distutils package PyYAML?
The error essentially says PyYAML can't be uninstalled cleanly: it's a distutils-installed project, so pip can't tell which files belong to it. The workaround is to downgrade pip and reinstall the dependencies:

pip install pip==8.1.1
pip install -r requirements.txt

Generate and copy the configuration files:

su -s /bin/sh -c "oslo-config-generator  --config-file etc/zun/zun-config-generator.conf" zun
su -s /bin/sh -c "cp etc/zun/zun.conf.sample /etc/zun/zun.conf" zun
su -s /bin/sh -c "cp etc/zun/api-paste.ini /etc/zun" zun

Edit the configuration file /etc/zun/zun.conf:

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller

[api]
port = 9517
host_ip = 192.168.1.104

[database]
connection = mysql+pymysql://zun:zun@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = zun
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = Default
project_name = service
user_domain_name = Default
password = zun
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[oslo_messaging_notifications]
driver = messaging

[websocket_proxy]
wsproxy_host = 192.168.1.104
wsproxy_port = 6784
base_url = ws://controller:6784/

Set ownership on the configuration file:

chown zun:zun /etc/zun/zun.conf

Sync the database:

su -s /bin/sh -c "zun-db-manage upgrade" zun

Error: There must be at least one plugin active

DB update fails during magnum install in newton
Missing files from python-magnum 3.1.1-0~cloud0
It errors out:

Traceback (most recent call last):
  File "/usr/bin/zun-db-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/zun/cmd/db_manage.py", line 67, in main
    CONF.command.func()
  File "/usr/lib/python2.7/site-packages/zun/cmd/db_manage.py", line 29, in do_upgrade
    migration.upgrade(CONF.command.revision)
  File "/usr/lib/python2.7/site-packages/zun/db/migration.py", line 35, in upgrade
    return get_backend().upgrade(version)
  File "/usr/lib/python2.7/site-packages/zun/db/sqlalchemy/migration.py", line 75, in upgrade
    get_manager().upgrade(version)
  File "/usr/lib/python2.7/site-packages/zun/db/sqlalchemy/migration.py", line 49, in get_manager
    _MANAGER = manager.MigrationManager(migration_config)
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration_cli/manager.py", line 47, in __init__
    raise ValueError('There must be at least one plugin active.')
ValueError: There must be at least one plugin active.

Searching Baidu for this error directly turns up nothing; nobody has asked about it. Switching keywords to sqlalchemy plus the message "There must be at least one plugin active." finds people who hit it while installing magnum. That was apparently a packaging bug in their release, since fixed; but I'm on zun, so I'm on my own here.
Their problem was that alembic.ini and the whole alembic subdirectory were missing from /usr/lib/python2.7/dist-packages/magnum/db/sqlalchemy.
Sure enough, my /usr/lib/python2.7/site-packages/zun/db/sqlalchemy is missing those two items, even though the zun.tar.gz I downloaded contains them. And in /usr/lib/python2.7/site-packages/zun/db/sqlalchemy/migration.py you can see the relevant functions all use os.path.dirname(__file__), i.e. they look in the directory of the current file; since the two files aren't there, the lookup fails and it errors out. As for why they went missing, perhaps a small bug in the source install.

def _alembic_config():
    path = os.path.join(os.path.dirname(__file__), 'alembic.ini')
    config = alembic_config.Config(path)
    return config


def get_manager():
    global _MANAGER
    if not _MANAGER:
        alembic_path = os.path.abspath(
            os.path.join(os.path.dirname(__file__), 'alembic.ini'))
        migrate_path = os.path.abspath(
            os.path.join(os.path.dirname(__file__), 'alembic'))
        migration_config = {'alembic_ini_path': alembic_path,
                            'alembic_repo_path': migrate_path,
                            'db_url': zun.conf.CONF.database.connection}
        _MANAGER = manager.MigrationManager(migration_config)
    return _MANAGER

As the fix, I simply copied them over from the source tree:

cd /home/kang/Desktop/linux/OpenStack/kuryrAzun(stein)/zun/zun/db/sqlalchemy
cp -r alembic /usr/lib/python2.7/site-packages/zun/db/sqlalchemy/alembic
cp alembic.ini /usr/lib/python2.7/site-packages/zun/db/sqlalchemy/alembic.ini
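It's worth confirming the files now sit exactly where migration.py looks for them:

ls /usr/lib/python2.7/site-packages/zun/db/sqlalchemy/
# the listing should now include both alembic and alembic.ini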

Then rerun su -s /bin/sh -c "zun-db-manage upgrade" zun to sync the database. It works now:

INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> a9a92eebd9a8, create_table_zun_service
INFO  [alembic.runtime.migration] Running upgrade a9a92eebd9a8 -> 9fe371393a24, create_table_container
INFO  [alembic.runtime.migration] Running upgrade 9fe371393a24 -> 5971a6844738, add container_id column to container
INFO  [alembic.runtime.migration] Running upgrade 5971a6844738 -> 93fbb05b77b9, add memory field to container
INFO  [alembic.runtime.migration] Running upgrade 93fbb05b77b9 -> 63a08e32cc43, add task state to container
INFO  [alembic.runtime.migration] Running upgrade 63a08e32cc43 -> 1192ba19a6e9, Add cpu workdir ports hostname labels to container
INFO  [alembic.runtime.migration] Running upgrade 1192ba19a6e9 -> 72c6947c6636, create_table_image
INFO  [alembic.runtime.migration] Running upgrade 72c6947c6636 -> c5565cbaa3de, Insert status_reason to Container table
INFO  [alembic.runtime.migration] Running upgrade c5565cbaa3de -> 43e1088c3389, add image_pull_policy column
INFO  [alembic.runtime.migration] Running upgrade 43e1088c3389 -> 4a0c4f7a4a33, add meta addresses to container
INFO  [alembic.runtime.migration] Running upgrade 4a0c4f7a4a33 -> 531e4a890480, add host to container
INFO  [alembic.runtime.migration] Running upgrade 531e4a890480 -> bbcfa910a8a5, add_restart_policy_column
INFO  [alembic.runtime.migration] Running upgrade bbcfa910a8a5 -> ad43a2179cf2, add_status_detail
INFO  [alembic.runtime.migration] Running upgrade ad43a2179cf2 -> d1ef05fd92c8, add tty stdin_open
INFO  [alembic.runtime.migration] Running upgrade d1ef05fd92c8 -> 5458f8394206, add image driver field
INFO  [alembic.runtime.migration] Running upgrade 5458f8394206 -> 6fd4f7582eb0, Add resource provider table
INFO  [alembic.runtime.migration] Running upgrade 6fd4f7582eb0 -> 7975b7f0f792, Add resource class table
INFO  [alembic.runtime.migration] Running upgrade 7975b7f0f792 -> 09f196622a3f, create inventory table
INFO  [alembic.runtime.migration] Running upgrade 09f196622a3f -> e4d145e195f4, Create allocation table
INFO  [alembic.runtime.migration] Running upgrade e4d145e195f4 -> 8192905fd835, add uuid_to_resource_class
INFO  [alembic.runtime.migration] Running upgrade 8192905fd835 -> eeac0d191f5a, add compute node table
INFO  [alembic.runtime.migration] Running upgrade eeac0d191f5a -> 53a8b515057e, Add memory info to compute node
INFO  [alembic.runtime.migration] Running upgrade 53a8b515057e -> 4bf34495d060, Add container number info to compute node
INFO  [alembic.runtime.migration] Running upgrade 4bf34495d060 -> ce9944b346cb, combine tty and stdin_open
INFO  [alembic.runtime.migration] Running upgrade ce9944b346cb -> 8c3d80e18eb5, Add container cpus,cpu_used to compute node
INFO  [alembic.runtime.migration] Running upgrade 8c3d80e18eb5 -> 04ba87af76bb, Add container host operating system info
INFO  [alembic.runtime.migration] Running upgrade 04ba87af76bb -> 17ab8b533cc8, Add container hosts label info
INFO  [alembic.runtime.migration] Running upgrade 17ab8b533cc8 -> 5359d23b2322, add websocket_url and websocket_token
INFO  [alembic.runtime.migration] Running upgrade 5359d23b2322 -> 174cafda0857, add security groups
INFO  [alembic.runtime.migration] Running upgrade 174cafda0857 -> 648c25faa0be, add mem used to compute node
INFO  [alembic.runtime.migration] Running upgrade 648c25faa0be -> 75315e219cfb, Add auto_remove to container
INFO  [alembic.runtime.migration] Running upgrade 75315e219cfb -> a251f1f61217, create capsule table
INFO  [alembic.runtime.migration] Running upgrade a251f1f61217 -> 945569b3669f, add runtime column
INFO  [alembic.runtime.migration] Running upgrade 945569b3669f -> 10d65e285a59, create volume_mapping table
INFO  [alembic.runtime.migration] Running upgrade 10d65e285a59 -> 37bce72463e3, add pci device
INFO  [alembic.runtime.migration] Running upgrade 37bce72463e3 -> bcd6410d645e, add host to capsule
INFO  [alembic.runtime.migration] Running upgrade bcd6410d645e -> fc27c7415d9c, change the properties of meta_labels
INFO  [alembic.runtime.migration] Running upgrade fc27c7415d9c -> ff7b9665d504, add pci stats to compute node
INFO  [alembic.runtime.migration] Running upgrade ff7b9665d504 -> f046346d1d87, add timestamp to pci device
INFO  [alembic.runtime.migration] Running upgrade f046346d1d87 -> d2affd5b4172
INFO  [alembic.runtime.migration] Running upgrade d2affd5b4172 -> cf46a28f46bc, add container_actions table
INFO  [alembic.runtime.migration] Running upgrade cf46a28f46bc -> b6bfca998431, add container_actions_events table
INFO  [alembic.runtime.migration] Running upgrade b6bfca998431 -> 8b0082d9e7c1, drop foreign key of container_actions container_uuid
INFO  [alembic.runtime.migration] Running upgrade 8b0082d9e7c1 -> 10c9668a816d, add volumes info and addresses to capsule
INFO  [alembic.runtime.migration] Running upgrade 10c9668a816d -> 71f8b4cf1dbf, upgrade
INFO  [alembic.runtime.migration] Running upgrade 71f8b4cf1dbf -> d9714eadbdc2, add disk to container
INFO  [alembic.runtime.migration] Running upgrade d9714eadbdc2 -> 6ff4d35f4334, change properties of restart policy in capsule
INFO  [alembic.runtime.migration] Running upgrade 6ff4d35f4334 -> fb9ad4a050f8, drop_container_actions_foreign_key
INFO  [alembic.runtime.migration] Running upgrade fb9ad4a050f8 -> 50829990c965, add ondelete to container_actions_events foreign key
INFO  [alembic.runtime.migration] Running upgrade 50829990c965 -> 3f49fa520409, add availability_zone to service
INFO  [alembic.runtime.migration] Running upgrade 3f49fa520409 -> d0c606fdec3c, add disk total and used to compute node
INFO  [alembic.runtime.migration] Running upgrade d0c606fdec3c -> 372433c0afd2, add auto heal to container
INFO  [alembic.runtime.migration] Running upgrade 372433c0afd2 -> 238f94009eab, add disk_quota_supported to compute_node
INFO  [alembic.runtime.migration] Running upgrade 238f94009eab -> 2b045cb595db, Create quota & quota class tables
INFO  [alembic.runtime.migration] Running upgrade 2b045cb595db -> cff60402dd86, add capsule_id to containers
INFO  [alembic.runtime.migration] Running upgrade cff60402dd86 -> 271c7f45982d, add started_at to containers
INFO  [alembic.runtime.migration] Running upgrade 271c7f45982d -> 3298c6a5c3d9, empty message
INFO  [alembic.runtime.migration] Running upgrade 3298c6a5c3d9 -> 012a730926e8, Add quota usage
INFO  [alembic.runtime.migration] Running upgrade 012a730926e8 -> 26896d5f9053, create exec_instance table
INFO  [alembic.runtime.migration] Running upgrade 26896d5f9053 -> 3e80bbfd8da7, Convert type of 'command' from string to list
INFO  [alembic.runtime.migration] Running upgrade 3e80bbfd8da7 -> 105626c4f972, add privileged to container
INFO  [alembic.runtime.migration] Running upgrade 105626c4f972 -> 2fb377a5a519, add healthcheck to containers
INFO  [alembic.runtime.migration] Running upgrade 2fb377a5a519 -> f746cd28bcac, add host to volume mapping
INFO  [alembic.runtime.migration] Running upgrade f746cd28bcac -> bc56b9932dd9, add runtime to compute node
INFO  [alembic.runtime.migration] Running upgrade bc56b9932dd9 -> a9c9fb54274a, add_contents_to_volume_mapping_table
INFO  [alembic.runtime.migration] Running upgrade a9c9fb54274a -> a019998b09b5, add host to image
INFO  [alembic.runtime.migration] Running upgrade a019998b09b5 -> 02134de8e7d3, add_exposed_ports_to_container
INFO  [alembic.runtime.migration] Running upgrade 02134de8e7d3 -> 54bcb75afb32, Add init containers uuids to capsule
INFO  [alembic.runtime.migration] Running upgrade 54bcb75afb32 -> 35cb52c5553f, rename volume_id to cinder_volume_id in volume_mapping
INFO  [alembic.runtime.migration] Running upgrade 35cb52c5553f -> 33cdd98bb9b2, split volume_mapping table
INFO  [alembic.runtime.migration] Running upgrade 33cdd98bb9b2 -> 2b129060baff, support cpuset
INFO  [alembic.runtime.migration] Running upgrade 2b129060baff -> 21fa080c818a, add enable_cpu_pinning to compute_node
INFO  [alembic.runtime.migration] Running upgrade 21fa080c818a -> 5ffc1cabe6b4, add registry table
INFO  [alembic.runtime.migration] Running upgrade 5ffc1cabe6b4 -> 1bc34e18180b, add registry_id to container
INFO  [alembic.runtime.migration] Running upgrade 1bc34e18180b -> d73b72ab7cc6, add container_type to container
INFO  [alembic.runtime.migration] Running upgrade d73b72ab7cc6 -> 157a0595e13e, drop capsule table
Continuing

A source install requires creating the service files by hand.
Create /etc/systemd/system/zun-api.service. The official docs give the executable path as /usr/local/bin/zun-api, but that location is empty on my system; mine is at /usr/bin/zun-api, so adjust it.

[Unit]
Description = OpenStack Container Service API

[Service]
ExecStart = /usr/bin/zun-api
User = zun

[Install]
WantedBy = multi-user.target

Create /etc/systemd/system/zun-wsproxy.service, with the path changed the same way:

[Unit]
Description = OpenStack Container Service Websocket Proxy

[Service]
ExecStart = /usr/bin/zun-wsproxy
User = zun

[Install]
WantedBy = multi-user.target

Start the services, enable them at boot, and remember to check their status to catch problems early:

systemctl enable zun-api.service zun-wsproxy.service
systemctl start zun-api.service zun-wsproxy.service
systemctl status zun-api.service zun-wsproxy.service
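If zun-api came up, its root URL should answer on port 9517. Version discovery is usually left unauthenticated in the api-paste pipeline, though that's an assumption about this particular deployment:

curl http://controller:9517/
# expect a small JSON document advertising the v1 API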

Compute node

Install and configure a compute node
Docker, kuryr-libnetwork and etcd must be installed beforehand.
The earlier steps (creating the user, creating and chowning the directories, downloading the source, installing the dependencies, installing) are the same as on the controller node. Things diverge starting with the configuration files.
Copy the configuration files. Be sure to cd into /var/lib/zun/zun first, or the files won't be found, since relative paths are used:

cd /var/lib/zun/zun
su -s /bin/sh -c "cp etc/zun/rootwrap.conf /etc/zun/rootwrap.conf" zun
su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
su -s /bin/sh -c "cp etc/zun/rootwrap.d/* /etc/zun/rootwrap.d/" zun

Configure sudo privileges for zun ("Configure sudoers for zun users", as the docs put it).
Again, the official path includes local; mine doesn't, it's directly under /usr/bin:

echo "zun ALL=(root) NOPASSWD: /usr/bin/zun-rootwrap /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap

Edit the configuration file /etc/zun/zun.conf. The basic settings match the controller node, so only what differs is shown:

[DEFAULT]
state_path = /var/lib/zun

Set ownership on the configuration file again:

chown zun:zun /etc/zun/zun.conf

Now wire up Docker and kuryr.
Create the directory /etc/systemd/system/docker.service.d:

mkdir -p /etc/systemd/system/docker.service.d

Create and edit the file /etc/systemd/system/docker.service.d/docker.conf; make sure the IP is correct:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://controller:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379

Reload and restart Docker:

systemctl daemon-reload
systemctl restart docker
systemctl status docker
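To confirm the drop-in took effect, docker info should now report the etcd cluster store (line as printed by Docker 19.03):

docker info 2>/dev/null | grep -i 'cluster store'
# Cluster Store: etcd://controller:2379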

Edit the configuration file /etc/kuryr/kuryr.conf:

[DEFAULT]
capability_scope = global
process_external_connectivity = False

Restart it:

systemctl restart kuryr-libnetwork.service
systemctl status kuryr-libnetwork.service

Installation done. Create the service file /etc/systemd/system/zun-compute.service, again dropping local from the path so it becomes /usr/bin:

[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target

Start the service, enable it at boot, and check the status:

systemctl enable zun-compute
systemctl start zun-compute
systemctl status zun-compute

Verification

Verify operation
Install python-zunclient:

pip install python-zunclient==3.3.0

Verify:

[root@controller OpenStack (keystone_admin)]#openstack appcontainer service list
+----+------------+-------------+-------+----------+-----------------+---------------------+-------------------+
| Id | Host       | Binary      | State | Disabled | Disabled Reason | Updated At          | Availability Zone |
+----+------------+-------------+-------+----------+-----------------+---------------------+-------------------+
|  1 | controller | zun-compute | up    | False    | None            | 2020-09-18 04:26:20 | nova              |
+----+------------+-------------+-------+----------+-----------------+---------------------+-------------------+

Creating a container

Install numactl:

yum install -y numactl

List the current networks:

[root@controller OpenStack (keystone_admin)]#openstack network list
+--------------------------------------+-----------+--------------------------------------+
| ID                                   | Name      | Subnets                              |
+--------------------------------------+-----------+--------------------------------------+
| 28b14a2f-fd62-4758-bd58-66d47dafd89a | private   | 97112257-804d-4e1f-82be-ccc9eaca2c8b |
| 2e9365f9-c631-4ff2-bb29-284b07cf4edf | public    | 37fe42b0-2eca-4eea-9a2f-5a47346d22f2 |
| 427086b7-9c9f-474b-9897-85030041b904 | vxlan-net | 2a16f678-7e3c-45a2-b5e6-a043bb88e7e9 |
+--------------------------------------+-----------+--------------------------------------+

Set an environment variable for the network to use:

export NET_ID=28b14a2f-fd62-4758-bd58-66d47dafd89a
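Equivalently, the ID can be captured without copy-pasting, assuming the network is named private as above:

export NET_ID=$(openstack network list --name private -f value -c ID)
echo $NET_ID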

Create one:

[root@controller OpenStack (keystone_admin)]#openstack appcontainer run --name container --net network=$NET_ID cirros
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                                                                                 |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| addresses         | None                                                                                                                                                                                                                  |
| links             | [{u'href': u'http://controller:9517/v1/containers/757e6996-118e-491f-84cf-0f0ec664c8e5', u'rel': u'self'}, {u'href': u'http://controller:9517/containers/757e6996-118e-491f-84cf-0f0ec664c8e5', u'rel': u'bookmark'}] |
| image             | cirros                                                                                                                                                                                                                |
| labels            | {}                                                                                                                                                                                                                    |
| disk              | 0                                                                                                                                                                                                                     |
| security_groups   | None                                                                                                                                                                                                                  |
| image_pull_policy | None                                                                                                                                                                                                                  |
| user_id           | 076b4800654a456fbd6b96244af827de                                                                                                                                                                                      |
| uuid              | 757e6996-118e-491f-84cf-0f0ec664c8e5                                                                                                                                                                                  |
| hostname          | None                                                                                                                                                                                                                  |
| auto_heal         | False                                                                                                                                                                                                                 |
| environment       | {}                                                                                                                                                                                                                    |
| memory            | 2048                                                                                                                                                                                                                  |
| project_id        | 91973963c02a4480bc281545fe4319f7                                                                                                                                                                                      |
| privileged        | False                                                                                                                                                                                                                 |
| status            | Creating                                                                                                                                                                                                              |
| workdir           | None                                                                                                                                                                                                                  |
| healthcheck       | None                                                                                                                                                                                                                  |
| auto_remove       | False                                                                                                                                                                                                                 |
| status_detail     | None                                                                                                                                                                                                                  |
| cpu_policy        | shared                                                                                                                                                                                                                |
| host              | None                                                                                                                                                                                                                  |
| image_driver      | docker                                                                                                                                                                                                                |
| task_state        | None                                                                                                                                                                                                                  |
| status_reason     | None                                                                                                                                                                                                                  |
| name              | container                                                                                                                                                                                                             |
| restart_policy    | None                                                                                                                                                                                                                  |
| ports             | None                                                                                                                                                                                                                  |
| command           | [u'ping', u'8.8.8.8']                                                                                                                                                                                                 |
| runtime           | None                                                                                                                                                                                                                  |
| registry_id       | None                                                                                                                                                                                                                  |
| cpu               | 1.0                                                                                                                                                                                                                   |
| interactive       | False                                                                                                                                                                                                                 |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Then run openstack appcontainer list to check the status:

[root@controller OpenStack (keystone_admin)]#openstack appcontainer list
+--------------------------------------+-----------+--------+----------+---------------+-----------+-------+
| uuid                                 | name      | image  | status   | task_state    | addresses | ports |
+--------------------------------------+-----------+--------+----------+---------------+-----------+-------+
| 757e6996-118e-491f-84cf-0f0ec664c8e5 | container | cirros | Creating | image_pulling |           | []    |
+--------------------------------------+-----------+--------+----------+---------------+-----------+-------+

Then the status turns to ERROR.

It seems the image can't be pulled. I suspect Docker simply can't find this image: docker search redis returns results, so perhaps there's just no cirros for it.
Then I figured it should be able to get images from glance. Searching /etc/zun/zun.conf for glance turns up the image_driver_list = glance,docker option; uncomment it.
Recreate the container. Now it fails with a 500, port 23750 connection refused again. Check the kuryr service and restart it.
After kuryr restarts, create the container again.

Ohhhhh, it finally works.

Private Docker registry

Creating the registry

Create the storage directory:

mkdir -p /opt/data/registry

Create the container:

docker run -d -p 5000:5000 -v /opt/data/registry:/var/lib/registry --name private-registry registry

If you get this error:

docker: Error response from daemon: Conflict. The container name "/private-registry" is already in use by container "9bad47733c6cc8abc0199759a578442634228d7141bbcea267e1de4158d3baaa". You have to remove (or rename) that container to be able to reuse that name.

it means a container with that name already exists; run docker container ls --all to check:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
9bad47733c6c        registry            "/entrypoint.sh /etc…"   14 minutes ago      Created                                 private-registry

Just delete it and recreate the container.
One small gotcha here: -p specifies a port mapping as -p xxx:yyy, i.e. host port xxx maps to container port yyy. So what I wrote was off: if host port 5000 is taken, it should become -p 5050:5000, not -p 5000:5000.
I switched the host port to 5050, since 5000 seems to be occupied by httpd (no idea what for), though after creating it, pushes could still go through port 5000. Host port 5050 is thus mapped to the container's port 5000.

[root@controller kang]# docker container rm private-registry
[root@controller kang]# docker run -d -p 5050:5000 -v /opt/data/registry:/var/lib/registry --name private-registry registry
[root@controller kang]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
4f34ed10721f        registry            "/entrypoint.sh /etc…"   39 seconds ago      Up 38 seconds       0.0.0.0:5050->5000/tcp   private-registry

Whitelist the insecure registry by editing /etc/docker/daemon.json:

{"registry-mirrors":["https://docker.mirrors.ustc.edu.cn","https://kfwkfulq.mirror.aliyuncs.com","https://2lqq34jg.mirror.aliyuncs.com","https://p336w651.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"],"insecure-registries":["192.168.1.104:5050"]
}

Restart the service:

systemctl daemon-reload
systemctl restart docker

Start the container. If docker ps shows nothing, add the -a flag; by default only running containers are listed.

docker restart private-registry

Pushing images

Re-tag the image:

[root@controller kang]# docker tag cirros:latest 192.168.1.104:5050/my-cirros
[root@controller kang]# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
registry                       latest              2d4f4b5309b1        3 months ago        26.2MB
192.168.1.104:5050/my-cirros   latest              3c82e4d066cf        7 months ago        12.6MB
cirros                         latest              3c82e4d066cf        7 months ago        12.6MB
zun-registry.com/my-cirros     latest              3c82e4d066cf        7 months ago        12.6MB

If you want to remove a tag, run docker rmi zun-registry.com/my-cirros and that tag is deleted.
Then push:

[root@controller kang]# docker push 192.168.1.104:5050/my-cirros
The push refers to repository [192.168.1.104:5050/my-cirros]
858d98ac4893: Pushed
aa107a407592: Pushed
b993cfcfd8fd: Pushed
latest: digest: sha256:c7d58d6d463247a2540b8c10ff012c34fd443426462e891b13119a9c66dfd28a size: 943

Run curl 192.168.1.104:5050/v2/_catalog to check:

[root@controller kang]# curl 192.168.1.104:5050/v2/_catalog
{"repositories":["my-cirros"]}

If you want to test floating IPs, ssh and the like, pull the image ilpan/base-ssh, an Ubuntu 16.04 based image with ssh set up.

Exporting and importing images

The point of a private registry is using images offline. Offline means no pulling, so pull the images somewhere with connectivity first, then export them with:

docker save -o xx.tar xxx:latest

To import an image, run:

docker load -i xx.tar
# or
docker load < xx.tar

Container test

Test it by creating a container:

openstack appcontainer run --name container222 --net network=$NET_ID 192.168.1.104:5050/my-cirros

After a short wait the container comes up successfully. The image_pulling phase seems to be gone entirely; it goes straight to container_creating.

Container deletion

An ordinary Docker image is deleted with docker rmi {image_id}.
Images in the registry can be deleted too, which keeps the registry from bloating up and misbehaving. It's two main steps: delete the image files, then run the registry's garbage collection:

rm -rf /var/lib/registry/docker/registry/v2/repositories/{image_name}/
registry garbage-collect /etc/docker/registry/config.yaml

Finally, you can restart the registry container.
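Note that in this setup the registry binary and its config live inside the container, so the two steps would actually run through the bind mount and docker exec; a sketch, assuming the stock config path /etc/docker/registry/config.yml of the registry:2 image:

# the repository data is reachable on the host via the bind mount
rm -rf /opt/data/registry/docker/registry/v2/repositories/my-cirros
# run garbage collection inside the running registry container
docker exec private-registry registry garbage-collect /etc/docker/registry/config.yml
docker restart private-registry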

Errors

A few errors came up here; let's look at them in turn.

image could not be found

First, it kept throwing image-not-found errors even though nothing looked wrong on my end.

Following the code path, I found the responsible method.

You can see it actually checks the image's format, which must be docker. I added a line to print the image format, and all of mine were bare. bare is meant for VMs; to use an image with Docker, choose the docker format when uploading it.
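So an image intended for Zun's glance driver would be uploaded roughly like this (a sketch; the image and file names are illustrative):

docker save -o cirros-docker.tar cirros:latest
openstack image create my-glance-cirros --file cirros-docker.tar --container-format docker --disk-format raw --public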

permission deny

Creating a container fails with permission denied.

This one is simple; grant access to the directory:

chmod 777 /var/lib/glance/images

Image not found

With the docker image driver, images seem to be pulled straight from Docker Hub, so on a flaky connection the pull can simply fail.
Docker does have cirros: docker search cirros shows an official image.
With glance, I honestly haven't gotten it to work yet.
