Tungsten Fabric Knowledge Base: A supplement on OpenStack, K8s, and CentOS installation issues
Author: Tatsuya Naganawa  Translator: TF Compilation Group

Multi-kube-master deployment

3 Tungsten Fabric controller nodes: m3.xlarge (4 vCPU) -> c3.4xlarge (16 vCPU) (since schema-transformer needs CPU resources for ACL calculation, I had to add resources)
100 kube-master, 800 workers: m3.medium

The tf-controller installation and first-containers.yaml are the same as in the following link:

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#2-tungstenfabric-up-and-running

The AMI is also the same (ami-3185744e), but the kernel version is updated with yum -y update kernel (converted to an image, which is then used to launch the instances).

/tmp/aaa.pem is the keypair specified for the EC2 instances.

The cni.yaml file is attached:

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/multi-kube-master-deployment-cni-tungsten-fabric.yaml
(type the following commands on one of the Tungsten Fabric controller nodes)
yum -y install epel-release
yum -y install parallel

aws ec2 describe-instances --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text | tr '\t' '\n' > /tmp/all.txt
head -n 100 /tmp/all.txt > masters.txt
tail -n 800 /tmp/all.txt > workers.txt

ulimit -n 4096
cat all.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} id
cat all.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{2} sudo kubeadm init --token aaaaaa.aaaabbbbccccdddd --ignore-preflight-errors=NumCPU --pod-network-cidr=10.32.{1}.0/24 --service-cidr=10.96.{1}.0/24 --service-dns-domain=cluster{1}.local
-
vi assign-kube-master.py
computenodes=8
computenodes=8
with open ('masters.txt') as aaa:
  with open ('workers.txt') as bbb:
    for masternode in aaa.read().rstrip().split('\n'):
      for i in range (computenodes):
        tmp=bbb.readline().rstrip()
        print ("{}\t{}".format(masternode, tmp))
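join.txt, which the kubeadm join step below reads (master IP in column 2, worker IP in column 3), is presumably the output of this script saved to a file, e.g.:

python assign-kube-master.py > join.txt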
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo chmod 644 /tmp/admin.conf
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' scp -i /tmp/aaa.pem centos@{2}:/tmp/admin.conf kubeconfig-{1}
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} get node
-
cat -n join.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{3} sudo kubeadm join {2}:6443 --token aaaaaa.aaaabbbbccccdddd --discovery-token-unsafe-skip-ca-verification
-
(modify controller-ip in cni-tungsten-fabric.yaml)
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' cp cni-tungsten-fabric.yaml cni-{1}.yaml
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' sed -i -e "s/k8s2/k8s{1}/" -e "s/10.32.2/10.32.{1}/" -e "s/10.64.2/10.64.{1}/" -e "s/10.96.2/10.96.{1}/"  -e "s/172.31.x.x/{2}/" cni-{1}.yaml
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} apply -f cni-{1}.yaml
-
sed -i 's!kubectl!kubectl --kubeconfig=/etc/kubernetes/admin.conf!' set-label.sh
cat masters.txt | parallel -j1000 scp -i /tmp/aaa.pem set-label.sh centos@{}:/tmp
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo bash /tmp/set-label.sh
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} create -f first-containers.yaml
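To verify that the sample containers come up on every cluster, the same parallel/kubeconfig pattern can be reused (a check I added, not part of the original steps):

cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} get pod -o wide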

Nested Kubernetes installation on OpenStack

A nested installation of Kubernetes can be tried on an all-in-one OpenStack node.

After that node has been set up by the ansible-deployer,

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#openstack-1

a link-local service additionally needs to be created manually for the vRouter TCP/9091 connection:

  • https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/kubernetes/nested-kubernetes.md#option-1-fabric-snat--link-local-preferred

This configuration creates DNAT/SNAT, e.g. from src: 10.0.1.3:xxxx, dst-ip: 10.1.1.11:9091 to src: the compute's vhost0 ip:xxxx, dst-ip: 127.0.0.1:9091, so that the CNI inside the OpenStack VM can talk directly to the vrouter-agent on the compute node and pick up port/IP information for its containers.

  • The IP address can be from the subnet, or from outside the subnet.
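As a quick sanity check from inside one of the instances, the link-local VIP should accept TCP connections on port 9091 (a minimal sketch I added; 10.1.1.11 is the example VIP used for the link-local service above):

timeout 3 bash -c 'cat < /dev/null > /dev/tcp/10.1.1.11/9091' && echo "vrouter-agent is reachable through the link-local service"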

On that node, two CentOS 7 (or Ubuntu Bionic) instances are created, and a Kubernetes cluster is installed on them with the same procedure as in the link below (a sketch of creating the two instances follows the link).

  • https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricPrimer.md#kubeadm
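For reference, the two instances can be created with the standard openstack CLI; the flavor, image, network, and key names below are placeholders, not values from the original text:

openstack server create --flavor m1.large --image centos7 --network k8s-vn --key-name mykey k8s-master
openstack server create --flavor m1.large --image centos7 --network k8s-vn --key-name mykey k8s-worker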

Of course, the yaml file needs to be the one for nested installation.

./resolve-manifest.sh contrail-nested-kubernetes.yaml > cni-tungsten-fabric.yaml

KUBEMANAGER_NESTED_MODE: "{{ KUBEMANAGER_NESTED_MODE }}" ## this needs to be "1"
KUBERNESTES_NESTED_VROUTER_VIP: {{ KUBERNESTES_NESTED_VROUTER_VIP }} ## this parameter needs to be the same IP with the one defined in link-local service (such as 10.1.1.11)

If coredns gets an IP address, the nested installation is working.
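One way to confirm this is to check the coredns pods from the kube-master (standard kubectl; k8s-app=kube-dns is the label kubeadm assigns to coredns):

kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide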

vRouter ml2 plugin

I tried the ml2 feature of the vRouter Neutron plugin.

  • https://opendev.org/x/networking-opencontrail/
  • https://www.youtube.com/watch?v=4MkkMRR9U2s

Three CentOS 7.5 nodes on AWS are used (4 CPU, 16 GB memory, 30 GB disk, ami: ami-3185744e).

The steps below are based on this document:

  • https://opendev.org/x/networking-opencontrail/src/branch/master/doc/source/installation/playbooks.rst
openstack-controller: 172.31.15.248
tungsten-fabric-controller (vRouter): 172.31.10.212
nova-compute (ovs): 172.31.0.231

(the following commands are typed on the tungsten-fabric-controller node, as the centos user (not root))
sudo yum -y remove PyYAML python-requests
sudo yum -y install git patch
sudo easy_install pip
sudo pip install PyYAML requests ansible==2.8.8
ssh-keygen
(add id_rsa.pub to authorized_keys on all three nodes, for the centos user (not root))
git clone https://opendev.org/x/networking-opencontrail.git
cd networking-opencontrail
patch -p1 < ml2-vrouter.diff
cd playbooks
cp -i hosts.example hosts
cp -i group_vars/all.yml.example group_vars/all.yml
(ssh to all the nodes once, to update known_hosts)
ansible-playbook main.yml -i hosts

- devstack logs are in /opt/stack/logs/stack.sh.log
- openstack process logs are written to /var/log/messages
- 'systemctl list-unit-files | grep devstack' shows the systemctl entries for the openstack processes

(openstack controller node)
If devstack fails with a mariadb login error, type the following commands to fix it. (The last two lines need to be modified for the openstack controller's IP and FQDN.) The commands are typed as the "centos" user (not root).
mysqladmin -u root password admin
mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''%'\'' identified by '\''admin'\'';'
mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''172.31.15.248'\'' identified by '\''admin'\'';'
mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''ip-172-31-15-248.ap-northeast-1.compute.internal'\'' identified by '\''admin'\'';'

The hosts, group_vars/all.yml, and patch used are attached below. (Some changes are just bug fixes, but some change the default behavior.)

[centos@ip-172-31-10-212 playbooks]$ cat hosts
controller ansible_host=172.31.15.248 ansible_user=centos

# This host should be in one of the compute host groups.
# A playbook that deploys a Tungsten Fabric compute node separately is not ready yet.
contrail_controller ansible_host=172.31.10.212 ansible_user=centos local_ip=172.31.10.212

[contrail]
contrail_controller

[openvswitch]
other_compute ansible_host=172.31.0.231 local_ip=172.31.0.231 ansible_user=centos

[compute:children]
contrail
openvswitch
[centos@ip-172-31-10-212 playbooks]$ cat group_vars/all.yml
---
# IP address for OpenConrail (e.g. 192.168.0.2)
contrail_ip: 172.31.10.212

# Gateway address for OpenConrail (e.g. 192.168.0.1)
contrail_gateway:

# Interface name for OpenConrail (e.g. eth0)
contrail_interface:

# IP address for OpenStack VM (e.g. 192.168.0.3)
openstack_ip: 172.31.15.248

# OpenStack branch used on the VM.
openstack_branch: stable/queens

# Optionally, a different plugin version can be used (defaults to the OpenStack branch)
networking_plugin_version: master

# Tungsten Fabric docker image tag for contrail-ansible-deployer
contrail_version: master-latest

# If true, install the networking_bgpvpn plugin with the Tungsten Fabric driver
install_networking_bgpvpn_plugin: false

# If true, integrate with device manager (it will be started) and vRouter
# encapsulation priority will be set to 'VXLAN,MPLSoUDP,MPLSoGRE'.
dm_integration_enabled: false

# Optional path to a file with the topology for DM integration. When set and DM integration
# is enabled, the topology.yaml file will be copied to this location
dm_topology_file:

# If true, the password of instances created for the current ansible user will be set to the value of instance_password
change_password: false
# instance_password: uberpass1

# If set, override the docker daemon /etc config file with this data
# docker_config:
[centos@ip-172-31-10-212 playbooks]$
[centos@ip-172-31-10-212 networking-opencontrail]$ cat ml2-vrouter.diff
diff --git a/playbooks/roles/contrail_node/tasks/main.yml b/playbooks/roles/contrail_node/tasks/main.yml
index ee29b05..272ee47 100644
--- a/playbooks/roles/contrail_node/tasks/main.yml
+++ b/playbooks/roles/contrail_node/tasks/main.yml
@@ -7,7 +7,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -61,20 +60,20 @@
     chdir: ~/contrail-ansible-deployer/
     executable: /bin/bash
-- name: Generate ssh key for provisioning other nodes
-  openssh_keypair:
-    path: ~/.ssh/id_rsa
-    state: present
-  register: contrail_deployer_ssh_key
-
-- name: Propagate generated key
-  authorized_key:
-    user: "{{ ansible_user }}"
-    state: present
-    key: "{{ contrail_deployer_ssh_key.public_key }}"
-  delegate_to: "{{ item }}"
-  with_items: "{{ groups.contrail }}"
-  when: contrail_deployer_ssh_key.public_key
+#- name: Generate ssh key for provisioning other nodes
+#  openssh_keypair:
+#    path: ~/.ssh/id_rsa
+#    state: present
+#  register: contrail_deployer_ssh_key
+#
+#- name: Propagate generated key
+#  authorized_key:
+#    user: "{{ ansible_user }}"
+#    state: present
+#    key: "{{ contrail_deployer_ssh_key.public_key }}"
+#  delegate_to: "{{ item }}"
+#  with_items: "{{ groups.contrail }}"
+#  when: contrail_deployer_ssh_key.public_key

 - name: Provision Node before deploy contrail
   shell: |
@@ -105,4 +104,4 @@
     sleep: 5
     host: "{{ contrail_ip }}"
     port: 8082
-    timeout: 300
\ No newline at end of file
+    timeout: 300
diff --git a/playbooks/roles/contrail_node/templates/instances.yaml.j2 b/playbooks/roles/contrail_node/templates/instances.yaml.j2
index e3617fd..81ea101 100644
--- a/playbooks/roles/contrail_node/templates/instances.yaml.j2
+++ b/playbooks/roles/contrail_node/templates/instances.yaml.j2
@@ -14,6 +14,7 @@ instances:
       config_database:
       config:
       control:
+      analytics:
       webui:
 {% if "contrail_controller" in groups["contrail"] %}
       vrouter:
diff --git a/playbooks/roles/docker/tasks/main.yml b/playbooks/roles/docker/tasks/main.yml
index 8d7971b..5ed9352 100644
--- a/playbooks/roles/docker/tasks/main.yml
+++ b/playbooks/roles/docker/tasks/main.yml
@@ -6,7 +6,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -62,4 +61,4 @@
       - docker-py==1.10.6
       - docker-compose==1.9.0
     state: present
-    extra_args: --user
\ No newline at end of file
+    extra_args: --user
diff --git a/playbooks/roles/node/tasks/main.yml b/playbooks/roles/node/tasks/main.yml
index 0fb1751..d9ab111 100644
--- a/playbooks/roles/node/tasks/main.yml
+++ b/playbooks/roles/node/tasks/main.yml
@@ -1,13 +1,21 @@
 ---
-- name: Update kernel
+- name: Install required utilities
   become: yes
   yum:
-    name: kernel
-    state: latest
-  register: update_kernel
+    name:
+      - python3-devel
+      - libibverbs  ## needed by openstack controller node
+    state: present

-- name: Reboot the machine
-  become: yes
-  reboot:
-  when: update_kernel.changed
-  register: reboot_machine
+#- name: Update kernel
+#  become: yes
+#  yum:
+#    name: kernel
+#    state: latest
+#  register: update_kernel
+#
+#- name: Reboot the machine
+#  become: yes
+#  reboot:
+#  when: update_kernel.changed
+#  register: reboot_machine
diff --git a/playbooks/roles/restack_node/tasks/main.yml b/playbooks/roles/restack_node/tasks/main.yml
index a11e06e..f66d2ee 100644
--- a/playbooks/roles/restack_node/tasks/main.yml
+++ b/playbooks/roles/restack_node/tasks/main.yml
@@ -9,7 +9,7 @@
   become: yes
   pip:
     name:
-      - setuptools
+      - setuptools==43.0.0
       - requests
     state: forcereinstall
[centos@ip-172-31-10-212 networking-opencontrail]$

The installation takes about 50 minutes to complete.

Although /home/centos/devstack/openrc can be used to log in as the "demo" user, admin access is needed to specify the provider network type (null for vRouter, "vxlan" for ovs), so an adminrc needs to be created manually.

[centos@ip-172-31-15-248 ~]$ cat adminrc
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
export OS_PASSWORD=admin
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://172.31.15.248/identity  ## this needs to be modified
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_VOLUME_API_VERSION=2
[centos@ip-172-31-15-248 ~]$ openstack network create testvn
openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
openstack network create --provider-network-type vxlan testvn-ovs
openstack subnet create --subnet-range 192.168.110.0/24 --network testvn-ovs subnet1-ovs

- Two virtual networks are created
[centos@ip-172-31-15-248 ~]$ openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| d4e08516-71fc-401b-94fb-f52271c28dc9 | testvn-ovs | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| e872b73e-100e-4ab0-9c53-770e129227e8 | testvn     | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
+--------------------------------------+------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

- testvn's provider:network_type is null
[centos@ip-172-31-15-248 ~]$ openstack network show testvn
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:42Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e872b73e-100e-4ab0-9c53-770e129227e8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | testvn                               |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | local                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:44Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

- It is created in Tungsten Fabric's database as well
(venv) [root@ip-172-31-10-212 ~]# contrail-api-cli --host 172.31.10.212 ls -l virtual-network
virtual-network/e872b73e-100e-4ab0-9c53-770e129227e8  default-domain:admin:testvn
virtual-network/5a88a460-b049-4114-a3ef-d7939853cb13  default-domain:default-project:dci-network
virtual-network/f61d52b0-6577-42e0-a61f-7f1834a2f45e  default-domain:default-project:__link_local__
virtual-network/46b5d74a-24d3-47dd-bc82-c18f6bc706d7  default-domain:default-project:default-virtual-network
virtual-network/52925e2d-8c5d-4573-9317-2c346fb9edf0  default-domain:default-project:ip-fabric
virtual-network/2b0469cf-921f-4369-93a7-2d73350c82e7  default-domain:default-project:_internal_vn_ipv6_link_local
(venv) [root@ip-172-31-10-212 ~]#

- On the other hand, testvn-ovs's provider:network_type is vxlan, and the segmentation ID and MTU are assigned automatically
[centos@ip-172-31-15-248 ~]$ openstack network show testvn
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:42Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e872b73e-100e-4ab0-9c53-770e129227e8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | testvn                               |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | local                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:44Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$ openstack network show testvn-ovs
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:47Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | d4e08516-71fc-401b-94fb-f52271c28dc9 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | testvn-ovs                           |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 50                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:49Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

CentOS 8 installation process

centos8.2
ansible-deployer is used
only python3 is used (no python2)
- ansible 2.8.x is required
1 node for tf-controller and kube-master, 1 for vRouter

(all nodes)
yum install python3 chrony
alternatives --set python /usr/bin/python3

(vRouter nodes)
yum install network-scripts
- this is required, since vRouter currently does not support NetworkManager

(ansible node)
sudo yum -y install git
sudo pip3 install PyYAML requests ansible\
cirros-deployment-86885fbf85-tjkwn   1/1     Running   0          13s   10.47.255.249   ip-172-31-2-120.ap-northeast-1.compute.internal
[root@ip-172-31-7-20 ~]#
[root@ip-172-31-7-20 ~]#
[root@ip-172-31-7-20 ~]# kubectl exec -it cirros-deployment-86885fbf85-7z78k sh
/ # ip -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
17: eth0    inet 10.47.255.250/12 scope global eth0\       valid_lft forever preferred_lft forever
/ # ping 10.47.255.249
PING 10.47.255.249 (10.47.255.249): 56 data bytes
64 bytes from 10.47.255.249: seq=0 ttl=63 time=0.657 ms
64 bytes from 10.47.255.249: seq=1 ttl=63 time=0.073 ms
^C
--- 10.47.255.249 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.365/0.657 ms
/ #

- For chrony to work correctly after the vRouter is installed, the chronyd service may need to be restarted
[root@ip-172-31-4-206 ~]# chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 169.254.169.123               3   4     0   906  -8687ns[  -12us] +/-  428us
^? 129.250.35.250                2   7     0  1002   +429us[ +428us] +/-   73ms
^? 167.179.96.146                2   7     0   937   +665us[ +662us] +/- 2859us
^? 194.0.5.123                   2   6     0  1129   +477us[ +473us] +/-   44ms
^? 103.202.216.35                3   6     0   933  +9662ns[+6618ns] +/-  145ms
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:00:34 UTC; 33min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 727 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 2.1M
   CGroup: /system.slice/chronyd.service
           └─727 /usr/sbin/chronyd

Jun 28 16:00:33 localhost.localdomain chronyd[727]: Using right/UTC timezone to obtain leap second data
Jun 28 16:00:34 localhost.localdomain systemd[1]: Started NTP client/server.
Jun 28 16:00:42 localhost.localdomain chronyd[727]: Selected source 169.254.169.123
Jun 28 16:00:42 localhost.localdomain chronyd[727]: System clock TAI offset set to 37 seconds
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 167.179.96.146 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 103.202.216.35 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 129.250.35.250 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 194.0.5.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 169.254.169.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Can't synchronise: no selectable sources
[root@ip-172-31-4-206 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:34:41 UTC; 2s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 25252 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 25247 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 25250 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 1.0M
   CGroup: /system.slice/chronyd.service
           └─25250 /usr/sbin/chronyd

Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Starting NTP client/server...
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND>
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Frequency 35.298 +/- 0.039 ppm read from /var/lib/chrony/drift
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Using right/UTC timezone to obtain leap second data
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Started NTP client/server.
[root@ip-172-31-4-206 ~]#  chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   4    17     4  -2369ns[  -27us] +/-  451us
^- 94.154.96.7                   2   6    17     5    +30ms[  +30ms] +/-  148ms
^- 185.51.192.34                 2   6    17     3  -2951us[-2951us] +/-  150ms
^- 188.125.64.6                  2   6    17     3  +9526us[+9526us] +/-  143ms
^- 216.218.254.202               1   6    17     5    +15ms[  +15ms] +/-   72ms
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]# contrail-status
Pod      Service      Original Name           Original Version  State    Id            Status
         rsyslogd                             nightly-master    running  5fc76e57c156  Up 16 minutes
vrouter  agent        contrail-vrouter-agent  nightly-master    running  bce023d8e6e0  Up 5 minutes
vrouter  nodemgr      contrail-nodemgr        nightly-master    running  9439a304cbcf  Up 5 minutes
vrouter  provisioner  contrail-provisioner    nightly-master    running  1531b1403e49  Up 5 minutes

WARNING: container with original name '' have Pod or Service empty. Pod: '' / Service: 'rsyslogd'. Please pass NODE_TYPE with pod name to container's env

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active
[root@ip-172-31-4-206 ~]#

Original article:
https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricKnowledgeBase.md
