docker build -t dvm.adsplatformproxy:v1.0.0 .      # build the image
docker run -e WWNamespace=dev -e ZKServerAddress=******  -p 6000:80  6cb913a34ae3    # run a container from the image (starts the process locally)
docker run -ti 6cb913a34ae3 /bin/bash   # open an interactive shell inside the image; type exit to leave
docker rmi -f b54d6e186ef4  # force-remove an image (docker rmi has no -rf flag, only -f)
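A few related housekeeping commands that go along with the build/run/rmi flow above (all standard Docker CLI):

docker images            # list local images and their ids
docker ps -a             # list all containers, including exited ones
docker rm <container-id> # remove a stopped container before force-removing its image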

Viewing tag information in a Docker registry

For the latest (as of 2015-07-31) version of Registry V2, you can get this image from DockerHub:

docker pull distribution/registry:master

List all repositories (effectively images):

curl -X GET https://myregistry:5000/v2/_catalog
> {"repositories":["redis","ubuntu"]}

List all tags for a repository:

curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
> {"name":"ubuntu","tags":["14.04"]}
kubectl get deployment  # list deployments
kubectl delete deployment ****  # delete a deployment; its ReplicaSets and Pods are removed automatically

  

kubectl create -f ******.yaml  # create resources from the yaml (includes the configmap and ingress definitions)
kubectl get ingress   # list ingress resources
kubectl get configmap  # list configmaps

 

kubectl edit configmaps *****-config -n *namespace* 

kubectl get configmap  # 1. list the configmaps
kubectl edit configmap ******    # 2. edit the configmap

 ..\refreshconfig.ps1 -ConfigMapName dvm-website-config    # 3. refresh the config via the PowerShell script, or delete the existing pods as sketched below; replacement pods are created automatically and pick up the new configmap
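A minimal sketch of the delete-the-pods route, assuming the pods carry a label such as app=dvm-website (the label is hypothetical); the deployment recreates them and the replacements mount the updated configmap:

kubectl delete pod -l app=dvm-website      # remove the running pods
kubectl get pod -l app=dvm-website -w      # watch the replacement pods come up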

kubectl create -f rc-nginx.yaml
kubectl replace -f rc-nginx.yaml
kubectl edit po rc-nginx-btv4j
kubectl delete -f rc-nginx.yaml
kubectl delete po rc-nginx-btv4j
kubectl delete po -l app=nginx-2
kubectl describe po rc-nginx-2-btv4j
kubectl get namespace

kubectl get po <podname> -o yaml  # print the pod's details in yaml format
kubectl get po <podname> -o json  # print the pod's details in json format
kubectl get po rc-nginx-2-btv4j -o=custom-columns=LABELS:.metadata.labels.app  # use -o=custom-columns= to extract just the specified fields


Updating ConfigMaps

A new command, kubectl rolling-restart, that takes an RC name, incrementally deletes all the pods controlled by the RC, and allows the RC to recreate them.

Small workaround (I use deployments and I want to change configs without real changes to the image/pod):

  • create configMap
  • create deployment with ENV variable (you will use it as indicator for your deployment) in any container
  • update configMap
  • update deployment (change this ENV variable)

k8s will see that the definition of the deployment has changed and will start the process of replacing the pods.
PS: if someone has a better solution, please share.

It feels like the right solution here would enable you to restart a deployment, and reuse most of the deployment parameters for rollouts like MinReadyCount, while allowing for command-line overrides like increasing the parallelism for emergency situations where you need everything to bounce immediately.

We would also like to see this for deployments maybe like kubectl restart deployment some-api

Kubernetes is allowed to restart Pods for all sorts of reasons, but the cluster admin isn't allowed to.
I understand the moral stand that 'turn it off and on again' may not be a desired way to operate... but I also think it should be ok to let those people who wish to, to restart a Deployment without resorting to the range of less appetizing tricks like:

  • deleting pods
  • dummy labels
  • dummy environment variables
  • dummy config maps mapped to environment variable
  • rebooting the worker nodes
  • cutting the power to the data centre ?

'No, no, I'm not restarting anything, just correcting a typo in this label here' ?

This feature will be useful in pair with kubectl apply: apply will update configs, including Replication Controllers, but pods won't be restarted.

Could you explain? Should I just use kubectl apply -f new_config.yml with updated deployments, and these deployments will be rolling-restarted?

kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'

Make sure the date is evaluated by the shell: as in the quoting above, the $(date +%s) sits outside the single quotes so the shell expands it.

It should be the app's responsibility to watch the filesystem for changes; as mentioned, you can use checksums on the configmap/secret and force restarts that way.
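A minimal sketch of that checksum idea, assuming a configmap named my-config and a deployment named my-deployment (both names hypothetical): hash the configmap contents and stamp the hash onto the pod template as an annotation, so any config change forces a rollout:

HASH=$(kubectl get configmap my-config -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"configmap-hash\":\"$HASH\"}}}}}"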

But if you don't want to change the config at all and just want a rolling restart with an arbitrary pause, a simple pipeline does the job (this one sleeps 30 seconds between terminated pods):

kubectl get po -l release=my-app -o name | cut -d"/" -f2 | while read p; do kubectl delete po $p; sleep 30; done

alternative command:

kubectl get pods | grep somename | awk '{print $1}' | xargs -I{} sh -c 'kubectl delete pod -o name {} && sleep 4'

Two and a half years on and people are still crafting new workarounds, with dummy env vars, dummy labels, ConfigMap and Secret watcher sidecars, scaling to zero, and straight-out rolling-update shell scripts to simulate the ability to trigger a rolling update. Is this still something cluster admins should not be allowed to do honestly, without the tricks?

kubectl scale --replicas=0 deployment application
kubectl scale --replicas=1 deployment application

Another trick is to initially run:

kubectl set image deployment/my-deployment mycontainer=myimage:latest

and then:

kubectl set image deployment/my-deployment mycontainer=myimage

This will actually trigger the rolling update, but make sure you also have imagePullPolicy: "Always" set.

Another trick I found, which doesn't require changing the image name, is to change the value of a field that will trigger a rolling update, such as terminationGracePeriodSeconds. You can do this with kubectl edit deployment your_deployment, with kubectl apply -f your_deployment.yaml, or with a patch like this:

kubectl patch deployment your_deployment -p \
  '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'

# Force an upgrade even though the docker-compose.yml for the services didn't change
$ rancher-compose up --force-upgrade

You can always write a custom pid 1 that notices the configmap has changed and restarts your app.

You can also, for example, mount the same config map in 2 containers, expose an http health check in the second container that fails if the hash of the config map contents changes, and wire that up as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
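A rough sketch of that pattern as a single pod spec, with hypothetical image names (the config-watcher image is assumed to serve /confighash and start failing once the hash of the mounted config changes); the kubelet then restarts the app container when its probe fails:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: config-probe-demo
spec:
  volumes:
  - name: config
    configMap:
      name: my-config              # hypothetical configmap
  containers:
  - name: app                      # the real workload; restarted when the probe fails
    image: myapp:latest            # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
    livenessProbe:
      httpGet:
        path: /confighash          # served by the watcher container (shared network namespace)
        port: 8080
  - name: config-watcher           # hypothetical sidecar watching the mounted config
    image: config-watcher:latest
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
    ports:
    - containerPort: 8080
EOF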

Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.

Using a deployment, I would scale it down and then up. You will still have a small amount of downtime though. You can do it in one line to reduce that...

kubectl scale deployment/update-demo --replicas=0; kubectl scale deployment/update-demo --replicas=4

If you don't want to find all the pods, and don't care about downtime - just remove the RC and then re-create the RC.

The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368 linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.

When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.

Not quite as quick as just editing the ConfigMap in place, but much safer.
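A minimal sketch of that workflow, with hypothetical names (my-config-v2, my-deployment, and a volume called config): create the new ConfigMap, then point the deployment's volume at it, which kicks off a normal rollout:

kubectl create configmap my-config-v2 --from-file=./config/
kubectl patch deployment my-deployment -p \
  '{"spec":{"template":{"spec":{"volumes":[{"name":"config","configMap":{"name":"my-config-v2"}}]}}}}'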

Oftentimes configmaps or secrets are injected as configuration files into containers. Depending on the application, a restart may be required should those be updated with a subsequent helm upgrade; but if the deployment spec itself didn't change, the application keeps running with the old configuration, resulting in an inconsistent deployment.

Yes. In order to do a rolling update, both the previous and new versions of the configmap must simultaneously exist.

I assume the "main configmap" would be embedded in Helm's representation of the chart. The live configmaps would be the configmap versions in use, and should be garbage collected with the replicasets generated by Deployment.

 

Images can be built from the command line, e.g.:
 docker build . -t *******:v1.0.0.3
Then tag the image for the Docker registry:
 docker tag  *******:v1.0.0.3 [ip]:5000/[path]:v1.0.0.3
Once verified, push it to the registry:
 docker push [ip]:5000/[path]:v1.0.0.3
To build the image from Visual Studio, set the Docker-compose project as the startup project and start it at least once. Whenever the image needs to be rebuilt, it is recommended to clean the Docker-Compose project first and then rebuild.
The subsequent tag and push steps follow the flow above.

1. Install Docker on the dev machine and add [ip]:5000 as an insecure registry
2. Download kubectl.exe and add its location to the Path environment variable
3. Put the relevant crt and key files into the .kube directory under the user's home directory, e.g. C:\Users\user01\.kube
4. Open a command prompt and create a cluster entry in the config file, e.g.
kubectl config  set-cluster bjoffice --server=https://[ip]:6443 --insecure-skip-tls-verify
5. Bind the user credentials
kubectl.exe config set-credentials username --client-certificate=username.crt --client-key=username.key
6. Bind the user context
kubectl config set-context username-context --cluster=[] --namespace=yourns --user=username
7. Switch to this context
kubectl config use-context username-context
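Once the context is switched, a quick sanity check (namespace as configured above):

kubectl config current-context    # should print username-context
kubectl config get-contexts       # lists all configured contexts
kubectl get pods -n yourns        # verifies the certificate and the connection to the API server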

Once this is configured, everyone can develop and manage microservices in container mode.
             The attachment includes the kubernetes control tool, the corresponding certificate files, and a sample yaml.

1. Get familiar with container-based microservice development and packaging (Dev)
2. Get familiar with container deployment
3. Establish development standards
4. Establish management standards, such as automated release/rollback mechanisms and monitoring
5. Deploy and operate the cluster environment
6. Consider using different development stacks to improve efficiency
7. Review and improve

Dev:
Build and push the image
Create the yaml
Manage dev configmap/secrets
Build custom base images if necessary
May have view access to the online production environment (TBD)

Test:
Test based on the images/yaml
Manage the test configmap/secrets
Tag the release branch/image
Push the yaml/image to production repository (manual script/CI)

OPS:
Infrastructure management(Nodes)
Online resource monitoring(CPU/Memory)
Manage the production configmap/secrets
Nodes & deployment management if necessary
Replication controller management if necessary

The yaml should be split into three parts:
1. Config
2. Deployment and service
3. Ingress
For details, see the Sample under Docs\Yaml\Deploy.
When an update is needed, update only the relevant part; avoid delete/create-style updates where possible and consider editing in edit mode instead, for example:
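For example (file and resource names are placeholders), to update only the part that changed rather than recreating everything:

kubectl -n dev apply -f ingress.yaml          # re-apply just the changed part
kubectl -n dev edit configmap ******-config   # or edit the live object in place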

In some cases you may need to skip the deployment and debug an image directly in interactive mode (e.g. for a Job). You can use the kubectl run command directly; the argument after run is the deployment name:
                          kubectl -n dev run downloadimagetest --image=[ip]:5000/[path]:v1.0.0.6 -- /usr/bin/tail -f /dev/null
             Then log into the pod for debugging with kubectl exec -ti podname /bin/bash.
             Note that this process does not set the environment variables from the yaml; if they are needed, set them separately, as sketched below.
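A sketch of the same kubectl run invocation with the environment variables passed explicitly via --env (the values reuse placeholders from elsewhere in this document):
                          kubectl -n dev run downloadimagetest --image=[ip]:5000/[path]:v1.0.0.6 --env=WWNamespace=dev --env=ZKServerAddress=[ip1],[ip2],[ip3] -- /usr/bin/tail -f /dev/null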
             If it is only a configuration change, there is no need to delete/create the whole yaml; just set the deployment's image parameter:
                           kubectl -n dev set image deployment/******-deployment dvm-adsproxyapi=[ip]:5000/[patch]:v1.0.0.412
             Or edit it manually with the edit command (Notepad or vi will open):
                           kubectl -n dev edit deployment/******-deployment
             If you cannot push/pull images during development, consider restarting Docker.

To build a Docker image from Visual Studio, set the target project to Docker-compose and set the configuration to Release.
In Debug mode, VS generates only a small number of dlls, for its own use.
In our current Dockerfile sample, files are copied from ./obj/Docker/publish by default (this directory contains all the dlls).
If you want to publish manually without building the solution-level Docker-compose, run the following command in the project directory (where the Dockerfile lives):
dotnet publish ******.WebApi.csproj -c Release -o ./obj/Docker/publish
Replace the project name and configuration as appropriate.
Then package it with docker build -t tagname .
You can also put these two steps into a single .cmd file.
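A minimal version of that combined script (project name and tag are the placeholders used above); the same two commands chained so the build only runs if the publish succeeds:

dotnet publish ******.WebApi.csproj -c Release -o ./obj/Docker/publish && docker build -t tagname .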

During development you will often need to debug something after it has been deployed; in that case, consider running a local Docker container, e.g.:
docker run -e WWNamespace=[] -e ZKServerAddress=[ip1],[ip2],[ip3]  -p 5000:80 5aed1c78f55e
Here -e sets an environment variable and -p sets the port mapping: the first port is the host port, the second is the container port (usually 80 for a WebApi), and the final argument is the image id.
To run it in the background, add the -d flag; stop the container when you are done.
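For example (image id taken from the command above; fill in the container id from docker ps):

docker run -d -e WWNamespace=[] -e ZKServerAddress=[ip1],[ip2],[ip3] -p 5000:80 5aed1c78f55e
docker ps                        # find the id of the detached container
docker logs -f <container-id>    # follow the application output
docker stop <container-id>       # stop it when finished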

"file integrity checksum failed" while pushing image to registry

1.docker system prune -a        ,solved the problem,重启docker

2.或者docker =》settings=》advance:提高内存

3.Had exactly the same problema. It worked after I deleted the images and rebuilt

Reposted from: https://www.cnblogs.com/panpanwelcome/p/8472651.html
