K8s Part 7: Authentication and Authorization
K8s authentication and authorization:
- Authentication: verifies that the client is a legitimate user. Multiple authentication plugins are supported, and a user who passes any one of them is considered authenticated.
- Authorization: checks the user's permissions.
- Admission control: if the user's operation involves additional resources, admission control checks are applied as well.
A client operates on resources through the API Server's URLs. A request is attributed with:
- user: the username and UID.
- group: the groups the user belongs to.
- extra: additional information about the user.
- API Request Path: the API Server exposes a URL for each resource; every operation on a resource goes through that resource's URL.
Proxying the API server's port 6443 on k8smaster to local port 8080
#The K8s API server serves HTTPS authentication on port 6443.
[root@k8smaster ~]# netstat -anput | grep 6443
tcp 0 0 192.168.43.45:46918 192.168.43.45:6443 ESTABLISHED 837/kubelet
tcp 0 0 192.168.43.45:47050 192.168.43.45:6443 ESTABLISHED 4364/kube-controlle
tcp 0 0 192.168.43.45:46760 192.168.43.45:6443 ESTABLISHED 4364/kube-controlle
tcp 0 0 192.168.43.45:46962 192.168.43.45:6443 ESTABLISHED 5109/kube-proxy
tcp 0 0 192.168.43.45:46764 192.168.43.45:6443 ESTABLISHED 4438/kube-scheduler
tcp6 0 0 :::6443 :::* LISTEN 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.176:37346 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.176:58268 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:46760 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.136:59222 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 10.244.0.28:54622 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 10.244.0.29:33264 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:47050 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:46962 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 ::1:59460 ::1:6443 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.136:59202 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.136:38694 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.176:58290 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:46764 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:39168 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 192.168.43.45:6443 192.168.43.45:46918 ESTABLISHED 4307/kube-apiserver
tcp6 0 0 ::1:6443 ::1:59460 ESTABLISHED 4307/kube-apiserver
#The root user's home directory holds the K8s client config file, which embeds the client certificate and client private key.
[root@k8smaster ~]# cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1UQXlPREV3TlRBek9Gb1hEVEk1TVRBeU5URXdOVEF6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSzQwCjhheHhqaS9PY1lFOHpEWmpyYjBrdXBaZHB6S2xYdGJ6bStrcXFtbkRJL01mNWN1a21ITlAxT295aDlUZVdkS1MKVDBLWkZuekR4TklyQTBYOEZVRzBpbll3QkswQnQrUEtjakVMUkNuUVcxWjJFdU9iOHBkZEtsS29ra0IwU0psSApOUGM1L0RqL2Z1ZHRxaEd0Ym1mc0RzUEMwZlFzaGZNLzkxYTlZZXBWOE41K0phaWxma0R6SHhIbG9aNmZid04wCnRiZm9JT2J6TjZwR2FCTFlnbkNJdzhPanBnelNZZ2U1N3pIR2tMQ1B3Q29sWUpJdFFtR0RKcUdDZUUzY1UvYW0KQ0ZRT0dRczE2R0pJbEFkNm5Tc1RSTElIRFRjektIb2diZFJUR3Nqb3NoV0xhZk50Y01PNEcxY04zVXBDNExtcAorZVNPMThMcDRnMXRPWlUyZHNNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLZkJhcVlpMXp2N2hSeFdsUHpuYmJOS2lkQkkKUjBydWdxenNDZU1FUnJCazVuMndaU2dJN1FwTml3UGk1ZXk0ZEVJbTc4NzBJTEJQUG9kbGt5a0haUkw1bWhOcwpxd1JjbkZOV3JKVkFuTldtSURSZ3JCZHdsOGFCaTBoSVJacGR1Q3lXU3RlV2RXME5zNEgrTVpxQUZrRG01d25rCmRqQlFDelZaR2pQMHBWZnJKUG03WCtmNXRKdUhmR0RISTA1ZC9uRVlmYnBSdXNqK1c5am56eE1PZjFLbFNSaUgKOE1PdElXMmtSbE5CM21zYWJ6MFFSWC9kdE40c1FOaitad2lBWmUwTjRKT3p5bXR0eXEzTktPNUNGamhmTWNuYgp3cTJmUVdPb1lWL1Fja1o4ckFudTNCQzN0eDhLS2lxMmExNjN5TzFrYktubE9HSzU3a244ZEJNL2ZWQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZDBGd2gzUkhQZmt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RFd01qZ3hNRFV3TXpoYUZ3MHlNREV3TWpjeE1EVXdOREZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXVIOEc3TTVhSFhmNXB1NzYKTWRNMmdRYzQ2d0wrL0lGZE5QbENzRXYwSEgzWXVrV2JDYWpMMU5WZ0QrbzdXbGlvRFZxRWwySGh1VW1Nb2NGcgpVZzEzTGlWLzdzZjJaNkFEQ1JIRnhpdEpsWEFDS2FOS3ZIRU5HRFVlRmROVHNVWUVvelhmRGE1Q3V1eC8yUUt5Ck5JbjFxdDA0ZGp1Z3JkK0V6eVB1cmYyZ295dG43WHJpcW1lbHNVanlJZTM3K2ZPbDBzNENxOEdublJPTE1KU2MKNGxsNmxmMlVXbHpNVUxPSmpCTEYvYVErRzIxaWc3eVZkUUJ0VHNpVUc1MkhzZllQZnMxVmNnNDNIaVBGZ2pmaAo4cTFQSFUwU0NmZVgrdStSMWVOT2pMd3RqMzVMcEdVNWZDYW82SDV4WnU1RjJROE1iSmU1L0ZyWCtERXYxS3RKCmdMczF3d0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEUjdyMDBoTE9DYWYyOFM5UHBPM2E4Y3FIOUllaVZ6ellsdAplZEk2bGx2QWJtS3hLRWxueWwxM2toaFU4MVRmcnprbXZublJaS25TNm41TFZDYkthQUF0RXNwK1Y3bXNnOVNMCjBzVkdIb0MrM1kyVFR1UVQ5VHVWM1FoRnNQOHFJcmR3Ni9kck1PdEdjMjE0dmlqTC9QV3RlWFh1VkZuZmd3QVUKanp4cGkvYXZ6bjVhd1hyK3hkTENBdmJJMW1Qb3RwalFvZUkyZWpWS3NnNlFWY2lsbVF3MisxOStoYThwSjFLSwo3MjRYendyZFp1eWRhbWZ2aFBiaDhkd3JNbGl4YVlKYWpwc0wxbkxPaHljeTJ2MjBEYUExeVFoR1FRdEx6OC9GCkZ4Ym9QUXg3Q0ZyYkVvdDZkZ3JhSkFBSmFGYllpRW9YUzNKREVSRlNTdElDVHM5OWVJMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdUg4RzdNNWFIWGY1cHU3Nk1kTTJnUWM0NndMKy9JRmROUGxDc0V2MEhIM1l1a1diCkNhakwxTlZnRCtvN1dsaW9EVnFFbDJIaHVVbU1vY0ZyVWcxM0xpVi83c2YyWjZBRENSSEZ4aXRKbFhBQ0thTksKdkhFTkdEVWVGZE5Uc1VZRW96WGZEYTVDdXV4LzJRS3lOSW4xcXQwNGRqdWdyZCtFenlQdXJmMmdveXRuN1hyaQpxbWVsc1VqeUllMzcrZk9sMHM0Q3E4R25uUk9MTUpTYzRsbDZsZjJVV2x6TVVMT0pqQkxGL2FRK0cyMWlnN3lWCmRRQnRUc2lVRzUySHNmWVBmczFWY2c0M0hpUEZnamZoOHExUEhVMFNDZmVYK3UrUjFlTk9qTHd0ajM1THBHVTUKZkNhbzZINXhadTVGMlE4TWJKZTUvRnJYK0RFdjFLdEpnTHMxd3dJREFRQUJBb0lCQVFDZEFDd0tkSWVuTUNPWQo5U0NnS2RibDhobHpsRGNjOVpFMXRUQVZDbTJQbVdCSEUxaWQzYkNuUzNUVjFrUHYzQ1lXUndNeU42OTRsNmcvCk5uTjNmZEgveVJXWFF6N2lhLzVwUjJDQUJQSTNZdnZVSnd0QVZRd0puNW9jaEp0aDdlMmdYZ1dVaE1od2ZUVkcKbk02OWV2RStGOGNtaGhOMEl4UEhtaEpRcWRaN1F0RG1NSlU3T24wUi81SW5IV3BtZlFOUGdOV3hla2MrNGhzZQpWcGR5WTUweGxYVThQUGN2ZFUzVS9MTEt4MWd2VVgrZU1HbVo3dWtFeWNVSTVEQSsyRm45L2lSUTZGOFAzWUgwCkdIQlVIQllOck9LN2FhTm5hOU0vbWlvZnhWVnorMmxuSHVFREloemZPemFWZEZjM1FLaDE5REdiYUExV1VCUlcKWmF4T1hVOFJBb0dCQU5kblZUM29kbHh5VGU3VnlPRGtlVENrY1BaSTAyREpuL0N5cVVjRDNqM0x5VGFNTWx0YgpnMmhtTnc2ZklPaDM3OUVkakd0N0VGa1YrdzJQOU5wMklJMUx3U1ZULzVtTEQ1Z3hyRFBseHdCdjNuMTM2dEdxCjZVQ2cydkkwV3lQVkdLZDJPekl1Nkp3UTNaYnNkUlBadFNaTHgvcDByWGF5aTdWQzlpeE1JRmdOQW9HQkFOdEUKZlhMN1A3YmNGYVVldldlRmpxNzJFVTJobTBwTTBsUWFPcGViU0xOTDF4MmR3WGtzSS83OFZRZDlpSlRWcDVDVApDUGtkbzMvaVBGWDFmdmt1QmhIbXM1RUtmWHJxdEdXam94ajVVTXE4U24xY1ZTMlNHY2srNHRkTkFiOVFTTHdNCkh4RDZYVUUvZW4wYjZNemlkZVJNWmRmY1BiSHI3OFlyUGtBRzlnRVBBb0dBTUJvQVBCbnNUSXF1QXBhMURCdVoKUUphSUwwZG1CS2doMGxOalg5dHFScXg2VzNjRlM4ZHMyZVJ4aVE5Wi91L0JteFlaSkd0UDVFVDNVamtDZWNLRgpWR2hGVW51bWlYZzNYRXBEWlRkN3NBcExTZ044YWFQY0FMV3JEd2xJRFFGcVJ3TXRCdkRZdXZrOU1wWE5NMGliCm5saXY2S3NqalcwanE2K3ZYNGNFZGdVQ2dZQTZSTkV4cFNNaGJRc3pmaC9IU3U3SUFBeEpIUkV2aFlxL1h0a0QKUVBqbzdOYVZ3RDZSL1BEejZncU9td1dZeDg1bjFTc2xTSU1Ta1FTSHMxMnl5bEJDb1pSR2p3c1poeFc1ak9yaQowQjV3UWVscHR3Zkx2Ryt0MDFCazlzbm9GV1crMDFuT0lUcDNCRytBbjlJVjRIaUQydW1WbTZtcGhwR0prQ1JTCno0YkFjUUtCZ0V2THM0SjMwaCtIb0VUODdOVFQ5WTBWYzBGRFBXWG0xcDdmKzJKa1pzejJ0dUhOOUcvR0RIUU0KY1N6Tll2MFI4azJqR0tnQjdnSVBDWlFkN29YVW81YU5maXVtZ2dDQUJ0RjV3eWFBWGtFWTRxVExabWRTbm5peQpUcktMa2l3aXRRd2lmckg4bHo1TkxTL2tGcXFKMXRuOWhJWWRDMmtsUHFDdnVjbUliMk1ECi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
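The certificate-authority-data, client-certificate-data, and client-key-data values above are nothing more exotic than base64-encoded PEM blocks. A round-trip sketch of the encoding (decoding the real fields works the same way, e.g. piping the decoded CA data into openssl x509 -noout -subject):

```shell
# kubeconfig embeds PEM material base64-encoded; demonstrate with a round trip.
pem_header='-----BEGIN CERTIFICATE-----'
encoded=$(printf '%s' "$pem_header" | base64)
printf '%s\n' "$encoded" | base64 -d   # prints the header back
```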
#We can operate on resources with curl through the API server's URLs, but curl cannot perform the K8s authentication, so we run kubectl proxy to expose the API server on local port 8080; going through the proxy, curl does not need to pass the HTTPS client authentication.
[root@k8smaster ~]# kubectl proxy --port=8080 &
Starting to serve on 127.0.0.1:8080
[root@k8smaster ~]# ss -tnl | grep 8080
LISTEN 0 128 127.0.0.1:8080 *:*
#List the namespaces
[root@k8smaster ~]# curl http://localhost:8080/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {"selfLink": "/api/v1/namespaces", "resourceVersion": "242473"},
  "items": [
    {"metadata": {"name": "default", "selfLink": "/api/v1/namespaces/default", "uid": "7d5c1c7e-babc-4a03-9d5a-1f71a8f43bc3", "resourceVersion": "150", "creationTimestamp": "2019-10-28T10:51:23Z"}, "spec": {"finalizers": ["kubernetes"]}, "status": {"phase": "Active"}},
    {"metadata": {"name": "kube-node-lease", "selfLink": "/api/v1/namespaces/kube-node-lease", "uid": "bfe8130f-8dd9-4517-8eaa-24fdd4be2d8e", "resourceVersion": "38", "creationTimestamp": "2019-10-28T10:51:20Z"}, "spec": {"finalizers": ["kubernetes"]}, "status": {"phase": "Active"}},
    {"metadata": {"name": "kube-public", "selfLink": "/api/v1/namespaces/kube-public", "uid": "aa0e095d-57d3-433d-908e-b688572975c0", "resourceVersion": "37", "creationTimestamp": "2019-10-28T10:51:20Z"}, "spec": {"finalizers": ["kubernetes"]}, "status": {"phase": "Active"}},
    {"metadata": {"name": "kube-system", "selfLink": "/api/v1/namespaces/kube-system", "uid": "970dbea7-9819-4096-9769-2a588921b739", "resourceVersion": "35", "creationTimestamp": "2019-10-28T10:51:20Z"}, "spec": {"finalizers": ["kubernetes"]}, "status": {"phase": "Active"}}
  ]
}
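To pull just the namespace names out of a response like the one above, a rough text-processing sketch (fed here from a trimmed literal rather than the live API; with jq available this is simply jq -r '.items[].metadata.name'):

```shell
# Extract namespace names from a NamespaceList response (trimmed literal input).
json='{"items":[{"metadata":{"name":"default"}},{"metadata":{"name":"kube-system"}}]}'
printf '%s\n' "$json" | grep -o '"name": *"[^"]*"' | sed 's/.*"name": *"\([^"]*\)"/\1/'
```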
#List all Deployments in the kube-system namespace.
[root@k8smaster ~]# kubectl get deploy -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 63d
[root@k8smaster ~]# curl http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments
HTTP request verbs:
- get
- post
- put
- delete
API request verbs (K8s API Server request verbs):
- get
- list
- create
- update
- patch
- watch
- proxy
- redirect
- delete
- deletecollection
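The two verb sets correspond only roughly. As a sketch (an approximation: list and watch both travel over HTTP GET and are distinguished by the request path and query parameters):

```shell
# Approximate mapping from API request verb to HTTP method.
http_method() {
  case "$1" in
    get|list|watch)          echo GET ;;
    create)                  echo POST ;;
    update)                  echo PUT ;;
    patch)                   echo PATCH ;;
    delete|deletecollection) echo DELETE ;;
    *)                       echo UNKNOWN ;;
  esac
}
http_method list    # GET
http_method create  # POST
```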
Accounts:
- User Account: an account managed by a service independent of Kubernetes, for example keys distributed by an administrator, a user store such as Keystone, or even a file containing a list of usernames and passwords. Kubernetes has no object representing such accounts, so they cannot be added to the cluster directly.
- Service Account: an account managed by the Kubernetes API, used to provide an identity for processes running inside Pods when they access the Kubernetes API. Service Accounts are bound to a specific namespace; they are created by the API Server, or manually through API calls, and carry a set of credentials for accessing the API Server, stored as Secrets.
ServiceAccount:
#Create a ServiceAccount
[root@k8smaster ~]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@k8smaster ~]# kubectl get sa
NAME SECRETS AGE
admin 1 5s
default 1 63d
[root@k8smaster ~]# kubectl describe sa admin
Name: admin
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: admin-token-bs7sd
Tokens: admin-token-bs7sd
Events: <none>
[root@k8smaster ~]# kubectl get secret
NAME TYPE DATA AGE
admin-token-bs7sd kubernetes.io/service-account-token 3 81s
default-token-kk2fq kubernetes.io/service-account-token 3 63d
mysql-root-password Opaque 1 10d
#Assign the ServiceAccount we created to a Pod
[root@k8smaster ~]# vim Pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sa-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    k8s.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  serviceAccountName: admin
[root@k8smaster ~]# kubectl apply -f Pod-demo.yaml
pod/pod-sa-demo created
[root@k8smaster ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 0 102s
[root@k8smaster ~]# kubectl describe pod pod-sa-demo
Name: pod-sa-demo
Namespace: default
Priority: 0
Node: k8snode1/192.168.43.136
Start Time: Tue, 31 Dec 2019 17:15:18 +0800
Labels:             app=myapp
                    tier=frontend
Annotations:        k8s.com/created-by: cluster admin
                    kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"k8s.com/created-by":"cluster admin"},"labels":{"app":"myapp","tier":"frontend"...
Status: Running
IP: 10.244.1.139
IPs:                <none>
Containers:
  myapp:
    Container ID:   docker://08de4d8f62d0c5b900664fda23e7243a94ef925d2003a62f2fe95876bf034c79
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:40ccda7b7e2d080bee7620b6d3f5e6697894befc409582902a67c963d30a6113
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 31 Dec 2019 17:15:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from admin-token-bs7sd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  admin-token-bs7sd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  admin-token-bs7sd   #matches the admin ServiceAccount's token shown above
    Optional:    false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  113s  default-scheduler  Successfully assigned default/pod-sa-demo to k8snode1
  Normal  Pulled     113s  kubelet, k8snode1  Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    113s  kubelet, k8snode1  Created container myapp
  Normal  Started    113s  kubelet, k8snode1  Started container myapp
#View the current kubeconfig
[root@k8smaster ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443   #URL of the cluster's API server
  name: kubernetes
contexts:
- context:
    cluster: kubernetes            #cluster name
    user: kubernetes-admin         #user name
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes   #the context currently in use
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Creating a custom user and authenticating it:
[root@k8smaster ~]# cd /etc/kubernetes/
[root@k8smaster kubernetes]# ls
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
[root@k8smaster kubernetes]# cd pki/
[root@k8smaster pki]# ls
apiserver.crt apiserver.key ca.crt front-proxy-ca.crt front-proxy-client.key
apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.key sa.key
apiserver-etcd-client.key apiserver-kubelet-client.key etcd front-proxy-client.crt sa.pub
#Create the client private key and a CA-signed certificate.
[root@k8smaster pki]# (umask 077; openssl genrsa -out k8s.key 2048)
Generating RSA private key, 2048 bit long modulus
.....................+++
.....+++
e is 65537 (0x10001)
[root@k8smaster pki]# openssl req -new -key k8s.key -out k8s.csr -subj "/CN=k8s"
[root@k8smaster pki]# openssl x509 -req -in k8s.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out k8s.crt -days 365
Signature ok
subject=/CN=k8s
Getting CA Private Key
[root@k8smaster pki]# ls
apiserver.crt apiserver.key ca.crt etcd front-proxy-client.crt k8s.csr sa.pub
apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.crt front-proxy-client.key k8s.key
apiserver-etcd-client.key apiserver-kubelet-client.key ca.srl front-proxy-ca.key k8s.crt sa.key
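The same signing flow can be rehearsed safely before touching the cluster's real CA (a self-contained sketch in a temp directory; the throwaway ca.key/ca.crt and the CN values are stand-ins for /etc/kubernetes/pki/ca.key and ca.crt above):

```shell
cd "$(mktemp -d)"
# throwaway CA standing in for the cluster's ca.crt/ca.key
(umask 077; openssl genrsa -out ca.key 2048)
openssl req -x509 -new -key ca.key -out ca.crt -days 365 -subj "/CN=kubernetes"
# client key + CSR + CA-signed cert, mirroring the steps above
(umask 077; openssl genrsa -out k8s.key 2048)
openssl req -new -key k8s.key -out k8s.csr -subj "/CN=k8s"
openssl x509 -req -in k8s.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out k8s.crt -days 365
openssl verify -CAfile ca.crt k8s.crt   # k8s.crt: OK
```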
[root@k8smaster pki]# kubectl config set-credentials k8s --client-certificate=./k8s.crt --client-key=./k8s.key --embed-certs=true
User "k8s" set.
[root@k8smaster pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
#Define a context for the new user
[root@k8smaster pki]# kubectl config set-context k8s@kubernetes --cluster=kubernetes --user=k8s
Context "k8s@kubernetes" created.
[root@k8smaster pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: k8s
  name: k8s@kubernetes
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
#Switch to the account we created
[root@k8smaster pki]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
#Permission denied: the account we created has not been granted any permissions yet.
[root@k8smaster pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "default"
#Switch back to the administrator account
[root@k8smaster pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
#The permissions are back.
[root@k8smaster pki]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 0 90m
#Generate a standalone config file from the current cluster state. (The default config file is .kube/config in the user's home directory.)
[root@k8smaster ~]# kubectl config set-cluster mycluster --kubeconfig=/tmp/test.conf --server="https://192.168.43.45:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
Cluster "mycluster" set.
[root@k8smaster ~]# kubectl config view --kubeconfig=/tmp/test.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: mycluster
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
The RBAC authorization plugin:
RBAC (Role-Based Access Control) is a flexible and widely used access control mechanism. Instead of granting permissions directly to users, as traditional access control does, it grants permissions to roles: permissions are bound to a role, users are bound to the role, and a user thereby holds exactly the permissions of its roles.
Role and ClusterRole:
- A Role is namespace-scoped; it defines a set of permissions on resources within one namespace.
- A ClusterRole defines a set of permissions at cluster scope. Both are standard API resource types.
- In general, a ClusterRole's grants apply to the entire cluster, so it is used where a Role cannot reach: cluster-scoped resources (such as Nodes), non-resource endpoints (such as /healthz), and resources across all namespaces (for example, permission to get a resource type in every namespace).
RoleBinding and ClusterRoleBinding:
- A RoleBinding grants the permissions of a role to one user or a group of users. It belongs to, and only takes effect in, a single namespace. A RoleBinding may reference a Role in the same namespace or a cluster-level ClusterRole.
- A ClusterRoleBinding grants the permissions defined in a ClusterRole to one user or a group of users; it can only reference a ClusterRole.
- A namespace can contain multiple Role and RoleBinding objects; likewise, multiple ClusterRole and ClusterRoleBinding objects can coexist at cluster level. An account can also be associated with several roles via RoleBindings or ClusterRoleBindings, giving it their combined grants.
Role & RoleBinding (binding a Role with a RoleBinding):
#Create a role that grants read access to Pods in the default namespace.
[root@k8smaster ~]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml > role-demo.yaml
[root@k8smaster ~]# vim role-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pods-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl apply -f role-demo.yaml
role.rbac.authorization.k8s.io/pods-reader created
[root@k8smaster ~]# kubectl get role
NAME AGE
pods-reader 6s
[root@k8smaster ~]# kubectl describe role pods-reader
Name: pods-reader
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"pods-reader","namespace":"default"},"rules...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get list watch]
#Create a rolebinding that binds the k8s user we created earlier to the role.
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --role=pods-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --role=pods-reader --user=k8s --dry-run -o yaml > rolebinding-demo.yaml
[root@k8smaster ~]# vim rolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f rolebinding-demo.yaml
rolebinding.rbac.authorization.k8s.io/k8s-read-pods created
[root@k8smaster ~]# kubectl get rolebinding
NAME AGE
k8s-read-pods 20s
[root@k8smaster ~]# kubectl describe rolebinding k8s-read-pods
Name: k8s-read-pods
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"k8s-read-pods","namespace":"default...
Role:
  Kind:  Role
  Name:  pods-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s
#Verify the permissions
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 1 21h
#But we only granted this role permission to view Pods, so listing namespaces is denied.
[root@k8smaster ~]# kubectl get namespace
Error from server (Forbidden): namespaces is forbidden: User "k8s" cannot list resource "namespaces" in API group "" at the cluster scope
#Nor can it view Pods in other namespaces.
[root@k8smaster ~]# kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
ClusterRole & ClusterRoleBinding (binding a ClusterRole with a ClusterRoleBinding):
#Create a clusterrole that grants read access to Pods in all namespaces.
[root@k8smaster ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster ~]# kubectl create clusterrole cluster-reader --verb=get,list,watch --resource=pods -o yaml --dry-run
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: cluster-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl create clusterrole cluster-reader --verb=get,list,watch --resource=pods -o yaml --dry-run > clusterrole-demo.yaml
[root@k8smaster ~]# vim clusterrole-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl apply -f clusterrole-demo.yaml
clusterrole.rbac.authorization.k8s.io/cluster-reader created
#Create a clusterrolebinding that binds it to our k8s user.
[root@k8smaster ~]# kubectl get rolebinding
NAME AGE
k8s-read-pods 21m
[root@k8smaster ~]# kubectl delete rolebinding k8s-read-pods
rolebinding.rbac.authorization.k8s.io "k8s-read-pods" deleted
[root@k8smaster ~]# kubectl create clusterrolebinding k8s-read-all-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create clusterrolebinding k8s-read-all-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml > clusterrolebinding-demo.yaml
[root@k8smaster ~]# vim clusterrolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: k8s-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f clusterrolebinding-demo.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-read-all-pods created
[root@k8smaster ~]# kubectl describe clusterrolebinding k8s-read-all-pods
Name: k8s-read-all-pods
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"k8s-read-all-pods"},"ro...
Role:
  Kind:  ClusterRole
  Name:  cluster-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s
#Verify the permissions
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 1 21h
#This time the k8s user can see Pods in other namespaces.
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bf7759867-8h4x8 1/1 Running 6 64d
coredns-bf7759867-slmsz 1/1 Running 6 64d
etcd-k8smaster 1/1 Running 18 64d
kube-apiserver-k8smaster 1/1 Running 61 63d
kube-controller-manager-k8smaster 1/1 Running 16 64d
kube-flannel-ds-amd64-6zhtw 1/1 Running 15 64d
kube-flannel-ds-amd64-wnh9k 1/1 Running 6 64d
kube-flannel-ds-amd64-wqvz9 1/1 Running 15 64d
kube-proxy-2j8w9 1/1 Running 15 64d
kube-proxy-kqxlq 1/1 Running 14 64d
kube-proxy-nb82z 1/1 Running 6 64d
kube-scheduler-k8smaster 1/1 Running 16 64d
traefik-ingress-controller-8wbtb 0/1 CrashLoopBackOff 343 15d
traefik-ingress-controller-bmpbk 0/1 CrashLoopBackOff 343 15d
#But it still only has permission to view Pods.
[root@k8smaster ~]# kubectl get service -n kube-system
Error from server (Forbidden): services is forbidden: User "k8s" cannot list resource "services" in API group "" in the namespace "kube-system"
ClusterRole & RoleBinding (binding a ClusterRole with a RoleBinding):
[root@k8smaster ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster ~]# kubectl delete clusterrolebinding k8s-read-all-pods
clusterrolebinding.rbac.authorization.k8s.io "k8s-read-all-pods" deleted
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml > rolebinding-clusterrole-demo.yaml
[root@k8smaster ~]# vim rolebinding-clusterrole-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f rolebinding-clusterrole-demo.yaml
rolebinding.rbac.authorization.k8s.io/k8s-read-pods created
[root@k8smaster ~]# kubectl describe rolebinding k8s-read-pods
Name: k8s-read-pods
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"k8s-read-pods","namespace":"default...
Role:
  Kind:  ClusterRole
  Name:  cluster-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s
#Verify the permissions: referenced through a RoleBinding, the ClusterRole's grants apply only inside the binding's namespace (default).
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 1 21h
[root@k8smaster ~]# kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
The admin role:
admin is a built-in ClusterRole intended as a per-namespace administrator: bound in the default namespace, it can manage all resources there (create, read, update, delete), but holds no permissions outside that namespace.
#Create a rolebinding that binds the k8s user to the admin role in the default namespace
[root@k8smaster ~]# kubectl create rolebinding default-ns-admin --clusterrole=admin --user=k8s
rolebinding.rbac.authorization.k8s.io/default-ns-admin created
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
#It can read
[root@k8smaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 1 21h
[root@k8smaster ~]# cd /data/configmap/
#It can create
[root@k8smaster configmap]# kubectl apply -f .
pod/pod-cm created
pod/pod-se created
[root@k8smaster configmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-cm 1/1 Running 0 90s
pod-sa-demo 1/1 Running 1 21h
pod-se 1/1 Running 0 90s
#It can delete
[root@k8smaster configmap]# kubectl delete -f .
pod "pod-cm" deleted
pod "pod-se" deleted
[root@k8smaster configmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-sa-demo 1/1 Running 1 21h
#But it has no permissions in other namespaces
[root@k8smaster ~]# kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
The cluster-admin role:
cluster-admin is the cluster administrator role; it carries administrative rights over the entire cluster.
[root@k8smaster configmap]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster configmap]# kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-10-28T10:51:21Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "96"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 78362692-0520-436c-bb3d-a5a5c6e0a8bd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
Reference: https://blog.csdn.net/IT8421/article/details/89389609