kubectl Krew Plugin Usage Guide

Preface: In some situations plain kubectl is inefficient or simply lacks the functionality we need, and the community created the krew plugin project to address exactly these gaps. It also helps that most engineers still prefer the terminal: it is the most efficient way to work, and troubleshooting there is more direct.

1. Installing krew from inside mainland China

For well-known network reasons, installing krew from inside mainland China can be difficult, so here we work around it with a mirror domain plus an offline installation.

$ set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.91chi.fun/https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  curl -fsSLO "https://github.91chi.fun/https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.yaml" &&
  ./"${KREW}" install --manifest=krew.yaml --archive="${KREW}.tar.gz" && ./"${KREW}" update
$ echo 'export PATH="${PATH}:${HOME}/.krew/bin"' >> ~/.bashrc
$ source ~/.bashrc
## Note: for any plugin that cannot be downloaded from inside mainland China, we can fetch the archive and the yaml manifest separately and install them with --manifest and --archive, as sketched below
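
As a concrete example of that note, here is a minimal sketch of the offline workflow for the ns plugin. The manifest comes from the official krew-index repository; the release version and archive file name below are assumptions for illustration, so copy the real download address out of the uri field of the yaml you fetch (prefix the mirror domain if direct downloads fail).

## Fetch the plugin manifest from the krew-index repository
$ curl -fsSLO "https://raw.githubusercontent.com/kubernetes-sigs/krew-index/master/plugins/ns.yaml"
## Look up the archive URL for your platform in the manifest
$ grep "uri:" ns.yaml
## Download that archive (version and file name here are illustrative; use the uri from ns.yaml)
$ curl -fsSLo ns.tar.gz "https://github.com/ahmetb/kubectx/releases/download/v0.9.4/kubens_v0.9.4_linux_x86_64.tar.gz"
## Install fully offline
$ kubectl krew install --manifest=ns.yaml --archive=ns.tar.gz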

2. Install a few popular plugins first, e.g. ns, mtail, get-all

$ kubectl-krew install ns
$ kubectl-krew install get-all
$ kubectl-krew install mtail
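
If the installs succeeded, krew itself can confirm it, and its search subcommand lets you browse the index by keyword:

## List the locally installed plugins
$ kubectl krew list
## Search the plugin index, e.g. for log-related plugins
$ kubectl krew search log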

3. Using the krew ns plugin

## Use ns to list all namespaces (and show which one is currently active)
$ kubectl-ns
default
flink
kube-node-lease
kube-public
kube-system
monitoring
## Use ns to switch namespaces
$ kubectl-ns kube-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "kube-system".
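
Under the hood the ns plugin is ahmetb's kubens, so it inherits a couple of handy shortcuts; assuming your installed version matches the upstream tool, these should work:

## Print only the currently active namespace
$ kubectl ns -c
## Switch back to the previously active namespace
$ kubectl ns -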

4. Using the krew mtail plugin

## Having switched to the kube-system namespace above, kubectl get po now lists that namespace's pods directly
$ kubectl get po
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7f4f5bf95d-569h2   1/1     Running   5          113d
calico-node-9h885                          1/1     Running   3          113d
calico-node-r4wck                          1/1     Running   3          113d
calico-node-zlsfx                          1/1     Running   3          113d
coredns-74ff55c5b-h9h4k                    1/1     Running   3          113d
coredns-74ff55c5b-qshsv                    1/1     Running   3          113d
etcd-node1                                 1/1     Running   3          113d
etcd-node2                                 1/1     Running   3          113d
etcd-node3                                 1/1     Running   3          113d
kube-apiserver-node1                       1/1     Running   5          113d
kube-apiserver-node2                       1/1     Running   3          113d
kube-apiserver-node3                       1/1     Running   3          113d
kube-controller-manager-node1              1/1     Running   15         113d
kube-controller-manager-node2              1/1     Running   16         113d
kube-controller-manager-node3              1/1     Running   12         113d
kube-proxy-5lpm9                           1/1     Running   3          113d
kube-proxy-kqrs4                           1/1     Running   3          113d
kube-proxy-ptkvz                           1/1     Running   3          113d
kube-scheduler-node1                       1/1     Running   11         113d
kube-scheduler-node2                       1/1     Running   13         113d
kube-scheduler-node3                       1/1     Running   15         113d
## Now use mtail to follow multiple pods that share a label, all at once
$ kubectl-mtail component=etcd
+ kubectl logs --follow etcd-node1 '' --tail=10
+ kubectl logs --follow etcd-node2 '' --tail=10
+ kubectl logs --follow etcd-node3 '' --tail=10
[etcd-node2] 2022-08-06 15:38:41.034312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node2] 2022-08-06 15:38:51.034530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node2] 2022-08-06 15:39:01.034467 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node1] 2022-08-06 15:38:38.515923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node1] 2022-08-06 15:38:48.515273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node1] 2022-08-06 15:38:58.513805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node3] 2022-08-06 15:38:36.850658 I | mvcc: finished scheduled compaction at 47047460 (took 39.029057ms)
[etcd-node3] 2022-08-06 15:38:45.623930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcd-node3] 2022-08-06 15:38:55.623279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
## The output confirms that all etcd pods are being followed simultaneously; when debugging many pods at once this is a big efficiency gain
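
For comparison, a reasonably recent kubectl can approximate this natively with a label selector; what mtail adds is fanning out one kubectl logs process per pod, as the trace above shows. A rough native equivalent, assuming your kubectl supports --prefix:

## Follow all pods matching the label, prefixing each line with its pod name
$ kubectl logs -f -l component=etcd --tail=10 --prefix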

5. Using the krew get-all plugin

## Why does get-all exist? Because kubectl get all does not actually list every resource object
## Note in particular: on clusters with very many objects this command is resource-hungry, and by default it prints every object, so use it with care
$ kubectl-get_all
NAME                                                                                                               NAMESPACE                  AGE
componentstatus/controller-manager                                                                                                            <unknown>  
componentstatus/scheduler                                                                                                                     <unknown>  
componentstatus/etcd-0                                                                                                                        <unknown>  
configmap/webhook-configmap                                                                                        default                    11d        
configmap/coredns                                                                                                  kube-system                113d       
configmap/grafana-dashboards                                                                                       monitoring                 100d       
endpoints/flink-operator-controller-manager-metrics-service                                                        default                    11d        
endpoints/kube-dns                                                                                                 kube-system                113d       
endpoints/thanos-store                                                                                             monitoring                 74d        
namespace/default                                                                                                                             113d       
namespace/kube-public                                                                                                                         113d       
namespace/kube-system                                                                                                                         113d       
namespace/monitoring                                                                                                                          100d       
node/node1                                                                                                                                    113d       
node/node2                                                                                                                                    113d       
node/node3                                                                                                                                    113d       
persistentvolumeclaim/data-prometheus-k8s-1                                                                        monitoring                 73d        
persistentvolume/pvc-ecf5e60b-2fd0-42db-984a-4c24c49e7dd8                                                                                     54d        
pod/nfs-subdir-external-provisioner-7bbf9b47dd-89t8z                                                               default                    112d       
pod/calico-node-zlsfx                                                                                              kube-system                113d       
secret/endpointslicemirroring-controller-token-c88rd                                                               kube-system                113d       
serviceaccount/default                                                                                             default                    113d       
service/clickhouse-ck-cluster-x                                                                                    default                    74d        
service/thanos-store                                                                                               monitoring                 74d        
mutatingwebhookconfiguration.admissionregistration.k8s.io/flink-operator-mutating-webhook-configuration                                       11d        
validatingwebhookconfiguration.admissionregistration.k8s.io/flink-operator-validating-webhook-configuration                                   11d        
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com                                                       100d       
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com                                                             100d    
……
## The output above shows that every resource in the cluster gets listed, both namespaced and cluster-scoped
## Use -n to restrict the query to a single namespace
$ kubectl get-all -n kube-system
NAME                                                                                     NAMESPACE    AGE
configmap/calico-config                                                                  kube-system  113d  
configmap/coredns                                                                        kube-system  113d   
endpoints/kube-dns                                                                       kube-system  113d  
endpoints/kubelet                                                                        kube-system  113d  
pod/calico-kube-controllers-7f4f5bf95d-569h2                                             kube-system  113d   
pod/calico-node-r4wck                                                                    kube-system  113d  
……
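
Conceptually, get-all just discovers every listable resource type through the API and then gets each one; a hand-rolled sketch of the same idea with plain kubectl (slow, and subject to the same load caveat as above):

## Enumerate every listable resource type, then get each one in kube-system
$ kubectl api-resources --verbs=list -o name \
    | xargs -n1 kubectl get --show-kind --ignore-not-found -n kube-system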

6. Installing node-shell

$ kubectl krew index add kvaps https://github.com/kvaps/krew-index
$ kubectl krew install kvaps/node-shell
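
node-shell lives in kvaps' personal index rather than the default krew index, which is why the index is added first; you can verify the extra index was registered:

## List the registered plugin indexes (the default one plus kvaps)
$ kubectl krew index list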

7. Using node-shell to connect to a node

$  kubectl get no -o wide
NAME    STATUS   ROLES                  AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready    control-plane,master   113d   v1.20.11   172.16.104.70   <none>        Ubuntu 20.04.2 LTS   5.4.0-109-generic   containerd://1.5.5
node2   Ready    control-plane,master   113d   v1.20.11   172.16.104.62   <none>        Ubuntu 20.04.2 LTS   5.4.0-109-generic   containerd://1.5.5
node3   Ready    control-plane,master   113d   v1.20.11   172.16.104.64   <none>        Ubuntu 20.04.2 LTS   5.4.0-109-generic   containerd://1.5.5
## Open a root shell on node2 with the plugin, then inspect its routes
$ kubectl node-shell node2
$ ip r | grep 172.16
default via 172.16.104.1 dev enp1s0 proto dhcp src 172.16.104.62 metric 100
100.66.209.192/26 via 172.16.104.70 dev tunl0 proto bird onlink
100.74.135.0/26 via 172.16.104.64 dev tunl0 proto bird onlink
172.16.104.0/22 dev enp1s0 proto kernel scope link src 172.16.104.62
## The src address 172.16.104.62 in the routes confirms we really are on the node2 server
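
Besides an interactive shell, node-shell can run a one-off command on the node; the upstream README documents passing the command after --, so assuming that behavior in your version:

## Run a single command on node2 without keeping a shell open
$ kubectl node-shell node2 -- uname -a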

8. The plugins above make a good case that krew deserves a place in the toolbox. Browse the plugin list on the official site for the ones you need; beyond the plugins covered here there are many more worth exploring.

krew plugin list: https://krew.sigs.k8s.io/plugins/
