Execute commands on Kubernetes pods with root access

I have a running pod named "jenkins-app-2843651954-4zqdp". I want to temporarily install some software in this pod. How can I do that?

I am trying this - kubectl exec -it jenkins-app-2843651954-4zqdp -- /bin/bash and then running apt-get install commands, but because the user I am logged in as does not have sudo access, I cannot run the commands.

  • Use kubectl describe pod ... to find the node running your Pod and the container ID (docker://...)
  • SSH into the node
  • Run docker exec -it -u root ID /bin/bash
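
Put together, the flow might look roughly like this; the SSH user, node hostname, container ID, and package name are placeholders, and the last two steps assume the node runs the Docker runtime and the image is Debian-based (the pod name is the one from the question):

    # on your workstation: find the node and the docker:// container ID
    kubectl describe pod jenkins-app-2843651954-4zqdp | grep -E 'Node:|Container ID'

    # SSH into that node
    ssh <user>@<node-hostname>

    # on the node: open a root shell in the container and install what you need
    docker exec -it -u root <container-id> /bin/bash
    apt-get update && apt-get install -y <package>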

There are some plugins for kubectl that may help you achieve this: https://github.com/jordanwilson230/kubectl-plugins

One of the plugins, called 'ssh', will allow you to exec as the root user by running (for example) kubectl ssh -u root -p nginx-0

On the node where the pod is running:

  • docker container ls to find the container ID
  • docker exec -it -u root ID /bin/bash

In my case, I needed root access (or sudo) in the container to chown a specific mount path.

I could not SSH into the machine, because I designed my infrastructure to be fully automated with Terraform, without any manual access.

Instead, I found that initContainers does the job:

    initContainers:
      - name: volume-prewarming
        image: busybox
        command: ["sh", "-c", "chown -R 1000:0 {{ .Values.persistence.mountPath }}"]
        volumeMounts:
          - name: {{ .Chart.Name }}
            mountPath: {{ .Values.persistence.mountPath }}
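
To check that the init container actually fixed the ownership, something along these lines should work; the pod name and mount path are placeholders, while the container name matches the snippet above:

    kubectl logs <pod-name> -c volume-prewarming
    kubectl exec <pod-name> -- ls -ln <mount-path>   # entries should now be owned by 1000:0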

I've also created a whole course about running production-grade Kubernetes on AWS using EKS.

Building on @jordanwilson230's answer: he also developed a bash script called exec-as, which uses Docker-in-Docker to accomplish this: https://github.com/jordanwilson230/kubectl-plugins/blob/krew/kubectl-exec-as

When installed via the kubectl plugin manager krew (kubectl krew install exec-as), you can simply run

kubectl exec-as -u <username> <podname> -- /bin/bash

This only works in Kubernetes clusters which allow privileged containers.

We can exec into a Kubernetes pod with the following command.

kubectl exec --stdin --tty pod-name -n namespace-name -- /bin/bash
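
For the pod from the question that would look something like this (the namespace is an assumption; drop -n if the pod lives in the default namespace):

    kubectl exec --stdin --tty jenkins-app-2843651954-4zqdp -n jenkins -- /bin/bash

Note that this opens the shell as the image's default user, not root, which is why the question needs one of the root-capable approaches in this thread.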

In case anyone is working on AKS, follow these steps:

  • Identify the pod that is running the container
  • Identify the node that is running that pod (kubectl describe pod -n <namespace> <pod_name> | grep "Node:", or look for it on the Azure portal)
  • SSH into the AKS cluster node

Once you are inside a node, perform these commands to get into the container:

  • sudo su (you must get root access to use docker commands)
  • docker exec -it -u root ID /bin/bash (to get the container id, use docker container ps)
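
Put together, the sequence might look roughly like this; the namespace, node name, and container ID are placeholders, and the SSH user is commonly azureuser on AKS nodes but may differ in your cluster:

    kubectl describe pod -n <namespace> <pod_name> | grep "Node:"
    ssh azureuser@<node-name>
    sudo su
    docker container ps | grep <pod_name>
    docker exec -it -u root <container-id> /bin/bash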

Just in case you came here looking for an answer for minikube: the minikube ssh command can actually work together with the docker command here, which makes it fairly easy:

  1. Find the container ID:

    $ minikube ssh docker container ls
    
  2. Add the -u 0 option to the docker command (the quotes are necessary for the whole docker command):

    $ minikube ssh "docker container exec -it -u 0 <Container ID> /bin/bash"
    

NOTE: this is NOT for Kubernetes in general, it works for minikube only. But since we need root access quite a lot in a local development environment, it's worth mentioning in this thread.

To log in as a different user I use the exec-as plugin in Kubernetes. Here are the steps you can follow.

Make sure git is installed

Step 1: Install the krew plugin (the snippet below is for the fish shell)

    begin
      set -x; set temp_dir (mktemp -d); cd "$temp_dir" &&
      set OS (uname | tr '[:upper:]' '[:lower:]') &&
      set ARCH (uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/') &&
      set KREW krew-$OS"_"$ARCH &&
      curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/$KREW.tar.gz" &&
      tar zxvf $KREW.tar.gz &&
      ./$KREW install krew &&
      set -e KREW; set -e temp_dir
    end

Step 2: Install exec-as

kubectl krew install exec-as

Step 3: Try with root or a different user

kubectl exec-as -u root frontend-deployment-977b8fd4c-tb5pz

WARNING: You installed plugin "prompt" from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk.

That's all well and good, but what about newer versions of Kubernetes that use containerd? Using nerdctl exec -u root -ti 817d52766254 sh there is no full-fledged root; part of the system stays read-only.

Working with Kubernetes 1.21, none of the docker and kubectl-plugin approaches worked for me, since my cluster uses cri-o as the container runtime.

What did work for me was using runc:

  • get containerID via

kubectl get pod <podname> -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///'

  • containerID is something like

4ed493495241b061414b94425bb03b682534241cf19776f8809aeb131fa5a515

  • get the node the pod is running on

kubectl describe pod <podname> | grep Node:
Node:         mynode.cluster.cloud.local/1.1.148.63
  • ssh into node

  • on node, run (might have to use sudo):

runc exec -t -u 0 containerID sh

so something like:

runc exec -t -u 0 4ed493495241b061414b94425bb03b682534241cf19776f8809aeb131fa5a515 sh

Adding to the answer from henning-jay, for the case where containerd is used as the runtime.

get containerID via

kubectl get pod <podname> -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's,.*//,,'

containerID will be something like 7e328fc6ac5932fef37f8d771fd80fc1a3ddf3ab8793b917fafba317faf1c697

lookup the node for pod

kubectl get pod <podname> -o wide

on the node, invoke runc - since it's invoked by containerd, the --root has to be changed:

runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 <containerID> sh

None of the answers above worked for me on a semi-modern k8s 1.22 in GKE using containerd, not docker.

So my solution is not a pure cli one, but it is one that works in a terminal.

I use k9s, the top-like tool for k8s, to:

  1. go to a pod,
  2. find the right container within it and then hit the s button.


In the k8s Deployment configuration, you can set the container to run as root:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      template:
        spec:
          containers:
            - image: my-image
              name: my-app
              ...
              securityContext:
                allowPrivilegeEscalation: false
                runAsUser: 0

Notice the runAsUser: 0 property. Then connect to the pod/container as usual and you will be authenticated as root from the beginning.
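
A quick way to verify it took effect, assuming the manifest above is saved as deployment.yaml and your kubectl is recent enough to accept the deploy/<name> target for exec:

    kubectl apply -f deployment.yaml
    kubectl exec -it deploy/my-app -- id   # should print uid=0(root)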

If you're using a modern Kubernetes version it's likely running containerd instead of docker for its container runtime.

To exec as root you must have SSH access and SUDO access to the node on which the container is running.

  1. Get the container id of the pod:

    kubectl get pod cassandra-0 -n cassandra -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///'
    8e1f2e5907087b5fd55d98849fef640ca73a5ca04db2e9fc0b7d1497ff87aed9

  2. Use runc to exec as root:

    sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 8e1f2e5907087b5fd55d98849fef640ca73a5ca04db2e9fc0b7d1497ff87aed9 sh