Pod with hostNetwork set to true does not pick up the IAM role passed via annotation and instead uses the host (EC2) IAM role. #266

@prudhvigodithi

Description

Hey, I have installed kube2iam in a kops cluster that uses Weave for networking. This works perfectly fine when the test pods run without hostNetwork set to true: the pod gets an access-denied error for anything outside its role, and curling the metadata endpoint returns the IAM role passed in the pod annotation.
But when the pod runs with hostNetwork set to true, it bypasses the IAM role annotation and gets the access of the host EC2 IAM role, so it can query any AWS service the EC2 role allows. Long story short: with hostNetwork set to true, the pod inherits the host's IAM permissions instead of the restricted IAM role passed in the annotation. Is this the expected behavior? Does hostNetwork: true prevent kube2iam from using the pod's IAM role annotation?

This also opens up a security concern when pods use the host's Docker engine. Even if a pod itself runs without hostNetwork: true (and so correctly uses the IAM role from its annotation), if that pod has access to the host Docker engine and starts a container with --net=host, the new container bypasses the annotation passed to the pod and inherits the EC2 IAM role and its permissions.
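For context on why this might happen: kube2iam intercepts metadata requests by installing a DNAT rule on each node that only matches traffic arriving on the pod-network interface (weave here, per the --host-interface=weave and --app-port=8181 args in the DaemonSet below). A sketch of the rule it installs (interface name and port taken from this setup; the exact rule may differ by version):

iptables -t nat -A PREROUTING -d 169.254.169.254/32 -i weave -p tcp --dport 80 -j DNAT --to-destination $HOST_IP:8181

A hostNetwork pod (or a --net=host container) sends its metadata request straight out of the host's own network stack, so the request never traverses this PREROUTING rule and reaches the real EC2 metadata service directly, which would explain the behavior described above.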

kubectl exec -it jdk12 bash ----> this pod does NOT run with hostNetwork: true
Once we exec inside the pod, we run a container on the host network:
apache@jdk12:/home/apache$ docker run -d -it --net=host fstab/aws-cli:latest
Once it runs with --net=host inside the pod, the container straight away inherits the host IAM role:
apache@jdk12:/home/apache$ docker exec -it d13dc46a83a7 bash -l
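To confirm which credentials a container actually sees, you can query the standard EC2 metadata credentials endpoint from inside it (this path is served by both kube2iam and the real metadata service, nothing specific to this setup):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

Without hostNetwork this should return the annotated role (Test-Limited here); with hostNetwork: true or --net=host it returns the node's EC2 instance role instead.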

Pod YAML: ---> straight away inherits the host IAM role when started with hostNetwork: true; even without hostNetwork, following the steps above still gives access to the host's IAM permissions.

When run with hostNetwork: true, the pod does not make use of iam.amazonaws.com/role: Test-Limited and gets the IAM role of the host, but it works fine when hostNetwork: true is commented out.

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: Test-Limited
spec:
  hostNetwork: true
  containers:
  - image: fstab/aws-cli:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 9200000; done;" ]
    name: aws-cli
    env:
    - name: AWS_DEFAULT_REGION
      value: us-east-1

kube2iam DS yaml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  generation: 2
  labels:
    app.kubernetes.io/instance: kube2
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kube2iam
    helm.sh/chart: kube2iam-2.1.0
  name: kube2-kube2iam
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: kube2
      app.kubernetes.io/name: kube2iam
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kube2
        app.kubernetes.io/name: kube2iam
    spec:
      containers:
      - args:
        - --host-interface=weave
        - --node=$(NODE_NAME)
        - --auto-discover-base-arn
        - --auto-discover-default-role=true
        - --use-regional-sts-endpoint
        - --host-ip=$(HOST_IP)
        - --iptables=true
        - --verbose
        - --debug
        - --app-port=8181
        - --metrics-port=8181
        env:
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: HTTPS_PROXY
          value: http://test-proxy.us-east-1.aws:4438
        - name: HTTP_PROXY
          value: http://test-proxy.us-east-1.aws:4438
        - name: NO_PROXY
          value: .dist.kope.io,ec2.us-east-1.amazonaws.com,.s3.amazonaws.com,127.0.0.1,localhost,.k8s.local,.elb.amazonaws.com,100.96.9.0/11,.ec2.internal,api.qacd.k8s.local,api.internal.,internal.,.elb.us-east-1.amazonaws.com,elasticloadbalancing.us-east-1.amazonaws.com,autoscaling.us-east-1.amazonaws.com,178.28.0.1
        - name: http_proxy
          value: http://test-proxy.us-east-1.aws:4438
        - name: https_proxy
          value: http://test-proxy.us-east-1.aws:4438
        - name: no_proxy
          value: .dist.kope.io,ec2.us-east-1.amazonaws.com,.s3.amazonaws.com,127.0.0.1,localhost,.k8s.local,.elb.amazonaws.com,100.96.9.0/11,.ec2.internal,api.qacd.k8s.local,api.internal.,internal.,.elb.us-east-1.amazonaws.com,elasticloadbalancing.us-east-1.amazonaws.com,autoscaling.us-east-1.amazonaws.com,178.28.0.1
        image: jtblin/kube2iam:0.10.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8181
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        name: kube2iam
        ports:
        - containerPort: 8181
          hostPort: 8181
          name: http
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube2-kube2iam
      serviceAccountName: kube2-kube2iam
      terminationGracePeriodSeconds: 30
  templateGeneration: 2
  updateStrategy:
    type: OnDelete
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberAvailable: 2
  numberMisscheduled: 0
  numberReady: 2
  observedGeneration: 2
  updatedNumberScheduled: 2

Does this relate to #184?
Even there, the sample deployment ran with the hostNetwork: true option.