Hi Nassar and Qadar,

Thank you so much for your response. Please find my scenario and YAML file below, and please help me.

What you have given is for a pod, but I am looking at Gatekeeper deployed through Helm and Flux CD. Please find the attached Helm YAML file. You will notice that lines 18 and 19 need the host network disabled. In production, the current hostNetwork setting for the Helm release is true, but we need to set it to false. This is the expected task.

What I need is

  • I need to deploy this Helm release in my lab environment, but I am not sure how. Please help. (I am clear about a single pod, but not about a Helm release with Gatekeeper and Flux CD.)
  • Then, how do we set the host network to false for a kind "HelmRelease"?
  • I need to know how to test after making the change, and what the impact is.
  • Unfortunately, I do not have the steps used to deploy it in production; otherwise I would have followed the same steps in my lab environment. Please help.

    I really appreciate any help you can provide.

    277264-gatekeeper-helm-release.txt

    hello @sns

You can change the value of hostNetwork in the chart's values.yaml file once you pull it from this link:

    1- helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
    2- helm pull gatekeeper/gatekeeper
    3- tar -zxvf gatekeeper-3.11.0.tgz

You can change the value of hostNetwork to false in values.yaml as below:
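A minimal sketch of the relevant values.yaml fragment; the key path matches the `--set controllerManager.hostNetwork` flag used below, but any surrounding values are assumptions, not the full chart defaults:

```yaml
# values.yaml (gatekeeper chart) - only the relevant key shown
controllerManager:
  # Run the controller-manager pods on the cluster pod network
  # instead of the node's network namespace.
  hostNetwork: false
```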

Or you can add the --set flag to the helm command:

    helm install -n gatekeeper-system [RELEASE_NAME] gatekeeper/gatekeeper --set controllerManager.hostNetwork=false

You can check the Helm chart that you are using and follow the same steps.

I advise you to use the Gatekeeper extension with AKS, as it is supported; check this document:

Also, to use Flux v2, check this document.

    Hi Ammar,

    Thank you so much for your response.

In my case, Gatekeeper is already installed and running in my lab environment, but with the default host network setting, which is false.

    Please find the highlighted portions in the attached image.

  • I am not able to open the tar file to edit it. (It is a Mac Pro M1 machine.)
  • I also tried through the command, but it reports an upgrade issue; I am not sure if upgrade can be used.
  • Regarding testing, my understanding is that we should be able to reach the service cluster IP before and after the host network change, right?
  • Sorry for the mistake; the correct version is 3.7.0.

    hello @sns

Can you try to add the --install flag as below:

    helm upgrade --install -n gatekeeper-system 3.7.0 gatekeeper/gatekeeper --set controllerManager.hostNetwork=true

    If the ANSWER is helpful, please click "Accept Answer" and upvote it.

    Thank you!

    hello @sns

    I hope you are doing fine.

hostNetwork is a pod-level setting; I would not set it unless you know you need it.

https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/use-the-host-network is an example of how it gets set.

For example, many CNIs use a Kubernetes DaemonSet to install themselves. But if that DaemonSet does not set hostNetwork: true, deploying it would require the CNI to allocate IPs for its pods; the CNI isn't installed yet, because installing it is what the DaemonSet is supposed to do. hostNetwork: true is one way out of that.
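A hedged sketch of that bootstrap pattern: a CNI-style DaemonSet that opts into the node's network namespace so it needs no pod IP. The names and image are illustrative, not taken from any real CNI:

```yaml
# Illustrative only: a CNI-style DaemonSet that cannot depend on
# pod networking, so it joins the node's network namespace.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-cni-node        # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-cni-node
  template:
    metadata:
      labels:
        app: example-cni-node
    spec:
      hostNetwork: true                   # use the node's IP; no pod IP needed
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS while on host network
      containers:
        - name: install-cni
          image: registry.example.com/cni-installer:latest  # hypothetical image
```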

It presents other issues, though: once you start manually assigning IP:port pairs to pods, you tend to have a higher chance of conflicts, need to be careful about remapping ports manually, and so on; these are things the normal Kubernetes networking model is supposed to solve.

Once you change hostNetwork from true to false, the pods will be assigned normal IPs from the pod CIDR range.

    example:

A deployment that uses hostNetwork set to true, as below:

The pod that was created took the same IP as the node:

Once you remove the hostNetwork flag, it will take an IP from the pod IP range.

Once you exec into the pod and test, it will work with the new pod IP or with the service name/IP.

    I suggest testing it on a test cluster.
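A sketch of that before/after test, assuming kubectl access to the cluster; the busybox test pod is an illustration, and the webhook service name is the usual one from the gatekeeper chart:

```shell
# Before and after the change, note the pod IPs: with hostNetwork: true
# they match the node IPs; with hostNetwork: false they come from the pod CIDR.
kubectl get pods -n gatekeeper-system -o wide
kubectl get nodes -o wide

# From a throwaway client pod, confirm the webhook service still answers.
# (busybox wget is used here only as an example reachability probe.)
kubectl run nettest --rm -it --restart=Never --image=busybox -- \
  wget -qO- --no-check-certificate \
  https://gatekeeper-webhook-service.gatekeeper-system.svc:443 || true
```

If a resource that violates one of your constraints is still denied after the change, the webhook path is working end to end.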

    I hope this can help you

    Looking forward to your feedback,

    Best Regards,

    Hi @sns ,

If hostNetwork is set to true, the pod uses the node's network namespace and network resources instead of the regular isolation. Networking for the containers behaves as if the process were running directly on the node, so the pod can access any service listening on the node's localhost, bind to the node's addresses, and monitor the traffic of other pods on the same node.

    Use cases:

  • kube-proxy, which configures iptables in the node's network namespace.
  • Applications that need to run packet capture (pcap) on the node.
  • I would not set hostNetwork to true (the default is false) unless you know you need it.
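As a sketch of the packet-capture use case, a pod that shares the node's network namespace so it can see the node's interfaces; the name, image, and args are illustrative:

```yaml
# Illustrative: a debug pod that needs to see the node's interfaces.
apiVersion: v1
kind: Pod
metadata:
  name: node-pcap                              # hypothetical name
spec:
  hostNetwork: true                            # share the node's network namespace
  containers:
    - name: tcpdump
      image: registry.example.com/tcpdump:latest  # hypothetical image
      args: ["-i", "any", "-w", "/tmp/node.pcap"]
      securityContext:
        capabilities:
          add: ["NET_RAW", "NET_ADMIN"]        # needed for raw packet capture
```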

    Hope this helps.

    Please "Accept as Answer" if it helped, so that it can help others in the community looking for help on similar topics.


It worked, thank you so much. Below are my follow-up questions. Please help.

  • Out of 5 entries, only 3 got updated. How do we identify the other 2 objects and change them? (I gave you the output of kubectl get all.)
  • I want to test before and after the host network change. How do I test? Can you provide the steps?
  • Thanks again.

    hello @sns

Once you schedule pods with hostNetwork: true, make sure the number of pod replicas is no greater than the number of nodes; otherwise you will get an error like the one below:

    Warning FailedScheduling 3m43s default-scheduler 0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
    Warning FailedScheduling 3m36s (x1 over 3m42s) default-scheduler 0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.

Each hostNetwork pod needs to open its ports on the node itself, so only one such pod per node can bind a given port.
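Since hostNetwork pods are effectively capped at one per node for a given port, one way to sketch matching the replica count to the node count (assumes kubectl and helm access; the release name 3.7.0 follows the commands used earlier in this thread):

```shell
# Count schedulable nodes, then cap the gatekeeper replicas at that number.
NODES=$(kubectl get nodes --no-headers | wc -l)
echo "nodes: ${NODES}"

helm upgrade --install -n gatekeeper-system 3.7.0 gatekeeper/gatekeeper \
  --set controllerManager.hostNetwork=true \
  --set replicas="${NODES}"
```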

With hostNetwork=false this is not needed, as the pods take their IPs from the pod subnet range.

As you are using the service name, you will not notice a change, per the explanation of how host networking works in the previous comments.

By default the deployment uses 3 replicas, and the old pods should be deleted. Can you check the logs and describe those pods?

    kubectl describe pods -n gatekeeper-system gatekeeper-controller-manager-xxx

    Here are the logs:
    277541-logs.txt

The above logs are the output of kubectl logs gatekeeper-controller-manager-7cc5cd6b8f-jrqnq -n gatekeeper-system.

"I am okay to change the number of pods to 2 for the controller manager deployment; how do we change it?"

    hello @sns

Use this helm command to change the replicas to 2:

    helm upgrade --install -n gatekeeper-system 3.7.0 gatekeeper/gatekeeper --set controllerManager.hostNetwork=true --set replicas=2

    Hope this helps. Please "Accept as Answer" if it helped, so that it can help others in the community looking for help on similar topics.