Finally I managed to solve the issue. To cut to the chase: the culprit was the network firewall.
Now let me explain what happened. The issue lay in the communication between the Kube API server and the worker nodes. Only `kubectl exec`, `logs`, and `port-forward` failed; all other `kubectl` commands worked fine. The solution was hidden in how these commands are actually executed.
In contrast to other `kubectl` commands, `exec`, `logs`, `top`, and `port-forward` work in a slightly different way: they need direct communication between the `kubectl` client and the worker nodes, which requires a TCP tunnel to be established. That tunnel is established via Konnectivity agents, which are deployed on all worker nodes. Each agent opens a connection to the kube API server on TCP port `8132`, so port `8132` must be allowed by an egress firewall rule.
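As a quick sanity check, you can look at the state of the Konnectivity agent pods. A minimal sketch, assuming the GKE default that they run in `kube-system` with the label `k8s-app=konnectivity-agent` (note that `kubectl logs` itself won't work while the tunnel is broken, but `get` and `describe` only talk to the API server):

```bash
# List the Konnectivity agent pods (label assumed from GKE defaults).
kubectl get pods -n kube-system -l k8s-app=konnectivity-agent

# Events come from the API server, so describe still works even
# though the tunnel (and hence kubectl logs/exec) is down.
kubectl describe pods -n kube-system -l k8s-app=konnectivity-agent
```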
In my case this port was missing from the rules, so all the Konnectivity agent pods were down. No tunnel could be established, which is exactly what the error message `No agent available` was saying.
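For completeness, this is roughly the egress rule that fixes it. A minimal `gcloud` sketch; `NETWORK_NAME` and `CONTROL_PLANE_CIDR` are placeholders for your VPC network and the control plane's IP range:

```bash
# Allow egress from the nodes to the control plane on TCP 8132,
# the port the Konnectivity agents use to reach the kube API server.
# NETWORK_NAME and CONTROL_PLANE_CIDR must be replaced with your values.
gcloud compute firewall-rules create allow-konnectivity-egress \
    --network=NETWORK_NAME \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:8132 \
    --destination-ranges=CONTROL_PLANE_CIDR
```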
Reference - https://cloud.google.com/kubernetes-engine/docs/troubleshooting/kubectl#konnectivity_proxy