79579307

Date: 2025-04-17 13:07:34
Score: 1
Natty:
Report link

Finally I managed to solve the issue. To cut to the chase: the culprit was the network firewall.

Now let me explain what happened. The issue lay in the communication between the Kube API server and the worker nodes. Only kubectl exec, logs, and port-forward did not work; every other kubectl command worked fine. The solution was hidden in how these commands are actually executed.
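
For illustration, this was the split I saw (my-pod is just a placeholder name):

    # These failed, because they need a tunnel to the node:
    kubectl exec -it my-pod -- sh
    kubectl logs my-pod
    kubectl port-forward my-pod 8080:80

    # These worked fine, because they only talk to the API server:
    kubectl get pods
    kubectl describe pod my-pod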

Unlike other kubectl commands, exec, logs, top, and port-forward work in a slightly different way: they require the API server to open a streaming connection to the kubelet on the worker node, so a TCP tunnel has to be established. In GKE that tunnel is built by Konnectivity agents, which are deployed on all worker nodes. Each agent dials out to the control plane over TCP port 8132, so egress to port 8132 must be allowed in the firewall rules.
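
For anyone hitting the same thing, here is a minimal sketch of the egress rule I had to add, using gcloud (the rule name, network, node tag, and control-plane CIDR below are placeholders; substitute your own values):

    gcloud compute firewall-rules create allow-konnectivity-egress \
        --network=NETWORK_NAME \
        --direction=EGRESS \
        --action=ALLOW \
        --rules=tcp:8132 \
        --destination-ranges=CONTROL_PLANE_CIDR \
        --target-tags=NODE_TAG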

So in my case this port was missing from the rules; hence all the Konnectivity agent pods were down, meaning no tunnel could be established, which is exactly what the error message No agent available was telling me.
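
You can verify this state even while the tunnel is broken, because plain get commands only talk to the API server (the k8s-app=konnectivity-agent label is what my cluster used; it may vary by GKE version):

    kubectl get pods -n kube-system -l k8s-app=konnectivity-agent

If these pods are not Running, no tunnel exists and the commands above will keep failing with No agent available.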

Reference - https://cloud.google.com/kubernetes-engine/docs/troubleshooting/kubectl#konnectivity_proxy

Reasons:
  • Blacklisted phrase (1): did not work
  • Long answer (-1):
  • Has code block (-0.5):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: Utshab Saha