64616424

Date: 2020-10-30 22:19:21
Score: 6
Natty:

I have the same issue now: one of my pods is stuck in the Pending state. Here is my pod description. I don't know which service is using the port, or how to find that out. Please help.

Name:           prom-prometheus-node-exporter-qw47k
Namespace:      metrics
Priority:       0
Node:           <none>
Labels:         app=prometheus-node-exporter
                chart=prometheus-node-exporter-1.10.0
                controller-revision-hash=7644fc664f
                heritage=Helm
                jobLabel=node-exporter
                pod-template-generation=1
                release=prom
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/prom-prometheus-node-exporter
Containers:
  node-exporter:
    Image:      quay.io/prometheus/node-exporter:v1.0.0
    Port:       9100/TCP
    Host Port:  9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
      --web.listen-address=$(HOST_IP):9100
      --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
      --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
    Liveness:   http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOST_IP:  0.0.0.0
    Mounts:
      /host/proc from proc (ro)
      /host/sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from prom-prometheus-node-exporter-token-dtjss (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  prom-prometheus-node-exporter-token-dtjss:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prom-prometheus-node-exporter-token-dtjss
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     :NoSchedule op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason             Age                 From                Message
  ----     ------             ----                ----                -------
  Normal   NotTriggerScaleUp  104s                cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match node selector
  Warning  FailedScheduling   18s (x6 over 107s)  default-scheduler   0/3 nodes are available: 1 Insufficient pods, 2 node(s) didn't have free ports for the requested pod ports.
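The FailedScheduling message points at the hostPort: the node-exporter container requests Host Port 9100, so the pod cannot schedule onto any node where 9100 is already taken. A minimal sketch of how to find what currently holds the port (assumes `kubectl` access to the cluster and `jq` installed; the port number 9100 is taken from the description above):

```shell
# List pods in any namespace that declare hostPort 9100 -- often a leftover
# node-exporter pod from an earlier release occupying the port.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
           | select([.spec.containers[].ports[]?.hostPort] | index(9100))
           | "\(.metadata.namespace)/\(.metadata.name)"'

# Alternatively, on the node itself (e.g. via SSH), ask which process is
# bound to TCP port 9100.
sudo ss -tlnp 'sport = :9100'
```

If the port is held by a stale pod, deleting that pod (or the old release that owns it) should let the scheduler place this one.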
Reasons:
  • Blacklisted phrase (1): i have the same issue
  • RegEx Blacklisted phrase (3): please help
  • Long answer (-1):
  • Has code block (-0.5):
  • Me too answer (2): I have the same issue
  • Unregistered user (0.5):
  • Low reputation (1):
Posted by: Yash