79823443

Date: 2025-11-18 14:07:02
Score: 0.5
Natty:
Report link

I am using the KubernetesExecutor to run PythonOperators in Kubernetes Pods. I was able to override the namespace of each Pod dynamically in the pod_mutation_hook function in airflow_local_settings.py. However, the scheduler would stop queuing new task instances as soon as the AIRFLOW__CORE__PARALLELISM limit was reached, even though all Pods were in status Completed. The scheduler would still report DEBUG - 120 running task instances for executor KubernetesExecutor. I assume this is because the KubeJobWatcher only watches a single namespace. I don't know if the KubeJobWatcher can be configured to watch multiple namespaces.
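
For reference, a minimal sketch of the namespace override described above, assuming Airflow 2.x where pod_mutation_hook receives a kubernetes.client.models.V1Pod; the "team" label and the derived namespace name are hypothetical and only for illustration:

    # airflow_local_settings.py (must be on the scheduler's PYTHONPATH, e.g. $AIRFLOW_HOME/config)
    from kubernetes.client import models as k8s

    def pod_mutation_hook(pod: k8s.V1Pod) -> None:
        """Mutate every worker Pod before it is submitted to the Kubernetes API."""
        # Hypothetical example: route Pods to a namespace derived from a task label.
        labels = pod.metadata.labels or {}
        team = labels.get("team", "default")
        pod.metadata.namespace = f"airflow-{team}"

If the executor's watcher only tracks its configured namespace, Pods created in other namespaces may never be observed as completed, which would match the symptom of the executor counting them as still running; whether this can be addressed with a multi-namespace setting (e.g. the [kubernetes] multi_namespace_mode option in newer releases) depends on the Airflow version in use.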

Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Lorenz Feineis