Actually, the same is true of other job schedulers as well. I have run both SLURM and LSF clusters during my career, and both suffer badly under repeated polling. SLURM suffers worse; around LSF 7.0, LSF's design was changed to split the scheduler from the master batch daemon, which made it considerably more resilient to abuse from tight polling loops, but it is still vulnerable to the same problem.
HPC systems are like F1 racing cars. They are designed for performance, and need to be driven correctly to get the best performance out of them. F1 car designers assume the driver is skilled and knows what they're doing. HPC systems are the same; the designers of these schedulers assume that the HPC users are skilled and will use them correctly.
Repeated polling just makes the scheduler do work for no benefit. If your job is going to take several hours to run, polling every 5 minutes makes no meaningful difference to your experience compared to polling every 5 seconds, but it is far kinder to the scheduler and to your fellow users.
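If you do need to wait on a job programmatically, a gentler pattern is exponential backoff: poll quickly at first, then stretch the interval out to some cap. Here is a minimal sketch in Python; `make_stub` is a hypothetical stand-in for a real status check, which in practice would shell out to something like `squeue`/`sacct` on SLURM or `bjobs` on LSF.

```python
import time

def make_stub(finish_after):
    """Hypothetical status check for illustration only; a real one
    would query the scheduler (e.g. squeue/sacct or bjobs)."""
    state = {"checks": 0}
    def job_finished(job_id):
        state["checks"] += 1
        return state["checks"] >= finish_after
    return job_finished

def wait_for_job(job_id, job_finished, initial=5.0, cap=300.0):
    """Poll with exponential backoff: start at `initial` seconds
    between checks, double each time, never exceeding `cap` seconds.
    Returns the list of delays actually slept, for illustration."""
    delays = []
    delay = initial
    while not job_finished(job_id):
        time.sleep(delay)
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

# Demo with tiny delays so it runs instantly; for a multi-hour job,
# something like initial=30, cap=600 would be far more neighbourly.
print(wait_for_job(12345, make_stub(3), initial=0.01, cap=0.05))
```

The exact numbers are assumptions; the point is that the interval grows with the wait, so long jobs generate only a handful of queries instead of thousands.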
If your jobs run so quickly that polling every few seconds seems necessary, then your workflow probably has bigger problems. Very small jobs are usually very inefficient: the total time to your results is dominated by scheduler overhead and queueing time rather than by actual workload execution. In such cases it makes sense to batch up your workload so that each job runs a large number of tasks in succession, letting execution time dominate the overall runtime.
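The batching idea can be sketched as a single job script that loops over many inputs inside one allocation, rather than submitting one tiny job per input. This assumes a SLURM cluster (hence the `#SBATCH` directives, which are just comments elsewhere), and `process_one` is a hypothetical stand-in for whatever the real per-task command is:

```shell
#!/bin/bash
#SBATCH --job-name=batched-tasks
#SBATCH --time=04:00:00
set -euo pipefail

# Hypothetical per-task command; the pattern is what matters:
# one allocation, many tasks run back to back.
process_one() { tr 'a-z' 'A-Z' < "$1"; }

mkdir -p inputs results
printf 'alpha\n' > inputs/task_001.dat   # demo inputs so the sketch runs
printf 'beta\n'  > inputs/task_002.dat

for input in inputs/task_*.dat; do
    process_one "$input" > "results/$(basename "$input" .dat).out"
done
```

One submission, one slot in the queue, and the scheduler sees a single multi-hour job instead of a flood of second-long ones.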