79392187

Date: 2025-01-27 21:18:37
Score: 1
Natty:
Report link

Tried a few approaches, and I settled on

import numpy as np

indices = np.where(
    array.rolling(...)
    .mean()
    .notnull()
)

This handled the large array while using only a few GB of RAM when the array was on disk, and even less when the array was backed by dask. Credit goes to ThomasMGeo on the Pangeo forum. I suspect that calling .construct() doesn't actually use much memory; a .stack() call I had in an earlier version was the part consuming a lot of it.
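For a small, self-contained illustration of the idea (not the original xarray code): the pattern is to compute a rolling mean, where any window containing a missing value stays NaN, and then take np.where over the non-null result. A plain-numpy analogue using a sliding-window view, with hypothetical example data and window size:

    import numpy as np

    # Hypothetical example: a 1-D array with NaNs and a window of 3.
    data = np.array([np.nan, 1.0, 2.0, 3.0, np.nan, 5.0, 6.0, 7.0])
    window = 3

    # Rolling windows; a window containing NaN yields a NaN mean,
    # mimicking .rolling(...).mean() with its default min_periods.
    windows = np.lib.stride_tricks.sliding_window_view(data, window)
    rolling_mean = windows.mean(axis=1)

    # Indices (along the windowed axis) where the rolling mean is defined.
    indices = np.where(~np.isnan(rolling_mean))[0]

Note that sliding_window_view materializes no copy of the data, which is in the same spirit as xarray's .construct(): both create a windowed view rather than a stacked copy, which is likely why .stack() was the memory hog.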

Reasons:
  • Has code block (-0.5):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: ganzk