79575791

Date: 2025-04-15 18:16:02
Score: 2.5
Natty:
Report link

fixed ---

I changed it to a while loop and reset the data index after every full pass of the inner loop. The code now does what it needs to do :D

LENGTH = len(dfrawdata)  # number of data rows

bindex = 0
dindex = 0
while bindex <= 2:
    while dindex <= LENGTH - 1:
        # keep the payload if the value falls strictly inside the bin
        if dfbins.loc[bindex][1] < dfrawdata.loc[dindex][0] < dfbins.loc[bindex][3]:
            emptyarray[bindex].append(dfrawdata.loc[dindex][1])
        dindex += 1
    dindex = 0  # rewind the data index for the next bin
    bindex += 1

The block above only covers the first three bins (bindex 0 through 2) because of the while bindex <= 2 condition.

I can raise that limit and it will process the whole data set.

But it's still very slow.

For a problem where len(bins) = 450 and len(rawdata) = 160000, it needs optimisation.

So the question is: how can I write this same logic more elegantly (pythonically)?
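One pandas-idiomatic approach is to replace the double loop with `pd.cut` plus a groupby, which bins all rows in a single vectorised pass. This is a sketch under assumptions inferred from the loop above: `dfbins` holds lower edges in column 1 and upper edges in column 3, and `dfrawdata` holds the value to bin in column 0 and the payload to collect in column 1. The stand-in data below is hypothetical, only there to make the example runnable.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical stand-ins for dfbins / dfrawdata (shapes assumed from the loop):
lower = np.arange(0.0, 4.0)   # 4 bins: (0,1), (1,2), (2,3), (3,4)
upper = lower + 1.0
dfbins = pd.DataFrame({1: lower, 3: upper})
dfrawdata = pd.DataFrame({0: rng.uniform(0, 4, 1000),
                          1: rng.normal(size=1000)})

# closed="neither" mirrors the strict < comparisons in the loop.
intervals = pd.IntervalIndex.from_arrays(dfbins[1], dfbins[3], closed="neither")

# pd.cut assigns every row to its bin in one pass; groupby then collects
# the payload column per bin, replacing emptyarray[bindex].append(...).
binned = pd.cut(dfrawdata[0], intervals)
result = dfrawdata[1].groupby(binned, observed=False).apply(list)
```

`result` is a Series of lists indexed by interval, one entry per bin. Because the binning is done in C inside pandas rather than row by row in Python, this should scale far better to 450 bins and 160000 rows than the nested while loops.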

Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Ends in question mark (2):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: datalung