I think user3666197's answer provided a lot of extremely useful technical context, so I will highlight some other points at a higher level. If you are looking for a general rule of thumb for whether numpy or native Python will be faster, it is this:
Numpy speeds up CPU-bound tasks, but performs slower on IO-bound tasks.
The context for this is that numpy does a lot of setup work every time it executes code; each numpy function call is equally equipped to perform an extremely complex computation on a 10-exabyte n-dimensional array on a supercomputer as it is to do a simple scalar addition on a Chromebook. So every numpy call carries a small fixed setup cost. user3666197's answer details that overhead. The other thing I would add is to consider whether your problem is more CPU-bound or more IO-bound: the more CPU-bound the problem, the more it will gain from using numpy.
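As a rough illustration of that per-call overhead (a quick `timeit` sketch, not a rigorous benchmark; exact numbers will vary by machine), you can compare pure Python and numpy on a tiny input versus a large one:

```python
import timeit
import numpy as np

# Tiny and large inputs, as both a Python list and a numpy array.
small = list(range(10))
large = list(range(1_000_000))
small_arr = np.array(small)
large_arr = np.array(large)

# On a tiny input, numpy's per-call setup cost dominates, so pure Python tends to win.
print("small, python:", timeit.timeit(lambda: sum(small), number=100_000))
print("small, numpy :", timeit.timeit(lambda: np.sum(small_arr), number=100_000))

# On a large input, the vectorized C loop dwarfs the setup cost, so numpy tends to win.
print("large, python:", timeit.timeit(lambda: sum(large), number=100))
print("large, numpy :", timeit.timeit(lambda: np.sum(large_arr), number=100))
```

On most machines the built-in `sum` should come out ahead on the 10-element case, while `np.sum` wins by a wide margin on the million-element case, which is exactly the trade-off described above.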
Travis Oliphant, the creator of numpy, addresses this regularly and basically comes back to the point that numpy will beat out other solutions on larger, more computationally intensive problems, while pure Python solutions are faster for smaller, more IO-bound problems. Here is Travis addressing your question directly in an interview from a few years ago:
https://youtu.be/gFEE3w7F0ww?si=mfTO-uJQRIZdMKoL&t=6080