I am going to answer this with a simple, easy-to-follow analogy. I had to do some research before answering, and simplifying things always helps me, so let's first introduce the cache terminology:
Cache: a cache is like a special reading desk where the most frequently used books are kept. Instead of walking back to the shelves every time, the computer can grab the book it needs straight from the desk.
Now how does this relate to your question?
An ArrayList stores its data like books lined up in a row on the reading desk. When the computer reads one book, the next one is already sitting right beside it, which makes reading fast and efficient. That is why sequential access is faster.
A LinkedList is like a set of books scattered around the library, where each book contains a note telling you where the next one is. Every time the computer needs the next book, it has to walk over to another shelf.
Now you might want a more "technical" answer, so here it is:
In ArrayList:
The elements (more precisely, the references to them) live in one contiguous block of memory, so accessing one element pulls its neighbors into the cache as part of the same cache line. Iteration is therefore much faster, because the CPU fetches several elements' worth of data in one go.
In LinkedList:
The nodes are allocated individually and can end up scattered across memory, so accessing one node gives no guarantee that the next one is nearby. Every step involves a pointer dereference to a possibly distant address, which leads to far more cache misses.
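If you want to see the effect yourself, here is a minimal sketch of my own (not a rigorous benchmark; for real measurements use JMH, and keep in mind both lists hold references to boxed `Integer` objects, so the gap is smaller than it would be with a primitive array):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class IterationDemo {

    // Rough, illustrative timing only; a proper benchmark would use JMH
    // to account for JIT warm-up and garbage collection.
    static long sumAll(List<Integer> list) {
        long start = System.nanoTime();
        long sum = 0;
        for (int value : list) {   // sequential access: this is where cache locality matters
            sum += value;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%-12s sum=%d, time=%d ms%n",
                list.getClass().getSimpleName(), sum, elapsedMs);
        return sum;
    }

    public static void main(String[] args) {
        final int n = 5_000_000;
        List<Integer> arrayList = new ArrayList<>(n);
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < n; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        // ArrayList iterates over a contiguous backing array (cache-friendly);
        // LinkedList follows node pointers that may be scattered across the heap.
        sumAll(arrayList);
        sumAll(linkedList);
    }
}
```

On a typical desktop JVM the ArrayList pass finishes noticeably faster, and the difference grows with the list size; the exact numbers depend on your hardware and JVM.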