Sometimes we want our hashes to be very evenly distributed, and using a random mapping can help achieve this.
Say we want a fast lookup for some objects that have IDs between 1 and 100 (not necessarily 100 objects!). We can do this by putting them into buckets, let's say 10 of them: IDs 1 to 10 go in bucket 1, 11 to 20 in bucket 2, 21 to 30 in bucket 3, and so on. Now if we want to retrieve ID 70 we know to look in bucket 7, a much smaller search space.
We have used the IDs as our hash, but what if the IDs are not evenly distributed in this range? Say consecutive IDs are more common and only IDs 31 to 50 appear. Then we will have 2 full buckets and 8 empty ones; the search would be quicker if all ten buckets held 2 objects each, since searching through a bucket of 2 objects is faster than searching a full one.
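To make this concrete, here is a minimal Python sketch of the range-based bucketing described above. The `bucket_for` helper and the skewed sample input are mine, purely for illustration; with only IDs 31 to 50 present, every object lands in buckets 4 and 5.

```python
def bucket_for(object_id: int) -> int:
    """Map an ID in 1..100 to a bucket number in 1..10 by its range."""
    return (object_id - 1) // 10 + 1

buckets = {b: [] for b in range(1, 11)}

# Skewed input: only the consecutive IDs 31..50 appear.
for object_id in range(31, 51):
    buckets[bucket_for(object_id)].append(object_id)

print(bucket_for(70))                      # 7 -- ID 70 would live in bucket 7
print([len(buckets[b]) for b in buckets])  # [0, 0, 0, 10, 10, 0, 0, 0, 0, 0]
```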
The solution is to shuffle the IDs around in a random way, so that no bucket is favoured over any other.
We might have ID 1 become 34, ID 34 become 60, ID 60 become 14, and so on, until every ID is uniquely mapped to a new ID. This means that whoever creates a hash only has to worry about uniqueness, not the even distribution of values.
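One way to sketch this remapping in Python is to build a random permutation of the IDs up front and reuse the same bucket layout as before (the names here are illustrative; any method of generating a one-to-one shuffle would do):

```python
import random
from collections import Counter

ids = list(range(1, 101))
shuffled = ids[:]
random.shuffle(shuffled)

# Every original ID maps to exactly one new ID, and no two IDs collide.
remap = dict(zip(ids, shuffled))

def bucket_for(object_id: int) -> int:
    """Hash = remapped ID, then the same range-based bucketing as before."""
    return (remap[object_id] - 1) // 10 + 1

# Even the skewed IDs 31..50 now spread roughly evenly across the 10 buckets.
print(Counter(bucket_for(i) for i in range(31, 51)))
```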
The hashing function must not change for the lifespan of the data structure that uses it. It may be generated differently each time a data structure is created, but it must not change while the structure contains data, otherwise existing objects could no longer be found in the buckets they were placed in.
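As a rough illustration of that rule (the class name and interface are assumptions of mine, not from the text), here is a structure that generates its random permutation once in the constructor and treats it as fixed for as long as it holds data:

```python
import random

class BucketedLookup:
    """Sketch: the permutation (our 'hash') is generated once, at construction
    time, and never changes while the structure holds data."""

    def __init__(self, id_range: int = 100, n_buckets: int = 10, seed=None):
        rng = random.Random(seed)
        shuffled = list(range(1, id_range + 1))
        rng.shuffle(shuffled)
        self._remap = dict(zip(range(1, id_range + 1), shuffled))  # fixed for the lifetime
        self._buckets = [[] for _ in range(n_buckets)]
        self._n_buckets = n_buckets
        self._id_range = id_range

    def _bucket(self, object_id: int) -> list:
        # Remapped ID in 1..id_range, scaled down to a bucket index in 0..n_buckets-1.
        return self._buckets[(self._remap[object_id] - 1) * self._n_buckets // self._id_range]

    def add(self, object_id: int, value) -> None:
        self._bucket(object_id).append((object_id, value))

    def get(self, object_id: int):
        for stored_id, value in self._bucket(object_id):
            if stored_id == object_id:
                return value
        raise KeyError(object_id)

table = BucketedLookup(seed=42)
for i in range(31, 51):
    table.add(i, f"object {i}")
print(table.get(42))   # "object 42"
```

Generating the permutation from a seed at creation time is fine; what matters is that `_remap` is never rebuilt while the buckets are populated.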