The issue comes down to how MongoDB stores and indexes dates. Your `ts` field is stored as an ISODate (a proper BSON date), but your Java query is treating it as a numeric value (epoch milliseconds). Because of that type mismatch, the index on `ts` (which holds `Date` values) is ignored, forcing MongoDB to do a COLLSCAN instead of an IXSCAN.
- Your Java query converts timestamps with `toEpochMilli()`, which produces a `Long` value (e.g., `1733852133000`).
- MongoDB's index on `ts` is built over BSON `Date` objects, not raw numbers.
- When you query with a `Long` instead of a `Date`, MongoDB sees a type mismatch and ignores the index, defaulting to a full collection scan.
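To see the distinction concretely, here is a standalone sketch (plain `java.time`, no MongoDB involved) showing the two representations of the same instant: the driver encodes a `long` as a BSON int64 and a `java.util.Date` as a BSON date.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.Date;

public class TypeMismatchDemo {
    public static void main(String[] args) {
        ZonedDateTime bound = ZonedDateTime.of(2025, 1, 1, 1, 0, 0, 0, ZoneOffset.UTC);

        // What the broken query sends: a plain 64-bit integer.
        long epochMillis = bound.toInstant().toEpochMilli();

        // What the indexed field contains: a java.util.Date,
        // which the driver serializes as a BSON date.
        Date asDate = Date.from(bound.toInstant());

        // Same instant in time, but different BSON types on the wire,
        // so { ts: { $gte: <long> } } never matches indexed Date values.
        System.out.println(epochMillis);      // 1735693200000
        System.out.println(asDate.getTime()); // 1735693200000
    }
}
```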
**Use `Date` Instead of `Long`**

You need to ensure that your Java query uses `Date` objects instead of epoch milliseconds. Here's the correct way to do it:
```java
ZonedDateTime lowerBound = ...;
ZonedDateTime upperBound = ...;

Date lowerDate = Date.from(lowerBound.toInstant());
Date upperDate = Date.from(upperBound.toInstant());

var query = Query.query(new Criteria().andOperator(
        Criteria.where("ts").gte(lowerDate),
        Criteria.where("ts").lt(upperDate)
));

var result = mongoTemplate.find(query, Events.class);
```
`Date.from(lowerBound.toInstant())` ensures that you're passing a proper `Date` object that MongoDB's index can recognize.
The MongoDB query now correctly translates to:
```json
{
  "ts": {
    "$gte": ISODate("2025-01-01T01:00:00Z"),
    "$lt": ISODate("2025-01-02T01:00:00Z")
  }
}
```
instead of:
```json
{
  "ts": {
    "$gte": 1733852133000,
    "$lt": 1733853933000
  }
}
```
This allows MongoDB to use the index properly, resulting in an IXSCAN instead of a COLLSCAN. You can confirm it by running the query with `.explain("executionStats")` and checking the winning plan.
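As a quick sanity check (a standalone sketch using only `java.time`, with the bounds from the example above), `Date.from()` preserves the exact instant, so only the BSON type changes, not the values:

```java
import java.time.Instant;
import java.util.Date;

public class BoundsCheck {
    public static void main(String[] args) {
        // The bounds from the example query above.
        Instant lower = Instant.parse("2025-01-01T01:00:00Z");
        Instant upper = Instant.parse("2025-01-02T01:00:00Z");

        Date lowerDate = Date.from(lower);
        Date upperDate = Date.from(upper);

        // The conversion is lossless at millisecond precision:
        // the Date wraps the same epoch-millis the Instant holds.
        System.out.println(lowerDate.toInstant().equals(lower));       // true
        System.out.println(upperDate.getTime() - lowerDate.getTime()); // 86400000 (24 hours)
    }
}
```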
**Bottom line:** convert `ZonedDateTime` to `Date` before querying. Using raw epoch milliseconds (`long`) prevents the index from being used, leading to slow queries.