79524221

Date: 2025-03-20 22:49:54
Score: 0.5
Natty:

The issue comes down to how MongoDB stores and indexes dates. Your ts field is stored as an ISODate (a proper BSON date), but your Java query is treating it as a numeric value (epoch milliseconds). This means the index on ts (which expects a Date type) is ignored, forcing MongoDB to do a COLLSCAN instead of an IXSCAN.
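
For context, the slow pattern presumably looks something like the sketch below (a hypothetical reconstruction using the same Spring Data Criteria API as the fix further down; your actual code may differ). The point is that the bounds reach MongoDB as int64 values rather than BSON Dates:

java

// Hypothetical reconstruction of the slow query: the bounds are epoch-millis longs,
// so MongoDB receives int64 values, not BSON Dates.
ZonedDateTime lowerBound = ZonedDateTime.now().minusDays(1);
ZonedDateTime upperBound = ZonedDateTime.now();

long lowerMillis = lowerBound.toInstant().toEpochMilli();   // e.g. 1733852133000
long upperMillis = upperBound.toInstant().toEpochMilli();

var slowQuery = Query.query(new Criteria().andOperator(
        Criteria.where("ts").gte(lowerMillis),   // int64 compared against a Date-typed field
        Criteria.where("ts").lt(upperMillis)));

var result = mongoTemplate.find(slowQuery, Events.class);   // per the question, this is the variant that ends up as a COLLSCAN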

Why is this happening?

  1. Your Java query converts the timestamps with toEpochMilli(), which yields a Long value (e.g., 1733852133000).

  2. MongoDB’s index on ts is built on BSON Date objects, not raw numbers.

  3. When you query with a Long instead of a Date, MongoDB sees a type mismatch and ignores the index, defaulting to a full collection scan (you can confirm this with explain(), as sketched below).
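
If you want to verify this, you can inspect the query plan from Java. A minimal sketch, assuming MongoDB Java driver 4.2+ (where FindIterable.explain() is available), a collection named "events", and example bounds covering the last 24 hours; adjust names and ranges to your setup:

java

import java.time.Duration;
import java.time.Instant;
import java.util.Date;
import org.bson.Document;
import com.mongodb.client.model.Filters;

// Example bounds for the check: the last 24 hours.
Date upperDate = Date.from(Instant.now());
Date lowerDate = Date.from(Instant.now().minus(Duration.ofDays(1)));

// Ask the server how it plans the range query. With Date bounds the winning plan
// should include an IXSCAN on the ts index; with raw epoch-millis longs it falls
// back to a COLLSCAN, as described above.
Document plan = mongoTemplate.getCollection("events")
        .find(Filters.and(Filters.gte("ts", lowerDate), Filters.lt("ts", upperDate)))
        .explain();

System.out.println(plan.toJson());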

The Fix: Use Date Instead of Long

You need to ensure that your Java query uses Date objects instead of epoch milliseconds. Here’s the correct way to do it:

java

ZonedDateTime lowerBound = ...;
ZonedDateTime upperBound = ...;

// Convert to java.util.Date, which the driver encodes as a BSON Date
Date lowerDate = Date.from(lowerBound.toInstant());
Date upperDate = Date.from(upperBound.toInstant());

// Range query on ts using Date bounds, so the ts index can be used
var query = Query.query(new Criteria().andOperator(
        Criteria.where("ts").gte(lowerDate),
        Criteria.where("ts").lt(upperDate)));

var result = mongoTemplate.find(query, Events.class);
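
A possible alternative, depending on your Spring Data MongoDB version (this is an assumption on my part, not something from your question): the default JSR-310 converters also map java.time.Instant to a BSON Date, so you could pass the Instant directly and skip the java.util.Date step:

java

// Sketch assuming the default Jsr310 converters are registered, which translate
// java.time.Instant into a BSON Date when the query is mapped.
var query = Query.query(new Criteria().andOperator(
        Criteria.where("ts").gte(lowerBound.toInstant()),
        Criteria.where("ts").lt(upperBound.toInstant())));

var result = mongoTemplate.find(query, Events.class);

Either way, what matters is that the value MongoDB receives has the BSON Date type.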

Why Does This Work?

Spring Data (via the MongoDB driver) encodes java.util.Date as a BSON Date, so the query bounds now have the same BSON type as the indexed ts values. With matching types, the range predicate can be answered from the index (IXSCAN) instead of scanning every document (COLLSCAN).

TL;DR

Convert ZonedDateTime to Date before querying. Using raw epoch milliseconds (long) prevents the index from being used, leading to slow queries.

Reasons:
  • RegEx Blacklisted phrase (0.5): Why is this
  • Long answer (-1):
  • Has code block (-0.5):
  • Contains question mark (0.5):
  • Low reputation (1):
Posted by: Perez Christopher