I think for a Delta table you need to give a starting version or a starting timestamp, so that the stream does not re-read all versions every time:
spark.readStream
  .option("startingTimestamp", "2018-10-18")
  .table("user_events")

spark.readStream
  .option("startingVersion", "5")
  .table("user_events")
In addition to that, setting the option skipChangeCommits to true should help fix your issue, since it makes the stream ignore commits that update or delete existing records instead of failing on them.
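Putting it together, a minimal PySpark sketch (assuming your table is named "user_events" and version 5 is a reasonable starting point for your case) could look like this:

```python
# Sketch: read a Delta table as a stream, starting from a known version,
# and skip commits that only update or delete existing rows.
# Requires a Spark session with Delta Lake support (e.g. a Databricks cluster).
events = (
    spark.readStream
        .option("startingVersion", "5")        # or .option("startingTimestamp", "2018-10-18")
        .option("skipChangeCommits", "true")   # ignore update/delete commits
        .table("user_events")
)
```

Note that skipChangeCommits silently drops those change commits from the stream, so only use it if downstream consumers don't need the updated/deleted rows.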
https://docs.databricks.com/aws/en/structured-streaming/delta-lake#specify-initial-position
https://docs.databricks.com/aws/en/structured-streaming/delta-lake#ignore-updates-and-deletes