79666822

Date: 2025-06-15 19:50:25

With no explain plan and no info on the data distribution of your date, detail and status columns, it's difficult to say (for date: could you give an approximate number of rows over the 3-month period, so that we know what percentage they represent of the total 65 million rows?),
but in decreasing order of probability I would say:

Speed up your date range clause
Remove the BETWEEN

First of all: can you have future dates in loginaudit? If not, instead of Cast_as(date) BETWEEN [3 months ago] AND [today], just use Cast_as(date) >= [3 months ago]: you'll save a comparison for each row (BETWEEN is date >= [start] AND date <= [today], so if your data is always <= [today], do not make your RDBMS check for it).
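
For instance, a minimal sketch (assuming your current condition resembles the one below; the [today] placeholder stands for whatever upper-bound expression your query actually uses, which you did not show):

-- before: BETWEEN means two comparisons per row
WHERE builtin.Cast_as(loginaudit.date, 'TIMESTAMP_TZ_TRUNCED')
      BETWEEN builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE') AND [today]

-- after: a single comparison per row, same result if no future dates exist
WHERE builtin.Cast_as(loginaudit.date, 'TIMESTAMP_TZ_TRUNCED')
      >= builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE')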

Make good use of an index over the date

Then an index over date would allow the RDBMS to quickly limit the number of rows to inspect for further tests.
However, since each row's date has to be passed through Cast_as() to produce the value that gets compared, a plain index on the column would be useless (an index on a column is only used if you compare the column itself directly to some bounds).

You could improve your query by creating a function index:
CREATE INDEX la_date_trunc_ix ON loginaudit (builtin.Cast_as(loginaudit.date, 'TIMESTAMP_TZ_TRUNCED'));
But /!\ this will be more costly than a direct index on the column, and given the number of log entries you write per day, you perhaps do not want to slow down your inserts too much. You'll have to find a balance between a simple index on date (it will slow down writes too, but not as much) and a function index (it will slow down writes more, but be a bit smaller and quicker on reads).

But if we read it further: you're truncating the row's date to TIMESTAMP_TZ_TRUNCED (which truncates 2025-03-15 15:05:17.587282 to 2025-03-15 00:00:00.000000?),
to compare it to something described as DATETIME_AS_DATE (so probably 2025-03-15).
So, be it 2025-03-15 15:05:17.587282 or its truncation 2025-03-15 00:00:00.000000, both compare the same way against the 2025-03-15 bound:
builtin.Cast_as is useless; just rewrite your condition directly to loginaudit.date >= builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE').

… And of course do not forget to have a (simple!) index on loginaudit(date).
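
Put together, a minimal sketch of the reworked query (the index name is arbitrary and the SELECT list is elided):

CREATE INDEX la_date_ix ON loginaudit (date);

SELECT ...
FROM loginaudit
WHERE loginaudit.date >= builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE')
-- ... plus your status and detail conditions, unchanged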

(And then, depending on the ratio of old data compared to the last 3 months' data, maybe partitioning by month; but let's first try with correct index use.)
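
If you ever go that way, here is what it could look like in plain Oracle DDL (purely illustrative: the column list is assumed, and you may not control the DDL of loginaudit at all):

CREATE TABLE loginaudit_by_month (
  "date"  TIMESTAMP,       -- quoted because DATE is a reserved word in plain Oracle DDL
  status  VARCHAR2(64),
  detail  VARCHAR2(4000)
)
PARTITION BY RANGE ("date") (
  PARTITION p_2025_04 VALUES LESS THAN (TIMESTAMP '2025-05-01 00:00:00'),
  PARTITION p_2025_05 VALUES LESS THAN (TIMESTAMP '2025-06-01 00:00:00'),
  PARTITION p_2025_06 VALUES LESS THAN (TIMESTAMP '2025-07-01 00:00:00'),
  PARTITION p_future  VALUES LESS THAN (MAXVALUE)
);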

Verify that Relative_ranges is stable

I hope SuiteQL declares its builtin functions as stable; you should verify it.

If it is stable, the RDBMS can understand that builtin.Relative_ranges('mago2', 'END', 'DATETIME_AS_DATE') will output the same value for each row, so it can precompute it at the start of the query and treat it as a constant (thus allowing the index to be used).

If it is not stable, the RDBMS will prudently recompute it for each row (and will thus probably prioritize other indexes over this one, which would logically be the most selective).
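
If in doubt, a pragmatic workaround (a sketch, assuming you build the SuiteQL string in your own code) is to compute the boundary date once on the client side and inline it as a literal, so the per-row question never arises:

-- the bound is now a plain constant; the literal below is only an example value
WHERE loginaudit.date >= TO_DATE('2025-03-15', 'YYYY-MM-DD')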

Speed up your enumerations

This is probably less of a concern, depending on the proportion of your rows having the given values.
Moreover, I would expect the date index to be a big boost.

So I'll just give some leads here, without expanding; you can come back to them if all the efforts on the date aren't enough.

A simple enum

If your audit contains only 20 % of 'Success' compared to the other values, an index on status will be useful (particularly as a composite index with the Cast_as() function as first member, if you stayed with a function index).
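
For instance (index names are hypothetical, and status is assumed to be the column holding 'Success'):

-- with the simple date index recommended above:
CREATE INDEX la_date_status_ix ON loginaudit (date, status);
-- or, if you stayed with the function index:
CREATE INDEX la_datefn_status_ix ON loginaudit (builtin.Cast_as(loginaudit.date, 'TIMESTAMP_TZ_TRUNCED'), status);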

The NOT(detail IN (…)) OR detail IS NULL filter

This one is more complex. There too, judge the proportion of rows matching the condition: if 80 % of your rows match, there is no need to index.

Else:

First of all, rewriting NOT (detail IN (…)) to detail NOT IN (…) would make it clearer (and maybe more optimizable by the RDBMS? Not sure; an explain plan would tell).
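
For example (the listed values are placeholders for yours):

-- original form:
AND (NOT (detail IN ('some value', 'another value')) OR detail IS NULL)
-- equivalent, arguably clearer:
AND (detail NOT IN ('some value', 'another value') OR detail IS NULL)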

Then I would try to make it a positive IN: instead of excluding some detail values, list all possible other values.

And as you have a NULL to test too, which will prevent Oracle from using an index for it,
you would probably test with COALESCE(detail, '-') IN ('…', '…', '-') after having created a function index on COALESCE(detail, '-').
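
A minimal sketch (the '-' sentinel and the listed values are placeholders to adapt):

CREATE INDEX la_detail_coalesce_ix ON loginaudit (COALESCE(detail, '-'));

-- then filter with the exact same expression so the index can be used,
-- listing the detail values you want to KEEP, with '-' standing for NULL:
AND COALESCE(detail, '-') IN ('kept value 1', 'kept value 2', '-')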

Posted by: Guillaume Outters