To answer the original question:

Can someone explain how (if) Hekaton handles the situation when a crash happens after commit and before the changes are persisted in the log?

From Durability for Memory-Optimized Tables

All changes made to disk-based tables or durable memory-optimized tables are captured in one or more transaction log records. When a transaction commits, SQL Server writes the log records associated with the transaction to disk before communicating to the application or user session that the transaction has committed.
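The log flush at commit is the familiar WRITELOG wait. A quick way to watch it for the current session, as a sketch assuming SQL Server 2016+ where sys.dm_exec_session_wait_stats is available:

-- cumulative waits for this session; WRITELOG is the synchronous log flush
select wait_type, waiting_tasks_count, wait_time_ms
from sys.dm_exec_session_wait_stats
where session_id = @@spid
  and wait_type = 'WRITELOG';

If the counters grow right after a commit, the session paid for a synchronous log flush before the commit was acknowledged.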

To answer the second question:

Could you please clarify whether writing to the log happens while the server changes data in memory, or whether it starts only on commit.

From SQL Server In-Memory OLTP Internals for SQL Server 2016

For In-Memory OLTP transactions, log records are generated only at commit time. In-Memory OLTP does not use a write-ahead logging (WAL) protocol, such as used when processing operations on disk-based tables. With WAL, SQL Server writes to the log before writing any changed data to disk, and this can happen even for uncommitted data written out during checkpoint. For In-Memory OLTP, dirty data is never written to disk. Furthermore, In-Memory OLTP groups multiple data changes into one log record to minimize the overhead both for the overall size of the log and reducing the number of writes to log buffer. Not using WAL is one of the factors that allows In-Memory OLTP commit processing to be extremely efficient.
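One way to see this in practice is the undocumented sys.fn_dblog function: In-Memory OLTP log records show up in its output with LOP_HK operations. A sketch (fn_dblog is undocumented, so its columns and behavior may change between versions):

-- log records of the current database; In-Memory OLTP ones are LOP_HK
select [Current LSN], Operation, [Transaction ID], [Log Record Length]
from sys.fn_dblog(null, null)
where Operation like '%HK%';

Running this while the transaction in the test below is still open should return nothing; after the commit, the LOP_HK records appear.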

A quick test to see how it actually works.

There must be a separate filegroup for memory-optimized tables, so we create one:

ALTER DATABASE mydb ADD FILEGROUP mydb_mem
    CONTAINS MEMORY_OPTIMIZED_DATA;

-- the MEMORY_OPTIMIZED_DATA "file" is actually a container (a folder on disk)
ALTER DATABASE mydb ADD FILE (
    name = 'mydb_mem_name',
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\DATA\mydb_mem_file'
) TO FILEGROUP mydb_mem;

The first surprise: after we create an empty table, SQL Server disk usage under mydb_mem_file grows by 0.9 GB!

create table t_mem
(id int primary key nonclustered, txt varchar(1000))
with (memory_optimized = on, durability = schema_and_data);
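The growth can also be measured from inside SQL Server. A sketch using sys.dm_db_xtp_checkpoint_files (assuming the SQL Server 2016+ column names):

-- total size of the checkpoint (data/delta) files, grouped by type
select file_type_desc,
       count(*) as files,
       sum(file_size_in_bytes) / 1048576. as size_mb
from sys.dm_db_xtp_checkpoint_files
group by file_type_desc;

This suggests the 0.9 GB is pre-allocated checkpoint file pairs rather than data.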

Now we populate the table without committing:

set statistics time on;

begin transaction;

-- generate 1,000,000 rows of roughly 1 KB each via a recursive CTE
with r(i, x) as
(
    select 1, replicate('x', 1000)
    union all
    select i + 1, x
    from r
    where i < 1e6
)
insert into t_mem
select *
from r
option (maxrecursion 0);  -- lift the default 100-level recursion limit

Disk usage indeed stays the same. Below are the elapsed times.

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 1 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 11828 ms,  elapsed time = 13337 ms.

(1000000 rows affected)
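At this point the uncommitted rows live entirely in memory. A sketch to confirm it, assuming sys.dm_db_xtp_table_memory_stats (available since SQL Server 2014):

-- memory consumed by the table, including rows written by the open transaction
select object_name(object_id) as table_name,
       memory_used_by_table_kb,
       memory_used_by_indexes_kb
from sys.dm_db_xtp_table_memory_stats
where object_id = object_id('t_mem');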

Finally, we commit, and disk usage grows by 1 GB.

set statistics time on;

commit;

Here is how long it took:

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 703 ms,  elapsed time = 2093 ms.

Summary

For memory-optimized tables, SQL Server generates the log records for all of a transaction's changes and flushes them at commit time. Hence the commit itself may take much longer than for disk-based tables, where log records are written incrementally while the transaction runs.
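As a contrast (not measured above), a non-durable table skips exactly this commit-time log flush. With DURABILITY = SCHEMA_ONLY nothing about the data is logged, so the same commit should return almost instantly, at the price of losing all rows on restart:

-- hypothetical table name, for illustration only: schema persists, data does not
create table t_mem_nondurable
(id int primary key nonclustered, txt varchar(1000))
with (memory_optimized = on, durability = schema_only);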
