Yes.
First, some tables for testing.
IF OBJECT_ID('FIFO_BUYING') IS NOT NULL DROP TABLE FIFO_BUYING;
IF OBJECT_ID('FIFO_SELLING') IS NOT NULL DROP TABLE FIFO_SELLING;
CREATE TABLE FIFO_BUYING (
ID INT IDENTITY(1,1) PRIMARY KEY,
[DATE] DATETIME,
UNIT_PRICE DECIMAL(18,2),
QUANTITY DECIMAL(18,2)
);
CREATE TABLE FIFO_SELLING (
ID INT IDENTITY(1,1) PRIMARY KEY,
[DATE] DATETIME,
UNIT_PRICE DECIMAL(18,2),
QUANTITY DECIMAL(18,2)
);
INSERT INTO FIFO_BUYING ([DATE], [UNIT_PRICE], [QUANTITY]) VALUES
('2023-01-01 16:01', 100.0, 2.0),
('2023-01-08 13:40', 105.0, 4.0),
('2023-01-08 14:11', 102.0, 2.0),
('2023-01-12 14:00', 101.0, 7.0),
('2023-01-14 17:55', 103.0, 5.0);
INSERT INTO FIFO_SELLING ([DATE], [UNIT_PRICE], [QUANTITY]) VALUES
('2023-01-02 10:21', 150.0, 1.0),
('2023-01-08 13:45', 140.0, 3.0),
('2023-01-10 17:30', 145.0, 3.0),
('2023-01-10 17:55', 130.0, 5.0),
('2023-01-15 12:15', 135.0, 6.0);
Before we join the tables, we need cumulative quantities. They let us pair the right rows with each other, and eliminating unnecessary matches is the key to performance. There are many ways to compute them, but it's simple with "SUM(...) OVER (ORDER BY ...)".
The join does all the work, so I need to explain it a little further.
WITH OrderedBuy AS (
    SELECT
        *,
        SUM(QUANTITY) OVER (ORDER BY [DATE]) AS CUMULATIVE_QUANTITY
    FROM FIFO_BUYING
),
OrderedSell AS (
    SELECT
        *,
        SUM(QUANTITY) OVER (ORDER BY [DATE]) AS CUMULATIVE_QUANTITY
    FROM FIFO_SELLING
)
SELECT *
FROM OrderedBuy B
JOIN OrderedSell S
ON (B.CUMULATIVE_QUANTITY - B.QUANTITY) < S.CUMULATIVE_QUANTITY
AND (S.CUMULATIVE_QUANTITY - S.QUANTITY) < B.CUMULATIVE_QUANTITY
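With the sample data, the running totals come out like this (quantity → CUMULATIVE_QUANTITY, in date order):

Buys: 2 → 2, 4 → 6, 2 → 8, 7 → 15, 5 → 20
Sells: 1 → 1, 3 → 4, 3 → 7, 5 → 12, 6 → 18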
First, what does CUMULATIVE_QUANTITY - QUANTITY mean?
That gives us the starting point of that row’s block. So:
B.CUMULATIVE_QUANTITY - B.QUANTITY → Where this buy batch starts
B.CUMULATIVE_QUANTITY → Where it ends
Same idea for selling:
S.CUMULATIVE_QUANTITY - S.QUANTITY → Sell batch starts
S.CUMULATIVE_QUANTITY → Sell batch ends
Now the join becomes: pair every buy batch with every sell batch whose ranges overlap. That's what these two conditions check:

(B.CUMULATIVE_QUANTITY - B.QUANTITY) < S.CUMULATIVE_QUANTITY → the buy batch starts before the sell batch ends
(S.CUMULATIVE_QUANTITY - S.QUANTITY) < B.CUMULATIVE_QUANTITY → the sell batch starts before the buy batch ends

Which is classic interval overlap logic. You've probably used it before with date ranges or scheduling overlaps. Just replace "date ranges" with "quantity ranges". That's all.
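For instance, with the sample data, the second sell (3 units, cumulative quantity 4) covers units 1 to 4. That range overlaps the first buy batch (units 0 to 2) and the second one (units 2 to 6), so the join matches it with both.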
Note that having the cumulative quantities precalculated, instead of computing them on the fly, will improve performance dramatically.
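For example, here is a minimal sketch of persisting the running total in an indexed column (the column and index names are just placeholders, and you would also have to maintain the column as new rows arrive; the queries in this answer keep computing it inline for simplicity):

ALTER TABLE FIFO_BUYING ADD CUMULATIVE_QUANTITY DECIMAL(18,2);

-- one-time backfill of the persisted running total
UPDATE B
SET CUMULATIVE_QUANTITY = T.CUM
FROM FIFO_BUYING B
JOIN (
    SELECT ID, SUM(QUANTITY) OVER (ORDER BY [DATE]) AS CUM
    FROM FIFO_BUYING
) T ON T.ID = B.ID;

-- index the range boundary so the interval join can seek instead of scan
CREATE INDEX IX_FIFO_BUYING_CUM
    ON FIFO_BUYING (CUMULATIVE_QUANTITY) INCLUDE ([DATE], UNIT_PRICE, QUANTITY);

(Same idea for FIFO_SELLING.)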
And the final query looks like this:
WITH OrderedBuy AS (
    SELECT
        *,
        SUM(QUANTITY) OVER (ORDER BY [DATE]) AS CUMULATIVE_QUANTITY
    FROM FIFO_BUYING
),
OrderedSell AS (
    SELECT
        *,
        SUM(QUANTITY) OVER (ORDER BY [DATE]) AS CUMULATIVE_QUANTITY
    FROM FIFO_SELLING
)
SELECT
B.[DATE] AS BuyDate,
B.UNIT_PRICE AS BuyPrice,
B.QUANTITY AS BuyQty,
B.CUMULATIVE_QUANTITY AS BuyCumQty,
S.[DATE] AS SellDate,
S.UNIT_PRICE AS SellPrice,
S.QUANTITY AS SellQty,
S.CUMULATIVE_QUANTITY AS SellCumQty,
GREATEST(0,
LEAST(B.CUMULATIVE_QUANTITY, S.CUMULATIVE_QUANTITY)
- GREATEST(B.CUMULATIVE_QUANTITY - B.QUANTITY, S.CUMULATIVE_QUANTITY - S.QUANTITY)
) AS ProcessQty,
ROUND((S.UNIT_PRICE - B.UNIT_PRICE) *
GREATEST(0,
LEAST(B.CUMULATIVE_QUANTITY, S.CUMULATIVE_QUANTITY)
- GREATEST(B.CUMULATIVE_QUANTITY - B.QUANTITY, S.CUMULATIVE_QUANTITY - S.QUANTITY)
), 2) AS Profit
FROM OrderedBuy B
JOIN OrderedSell S
ON (B.CUMULATIVE_QUANTITY - B.QUANTITY) < S.CUMULATIVE_QUANTITY
AND (S.CUMULATIVE_QUANTITY - S.QUANTITY) < B.CUMULATIVE_QUANTITY
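One caveat: GREATEST and LEAST only exist from SQL Server 2022 onwards. On older versions the same ProcessQty can be written with CASE expressions (a sketch; the GREATEST(0, ...) clamp is not strictly needed there, because the join condition already guarantees a positive overlap as long as quantities are positive):

(CASE WHEN B.CUMULATIVE_QUANTITY < S.CUMULATIVE_QUANTITY
      THEN B.CUMULATIVE_QUANTITY
      ELSE S.CUMULATIVE_QUANTITY END)
-
(CASE WHEN B.CUMULATIVE_QUANTITY - B.QUANTITY > S.CUMULATIVE_QUANTITY - S.QUANTITY
      THEN B.CUMULATIVE_QUANTITY - B.QUANTITY
      ELSE S.CUMULATIVE_QUANTITY - S.QUANTITY END) AS ProcessQty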
ProcessQty is something else to be careful with. There is more than one way to calculate it, for example using the LAG function to check against the quantity on the previous row, but then you still need something for the first row:
PROCESS_QTY = ABS(LAG(B.CUMULATIVE_QUANTITY, 1) OVER (ORDER BY S.[DATE] ASC) - LAG(S.CUMULATIVE_QUANTITY, 1) OVER (ORDER BY S.[DATE] ASC))
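(LAG also takes an optional third argument that supplies a default for the first row, e.g. LAG(B.CUMULATIVE_QUANTITY, 1, 0) OVER (...), which is one way to cover that case.)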
In the end, this answers the question "How many units from this buying row match with this selling row?". Nothing special.
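To make it concrete with the sample data: the second sell (3 units at 140) spans units 1 to 4, so it takes 1 unit from the first buy (at 100) and 2 units from the second (at 105), for a profit of (140 - 100) * 1 + (140 - 105) * 2 = 110.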