>absolute table names in the trigger function
This did the trick.
Having individual elements on your screen scale and then reflow just from zooming in seems, on the surface, like a very confusing user experience, though that depends on your end goal. If your goal is just accessibility for an average webpage, then I would definitely say it's not worth it. If it's more of a custom gesture-driven app, you could look into using the canvas API.
@Svyatoslav Well, not really. As I mentioned in the post, putting the whole WP into one repo means I'm versioning code that's not mine and I don't want that. As to putting the versions in the theme or plugin files, while it's great to track compatibility requirements, it's not great for what I am trying to achieve.
You're saying "don't change anything on stage/prod manually (only thru pushes)", but that's exactly the issue. While that would be a normal and safe way of handling things with any other framework, I've come to find that WordPress websites require updates so frequently that we do do them in production directly; there's even a way to enable auto-updates for plugins!
I agree with you that in a perfect world, each update should be done on a local environment first, tested, then pushed, tested in staging, and finally deployed to prod. But the reality of the field, in my experience, is that clients that have WP websites:
Don't mind the risks of updating directly in prod (downtimes would be less expensive than getting entirely hacked)
Do not have the budget to have a developer keeping an eye on the updates every single day to test and push them
Would rather pay to have their website fixed if an update breaks it in production rather than pay devs to test each update beforehand
It's also worth noting that in my experience, updates rarely break websites when it comes to WordPress. The only times I've seen it happen were when updating across multiple major versions, and even then it was usually mostly fine!
Use this dependency:
<dependency>
    <groupId>one.stayfocused.spring</groupId>
    <artifactId>dotenv-spring-boot</artifactId>
    <version>1.0.0</version>
</dependency>
After switching to this dependency, my .env variables were loaded correctly again.
If you use Cygwin, you can run the Unix 'touch' command on Windows, which lets you set the modification timestamp of a folder to any value you want (I use that when copying folders to keep the same timestamp as the original folder).
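For illustration, a couple of hedged examples (the paths are hypothetical; -t and -r are standard touch flags):

# Set a folder's modification time to an explicit value (format [[CC]YY]MMDDhhmm)
touch -t 202401151200 /cygdrive/c/Users/me/somefolder
# Or copy the timestamp from the original folder onto its copy
touch -r original_folder copied_folder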
The subProcess option exists:
{
    "name": "Run my program",
    "type": "debugpy",
    "request": "launch",
    "program": "${workspaceFolder}/main.py",
    "subProcess": true
},
I simply got new keys from the Tap merchant dashboard; they verified my domain and it's working fine for me right now.
Well, I will throw my 2 cents into this. I ran yum update on 3 servers and got this error on 2 of the 3; the one that did not have the error had been rebooted.
So rebooting after yum update fixed the issue for me.
(Rocky Linux 9)
You should use subprocess.run, as recommended in the docs; subprocess.call is there for legacy reasons.
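A minimal sketch of the modern form (the command is just an example):

import subprocess

# run() returns a CompletedProcess with the exit code and, if requested,
# the captured output.
result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
print(result.returncode)
print(result.stdout)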
The legacy secret key location is clear at this point. How about legacy site key... anyone?
Hey, I guess I am a little late to reply, but try using rn-wireguard-tunnel.
It's a newer library, and if you hit any issues please log them in its GitHub issues section.
In your app config, where you import PrimeNG, disable it there:
providePrimeNG({
  theme: {
    preset: Aura,
    options: {
      darkModeSelector: false
    }
  }
})
Thanks Lydia and Jade for your input on this! I'll comment on the GH thread.
Jade, could you expand on what you mean by the memory leak? I'm not fully sure I understand why the handle must be opaque. Using raw pointers makes me somewhat nervous, but I suppose this could eventually be abstracted away once/if records can be exported.
Thanks again!
This post is the only one I found when Googling about this. For future people who find this post:
It is a FortiWeb set up with a Bot Detection policy. There is a section about detecting whether the traffic is really from a browser, and one option is "Real Browser Enforcement" (the system sends JavaScript to the client to verify whether it is a web browser).
Asking your IT security team to allowlist your particular URL should fix it.
Based on @Bibek Saha's answer I am providing here another solution which makes the open/close state private to the CardCloseable class. There are one and a "half" disadvantages with this method versus a big advantage of blackboxing the CardCloseable's state and not needing to keep it with N variables for N Cards (or a Map) in the Dialog's onPressed():
1. StatefulWidget instead of stateless, so less efficient.
2. The CardCloseable class still rebuilds! But it builds an "invisible" SizedBox.shrink() widget and not a Card (see the logic in its build method). This can be improved if anyone can suggest what the practice is here, because I could not find a way to return a null widget from build, indicating not to bother with this widget. I am new to Flutter.
Here is my solution based on @Bibek Saha's:
import 'package:flutter/material.dart';

/// Flutter code sample for [Card].
void main() => runApp(const CardExampleApp());

class CardExampleApp extends StatelessWidget {
  const CardExampleApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(title: const Text('Card Sample')),
        body: MyExample(),
      ),
    );
  }
}

class MyExample extends StatelessWidget {
  const MyExample({super.key});

  @override
  Widget build(BuildContext context) {
    return ElevatedButton(
      child: Text("press to show dialog"),
      onPressed: () {
        showDialog(
          barrierDismissible: true,
          barrierColor: Colors.red.withAlpha(90),
          context: context,
          builder: (BuildContext ctx) {
            return Container(
              decoration: BoxDecoration(
                border: Border.all(color: Colors.blueAccent, width: 2),
                borderRadius: BorderRadius.circular(8.0),
              ),
              child: Column(
                children: [
                  CardCloseable(text: "item"),
                  CardCloseable(text: "another item"),
                  CardCloseable(text: "another item"),
                ],
              ),
            );
          },
        );
      },
    );
  }
}

class CardCloseable extends StatefulWidget {
  final String text;
  final double width;
  // optionally, creator can supply extra callback for when closing
  final VoidCallback? onClose;

  const CardCloseable({
    super.key,
    required this.text,
    this.onClose,
    this.width = 200,
  });

  @override
  State<CardCloseable> createState() => _CardCloseableState();
}

class _CardCloseableState extends State<CardCloseable> {
  bool isOpen = true;

  @override
  void initState() {
    isOpen = true;
    super.initState();
  }

  @override
  void dispose() {
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return isOpen
        ? SizedBox(
            width: widget.width, // use the configurable width, not a hard-coded 200
            child: Card(
              color: Color(0xFFCCAAAA),
              child: Column(
                mainAxisSize: MainAxisSize.min,
                children: <Widget>[
                  // TODO: this closes the whole map!!!
                  Align(
                    alignment: Alignment.topRight,
                    child: IconButton(
                      icon: Icon(Icons.close, color: Colors.red),
                      onPressed: () {
                        setState(() {
                          isOpen = false;
                        });
                        if (widget.onClose != null) {
                          widget.onClose!();
                        }
                      },
                    ),
                  ),
                  Container(
                    padding: EdgeInsets.symmetric(horizontal: 10),
                    child: Text(widget.text),
                  ),
                ],
              ),
            ),
          )
        // if we are in a closed state, then return this:
        // this is bad design
        : SizedBox.shrink();
  }
}

class MyCard extends StatelessWidget {
  final String text;
  const MyCard({super.key, required this.text});

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Card(
        child: Column(
          mainAxisSize: MainAxisSize.min,
          children: <Widget>[CloseButton(), Text("the contents: $text")],
        ),
      ),
    );
  }
}
You simply need to fix your code by adding a scale: 2.0 setting inside your focus options to make the camera zoom in close, and make sure to select that node ID so it glows, because otherwise your node is just sitting there lonely in the middle of a very faraway map!
In case anyone comes to this down the line...
Get the count and amount of all attributes from a user in AD - use the UI for this.
Once you have the details, then update the script as needed:
$ADusers = Get-ADUser -Filter "objectclass -like 'user'" -Properties *
$output = @()
$userCheckedCount = 0
foreach ($user in $ADusers) {
    $userCheckedCount++
    Write-Progress -Activity "Checking user: $($user.name)" -Status "$($userCheckedCount) of $($ADusers.count)" -PercentComplete $($userCheckedCount / $ADusers.count * 100)
    $output += [pscustomobject]@{
        User = $user.name;
        OU = $user.DistinguishedName.Replace(',','.');
        Enabled = $user.enabled;
        extensionAttribute1 = $user.extensionAttribute1;
        extensionAttribute2 = $user.extensionAttribute2;
        extensionAttribute3 = $user.extensionAttribute3;
        extensionAttribute4 = $user.extensionAttribute4;
        extensionAttribute5 = $user.extensionAttribute5;
        extensionAttribute6 = $user.extensionAttribute6;
        extensionAttribute7 = $user.extensionAttribute7;
        extensionAttribute8 = $user.extensionAttribute8;
        extensionAttribute9 = $user.extensionAttribute9;
        extensionAttribute10 = $user.extensionAttribute10;
        extensionAttribute11 = $user.extensionAttribute11;
        extensionAttribute12 = $user.extensionAttribute12;
        extensionAttribute13 = $user.extensionAttribute13;
        extensionAttribute14 = $user.extensionAttribute14;
        extensionAttribute15 = $user.extensionAttribute15
    }
}
$output | Export-Csv -Path C:\temp\adAttributes.csv -Delimiter ';' -NoTypeInformation
I am stuck with the same issue.
Things worked well when I created my connected table in Excel. I could erase data from the table, and clicking "Update All" would refresh the figures. When I sent the file to a co-worker who has the same Power BI access rights, he got the same error message.
When I reopened my file 3 days later, I eventually got the same error message.
The only workaround I found so far is to save the file on SharePoint or OneDrive and open it with the web browser. In that case, the connection works.
Still, I am looking for a way to make it functional on my desktop. I don't know what kind of Excel settings I should try and uncheck...
Check your code for multiple explicit or implicit subscribe() calls. I didn't investigate this very deeply, but it looks like the request is created once but used multiple times (again due to multiple subscribe() calls). Possible fixes: remove the extra subscribe() calls, or cache the result of the request using the .cache() operator.
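If you are on Reactor, a minimal sketch of the caching fix (webClient here is an assumed, already-built WebClient instance):

// Each subscribe() normally re-triggers the HTTP request;
// cache() memoizes the first result so later subscribers reuse it.
Mono<String> response = webClient.get()
        .uri("/data")
        .retrieve()
        .bodyToMono(String.class)
        .cache();

response.subscribe(first -> System.out.println("first: " + first));
response.subscribe(second -> System.out.println("second: " + second)); // no second request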
For those who would rather not deal with writing code, you can use the WooCommerce Enhanced Dashboard plugin to fully customize the boring WooCommerce dashboard with widget cards, latest orders, analytics charts and activity logs.
Add the following to the project file to fix the issue
<PropertyGroup>
<CETCompat>false</CETCompat>
</PropertyGroup>
It's fixed in version 1.106.2, you can update and try again.
@MrXerios, your answer works well.
I did a comparison between three window functions: the Planck-taper window (ε = [~0; ~2.7]), a Planck-taper window based on Tanh() (ε = [0.01; 0.5]), and a Tanh-taper(?) window (ε = [0.2; ~∞]).
The different impact of the ε values makes it difficult to find exactly equal settings.
How much can be told based on plots only?
The number of dashes for defining the horizontal line can be used to control column width. Use at least as many as the longest cell in the column to avoid line breaks.
Name | Value
---------|-------------------
`Value-One` | Long explanation
`Value-Two` | Long explanation
`etc` | Long explanation
I have outlined a set of queries and scenarios that show the workings of XACT_ABORT, TRANSACTIONS and TRY/CATCH.
The following has been tested on Compatibility Level = 140
Each scenario will perform the same basic tasks and throw the same error so it will make it much easier to see what is happening and why.
Run all statements in each scenario in a single batch.
Before going into the scenarios, let's run through some setup. Create a very simple table which is going to hold three values for every test. This allows us to easily isolate and validate results as we progress.
CREATE TABLE dbo.test_scenarios (
test_id INT NOT NULL
,delete_id INT NOT NULL
);
INSERT INTO dbo.test_scenarios (test_id, delete_id)
SELECT t.n AS test_id
,d.n AS delete_id
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS t(n) -- The Test IDs
CROSS JOIN
(VALUES (0),(1),(2)) AS d(n); -- The Delete IDs
-- Verify
SELECT * FROM dbo.test_scenarios ORDER BY test_id, delete_id;
GO
SCENARIO 1
XACT_ABORT OFF
NO EXPLICIT TRANSACTION
NO TRY CATCH
Therefore...
Each delete statement runs in an autocommit transaction.
Any errors or failures have no impact on previous or following statements.
DECLARE @test_id INT = 1;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
1 0
1 1
1 2
After
test_id delete_id
1 1
*/
When the 1/0 error is hit, it is the only statement running in the autocommit transaction; therefore only that statement is rolled back and everything else continues.
The final select statement still executes as part of the batch.
This may lead to inconsistent results and data quality issues.
SCENARIO 2
XACT_ABORT OFF
EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs within an explicit transaction.
Any errors or failures have no impact on previous or following statements.
The transaction here will simply define what will be committed but there is no logic to check what should happen if any of the statements fail.
The salient difference here between this and scenario 1 is the timing of the commits.
DECLARE @test_id INT = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
2 0
2 1
2 2
After
test_id delete_id
2 1
*/
When the 1/0 error is hit, that particular statement is rolled back, but as there is no logic to handle errors, the rest of the statements proceed and the commit then commits the successful statements (0 and 2).
This is one of the riskiest applications of explicit transactions: failures/errors are not handled, yet the end user expects the transaction to be rolled back in its entirety.
This can lead to inconsistent results and data quality issues.
SCENARIO 3
XACT_ABORT ON
EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs within an explicit transaction.
Any errors or failures within the transaction are now handled by XACT_ABORT.
DECLARE @test_id INT = 3;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
3 0
3 1
3 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
*/
When the 1/0 error is hit, the explicit transaction is rolled back and the batch terminates immediately; the BATCH_ABORT signal is not intercepted by anything. Therefore no statements after the one that resulted in an error are executed.
In order to see the effect of the batch, the validation must be run separately...
DECLARE @test_id INT = 3;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
test_id delete_id
3 0
3 1
3 2
*/
Depending on your use case this may be safe enough. The downside is that while the data remains consistent, any subsequent error handling/logging/tidy-up is deferred to the client.
This is also risky code if there is a possibility that it is being run within the context of an explicit transaction that was defined prior to the code being called.
I would trust this for an ad hoc script but not in production code being called from other applications or clients. But even though I trust it, I would still never use it in this form. (opinion!)
SCENARIO 4
XACT_ABORT ON
NO EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs in an autocommit transaction.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 4;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
4 0
4 1
4 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
*/
When the 1/0 error is hit
1. It is the only statement running in the autocommit transaction therefore only that statement is rolled back.
2. However, it is not the only statement within the batch.
3. The batch itself is terminated, so that statement and any following statements are never executed.
DECLARE @test_id INT = 4;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After
4 1
4 2
*/
This is another very dangerous usage of XACT_ABORT if the true behaviour of how it operates is unknown. There is no immediate feedback available from the batch as to what has occurred.
SCENARIO 5
XACT_ABORT OFF
NO EXPLICIT TRANSACTION
TRY CATCH
Here we finally start introducing TRY/CATCH. Let's see what this brings to the table.
Each delete statement runs in an autocommit transaction within a try block.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 5;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRY
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
5 0
5 1
5 2
After
5 1
5 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=0
*/
When the 1/0 error is hit
1. It is the only statement running in the autocommit transaction therefore only that statement is rolled back.
2. However, it is not the only statement within the try block.
3. Control jumps to the catch block, so any remaining statements within the try block, following the problematic code, are not executed.
This is another dangerous usage of TRY/CATCH if one assumes that the TRY begins an explicit transaction.
SCENARIO 6
XACT_ABORT ON
NO EXPLICIT TRANSACTION
TRY CATCH
The difference here is that we now have XACT_ABORT ON, and one may expect a different result to scenario 5.
Each delete statement runs in an autocommit transaction within a try block.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 6;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
6 0
6 1
6 2
After
6 1
6 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=0
*/
When the 1/0 error is hit
1. It is the only statement running in the autocommit transaction therefore only that statement is rolled back.
2. However, it is not the only statement within the try block.
3. Control jumps to the catch block, so any remaining statements within the try block, following the problematic code, are not executed.
The CATCH block now intercepts the BATCH_ABORT signal so that any statements following the CATCH block will be executed. This explains how the final select statement is executed despite being part of the same batch.
This is another dangerous usage of TRY/CATCH if one assumes that the TRY begins an explicit transaction OR that XACT_ABORT will somehow provide some safety.
SCENARIO 7
XACT_ABORT OFF
EXPLICIT TRANSACTION
TRY CATCH
Each delete statement runs in an explicit transaction within a TRY/CATCH.
Any errors or failures impact any statements covered by the transaction.
This safely covers the error thrown from within the logic of the statements.
DECLARE @test_id INT = 7;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
7 0
7 1
7 2
After
test_id delete_id
7 0
7 1
7 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=1
*/
When the 1/0 error is hit:
1. Control jumps to the CATCH block.
2. The transaction is explicitly rolled back. (Note that the XACT_STATE = 1 so the transaction is in a state where a COMMIT could be issued.)
3. Execution CONTINUES after the END CATCH (The batch survives as BATCH_ABORT is intercepted).
This is safe and good use of TRY/CATCH and transactions; however, it does not catch all errors that may impact the transaction behaviour, e.g. client timeouts.
It is also not ideal if the code is called where an explicit user transaction has already been started.
SCENARIO 8
XACT_ABORT ON
EXPLICIT TRANSACTION
TRY CATCH
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 8;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'CATCH'
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
8 0
8 1
8 2
After
test_id delete_id
8 0
8 1
8 2
Prints...
CATCH
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the CATCH block.
3. The transaction is explicitly rolled back.
4. Execution CONTINUES after the END CATCH (The batch survives as BATCH_ABORT has been intercepted by CATCH).
NOTE ON TIMEOUTS:
If a Client Timeout occurs, the CATCH block is SKIPPED.
However, XACT_ABORT ON guarantees the transaction is still rolled back by the server.
This is the format that I would suggest using for ad hoc scripts. I still think this is not safe code to be used within stored procedures or application code anywhere (opinion!)
SCENARIO 9
XACT_ABORT ON
EXPLICIT TRANSACTION
NESTED EXPLICIT TRANSACTION
NESTED TRY CATCH
TRY CATCH
Some serious caveats with the terminology being used here: there is no such thing as a nested transaction, but there are nested BEGIN TRANSACTION statements. That's a different topic and not the focus here, where we are mainly looking at XACT_ABORT.
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 9;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION --@@TRANCOUNT = 1
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
BEGIN TRY
BEGIN TRANSACTION --@@TRANCOUNT = 2
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'INNER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION; --@@TRANCOUNT = 0
END CATCH
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2; --Batch Abort cleared so this can proceed in autocommit.
COMMIT TRANSACTION; --Fail! @@TRANCOUNT already at 0
END TRY
BEGIN CATCH
PRINT 'OUTER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
9 0
9 1
9 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
Prints...
INNER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
OUTER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
XACT_STATE()=0
Msg 3903, Level 16, State 1, Line 487
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the INNER CATCH block.
3. BATCH_ABORT is intercepted
4. The transaction is explicitly rolled back.
a. TRANCOUNT goes from 2 -> 0
5. Execution CONTINUES after the END CATCH.
6. The next statement in the batch is DELETE for delete_id = 2
7. The delete succeeds as this is performed within an autocommit transaction and the previous BATCH_ABORT had been intercepted and reset.
8. The (OUTER) COMMIT statement fails as the explicit transaction has already been rolled back.
9. Control jumps to the OUTER CATCH block, where the message indicates no begin transaction found.
10. There is no explicit transaction in operation, the XACT_STATE is 0
11. The ROLLBACK now throws an ERROR
12. BATCH_ABORT signal re-issued by XACT_ABORT
13. The batch is terminated and the final select statement is not executed
DECLARE @test_id INT = 9;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After
test_id delete_id
9 0
9 1
*/
Now that we start delving into nested BEGIN TRANSACTION statements, even with TRY/CATCH and XACT_ABORT we start to see that results can be unexpected.
SCENARIO 10
XACT_ABORT ON
EXPLICIT TRANSACTION
NESTED EXPLICIT TRANSACTION (Conditional)
NESTED TRY CATCH
TRY CATCH
The main difference between this and the previous test is that we are checking to see if an explicit transaction has already been defined. If yes we will always leave the transaction operations to be handled where the transaction has been started.
This is why you should always join existing transactions in SQL Server.
This is the safest way to construct stored procedures that handle transactions in SQL Server.
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 10;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
DECLARE @tc INT = @@TRANCOUNT;
IF @tc = 0 BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
BEGIN TRY
DECLARE @tc1 INT = @@TRANCOUNT;
IF @tc1 = 0 BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
IF @tc1 = 0 COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'INNER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
IF @tc1 = 0
BEGIN
PRINT 'INNER - ROLLING BACK';
ROLLBACK TRANSACTION;
END
ELSE
BEGIN
PRINT 'INNER - THROW';
THROW;
END
END CATCH
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
IF @tc = 0 COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'OUTER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
IF @tc = 0
BEGIN
PRINT 'OUTER - ROLLING BACK';
ROLLBACK TRANSACTION;
END
ELSE
BEGIN
PRINT 'OUTER - THROW';
THROW;
END
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
10 0
10 1
10 2
After
test_id delete_id
10 0
10 1
10 2
Prints...
INNER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
INNER - THROW
OUTER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
OUTER - ROLLING BACK
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the INNER CATCH block.
3. The transaction is not rolled back.
a. The nested try/catch 'joined' the existing transaction and so does not roll the transaction back.
4. Execution CONTINUES after the END CATCH (The batch survives).
a. Because the INNER CATCH re-throws, control immediately jumps to the OUTER CATCH block.
5. XACT_ABORT treats this like a second batch within the same transaction.
6. The transaction is already marked as uncommittable.
7. As the OUTER created the initial transaction, it rolls it back.
a. If the OUTER here were called from another statement or procedure that had initiated an explicit transaction then it would operate the same way as the INNER.
8. BATCH_ABORT has been intercepted by the outer CATCH block so the select statement proceeds.
This is why I would recommend setting XACT_ABORT ON, checking whether an explicit transaction has already been started, and using TRY/CATCH to log any errors or perform tidy-up.
If you're not sure then write a test and see if your code performs as you would expect. I have no doubt I have some typos somewhere in this post so please make sure you verify everything yourself before implementing anything in a production environment.
You might find these articles useful.
[checkbox* checkboxfield "Option1" "Option2" "Option3"]
Use exclusive for one selection at a time.
[checkbox* checkboxfield exclusive "Option1" "Option2" "Option3"]
They just want to keep developers away from their platform by restricting it in this way.
It's the top organizational unit users directly interact with; all tables related to a specific app are stored inside a database.
row -> column -> table -> database -> server
The server is the highest level overall, but the database is the top unit in terms of data organization.
You can use this plugin to fully customize the boring WooCommerce dashboard with widget cards, analytics graphs and activity logs.
The problem seems to be OS-specific; in our team it occurs on Windows only, and running the FE application under WSL solved the issue as well.
Regarding the advice to avoid dynamic_cast, message definitely received, but the main challenge is that some_method actually has a return type of DerivedA for its implementation in DerivedA, and DerivedB for its implementation in DerivedB. And this return type is precisely what I am trying to cast to.
Perhaps I skipped important contextual info in my effort to simplify my example. To be specific,
Base is a base class that wraps a matrix data structure, for example, from an external linear algebra library.
DerivedA is an implementation to wrap dense matrices from a particular linear algebra library.
DerivedB is an implementation for sparse matrices from a particular linear algebra library.
Users should be able to extend this and create their own wrappers for their favorite linear algebra library, e.g., DerivedC: public Base, and then they can use DerivedC everywhere else in my library, thanks to runtime polymorphism.
operation might be matrix multiplication, for example, but I want to allow for mixed sparse-dense multiplication.
some_method returns a reference to the underlying matrix data, with the specific type DerivedA or DerivedB. Base doesn't know this, so I cannot define a virtual some_method in Base.
As a less terrible compromise, I've made an intermediate DerivedX: public Base which is a template that takes the type of the underlying matrix wrapped by DerivedA and DerivedB, implements the dynamic cast, but also replicates each operation with a templated version of the operation, to allow mixed operations.
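To make that compromise concrete, here is a minimal sketch of such an intermediate template (every name here is a placeholder, not your actual API):

#include <iostream>

struct Base {
    virtual ~Base() = default;
};

// Intermediate layer: knows the concrete matrix type it wraps, so the typed
// accessor can live here instead of requiring a virtual some_method in Base.
template <typename Matrix>
struct DerivedX : Base {
    Matrix data;
    Matrix& some_method() { return data; }
};

struct DenseMatrix {};   // stand-in for a dense library type
struct SparseMatrix {};  // stand-in for a sparse library type

struct DerivedA : DerivedX<DenseMatrix> {};
struct DerivedB : DerivedX<SparseMatrix> {};

// Mixed operations become function templates over the wrapped types,
// dispatched at compile time instead of via dynamic_cast.
template <typename M1, typename M2>
void operation(DerivedX<M1>& a, DerivedX<M2>& b) {
    a.some_method();
    b.some_method();
    std::cout << "mixed sparse-dense operation\n";
}

int main() {
    DerivedA a;
    DerivedB b;
    operation(a, b);
}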
What would be your own use cases for migration?
I use XSLTForms only locally, in order to try out certain aspects of XForms (not least to answer questions here on StackOverflow.) I have switched to a server-side XSLT transformation.
// Source - https://stackoverflow.com/a/42527003
// Posted by mplungjan, modified by community. See post 'Timeline' for change history
// Retrieved 2025-11-24, License - CC BY-SA 3.0
if (location.host.indexOf("localhost")==-1) { // we are not already on localhost
var img = new Image();
img.onerror=function() { location.replace("http://localhost:8080/abc"); }
img.src="http://servertotest.com/favicon.ico?rnd="+new Date().getTime();
}
(Side-note: maybe I shouldn't have classified this question as "Best Practices"? I can't figure out how to add an actual answer to the question.)
Thanks to @Randommm for the pointer. I adapted this answer to work with multi-line, sometimes indented contents:
subprocess.call([
"vim",
"-c", ":set paste",
"-c", f':exe "normal i{contents}\\<Esc>"',
"-c", ":set nopaste",
filename
])
From the documents that I have gone through, the deployment process involves running a few php artisan commands on the cPanel server.
Uploading the files to the server is not a problem
I see no reason not to check every file; that is safer.
After further research, it seems that in order to preserve the initial structure I need to define a frame and use JSON-LD framing. I am not sure how it will work with rdf4j, but it works with Jena, so I will end up switching to it.
Here is a post that led me to this conclusion and also showcases an example:
JSON-LD blank node to nested object in Apache Jena
I have the same issue. Did you find a solution?
You do not need to expose your local service to the internet for local fulfilment, see https://developers.home.google.com/local-home/overview
This also supports using the Google Home app to trigger actions on an internal system or service.
The purpose of the public components is Google infrastructure such as Auth for account linking and device discovery. The action, once installed on your assistant device, executes within your network.
You should change the permissions on the vendor folder. You can do it like this:
sudo chown -R www-data:www-data vendor/
The www-data user and group here are just an example; check your own user and group with the ls -la command.
If you don't have sudo access, you should ask an administrator to do it.
I'm already using DROP PARTITION. I think there is no way to do this online with 5.7. But I found
ALTER TABLE your_table DROP PARTITION partition_name, ALGORITHM=INPLACE, LOCK=NONE;
for MySQL 8.
With the MQTT protocol, clients communicate through a broker and can subscribe to specific topics; that is the main idea. For reference implementations:
https://github.com/secretcoder85-sys/Transfer_protocols
How do you determine, if an enemy "sees player"?
If that's controlled by a range (usually larger than the attack range), that would make for three ranges:
chase range
detection range
attack range
Likely the list above is in descending order.
What happens, if all three ranges are set to the same value, and max. movement speed is zero?
If your implemented states are robust in edge cases like "player crosses multiple thresholds before states are reevaluated", that should result in a turret.
This answer shows how you can do that on the command line; you should have no issue adding it to your start command: https://stackoverflow.com/a/22866695/18973005
Is there a question here somewhere?
@charmi please post your efforts: what have you tried, and what error do you get while scanning? The barcode library you used is a well-trusted library for QR/barcode apps.
The browser's default behavior is to autocomplete a single value for the entire input. You need to write a custom script that splits the input value by the separator (,) and then filters the available options in <datalist> dynamically, based on the last value in the list.
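A minimal sketch of such a script (element IDs and the option list are hypothetical):

// Rebuild the <datalist> on each keystroke: keep the values already typed
// and offer options matching the fragment after the last comma.
const allOptions = ["red", "green", "blue"];
const input = document.getElementById("tags"); // <input list="opts">
const list = document.getElementById("opts");  // <datalist id="opts">

input.addEventListener("input", () => {
  const parts = input.value.split(",");
  const last = parts[parts.length - 1].trim().toLowerCase();
  list.innerHTML = "";
  allOptions
    .filter(o => o.toLowerCase().startsWith(last))
    .forEach(o => {
      const el = document.createElement("option");
      // Prefix with the already-chosen values so selecting a suggestion
      // does not wipe them out.
      el.value = [...parts.slice(0, -1), " " + o].join(",");
      list.appendChild(el);
    });
});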
Why do we need double parentheses below?
a:+=(("x","y"))
If you use Maven, check that the Java version in your pom.xml is also set to 25, something like this:
<properties>
    <java.version>25</java.version>
</properties>
or, if you use Gradle, in your build.gradle:
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(25)
    }
}
If the option is not available in your settings by default, the only alternative is to connect your device via USB cable to a Mac that has Xcode installed. Then the Developer option will become available.
Yes, in Spring Framework 4.3 with XML config, you can use @RestController and traditional @Controller together in the same servlet. Just make sure <mvc:annotation-driven /> and component scanning are enabled. No need for a separate servlet.
I know this is out of date, but I've just stumbled across this thread.
"The Actions on Google Console is deprecated. As of December 2024, all smart home projects that were set up in the Actions Console have been migrated to the Google Home Developer Console" - ref. https://developers.home.google.com/cloud-to-cloud/project/migration
So try looking here https://console.home.google.com/projects - I had an issue creating an action which support suggested was a browser caching issue. So do please try clearing caches etc.
It probably even belongs in a different channel altogether; compare How do I report bugs or ask for new features for SoundCloud?. However, StackOverflow is the only known channel they say should be used to get in touch with their dev team. 🤷
The delete_lines method was removed in favor of the object model, so you now need to use find_objects() to locate the lines first and then loop through them to call .delete() on each one individually.
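A hedged sketch of that loop, using only the method names mentioned above (the doc variable and the filter argument are assumptions; check your library version for the exact signatures):

# Locate the line objects first, then delete each one individually.
lines = doc.find_objects(type="line")
for line in lines:
    line.delete()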
A module map is the only way to export C/C++ symbols to Swift. You'll have to learn it; leave comments where you get stuck.
CipherSweet blind indexes are designed for exact-match search and do not support LIKE queries or wildcards (%); using a wildcard with whereBlind is what causes the failure.
The solution here will be something like an exact-match search followed by filtering the results.
In my case, this was due to a .yalc and .angular folder not being excluded by the .gitignore. Once added, this error no longer appeared.
You should add a key to your <motion.div />.
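For example, a hedged sketch (items and its fields are assumptions; initial/animate are standard Framer Motion props):

{items.map(item => (
  // A stable, unique key lets React and Framer Motion tell elements apart,
  // so enter/exit animations fire for the right element.
  <motion.div key={item.id} initial={{ opacity: 0 }} animate={{ opacity: 1 }}>
    {item.label}
  </motion.div>
))}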
// Try this: sort the digits of a number in descending order, e.g. 2814 -> 8421.
function sortDigitDescending(num) {
  return Number(
    String(num).split("").sort((a, b) => b - a).join("")
  );
}
This sounds like a proper question looking for a solution, not advice with opinion-based answers.
They probably provide FTP access for you to copy the necessary files from your dev/test environment to the server.
What do you need SSH/terminal access for, specifically?
I tried almost all of the above and it did not work until I noticed I had neglected to set an expiration date. It turns out browsers do not persist the cookie between sessions in that case.
This looks more like something to ask at https://apple.stackexchange.com/ or https://superuser.com/
Sure, they could do that, but I assume your workflow isn't used by 99.999% of their users, so the time to add and maintain this is better spent elsewhere.
Isn't that undefined behaviour anyway? And it doesn't matter how many flags are checked – they are all checked at the same time.
I used
winget install BurntSushi.ripgrep.MSVC
(50k+ stars on GitHub)
You can set up a local Hugging Face mirror: one person downloads, and many people use it. Very efficient.
I just imported the certificate and Route53 record and then applied the certificate validation. For validation it took 1 second. After this I ran "terraform plan" and it didn't want to apply anything new, so I had the validation in Terraform state.
Please check if the lon/lat values are in the right order.
See this thread: How to disable next button of the vue-form-wizard? or override it with a custom button and hide only the next button. That covers only the "Back" and "Submit/Finish" buttons, but you can also pass CSS classes to the <FormWizard> component for the step buttons using the :steps-classes prop, which takes a string or an array. Just pass a CSS class that has only pointer-events: none and this will also prevent the user from clicking the form steps/tabs.
Are you sure that it's what you need? Because (from github):
This project started as a TypeScript port of the old ANTLR4 tool 4.13.2 (originally written in Java) and includes the entire feature set of the Java version and is constantly enhanced.
IInspectable's answer (which is right):
"By default, the control ID is exposed as the AutomationId property (UIA_AutomationIdPropertyId). You can run the Inspect.exe tool to verify this. For example, the edit control with the "User ID" label should have an AutomationId that matches the numeric value of IDC_USERNAME."
Inspect is labeled as a "legacy tool", which conventionally translates to: "the last version of the tool that actually works." Accessibility Insights is entirely useless garbage.
https://github.com/vitejs/vite/issues/1794#issuecomment-769819851
Here is the answer. "The comments are replacing removed import statements. It's intentional for preserving JS source map locations."
Open settings and search for "accept", set
"Editor: Accept Suggestion On Enter" to "Off".
(or, if editing settings.json, add "editor.acceptSuggestionOnEnter": "off")
This makes "Tab" the only key that accepts an item in the suggestion box, and "Enter" always inserts a new line.
First, try closing and then reopening your IDE.
If that does not work, provide us with the full code.
You may want to look at RevenueCat rather than building a lot yourself.
A major gotcha is that many pages in NextJS are statically generated, so if you are expecting to use SSR to fetch certain environment variables, it won't work unless the variables are also available during build time.
You can fix this by using await connection();, see https://nextjs.org/docs/app/api-reference/functions/connection.
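A minimal sketch of that (MY_RUNTIME_VAR is a placeholder; connection() ships in next/server as of Next.js 15):

import { connection } from 'next/server';

export default async function Page() {
  // Opt this render out of static generation so the env var is read per request.
  await connection();
  return <div>{process.env.MY_RUNTIME_VAR}</div>;
}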
layout: {
disabledOpacity: "0.5",
radius: {
medium: '0.25rem',
},
}
This might be an IPv6 issue as well. Try this:
# set IPv4 as default
export NODE_OPTIONS="--dns-result-order=ipv4first"
# retry
npm install
I have used what @lejedi76 proposed, but the code does not work in the recent versions of QGIS.
In my Windows 10 & QGIS 3.40.11, this worked:
import console
script_path = iface.mainWindow().findChild(console.console.PythonConsole).findChild(console.console_editor.EditorTabWidget).currentWidget().file_path()
Your @theme inline block and custom CSS variables override all color utilities, but they don’t define Tailwind’s actual color tokens.
Tailwind generates .dark .text-primary { color: theme("colors.primary") }
But your global CSS overrides --primary and --primary-foreground directly inside .dark
And since your base layer forces:
* {
@apply border-border outline-ring/50;
}
And:
body {
@apply bg-background text-foreground;
}
Those use CSS variables, which override internal Tailwind utilities, including dark: variants.
Tailwind’s dark: variant only works if .dark is on <html> or <body>
Modify the setup.py file (or setup_modified.py).
The key step is to modify setup.py and comment out the CUDA version check, then run:
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation .
(See the guide "Installing NVIDIA Apex for ComfyUI [with CUDA extension support]: from the error 'Nvidia APEX normalization not installed' to a complete fix".)
After a correct build and install, verification should look like this:
FusedLayerNorm available
CUDA available: True
CUDA version: 12.6
When you run:
arguments[0].value = arguments[1];
You are mutating the DOM directly, but React does not look at the DOM to decide state.
React only updates when the onChange event comes from a real user interaction, with the actual event properties it expects.
Selenium’s synthetic events don't trigger React’s internal “value tracker”, so React thinks:
“No change happened, keep using the old state value.”
That’s why the visual field shows your injected value, but the DOM + React state still have an empty value.
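A common workaround is to call the native value setter and then dispatch a real event so the value tracker notices; a hedged sketch to run via execute_script, passing the element and the new value as arguments:

// Use the prototype's value setter, bypassing React's instrumented one...
const setter = Object.getOwnPropertyDescriptor(
  window.HTMLInputElement.prototype, "value"
).set;
setter.call(arguments[0], arguments[1]);
// ...then fire a bubbling input event, which React's onChange listens for.
arguments[0].dispatchEvent(new Event("input", { bubbles: true }));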
You are calling EXECUTE @sql instead of EXEC(@sql). Update it and try again; it will work.
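For clarity, the one-character difference (sketch):

-- EXECUTE @sql treats the variable's value as the *name* of a stored procedure;
-- wrapping it in parentheses executes the string as a dynamic SQL batch.
EXEC (@sql);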
@danblack 8.4.4.
innodb_buffer_pool_size=128M
innodb_log_file_size isn't set in the my.ini; a SELECT gives me 50331648.
I'll try setting them both to 1GB and see how that goes, but wouldn't making them larger result in most queries being fast and then one being much longer?
From Laravel 11 onward, new project setups no longer have Kernel.php directly.
Instead, that has been integrated into bootstrap/app.php for better performance; you can use that instead of the Kernel.
Your CSS has a typo:
input:focus { outline: none: }
You added a colon (:) where a semicolon (;) is required:
input:focus { outline: none; }
I recently started working with the Microsoft Fabric platform, and this is also my first time writing on Stack Overflow. I am going through this situation, so I would love to share my approach to solving it.
For anyone coming across this, I am sharing a link which will help you with RLS most efficiently (beginner friendly):
https://learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-row-level-security
Create a Security Schema
Create a UserAccess Table or use AD, etc.
Create a function based on your UserAccess table which checks username()/context to validate the user:
CREATE FUNCTION Security.tvf_securitypredicateOwner(@UserAccess AS VARCHAR(256))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS tvf_securitypredicate_result FROM [Security].UserAccess
WHERE IS_MEMBER(Name) = 1
AND Name = @UserAccess -- must match the function's parameter name (@USER is not declared)
Create a Security policy
CREATE SECURITY POLICY MyTableFilter
ADD FILTER PREDICATE Security.tvf_securitypredicateOwner([User])
ON dbo.MyFacttable
WITH (STATE = ON);
GO
This is the most efficient and easy way to implement RLS. One can make it more dynamic by implementing more flexible functions for multiple tables.
The slowness comes from DevTools preparing for the interactive display, not just dumping raw data. For huge or deeply nested objects, property traversal (and sometimes even stringifying-for-display, if the console tries to render summaries) can be significantly slow compared to simply printing a pre-made string.
https://www.prisma.io/docs/orm/more/upgrade-guides/upgrading-versions/upgrading-to-prisma-7
Change dotenv import from
import dotenv from "dotenv";
dotenv.config({ path: path.resolve(process.cwd(), ".env") });
To:
import 'dotenv/config';
Example:
import 'dotenv/config';
import { defineConfig, env } from 'prisma/config';
export default defineConfig({
  schema: './schema.prisma',
  datasource: {
    url: env('DATABASE_URL')
  }
});
Najki, I saw in your comments that you have onboarded to use RTDN. Can you share any latency figures for receiving RTDNs? Max value, p99, or average; anything would work and would be really helpful for my use case.
The safest and simplest approach is to make the v2 identity service the single source of truth for login, JWT issuance, RBAC checks, and KYC events, and have the legacy v1 system integrate with it over a well-defined REST or gRPC API (REST is usually easier for legacy systems; gRPC is faster if both sides support it). Let v1 delegate all auth-related operations to v2: for login, v1 redirects or proxies requests to the v2 auth endpoints; for permission checks, v1 validates incoming JWTs using v2’s public keys; and for KYC updates, v2 sends asynchronous webhooks or message-queue events that v1 consumes. Avoid duplicating identity logic in v1—treat v2 as a black-box identity provider. This keeps the integration secure, incremental, and future-proof while minimizing changes inside the monolith.
I'm in the same situation as you. I have created my own backend WebSocket to connect the Guacamole JavaScript API with the guacd daemon. However, when implementing the SFTP function on the frontend, I obtained the object through onfilesystem, but how can I use this object to access files in the actual directory? I know this object has methods like createOutputStream and requestInputStream, but I've been trying for a long time without success. Please help me!
Short answer: We could design “email over HTTP,” but SMTP isn’t just an old text protocol. It’s an entire global, store-and-forward, federated delivery system with built-in retry, routing, and spam-control semantics. HTTP was never designed for that.
What actually happens today is:
• HTTP/JSON at the edges (webmail, Gmail API, Microsoft Graph, JMAP, etc.)
• SMTP in the core (server-to-server email transport across the internet)
SMTP is not being “phased out”; it’s being hidden behind HTTP APIs.
⸻
Email is designed as:
“I hand this message to my server, and the network of mail servers will eventually get it to yours, even if some servers are down for hours or days.”
SMTP + MTAs (Postfix, Exim, Exchange, etc.) do this natively:
• If the next hop is down, the sending server queues the message on disk.
• It retries automatically with backoff (minutes → hours → days).
• Custody of the message is handed from server to server along the path.
HTTP is designed as:
“Client opens a connection, sends a request, and expects an answer right now.”
If an HTTP POST fails or times out:
• The protocol itself has no standard queueing or retry schedule.
• All logic for retries, backoff, de-duping, etc. must be implemented at the application level.
You can bolt on message queues, idempotency keys, etc., but then every “mail server over HTTP” on Earth would need to implement the same complex behavior in a compatible way. At that point you’ve reinvented SMTP on top of HTTP.
SMTP already provides this behavior by design.
⸻
One of email's killer features is universal federation:
• A user at one domain can email a user at any other domain without their admins ever coordinating.
This works because of DNS MX records:
• example.com publishes MX records: "these hosts receive mail for my domain."
• Any MTA does an MX lookup, connects to that host on port 25, and can deliver mail.
If we move email transport to HTTP, we need a standardized way to say:
“This URL is the official Email API endpoint for example.com.”
That requires, globally:
• A discovery mechanism (SRV records, /.well-known/... conventions, or similar).
• An agreed-upon email-over-HTTP API.
• Standard semantics for retries, error codes, backoff, etc.
Getting every ISP, enterprise, and government mail system to move to that at the same time is a massive coordination problem. MX + SMTP already solve routing and discovery and are deployed everywhere.
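To make the discovery step concrete, a minimal sketch of the MX lookup every MTA performs (uses the dnspython package; example.com is a placeholder domain):

import dns.resolver

# The lowest preference value is tried first, exactly as an MTA would.
answers = dns.resolver.resolve("example.com", "MX")
for mx in sorted(answers, key=lambda r: r.preference):
    print(mx.preference, mx.exchange)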
⸻
Modern email is dominated by spam/abuse defense, and those defenses are wired into SMTP's world:
• IP reputation (DNSBL/RBLs): blocking or throttling based on connecting IP.
• SPF: which IPs are allowed to send mail for a domain.
• DKIM: signing the message headers/body for integrity.
• DMARC: policy tying SPF + DKIM to "what should receivers do?"
All of these assume:
• A dedicated mail transport port (25) and identifiable sending IPs.
• A stable, canonicalized message format (MIME / RFC 5322).
• SMTP envelope concepts like HELO, MAIL FROM, RCPT TO.
Move transport onto generic HTTPS and you immediately get new problems:
• Shared IPs behind CDNs and API gateways (you can't just "block that IP" without collateral damage).
• JSON payloads whose field order and formatting are not naturally canonical for signing.
• No built-in distinction between "this POST is a mail send" and "this POST is some random API."
You’d need to redesign and redeploy SPF/DKIM/DMARC equivalents for HTTP, and then get global adoption. That’s a huge, risky migration for users who mostly wouldn’t see a difference.
⸻
When you do any of:
• Call Gmail's HTTP API
• Call Microsoft Graph sendMail
• Use SendGrid/Mailgun/other REST "send email" APIs
you are sending email over HTTP – to your provider.
Under the hood, they:
1. Receive your HTTP/JSON request.
2. Convert it into a standard MIME email.
3. Look up the recipient domain's MX record.
4. Deliver over SMTP to the recipient's server.
So:
• HTTP is used where it's strong: client/app integration, OAuth, web tooling, firewalls, etc.
• SMTP is used where it's strong: inter-domain routing, store-and-forward, spam defenses.
Same idea with JMAP: it replaces IMAP/old client protocols with a modern HTTP+JSON interface, but the server still uses SMTP to talk to other domains.
⸻
Even if you designed a perfect "Email over HTTP" protocol today, you still have:
• Millions of existing SMTP servers.
• Scanners, printers, embedded devices, and old systems that only know SMTP.
• Monitoring, tooling, and operational practices built around port 25 and SMTP semantics.
• A global network that already works.
There's no realistic "flip the switch" moment where everyone migrates at once. What's happening instead is:
• Core stays SMTP for server-to-server transport.
• Edges become HTTP (APIs, webmail, mobile clients).
⸻
Because the problem email solves is:
• asynchronous
• store-and-forward
• globally federated
• extremely spam-sensitive
and SMTP is designed and battle-tested for exactly that.
HTTP is fantastic for:
• synchronous request/response
• APIs
• browsers and apps
So the real answer is:
• We already use HTTP where it makes sense (clients, APIs, management).
• We keep SMTP where it makes sense (inter-domain, store-and-forward transport).
SMTP isn’t still here just because it’s old. It’s still here because, for global email delivery between independent domains, nothing better has actually replaced it in practice.
What is your MySQL version? Are the innodb_buffer_pool_size and innodb_log_file_size much larger than 50M?