You cannot reliably add javax.script (the JSR-223 scripting API) to an Android app using Gradle; the toolchain does not support plugging in arbitrary JDK java.*/javax.* classes like this.
Shift-Command-T to open Terminal in macOS Recovery.
I know this is an old issue, but my friend and I created a library for this exact purpose! It takes any arbitrary text and segments it into morphemes. Here are some links for it below; I hope y'all find it helpful!
The thing is that to translate code to binary you need to know binary and write your compiler accordingly. Assembly, being the first programming language to exist, is the closest thing to binary that is still kind of readable (with great emphasis on "kind of"), but it translates directly to binary: every line of assembly is a binary instruction, just polished to "look" human-readable. That's why you'll find it used in compilers for high-level languages, in the code that implements operating systems, and in applications where no memory can be wasted.
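To make the "one line of assembly is one binary instruction" point concrete, here is a small x86 illustration with the machine-code bytes each line assembles to shown as hex:

```asm
mov eax, 1      ; B8 01 00 00 00  (opcode B8 followed by a 32-bit immediate)
ret             ; C3              (a single-byte instruction)
```

The mnemonic is just a readable spelling of those bytes; an assembler does little more than this one-to-one translation.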
Use env_file as a list in the combined class:

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_prefix="APP_",
        env_file=[".env.database", ".env.auth"],  # multiple files supported
        env_file_encoding="utf-8",
        extra="ignore",
    )

    # Explicitly declare fields so the IDE knows them
    db_host: str = "localhost"
    auth_secret_key: str = "change-me"

settings = Settings()
print(settings)

Output:

Settings(db_host='db.example.com', auth_secret_key='secret-from-env-file')
I thought that the behavior in this sense is the same, and hence the explanation could be the same too, taken from TypeScript
Flow isn't TypeScript. Is there a reason the question is about the former (except for what looks like an afterthought) but tagged as the latter?
If you’re sure the information provided to connect to the database is correct, the next step is to check your internet service provider (ISP). Some ISPs enforce firewall rules which block you from connecting to your database on a specific port. If that’s not the case, make sure you’re allowed to access the Red Hat server.
This is the first time I’ve tried to answer a question.
I’ve been using VSCode on Windows at my workstation for years to edit my TeX files (with a WSL virtual machine to have a Linux LaTeX distribution). LaTeX Workshop for VSCode is great, and everything I need (latexindent, chktex, cspell) is easy to set up. I’d recommend using VSCode workspaces to achieve your goal. Once the workspace is saved (as a .code-workspace file), you can open it from Windows Explorer, and all your windows will be restored.
From a quick search, this script detects which window the computer it is running on has focused, and logs when it changes. I don't have experience with it, so I can't give an exact bit of code, but I can give a pseudocode path.
1 - Have this script running on every admins computer.
2 - Assign a variable to each chat; let's call it 'human focus' for this. When a chat is initially opened, 'human focus' is 0, signalling for the chatbot to be active.
3 - When this script detects a human focusing into the window of a specific chat, set that chat's 'human focus' to 1, signalling for the chatbot to be disabled.
3.5 - If the human sends a message, set 'human focus' to 2.
4 - When they shift focus away: if 'human focus' is 1, set it back to 0, reactivating the chatbot; if not, do nothing.
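The steps above can be sketched as a small state machine. This is only a hedged illustration of the logic; the chat IDs and the event hooks (how you detect focus changes and messages) are hypothetical placeholders you would wire up to the actual focus-detection script and chat system:

```python
# States for the per-chat 'human focus' variable described above.
HUMAN_ABSENT = 0    # chatbot active
HUMAN_WATCHING = 1  # human focused the chat window; chatbot paused
HUMAN_ENGAGED = 2   # human sent a message; chatbot stays off after blur

focus_state = {}  # chat_id -> one of the states above

def on_chat_opened(chat_id):
    # Step 2: a freshly opened chat starts with the chatbot active.
    focus_state[chat_id] = HUMAN_ABSENT

def on_window_focused(chat_id):
    # Step 3: a human is looking at this chat, so pause the chatbot.
    if focus_state.get(chat_id, HUMAN_ABSENT) == HUMAN_ABSENT:
        focus_state[chat_id] = HUMAN_WATCHING

def on_human_message(chat_id):
    # Step 3.5: the human replied themselves.
    focus_state[chat_id] = HUMAN_ENGAGED

def on_window_blurred(chat_id):
    # Step 4: only reactivate if the human merely looked without replying.
    if focus_state.get(chat_id) == HUMAN_WATCHING:
        focus_state[chat_id] = HUMAN_ABSENT

def chatbot_active(chat_id):
    return focus_state.get(chat_id, HUMAN_ABSENT) == HUMAN_ABSENT
```

The key design point is the distinction between states 1 and 2: a glance at the window pauses the bot only temporarily, while an actual human reply keeps it disabled after focus moves away.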
Thanks for your comments and help, but that does not answer my question. I asked how this can be done via the current GitHub GUI, not with any command-line commands.
Here is the correct way (I verified it myself just a minute ago) via the GitHub UI only:
Either fork the desired repository (or a branch in the desired repository), or, if you already forked it in the past, update your fork if it's behind the original
Create/edit file(s) now in your own branch to make changes
Go to the overview and select the target branch you made the commits on before
Click on the line saying "This branch is x commits ahead of xxxxx"
Click on the green button saying "View pull request"
The new PR with all changes will then be created in the target repository
Just one note: do not delete your files on GitHub until they are merged!
However, if you have done all the work on your PC and pushed everything to GitHub, deleting those files locally is allowed and has no effect on GitHub.
The boilerplate is the safest option, but remember you can put everything in a single .hpp file and call it a day; it's not that hard. Plus, you can add documentation inside the .hpp file, so it's not a bad option.
const userActive = injectQueries(() => ({
  queries: [this.user, this.roles],
  select: (user, roles) => {
    if (!user || !roles) return null;
    return { ...user, roles };
  }
}));
As @molbdnilo mentioned in a comment, @@ refers to the path to the test input file, not its contents.
By default, AFL/AFL++ read from stdin. If you want them to use argv instead, you have to employ a small trick to get it to fill argv with stuff from stdin. You can find an example on AFL++'s repository: https://github.com/AFLplusplus/AFLplusplus/tree/stable/utils/argv_fuzzing.
This stumped me for a few days, and none of the solutions were viable or worked.
If you are running php-fpm with nginx, add this in the server{} block in www.yourdomain.com.config:
proxy_pass_header Server;
In your PHP controller:
header("Server: Singer Sewing machine");
I needed a solution fast to pass PCI and was panicking that I only had a few days to find a viable solution. This got it done.
I could not find a way to set it in either php.ini or the fpm configs.
Please note that as of version 142, Chrome has deprecated Private Network Access in favor of Local network access restrictions. I can confirm that attempting to send a CORS request to a domain that resolves to a private IP address without the permission enabled results in an error like the following:
Access to XMLHttpRequest at '<resource-url>' from origin '<origin>' has been blocked by CORS policy: Permission was denied for this request to access the `unknown` address space.
Ensure that you have the following permission enabled on the affected site(s):
For more information, see this blogpost.
As for the original questions:
Is Chrome blocking this due to Private Network Access (PNA)?
As far as I know, PNA only blocked requests from insecure contexts. But this has been deprecated as well.
PNA CORS preflight requests (i.e. the access-control-request-private-network/access-control-allow-private-network headers) were supposed to be enforced in Chrome 130, but enforcement was ultimately put on hold before PNA was deprecated. Thus, it's very unlikely that PNA is blocking your requests.
Is there any server-side configuration that can allow this pattern?
There's no way to control this from an HTTP server since this is a security feature designed to protect against malicious HTTP servers.
Is removing the internal DNS override the only reliable fix?
Removing the internal DNS override should remove the need for the permission since it will no longer be considered a local network access request.
Would routing all API calls internally through a reverse proxy (so the browser always hits a public endpoint) avoid PNA issues?
Just like the DNS approach, this should also do the trick.
Is there any recommended approach for environments where public domains resolve to internal IPs only on specific networks?
In corporate environments with managed devices, Chrome policies can be used to enable the permission on a list of domains.
It is quite simple. Please check:
SELECT e.UserID,
e.DeviceID,
e.EventName,
e.EventTime,
p.amount,
p.color
FROM EventTable e
JOIN PurchaseTable p
ON e.EventName = p.purchase_code;
That sounds like an effective, though potentially expensive, approach. Thanks for the suggestion; I will look into it.
I’ve updated the post and included the implementations I tested. I haven’t tried Microsoft.Data.SqlClient.SqlBulkCopy yet. Since the T-SQL BULK INSERT approach didn’t provide any performance improvement over the Azure Data Factory implementation, and scaling from General Purpose (12 vCores) to Business Critical (8 vCores) also didn’t improve ADF performance, I’m starting to wonder if there are other hidden limits causing this bottleneck.
Edited the answer to meet those clarified requirements.
Use PivotTo or MoveTo to move the model of the car. If you mean the mesh, look up.
car:PivotTo(CFrame.new(pos)) OR car:MoveTo(pos)
My guy, AI-generated content is banned on Stack Overflow.
You need to add these libs as binaryTargets.
Forget about module map: it's fully managed by SPM.
@smallpepperz Thank you! That clarifies it. I guess my final question would be how this statement would look for each check of the two if conditions:
if (i - j - 1 < new_str.length && new_str[j] != new_str[i - j - 1])
Would it be, check #1:
if (8-0-1 < 8 && a != ?)
check #2:
if (8-1-1 < 8 && b != ?)
check #3
if (8-2-1 < 8 && c != ?)
And what would "new_str[i - j - 1]" be in each of these 3 iterations?
Thank you very much, works perfectly.
I don't suppose there is a way that, if the count is 1, it just returns the name "David Welsh" rather than "David Welsh x 1"?
It looks like intercepting routes are officially not supported with static exports, since they require a running server. Docs: https://github.com/vercel/next.js/blob/7e093afb8cb1324b2aae6a16fd8b2b6e3cc577a5/docs/01-app/02-guides/static-exports.mdx?plain=1#L292
which was clarified in
Usually, when a script is distributed in raw/code form, there is also a virtual environment to go with it.
The virtual environment is most likely described by a requirements.txt file. You need either miniconda or miniforge (conda) to install this environment. It's also possible that the environment is provided as a .zip file in the Releases (if we are talking about GitHub).
When you have Python installed and you run any script.py file, the file system calls the "default application", which in this case is Python. Python is then called with the file as an argument, but the process runs inside that "default app", a Python console in a new window, which closes as soon as the script finishes.
To avoid this, simply create a run.bat file or type in the command:
FULL\PATH\TO\python.exe script.py
(recommended: this will call Python and the script in your current window)
or
python script.py if your Python environment is in the System/User PATH
(not recommended, as you may want to use multiple environments on one device).
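A minimal run.bat along the lines above; the interpreter path is a placeholder you would replace with your own:

```bat
@echo off
rem Call a specific interpreter so the intended environment is used
FULL\PATH\TO\python.exe script.py
rem Keep the window open so any output or errors stay visible
pause
```

The pause at the end addresses the original problem: without it, double-clicking still closes the window the moment the script exits.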
If the bug has security consequences, it should be reported directly through their responsible disclosure program:
https://help.soundcloud.com/hc/en-us/articles/115003561228-Reporting-a-security-vulnerability
thanks mate.. i want to cry.. will close this question..
It is possible that their GitHub profile does not contain the website's code, but in that case it is better to reach them privately via email or a social media account if the bug has security consequences.
Looking at your code, the constructor is written as contructor instead of constructor, so the constructor never runs and this.page stays undefined. Fixing the spelling will solve the error.
I have no clue. I've been trying for a few hours and still couldn't solve it.
I understand it like this:
A group gives a user continuous, long-term access. But if a user who is already in a group needs temporary access to a specific resource, adding them to another group or changing policies is inconvenient. In that situation, the simplest option is to let the user assume a role, use the required permissions temporarily, and then switch back when done.
Groups → long-term, continuous access
Roles → temporary, on-demand access without permanently changing user permissions
@life888888: IntelliJ IDEA Community Edition does not support Spring Boot, not even "classic" Spring. See https://www.jetbrains.com/idea/features/. This is a feature of the paid Ultimate Edition.
I don't know Pydantic well, but this one seems suited to your problem:
https://docs.pydantic.dev/latest/concepts/validators/#model-after-validator
Thanks. I need to store the binary data as a constant among other (non-binary) data in a constants file.
Could you put your comment in an answer so I can accept it?
Would p.communicate() not help in this case?
stdout, stderr = p.communicate()
This way the Python side can actually capture the output and wait for the process to finish.
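A minimal sketch of that pattern; the echo command here is just a stand-in for whatever process you are launching:

```python
import subprocess

# Launch a child process with pipes for its output streams.
p = subprocess.Popen(
    ["echo", "hello"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,  # decode bytes to str
)

# communicate() reads all output and waits for the process to exit,
# which also avoids deadlocks when the pipe buffers fill up.
stdout, stderr = p.communicate()
print(stdout.strip())
```

After communicate() returns, p.returncode holds the child's exit status.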
For my Android note app I encrypt data before backup (end-to-end encryption), so even if someone can get the data, they cannot read it.
You can try to follow my solutions.
To implement steps 1 to 4 quickly, you can check out my open-source library. My library supports methods to convert data to JSON and zip files, and to encrypt/decrypt data easily.
GitHub: https://github.com/vuthaiduy1990/android-wind-library/wiki
@kikon Thank you for such a detailed explanation! Wondering if you could clarify this last part:
"j is the position at the end of the string, and i - j - 1 is the position from the start of the string; since i = 8, j = 6"
I understand why i would be 8, since the original length of the array (which is 6 items) expands to 8 items. But why would j be 6 at that point? Maybe I'm not understanding exactly what j is supposed to represent. You mentioned that "j is the position at the end of the string". Does that mean that j loops over the array items from right to left, i.e. the last item of the original array would be index 0 for j, the second-to-last item of the original array would be index 1 for j, etc.?
It says I can't delete it.
Sorry.
If anyone stumbles on this issue, try:
driver.executeScript("mobile: pressKey", Map.ofEntries(Map.entry("keycode", 4)))
MySQL has a structure with levels.
A database is a big container. Inside the database, you have tables. Inside tables, you have rows and columns. So the database is the highest level because it holds everything.
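Those levels can be sketched in SQL; the names here are made up purely for illustration:

```sql
CREATE DATABASE shop;            -- the big container
USE shop;
CREATE TABLE products (          -- a table inside the database
    id   INT PRIMARY KEY,        -- columns define the structure
    name VARCHAR(50)
);
INSERT INTO products
VALUES (1, 'Widget');            -- a row inside the table
```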
I thought this was the main site? stackoverflow.com right? How and where should I move this question or do I start again somewhere else?
It might be worth checking whether your Directus domain is entered correctly (i.e. without the https:// protocol). Also, I would consider relaxing the pathname parameter (e.g. to *) to rule out a configuration error.
>absolute table names in the trigger function
This did the trick.
Having the individual elements on my screen scale and then reflow just from zooming in seems on the surface like a very confusing user experience, without knowing what your end goal is. If your goal is just accessibility for an average webpage then I definitely would say it's not worth it. If it's more of a custom gesture-driven app you could look into using the canvas API.
@Svyatoslav Well, not really. As I mentioned in the post, putting the whole WP into one repo means I'm versioning code that's not mine and I don't want that. As to putting the versions in the theme or plugin files, while it's great to track compatibility requirements, it's not great for what I am trying to achieve.
You're saying "don't change anything on stage/prod manually (only through pushes)", but that's exactly the issue. While that would be a normal and safe way of handling things with any other framework, I've come to find that WordPress websites require updates so frequently that we do in fact apply them in production directly; there's even a way to enable auto-updates for plugins!
I agree with you that in a perfect world, each update should be done on a local environment first, tested, then pushed, tested in staging, and finally deployed to prod. But the reality of the field, in my experience, is that clients that have WP websites:
Don't mind the risks of updating directly in prod (downtimes would be less expensive than getting entirely hacked)
Do not have the budget to have a developer keeping an eye on the updates every single day to test and push them
Would rather pay to have their website fixed if an update breaks it in production rather than pay devs to test each update beforehand
It's also worth noting that in my experience, updates rarely break websites when it comes to WordPress. The only times I've seen it happen were when I was updating from multiple major versions behind, and even then it was usually mostly fine!
Use this dependency:
<dependency>
<groupId>one.stayfocused.spring</groupId>
<artifactId>dotenv-spring-boot</artifactId>
<version>1.0.0</version>
</dependency>
After switching to this dependency, my .env variables were loaded correctly again.
If you use Cygwin, you will be able to run the Unix 'touch' command on Windows and update the modification timestamp of the folder to any value you want (I use that when copying folders to keep the same timestamp as the original folder).
The subProcess option exists:
{
"name": "Run my program",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/main.py",
"subProcess": true
},
I simply got new keys from the Tap merchant dashboard; they verified my domain and it's working fine for me right now.
Well, I will throw my 2 cents into this. I had yum-updated 3 servers and got this error on 2 of the 3; the one that did not have the error had been rebooted.
So rebooting after yum update fixed the issue for me.
Rocky Linux 9
You should use subprocess.run as recommended in the doc. subprocess.call is there for legacy reasons.
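A minimal sketch of the recommended form; the echo command is just a stand-in:

```python
import subprocess

# subprocess.run waits for the command to finish and returns a
# CompletedProcess with the exit code and (optionally) captured output.
result = subprocess.run(
    ["echo", "hello"],
    capture_output=True,  # capture stdout/stderr instead of inheriting them
    text=True,            # decode bytes to str
)
print(result.stdout.strip())
```

Unlike subprocess.call, which only returns the exit code, subprocess.run hands back output, errors, and the return code in one object, and can raise on failure via check=True.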
The legacy secret key location is clear at this point. How about legacy site key... anyone?
Hey, I guess I'm a little late to reply, but try using rn-wireguard-tunnel.
It's a newer library, and if you run into any issues, please log them in the GitHub issues section.
In the app config, where you import PrimeNG, disable it there:
providePrimeNG({
theme: {
preset: Aura,
options: {
darkModeSelector: false
}
}
})
Thanks Lydia and Jade for your input on this! I'll comment on the GH thread.
Jade, could you elaborate on what you mean by the memory leak? I'm not fully sure I understand the reason why the handle must be opaque. Using raw pointers makes me somewhat nervous, but I suppose this could eventually be abstracted away once/if records can be exported.
Thanks again!
This post is the only one I have found when Googling about this. For future people who find this post:
It is a FortiWeb set up with a Bot Detection policy. There is a section about detecting whether the traffic is really from a browser, and one option is "Real Browser Enforcement" (the system sends JavaScript to the client to verify whether it is a web browser).
Asking your IT security team to allowlist your particular URL should fix it.
Based on @Bibek Saha's answer, I am providing here another solution which makes the open/close state private to the CardCloseable class. There are one and a "half" disadvantages to this method, versus the big advantage of black-boxing CardCloseable's state and not needing to keep it in N variables for N Cards (or a Map) in the Dialog's onPressed():
1. StatefulWidget instead of stateless, so less efficient.
2. The CardCloseable class still rebuilds! But it builds an "invisible" SizedBox.shrink() widget and not a Card (see the logic in its build method). This can be improved if anyone can suggest what the practice is here, because I could not find a way to return a null widget from build, indicating not to bother with this widget. I am new to Flutter.
Here is my solution based on @Bibek Saha's:
import 'package:flutter/material.dart';
/// Flutter code sample for [Card].
void main() => runApp(const CardExampleApp());
class CardExampleApp extends StatelessWidget {
const CardExampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Card Sample')),
body: MyExample(),
),
);
}
}
class MyExample extends StatelessWidget {
const MyExample({super.key});
@override
Widget build(BuildContext context) {
return ElevatedButton(
child: Text("press to show dialog"),
onPressed: () {
showDialog(
barrierDismissible: true,
barrierColor: Colors.red.withAlpha(90),
context: context,
builder: (BuildContext ctx) {
return Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.blueAccent, width: 2),
borderRadius: BorderRadius.circular(8.0),
),
child: Column(
children: [
CardCloseable(text: "item"),
CardCloseable(text: "another item"),
CardCloseable(text: "another item"),
],
),
);
},
);
},
);
}
}
class CardCloseable extends StatefulWidget {
final String text;
final double width;
// optionally, creator can supply extra callback for when closing
final VoidCallback? onClose;
const CardCloseable({
super.key,
required this.text,
this.onClose,
this.width = 200,
});
@override
State<CardCloseable> createState() => _CardCloseableState();
}
class _CardCloseableState extends State<CardCloseable> {
bool isOpen = true;
@override
void initState() {
isOpen = true;
super.initState();
}
@override
void dispose() {
super.dispose();
}
@override
Widget build(BuildContext context) {
return isOpen
? SizedBox(
width: widget.width,
child: Card(
color: Color(0xFFCCAAAA),
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
// TODO: this closes the whole map!!!
Align(
alignment: Alignment.topRight,
child: IconButton(
icon: Icon(Icons.close, color: Colors.red),
onPressed: () {
setState(() {
isOpen = false;
});
if (widget.onClose != null) {
widget.onClose!();
}
},
),
),
Container(
padding: EdgeInsets.symmetric(horizontal: 10),
child: Text(widget.text),
),
],
),
),
)
// if we are in a closed state, then return this:
// this is bad design
: SizedBox.shrink();
}
}
class MyCard extends StatelessWidget {
final String text;
const MyCard({super.key, required this.text});
@override
Widget build(BuildContext context) {
return Center(
child: Card(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[CloseButton(), Text("the contents: $text")],
),
),
);
}
}
You simply need to add a scale: 2.0 setting inside your focus options to make the camera zoom in close, and make sure to select that node ID so it glows; otherwise your node is just sitting there alone in the middle of a very far-away map!
In case anyone comes to this down the line...
Get the count and names of all attributes from a user in AD (use the UI for this).
Once you have the details, update the script as needed:
$ADusers = Get-ADUser -Filter "objectclass -like 'user'" -Properties *
$output = @()
$userCheckedCount = 0
foreach ($user in $ADusers) {
$userCheckedCount ++
Write-Progress -Activity "Checking user: $($user.name)" -Status "$($userCheckedCount) of $($ADusers.count)" -PercentComplete $($userCheckedCount / $ADusers.count * 100)
$output += [pscustomobject]@{
User = $user.name;
OU = $user.DistinguishedName.Replace(',','.');
Enabled = $user.enabled;
extensionAttribute1 = $user.extensionAttribute1;
extensionAttribute2 = $user.extensionAttribute2;
extensionAttribute3 = $user.extensionAttribute3;
extensionAttribute4 = $user.extensionAttribute4;
extensionAttribute5 = $user.extensionAttribute5;
extensionAttribute6 = $user.extensionAttribute6;
extensionAttribute7 = $user.extensionAttribute7;
extensionAttribute8 = $user.extensionAttribute8;
extensionAttribute9 = $user.extensionAttribute9;
extensionAttribute10 = $user.extensionAttribute10;
extensionAttribute11 = $user.extensionAttribute11;
extensionAttribute12 = $user.extensionAttribute12;
extensionAttribute13 = $user.extensionAttribute13;
extensionAttribute14 = $user.extensionAttribute14;
extensionAttribute15 = $user.extensionAttribute15
}
}
$output | Export-Csv -Path C:\temp\adAttributes.csv -Delimiter ';' -NoTypeInformation
I am stuck with the same issue.
Things worked well when I created my connected table in Excel. I could erase data from the table, and clicking "Update All" would refresh the figures. But when I sent the file to a co-worker who has the same Power BI access rights, he got the same error message.
When I reopened my file 3 days later, I eventually got the same error message.
The only workaround I have found so far is to save the file on SharePoint or OneDrive and open it with the web browser. In that case, the connection works.
Still, I am looking for a way to make it functional on my desktop. I don't know what kind of Excel settings I should try and uncheck...
Check your code for multiple explicit or implicit subscribe() calls. I didn't investigate this very deeply, but it looks like the request is created once but used multiple times (again due to the multiple subscribes). Possible fixes: remove the extra subscribe() calls, or cache the result of the request using the .cache() operator.
For those who would rather not deal with writing code, you can use the WooCommerce Enhanced Dashboard plugin to fully customize the plain WooCommerce dashboard with widget cards, latest orders, analytics charts, and activity logs.
Add the following to the project file to fix the issue:
<PropertyGroup>
<CETCompat>false</CETCompat>
</PropertyGroup>
It's fixed in version 1.106.2, you can update and try again.
@MrXerios, your answer works well.
I did a comparison between three window functions: the Planck-taper window (ε=[~0;~2.7]), a Planck-taper window based on Tanh() (ε=[0.01;0.5]), and a Tanh-taper(?) window (ε=[0.2;~∞]).
The different impact of the ε values makes it difficult to find exactly equal settings.
How much can be told based on plots only?
The number of dashes for defining the horizontal line can be used to control column width. Use at least as many as the longest cell in the column to avoid line breaks.
Name | Value
---------|-------------------
`Value-One` | Long explanation
`Value-Two` | Long explanation
`etc` | Long explanation
I have outlined a set of queries and scenarios that show the workings of XACT_ABORT, TRANSACTIONS and TRY/CATCH.
The following has been tested on Compatibility Level = 140
Each scenario will perform the same basic tasks and throw the same error so it will make it much easier to see what is happening and why.
Run all statements in each scenario in a single batch.
Before going into the scenarios, let's run through some setup. Create a very simple table which is going to hold three values for every test. This allows us to easily isolate and validate results as we progress.
CREATE TABLE dbo.test_scenarios (
test_id INT NOT NULL
,delete_id INT NOT NULL
);
INSERT INTO dbo.test_scenarios (test_id, delete_id)
SELECT t.n AS test_id
,d.n AS delete_id
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS t(n) -- The Test IDs
CROSS JOIN
(VALUES (0),(1),(2)) AS d(n); -- The Delete IDs
-- Verify
SELECT * FROM dbo.test_scenarios ORDER BY test_id, delete_id;
GO
Scenario 1
XACT_ABORT OFF
NO EXPLICIT TRANSACTION
NO TRY CATCH
Therefore...
Each delete statement runs in an autocommit transaction.
Any errors or failures have no impact on previous or following statements.
DECLARE @test_id INT = 1;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
1 0
1 1
1 2
After
test_id delete_id
1 1
*/
When the 1/0 error is hit, it is the only statement running in the autocommit transaction therefore only that statement is rolled back and everything else continues.
The final select statement executes when the batch is executed.
This may lead to inconsistent results and data quality issues.
Scenario 2
XACT_ABORT OFF
EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs within an explicit transaction.
Any errors or failures have no impact on previous or following statements.
The transaction here will simply define what will be committed but there is no logic to check what should happen if any of the statements fail.
The salient difference here between this and scenario 1 is the timing of the commits.
DECLARE @test_id INT = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
2 0
2 1
2 2
After
test_id delete_id
2 1
*/
When the 1/0 error is hit that particular statement is rolled back but as there is no logic to handle errors the rest of the statements proceed and the commit then commits the successful statements (0 and 2).
This is one of the riskiest applications of explicit transactions where failure/errors are not handled somehow and the end user expects the transaction to be rolled back in its entirety.
This can lead to inconsistent results and data quality issues.
Scenario 3
XACT_ABORT ON
EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs within an explicit transaction.
Any errors or failures within the transaction are now handled by the xact_abort.
DECLARE @test_id INT = 3;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
3 0
3 1
3 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
*/
When the 1/0 error is hit, the explicit transaction is rolled back and the batch terminates immediately (the error escalates to a batch abort). Therefore no statements after the one that resulted in an error are executed.
In order to see the effect of the batch this must be re-run...
DECLARE @test_id INT = 3;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
test_id delete_id
3 0
3 1
3 2
*/
Depending on your use case this may be safe enough. The downside is that, while the data remains consistent, any subsequent error handling/logging/tidy-up is deferred to the client.
This is also risky code if there is a possibility that it is being run within the context of an explicit transaction that has been defined previous to the code being called.
I would trust this for an adhoc script but not in production code being called from other applications or clients. But even though I trust it, I would still never use it in this form. (opinion!)
Scenario 4
XACT_ABORT ON
NO EXPLICIT TRANSACTION
NO TRY CATCH
Each delete statement runs in an autocommit transaction.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 4;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
4 0
4 1
4 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
*/
When the 1/0 error is hit:
1. It is the only statement running in the autocommit transaction, therefore only that statement is rolled back.
2. However, it is not the only statement within the batch.
3. The batch itself is terminated, so that statement and any following statements are never executed.
DECLARE @test_id INT = 4;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After
4 1
4 2
*/
This is another very dangerous usage of XACT_ABORT if its true behaviour is not understood. There is no immediate feedback available from the batch as to what has occurred.
XACT_ABORT OFF
NO EXPLICIT TRANSACTION
TRY CATCH
Here we finally start introducing TRY/CATCH. Let's see what this brings to the table.
Each delete statement runs in an autocommit transaction within a try block.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 5;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRY
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
5 0
5 1
5 2
After
5 1
5 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=0
*/
When the 1/0 error is hit:
1. It is the only statement running in the autocommit transaction, therefore only that statement is rolled back.
2. However, it is not the only statement within the try block.
3. Control jumps to the catch block, so any remaining statements within the try block, following the problematic code, are not executed.
This is another dangerous usage of TRY/CATCH if one assumes that the TRY begins an explicit transaction.
XACT_ABORT ON
NO EXPLICIT TRANSACTION
TRY CATCH
The difference here is that we now have xact_abort on and one may expect a different result to scenario 5.
Each delete statement runs in an autocommit transaction within a try block.
Any errors or failures have no impact on previous statements but do impact following statements.
This can lead to inconsistent results and data quality issues.
DECLARE @test_id INT = 6;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
6 0
6 1
6 2
After
6 1
6 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=0
*/
When the 1/0 error is hit:
1. It is the only statement running in the autocommit transaction, therefore only that statement is rolled back.
2. However, it is not the only statement within the try block.
3. Control jumps to the catch block, so any remaining statements within the try block, following the problematic code, are not executed.
The CATCH block now intercepts the BATCH_ABORT signal so that any statements following the CATCH block will be executed. This explains how the final select statement is executed despite being part of the same batch.
This is another dangerous usage of TRY/CATCH if one assumes that the TRY begins an explicit transaction or that XACT_ABORT will somehow provide extra safety.
XACT_ABORT OFF
EXPLICIT TRANSACTION
TRY CATCH
Each delete statement runs in an explicit transaction within a TRY/CATCH.
Any errors or failures impact any statements covered by the transaction.
This safely covers the error thrown from within the logic of the statements.
DECLARE @test_id INT = 7;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT OFF;
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
7 0
7 1
7 2
After
test_id delete_id
7 0
7 1
7 2
Prints...
CATCH
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=1
*/
When the 1/0 error is hit:
1. Control jumps to the CATCH block.
2. The transaction is explicitly rolled back. (Note that the XACT_STATE = 1 so the transaction is in a state where a COMMIT could be issued.)
3. Execution CONTINUES after the END CATCH (The batch survives as BATCH_ABORT is intercepted).
This is safe and good use of TRY/CATCH and transactions; however, it does not catch all errors that may impact the transaction behaviour, e.g. client timeouts.
It is also not ideal if the code is called where an explicit user transaction has already been started.
XACT_ABORT ON
EXPLICIT TRANSACTION
TRY CATCH
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 8;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'CATCH'
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
8 0
8 1
8 2
After
test_id delete_id
8 0
8 1
8 2
Prints...
CATCH
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the CATCH block.
3. The transaction is explicitly rolled back.
4. Execution CONTINUES after the END CATCH (The batch survives as BATCH_ABORT has been intercepted by CATCH).
NOTE ON TIMEOUTS:
If a Client Timeout occurs, the CATCH block is SKIPPED.
However, XACT_ABORT ON guarantees the transaction is still rolled back by the server.
This is the format that I would suggest using for ad hoc scripts. I still think this is not safe code to be used within stored procedures or application code anywhere (opinion!)
XACT_ABORT ON
EXPLICIT TRANSACTION
NESTED EXPLICIT TRANSACTION
NESTED TRY CATCH
TRY CATCH
Some serious caveats with the terminology being used here: there is no such thing as a nested transaction, but there are nested BEGIN TRANSACTION statements. That is a different topic and not the focus here, where we are mainly looking at XACT_ABORT.
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 9;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION --@@TRANCOUNT = 1
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
BEGIN TRY
BEGIN TRANSACTION --@@TRANCOUNT = 2
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'INNER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION; --@@TRANCOUNT = 0
END CATCH
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2; --Batch Abort cleared so this can proceed in autocommit.
COMMIT TRANSACTION; --Fail! @@TRANCOUNT already at 0
END TRY
BEGIN CATCH
PRINT 'OUTER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
ROLLBACK TRANSACTION;
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
9 0
9 1
9 2
After - The final select statement is not issued as part of the batch. Validation of the end state must be run separately after the first execution of the batch
Prints...
INNER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
OUTER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
XACT_STATE()=0
Msg 3903, Level 16, State 1, Line 487
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the INNER CATCH block.
3. BATCH_ABORT is intercepted.
4. The transaction is explicitly rolled back.
a. TRANCOUNT goes from 2 -> 0
5. Execution CONTINUES after the END CATCH.
6. The next statement in the batch is DELETE for delete_id = 2
7. The delete succeeds as this is performed within an autocommit transaction and the previous BATCH_ABORT had been intercepted and reset.
8. The (OUTER) COMMIT statement fails as the explicit transaction has already been rolled back.
9. Control jumps to the OUTER CATCH block, where the message indicates no BEGIN TRANSACTION was found.
10. There is no explicit transaction in operation, so XACT_STATE() is 0.
11. The ROLLBACK now throws an error.
12. The BATCH_ABORT signal is re-issued by XACT_ABORT.
13. The batch is terminated and the final select statement is not executed.
DECLARE @test_id INT = 9;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
After
test_id delete_id
9 0
9 1
*/
Now that we are delving into nested BEGIN TRANSACTION statements, we start to see that results can be unexpected even with TRY/CATCH and XACT_ABORT.
XACT_ABORT ON
EXPLICIT TRANSACTION
NESTED EXPLICIT TRANSACTION (Conditional)
NESTED TRY CATCH
TRY CATCH
The main difference between this and the previous test is that we check whether an explicit transaction has already been defined. If it has, we always leave the transaction operations to be handled where the transaction was started.
This is why you should always join existing transactions in SQL Server.
This is the safest way to construct stored procedures that handle transactions in SQL Server.
Each statement runs in an explicit transaction within a try/catch.
XACT_ABORT ON ensures that severe errors (that bypass CATCH) still force a rollback.
TRY/CATCH allows us to gracefully handle logic errors and log them.
DECLARE @test_id INT = 10;
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
SET XACT_ABORT ON;
BEGIN TRY
DECLARE @tc INT = @@TRANCOUNT;
IF @tc = 0 BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 0;
BEGIN TRY
DECLARE @tc1 INT = @@TRANCOUNT;
IF @tc1 = 0 BEGIN TRANSACTION;
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 1/0;
IF @tc1 = 0 COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'INNER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
IF @tc1 = 0
BEGIN
PRINT 'INNER - ROLLING BACK';
ROLLBACK TRANSACTION;
END
ELSE
BEGIN
PRINT 'INNER - THROW';
THROW;
END
END CATCH
DELETE FROM dbo.test_scenarios WHERE test_id = @test_id AND delete_id = 2;
IF @tc = 0 COMMIT TRANSACTION;
END TRY
BEGIN CATCH
PRINT 'OUTER';
PRINT 'The "Batch Abort" signal is now cleared.';
PRINT 'ERROR_MESSAGE()='+ERROR_MESSAGE();
PRINT 'XACT_STATE()='+CAST(XACT_STATE() AS VARCHAR);
IF @tc = 0
BEGIN
PRINT 'OUTER - ROLLING BACK';
ROLLBACK TRANSACTION;
END
ELSE
BEGIN
PRINT 'OUTER - THROW';
THROW;
END
END CATCH
SELECT test_id, delete_id FROM dbo.test_scenarios WHERE test_id = @test_id;
/*
Before
test_id delete_id
10 0
10 1
10 2
After
test_id delete_id
10 0
10 1
10 2
Prints...
INNER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
INNER - THROW
OUTER
The "Batch Abort" signal is now cleared.
ERROR_MESSAGE()=Divide by zero error encountered.
XACT_STATE()=-1
OUTER - ROLLING BACK
*/
When the 1/0 error is hit:
1. The transaction is marked "Uncommittable" (XACT_STATE = -1).
2. Control jumps to the INNER CATCH block.
3. The transaction is not rolled back.
a. The nested try/catch 'joined' the existing transaction, and so does not roll the transaction back.
4. The INNER CATCH re-raises the error with THROW, so control jumps straight to the OUTER CATCH block (as the INNER/OUTER prints show).
5. XACT_ABORT treats this like a second batch within the same transaction.
6. The transaction is already marked as uncommittable.
7. As the OUTER block created the initial transaction, it rolls it back.
a. If the OUTER code here were called from another statement or procedure that had initiated an explicit transaction, it would operate the same way as the INNER block.
8. BATCH_ABORT has been intercepted by the OUTER CATCH block, so the select statement proceeds.
This is why I recommend setting XACT_ABORT ON, checking whether an explicit transaction has already been started, and using TRY/CATCH to log errors or perform tidy-up.
If you're not sure, write a test and see if your code performs as you would expect. I have no doubt I have some typos somewhere in this post, so please verify everything yourself before implementing anything in a production environment.
You might find this article useful.
[checkbox* checkboxfield "Option1" "Option2" "Option3"]
Use exclusive for one selection at a time.
[checkbox* checkboxfield exclusive "Option1" "Option2" "Option3"]
It seems they just want to drive developers away from their platform by restricting it in this way.
It's the top organizational unit that users directly interact with; all tables related to a specific app are stored inside a database.
row -> column -> table -> database -> server
The server is the highest level overall, but the database is the highest in terms of data organization.
You can use this plugin to fully customize the boring WooCommerce dashboard with widget cards, analytics graphs and activity logs.
The problem seems to be OS-specific; in our team it occurs only on Windows, and running the FE application under WSL solved the issue as well.
Regarding the advice to avoid dynamic_cast, message definitely received, but the main challenge is that some_method actually has a return type of DerivedA for its implementation in DerivedA, and DerivedB for its implementation in DerivedB. And this return type is precisely what I am trying to cast to.
Perhaps I skipped important contextual info in my effort to simplify my example. To be specific,
Base is a base class that wraps a matrix data structure, for example, from an external linear algebra library.
DerivedA is an implementation to wrap dense matrices from a particular linear algebra library.
DerivedB is an implementation for sparse matrices from a particular linear algebra library.
Users should be able to extend this and create their own wrappers for their favorite linear algebra library, e.g., DerivedC: public Base, and then they can use DerivedC everywhere else in my library, thanks to runtime polymorphism.
operation might be matrix multiplication, for example, but I want to allow for mixed sparse-dense multiplication.
some_method returns a reference to the underlying matrix data, with the specific type DerivedA or DerivedB. Base doesn't know this, so I cannot define a virtual some_method in Base.
As a less terrible compromise, I've made an intermediate DerivedX: public Base, a template that takes the type of the underlying matrix wrapped by DerivedA and DerivedB and implements the dynamic cast, but it also replicates each operation with a templated version of the operation, to allow mixed operations.
What would be your own use cases for migration?
I use XSLTForms only locally, in order to try out certain aspects of XForms (not least to answer questions here on StackOverflow.) I have switched to a server-side XSLT transformation.
// Source - https://stackoverflow.com/a/42527003
// Posted by mplungjan, modified by community. See post 'Timeline' for change history
// Retrieved 2025-11-24, License - CC BY-SA 3.0
if (location.host.indexOf("localhost")==-1) { // we are not already on localhost
var img = new Image();
img.onerror=function() { location.replace("http://localhost:8080/abc"); }
img.src="http://servertotest.com/favicon.ico?rnd="+new Date().getTime();
}
(Side-note: maybe I shouldn't have classified this question as "Best Practices"? I can't figure out how to add an actual answer to the question.)
Thanks to @Randommm for the pointer. I adapted this answer to work with multi-line, sometimes indented contents:
import subprocess

subprocess.call([
"vim",
"-c", ":set paste",
"-c", f':exe "normal i{contents}\\<Esc>"',
"-c", ":set nopaste",
filename
])
From the documents that I have gone through, the deployment process involves running a few php artisan commands on the cPanel server.
Uploading the files to the server is not a problem.
I see no reason not to check every file; that is safer.
After further research, it seems that in order to preserve the initial structure I need to define a frame and use JSON-LD framing. I am not sure how it will work with rdf4j, but it works with Jena, so I will end up switching to it.
Here is a post that led me to this conclusion and also showcases an example:
JSON-LD blank node to nested object in Apache Jena
I have the same issue. Did you find a solution?
You do not need to expose your local service to the internet for local fulfilment; see https://developers.home.google.com/local-home/overview
This also supports using the Google Home app to trigger actions on an internal system or service.
The purpose of the public components is Google infrastructure such as Auth for account linking and device discovery. The action, once installed on your assistant device, executes within your network.
You should change the permissions on the vendor folder. You can do it as follows:
sudo chown -R www-data:www-data vendor/ (the user and group www-data are just an example; check your actual users and groups with the ls -la command).
If you don't have access to sudo, you should ask an administrator to do it.
I'm already using DROP PARTITION. I think there is no way to do this online with MySQL 5.7, but I found
ALTER TABLE your_table DROP PARTITION partition_name, ALGORITHM=INPLACE, LOCK=NONE;
for MySQL 8.
With the MQTT protocol, clients communicate through a broker and can subscribe to specific topics; that is the main idea. For reference:
https://github.com/secretcoder85-sys/Transfer_protocols
How do you determine, if an enemy "sees player"?
If that's controlled by a range (usually larger than the attack range), that would make for three ranges:
chase range
detection range
attack range
Likely the list above is in descending order.
What happens, if all three ranges are set to the same value, and max. movement speed is zero?
If your implemented states are robust in edge cases like "player crosses multiple thresholds before states are reevaluated", that should result in a turret.
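For illustration, the three-range decision could be sketched like this. The range names and their ordering (chase >= detection >= attack) are assumptions taken from the list above, and a real implementation would also depend on the current state rather than distance alone.

```cpp
#include <string>

// Pick a behaviour from the player's distance and the three ranges
// discussed above. With all three ranges equal and movement speed
// zero, this degenerates to a turret: attack in range, idle otherwise.
std::string pick_state(double dist, double chase, double detect, double attack) {
    if (dist <= attack) return "attack";
    if (dist <= detect) return "chase";        // player just detected
    if (dist <= chase)  return "keep-chasing"; // already alerted; don't give up yet
    return "idle";
}
```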
This answer shows how you can do that on the command line; you should have no issue adding that to your starting command: https://stackoverflow.com/a/22866695/18973005
Is there a question here somewhere?
@charmi, please post what you have tried and the error you get while scanning the barcode. The library you used is one of the most trusted libraries for QR/barcode apps.
The browser's default behavior is to autocomplete a single value for the entire input. You need to write a custom script that splits the input value on the separator (,) and then filters the available options in <datalist> dynamically, based on the last value in the list.
Why do we need double parentheses below?
a:+=(("x","y"))
If you use Maven, check that in your pom.xml the Java version is also set to 25, or something like this:
<properties>
<java.version>25</java.version>
</properties>
Or, if you use Gradle, in your build.gradle:
java {
toolchain {
languageVersion = JavaLanguageVersion.of(25)
}
}
If the option is not available in your settings by default, the only alternative is to connect your device via USB cable to a Mac that has Xcode installed. Then the Developer option will become available.
Yes, in Spring Framework 4.3 with XML config, you can use @RestController and traditional @Controller together in the same servlet. Just make sure <mvc:annotation-driven /> and component scanning are enabled. No need for a separate servlet.
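To make the wiring concrete, a minimal servlet context might look roughly like the following (the base-package value is a placeholder):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/mvc
           http://www.springframework.org/schema/mvc/spring-mvc.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Enables annotation handling for @Controller and @RestController alike -->
    <mvc:annotation-driven />

    <!-- Picks up both kinds of controllers; adjust the package to yours -->
    <context:component-scan base-package="com.example.web" />
</beans>
```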
I know this is out of date, but I've just stumbled across this thread.
"The Actions on Google Console is deprecated. As of December 2024, all smart home projects that were set up in the Actions Console have been migrated to the Google Home Developer Console" - ref. https://developers.home.google.com/cloud-to-cloud/project/migration
So try looking at https://console.home.google.com/projects. I had an issue creating an action, which support suggested was a browser caching issue, so please do try clearing caches etc.