Thank you Mark, that works great! I hope I never need triple backticks inside triple backticks.
Thanks for pointing out the option to uncheck, but alas, after unchecking the Enable JavaScript/Typescript Language Service Prototype option, you can no longer right-click and Find All References - doing so results in "Search found no results".
Do others see the same?
The chrome.debugger API allows Chrome extensions to interact with the Chrome DevTools Protocol (CDP), enabling network traffic interception. This is useful for monitoring, logging, or modifying network requests and responses in real-time.
Ensure your manifest.json includes the necessary permissions:
{
"manifest_version": 3,
"name": "Network Traffic Interceptor",
"version": "1.0",
"permissions": ["debugger", "storage"],
"host_permissions": ["https://www.google.com/*"],
"background": {
"service_worker": "service-worker.js"
},
"action": {}
}
In your service worker (service-worker.js), attach the debugger to the active tab:
async function attachDebugger(tab) {
if (!tab || !tab.id) return;
if (!tab.url.startsWith("http")) {
console.error("Debugger can only be attached to HTTP/HTTPS pages.");
return;
}
const debuggee_id = { tabId: tab.id };
try {
await chrome.debugger.detach(debuggee_id);
} catch (e) {
// Ignore if not attached
}
try {
await chrome.debugger.attach(debuggee_id, "1.3"); // https://chromedevtools.github.io/devtools-protocol/
await chrome.debugger.sendCommand(debuggee_id, "Network.enable", {});
console.log("Network interception enabled.");
} catch (error) {
console.error("Failed to attach debugger:", error);
}
} // Google's boilerplates: https://github.com/GoogleChrome/chrome-extensions-samples/blob/main/api-samples/
// usage: Attach on action click
chrome.action.onClicked.addListener(async (tab) => {
await attachDebugger(tab);
});
Set up event listeners for network events:
const pending_requests = new Map();
chrome.debugger.onEvent.addListener(function (source, method, params) {
if (method === "Network.responseReceived") {
// Store request ID for later retrieval
pending_requests.set(params.requestId, params.response.url);
}
if (method === "Network.loadingFinished") {
const request_id = params.requestId;
const url = pending_requests.get(request_id);
if (!url) return;
pending_requests.delete(request_id);
chrome.debugger.sendCommand(
source,
"Network.getResponseBody",
{ requestId: request_id },
function (result) {
if (chrome.runtime.lastError) {
console.error(
`Failed to get response body: ${chrome.runtime.lastError.message}`
);
return;
}
if (result && result.body) {
const body = result.base64Encoded ? atob(result.body) : result.body;
console.log(`Response from ${url}:`, body);
// Process the response body here
}
}
);
}
});
Load the extension (the manifest.json and service-worker.js files shown above) via chrome://extensions/. The "No resource" error stems from Network.getResponseBody and is symptomatic of the following common causes:
- Network.enable must be called before the requests you care about are made; request IDs from before the call are invalid.
- Each call to Network.enable resets the domain state, invalidating previous request IDs.
- Calling Network.getResponseBody before Network.loadingFinished has fired for that request.
Mitigation:
- Attach once per user gesture (chrome.action.onClicked) to prevent multiple or conflicting Network.enable commands. Each Network.enable resets the Network domain state, clearing buffers and invalidating existing request IDs; calling it redundantly or out of sequence causes state resets mid-session, leading to the "No resource" error.
- Check chrome.runtime.lastError in the sendCommand callback, as shown above.
- Call Network.enable with increased limits, e.g. Network.enable({ maxTotalBufferSize: 200000000, maxResourceBufferSize: 50000000 }), to prevent FIFO eviction of response data before retrieval.
For reference, the event sequence for a request is:
- Network.requestWillBeSent: request initiated.
- Network.responseReceived: response headers received.
- Network.dataReceived: response data chunks (if chunked).
- Network.loadingFinished: response fully loaded.
Network.enable initializes or reinitializes the domain, clearing buffers and invalidating IDs. The Network domain keeps an in-memory buffer for response bodies; enabling resets this buffer, ensuring a fresh state but invalidating old data.
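The attach-once guard and the raised buffer limits can be sketched together. This is a minimal illustration, not the extension's actual code: sendCommand is passed in so the idea is testable, and in the real service worker it would be chrome.debugger.sendCommand.

```javascript
// Tabs for which Network.enable has already run; a redundant call would
// reset the domain state and invalidate in-flight request IDs.
const attachedTabs = new Set();

async function enableNetworkOnce(tabId, sendCommand) {
  if (attachedTabs.has(tabId)) return false; // already enabled; skip
  attachedTabs.add(tabId);
  await sendCommand({ tabId }, "Network.enable", {
    maxTotalBufferSize: 200000000,   // 200 MB total buffer
    maxResourceBufferSize: 50000000, // 50 MB per resource
  });
  return true;
}
```

In the extension you would call this right after chrome.debugger.attach, passing chrome.debugger.sendCommand as the second argument.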
You can try with this new structure:
auth:
rootPassword: "admin"
database: "benchmarking_db"
username: "admin"
primary:
nodeSelector:
app: mysql-node
persistence:
enabled: true
storageClass: gp2
size: 8Gi
Removing the public access modifier from the constructor worked for me.
Unfortunately, the only way is to declare all 12 modules separately
I'm trying to set the locale based on the user's browser preferences from the Accept-Language header. If the first preference doesn't work, because the locale is not available, I want to fall back to the next one. So I need to know whether setlocale() genuinely succeeded. If the intl extension is installed then I can test as above, but if not?
I fixed it by just upgrading to use propshaft instead of sprockets, which was next on my list anyway.
To sleep for 500ms:
#include <stdio.h>
#include <time.h>
int main() {
struct timespec ts = {
.tv_sec = 0,
.tv_nsec = 500000000, /* 500 ms */
};
printf("Sleep now!\n");
nanosleep(&ts, NULL);
printf("Sleeping done!\n");
return 0;
}
Check your $http function parameters - in my case, I was using params instead of data, and that's why it was giving me that error. I was also writing the function a little differently.
$http({method: 'POST', url: 'X', params: Y}) - 415 Error
$http({method: 'POST', url: 'X', data: Y}) - No issues
It might also work the other way around - you could be wrongfully using data instead of params.
Check it out: https://www.npmjs.com/package/ngx-mq
Yes, but not in the form of a simple built-in mapping table. Windows uses its own locale naming scheme and PHP does not translate between BCP47 tags like en-US and Windows names like English_United States. To get a proper mapping you need to query what Windows itself exposes.
You can do that with the intl extension. The ResourceBundle for the root locale contains the Windows locale identifiers and their corresponding BCP47 tags. With that data you can build your own lookup table at runtime. Another option is to call Locale::canonicalize on the BCP47 tag and then use Locale::getDisplayLanguage and Locale::getDisplayRegion to compose a Windows style name. Both methods give you a consistent way to turn something like en-US into the Windows name that setlocale will actually understand.
Outside PHP the official source for the mapping is the list of Windows locale identifiers published by Microsoft. That list includes the Windows locale names, the numeric identifiers, and the matching BCP47 tags. If you need a complete and static table this document is the closest thing to an authoritative reference.
Is something such as this valid?
async function handleRequest(request, env) {
const url = new URL(request.url);
// --- 1. Dynamic Content: Route /php traffic via Cloudflare Tunnel ---
if (url.pathname.startsWith('/php')) {
// --- Massaging Steps for Tunnel Routing ---
// 1. Path Rewriting: Strip the leading '/php' so Tomcat sees the root path (e.g., /login instead of /php/login).
const originalPathname = url.pathname;
url.pathname = originalPathname.replace('/php', '');
// 2. Origin Host Assignment: Set the internal hostname for the fetch request.
// This hostname is tied to the Cloudflare Tunnel in your Zero Trust configuration.
url.hostname = '999.999.999.999';
url.port = '9999';
url.protocol = 'http:';
// 3. Request Reconstruction: Clone the original request to apply the new URL and host headers.
const newRequest = new Request(url, request);
// 4. Host Header Override: Crucial for Tunnel. Explicitly set the Host header
// to the internal hostname so Tomcat knows which virtual host to serve.
newRequest.headers.set('Host', '999.999.999.999');
// Final fetch request uses the rebuilt request object, sending it securely
// through the Cloudflare edge to your connected Tunnel.
return fetch(newRequest);
}
// --- 2. Static Content: Serve all other traffic directly from R2 ---
else {
//usual stuff here...
//...
//...
//...
//...
//...
}
}
// Bind the handler function to the 'fetch' event listener
export default {
fetch: handleRequest,
};
The problem is that DA errors are treated as "internal errors" by the APEX error handling function, and there is no handler that passes a message to the user. To fix this, you must choose an exception number, for example -20999, and create a handler for it in the internal-errors section of the function. Once this is done you can pass your message with raise_application_error(-20999,<message>);
For a complete description go to PLSQL error messages for the APEX end user
OK. So is there a mapping available somewhere that will map, for example, 'en-US' to 'English_United States', etc.?
The developer of the system found out that they weren't using the correct python functions to support user-assigned identities. They just tested with VM identity, not a user-assigned one.
They updated the code, and now it works.
I think I should not use Cloudflare as a reverse proxy the way I'd use nginx. It doesn't seem like this is a common use case. If I hit a showstopper bug, I would immediately scrap using Cloudflare for this purpose. I think 99% of people use Cloudflare as a simple CDN, and forcing it to front a PHP app is not a good idea... hmm
On Windows the C runtime that PHP uses for locale handling does not validate locale names against any real list. It only checks the general pattern of the string. As long as the part before the dot is short enough, the call succeeds even if that locale does not exist at all.
When this happens Windows simply keeps the system default locale for every actual formatting function. That is why date output stays in English even though PHP reports that the locale was set to the exact string you passed in.
If you try a string that does not match the expected pattern, for example something with more than three letters before the dot, the call fails and PHP returns false. That is the only point where you notice that Windows rejected it.
To get real locale changes on Windows you need to use the Windows specific locale names such as English_United States with the correct code page. Only those names cause Windows to switch to a real locale and affect formatting output.
how about defining https://www.helloworld.com as the CDN/Static (R2) files, and then defining https://reverseproxy.helloworld.com as the cloudflare tunnel to externally hosted php app?
@blackgreen Thanks for the link!
wmctrl does not move a zenity window in a bash script unless the window is first deactivated. By deactivating and then reactivating the zenity window you can change its position. For example, add the following lines before the zenity command (z-name is the name of the zenity window):
(
sleep 0.5
wmctrl -a Firefox
wmctrl -a z-name
wmctrl -r z-name -e 0,2000,400,-1,-1
) &
In the string you are sending to the printer, replace "_" with "_5F".
If "_" is specified as the hexadecimal escape character, as the other poster mentioned, then the next two characters specify the ASCII code of the character that will print. "5F" is the ASCII code of the underscore, so if you send "_5F", it will print the underscore character.
In this situation I would host the PHP backend on a different cloud like Azure or AWS.
In PHP you can get stuff from r2 buckets even if your server is running outside of the Cloudflare ecosystem.
https://developers.cloudflare.com/r2/examples/aws/aws-sdk-php/
Facing the same issue now
Did you fix it?
Try this :
const handleAudio = (audioRef)=>{
if(audioRef.current){
audioRef.current.currentTime = 0;
audioRef.current.play();
}
}
Before play(), you have to load():
audioRef.current.load();
vision: r2 bucket as cdn for hosting static files, and then login/session stuff routed to externally hosted php app.
I wasn't sure if this is a common use case, as 99% of the examples and aibot answers are referring to using cloudflare's internally provided serverless workers, and rarely is there any mention of use of one's own app servers.
Though I tried much of the above, I could only solve the problem by simply renaming the Java project directory in Windows File Explorer. Then I opened the newly named directory in VS Code and everything worked fine.
What are you trying to do with the bucket? If, for example, you just need to serve static files, I'm not sure why you'd want to use your own PHP app.
The basic problem was that I had created an App Intent extension and put the files into the extension. Apple's documentation on the developer site does not make it clear that this is the correct way to configure the project... I had assumed I needed an App Intent Extension because I saw it existed in the "New Target" menu. I moved the files back to the base app and deleted the app extension and the error about not finding the intent stopped appearing in the console.
The second issue I had was that my model type was creating a UUID identifier as part of its init. To work properly with VI, your model types need to have a STABLE identifier as the identifier will be used across processes.
Once I corrected those two issues, my PoC app worked as expected.
this isn't programming related.
Does your project need to "analyze" its own code or some other code? And does it need to do so at build time or at runtime?
Yep, we are facing the same issue. BC 25.
literally, classnames. you're looking for classnames...
https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/Selectors/Class_selectors
Add the class to apply a given change in style, remove it to remove that change back to "default".
I am no longer using this bulletin board app. Thanks for your input and support
In Windows 10 this folder is "hidden" by default.
You need to turn on "Show hidden files" in Windows to be able to select this folder...
I had the same problem and fixed it via Edit -> Preferences -> External Tools -> Regenerate project files; also make sure your external script editor is selected and Post Processing is installed in the Package Manager.
This is solved in Codename One version 7.0.212
This worked well for me, even though it resets on page refresh.
For anyone running into the same issue: Ctrl+Shift+I didn’t open DevTools in the Apps Script editor, but right-clicking the side menu -> Inspect did.
For reference, I put it here:
<div class="Kp2okb SQyOec" jsname="Iq1FLd" style="display:none; flex-basis:"
I am late, but someone may find this helpful: try adding /// <reference path="./types/express.d.ts" /> at the top of your main index.ts file (or your Express index file).
There’s an easy way to do it: just add a ‘Woo Notice’ element. With that, you can keep the UI consistent in Divi and create the necessary classes to customize it further—either through the UI or with code if needed. Hope this helps! Woo notice module -> Divi
async afterCreate(event) {
const { result } = event;
const response = await axios.get(`http://127.0.0.1:8000/api/import/${result.GTIN}`);
const product = response.data.products.product;
strapi.log.info('Added', result.id);
let updatedProduct = await strapi
.documents('api::product.product')
.update({
documentId: result.documentId,
data: {
nutritional: await getNutrition(product)
},
});
}
Easy solution: I have to fetch my saved data and set it directly in the database.
<item name="android:windowOptOutEdgeToEdgeEnforcement">true</item>
This shouldn't be used. This opt-out is meant for apps that were built with non-compulsory edge-to-edge in mind. When developing for Android 15+, you only need enableEdgeToEdge(): it enforces the same behavior on lower Android versions, ensuring backward compatibility.
If for some reason you're still using the opt-out, you need to remove it and make your application properly edge-to-edge.
So overall: you don't need windowOptOutEdgeToEdgeEnforcement=false; you don't need it set at all. By default it is false for Android 15+ and true for Android 14 and below, so don't rely on this flag; leave it at its default. You do need enableEdgeToEdge(). Call it in all your activities to ensure consistent behavior across all Android versions.
I hope this answers your question.
I also faced a similar issue (with the latest version 0.5.3), and the only way I could get the correct input+output channels to show was to have ASIO4ALL installed and an older version of sounddevice (e.g. 0.4.6)
I am experiencing the same issue with a dev app, this change started in iOS 26.1, if I test using iOS 26 simulator I can click the "Not Now" button, but in iOS 26.1 simulator or on device running 26.1 the not now button is greyed out. I am not sure if this is just something with the dev environment or if it happens on a production app downloaded from the store.
Sometimes with debugging on Linux, the noisy symbol messages you see aren't necessarily the "final result". Before any plug-in (e.g.: the ELFBinComposition.dll Linux/ELF/DWARF plug-in) gets a chance to take over binary/symbol loading, the debugger will go down the default path that it takes with Windows/PE/PDB and will fail (resulting in some error messages).
What does lmvm show for these binaries? I'm surprised we'd fail to find the binary & symbols for a released .NET libcoreclr. I'm a bit less surprised on the Ubuntu distro libc.
If you want to get symbols, the debugger requires BOTH the executable AND the debug package (though depending on circumstances that might be a single ELF). We don't look for the debug package if we can't find the executable. I've certainly seen some of the DebugInfoD servers (including for some Ubuntu releases) that will serve up the debug package but will NOT serve up the executable. That's fine if you're using DebugInfoD on the box in question (where the binary is on disk). It's much less fine if you're trying to analyze the dump file on a separate machine that doesn't have those files on disk (which is always the story with WinDbg).
When I'm personally analyzing core dumps I've taken from my Linux VMs that are for distros I know don't always have binaries & symbols indexed reliably, I'll copy some of the binaries I care about out of the VM along with the core dump.
I also suspect that your "rebuilt" glibc is not an identical binary. Build processes typically embed a 160-bit SHA-1 hash as the "build-id" in the first page of an ELF binary (typically right after the program headers, in an ELF note). Core filters are typically configured so that a core dump captures the first page of any mapped ELF in order to capture that ID. The debugger will not, by default, load any binary/symbols that do not have matching build-ids (much like we do for PE/PDB with the timestamp & image size or the RSDS GUID). You can, of course, override that by setting SYMOPT_LOAD_ANYTHING (with the .symopt+ command). That's not recommended unless you really know what you are doing, since it allows mismatched binaries & symbols to load and can result in a very poor debug experience.
I could not see in data connector extract what the role id unique to the project was though. That is necessary to update the role of a user already on a project. I added a service account user with various roles to each project, but this seems inefficient and difficult to maintain.
admin_project_roles.csv was close, but those are all just the account version of the role ids
apt-get install dos2unix
or whatever distribution you have there
OR
sed -i 's/\r$//' yourfile.adoc
I got same error and tried debugging. Found this thread online which helped out for me - https://asynx.in/blog/instagram-graph-api-comments-data-returned-with-empty-data
import time

def estonia_story():
    character_name = "Estonia"
    print(f"{character_name} starts her day...")
    time.sleep(1)  # short pause

    # 1. Reads a book
    print(f"\n{character_name} takes an interesting book from the shelf.")
    time.sleep(1.5)
    print(f"{character_name} settles comfortably into an armchair and dives into reading, enjoying the silence...")
    time.sleep(3)  # a longer pause while she reads
    print("A few chapters read.")
    time.sleep(1)

    # 2. Makes coffee and sings
    print(f"\n{character_name} decides to take a break. Time for coffee!")
    time.sleep(1.5)
    print(f"{character_name} heads to the kitchen and brews some aromatic coffee.")
    time.sleep(2)
    print(f"While the coffee brews, {character_name} quietly hums her favorite folk song.")
    time.sleep(2.5)
    print(f"The coffee is ready. {character_name} drinks it in small sips, still humming.")
    time.sleep(2)

    # 3. Dances
    print(f"\nSuddenly {character_name} feels a surge of energy!")
    time.sleep(1.5)
    print(f"A cheerful melody starts playing, and {character_name} cannot sit still.")
    time.sleep(2)
    print(f"{character_name} starts to dance; light moves turn into an energetic dance!")
    time.sleep(3)
    print(f"{character_name} smiles, enjoying the moment.")
    time.sleep(1)
    print(f"\n{character_name}'s day goes on, fun and active!")

# Run the story
if __name__ == "__main__":
    estonia_story()
Change this

depending on the settings in the project's properties (Solution Explorer > right-click the project > Properties).

In my case they were different. Changing them resolved the error.
as an addendum to what @matszwecja said, perform your exact match test against a set of existing sums, as in the "two sum" problem. If the numbers are always positive, you can prune any results greater than {target}-{minimum}
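The "two sum"-style membership test mentioned above can be sketched like this (an illustrative helper, not code from the question):

```javascript
// Check whether any two numbers in nums sum to target, using a Set of
// values seen so far for O(1) complement lookups.
function hasPairSummingTo(nums, target) {
  const seen = new Set();
  for (const n of nums) {
    if (seen.has(target - n)) return true; // complement already seen
    seen.add(n);
  }
  return false;
}
```

With all-positive inputs, any partial sum already greater than target - minimum can be pruned before it ever reaches this test.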
In general (for a PostgreSQL-like flavor), we can summarize the SQL coding order as:
SELECT
DISTINCT
FROM
JOIN
WHERE
GROUP BY
HAVING
ORDER BY
LIMIT
Whereas, its execution order will be:
FROM
JOIN
WHERE
GROUP BY
HAVING
SELECT
DISTINCT
ORDER BY
LIMIT
It's a bug in the bindings generator. You can replace cv::Vec2d with Vec2d in the calib3d.hpp file.
Reference: OpenCV issues.
I think what you are searching for is the Currency format specifier?
EDIT:
Just in case, be sure to use the Decimal type, not Double or Float, because you could lose some precision (see: https://stackoverflow.com/a/3730040/8404545).
You can also make it simple
from django.db.models import Sum
product_total= Product.objects.aggregate(price=Sum('price'))['price'] or 0
which will give you just the amount, e.g. 4379, the total of the price column from your database.
To download AES-128 encrypted m3u8 chunks, you can try this web tool: M3U8-Downloader.
It automatically detects the encryption method and downloads the files, with multi-threading support.
plugin net.ltgt.apt was discontinued in 2020. How would you do this nowadays?
I’m working in react native so the DOM approach doesn’t apply here.
I was trying to see if there’s any way to control the screen reader order without changing the JSX structure, but I’ll give that a try.
I had to go the connectors section of https://replit.com/integrations and activate GitHub to get this working
Treat subList as a QVariant when you append it:
QVariantList containerList;
containerList.append(QVariant(subList));
Thanks to @G.M. for the solution.
Are you sure your OnSleep function in App.xaml.cs is actually getting called?
https://learn.microsoft.com/en-us/dotnet/maui/fundamentals/app-lifecycle?view=net-maui-9.0
According to this link you should be overriding OnPause, OnSleep does not seem to exist.
You need to override the getter or, much better, reassign the same field name. Think about it: there should be only one field named "className"; only its value should differ when inheriting. It makes sense.
I'm trying to use PATCH Users to update a role, but it requires a role id unique to that project. How do I look up that project-specific role id? I've looked through all the data connector files. I can look up the account-level role id, and that works when adding a user to a project, but updating a user already on the project with that account-level role id gives an error. https://aps.autodesk.com/en/docs/acc/v1/reference/http/admin-projects-project-Id-users-userId-PATCH/
@jie, have you managed to auth via PEM?
add
# Fix OpenSSL 3.0 compatibility issue for SQL Server connections
RUN printf "openssl_conf = openssl_init\n[openssl_init]\nssl_conf = ssl_sect\n[ssl_sect]\nsystem_default = system_default_sect\n[system_default_sect]\nCipherString = DEFAULT@SECLEVEL=0\n" > /etc/ssl/openssl_allow_tls1.0.cnf
# Set OpenSSL configuration as environment variable
ENV OPENSSL_CONF=/etc/ssl/openssl_allow_tls1.0.cnf
As I said, I am in the situation of developing a lib for use both in [no_std] (and also no alloc) and with-std environments, because this will be used in WASM (no_std for size reasons) and also natively, where access to std is normal.
That was my comment about having two concrete subclasses - one for use in native, and one without. In Rust this can be done with features, so that part is fine - but the core design is still missing.
Using sized arrays is out, because the size of the data is only known at run-time. Even in the no_std case, a minimal amount of "allocation" needs to happen, although it can be as simple as a bump allocator that never frees.
Update your Info.plist file with:
<key>UIDesignRequiresCompatibility</key>
<true />
This should remove the translucent glass-like background
In my connect() function (which is a singleton), right after connecting I execute the device.connect.listen block that is in the FBP documentation. I pass a callback function as a parameter to my connect() function and within the listen block, I check the state and if it is "disconnect", I call cancel(), then invoke my callback function. Within the callback function I can do anything I want; snackbar, alert dialog, ...
This Upgrade mechanism doesn't exist in HTTP/2. HTTP/2 uses multiplexing and binary framing, which is fundamentally incompatible with how WebSockets work.
When you set up WebSockets in Actix-Web (like in that example), here's what happens:
Even if your server supports HTTP/2, WebSocket connections will always negotiate as HTTP/1.1
The "automatic upgrade to HTTP/2" mentioned in the docs applies to regular HTTP requests, not WebSocket connections
When a client requests a WebSocket upgrade, Actix will handle it over HTTP/1.1 regardless of your HTTP/2 configuration
Remove the "ADB Interface" from device manager, then "scan for hardware changes".
Once the driver has been reinstalled, it will suddenly ask for USB debug authorization.
You need to remove the token and owner variables from the provider "github" {} block and instead export the GITHUB_TOKEN and GITHUB_OWNER environment variables.
I don't know why it doesn't work, but you can filter with the macro below.
Microsoft® Excel® for Microsoft 365 MSO (Version 2510 Build 16.0.19328.20190) 64-bit
Sub a()
Dim dteDate As Date
dteDate = DateSerial(2013, 10, 1)
ActiveSheet.Range("$A$2:$P$2173").AutoFilter Field:=13, Criteria1:=Array( _
"="), Operator:=xlFilterValues, Criteria2:=Array(1, CStr(dteDate))
End Sub
Before filtering
After filtering
Got some interesting answers on reddit.
This is an old question, but I have faced this problem recently.
For me, the solution was to set, inside the job:
export CUDA_VISIBLE_DEVICES=0,1,2,3
and keep the Trainer configuration to
devices: 4
Maybe someone else can share their own solutions, if any.
This is not really an answer but in my case it turned out that it didn't like the fact that my D:\ drive was a virtual drive that mapped to my C:\User\xxxx\Projects folder.
Mounting the C:\User\xxxx\Projects manually resolved the issue for me.
The solution I found was to use the for_window command in my i3 config file.
Basically, this command sends windows with a given title to a given workspace.
for_window [title="^MATRIX$"] move to workspace 9:Dashboard, floating enable, border none, mo>
for_window [title="^CLOCK$"] move to workspace 9:Dashboard, floating enable, border none, mov>
for_window [title="^SPOTIFY$"] move to workspace 9:Dashboard, floating enable, border none, m>
for_window [title="^ASCII$"] move to workspace 9:Dashboard, floating enable, border none, mov>
for_window [title="^TYPESPEED$"] move to workspace 9:Dashboard, floating enable, border none,>
The titles, and the workspace, are set in script.sh when I open the terminal windows.
In case I forget how to do this, or someone else runs into the same problem: rather than reinventing the wheel, it's worth using a ready-made solution such as CQtDeployer. The release build is done in one command, and you get a beautiful installer, and most importantly, a working one.
SetThreadLocale changes the locale used by MultiByteToWideChar when the CodePage parameter is set to CP_THREAD_ACP. It can be set per thread, but I believe you have to call _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); first
I think the issue is related to Windows Terminal, not PowerShell. When running btop on Linux over SSH in Windows Terminal, it usually crashes within a few minutes, especially if the (text) screen is very large. It doesn't just drop the connection... the entire Terminal goes away. PuTTY just keeps on trucking... So that's why I think it's the ANSI processing in the terminal.
This behavior is repeatable in:
Windows 11 Version 23H2 (Build 22631.6199) (Windows 11 Enterprise)
Windows Terminal Version 1.23.12371.0
Btop version 1.2.13
Linux Red Hat Enterprise Linux release 8.10 (Ootpa)
Storing a JWT in a cookie means the browser will automatically send it on every request to your API — including requests triggered by a malicious third-party site.
This makes your app vulnerable to CSRF attacks.
A CSRF token fixes this because:
the JWT cookie is auto-sent by the browser (attacker can trigger this)
the CSRF token must be sent manually by your frontend (attacker cannot send this)
So the server verifies:
JWT cookie → “this is the user’s browser”
CSRF token → “this request came from our frontend, not another site”
If the attacker triggers a request, the JWT cookie is sent, but the CSRF token is missing, so the request is rejected.
CSRF tokens do NOT protect against stolen JWTs, but they do protect against the browser being tricked into sending authenticated requests.
Since you are using SameSite=None (cross-site cookies), CSRF protection is required.
JWT in a cookie and a CSRF token aren’t duplicates, they protect against different things.
If your JWT is in a cookie, the browser will automatically send it on any request, even ones triggered by a malicious website. That means an attacker can make your browser perform actions as you without ever stealing your token. That’s classic CSRF.
A CSRF token fixes that because a malicious site can’t read it from your cookies, so it can’t include the correct value. Your backend rejects the forged request.
If a token is actually stolen (via XSS, malware, etc.), CSRF won’t help; that’s a different problem entirely.
In simple terms:
JWT cookie = your ID card
CSRF token = secret handshake
A malicious site can force your browser to use the ID, but not perform the handshake
That’s why both exist when using cookies for auth
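The double-submit check described above can be sketched in a few lines. This is a minimal illustration with made-up cookie and header names, not a complete middleware: the server compares the CSRF value from the cookie with the one the frontend echoed in a header, and a forged cross-site request carries the cookie automatically but cannot read it to fill in the header.

```javascript
// Validate a request using the double-submit cookie pattern: the CSRF
// token must appear both in the cookie and in a header set by our frontend.
function isCsrfValid(cookies, headers) {
  const fromCookie = cookies["csrf_token"];    // auto-sent by the browser
  const fromHeader = headers["x-csrf-token"];  // only our frontend can set this
  return Boolean(fromCookie) && fromCookie === fromHeader;
}
```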
One simple solution: just add a query parameter to the image URL.
If your original url is: https://rk4zeiss.blob.core.windows.net/marketing/marketing20251114notxt.jpeg
Just add a query string in the url like this:
https://test001.blob.core.windows.net/marketing/offer.jpeg?x=1
x can be anything; add a timestamp value in the query param so that it always has a new URL.
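The timestamp idea can be sketched like this (an illustrative helper using a ts parameter; the name is arbitrary):

```javascript
// Append a timestamp query parameter so each request uses a fresh URL
// and bypasses any cached copy of the image.
function cacheBustedUrl(rawUrl) {
  const u = new URL(rawUrl);
  u.searchParams.set("ts", Date.now().toString());
  return u.toString();
}
```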
I might be mistaken, but my current understanding is that JWT and CSRF tokens solve two different problems.
JWT in an HttpOnly cookie helps protect against XSS token theft, since JavaScript can’t read it.
But the browser will still send that cookie automatically, which means JWT alone doesn’t stop CSRF.
A malicious site can trigger a request that includes the JWT, but it can’t provide the CSRF header, because it cannot read the token (Same-Origin Policy).
So the server can detect forged requests by checking whether the CSRF header matches the token stored in the cookie.
Because the setup uses SameSite=None (cross-domain), CSRF protection becomes important — otherwise every cross-site request would automatically include the JWT.
That’s just how I currently see it, but I’m very open to correction if there’s a better pattern or perspective.
Your answer is in this existing question. Adding reload: true to every save method is time-consuming, but it is the only solution.
$0 = script name
$1, $2, etc. = arguments passed to script
$# = number of arguments
$@ = all arguments
What is the difference between $@ and $*?
$@ → treats each argument separately
$* → treats all arguments as a single string
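The difference only shows up when the parameters are quoted; a small sketch (the function names are made up for the demo):

```shell
#!/bin/sh
# "$@" expands to one word per argument; "$*" joins all arguments into a
# single word, separated by the first character of IFS (space by default).
show_at()   { for a in "$@"; do echo "at:[$a]"; done; }
show_star() { for a in "$*"; do echo "star:[$a]"; done; }

show_at "one two" three     # two iterations: [one two] and [three]
show_star "one two" three   # one iteration: [one two three]
```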
ls -l | grep "^d"    # list directories only
How do you remove blank lines from a file?
sed '/^$/d' file.txt
sed -i 's/oldword/newword/g' file.txt    # replace a word everywhere in a file
awk '{print $2, $4}' file.txt    # print columns 2 and 4
sed -n '5p' file.txt    # print only the 5th line
find /path -type f -size +2G -mtime +30    # files over 2 GB, not modified in 30+ days
find /path -type f -size +2G -mtime +30 -exec rm -f {} \;    # same, but delete them
---- check whether a process is running
#!/bin/bash
if ! pgrep -x "tomcat" > /dev/null
then
echo "Tomcat is down! Restarting..."
systemctl start tomcat
else
echo "Tomcat is running."
fi
----Disk usage alert
#!/bin/bash
usage=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$usage" -gt 80 ]; then
echo "Warning: Disk usage is ${usage}% on $(hostname)" | mail -s "Disk Alert" [email protected]
fi
------- top 5 memory-consuming processes
ps -eo pid,comm,%mem --sort=-%mem | head -6
--- read a file line by line
while IFS= read -r line; do
echo "Line: $line"
done < file.txt
--- extract and email errors
grep -i "error" /var/log/app.log > /tmp/error.log
if [ -s /tmp/error.log ]; then
mail -s "Error Alert - $(hostname)" [email protected] < /tmp/error.log
fi
I got it working under Windows 11 and Android Studio. My issue was that my Flutter always runs with administrator rights by default, while Chrome does not. I worked around this as follows:
In C:\Program Files\Google\Chrome\Application I copied chrome.exe to chromeAdminRights.exe, because for security reasons I don't want the original Chrome itself to run with admin rights.
Right-clicking chromeAdminRights.exe, I checked the box under properties / compatibility / change settings for all users / run this program as an administrator.
I created a chrome.bat in my chosen folder Q:\flutter_projects\_chrome with the following contents:
"C:\Program Files\Google\Chrome\Application\chromeAdminRights.exe" --disable-web-security --user-data-dir="Q:\flutter_projects\_chrome\temp" %*
I set my system environment variable CHROME_EXECUTABLE=Q:\flutter_projects\_chrome\chrome.bat
Restart Flutter and run the Chrome debug configuration inside Android Studio.
Just changing the icon's alpha from yes to no worked for me.
You can edit your icon in the Mac Preview app; during export you get the option for alpha yes/no. (Apple icons require alpha set to no.)
Yes, you can just use the HttpClient inside your SignalStore.
This seems similar to Creating custom color styles for an XSSFWorkbook in Apache POI 4.0.
This can be done with the walrus operator (Python 3.8+):
assert (x := getProbability(2, 3, 2, 1)) == 2/3, "wrong value = " + str(x)
Get.put()
Creates the controller immediately.
Good when you need the controller instantly
Get.lazyPut()
Creates the controller only when needed.
Best for pages that may not always open.
@postophagous thank you for wanting to get involved with this question despite being unfamiliar with GitLab.
I cannot give more context about GitLab because I am not terribly familiar with GitLab either, and I am deliberately avoiding becoming terribly familiar with GitLab. That's the whole point of this question: I want a set of standard, boilerplate, no-brainer, minimum required, necessary and sufficient magical incantations that must be performed on GitLab so that from that point on I do not have to care at all about the fact that I am using GitLab.
That would be:
Something like the list of conditioning steps that I listed for GitHub, which behaves fine after that.
Something like what they used to have to do in web development with normalize.css a.k.a. "CSS reset" so as to start from a blank slate which is exactly the same in all browsers and from that point build their web-site without having to worry which browser they are running on.
All I know about GitLab is that all sorts of things that work locally do not work there. For example, when I execute git branch --show-current on GitLab, I get an empty string. This probably means detached head, but I do not know for sure, and I do not care to know.
I see CI/CD providers as just tools to get a certain job done; they should be as easy as possible to use, but for various reasons (1) they are not; they require an awful lot of coercing and begging and whispering to work. Mysteriously, CI/CD folk all over the planet are willing to spend copious amounts of time learning the quirks and tricks of each CI/CD provider, with the following handicaps:
This is preposterous, and I am not willing to participate in it.
For me, things are simple: I have a build script. I run the build script locally, it builds. I now want to run this build script on the cloud. Is GitLab capable of running my script as it is? Great. Is GitLab incapable of doing that? #@*& off!
(1) various reasons: mostly aspirations of market dominance via vendor lock-in.
Yes, you are right: playing 2 synchronized videos is almost impossible for me right now. A workaround is to use ffmpegkit to create a single video with the 2 videos overlaid. Thanks for your response; if you find any suitable library, please let me know.
This error also appears when you use var instead of val (the underline marks the word by):
private var viewModel: SomeViewModel by viewModel()
OAuth in Spring provides a secure way to authenticate clients without exposing user credentials.
Spring Security integrates OAuth2 to support both authorization servers and resource servers.
OAuth2 in Spring relies on bearer tokens to authorize access to protected resources.
Spring can validate JWT tokens issued by an OAuth server using public keys or shared secrets.
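As a minimal sketch, a Spring Boot resource server that validates incoming JWTs can be configured with just an issuer URI. This assumes the spring-boot-starter-oauth2-resource-server dependency; the issuer URL below is a placeholder:

```yaml
# application.yml (sketch): validate bearer JWTs against the issuer's public keys
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: https://auth.example.com/realms/demo
```

With this, Spring fetches the issuer's JWK set and uses the public keys to verify token signatures; for HMAC-signed tokens a shared secret would be configured instead.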
The redirectUri parameter was missing, and MSAL doesn't allow passing this parameter via the client API. I found this patch, Set Redirect Uri for broker silent flow on Linux platform, and after applying it locally I could get a token.
Follow the instructions at https://www.kaggle.com/discussions/getting-started/168312. The steps are clearly highlighted; if you have any problems, drop a comment.