1- that's not how it works: if you read data, you're waiting for the result, and that's the point of prefetching: you don't wait for it now, and with a bit of luck you don't wait at all
2- if the JVM is doing a decent job, there are few enough extra memory accesses that the cache isn't full: think of a heap, for example, which has more or less predictable reads from the code's perspective, but not for the memory subsystem
SELECT COUNT(DISTINCT t.driver_id) AS drivers
FROM trips t
JOIN drivers d
ON t.driver_id = d.driver_id
JOIN vehicles v
ON t.vehicle_id = v.vehicle_id
WHERE d.driver_status = 'active'
AND v.vehicle_status = 'active';
You could remove the current user credentials with the git credential helper; note that git credential reject reads the URL from standard input:
echo "url=<url>" | git credential reject
Even in PyPy3 I get the same answer.
I can't tell why, but the earlier answers did not work for me, so anyone still searching can try this one:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>
In my case the issue was caused by AppArmor.
To fix it, as the root user, I ran aa-complain openvpn
(openvpn was already defined under /etc/apparmor.d/)
Maybe you are missing an Application.ProcessMessages call?
Any updates on this?
The docs (https://node-postgres.com/apis/client) state:
"... example to create a client with specific connection information:
import { Client } from 'pg'
..."
But this leads to:
import { Client } from "pg";
^^^^^^
SyntaxError: Named export 'Client' not found. The requested module 'pg' is a CommonJS module, which may not support all module.exports as named exports.
CommonJS modules can always be imported via the default export, for example using:
import pkg from 'pg';
const { Client } = pkg;
at ModuleJob._instantiate (node:internal/modules/esm/module_job:220:21)
at async ModuleJob.run (node:internal/modules/esm/module_job:321:5)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:117:5)
Node.js v22.17.1
GitHub Actions only shows tags in the workflow run dialog if the workflow file exists in the commit that tag points to. Your new tags likely point to commits that either don't contain the workflow file or have an older version of it.
When the Maven Release Plugin creates tags through GitHub Actions, it might be tagging commits before the workflow was added or updated. Check if the workflow file exists in the tagged commits by navigating to the specific tag and looking for the .github/workflows directory.
Regarding your first question:
As I read from this source ("I am trying to create cookie while login with a user in Blazor web server app but I am bit lost"): setting a cookie over Blazor Interactive Server is not possible, since it uses SignalR. Cookies cannot be set on an already-started response (which is what SignalR is).
You can load the login page in server mode. The problem is posting the form. If you try doing it like this:
<EditForm Model="Input" method="post" OnValidSubmit="LoginUser">
The LoginUser method is called directly on the server via SignalR.
Try something like this:
<EditForm Model="Input" method="post" action="/Account/Login" formname="login">
[SupplyParameterFromForm(FormName = "login")]
private InputModel Input { get; set; } = new();
As far as I tested, this will make sure the EditForm is posted to the server while every other component can still be interactive. I am not sure how this affects whether the LoginUser method is called or not, but it could be a start for you.
Second question:
What do you mean by that? You cannot alter the AuthenticationStateProviders to "enable" setting cookies over SignalR; this is simply not possible. The AuthenticationStateProvider revalidates your login credentials every 30 minutes (by default).
Third question:
I would suggest using the static login page. Why do you need interactive mode on the login page anyway? If you want some sort of animation, you could do that with JavaScript instead.
If you have Postgres installed locally, try stopping or uninstalling it. That worked for me in my full-stack Next.js project, because port localhost:5432 was already being used by the Postgres on my local machine.
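If it helps, here's a quick way to confirm the port conflict first (a sketch assuming macOS/Linux with lsof and a Debian-style service name):
# see what is listening on 5432
lsof -i :5432
# or just stop the local service instead of uninstalling it
sudo service postgresql stop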
replace() returns a new object (so assign it or use inplace=True), your types must match (0 vs '0'), and your mapping with duplicate 'polarity' keys overwrote itself.
Use this:
sentiment_text['polarity'] = sentiment_text['polarity'].replace({0: 'negative', 4: 'positive'})
I ended up using https://github.com/victornpb/eleventy-plugin-page-assets to copy the images. It gives new names to each image and rewrites the img src attribute accordingly, but I can live with that. I suppose it also wouldn't be a good solution if I had multiple input files link to the same image, because the image would be copied to each output folder, but luckily that's not a problem in my specific case.
Sorry if this is not allowed, but I believe I have something that may be similar.
I already have a script working well and doing various things but I just want to add 1 more attachment from the same drive to the email.
Everything can remain the same I am just wanting to add one additional pdf.
I have tried the above and various other suggestions but I am doing something wrong.
Below is my current script.
const SHEETID = '1rx1lCYKdhi8dhivoYpUO6EIHb2TWTpiMVduK7M-L2A4';
const DOCID = '1sRZqPCkuATT9tQDZlJDp-DicP6saBpZoAXVvKWXT_XM';
const FOLDERID = '1wsyrUM29A1LIiKCjsJE7olKb0ycG2_M5';
function PitchFees2026() {
const sheet = SpreadsheetApp.openById(SHEETID).getSheetByName('2026 Fees');
const temp = DriveApp.getFileById(DOCID);
const folder = DriveApp.getFolderById(FOLDERID);
const data = sheet.getDataRange().getValues();
const rows = data.slice(1);
rows.forEach((row,index)=>{
const file = temp.makeCopy(folder);
const doc = DocumentApp.openById(file.getId());
const body = doc.getBody();
data[0].forEach((heading,i)=>{
const header1 = heading.toUpperCase();
body.replaceText('{NAME}',row[1]);
body.replaceText('{PITCH}',row[0]);
body.replaceText('{AMOUNT}',row[3]);
body.replaceText('{FIRST}',row[4]);
body.replaceText('{SECOND}',row[5]);
body.replaceText('{REF}',row[10]);
body.replaceText('{BNAME}',row[7]);
body.replaceText('{CODE}',row[8]);
body.replaceText('{NUMBER}',row[9]);
body.replaceText('{TERMS}',row[6]);
})
doc.setName(row[10]);
const blob = doc.getAs(MimeType.PDF);
doc.saveAndClose();
const pdf = folder.createFile(blob).setName(row[10]+'.pdf');
const email = row[2];
const subject = row[10];
const messageBody = "This message (including any attachments) is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. \n \nIf you received this in error, please delete the material from your computer and contact the sender. \n\nPlease consider the environment before printing this e-mail.";
MailApp.sendEmail({
to:email,
subject:subject,
body:messageBody,
attachments: [blob.getAs(MimeType.PDF)]
});
Logger.log(row);
file.setTrashed(true);
})
}
What I want to attach is:
var file = DriveApp.getFileById("1vGvLVP2RV1krxnj8Mt6hMiFHVBoIdbFG");
attachments.push(file.getAs(MimeType.PDF));
So I was trying to change the bottom of my main script to...
var attachments = []
var file = DriveApp.getFileById("1vGvLVP2RV1krxnj8Mt6hMiFHVBoIdbFG");
attachments.push(file.getAs(MimeType.PDF));
MailApp.sendEmail({
to:email,
subject:subject,
body:messageBody,
attachments: [blob.getAs(MimeType.PDF)]
});
Logger.log(row);
file.setTrashed(true);
})
}
Please may someone assist me. I have been on this for days, so I'm probably not seeing something obvious now. Thank you so much in advance. :-)
As there is no available answer for this, I want to contribute.
I had exactly the same error and ran into the same issue; I spent hours trying to debug it. So please try the approach below.
Try using a list of lists. I had a JSON payload; I appended all the JSON objects into a list, then put that list into another list.
Sample code:
list_of_list_data = [list(item.values()) for item in list_data]
Please let me know if it works.
Thanks for your discovery and terrific job!
I have tried so hard in the past 48h to modify your script so that I could programmatically also add some text/body in the note (together with the attachment). I also struggled immensely to have the note created in a desired subfolder.
Whenever I tried to "add" a body to the newly created note, Notes.app was basically overwriting the entire note, including the attachment.
At some point I discovered the version Ethan Schoonover authored as a "Folder Action" (see https://youtu.be/KrVcf2nN0b8, and his GitHub repo https://github.com/altercation/apple-notes-inbox). It works almost with no adaptation as a Print Plugin workflow!
This is finally the version I made, with a very minor addition (i.e. the user is prompted to specify a different Note title). I share it here with you and the Internet, hoping it might be a useful starting point for posterity.
-- This script is designed to be used in Automator to create a new note in the Notes app
-- It takes a PDF file as input, prompts the user for a note title, and creates a
-- new note with the PDF attached.
-- The note will be created in a specified folder within the Notes app.
-- The script also includes a timestamp and the original filename in the note body.
-- The script assumes the Notes app is available and the specified folder exists.
-- Note: This script is intended to be run in the context of Automator with a file input (e.g. Print Plugins or as Folder Action).
-- Heavily based on the code from: https://github.com/altercation/apple-notes-inbox
property notePrefix : ""
property notesFolder : "Resources"
on run {fileToProcess, parameters}
try
set theFile to fileToProcess as text
tell application "Finder" to set noteName to name of file theFile
-- Ask the user for a title and tags for the new note
set noteTitleDialog to display dialog "Note title:" default answer noteName
set noteTitle to text returned of noteTitleDialog
set timeStamp to short date string of (current date) as string
set noteBody to "<body><h1>" & notePrefix & noteTitle & "</h1><br><br><p><b>Filename:</b> <i>" & noteName & "</i></p><br><p><b>Automatically Imported on:</b> <i>" & timeStamp & "</i></p><br></body>"
tell application "Notes"
if not (exists folder notesFolder) then
make new folder with properties {name:notesFolder}
end if
set newNote to make note at folder notesFolder with properties {body:noteBody}
make new attachment at end of attachments of newNote with data (file theFile)
(*
Note: the following delete is a workaround because creating the attachment
apparently creates TWO attachments, the first being a sort of "ghost" attachment
of the second, real attachment. The ghost attachment shows up as a large empty
whitespace placeholder the same size as a PDF page in the document and makes the
result look empty
*)
delete first attachment of newNote
show newNote
end tell
-- tell application "Finder" to delete file theFile
on error errText
display dialog "Error: " & errText
end try
return theFile
end run
Note: I was hoping to programmatically add one or more tags to the newly created note (e.g. by asking the user within a dialog prompt), but I failed. It seems Notes does NOT recognize strings like "#blablabla" as tags unless they are typed within the Notes.app.
The problem was that there was no data bound to the checkbox. As soon as I added a JSON model, it was fixed.
<Column width="11rem">
<m:Label text="Product Id" />
<template>
<m:CheckBox selected="{Selected}"/>
</template>
</Column>
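For reference, a minimal sketch of the controller side (assuming a rows binding to /items; the names are illustrative, not from the original question):
// bind a JSONModel so the relative {Selected} paths in the template resolve
var oModel = new sap.ui.model.json.JSONModel({
    items: [
        { Selected: true },
        { Selected: false }
    ]
});
this.getView().setModel(oModel);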
I found the root cause of the issue.
Even though the executable file exists inside the chroot jail and is fully static (confirmed by ldd showing no dynamic dependencies), running it inside the jail failed with:
execl failed: No such file or directory
This error occurs despite the binary being present and statically linked. The reason is that the chroot environment is missing some essential system components or setup that the binary expects at runtime; even static binaries sometimes rely on minimal system features or device files.
The problem was resolved when I copied a statically linked BusyBox binary into the jail and ran commands from it. BusyBox, being a fully self-contained executable that includes a shell and common utilities, works smoothly inside minimal environments without extra dependencies.
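A minimal sketch of that workaround, assuming the jail lives at ./jail and the busybox binary is statically linked:
# copy a static busybox into the jail and run a shell from it
mkdir -p ./jail/bin
cp /bin/busybox ./jail/bin/busybox
sudo chroot ./jail /bin/busybox sh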
That is nice, please allow this University comment. Thanks
from pdf2image import convert_from_path
# Convert PDF to images
images = convert_from_path("/mnt/data/Anish_Kundali.pdf")
# Save images
image_paths = []
for i, img in enumerate(images):
path = f"/mnt/data/Anish_Kundali_page_{i+1}.png"
img.save(path, "PNG")
image_paths.append(path)
image_paths
Hey, something is not working.
Talk to me.
It is reporting "operation completed with errors".
I have already removed the Chrome APKs.
I've come up with my own CSS selector to do just this.
.parent > .root:has(+ .paths > :not(:empty)) > div:last-child
I doubt this is much "cleaner", but I do believe this is a clearer notation.
1. Identify where the store is being opened (likely using CertOpenStore with API flags).
2. Adjust it to explicitly specify CERT_SYSTEM_STORE_LOCAL_MACHINE instead of CURRENT_USER, as sketched below.
3. Recompile xmlsec to restore the older behavior.
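A sketch of what the adjusted call could look like (illustrative only; the actual call site in xmlsec will differ):
#include <windows.h>
#include <wincrypt.h>

// open the LOCAL_MACHINE "MY" store explicitly instead of the CURRENT_USER one
HCERTSTORE hStore = CertOpenStore(
    CERT_STORE_PROV_SYSTEM_W,        // system store provider
    0,                               // encoding type, unused here
    (HCRYPTPROV_LEGACY)NULL,         // default crypto provider
    CERT_SYSTEM_STORE_LOCAL_MACHINE, // machine store, not CURRENT_USER
    L"MY");                          // personal certificate store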
This error happens for me when I change my OS from Linux to Windows.
Delete this line from package.json:
"lightningcss-linux-x64-gnu": "^1.30.1",
OMFG THANK YOU BEEN LOOKING FOR THIS FOR HOURS GOD!!!!!!!! SMARTEST PERSON ON THE INTERNET I SWEAR TO GOD.
Your issue is that after chroot, the binary ./test is no longer found inside the new root (.).
chroot changes the apparent root directory for the process.
Copy test into the root of the jail:
cp ./test ./testdir/test
sudo ./penaur ./testdir
and change your C++ call:
sandbox.run("/test");
This might read a little jumpy since I was going through the docs, but the short answer: don't detach the Actix server. Own shutdown yourself, pass a cancel signal to your queue, and await both tasks. Also disable Actix's built-in signal handlers so Ctrl+C is under your control.
You're on the right track here. What you could try is: (a) a single place that owns shutdown, (b) a signal you can pass to your queue so it can stop gracefully, and (c) awaiting both tasks to completion after you request shutdown. Don't "fire-and-forget" the Actix server future; keep its JoinHandle and await it after stop(true), guaranteeing it's fully shut down before main returns. You can have a shared token so you can exit cleanly.
I believe the issue is with Actix's built-in signals. You start the server and leave it running without awaiting its shutdown; the queue worker stops, but the HTTP server keeps running. You may want to dig into the Actix docs here: https://actix.rs/docs/server#graceful-shutdown explains why your current setup goes wonky. Actix has its own signal handlers, and Windows doesn't send SIGTERM on Ctrl+C, so Ctrl+C is not "graceful". You have two approaches: (a) own the shutdown yourself, or (b) let Actix keep its handlers (on Unix, graceful via SIGTERM), don't disable signals, send SIGTERM, and still keep and await the server task handle so nothing stays running.
Do I need to avoid rt::spawn?
Yes: don't detach the server. Either run it directly in select!, or spawn it and await the join handle after you call stop(true).
Call .disable_signals() on the HttpServer (so Actix doesn't install its own Ctrl+C handler). Here is a tiny code snippet that keeps the server's JoinHandle, sends a cancel token to the queue, calls stop(true), and then awaits both tasks:
use actix_web::{App, HttpServer};
use tokio_util::sync::CancellationToken;
#[actix_web::main]
async fn main() -> anyhow::Result<()> {
let server = HttpServer::new(|| App::new())
.disable_signals() // you own shutdown
.shutdown_timeout(30) // graceful window
.bind(("0.0.0.0", 8080))?
.run();
let handle = server.handle();
let cancel = CancellationToken::new();
let cancel_q = cancel.clone();
// spawn both but KEEP the JoinHandles
let mut srv_task = tokio::spawn(async move { server.await.map_err(anyhow::Error::from) });
let queue_task = tokio::spawn(async move {
// your queue loop should watch cancel_q.cancelled().await
run_queue(cancel_q).await
});
let srv_res = tokio::select! {
    _ = tokio::signal::ctrl_c() => None,
    // optionally: react if the server crashes before Ctrl+C
    res = &mut srv_task => {
        eprintln!("server exited early: {:?}", res);
        Some(res)
    }
};
// trigger graceful shutdown
handle.stop(true).await; // waits up to shutdown_timeout for in-flight reqs
cancel.cancel();         // tell the queue to finish and exit
// ensure nothing is left running (don't re-await a handle already joined above)
if srv_res.is_none() {
    let _ = srv_task.await;
}
let _ = queue_task.await;
Ok(())
}
I used Next.js MDX to create the page and then printed it via the browser.
There I can render React components as well, so I wrote the React component below.
export default function PageBreak() {
return <div className="break-after-page" />
}
Then, when I want to add a page break, I just add <PageBreak />.
More info here - https://nextjs.org/docs/app/guides/mdx
PS: The project also had Tailwind CSS, which provides the break-after-page utility class.
I don't know if this still helps you, but I had the same error, and it was because of the endpoint: I ended it with message, but it is messages :)
I was reviewing your code but found that cv2.findContours isn't a good way to detect the hand in image 1, because it's detecting too much detail and not the entire object. Therefore, image 2 has less detail.
import cv2
import sys
img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (15, 15), 0)
edged = cv2.Canny(gray, 5, 100)
contours, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
max_area = max(cv2.contourArea(c) for c in contours)
print(max_area)
else:
print(0)
It's better to measure the area of the largest contour like this; note that image 2 yields a larger contour value.
I recommend using a model like Google's MediaPipe for more robust results, detecting hands with landmarks instead of just contours.
This video could explain more about this topic and is good for future projects using an AI model: youtube (it was complicated for me too, since I don't speak Russian, by the way).
It’s not clear what result (answer set) you expected.
From a syntax point of view, your second approach looks more "correct", if that makes sense:
It assigns each protein/1 a choice/4 with three food/1 atoms.
But it does not define any specific relationship among those three food/1 atoms; e.g. they can be the same, and can be the same across different proteins, as no further rules are defined.
(Your first approach allows an empty answer set as a result, while assigning a choice/4 for any combination of protein/1 and three food/1.)
I recommend checking previous Stack Overflow posts on Clingo as well as some introductory PDFs on the topic to better understand the syntax and "logic".
Good luck!
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@digitalocean" />
<meta name="twitter:title" content="Sammy the Shark" />
<meta name="twitter:description" content="Senior Selachimorpha at DigitalOcean" />
<meta name="twitter:image" content="https://html.sammy-codes.com/images/large-profile.jpg" />
The code didn't run on my PC, but I was reviewing igrid and saw that they explain how the DLMS structure works and how the AARQ is built, so I believe each field must be inserted at an exact position.
Comparing with your original code, I see that in the first part, where the password is, you set it like this:
aarq = AARQ_REQUEST.replace(
bytearray.fromhex("38 38 39 33 35 38 36 30"),
bytearray(SERIAL_NUMBER, 'ascii')
)
but in MicroPython it is hard-coded:
b'00053346'
so, according to DeepSeek, after doing some testing, the recommendation is not to hard-code it, because otherwise it generates different frames:
for i, (p, m) in enumerate(zip(aarq_python, aarq_micropython)):
if p != m:
print(f"Byte {i}: Python={p:02X}, MicroPython={m:02X}")
From the code I tested and reviewed, I believe the error starts before the \xBE\x10 block, because Python does not recalculate the length, so it does not match MicroPython's after the password is inserted.
The password may be the same, but the packet is not, and the meter rejects the AARQ. I don't have a clear solution, but based on DeepSeek's recommendations (since I don't have the complete code), the fix could be to build the packet with the correct length after inserting the password.
Yes, it is useful for you, but very difficult for us, because we don't know about this process or what this website is doing; we only use this link and website to get information about our department.
The solution is to make a list of available drives/OSDs and make sure the boot drive is excluded. My solution should work with both SATA and NVMe drives, but since I only have NVMe drives in my machines, I cannot test the SATA solution. Furthermore, all available drives will be seen as Ceph drives. This may not be viable for everyone. The full code is included under the FINAL EDIT comment.
Wrap the Swiper in a grid
<div className="grid">
<Swiper>...</Swiper>
</div>
I just encountered this error, and the cause was that I had two firebase tools installed: one through Homebrew and the other local. You can see if this is the case for you by running these commands in a terminal:
which firebase
npm list -g firebase-tools
If the output of those commands is different, you have the same problem I did.
In my case, removing the local library solved the issue:
rm /Users/mycomputer/.local/bin/firebase
You should also make sure you are on the latest version. Compare the output of this command:
firebase --version
with the github latest version: https://github.com/firebase/firebase-tools
I solved the question myself in my first comment; the working manifest.json:
{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
I don't have the exact answer, but I have created an app to batch-create folders from a list.
Just select your file destination, paste in your list, and press Create Folders. It can easily create hundreds of folders in seconds.
Check out Multiple Folder Creator on the App Store.
Note for future readers: if you are using Rocket version 0.5.0 (Nov 17, 2023) or later, the Outcome::Failure variant was removed in favor of Outcome::Error.
Writing this for anyone who comes here due to the error:
error[E0599]: no variant or associated item named `Failure` found for enum `Outcome` in the current scope
--> src/guards/auth_guard.rs:40:32
|
40 | Err(_) => Outcome::Failure((Status::Unauthorized, ())),
| ^^^^^^^ variant or associated item not found in `Outcome<_, (Status, _), Status>`
So you have to use Outcome::Error instead, like this (example):
Err(_) => Outcome::Error((Status::Unauthorized, ()))
Supporting Document: https://github.com/rwf2/Rocket/blob/master/CHANGELOG.md?utm_source=chatgpt.com#:~:text=Outcome%3A%3AFailure%20was%20renamed%20to%20Outcome%3A%3AError.
When you're copying a GitHub project to your computer, you have two main choices for the link: HTTPS or SSH.
If you pick HTTPS, it's like using a front door everybody has a key for. It's super easy to start since you just use your GitHub username and password or a token, but every now and then, it will ask you to log in again. Plus, it works anywhere, even if you're on tricky networks or behind firewalls.
Now, SSH is a bit like having a special VIP pass. You set it up once by creating a key (kind of like a secret handshake), and after that, pushing or pulling changes is smooth sailing without more passwords. It's more secure and faster once set up, but it's a bit trickier to get going: you've got to generate those keys and add them to your GitHub profile. Also, sometimes corporate networks block the special ports SSH uses, so that can get in your way.
So if you're new or just wanna grab stuff without fuss, go with HTTPS. But if you plan on working on projects a lot, or want that smoother, password-free flow, SSH is your friend.
In simple terms, HTTPS is quick and easy, SSH is secure and convenient.
For anyone arriving here in the future with a similar issue where you must support multiple frameworks, one of which is Framework 4.0 (a very old vendor-supplied application running on Windows XP in my case): the link to Thomas Levesque's solution (https://thomaslevesque.com/2012/06/13/using-c-5-caller-info-attributes-when-targeting-earlier-versions-of-the-net-framework/) provided by others above works perfectly and seems to be the most straightforward solution, since both newer and older frameworks can then use the attributes with no code differences.
I put his stub definitions into their own class file and surrounded them with the #if NET40 compiler directive so those stubs are only used in a NET40 compile (since I support multiple frameworks). A Framework 4.0 version of each app can now access the [Caller...] attributes (I am only using [CallerMemberName], but I have no doubt the other stubs work too) and the expected values are populated into your variables. Thanks to Thomas and the others that left the link!
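For reference, a minimal sketch of those stubs following Thomas Levesque's approach (the NET40 symbol is whatever your project defines for the 4.0 target):
#if NET40
namespace System.Runtime.CompilerServices
{
    // stub attributes so [CallerMemberName] etc. compile on .NET 4.0;
    // the C# 5+ compiler fills in the values regardless of target framework
    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerMemberNameAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerFilePathAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerLineNumberAttribute : Attribute { }
}
#endif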
Instead of using Microsoft’s own module, you can use the open-source Get-AzVMSku module, which allows you to browse every Azure Gallery Image publisher, select offers, versions, and see VM sizes available to your subscription, along with quotas.
The module is available on the PowerShell Gallery:
Get-AzVMSku on PowerShell Gallery
I’ve also written a detailed guide explaining how it works and how to use it:
Browse PowerShell Azure VM SKUs & Marketplace Images with Get-AzVMSku: https://www.codeterraform.com/post/powershell-azure-vm-skus
I would suggest another approach, as I am facing a similar issue in a SingStar-like app I am coding.
I am considering creating a custom audio processing node that counts the actual buffer frames passing through it (an AudioWorkletProcessor, maybe). It could provide a method giving the actual played time based on the sample count and sample rate.
You would then just connect this extra node right after the nodes you want to measure.
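A minimal sketch of that idea (the file and processor names are hypothetical; sampleRate is a global in the AudioWorklet scope):
// frame-counter-processor.js: count sample frames passing through,
// so played time = framesProcessed / sampleRate
class FrameCounterProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.framesProcessed = 0;
  }
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    // pass the audio through unchanged
    for (let ch = 0; ch < input.length; ch++) {
      output[ch].set(input[ch]);
    }
    if (input[0]) {
      this.framesProcessed += input[0].length;
      // report the played time to the main thread
      this.port.postMessage(this.framesProcessed / sampleRate);
    }
    return true;
  }
}
registerProcessor('frame-counter', FrameCounterProcessor);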
In my case, in a monorepo, I had different versions of type-graphql installed in the server app and in a library containing model classes.
User-uploaded SVG files may embed malicious code or include it via an external reference. When a malicious SVG file is published on your website, it can be used to exploit multiple attack vectors, including:
- <image>, <script>, or <use> tags that send sensitive information to attacker-controlled servers.
- <image xlink:href="file:///..."> or <use> references that attempt to read local or server-side files.
Indeed, wrapping the image in an <img> tag is one of the 3 measures you can take. The others are a Content Security Policy, SVG sanitization, and server-side rasterization.
More detailed guidance on each measure:
Use the <img> tag: instead of directly embedding an SVG using <svg> or <object>, use the <img> tag to render it:
<img src="safe-image.svg" alt="Safe SVG">
The <img> tag ensures that the SVG is treated as an image and prevents JavaScript execution inside it, but it doesn't remove malicious code from the SVG file.
Enabling the Content Security Policy (CSP) HTTP response header also prevents JavaScript execution inside the SVG.
For example:
Content-Security-Policy: default-src 'none'; img-src 'self'; style-src 'none'; script-src 'none'; sandbox
Which applies the following policy:
Directive            | Purpose
---------------------|--------------------------------------------------------------
default-src 'none'   | Blocks all content by default.
img-src 'self'       | Allows images only from the same origin.
style-src 'none'     | Prevents inline and external CSS styles.
script-src 'none'    | Blocks inline and external JavaScript to prevent XSS.
sandbox              | Disables scripts, forms, and top-level navigation for the SVG.
Sanitization: strip potentially harmful elements like <script>, <iframe>, <foreignObject>, inline event handlers (e.g. onclick), or inclusions of other (potentially malicious) files.
Examples of libraries for SVG sanitization:
(1) I started the mentioned Java sanitizer project, as I could not find any existing solution for Java.
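As a sketch, sanitizing with DOMPurify (one widely used library, assuming a browser or jsdom environment; the variable name is illustrative) can look like this:
import DOMPurify from 'dompurify';

// keep only the SVG profiles; scripts, foreignObject and event handlers are stripped
const cleanSvg = DOMPurify.sanitize(untrustedSvgString, {
  USE_PROFILES: { svg: true, svgFilters: true },
});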
Rasterization: render the SVG server-side to a raster format like PNG. This may protect visiting users, but could introduce vulnerabilities on the rendering side at the server, especially when using server-side JavaScript (like Node.js).
Sorry for bothering everyone.
I am not sure why the summary statistics of r_ga and insideGARnFile_WithCoord are different, but the output graphics look very similar. I will assume the slight boundary mismatch is due to a coordinate reference system difference/transformation, and I will consider the problem solved for now. If you have any insights on the boundary mismatch, please leave your comments here. Much appreciated!
[Screenshots: summary statistics and rainfall output of insideGARnFile_WithCoord]
[Screenshots: summary statistics and rainfall output of r_ga]
For me, the problem was that when I did a lookup for the symbol and got the instrument value, the datatype of the instrument was int64 (a numpy-based value), but the socket API accepts only int.
So converting int64 -> int fixed the issue for me.
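A minimal sketch of the conversion (the DataFrame and column names here are hypothetical):
import pandas as pd

df = pd.DataFrame({"symbol": ["INFY"], "instrument_token": [408065]})
token = df.loc[df["symbol"] == "INFY", "instrument_token"].iloc[0]
token = int(token)  # numpy.int64 -> plain Python int before passing to the socket API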
@STerliakov gets the credit for the answer:
Apparently this section of the code, in its entirety, was added so the user would see the data for troubleshooting purposes.
I deleted it entirely and that fixed the issue.
requirePOST();
[$uploaded_files, $file_messages] = $this->saveFiles();
Flash::set('debug','Application form data '
. json_encode($_POST, JSON_PRETTY_PRINT)
. "\nUploaded files:"
. json_encode($uploaded_files,JSON_PRETTY_PRINT)
. "\nFile messages:"
. json_encode($file_messages,JSON_PRETTY_PRINT)
);
For anyone else experiencing the issue, here is the entire working function
public function submit() {
include([DB ACCESS FILE]);
$fp1 = json_encode($_POST, JSON_PRETTY_PRINT);
$fp2 = json_decode($fp1);
$owner_first = $fp2->owner_first;
$owner_last = $fp2->owner_last;
$query_insert_cl = "INSERT INTO cnvloans (owner_first,owner_last) VALUES ('$owner_first','$owner_last')";
mysqli_query($dbc, $query_insert_cl);
redirect('/page/confirm');
}
I deployed my backend on Vercel and tried using that URL as well, but I keep getting errors like 500, 401, and 400 with Axios; when I fix one, another appears. However, the code runs perfectly in Postman and Thunder Client, but when I run it on my mobile these errors keep showing up. If you have solved this issue before, please guide me as well.
There is the DrusillaSelect library for furthest-neighbor search you can try, from this paper.
As an example, an Ethereum address with a private key:
0x91b005cb6b291f67647471ad226b937657a8d7d6
pvk 000000000000000000000000000000000000000000000000007fa9e2cd6d52fe
check the address and good luck to you
Removing --turbopack from the dev script fixed the issue.
Before
"dev": "next dev --port 3001 --turbopack"
After
"dev": "next dev --port 3001",
OnDrawColumn and OnDrawDataCell have a TGridDrawState State parameter.
OnDataChange is on the DataSource, as was answered.
And yes, you can't control TDBGrid unless you subclass it and override its ancestor methods; that's why it is rarely used in real-world tasks.
Whenever it is necessary to insert a frame into the stream, it is much better to do it before the encoder: just send the previous raw frame to the encoder again, or blend the previous and next frames. Dirty tricks with an already-encoded bitstream may have negative side effects: a broken HRD model, a broken picture order count sequence, etc.
2025 update:
For people who are confused about why there is no ID Token checkbox: it is hidden unless you add a correct Redirect URI. You need to add one for the Web platform type.
After that, in the Settings tab you will be able to see the ID tokens checkbox, and checking it fixed the problem for me.
I didn’t get the exact same error as you, but my setup is very similar, so here are my two cents:
Solution
│
├── MyApp // Server project. Set InteractiveWebAssembly or InteractiveAuto globally in App.razor
│
├── MyApp.Client // Contains Routes.razor
│
└── SharedRCL // Contains Counter.razor page (@page "/counter") without setting any render mode
In Routes.razor, make sure the Router is aware of routable components in the Razor Class Library (RCL) by adding the RCL's assembly:
<Router AppAssembly="@typeof(Program).Assembly"
AdditionalAssemblies="new[] { typeof(Counter).Assembly }"> @* <-- This line here *@
...
</Router>
Depending on your setup, you might also need to ensure that the server knows about routable components in the RCL.
In MyApp/Program.cs, register the same assembly when mapping Razor components:
app.MapRazorComponents<App>()
.AddInteractiveWebAssemblyRenderMode()
.AddAdditionalAssemblies(typeof(MyApp.Client._Imports).Assembly)
.AddAdditionalAssemblies(typeof(Counter).Assembly); // <-- This line here
Can you try this? The idea is to approach the search differently:
Retrieve the private IP addresses of the Private Endpoint through its network interfaces (NICs).
Then identify the private DNS zones linked to the virtual network (VNet) the Private Endpoint is connected to (via the private DNS zone links).
Within those private DNS zones, search for DNS records that match the private IP addresses; these are the FQDNs.
You can do this easily with PowerShell or the Azure Python SDK, as sketched below.
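A PowerShell sketch of those steps (the names, resource groups, and zone are assumptions):
# 1. private IPs of the Private Endpoint via its NIC
$pe  = Get-AzPrivateEndpoint -Name "my-pe" -ResourceGroupName "my-rg"
$nic = Get-AzNetworkInterface -ResourceId $pe.NetworkInterfaces[0].Id
$ips = $nic.IpConfigurations | ForEach-Object { $_.PrivateIpAddress }

# 2./3. A records in a linked private DNS zone that match those IPs
Get-AzPrivateDnsRecordSet -ResourceGroupName "dns-rg" `
    -ZoneName "privatelink.blob.core.windows.net" -RecordType A |
  Where-Object { $_.Records.Ipv4Address | Where-Object { $ips -contains $_ } }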
Here is my workaround for blank icon issues on taskbar.
1. create a shortcut on desktop, open it.
2. drag this shortcut to the taskbar. this will pin it onto taskbar.
3. right click the icon, un-pin. done!
You can create an SPA using Bootstrap and jQuery/vanilla JS. For that, you must have a strong understanding of vanilla JS or jQuery.
Preferences → Run/Debug → Perspectives: in the "Application Types/Launchers" box, for "STM32 C/C++ Application", set Debug: None and Run: None (not Debug: Debug).
Ravi's bicycle had rusted and its brakes didn't work either. Still, he rode ten kilometers to school every day. His friends mocked him, but he didn't lose heart. When he came first in his studies, those same friends said, "Your bicycle was broken, not your dreams."
The confidence intervals for versicolor and virginica correspond to their reported estimates of 0.93 and 1.58. That is, the offset of versicolor is estimated as 0.93 and the confidence interval spans 0.73 to 1.13. To get the estimate of the mean of versicolor, you would add the intercept to all of those numbers: a mean of 5.01 + 0.93 with the lower confidence limit at 5.01 + 0.73 and the upper confidence limit at 5.01 + 1.13.
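For example, assuming the model behind those numbers is lm(Sepal.Length ~ Species, data = iris), you can shift the interval yourself in R:
fit <- lm(Sepal.Length ~ Species, data = iris)
coef(fit)["(Intercept)"] + coef(fit)["Speciesversicolor"]      # mean of versicolor
coef(fit)["(Intercept)"] + confint(fit)["Speciesversicolor", ] # its confidence limits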
Store in three separate columns; it's much better for maintenance and data retrieval (note that they are three very distinct pieces of data, so putting them together will make your life harder).
If you want to have an easy way to always have the [Country ISO]-[State ISO]-[City Name] string in hand, you can create an additional generated column.
Example (Postgres):
beautified_code varchar GENERATED ALWAYS AS (country_iso || '-' || state_iso || '-' || city_name) STORED
(Note: Postgres requires the generation expression to be parenthesized and immutable; CONCAT_WS is not immutable, hence the plain || concatenation.)
In this column, the three values will always be concatenated together automatically during entry creation and update. So you don't need to worry about maintaining consistency in it.
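In context, a full table definition could look like this (the table and column names are illustrative):
CREATE TABLE cities (
    country_iso varchar NOT NULL,
    state_iso   varchar NOT NULL,
    city_name   varchar NOT NULL,
    beautified_code varchar GENERATED ALWAYS AS
        (country_iso || '-' || state_iso || '-' || city_name) STORED
);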
I have the same question. I have a dataset containing medical variables used to determine whether the patient should receive outpatient care or not.
The target variable is SOURCE:
0 for outpatient care
1 otherwise
I'm using supervised learning with glm (logistic regression) from the caret package in R. It predicts the probability that an individual belongs to the positive class; ChatGPT says the positive class is the second factor level, but I don't know how to be sure the model predicts p(k="1"|xi).
glm gives only probabilities as results when using the predict function, so I must convert the probabilities to labels (0 or 1) according to a threshold. So are these probabilities p(k = first level of the factor variable)?
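If it helps, a small R check (assuming your data frame is df with a SOURCE factor): stats::glm models the probability of the second factor level, so you can inspect or force the level order explicitly:
df$SOURCE <- factor(df$SOURCE, levels = c("0", "1"))
levels(df$SOURCE)  # "0" is the reference; predict(fit, type = "response") then gives p(SOURCE = "1")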
Try to rename your hivepress-marketplace directory to hivepress or set for add_action higher priority (3rd parameter)
In some cases of API development, we want to ignore some fields in the JSON response. In such cases we use the @JsonIgnore annotation; it was not designed to solve the infinite recursion problem.
In my projects, I use the @JsonIgnore annotation with bidirectional @ManyToMany relationships between entities, and I use @JsonManagedReference and @JsonBackReference in @ManyToOne / @OneToMany cases.
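A minimal sketch of that pattern (hypothetical Post/Comment entities):
import java.util.List;
import jakarta.persistence.*;
import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonManagedReference;

@Entity
public class Post {
    @Id
    private Long id;

    // the "forward" side: serialized normally
    @OneToMany(mappedBy = "post")
    @JsonManagedReference
    private List<Comment> comments;
}

@Entity
class Comment {
    @Id
    private Long id;

    // the "back" side: omitted during serialization, breaking the cycle
    @ManyToOne
    @JsonBackReference
    private Post post;
}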
I tried everything on the Mac, but "sudo su -" finally worked; it gives a root shell with full permissions.
Nevermind, I found a solution almost immediately after posting this question; I am leaving it here for future visitors. I fixed this problem by simply creating a temporary repository with the modified path (Git) selected and publishing it to GitHub, and now the new path is saved. I don't know why it didn't save before, but this solution should work.
The Empty Views Activity doesn't offer Java either, nor does No Activity; Java is completely gone from Android Studio, so now I need to learn a new language. I hate this. I haven't been programming since 2021, when I finished my computer science degree; now I'm trying to get the rust off, but apparently I have to start from the beginning. Kotlin, here I come!
As suggested by @furas, we can use git url.
ubuntu@ubuntu:~/hello$ poetry add scikit-learn git+https://github.com/scikit-learn-contrib/imbalanced-learn.git
Creating virtualenv hello in /home/ubuntu/hello/.venv
Using version ^1.7.1 for scikit-learn

Updating dependencies
Resolving dependencies... (0.1s)

Package operations: 6 installs, 0 updates, 0 removals

Writing lock file
ubuntu@ubuntu:~/hello$
Same here. It worked well at first, several times (I don't know how often). Is there a rate limit?
It looks like the base path is not set correctly. You can try the following:
import os
import sys
sys.path.append(os.path.abspath("."))
sys.path.append("../")
# Then try to import the src modules
from src.logger import Logging
Here we are setting up the path manually, but it should work.
MongoDB collections don't have a schema, but if you want to read one as a Spark DataFrame, all rows must have the same schema. So fieldA can't be a String and a JSON object at the same time. If it is not present, you shouldn't create an empty string; just drop the field or use null.
I found the answer here: CosmosDB Emulator Linux Image - Mount Volume
When I tried it, it seemed to work
services:
cosmosdb:
volumes:
- cosmosdrive:/tmp/cosmos/appdata # <-- pointing to this location did it for me
volumes:
cosmosdrive:
import React from 'react';
const videos = [
{ id: 'dQw4w9WgXcQ', title: 'Rick Astley - Never Gonna Give You Up' },
{ id: '3JZ_D3ELwOQ', title: 'Maroon 5 - Sugar' },
];
export default function VideoLinks() {
return (
<div>
<h2>คลิป YouTube</h2>
<ul>
{videos.map(video => (
<li key={video.id}>
<a
href={`https://www.youtube.com/watch?v=${video.id}`}
target="_blank"
rel="noopener noreferrer"
style={{ color: 'blue', textDecoration: 'underline' }}
>
{video.title}
</a>
</li>
))}
</ul>
</div>
);
}
Yes, I know it's a really old thread, but I have a question.
The arc works fine so far, but I would like to display a value in the middle of the gauge. How can I achieve this?
There is a well tested implementation in Android. You can even package it into your own Java project like this.
Since Spring Boot uses Logback for logging, you can override the default Logback pattern with your own PatternLayout, masking your password properties. This approach also applies to any other log output (e.g. toString methods) matching a defined regex pattern.
See https://www.baeldung.com/logback-mask-sensitive-data for an example.
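For instance, a minimal masking layout in that spirit (the regex and the "password=" property name are assumptions; the linked article has a more complete version):
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class MaskingPatternLayout extends PatternLayout {
    @Override
    public String doLayout(ILoggingEvent event) {
        // mask anything following "password=" up to the next delimiter
        return super.doLayout(event).replaceAll("(password=)[^,;\\s]*", "$1****");
    }
}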
Is it required to call the Python script from C# only?
Can't you directly expose the Python code as an API endpoint and then call that endpoint from C#, as that would be easier?
There was nothing special in the template file but the variable "unit" holding the data, and a for loop distributing its elements into one HTML table.
Here is how the template looks:
<tbody>
{% for element in unit %}
<tr>
<td>{{ element[0] }} </td>
<td>{{ element[1] }}</td>
<td>{{ element[2] }}</td>
</tr>
{% endfor %}
</tbody>
Thank you all who wrote suggestions. I found the solution: define an empty list such as unit_result, populate it with a loop, and send it to the template.
# prepare for the table
unit_result = []
for i in range(len(unit)):
    unit_result.append((unit[i][0], unit[i][1], unit[i][2]))
return render_template('analysis.html', unit_result=unit_result)
The new template will include the following line instead of the older one:
{% for element in unit_result %}
The correct solution involves using the Windows (COM) APIs or UI automation.
SHELL
Type shellAppType = Type.GetTypeFromProgID("Shell.Application");
object shellApp = Activator.CreateInstance(shellAppType);
I also faced this issue a week ago, so I used the nullish coalescing operator (??) to provide a default string.
const value: string = maybeString ?? "";
This ensures value is always a string, even if maybeString is undefined.
FWIW, MDN nowadays says in a somewhat prominent box:
The initial value should not be confused with the value specified by the browser's style sheet.
And you seem to want the latter.
I created a small library to easily connect Socket.IO to FastAPI:
https://github.com/bandirom/fastapi-socketio-handler
Feel free to discuss the approach in the repo.
When you run the application for the first time, it downloads all dependencies from the remote repositories to the local repository, which is a somewhat time-consuming process, so the application may start with a delay. From the second start onward, it loads libraries from the local repository (and only missing ones from the remote), so it needs just a few seconds.
Removing unwanted dependencies from your pom.xml can also significantly improve the performance of your application.
You can resolve this issue with local builds:
Clone core via SSH (no token)
Run:
mvn clean install
This installs core into your local Maven repository (~/.m2/repository).
Now api and web can reference it in pom.xml without touching GitLab or HTTP tokens.
You can cd into another directory: for example, copy the address from the address bar in your file explorer, paste it into PowerShell, and use the change-directory command, as seen below:
cd 'D:\Users\username\Downloads'
I had this issue too, and when I got the (SA) issue I knew the problem. I tried a VPN but it still didn't work, so I turned my PC off and back on, and it works well now.
I found out the issue was that the feature set I had defined was not enough to let the parser choose a different action than starting an arc. Once I added a feature indicating whether an arc was started, the parser now starts and ends arcs until the stop condition is reached. The code looks somewhat different from the example I first posted, but it is similar. For example, the while-loop in the parse() method continues indefinitely (while True:), but there is a break condition that comes into effect when the number of arcs reaches the number of tokens in the sentence (since the number of arcs, including the root arc, is the same as the number of tokens). Note that I also switched from the perceptron code to an SVM from the scikit-learn library.
A pure Python library, with no dependencies, to calculate business days between dates, quickly and efficiently. Ideal for Python developers and data scientists who value simplicity and performance.
https://github.com/cadu-leite/networkdays
Step-by-Step Internal Flow
BEGIN → The database assigns a transaction ID (TID) and starts tracking all operations.
Read/Write Operations → Data pages are fetched from disk into the Buffer Pool.
Locking / MVCC →
Lock-based systems: lock rows/tables to ensure isolation.
MVCC systems: keep multiple versions of data so reads don’t block writes.
Undo Logging → Before any change, the old value is written to an Undo Log.
Change in Memory → Updates are made to in-memory pages in the buffer pool.
Redo Logging (WAL) → The intended changes are written to a Write-Ahead Log on disk.
Commit → The WAL is flushed to disk, guaranteeing durability.
Post-Commit → Locks are released, and dirty pages are eventually written to disk.
Rollback (if needed) → Use the Undo Log to restore old values.
Read More: https://jating4you.blogspot.com/2025/08/understanding-internal-working-of.html
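As a tiny SQL illustration of that flow (hypothetical accounts table):
BEGIN;                             -- TID assigned, tracking starts
UPDATE accounts                    -- page fetched into the buffer pool,
SET balance = balance - 100        -- old value goes to the undo log,
WHERE id = 1;                      -- intended change written to the redo log (WAL)
COMMIT;                            -- WAL flushed to disk; locks released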
That's called inline mode.
Here are the links to the official API reference:
About inline bot: https://core.telegram.org/bots/inline
API reference: https://core.telegram.org/bots/api#inline-mode
I've done very basic image creation with PIL in Mojo. (Running on Fedora Linux, Mojo installed via pip into Python venv.) My program imports PIL.Image, creates a new image, and initialises the pixel data from a Mojo List[UInt32] converted via the Mojo Python.list() type conversion.
If you are using a newer version of Keycloak (specifically 26), then see:
https://www.keycloak.org/server/containers#_importing_a_realm_on_startup
keycloak:
image: quay.io/keycloak/keycloak:26.1.4
command: start-dev --import-realm
ports:
- "8081:8080"
environment:
KC_BOOTSTRAP_ADMIN_USERNAME: admin
KC_BOOTSTRAP_ADMIN_PASSWORD: admin
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: keycloak
volumes:
- keycloak_data:/opt/keycloak/data
- ./compose/keycloak/realms:/opt/keycloak/data/import
networks:
- keycloak