Thanks for your discovery and terrific job!
I have tried so hard in the past 48h to modify your script so that I could programmatically also add some text/body in the note (together with the attachment). I also struggled immensely to have the note created in a desired subfolder.
Whenever I tried to "add" a body to the newly created note, Notes.app was basically overwriting the entire note, including the attachment.
At some point I discovered the version Ethan Schoonover authored as a "Folder Action" (see https://youtu.be/KrVcf2nN0b8 and his GitHub repo https://github.com/altercation/apple-notes-inbox). It works with almost no adaptation as a Print Plugin workflow!
Here, finally, is the version I made, with a very minor addition (the user is prompted to specify a different note title). I share it here with you and the Internet, hoping it might be a useful starting point for posterity.
-- This script is designed to be used in Automator to create a new note in the Notes app
-- It takes a PDF file as input, prompts the user for a note title, and creates a
-- new note with the PDF attached.
-- The note will be created in a specified folder within the Notes app.
-- The script also includes a timestamp and the original filename in the note body.
-- The script assumes the Notes app is available and the specified folder exists.
-- Note: This script is intended to be run in the context of Automator with a file input (e.g. Print Plugins or as Folder Action).
-- Heavily based on the code from: https://github.com/altercation/apple-notes-inbox
property notePrefix : ""
property notesFolder : "Resources"
on run {fileToProcess, parameters}
    try
        set theFile to fileToProcess as text
        tell application "Finder" to set noteName to name of file theFile
        -- Ask the user for a title for the new note
        set noteTitleDialog to display dialog "Note title:" default answer noteName
        set noteTitle to text returned of noteTitleDialog
        set timeStamp to short date string of (current date) as string
        set noteBody to "<body><h1>" & notePrefix & noteTitle & "</h1><br><br><p><b>Filename:</b> <i>" & noteName & "</i></p><br><p><b>Automatically Imported on:</b> <i>" & timeStamp & "</i></p><br></body>"
        tell application "Notes"
            if not (exists folder notesFolder) then
                make new folder with properties {name:notesFolder}
            end if
            set newNote to make note at folder notesFolder with properties {body:noteBody}
            make new attachment at end of attachments of newNote with data (file theFile)
            (*
                Note: the following delete is a workaround because creating the attachment
                apparently creates TWO attachments, the first being a sort of "ghost" attachment
                of the second, real attachment. The ghost attachment shows up as a large empty
                whitespace placeholder the same size as a PDF page in the document and makes the
                result look empty.
            *)
            delete first attachment of newNote
            show newNote
        end tell
        -- tell application "Finder" to delete file theFile
    on error errText
        display dialog "Error: " & errText
    end try
    return theFile
end run
Note: I was hoping to programmatically add one or more tags to the newly created note (e.g. by asking the user in a dialog prompt), but I failed. It seems Notes does NOT recognize strings like "#blablabla" as tags unless they are typed within Notes.app.
The problem was that there was no data bound to the checkbox. As soon as I added a JSON model, it was fixed.
<Column width="11rem">
<m:Label text="Product Id" />
<template>
<m:CheckBox selected="{Selected}"/>
</template>
</Column>
I found the root cause of the issue.
Even though the executable file exists inside the chroot jail and is fully static (confirmed by ldd showing no dynamic dependencies), running it inside the jail failed with:
execl failed: No such file or directory
This error occurs despite the binary being present and statically linked. The reason is that the chroot environment is missing some essential system component or setup that the binary expects at runtime; even static binaries sometimes rely on minimal system features or device files.
The problem was resolved when I copied a statically linked BusyBox binary into the jail and ran commands from it. BusyBox, being a fully self-contained executable that includes a shell and common utilities, works smoothly inside minimal environments without extra dependencies.
That is nice, please allow this University comment. Thanks
from pdf2image import convert_from_path

# Convert PDF to images
images = convert_from_path("/mnt/data/Anish_Kundali.pdf")

# Save images
image_paths = []
for i, img in enumerate(images):
    path = f"/mnt/data/Anish_Kundali_page_{i+1}.png"
    img.save(path, "PNG")
    image_paths.append(path)

image_paths
Oops, something isn't working.
Ah, talk to me.
It says the operation completed with errors.
I've already removed the Chrome APKs.
I've come up with my own CSS selector to do just this.
.parent > .root:has(+ .paths > :not(:empty)) > div:last-child
I doubt this is much "cleaner", but I do believe this is a clearer notation.
Identify where the store is being opened (likely using CertOpenStore with API flags).
Adjust it to explicitly specify CERT_SYSTEM_STORE_LOCAL_MACHINE instead of CURRENT_USER.
Recompile xmlsec to restore the older behavior.
This error happened for me when I changed my OS from Linux to Windows.
Delete this line of code from package.json:
"lightningcss-linux-x64-gnu": "^1.30.1",
OMFG THANK YOU BEEN LOOKING FOR THIS FOR HOURS GOD!!!!!!!! SMARTEST PERSON ON THE INTERNET I SWEAR TO GOD.
Your issue is that after chroot, the binary ./test is no longer found inside the new root (.). chroot changes the apparent root directory for the process.
Copy test into the root of the jail:
cp ./test ./testdir/test
sudo ./penaur ./testdir
and change your C++ call:
sandbox.run("/test");
Reading this might be a little jumpy since I was going through the docs, but the short answer: don't detach the Actix server. Own shutdown yourself, pass a cancel signal to your queue, and await both tasks. Also disable Actix's built-in signal handlers so Ctrl+C is under your control.
You're on the right track here. What you could try is: (a) a single place that owns shutdown, (b) a signal you can pass to your queue so it can stop gracefully, and (c) awaiting both tasks to completion after you request shutdown. Don't "fire-and-forget" the Actix server future: keep its JoinHandle and await it after stop(true), guaranteeing it's fully shut down before main returns. You can use a shared token so you can exit cleanly.
I believe the issue is with Actix's built-in signals. You start the server and leave it running without awaiting its shutdown; the queue worker stops, but the HTTP server keeps running. You may want to dig into the Actix docs here: https://actix.rs/docs/server#graceful-shutdown explains why your current setup goes wonky. Actix has its own signal handlers, so Ctrl+C is not "graceful", and Windows doesn't send SIGTERM with Ctrl+C. So you have two approaches: (A) own the shutdown yourself, or (B) let Actix keep its handlers (on Unix, graceful shutdown via SIGTERM): don't disable signals, send SIGTERM, and still keep and await the server task handle so nothing stays behind.
Do I need to avoid rt::spawn?
Yes: don't detach the server. Either run it directly in select!, or spawn it and await the join handle after you call stop(true).
Call .disable_signals() on the HttpServer (so Actix doesn't install its own Ctrl+C handler). The tiny snippet below keeps the server's JoinHandle, sends a cancel token to the queue, calls stop(true), and then awaits both tasks.
use actix_web::{App, HttpServer};
use tokio_util::sync::CancellationToken;

#[actix_web::main]
async fn main() -> anyhow::Result<()> {
    let server = HttpServer::new(|| App::new())
        .disable_signals()    // you own shutdown
        .shutdown_timeout(30) // graceful window
        .bind(("0.0.0.0", 8080))?
        .run();
    let handle = server.handle();

    let cancel = CancellationToken::new();
    let cancel_q = cancel.clone();

    // spawn both but KEEP the JoinHandles
    let mut srv_task = tokio::spawn(async move { server.await.map_err(anyhow::Error::from) });
    let queue_task = tokio::spawn(async move {
        // your queue loop should watch cancel_q.cancelled().await
        run_queue(cancel_q).await
    });

    tokio::select! {
        _ = tokio::signal::ctrl_c() => {
            // trigger graceful shutdown; waits up to shutdown_timeout for in-flight reqs
            handle.stop(true).await;
            // ensure the server task is fully finished
            let _ = (&mut srv_task).await;
        }
        // react if the server crashes or exits early
        res = &mut srv_task => {
            eprintln!("server exited early: {:?}", res);
        }
    }

    cancel.cancel(); // tell the queue to finish and exit
    // ensure nothing is left running
    let _ = queue_task.await;
    Ok(())
}
I used Next.js Markdown (MDX) to create the page and then printed it via the browser.
There I can render React components as well, so I wrote the React component below.
export default function PageBreak() {
return <div className="break-after-page" />
}
Then, whenever I want to add a page break, I just add <PageBreak />.
More info here - https://nextjs.org/docs/app/guides/mdx
PS: The project also used Tailwind CSS.
I don't know if this still helps, but I had the same error, and it was because of the endpoint: I had ended it with "message", but it's "messages" :)
I was reviewing your code and found that cv2.findContours isn't a good way to detect the hand in image 1, because it detects too much detail rather than the entire object. That is why image 2 has less detail.
import cv2
import sys

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (15, 15), 0)
edged = cv2.Canny(gray, 5, 100)
contours, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    max_area = max(cv2.contourArea(c) for c in contours)
    print(max_area)
else:
    print(0)
It is better to measure the area of the largest contour like this; note also that image 2 generates a larger contour value.
I recommend using a model like Google's MediaPipe for more robust hand detection with landmarks instead of just using contours.
This video could explain more about this topic and is good for future projects using an AI model, though it was complicated for me too since I don't speak Russian, by the way.
It’s not clear what result (answer set) you expected.
From a syntax point of view, your second approach looks more “correct” if that makes sense:
It assigns for each protein/1 a choice/4 with 3x food/1.
But it does not define any specific relationship among those 3x food/1, e.g. these 3x food/1 can be the same, and can be the same across different proteins, as there are no further rules defined.
(Your first approach allows an empty answer set as result. While you assign a choice/4 for any combination of protein/1 and 3x food/1.)
I recommend you check previous Stack Overflow posts on Clingo, as well as some introductory PDFs on the topic, to better understand the syntax and "logic".
Good luck!
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@digitalocean" />
<meta name="twitter:title" content="Sammy the Shark" />
<meta name="twitter:description" content="Senior Selachimorpha at DigitalOcean" />
<meta name="twitter:image" content="https://html.sammy-codes.com/images/large-profile.jpg" />
The code didn't run from my PC, but I was reviewing igrid and saw that they explain how the DLMS structure works and how the AARQ is built, so I believe each field must be inserted at an exact position.
Comparing with your original code, I see that in the first part, where the password is, you do this:
aarq = AARQ_REQUEST.replace(
    bytearray.fromhex("38 38 39 33 35 38 36 30"),
    bytearray(SERIAL_NUMBER, 'ascii')
)
but in MicroPython it is hard-coded:
b'00053346'
So according to DeepSeek, after doing some testing, the recommendation is not to hard-code it, because otherwise it generates different frames:
for i, (p, m) in enumerate(zip(aarq_python, aarq_micropython)):
    if p != m:
        print(f"Byte {i}: Python={p:02X}, MicroPython={m:02X}")
From the code I tested and reviewed, I believe the error starts before the \xBE\x10 block, because Python does not recalculate the length, and it does not match MicroPython's after inserting the password.
The password may be the same, but the packet is not, and the meter rejects the AARQ. I'm not sure of the solution, but based on DeepSeek's recommendations (since I don't have the complete code), the fix could be to build the packet with the correct length after inserting the password.
Yes, it is useful for you, but it is very difficult for us because we don't know this process or what this website does; we only use this link and website to get information about our department.
The solution is to make a list of available drives/OSDs and make sure the boot drive is excluded. My solution should work with both SATA and NVMe drives, but since I only have NVMe drives in my machines, I cannot test the SATA solution. Furthermore, all available drives will be seen as Ceph drives. This may not be viable for everyone. The full code is included under the FINAL EDIT comment.
Wrap the Swiper in a grid
<div className="grid">
<Swiper>...</Swiper>
</div>
I just encountered this error, and the cause was that I had two firebase tools installed: one through Homebrew and the other local. You can see if this is the case for you by running these commands in a terminal:
which firebase
npm list -g firebase-tools
If the output of those commands is different, you have the same problem I did.
In my case, removing the local library solved the issue:
rm /Users/mycomputer/.local/bin/firebase
You should also make sure you are on the latest version. Compare the output of:
firebase --version
with the latest release on GitHub: https://github.com/firebase/firebase-tools
I solved the question myself in my first comment.
{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": "",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
I don't have the exact answer, but I have created an app to batch-create folders from a list.
Just select your file destination, paste in your list, and press Create Folders. Easily create hundreds of folders in seconds.
Check out Multiple Folder Creator on the App Store.
Note for future readers: if you are using Rocket version 0.5.0 (Nov 17, 2023) or later, the Outcome::Failure variant was removed in favor of Outcome::Error.
Writing this for anyone who comes here due to the error:
error[E0599]: no variant or associated item named `Failure` found for enum `Outcome` in the current scope
--> src/guards/auth_guard.rs:40:32
|
40 | Err(_) => Outcome::Failure((Status::Unauthorized, ())),
| ^^^^^^^ variant or associated item not found in `Outcome<_, (Status, _), Status>`
So you have to use Outcome::Error, like (for example):
Err(_) => Outcome::Error((Status::Unauthorized, ()))
Supporting document: https://github.com/rwf2/Rocket/blob/master/CHANGELOG.md?utm_source=chatgpt.com#:~:text=Outcome%3A%3AFailure%20was%20renamed%20to%20Outcome%3A%3AError.
When you're copying a GitHub project to your computer, you have two main choices for the link: HTTPS or SSH.
If you pick HTTPS, it's like using a front door everybody has a key for. It's super easy to start since you just use your GitHub username and password or a token, but every now and then, it will ask you to log in again. Plus, it works anywhere, even if you're on tricky networks or behind firewalls.
Now, SSH is a bit like having a special VIP pass. You set it up once by creating a key (kind of like a secret handshake), and after that, pushing or pulling changes is smooth sailing without more passwords. It's more secure and faster once set up, but it's a bit trickier to get going: you have to generate those keys and add them to your GitHub profile. Also, corporate networks sometimes block the port SSH uses, so that can get in your way.
So if you're new or just wanna grab stuff without fuss, go with HTTPS. But if you plan on working on projects a lot, or want that smoother, password-free flow, SSH is your friend.
In simple terms, HTTPS is quick and easy, SSH is secure and convenient.
A summary of SSH and HTTPS uses in GitHub, for easy understanding, by Sai Karthik Motapothula.
Anyone arriving here in the future with a similar issue where you must support multiple frameworks and one of those is framework 4.0 (for very old vendor supplied application running on Windows XP in my case), the link to Thomas Levesque solution (https://thomaslevesque.com/2012/06/13/using-c-5-caller-info-attributes-when-targeting-earlier-versions-of-the-net-framework/) provided by others above works perfectly and seems to be the most straight forward solution to me since both newer and older frameworks can now use the attributes with no code differences.
I put his stub definitions into its own class file and surrounded it with the #if NET40 compiler directive so those stubs will only be used on a NET40 version compile (since I support multiple frameworks). A framework 4.0 version of each app can now access the [Caller...] attributes (I am only using [CallerMemberName], but I have no doubt the other stubs function too) and the expected values are populated into your variables. Thanks to Thomas and the others that left the link!
Instead of using Microsoft’s own module, you can use the open-source Get-AzVMSku module, which allows you to browse every Azure Gallery Image publisher, select offers, versions, and see VM sizes available to your subscription, along with quotas.
The module is available on the PowerShell Gallery:
Get-AzVMSku on PowerShell Gallery
I’ve also written a detailed guide explaining how it works and how to use it:
Browse PowerShell Azure VM SKUs & Marketplace Images with Get-AzVMSku: https://www.codeterraform.com/post/powershell-azure-vm-skus
I would suggest another approach, as I am facing a similar issue in a SingStar-like app I am coding.
I am considering creating a custom audio processing node that counts the actual buffer frames passing through it (an AudioWorkletProcessor, maybe). It could provide a method giving the actual played time based on the sample count and sample time resolution.
So you would just connect those extra nodes right after the nodes you want to measure.
In my case, in a monorepo, I had different versions of type-graphql installed in the server app and in a library containing model classes.
User-uploaded SVG files may embed malicious code or include it via an external reference. When a malicious SVG file is published on your website, it can be used to exploit multiple attack vectors, including:
- <image>, <script>, or <use> tags to send sensitive information to attacker-controlled servers.
- <image xlink:href="file:///..."> or <use> references to attempt to read local or server-side files.

Indeed, wrapping the image in an <img> tag is one of the three measures you can take.
The other measures are described below, with more detailed guidance on each:
Use the <img> Tag
Instead of directly embedding an SVG using <svg> or <object>, use the <img> tag to render it:
<img src="safe-image.svg" alt="Safe SVG">
The <img> tag ensures that the SVG is treated as an image and prevents JavaScript execution inside it, but it doesn't remove malicious code from the SVG file.
Enabling the Content Security Policy (CSP) HTTP response header also prevents JavaScript execution inside the SVG.
For example:
Content-Security-Policy: default-src 'none'; img-src 'self'; style-src 'none'; script-src 'none'; sandbox
Which applies the following policy:
| Directive | Purpose |
|---|---|
| default-src 'none' | Blocks all content by default. |
| img-src 'self' | Allows images only from the same origin. |
| style-src 'none' | Prevents inline and external CSS styles. |
| script-src 'none' | Blocks inline and external JavaScript to prevent XSS. |
| sandbox | Disables scripts, forms, and top-level navigation for the SVG. |
Strip potentially harmful elements like <script>, <iframe>, <foreignObject>, inline event handlers (e.g. onclick), or inclusions of other (potentially malicious) files.
Examples of libraries for SVG sanitization:
1 I started the mentioned Java sanitizer project, as I could not find any solution for Java.
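To make the idea concrete, here is a minimal Python sketch of what a sanitizer does (my own illustration, NOT a production sanitizer; real libraries cover many more attack vectors such as CSS, data: URIs, and external references):

```python
# Toy sanitizer: remove dangerous child elements and on* event-handler
# attributes from an SVG document. Illustration only, not production code.
import xml.etree.ElementTree as ET

DANGEROUS_TAGS = {"script", "iframe", "foreignObject"}

def sanitize_svg(svg_text):
    root = ET.fromstring(svg_text)
    for element in root.iter():
        # snapshot the children so removal is safe while walking the tree
        for child in list(element):
            tag = child.tag.rsplit("}", 1)[-1]  # drop the namespace prefix
            if tag in DANGEROUS_TAGS:
                element.remove(child)
        for attr in list(element.attrib):
            if attr.lower().startswith("on"):   # onclick, onload, ...
                del element.attrib[attr]
    return ET.tostring(root, encoding="unicode")

dirty = '<svg xmlns="http://www.w3.org/2000/svg"><script>alert(1)</script><rect onclick="evil()"/></svg>'
print(sanitize_svg(dirty))
```

A real sanitizer should also handle nested payloads, external references, and attribute values; use one of the maintained libraries for anything user-facing.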
Render the SVG server-side to a raster format, like PNG. This may protect visiting users, but could introduce vulnerabilities on the rendering side at the server, especially when using server-side JavaScript (like Node.js).
Sorry for bothering everyone.
I am not sure why the summary statistics of r_ga and insideGARnFile_WithCoord are different, but the output graphics looked very similar. I will assume the slight boundary mismatch is due to a coordinate reference system difference/transformation, and I will consider the problem solved for now. If you have any insights on the boundary mismatch, please leave your comments here.
Much appreciated!
(Figures: summary statistics and rainfall output of insideGARnFile_WithCoord; summary statistics and rainfall output of r_ga.)
dchan
For me, the problem was that when I did a lookup for Symbol and got the instrument value, the datatype of the instrument was int64, which is a NumPy-based value, but the socket API accepts int.
So converting the int64 to int fixed the issue for me.
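For illustration, a minimal Python sketch of the cast (the variable names and the value are mine, not from the original code):

```python
# Minimal sketch: a value looked up from a pandas/NumPy table comes back
# as numpy.int64, which some APIs reject; cast it to a plain Python int.
import numpy as np

instrument = np.int64(256265)      # e.g. the value returned by the lookup
print(type(instrument).__name__)   # int64

token = int(instrument)            # plain Python int, accepted by the API
print(type(token).__name__)        # int
```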
@STerliakov gets the credit for the answer:
Apparently, this section of the code was added in its entirety so the user would see the data for troubleshooting purposes.
I deleted it entirely, and that fixed the issue:
requirePOST();
[$uploaded_files, $file_messages] = $this->saveFiles();
Flash::set('debug', 'Application form data '
    . json_encode($_POST, JSON_PRETTY_PRINT)
    . "\nUploaded files:"
    . json_encode($uploaded_files, JSON_PRETTY_PRINT)
    . "\nFile messages:"
    . json_encode($file_messages, JSON_PRETTY_PRINT)
);
For anyone else experiencing the issue, here is the entire working function (with the INSERT switched to a prepared statement so user input cannot inject SQL):
public function submit() {
    include([DB ACCESS FILE]);
    $fp1 = json_encode($_POST, JSON_PRETTY_PRINT);
    $fp2 = json_decode($fp1);
    $owner_first = $fp2->owner_first;
    $owner_last = $fp2->owner_last;
    // Prepared statement: placeholders instead of interpolated values
    $stmt = mysqli_prepare($dbc, "INSERT INTO cnvloans (owner_first, owner_last) VALUES (?, ?)");
    mysqli_stmt_bind_param($stmt, 'ss', $owner_first, $owner_last);
    mysqli_stmt_execute($stmt);
    redirect('/page/confirm');
}
I deployed my backend on Vercel and tried using the URL there as well, but I keep getting errors like 500, 401, 400 with Axios — when I fix one, another appears. However, the code runs perfectly in Postman and Thunder Client, but when I run it on my mobile, these errors keep showing up. If you have solved this issue before, please guide me as well.
There is the DrusillaSelect library for furthest-neighbor search that you can try, from this paper.
As an example, an Ethereum address with a private key
0x91b005cb6b291f67647471ad226b937657a8d7d6
pvk 000000000000000000000000000000000000000000000000007fa9e2cd6d52fe
check the address and good luck to you
Removing --turbopack from the dev script fixed the issue.
Before
"dev": "next dev --port 3001 --turbopack"
After
"dev": "next dev --port 3001",
OnDrawColumn and OnDrawDataCell have a TGridDrawState State parameter.
OnDataChange is on the DataSource, as was answered.
And yes, you can't control TDBGrid unless you override its ancestor methods. That's why it is rarely used in real-world tasks.
Whenever it is necessary to insert a frame into a stream, it is much better to do it before the encoder: just send the previous raw frame to the encoder again, or blend the previous and next frames. Dirty tricks with an already-encoded bitstream may have negative side effects: a broken HRD model, a broken picture order count sequence, etc.
2025 update:
For people who are confused about why there is no ID Token checkbox: it is hidden unless you add a correct Redirect URI. You need to add one for the Web platform type.
After that, in the Settings tab, you will be able to see the ID tokens checkbox, and checking it fixed the problem for me.
I didn’t get the exact same error as you, but my setup is very similar, so here are my two cents:
Solution
│
├── MyApp // Server project. Set InteractiveWebAssembly or InteractiveAuto globally in App.razor
│
├── MyApp.Client // Contains Routes.razor
│
└── SharedRCL // Contains Counter.razor page (@page "/counter") without setting any render mode
In Routes.razor, make sure the Router is aware of routable components in the Razor Class Library (RCL) by adding the RCL's assembly:
<Router AppAssembly="@typeof(Program).Assembly"
AdditionalAssemblies="new[] { typeof(Counter).Assembly }"> @* <-- This line here *@
...
</Router>
Depending on your setup, you might also need to ensure that the server knows about routable components in the RCL.
In MyApp/Program.cs, register the same assembly when mapping Razor components:
app.MapRazorComponents<App>()
    .AddInteractiveWebAssemblyRenderMode()
    .AddAdditionalAssemblies(typeof(MyApp.Client._Imports).Assembly)
    .AddAdditionalAssemblies(typeof(Counter).Assembly); // <-- This line here
Can you try this? The idea is to approach the search differently.
First, retrieve the private IP address of the Private Endpoint through its network interfaces (NICs).
Then identify the private DNS zones linked to the virtual network (VNet) where this Private Endpoint is connected (via the private DNS zone links).
Within those private DNS zones, search for DNS records that match those private IP addresses; these are the FQDNs.
You can do this easily with PowerShell or the Azure Python SDK.
Here is my workaround for blank-icon issues on the taskbar.
1. Create a shortcut on the desktop and open it.
2. Drag this shortcut to the taskbar; this will pin it onto the taskbar.
3. Right-click the icon and un-pin it. Done!
You can create an SPA using Bootstrap and jQuery/vanilla JS. For that, you must have a strong understanding of vanilla JS or jQuery.
Go to Preferences > Run/Debug > Perspectives. In the "Application Types/Launchers" box, select "STM32 C/C++ Application" and set Debug: None, Run: None (not Debug: Debug).
Ravi's bicycle had rusted and the brakes didn't work either. Still, he rode ten kilometers to school every day. His friends made fun of him, but he did not lose heart. When he came first in his studies, those same friends said, "Your bicycle was broken, not your dreams."
The confidence intervals for versicolor and virginica correspond to their reported estimates of 0.93 and 1.58. That is, the offset of versicolor is estimated as 0.93 and the confidence interval spans 0.73 to 1.13. To get the estimate of the mean of versicolor, you would add the intercept to all of those numbers: a mean of 5.01 + 0.93 with the lower confidence limit at 5.01 + 0.73 and the upper confidence limit at 5.01 + 1.13.
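If it helps, the arithmetic can be sketched in a few lines of Python (the numbers are the estimates quoted above; the variable names are mine):

```python
# Recovering a group mean and its CI from treatment-contrast (offset) output.
intercept = 5.01              # estimated mean of the reference level
offset = 0.93                 # versicolor offset relative to the reference
ci_low, ci_high = 0.73, 1.13  # confidence interval for the offset

# Add the intercept to the offset and to both CI limits:
mean_versicolor = round(intercept + offset, 2)
ci_mean = (round(intercept + ci_low, 2), round(intercept + ci_high, 2))
print(mean_versicolor)  # 5.94
print(ci_mean)          # (5.74, 6.14)
```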
Store in three separate columns; it's much better for maintenance and data retrieval (note that they are three very distinct pieces of data, so putting them together will make your life harder).
If you want to have an easy way to always have the [Country ISO]-[State ISO]-[City Name] string in hand, you can create an additional generated column.
Example (Postgres):
beautified_code varchar GENERATED ALWAYS AS (CONCAT_WS('-', country_iso, state_iso, city_name)) STORED
In this column, the three values will always be concatenated together automatically during entry creation and update. So you don't need to worry about maintaining consistency in it.
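As a rough illustration of what the generated column produces, here is a plain-Python stand-in for CONCAT_WS (which, like the SQL function, skips NULL arguments):

```python
# Plain-Python stand-in for SQL's CONCAT_WS: join the parts with the
# separator, skipping NULL (None) values, as the database function does.
def concat_ws(sep, *parts):
    return sep.join(p for p in parts if p is not None)

print(concat_ws("-", "US", "CA", "Los Angeles"))  # US-CA-Los Angeles
print(concat_ws("-", "US", None, "LA"))           # US-LA (NULL skipped)
```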
I have the same question. I have a dataset containing medical variables used to determine whether a patient has to receive outpatient care or not.
The target variable is SOURCE:
0 for outpatient care
1 otherwise
I'm using the supervised learning method glm (logistic regression) from the caret package in R. It predicts the probability that an individual belongs to the positive class. ChatGPT says the positive class is the second level, but I don't know how I can be sure that the model predicts p(k="1"|xi).
glm gives only probabilities as results when using the predict function, so I must convert the probabilities to labels (0 or 1) according to a threshold. So are these probabilities p(k = first level of the factor variable)?
Try renaming your hivepress-marketplace directory to hivepress, or set a higher priority for add_action (3rd parameter).
In some cases of API development, we want to ignore some fields in the JSON response. In such cases we use the @JsonIgnore annotation, therefore, it was not designed to solve the Infinite Recursion problem.
In my projects, I use the @JsonIgnore annotation with the @ManyToMany bidirectional relationship between entities, and I use @JsonManagedReference and @JsonBackReference with @ManyToOne and @OneToMany cases.
I tried everything on the Mac, but "sudo su -" finally worked; it gives a root shell with full permissions.
Never mind, I found a solution almost immediately after posting this question; however, I am leaving this here for future visitors. I fixed the problem by simply creating a temporary repository with the modified path (Git) selected and publishing it to GitHub, and now the new path is saved. I don't know why it didn't save before, but this solution should work.
The Empty Views Activity doesn't offer Java either, nor does No Activity; Java is completely gone from Android Studio. Now I need to learn a new language. I hate this; I haven't been programming since 2021, when I finished my computer science degree. Now I'm trying to knock the rust off, but apparently I have to start from the beginning. Kotlin, here I come!
As suggested by @furas, we can use git url.
ubuntu@ubuntu:~/hello$ poetry add scikit-learn git+https://github.com/scikit-learn-contrib/imbalanced-learn.git
Creating virtualenv hello in /home/ubuntu/hello/.venv
Using version ^1.7.1 for scikit-learn

Updating dependencies
Resolving dependencies... (0.1s)

Package operations: 6 installs, 0 updates, 0 removals

Writing lock file
ubuntu@ubuntu:~/hello$
Same here. It worked well at first for several attempts (I don't know how many). Is there a rate limit?
It looks like the base path is not set correctly. You can try the following:
import os
import sys
sys.path.append(os.path.abspath("."))
sys.path.append("../")
# Then try to import the src modules
from src.logger import Logging
Here we are setting up the path manually, but it should work.
MongoDB collections don't have a schema, but if you want to read one as a Spark DataFrame, all rows must have the same schema. So fieldA can't be a string and a JSON object at the same time. If it is not present, you shouldn't create an empty string; just drop the field or use null.
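As a sketch of that advice, assuming you pre-process the documents in Python before handing them to Spark (the field name fieldA is from the answer; the normalize helper is hypothetical):

```python
# Hypothetical pre-processing step: make every document carry the same
# type for fieldA (a string, null, or absent), never a nested object.
def normalize(doc):
    if "fieldA" in doc and not isinstance(doc["fieldA"], str):
        # inconsistent type: use null rather than inventing an empty string
        doc["fieldA"] = None
    return doc

docs = [{"fieldA": "ok"}, {"fieldA": {"nested": 1}}, {}]
print([normalize(d) for d in docs])  # [{'fieldA': 'ok'}, {'fieldA': None}, {}]
```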
I found the answer here: CosmosDB Emulator Linux Image - Mount Volume
When I tried it, it seemed to work
services:
cosmosdb:
volumes:
- cosmosdrive:/tmp/cosmos/appdata # <-- pointing to this location did it for me
volumes:
cosmosdrive:
import React from 'react';

const videos = [
  { id: 'dQw4w9WgXcQ', title: 'Rick Astley - Never Gonna Give You Up' },
  { id: '3JZ_D3ELwOQ', title: 'Maroon 5 - Sugar' },
];

export default function VideoLinks() {
  return (
    <div>
      <h2>YouTube Clips</h2>
      <ul>
        {videos.map(video => (
          <li key={video.id}>
            <a
              href={`https://www.youtube.com/watch?v=${video.id}`}
              target="_blank"
              rel="noopener noreferrer"
              style={{ color: 'blue', textDecoration: 'underline' }}
            >
              {video.title}
            </a>
          </li>
        ))}
      </ul>
    </div>
  );
}
Yes, I know it's a really old thread, but I have a question.
The arc works fine so far, but I would like to display a value in the middle of the gauge. How can I achieve this?
There is a well tested implementation in Android. You can even package it into your own Java project like this.
Since Spring Boot uses Logback for logging, you can overwrite the default Logback pattern with your own PatternLayout, masking your password properties. This approach also applies to any other log output (e.g. toString methods) matching defined regex patterns.
See https://www.baeldung.com/logback-mask-sensitive-data for an example.
Is it required to call the Python script from C# only?
Can't you directly expose the Python code as an API endpoint and then call that endpoint from C#? It would be easier.
There was nothing special in the template file except the variable "unit" holding the data and a for loop distributing its elements into one HTML table.
Here is how the template looks:
<tbody>
  {% for element in unit %}
  <tr>
    <td>{{ element[0] }}</td>
    <td>{{ element[1] }}</td>
    <td>{{ element[2] }}</td>
  </tr>
  {% endfor %}
</tbody>
Thank you all who wrote suggestions. I found the solution: define an empty list such as unit_result, populate it with a loop, and send it to the template.
# prepare for the table
unit_result = []
for i in range(len(unit)):
    unit_result.append((unit[i][0], unit[i][1], unit[i][2]))
return render_template('analysis.html', unit_result=unit_result)
The new template will include the following line instead of the older one:
{% for element in unit_result %}
The correct solution involves using the Windows (COM) APIs or UI Automation.
SHELL
Type shellAppType = Type.GetTypeFromProgID("Shell.Application");
object shellApp = Activator.CreateInstance(shellAppType);
I also faced this issue a week ago, so I used the nullish coalescing operator (??) to provide a default string.
const value: string = maybeString ?? "";
This ensures value is always a string, even if maybeString is undefined.
FWIW, MDN nowadays says in a somewhat prominent box:
The initial value should not be confused with the value specified by the browser's style sheet.
And you seem to want the latter.
https://github.com/bandirom/fastapi-socketio-handler
I created a small library to easily connect Socket.IO to FastAPI.
Feel free to discuss the approach used in the lib.
When you run the application for the first time, it downloads all dependencies from the remote repositories to the local repository. This is a time-consuming process, so the application may face a delay. From the second start onward, it loads libraries from the local repository and only the missing ones from the remote, so it needs just a few seconds to organize the whole process.
Removing unwanted dependencies from your pom.xml can significantly improve the performance of your application.
You can resolve this issue with local builds:
Clone core via SSH (no token), then run:
mvn clean install
This installs core into your local Maven repository (~/.m2/repository).
Now api and web can reference it in pom.xml without touching GitLab or HTTP tokens.
You can cd into another directory. For example, copy the address from the address bar in your file explorer, paste it into your PowerShell, and type the change directory command, as seen below:
cd 'D:\Users\username\Downloads'
I had this issue too, and when I got the (SA) issue I knew the problem. I tried a VPN but it still didn't work, so I turned my PC off and then back on, and it works well now.
I found out the issue was that the feature set I had defined was not enough to let the parser choose a different action than starting an arc. Once I added a feature indicating whether an arc had been started, the parser starts and ends arcs until the stop condition is reached. The code looks somewhat different from the example I first posted, but it is similar. For example, the while-loop in the parse() method continues indefinitely (while True:), but there is a break condition that comes into effect when the number of arcs reaches the number of tokens in the sentence (since the number of arcs, including the root arc, is the same as the number of tokens). Note that I also switched from the perceptron code to an SVM from the scikit-learn library.
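A stripped-down illustration of that loop shape (the classifier and feature details are omitted; the lambda below is a dummy attach-to-previous policy, not the actual trained model):

```python
def parse(tokens, choose_head):
    """Build one arc per token (root arc included) and
    break once the arc count equals the token count."""
    arcs = [(None, 0)]  # root arc for the first token
    i = 1
    while True:
        if len(arcs) == len(tokens):
            break  # as many arcs as tokens: parse is complete
        arcs.append((choose_head(i), i))
        i += 1
    return arcs

# dummy policy: every token attaches to the previous one
arcs = parse(["root", "the", "cat", "sat"], lambda i: i - 1)
```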
A pure Python library, with no dependencies, to calculate business days between dates, quickly and efficiently. Ideal for Python developers and data scientists who value simplicity and performance.
https://github.com/cadu-leite/networkdays
Step-by-Step Internal Flow
BEGIN → The database assigns a transaction ID (TID) and starts tracking all operations.
Read/Write Operations → Data pages are fetched from disk into the Buffer Pool.
Locking / MVCC →
Lock-based systems: lock rows/tables to ensure isolation.
MVCC systems: keep multiple versions of data so reads don’t block writes.
Undo Logging → Before any change, the old value is written to an Undo Log.
Change in Memory → Updates are made to in-memory pages in the buffer pool.
Redo Logging (WAL) → The intended changes are written to a Write-Ahead Log on disk.
Commit → The WAL is flushed to disk, guaranteeing durability.
Post-Commit → Locks are released, and dirty pages are eventually written to disk.
Rollback (if needed) → Use the Undo Log to restore old values.
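The steps above can be condensed into a toy in-memory sketch (not how any real engine is implemented; just the undo/redo bookkeeping made concrete, with locking/MVCC deliberately left out):

```python
class ToyDB:
    def __init__(self):
        self.pages = {}     # "disk" state
        self.buffer = {}    # in-memory pages (buffer pool)
        self.undo_log = []  # old values, used by rollback
        self.wal = []       # redo log (write-ahead log)

    def begin(self):
        self.buffer = dict(self.pages)  # fetch pages into the buffer pool
        self.undo_log.clear()
        self.wal.clear()

    def write(self, key, value):
        self.undo_log.append((key, self.buffer.get(key)))  # undo: log old value first
        self.wal.append((key, value))                      # redo: log intended change
        self.buffer[key] = value                           # change in memory only

    def commit(self):
        for key, value in self.wal:  # "flush" the WAL, then persist to "disk"
            self.pages[key] = value

    def rollback(self):
        for key, old in reversed(self.undo_log):  # restore old values, newest first
            if old is None:
                self.buffer.pop(key, None)
            else:
                self.buffer[key] = old
```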
Read More: https://jating4you.blogspot.com/2025/08/understanding-internal-working-of.html
That's called inline mode.
Here are the links in the official API reference:
About inline bot: https://core.telegram.org/bots/inline
API reference: https://core.telegram.org/bots/api#inline-mode
I've done very basic image creation with PIL in Mojo. (Running on Fedora Linux, Mojo installed via pip into Python venv.) My program imports PIL.Image, creates a new image, and initialises the pixel data from a Mojo List[UInt32] converted via the Mojo Python.list() type conversion.
If you are using a newer version of Keycloak, specifically 26, then see:
https://www.keycloak.org/server/containers#_importing_a_realm_on_startup
keycloak:
  image: quay.io/keycloak/keycloak:26.1.4
  command: start-dev --import-realm
  ports:
    - "8081:8080"
  environment:
    KC_BOOTSTRAP_ADMIN_USERNAME: admin
    KC_BOOTSTRAP_ADMIN_PASSWORD: admin
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
    KC_DB_USERNAME: keycloak
    KC_DB_PASSWORD: keycloak
  volumes:
    - keycloak_data:/opt/keycloak/data
    - ./compose/keycloak/realms:/opt/keycloak/data/import
  networks:
    - keycloak
The solution for controlling USB power on/off is in my video: https://www.youtube.com/watch?v=CTlXuiL_ARM&t=5s
The error happens because HSQLDB's SQL engine can't find your class in its own Java classpath. Fix it by adding the JAR via SET DATABASE JAVA CLASSPATH, restarting, and ensuring the method is public static and not in a nested Boot JAR.
Please focus on bulkifying, like @eyescream did in his example.
I had to build on top of the information provided, as I saw no clear rule for tokens.
This worked for me:
delims^=^>^<^"^ tokens^=1^,^2^,^3^,^4^,^5
I was unable to make this work after changing the order to the classical: tokens^=1^,^2^,^3^,^4^,^5 delims^=^>^<^"^
I have not yet been able to include blank spaces in delims using this approach. I tried delims^=^>^<^"^ ^ to no avail.
Some proposals from other answers did not work for me.
If you can recompile nginx, you might consider using the module I developed.
https://github.com/HanadaLee/ngx_http_request_cookies_filter_module
location /api {
    clear_request_cookie B;
    proxy_set_header $filtered_request_cookies;
    proxy_pass <upstream>;
}
Basically it means your project isn't on the "old" stable Next.js behavior anymore.
In older versions of the Next.js App Router, API routes like this worked:
export async function PUT(request: Request, { params }: { params: { userID: string } }) {
  console.log(params.userID); // ✅ direct access
}
because params was just a plain object.
But in newer / canary (experimental) versions of Next.js, they changed it so that
context.params // ❌ not a plain object anymore
is actually a Promise that you need to await:
const { userID } = await context.params;
Why the change?
This is part of their new streaming / edge runtime updates. It allows Next.js to lazily fetch dynamic route parameters for certain deployments (especially when running API routes at the Edge).
That's why your build compiler is complaining:
Type '{ userID: string; }' is missing ... Promise methods...
It's telling you: "I was expecting a Promise here, not an object."
In case you want to read/write values from/to Settings.Global, you should use the public methods documented in the Settings.Global API reference.
Just don't exec shell commands in your Android application.
pip install pylint-django --upgrade
Then, in your project folder, create a .vscode/settings.json file (or CMD+SHIFT+P: Open Workspace Settings (JSON)).
In the JSON file, add:
"pylint.args": ["--load-plugins", "pylint_django", "--django-settings-module=config.settings"]
You can't directly "call" a PHP function from JavaScript without refreshing, because PHP runs on the server and JavaScript runs in the browser. The usual workaround is to send an HTTP request (e.g. with fetch or XMLHttpRequest) to a PHP script and use its response.
When registering your class, use GDREGISTER_RUNTIME_CLASS(classname) instead of GDREGISTER_CLASS(classname) to make it run only when the game is running, and not in the engine itself.
Google Sheets unfortunately added a workaround to your workaround; now your function does not work.
Just remove the
display:block; width:100%;
so your style looks like this:
table {
  border: 2px solid #f00;
}
caption {
  background: #0f0;
}
As stated in the official documentation, for next-auth@4, providers require an additional .default to work in Vite. This will no longer be necessary in next-auth@5 (authjs).
To remove the syntax error for using .default, add a @ts-expect-error comment:
import CredentialsProvider from "next-auth/providers/credentials";
import { NuxtAuthHandler } from "#auth";

export default NuxtAuthHandler({
  secret: process.env.AUTH_SECRET,
  providers: [
    // @ts-expect-error Need to use .default here for it to work during SSR. May be fixed via Vite at some point
    CredentialsProvider.default({
      id: "credentials",
      name: "Credentials",
      type: "credentials",
      credentials: {
        email: { label: "Email", type: "text", placeholder: "[email protected]" },
        password: { label: "Password", type: "password" },
      },
      async authorize(credentials: { email: string; password: string }) {
        const res = await fetch(
          `${process.env.NUXT_PUBLIC_API_URL}/auth/login`,
          {
            method: "POST",
            body: JSON.stringify({
              email: credentials?.email,
              password: credentials?.password,
            }),
          }
        );
        console.log("res", res);
        // handle the rest
      },
    }),
  ],
});
You do not need to use Docker. You can either test parts of the program with unit tests, or write an alternative main with a build tag which creates the context and reads the event from e.g. JSON files.
You won't get the correct paths or IAM restrictions, but if you do not work on the /tmp filesystem, that does not matter.