You'll need to create a page template to display the Books, e.g. archive-books.php.
Read the WordPress Theme Developer documentation: the basics and, specifically, the section on the template hierarchy:
https://developer.wordpress.org/themes/basics/
https://developer.wordpress.org/themes/basics/template-hierarchy/
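As a rough sketch (illustrative markup only; the loop and template tags below are standard WordPress, but adapt them to your theme, and note this fragment assumes the WordPress runtime), archive-books.php could look like:

```php
<?php
/* archive-books.php: loaded automatically for the archive of the
   "books" post type, per the template hierarchy. */
get_header();

if ( have_posts() ) {
    while ( have_posts() ) {
        the_post();
        the_title( '<h2>', '</h2>' );
        the_excerpt();
    }
    the_posts_pagination();
} else {
    echo '<p>No books found.</p>';
}

get_footer();
```

This only takes effect if the post type was registered with has_archive enabled.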
Here's an attempt to do it in SwiftUI based on @Fattie's answer.
This answer uses the native ScrollView with the onScrollGeometryChange and scrollTo(y:) APIs.
Unfortunately, the movement is jittery. I'm not sure whether the jitteriness comes from a weakness in my code that could be optimized away, or from a weakness in SwiftUI compared to UIKit that we should report to Apple (and perhaps the reason the Apple Calendar app was built in UIKit rather than SwiftUI).
I also simplified the code from the question down to the minimal code needed to focus on the anchoring part.
Open to suggestions and improvements.
import SwiftUI
struct ScrollData: Equatable {
let height: CGFloat
let offset: CGFloat
let inset: CGFloat
}
final class HourItem: Identifiable {
let id: UUID
let hour: Int
init(hour: Int) {
self.id = UUID()
self.hour = hour
}
}
struct TestView: View {
let minHourHeight: CGFloat = 50
let maxHourHeight: CGFloat = 400
@State private var isZooming: Bool = false
@State private var previousZoomAmount: CGFloat = 0.0
@State private var currentZoomAmount: CGFloat = 0.0
@State private var position: ScrollPosition = ScrollPosition()
@State private var scrollData = ScrollData(height: .zero, offset: .zero, inset: .zero)
@State private var anchor: CGFloat = 0
@State private var height: CGFloat = 0
private var zoomAmount: CGFloat {
1 + currentZoomAmount + previousZoomAmount
}
private var hourHeight: CGFloat {
100 * zoomAmount
}
private let currentTime: Date = Date.now
private let hourItems = (0..<25).map {
HourItem(hour: $0)
}
var body: some View {
ScrollView {
VStack(spacing: 0) {
ForEach(hourItems) { hourItem in
HourMarkView(
hour: hourItem.hour,
height: hourHeight,
currentTime: currentTime
)
}
}
.simultaneousGesture(magnification)
}
.scrollPosition($position)
.onScrollGeometryChange(for: ScrollData.self) { geometry in
ScrollData(
height: geometry.contentSize.height,
offset: geometry.contentOffset.y,
inset: geometry.contentInsets.top
)
} action: { oldValue, newValue in
if oldValue != newValue {
scrollData = newValue
}
}
}
private var magnification: some Gesture {
MagnifyGesture(minimumScaleDelta: 0)
.onChanged(handleZoomChange)
.onEnded(handleZoomEnd)
}
private func handleZoomChange(_ value: MagnifyGesture.Value) {
if !isZooming {
anchor = value.startAnchor.y
height = scrollData.height
isZooming = true
}
let gestureScreenOffset = value.startLocation.y - (scrollData.offset+scrollData.inset)
let newZoomAmount = value.magnification - 1
currentZoomAmount = clampedZoomAmount(newZoomAmount)
position.scrollTo(y: (anchor*scrollData.height) - gestureScreenOffset)
}
private func handleZoomEnd(_: MagnifyGesture.Value) {
isZooming = false
previousZoomAmount += currentZoomAmount
currentZoomAmount = 0
}
private func clampedZoomAmount(_ newZoomAmount: CGFloat) -> CGFloat {
if hourHeight > maxHourHeight && newZoomAmount > currentZoomAmount {
return currentZoomAmount - 0.000001
} else if hourHeight < minHourHeight && newZoomAmount < currentZoomAmount {
return currentZoomAmount + 0.000001
}
return newZoomAmount
}
}
struct HourMarkView: View {
var hour: Int
var height: CGFloat
var currentTime: Date
var body: some View {
HStack(spacing: 10) {
Text(formatTime(hour))
.font(.caption)
.fontWeight(.medium)
.frame(width: 40, alignment: .trailing)
Rectangle()
.fill(Color.gray)
.frame(height: 1)
}
.frame(height: height)
.background(Color.white)
}
private func formatTime(_ hour: Int) -> String {
return String(format: "%02d:00", hour)
}
}
I know the question is about Visual Studio Code, but the same problem appears in the fully featured Visual Studio. Here is my "solution".
I did this, and although it is not perfect, it helps!
Go to Tools > Options, search for "sticky", check "Group the current scopes within a scrollable region of the editor window", set "Maximum sticky lines" to 3, and choose "Prefer outer scopes".
Visual Studio will then show you the namespace, class, and function that the top line of the editor area is in. See the attached image.
I didn't end up getting this to work reliably via a Chrome extension content script: YouTube seems to check isTrusted on clicks, which makes synthetic extension clicks unreliable.
Instead, I switched to a Tampermonkey userscript, which can run directly in the page context and works fine for automatically skipping ads.
Here's the script I'm using now:
// ==UserScript==
// @name AutoSkipYT
// @namespace http://tampermonkey.net/
// @version 1.0
// @description Skips YouTube ads automatically
// @author jagwired
// @match *://*.youtube.com/*
// @grant none
// ==/UserScript==
(function() {
'use strict';
const CHECK_INTERVAL = 500; // ms between checks
function skipAd() {
const skipButtons = [
'.ytp-ad-skip-button-modern',
'.ytp-skip-ad-button',
'button[aria-label^="Skip ad"]'
];
for (const selector of skipButtons) {
const button = document.querySelector(selector);
if (button && button.offsetParent !== null) {
button.click();
return;
}
}
// Seek through unskippable ads
const video = document.querySelector('video');
if (video && document.querySelector('.ad-showing, .ad-interrupting')) {
video.currentTime = video.duration - 0.1;
}
}
setInterval(skipAd, CHECK_INTERVAL);
document.addEventListener('yt-navigate-finish', skipAd);
})();
Notes:
This script clicks the skip button when it appears.
If the ad is unskippable, it jumps the video to the end of the ad.
Because it runs in page context, it works without hitting YouTube’s synthetic click checks.
If you still want a Chrome extension version, you'd likely need to inject the script into the page so that clicks are isTrusted. But if you just want it to work, this userscript gets the job done.
To update a part of your page without a full reload, you must use JavaScript. It is not possible to do this with only Flask and HTML, because browser behavior for a form submission is to request an entirely new page.
The solution is to use JavaScript to send a request to your server in the background.
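For example, a minimal sketch of the JavaScript side (the /api/update endpoint name and the render callback are hypothetical, not from the question):

```javascript
// Send data to the server in the background and update part of the page
// with the JSON response, instead of letting the form reload the page.
async function submitAndUpdate(url, payload, render) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const data = await res.json();
  render(data); // e.g. document.getElementById("result").textContent = data.message
  return data;
}
```

On the Flask side, the matching route would return jsonify(...), so the browser never navigates away.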
This is not truly "current", but it is close:
The only way I know to do this is to query mysql.
The user table has a last_login_on field.
$ mysql -u debian-sys-maint -p
mysql> use redmine;
mysql> select id,login,firstname,lastname,last_login_on from users order by last_login_on desc;
Did you (@user1496984) run where python?
I think the use case matters before choosing the workflow. I will provide a summarized view inspired by the comments.
Use case = Data # (where # can be analyst, scientist, machine learning engineer, and so on): use brew. Create a .env using conda at the global level, with a conda Miniforge base Python installation (which shows up with the where command).
Use case = Web Dev: use uv.
Hope this helps! :)
I am still searching for how uv benefits over conda.
I ran into the same issue. Even if you use the Firebase SDK, it won't be enough, because we typically check the user status on the splash screen itself and Firebase takes some time to respond. I even used authStateChanges() but couldn't get it to work, so what I did was store an isLogin value in SharedPreferences; now it works fine.
Hello, how are you? Is the game good? Thank you.
<canvas id="gameCanvas" width="800" height="600"></canvas>
<div id="radioPanel">
<button id="prev">⏮</button>
<button id="playPause">⏯</button>
<button id="next">⏭</button>
<span id="trackName">–</span>
</div>
<script>
const canvas = document.getElementById('gameCanvas');
const ctx = canvas.getContext('2d');
// Virtual steering wheel
let steeringAngle = 0;
canvas.addEventListener('mousemove', e => {
const dx = e.clientX - canvas.width / 2;
steeringAngle = dx / (canvas.width / 2); // normalize between -1 and 1
});
// Music
const tracks = ['mus1.mp3', 'mus2.mp3', 'mus3.mp3'];
let current = 0;
const audio = new Audio(tracks[current]);
document.getElementById('playPause').onclick = () => {
audio.paused ? audio.play() : audio.pause();
};
document.getElementById('prev').onclick = () => { if (current > 0) current--; audio.src = tracks[current]; audio.play(); };
document.getElementById('next').onclick = () => { if (current < tracks.length - 1) current++; audio.src = tracks[current]; audio.play(); };
document.getElementById('trackName').innerText = tracks[current];
// Main drawing loop
function loop() {
ctx.clearRect(0, 0, canvas.width, canvas.height);
// Draw the segmented road with a constant offset
// Draw the car in the center, rotated according to steeringAngle
requestAnimationFrame(loop);
}
loop();
</script>
If you render your SVG to a raster-based format like PNG on the server side, rather than exposing the SVG code directly to the client, you can effectively protect the original SVG source. Combine this with caching of the rendered image to avoid performance pitfalls.
For instructions on rendering a PNG file from an SVG, see: How can I render SVG with common PHP extensions?
You can probably use CASE in your query.
SELECT
CASE
WHEN FIELD3 LIKE '44%' THEN 'YES'
ELSE 'NO'
END AS Check_status
FROM table
GROUP BY Check_status;
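As an illustration, here is the same CASE pattern run against a throwaway in-memory SQLite table (the table and column names are stand-ins for the ones in the question; the query syntax is the same in MySQL):

```python
import sqlite3

# Demonstrate CASE WHEN ... LIKE '44%' on a throwaway table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field3 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("4411",), ("4412",), ("5511",)])

rows = conn.execute("""
    SELECT CASE WHEN field3 LIKE '44%' THEN 'YES' ELSE 'NO' END AS check_status,
           COUNT(*)
    FROM t
    GROUP BY check_status
    ORDER BY check_status
""").fetchall()
print(rows)  # [('NO', 1), ('YES', 2)]
```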
Building on @pkamb's answer: to shrink the accessory view into the toolbar space, you can read the placement via:
@Environment(\.tabViewBottomAccessoryPlacement) var tabViewPlacement
This returns an enum with two cases: .expanded and .inline. You can provide different views for each case, for example:
.tabViewBottomAccessory {
switch tabViewPlacement {
case .expanded:
HStack {
Spacer().frame(width: 20)
Image(systemName: "command.square")
Text(".tabViewBottomAccessory")
Spacer()
Image(systemName: "play.fill")
Image(systemName: "forward.fill")
Spacer().frame(width: 20)
}
case .inline:
Text(".tabViewBottomAccessory")
Image(systemName: "play.fill")
Image(systemName: "forward.fill")
Spacer().frame(width: 20)
}
}
This way, you can simply omit or adjust icons in .inline mode.
I started the Metro server on the web and then checked the errors in the console. There was a version mismatch. I ran npx expo-doctor and after that npx expo install --check. That solved the problem.
If you are here in 2025, you can get this information directly from the *http.Request: r.URL.Query().Get("id")
If you want to run a specific module's seeder:
php artisan db:seed --class="Modules\Administration\Database\Seeders\ShiftsSeeder"
You can (usually), but there is (usually) no use for it. It is also discouraged for embedded systems, especially bare-metal.
Abhijith S
Abhishek B
Abhishek Vinodh
Adarsh Pradeep P
Adhi Sakthan S
Akash Anand G
Akash R
Alaka R
Ameer Shanavas
Anamika A
Ananya V
Anjana B
Aravind P
Arjun D
Aswin Raj
Avanthika Krishna D
Ayisha Fathima SS
Basim Muhammed
Devadath S
Devika M
Gagna Priyadarshini
Ijas Nisam
Janisha Nidhi
Jiya Elsa Shajan
Jobin John
Manasi M
Manikandan N
Midhun Chandran M
Midhun Krishna G
Muhammed Aslam N
Muhammed Faizal S
Muhammed Haroon H
Muhammed Shafi N
Nikhila Anil
Nikesh Chakaravarthy
Pranav P
Pratheeksha L
Sabith S
Sai Krishna G
Salu P
Sandana Sandeep
Sanjay Saji
Saran Krishna B
Yadav Suresh
Adithyan U
Alaka R (moved from 37)
Nikhila Anil (moved from 35)
Gagna Priyadarshini (moved from 44)
Devika M (moved from 38)
I am also facing the same problem.
The solution to this problem is described here and amounts to installing many optional packages.
vs_BuildTools install ; \
--includeRecommended --includeOptional ; \
--add Microsoft.VisualStudio.Workload.VCTools ; \
--add Microsoft.VisualStudio.Component.VC.ATLMFC ; \
--add Microsoft.VisualStudio.Workload.VisualStudioExtensionBuildTools ; \
--quiet --norestart --nocache --wait
To sort with varying strings:
2
3a
3b
4
10
Use the query below:
ORDER BY CAST(mini_number AS UNSIGNED), mini_number
References:
https://www.sitepoint.com/community/t/mysql-how-to-sort-numbers-with-some-data-also-containing-a-letter/340999
https://www.sitepoint.com/community/t/how-to-sort-text-with-numbers-with-sql/346088/4
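As a quick illustration with SQLite from Python (SQLite uses CAST(... AS INTEGER) where MySQL uses UNSIGNED, but the sorting idea is identical): the cast extracts the leading numeric prefix, and the raw column then breaks ties within the same prefix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (mini_number TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("10",), ("3b",), ("2",), ("4",), ("3a",)])

# CAST('3a' AS INTEGER) yields 3, so '3a' and '3b' sort together
# after '2' and before '4'; '10' lands last instead of after '1'.
rows = [r[0] for r in conn.execute(
    "SELECT mini_number FROM t "
    "ORDER BY CAST(mini_number AS INTEGER), mini_number")]
print(rows)  # ['2', '3a', '3b', '4', '10']
```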
Please make sure you export the region in an environment variable:
export AWS_DEFAULT_REGION="us-east-1"
and then trigger the pipeline.
I added a space before the username and it worked for me 👍🏻
I solved this problem by installing Beckhoff TwinCAT Embedded Browser and restarting the computer.
I got tired and simply switched to SSH keys (docs). I am not sure what the real issue was, but nothing seemed to help me. After trying what I thought was every solution, I kept getting the same error. Leaving this here if anyone else gets stuck.
1- That's not how it works: if you read data, you're waiting for the result, and that's the point of prefetching: you don't wait for it now, and with a bit of luck you don't wait at all.
2- If the JVM is doing a decent job, there are few enough extra memory accesses that the cache isn't full: think of a heap, for example, which has more or less predictable reads from the code's point of view, but not for the memory subsystem.
SELECT COUNT(DISTINCT t.driver_id) AS drivers
FROM trips t
JOIN drivers d
ON t.driver_id = d.driver_id
JOIN vehicles v
ON t.vehicle_id = v.vehicle_id
WHERE d.driver_status = 'active'
AND v.vehicle_status = 'active';
You could remove the stored credentials with the git credential reject command, which reads the credential description on standard input:
printf 'url=<url>\n' | git credential reject
Even in PyPy3 I get the same answer.
I can't tell why, but the former answers did not work for me. For those who are still searching, this one may help:
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
In my case the issue was caused by AppArmor.
To fix it, as the root user, I ran aa-complain openvpn
(openvpn was already defined under /etc/apparmor.d/).
Maybe you are missing an Application.ProcessMessages call?
any updates on this?
The docs (https://node-postgres.com/apis/client) state:
"... example to create a client with specific connection information:
import { Client } from 'pg'
..."
But this leads to:
import { Client } from "pg";
^^^^^^
SyntaxError: Named export 'Client' not found. The requested module 'pg' is a CommonJS module, which may not support all module.exports as named exports.
CommonJS modules can always be imported via the default export, for example using:
import pkg from 'pg';
const { Client } = pkg;
at ModuleJob._instantiate (node:internal/modules/esm/module_job:220:21)
at async ModuleJob.run (node:internal/modules/esm/module_job:321:5)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:644:26)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:117:5)
Node.js v22.17.1
GitHub Actions only shows tags in the workflow run dialog if the workflow file exists in the commit that tag points to. Your new tags likely point to commits that either don't contain the workflow file or have an older version of it.
When the Maven Release Plugin creates tags through GitHub Actions, it might be tagging commits before the workflow was added or updated. Check if the workflow file exists in the tagged commits by navigating to the specific tag and looking for the .github/workflows directory.
Regarding your first question:
As I read from this source ("I am trying to create cookie while login with a user in Blazor web server app but I am bit lost"), setting a cookie over Blazor Interactive Server is not possible, since it uses SignalR. Cookies cannot be set on an already-started response (which is what SignalR is).
You can load the login page in server mode. The problem is posting the form. If you try doing it like this:
<EditForm Model="Input" method="post" OnValidSubmit="LoginUser">
The LoginUser method is called directly on the server via SignalR.
Try something like this:
<EditForm Model="Input" method="post" action="/Account/Login" formname="login">
[SupplyParameterFromForm(FormName = "login")]
private InputModel Input { get; set; } = new();
As far as I tested, this will make sure the EditForm is posted to the server while every other component can still be interactive. I am not sure how this affects whether the LoginUser method is called or not, but this could be a start for you.
Second question:
What do you mean by that? You cannot alter the AuthenticationStateProviders to "enable" setting cookies over SignalR. This is simply not possible. The AuthenticationStateProvider revalidates your login credentials every 30 minutes (by default).
Third question:
I would suggest using the static login page. Why do you need interactive mode on the login page anyway? If you want some sort of animation, you could do that with JavaScript instead.
If you have Postgres installed locally, try uninstalling it. That worked for me in my full-stack Next.js project, because port localhost:5432 was already being used by the Postgres on my local machine.
replace() returns a new object (so assign it or use inplace=True), your types must match (0 vs '0'), and your mapping with duplicate 'polarity' keys overwrote itself.
Use this:
sentiment_text['polarity'] = sentiment_text['polarity'].replace({0: 'negative', 4: 'positive'})
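A plain-Python illustration of the type-matching point (no pandas involved; this mirrors why replace({0: ...}) is a no-op on a column that actually holds strings):

```python
mapping = {0: "negative", 4: "positive"}
values = [0, 4, "0"]

# dict lookup is type-sensitive: the string "0" is not the int key 0,
# so it passes through unchanged, just like a string column would.
converted = [mapping.get(v, v) for v in values]
print(converted)  # ['negative', 'positive', '0']
```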
I ended up using https://github.com/victornpb/eleventy-plugin-page-assets to copy the images. It gives new names to each image and rewrites the img src attribute accordingly, but I can live with that. I suppose it also wouldn't be a good solution if I had multiple input files link to the same image, because the image would be copied to each output folder, but luckily that's not a problem in my specific case.
Sorry if this is not allowed, but I believe I have something that may be similar.
I already have a script that works well and does various things, but I just want to add one more attachment from the same Drive to the email.
Everything can remain the same; I just want to add one additional PDF.
I have tried the above and various other suggestions, but I am doing something wrong.
Below is my current script.
const SHEETID = '1rx1lCYKdhi8dhivoYpUO6EIHb2TWTpiMVduK7M-L2A4';
const DOCID = '1sRZqPCkuATT9tQDZlJDp-DicP6saBpZoAXVvKWXT_XM';
const FOLDERID = '1wsyrUM29A1LIiKCjsJE7olKb0ycG2_M5';
function PitchFees2026() {
const sheet = SpreadsheetApp.openById(SHEETID).getSheetByName('2026 Fees');
const temp = DriveApp.getFileById(DOCID);
const folder = DriveApp.getFolderById(FOLDERID);
const data = sheet.getDataRange().getValues();
const rows = data.slice(1);
rows.forEach((row,index)=>{
const file = temp.makeCopy(folder);
const doc = DocumentApp.openById(file.getId());
const body = doc.getBody();
data[0].forEach((heading,i)=>{
const header1 = heading.toUpperCase();
body.replaceText('{NAME}',row[1]);
body.replaceText('{PITCH}',row[0]);
body.replaceText('{AMOUNT}',row[3]);
body.replaceText('{FIRST}',row[4]);
body.replaceText('{SECOND}',row[5]);
body.replaceText('{REF}',row[10]);
body.replaceText('{BNAME}',row[7]);
body.replaceText('{CODE}',row[8]);
body.replaceText('{NUMBER}',row[9]);
body.replaceText('{TERMS}',row[6]);
})
doc.setName(row[10]);
const blob = doc.getAs(MimeType.PDF);
doc.saveAndClose();
const pdf = folder.createFile(blob).setName(row[10]+'.pdf');
const email = row[2];
const subject = row[10];
const messageBody = "This message (including any attachments) is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. \n \nIf you received this in error, please delete the material from your computer and contact the sender. \n\nPlease consider the environment before printing this e-mail.";
MailApp.sendEmail({
to:email,
subject:subject,
body:messageBody,
attachments: [blob.getAs(MimeType.PDF)]
});
Logger.log(row);
file.setTrashed(true);
})
}
What I want to attach is:
var file = DriveApp.getFileById("1vGvLVP2RV1krxnj8Mt6hMiFHVBoIdbFG");
attachments.push(file.getAs(MimeType.PDF));
So I was trying to change the bottom of my main script to...
var attachments = []
var file = DriveApp.getFileById("1vGvLVP2RV1krxnj8Mt6hMiFHVBoIdbFG");
attachments.push(file.getAs(MimeType.PDF));
MailApp.sendEmail({
to:email,
subject:subject,
body:messageBody,
attachments: [blob.getAs(MimeType.PDF)]
});
Logger.log(row);
file.setTrashed(true);
})
}
Please may someone assist me. I have been on this for days, so I am probably not seeing something obvious by now. Thank you so much in advance. :-)
As there is no available answer for this, I want to make my contribution.
I had exactly the same error and ran into the same issue, and I spent hours trying to debug it. Please try the approach below.
Try using a list of lists. I had a JSON payload; I appended all the JSON objects into a list, then put that list into another list.
Sample code:
list_of_list_data = [list(item.values()) for item in list_data]
Please let me know if it works.
Thanks for your discovery and terrific job!
I have tried very hard over the past 48 hours to modify your script so that I could also programmatically add some text/body to the note (together with the attachment). I also struggled immensely to have the note created in a desired subfolder.
Whenever I tried to add a body to the newly created note, Notes.app basically overwrote the entire note, including the attachment.
At some point I discovered the version Ethan Schoonover authored as a "Folder Action" (see https://youtu.be/KrVcf2nN0b8, and his GitHub repo https://github.com/altercation/apple-notes-inbox). It works almost with no adaptation as a Print Plugin workflow!
This is finally the version I made, with a very minor addition (the user is prompted to specify a different note title). I share it here with you and the Internet, hoping it might be a useful starting point for posterity.
-- This script is designed to be used in Automator to create a new note in the Notes app
-- It takes a PDF file as input, prompts the user for a note title, and creates a
-- new note with the PDF attached.
-- The note will be created in a specified folder within the Notes app.
-- The script also includes a timestamp and the original filename in the note body.
-- The script assumes the Notes app is available and the specified folder exists.
-- Note: This script is intended to be run in the context of Automator with a file input (e.g. Print Plugins or as Folder Action).
-- Heavily based on the code from: https://github.com/altercation/apple-notes-inbox
property notePrefix : ""
property notesFolder : "Resources"
on run {fileToProcess, parameters}
try
set theFile to fileToProcess as text
tell application "Finder" to set noteName to name of file theFile
-- Ask the user for a title and tags for the new note
set noteTitleDialog to display dialog "Note title:" default answer noteName
set noteTitle to text returned of noteTitleDialog
set timeStamp to short date string of (current date) as string
set noteBody to "<body><h1>" & notePrefix & noteTitle & "</h1><br><br><p><b>Filename:</b> <i>" & noteName & "</i></p><br><p><b>Automatically Imported on:</b> <i>" & timeStamp & "</i></p><br></body>"
tell application "Notes"
if not (exists folder notesFolder) then
make new folder with properties {name:notesFolder}
end if
set newNote to make note at folder notesFolder with properties {body:noteBody}
make new attachment at end of attachments of newNote with data (file theFile)
(*
Note: the following delete is a workaround because creating the attachment
apparently creates TWO attachments, the first being a sort of "ghost" attachment
of the second, real attachment. The ghost attachment shows up as a large empty
whitespace placeholder the same size as a PDF page in the document and makes the
result look empty
*)
delete first attachment of newNote
show newNote
end tell
-- tell application "Finder" to delete file theFile
on error errText
display dialog "Error: " & errText
end try
return theFile
end run
Note: I was hoping to add programmatically one or more tags to the newly created note (e.g. by asking the user, within a dialog prompt), but I failed. It seems Notes does NOT recognize strings like "#blablabla" as tags, unless they are typed within the Notes.app.
The problem was that there was no data bound to the checkbox. As soon as I added a JSON model, it was fixed.
<Column width="11rem">
<m:Label text="Product Id" />
<template>
<m:CheckBox selected="{Selected}"/>
</template>
</Column>
I found the root cause of the issue.
Even though the executable file exists inside the chroot jail and is fully static (confirmed by ldd showing no dynamic dependencies), running it inside the jail failed with:
execl failed: No such file or directory
This error occurs despite the binary being present and statically linked. The reason is that the chroot environment is missing some essential system components or setup that the binary expects at runtime; even static binaries sometimes rely on minimal system features or device files.
The problem was resolved when I copied a statically linked BusyBox binary into the jail and ran commands from it. BusyBox, being a fully self-contained executable that includes a shell and common utilities, works smoothly inside minimal environments without extra dependencies.
That is nice, please allow this University comment. Thanks
from pdf2image import convert_from_path
# Convert PDF to images
images = convert_from_path("/mnt/data/Anish_Kundali.pdf")
# Save images
image_paths = []
for i, img in enumerate(images):
path = f"/mnt/data/Anish_Kundali_page_{i+1}.png"
img.save(path, "PNG")
image_paths.append(path)
image_paths
Hey, something isn't working.
Talk to me.
It's showing "operation completed with errors".
I've already removed the APKs from Chrome.
I've come up with my own CSS selector to do just this.
.parent > .root:has(+ .paths > :not(:empty)) > div:last-child
I doubt this is much "cleaner", but I do believe this is a clearer notation.
Identify where the store is being opened (likely using CertOpenStore with API flags).
Adjust it to explicitly specify CERT_SYSTEM_STORE_LOCAL_MACHINE instead of CURRENT_USER.
Recompile xmlsec to restore the older behavior.
This error happens for me when I change my OS from Linux to Windows.
Delete this line from package.json:
"lightningcss-linux-x64-gnu": "^1.30.1",
OMFG THANK YOU BEEN LOOKING FOR THIS FOR HOURS GOD!!!!!!!! SMARTEST PERSON ON THE INTERNET I SWEAR TO GOD.
Your issue is that after chroot, the binary ./test is no longer found inside the new root (.).
chroot changes the apparent root directory for the process.
Copy test into the root of the jail:
cp ./test ./testdir/test
sudo ./penaur ./testdir
and change your C++ call:
sandbox.run("/test");
Short answer: don't detach the Actix server. Own shutdown yourself, pass a cancel signal to your queue, and await both tasks. Also disable Actix's built-in signal handlers so Ctrl+C is under your control.
You're on the right track here. What you could try is: (a) a single place that owns shutdown, (b) a signal you can pass to your queue so it can stop gracefully, and (c) awaiting both tasks to completion after you request shutdown. Don't fire-and-forget the Actix server future; keep its JoinHandle and await it after stop(true), guaranteeing it's fully shut down before main returns. You can use a shared token so you can exit cleanly.
I believe the issue is with Actix's built-in signals. You start the server and leave it running without awaiting its shutdown. The queue worker stops, but the HTTP server keeps running. You may want to dig into the Actix docs here: https://actix.rs/docs/server#graceful-shutdown explains why your current setup goes wonky. Actix installs its own signal handlers, and Windows doesn't send SIGTERM on Ctrl+C, so Ctrl+C is not "graceful" there. You have two approaches: (a) own the shutdown yourself, or (b) let Actix keep its handlers (on Unix, graceful shutdown via SIGTERM), don't disable signals, and send SIGTERM, while still keeping and awaiting the server task handle so nothing is left running.
Do I need to avoid rt::spawn?
Yes: don't detach the server. Either run it directly in select!, or spawn it and await the join handle after you call stop(true).
Call .disable_signals() on the HttpServer (so Actix doesn't install its own Ctrl+C handler), keep the server's JoinHandle, send a cancel token to the queue, call stop(true), and then await both tasks. Here is a small snippet:
use actix_web::{App, HttpServer};
use tokio_util::sync::CancellationToken;
#[actix_web::main]
async fn main() -> anyhow::Result<()> {
let server = HttpServer::new(|| App::new())
.disable_signals() // you own shutdown
.shutdown_timeout(30) // graceful window
.bind(("0.0.0.0", 8080))?
.run();
let handle = server.handle();
let cancel = CancellationToken::new();
let cancel_q = cancel.clone();
// spawn both but KEEP the JoinHandles
let srv_task = tokio::spawn(async move { server.await.map_err(anyhow::Error::from) });
let queue_task = tokio::spawn(async move {
// your queue loop should watch cancel_q.cancelled().await
run_queue(cancel_q).await
});
let mut srv_task = srv_task; // rebind mutably so select! can poll it by reference
tokio::select! {
_ = tokio::signal::ctrl_c() => {}
// optionally: react if the server crashes:
res = &mut srv_task => {
eprintln!("server exited early: {:?}", res);
}
}
// trigger graceful shutdown
handle.stop(true).await; // waits up to shutdown_timeout for in-flight reqs
cancel.cancel(); // tell the queue to finish and exit
// ensure nothing is left running (the select! may already have consumed
// the server result, so only await the handle if it is still pending)
if !srv_task.is_finished() {
let _ = srv_task.await;
}
let _ = queue_task.await;
Ok(())
}
I used Next.js MDX (Markdown) to create the page and then printed it via the browser.
There I can render React components as well, so I wrote the React component below.
export default function PageBreak() {
return <div className="break-after-page" />
}
Then, whenever I want to add a page break, I just add <PageBreak />.
More info here: https://nextjs.org/docs/app/guides/mdx
PS: The project also uses Tailwind CSS (break-after-page is a Tailwind utility class).
I don't know if this still helps, but I had the same error, and it was because of the endpoint: I ended it with message, but it is messages :)
I was reviewing your code and found that cv2.findContours isn't a good way to detect the hand in image 1, because it detects too much detail rather than the entire object; therefore, image 2 has less detail.
import cv2
import sys
img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (15, 15), 0)
edged = cv2.Canny(gray, 5, 100)
contours, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
max_area = max(cv2.contourArea(c) for c in contours)
print(max_area)
else:
print(0)
It is better to measure the area of the largest contour, as the code above does; note also that image 2 produces a larger contour.
I recommend using a model like Google's MediaPipe for more robust hand detection with landmarks instead of just contours.
This video could explain more about the topic and is good for future projects using an AI model: youtube (it was complicated for me too, by the way, since I don't speak Russian).
It’s not clear what result (answer set) you expected.
From a syntax point of view, your second approach looks more “correct” if that makes sense:
It assigns for each protein/1 a choice/4 with 3x food/1.
But it does not define any specific relationship among those 3x food/1, e.g. these 3x food/1 can be the same, and can be the same across different proteins, as there are no further rules defined.
(Your first approach allows an empty answer set as result. While you assign a choice/4 for any combination of protein/1 and 3x food/1.)
I recommend checking previous Stack Overflow posts on Clingo, as well as some introductory PDFs on the topic, to better understand the syntax and "logic".
Good luck!
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@digitalocean" />
<meta name="twitter:title" content="Sammy the Shark" />
<meta name="twitter:description" content="Senior Selachimorpha at DigitalOcean" />
<meta name="twitter:image" content="https://html.sammy-codes.com/images/large-profile.jpg" />
The code didn't run on my PC, but I was reviewing igrid and noticed they explain how the DLMS structure works and how the AARQ is built, so I think each field must be inserted at an exact position.
Comparing with your original code, I see that in the first place where the password goes, you set it like this:
aarq = AARQ_REQUEST.replace(
bytearray.fromhex("38 38 39 33 35 38 36 30"),
bytearray(SERIAL_NUMBER, 'ascii')
)
pero en micropython esta en modo fija
b'00053346'
entonces segun deepsek despues de hacer testing me recomienda que no se ponga fija porque sino genera diferentes frames
for i, (p, m) in enumerate(zip(aarq_python, aarq_micropython)):
if p != m:
print(f"Byte {i}: Python={p:02X}, MicroPython={m:02X}")
creo que desde el codigo que testie y revise el error empieza antes del bloque /xBe/x10
porque python no calcula la longitud y no coincide con la de micropyhton al meter de la contraseña.
puede que la password sea la misma pero el paquete no es el mismo, y el medido rechaza el AARQ. no tengo clara la solucion pero segun recomendacione de deepseek porque no tengo el codigo completo la solucionar podr ser construir el paquete con la longitud correcta despues de insertar la cotraseña.
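The suggested fix can be sketched in Python. This is a minimal illustration under heavy assumptions: the template password, the idea that a single length octet immediately precedes the password field, and the absence of other length fields affected by the substitution are all assumed, not taken from the real DLMS frame layout.

```python
# Hypothetical sketch: swap the password into the AARQ template and recompute
# the length octet preceding the password field, instead of trusting the
# template's value. Constants and offsets are assumptions for illustration.
TEMPLATE_PASSWORD = b"88935860"  # ASCII of hex 38 38 39 33 35 38 36 30

def build_aarq(template: bytes, password: bytes) -> bytearray:
    frame = bytearray(template.replace(TEMPLATE_PASSWORD, password))
    idx = frame.find(password)
    if idx > 0:
        # Assumed: the byte immediately before the password is its length octet
        frame[idx - 1] = len(password)
    return frame
```

A real fix would also recompute any outer length fields of the AARQ, which this sketch ignores.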
Yes, it's useful for you, but it's very difficult for us because we don't know this process or what this website is doing; we only use this link and website to get information about our department.
The solution is to make a list of available drives/OSDs and make sure the boot drive is excluded. My solution should work with both SATA and NVMe drives, but since I only have NVMe drives in my machines, I cannot test the SATA solution. Furthermore, all available drives will be seen as Ceph drives. This may not be viable for everyone. The full code is included under the FINAL EDIT comment.
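The exclusion step can be sketched as a pure function over lsblk-style device data. The structure mirrors `lsblk -J` output, but the field names and the list of boot mountpoints here are assumptions, not the author's actual code:

```python
def pick_osd_drives(devices, boot_mounts=("/", "/boot", "/boot/efi")):
    """Return names of disks with no partition mounted at a boot mountpoint."""
    osds = []
    for disk in devices:
        mounts = [part.get("mountpoint") for part in disk.get("children", [])]
        if not any(m in boot_mounts for m in mounts if m):
            osds.append(disk["name"])
    return osds

# Example lsblk-like structure: nvme0n1 is the boot drive, nvme1n1 is free.
devices = [
    {"name": "nvme0n1", "children": [{"mountpoint": "/boot/efi"},
                                     {"mountpoint": "/"}]},
    {"name": "nvme1n1", "children": []},
]
```

As the answer notes, every non-boot drive ends up treated as a Ceph drive, which may not suit everyone.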
Wrap the Swiper in a grid
<div className="grid">
<Swiper>...</Swiper>
</div>
I just encountered this error, and the cause was that I had two firebase-tools installations: one through Homebrew and the other local. You can check whether this is the case for you by running these commands in a terminal:
which firebase
npm list -g firebase-tools
If the output of those commands is different, you have the same problem I did.
In my case, removing the local library solved the issue:
rm /Users/mycomputer/.local/bin/firebase
You should also make sure you are on the latest version. Compare the output of this command:
firebase-tools -v
with the latest version on GitHub: https://github.com/firebase/firebase-tools
I solved the question myself in my first comment.
{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": "",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
I don't have the exact answer, but I have created an app to batch-create folders from a list.
Just select your destination, paste in your list, and press Create Folders. Easily create hundreds of folders in seconds.
Check out Multiple Folder Creator on the App Store.
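If you'd rather script it than install an app, the same batch creation is a few lines of Python; the names and destination path here are purely illustrative:

```python
import os

folder_names = ["Invoices", "Receipts", "Reports"]  # e.g. pasted from your list
destination = "batch_folders"  # illustrative destination path

for name in folder_names:
    # exist_ok avoids errors when re-running with the same list
    os.makedirs(os.path.join(destination, name), exist_ok=True)
```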
Note for future readers: if you are using Rocket version 0.5.0 (Nov 17, 2023) or later, the Outcome::Failure variant was removed in favor of Outcome::Error.
Writing this for anyone who comes here due to the error:
error[E0599]: no variant or associated item named `Failure` found for enum `Outcome` in the current scope
  --> src/guards/auth_guard.rs:40:32
   |
40 | Err(_) => Outcome::Failure((Status::Unauthorized, ())),
   |                    ^^^^^^^ variant or associated item not found in `Outcome<_, (Status, _), Status>`
So you have to use Outcome::Error instead, for example:
Err(_) => Outcome::Error((Status::Unauthorized, ()))
Supporting document: https://github.com/rwf2/Rocket/blob/master/CHANGELOG.md#:~:text=Outcome%3A%3AFailure%20was%20renamed%20to%20Outcome%3A%3AError.
When you're copying a GitHub project to your computer, you have two main choices for the link: HTTPS or SSH.
If you pick HTTPS, it's like using a front door everybody has a key for. It's super easy to start since you just use your GitHub username and password or a token, but every now and then, it will ask you to log in again. Plus, it works anywhere, even if you're on tricky networks or behind firewalls.
Now, SSH is a bit like having a special VIP pass. You set it up once by creating a key (kind of like a secret handshake), and after that, pushing or pulling changes is smooth sailing without more passwords. It's more secure and faster once set up, but it's a bit trickier to get going: you have to generate those keys and add them to your GitHub profile. Also, sometimes corporate networks block the port SSH uses, so that can get in your way.
So if you're new or just wanna grab stuff without fuss, go with HTTPS. But if you plan on working on projects a lot, or want that smoother, password-free flow, SSH is your friend.
In simple terms, HTTPS is quick and easy, SSH is secure and convenient.
Summary for easy understanding of SSH and HTTPS use in GitHub, by Sai Karthik Motapothula.
For anyone arriving here in the future with a similar issue where you must support multiple frameworks and one of them is .NET Framework 4.0 (for a very old vendor-supplied application running on Windows XP, in my case): the link to Thomas Levesque's solution (https://thomaslevesque.com/2012/06/13/using-c-5-caller-info-attributes-when-targeting-earlier-versions-of-the-net-framework/) provided by others above works perfectly, and it seems to be the most straightforward solution to me, since both newer and older frameworks can now use the attributes with no code differences.
I put his stub definitions into their own class file and surrounded them with the #if NET40 compiler directive, so those stubs are only used in a NET40 compile (since I support multiple frameworks). A framework 4.0 version of each app can now access the [Caller...] attributes (I am only using [CallerMemberName], but I have no doubt the other stubs work too), and the expected values are populated into your variables. Thanks to Thomas and the others who left the link!
Instead of using Microsoft’s own module, you can use the open-source Get-AzVMSku module, which allows you to browse every Azure Gallery Image publisher, select offers, versions, and see VM sizes available to your subscription, along with quotas.
The module is available on the PowerShell Gallery:
Get-AzVMSku on PowerShell Gallery
I’ve also written a detailed guide explaining how it works and how to use it:
Browse PowerShell Azure VM SKUs & Marketplace Images with Get-AzVMSku: https://www.codeterraform.com/post/powershell-azure-vm-skus
I would suggest another approach, as I am facing a similar issue in a SingStar-like app I am coding.
I am considering creating a custom audio processing node that counts the actual buffer frames passing through it (an AudioWorkletProcessor, maybe). It could provide a method giving the actual played time based on the sample count and the sample rate.
You would then just connect these extra nodes right after the nodes you want to measure.
In my case, in a monorepo, I had different versions of type-graphql installed in the server app and in a library containing the model classes.
User-uploaded SVG files may embed malicious code or include it via an external reference. When a malicious SVG file is published on your website, it can be used to exploit multiple attack vectors, including:
- using <image>, <script>, or <use> tags to send sensitive information to attacker-controlled servers;
- using <image xlink:href="file:///..."> or <use> references to attempt to read local or server-side files.
Indeed, wrapping the image in an <img> tag is one of the three measures you can take. The other measures are enabling a Content Security Policy and sanitizing the SVG, with server-side rasterization as a further option.
More detailed guidance on each measure:
Use the <img> Tag
Instead of directly embedding an SVG using <svg> or <object>, use the <img> tag to render it:
<img src="safe-image.svg" alt="Safe SVG">
The <img>
tag ensures that the SVG is treated as an image and prevents JavaScript execution inside it, but it doesn’t remove malicious code from the SVG file.
Enabling the Content Security Policy (CSP) HTTP response header also prevents JavaScript execution inside the SVG.
For example:
Content-Security-Policy: default-src 'none'; img-src 'self'; style-src 'none'; script-src 'none'; sandbox
Which applies the following policy:
Directive | Purpose
---|---
default-src 'none' | Blocks all content by default.
img-src 'self' | Allows images only from the same origin.
style-src 'none' | Prevents inline and external CSS styles.
script-src 'none' | Blocks inline and external JavaScript to prevent XSS.
sandbox | Disables scripts, forms, and top-level navigation for the SVG.
Strip potentially harmful elements like <script>, <iframe>, <foreignObject>, inline event handlers (e.g. onclick), or inclusions of other (potentially malicious) files.
Examples of libraries for SVG sanitization:
¹ I started the mentioned Java sanitizer project, as I could not find any solution for Java.
Render the SVG server side to a raster based format, like PNG. This may protect visiting users, but could introduce vulnerabilities at the rendering side at the server, especially using server side JavaScript (like Node.js).
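As a minimal illustration of the stripping approach (not a replacement for a vetted sanitizer library; the disallowed-tag list here is an assumption covering only the elements mentioned above):

```python
import xml.etree.ElementTree as ET

DISALLOWED = {"script", "iframe", "foreignObject"}

def sanitize_svg(svg_text: str) -> str:
    root = ET.fromstring(svg_text)

    def local(name: str) -> str:
        # Strip an XML namespace prefix like "{http://...}"
        return name.rsplit("}", 1)[-1]

    def clean(elem):
        for child in list(elem):
            if local(child.tag) in DISALLOWED:
                elem.remove(child)  # drop the whole subtree
            else:
                clean(child)
        for attr in list(elem.attrib):
            if local(attr).lower().startswith("on"):
                del elem.attrib[attr]  # drop inline event handlers

    clean(root)
    return ET.tostring(root, encoding="unicode")
```

Note that this sketch does not handle external references in <image> or <use>, so it should be combined with the <img> tag and CSP measures above.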
Sorry for bothering everyone.
I am not sure why the summary statistics of r_ga and insideGARnFile_WithCoord are different, but the output graphics look very similar. I will assume the slight boundary mismatch is due to a coordinate reference system difference/transformation, and consider the problem solved for now. If you have any insights on the boundary mismatch, please leave your comments here.
Much appreciated!
Summary statistics of insideGARnFile_WithCoord; rainfall output from insideGARnFile_WithCoord.
Summary statistics of r_ga; rainfall output from r_ga.
For me, the problem was that when doing a lookup for a symbol and getting the instrument value, the datatype of the instrument was int64, which is a numpy-based value, but the socket API accepts a plain int.
Converting the int64 to int fixed the issue for me.
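For context, numpy's int64 is not a plain Python int, and serializers or socket APIs that type-check their input can reject it; the conversion is a one-liner (a sketch assuming numpy is installed):

```python
import json
import numpy as np

instrument = np.int64(256265)  # e.g. the value looked up for a symbol

# json.dumps rejects numpy scalars, much like a strict socket API can
try:
    json.dumps({"instrument_token": instrument})
except TypeError:
    pass  # np.int64 is not JSON-serializable

token = int(instrument)  # a plain Python int is accepted everywhere
payload = json.dumps({"instrument_token": token})
```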
@STerliakov gets the credit for the answer:
Apparently, this section of the code was added in its entirety so the user would see the data for troubleshooting purposes.
I deleted it entirely, and that fixed the issue:
requirePOST();
[$uploaded_files, $file_messages] = $this->saveFiles();
Flash::set('debug','Application form data '
. json_encode($_POST, JSON_PRETTY_PRINT)
. "\nUploaded files:"
. json_encode($uploaded_files,JSON_PRETTY_PRINT)
. "\nFile messages:"
. json_encode($file_messages,JSON_PRETTY_PRINT)
);
For anyone else experiencing the issue, here is the entire working function
public function submit() {
    include([DB ACCESS FILE]);
    $fp1 = json_encode($_POST, JSON_PRETTY_PRINT);
    $fp2 = json_decode($fp1);
    $owner_first = $fp2->owner_first;
    $owner_last = $fp2->owner_last;
    $query_insert_cl = "INSERT INTO cnvloans (owner_first,owner_last) VALUES ('$owner_first','$owner_last')";
    mysqli_query($dbc, $query_insert_cl);
    redirect('/page/confirm');
}
I deployed my backend on Vercel and tried using the URL there as well, but I keep getting errors like 500, 401, 400 with Axios — when I fix one, another appears. However, the code runs perfectly in Postman and Thunder Client, but when I run it on my mobile, these errors keep showing up. If you have solved this issue before, please guide me as well.
There is the library DrusillaSelect for furthest-neighbor search that you can try, from this paper.
As an example, an Ethereum address with a private key:
0x91b005cb6b291f67647471ad226b937657a8d7d6
pvk 000000000000000000000000000000000000000000000000007fa9e2cd6d52fe
check the address and good luck to you
Removing --turbopack from the dev script fixed the issue.
Before
"dev": "next dev --port 3001 --turbopack"
After
"dev": "next dev --port 3001",
OnDrawColumn and OnDrawDataCell have a TGridDrawState State.
OnDataChange is in DataSource as was answered.
And no, you can't fully control TDBGrid unless you subclass it and override its ancestor methods. That's why it is rarely used in real-world tasks.
Each time it is necessary to insert a frame into a stream, it is much better to do it before the encoder: just send the previous raw frame to the encoder again, or blend the previous and next frames. Dirty tricks with an already-encoded bitstream may have negative side effects: a broken HRD model, a broken picture order count sequence, etc.
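The blending option can be sketched as a simple linear mix of the two raw frames before they reach the encoder (a sketch assuming 8-bit frames held as numpy arrays):

```python
import numpy as np

def blend_frames(prev_frame: np.ndarray, next_frame: np.ndarray,
                 alpha: float = 0.5) -> np.ndarray:
    """Linear blend of two raw 8-bit frames; feed the result to the encoder
    instead of patching the already-encoded bitstream."""
    mixed = (1.0 - alpha) * prev_frame.astype(np.float32) \
            + alpha * next_frame.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)
```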
2025 update:
For people who are confused about why there is no ID Token
checkbox, it is hidden unless you add a correct Redirect URI
. You need to add the one for Web
platform type.
After that, in the Settings
tab you will be able to see the ID tokens
checkbox, and marking it as checked fixed the problem for me.
I didn’t get the exact same error as you, but my setup is very similar, so here are my two cents:
Solution
│
├── MyApp // Server project. Set InteractiveWebAssembly or InteractiveAuto globally in App.razor
│
├── MyApp.Client // Contains Routes.razor
│
└── SharedRCL // Contains Counter.razor page (@page "/counter") without setting any render mode
In Routes.razor
, make sure the Router
is aware of routable components in the Razor Class Library (RCL) by adding the RCL’s assembly:
<Router AppAssembly="@typeof(Program).Assembly"
AdditionalAssemblies="new[] { typeof(Counter).Assembly }"> @* <-- This line here *@
...
</Router>
Depending on your setup, you might also need to ensure that the server knows about routable components in the RCL.
In MyApp/Program.cs
, register the same assembly when mapping Razor components:
app.MapRazorComponents<App>()
.AddInteractiveWebAssemblyRenderMode()
.AddAdditionalAssemblies(typeof(MyApp.Client._Imports).Assembly)
.AddAdditionalAssemblies(typeof(Counter).Assembly); // <-- This line here
Can you try this? The idea is to approach the search differently.
Retrieve the private IP address of the Private Endpoint through its network interfaces (NICs).
Then identify the private DNS zones linked to the virtual network (VNet) the Private Endpoint is connected to (via the private DNS zone links).
Within these private DNS zones, search for DNS records that match those private IP addresses; these are the FQDNs.
You can do this easily with PowerShell or the Azure Python SDK.
Here is my workaround for blank-icon issues on the taskbar.
1. Create a shortcut on the desktop and open it.
2. Drag the shortcut to the taskbar; this will pin it.
3. Right-click the icon and un-pin it. Done!
You can create an SPA using Bootstrap and jQuery/vanilla JS. For that, you must have a strong understanding of vanilla JS or jQuery.
Go to Preferences > Run/Debug > Perspectives.
In the "Application Types/Launchers" box, select "STM32 C/C++ Application" and set Debug: None and Run: None (not Debug: Debug).
Ravi's bicycle had rusted and the brakes no longer worked. Still, he rode ten kilometers to school every day. His friends mocked him, but he did not lose heart. When he came first in his studies, those same friends said, "Your bicycle was broken, not your dreams."
The confidence intervals for versicolor and virginica correspond to their reported estimates of 0.93 and 1.58. That is, the offset of versicolor is estimated as 0.93 and the confidence interval spans 0.73 to 1.13. To get the estimate of the mean of versicolor, you would add the intercept to all of those numbers: a mean of 5.01 + 0.93 with the lower confidence limit at 5.01 + 0.73 and the upper confidence limit at 5.01 + 1.13.
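The shift-by-intercept step described above, written out as a quick numeric check:

```python
intercept = 5.01                      # estimated mean of the reference level
offset, lo, hi = 0.93, 0.73, 1.13     # versicolor offset and its 95% CI

# Add the intercept to the offset and to both confidence limits
mean_versicolor = intercept + offset              # 5.94
ci_versicolor = (intercept + lo, intercept + hi)  # (5.74, 6.14)
```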
Store in three separate columns; it's much better for maintenance and data retrieval (note that they are three very distinct pieces of data, so putting them together will make your life harder).
If you want to have an easy way to always have the [Country ISO]-[State ISO]-[City Name] string in hand, you can create an additional generated column.
Example (Postgres):
beautified_code varchar GENERATED ALWAYS AS (CONCAT_WS('-', country_iso, state_iso, city_name)) STORED
In this column, the three values will always be concatenated together automatically during entry creation and update. So you don't need to worry about maintaining consistency in it.
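The same idea works in SQLite (version 3.31 or later), which makes it easy to demo from Python; note that SQLite uses || for concatenation rather than Postgres's CONCAT_WS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE location (
        country_iso TEXT,
        state_iso   TEXT,
        city_name   TEXT,
        -- computed automatically; never written by the application
        beautified_code TEXT GENERATED ALWAYS AS
            (country_iso || '-' || state_iso || '-' || city_name)
    )
""")
conn.execute(
    "INSERT INTO location (country_iso, state_iso, city_name) VALUES (?, ?, ?)",
    ("US", "NY", "New York"),
)
code, = conn.execute("SELECT beautified_code FROM location").fetchone()
```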
I have the same question. I have a dataset containing medical variables used to determine whether the patient should receive outpatient care or not.
The target variable is SOURCE:
0 for outpatient care
1 otherwise
I'm using the supervised learning method glm (logistic regression) from the caret package in R, which predicts the probability that an individual belongs to the positive class. ChatGPT says the positive class is the second one, but I don't know how to make sure that the model predicts p(k="1"|xi).
glm gives only probabilities as results when using the predict function, so I must convert the probabilities to labels (0 or 1) according to a threshold. So, are these probabilities for p(k = first level of the factor variable)?
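The same question arises in Python's scikit-learn, where the columns of predict_proba follow estimator.classes_; inspecting that mapping explicitly is the analogue of checking the factor levels in R (a sketch assuming scikit-learn is installed; the toy data is illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = outpatient care, 1 = otherwise

clf = LogisticRegression().fit(X, y)

# predict_proba columns are ordered like clf.classes_, so look up
# the column for class 1 instead of assuming it is the second one.
col = list(clf.classes_).index(1)
p_class1 = clf.predict_proba(X)[:, col]
```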
Try renaming your hivepress-marketplace directory to hivepress, or set a higher priority (3rd parameter) for add_action.
In API development we sometimes want to omit certain fields from the JSON response; that is what the @JsonIgnore annotation is for, so it was not designed to solve the infinite recursion problem.
In my projects, I use @JsonIgnore with bidirectional @ManyToMany relationships between entities, and I use @JsonManagedReference and @JsonBackReference in the @ManyToOne and @OneToMany cases.
I tried everything on the Mac, but "sudo su -" finally worked; it gives a root shell with full permissions.
Never mind, I found a solution almost immediately after posting this question; I'm leaving it here for future visitors. I fixed the problem by simply creating a temporary repository with the modified (Git) path selected and publishing it to GitHub, and now the new path is saved. I don't know why it wasn't saved before, but this solution should work.
The Empty Views Activity doesn't offer Java either, and neither does No Activity; Java is completely gone from Android Studio. Now I need to learn a new language. I hate this; I haven't been programming since 2021, when I finished my computer science degree. Now I'm trying to knock the rust off, but apparently I have to start from the beginning. Kotlin, here I come!
As suggested by @furas, we can use git url.
ubuntu@ubuntu:~/hello$ poetry add scikit-learn git+https://github.com/scikit-learn-contrib/imbalanced-learn.git
Creating virtualenv hello in /home/ubuntu/hello/.venv
Using version ^1.7.1 for scikit-learn
Updating dependencies
Resolving dependencies... (0.1s)
Package operations: 6 installs, 0 updates, 0 removals
Writing lock file
ubuntu@ubuntu:~/hello$
Same here; it worked well at first, several times (I don't know exactly how many). Is there a rate limit?
It looks like the base path is not set correctly. You can try the following:
import os
import sys
sys.path.append(os.path.abspath("."))
sys.path.append("../")
# Then try to import the src modules
from src.logger import Logging
Here we are setting up the path manually, but it should work.