Try updating Windows to the latest update, or upgrading to a newer version (Windows 11).
That's the only method that worked for me.
import React, { useState } from 'react';
import { Moon, Sun, Play, Pause } from 'lucide-react';
import { motion } from 'framer-motion';

export default function SleepApp() {
  const [isDark, setIsDark] = useState(true);
  const [isPlaying, setIsPlaying] = useState(false);

  const toggleTheme = () => setIsDark(!isDark);
  const togglePlay = () => setIsPlaying(!isPlaying);

  return (
    <div className={`min-h-screen flex flex-col items-center justify-center transition-all duration-500 ${isDark ? 'bg-gray-900 text-white' : 'bg-blue-100 text-gray-900'}`}>
      <motion.h1
        initial={{ opacity: 0, y: -20 }}
        animate={{ opacity: 1, y: 0 }}
        transition={{ duration: 0.6 }}
        className="text-4xl font-bold mb-8"
      >
        SleepWave 💤
      </motion.h1>
      <motion.div
        initial={{ scale: 0.9, opacity: 0 }}
        animate={{ scale: 1, opacity: 1 }}
        className="bg-white/10 backdrop-blur-md p-8 rounded-2xl shadow-lg flex flex-col items-center"
      >
        <p className="text-lg mb-4">Relaxing sound to help you sleep</p>
        <button
          onClick={togglePlay}
          className="p-4 bg-indigo-500 rounded-full shadow-md hover:bg-indigo-600 transition-all"
        >
          {isPlaying ? <Pause size={32} /> : <Play size={32} />}
        </button>
        <p className="mt-3 text-sm opacity-75">
          {isPlaying ? 'Playing rain sounds...' : 'Tap to start relaxing'}
        </p>
      </motion.div>

      <button
        onClick={toggleTheme}
        className="absolute top-6 right-6 p-3 rounded-full bg-white/10 hover:bg-white/20 transition-all"
      >
        {isDark ? <Sun size={24} /> : <Moon size={24} />}
      </button>

      <footer className="mt-10 opacity-60 text-sm">Made by Maginata 🌙</footer>
    </div>
  );
}
The solution was to create a new iCloud key; the old one must have become corrupted and no longer worked.
As soon as a new key was created it all worked instantly.
A crate version being "yanked" means that specific version is no longer available for use as a dependency. In this case, the numeric crate requires the hdf5-sys crate at version "^0.3.2", which means it needs a non-yanked version 0.3.n, where n is greater than or equal to 2. The hdf5-sys crate appears to have yanked all versions lower than 0.5.0, so crates.io cannot find a version that matches the requirement, giving you the error message.
As the numeric crate is unmaintained, the only solution you have would be to either use an earlier version of it that does not have a dependency on hdf5-sys (version 0.0.7 appears to be the latest version satisfying this), or switch to a more maintained n-dimensional vector math library, such as using ndarray with ndarray-linalg.
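For the second option, a sketch of what the Cargo.toml change might look like (the crate versions here are illustrative, not pinned recommendations):

```toml
[dependencies]
# Replace the unmaintained `numeric` crate with ndarray + ndarray-linalg
ndarray = "0.15"
ndarray-linalg = "0.16"
```

You will also need to port any `numeric` API calls to their `ndarray` equivalents, which is usually mechanical for basic vector/matrix math.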
Hope the documentation is helpful:
https://github.com/microsoft/MSO-Scripts/wiki/Advanced-Symbols#missing-imageid-event
Generally you need the PDB file corresponding to the binary. If it is not available, maybe you can try to generate a PDB from the binary:
Thanks Bill. I have gone through your link and it will definitely help me in working with PostgreSQL. But here the scenario is to model and mimic the functionality of the Oracle procedure in Spring Data JPA so that it is not tightly coupled with the DB. I just connect to the respective DB and process data using JPA entity models. So, is it a good approach to write SQL queries as JPA/Hibernate native queries and execute them, or is there another way to model the procedure functionality in the JPA framework?
It’s the more scalable and flexible design, especially for ERP/MES systems where new resource types or reporting needs often evolve. Centralizing makes it easier to query, audit, and extend the model later — at the cost of a bit more sync logic.
Perhaps you should try some free web tools; they might be able to help you, for example iLovePDF's PDF-to-Word converter.
My mistake: I had created another function with the same name in a different script and used that script on the same HTML page, which is why this function would not run.
<?xml version="1.0" encoding="UTF-8"?>
<PrintLetterBarcodeData uid="408937029261" name="Jabid" gender="M" yob="2006" co="S/O: Mohd. Kalu" house="B-1053" loc="Sanjay nagar" vtc="Jaipur" po="Shastri Nagar" dist="Jaipur" subdist="Jaipur" state="Rajasthan" pc="302016" dob="03/12/2006"/>
List.Generate to the rescue:
merge_loop = List.Generate(
() => #"Renamed Columns2",
(tbl) => Table.IsEmpty(Table.SelectRows(tbl, each [OBJECT_ID] = _TargetFolder)),
(tbl) => let
#"Merged Queries" = Table.NestedJoin(tbl, {"OBJECT_ID"}, AllFolders, {"PARENT_ID"}, "AllFolders", JoinKind.LeftOuter),
#"Expanded AllFolders" = Table.ExpandTableColumn(#"Merged Queries", "AllFolders", {"OBJECT_NAME", "OBJECT_ID"}, {"OBJECT_NAME.1", "OBJECT_ID.1"}),
#"Added Custom" = Table.AddColumn(#"Expanded AllFolders", "Path.1", each [Path]&"\"&[OBJECT_NAME.1]),
#"Removed Other Columns" = Table.SelectColumns(#"Added Custom",{"Path.1", "OBJECT_ID.1"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Other Columns",{{"OBJECT_ID.1", "OBJECT_ID"}, {"Path.1", "Path"}})
in
#"Renamed Columns"
),
non_empty_tbl = Table.SelectRows(
List.LastN(merge_loop, 1){0},
each [OBJECT_ID] = _TargetFolder
),
If you want something simple, use #.k or #,k depending on what you use as the thousands separator.
If you upload the video to your own server, it should play on Android devices without issue. However, if you don't want to upload it to a server, the only workaround is to download the video and upload it to your YouTube channel, which will remove the restrictions. Finally, you can embed the newly uploaded video into your page, replacing the restricted one.
Here's a practical solution for counting correct words typed in your online typing test application.
For comparing text and counting correct words, the basic approach is straightforward:
1. Get the original text from the first textarea
2. Get what the user typed in the second textarea
3. Split both strings into word arrays using whitespace as delimiter
4. Compare word by word at matching positions
5. Count the matches to determine accuracy
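The steps above can be sketched as follows (plain Python for illustration; in the actual page this logic would run in JavaScript against the two textarea values):

```python
def count_correct_words(original: str, typed: str) -> int:
    """Count words the user typed correctly, comparing position by position."""
    original_words = original.split()  # step 3: split on whitespace
    typed_words = typed.split()
    # steps 4-5: compare words at matching positions and count the matches
    return sum(1 for o, t in zip(original_words, typed_words) if o == t)

# Example: one misspelled word out of four
print(count_correct_words("the quick brown fox", "the quick brwon fox"))  # → 3
```

Accuracy is then just the count divided by the number of words in the original text.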
Implementation considerations:
Decide whether capitalization should matter for your test. Think about how you want to handle punctuation. Consider how to treat multiple spaces between words. Determine if contractions should be single or multiple words.
For performance metrics, track the elapsed time and compare it against the accuracy score.
This word-by-word comparison is the standard foundation used by most typing applications and online typing practice systems. To understand production implementations, study how existing typing practice sites compute accuracy, words-per-minute, and per-keystroke feedback across their different modes.
Start with implementing this basic word comparison logic. Test it with different inputs. Then progressively add features like real-time accuracy feedback, performance calculations, and detailed error reporting.
Good luck with your typing test!
Same question here, any chance to get it resolved?
Thanks to the answer from:
Vijay Mohan Rangaraju, Apr 6 at 12:54
Hey @user29023248, thanks a lot for your tip! I did try restricting the @RestControllerAdvice annotation using basePackages as you suggested, and it was a solid direction. But even after that, Swagger still wasn't loading properly. That's when ChatGPT suggested using the @Hidden annotation on our error handler methods to prevent Swagger from processing them. Once I did that, everything worked perfectly. Your view on this helped me solve the issue. Appreciate your input; it definitely helped me get on the right track!
I was able to find the error. I realized that when you globalize the errors in controllers or exception handlers, you keep the real errors from showing up, but with a single
@RestControllerAdvice(basePackages = "com.uni.dev.demo.controller")
which helped me control exactly where the exceptions are handled.
As per https://issues.apache.org/jira/browse/KAFKA-19607?jql=project%20%3D%20KAFKA%20AND%20text%20~%20%22Mirrormaker%22%20ORDER%20BY%20created%20DESC, MM2 will only replicate consumer offsets if the consumer is still active or has reached the end of the partition.
If the consumer has lag, the offset translation to the target cluster can be inaccurate, and it gets worse the further the consumer is from the end of the topic (i.e. the higher the lag).
For anyone looking, I got accented characters to work in Ubuntu 25.10 after going to Language Support and changing my Keyboard input method system to XIM instead of Ibus. Ibus is known not to be compatible with Wine.
The debugger is stopping at “weird” places because the compiler inlined/optimized template/device functions (and/or you don’t have device debug symbols), so source binary mapping is not 1:1.
Quick steps to fix (do these in your Debug build)
Build with device debug info and disable device optimizations:
nvcc: add -G (Generate debug info for device) and disable optimizations for CUDA code.
Only use -G for local debug builds (it drastically changes code/performance).
Build host with debug info and no optimizations:
MSVC: C/C++ → Optimization = Disabled (/Od) and Debug Info = /Zi.
Clean + full rebuild.
Force a non-inlined function where you want a reliable breakpoint:
Use noinline on the functions you debug:
GCC/Clang/nvcc: __attribute__((noinline))
MSVC: __declspec(noinline)
Example:
__device__ __host__ __attribute__((noinline)) TVector3<T> operator+(...) { ... }
Start the correct debug session in Nsight:
Verify symbols and sources:
Short-run kernel for easier stepping:
Here are the top skills to master in 2025 for better career growth:
Artificial Intelligence (AI) – Understand AI tools, automation, and machine learning.
Data Analytics – Learn to analyze and interpret data for smarter decisions.
Digital Marketing – Gain expertise in SEO, social media, and content marketing.
Cybersecurity – Protect data and systems from digital threats.
Cloud Computing – Master platforms like AWS, Google Cloud, or Azure.
Communication Skills – Improve clarity, confidence, and collaboration.
Critical Thinking – Solve problems logically and creatively.
Adaptability – Stay flexible and open to learning new technologies.
Emotional Intelligence – Build better teamwork and leadership qualities.
Continuous Learning – Keep upgrading skills to stay relevant in 2025 and beyond.
You can use "gfxcapture".
So can anyone tell me how it worked for you? I am just sending the original_url and a message saying "check out my blog", but it's not scraping properly: it just adds the link, with no short description, title, or featured image.
Add this dependency to your pom.xml file
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>5.3.7.Final</version>
</dependency>
Open Git Bash and enter - git update-git-for-windows
@Kaz, Thank you for introducing the #: notation and the circle notation.
As you said, gensym can be omitted in many macros, without creating many different uninterned symbols that share the same names.
In contrast, in the sortf macro, different #:NEW1 symbols should be used, and get-setf-expansion already makes a new uninterned #:NEW1 whenever it's used.
In sortf, the number of #:NEW1 symbols is proportional to the number of arguments that are passed to sortf. Therefore, sortf cannot use the fixed number of symbols that can be generated by the #: notation.
Flutter developers using Android Studio may encounter a common warning message related to the Flutter Device Daemon Crash, specifically advising to "Increase the maximum number of file handles available." This issue can disrupt the development workflow, but fear not: there is a straightforward solution. In this article, we'll guide you through the steps to fix this warning and get your Flutter project back on track.
In Windows, open the Registry Editor and navigate to the path below:
HKEY_LOCAL_MACHINE
└── SYSTEM
└── CurrentControlSet
└── Services
└── WebClient
└── Parameters
└── FileSizeLimitInBytes
Set FileSizeLimitInBytes to ffffffff in hexadecimal (4294967295 in decimal), i.e. the maximum value.
Similar to https://stackoverflow.com/a/24811490/2251785, but I can't find those URLs in the official docs. Here are the source-code archive URLs from GitHub's documentation, using github/codeql as an example.
| Type of archive | Example | URL |
|---|---|---|
| Branch | main | https://github.com/github/codeql/archive/refs/heads/main.zip |
| Tag | codeql-cli/v2.12.0 | https://github.com/github/codeql/archive/refs/tags/codeql-cli/v2.12.0.zip |
| Commit | aef66c4 | https://github.com/github/codeql/archive/aef66c462abe817e33aad91d97aa782a1e2ad2c7.zip |
You can install any zip file directly with this:
pip install https://github.com/github/codeql/archive/refs/heads/main.zip
I had this issue recently. I rolled back to 3.8.0, and that version seems to handle my specific PDFs better.
I'd prefer one option, using Autodesk Design Collaboration: https://help.autodesk.com/view/COLLAB/ENU/?guid=Design_Collab_Schedule_Regular_Publishing
Try adding
ocsp_fail_open=True,
Since you downloaded the latest Eclipse (2025-09), it's possible that Eclipse hasn't fully caught up with Java 25 yet, even if it officially supports it. Eclipse's support for a new Java version often lags behind the JDK's release. One thing you could try is ensuring you have the latest version of Eclipse installed (including any updates for the Java tools).
Also check the Java 25 JDK path in the eclipse.ini file.
Try setting the -vm parameter directly to the Java 25 JDK instead of a specific library within it.
In your eclipse.ini file, it should look something like this:
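A minimal sketch of the relevant eclipse.ini lines (the JDK install path shown is an assumption; point it at your actual Java 25 JDK):

```
-vm
C:\Program Files\Java\jdk-25\bin\javaw.exe
-vmargs
-Xmx2g
```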
Make sure the -vm entry is on a different line than -vmargs, and that there are no extra spaces.
Check Java Version Using Eclipse Console:
Once you have made these changes, you can check what Eclipse is using by opening the terminal in the Eclipse IDE and running java -version, or by checking the version in the console:
This will confirm if the environment is pointing to Java 25 or still using the older version.
I'm no expert and can't even connect to anything run in Kubernetes, but it runs a huge amount of extra machinery: proxy servers and other components that are pointless if you just want to run a container. Save yourself the headache and drop it.
I just uploaded nextcrumbs to github for this purpose.
Next.js isn't necessary, but there is a feature that allows the breadcrumbs to be created automatically using usePathname from next/navigation.
You can find the package here.
https://github.com/ameshkin/nextcrumbs
https://www.npmjs.com/package/@ameshkin/supercrumbs
npm i @ameshkin/supercrumbs
To dump DataRow row contents to a string I use
string.Join(", ", row.ItemArray)
I tried the patch above, but that did not quite fix the same issue, so I downloaded the newer contents of MagicWord.php file from the right side of
https://gerrit.wikimedia.org/r/c/mediawiki/core/+/122449/1/includes/MagicWord.php#686
as given by @blacksunshineCoding above, uploaded it to overwrite my MagicWord.php (after first saving a copy) and that fixed the issue.
Unfortunately the subsequent purge broke my session, and even after various attempts (clearing cookies, adding cache settings to LocalSettings.php) I can't log in with this browser due to "There seems to be a problem with your login session; this action has been canceled as a precaution against session hijacking. Go back to the previous page, reload that page and then try again." However, I can see my wiki media site again with all browsers, even though my MediaWiki is very, very old, and I can log in and change it with other browsers.
If you have this issue, I recommend doing the purge with a browser you don't use.
Must the side menu be fixed in position?
If not, you can simply wrap the menu and the main content in a container and use CSS Grid to handle the layout. This ensures the main content area correctly fills the remaining space and can scroll independently.
<div class="wrapper">
<div class="left-menu"></div>
<div class="body-wrapper"></div>
</div>
.wrapper {
display: grid;
width: 100%;
/* Use height: 100vh or 100dvh to make it full screen, or 100%
if its parent already has a defined height. */
height: 100vh;
/* The grid columns: 'auto' sizes the menu to its content,
'1fr' gives the remaining space to the body. */
grid-template-columns: auto 1fr;
/* Prevents the entire grid from adding a scrollbar if inner content is too wide. */
overflow: hidden;
}
/* Restore scrolling behavior in the main body content. */
.body-wrapper { overflow: auto }
If the menu must be fixed, please clarify your requirements so I can provide a solution tailored to that constraint.
from PIL import Image, ImageDraw, ImageFont
import random
# Scale and dimensions
scale = 10 # 1 cm = 10 px
width_px = 183 * scale
height_px = 35 * scale
# Create a white background image
visual = Image.new("RGB", (width_px, height_px), (255, 255, 255))
draw = ImageDraw.Draw(visual)
# Font for labels
try:
    font = ImageFont.truetype("arial.ttf", 15)
except OSError:
    font = ImageFont.load_default()
# Photo data (code, size in cm, position in cm)
photos = [
("G1",16,22,2,2),("G2",14,20,20,2),("G3",18,18,38,2),("G4",16,22,56,2),
("G5",14,20,74,2),("G6",16,22,92,2),("G7",18,18,110,2),("G8",16,22,128,2),
("G9",14,20,146,2),("G10",16,22,164,2),
("G11",14,20,2,25),("G12",16,22,20,25),("G13",18,18,38,25),("G14",16,22,56,25),
("G15",14,20,74,25),("G16",16,22,92,25),("G17",18,18,110,25),("G18",16,22,128,25),
("G19",14,20,146,25),("G20",16,22,164,25),
("M1",10,13,5,15),("M2",10,13,20,15),("M3",10,13,35,12),("M4",10,13,50,15),
("M5",10,13,65,12),("M6",10,13,80,15),("M7",10,13,95,12),("M8",10,13,110,15),
("M9",10,13,125,12),("M10",10,13,140,15),("M11",10,13,155,12),("M12",10,13,170,15),
("M13",10,13,5,28),("M14",10,13,20,28),("M15",10,13,35,28),("M16",10,13,50,28),
("M17",10,13,65,28),("M18",10,13,80,28),("M19",10,13,95,28),("M20",10,13,110,28),
("M21",10,13,125,28),("M22",10,13,140,28),("M23",10,13,155,28),
("P1",7,10,8,7),("P2",7,10,23,7),("P3",7,10,38,7),("P4",7,10,53,7),("P5",7,10,68,7),
("P6",7,10,83,7),("P7",7,10,98,7),("P8",7,10,113,7),("P9",7,10,128,7),("P10",7,10,143,7),
("P11",7,10,158,7),("P12",7,10,173,7),("P13",7,10,8,22),("P14",7,10,23,22),("P15",7,10,38,22),
("P16",7,10,53,22),("P17",7,10,68,22),("P18",7,10,83,22),("P19",7,10,98,22),("P20",7,10,113,22),
("P21",7,10,128,22),("P22",7,10,143,22),("P23",7,10,158,22),("P24",7,10,173,22),
("P25",7,10,8,33),("P26",7,10,23,33),("P27",7,10,38,33),("P28",7,10,53,33),("P29",7,10,68,33),
("P30",7,10,83,33),("P31",7,10,98,33),
("Mi1",5,5,12,12),("Mi2",5,5,27,12),("Mi3",5,5,42,12),("Mi4",5,5,57,12),("Mi5",5,5,72,12),
("Mi6",5,5,87,12),("Mi7",5,5,102,12),("Mi8",5,5,117,12),("Mi9",5,5,132,12),("Mi10",5,5,147,12)
]
# Return a random light color
def random_color():
    return tuple(random.randint(100, 255) for _ in range(3))
# Draw each photo as a colored rectangle with a centered label
for code, w_cm, h_cm, x_cm, y_cm in photos:
    x0 = x_cm * scale
    y0 = y_cm * scale
    x1 = x0 + w_cm * scale
    y1 = y0 + h_cm * scale
    fill_color = random_color()
    draw.rectangle([x0, y0, x1, y1], outline=(0, 0, 0), width=2, fill=fill_color)
    # draw.textsize was removed in Pillow 10; textbbox is the replacement
    left, top, right, bottom = draw.textbbox((0, 0), code, font=font)
    text_w, text_h = right - left, bottom - top
    draw.text((x0 + (x1 - x0 - text_w) / 2, y0 + (y1 - y0 - text_h) / 2), code, fill=(0, 0, 0), font=font)
# Save the final image
visual.save("collage_visual.png")
print("Visual collage generated: collage_visual.png")
Use imports_passed_through when importing activities into workflow code:
with workflow.unsafe.imports_passed_through():
import test_activity
See https://docs.temporal.io/develop/python/python-sdk-sandbox#passthrough-modules for more info.
Thanks for using Django MongoDB Backend. Would you mind creating an issue here: https://jira.mongodb.org/projects/INTPYTHON/issues/INTPYTHON-809?filter=allopenissues ?
Procedure bindings of parameterized derived types with KIND type parameters need to be implemented for the various distinct sets of values with which the KIND type parameters will be instantiated, and they really need to be collected into a generic type-bound procedure to be useful in practice. You could also implement a TBP for a KIND PDT using an unlimited polymorphic PASS dummy argument, but that just moves the problem from compilation time to run time without adding any more flexibility.
I have been trying to do that and this is what I have been able to do:
You can find the code in my repo: https://github.com/omrastogi/dsa_questions/blob/master/cs5800-Algorithms/binary_search_trees/bst_visualization.py
by listening to fetch requests
@user31405354, How would you slim down the image with a multi-stage build if you need to remove dependencies only used in one workspace's member which is not required for the image you're trying to build ?
For now you will have to build on python=3.12 or lower. If that doesn't work please let me know.
The problem is with Blazor Server, which uses SignalR internally. Adding another SignalR client conflicts with it, and the button click stops working. For me, Blazor WebAssembly worked with the SignalR client.
@Barmar, Thank you for the explanation about compiler optimization.
As you said, calling the car function sounds more expensive than referencing a literal or a variable.
What seems to currently work is:
address = self.base_address + line_number.virtualAddress
Here I am using virtualAddress instead of addressOffset.
Order both sets by Morton number of coordinates.
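A minimal sketch of a Morton (Z-order) key for 2D integer coordinates, assuming non-negative coordinates that fit in `bits` bits:

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return key

# Sorting both point sets by this key groups spatially nearby points together
points = [(3, 1), (0, 0), (2, 2), (1, 3)]
points.sort(key=lambda p: morton_key(*p))
```

Sorting by this key linearizes the points along a Z-order curve, so nearby coordinates tend to end up near each other in the sorted order.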
Shopify recently increased the limits on metafield and metaobject definitions.
Basically, for metaobject entries:
The 1,000,000 entry limit per definition removes previous plan-based restrictions of 64,000 (non-Plus) and 128,000 (Plus).
128 definitions for Basic, Shopify, and Advanced plans for a merchant
It looks like Shopify is encouraging merchants and app developers to fully leverage its metaobjects instead of using external storage.
Thanks to everyone. I completely understand what you are saying and you have basically confirmed what I already thought. There are some instances where I will have to continue to use #defines, and that is fine.
In my case it was a mistake in the .sln file. There was a mapping from x64 to Win32 like this:
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|Win32
Which needs to be
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|x64
This was caused by me manually editing those files and making mistakes ... I probably shouldn't do that.
I created a SQL Server trigger that blocks or allows remote connections based on IP address — without blocking port 1433 or stopping the SQL Server service. This trigger helps control remote access while keeping the benefits of TCP 1433 connections.
Just run this trigger; you can edit @ip to control which machines can connect to SQL Server.
https://github.com/ozstriker712/BLock-Allow-IP-adresse-for-Remote-Connection-SQL-SERVER
The MonoGame template package is still based on .NET Standard 2.0, and it’s not fully updated for .NET 8 yet. Because of that, the install command can fail. Trying it with .NET 6 or 7 SDK might work.
I understand that. If everything I read says to use constexpr instead of #define, then I'm assuming there must be a way of replicating #ifdef, etc ? If not, then why not just use #define?
https://stackoverflow.com/questions/21837819/base64-encoding-and-decoding

c:\TEMP>type c.txt
dDMrKDpBUFBNT0JJ
c:\TEMP>base64 -d c.txt
t3+(:APPMOBI
c:\TEMP>base64 -d c.txt > c.bin
c:\TEMP>od -t x1 c.bin
0000000 74 33 2b 28 3a 41 50 50 4d 4f 42 49
0000014
c:\TEMP>type c.bin
t3+(:APPMOBI
c:\TEMP>
Macros are nothing like proper variables. You shouldn't even be comparing them.
You’re very close — the error you’re seeing (CredentialsProviderError: Could not load credentials from any providers) isn’t really about your endpoint, but rather about AWS SDK v3 trying to sign the request even though it’s hitting your local serverless-offline WebSocket server.
Let’s walk through what’s happening and how to fix it.
When you do:
const apiGatewayClient = new ApiGatewayManagementApiClient({ endpoint });
await apiGatewayClient.send(new PostToConnectionCommand(payload));
The AWS SDK v3 automatically assumes it’s talking to real AWS API Gateway, so it:
Attempts to sign the request with AWS credentials.
Fails because serverless-offline doesn’t need or support signed requests.
Hence: Could not load credentials from any providers.
So even though your endpoint (http://localhost:3001) is correct, the client is still trying to sign requests as if it were AWS.
When using serverless-offline for WebSocket testing, you need to give the ApiGatewayManagementApiClient dummy credentials and a local region.
Here’s a working local setup:
const {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} = require("@aws-sdk/client-apigatewaymanagementapi");

exports.message = async (event, context) => {
  // Use the same port that serverless-offline shows for websocket
  const endpoint = "http://localhost:3001";
  const connectionId = event.requestContext.connectionId;

  const payload = {
    ConnectionId: connectionId,
    Data: "pong",
  };

  const apiGatewayClient = new ApiGatewayManagementApiClient({
    endpoint,
    region: "us-east-1",
    // Dummy credentials to satisfy SDK signer
    credentials: {
      accessKeyId: "dummy",
      secretAccessKey: "dummy",
    },
  });

  try {
    await apiGatewayClient.send(new PostToConnectionCommand(payload));
  } catch (err) {
    console.error("PostToConnection error:", err);
  }

  return { statusCode: 200, body: "pong sent" };
};
The AWS SDK v3 doesn’t let you completely disable signing, but it’s happy if you provide any credentials.
Since serverless-offline ignores them, “dummy” values are perfectly fine locally.
Start serverless offline:
npx serverless offline
Connect via WebSocket client (e.g., wscat):
npx wscat -c ws://localhost:3001
Type a message — you should see "pong" echoed back.
| Problem | Fix |
|---|---|
| SDK tries to sign local requests | Provide dummy credentials |
| Wrong endpoint | Use http://localhost:3001 (as printed by serverless-offline) |
| Missing region | Add region: "us-east-1" |
Tip: you can also make this conditional (e.g. on IS_OFFLINE) so the client automatically switches between the local endpoint and real AWS, which makes deployments smoother.
Starts with any number of 'abc', or contains any number of 'aab' or any number of 'bba' as a substring, or ends with 'abba' or any number of 'ccc'.
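A hedged sketch of one way to express these alternatives as a single regex (treating "any number" as "one or more"; whether zero repetitions should also match is an assumption the original description doesn't settle):

```python
import re

# ^(abc)+        : starts with one or more 'abc'
# (aab)+|(bba)+  : contains one or more 'aab' or 'bba' anywhere
# (abba|(ccc)+)$ : ends with 'abba' or one or more 'ccc'
pattern = re.compile(r"^(abc)+|(aab)+|(bba)+|(abba|(ccc)+)$")

print(bool(pattern.search("abcxyz")))   # starts with 'abc'
print(bool(pattern.search("xyzabba")))  # ends with 'abba'
print(bool(pattern.search("xyz")))      # matches none of the alternatives
```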
You need to re-architect the structure, as the info you have given so far is not enough right now.
One way to start to remedy the issue would be to remove the coordinator all together, and only include the headers where you need them, and implement the logic accordingly. The issue you are facing is very common when trying to spread too much of the implementation out across too many files.
Once you provide more info (the exact error string at least) I'm sure we could figure it out quite quickly and help you solve this.
pytidycensus is no longer being maintained and its dependency chain is not compatible with Python 3.13 and recent NumPy builds.
The similar package is tidycensus : https://pypi.org/project/tidycensus/
you can install it using pip:
pip install tidycensus
Alright, Ahmed 💪
To finish the job and deliver the ready-made version, I need to confirm one last small detail:
On the Formulaire sheet, do you want the blue button to be:
1️⃣ at the top of the page (above cells C2:C5)
or
2️⃣ at the bottom (below cell C5, so the user finds it right after filling in the information)?
Tell me which one you choose so I can build it exactly that way 🎯
Try the command:
git count-objects -vH
This command gives you the size of the data being uploaded; Git might be uploading library files that you thought were ignored by .gitignore.
It's just a guess. You could check and reply in the comments.
If you use "SQLTools" by Matheus Teixeira, you can disable the feature in the extensions "Settings" dialog:
Just uncheck "Highlight Query".
So, I just found this buried in F5 documentation:
These variables have no prefix - for example, a variable named foo. Local variables are scoped to the connection, and are deleted when the connection is cleared. In most cases, this is the most appropriate variable type to use.
Apparently iRule variables are scoped to the connection, which in theory means they can be shared by iRules on the same connection. So it looks like I can add two iRules to the same VIP: one that sets the variables in the irule_init, placed higher in priority than the iRule that has all of the event logic. Can anyone confirm this will work? I may need to do some experimentation.
No.
Apple does not provide any system process that refreshes your app’s APNs token automatically after a restore or migration. The token is only refreshed once your app explicitly registers again.
From Apple’s documentation:
“APNs issues a new device token to your app when certain events happen. The app must register with APNs each time it launches.”
— Apple Developer Documentation: Registering Your App with APNs
And:
“When the device token has changed, the user must launch your app once before APNs can once again deliver remote notifications to the device.”
— Configuring Remote Notification Support (Archived Apple Doc)
That means the OS will not wake your app automatically to renew the token post-restore. The user must open the app at least once.
2. Can the app be awakened silently (e.g., background app refresh or silent push) to refresh its token before the user opens it?
Not reliably.
While background modes like silent push (content-available: 1) or background app refresh can wake your app occasionally, they don’t work until the app has been launched at least once after installation or restore.
Also, if the APNs token changed due to restore, your backend will still be sending notifications to the old, invalid token — meaning the silent push will never arrive in the first place.
“The system does not guarantee delivery of background notifications in all cases. Notifications are throttled based on current conditions, such as the device’s battery level or thermal state.”
— Apple Docs: Pushing Background Updates to Your App
So while background updates might sometimes trigger, you can’t rely on them for refreshing tokens after a restore.
3. What’s the best practice to ensure push delivery reliability after a device restore?
Here’s what works in production:
Always call registerForRemoteNotifications() on every cold launch.
Send the token to your backend inside
application(_:didRegisterForRemoteNotificationsWithDeviceToken:).
Compare the new token to the last saved one and update your backend if it changed.
Do not cache or assume the token is permanent.
“Never cache device tokens in your app; instead, get them from the system when you need them.”
— Apple Docs: Registering Your App with APNs
Treat device tokens as ephemeral — they can change anytime (reinstall, restore, OS update, etc.).
Handle APNs error responses such as:
410 Unregistered → token is invalid; stop sending.
400 BadDeviceToken → token doesn’t match app environment.
When receiving these, mark tokens as invalid and remove them from your database.
Keep a “last registration date” per device and flag stale ones.
For critical alerts (e.g., security, transactions), have fallback channels (email, SMS, etc.).
“If a provider attempts to deliver a push notification to an application, but the application no longer exists on the device, APNs returns an error code indicating that the device token is no longer valid.”
— Apple Docs: Communicating with APNs
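On the provider side, the 410/400 handling described above can be sketched in a few lines. This is a minimal illustration, not APNs client code: the token store and the response fields are hypothetical stand-ins for whatever your backend actually uses.

```python
# Hypothetical in-memory token store: token -> metadata.
token_store = {
    "old-token": {"last_registered": "2023-01-01", "valid": True},
    "fresh-token": {"last_registered": "2025-06-01", "valid": True},
}

def handle_apns_response(token, status, reason):
    """Prune or flag tokens based on the APNs HTTP/2 response.

    410 Unregistered   -> the token is no longer valid; stop sending.
    400 BadDeviceToken -> token doesn't match the app environment
                          (e.g. a sandbox token sent to production).
    """
    if status == 410 and reason == "Unregistered":
        token_store.pop(token, None)              # remove permanently
    elif status == 400 and reason == "BadDeviceToken":
        if token in token_store:
            token_store[token]["valid"] = False   # flag for investigation

handle_apns_response("old-token", 410, "Unregistered")
print("old-token" in token_store)  # False
```

The "last_registered" field is there to support the stale-token flagging mentioned above; how you persist it is up to your backend.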
For those still having issues with this:
Enable Databricks Apps On-Behalf-Of User Authorization (click on your user, then 'Preview'). For this to take effect, you need to shut down your app and start it again.
Add scopes to your app by editing the app. To edit scopes, your app must be stopped.
After configuring scopes and restarting the app, you may need to end the login session and log in to Databricks again for the scope changes to take effect. My Databricks instance is configured with Google Workspace SSO, so I had to end my Google session and log in again for it to work.
Flagged: this should be an objective question, not part of the massively downvoted opinion-based questions alpha experiment on Stack Overflow.
Please include a clearer reproduction and the complete error message.
Have you tried notExists() instead of id.eq(JPAExpressions.select(...).limit(1)) ?
// Assuming a second alias for the correlated subquery, e.g.:
// QVehicleLocation subLocation = new QVehicleLocation("subLocation");
jpaQueryFactory
    .selectFrom(qVehicleLocation)
    .innerJoin(qVehicleLocation.vehicle).fetchJoin()
    .where(
        JPAExpressions.selectOne()
            .from(subLocation)
            .where(
                subLocation.vehicle.eq(qVehicleLocation.vehicle),
                subLocation.createdAt.gt(qVehicleLocation.createdAt)
                    .or(
                        subLocation.createdAt.eq(qVehicleLocation.createdAt)
                            .and(subLocation.id.gt(qVehicleLocation.id))
                    )
            )
            .notExists()
    )
    .fetch();
This is what you need:
function load() {
    // your function here
}

$(function() {
    load(); // run once on page load
});

setInterval(load, 600000); // then repeat every 10 minutes
For a cleaner approach, I want something like this:
field: 'purchaseOrder.poCode', headerName: 'PO Number', flex: 1, minWidth: 120,
instead of the below:
field: 'purchaseOrder', headerName: 'PO Number', flex: 1, minWidth: 120, valueGetter: (params) => {
return params?.poCode
}
What should I do if viewB is transparent/translucent (basically it's a carousel) and I want to avoid overlap?
In practice, you want the LLM to have the entire body of text prior to responding. What you should do is begin streaming the response from the LLM and send that to your text-to-speech processor if you want to reduce voice latency.
What we actually did is just add a sleep job before the job you want delayed.
Like this on Windows; simple, but it works:
Start-Sleep -Seconds 3600
Or on Unix:
sleep 1h
Cheers,
Dave
Use file reference with #r:
#r @"C:\Users\<your-user>\.nuget\packages\newtonsoft.json\<package-version>\lib\<.net-version>\Newtonsoft.Json.dll"
So, letting a friend try my code on his Mac without the xsl stylesheet parameter, he got this error:
Run-time error '2004'
Method 'OpenXML' of object 'Workbooks' failed.
Which answers my #1 question.
Thanks for the contributions @timwilliams and @Haluk.
I will start exploring options like Power Query.
Still having this problem in 2025; it took some work, but I got a solution working. I have networkingMode=mirrored, no JAVA_HOME conflicts, and most other connections work fine, but I had to set up forwarding using usbipd-win to get it working.
Install usbipd-win on Windows. Using an admin PowerShell, run
winget install --interactive --exact dorssel.usbipd-win
or download the .msi from https://github.com/dorssel/usbipd-win/releases
Connect your phone via USB.
Open PowerShell as admin and list devices with usbipd list, noting the phone's BUSID (e.g. 1-4) and its VID:PID, then:
Bind the device: usbipd bind --busid <BUSID>
Attach it to WSL: usbipd attach --wsl --busid <BUSID>
Accept the "Allow USB debugging?" prompt on the phone.
Restart adb in WSL: adb kill-server; adb start-server; adb devices and you should see the device showing up.
After I did this the first time and selected "always allow this connection" on my phone, it has worked pretty much every time. Occasionally I have to do it again after a restart, but it's pretty stable. I wrote a script to automate the whole thing and aliased it so it's easier to run if I have to reset the binding:
# AttachAndroidToWSL.ps1
$deviceVidPid = "<VID:PID>"

Write-Host "Searching for device with VID:PID $deviceVidPid..."
$devices = usbipd list
$targetDevice = $devices | Where-Object {
    $_ -match $deviceVidPid -and
    $_ -notmatch "Attached"
}

if ($targetDevice) {
    $busId = ($targetDevice -split " ")[0]
    Write-Host "Found device: $targetDevice"
    Write-Host "Attaching device with BUSID $busId to WSL..."
    try {
        usbipd bind --busid $busId | Out-Null
        usbipd attach --wsl --busid $busId
        Write-Host "Device attached successfully. Check adb devices in WSL."
    } catch {
        Write-Error "Failed to attach device: $($_.Exception.Message)"
    }
} else {
    Write-Host "Device with VID:PID $deviceVidPid not found or already attached."
    Write-Host "Current USB devices:"
    usbipd list
}

# Restart adb server in WSL (optional)
# Change the WSL distribution name if it's not 'Ubuntu'
# wsl -d Ubuntu -e bash -c "adb kill-server; adb start-server"
A connect-android PowerShell alias is also helpful to quickly rebind:
function Connect-Android {
C:\path\to\script\AttachAndroidToWSL.ps1
}
Set-Alias -Name connect-android -Value Connect-Android
LLM is the model itself, a direct interface to the language model (e.g., OpenAI, Anthropic). You can call it directly with a prompt and get a response.
LLMChain is a LangChain wrapper that combines the model (llm) with a PromptTemplate and optional output logic. It doesn’t replace the LLM; it uses it internally to build a reusable, parameterized pipeline.
So it’s not one over the other; you typically use them together:
the LLM provides the intelligence, and the LLMChain structures how prompts are created and managed when interacting with it.
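The relationship can be illustrated with a toy sketch. This is not actual LangChain code; the model here is a stand-in function, and the two classes only mimic the shape of LangChain's PromptTemplate and LLMChain:

```python
def llm(prompt: str) -> str:
    """Stand-in for the LLM: prompt in, completion out."""
    return f"<completion for: {prompt}>"

class PromptTemplate:
    """Toy template: holds a format string with named slots."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class LLMChain:
    """Toy chain: combines a model with a template into a reusable pipeline."""
    def __init__(self, llm, prompt: PromptTemplate):
        self.llm = llm
        self.prompt = prompt

    def run(self, **kwargs) -> str:
        # The chain doesn't replace the LLM; it formats the prompt
        # and then calls the LLM internally.
        return self.llm(self.prompt.format(**kwargs))

chain = LLMChain(llm, PromptTemplate("Translate to French: {text}"))
print(chain.run(text="good night"))
# -> <completion for: Translate to French: good night>
```

The point of the wrapper is reuse: the same chain can be run with different inputs while the prompt structure stays in one place.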
I'd use Power Query for this. With Office 365, this formula could be an alternative; it doesn't require LAMBDA.
=LET(_data,A1:F13,
_header,DROP(TAKE(_data,1),,1),
_body,DROP(_data,1),
VSTACK(HSTACK("",_header),
CHOOSEROWS(_body,
XMATCH(
MAXIFS(
INDEX(_body,,2),INDEX(_body,,1),UNIQUE(INDEX(_body,,1)))&UNIQUE(INDEX(_body,,1)),
INDEX(_body,,2)&INDEX(_body,,1)))))
Try the LockedList plug-in, it also has a nice UI.
Had the same issue; I had to install torchcodec==0.7 so that it was compatible with my PyTorch version, then reset my runtime in Colab and it worked. A diagram of PyTorch/torchcodec compatibility can be found here: https://github.com/meta-pytorch/torchcodec
I ran into the same thing before. The designer just shows that black screen instead of rendering the control, which is kind of annoying. What fixed it for me was rebuilding my Nebroo project and reopening Visual Studio. Once it loads properly, the control shows fine when added to a form. It's just how the designer handles custom controls sometimes.
Excellent, this worked for me. Perhaps because of autocompletion it was cast as (///) when the correct form is (//).
Integrating NLP with Solr improves search quality by normalizing language, identifying entities, and expanding related terms. Instead of treating words as isolated tokens, NLP lets Solr recognize that “run,” “running,” and “ran” refer to the same concept, or that “Paris” may refer to a location entity. This results in higher recall, better matching, and more contextually relevant results.
For reference, a detailed study on this approach is available below, analyzing the impact of NLP techniques on Solr’s search relevancy.
https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1577525&dswid=-8621
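The normalization idea can be illustrated with a toy suffix-stripping stemmer. This is a deliberately simplified sketch: Solr would do this with analyzer filters (e.g. a Porter stemmer), and real stemming also handles irregular forms like "ran", which this toy does not.

```python
def toy_stem(token: str) -> str:
    """Naive suffix stripping, for illustration only."""
    for suffix in ("ning", "ing", "ed", "s"):
        # Only strip when enough of a stem remains.
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

# "run", "runs", and "running" normalize to the same index term,
# so a query for one matches documents containing the others.
print(toy_stem("running"), toy_stem("runs"), toy_stem("run"))
# -> run run run
```

In Solr this normalization happens at both index time and query time, which is what produces the higher recall described above.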
Try async-trace; it should provide a proper stack trace for async/await calls.
I have the same problem where the action is 20 and the affected folder is empty, when in reality it should have files in it. In my case, when I looked at the history in the VSS client, the action was showing as "Archived version of ..." (where the ... is the file name).
The VssPhysicalLib>RevisionRecord.cs>Action enumeration does have an entry for ArchiveProject = 23, but not for 20.
A lazy solution to the problem, but it managed to work around the issue for me:
Add a new entry to the VssPhysicalLib>RevisionRecord.cs>Action enumeration: ArchiveUnknown = 20,
Add a new VSS action class to VssLogicalLib>VssAction.cs file:
public class VssNoAction : VssAction
{
    public override VssActionType Type { get { return VssActionType.Label; } }

    public VssNoAction()
    {
    }

    public override string ToString()
    {
        return "No Action";
    }
}
Add a new case to the switch statement in the VssLogicalLib>VssRevision.cs>CreateAction() method:
case Hpdi.VssPhysicalLib.Action.ArchiveUnknown:
{
    return new VssNoAction();
}
For more details, you can check this issue on trevorr/vss2git github repo: https://github.com/trevorr/vss2git/issues/39
You can do this efficiently and in a vectorized way in NumPy using broadcasting.
import numpy as np

a = np.array([1, 3, 4, 6])
b = np.array([2, 7, 8, 10, 15])

# b[:, None] has shape (5, 1); broadcasting it against a's shape (4,)
# yields a (5, 4) array of all pairwise sums b[i] + a[j].
result = b[:, None] + a
print(result)
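Equivalently, the same pairwise-sum table can be produced with np.add.outer, which arguably makes the intent more self-documenting:

```python
import numpy as np

a = np.array([1, 3, 4, 6])
b = np.array([2, 7, 8, 10, 15])

# np.add.outer(b, a)[i, j] == b[i] + a[j], the same as b[:, None] + a
result = np.add.outer(b, a)
print(result.shape)  # (5, 4)
```

Both forms allocate the full (len(b), len(a)) array, so for very large inputs you may want to process in chunks.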
You need to either add this header:
'Referrer-Policy': 'strict-origin-when-cross-origin'
Or you can add the following to your embed element:
referrerpolicy='strict-origin-when-cross-origin'
Either should work for you to get them back up and running.
See YouTube documentation here:
https://developers.google.com/youtube/terms/required-minimum-functionality#embedded-player-api-client-identity
@Swifty, wouldn't such use of zip create a list -- rather than an iterator? The input sequence of rows here can be very long -- I don't want to store it all in memory...
You can use @sln's JSON Perl/PCRE regex functions to validate and error-check.
See this link for several practical usage examples for querying and validating JSON,
and for a full explanation of the several functions available:
https://stackoverflow.com/a/79785886/15577665
json_regexp = paste0(
"(?x) \n",
" \n",
" # JSON recursion functions by @sln \n",
" \n",
" (?: \n",
" (?: # Valid JSON Object or Array \n",
" (?&V_Obj) \n",
" | (?&V_Ary) \n",
" ) \n",
" | # or, \n",
" (?<Invalid> # (1), Invalid JSON - Find the error \n",
" (?&Er_Obj) \n",
" | (?&Er_Ary) \n",
" ) \n",
" ) \n",
" \n",
" \n",
" (?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*})) \n"
)
It seems that these steps did the trick: I installed another emulator with Android 14 and Google Play services just in case, generated my own CA certificate (not a self-signed one) and installed it on the emulator, and configured it in assets/certs and res/raw at the same time.
I have the same problem, but I haven't been able to solve it.
I have all the data as integer, but I still get this error: [inputs.mqtt_consumer] Error in plugin: metric parse error: expected tag at 1:3: "84".
mqtt.conf for Telegraf:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [ "test_topic" ]
qos = 0
client_id = "qwe12"
#name_override = "entropy_available"
#topic_tag = "test_topic"
data_format = "influx"
data_type = "integer"
Influxdb:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [
"test_topic"
]
[[inputs.mqtt_consumer.fieldpass]]
field="value"
converter="integer"
Where am I going wrong?
https://pypi.org/project/keyring/ uses platform services to securely store secrets, credentials, etc.
You cannot do it using "forge test" but you can deploy to the testnet and run tests off of it with the methods you show above.
I just had this come up because some program had stuck a P4CHARSET=utf8 into my p4config.txt in a depot that was not configured for utf8. So that may be one of many reasons for this error.
I think I’ve found a solution, and I’d appreciate it if someone could take a look and comment, so I know if I’m on the right track.
After numerous changes, I realized that one of the bigger problems was that I wasn’t performing a Clean + Rebuild, so Visual Studio kept caching my modifications.
In the end, the solution came down to the following part of the web.config file:
<system.web>
<authentication mode="Windows" />
<compilation debug="true" targetFramework="4.5.2" />
<httpRuntime targetFramework="4.5.2" />
<httpModules>
<add name="ApplicationInsightsWebTracking" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" />
</httpModules>
</system.web>
<!-- Set Windows Auth for api/auth/token endpoint -->
<location path="api/auth/token">
<system.webServer>
<security>
<authentication>
<anonymousAuthentication enabled="false" />
<windowsAuthentication enabled="true" />
</authentication>
</security>
</system.webServer>
</location>
<!-- For the rest of the app, allow anonymous auth -->
<system.webServer>
<security>
<authentication>
<anonymousAuthentication enabled="true" />
<windowsAuthentication enabled="false" />
</authentication>
</security>
</system.webServer>
Now, the first endpoint passes through Windows Authentication (receives the Authorization: Negotiate ... header), while the rest of the application is authorized through CustomAuthorization using JWT tokens.
Additionally, I had to configure the following in the applicationhost.config file:
<section name="anonymousAuthentication" overrideModeDefault="Allow" />
<section name="windowsAuthentication" overrideModeDefault="Allow" />
I would appreciate it if someone could review this and provide advice or recommendations on whether this setup is acceptable.
Thank you!