Is there a way to create an array of Any?
Here are two ways to create an array of Any initialized with the named tuple (; some=missing):

fill!(Vector{Any}(undef, 10), (;some=missing)) # fill! returns its first argument
Any[(;some=missing) for _ in 1:10]

These forms are not interchangeable when the value of the filling expression is mutable. The first, with the fill! expression, uses the same value for all elements. The second, with the array comprehension, creates a separate value for each element. For example, if the value expression is [], the first creates one empty mutable array shared by all elements, and the second creates a separate empty mutable array for each element.

@allocated fill!(Vector{Any}(undef, 10), []) # 176 (bytes allocated)
@allocated Any[[] for _ in 1:10] # 432 (bytes allocated)

Why does

Vector{Any}((; some=missing), 10)

fail?
The expression Vector{Any}((; some=missing), 10) fails because no method is defined for this case.
Constructor methods are only defined (as of Julia 1.12.0) for other argument patterns, such as Vector{T}(undef, n) and, where the element type allows it, Vector{T}(nothing, n) or Vector{T}(missing, n).
Here is an attempt to define one:
Vector{T}(value::V, n) where {T, V <: T} = fill!(Vector{T}(undef, n), value)
With this definition, the expression Vector{Any}((; some=missing), 10) works.
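The same shared-value-vs-fresh-value distinction shows up in other languages. Here is a quick Python analogy (not Julia, just to illustrate the aliasing pitfall):

```python
# One list object repeated 3 times (like fill! with a mutable value):
# every slot aliases the same list.
shared = [[]] * 3
shared[0].append(1)
print(shared)  # [[1], [1], [1]]

# A fresh list per element (like the comprehension form).
fresh = [[] for _ in range(3)]
fresh[0].append(1)
print(fresh)   # [[1], [], []]
```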
I found this in a Google search:
npx playwright install --list
If somebody is facing the same issue with the Firebase emulator for an Expo app running on the iOS simulator, I got it fixed with the steps below.
Add "host" to firebase.json on the Firebase side:
"emulators": {
"functions": {
"port": 5001,
"host": "0.0.0.0"
},
"firestore": {
"port": 8080,
"host": "0.0.0.0"
},
"ui": {
"enabled": true
},
"singleProjectMode": true,
"export": {
"path": "emulator-data"
}
}
Then, on the Expo side, where you initialize the Firebase app:
import Constants from "expo-constants";
import { getApps, initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { connectFirestoreEmulator, getFirestore } from "firebase/firestore";
import { connectFunctionsEmulator, getFunctions } from "firebase/functions";
const firebaseConfig = {
apiKey: process.env.EXPO_PUBLIC_FIREBASE_API_KEY,
authDomain: process.env.EXPO_PUBLIC_FIREBASE_AUTH_DOMAIN,
projectId: process.env.EXPO_PUBLIC_FIREBASE_PROJECT_ID,
storageBucket: process.env.EXPO_PUBLIC_FIREBASE_STORAGE_BUCKET,
messagingSenderId: process.env.EXPO_PUBLIC_FIREBASE_MESSAGING_SENDER_ID,
appId: process.env.EXPO_PUBLIC_FIREBASE_APP_ID,
};
const app = getApps().length ? getApps()[0] : initializeApp(firebaseConfig);
const origin = Constants.expoConfig?.hostUri?.split(':')[0] || 'localhost';
export const db = getFirestore(app);
export const auth = getAuth(app);
export const functions = getFunctions(app, "asia-south1");
if (__DEV__ && process.env.EXPO_PUBLIC_USE_EMULATORS === "1") {
console.log(`🔌 Using local Firebase emulators... ${origin}`);
connectFunctionsEmulator(functions, origin, 5001);
connectFirestoreEmulator(db, origin, 8080);
}
export default app;
I found the problem. In the image there was a flag set to read bottom-to-top instead of top-to-bottom, and another left-to-right flag was wrongly set.
Have you considered looking at the x32 ABI? It directly addresses this problem: it takes advantage of 64-bit instructions with 32-bit pointers to avoid memory waste (overhead).
I understand your question is multi-layered, and the run-time crashing of your custom strlen() is nearly a side note, but I thought I'd address just this one aspect nonetheless. Does your code care for the possibility of a NULL parameter as does the following custom strlen()?
Runnable code here: https://godbolt.org/z/xofh9sYqK
#include <stdio.h> /* printf() */
size_t mstrlen( const char *str )
{
size_t len = 0;
if( str ) /* Prevent run-time crash on NULL pointer. */
{
for(; str[len]; len++);
}
return len;
}
int main()
{
char s[] = "stars";
printf("mstrlen(%s) = %zu\n", s, mstrlen(s)); /* %zu is the correct format for size_t */
printf("mstrlen(NULL) = %zu\n", mstrlen(NULL));
return 0;
}
I only deploy my code once or twice a year, and I often forget to create the "signed" APK. When you don't create a signed build, it shows up in a "debug" folder.
When you select the signed option, you will see your build in the "release" folder as an .aab file. You want to drag the .aab (for me it's the most recent file with no sequence number) to the Google Play Console release web page.
To view the app traffic you must install the VPN and the app user certificate, but there is a common problem on Android: the .crt and .der formats are not supported for the VPN and app user certificate. I tried a .p12 certificate and it worked; try that.
$order = Order::where('uuid', $order_id)
    ->with(['client', 'service'])
    ->first();
This is the solution that worked for me.
The overridden prompt that you provided is incorrectly formatted. Check the format for errors, such as invalid JSON, and retry your request.
Limit: 5 files (10MB total size)
Format: .pdf, .txt, .doc, .csv, .xls, .xlsx
I am trying to upload a PDF instead of a JPG so that it can be handled correctly.
https://www.npmjs.com/package/node-html-parser seems like a good alternative if you don't want to use an offscreen doc, which seems a bit overkill imo.
The fix was to add the following to layouts.js:
export const dynamic = "force-dynamic";
Here is the PR with the fix.
I got this solution after contacting DigitalOcean support.
I managed to find the solution for this.
I removed the 2 lines for QueueProcessingOrder and QueueLimit from my rate limiting logic in RateLimiterExtension.cs file.
Also added app.UseRouting() to my Program.cs file.
My rate limiting functionality now works as desired and returns 429 status code with the message when the number of HTTP requests is limited.
As @david-maze said (thanks!), you need to add steps to the Dockerfile to build in a folder other than the root directory. Add lines like this to your Dockerfile:
COPY Cargo.toml build.rs /project/
COPY proto /project/proto
COPY src /project/src
WORKDIR /project
Then you can just add RUN (or not just¹):
RUN cargo build
This prevents Cargo from scanning unnecessary system files.
¹ RUN with cache and a release build:
RUN --mount=type=cache,target=/root/.cargo/registry \
    cargo clean; cargo build --release
You might also want to check out mailmergic.com. It can take your Excel data and Word template and generate individual PDFs directly, without needing any VBA or macros.
It’s really straightforward to use and can save a lot of time compared to running Word macros, especially for large merges.
If you are looking for an API to integrate this directly into your website, you may look into https://aitranslate.in/api-documentation. It has image/PDF translation capability without losing the background, plus it offers more, like OCR, erasure, etc.
If you are looking for an image/PDF-to-image translation or OCR API similar to Google Cloud Vision, I would suggest you look into https://aitranslate.in/api-documentation
Try running these commands in your terminal:
node -v
npm -v
If both show version numbers, then Node and npm are installed fine.
If you get an error, install Node.js from https://nodejs.org — npm comes with it.
During installation, make sure the “Add to PATH” option is checked.
If you already installed it but it’s still not recognized, you can manually add this path to your system environment variables:
"C:\Program Files\nodejs\"
Sometimes the error also happens if you’re using PowerShell — try opening Command Prompt (cmd) instead and check again.
Thanks for the help guys, the problem turned out to be the server was not adding the Access-Control-Allow-Origin header. So the server was basically not sending the result back to the client because of a security policy. Once I figured that out everything just worked.
Also, the "success:" option turned out to be deprecated, so the done/fail answer below is a great example of how you are supposed to do it now.
A simple solution for the programmer, but not optimal in CPU time:
#include <string> // for std::to_string
size_t sigfigs(double x)
{
return std::to_string(x).size(); // create an r-value string and take its size!
}
But you must take into account the presence of a "-" sign, the dot, an exponent with its sign, and all possible combinations. I think it's impossible to create a universal sigfigs() function, only a problem-specific one.
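For comparison, here is the same naive string-length idea in Python, showing how the sign and the decimal point inflate the count (a sketch of the pitfall, not a real significant-figures implementation):

```python
def sigfigs_naive(x: float) -> int:
    # Count characters in the default string form.
    # This includes '-', '.', and any exponent, so it overcounts.
    return len(str(x))

print(sigfigs_naive(1.5))   # 3 -- "1.5": the '.' is counted
print(sigfigs_naive(-1.5))  # 4 -- "-1.5": the '-' is counted too
```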
This is now (kind of) supported using the new auto-copy feature and modifying the S3 event notification to filter by suffix.
You can use dependencies like this, which use references instead of deep cloning in functions.
I made a cross-platform application that clones from and to GitHub/GitLab/Gitea and local repositories.
Download section: https://github.com/goto-eof/fromgtog?tab=readme-ov-file#download
As I understand it, the answer is yes, somewhat by design. VS Code does not support running code out of the box. The "play" button (in the editor title) is added ad hoc by extensions. The play button in "Run & Debug" is built in, I think.
I just updated my SDK tools and then did Invalidate Caches, checking all 3 checkboxes.
Hello, can someone help me add this lib through crosstool-ng?
sudo apt-get install usbmuxd libimobiledevice6 libimobiledevice-utils
I tried to do as mentioned above but without success.
git clone https://github.com/libimobiledevice/libimobiledevice.git
cd libimobiledevice
./autogen.sh --prefix=`pwd`/builds
make
sudo make install
This should be handled via either a SysVar or an EnvVar.
In the Panel, you can assign a value to a system variable, which can then be accessed in CAPL code.
For example:
variables
{
byte xxxxx = 0; //assuming 8 bits data
}
on sysvar sysvar:xxxx // name your system variable
{
xxxxx = @this;
}
on message CAN_0x598
{
if(counterAlive == 15)
{
counterAlive = 0;
}
else
{
counterAlive++; // was counter++; use the same alive counter consistently
}
msg1.byte(0) = this.byte(0);
msg1.byte(1) = this.byte(1);
msg1.byte(2) = this.byte(2);
msg1.byte(3) = xxxxx; // changed through the Panel
msg1.byte(4) = xxxxx + 1; // pay attention to overflow, as a byte can only hold up to 255
msg1.byte(5) = this.byte(5);
msg1.byte(6) = counterAlive;
msg1.byte(7) = this.byte(7);
}
You've probably solved this by now, but I came across your question and decided to share the workaround I used.
We initially chose the original next-runtime-env library because we wanted a Docker image that worked for multiple environments (build once, deploy many). However, using the library caused various issues with navigation. First, I encountered a bug when navigating using router.push from the custom not-found page. Then, after integrating ISR to improve performance and splitting different app layers with separate layouts, jumping between them also wiped out my global env variables.
I had two options to solve this:
I hesitated with the first option because in next-runtime-env's server environment, when you use the env() getter to get a variable from process.env[key], it calls unstable_noStore() (aka connection()), which enables dynamic rendering. I wanted to reduce unnecessary dynamic rendering. This caused issues when I moved a route from SSR to ISR. Where dynamic rendering was needed, I decided to fetch data client-side and show skeleton loaders 💅.
A few points about the final implementation for those interested:
Now that I no longer depend on next-runtime-env, I can access runtime env in server components without enabling dynamic rendering. This opens the door for further performance improvements, such as migrating individual app pages from SSR to SSG/ISR.
I had a broken password entry in "passwords and keys" with no description, just "app_id" in details.
After deleting that entry, VS-Code stopped asking for a password.
https://reveng.sourceforge.io/crc-catalogue/16.htm
See CRC-16/IBM-3740
This C++ code works
// width=16 poly=0x1021 init=0xffff refin=false refout=false xorout=0x0000 check=0x29b1 residue=0x0000 name="CRC-16/IBM-3740"
// Alias : CRC-16/AUTOSAR, CRC-16/CCITT-FALSE
unsigned short crc16ccitFalse (const char *pData, unsigned int length, unsigned short initVal /*= 0xFFFF*/)
{
const unsigned short polynomial = 0x1021;
unsigned short crc = initVal;
for (unsigned byte = 0; byte < length; ++byte) {
crc ^= (pData [byte] << 8);
for (int bit = 0; bit < 8; ++bit) {
crc = (crc & 0x8000) ? (crc << 1) ^ polynomial : (crc << 1);
}
}
return crc;
}
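A straightforward Python port of the same bitwise algorithm can be checked against the catalogue's check value (0x29B1 for the ASCII string "123456789", as stated in the parameter comment above):

```python
def crc16_ccitt_false(data: bytes, init_val: int = 0xFFFF) -> int:
    # width=16 poly=0x1021 init=0xffff refin=false refout=false xorout=0x0000
    poly = 0x1021
    crc = init_val
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF  # keep the register 16 bits wide
    return crc

print(hex(crc16_ccitt_false(b"123456789")))  # 0x29b1, the catalogue check value
```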
One possible reason: on Linux, you need to check for the line:
127.0.0.1 localhost
in the /etc/hosts file.
declare global {
interface HTMLElement {
querySelector: (paras: any) => HTMLElement;
}
}
One way to make ft_strlen(NULL) be stopped at compiler-time, is to add this line at the top of the code:
__attribute__((nonnull(1)))
This is an attribute, a GCC/Clang compiler-specific directive that tells the compiler "The first argument to this function must not be NULL".
When you open the Firebase Console and view your database, whether Firestore or Realtime Database, you are reading from the database. Each time you refresh the console or navigate between collections or documents, it also triggers new reads. The only difference is that you're doing it manually via the console.
| GlobalScope | CoroutineScope |
|---|---|
| tied to an entire application lifecycle | tied to a specific component's lifecycle |
| cancellation is difficult and manual | cancellation is easy and automatic |
I had this problem. The reason was that there was a 'System.ValueTuple' file in the root of my project. Deleting this file fixed my problem.
You just have to change your user privileges to root to run the file. For example:
first compile:
gcc filename.c -o out
then run with root:
sudo ./out
I ran into similar issues when I started experimenting with Roblox scripting. Some external references like Poxelio have pretty solid general Roblox tutorials that can help understand the environment setup part.
So I've made a summary of some points covering what methods can do and functions can't; please add comments if you have more points, so that I can update the list:
Methods with pointer receivers can take either a pointer or a value, while functions with a pointer parameter must take a pointer
Methods have to be implemented in order to satisfy an interface; this can't be achieved with functions
In my case, starting up Xcode before opening the project did the trick.
I have 2 python files like below:
app.py
from flask import Flask
from routes import register_main_routes
def create_app():
app = Flask(__name__)
register_main_routes(app)
print('------------create_app()')
return app
app = create_app()
if __name__ == '__main__':
print('------------__main__()')
app.run(debug=False,port=5007)
routes.py
# routes.py
def register_main_routes(app):
@app.route('/')
def home():
return "<h1>Hello Flask web !</h1>"
Function TransfertDonnéesTableauVersPressePapier() ' called by Save
Application.CutCopyMode = False ' reset the clipboard
Set MyData = New DataObject
Selection.Copy ' SURPRISING, but only for TEST; in place since 05/Oct/2025
' Seems to remove run-time error '-2147221040 (800401d0)'
' DataObject:GetFromClipboard: OpenClipboard failure
ActiveSheet.ListObjects("TabInscriptions").DataBodyRange.Select
Selection.Copy ' SURPRISING, this repetition
MyData.GetFromClipboard
On Error Resume Next
End Function
To make the loading animation work, you can't just rely on CSS :active, because the network request to p.php is asynchronous and takes time, so you must use JavaScript to manage the loading state. The most efficient way is to modify your button's HTML to contain both the normal <span id="buttonText">Send Data</span> and a hidden spinning element (<span id="buttonSpinner" class="spinner">). Your JavaScript listener then immediately starts the loading state by hiding the text, showing the spinner, and disabling the button. Once the fetch is complete (either successful or failed), the .finally() handler runs to stop the loading state by re-enabling the button, hiding the spinner, and restoring the text, ensuring the animation runs for the exact duration of the server request.
A lazy way to import modules: by modifying the "import" mechanism.
If you find it useful please leave a star~
Github: https://github.com/Magic-Abracadabra/magic-import
[🎬 Demo](https://github.com/Magic-Abracadabra/magic-import/blob/main/Demo.mp4)
A brief way to use it in your code. Just copy the source code to start with.
from pip import main
from importlib.metadata import distributions
installed_packages = [dist.metadata['Name'] for dist in distributions()]
normal_import = __builtins__.__import__
def install(name, globals=None, locals=None, fromlist=(), level=0):
__builtins__.__import__ = normal_import
if name not in installed_packages:
main(['install', '-U', name])
name = normal_import(name, globals, locals, fromlist, level)
__builtins__.__import__ = install
return name
__builtins__.__import__ = install
# start coding from here
Now the Python keyword ✨import✨ is under a magic spell. Modules of the latest version can be installed before your following imports.
If you don't have numpy,
import numpy as np
will install it first, and then the module will be successfully imported. Yeah, that easy.
After importing one package, the following libraries will work, too:
import pyaudio, pymovie, pyautogui, ...
The following techniques can make Amazon Q Developer CLI more reliable:
Cross-examining its output with other LLMs like ChatGPT significantly improves quality, often within one or two rounds of back and forth. It functions almost like watching two experts debate.
Providing a reference application that follows best practices helps guide its output.
Manually approving every write operation with a preview prevents unintended changes.
Additional input and other approaches are welcome.
Go to Settings -> Developer Options; under the APPS section, "Don't keep activities" was enabled.
Were you able to figure this out by any chance?
The versioning was just wrong. Mess around with your Firebase versions and adjust until the problem goes away.
With the Global Interpreter Lock (GIL) removed in Python 3.14 (in the no-GIL build), true multi-threading is now possible, which opens the door to data races—where multiple threads access and modify shared data concurrently, leading to unpredictable behavior. Previously, the GIL implicitly prevented many race conditions. Now, developers must handle concurrency explicitly using thread synchronization mechanisms like threading.Lock, RLock, Semaphore, or higher-level tools like queue.Queue. Careful design, such as using immutable data structures, thread-safe collections, or switching to multiprocessing or async paradigms where appropriate, is essential to avoid bugs and ensure thread safety.
I made an account for this. Full explanation and semi-rant at the end. Here's the Windows GUI-centric approach:
First, find your user or "home" folder in Windows. In the File Explorer, click "This PC", then click "Local Disk (C:)" then click Users, then click your name. This is your user or "home" folder. In here, create a new Text Document. Rename it "_vimrc" without the quotations. For a quick test to verify it's working, open that file with Notepad and type ":colorscheme blue" without quotations. Now open Vim and you should notice the bright blue color scheme. To undo this, close Vim, open your _vimrc file and delete what you typed, then save it, re-open Vim, and Vim will return to the default color scheme.
Bear in mind Vim came from Linux which is derived from Unix. I remember when I was new to all this and what helped was using a Linux distribution (Debian) for a while. I noticed a LOT of this type of stuff resides in the "home" folder otherwise referred to as "~". Like Windows but organized differently. So when you're using something like Vim, developed for Unix/Linux, you have to think in that way. Very command prompty (No, it's a CLI! No, it's a terminal!, No, it's a TTY!!!!!), hacker man, power usery vs. Windows which is "Monkey click icon, monkey happy".
I just figured out how to do this vimrc stuff today for myself by poking around in the Vim docs in my Vim install folder, for hours. I found a file, "vimrc_example.vim". At the top it says, "An example for a vimrc file . . . to use it, copy it to" then it lists where to copy "it" to for various operating systems. This is already confusing. Is he (Bram, the creator of Vim) saying copy this file to another location? Well, it's called "vimrc_example.vim" so that assumption must be wrong because I know the file should be something like "vimrc"! Okay, so he means to say, "Copy the text of this document to your vimrc file", right? But what is that file CALLED? Does it exist? And where? Do I need to make it? Where do I put it? We will get there. So he says for Windows, to "copy it to":
$VIM\_vimrc
Yes. There. Right there. Put it in there and you're good. Ha. See Windows never really made us learn this type of stuff like Linux people have to. So, if (no, because) you don't know, $ is symbolic for the location of the install of whatever is named. And the \ means put the following file in there; the vimrc. Breaking that down, we must find Vim's install location (the $) and in that folder (the \) create a file (the illusive _vimrc).
Refer https://github.com/isar/isar/issues/1679#issuecomment-3393987462 for the solution. It worked for me.
We have keys inside the execution object, like running_count, failed_count, succeeded_count, etc., so we can rely on them to know the status.
# This command installs the necessary Python libraries.
# - ollama: The official client library to communicate with our local Ollama server.
# - pandas: A powerful library for loading and working with data from our CSV file.
!pip install -q ollama pandas
In my case it was a fresh VS2022 install. I had to open the Android Device Manager for the first time; it must have initialized some things. After that my device showed up!
You can try this lib: https://github.com/webcooking/zpl_to_gdimage
It worked for me.
5xx HTTP errors indicate a server problem, not a client one. First, make sure your server is working fine; then debug your request with Postman.
Based on the code you provided earlier, nothing looks wrong, but you must debug your request.
Change public override bool Equals(object obj) to public override bool Equals(object? obj) so that the override matches the signature of the method that you're overriding.
I believe when you open the app from the Start menu, it runs through a shortcut that uses the .NET version already on your PC (4.7.2), so it works. But when you double-click the exe, it looks for Framework 4.8 specifically, and since that version isn't installed, it fails. Try installing .NET Framework 4.8.
**GlobalScope** is designed for top-level, application-wide coroutines that are not tied to any component lifecycle. It should be used very sparingly, typically for background tasks that must survive across the whole app lifecycle.
It is mostly caused by the SSL certificate; maybe it has expired or there is a mismatch.
<TaskerData sr="" dvi="1" tv="5.12">
<Profile sr="prof0" ve="2">
<cdate>20251012T000000</cdate>
<edate>0</edate>
<id>1</id>
<name>ZenReminder</name>
<State sr="con0" ve="2">
<Time sr="stm0" ve="2">
<hour>12</hour>
<minute>0</minute>
<repeat>1440</repeat>
</Time>
</State>
<Action sr="act0" ve="2">
<code>Notify</code>
<text>Remain calm, stay strong, stay Zen. And where Zen ends, that’s when ass kicking begins.</text>
<title>Daily Zen Reminder</title>
<sound>DEFAULT</sound>
<vibrate>1</vibrate>
<priority>2</priority>
</Action>
</Profile>
</TaskerData>
This is due to a conflict with recent versions of Jupyter. The solution is to use the pre-release version of the Jupyter extension and set up a local server:
1. Install the pre-release version of Jupyter
2. Set up your virtual environment (in the terminal):
.venv\Scripts\activate
python -m pip install --upgrade pip
pip install ipykernel jupyter
3. Start the Jupyter server:
jupyter notebook --no-browser
Copy the full URL that appears (example: `http://localhost:8888/?token=abc123...`)
4. Connect VS Code to the server:
• Open your notebook
• Click on the kernel selector
• Select “Select Another Kernel...” → “Existing Jupyter Server...”
• Paste the full URL (with the token)
• Press Enter
• Select the Python interpreter: .venv\Scripts\python.exe
And it should work now :D
If you’ve ever had to clean messy text lists — emails, logs, passwords, or exported data — you know how painful duplicates can be.
Meet NodeDupe.com — a fast, clean, privacy-focused tool that removes duplicate lines instantly ⚡
Perhaps their visibility could be checked whenever they're active, so that if a PanelContainer is on top the nodes below the stack are ignored, i.e. the nodes below either toggle the PanelContainer's mouse_filter and mouse_behavior_recursive properties, or Area2D's properties (monitoring, monitorable, mask, or layer).
Closing any open instance of the browser that is loaded with the same profile sometimes works.
Right-click the Docker Desktop icon in your system tray.
If it says “Switch to Windows containers”, it’s currently in Linux mode. If not, select Switch to Linux containers.
Wait for Docker to restart.
If you move your entities into a shared model or domain package for reuse across services (common in microservices), always add @EntityScan in each service’s Spring Boot app.
I have the same issue. Have you found a resolution? Any help much appreciated.
This seems to be a classic case of, "Help me with my solution, don't ask me about the problem I'm trying to solve."
What are you trying to do overall? What kind of a service are you trying to provide?
It appears that the way you have conceptualized it does not admit of a reasonable architecture.
az account set --subscription xxxxx
az provider register --namespace Microsoft.DataFactory
The HTTP 400 error when sorting in phpMyAdmin happens because the web server (Lighttpd) blocks URLs that contain encoded line-feed characters (%0A) in the `sql_query` parameter. This is a security check that causes phpMyAdmin requests to be rejected.
To fix it, open your Lighttpd configuration file and add this line to allow phpMyAdmin URLs:
$HTTP["url"] =~ "^/phpmyadmin/" { server.http-parseopts += ( "url-ctrls-reject" => "disable" ) }
Then save the file and restart Lighttpd:
systemctl restart lighttpd
After that, phpMyAdmin will be able to sort columns again without returning the 400 error.
If you only use it for local testing, this change is safe.
For production environments, it’s better to keep this limited to phpMyAdmin instead of applying it globally.
I recently built an open-source tool that automatically converts Chrome extensions to Firefox extensions.
It uses Rust and AST-based transformations (via SWC) instead of simple string replacements, so it handles Manifest V3 extensions more reliably than older approaches.
You can check it out here:
https://github.com/OtsoBear/chrome2moz
Might be useful for anyone still looking for a practical Chrome → Firefox converter. Feedback and contributions are welcome.
I ran into a similar problem: cv2 just would not install with Python 3.14. I tried installing C++ compilers and missing files and fiddling with settings, but it turned out I simply needed to switch to Python 3.10; with it, cv2 installed normally and works.
secp256k1 only has wheels for Linux. This means that it is not compatible with Windows OS.
If you don't have a Linux machine, you can get Linux on Windows with WSL and then install this package in that environment.
The mmec function has been superseded by the mmes function. That is written in the documentation.
I would suggest you use the GWAS by GBLUP approach to make the problem a 500x500 size problem instead of 30K x 30K, where you can still recover the 30K effects with a simple back transformation. That model should take only few seconds to run instead of hours or days. See the last section of the following vignette:
https://cran.r-project.org/web/packages/lme4breeding/vignettes/lmebreed.qg.html
I get the same error. This seems to be already reported here: https://github.com/microsoft/vscode-jupyter/issues/17042
I had the same issue. Resolved by following steps below:
1. Opened Google Chrome and signed out of GitHub
2. In the VS2022 upper-right corner account button, I clicked Add another account and signed in to GitHub when it opened GitHub.com in the browser
3. I signed in and it redirected back to VS2022
4. Problem solved
curl -sSL https://raw.githubusercontent.com/shoneyJ/grepjson/master/install.sh | bash
I replaced jq with grepjson in my workflow to:
✅ Fuzzy-search JSON without knowing structure
✅ Find values across nested documents
✅ Simple pattern matching like grep
https://github.com/shoneyJ/grepjson
Install gcc-9-multilib instead of gcc-multilib.
For my part, I added ".svn" to my URL and it worked: https://xx.yyy.zz -> https://xx.yyy.zz.svn
Some SVN servers configured with Apache + mod_dav_svn use the .svn extension in the URL to point to the actual repository on the server.
If you forget .svn, Apache can't find the repository → Could not find the requested SVN filesystem. Probably the result of the Apache version upgrade!
For a Node.js solution with automatic restart on network loss, you'll need:
1. **Range requests** with the S3 SDK to resume from last byte
2. **State persistence** to track completed chunks
3. **Retry logic** with exponential backoff
4. **Chunk verification** (SHA256)
I implemented exactly this in S3Ra (https://github.com/Fellurion/NGAPP) - it's an Electron app with a Node.js backend.
**Core implementation approach:**
- Split downloads into configurable chunks (default 200MB)
- Store chunk state in JSON: `{chunks: [{index: 0, completed: true, hash: '...'}]}`
- Use `getObject` with Range header: `Range: 'bytes=start-end'`
- Verify each chunk before marking complete
- On restart: read state file, resume from first incomplete chunk
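The chunk splitting and resume bookkeeping above can be sketched in a transport-agnostic way. In this sketch, `fetch` is a stand-in for the S3 `getObject` call with a `Range` header, and `state` is the record that would be persisted as JSON between runs:

```python
import hashlib

def chunk_ranges(total_size: int, chunk_size: int):
    """Split [0, total_size) into inclusive byte ranges for Range requests."""
    return [(start, min(start + chunk_size, total_size) - 1)
            for start in range(0, total_size, chunk_size)]

def resume_download(fetch, total_size, chunk_size, state):
    """`fetch(start, end)` stands in for an S3 getObject call with
    Range: 'bytes=<start>-<end>'. `state` maps chunk index to its record
    and is what would be persisted as JSON so a restart skips finished
    chunks."""
    parts = {}
    for i, (start, end) in enumerate(chunk_ranges(total_size, chunk_size)):
        if state.get(i, {}).get("completed"):
            continue  # verified on a previous run
        data = fetch(start, end)
        state[i] = {"completed": True,
                    "hash": hashlib.sha256(data).hexdigest()}
        parts[i] = data
    return parts, state

# Usage with an in-memory "object" standing in for S3:
blob = bytes(range(256)) * 4              # a 1024-byte object
fetch = lambda s, e: blob[s:e + 1]        # inclusive Range semantics
parts, state = resume_download(fetch, len(blob), 300, {})
print(len(state))  # 4 chunks: 0-299, 300-599, 600-899, 900-1023
```

Retry with exponential backoff would wrap the `fetch` call; it is omitted here to keep the sketch short.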
# Detaches applications from the Google Play Store, disabling their updates.
# Requires root and a wget binary.
PACKAGES_TO_DETACH=$(cat <<-END
'com.google.android.youtube',
'com.sec.android.app.sbrowser',
'com.google.android.inputmethod.latin',
''
END
)
APP_FOLDER=/data/data/com.adamioan.scriptrunner/files
if [ ! -d "$APP_FOLDER" ]; then APP_FOLDER=/data/user/0/com.adamioan.scriptrunner/files; fi
if [ ! -d "$APP_FOLDER" ]; then
echo "Cannot determine SH Script Runner folder. Exiting. $APP_FOLDER"
exit 2
fi
WGET_BIN=/system/bin/wget
if [ ! -f "$WGET_BIN" ]; then WGET_BIN=/system/sbin/wget; fi
if [ ! -f "$WGET_BIN" ]; then WGET_BIN=/system/xbin/wget; fi
if [ ! -f "$WGET_BIN" ]; then
echo "wget binary is missing"
exit 1
fi
echo "WGET binary found in $WGET_BIN"
echo "Application folder found $APP_FOLDER"
SQLITE_FILE="$APP_FOLDER/sqlite"
echo "SQLITE binary path $SQLITE_FILE"
if [ ! -f "$SQLITE_FILE" ]; then
echo "SQLITE binary does not exist. Downloading to $SQLITE_FILE..."
"$WGET_BIN" "http://www.adamioannides.com/sites/com.adamioan.scriptrunner/resources/sqlite" -q -O "$SQLITE_FILE" > /dev/null 2>&1
if [ ! -f "$SQLITE_FILE" ]; then
echo "SQLITE binary cannot be downloaded"
exit 3
fi
else
echo "SQLITE binary exists"
fi
echo "Setting permissions..."
chmod 755 "$SQLITE_FILE"
echo "Killing Play Store..."
am force-stop com.android.vending
echo "Patching database..."
STORE_DB_FILE=/data/data/com.android.vending/databases/library.db
"$SQLITE_FILE" "$STORE_DB_FILE" "UPDATE ownership SET library_id = 'u-wl' WHERE doc_id IN ($PACKAGES_TO_DETACH)"
echo "Process completed"
My solution: after creating the DB with Code First, I used the context.Database.ExecuteSqlRaw method to create a temporary table with the same structure but without a primary key, then deleted the original table and renamed the new one into its place.
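The rebuild-and-rename pattern described above can be sketched with SQLite (table and column names here are made up; the answer ran equivalent SQL through EF Core's ExecuteSqlRaw):

```python
import sqlite3

# Sketch of the rebuild-and-rename pattern: drop a primary key by copying
# the table into a PK-less twin and renaming it into place. Table and
# column names are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO items VALUES (1, 'a'), (2, 'b');

    -- 1. temp table with the same structure but no primary key
    CREATE TABLE items_tmp (id INTEGER, name TEXT);
    INSERT INTO items_tmp SELECT id, name FROM items;

    -- 2. drop the original and rename the copy into its place
    DROP TABLE items;
    ALTER TABLE items_tmp RENAME TO items;
""")
rows = con.execute("SELECT id, name FROM items ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```

After the rebuild, inserting a duplicate `id` succeeds, confirming the primary key constraint is gone.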
Just a thought... assuming you aren't dealing with vast quantities of data, the simplest approach might simply be to get everything in your initial fetch request and then filter and sort the resulting array of objects.
Thanks @sakshi-sharma for your informative tips! Here's my fully working solution.
Eventually, I decided it's better to create a new file containing the build timestamp (and add it to .gitignore) rather than update an existing one. Also, I prefer having it as a data file rather than a .py file (in case I want to tolerate it being missing, e.g. when running my project from the IDE).
So, in the end, I create a dotenv-like file src/myproject/.build_info containing a key like TIMESTAMP=2025-10-11 21:37:57 each time I execute build --wheel.
Changes to pyproject.toml:
dependencies = [
...more stuff...
"python-dotenv>=1.1.0",
]
[build-system]
requires = ["setuptools"] # no "wheel" needed
build-backend = "setuptools_build_hook"
backend-path = ["."] # important!
[tool.setuptools.package-data]
"*" = [
...more stuff...,
".build_info",
]
New file setuptools_build_hook.py in project's root:
"""
Setuptools build hook wrapper that writes file `src/myproject/.build_info`
containing build timestamp when building WHL files with `build --wheel`.
"""
from datetime import datetime
from os import PathLike
from pathlib import Path
from setuptools import build_meta
def build_wheel(
wheel_directory: str | PathLike[str],
config_settings: dict[str, str | list[str] | None] | None = None,
metadata_directory: str | PathLike[str] | None = None,
) -> str:
"""Creates file `src/myproject/.build_info` with key TIMESTAMP, then proceeds normally."""
Path("src/myproject/.build_info").write_text(f"TIMESTAMP={datetime.now():%Y-%m-%d %H:%M:%S}\n", encoding="utf-8")
print("* Written .build_info.")
return build_meta.build_wheel(wheel_directory, config_settings, metadata_directory)
# Proxy (wrappers) for setuptools.build_meta
get_requires_for_build_wheel = build_meta.get_requires_for_build_wheel
get_requires_for_build_sdist = build_meta.get_requires_for_build_sdist
prepare_metadata_for_build_wheel = build_meta.prepare_metadata_for_build_wheel
build_sdist = build_meta.build_sdist
get_requires_for_build_editable = build_meta.get_requires_for_build_editable
prepare_metadata_for_build_editable = build_meta.prepare_metadata_for_build_editable
build_editable = build_meta.build_editable
And now, how to read this value at runtime:
import myproject as this_package
from io import StringIO
from importlib import resources
from dotenv import dotenv_values
build_timestamp: str | None = None
# noinspection PyBroadException
try:
    build_timestamp = dotenv_values(stream=StringIO(
        resources.files(this_package).joinpath(".build_info").read_text(encoding="utf-8")
    ))["TIMESTAMP"]
except Exception:
    pass  # tolerate a missing .build_info, e.g. when running from the IDE
Please upvote the question if this helps. I did not find clear information about solving this issue elsewhere.
Sorry for reopening this old topic, but I'm looking for just the same thing. Can you clarify (post some code, perhaps?) how you fixed this? Thank you.
So my method is, to roll the dice again and hope for a double six and then usually on the next turn I pass Go. No functions needed. You're welcome x
Typically it's not meant to be human readable. You would either:
- Export the value for a service to read and use, or
- Use the console, where there are links to associated resources, or build a console yourself which links these resources.
I'm also encountering it here; it's really annoying. I'm now in my third hour of trying to figure out the issue, but I haven't yet.
I think you should try unplugging it and then plugging it back in. That will fix it.
For non-production servers, server.http-parseopts = ( "url-ctrls-reject" => "disable" ) can be set to bypass the problem. This is not recommended for production servers.
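For reference, a minimal sketch of where that directive sits in lighttpd.conf (non-production only, as noted above):

```conf
# lighttpd.conf -- NON-PRODUCTION only: stop rejecting request URLs
# that contain control characters.
server.http-parseopts = (
    "url-ctrls-reject" => "disable"
)
```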
Hello Derek,
I am a bit late, but I was researching things like this, so it could be useful to someone passing by, as I did a while back.
I have not quite figured out exactly what you (Derek) want, or wanted, to do.
But I can give a very simple alternative bit of code to get a UDF to change other cells. It is direct in terms of simplicity, though possibly very indirect in terms of what happens behind the scenes: what is going on here is sometimes described as VBA taking some redirection and ending up a bit lost, or rather not knowing where it came from.
No guarantees, but it may be something for you or others to consider.
First, put all of the following code in a normal module:
Option Explicit
' This is the main UDF, used by writing in a cell something of this form =UDF_Where(E3:E5)
Function UDF_Where(ByVal Cels As Range) As String ' Conventionally, a UDF returns a value (here a string) into the cell it is entered in
Let UDF_Where = "This is cell " & ActiveCell.Address & ", where the UDF is in" ' Conventional use of a UDF: set the value of the cell it is in
Worksheets("Derek").Evaluate Name:="OverProc(" & Cels.Address & ")" ' Unconventional use of a UDF to change other cells. Evaluate takes Excel spreadsheet syntax, hence this form
End Function
Sub OverProc(Cels As Range) ' This can be a Sub or Function
Dim SteerCel As Range
For Each SteerCel In Cels
Let SteerCel = "This is cell " & SteerCel.Address & ", from the range I passed my UDF (" & Cels.Address & ")"
Next SteerCel
ActiveCell.Offset(10, 0) = "This cell is 10 rows down from where my UDF is"
End Sub
(You will need to name a worksheet "Derek". That is not a general requirement; it just ties up with the demo code above and in the uploaded workbook.)
Now, in the worksheet named "Derek", type the following in any cell, for example D2:
=UDF_Where(E3:E5)
then hit Enter.
You should see these results
Alan
‘StackOverflowUDFChangeOtherCells.xls’ https://app.box.com/s/knpm51iolgr1pu3ek2j96rju8aifu4ow
I fixed the issue by downgrading the Jupyter extension.