Thanks, I would not have been able to figure this out. Thanks for including the screenshots as well. I'll update on whether it works or not.
When using Angular you should keep your static images in the ./assets folder and then refer to those images using, for example, src="/assets/img.png".
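A minimal sketch of how that looks from a component (the component name and image file here are hypothetical):
// banner.component.ts (hypothetical); the image lives at src/assets/img.png
import { Component } from '@angular/core';

@Component({
  selector: 'app-banner',
  template: `<img src="/assets/img.png" alt="Banner">`,
})
export class BannerComponent {}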
As of November 2024, AWS introduced support for appending data to objects stored in Amazon S3 Express One Zone buckets.
Example using the JavaScript AWS SDK v3:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

await s3.send(
  new PutObjectCommand({
    Bucket: 'bucket',
    Key: 'fileKey',
    Body: 'some body',
    WriteOffsetBytes: 123, // byte offset at which the new data is written, enabling the append
  })
);
I've tried this pattern with regex101.com and it seems to be working as described:
example@___.com - no match
[email protected] - no match
[email protected] - match
[email protected] - match
So the regex itself seems to be "correct" - could it be that the issue is somewhere else in the code?
As a side note - it also matches lines like:
example@./com
example@x com
A dot . outside of a character class [] means "any character". If you want it to match exactly the dot character and nothing else, escape it with a backslash: \.
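To illustrate in JavaScript (this pattern is a stand-in, not your actual regex):
// An unescaped dot matches any character, so "example@x com" slips through
const loose = /^\w+@\w+.\w+$/;
console.log(loose.test('example@x com')); // true - the dot matched the space

// An escaped dot only matches a literal "."
const strict = /^\w+@\w+\.\w+$/;
console.log(strict.test('example@x com')); // false
console.log(strict.test('example@x.com')); // true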
Add this to your Django settings file:
SHELL_PLUS = "ipython"
IPYTHON_ARGUMENTS = ["--ext", "autoreload", "-c", "%autoreload 2", "-i"]
and see this answer to understand how it works.
Communicating between JavaScript and Python is straightforward. You can use Python's requests library to make requests themselves, but you will also need a server to communicate with: Python's Flask library can set one up, and Flask's jsonify helper turns your Python-generated data into JSON. On the Node.js side, the child_process module lets you talk to Python while your JavaScript handles its main task. All of this only works if your Flask server is actually booted up; I forgot to start mine and my HTML did absolutely nothing when I interacted with it. So boot up your Flask server and create the Pycommunicate.js package (or whatever you want to call it).
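A minimal sketch of the JavaScript side, assuming a Flask server already running on 127.0.0.1:5000 with a hypothetical /api/data route that returns JSON:
// Pycommunicate.js (name from the answer above; the route is hypothetical)
// Requires Node 18+, where fetch is built in
async function askPython() {
  const res = await fetch('http://127.0.0.1:5000/api/data');
  if (!res.ok) throw new Error(`Flask responded with ${res.status}`);
  const data = await res.json(); // whatever jsonify() produced on the Python side
  console.log(data);
}

askPython().catch(console.error);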
I had the same problem and solved it by using the P-user and password instead of username and password:
cf login -a <cloud_foundry_api> -u <your_p_or_s_user> -p <your_password>
The same thing happens for the BTP CLI.
I hope it works for you
This is an old question but I found a solution. I'm using a QSortFilterProxyModel, but it should work with your tree model. Just emit the dataChanged() signal with invalid model indices like so:
dataChanged(QModelIndex(), QModelIndex());
This way no data is affected and the treeView refreshes.
After understanding more about how admins create segments in the Wix UI, it appears that segments are dynamic, in that you do not manually add a contact to a segment, but rather the segment is simply a filtered view of contacts updated once a day.
# Subject: Internal script counter.
# Example: time of file download, and so on.
now=$(date +'%s')sec;
echo ""
echo "[INF] Running script..."
echo ""
# The start timestamp above carries a "sec" suffix so that GNU date can compute
# the elapsed time from the relative expression "now - <start>sec".
# Timer command, input: 5 seconds.
sleep 5s
InfTimExe=$(TZ='UTC' date --date now-$now +"%Hhours:%Mmins.%Ssecs")
# Output: 5 seconds.
echo "[INF] Internal run time of the script: $InfTimExe"
echo ""
Go to your NetBeans folder > project folder > src > main > create a resources folder > then an images folder > paste your images there. Issue solved.
https://github.com/moses-palmer/pynput/issues/614#issuecomment-2661108140
The issue has been solved by moses-palmer; however, as of this writing the fix has not been released.
I've had similar problems, even just copying project files from one computer to another.
I found that if I "Clean" the build configurations and then re-build, the app seems to work. My thinking is that the .o (object) files compiled with a "different" compiler are looking for things like 'VCL.VIRTUALIMAGELIST.O' in the "wrong" place; Sydney compiled differently than the new x64 compiler in Athens, or something like that.
Anyway: right-click the Build Configuration, select Clean, then Build.
Just clear your browser's local storage and also delete published views if you have them.
Given your use case and based on @raghvendra-n's answer, you could take advantage of the post_logout_redirect_uri parameter.
const signOut: JwtAuthContextType['signOut'] = useCallback(() => {
removeTokenStorageValue();
removeGlobalHeaders(['Authorization']);
setAuthState({
authStatus: 'unauthenticated',
isAuthenticated: false,
user: null
});
auth.signoutRedirect({post_logout_redirect_uri: "http://host:8080/connect/logout"});
}, [removeTokenStorageValue]);
What I would like to add is that, as can be seen from the code, signoutRedirect needs an absolute URL.
I've been hitting this error as well, so did some testing.
It looks like this is being caused by a bug that has already been fixed in Next.js - it seems to have been introduced in 15.1.0, and it is fixed in the 15.2.0 canary releases.
Downgrading Next.js to 15.0.4 fixed the issue for me.
Yes, you're absolutely right: testing AWS SDK v2 is way too verbose. The main issue is that AWS removed interfaces from service clients (like s3.Client), so we can't just mock them directly like we did in v1. That forces us to stub entire clients, which is a pain.
In my opinion, it's better to wrap AWS clients in a small interface and mock that instead.
Because AWS SDK v2 uses concrete structs instead of interfaces, you cannot mock s3.Client directly.
How to fix it? Instead of testing against s3.Client, define a minimal interface for only the methods you need:
type S3API interface {
PutObject(ctx context.Context, params *s3.PutObjectInput) (*s3.PutObjectOutput, error)
}
type S3Client struct {
Client *s3.Client
}
func (s *S3Client) PutObject(ctx context.Context, params *s3.PutObjectInput) (*s3.PutObjectOutput, error) {
return s.Client.PutObject(ctx, params)
}
Now, in your real code, depend on S3API rather than s3.Client; that makes your mocking simpler.
With the interface in place, we don't need AWS SDK stubs anymore. We can just do this:
type MockS3 struct{}
func (m MockS3) PutObject(ctx context.Context, params *s3.PutObjectInput) (*s3.PutObjectOutput, error) {
if *params.Bucket == "fail-bucket" {
return nil, errors.New("mocked AWS error")
}
return &s3.PutObjectOutput{}, nil
}
You can apply this pattern across your entire codebase, giving you a robust level of abstraction that doesn't depend on the AWS SDK.
See this:
type MyUploader struct {
s3Client S3API
}
func (u *MyUploader) Upload(ctx context.Context, bucket, key string, body []byte) error {
	// s3.PutObjectInput.Body is an io.Reader, so wrap the byte slice (import "bytes")
	_, err := u.s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: &bucket,
		Key:    &key,
		Body:   bytes.NewReader(body),
	})
	return err
}
With this setup, your service doesn’t care whether it’s using a real AWS client or a mock—it just calls PutObject().
Excellent, this worked like a charm.
Sorry to continue an old topic, but as experience shows, Microsoft has been pretty slow on development lately. They just released support for UWP in .NET 9. There are some adjustments to make, but they promise that the UWP API is available as before, though with no new features (beyond supporting the UWP technology stack on .NET 9, since it was previously based on .NET Core 3.1, which has been out of service since December 13, 2022).
It requires the latest update of Visual Studio 2022, v17.13. https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes#desktop
const http = require('http');
const fs = require('fs');
const Canvas = require('canvas');
http.createServer(function (req, res) {
fs.readFile(__dirname + '/image.jpg', async function(err, data) {
if (err) throw err;
const img = await Canvas.loadImage(data);
const canvas = Canvas.createCanvas(img.width, img.height);
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, img.width / 4, img.height / 4);
res.write('<html><body>');
res.write('<img src="' + canvas.toDataURL() + '" />');
res.write('</body></html>');
res.end();
});
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');
I was working with 2025.2.1 and noticed a huge slowdown and it didn't matter how big the project was. It was very odd as it would autocomplete very fast outside of a function, but then take a few seconds within a function after an arbitrary line of code.
I rolled back to 2024.10.1 and it is blazing fast again for autocomplete even on my large project with tens of thousands of lines of code.
Finally, they officially brought this feature to the C# extension: https://github.com/dotnet/vscode-csharp/issues/6834
Just go to Settings and enable Format On Type:
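If you prefer editing settings.json directly, a minimal sketch (scoping it to C# files is an assumption about your setup):
{
  // enable on-type formatting only for C# files
  "[csharp]": {
    "editor.formatOnType": true
  }
}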
Solved by adding the iconipy "assets" folder as a whole folder into "_internal" in the PyInstaller .spec file, plus adding "iconipy" to hiddenimports.

I tried the solution of setting sizes above, but it has many problems in my case because the state must be set in my application based on a stored preference, and the expanded size is thus not known at construction time.
I found it is much simpler to just set the visibility of the enclosed panels to false and revalidate. Sample code below for this effect in my code:
static Preferences prefs = Preferences.userRoot().node("myprefsnode"); // check, might be different for you
private class GroupPanel extends JPanel {
private TitledBorder border;
private Dimension collapsedSize;
private boolean collapsible = true, collapsed;
final String collapsedKey;
JPanel placeholderPanel = new JPanel();
Cursor normalCursor = new Cursor(Cursor.DEFAULT_CURSOR),
uncollapseCursor = new Cursor(Cursor.N_RESIZE_CURSOR),
collapseCursor = new Cursor(Cursor.S_RESIZE_CURSOR);
public GroupPanel(String title) {
setName(title);
collapsedKey = "GroupPanel." + getName() + "." + "collapsed";
border = new TitledBorder(getName());
border.setTitleColor(Color.black);
setToolTipText(String.format("Group %s (click title to collapse or expand)", title));
setAlignmentX(LEFT_ALIGNMENT);
setAlignmentY(TOP_ALIGNMENT);
// because TitledBorder has no access to the Label we fake the size data ;)
final JLabel l = new JLabel(title);
Dimension d = l.getPreferredSize(); // size of the title text of the TitledBorder
collapsedSize = new Dimension(getMaximumSize().width, d.height + 2);
collapsed = prefs.getBoolean(collapsedKey, false);
setTitle();
addMouseMotionListener(new MouseMotionAdapter() {
@Override
public void mouseMoved(MouseEvent e) {
if (isMouseInHotArea(e)) {
if (collapsed) {
setCursor(uncollapseCursor);
} else {
setCursor(collapseCursor);
}
} else {
setCursor(normalCursor);
}
}
});
addMouseListener(new MouseAdapter() {
@Override
public void mouseClicked(MouseEvent e) {
if (!collapsible) {
return;
}
if (getBorder() != null && getBorder().getBorderInsets(GroupPanel.this) != null) {
    Insets i = getBorder().getBorderInsets(GroupPanel.this);
    if (e.getY() < i.top) { // click landed on the title: toggle the collapsed state
        collapsed = !collapsed;
        prefs.putBoolean(collapsedKey, collapsed);
        setTitle();
    }
}
}
});
}
// hide or show the enclosed panels, update the title, and refresh
private void setTitle() {
    for (Component c : getComponents()) {
        c.setVisible(!collapsed);
    }
    setMaximumSize(collapsed ? collapsedSize : null);
    border.setTitle((collapsed ? "> " : "") + getName());
    setBorder(border);
    revalidate();
}
public void setCollapsible(boolean collapsible) {
this.collapsible = collapsible;
}
public boolean isCollapsible() {
return this.collapsible;
}
public void setTitle(String title) {
border.setTitle(title);
}
/**
* @return the collapsed
*/
public boolean isCollapsed() {
return collapsed;
}
}
I'm just getting started with Ghostty, but I pasted title = "$CWD".
I couldn't find the correct config location using the Ghostty docs, but the UI settings opened the config text edit seen in my screenshot.
I found the other config options at https://ghostty.zerebos.com/settings/application

Apparently, it can also happen when you have two different queries in your SQL file and you didn't separate them with a semicolon. I had this issue: it didn't recognize column names and marked my CTE name with a red line. The query itself worked perfectly fine, but as soon as I added a semicolon to the query above it, the red lines went away.
The problem is that I was using Caddy to serve HTTPS, so this was useless.
I finally used this Caddyfile:
<my-domain> {
reverse_proxy localhost:3000
handle_path /api/* {
reverse_proxy localhost:8000
}
}
Cleanup functions are really only best practice for timers or event handlers (sockets, WebSockets, etc.). You are using one on a variable holding an array, which will be freed from memory anyway, even without a cleanup function.
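For contrast, a minimal sketch of an effect that does warrant a cleanup function (the WebSocket URL and the setMessages setter are hypothetical):
useEffect(() => {
  const socket = new WebSocket('wss://example.com/feed'); // hypothetical endpoint
  socket.onmessage = (e) => setMessages((prev) => [...prev, e.data]);
  const id = setInterval(() => socket.send('ping'), 30000);

  // without this cleanup, the socket and the timer outlive the component
  return () => {
    clearInterval(id);
    socket.close();
  };
}, []);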
Unfortunately, there isn't a way to connect from a service, as you've already discovered.
You can run a DropPoint manually by specifying '/asexe' on the command line, but you'll need to set it as a startup task, and of course it won't function when there is no logged-in user.
React 19 is currently not supported by the latest version (8.17) of react-three-fiber. You might want to install the version 9 release candidate by running:
npm i @react-three/fiber@rc
The reason is the file extension you placed in the VM. On iOS, the extension and the format of the extension are different from Linux or Windows. This is the issue, and I don't have a perfect solution, but a temporary workaround is to change the VM each time you change the device: put the VM code in a text file and replace it every time you change the device. If you find a perfect and permanent solution, please let me know (Instagram: @AmeedDarawsha).
In my opinion, the CSRF token should be stored in the main memory of your React application—as a variable held in react state, preferably within a global react context. Keep in mind that the CSRF token will be lost on a page refresh or when opening a new tab. To handle this, create an API endpoint (for example, GET /api/csrf-token) on your server that generates and returns a token using contextual data (like the user's email, session, or user ID).
This endpoint should be called during the initial setup of your React app. While you might consider using the useEffect hook, this can lead to race conditions. Instead, using useLayoutEffect is advisable because it executes synchronously before the browser paints, ensuring that your authentication flow remains seamless.
For additional context, check out this article which outlines a similar approach.
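A minimal sketch of that setup, assuming the GET /api/csrf-token endpoint described above returns a { token: "..." } payload:
const CsrfContext = React.createContext(null);

function CsrfProvider({ children }) {
  const [csrfToken, setCsrfToken] = React.useState(null);

  // runs synchronously before the browser paints, as described above
  React.useLayoutEffect(() => {
    fetch('/api/csrf-token', { credentials: 'include' })
      .then((res) => res.json())
      .then((data) => setCsrfToken(data.token))
      .catch(console.error);
  }, []);

  return <CsrfContext.Provider value={csrfToken}>{children}</CsrfContext.Provider>;
}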
(Late answer) It's sort of cheating, but when I specified "https://" and clicked through the advanced/ignore-warning prompt, it worked as well.
I found an answer for now. For some reason, I should not add the dependency @tanstack/react-query to repos that will consume my shared repo. After it was removed, it works fine.
A big thank you for all your help. After configuring the Shadow plugin and running ./gradlew shadowJar, the issue was resolved. I added this to build.gradle:
plugins {
id 'java'
id 'com.github.johnrengelman.shadow' version '8.1.1'
}
shadowJar {
archiveClassifier.set('')
manifest {
attributes('Main-Class': 'org.example.Main')
}
}
The code snippet doesn't actually enable the timer, or the peripheral clock to it. It should not work at all as is.
That said... An intermittently working timer could indicate some hardware problems. Try pressing on the IC or flexing the PCB a little. If it stops you have bad solder joints. Hit it with a cold spray... Are the voltage rails stable?
Try toggling an LED in the main loop with a delay. Does that also hang when timer 1 is not counting as expected?
Aside from hardware issues, you could be stuck in an interrupt routine from some code you haven't shown here. Are you sure you have only enabled the one interrupt?
It doesn't matter. A message listener is one of the modes of communication between your application and end systems, so it can live in the System layer or in the Experience layer; it depends on who the clients of the data/information are and who the producers are. If the client of the information wants to exchange data via a messaging system, then the message listener/sender ideally belongs in the Experience layer; if the producer wants to exchange data via a messaging system, then the message listener/sender belongs in the System layer. So first identify the data clients/users and the data producers, then place their mode of communication (API, file, database connectors, JMS connector, etc.) accordingly in either the System or Experience layer. Any kind of client logic must be in the Experience layer, no matter what mode of connection/communication it uses, and any kind of data-producer logic must be in the System layer, regardless of what modes of communication the producers are using. Hope this helps.
def num(li):
    li = str(li)
    list1 = list(li)
    print(list1)
    led = {
        '0': ('###', '# #', '# #', '# #', '###'),
        '1': (' ##', '###', ' ##', ' ##', ' ##'),
        '2': ('###', '  #', '###', '#  ', '###'),
        '3': ('###', '  #', '###', '  #', '###'),
        '4': ('# #', '# #', '###', '  #', '  #'),
        '5': ('###', '#  ', '###', '  #', '###'),
        '6': ('###', '#  ', '###', '# #', '###'),
        '7': ('###', '  #', '  #', '  #', '  #'),
        '8': ('###', '# #', '###', '# #', '###'),
        '9': ('###', '# #', '###', '  #', '###'),
    }
    length = len(led[list1[0]])  # every digit pattern has the same number of rows
    for i in range(length):
        for j in list1:
            print(led[j][i], end='\t')
        print()  # newline after each row
Thank me later, guys, and enjoy.
What you're describing is more of a polymorphic relationship. It's one-to-one-maybe. You have the same problem if you introduce a 3rd table called 'sysadmin' or similar and want to associate a user to ONE of them.
Relational databases don't really work like this. You can make it cascade so that if the user is deleted, then the admin or sysadmin is deleted with a foreign key constraint. But you can't delete the admin or sysadmin and cascade up to the user, because there's no way of saying on the user table that the relationship is one OR the other table. Relational databases make you choose only one per column.
So you can use multiple columns, but if you have 20 types of user, you'll have 19 null fields, and that sucks too. Most people just let it hang and take the one-way cascade as 'good enough'.
Sometimes coding languages and databases don't fit nicely.
Did you fix it already? I'm facing the same problem and have tried many approaches, but maybe a fix is still in the works...
Use this version of the library and it will work:
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-ooxml</artifactId>
<version>3.9</version>
</dependency>
Can you register a CORS configuration?
@Bean
public WebMvcConfigurer corsConfigurer() {
return new WebMvcConfigurer() {
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**").allowedMethods("POST", "GET");
}
};
}
Then, in your securityFilterChain, set CORS with the defaults:
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity httpSecurity) throws Exception {
httpSecurity.authorizeHttpRequests(auth -> {
auth.anyRequest().authenticated();
});
httpSecurity.cors(withDefaults()).formLogin()...
return httpSecurity.build();
}
The solution was to put the JSON files in the public folder:
"use client";
import React from "react";
import loaderData from "@/public/lotties/Loader.json";
import dynamic from "next/dynamic";
function Loading() {
const Lottie = dynamic(() => import("react-lottie"), { ssr: false });
const defaultOptions = {
loop: true,
autoplay: true,
animationData: loaderData,
rendererSettings: {
preserveAspectRatio: "xMidYMid slice",
},
};
return (
<div className="flex flex-col items-center justify-center h-screen">
<Lottie
options={defaultOptions}
isClickToPauseDisabled
style={{ width: "150px", height: "100px", cursor: "default" }}
/>
<h6 className="text-2xl font-light text-center mt-4">Hang tight! We’re discovering your passion...</h6>
</div>
);
}
export default Loading;
res.status(200).json(randomFoodList);
It sends the response to the user but doesn't stop the code execution, and when your code executes further it returns the response again; that's why you're getting this error.
Replace res.status with return res.status
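A minimal sketch of the difference in an Express-style handler (the route and the surrounding logic are assumed from the question's context):
app.get('/foods', (req, res) => {
  if (randomFoodList.length === 0) {
    // 'return' stops execution here, so the handler can never respond twice
    return res.status(404).json({ error: 'no food found' });
  }
  return res.status(200).json(randomFoodList);
});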
Importing files in Tally Prime can sometimes lead to errors due to formatting issues, incorrect XML/Excel structures, or missing data. Here’s a step-by-step guide to troubleshoot and fix the problem:
1. Check the file format: Tally Prime supports XML & Excel (.xls/.xlsx) formats. Ensure the structure matches Tally's required format; if the format is incorrect, Tally won't process the import.
2. Verify data entries: If your file contains wrong ledger names, missing account heads, or invalid dates, Tally may reject it. Open the file and cross-check all entries before importing.
3. Enable import permissions: Go to Gateway of Tally → F12 (Configuration) → Data Configuration and check if import permissions are enabled.
4. Correct path issues: If importing from an external location, ensure the file path is correct and the file is accessible.
5. Identify the error message: Tally Prime usually provides an error log file specifying what went wrong. Check the log file, identify the issue, and correct it before re-importing.
6. Restart & reattempt the import: If the error persists, restart Tally Prime and try importing again. Sometimes a simple restart fixes temporary glitches.
7. Use TDL for custom imports: If you're dealing with bulk imports or complex data, Tally TDL (Tally Definition Language) scripts can help customize imports without errors.
Still facing issues? For expert guidance and Tally training, visit Excellent Infotech, where we help businesses and professionals master Tally with ease!
I got the same issue when working with GCP to extract files and run a scheduler. To get around the error, we can use:
const credentialsPath = path.join(__dirname, "grand-practice-450211-k3-fd655f6f0a5f.json");
process.env.GOOGLE_APPLICATION_CREDENTIALS = credentialsPath;
This ensures the authentication to the GCP
try with "--module-root-dir", $JBOSS_HOME is your wildfly or JBoss installation folder module add --module-root-dir=$JBOSS_HOME/modules/system/layers/base --name=com.oracle --resources=ojdbc17.jar --dependencies=javax.api,javax.transaction.api
It is always good practice to clean up potential memory leaks like timer APIs, event handlers, or any other external API that can lead to leaks. But in the case of local state, I believe there is no need to clean it up manually. When the component unmounts, its local variables become unreferenced, and the garbage collector automatically detects unreferenced memory and cleans it up (using algorithms like mark-and-sweep). There is no need to do that manually.
Any chance the values are non-numeric (i.e. 5 vs "5")?
print(df.dtypes) may help.
df['TMDB Vote Average'] = pd.to_numeric(df['TMDB Vote Average'], errors='coerce') may be needed.
This is not possible. When your app is uninstalled, all data associated with it is removed, and this is by design. You can use the cloud or a backend instead.
This is what I use
strtotime(date('m/d/y h:00'));
They are linked. With linked lists you can traverse in any direction without any need to reverse. Just start from the end and build the new list from tail to head.
This would be more helpful to me if I knew where the code belongs.
I installed anaconda and was then able to install Darts with no issue.
Use this: const api_key = import.meta.env.VITE_API_KEY; also, in the .env file, use the VITE_ prefix in place of REACT_APP_ if you are using Vite in your project.
This works for me.
I shall perform necromancy and raise this thread from the dead, for the sake of anyone else led to this answer by an internet search.
The problem was that ASE was used to try to do the restore/load, but it was not the backup of an ASE database.
The clue is in the question where it talks about "dbbackup" used to take the backup. That's not an SAP ASE command.
It is an SAP ASA (Adaptive Server Anywhere) command. Two different products, doomed to failure. "You can't get there from here."
ASE is quite reasonably complaining that what's asked to load isn't an ASE database. Use ASA to restore an ASA backup.
There is actually a simple trick for this. On the svg element, add:
overflow:visible
const objArray = [ { foo: 1, bar: 2}, { foo: 3, bar: 4}, { foo: 5, bar: 6} ];
const res = objArray.reduce((acc, curr) => {
acc.push(curr.foo)
return acc;
}, [])
console.log(res)
Alright, I found out I didn't use the right library, my bad. I used "vis-network/standalone" and I needed to use "react-vis-network-graph".
Node.js is single-threaded in the sense that JavaScript execution runs on a single thread (the event loop). However, it is designed for non-blocking operations, meaning that time-consuming tasks like database queries, file I/O, and network requests do not block the main thread. Instead, Node.js delegates these tasks to the libuv thread pool and the operating system's asynchronous APIs, and the event loop picks up their results via callbacks once they complete.
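A small sketch of that behavior, reading a hypothetical file without blocking the event loop:
const fs = require('node:fs');

console.log('before read');

// the read is handed off to libuv's thread pool; the callback runs later
fs.readFile('/tmp/example.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('file length:', data.length);
});

console.log('after read - this prints first; the main thread never blocked');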
#include <stdio.h>
int main() {
int max, x, n = 2; //init variables
printf("Enter max number: ");
scanf("%i", &max);
/*prints prime numbers while the max value
is greater than the number being checked*/
do {
x = 0; //using x as a flag
for (int i = 2; i <= (n / 2); i++)
{
if (n % i == 0)
{
x = 1;
break;
}
}
if (x == 0) //if n is prime, print it!
printf("the no %i is prime\n", n);
else
printf("the no %i is not prime\n", n);
n++; //increase number to check for prime-ness
}
while (n < max);
return 0;
}
"The Navigator window should be the window that is usually in the bottom left corner under the project window." When it is not in the bottom left corner you should open it in the NETBEANS menu WINDOWS and select NAVIGATOR (or use CTRL-7).
In case you want to append a slice
arr := new([][]string)
*arr = append(*arr, []string{"abc", "def"})
fmt.Println(*arr)
See playground
Maybe libiconv instead for conversion?
I think you should switch to the WebClient from Vertx.
See: https://quarkus.io/guides/vertx-reference#use-the-vert-x-web-client
Some of the newest and best programming languages for logic programming are precisely the modern versions of Prolog. Check out for example Ciao Prolog which, apart from the classic Prolog functionality, has many semantic and syntactic extensions and features including constraint logic programming, meta-programming, higher-order, functions, foreign interfaces, assertions/types with unified static and dynamic verification, auto-documentation, lots of libraries, execution in wasm, a very useful playground, notebooks, etc., etc.
Can you please provide the logs from Azure? Do they indicate any kind of error? Do you have access to a console in the Azure Web App? What does it say if you try to ping the service from the web app itself?
For me the error happened right after installing @mui/x-date-pickers
Your suggestion of flow into the network from the top-left corner works great!
I implemented it using four tables for the flows into a cell: from the top, from the right, from the bottom, and from the cell to the left. These flows are constrained by zero flow through the outside boundaries (except for the single flow at the top left or second from left), by continuity across interior boundaries, and by zero flow through the boundaries of the black cells. I then "discovered" that the condition of a single unit of flow remaining in each uncovered cell, together with the zero-flow condition for the covered cells, corresponds exactly to the mask I used to identify covered and uncovered cells. Then, by maximizing the number of cells where the remaining flow corresponds to whether the cell is covered or not, one finds the solution.
I tested my MiniZinc model on several instances, including a 15x15 hard one. Most of the solvers find the unique solution within half a second.
BTW, you must have quite some experience with network flow models to have realised that the connectedness condition could be modelled as network flow.
So, with great appreciation, thank you for your suggestions.
Remember to check if you have Strict Mode or noImplicitAny on: https://www.typescriptlang.org/tsconfig/#noImplicitAny
If you are using VS Code's built-in TypeScript without a tsconfig file, then Strict Mode is off by default.
This turned out to be my original problem.
Disabling VPN helped. For some weird reason the answer has a minimum character limit, so here you go.
You can create a static class called "New", then create a static function with your class name as the function name. To use it, your syntax will be New.MyGeneratedClass(parameters).
public static class New
{
    public static MyGeneratedClass MyGeneratedClass(object parameters)
    {
        return new MyGeneratedClass();
    }
}
Q: Why create something for a custom user if I don't end up using it anyway, since the form inherits from UserCreationForm? So, for example, form.save() does not use my manager.
Answer to your question:
You need a custom user manager because Django's default one won't handle your custom fields properly, like using email instead of a username.
Even though UserCreationForm doesn't use it by default, it's still needed when you want to create users manually or to ensure superusers are set up correctly.
Q2: At what point should this create_user be executed?
Answer: You should call create_user when you need to create a user manually, e.g. in your views or a .py script. Django's UserCreationForm ignores it unless you override save(), so tweak the form or call create_user manually when adding users outside it.
Hope I'm getting this right... you need just one call inside the event card component? If that's what you need, you probably need to pass an empty dependency array; that way the API call will be triggered just once:
useEffect(() => {
  (async () => {
    setZones(await fetchZones());
  })(); // define and immediately invoke, since the useEffect callback itself can't be async
}, [])
A variation of @Gille's answer, using getline:
awk '{ "date +%d-%m-%Y --date=\""NR" day\""|getline theDate; print $0, theDate }' /tmp/File_1
A 16-02-2025
B 17-02-2025
C 18-02-2025
I found that in my ~/.bashrc file there's a resize -s 50 50 that sets the terminal window lines and columns to 50. Commenting this out fixes the issue.
If you only want to identify dates that are two years apart, you can do it with the query below, which doesn't even require a self join:
DECLARE @MinDate DATE = (SELECT MIN(date_start) FROM t)
SELECT date_start
FROM t
WHERE DATEDIFF(DAY, @MinDate , date_start) % 365 = 1
Looks like one of your JVMs is being killed, so it exits. If your test threw an exception, called System.exit, timed out waiting for a non-daemon thread to complete, or the JVM simply crashed, these could be reasons test suites fail to run.
https://maven.apache.org/surefire/maven-surefire-plugin/examples/shutdown.html
It's weird "killed" is all that is written to the console. I'd expect a little more feedback from surefire or maven about why the JVM was killed. Could someone have written a test that is doing something it shouldn't?
Btw, see also the related discussion in the Prolog Community Discourse, which includes also a discussion of the Ciao Prolog hiord approach, based on call/n.
Per the recommendation for JDK 9+ with module-info.java, I added this:
<build>
<plugins>
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<parameters>true</parameters>
<annotationProcessorPaths>
<path>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
</plugins>
</build>
and mvn clean package started to work correctly.
OK, it was easier than I thought! Inspired by this example, I changed the code to:
scipy.stats.kruskal(*[group["variable"].values for name, group in df.groupby("treatment") if group["variable"].values.size > 4])
All databases are being improved, and even the PHP language: look at the differences between PHP 5.6 and PHP 8. You should adapt. PHP PDO is an application programming interface; see https://en.wikipedia.org/wiki/Database_abstraction_layer. With PDO you can manage more than just MySQL and SQLite. Follow php.net.
I granted the "Storage Blob Data Contributor" role to my "Access Connector for Azure Databricks". Then, when I try to create the External Location in Azure Databricks, it works fine.
Regarding your question, I guess you are trying to find the angle between two parts? Do you know if the angle between two parts is stored as a parameter in the STEP (STP) or OBJ files generated by AutoCAD/Autodesk?
Regards
This error means that Node is treating your file as a CommonJS module, but you're using ESM (import/export) syntax.
Change your import to a require:
const { PrismaClient } = require('@prisma/client');
That is not possible with AKS and OMS; it will only collect STDOUT/STDERR from the containers.
Is this still true in 2025? Do I need to run a sidecar container that does tail -f on my custom log file?
class MyHttpOverrides extends HttpOverrides {
  final String cert;

  MyHttpOverrides(this.cert);

  @override
  HttpClient createHttpClient(SecurityContext? context) {
    final SecurityContext securityContext = SecurityContext(withTrustedRoots: false);
    securityContext.setTrustedCertificatesBytes(cert.codeUnits);
    return super.createHttpClient(securityContext)
      ..badCertificateCallback = (X509Certificate cert, String host, int port) => true;
  }
}
I call this in main.dart:
final cert = await rootBundle.loadString('assets/certificates/dev-cert.pem');
HttpOverrides.global = MyHttpOverrides(cert);
I used this method, but sometimes it works and sometimes it doesn't. Now it doesn't work anymore; I think maybe the certificate file has changed. Please help me.
Did you find an answer to this question?
What version of InfluxDB are you using?
Try adding import React from 'react' at the top of the main.jsx file.
The issue you're experiencing with the UI not updating immediately after login is likely due to how Next.js handles client-side navigation and state updates. When you use redirect in a server action, it performs a full page reload, which can cause the UI to appear "stuck" or unstyled until the page refreshes. To fix this, you need to ensure that the client-side state updates properly after a successful login without requiring a full page reload.
Use useRouter for Client-Side Navigation
Instead of using redirect in the server action, use Next.js's useRouter hook to handle navigation on the client side. This ensures a smoother transition and avoids the full page reload.
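A minimal sketch of that approach (the loginAction server action and the /dashboard route are hypothetical):
'use client';

import { useRouter } from 'next/navigation';
import { loginAction } from './actions'; // hypothetical server action

export function LoginForm() {
  const router = useRouter();

  async function handleSubmit(formData) {
    const result = await loginAction(formData);
    if (result?.success) {
      router.push('/dashboard'); // client-side navigation, no full reload
      router.refresh();          // re-render server components with the new session
    }
  }

  return <form action={handleSubmit}>{/* fields */}</form>;
}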
Agree with the comment above: multiplexing is necessary to forward multiple TCP connections over a single socket stream. Additionally, we can try using existing protocols like SSH-style multiplexing, SOCKS5, or QUIC-based tunneling to avoid reinventing the wheel. A TUN/TAP interface could be an option as well.
As per the documentation, @quasar/app-vite/tsconfig-preset has been dropped, so update your tsconfig.json with:
{
"extends": "./.quasar/tsconfig.json"
}
With the latest Node.js:
import { setTimeout as sleep } from 'node:timers/promises';
(async () => {
console.log('a');
await sleep(2000);
console.log('b');
})();
select
t0.*,
t2.id two_years_later_id, t2.date_start two_years_later_date
from t t0
left join t t2 on t2.date_start = dateadd(year, 2, t0.date_start)
--where t2.id is not null -- Uncomment if you are only interested the matching dates
;
See the results in a fiddle.
I want to provide some insight into the implementation.
balance_dirty_pages() (Linux kernel 6.10) is the function that is called by a process to check whether any of the thresholds have been crossed or not.
vm.dirty_background_ratio is the percentage of system memory which when dirty, causes the system to start writing data to the disk.
vm.dirty_ratio is the percentage of system memory which when dirty, causes the process doing writes to block and write out dirty pages to the disk.
(Ref: Snippet from accepted answer)
From the above explanation, we might expect a possible implementation to look like this:
balance_dirty_pages()
{
d = amount of dirty pages in available memory
if(d > vm.dirty_ratio)
{
start synchronous write
}
else if (d > vm.dirty_background_ratio)
{
trigger background writeback thread to run
}
return
}
However, the implementation is actually more like this (at an algorithmic level):
balance_dirty_pages()
{
while(1)
{
d = amount of dirty pages in available memory
if(d > vm.dirty_ratio)
{
trigger background writeback thread to run
sleep(some time)
}
other stuff
}
return
}
i.e., the point I want to highlight is that writeback is performed by the background writeback thread irrespective of which threshold was crossed.
In the case of vm.dirty_background_ratio, the process returns immediately after triggering the background thread.
In the case of vm.dirty_ratio, the process sleeps for some time after triggering the writeback thread. After it wakes up, it loops again.
PS: This is my understanding of the code. More experienced people can pitch in and suggest corrections.
Laravel tests and Postman have different purposes when it comes to API testing.
Laravel tests (written using PHPUnit or Pest) allow developers to automate backend testing. These tests ensure that controllers, routes, and business logic function as expected. Laravel provides built-in support for unit tests (testing individual methods) and feature tests (testing the entire request-response flow). Running these tests helps catch bugs early in development and ensures stability when making changes.
Postman, on the other hand, is a manual API testing tool with a user-friendly interface. It is useful for quickly sending requests to endpoints, inspecting responses, and debugging APIs. While Postman allows basic automation with collections and scripts, it is more suited for exploratory testing and API documentation rather than fully automated test suites.
If the goal is to ensure long-term reliability with continuous integration, Laravel tests are the better option. If the focus is on quick testing and debugging, Postman is the more convenient tool. In practice, both are often used together—Postman for initial API exploration and Laravel tests for long-term maintainability.
Any solutions for this error / problem?
I had this error even after whitelisting my IP and adding 0.0.0.0; it wasn't working. I got it fixed: the problem is with Mongoose.
The latest version is having problems.
So downgrade it to 8.1.1.
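A minimal way to pin that version, assuming npm:
npm install mongoose@8.1.1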