I have another question, this time about a memory leak.
After predicting labels, the memory is never released, even after the function finishes executing.
What should I do? I tried the following code, but it didn't work.
I'm running on CPU only and don't have a GPU.
import gc
tf.keras.backend.clear_session()
del predictions
gc.collect()
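Since clear_session() and gc.collect() often don't return memory to the OS on CPU (TensorFlow's allocator holds on to its pools), one common workaround is to run the prediction in a short-lived child process. A minimal sketch, where _predict is a hypothetical stand-in for your model.predict call:

```python
import multiprocessing as mp

def _predict(batch):
    # Stand-in for model.predict(batch); swap in your real model call.
    return [x * 2 for x in batch]

def predict_isolated(batch):
    # Run the prediction in a one-shot child process (fork start method,
    # POSIX only). When the child exits, all memory it allocated is
    # returned to the OS, sidestepping allocators that never shrink.
    ctx = mp.get_context("fork")
    with ctx.Pool(processes=1) as pool:
        return pool.apply(_predict, (batch,))
```

Because the worker process exits after each call, the parent process's memory stays flat across repeated predictions.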
Try adding any() or all(), and see if this works.
Snippet:
if df['twitter'].str.lower().str.contains('windows 11').any():
    return 'windows 11'
elif df['twitter'].str.lower().str.contains('windows 10').any():
    return 'windows 10'
return 'windows 8 or older'
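As a self-contained sketch of that check (the sample DataFrame is made up for illustration):

```python
import pandas as pd

# Hypothetical sample data standing in for the real 'twitter' column.
df = pd.DataFrame({"twitter": ["Loving Windows 11 so far", "still on Windows 7"]})

def detect_windows(df):
    # Lowercase once, then test each pattern; any() collapses the
    # per-row boolean Series into a single True/False.
    text = df["twitter"].str.lower()
    if text.str.contains("windows 11").any():
        return "windows 11"
    elif text.str.contains("windows 10").any():
        return "windows 10"
    return "windows 8 or older"

print(detect_windows(df))  # windows 11
```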
This behavior comes from the Device Preview package, which is clearly visible in the screenshot, so it affects the test environment only. In production there is no Device Preview, so there is no issue.
I had the same problem.
I opened the file /etc/hosts and commented out all hosts.
After rebooting, Xcode was able to download the new SDK.
The "For others" (probably not jQuery or vanilla JavaScript?) response by basickarl did the trick for me.
I was looking for a simple script, runnable from the Firefox/Chrome devtools console tab, that could uncheck the whole "vendor" list in cookie popups.
The $(...) solution didn't do the trick, since it implies that the website has jQuery loaded.
In my case jQuery was not present, so the simple one-liner that did the batch unchecking was the one mentioned:
document.querySelectorAll('input[type=checkbox]').forEach(el => el.checked = false);
For those wondering about a full example:
import { ExecutionContext, Injectable } from '@nestjs/common';
import { ThrottlerGuard } from '@nestjs/throttler';

function getIp(req): string {
  const forwarded = req.headers['x-forwarded-for'];
  if (typeof forwarded === 'string') {
    return forwarded.split(',')[0].trim();
  }
  return req.ips?.[0] || req.ip;
}

@Injectable()
export class ThrottlerBehindProxyGuard extends ThrottlerGuard {
  protected async getTracker(req: Record<string, any>): Promise<string> {
    return getIp(req);
  }

  protected getRequestResponse(context: ExecutionContext): {
    req: Record<string, any>;
    res: Record<string, any>;
  } {
    const httpContext = context.switchToHttp();
    return {
      req: httpContext.getRequest(),
      res: httpContext.getResponse(),
    };
  }
}
Then, in your app.module providers array, put this:
providers: [
  {
    provide: APP_GUARD,
    useClass: ThrottlerBehindProxyGuard,
  },
],
After that, every @Throttle you use will follow the rules of the custom guard.
@Throttle({
  default: {
    limit: 1,
    ttl: 60 * 1000,
  },
})
@Post('forgot-password')
We have achieved this using cdxgen and BEAR; read the detailed write-up on my blog: https://worklifenotes.com/2025/04/30/practical-guide-to-ntia-compliant-sbom/
What worked for me was the following:
from .handshake import *
I have a similar issue with the error message: Cloud SQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: certificate had CN "", expected "<project_id>:<instance_id>"
I have a simple setup where I deploy my Ruby backend API on a Cloud Run service using an artifact, and I connect it to a PostgreSQL database on Cloud SQL. I try to connect the two through a UNIX socket, as that seems to be the correct way to do it (rather than TCP).
In my Cloud Run configuration, I specifically selected the database instance to automatically establish the socket in the background (according to the Google Cloud documentation). According to the documentation, I'm not supposed to set up a Cloud Auth Proxy with this setup; however, I can't make it work, and the connection always fails.
This is indeed a bug. The discussion is continued under https://github.com/spring-cloud/spring-cloud-stream/issues/3067
I was able to identify the issue: the IPv4 loopback address is not reachable.
ping 127.0.0.1
Pinging 127.0.0.1 with 32 bytes of data:
General failure.
General failure.
General failure.
General failure.
Ping statistics for 127.0.0.1:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Removed unique=True
from
id = Column(String, unique=True, nullable=False, primary_key=True)
and it solved the problem
Alternatively, use fs::dir_copy():
https://www.rdocumentation.org/packages/fs/versions/1.3.1/topics/copy
It also seems to be faster...
Is it possible to create a Toplevel window in tkinter that is initially withdrawn without flashing
Yes. Withdraw the window as soon as you create it, then pass top.deiconify as the callback to after().
snippet:
import tkinter as tk

root = tk.Tk()

def dlg():
    top = tk.Toplevel(root)
    top.withdraw()  # hidden while it is being built, so it never flashes
    # do more stuff and later deiconify top
    top.after(1000, top.deiconify)
What I found is that I needed to force the inclusion in the build of my @BindingAdapters, which were on their own in a separate .kt. I did this in a hacky but adequate way, by having a global boolean variable in the bindings file e.g. INCLUDE_DATA_BINDINGS and setting it somewhere that's definitely included, in my case the onCreate() of my Application subclass.
Not exactly the perfect answer, but setting the CORS AllowedOrigins to a wildcard does work:
new Bucket(this, "MyBucket", new BucketProps()
{
    BucketName = "myBucketName",
    //....
    Cors =
    [
        new CorsRule()
        {
            AllowedHeaders = //...
            AllowedMethods = //...
            AllowedOrigins = ["https://webapp-*.transfer-webapp.<REGION>.on.aws"]
        }
    ]
});
I have solved this problem. It was because I had also installed MySQL on the Windows side and started that MySQL server. After I shut down the Windows-side MySQL server, I could install it successfully. (If that doesn't work, you can try uninstalling MySQL on the Windows side.)
Today I noticed that a new user could register with admin rights.
I think I'm infected by the same thing.
Our site title was changed, too.
The default role for every newly registered user had been changed to Admin.
-> I have now updated the plugin (Order Delivery Pro)
-> I've deleted the new user 'fallinlove'
Does anybody have more information about this new vulnerability?
Should I do anything more, or is it fixed with the update?
"It wouldn't be reliable", "what if you had 2 identical resources": it doesn't matter. The man just wanted to do that. He doesn't need an existential question or an opinion about it, just how to do it.
Currently this is all you need (according to MDN, only the Samsung Internet browser does not support this):
.vertical-text {
writing-mode: sideways-lr;
}
https://developer.mozilla.org/en-US/docs/Web/CSS/writing-mode#:~:text=10.3-,sideways%2Dlr,-132
It works nicely even when wrapping vertical text.
It seems the Cloud SQL Auth Proxy failed to set up proxy connections to the instance, likely because the language connector / Auth Proxy version is too old. Have you tried upgrading your Cloud SQL Auth Proxy version? If you are using the Cloud SQL Auth Proxy, make sure you are using the most recent version; see "Keeping the Cloud SQL Auth Proxy up to date".
You can also check the documentation on the requirements for using the Cloud SQL Auth Proxy; it mentions the recommended Cloud SQL Auth Proxy version for connections to Cloud SQL instances that use a shared or customer-managed Certificate Authority (CA).
If upgrading the proxy version doesn't work for you and you have a support package, I would recommend reaching out to Google Cloud Support for a more in-depth analysis of your issue.
FastAPI expects to return something JSON-serializable, so instead of returning a plain string, return a dictionary:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def first_api():
    return {"message": "Hello World"}
If the error is from the implementation of Firebase push notifications, then I just updated the packages to:
"@react-native-firebase/app": "^19.0.1",
"@react-native-firebase/messaging": "^19.0.1",
I'd propose an alternative solution that I've used when calculating rising block tariffs; it avoids the complication of array formulas. This allows you to drag across all cells in the block for a pro-rata calculation.
=IF(MIN(C11-($B5:B5),B5)>0,MIN(C11-($B5:B5),B5),0)
This might be slightly less elegant, but it is easier to understand and troubleshoot. I used this logic when creating a business electricity rates calculator for my website.
SELECT p.*, c.description, s.company_name
FROM products p
JOIN categories c ON p.category_id = c.category_id
JOIN suppliers s ON p.supplier_id = s.supplier_id
I tried the above on https://www.sql-practice.com/ and this is how you can do it, but please post your schema so we can understand the issue you are facing.
Creating a forum inside an application involves integrating features like user registration, post creation, commenting, and moderation. It’s best to use a scalable backend and intuitive UI. Partnering with a reliable mobile app development company can simplify the process and ensure a smooth, secure, and feature-rich forum experience.
def student(std_name, roll_no, marks):
    result = {}
    result[roll_no] = [std_name, marks]
    print(result)

n = int(input("Enter No of Students:"))
for i in range(n):
    roll_no = int(input("Roll No: "))
    std_name = input("Student Name: ")
    marks = int(input("Marks: "))
    student(std_name, roll_no, marks)
It removes the error when you return it separately, but what about the await expression? It says: 'await' expressions are only allowed within async functions and at the top levels of modules. ts(1308)
Eric Freitas has an excellent working solution to my issue as per the link below. Thank you Eric.
I actually work for Cloudinary, and while this plugin is third-party (so we can't provide support for it directly), I wanted to provide a workaround.
If this, or any other plugin doesn't support a feature, you could always look into applying the functionality via an Upload Preset instead. In this instance, you should be able to create/update the default upload preset used for API uploads so that it calls the addons you need
Executor has been removed from aiogram since version 3.
https://docs.aiogram.dev/en/latest/migration_2_to_3.html
Executor has been entirely removed; you can now use the Dispatcher directly to start polling the API or to handle webhooks.

If you have both Lightsail and EC2 instances like me, make sure you are not mixing up the usernames. For EC2 instances, the default username may be ec2-user. Try it out...
I didn't find either how to exclude a package. But I'm using the option --ignore-vuln with the vulnerability ID, which serves the same purpose.
The diff command is great!
However, I have to speak against the md5 command, as it is crucial to understand that MD5 is not considered collision-resistant, meaning it's possible, albeit with a low probability, for different inputs to produce the same hash value, which is called a collision.
If you really have to generate a hash of the file for some purpose, consider lowering the collision risk by using sha256 or a better algorithm.
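If you go that route, here is a small sketch of streaming a file through hashlib with a selectable algorithm (the function name is illustrative):

```python
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=65536):
    # Stream the file in chunks so large files are never fully in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing file_digest(a) == file_digest(b) with sha256 then gives you a content check with a far lower collision risk than md5.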
It's an iOS 18 simulator issue. I tried the iOS 17.0 simulator, and it worked.
Perhaps this is more generic for you:
TRIM(BOTH '_' FROM REGEXP_SUBSTR(Member_Target_Name, '_[^_]+_', 3, 1, 'i')) AS ThirdElement
Let's say Member_Target_Name has this structure (values in square brackets are placeholders; the brackets themselves are not part of the value):
x_[service]_[name]_[date_in_isostyle]_[type]
so a sample value would be:
s_myDB_MyApp_20250501_t
Then the suggested code snippet picks up the third element, [name], with the text 'MyApp'.
I got this Flatpickr error on an ASP.NET project because a new reference to jQuery was automatically created in the application, so it looks as though this is caused by a conflict. Because of the page flow construct, jQuery.noConflict() did not work when using web-form validation; a combination of "asp.net 4.5 Web Forms Unobtrusive Validation jQuery Issue" and "<asp:ScriptManager in Asp.Net webforms, how it works?" should have worked, but gave only partial success. Although not directly relevant to anyone not using the Microsoft stack, this is relevant to people using Flatpickr: it may help someone understand what is going on, where the error "$('.selector').Flatpickr is not a function" is occurring, and how to find a solution in their project's idiom.
Listing only Archive blobs using Az CLI
I tried a workaround in my environment using the same command:
az storage blob list --account-name accountname --container-name container-name --account-key "account-key" --query "[?properties.blobTier=='Archive'].name" --output tsv
Also, I added ForEach-Object to print "rehydrate" status messages:
az storage blob list --account-name <account name> --container-name <container name> --account-key "<account key>" --query "[?properties.blobTier=='Archive'].name" --output tsv | ForEach-Object { Write-Output "Blob: $_ | Tier: Archive"; Write-Output "Rehydrating blob: $_" }
I successfully got the list of archived blobs
Output:
If you get no results, it may be because there were no Archive-tier blobs at the time, or because the wrong access-tier property was queried earlier.
Reference:
Hi, is the config you gave above for sampling HTTP status codes working?
Also, are there any metrics to verify this?
You can use this setup, but you need to tweak the "Extrude Mesh" scale and the "Scale Element" scale. You could use math nodes to compute them from a single thickness parameter if needed.
Adding <additionalModelTypeAnnotations>@lombok.Data</additionalModelTypeAnnotations> to the openapi-generator-maven-plugin's config section did the trick for me.
Compare with this CSS file; your problem is most likely just due to wrong CSS.
.card img {
transition: transform 0.3s ease;
}
.card:hover img {
transform: scale(1.1);
}
.card:hover::after {
content: '';
position: absolute;
inset: 0;
background: rgba(0, 0, 0, 0.4);
}
A (hopefully) helpful tip:
Send links to @webpagebot to refresh the preview for everyone in Telegram 🙂
My two cents on this :)
Why do we use salt for passwords?
If there are two users who use "HelloWorld" as their password, we will have two identical hashes in the database.
And if a hacker who obtained our database has billions of premade passwords with their hashes, he can compare them with our database, find a matching hash, and thereby recover the user's password. No need for brute force.
We add salt to make all password hashes unique. So, two users who used "HelloWorld" as their password will have two different hashes if salt is used.
It also makes premade password-hash databases less useful, or maybe worthless. So, a hacker who obtained our database will need to brute-force each password in it; but because he will know the salt used for each password, that will not be hard to do if the user chose "HelloWorld" as their password.
In both cases security mostly depends on the user. If he used "HelloWorld" as his password, it is the same as not using a password at all. But if he used a password more than 30 characters long, made of random letters, numbers and special symbols, it will be secure even without salt.
But let's be real, most passwords are "HelloWorld" or similar.
Pepper can make a password longer and more complex, but the question is how to use it. Your example looks to me like a double-salted password: you first salt it with the server salt, get a hash, and then salt that hash with the password salt.
In my case, I would just make the pepper from random special symbols (because most insecure passwords only use letters and maybe some numbers). Make the pepper about 32 symbols long. Force the user password to be at least 10 symbols long. If I just concatenate them, I will have a 42-symbol password. If the user password is 20 symbols long, I will add only 22 symbols from the pepper to keep the password 42 symbols long. Then I salt this password and hash it.
In this case, a hacker who obtained only my database will not be able to brute-force the passwords, because the peppered password will not be "HelloWorld" but "!@#&^%+-&%$@#!#$#%HelloWorld" or something similar.
But if the hacker got my pepper too and knows how I add it to passwords, he may be able to brute-force the passwords; even so, that means more work for the hacker. And that is the point of cryptography: to make the hacker's work harder. There is no security that is impossible to breach.
And pepper makes it more secure, there is no question about it. Which is easier to brute-force?
"HelloWorld" or "!@#&^%+-&%$@#!#$#%HelloWorld"
Or are they equal?
In response to another answer:
"Your implementation of peppers precludes the ability to rotate the pepper key."
Why would I need to rotate my pepper? If a hacker is able to steal my pepper, it means my server is so badly compromised that I need to delete all password hashes from my database and make all users recover their accounts. Whether I use a pepper or not doesn't matter; I will need to do it in both cases. So, I don't see any point in rotating the pepper.
"It Requires You To Roll Your Own Crypto"
Why would I need to roll my own crypto? All I need to do is hash(pepper + password) in the simplest case. In a more complex case, I would just make a function that combines the pepper and the password in a more complex way, and use a standard hash: hash(add_pepper(password)).
add_pepper() will not encrypt anything; it will just add more symbols to the password, producing something like this:
"!@#&^%+-&%$@#!#$#%HelloWorld" or "!@#&^%+-HelloWorld&%$@#!#$#%"
It just needs to add the pepper the same way for the same string (for the same length of string).
As I said before, my function to add pepper just makes the password 42 symbols long. No other unnecessary things. Pepper just makes brute-forcing the password harder. No more, no less.
If I add the pepper depending on the length of the user's password, getting only the pepper is also not as useful, because the hacker will not know how much of it was added and where: at the front, at the back, at both ends of the password? The hacker would need to get my add_pepper() too. But if the hacker can get all of that, all security is meaningless.
Pepper doesn't make security unbreakable; it just makes some part of the security better. So, I will use pepper.
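As a concrete sketch of the salt-plus-pepper idea discussed above (the pepper value and iteration count are illustrative, and PBKDF2 stands in for whichever slow hash you prefer):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; in practice load it from config,
# never store it in the database next to the hashes.
PEPPER = b"!@#&^%+-&%$@#!#$#%"

def hash_password(password, salt=None):
    # Fresh random salt per user; the pepper is the same for everyone.
    if salt is None:
        salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", PEPPER + password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    # Recompute with the stored salt and compare in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, expected)
```

With this, two users who both chose "HelloWorld" get different stored hashes (different salts), and a stolen database alone is useless for brute force without the pepper.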
P.S. Maybe the answers against pepper are written by hackers who don't want their work made harder? ;)
I recommend you check the following:
That the image has actually been updated.
The path of the image (you may see it as /image.png on your device, while the server you upload it to may need to see it as /file/image.png).
That the name of the image consists entirely of English characters.
These are the solutions I would check.
items = [0.5, 0.44779812982819833, 0.4688469401495954]
print((len(items)*"{:.2f},").format(*items)[:-1])
=> 0.5, 0.45, 0.47
OK, found the problem. I had moved some code into a new function used at the start of almost every test: logging in. The first line in its try{}catch{} was setting the timeout to 60 seconds...
I found this while setting up a new environment for the minimal reproducible example, and along the way discovered Playwright's debug function.
My time is correct, but I am still having the MongoDB Atlas issue.
It is still saying my IP is not whitelisted.
You need to use Python at some point. Erindale made a tutorial about bringing a CSV into Blender that you can use for this: https://www.youtube.com/watch?v=xWwoWi_vPTg
You need to add the following dependency to your maven pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <version>3.4.5</version>
</dependency>
And also add the @ConfigurationPropertiesScan annotation on the main class:
@SpringBootApplication
@ConfigurationPropertiesScan
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
Update 05/2025 - PhpStorm 2025.1
File > Settings > Editor > Code Style > {Language} > Code Generation > Comment Code > (uncheck) Line comment at first column
See wxQueueEvent for safely posting an event from a secondary thread to an event handler, to be handled in the main thread.
Examples using it are in samples folder of wxWidgets. One is samples/thread/thread.cpp.
Creating a custom Linux-based speaker that appears as a Google Chromecast device is technically feasible, but there are several important aspects to consider:
mDNS and DIAL Protocols: You are correct that Chromecast relies on mDNS for discovery and DIAL for launching applications. Implementing these protocols will allow your device to be detected by Google Cast-enabled apps.
Receiver Implementation: You can create a receiver that mimics the behavior of a Chromecast receiver. This involves implementing the Cast Application Framework (CAF) to handle media playback commands and other functionalities.
CAF Receiver: You can build a custom receiver using the Cast SDK. This receiver will need to handle various media types and control protocols (like play, pause, stop) similar to official devices.
Development: You'll need to implement the necessary APIs defined by Google, which may include handling messages and media information sent from sender apps.
Google Licensing: To officially integrate with Google Cast services, you must comply with Google's licensing and certification requirements. This often includes registering your device with Google and passing certification tests to ensure compatibility with Google Cast standards.
Whitelisting: Google typically requires devices to be whitelisted. This means you may need to go through a formal application process to have your device recognized as a valid Google Cast receiver.
Additional Considerations
Compliance: Ensure you adhere to Google’s branding and operational guidelines to avoid potential legal issues.
Community and Resources: Leverage existing open-source projects or communities that have worked on similar integrations. There are repositories and forums that can provide guidance.
Steps to Implement
Set Up mDNS: Implement mDNS on your Linux device to announce itself as a Chromecast receiver.
Implement DIAL: Set up the DIAL protocol for launching applications on your device.
Develop CAF Receiver: Follow the CAF documentation to create a receiver that can handle media playback and communication with sender apps.
Testing: Test extensively with various Google Cast-enabled apps to ensure compatibility.
Certification Process: Once your device is operational, begin the certification process with Google.
As I understand it, you want a sum of values according to dates, and if the dates are changed then the total should update automatically. I have an answer for this: '=SUMPRODUCT((B2:AF2>=B24)*(B2:AF2<=B25)*B3:AF3)'. Use this formula in your sheet. First select the row where your dates are present and match it against the start date or end date written in your sheet at B24 or B25. When you change the date at B24 or B25, it will automatically update the total value cell.
I encountered the exact same problem, and was able to fix it by including this in the Allowed Origins (CORS) configuration in my auth0 app:
capacitor://localhost
How would you access those keys from Info.plist or AndroidManifest.xml?
I have found a solution that worked for me. It was provided by an Intel Community Employee and involves deleting system registry keys.
***Please make a backup by exporting to a .reg file before deleting anything. Making incorrect changes to the Registry may cause issues with Windows and may cause programs to crash or stop starting.***
Press "Windows+R" to open the Run window, enter 'regedit', find the path HKEY_CURRENT_USER\Software\Model Technology Incorporated\Modelsim, and delete all the folders.
I also uninstalled Quartus and ModelSim before I ran this process, then restarted my computer and reinstalled Quartus and ModelSim.
I don't know if the uninstalling and reinstalling was necessary, but after that I no longer got the invalid time string issue.
As expected, it was not a code-related issue or even a Quartus/ModelSim version issue, as I tried upgrading to the most recent Quartus and using Questa rather than ModelSim but still ran into the same error.
My Enter key also wasn't working. I heard that for many people it is caused by the extension "vscode-styled-components", which I didn't have, so I ended up installing it and then disabling it.
And somehow, it worked.
Edit: I think it was actually because of the Python Indent extension, as I also disabled that one.
I got this; what can I do?
1|sapphire:/ $ pm uninstall -k --user 0 co.sitic.pp
Failure [-1000]
I got the same issue. But I no longer see the USB port after flashing the new firmware and repowering, following the information from the QDL tools. My questions are:
1. Do I need to install Android drivers before shorting the module's boot pin?
2. What kind of Android drivers do I need to install?
If you want to manipulate the timezone per test:
beforeEach(() => {
  vi.stubEnv('TZ', 'UTC');
});

afterEach(() => {
  vi.unstubAllEnvs();
});
There is a really nice way to recover the file's content if you are using Visual Studio Code. First, create a file with the same name as the one before, then open it. The TIMELINE panel is now visible in the bottom-left corner. Select the most recent "File Saved" entry to see the most recent content you saved.
- I've tried this for code files and it works fine.
There is a dedicated data source for this, namely data "azurerm_function_app_host_keys" "functionkeys".
By using this, you should be able to access the key with data.azurerm_function_app_host_keys.functionkeys.default_function_key.
If you need further information, please have a look at the documentation:
The issue is with the way that the memory amount is formatted.
Previously it was "2GB"; it is now "2GiB".
Thank you O.Jones, this is much faster than I thought :)
I just added the COUNT(*), so the request is:
SELECT userID, COUNT(*),
MAX(pagenumber) max_pagenumber,
MAX(date) latest_date
FROM whateveryourtablename
GROUP BY userID
A static class is a special type of class that cannot be instantiated and only contains static members. It's used to group methods and data that belong to the class itself, rather than to any specific object.
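That description matches C#'s static classes; as an illustrative sketch, the same idea can be approximated in Python (all names here are made up):

```python
class MathUtils:
    """A Python analogue of a static class: it groups related constants
    and functions and is never meant to be instantiated."""
    PI = 3.14159

    def __new__(cls, *args, **kwargs):
        # Block instantiation, mirroring a C# static class.
        raise TypeError("MathUtils is a static utility class and cannot be instantiated")

    @staticmethod
    def square(x):
        return x * x
```

You call the members on the class itself, e.g. MathUtils.square(4), never on an instance.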
I try to avoid this as much as possible. If performance is not critical, I suggest creating an executable in a language of your choice that calls your DLL. You then run that executable from Node, using stdio or files to communicate between the processes.
That way you avoid all the issues with building C++ modules for Node. Node can change its binary APIs between versions, change its build system, and so on, which has often broken things for me in the past.
The author helped me a lot. Here is a link to our conversation and my specific comment that I think pinpoints the issue - https://github.com/sinclairzx81/typebox/issues/1216#issuecomment-2774538352.
tldr; I was trying to run Clean straight out of MongoDb as a security measure - remove any properties that might have made it into the database that aren't in the schema. This is not possible out of the box with Typebox - you can never run "Clean" on an ObjectId. The author describes why much better than I can and I think he even gives hints on how to do it yourself. For me, I simply didn't find that functionality (using Typebox to run Clean, yet keep ObjectIds for use inside my api) worth the effort. Typebox is great for using Decode on incoming json and Encode on outgoing objects (converting to json). I only clean at these two times, and it's working great now. In both cases, all ObjectIds are strings before the Clean takes place.
After 3 years I am still looking for one. Did you find anything for iPad which can be accepted or anything which gets things done faster
For me, I tried Hive 4-LLAP, with the following procedure.
Execute llap-service.sh. This produces a new directory: llappkg/
Update Yarnfile to remove "placement_policy"
Execute llappkg/run.sh, and this creates a new Yarn service "llapservice"
Start HiveServer2 and execute queries.
However, there is no documentation available on Hive-LLAP, so it's hard to tune parameters.
If you are using Hive on Tez and are still interested in achieving the performance of Hive-LLAP with ease, there is an alternative solution: Hive on MR3. Please check out the blog post that compares Hive 4 on Tez and Hive 4 on MR3 (along with Trino 468 and Spark 4.0.0-RC2).
https://mr3docs.datamonad.com/blog/2025-04-18-performance-evaluation-2.0
Many years later this would look more like this:
menuFileWebsite.addActionListener(e -> {
    System.out.println("menuFileWebsite");
});
Discovered my problem - classically small fix:
ANNUALAPPSIZE needed to have ! instead of %. The %ANNUALAPPSIZE% isn't what was being incremented, so the index it was assigning to was staying at 0.
set "ANNUALAPP[!ANNUALAPPSIZE!]=!CURRENTAPP!"
Click the bottom-right corner of the emulator window; it will show a resize arrow. Drag it and the window resizes.
No, not working.
I run streamlit run str.py
and I get the error below:
2025-05-01 10:34:01.607 WARNING streamlit.runtime.scriptrunner_utils.script_run_context: Thread 'MainThread': missing ScriptRunContext! This warning can be ignored when running in bare mode.
2025-05-01 10:34:01.755
Warning: to view this Streamlit app on a browser, run it with the following
command:
streamlit run C:\Users\str.py [ARGUMENTS]
2025-05-01 10:34:01.755 Thread 'MainThread': missing ScriptRunContext! This warning can be ignored when running in bare mode.
Have you tried the native C method? Did you have any success running the binaries?
This blog post describes a workaround of storing a token as a SSM parameter and then using ssm.StringParameter.valueFromLookup to access the string value.
This is how the new batch file should work
@echo off
for /F %%A in (usernames.txt) do net user %%A /add password123!@#
echo done
pause
The netCDF files had several data arrays apart from time. So I first read time, associated each time with the name of the netCDF file it belongs to, and repartitioned the dataframe. Subsequently I added more columns using UDFs. This approach gave almost identical performance for the case where I had 200,000 frames distributed evenly in either 4 or 4000 files.
Go to your current "green" DB in the RDS console:
Under “Maintenance & backups”, check "Backup retention period".
If it is set to 14 days and the old one was 1 day, this is the cause.
If business allows, reduce from 14 days to 3–7 days (or back to 1 if safe).
This will immediately stop accumulating more snapshot data.
Go to RDS → Snapshots, and look at:
How many automated backups exist.
If there are manual snapshots too (they are also billed).
Delete old or unnecessary snapshots, especially from the Blue DB (if any linger).
Use AWS Cost Explorer with resource-level tags to track:
Which DB instance or snapshot is contributing to backup charges.
This will help identify which backups are costing you.
Enable RDS Lifecycle policies using AWS Backup (more granular control).
Use RDS storage autoscaling in the future (for better control on provisioned space).
If using Aurora, check if you're paying for snapshot export to S3 or long-term retention backups.
Just experienced this issue in Xcode 16.2. After deleting the derived data, rebooting the mac, re-building the project, cleaning the project, nothing worked. The solution was to force quit Xcode and re-open. The blue outlines went away, and the "internal error" warning was also gone.
Sir, yesterday I transferred 5000 in my game; the money was deducted from my account but never arrived in the game. Please help me.
I'm sorry you're running into this.
Could you try running prefect-cloud login --key <key> (note the - in prefect-cloud) and see if that works instead? Additionally, would you mind sharing your browser version?
In case that doesn't work here are some other workarounds:
1: Try Chrome (if you're not already)
2: Manually create/edit ~/.prefect/profiles.toml to look like:
active = "prefect-cloud"
[profiles.prefect-cloud]
PREFECT_API_URL = "https://api.prefect.cloud/api/accounts/<your account id>/workspaces/<your workspace id>"
PREFECT_API_KEY = "<your api key>"
3: Use environment variables:
export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<your account id>/workspaces/<your workspace id>"
export PREFECT_API_KEY="<your api key>"
Probably call int() on the value first to ensure that it is an integer, not a string or any other type.
Also, you can check the value by using print(type(value)) and make sure that it prints <class 'int'> in the console.
Happy coding!
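A tiny illustration of those two checks (the "42" stands in for whatever string your program receives):

```python
value = "42"             # e.g. the string returned by input()
number = int(value)      # convert before doing arithmetic
print(type(number))      # <class 'int'>
print(number + 1)        # 43
```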
The outlines library is designed to help structure outputs from language models, but as of my last knowledge update in October 2023, it may not fully support streaming contexts directly. If you're looking to enforce a specific output format from your LLM in a streaming manner, here are some steps and suggestions:
Check Library Updates: Since libraries are frequently updated, check the official documentation or GitHub repository for any recent changes regarding streaming support.
Custom Formatting: If outlines lacks streaming capabilities, consider implementing a custom solution. You can create a wrapper around the LLM output to enforce the desired format. This would involve parsing the streamed output and validating it against the specified format.
Pydantic Integration: Continue using Pydantic for response validation. Once you receive the output from the LLM, you can pass it to a Pydantic model to ensure it conforms to your specifications.
Asynchronous Handling: Ensure your FastAPI setup is properly handling asynchronous calls, especially when dealing with streaming data. Use async functions to manage the flow of data efficiently.
Community Feedback: Since you mentioned others might have similar issues, consider reaching out in forums or communities like GitHub discussions, Stack Overflow, or dedicated Discord servers for FastAPI, RAG systems, or the specific libraries you are using.
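The custom-wrapper idea from the second point can be sketched with the standard library alone: buffer streamed chunks, try to parse the accumulated text as JSON, and validate the result once it parses. The chunk source and required keys below are hypothetical placeholders; in practice you would validate with your Pydantic model instead of a key-set check:

```python
import json

REQUIRED_KEYS = {"answer", "sources"}  # hypothetical schema

def validate_stream(chunks):
    """Accumulate streamed text chunks and validate the final JSON payload."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Try to parse eagerly; partial buffers will fail and we keep reading.
        try:
            payload = json.loads(buffer)
        except json.JSONDecodeError:
            continue
        missing = REQUIRED_KEYS - payload.keys()
        if missing:
            raise ValueError(f"missing keys: {missing}")
        return payload
    raise ValueError("stream ended before a complete JSON object was received")

# Simulated LLM stream, split mid-token:
stream = ['{"answer": "4', '2", "sources": []}']
print(validate_stream(stream))  # {'answer': '42', 'sources': []}
```

In an async FastAPI endpoint the same logic would live inside an `async for chunk in ...` loop.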
The command they gave in the documentation didn't work for me either; it tells me I'm not passing any parameters. I'm using version 2.2.0.
You can give every file its own message by chaining per-file commits:
git commit file1.py -m "file1 commit msg" && git commit file2.py -m "file2 commit msg"
.reg script:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\.js]
@="JSFile"
Manually, using regedit:
go to HKEY_CLASSES_ROOT\.js
set (Default) = JSFile
assoc .js=JSFILE did not work for me: assoc was not editing the registry, while cscript.exe was reading from it.
Whatever value is set as (Default), cscript.exe will attempt to read the key HKEY_CLASSES_ROOT\%s\ScriptEngine; normally that would be HKEY_CLASSES_ROOT\JSFile\ScriptEngine.
I got this error while running cscript.exe on a script.js
cscript.exe //NoLogo AutoHotkey_alpha\source\scripts\minman.js
I found this link from microsoft. Could this help?
A simple way to detect motion (naively understood as pixel-intensity variation) is by using background subtraction.
Check this OpenCV tutorial. Note that one of the main assumptions here is that the scene background remains 'static'. So every time an object enters or moves in the scene, you can detect it, and for example track it using the centroid given by the average position of the pixels in the mask.
See Motion Detection: Part 3 - Background Subtraction for a concrete example.
If you want to consider just a part of the image, do as suggested in the other answers.
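As a minimal sketch of the idea (plain frame differencing against a static background, rather than OpenCV's full `cv2.createBackgroundSubtractorMOG2` model), you can threshold the absolute difference between the current frame and the background; the synthetic arrays below stand in for grayscale frames:

```python
import numpy as np

def motion_mask(background, frame, threshold=25):
    """Return a boolean mask of pixels whose intensity changed significantly."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def centroid(mask):
    """Average (x, y) position of the moving pixels (rudimentary tracking)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 100x100 grayscale frames: a bright 10x10 "object" appears.
background = np.zeros((100, 100), dtype=np.uint8)
frame = background.copy()
frame[40:50, 60:70] = 200

mask = motion_mask(background, frame)
print(centroid(mask))  # (64.5, 44.5)
```

A real pipeline would update the background model over time (which is exactly what the MOG2 subtractor does) instead of keeping it fixed.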
Check your security group. That was my issue. I needed a security group attached to the infrastructure config with an ingress rule from vpc cidr to all ports.
There must be a new vulnerability circulating. My site was just hit.
An unauthorized plugin called "catnip" was installed, featuring standard C&C capabilities: automatic administrator login, remote file downloading, basically access to the entire site.
The site name was changed to "darkoct02". Shortly after, a user with admin permissions was registered under the username "fallinlove", with a chefalicious mailbox.
When running your code on discord.js 14.19.2 and node v22.14.0, I receive the following output in the terminal:
carolina@Carolinas-MacBook-Air test % node index.js
Beep! Beep!🤖Bot Pending!
Beep! Beep!🤖Bot Running!
The bot is also shown as online in the Discord server I have invited it to. On my end, it does work. (Note: obviously at this stage, you can't do anything with the Discord bot as you haven't implemented any commands, but it should be displaying as online as it does for me).
Given this, I would ask that you comment with the versions of node and discord.js you are using, and confirm that you have added the Discord bot to your server. Without this information we can't provide further help, given you did state above that it does not work.
Some changes I would recommend to improve your code's readability in the meantime:
package.json
Add the following:
"type": "module",
index.js
// Imports at the top, switch to ES6 imports if possible instead of CommonJS
import { Client, Events, GatewayIntentBits } from "discord.js";
// Switch from string intents to the GatewayIntentBits enum for better type safety
const client = new Client({
intents: [
GatewayIntentBits.Guilds,
GatewayIntentBits.GuildMessages
]
});
console.log("Beep! Beep!🤖Bot Pending!");
// Improve clarity by moving the console.log inside of the event listener
// Switch from string events to the Events enum for better type safety
client.on(Events.ClientReady, () => {
console.log("Beep! Beep!🤖Bot Running!");
});
// Ensure login() is always the last line in the file, and token is stored in a separate file.
client.login("token");
Other really useful resources include:
I know this is old, but what you describe looks very much like the issue described here: https://issues.chromium.org/issues/41354368
If your Pygame code runs in Pyodide with no error but no output, it's likely because:
Standard Pygame doesn't work in browsers. Use pygame-ce or pygame-cffi (WebAssembly-compatible).
You must run it inside a browser canvas, not a desktop window.
Ensure you use pygame.display.set_mode(), a game loop, and pygame.display.flip() to show output.
To anyone who comes here wondering why the Id field of your model defaults to 0 when instantiated (through new Foo(), for example), see here for reference. It is by C#'s design that integral types like int default to 0 when not assigned a value.
The same happened to some of our sites. Were you using the WP order import/export plugin?
Had the same problem: all the numbers in my destination data frame column were strings, and some of the bars didn't show when I plotted a bar graph.
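If that's the case, converting the column to a numeric dtype before plotting usually fixes it. A small sketch with a hypothetical column name:

```python
import pandas as pd

# Hypothetical data: numbers stored as strings, as often happens after a CSV import.
df = pd.DataFrame({"value": ["1", "3", "2"]})
print(df["value"].dtype)  # object

# Convert to numeric; errors="coerce" turns unparseable entries into NaN
# instead of raising, so you can spot bad rows afterwards.
df["value"] = pd.to_numeric(df["value"], errors="coerce")
print(df["value"].dtype)  # int64
```

After the conversion, df.plot.bar() treats the column as numbers rather than categorical labels.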
This one is sneaky, because it is not explicitly mentioned anywhere in the reference manual (RM0490). The only clue given is by looking at the system architecture diagram on page 40:
For this chip, the GPIO ports are directly connected to the core, not the AHB bus, so the DMA has no access to them. It would seem that they opted for this layout to improve latency.
To trigger pins through DMA writes, you have to disable the preload of CCRx channels, and write values at the extremities (0 or ARR) to cause the outputs to flip accordingly.
What fixed it for me was right-clicking the file in File Explorer, choosing "Open in Terminal", and running npm create vite@latest. Using the command prompt in VSCode is what caused the "..." issue.
Also make sure you have the latest versions of npm and node.
from gtts import gTTS
letra = """
Título: "Veinte Inviernos y un Café"
Verso 1
Veinte inviernos y un café,
tus ojos siguen siendo mi amanecer.
Y aunque el mundo nos probó,
ninguno soltó el timón.
Hubo noches sin dormir,
meses grises que aprendimos a escribir.
Pero el amor no es de papel,
es de barro y de miel.
Estribillo
Y aquí estamos, con arrugas en la piel
pero el alma todavía en su primer hotel.
Nos juramos sin anillos de cartón,
y nos fuimos fieles por convicción.
Criamos un hijo y un montón de sueños,
con más abrazos que diseños.
Tú y yo, sin filtros, sin red social,
solamente amor… el real.
Verso 2
Te amé cuando el sueldo no alcanzaba,
y cuando el miedo nos llamaba.
Te amé cuando dudaste de ti,
y cuando fuiste más fuerte que a mí.
Y no fuimos perfectos, ni falta hacía,
el amor real se escribe día a día.
Lo que el tiempo no robó,
fue lo mucho que aún nos damos los dos.
Estribillo final
Y aquí estamos, sin pedirle nada al destino,
más que el derecho a seguir este camino.
Veinte años y un hijo que ya vuela,
pero tú sigues siendo mi estrella.
"""
# Generate Spanish speech audio from the lyrics and save it as an MP3.
tts = gTTS(text=letra, lang='es', slow=False)
tts.save("Veinte_Inviernos_y_un_Cafe.mp3")
print("MP3 generado con éxito.")