As of at least Safari 26.1, JavaScript successfully parses dates of the format YYYY-MM-DD without a workaround.
The sample code that @duffymo posted doesn't work. After making the edits that @kdsdata suggested,
I get the following error after pressing F5:
Compile error: Expected user-defined type, not project
and the following line of code gets highlighted in yellow:
Public Sub CreateWordList(TableName As String, FieldName As String, ForeignKey As String)
If you're using named table notation:
| PersonID | Hours | Jobcode | Total |
|---|---|---|---|
| 1 | 7 | 1 | 7 |
| 1 | 6 | 2 | 13 |
| 1 | 8 | 3 | 21 |
| 1 | 10 | 1 | 31 |
| 2 | 4 | 1 | 4 |
| 2 | 12 | 2 | 16 |
| 2 | 5 | 3 | 21 |
| 2 | 3 | 5 | 24 |
With the following formula for Total:
=SUMIF(INDEX([PersonID],1):[@PersonID],[@PersonID],INDEX([Hours],1):[@Hours])
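Here INDEX([PersonID],1):[@PersonID] builds an expanding range from the first data row down to the current row, so the SUMIF produces a per-person running total. Assuming the table starts in A1 with headers in row 1, the third data row is equivalent to the plain A1-notation formula =SUMIF($A$2:A4,A4,$B$2:B4).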
Converting this comment by jifi into an answer, to improve visibility for those stumbling on this question in their journey of writing their own EMV applet:
3- Is there any open source EMV applet?
Yes, there are open-source applets you can learn from. There is a curated list of these applets on GitHub under the name "Javacard Curated List".
There is a bug in the library which is being worked out.
Unfortunately, if using API Gateway, it looks like you have to encode { and | beforehand (which you said you cannot do). I had to open an AWS support ticket about this, and they pointed me to this documentation that mentions it :(

The pipe character (|) and the curly brace characters ({ or }) are not supported for any request URL query string and must be URL-encoded.

Have you tried changing the storage link on localhost and then uploading it?
Is Excel running?
In my experience, Excel puts locks on files and will prevent PowerShell from doing certain things while Excel is running.
Try using https://cdn.jsdelivr.net/npm/chart.js instead.
As the docs suggest:
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
@CyberCr0w: I don't get it, where's the magic trick?
So, you want me to use only `httpResource` for GET methods and `httpClient` for the rest? I don't want to create a new service file, so can I use `httpClient` just by replacing `httpResource`?
Maybe you could push your code, libs, and packages to GitHub, then git clone it on Kaggle.
I’d also suggest checking out the Salesforce Help article “Prepare for Connected App Usage Restrictions Change” and the new “Approve Uninstalled Connected Apps” user permission.
Although it covers a different error, it’s still related to connected app authentication issues and might help others facing similar problems — for example, errors like:
OAUTH_APPROVAL_ERROR_GENERIC
https://example.com?error=invalid_client&error_description=app+must+be+installed+into+org
Use the following command (the /f switch lists files as well as directories):
tree /f
or
tree /f > myfile.txt
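If the redirected file shows garbled tree characters, tree /f /a > myfile.txt makes tree use plain ASCII instead of extended characters.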
Please read https://stackoverflow.com/help/minimal-reproducible-example and provide the missing details (test code, complete errors) by editing the question, not in comments.
I posted the complete code I am testing here.
This is an old thread, but I'll add my comment to it anyway.
If you are receiving <not accessible>, two reasons not yet mentioned might be:
Windows corruption
Trying to debug in 'Release' mode
Windows corruption is usually fixed with the sfc and DISM commands in an elevated cmd.
For the latter, change from 'Release' to 'Debug' and try again.
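For reference, the usual repair commands, run from an elevated cmd, are:
sfc /scannow
DISM /Online /Cleanup-Image /RestoreHealth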
To get the values of several dynamically generated components in a Blazor EditForm, you need to handle data binding with a dynamic model and intercept each component's change events (ValueChanged).
You haven't posted the information that was asked for. How are we meant to know what the inputs are and what you expect the results to be?
If you go to Control Panel > Notifications > the Rules tab, you can find the tab with "default" where your e-mail is also marked. If you click it and then click "Edit", you can unfold "Container Manager" and uncheck the rule "Unexpectedly stopped". This should silence the e-mails and notifications.
Screenshot of Synology settings to turn off the Unexpectedly Stopped notification.
// Ninjago: Dragon Rising - Season 4 Teaser Cinematic
// Scene: Y/N L/N Reveal at the Ember Ruins
// Unreal Engine 5.4 (LEGO Animation Style)
// ==============================
// SETUP
// ==============================
Actors:
- BP_YN_Character (SkeletalMesh: LEGO_Minifig_YN)
- BP_Lloyd_Character (SkeletalMesh: LEGO_Minifig_Lloyd)
- BP_Nya_Character (SkeletalMesh: LEGO_Minifig_Nya)
- BP_Kai_Character (SkeletalMesh: LEGO_Minifig_Kai)
- BP_EmberCompass (StaticMesh: SM_Ember_Compass)
- BP_EmberTrail (NiagaraSystem: NS_EmberTrail)
- BP_MoonLight (DirectionalLight)
- BP_DragonSilhouette (SkeletalMesh: SM_Dragon_Silhouette)
- BP_Camera_Main (CineCameraActor)
- BP_SkyFog (ExponentialHeightFog + VolumetricFog Enabled)
// ==============================
// SEQUENCE FLOW
// ==============================
// Timeline: 0–40 seconds
LevelSequence "DragonRising_S4_Teaser"
EVENT BeginPlay
{
PlaySequence(DragonRising_S4_Teaser)
SetPostProcess(FilmGrain=0.3, Contrast=1.15, Saturation=0.9)
SpawnNiagaraSystem(BP_EmberTrail, Location=RuinsPath)
}
// ------------------------------
// SCENE 1 – The Ember Trail (0–10s)
// ------------------------------
CameraCutTo(BP_Camera_Main)
SetCameraTransform(Location=(0,0,250), Rotation=(0,-15,0), FOV=75)
PlaySound(SFX_Dragon_Roar_Distant, Volume=0.3)
FadeIn(2s)
BP_YN_Character.PlayAnimation("Crouch_Inspect")
SpawnNiagaraSystem(NS_EmberTrail, Location=Ground_Embers)
BP_YN_Character.PlayParticleEffect("Hand_Glow_Orange")
Lloyd_Character.VO("What is that?")
YN_Character.VO("A map. Only the dragons could leave it.")
// ------------------------------
// SCENE 2 – The Meeting (10–25s)
// ------------------------------
CameraPan(Target=Group_Ninja, Speed=0.5)
Lighting.SetIntensity(0.6)
PlayMusic(MX_TensionSoft)
Kai_Character.VO("Who are you?")
YN_Character.VO("Someone who knows where the dragons sleep.")
Nya_Character.VO("Then welcome—if you’re real.")
BP_EmberCompass.AttachTo(YN_Character, "RightHandSocket")
BP_EmberCompass.PlayGlow(Intensity=3)
SpawnNiagaraSystem(NS_DragonProjection, Location=Compass_Origin)
// ------------------------------
// SCENE 3 – Threat Tease (25–40s)
// ------------------------------
CameraShake(Intensity=2.0)
PlaySound(SFX_Ground_Rumble, Volume=0.6)
SpawnActor(BP_DragonSilhouette, Location=(0,2000,800))
SetLightColor(BP_MoonLight, Color=(0.8,0.8,1))
FadeLight(BP_MoonLight, Duration=2.5)
YN_Character.VO("They’re waking...")
TextOverlay("Season 4 — Dragon Rising: New alliances... old flames", FadeIn=1.0, Duration=3.0)
FadeOut(3s)
// ==============================
// MATERIAL / LIGHTING NOTES
// ==============================
Materials:
- YN_Cape_Mat: Cloth with emissive edge burn (color #FF4500)
- Ember_Studs_Mat: Translucent additive blend, emissive intensity 6
- Compass_Mat: PBR Metal with emissive rune detail
Lighting:
- MoonLight Intensity: 2.0 lux
- EmberTrail Glow: 2200K orange
- Fog Density: 0.3 for cinematic silhouette
Camera:
- DOF Focal Distance: 150
- Bloom Intensity: 0.8
- Lens Flare Enabled: True
I am using Astral's uv and had the same problem that a package (tksheet) couldn't be found. The problem was easily solved by adding pyinstaller to the uv dependencies:
uv add pyinstaller
and then building again:
uv run pyinstaller --clean [<NAME>].spec
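If you'd rather keep PyInstaller out of your runtime dependencies, adding it as a dev dependency should work too: uv add --dev pyinstaller.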
"other" seemed the best category as it was not troubleshooting, debugging, tooling, best practice nor advice.
Relevant meta: https://meta.stackoverflow.com/a/436435/2402272
Damn, I was hoping that wouldn't be the answer. Back to the drawing board to figure out how to meet the objectives given to me.
I'm not sure why you think it's a good idea to keep these details secret? Why is end unused?
Try creating a fresh database and then connecting to it. That worked for me: I had run some scripts in the old database that were conflicting. Now everything is working, so you can give it a try.
I found the error.
Turns out there was an issue with conan_provider.cmake: it only installed Debug and Release.
The compile_commands.json, however, also contained entries for MinSizeRel and RelWithDebInfo. Since Conan did not fill those configs, the include paths were missing for them in my compile_commands.json.
After adding the other configs to the Conan provider, my compile_commands.json was fixed and clang-tidy does seem to work :)
I wrote a test case to check timestamps with Joda-Time DateTime and java.time, and the two produce different timestamps. If we replace Joda-Time with java.time, the timestamps should be the same as before, but they come out different.
In general, transaction propagation is supported by Liberty and you don't need to modify EJBs that run on traditional WebSphere to make them run on Liberty. However, as the original poster says, if you use remote transaction propagation (in which one EJB calls another EJB in a different app server or JVM), the app won't run directly on Liberty.
However, there are workarounds, as @gas says. It might be that you can modify the EJBs so that they run in the same JVM. Sometimes EJBs are configured to use remote interfaces when the calls are actually local; so that's something else that can be easily changed to avoid making remote calls.
If none of that helps, though, you can wrap the EJB in a web service before deploying it to Liberty and then use the web service client to call the EJB web service. But it's worth checking that your app does actually need to be using the EJB's remote interfaces in the first place.
More info about how to check and then how to wrap the EJB in a web service here: https://community.ibm.com/community/user/blogs/laura-cowen1/2025/09/12/transaction-propagation-on-liberty
... your test cases being what? And failing how?
Reading files with PHP is slow.
First, your PHP code has to be translated into machine instructions and executed, which is slow in itself.
Next, your PHP code has to check file_exists and then read the file in some way, e.g. with file_get_contents or line by line.
file_get_contents will run into memory issues on large files. Also, searching many files for a certain thing means you have to open every file to find that one thing.
Say, for example, you have a million registered users, and each user's info is stored in their own file.
Now you want to find the names of all male users who live in California.
You would need to open each user's file and read the data to find what you are looking for, and opening a million files takes far too much time with PHP.
In MySQL, this is one query, which will probably take less than a few seconds to execute.
What you need is a mix of both the PHP file system and MySQL.
Store the data that is accessed again and again in a file and open that file with PHP, and also put all the data into MySQL for use with complex queries.
For example, you have a file for user "JOHN_SMITH.txt" stored somewhere outside public_html.
When another user wants to see info about JOHN_SMITH at example.com/profile/JOHN_SMITH/, your PHP code simply opens that one file and displays the info. It takes about a second to complete that task, with no need to touch MySQL, leaving it free for more complex searches.
Also, you need a really big database table and extremely high traffic before speed becomes an issue. If it has become an issue in your case with only a little extra traffic, then either the server is too weak (too little processing power) or your database is not designed/indexed properly.
This question already has answers in this regular question: How do I "git blame" a deleted line?. There is nothing opinion-based about it, so "Advice" was the wrong type to pick.
My setup looks much like yours, except I'm not using the "AuthorizationScopes:" field. Have you tried deleting the ".aws-sam" folder to make sure you are getting a clean build and deploy? I've had to do that on occasion to get my changes to appear.
If you are getting a 500 error you've at least gotten past CORS going in. Do note that your response needs to include those CORS headers as well or the browser will block it again.
Good luck
@julaine, thanks. My goal was to post a question and get an objective, authoritative answer. "Other" seemed the best category as it was not troubleshooting, debugging, tooling, best practice, nor advice. I now see it results in a discussion and not something to accept.
[Originally answered here] From the Command Palette, select Select and Start Debugging. There should be a delete icon next to each contributed launch action, which will remove it from the Run and Debug dropdown list.
I would prefer to go with something like this:
try:
    from enum import StrEnum

    class StringEnum(StrEnum):
        pass
except ImportError:
    from enum import Enum

    class StringEnum(str, Enum):
        pass

class MyEnum(StringEnum):
    choice1 = "choice1"
    choice2 = "choice2"
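A quick check that behaves the same on both branches, since members compare equal to their plain-string values:

assert MyEnum.choice1 == "choice1"
assert MyEnum.choice2.value == "choice2"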
Eventually, I ended up splitting my query: in the first part I got all the records that satisfy my exclude condition, and in the second part I got the records that satisfy my include conditions.
I can't prove this, but I think Microsoft is trying to parse the <LoadUri> element too early. This "Invalid URI: The hostname could not be parsed." error message is a common exception that's thrown whenever you try to create a Uri with a bad hostname, which is true if I try to parse a value such as "https://{Claim:loadUri}". I feel like the parsing should have been done AFTER the {Claim:} tokens are resolved, but I don't know the reasoning behind it.
In short: I just couldn't do it and I had to redesign these dynamic URLs. I'm using a static url for use cases and I created a reverse proxy that routes to different destinations based off of the query string. They look like this now: <LoadUri>https://my-reverse-proxy.com?myClaim={Claim:MyClaim}</LoadUri>
It's covered on Mozilla Developer Network now: https://developer.mozilla.org/en-US/docs/Web/API/File_API/Using_files_from_web_applications#using_hidden_file_input_elements_using_the_click_method
For Autodesk Platform Services (APS) applications created on or after December 10, 2024, a “Legacy endpoint is deprecated” error is returned.
Your 2017 application is "grandfathered." Autodesk is giving existing apps a grace period to migrate, but won't allow new applications to revert to it. There is no sunset date for the grace period to end.
Follow the advice at the top of the documentation you linked, and use
"If I'm on the wrong path". My advice is trying to create a dynamic form is the wrong path. You're chasing the pot of gold at the end of a rainbow. It looks simple when you just have to deal with text fields, but as the number of field types grow it becomes horribly complex. Why do you think the Internet is not awash with Dynamic Form libraries?
Hey Jan Nielsen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Thanks to @DavidMaze's comment 🙏 ...
You can set an environment variable PIP_EXTRA_INDEX_URL with the same effect, and either pass it in via a Dockerfile ARG (be careful to use a multi-stage build to avoid leaking the token, though) or a build secret. See for example Passing PIP_EXTRA_INDEX_URL to docker build.
... to pass the argument, I added the following in .gitlab-ci.yml, in the dockerize-env job's script:
export PIP_EXTRA_INDEX_URL="https://gitlab-ci-token:[email protected]/api/v4/groups/<GROUP_ID>/-/packages/pypi/simple"
and in DockerFile.project_env:
# Add a variable accessible during image build
# so that pip can access internal pypi repo
ARG PIP_EXTRA_INDEX_URL
After that, my_package_on_gitlab_private_pypi can be installed from the internal PyPI 🎉
Try out this package; it works to run Lumi H5P locally.
@Hamid You're just going to ignore all the questions posted above, then?
Hey NoobNoob, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
In my case these drivers work; the latest drivers don't. Thanks!
Hey Janning Vygen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Running pod update fixed it for me
How to change "max connection pool size" permanently?
<asp:GridView ID="GridView1" runat="server"
AllowPaging="true"
PageSize="50"
PagerStyle-HorizontalAlign="Center"
OnPageIndexChanging="GridView1_PageIndexChanging">
</asp:GridView>
@use "sass:math";
@use "sass:meta";

/**
 * Removes the unit from a value if it has one.
 *
 * @param {Number|String} $value - The value to strip the unit from.
 * @return {Number} - The value without its unit.
 */
@function strip-unit($value) {
  @if meta.type-of($value) == "number" and not math.is-unitless($value) {
    // Dividing by one unit of the value's own unit cancels the unit.
    @return math.div($value, ($value * 0 + 1));
  } @else {
    @return $value;
  }
}
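For example, with the function above, strip-unit(16px) returns the unitless number 16, while a unitless 1.5 is returned unchanged.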
Please elaborate on the way you use FastAPI's workers, and on your implementation of the API as a whole. If uploading a CSV, preprocessing, and all the math are done in one endpoint, you won't really gain any performance by increasing the number of FastAPI workers. Your work is mostly CPU-bound, so it makes sense to separate the networking and the math into different entities.

The way I usually do this in my projects: have an API/producer (FastAPI), which processes incoming requests and, if a request is nested, splits it into different jobs and passes them on for workers to process. Workers are replicated and run in parallel, each one processing its own part of the workload. After completing the work, results are passed back to the producer for a response. More technically speaking, your producer is FastAPI; for workers I usually go with Celery, which is a popular and solid choice (but there are many others); and you'll need a way for the producer and workers to communicate - Redis is a good choice. Adding to that, I'd suggest ditching Pandas and going with Polars; in my experience the performance gain is really noticeable.

So your workflow will go like this: upload a CSV -> split it into chunks -> assign a separate task to process each chunk and execute them in parallel -> gather the results and return a response. A minimal sketch of this split is below.
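A minimal sketch of that producer/worker split, assuming Redis on localhost:6379 and Celery workers started separately (celery -A tasks worker); the endpoint, task, and column names here are illustrative, not from your code:

```python
import io

import polars as pl
from celery import Celery, group
from fastapi import FastAPI, UploadFile

# Broker and result backend both live in Redis (assumed local).
celery_app = Celery("tasks",
                    broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/0")

@celery_app.task
def process_chunk(rows: list[dict]) -> float:
    # CPU-bound math on one chunk; runs in a separate worker process.
    return float(sum(r["value"] for r in rows))

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile):
    df = pl.read_csv(io.BytesIO(await file.read()))
    chunks = [c.to_dicts() for c in df.iter_slices(10_000)]
    # Fan out one task per chunk; workers process them in parallel.
    job = group(process_chunk.s(chunk) for chunk in chunks).apply_async()
    # Blocking .get() here is only for brevity; in production you would
    # return a job id and let the client poll for the result instead.
    return {"totals": job.get(timeout=60)}
```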
As per the 'Databases and Doctrine ORM' page (https://symfony.com/doc/current/doctrine.html),
I edited the .env file and used dummy values for the database URL and password; this satisfied Symfony.
The files are in the range of a couple of megabytes each, since they are binary glTF files (3D models and scenes). I have tried that in debug mode, but I can see the data plainly in memory; I'm not sure if that's because it's compiled in debug mode or if that's how it's going to be. I guess this is not such a good idea then, since it won't be as efficient as just accessing external files on demand.
You can use the inheritViewBox prop, which is exactly meant for that purpose. See https://mui.com/material-ui/api/svg-icon/#svg-icon-prop-inheritViewBox
import os
import argparse
import re
import logging
from collections import defaultdict
from pathlib import Path
import datetime

# Logger will be configured in main() after parsing arguments
logger = logging.getLogger(__name__)


def find_filename_duplicates(directory):
    """Find files with duplicate names in the given directory and its subdirectories"""
    files_by_name = defaultdict(list)
    # Walk through the directory and collect files by name
    for root, _, filenames in os.walk(directory):
        for filename in filenames:
            filepath = os.path.join(root, filename)
            files_by_name[filename].append(filepath)
    # Filter to only include names that have duplicates
    duplicates = {name: paths for name, paths in files_by_name.items() if len(paths) > 1}
    return duplicates


def extract_number_pattern(filename):
    """Extract the number from a filename with pattern V1_A_B_C_D_E_F__.*
    where:
    - A is date in YYYYMMDD format
    - B is 2 digits
    - C is 4-6 digits
    - D is 4 digits
    - E is 3 digits
    - F is 2 digits (the part we'll increment)
    """
    # Pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__.*
    pattern = re.compile(r'(V1_\d{8}_\d{2}_\d{4,6}_\d{4}_\d{3}_)(\d{2})(__.*)')
    match = pattern.match(filename)
    if match:
        prefix = match.group(1)       # Everything before the F part
        number = int(match.group(2))  # The F part (2 digits)
        suffix = match.group(3)       # Everything after the F part
        return prefix, number, suffix
    return None


def increment_number_in_filename(filepath):
    """Increment the F part (2-digit number) in the filename pattern"""
    path = Path(filepath)
    filename = path.name
    parts = extract_number_pattern(filename)
    if parts:
        prefix, number, suffix = parts
        new_number = number + 1
        # Ensure the number stays as 2 digits with leading zero if needed
        new_filename = f"{prefix}{new_number:02d}{suffix}"
        new_filepath = path.with_name(new_filename)
        return new_filepath
    # If pattern doesn't match, return the original path with a warning
    logger.warning(f"File {filepath} doesn't match the expected pattern. Skipping.")
    return filepath


def format_size(size):
    """Convert file size to human-readable format"""
    for unit in ['B', 'KB', 'MB', 'GB']:
        if size < 1024.0:
            return f"{size:.2f} {unit}"
        size /= 1024.0
    return f"{size:.2f} TB"


def rename_duplicates(directory, dry_run=False):
    """Find and rename duplicate files, return number of renames performed"""
    duplicates = find_filename_duplicates(directory)
    if not duplicates:
        logger.info("No files with duplicate names found.")
        return 0
    total_duplicates = sum(len(paths) - 1 for paths in duplicates.values())
    logger.info(f"Found {len(duplicates)} duplicate filenames ({total_duplicates} duplicates in total)")
    rename_count = 0
    for filename, paths in duplicates.items():
        logger.info(f"\nDuplicate filename: '{filename}'")
        # Check if the filename matches our expected pattern
        if not extract_number_pattern(filename):
            logger.warning(f"  Filename '{filename}' doesn't match the expected pattern V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*")
            logger.warning("  Skipping this group of files")
            continue
        # Keep the first file as is, rename the rest
        for i, filepath in enumerate(paths):
            rel_path = os.path.relpath(filepath, directory)
            if i == 0:
                logger.info(f"  Keeping: {rel_path}")
            else:
                new_path = increment_number_in_filename(filepath)
                # Skip if the pattern doesn't match
                if new_path == filepath:
                    continue
                # Make sure the new name doesn't already exist
                while os.path.exists(new_path) and not dry_run:
                    new_path = increment_number_in_filename(new_path)
                if dry_run:
                    logger.info(f"  [DRY RUN] Would rename: {rel_path} → {os.path.basename(new_path)}")
                else:
                    logger.info(f"  Renaming: {rel_path} → {os.path.basename(new_path)}")
                    os.rename(filepath, new_path)
                rename_count += 1
    return rename_count


def main():
    parser = argparse.ArgumentParser(
        description='Find and rename files with duplicate names using pattern increment',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Example filename pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*
where:
- YYYYMMDD is a date
- BB is 2 digits
- CCCCCC is 4-6 digits
- DDDD is 4 digits
- EEE is 3 digits
- FF is 2 digits (this is the part that will be incremented)
"""
    )
    parser.add_argument('directory', help='Directory to scan for filename duplicates')
    parser.add_argument('--max-iterations', type=int, default=10,
                        help='Maximum number of renaming iterations to perform')
    parser.add_argument('--log-dir', default=os.getcwd(),
                        help='Directory where log file should be saved (default: current directory)')
    parser.add_argument('--log-name',
                        help='Custom log filename (default: rename_log_YYYYMMDD_HHMMSS.txt)')
    parser.add_argument('--dry-run', action='store_true',
                        help='Perform a dry run without actually renaming files')
    args = parser.parse_args()

    # Configure logging based on arguments
    log_dir = os.path.abspath(args.log_dir)
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    if args.log_name:
        log_filename = os.path.join(log_dir, args.log_name)
    else:
        timestamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
        log_filename = os.path.join(log_dir, f"rename_log_{timestamp}.txt")
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler(log_filename),
            logging.StreamHandler()
        ]
    )

    directory = os.path.abspath(args.directory)
    if not os.path.isdir(directory):
        logger.error(f"Error: Directory '{directory}' does not exist.")
        return
    logger.info(f"Starting duplicate filename resolution in: {directory}")
    logger.info(f"Log file: {os.path.abspath(log_filename)}")
    if args.dry_run:
        logger.info("DRY RUN MODE: No files will actually be renamed")

    iteration = 1
    total_renames = 0
    while iteration <= args.max_iterations:
        logger.info(f"\n=== Iteration {iteration} ===")
        renames = rename_duplicates(directory, args.dry_run)
        total_renames += renames
        if renames == 0:
            logger.info(f"\nProcess completed successfully after {iteration} iterations.")
            logger.info(f"Total files {'that would be ' if args.dry_run else ''}renamed: {total_renames}")
            break
        logger.info(f"{'Would rename' if args.dry_run else 'Renamed'} {renames} files in this iteration. Checking for remaining duplicates...")
        if args.dry_run:
            # In dry run mode, we only do one iteration since we're not actually changing anything
            logger.info("\nDry run complete. Exiting after one iteration.")
            break
        iteration += 1
    else:
        logger.warning(f"\nReached maximum iterations ({args.max_iterations}). Process may not be complete.")
        logger.info(f"Total files renamed so far: {total_renames}")
    logger.info(f"\nDetailed log saved to: {os.path.abspath(log_filename)}")


if __name__ == "__main__":
    main()
The md file:
# Cascade Chat Conversation
Note: _This is purely the output of the chat conversation and does not contain any raw data, codebase snippets, etc. used to generate the output._
### User Input
file duplicates viewer and changer
### Planner Response
I'll help you create a Python script to find and manage duplicate files in a directory. This tool will identify files with identical content and provide options to delete or keep one copy.
Let me create a Python script with a simple command-line interface:
*Edited relevant file*
### Planner Response
I've created a Python script called duplicate_finder.py that helps you find and manage duplicate files. Here's what it does:
### Features:
- Scans a directory and its subdirectories for duplicate files
- Identifies duplicates by both file size and content hash
- Shows detailed information about found duplicates
- Provides an option to automatically delete duplicates (keeping one copy)
- Displays the total space that can be recovered
### How to use:
1. **Basic usage** (just find duplicates):
```bash
python duplicate_finder.py "path/to/your/directory"
```
2. **Delete duplicates** (keeps the first copy in each group):
```bash
python duplicate_finder.py "path/to/your/directory" --delete
```
Example output:
```
Scanning for duplicate files in: C:\path\to\your\directory

Found 3 groups of duplicate files (5 duplicates in total):

Group 1 - 2.50 MB each:
  1. folder1\image1.jpg
  2. folder2\image_copy.jpg
  3. backup\image1(1).jpg

Group 2 - 150.00 KB each:
  1. docs\report.pdf
  2. archive\report.pdf

Group 3 - 5.00 MB each:
  1. videos\clip.mp4
  2. temp\clip.mp4

Total space wasted by duplicates: 12.80 MB
```
With --delete, it keeps the first file in each group and removes the duplicates.

Would you like me to run it on a specific directory for you?
*Viewed duplicate_finder.py*
### User Input

file name duplicates viewer and changer
### Planner Response

I see you're looking for a tool to find and manage files with duplicate names rather than duplicate content. Let me create a script that focuses on filename duplicates instead.
Edited relevant file
I've created a new script called filename_duplicate_finder.py that specifically finds and manages files with duplicate names (rather than duplicate content).
python filename_duplicate_finder.py "path/to/your/directory"
python filename_duplicate_finder.py "path/to/your/directory" --rename
python filename_duplicate_finder.py "path/to/your/directory" --rename --interactive
Example output:
```
Scanning for files with duplicate names in: C:\path\to\your\directory

Found 2 duplicate filenames (3 duplicates in total):

Duplicate #1: 'report.pdf'
  1. docs\report.pdf (150.00 KB)
  2. archive\report.pdf (150.00 KB)
  3. old\report.pdf (120.00 KB)

Duplicate #2: 'config.json'
  1. settings\config.json (2.50 KB)
  2. backup\config.json (2.30 KB)
```
- With --rename: automatically renames all duplicates except the first one in each group (e.g., report.pdf → report_1.pdf, report_2.pdf)
- With --rename --interactive: gives you options for each group of duplicates
Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input

The filename must be renamed, but not at the end; the file will have a pattern like .dd__text. - the dd must be incremented by 1. Also, the filename should be checked across a directory having many folders. After the renaming, the directory should be checked again for duplicates. This process should repeat until there are no filename duplicates found in the directory. Also provide a detailed step-by-step log of the duplicate filenames found and the renamed filenames.
### Planner Response

I understand you need a more specific filename renaming pattern. You want to:
- Rename files matching the pattern .*dd__text.* where dd is a number that should be incremented

Let me create a new script that implements this logic:
Edited relevant file
I've created a new script called pattern_filename_renamer.py that implements your specific requirements for finding and renaming duplicate filenames with a pattern-based approach.
- Matches filenames with the pattern .*dd__text.* where dd is a two-digit number
- Increments the number (dd) by 1 when renaming files

python pattern_filename_renamer.py "path/to/your/directory"
You can also specify a maximum number of iterations:
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 20
Pattern Recognition: The script looks for filenames with the pattern .*dd__text.* where:
- .* is any prefix
- dd is a two-digit number (like 01, 02, etc.)
- __text is any text with double underscores
- .* is any suffix, including the file extension

Renaming Logic:
Iteration Process:
Logging:
- Creates a timestamped log file (e.g., rename_log_20251104_202500.txt)

Example log:
```
2025-11-04 20:25:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 20:25:00 - INFO - Log file: C:\path\to\rename_log_20251104_202500.txt
2025-11-04 20:25:00 - INFO - === Iteration 1 ===
2025-11-04 20:25:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 20:25:00 - INFO - Duplicate filename: 'report01__final.pdf'
2025-11-04 20:25:00 - INFO -   Keeping: docs\report01__final.pdf
2025-11-04 20:25:00 - INFO -   Renaming: archive\report01__final.pdf → report02__final.pdf
2025-11-04 20:25:00 - INFO - Duplicate filename: 'data05__processed.csv'
2025-11-04 20:25:00 - INFO -   Keeping: data\data05__processed.csv
2025-11-04 20:25:00 - INFO -   Renaming: backup\data05__processed.csv → data06__processed.csv
2025-11-04 20:25:00 - INFO - Renamed 2 files in this iteration. Checking for remaining duplicates...
2025-11-04 20:25:01 - INFO - === Iteration 2 ===
2025-11-04 20:25:01 - INFO - No files with duplicate names found.
2025-11-04 20:25:01 - INFO - Process completed successfully after 2 iterations.
2025-11-04 20:25:01 - INFO - Total files renamed: 2
2025-11-04 20:25:01 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_202500.txt
```
Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input

keep the directory and log file location as variable inputs

### Planner Response

I'll update the script to make both the directory and the log file location configurable as command-line arguments. This will give you more flexibility when using the tool.
Edited relevant file
I've updated the script to make both the directory and log file location configurable as command-line arguments. Here are the changes:
- --log-dir: specifies the directory where the log file should be saved
- --log-name: allows you to specify a custom log filename (default: rename_log_YYYYMMDD_HHMMSS.txt with the current timestamp)

python pattern_filename_renamer.py "path/to/your/directory"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs"
python pattern_filename_renamer.py "path/to/your/directory" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 15 --log-dir "path/to/logs" --log-name "my_custom_log.txt"
These changes make the script more flexible and allow you to specify exactly where you want the logs to be stored, which is especially useful for automation or when running the script on multiple directories.
### User Input

The exact filename regex would be V1_A_B_C_D_E_F__.*, where A would be a date in YYYYMMDD format, B would be digits of length 2, C would be digits of length between 4 and 6, D would be digits of length 4, E would be digits of length 3, and the part F would be a 2-digit number; the rest (.*) would be a combination of alphabets, underscores, and digits. I want you to change only the 2-digit number F, by incrementing the number by 1. Make the changes accordingly to match this requirement.

### Planner Response

I'll update the script to match your specific filename pattern and ensure it only increments the 2-digit number F in the pattern V1_A_B_C_D_E_F__.* where:
Edited relevant file
Let me fix that and try again:
Edited relevant file
I apologize for the errors. I'm having trouble with the replace_file_content tool when trying to update the script to match your specific filename pattern. The error indicates there's an issue with how I'm formatting the JSON for the replacement chunks.
The specific error is about invalid character 'd' in string escape code, which suggests there might be an issue with how backslashes are being handled in the regular expression patterns.
How would you like to proceed? I could:
What would you prefer?
### User Input

1

### Planner Response

I'll create a new file from scratch with the updated pattern requirements to match your specific filename format.
Edited relevant file
I've created a new script called filename_pattern_renamer.py that specifically handles your filename pattern requirements. Here's what it does:
Specific Pattern Matching: Identifies files with the pattern V1_A_B_C_D_E_F__.* where:
Targeted Renaming: Only increments the 2-digit F part of the filename
Duplicate Resolution: Iteratively scans and renames files until no duplicates remain
Detailed Logging: Creates a comprehensive log of all actions taken
Configurable Options:
python filename_pattern_renamer.py "path/to/your/directory"
python filename_pattern_renamer.py "path/to/your/directory" --dry-run
python filename_pattern_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_rename_log.txt"
Example log:
```
2025-11-04 22:42:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 22:42:00 - INFO - Log file: C:\path\to\rename_log_20251104_224200.txt
2025-11-04 22:42:00 - INFO - === Iteration 1 ===
2025-11-04 22:42:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251104_01_12345_6789_123_01__data.txt'
2025-11-04 22:42:00 - INFO -   Keeping: folder1\V1_20251104_01_12345_6789_123_01__data.txt
2025-11-04 22:42:00 - INFO -   Renaming: folder2\V1_20251104_01_12345_6789_123_01__data.txt → V1_20251104_01_12345_6789_123_02__data.txt
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_05__report.csv'
2025-11-04 22:42:00 - INFO -   Keeping: reports\V1_20251103_02_1234_5678_321_05__report.csv
2025-11-04 22:42:00 - INFO -   Renaming: archive\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO -   Renaming: backup\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO - Renamed 3 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:01 - INFO - === Iteration 2 ===
2025-11-04 22:42:01 - INFO - Found 1 duplicate filenames (1 duplicates in total)
2025-11-04 22:42:01 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_06__report.csv'
2025-11-04 22:42:01 - INFO -   Keeping: archive\V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:01 - INFO -   Renaming: backup\V1_20251103_02_1234_5678_321_06__report.csv → V1_20251103_02_1234_5678_321_07__report.csv
2025-11-04 22:42:01 - INFO - Renamed 1 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:02 - INFO - === Iteration 3 ===
2025-11-04 22:42:02 - INFO - No files with duplicate names found.
2025-11-04 22:42:02 - INFO - Process completed successfully after 3 iterations.
2025-11-04 22:42:02 - INFO - Total files renamed: 4
2025-11-04 22:42:02 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_224200.txt
```
The script now precisely matches your requirements for finding and renaming duplicate files with the specific pattern format you described. Would you like me to make any adjustments to this script?
Ok, after some research I found an old workaround in my project related to KeyboardAutoScroll on iOS, and it turns out this is what broke the ScrollView in .NET 9.
Thanks for your responses!
These days, you can use npm audit with the --audit-level flag, which sets the minimum severity that makes the audit fail (exit non-zero):
npm audit --audit-level=(low|moderate|high|critical)
This comes from their docs page: https://docs.npmjs.com/cli/v7/commands/npm-audit
I have the same problem. No input is accepted, and the module repeats the described sequence, ending with "SMS done".
Thanks for any help.
I couldn't just downgrade the React Native version in the same project; I had to create a new project from scratch with React Native version 0.72.4, reimplement the entire project, and search for libraries compatible with that older version. It was very painful, but it was the only thing I could do to solve my problem.
@KJ
excuse me, I meant jpeg.
Thanks! I’ve added a minimal sample PDF + the source PNG here: (link).
I’m not relying on viewer defaults—I compute the page size as points = pixels * 72 / targetDPI and draw the CGImage 1:1 (no interpolation). Despite that, thin glyphs show “holes” at typical zoom in Preview/PDFKit.
Do you know if iOS/macOS forces a particular image filter (e.g., JPEG re-encode) or disables smoothing for 1-bit/grayscale images inside PDFs? Any flag to guarantee lossless Flate (8-bit) or CCITT G4 (1-bit)?
Agree that PDF has no intrinsic DPI—only user space. That’s why I scale the page so one image pixel maps cleanly at a chosen effective DPI. The issue persists even with 8-bit grayscale (not just 1-bit).
Are there known viewer behaviors in Preview/PDFKit that render bilevel/high-contrast rasters without anti-aliasing? If so, would your recommendation be to (a) embed as 8-bit grayscale at 2× resolution, or (b) pre-apply a tiny morphological close/blur to avoid “holes”? Any other reliable workaround?
I hope it's useful
I recommend the PlutoPrint library on GitHub; it's easy to use, and the maintainers answer questions.
Update: none does not work for me on an Apple Silicon Mac. It has updated itself three times to the newest version, each time when I restarted my computer (not sure it happened on every restart, though). It probably works until I restart the computer; then it updates anyway.
/**
* null, undefined, '', [] => true
* 0, '0', true, false, NaN => false
*/
const isEmpty = (v) => ![v].join('');
const hasDuplicates = (arr) => new Set(arr.filter(Boolean)).size !== arr.filter(Boolean).length; // true/false
This is expected behaviour. In Outlook, the Company field is just plain text, but in Dynamics 365, the Company Name field on a Contact is a lookup to an Account. Because of that, it will only sync if Dynamics can find an Account with the exact same name, or if the system is configured to automatically create the Account. Check your Contact Synchronization Field Mappings and make sure “Company” maps to ParentCustomerId, and enable “Create parent account for new contacts” if you want Dynamics to auto-create the Account when one doesn’t already exist.
For frequent real time communication like this what actual protocol are you planning to use on top of TCP/IP? Websocket maybe? Or something else?
Perhaps you can create the refs in your custom hook.
Before:
const menuRef = useRef()
const menuButtonRef = useRef()
const toggleMenu = useToggleMenu(menuRef, menuButtonRef);
After:
const { toggleMenu, menuRef, menuButtonRef } = useToggleMenu();
I found the solution. The issue was not on our side; it occurred in the backend. I simply updated to the latest Supabase image and somehow it suddenly worked!
I hope you get the correct answer; I don't know this one.
For xUnit install these 3 packages:
1. xunit
2. xunit.runner.visualstudio
3. Microsoft.NET.Test.Sdk
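If you use the dotnet CLI, the equivalent commands (run inside the test project) are:
dotnet add package xunit
dotnet add package xunit.runner.visualstudio
dotnet add package Microsoft.NET.Test.Sdk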
For new Android emulator versions, you should change this setting to bring back the soft keyboard:
disable `Use stylus to write in text fields`
and then the keyboard will appear again.
I was hoping for a more concise solution, maybe using vector operations, but at least with loops it works. Just two remarks: the rationale for having charfillvalue and fillvalue is a little bit disappointing, and
in Fortran the shift argument can be a vector, as in the following example:
`a = eoshift(reshape([(i,i=1,9)], [3, 3]), shift=[1, 2, 1], boundary=-5, dim=2)`
transforms
1 4 7 4 7 -5
2 5 8 -> 8 -5 -5
3 6 9 6 9 -5
ALTER TABLE EMPLOYEE MODIFY COLUMN id INTEGER; -- downgrade if it was BIGINT
Use `@reference "tailwindcss";` at the component level.
You can check the docs here.
I followed the instructions at this URL and it works:
https://mlocati.github.io/articles/php-windows-imagick.html
I am trying to refactor this class: https://github.com/grastvei007/tagsystem/blob/develop/tag.h
I am now in the process of changing the QVariant on line 95 to std::variant, since QVariant seems not so good when using a list: it returns a copy of the list with a call to .toList().
Since the enums I have for the different data types are used across the applications, it makes sense to be able to forward them into the std::variant, and it probably makes more sense for string than for int and double.
The whole class should probably have been a template, constructing a new Tag like this: Tag<Double> tag;
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
@JesusFreke
How does the Android runtime state machine work?
I found it! We have to add this argument to dspy.LM(): reasoning_effort="disable"
@Drew - First of all, I really want to optimize the number of renders, not commits. I have supplied the state mutation function in the post, but again, this is more of an academic question. Redux allows you to create selectors to re-render only on a change of a specified leaf of the Redux state tree (using createSelector + useSelector). I am trying to accomplish something similar within my own library, without depending on Redux. The problem is not in my code; the problem is in how the context works and that it does not support such behaviour by default. I want to re-render my component only when a specific item is added to or removed from the list stored in useState. I will create a simple MRE today, give me a moment.
Depends on your stack. What are you using in the API, and how are you using the database?
Put this formula in E3:
=IF(A3=2,$B$2,IF(A3=A2+1,B2,C2))
Try sudo apt-get install libc6-dev-i386
This looks like another instance of an infamous SAM bug where intrinsic functions do not work for some properties, but only in AWS::Serverless resources. A known workaround is to include the AWS::LanguageExtensions transform before Serverless, like this:
Transform:
- AWS::LanguageExtensions
- AWS::Serverless-2016-10-31
This may be late, but you might want to try this, since the accepted answer didn't work for me.
This is how you should change your where input:
AND: [
  yourWhereInput,
  {
    OR: [
      { email: null },
      { email: { isSet: false } }
    ]
  }
]
@VLAZ you are totally right. My mistake is that I didn't suspect the consequences of (undefined === undefined) being true, hence dangerous. I have to review my old code, where it may have happened silently.
I'm sure we have a lot about this topic on Stack Overflow, but I like this site for various Spring questions and guides: https://www.baeldung.com/exception-handling-for-rest-with-spring
In your pom.xml, it looks like you are using
<spring-ai.version>1.0.0-SNAPSHOT</spring-ai.version>
GPT-5 support was added in 1.1.0-M1 (or later). The name gpt-5 might not be recognized by Spring AI in your version, causing fallback behavior to the default gpt-4o-mini. Consider upgrading your version.
Yes, you can connect on-premises storage to cloud servers, but how depends on the type of storage.
A NAS (Network Attached Storage) can share its files with cloud servers over a secure VPN or a dedicated network connection. Alternatively, cloud providers offer their own managed file services that work like a NAS.
A traditional SAN (Storage Area Network) cannot plug directly into cloud servers. Instead, you have to use the cloud's own block storage services, which serve the same purpose. Always consider network latency and data transfer costs.
I think I should have clarified what I have right now: I can read binary data from files on demand into CPU memory, upload that to the GPU, and discard the data afterwards to free CPU memory. I was only wondering if I could do that somehow without needing extra files next to the compiled binary (i.e., all the data required is neatly inside one exe, preferably without consuming extra RAM), that's all.
Adding

esbuild: {
  jsx: "automatic",
  jsxDev: false,
},

in vite.config.ts fixed it for me.
https://stackoverflow.com/a/31635761/4380510
https://stackoverflow.com/a/70431036/4380510
Do this and you will be able to do it on all versions.
The key is to set the title as animated:
title = ax.set_title("First title", animated = True)
Then when updating the plot:
title.set_text("Second title")
ax.draw_artist(title)
canvas.blit(ax.bbox)
canvas.flush_events()
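For a self-contained picture of the whole cycle, here is a minimal sketch assuming an interactive backend (e.g., TkAgg or QtAgg); it blits fig.bbox rather than ax.bbox because the title sits above the axes box, and it caches and restores the background so the old title is erased:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
title = ax.set_title("First title", animated=True)  # excluded from normal draws

plt.show(block=False)
fig.canvas.draw()                                   # one full draw...
background = fig.canvas.copy_from_bbox(fig.bbox)    # ...then cache the background

fig.canvas.restore_region(background)               # wipe the previous title
title.set_text("Second title")
ax.draw_artist(title)                               # redraw only the title artist
fig.canvas.blit(fig.bbox)                           # push that region to screen
fig.canvas.flush_events()
```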
There was great research on that in the USSR at around the same time, 1975, when the schema was created.
An attached document is inside.
Прояв means "display".
It is a bit involved.
This answer is not deleted, so it is not even a red flag.
mshajkarami's answer solved my problem. An attribute in my custom view's attrs duplicated one of the existing attributes in the system's values, and the problem was fixed after I renamed it.
Have you found a solution for this?
@chux No, you can't change the type of the question, not even as a moderator.
Thread on Meta: Can an Advice post be changed into a QA post?