@julaine, thanks. My goal was to post a question and get an objective, authoritative answer. "Other" seemed the best category, as it was not troubleshooting, debugging, tooling, best practice, nor advice. I now see it results in a discussion and not something to accept.
[Originally answered here] From the Command Palette, select "Select and Start Debugging". There should be a delete icon next to each contributed launch action, which will remove it from the Run and Debug dropdown list.
I would prefer to go with something like this:
try:
    from enum import StrEnum

    class StringEnum(StrEnum):
        pass
except ImportError:
    from enum import Enum

    class StringEnum(str, Enum):
        pass

class MyEnum(StringEnum):
    choice1 = "choice1"
    choice2 = "choice2"
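With this shim, members of MyEnum behave as plain strings on both old and new Python versions; a quick sanity check (a minimal sketch):

```python
# Works on Python < 3.11 ((str, Enum)) and >= 3.11 (StrEnum) alike
assert MyEnum.choice1 == "choice1"
assert MyEnum("choice2") is MyEnum.choice2
```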
Eventually, I ended up splitting my query. In the first part, I got all the records that satisfy my exclude condition, and in the second part I got the records that satisfy my include conditions.
I can't prove this, but I think Microsoft is trying to parse the <LoadUri> element too early. This "Invalid URI: The hostname could not be parsed." error message is a common exception thrown whenever you try to create a Uri with a bad hostname, which is what happens if you try to parse a value such as "https://{Claim:loadUri}". I feel like the parsing should be done AFTER the {Claim:} tokens are resolved, but I don't know the reasoning behind it.
In short: I just couldn't do it, and I had to redesign these dynamic URLs. I'm using a static URL for the use cases, and I created a reverse proxy that routes to different destinations based on the query string. They look like this now: <LoadUri>https://my-reverse-proxy.com?myClaim={Claim:MyClaim}</LoadUri>
It's covered on Mozilla Developer Network now: https://developer.mozilla.org/en-US/docs/Web/API/File_API/Using_files_from_web_applications#using_hidden_file_input_elements_using_the_click_method
For Autodesk Platform Services (APS) applications created on or after December 10, 2024, a "Legacy endpoint is deprecated" error is returned.
Your 2017 application is "grandfathered." Autodesk is giving existing apps a grace period to migrate, but won't allow new applications to revert to it. There is no sunset date for the grace period to end.
Follow the advice at the top of the documentation you linked, and use
"If I'm on the wrong path". My advice is trying to create a dynamic form is the wrong path. You're chasing the pot of gold at the end of a rainbow. It looks simple when you just have to deal with text fields, but as the number of field types grow it becomes horribly complex. Why do you think the Internet is not awash with Dynamic Form libraries?
Hey Jan Nielsen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Thanks to @DavidMaze's comment 🙏 ...
You can set an environment variable PIP_EXTRA_INDEX_URL with the same effect, and either pass it in via a Dockerfile ARG (be careful to use a multi-stage build to avoid leaking the token, though) or a build secret. See, for example, Passing PIP_EXTRA_INDEX_URL to docker build.
... to pass the argument, I added the following in .gitlab-ci.yml, in the dockerize-env script:
export PIP_EXTRA_INDEX_URL="https://gitlab-ci-token:[email protected]/api/v4/groups/<GROUP_ID>/-/packages/pypi/simple"
and in DockerFile.project_env:
# Add a variable accessible during image build
# so that pip can access internal pypi repo
ARG PIP_EXTRA_INDEX_URL
After that, my_package_on_gitlab_private_pypi can be installed from the internal PyPI 🎉
Try out this package; it works to run Lumi H5P locally.
@Hamid You're just going to ignore all the questions posted above, then?
Hey NoobNoob, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
In my case, these drivers are working; the latest drivers are not. Thanks!
Hey Janning Vygen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Running pod update fixed it for me
How to change "max connection pool size" permanently?
<asp:GridView ID="GridView1" runat="server"
    AllowPaging="true"
    PageSize="50"
    PagerStyle-HorizontalAlign="Center"
    OnPageIndexChanging="GridView1_PageIndexChanging">
</asp:GridView>
@use "sass:math";
@use "sass:meta";

/**
 * Removes the unit from a value if it has one.
 *
 * @param {Number|String} $value - The value to strip the unit from.
 * @return {Number} - The value without its unit.
 */
@function strip-unit($value) {
  @if meta.type-of($value) == "number" and not math.is-unitless($value) {
    // Dividing by one unit of the value ($value * 0 + 1) drops the unit
    @return math.div($value, ($value * 0 + 1));
  } @else {
    @return $value;
  }
}
Please elaborate on the way you use FastAPI's workers and your implementation of the API as a whole. If uploading a CSV, preprocessing, and all the math are done in one endpoint, you won't really gain any performance by increasing the number of FastAPI workers. Your work is mostly CPU-bound, so it makes sense to separate the networking and the math into different entities.

The way I usually do this in my projects: have an API/Producer (FastAPI) which processes incoming requests; if a request is nested, it splits it into different jobs and passes them to workers to process. Workers are replicated and run in parallel, each one processing its own part of the workload. After completing the work, results are passed back to the Producer for a response.

More technically speaking, your Producer is FastAPI. For workers I usually go with Celery, which is a popular and solid choice, but there are many others. You'll also need a way for the Producer and Workers to communicate - Redis is a good choice. Adding to that, I'd suggest ditching Pandas and going with Polars; in my experience the performance gain is really noticeable.

So your workflow will go like this: upload a CSV -> split it into chunks -> assign a separate task to process each chunk and execute them in parallel -> gather results and return a response.
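A minimal sketch of that split (the broker URL, chunk size, and the `process_chunk` math are illustrative assumptions, not a drop-in implementation):

```python
# tasks.py - worker side (run with: celery -A tasks worker)
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def process_chunk(rows: list[dict]) -> float:
    # Placeholder for the CPU-bound math on one chunk of the CSV
    return sum(r["value"] for r in rows)
```

```python
# api.py - producer side (run with: uvicorn api:app)
from celery import group
from fastapi import FastAPI, UploadFile
import polars as pl

from tasks import process_chunk

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile):
    df = pl.read_csv(file.file)
    # Split into chunks and fan out one Celery task per chunk
    chunks = [c.to_dicts() for c in df.iter_slices(n_rows=10_000)]
    job = group(process_chunk.s(chunk) for chunk in chunks)()
    # Blocking .get() shown for brevity; in production, return a job id and poll
    results = job.get(timeout=60)
    return {"partials": results}
```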
As per the 'Databases and Doctrine ORM' page (https://symfony.com/doc/current/doctrine.html), I edited the .env file and used dummy values for the database URL and password; this satisfied Symfony.
The files are in the range of a couple of megabytes each, since they are binary glTF files (3D models and scenes). I have tried that in debug mode, but I can see the data plainly in memory, so I'm not sure whether that's because it's compiled in debug mode or whether that's how it's going to be. I guess this is not such a good idea then, since it won't be as efficient as just accessing external files on demand.
You can use the inheritViewBox prop, which is exactly meant for that purpose. See https://mui.com/material-ui/api/svg-icon/#svg-icon-prop-inheritViewBox
import os
import argparse
import re
import logging
from collections import defaultdict
from pathlib import Path
import datetime

# Logger will be configured in main() after parsing arguments
logger = logging.getLogger(__name__)


def find_filename_duplicates(directory):
    """Find files with duplicate names in the given directory and its subdirectories"""
    files_by_name = defaultdict(list)

    # Walk through the directory and collect files by name
    for root, _, filenames in os.walk(directory):
        for filename in filenames:
            filepath = os.path.join(root, filename)
            files_by_name[filename].append(filepath)

    # Filter to only include names that have duplicates
    duplicates = {name: paths for name, paths in files_by_name.items() if len(paths) > 1}
    return duplicates


def extract_number_pattern(filename):
    """Extract the number from a filename with pattern V1_A_B_C_D_E_F__.*
    where:
    - A is date in YYYYMMDD format
    - B is 2 digits
    - C is 4-6 digits
    - D is 4 digits
    - E is 3 digits
    - F is 2 digits (the part we'll increment)
    """
    # Pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__.*
    pattern = re.compile(r'(V1_\d{8}_\d{2}_\d{4,6}_\d{4}_\d{3}_)(\d{2})(__.*)')
    match = pattern.match(filename)
    if match:
        prefix = match.group(1)       # Everything before the F part
        number = int(match.group(2))  # The F part (2 digits)
        suffix = match.group(3)       # Everything after the F part
        return prefix, number, suffix
    return None


def increment_number_in_filename(filepath):
    """Increment the F part (2-digit number) in the filename pattern"""
    path = Path(filepath)
    filename = path.name
    parts = extract_number_pattern(filename)
    if parts:
        prefix, number, suffix = parts
        new_number = number + 1
        # Ensure the number stays as 2 digits with leading zero if needed
        new_filename = f"{prefix}{new_number:02d}{suffix}"
        new_filepath = path.with_name(new_filename)
        return new_filepath
    # If pattern doesn't match, return the original path with a warning
    logger.warning(f"File {filepath} doesn't match the expected pattern. Skipping.")
    return filepath


def format_size(size):
    """Convert file size to human-readable format"""
    for unit in ['B', 'KB', 'MB', 'GB']:
        if size < 1024.0:
            return f"{size:.2f} {unit}"
        size /= 1024.0
    return f"{size:.2f} TB"


def rename_duplicates(directory, dry_run=False):
    """Find and rename duplicate files, return number of renames performed"""
    duplicates = find_filename_duplicates(directory)
    if not duplicates:
        logger.info("No files with duplicate names found.")
        return 0

    total_duplicates = sum(len(paths) - 1 for paths in duplicates.values())
    logger.info(f"Found {len(duplicates)} duplicate filenames ({total_duplicates} duplicates in total)")

    rename_count = 0
    for filename, paths in duplicates.items():
        logger.info(f"\nDuplicate filename: '{filename}'")

        # Check if the filename matches our expected pattern
        if not extract_number_pattern(filename):
            logger.warning(f"  Filename '{filename}' doesn't match the expected pattern V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*")
            logger.warning("  Skipping this group of files")
            continue

        # Keep the first file as is, rename the rest
        for i, filepath in enumerate(paths):
            rel_path = os.path.relpath(filepath, directory)
            if i == 0:
                logger.info(f"  Keeping: {rel_path}")
            else:
                new_path = increment_number_in_filename(filepath)
                # Skip if the pattern doesn't match
                if new_path == filepath:
                    continue
                # Make sure the new name doesn't already exist
                while os.path.exists(new_path) and not dry_run:
                    new_path = increment_number_in_filename(new_path)
                if dry_run:
                    logger.info(f"  [DRY RUN] Would rename: {rel_path} → {os.path.basename(new_path)}")
                else:
                    logger.info(f"  Renaming: {rel_path} → {os.path.basename(new_path)}")
                    os.rename(filepath, new_path)
                rename_count += 1
    return rename_count


def main():
    parser = argparse.ArgumentParser(
        description='Find and rename files with duplicate names using pattern increment',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Example filename pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*
where:
- YYYYMMDD is a date
- BB is 2 digits
- CCCCCC is 4-6 digits
- DDDD is 4 digits
- EEE is 3 digits
- FF is 2 digits (this is the part that will be incremented)
"""
    )
    parser.add_argument('directory', help='Directory to scan for filename duplicates')
    parser.add_argument('--max-iterations', type=int, default=10,
                        help='Maximum number of renaming iterations to perform')
    parser.add_argument('--log-dir', default=os.getcwd(),
                        help='Directory where log file should be saved (default: current directory)')
    parser.add_argument('--log-name',
                        help='Custom log filename (default: rename_log_YYYYMMDD_HHMMSS.txt)')
    parser.add_argument('--dry-run', action='store_true',
                        help='Perform a dry run without actually renaming files')
    args = parser.parse_args()

    # Configure logging based on arguments
    log_dir = os.path.abspath(args.log_dir)
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    if args.log_name:
        log_filename = os.path.join(log_dir, args.log_name)
    else:
        timestamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
        log_filename = os.path.join(log_dir, f"rename_log_{timestamp}.txt")

    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler(log_filename),
            logging.StreamHandler()
        ]
    )

    directory = os.path.abspath(args.directory)
    if not os.path.isdir(directory):
        logger.error(f"Error: Directory '{directory}' does not exist.")
        return

    logger.info(f"Starting duplicate filename resolution in: {directory}")
    logger.info(f"Log file: {os.path.abspath(log_filename)}")
    if args.dry_run:
        logger.info("DRY RUN MODE: No files will actually be renamed")

    iteration = 1
    total_renames = 0
    while iteration <= args.max_iterations:
        logger.info(f"\n=== Iteration {iteration} ===")
        renames = rename_duplicates(directory, args.dry_run)
        total_renames += renames
        if renames == 0:
            logger.info(f"\nProcess completed successfully after {iteration} iterations.")
            logger.info(f"Total files {'that would be ' if args.dry_run else ''}renamed: {total_renames}")
            break
        logger.info(f"{'Would rename' if args.dry_run else 'Renamed'} {renames} files in this iteration. Checking for remaining duplicates...")
        if args.dry_run:
            # In dry run mode, we only do one iteration since we're not actually changing anything
            logger.info("\nDry run complete. Exiting after one iteration.")
            break
        iteration += 1
    else:
        logger.warning(f"\nReached maximum iterations ({args.max_iterations}). Process may not be complete.")
        logger.info(f"Total files renamed so far: {total_renames}")

    logger.info(f"\nDetailed log saved to: {os.path.abspath(log_filename)}")


if __name__ == "__main__":
    main()
The .md file:
# Cascade Chat Conversation
Note: _This is purely the output of the chat conversation and does not contain any raw data, codebase snippets, etc. used to generate the output._
### User Input
file duplicates viewer and changer
### Planner Response
I'll help you create a Python script to find and manage duplicate files in a directory. This tool will identify files with identical content and provide options to delete or keep one copy.
Let me create a Python script with a simple command-line interface:
*Edited relevant file*
### Planner Response
I've created a Python script called duplicate_finder.py that helps you find and manage duplicate files. Here's what it does:
### Features:
- Scans a directory and its subdirectories for duplicate files
- Identifies duplicates by both file size and content hash
- Shows detailed information about found duplicates
- Provides an option to automatically delete duplicates (keeping one copy)
- Displays the total space that can be recovered
### How to use:
1. **Basic usage** (just find duplicates):
```bash
python duplicate_finder.py "path/to/your/directory"
```
2. **Delete duplicates** (keeping one copy in each group):
```bash
python duplicate_finder.py "path/to/your/directory" --delete
```
Scanning for duplicate files in: C:\path\to\your\directory
Found 3 groups of duplicate files (5 duplicates in total):
Group 1 - 2.50 MB each:
1. folder1\image1.jpg
2. folder2\image_copy.jpg
3. backup\image1(1).jpg
Group 2 - 150.00 KB each:
1. docs\report.pdf
2. archive\report.pdf
Group 3 - 5.00 MB each:
1. videos\clip.mp4
2. temp\clip.mp4
Total space wasted by duplicates: 12.80 MB
With --delete, it keeps the first file in each group and removes the duplicates. Would you like me to run it on a specific directory for you?
*Viewed duplicate_finder.py*
### User Input
file name duplicates viewer and changer
### Planner Response
I see you're looking for a tool to find and manage files with duplicate names rather than duplicate content. Let me create a script that focuses on filename duplicates instead.
*Edited relevant file*
I've created a new script called filename_duplicate_finder.py that specifically finds and manages files with duplicate names (rather than duplicate content).
python filename_duplicate_finder.py "path/to/your/directory"
python filename_duplicate_finder.py "path/to/your/directory" --rename
python filename_duplicate_finder.py "path/to/your/directory" --rename --interactive
Scanning for files with duplicate names in: C:\path\to\your\directory
Found 2 duplicate filenames (3 duplicates in total):
Duplicate #1: 'report.pdf'
1. docs\report.pdf (150.00 KB)
2. archive\report.pdf (150.00 KB)
3. old\report.pdf (120.00 KB)
Duplicate #2: 'config.json'
1. settings\config.json (2.50 KB)
2. backup\config.json (2.30 KB)
With --rename: automatically renames all duplicates except the first one in each group (e.g., report.pdf → report_1.pdf, report_2.pdf)
With --rename --interactive: gives you options for each group of duplicates:
Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input
The filename must be renamed, but not at the end; the file will have a pattern like .dd__text. - the dd must be incremented by 1. Also, the filename should be checked across a directory having many folders. After the renaming, the directory should be checked again for duplicates. This process should repeat until there are no filename duplicates found in the directory. Also provide a detailed step-by-step log of the duplicate filenames found and the renamed filenames.
### Planner Response
I understand you need a more specific filename renaming pattern. You want to rename files matching .*dd__text.*, where dd is a number that should be incremented. Let me create a new script that implements this logic:
*Edited relevant file*
I've created a new script called pattern_filename_renamer.py that implements your specific requirements for finding and renaming duplicate filenames with a pattern-based approach.
- Matches filenames with the pattern .*dd__text.*, where dd is a two-digit number
- Increments the number (dd) by 1 when renaming files

python pattern_filename_renamer.py "path/to/your/directory"
You can also specify a maximum number of iterations:
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 20
Pattern Recognition: The script looks for filenames with the pattern .*dd__text.* where:
- .* is any prefix
- dd is a two-digit number (like 01, 02, etc.)
- __text is any text with double underscores
- .* is any suffix, including the file extension

Renaming Logic:
Iteration Process:
Logging:
- Creates a timestamped log file (e.g., rename_log_20251104_202500.txt)

2025-11-04 20:25:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 20:25:00 - INFO - Log file: C:\path\to\rename_log_20251104_202500.txt
2025-11-04 20:25:00 - INFO - === Iteration 1 ===
2025-11-04 20:25:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 20:25:00 - INFO - Duplicate filename: 'report01__final.pdf'
2025-11-04 20:25:00 - INFO - Keeping: docs\report01__final.pdf
2025-11-04 20:25:00 - INFO - Renaming: archive\report01__final.pdf → report02__final.pdf
2025-11-04 20:25:00 - INFO - Duplicate filename: 'data05__processed.csv'
2025-11-04 20:25:00 - INFO - Keeping: data\data05__processed.csv
2025-11-04 20:25:00 - INFO - Renaming: backup\data05__processed.csv → data06__processed.csv
2025-11-04 20:25:00 - INFO - Renamed 2 files in this iteration. Checking for remaining duplicates...
2025-11-04 20:25:01 - INFO - === Iteration 2 ===
2025-11-04 20:25:01 - INFO - No files with duplicate names found.
2025-11-04 20:25:01 - INFO - Process completed successfully after 2 iterations.
2025-11-04 20:25:01 - INFO - Total files renamed: 2
2025-11-04 20:25:01 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_202500.txt
Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input
keep the directory and log file location as variable inputs
### Planner Response
I'll update the script to make both the directory and log file location configurable as command-line arguments. This will give you more flexibility when using the tool.
*Edited relevant file*
I've updated the script to make both the directory and log file location configurable as command-line arguments. Here are the changes:
--log-dir: Specifies the directory where the log file should be saved
--log-name: Allows you to specify a custom log filename
- Default: rename_log_YYYYMMDD_HHMMSS.txt with the current timestamp

python pattern_filename_renamer.py "path/to/your/directory"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs"
python pattern_filename_renamer.py "path/to/your/directory" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 15 --log-dir "path/to/logs" --log-name "my_custom_log.txt"
These changes make the script more flexible and allow you to specify exactly where you want the logs to be stored, which is especially useful for automation or when running the script on multiple directories.
### User Input
The exact filename regex would be V1_A_B_C_D_E_F__.*, where A would be a date in YYYYMMDD format, B would be digits of length 2, C would be digits of length between 4 and 6, D would be digits of length 4, E would be digits of length 3, and this part F would be a 2-digit number; the rest .* would be a combination of alphabets, underscores, and digits. I want you to change only the 2-digit number F, by incrementing it by 1. Make the changes accordingly to match this requirement.
### Planner Response
I'll update the script to match your specific filename pattern and ensure it only increments the 2-digit number F in the pattern V1_A_B_C_D_E_F__.* where:
*Edited relevant file*
Let me fix that and try again:
*Edited relevant file*
I apologize for the errors. I'm having trouble with the replace_file_content tool when trying to update the script to match your specific filename pattern. The error indicates there's an issue with how I'm formatting the JSON for the replacement chunks.
The specific error is about invalid character 'd' in string escape code, which suggests there might be an issue with how backslashes are being handled in the regular expression patterns.
How would you like to proceed? I could:
What would you prefer?
### User Input
1
### Planner Response
I'll create a new file from scratch with the updated pattern requirements to match your specific filename format.
*Edited relevant file*
I've created a new script called filename_pattern_renamer.py that specifically handles your filename pattern requirements. Here's what it does:
- Specific Pattern Matching: Identifies files with the pattern V1_A_B_C_D_E_F__.* as described above
- Targeted Renaming: Only increments the 2-digit F part of the filename
- Duplicate Resolution: Iteratively scans and renames files until no duplicates remain
- Detailed Logging: Creates a comprehensive log of all actions taken
- Configurable Options:
python filename_pattern_renamer.py "path/to/your/directory"
python filename_pattern_renamer.py "path/to/your/directory" --dry-run
python filename_pattern_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_rename_log.txt"
2025-11-04 22:42:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 22:42:00 - INFO - Log file: C:\path\to\rename_log_20251104_224200.txt
2025-11-04 22:42:00 - INFO - === Iteration 1 ===
2025-11-04 22:42:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251104_01_12345_6789_123_01__data.txt'
2025-11-04 22:42:00 - INFO - Keeping: folder1\V1_20251104_01_12345_6789_123_01__data.txt
2025-11-04 22:42:00 - INFO - Renaming: folder2\V1_20251104_01_12345_6789_123_01__data.txt → V1_20251104_01_12345_6789_123_02__data.txt
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_05__report.csv'
2025-11-04 22:42:00 - INFO - Keeping: reports\V1_20251103_02_1234_5678_321_05__report.csv
2025-11-04 22:42:00 - INFO - Renaming: archive\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO - Renaming: backup\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO - Renamed 3 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:01 - INFO - === Iteration 2 ===
2025-11-04 22:42:01 - INFO - Found 1 duplicate filenames (1 duplicates in total)
2025-11-04 22:42:01 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_06__report.csv'
2025-11-04 22:42:01 - INFO - Keeping: archive\V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:01 - INFO - Renaming: backup\V1_20251103_02_1234_5678_321_06__report.csv → V1_20251103_02_1234_5678_321_07__report.csv
2025-11-04 22:42:01 - INFO - Renamed 1 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:02 - INFO - === Iteration 3 ===
2025-11-04 22:42:02 - INFO - No files with duplicate names found.
2025-11-04 22:42:02 - INFO - Process completed successfully after 3 iterations.
2025-11-04 22:42:02 - INFO - Total files renamed: 4
2025-11-04 22:42:02 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_224200.txt
The script now precisely matches your requirements for finding and renaming duplicate files with the specific pattern format you described. Would you like me to make any adjustments to this script?
Ok, after some research I found an old workaround in my project related to KeyboardAutoScroll on iOS, and it turns out this is what broke the ScrollView in .NET 9.
Thanks for your responses!
These days, you can use npm audit with flags to filter.
npm audit --audit-level=(low|moderate|high|critical)
This comes from their docs page: https://docs.npmjs.com/cli/v7/commands/npm-audit
I have the same problem. No input is accepted, and the module repeats the sequence described, ending with "SMS done". Thanks for any help.
I couldn't just downgrade the React Native version in the same project; I had to create a new project from scratch with React Native version 0.72.4, reimplement the entire project, and search for libraries compatible with that older version. It was very painful, but it was the only thing I could do to solve my problem.
@KJ excuse me, I meant JPEG.
Thanks! I’ve added a minimal sample PDF + the source PNG here: (link).
I’m not relying on viewer defaults—I compute the page size as points = pixels * 72 / targetDPI and draw the CGImage 1:1 (no interpolation). Despite that, thin glyphs show “holes” at typical zoom in Preview/PDFKit.
Do you know if iOS/macOS forces a particular image filter (e.g., JPEG re-encode) or disables smoothing for 1-bit/grayscale images inside PDFs? Any flag to guarantee lossless Flate (8-bit) or CCITT G4 (1-bit)?
Agree that PDF has no intrinsic DPI—only user space. That’s why I scale the page so one image pixel maps cleanly at a chosen effective DPI. The issue persists even with 8-bit grayscale (not just 1-bit).
Are there known viewer behaviors in Preview/PDFKit that render bilevel/high-contrast rasters without anti-aliasing? If so, would your recommendation be to (a) embed as 8-bit grayscale at 2× resolution, or (b) pre-apply a tiny morphological close/blur to avoid “holes”? Any other reliable workaround?
I hope it's useful
I recommend the PlutoPrint library on GitHub; it's easy to use, and the maintainers answer questions.
Update: "none" does not work for me on an Apple Silicon Mac. It has updated itself three times to the newest version. It happened when I restarted my computer (not sure it happened every time I restarted, though). It probably works until I restart the computer; then it updates anyway.
/**
* null, undefined, '', [] => true
* 0, '0', true, false, NaN => false
*/
const isEmpty = (v) => ![v].join('');
return ( new Set(arr.filter(Boolean)).size !== arr.filter(Boolean).length ); // true/false
This is expected behaviour. In Outlook, the Company field is just plain text, but in Dynamics 365, the Company Name field on a Contact is a lookup to an Account. Because of that, it will only sync if Dynamics can find an Account with the exact same name, or if the system is configured to automatically create the Account. Check your Contact Synchronization Field Mappings and make sure “Company” maps to ParentCustomerId, and enable “Create parent account for new contacts” if you want Dynamics to auto-create the Account when one doesn’t already exist.
For frequent real time communication like this what actual protocol are you planning to use on top of TCP/IP? Websocket maybe? Or something else?
Perhaps you can create the refs in your custom hook.
Before:
const menuRef = useRef()
const menuButtonRef = useRef()
const toggleMenu = useToggleMenu(menuRef, menuButtonRef);
After:
const { toggleMenu, menuRef, menuButtonRef } = useToggleMenu();
I found the solution: the issue was not on our side; it occurred on the backend. I simply updated to the latest Supabase image, and somehow it suddenly worked!
I hope you get the correct answer; I don't know this one.
For xUnit install these 3 packages:
1. xunit
2. xunit.runner.visualstudio
3. Microsoft.NET.Test.Sdk
For new Android emulator versions, you should apply this configuration to bring back the soft keyboard: disable `Use stylus to write in text fields`, and the keyboard will appear again.
I was hoping for a more concise solution, maybe using vector operations, but at least it works with loops. Just two remarks: the rationale for having charfillvalue and fillvalue is a little bit disappointing, and in Fortran the shift argument can be a vector, as in the following example:
`a = eoshift(reshape([(i,i=1,9)], [3, 3]), shift=[1, 2, 1], boundary=-5, dim=2)`
transforms
1 4 7 4 7 -5
2 5 8 -> 8 -5 -5
3 6 9 6 9 -5
ALTER TABLE EMPLOYEE MODIFY COLUMN id INTEGER; -- downgrade if it was BIGINT
Use `@reference "tailwindcss";` at the component level.
You can check the docs here.
I followed the instructions at this URL and it works:
https://mlocati.github.io/articles/php-windows-imagick.html
I am trying to refactor this class: https://github.com/grastvei007/tagsystem/blob/develop/tag.h
I am now in the process of changing the QVariant on line 95 to std::variant, since QVariant seems not so good when using lists; it returns a copy of the list with a call to .toList().
Since the enums I have for the different data types are used across the applications, it makes sense to be able to forward this into the std::variant, and it probably makes more sense for string than for int and double.
The whole class should probably have been a template, constructing a new Tag like this: Tag<Double> tag;
| header 1 | header 2 |
| -------- | -------- |
| cell 1   | cell 2   |
| cell 3   | cell 4   |
@JesusFreke
How does the Android runtime state machine work?
I found it! We have to add this argument to dspy.LM(): reasoning_effort="disable"
@Drew - At first: I really want to optimize the number of renders, not commits. I have supplied the state mutation function in the post, but again, this is more of an academic question. Redux allows you to create selectors to re-render only on a change of any specified leaf of the Redux state tree (using createSelector + useSelector). I am trying to accomplish something similar within my own library, without depending on Redux. The problem is not in my code; the problem is in how the context works and that it does not support such behaviour by default. I want to re-render my component only when a specific item is added to or removed from the list stored in useState. I will create a simple MRE today, give me a moment.
Depends on your stack in API, what are you using? And how are you using the database?
Put this formula in E3:
=IF(A3=2,$B$2,IF(A3=A2+1,B2,C2))
Try sudo apt-get install libc6-dev-i386
This looks like another instance of an infamous SAM bug where intrinsic functions do not work for some properties, but only in AWS::Serverless resources. A known workaround is to include the AWS::LanguageExtensions transform before Serverless, like this:
Transform:
  - AWS::LanguageExtensions
  - AWS::Serverless-2016-10-31
This may be late, but you might want to do this, since the accepted answer didn't work for me.
This is how you should change your where input:
AND: [
  yourWhereInput,
  {
    OR: [
      { email: null },
      { email: { isSet: false } }
    ]
  }
]
@VLAZ you are totally right. My mistake is that I didn't suspect the consequences of (undefined === undefined) being true, hence dangerous. I have to review my old code, where it may have happened silently.
I'm sure we have a lot about this topic on Stack Overflow, but I like this site for various Spring questions and guides: https://www.baeldung.com/exception-handling-for-rest-with-spring
In your pom.xml, looks like you are using
<spring-ai.version>1.0.0-SNAPSHOT</spring-ai.version>
GPT-5 support was added in 1.1.0-M1 (or later). The name gpt-5 might not be recognized by Spring AI in your version, causing fallback to the default gpt-4o-mini. Consider upgrading your version.
Yes, you can connect on-premises storage to cloud servers, but how depends on the type of storage.
A NAS (Network Attached Storage) can share its files with cloud servers over a secure VPN or a dedicated network connection. Cloud providers also offer their own managed file services that work like a NAS.
A traditional SAN (Storage Area Network) cannot plug directly into cloud servers. Instead, you have to use the cloud's own block storage services, which serve the same purpose. Always consider network latency and data transfer costs.
I think I should have clarified what I have right now: I can read binary data from files on demand to put into CPU memory, then upload that to the GPU and discard the data afterwards to free CPU memory. I was only wondering if I could do that somehow without needing extra files next to the compiled binary (i.e., all the data required is neatly inside one exe, preferably without consuming extra RAM), that's all.
Adding
esbuild: {
  jsx: "automatic",
  jsxDev: false,
},
in vite.config.ts fixed it for me.
https://stackoverflow.com/a/31635761/4380510
https://stackoverflow.com/a/70431036/4380510
Do this and you will be able to do it on all versions.
The key is to set the title as animated:
title = ax.set_title("First title", animated = True)
Then when updating the plot:
title.set_text("Second title")
ax.draw_artist(title)
canvas.blit(ax.bbox)
canvas.flush_events()
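For context, a self-contained sketch of the full flow might look like this (a minimal sketch assuming an interactive backend; it blits the full figure bbox, since the title sits above the axes bbox):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0])
# Mark the title as animated so the normal draw skips it
title = ax.set_title("First title", animated=True)

plt.show(block=False)
plt.pause(0.1)  # make sure the initial draw has happened

title.set_text("Second title")
ax.draw_artist(title)          # redraw only the title artist
fig.canvas.blit(fig.bbox)      # push the updated region to the screen
fig.canvas.flush_events()
```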
There was great research on that in the USSR at around the same time, 1975, when the schema was created. An attached document is inside. "Прояв" means "display" (manifestation). It is a bit involved. This answer is not deleted, so it is not even a red flag.
mshajkarami's answer solved my problem. An attribute in my custom view's 'attr' duplicated one of the existing attributes in the system's 'values', and the problem was fixed after I renamed it.
Have you found a solution for this?
@chux No, you can't change the type of the question, not even as a moderator.
Thread on Meta: Can an Advice post be changed into a QA post?
Thank you all! I solved it with this:
=IMPORTRANGE("link to sheet1";"A1:A"&CONTA.VALORI(C:C))
Thank you for the detailed explanation, that makes perfect sense.
Since I don’t currently have an admin account (and therefore can’t create an App Registration or grant Directory.Read.All consent), I understand that the client secret flow won’t work in my case.
I’ll try switching to the delegated flow with an interactive browser login using my normal user account, so the app can act under my own permissions.
Just to confirm: with that approach, I’ll only be able to read the groups and users that my account has access to, right?
Once we get an admin account later, I can switch back to the app-only (client credentials) approach with full directory scope.
Thanks again for pointing out /groups?$expand=transitiveMembers, that’s very helpful.
Also, just to know: is there any other workaround to read the groups from Azure Directory/Entra using C#?
I was on Gradle JDK 17.0.9; I changed it to JDK 21.0.1, and after that it worked again.
(Chat gpt help: https://chatgpt.com/share/690b1ae2-263c-8006-b2fd-7a6e2997c816)
Is that viewer open source? I'd be happy to take a look at it.
Use this:
Attachment::fromData(fn() => $pdf->output(), 'doc.pdf')->withMime('application/pdf');
Can you post this as a question? This new advice feature is counterproductive: even fewer questions will be asked, defeating the voting / reputation system.
I have had a similar issue with other activities. Do you not have existing projects containing this activity which you could open, to check whether this is an issue with the packages in the project or whether your entire Studio installation might be corrupted?
--color-red-500 is not defined anywhere in your :root variables?
"the naturally expected behavior always seems to be what happens"
For some values of "always" and/or "naturally expected behavior". Exhibit A. Running on 32 bit platforms lately much?
I should have been clearer about what I want to check for, we are using trivy and audit in our pipelines but it's the server stuff that I need to check manually (old school infra). A subscription service posting a report to Slack every day or week would be perfect, so newreleases.io looks promising!
It would have been so nice if there were a standard way to publish release info, EOLs, etc. Everyone seems to do it differently.
Btw, if I were a hacker, roave/security-advisories would be the first package I would try to publish backdoors into. 🙂
composer require intervention/image:^2
Run this to install the pure PHP package without the Laravel wrapper, which I think is unnecessary.
This seems to be a macOS 26.0 specific issue and it is fixed in macOS 26.1.
std::variant "encodes" the type it currently contains already
If you tell me why you want .pyd files (for speed? IP protection?), I can help with the build setup, because the best path differs depending on your true purpose.
Assuming that these are in different spreadsheets, it would be better to import the entire range, A:C, and then use QUERY or some other method on that data to get what you want. People have enough issues with the connection of one IMPORTRANGE, which is what you would need to use if you wanted to count the values in column C to determine the range size to request for column A.
await in Python is just a context-switching point. It is used in asynchronous functions to pause execution until the awaited task completes, allowing other tasks to run in the meantime.
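A minimal sketch of that context switch (function names are illustrative):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # 'await' pauses fetch() here and lets the event loop run other tasks
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both coroutines run concurrently; total time is ~1s, not ~2s
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.0))
    print(results)

asyncio.run(main())
```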
Yes, Windows 11 prioritizes internet connectivity, so automatically switching from your non-internet Hotspot A to the "auto-connect" internet-enabled Hotspot B is expected behavior. This is often due to the Network Connectivity Status Indicator (NCSI) detecting no internet on A.
To diagnose, check the Event Viewer under Microsoft-Windows-WLAN-AutoConfig/Operational for connection/disconnection events.
To prevent the switch, you can uncheck "Connect automatically" for Hotspot B in the Windows Wi-Fi settings, or try setting Hotspot A as a "Preferred" connection using the netsh wlan command.
Allan Walker, thanks for your answer. I have tried all of this except the billing one. Could it take 9 days, like my status?
First kill the server, then do this:
adb kill-server
adb start-server
adb devices
adb tcpip 5555
adb shell ip addr show wlan0
It will give you an IP address like 192.XXX.XX.XXX
adb connect 192.168.18.140:5555 (replace with your address)
and boom!!!
I faced the same 16KB page size issue; it turned out my emulator image and Gradle dependency versions weren't synced. After cleaning the project and re-syncing, it worked.
This is a common issue with the Google Play Integrity API setup; sometimes it gets stuck at "Integration started" for days, especially when linking new apps or Firebase projects.
Here are a few things to check and try:
1. Verify the Google Cloud project link
   - Make sure your app is correctly linked to the same Google Cloud project where the Play Integrity API is enabled.
   - Go to Play Console → Setup → App integrity → Play Integrity API and confirm the correct project is selected.
2. Enable billing on the Cloud project
   - Even though the Integrity API has a free tier, Google requires billing enabled to activate the API in some regions or accounts.
   - Go to Google Cloud Console → Billing → Link billing account.
3. Recheck API enablement
   - Visit Google Cloud Console → APIs & Services → Enabled APIs & Services.
   - Confirm Play Integrity API is listed as Enabled.
4. Firebase App Check timing
   - After setup, the activation process can take anywhere from a few hours to several days.
   - If it's stuck for more than a week, try unlinking and relinking the API in Play Console.
5. Contact Google support via the Play Console form
   - Instead of general tickets, use the Play Console → Help → Contact support form. It routes directly to the Integrity API team.
   - Mention your App ID, Project Number, and "Integration started" status in the message.
The error Ad failed to load: 3 means no ad fill, not a coding issue. The adaptive test ad unit /21775744923/example/adaptive-banner often doesn’t serve ads. To test adaptive banners, use Google’s universal AdMob test ID (ca-app-pub-3940256099942544/9214589741) or create your own ad unit in Google Ad Manager with active line items. Also ensure you calculate adaptive size dynamically using AdSize.getCurrentOrientationAnchoredAdaptiveBannerAdSize() and set your test device ID before loading the ad.
This error usually occurs due to a package name conflict: the unrelated mysql package on PyPI can shadow mysql-connector-python. Install the correct package (mysql-connector-python) and then import mysql.connector; after that, the error AttributeError: module 'mysql.connector' has no attribute 'CMySQLConnection' will be fixed.
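As a quick check (a sketch; assumes the official mysql-connector-python package is installed), you can confirm which module is actually being imported:

```python
import mysql.connector

# Should point into the mysql-connector-python install,
# not an unrelated 'mysql' package
print(mysql.connector.__file__)
print(mysql.connector.__version__)
```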
Neither Int nor Double are types. What task are you solving?
My cPanel doesn't even have this file: /etc/dovecot/dovecot-sql.conf.ext
And I am getting these errors, now twice in one month. Last time I had to manually recreate the whole server, moving email accounts one by one to the new server... WHY!?
server2 dovecot[9970]: auth-worker(10118): Error: conn unix:auth-worker (pid=9978,uid=97): auth-worker<2>: dict([email protected]): Invalid password in passdb: crypt() failed: Invalid argument
As of Ktor Version 3.2.0, Ktor has a dependency injection plugin. You can see the docs here: https://ktor.io/docs/server-dependency-injection.html?topic=rx
This worked for me:
val iFrameHtmlData = """<iframe width="100%" height="100%" src="$iframeUrl" frameborder="0" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>""".trimIndent()
val baseUrl = "https://$packageName"
loadDataWithBaseURL(baseUrl, iFrameHtmlData, "text/html", "utf-8", null)
You may find the documentation here:
https://developers.google.com/youtube/terms/required-minimum-functionality#embedded-player-api-client-identity
It is code, and you can put them together into one program. I have done it many times. My last game is my best for now. When I get a MacBook again, I can use my free developer account to make real first-class games.
Are you looking for a discussion here, or did you mean to actually create this as a QnA and get an answer solution?
You can use this tool to edit your fonts.
@chux You are wrong.:) s[-n] is not equivalent to s[UINT_MAX].
Try modifying your query like this:
WITH duplicate_data AS (
    SELECT email, dob, country
    FROM customers
    GROUP BY email, dob, country
    HAVING COUNT(*) > 1
)
SELECT cus.*
FROM customers cus
JOIN duplicate_data d
  ON cus.email = d.email
 AND cus.dob = d.dob
 AND cus.country = d.country;
"As in how urgently do I need to go thru my code base looking for these instances?" --> Step 1, save time and enable many (if not just about all) compiler warnings.