<View style={{ width: "100%", height: 1 }} />
That line of code solves the problem. Hopefully they fix it soon.
Does anyone have any news on this?
Zoom confirmed this was an issue with their API and fixed it.
There are two views that contain firewall related data:
In the master database there is the sys.firewall_rules view
In both the master and the actual client databases there is a sys.database_firewall_rules view
The list of firewall rules you see in the Azure Portal corresponds with the sys.firewall_rules view in the master database.
To set the rules you can see in the Portal through T-SQL, you'd use the following in the master database:
sys.sp_set_firewall_rule
I was trying to delete the greyed out rule with sys.sp_delete_database_firewall_rule. That's why it didn't work. I should have used sys.sp_delete_firewall_rule
EXECUTE sys.sp_delete_firewall_rule @name='name of rule to delete'
My mistake was that I was trying to remove a server firewall rule with the sp to delete a database firewall rule.
Compare: https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-firewall-rule-azure-sql-database?view=azuresqldb-current
To: https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-database-firewall-rule-azure-sql-database?view=azuresqldb-current
When I used sys.sp_delete_firewall_rule it all worked perfectly.
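To make the difference concrete, here is a sketch of both levels (rule names are placeholders; the server-level procedures run in the master database):

```sql
-- Server-level rules (what the Azure Portal shows); run in master:
SELECT * FROM sys.firewall_rules;
EXECUTE sys.sp_delete_firewall_rule @name = N'MyServerRule';

-- Database-level rules; note the extra "database" in both names:
SELECT * FROM sys.database_firewall_rules;
EXECUTE sys.sp_delete_database_firewall_rule @name = N'MyDbRule';
```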
The main issue is that, on Linux, Electron creates a .desktop shortcut for the app with the Icon field set to its default, which is just the app name. In the Linux world this means the app icon needs to be present in the /usr/share/icons folder, or in the app root folder, named after the app with a .png or .svg extension (.png works well, and size does not matter).
There are two solutions here:
Copy your app icon to the root folder of your app and rename it to <your-app-name>.png
Edit the /usr/share/applications/<your-app-name>.desktop file and change the Icon= to the full path to your app icon in the installed version of your app. Something like: /opt/<your-app-name>/resources/app/build/icon.png
You can also add these solutions to your package.json linux afterInstall .tpl script file.
You might need to log out and log back in to see the icon changes on the desktop.
There's a great solution using MAKEARRAY:
E.g., to make a 10-row, single-column array containing the string "foo":
=MAKEARRAY(10, 1, LAMBDA(_,__, "foo"))
I contacted AWS support and they mentioned there's indeed a warm up period. This is what the response said:
Although SQS FIFO queue high throughput supports up to 70,000 TPS with batching, internally SQS must scale up its backend hosts for the queue to handle the increased load. The scaling process typically takes low hundreds of milliseconds to complete, during which temporary throttling may occur.
It seems the scaling up is not instantaneous, and there can be throttling even with high throughput enabled, and even if you are not reaching the 70,000 TPS limit (this varies from region to region).
The only real option here is to build a retry mechanism; you can also try reaching out to AWS support so they can keep the queue scaled up.
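Until AWS keeps the queue scaled up for you, a minimal retry sketch with exponential backoff could look like this (send_batch is a placeholder for your actual send call, e.g. boto3's send_message_batch wrapped to raise on throttling):

```python
import time

def send_with_retry(send_batch, entries, max_attempts=5, base_delay=0.1):
    """Retry a batch send with exponential backoff during the scale-up window."""
    for attempt in range(max_attempts):
        try:
            return send_batch(entries)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # back off: 0.1s, 0.2s, 0.4s, ... while SQS scales its backend hosts
            time.sleep(base_delay * (2 ** attempt))
```

In production you would catch only the throttling error class rather than a bare Exception.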
I managed to solve this by setting "browser" to "" in the build section of angular.json:
"outputPath": {
"base": "dist/my-app",
"browser": ""
},
insanely stupid bug
For anyone else who may run into this problem: it looks as if Windows/OpenSSL is not the right combination to generate .p12 files. We solved this issue by creating the .p12 file on a Mac (Keychain Access...), although the mobileprovision file then needs to be regenerated accordingly. I finally got the IPA file, and hopefully it can be successfully executed on an iPhone and then distributed via TestFlight.
The issue seems to be that it's impossible to go from 32-bit protected mode to 16-bit real mode in one go. You must first pass through 16-bit protected mode.
That's because the bitness (the D bit, the default operand size for CS) from the GDT is still cached in the hidden part of the segment registers.
So, you must first switch to a 16-bit GDT.
Question: is the 16-bit GDT the same size as the 32-bit one? Set the D bits, and the bases and limits, within the 64 KB and 1 MB limits.
Then, from 16-bit protected mode, jump to real mode.
Question, to be sure and to double-check: does 16-bit protected mode still exist on modern machines?
My new post on Retrocomputing ended up with good information/answers and was very helpful. Feel free to add anything relevant or complementary.
If you don't want to update your nuxt.config.ts file, another way you can go about it is by starting your app with the command:
HOST=0.0.0.0 PORT=3000 npm run dev
You can retrieve the number of connects and bids for a job posting using the Upwork Marketplace GraphQL API by querying the "job" object with the appropriate fields. The API exposes detailed job-related information, including metrics like how many freelancers have already submitted proposals (bids) and how many Connects are required to apply.
If you want, I can provide the exact code snippet that accomplishes this. Just send me a message, and I'll share it with you.
Although @CertainPerformance's answer shows a solution, it does not answer the OP's root question:
Why is the IDE showing it? The code works, I'm wondering why it is invalid.
jQuery offers several ways to attach a function that will run when the DOM is ready. All of the following syntaxes are equivalent:
$( handler )
$( document ).ready( handler )
$( "document" ).ready( handler )
$( "img" ).ready( handler )
$().ready( handler )
As of jQuery 3.0, only the first syntax is recommended; the other syntaxes still work but are deprecated (mostly because they can lead to incorrect assumptions about the method's behavior).
There is also
$(document).on( "ready", handler )
but it is deprecated as of jQuery 1.8 and was removed in jQuery 3.0. It was removed because:
when multiple functions are added via successive calls to this method, they run when the DOM is ready in the order in which they are added. As of jQuery 3.0, jQuery ensures that an exception occurring in one handler does not prevent subsequently added handlers from executing. ~ jQuery docs
Related question: Why "$().ready(handler)" is not recommended?
Can we use it in Django 4.2.x LTS?
Here you go:
frame: .zero
Sorry for the necro, but I encountered the same problem, and maybe this will be useful for those who are also tired of these yellow markers. In my case, going to "Tools" -> "Options" -> "Language Client", unchecking "Python Language Server (Python version)", and restarting the Qt Creator IDE did the trick. This does not remove any autocompletion for classes, etc.
As suggested, I found the issue and so posting it in an answer.
I did not realize that in the first coloring sequence, the opening/closing escape sequences \[ and \] were duplicated, creating a nesting that was likely the cause.
Specifically, this:
\[\[\033[01;31m\]\]
before \u@\h.
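For reference, here is a fragment with exactly one \[ ... \] pair per color code (colors as in the question) and a quick bash sanity check that there is no nesting:

```shell
# Corrected fragment: each escape code wrapped in exactly one \[ ... \] pair
PS1='\[\033[01;31m\]\u@\h\[\033[00m\]\$ '
# sanity check: count the opening \[ markers (should be one per color code)
printf '%s\n' "$PS1" | grep -o '\\\[' | wc -l
```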
After some debugging, it seems the library does not reject and cannot verify requests without a Content-Type header. After adding the Content-Type, the library correctly spots all kinds of errors.
I had this issue in Chrome but not in other browsers. If your host is making a request to localhost, Chrome will show a popup like the following:

You might be like me and have accidentally clicked Block. Chrome then subsequently pretends to send requests to the local service.
Hello guys, after lots of research I found that the Zero Touch portal is banned in India. All the locker apps and finance applications are now using just normal FRP protection under Google's guidance. Please don't waste your time on Zero Touch (Google detected misuse of Zero Touch in India, so it is banned in India).
If you are not in India and are looking for Zero Touch, you first have to become an Android reseller at the silver or gold tier, and then Google will approve your portal; otherwise you are wasting your time.
If you are using docker desktop then:
docker desktop stop
My OS is macOS Sequoia 15.7.1. After upgrading to Anaconda Navigator 2.7, I received the same error as you. Opening a terminal window and running "conda update conda" resolved my issue.
=0 puts a pointer to a runtime handler (e.g. __cxa_pure_virtual in the Itanium ABI) in the vtbl, and this function raises a runtime error if the function is called at runtime.
<intent-filter>
<action android:name="android.intent.action.MAIN"/>
<category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>
<meta-data android:name="com.applisto.appcloner.originalActivityName" android:value="com.ergames.phonecontactshackersimulatox.MainActivity"/>
</activity>
<receiver android:name="com.applisto.appcloner.classes.FakeCamera$FakeCameraReceiver">
<intent-filter>
<action android:name="com.applisto.appcloner.action.FAKE_CAMERA_SELECT_CAMERA_PICTURE"/>
<action android:name="com.applisto.appcloner.action.FAKE_CAMERA_ROTATE_CLOCKWISE"/>
<action android:name="com.applisto.appcloner.action.FAKE_CAMERA_ROTATE_ANTI_CLOCKWISE"/>
</intent-filter>
</receiver>
<receiver android:name="com.applisto.appcloner.classes.DisableClipboardAccess$ClearClipboardReceiver"/>
<activity android:name="com.applisto.appcloner.classes.FakeCamera$FakeCameraActivity" android:theme="@android:style/Theme.Translucent.NoTitleBar.Fullscreen"/>
</application>
<uses-permission android:name="android.permission.READ_LOGS"/>
<uses-permission android:name="android.permission.FLASHLIGHT"/>
<uses-permission android:name="net.dinglisch.android.tasker.PERMISSION_RUN_TASKS"/>
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
<uses-permission android:name="android.permission.VIBRATE"/>
<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.USE_FINGERPRINT"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.BLUETOOTH"/>
<uses-permission android:name="android.permission.WRITE_SETTINGS"/>
<uses-permission android:name="android.permission.READ_SETTINGS"/>
</manifest>
I checked with a friend of mine and he said the following.
By default, COBOL uses double quotes (") for string delimiters instead of single quotes ('). This is different from other languages and often causes confusion.
The default for CRTSQLCBLI is OPTION(*QUOTESQL).
The user can either change their string delimiter character to " or use CRTSQLCBLI OPTION(*APOSTSQL) to allow the use of single quotes. Most users use OPTION(*APOSTSQL) so their embedded SQL looks the same in all languages.
At least in Safari 26.1, JavaScript successfully parses dates in the YYYY-MM-DD format without a workaround.
The sample code that @duffymo posted doesn't work. After making the edits that @kdsdata suggested,
I get the following error after pressing F5:
Compile error: Expected user-defined type, not project
and the following line of code gets highlighted in yellow:
Public Sub CreateWordList(TableName As String, FieldName As String, ForeignKey As String)
If you're using named table notation:
| PersonID | Hours | Jobcode | Total |
|---|---|---|---|
| 1 | 7 | 1 | 7 |
| 1 | 6 | 2 | 13 |
| 1 | 8 | 3 | 21 |
| 1 | 10 | 1 | 31 |
| 2 | 4 | 1 | 4 |
| 2 | 12 | 2 | 16 |
| 2 | 5 | 3 | 21 |
| 2 | 3 | 5 | 24 |
With the following formula for Total:
=SUMIF(INDEX([PersonID],1):[@PersonID],[@PersonID],INDEX([Hours],1):[@Hours])
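For anyone checking the logic, here is a small Python sketch of the running total per PersonID that the expanding-range SUMIF above computes:

```python
# Running total per PersonID, mirroring the expanding-range SUMIF formula.
def running_totals(rows):
    totals = {}
    out = []
    for person_id, hours in rows:
        totals[person_id] = totals.get(person_id, 0) + hours
        out.append(totals[person_id])
    return out

# Data from the table above: (PersonID, Hours) per row
rows = [(1, 7), (1, 6), (1, 8), (1, 10), (2, 4), (2, 12), (2, 5), (2, 3)]
print(running_totals(rows))  # [7, 13, 21, 31, 4, 16, 21, 24]
```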
Converting this comment by jifi into an answer to improve visibility for those stumbling on this question on their journey of writing their own EMV applet:
3- Is there any open source EMV applet?
Yes, there are open source applets you can learn from. There is a curated list of these applets on GitHub under the name "Javacard Curated List".
There is a bug in the library which is being worked out.
Unfortunately, if using API Gateway, it looks like you have to encode { and | beforehand (which you said you cannot do). I had to open an AWS support ticket about this, and they pointed me to this documentation that mentions it :(
The pipe character (|) and the curly brace characters ({ or }) are not supported for any request URL query string and must be URL-encoded.
Have you tried changing the storage link on localhost and then uploading it?
Is Excel running?
In my experience Excel puts locks on files and will prevent PowerShell from certain things if Excel is running.
Try using https://cdn.jsdelivr.net/npm/chart.js instead.
As the docs suggest:
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
@CyberCr0w: I don't get it, where's the magic trick?
So, you want me to use httpResource only for GET methods, and the rest to be HttpClient? I do not want to create a new service file. So can I use HttpClient just by replacing httpResource?
Maybe you could push your code, libs and packages to GitHub, then git clone it on Kaggle.
I'd also suggest checking out this Salesforce Help article about "Prepare for Connected App Usage Restrictions Change" and the new "Approve Uninstalled Connected Apps" user permission.
Although it covers a different error, it's still related to connected app authentication issues and might help others facing similar problems, for example errors like:
OAUTH_APPROVAL_ERROR_GENERIC
https://example.com?error=invalid_client&error_description=app+must+be+installed+into+org
Use the following command:
tree /f
or
tree /f > myfile.txt
Please read https://stackoverflow.com/help/minimal-reproducible-example and provide the missing details in your question (test code, complete errors), editing the question, not in comments.
I posted the complete code I am testing here.
This is an old thread, but I'll add my comment to it anyway.
If you are receiving <not accessible>, two reasons not yet mentioned might be:
Windows corruption
Trying to debug in 'Release' mode
Windows corruption is usually fixed with the sfc and DISM commands in an elevated cmd.
For the latter, change from 'Release' to 'Debug' and try again.
To get the values of several dynamically generated components in an EditForm in Blazor, you need to handle data binding with a dynamic model and intercept each component's change events (ValueChanged).
You haven't posted the information that was asked for. How are we meant to know what the inputs are and what you expect the results to be?
If you go to Control panel > Notifications > Tab Rules you can find the tab with "default" where your e-mail is also marked. If you click it and then click "Edit" you can unfold "Container Manager" and uncheck the rule "Unexpectedly stopped". This should silence e-mails and notifications.
Screenshot of Synology settings to turn off the Unexpectedly Stopped notification.
// Ninjago: Dragon Rising - Season 4 Teaser Cinematic
// Scene: Y/N L/N Reveal at the Ember Ruins
// Unreal Engine 5.4 (LEGO Animation Style)
// ==============================
// SETUP
// ==============================
Actors:
- BP_YN_Character (SkeletalMesh: LEGO_Minifig_YN)
- BP_Lloyd_Character (SkeletalMesh: LEGO_Minifig_Lloyd)
- BP_Nya_Character (SkeletalMesh: LEGO_Minifig_Nya)
- BP_Kai_Character (SkeletalMesh: LEGO_Minifig_Kai)
- BP_EmberCompass (StaticMesh: SM_Ember_Compass)
- BP_EmberTrail (NiagaraSystem: NS_EmberTrail)
- BP_MoonLight (DirectionalLight)
- BP_DragonSilhouette (SkeletalMesh: SM_Dragon_Silhouette)
- BP_Camera_Main (CineCameraActor)
- BP_SkyFog (ExponentialHeightFog + VolumetricFog Enabled)
// ==============================
// SEQUENCE FLOW
// ==============================
// Timeline: 0-40 seconds
LevelSequence "DragonRising_S4_Teaser"
EVENT BeginPlay
{
PlaySequence(DragonRising_S4_Teaser)
SetPostProcess(FilmGrain=0.3, Contrast=1.15, Saturation=0.9)
SpawnNiagaraSystem(BP_EmberTrail, Location=RuinsPath)
}
// ------------------------------
// SCENE 1 - The Ember Trail (0-10s)
// ------------------------------
CameraCutTo(BP_Camera_Main)
SetCameraTransform(Location=(0,0,250), Rotation=(0,-15,0), FOV=75)
PlaySound(SFX_Dragon_Roar_Distant, Volume=0.3)
FadeIn(2s)
BP_YN_Character.PlayAnimation("Crouch_Inspect")
SpawnNiagaraSystem(NS_EmberTrail, Location=Ground_Embers)
BP_YN_Character.PlayParticleEffect("Hand_Glow_Orange")
Lloyd_Character.VO("What is that?")
YN_Character.VO("A map. Only the dragons could leave it.")
// ------------------------------
// SCENE 2 - The Meeting (10-25s)
// ------------------------------
CameraPan(Target=Group_Ninja, Speed=0.5)
Lighting.SetIntensity(0.6)
PlayMusic(MX_TensionSoft)
Kai_Character.VO("Who are you?")
YN_Character.VO("Someone who knows where the dragons sleep.")
Nya_Character.VO("Then welcome, if you're real.")
BP_EmberCompass.AttachTo(YN_Character, "RightHandSocket")
BP_EmberCompass.PlayGlow(Intensity=3)
SpawnNiagaraSystem(NS_DragonProjection, Location=Compass_Origin)
// ------------------------------
// SCENE 3 - Threat Tease (25-40s)
// ------------------------------
CameraShake(Intensity=2.0)
PlaySound(SFX_Ground_Rumble, Volume=0.6)
SpawnActor(BP_DragonSilhouette, Location=(0,2000,800))
SetLightColor(BP_MoonLight, Color=(0.8,0.8,1))
FadeLight(BP_MoonLight, Duration=2.5)
YN_Character.VO("They're waking...")
TextOverlay("Season 4 - Dragon Rising: New alliances... old flames", FadeIn=1.0, Duration=3.0)
FadeOut(3s)
// ==============================
// MATERIAL / LIGHTING NOTES
// ==============================
Materials:
- YN_Cape_Mat: Cloth with emissive edge burn (color #FF4500)
- Ember_Studs_Mat: Translucent additive blend, emissive intensity 6
- Compass_Mat: PBR Metal with emissive rune detail
Lighting:
- MoonLight Intensity: 2.0 lux
- EmberTrail Glow: 2200K orange
- Fog Density: 0.3 for cinematic silhouette
Camera:
- DOF Focal Distance: 150
- Bloom Intensity: 0.8
- Lens Flare Enabled: True
I am using Astral's uv and had the same problem: a package (tksheet) couldn't be found. The problem was easily solved by adding pyinstaller to the uv dependencies:
uv add pyinstaller
and then building again:
uv run pyinstaller --clean [<NAME>].spec
"other" seemed the best category as it was not troubleshooting, debugging, tooling, best practice nor advice.
Relevant meta: https://meta.stackoverflow.com/a/436435/2402272
Damn, I was hoping that wouldn't be the answer. Back to the drawing board to figure out how to meet the objectives given to me.
I'm not sure why you think it's a good idea to keep these details secret? Why is end unused?
Try creating a fresh database and then connecting to it. It worked for me, because I had run some scripts in the old database which were conflicting. Now everything is working, so you can give it a try.
I found the error.
Turns out there was an issue with the conan_provider.cmake. This only installed Debug and Release.
The compile_commands.json however also contained info for MinSizeRel and RelWithDebInfo. Since Conan did not fill those configs, the include paths were missing in my compile_commands.json.
After adding the other configs to the Conan provider, my compile_commands.json was fixed and clang-tidy does seem to work :)
I wrote a test case to check timestamps with Joda DateTime and java.time; the two produce different timestamps. If we replace Joda with java.time, the timestamps should be the same as before, but we are getting different timestamps.
In general, transaction propagation is supported by Liberty and you don't need to modify EJBs that run on traditional WebSphere to make them run on Liberty. However, as the original poster says, if you use remote transaction propagation (in which one EJB calls another EJB in a different app server or JVM), the app won't run directly on Liberty.
However, there are workarounds, as @gas says. It might be that you can modify the EJBs so that they run in the same JVM. Sometimes EJBs are configured to use remote interfaces when the calls are actually local; so that's something else that can be easily changed to avoid making remote calls.
If none of that helps, though, you can wrap the EJB in a web service before deploying it to Liberty and then use the web service client to call the EJB web service. But it's worth checking that your app does actually need to be using the EJB's remote interfaces in the first place.
More info about how to check and then how to wrap the EJB in a web service here: https://community.ibm.com/community/user/blogs/laura-cowen1/2025/09/12/transaction-propagation-on-liberty
... your test cases being what? And failing how?
Reading files with PHP is a slow thing.
First, your PHP code will be converted to machine language and executed, this itself is slow.
Next, your PHP code will have to check if "file_exists" and then read it in someway, like with file_get_contents, or reading it line by line.
"file_get_contents" will run into memory issues on large files. Also reading many files to look for a certain thing will mean you will have to open all files to find that one thing.
Say for example you have a million registered users, each user's info is stored in his/her own file.
Now you want to find out names of all male users who are living in California.
You will need to open each user file and read the data to find what you are looking for; opening a million files takes too much time with PHP.
In MySQL, this is one query, which will probably take less than a few seconds to execute.
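For example, the California lookup described above is a single query (table and column names are hypothetical):

```sql
-- hypothetical schema: users(id, name, gender, state), indexed on (gender, state)
SELECT name
FROM users
WHERE gender = 'M'
  AND state = 'CA';
```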
What you need is a mix of both the PHP file system and MySQL.
Any data which is accessed again and again, you can store in a file and open with PHP, while all the data also goes into MySQL for use with complex queries.
For example, you have file of user "JOHN_SMITH.txt" stored somewhere outside public_html.
And on website another user wants to see info about JOHN_SMITH on webpage example.com/profile/JOHN_SMITH/. So your PHP code simply opens that one file and displays the info. Takes like a second to complete that task, and no need to touch MYSQL, leaving it free for more complex searches.
Also, you need a really big database table and extremely high traffic before speed becomes an issue. If it has become an issue in your case with only a little extra traffic, then either the server is too weak, with too little processing power, or your database is not designed/indexed properly.
This question already has answers in this regular question: How do I "git blame" a deleted line? There is nothing opinion-based about it, so "Advice" was the wrong type to pick.
My set up looks much like yours except I'm not using the "AuthorizationScopes:" field. Have you tried deleting the ".aws-sam" folder to make sure you are getting a clean build and deploy? I've had to do that on occasion to get my changes to appear.
If you are getting a 500 error you've at least gotten past CORS going in. Do note that your response needs to include those CORS headers as well or the browser will block it again.
Good luck
@julaine, thanks. My goal was to post a question and get an objective, authoritative answer. "Other" seemed the best category, as it was not troubleshooting, debugging, tooling, best practice, nor advice. I now see it results in a discussion and not something to accept.
[Originally answered here] From the Command Palette, select Select and Start Debugging. There should be a delete icon next to each contributed launch action, which will remove it from the Run and Debug dropdown list.
I would prefer to go with something like this:

try:
    from enum import StrEnum

    class StringEnum(StrEnum):
        pass

except ImportError:
    from enum import Enum

    class StringEnum(str, Enum):
        pass


class MyEnum(StringEnum):
    choice1 = "choice1"
    choice2 = "choice2"
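As a quick usage check (definitions repeated so the snippet runs standalone), members behave as plain strings on both old and new Python versions:

```python
# StrEnum exists on Python 3.11+; fall back to (str, Enum) on older versions
try:
    from enum import StrEnum

    class StringEnum(StrEnum):
        pass
except ImportError:
    from enum import Enum

    class StringEnum(str, Enum):
        pass

class MyEnum(StringEnum):
    choice1 = "choice1"
    choice2 = "choice2"

# members are real strings and can be looked up by value either way
assert isinstance(MyEnum.choice1, str)
assert MyEnum("choice2") is MyEnum.choice2
```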
Eventually, I ended up splitting my query. In the first part, I got all the records that satisfy my exclude condition, and then in the later part I got the records that satisfy my include conditions.
I can't prove this, but I think Microsoft is trying to parse the <LoadUri> element too early. This "Invalid URI: The hostname could not be parsed." error message is a common exception that's thrown whenever you try to create a Uri with a bad hostname, which is true if I try to parse a value such as "https://{Claim:loadUri}". I feel like the parsing should have been done AFTER the {Claim:} tokens are resolved, but I don't know the reasoning behind it.
In short: I just couldn't do it and I had to redesign these dynamic URLs. I'm using a static url for use cases and I created a reverse proxy that routes to different destinations based off of the query string. They look like this now: <LoadUri>https://my-reverse-proxy.com?myClaim={Claim:MyClaim}</LoadUri>
It's covered on Mozilla Developer Network now: https://developer.mozilla.org/en-US/docs/Web/API/File_API/Using_files_from_web_applications#using_hidden_file_input_elements_using_the_click_method
For Autodesk Platform Services (APS) applications created on or after December 10, 2024, a "Legacy endpoint is deprecated" error is returned.
Your 2017 application is "grandfathered": Autodesk is giving existing apps a grace period to migrate, but won't allow new applications to revert to the legacy endpoint. There is no sunset date for the grace period to end.
Follow the advice at the top of the documentation you linked, and use
"If I'm on the wrong path". My advice is trying to create a dynamic form is the wrong path. You're chasing the pot of gold at the end of a rainbow. It looks simple when you just have to deal with text fields, but as the number of field types grow it becomes horribly complex. Why do you think the Internet is not awash with Dynamic Form libraries?
Hey Jan Nielsen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Thanks to the comment by @DavidMaze ...
You can set an environment variable
PIP_EXTRA_INDEX_URL with the same effect, and either pass it in via a Dockerfile ARG (be careful to use a multi-stage build to avoid leaking the token, though) or a build secret. See for example Passing PIP_EXTRA_INDEX_URL to docker build.
... to pass the argument, I added the following in .gitlab-ci.yml, in the dockerize-env script:
export PIP_EXTRA_INDEX_URL="https://gitlab-ci-token:[email protected]/api/v4/groups/<GROUP_ID>/-/packages/pypi/simple"
and in Dockerfile.project_env:
# Add a variable accessible during image build
# so that pip can access internal pypi repo
ARG PIP_EXTRA_INDEX_URL
After that, my_package_on_gitlab_private_pypi can be installed from the internal PyPI.
Try out this package; it works to run Lumi H5P locally.
@Hamid You're just going to ignore all the questions posted above, then?
Hey NoobNoob, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
In my case these drivers work; the latest drivers do not. Thanks!
Hey Janning Vygen, I'm a Product Manager at Stack Overflow. We're gathering feedback on the new question type you used to ask this. If you have a moment, we'd love to hear how your experience was or what you would improve.
Running pod update fixed it for me
How to change "max connection pool size" permanently?
<asp:GridView ID="GridView1" runat="server"
AllowPaging="true"
PageSize="50"
PagerStyle-HorizontalAlign="Center"
OnPageIndexChanging="GridView1_PageIndexChanging">
</asp:GridView>
@use "sass:math";
@use "sass:meta";

/**
 * Removes the unit from a value if it has one.
 *
 * @param {Number|String} $value - The value to strip the unit from.
 * @return {Number} - The value without its unit.
 */
@function strip-unit($value) {
  @if meta.type-of($value) == "number" and not math.is-unitless($value) {
    @return math.div($value, ($value * 0 + 1));
  } @else {
    @return $value;
  }
}
Please elaborate on the way you use FastAPI's workers, and on your implementation of the API as a whole. If uploading a CSV, preprocessing, and all the math are done in one endpoint, you won't really gain any performance by increasing the number of FastAPI workers. Your work is mostly CPU-bound, so it makes sense to separate all the networking and all the math into different entities.

The way I usually do this in my projects: have an API/Producer (FastAPI) which processes incoming requests; if a request is nested, it splits it into different jobs and later passes them to workers to process. Workers are replicated and run in parallel, each one processing its own part of the workload. After completing the work, results are passed back to the Producer for a response. More technically speaking, your Producer is FastAPI; for workers I usually go with Celery, which is a popular and solid choice, but there are many others; and you'll need a way for the Producer and Workers to communicate - Redis is a good choice.

Adding to that, I'd suggest ditching Pandas and going with Polars; in my experience the performance gain is really noticeable.

So your workflow will go like this: upload a CSV -> split it into chunks -> assign a separate task to process each chunk and execute them in parallel -> gather results and return a response.
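A minimal sketch of that split/process/gather flow (threads are used here for brevity; truly CPU-bound work would go to separate processes or Celery workers, and process_chunk is a stand-in for the real math):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # stand-in for the heavy per-chunk math a worker would do
    return sum(chunk)

def process_in_chunks(values, n_chunks=4):
    """Split the input, process chunks in parallel, gather partial results."""
    size = max(1, len(values) // n_chunks)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(process_chunk, chunks))
    return sum(partials)
```

With Celery the structure is the same: each chunk becomes a task, and a chord or group gathers the partial results.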
As per the 'Databases and Doctrine ORM' page (https://symfony.com/doc/current/doctrine.html), I edited the .env file and used dummy values for the database URL and password; this satisfied Symfony.
The files are in the range of a couple of megabytes each, since they are binary glTF files (3D models and scenes). I have tried that in debug mode, but I can see the data plainly in memory, so I'm not sure whether that's because it was compiled in debug mode or whether that's how it will always be. I guess this is not such a good idea then, since it won't be as efficient as just accessing external files on demand.
You can use the inheritViewBox prop, which is exactly meant for that purpose. See https://mui.com/material-ui/api/svg-icon/#svg-icon-prop-inheritViewBox
import os
import argparse
import re
import logging
from collections import defaultdict
from pathlib import Path
import datetime
# Logger will be configured in main() after parsing arguments
logger = logging.getLogger(__name__)
def find_filename_duplicates(directory):
    """Find files with duplicate names in the given directory and its subdirectories"""
    files_by_name = defaultdict(list)
    # Walk through the directory and collect files by name
    for root, _, filenames in os.walk(directory):
        for filename in filenames:
            filepath = os.path.join(root, filename)
            files_by_name[filename].append(filepath)
    # Filter to only include names that have duplicates
    duplicates = {name: paths for name, paths in files_by_name.items() if len(paths) > 1}
    return duplicates
def extract_number_pattern(filename):
    """Extract the number from a filename with pattern V1_A_B_C_D_E_F__.*
    where:
    - A is date in YYYYMMDD format
    - B is 2 digits
    - C is 4-6 digits
    - D is 4 digits
    - E is 3 digits
    - F is 2 digits (the part we'll increment)
    """
    # Pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__.*
    pattern = re.compile(r'(V1_\d{8}_\d{2}_\d{4,6}_\d{4}_\d{3}_)(\d{2})(__.*)')
    match = pattern.match(filename)
    if match:
        prefix = match.group(1)  # Everything before the F part
        number = int(match.group(2))  # The F part (2 digits)
        suffix = match.group(3)  # Everything after the F part
        return prefix, number, suffix
    return None
def increment_number_in_filename(filepath):
    """Increment the F part (2-digit number) in the filename pattern"""
    path = Path(filepath)
    filename = path.name
    parts = extract_number_pattern(filename)
    if parts:
        prefix, number, suffix = parts
        new_number = number + 1
        # Ensure the number stays as 2 digits with leading zero if needed
        new_filename = f"{prefix}{new_number:02d}{suffix}"
        new_filepath = path.with_name(new_filename)
        return new_filepath
    # If pattern doesn't match, return the original path with a warning
    logger.warning(f"File {filepath} doesn't match the expected pattern. Skipping.")
    return filepath
def format_size(size):
    """Convert file size to human-readable format"""
    for unit in ['B', 'KB', 'MB', 'GB']:
        if size < 1024.0:
            return f"{size:.2f} {unit}"
        size /= 1024.0
    return f"{size:.2f} TB"
def rename_duplicates(directory, dry_run=False):
    """Find and rename duplicate files, return number of renames performed"""
    duplicates = find_filename_duplicates(directory)
    if not duplicates:
        logger.info("No files with duplicate names found.")
        return 0
    total_duplicates = sum(len(paths) - 1 for paths in duplicates.values())
    logger.info(f"Found {len(duplicates)} duplicate filenames ({total_duplicates} duplicates in total)")
    rename_count = 0
    for filename, paths in duplicates.items():
        logger.info(f"\nDuplicate filename: '{filename}'")
        # Check if the filename matches our expected pattern
        if not extract_number_pattern(filename):
            logger.warning(f"  Filename '{filename}' doesn't match the expected pattern V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*")
            logger.warning("  Skipping this group of files")
            continue
        # Keep the first file as is, rename the rest
        for i, filepath in enumerate(paths):
            rel_path = os.path.relpath(filepath, directory)
            if i == 0:
                logger.info(f"  Keeping: {rel_path}")
            else:
                new_path = increment_number_in_filename(filepath)
                # Skip if the pattern doesn't match
                if new_path == filepath:
                    continue
                # Make sure the new name doesn't already exist
                while os.path.exists(new_path) and not dry_run:
                    new_path = increment_number_in_filename(new_path)
                if dry_run:
                    logger.info(f"  [DRY RUN] Would rename: {rel_path} → {os.path.basename(new_path)}")
                else:
                    logger.info(f"  Renaming: {rel_path} → {os.path.basename(new_path)}")
                    os.rename(filepath, new_path)
                rename_count += 1
    return rename_count
def main():
    parser = argparse.ArgumentParser(
        description='Find and rename files with duplicate names using pattern increment',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Example filename pattern: V1_YYYYMMDD_BB_CCCCCC_DDDD_EEE_FF__*
where:
- YYYYMMDD is a date
- BB is 2 digits
- CCCCCC is 4-6 digits
- DDDD is 4 digits
- EEE is 3 digits
- FF is 2 digits (this is the part that will be incremented)
"""
    )
    parser.add_argument('directory', help='Directory to scan for filename duplicates')
    parser.add_argument('--max-iterations', type=int, default=10,
                        help='Maximum number of renaming iterations to perform')
    parser.add_argument('--log-dir', default=os.getcwd(),
                        help='Directory where log file should be saved (default: current directory)')
    parser.add_argument('--log-name',
                        help='Custom log filename (default: rename_log_YYYYMMDD_HHMMSS.txt)')
    parser.add_argument('--dry-run', action='store_true',
                        help='Perform a dry run without actually renaming files')
    args = parser.parse_args()
    # Configure logging based on arguments
    log_dir = os.path.abspath(args.log_dir)
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    if args.log_name:
        log_filename = os.path.join(log_dir, args.log_name)
    else:
        timestamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
        log_filename = os.path.join(log_dir, f"rename_log_{timestamp}.txt")
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler(log_filename),
            logging.StreamHandler()
        ]
    )
    directory = os.path.abspath(args.directory)
    if not os.path.isdir(directory):
        logger.error(f"Error: Directory '{directory}' does not exist.")
        return
    logger.info(f"Starting duplicate filename resolution in: {directory}")
    logger.info(f"Log file: {os.path.abspath(log_filename)}")
    if args.dry_run:
        logger.info("DRY RUN MODE: No files will actually be renamed")
    iteration = 1
    total_renames = 0
    while iteration <= args.max_iterations:
        logger.info(f"\n=== Iteration {iteration} ===")
        renames = rename_duplicates(directory, args.dry_run)
        total_renames += renames
        if renames == 0:
            logger.info(f"\nProcess completed successfully after {iteration} iterations.")
            logger.info(f"Total files {'that would be ' if args.dry_run else ''}renamed: {total_renames}")
            break
        logger.info(f"{'Would rename' if args.dry_run else 'Renamed'} {renames} files in this iteration. Checking for remaining duplicates...")
        if args.dry_run:
            # In dry run mode, we only do one iteration since we're not actually changing anything
            logger.info("\nDry run complete. Exiting after one iteration.")
            break
        iteration += 1
    else:
        logger.warning(f"\nReached maximum iterations ({args.max_iterations}). Process may not be complete.")
        logger.info(f"Total files renamed so far: {total_renames}")
    logger.info(f"\nDetailed log saved to: {os.path.abspath(log_filename)}")


if __name__ == "__main__":
    main()
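The increment logic can be sanity-checked in isolation. The snippet below reuses the exact regex from `extract_number_pattern` on a made-up example filename:

```python
import re

# Same pattern as in extract_number_pattern above.
pattern = re.compile(r'(V1_\d{8}_\d{2}_\d{4,6}_\d{4}_\d{3}_)(\d{2})(__.*)')

match = pattern.match("V1_20251104_01_12345_6789_123_01__data.txt")
prefix, number, suffix = match.group(1), int(match.group(2)), match.group(3)
# Increment the F part while keeping the two-digit zero padding.
new_name = f"{prefix}{number + 1:02d}{suffix}"
# new_name is "V1_20251104_01_12345_6789_123_02__data.txt"
```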
# Cascade Chat Conversation
Note: _This is purely the output of the chat conversation and does not contain any raw data, codebase snippets, etc. used to generate the output._
### User Input
file duplicates viewer and changer
### Planner Response
I'll help you create a Python script to find and manage duplicate files in a directory. This tool will identify files with identical content and provide options to delete or keep one copy.
Let me create a Python script with a simple command-line interface:
*Edited relevant file*
### Planner Response
I've created a Python script called [duplicate_finder.py](cci:7://file:///c:/Users/rsrivaishnavi/CascadeProjects/windsurf-project-2/duplicate_finder.py:0:0-0:0) that helps you find and manage duplicate files. Here's what it does:
### Features:
- Scans a directory and its subdirectories for duplicate files
- Identifies duplicates by both file size and content hash
- Shows detailed information about found duplicates
- Provides an option to automatically delete duplicates (keeping one copy)
- Displays the total space that can be recovered
### How to use:
1. **Basic usage** (just find duplicates):
```bash
python duplicate_finder.py "path/to/your/directory"
```
2. **Delete duplicates** (keeps one copy in each group):
```bash
python duplicate_finder.py "path/to/your/directory" --delete
```
### Example output:
```text
Scanning for duplicate files in: C:\path\to\your\directory
Found 3 groups of duplicate files (5 duplicates in total):
Group 1 - 2.50 MB each:
  1. folder1\image1.jpg
  2. folder2\image_copy.jpg
  3. backup\image1(1).jpg
Group 2 - 150.00 KB each:
  1. docs\report.pdf
  2. archive\report.pdf
Group 3 - 5.00 MB each:
  1. videos\clip.mp4
  2. temp\clip.mp4
Total space wasted by duplicates: 12.80 MB
```
With `--delete`, it keeps the first file in each group and removes the duplicates.

Would you like me to run it on a specific directory for you?
*Viewed duplicate_finder.py*
### User Input
file name duplicates viewer and changer
### Planner Response
I see you're looking for a tool to find and manage files with duplicate names rather than duplicate content. Let me create a script that focuses on filename duplicates instead.
*Edited relevant file*
I've created a new script called `filename_duplicate_finder.py` that specifically finds and manages files with duplicate names (rather than duplicate content).

```bash
python filename_duplicate_finder.py "path/to/your/directory"
python filename_duplicate_finder.py "path/to/your/directory" --rename
python filename_duplicate_finder.py "path/to/your/directory" --rename --interactive
```

### Example output:
```text
Scanning for files with duplicate names in: C:\path\to\your\directory
Found 2 duplicate filenames (3 duplicates in total):
Duplicate #1: 'report.pdf'
  1. docs\report.pdf (150.00 KB)
  2. archive\report.pdf (150.00 KB)
  3. old\report.pdf (120.00 KB)
Duplicate #2: 'config.json'
  1. settings\config.json (2.50 KB)
  2. backup\config.json (2.30 KB)
```
With `--rename`: automatically renames all duplicates except the first one in each group (e.g. report.pdf → report_1.pdf, report_2.pdf).
With `--rename --interactive`: gives you options for each group of duplicates.

Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input
filename must be renamed but not at the end, the file will have a pattern like .dd__text. - the dd must be incremented by 1. also, the filename should be checked across a directory having many folders. after the renaming, the directory should be again checked for duplicates. this process should repeat until there is no filename duplicates found in the directory. Also provide a detailed log step by step -of the duplicate filenames found and renamed filenames
### Planner Response
I understand you need a more specific filename renaming pattern. You want to rename files matching `.*dd__text.*`, where `dd` is a number that should be incremented. Let me create a new script that implements this logic:
*Edited relevant file*
I've created a new script called `pattern_filename_renamer.py` that implements your specific requirements for finding and renaming duplicate filenames with a pattern-based approach. It matches filenames of the form `.*dd__text.*`, where `dd` is a two-digit number, and increments that number by 1 when renaming files.

```bash
python pattern_filename_renamer.py "path/to/your/directory"
```
You can also specify a maximum number of iterations:
```bash
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 20
```
**Pattern Recognition**: The script looks for filenames with the pattern `.*dd__text.*` where:
- `.*` is any prefix
- `dd` is a two-digit number (like 01, 02, etc.)
- `__text` is any text with double underscores
- `.*` is any suffix, including the file extension

**Renaming Logic**:
**Iteration Process**:
**Logging**:
- Creates a timestamped log file (e.g. `rename_log_20251104_202500.txt`):

```text
2025-11-04 20:25:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 20:25:00 - INFO - Log file: C:\path\to\rename_log_20251104_202500.txt
2025-11-04 20:25:00 - INFO - === Iteration 1 ===
2025-11-04 20:25:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 20:25:00 - INFO - Duplicate filename: 'report01__final.pdf'
2025-11-04 20:25:00 - INFO -   Keeping: docs\report01__final.pdf
2025-11-04 20:25:00 - INFO -   Renaming: archive\report01__final.pdf → report02__final.pdf
2025-11-04 20:25:00 - INFO - Duplicate filename: 'data05__processed.csv'
2025-11-04 20:25:00 - INFO -   Keeping: data\data05__processed.csv
2025-11-04 20:25:00 - INFO -   Renaming: backup\data05__processed.csv → data06__processed.csv
2025-11-04 20:25:00 - INFO - Renamed 2 files in this iteration. Checking for remaining duplicates...
2025-11-04 20:25:01 - INFO - === Iteration 2 ===
2025-11-04 20:25:01 - INFO - No files with duplicate names found.
2025-11-04 20:25:01 - INFO - Process completed successfully after 2 iterations.
2025-11-04 20:25:01 - INFO - Total files renamed: 2
2025-11-04 20:25:01 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_202500.txt
```
Would you like me to make any adjustments to this script or would you like to see it in action on a specific directory?
### User Input
keep the directory and log file location as variable inputs
### Planner Response
I'll update the script to make both the directory and log file location configurable as command-line arguments. This will give you more flexibility when using the tool.
*Edited relevant file*
I've updated the script to make both the directory and log file location configurable as command-line arguments. Here are the changes:
- `--log-dir`: specifies the directory where the log file should be saved
- `--log-name`: allows you to specify a custom log filename (default: `rename_log_YYYYMMDD_HHMMSS.txt`, using the current timestamp)

```bash
python pattern_filename_renamer.py "path/to/your/directory"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs"
python pattern_filename_renamer.py "path/to/your/directory" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_custom_log.txt"
python pattern_filename_renamer.py "path/to/your/directory" --max-iterations 15 --log-dir "path/to/logs" --log-name "my_custom_log.txt"
```
These changes make the script more flexible and allow you to specify exactly where you want the logs to be stored, which is especially useful for automation or when running the script on multiple directories.
### User Input
The exact filename regex would be - V1_A_B_C_D_E_F__.* -where A would be date format in YYYYMMDD and B would be digits of length 2 and C would be digits of length between 4 to 6 , D would be digits of length 4, E would be digits of length 3 and this part F would be 2 digit number and the rest .* would be combination of alphabets, underscore and digits. I want you to change only the 2 digit number F, by incrementing the number by 1. Make the changes accordingly to match this requirement
### Planner Response
I'll update the script to match your specific filename pattern and ensure it only increments the 2-digit number F in the pattern `V1_A_B_C_D_E_F__.*` where:
*Edited relevant file*
Let me fix that and try again:
*Edited relevant file*
I apologize for the errors. I'm having trouble with the replace_file_content tool when trying to update the script to match your specific filename pattern. The error indicates there's an issue with how I'm formatting the JSON for the replacement chunks.
The specific error is about invalid character 'd' in string escape code, which suggests there might be an issue with how backslashes are being handled in the regular expression patterns.
How would you like to proceed? I could:
What would you prefer?
### User Input
1
### Planner Response
I'll create a new file from scratch with the updated pattern requirements to match your specific filename format.
*Edited relevant file*
I've created a new script called `filename_pattern_renamer.py` that specifically handles your filename pattern requirements. Here's what it does:
**Specific Pattern Matching**: Identifies files with the pattern `V1_A_B_C_D_E_F__.*` where:
**Targeted Renaming**: Only increments the 2-digit F part of the filename
**Duplicate Resolution**: Iteratively scans and renames files until no duplicates remain
**Detailed Logging**: Creates a comprehensive log of all actions taken
**Configurable Options**:
```bash
python filename_pattern_renamer.py "path/to/your/directory"
python filename_pattern_renamer.py "path/to/your/directory" --dry-run
python filename_pattern_renamer.py "path/to/your/directory" --log-dir "path/to/logs" --log-name "my_rename_log.txt"
```
```text
2025-11-04 22:42:00 - INFO - Starting duplicate filename resolution in: C:\path\to\directory
2025-11-04 22:42:00 - INFO - Log file: C:\path\to\rename_log_20251104_224200.txt
2025-11-04 22:42:00 - INFO - === Iteration 1 ===
2025-11-04 22:42:00 - INFO - Found 2 duplicate filenames (3 duplicates in total)
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251104_01_12345_6789_123_01__data.txt'
2025-11-04 22:42:00 - INFO -   Keeping: folder1\V1_20251104_01_12345_6789_123_01__data.txt
2025-11-04 22:42:00 - INFO -   Renaming: folder2\V1_20251104_01_12345_6789_123_01__data.txt → V1_20251104_01_12345_6789_123_02__data.txt
2025-11-04 22:42:00 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_05__report.csv'
2025-11-04 22:42:00 - INFO -   Keeping: reports\V1_20251103_02_1234_5678_321_05__report.csv
2025-11-04 22:42:00 - INFO -   Renaming: archive\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO -   Renaming: backup\V1_20251103_02_1234_5678_321_05__report.csv → V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:00 - INFO - Renamed 3 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:01 - INFO - === Iteration 2 ===
2025-11-04 22:42:01 - INFO - Found 1 duplicate filenames (1 duplicates in total)
2025-11-04 22:42:01 - INFO - Duplicate filename: 'V1_20251103_02_1234_5678_321_06__report.csv'
2025-11-04 22:42:01 - INFO -   Keeping: archive\V1_20251103_02_1234_5678_321_06__report.csv
2025-11-04 22:42:01 - INFO -   Renaming: backup\V1_20251103_02_1234_5678_321_06__report.csv → V1_20251103_02_1234_5678_321_07__report.csv
2025-11-04 22:42:01 - INFO - Renamed 1 files in this iteration. Checking for remaining duplicates...
2025-11-04 22:42:02 - INFO - === Iteration 3 ===
2025-11-04 22:42:02 - INFO - No files with duplicate names found.
2025-11-04 22:42:02 - INFO - Process completed successfully after 3 iterations.
2025-11-04 22:42:02 - INFO - Total files renamed: 4
2025-11-04 22:42:02 - INFO - Detailed log saved to: C:\path\to\rename_log_20251104_224200.txt
```
The script now precisely matches your requirements for finding and renaming duplicate files with the specific pattern format you described. Would you like me to make any adjustments to this script?
Ok, after some research I found an old workaround in my project related to KeyboardAutoScroll on iOS, and it turns out this is what broke the ScrollView in .NET 9.
Thanks for your responses!
These days, you can use npm audit with flags to filter.
npm audit --audit-level=(low|moderate|high|critical)
This comes from their docs page: https://docs.npmjs.com/cli/v7/commands/npm-audit
I have the same problem. No input is accepted, and the module repeats the sequence described, ending with "SMS done."
Thanks for any help.
I couldn't just downgrade the React Native version in the same project; I had to create a new project from scratch with React Native version 0.72.4, reimplement the entire project, and search for libraries compatible with that older version. It was very painful, but it was the only thing I could do to solve my problem.
@KJ
excuse me, I meant jpeg.
Thanks! I've added a minimal sample PDF + the source PNG here: (link).
I'm not relying on viewer defaults; I compute the page size as points = pixels * 72 / targetDPI and draw the CGImage 1:1 (no interpolation). Despite that, thin glyphs show "holes" at typical zoom in Preview/PDFKit.
Do you know if iOS/macOS forces a particular image filter (e.g., JPEG re-encode) or disables smoothing for 1-bit/grayscale images inside PDFs? Any flag to guarantee lossless Flate (8-bit) or CCITT G4 (1-bit)?
Agree that PDF has no intrinsic DPI, only user space. That's why I scale the page so one image pixel maps cleanly at a chosen effective DPI. The issue persists even with 8-bit grayscale (not just 1-bit).
Are there known viewer behaviors in Preview/PDFKit that render bilevel/high-contrast rasters without anti-aliasing? If so, would your recommendation be to (a) embed as 8-bit grayscale at 2× resolution, or (b) pre-apply a tiny morphological close/blur to avoid "holes"? Any other reliable workaround?
I hope it's useful.
I recommend the Plutoprint library on GitHub; it's easy to use, and the maintainers answer questions.
Update: `none` does not work for me on an Apple Silicon Mac. It has updated itself three times to the newest version, each time after I restarted my computer (though I'm not sure it happened on every restart). It seems to work until I restart the computer; then it updates anyway.
/**
* null, undefined, '', [] => true
* 0, '0', true, false, NaN => false
*/
const isEmpty = (v) => ![v].join('');
const hasDuplicates = (arr) => new Set(arr.filter(Boolean)).size !== arr.filter(Boolean).length; // true if any truthy value repeats, false otherwise
This is expected behaviour. In Outlook, the Company field is just plain text, but in Dynamics 365, the Company Name field on a Contact is a lookup to an Account. Because of that, it will only sync if Dynamics can find an Account with the exact same name, or if the system is configured to automatically create the Account. Check your Contact Synchronization Field Mappings and make sure "Company" maps to ParentCustomerId, and enable "Create parent account for new contacts" if you want Dynamics to auto-create the Account when one doesn't already exist.
For frequent real time communication like this what actual protocol are you planning to use on top of TCP/IP? Websocket maybe? Or something else?
Perhaps you can create the refs in your custom hook.
Before:
const menuRef = useRef()
const menuButtonRef = useRef()
const toggleMenu = useToggleMenu(menuRef, menuButtonRef);
After:
const { toggleMenu, menuRef, menuButtonRef } = useToggleMenu();
I found the solution. The issue was not on our side; it came from the backend. I simply updated to the latest Supabase image and somehow it suddenly worked!
I hope you get the correct answer; I don't know this one.
For xUnit install these 3 packages:
1. xunit
2. xunit.runner.visualstudio
3. Microsoft.NET.Test.Sdk
For new Android emulator versions, you should apply this configuration to bring back the soft keyboard:
disable `Use stylus to write in text fields`
and then the keyboard will appear again.