S3 recently introduced live inventory support; see https://aws.amazon.com/blogs/aws/amazon-s3-metadata-now-supports-metadata-for-all-your-s3-objects/
Google removed App Actions, so BIIs like actions.intent.GET_THING no longer work.
What you can still do:
"Hey Google, open TestAppDemo" -> opens the app's main screen.
If you want the "groceries" screen, you need a deep link or a shortcut, for example: testappdemo://groceries.
But Google Assistant no longer passes words like "groceries" to your app. It can only open the app or follow deep links.
So the answer is: no, you cannot do "Hey Google, open groceries from TestAppDemo" directly. The only option is the deep link + shortcut workaround.
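In practice, the workaround usually means a static shortcut whose intent fires the deep link. A minimal sketch of res/xml/shortcuts.xml follows; the package, class, and string resource names here are invented, so adjust them to your app:

```xml
<!-- res/xml/shortcuts.xml: hypothetical example -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
    <shortcut
        android:shortcutId="groceries"
        android:shortcutShortLabel="@string/shortcut_groceries">
        <intent
            android:action="android.intent.action.VIEW"
            android:data="testappdemo://groceries"
            android:targetPackage="com.example.testappdemo"
            android:targetClass="com.example.testappdemo.MainActivity" />
    </shortcut>
</shortcuts>
```

The activity also needs an intent filter for the testappdemo scheme so the deep link itself resolves.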
The same issue happens from time to time in Visual Studio 2022. I just open web.config, save it, and close the editor tab.
That fixes it for me.
After many frustrating hours, I realized that serializing the presigned URL with Gson and then printing the resulting JSON was escaping certain characters. For example, this is part of the presigned URL before serializing it with Gson; I was able to use this URL to upload a file successfully:
https://my-bucket.s3.amazonaws.com/f0329e43-c5ee-4151-87c5-c6736b5c7242?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250912T023806Z...
And this is how it looked after serializing it with Gson and printing it to the logs:
{
    "statusCode": 201,
    "headers": {},
    "body": "{\n \"id\": \"3da30011-8c20-4463-9f59-a31033276d0e\",\n \"version\": 0,\n \"presignedUrl\": \"https://my-bucket.s3.amazonaws.com/3da30011-8c20-4463-9f59-a31033276d0e?X-Amz-Algorithm\\u003dAWS4-HMAC-SHA256\\u0026X-Amz-Date\\u003d20250912T023806Z...\"\n}"
}
A beginner's mistake, since I'm new to presigned URLs and didn't know to look for this.
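For what it's worth, the \u003d and \u0026 sequences are ordinary JSON escapes for = and &, so any JSON parser recovers the original URL unchanged (Gson also has a disableHtmlEscaping() builder option to stop emitting them in the first place). A quick check of the round trip, in Python just to illustrate:

```python
import json

# A fragment shaped like the Gson-escaped presigned URL in the logs above
escaped = ('"https://my-bucket.s3.amazonaws.com/obj'
           '?X-Amz-Algorithm\\u003dAWS4-HMAC-SHA256\\u0026X-Amz-Date\\u003d20250912T023806Z"')

# Parsing the JSON string decodes the \uXXXX escapes back to literal characters
url = json.loads(escaped)
print(url)
```

So the escaped form in the logs is lossless; the upload only fails if you paste the escaped text as-is instead of the parsed value.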
Try running the command like this:
run sts-dynamic-incremental -m .. -t ..
It doesn't fully support EER, but as of 2025 ERDPlus seems to be the tool with the most EER features. Even then it doesn't have many, so you'll probably find yourself drawing the missing notation in MS Paint.
In your views, make sure you are not enabling fitsSystemWindows.
Remove this line:
android:fitsSystemWindows="true"
or replace it with:
android:fitsSystemWindows="false"
I have not tried this myself yet, so please report back if it works.
I believe pandas may not be the answer here. Perhaps you could use the standard-library csv module instead:

import csv

with open('databooks.csv', 'r', newline='') as data:
    csvreader = csv.reader(data)  # iterator that yields each row as a list of strings
    # If your CSV has a header row, you can skip it or keep it for column names:
    header = next(csvreader)
    for row in csvreader:
        print(row)
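If the first row really is a header, csv.DictReader from the same module maps each row to those column names, which is often less error-prone than numeric indexes. A small sketch (the column names here are invented; the real ones would come from databooks.csv's header):

```python
import csv
import io

# In-memory stand-in for open('databooks.csv', newline='')
sample = io.StringIO("title,author\nDune,Herbert\nEmma,Austen\n")

rows = list(csv.DictReader(sample))  # each row is a dict keyed by the header
for row in rows:
    print(row["title"], "by", row["author"])
```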
Zoho Writer Sign's API doesn't accept recipient_phonenumber and recipient_countrycode as top-level keys.
To enable Email + SMS delivery, the recipient details must be passed inside each recipient object (via delivery_type or verification_info).
The top-level signer_data parameter is a JSON array of recipients:
[
    {
        "recipient_1": "[email protected]",
        "recipient_name": "John",
        "action_type": "sign", // approve | sign | view | in_person_sign
        "language": "en",
        "delivery_type": [
            {
                "type": "sms", // sms | whatsapp | email
                "countrycode": "+91",
                "phonenumber": "9876543210"
            }
        ],
        "private_notes": "Hey! Please sign this document"
    },
    {
        "recipient_2": "[email protected]",
        "recipient_name": "Jack",
        "action_type": "in_person_sign",
        "in_person_signer_info": {
            "email": "[email protected]", // Optional, required only when verification_info.type = "email"
            "name": "Tommy"
        },
        "language": "en",
        "verification_info": {
            "type": "email" // email | sms | offline | whatsapp
        }
    }
]
Key Points
The top-level parameter name is always signer_data.
It must be a JSON Array ([ ... ]) of recipient objects.
recipient_1, recipient_2, … are the unique keys used to identify each signer.
action_type defines what the recipient does (sign, approve, view, in_person_sign).
delivery_type is an array of objects:
type: email, sms, or whatsapp
For SMS/WhatsApp → include countrycode and phonenumber.
verification_info controls how the signer is verified (email, sms, offline, whatsapp).
in_person_signer_info is required for in_person_sign action types.
My question here is basically how you page your content. Stream currently does not work with pages; you essentially have to load everything. I guess the ask is: if you were building something like Instagram posts, would you use Stream? You obviously can't load all posts at once, so how would you design and implement this?
You can fix this in Compiler Explorer by disabling the backend singleton check:
quill::BackendOptions backend_options;
backend_options.check_backend_singleton_instance = false;
quill::Backend::start(backend_options);
Quill runs a runtime safety check to ensure there’s only one backend worker thread instance.
On Windows, it uses a named mutex.
On Linux/Unix, it uses a named semaphore.
This helps catch subtle linking issues (e.g. mixing static and shared libraries), but in restricted environments like Compiler Explorer, creating a named semaphore isn’t possible. That’s why you see:
Failed to create semaphore - errno: 2 error: No such file or directory
Since the check is optional, you can safely turn it off in such environments.
👉 The Quill README already includes a working Compiler Explorer example in the Introduction section, with a note about this option.
The only thing I found that worked is Git Sync. I originally dismissed it because it was only mentioned alongside Android Studio, but I just loaded my project on my phone: I only had to authenticate with GitHub, choose a folder to download into, and then import that folder into Godot.
I made some test changes and pulled them to my Windows PC, so I'll answer my own question so other people can find this.
I have found that by adding the following to the maven-clean-plugin configuration in the project pom, version 3.5.0 completes the clean process without error:
<configuration>
    <force>true</force>
</configuration>
According to the documentation, force=true deletes read-only files. I really don't know why this works, but it does.
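For context, the full plugin block would look something like this (the version number is taken from the answer; the rest is standard plugin boilerplate):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-clean-plugin</artifactId>
    <version>3.5.0</version>
    <configuration>
        <force>true</force>
    </configuration>
</plugin>
```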
Try selecting the item with .outlet-list .accordion-item .item-content and CSS_SELECTOR:
driver.find_element(By.CSS_SELECTOR,".outlet-list .accordion-item .item-content")
Then just click the element and the accordion should extend.
Close the open files. You can programmatically raise the open-file limit before starting the process and lower it afterwards, but the system sets limits so you don't crash the machine.
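On POSIX systems, the standard-library resource module (Unix-only) can raise the per-process soft limit up to the hard limit; a sketch of the raise-then-restore pattern:

```python
import resource

# Current soft and hard limits on open file descriptors
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Raise the soft limit to the hard limit for this process only,
# then restore it once the file-heavy work is done.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
try:
    pass  # ... open many files here ...
finally:
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

Only the soft limit is adjustable without privileges; raising the hard limit requires root.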
This is a fully PHP-based image converter; it needs no external library, only the GD extension. I use it at https://portcheckertool.com/image-converter. Here is the code:
<?php
/**
 * Image Converter (All Types)
 * Supported formats: jpg, jpeg, png, gif, webp, bmp
 * Usage: image_converter.php?file=uploads/picture.png&to=jpg
 */
function convertImage($sourcePath, $targetPath, $format)
{
    // Detect file type
    $info = getimagesize($sourcePath);
    if (!$info) {
        die("Invalid image file.");
    }
    $mime = $info['mime'];
    switch ($mime) {
        case 'image/jpeg':
            $image = imagecreatefromjpeg($sourcePath);
            break;
        case 'image/png':
            $image = imagecreatefrompng($sourcePath);
            break;
        case 'image/gif':
            $image = imagecreatefromgif($sourcePath);
            break;
        case 'image/webp':
            $image = imagecreatefromwebp($sourcePath);
            break;
        case 'image/bmp':
        case 'image/x-ms-bmp':
            $image = imagecreatefrombmp($sourcePath);
            break;
        default:
            die("Unsupported source format: $mime");
    }
    // Save in target format
    switch (strtolower($format)) {
        case 'jpg':
        case 'jpeg':
            imagejpeg($image, $targetPath, 90);
            break;
        case 'png':
            imagepng($image, $targetPath, 9);
            break;
        case 'gif':
            imagegif($image, $targetPath);
            break;
        case 'webp':
            imagewebp($image, $targetPath, 90);
            break;
        case 'bmp':
            imagebmp($image, $targetPath);
            break;
        default:
            imagedestroy($image);
            die("Unsupported target format: $format");
    }
    imagedestroy($image);
    return $targetPath;
}

// Example usage via GET
if (isset($_GET['file']) && isset($_GET['to'])) {
    $source = $_GET['file'];
    $format = $_GET['to'];
    $target = pathinfo($source, PATHINFO_FILENAME) . "." . strtolower($format);
    $converted = convertImage($source, $target, $format);
    echo "✅ Image converted successfully!<br>";
    echo "👉 <a href='$converted' target='_blank'>Download $converted</a>";
}
?>
Requirements:
PHP GD extension enabled (php-gd).
Proper file permissions for saving converted images.
Maybe the quickest, worst way to solve it:
The executed query is too slow. Get a bigger machine with more CPU/RAM/network resources. Measure the query; get it down as small as possible. Ensure it has no downstream procedures, triggers, etc.; that's an obvious culprit even if it isn't the problem at this exact moment. If the query is long, break it up into smaller, cheaper queries and execute them to get intermediate results, following multi-threaded debugging best practices.
Questions
Honestly, it's impossible to know without any debugging info from the database; I'm sure there are docs online somewhere about how to get it. There's really not enough info here. What's your best theory after looking at the MySQL docs?
Does this process run on multiple machines? Can you list things you can rule out, like out of memory, a maxed-out CPU, or locked tables or rows?
Best Answer
I've come to learn from experience that the answer to "why does my code deadlock" is almost always: that's the way it was written. In the exceedingly rare case that it's a library issue, good luck getting that fixed if you're the only person with the problem; it just won't get prioritized. Unless you submit the fix!
In the latest versions, Mac Catalyst seems to be reported as MacOSX; this is from my testing building a project with Xcode (adding to Kiryl's answer).
In practice you can still run on a simulator even without the correct value; it matters mostly for App Store validation. So if you use third-party build tools and it's hard for you to differentiate between device and simulator targets for whatever reason (maybe you use a fixed Info.plist template), you can just hardcode iPhoneOS, XROS, etc.
I don't know if this is still an open question for you, but it may help someone else who notices the same issue.
My observation is based on ABB Automation Builder in simulation mode, and it is as follows:
Timer functions based on system time (SysTimeGetMs(), TON, etc.) have a limit of 35 minutes, 47 seconds, 484 milliseconds and 416 microseconds.
As you answered yourself, when the overflow occurs at the largest interval, this counter resets to 0 without any problem, so you don't need to worry as long as your timer interval is smaller than ~49 days.
But the simulator I ran behaves differently from the real hardware PLC, so I can only describe my observations of simulation mode here.
An alternative approach to work around this problem in simulation mode is to use the RTC, as below.
Another side effect of the timer is that as soon as you start the simulation it begins measuring time, so the counter creeps closer to that ~35-minute overflow point every minute. If you need to test a timer longer than 35 minutes, the only option is to close ABB Automation Builder and start it again. CODESYS 3.5 behaves differently in simulation mode: it has the ~49-day limit, though I did not actually test whether it resets itself there as expected.
--------------------------------------------
VAR
    dwErrorCode : DWORD;
    rtNow : UDINT;
END_VAR

rtNow := SysTimeRtcGet(pResult := dwErrorCode);
--------------------------------------------
With the RTC I did not observe any issues, so UTC epoch time was my workaround for this problem.
With additional logic you can measure the required time interval and take the necessary actions in the program.
The issue is likely with the ECS Task’s connection to RDS; you should configure the database Security Group so the Task can access the database port, ensure the Task is in a subnet that can reach the RDS, and test the connection with a simple container.
Importing the four Amazon root certs from https://www.amazontrust.com/repository/ into the trust store fixed it for me. -Djavax.net.debug=all or -Djavax.net.debug=ssl helped to see detailed logs.
I just ran into this issue and couldn't find any answers online. In my particular case, it was because I'd written a script to look at specific pixels on screen. I'd forgotten I'd changed my resolution to 1920x1080, so it was trying to view pixels that were outside of the screen (like 3000x2000) and was providing this error. Changing my resolution back to 3840x2160 has resolved the error. I of course could have modified my script as well.
I think that, by default, this feature doesn't exist unless you mirror the CodeCommit repository to GitHub.
I'm also looking into this, and something I'm testing right now is passing the --env-var flag to Newman's CLI:
newman run file.json --env-var "ApiKey=${test}"
For me, the only solution was to add this step to xcodebuild in the CI config:
xcodebuild -downloadPlatform iOS
Thanks to this message.
Declare another @Component for your closeAdvertisement method. Inject that component into this test class.
Why did everyone answer this question using Expo? I don't use Expo, and I'm getting this error. If I'm making a simple mistake, please forgive me. I use Firebase in every project, but I'm tired of constantly getting this error on version 0.81:
"Native module RNFBAppModule not found. Re-check module install, linking, configuration, build, and install steps."
The problem is that ImageTool expects success at the top level, not inside data. Return this from uploadByFile:
return {
    success: 1,
    file: {
        url: resp.data.data.file.url
    }
};
This will let ImageTool read success and handle the uploaded file correctly.
So yeah, it seems like it's a memory issue; thanks to @KellyBundy for the suggestion that I run MemTest86.
There were so many errors in the test that it just gave up when it hit 100,000, which is odd, because the system boots just fine and I've never had a problem with crashes (which is why I didn't immediately suspect a hardware problem). Even the simulations usually run fine until they reach a certain size. But the memory test showed a multitude of single-bit errors, always in the first two bytes. I'm not that experienced with this kind of problem, but I tested each of the four modules in each of the four DIMM slots individually, and they all failed every time, so I think it's probably either a PSU problem or a bad memory controller on the CPU; until I can find a known-good PSU to swap in, I won't know which (I don't have access to a PSU tester). For reference, this is 128GB of non-ECC UDIMM, which in hindsight may have been a little ambitious. The CPU is a Ryzen 9 3900X.
You have
result = 1
final = result + 1
so final will always be 2
Did you mean
final = final + 1 ?
If we want to see the general concept of underfitting and overfitting:
Underfitting
In underfitting, the results are inaccurate because the model is too simple (or undertrained) to capture the pattern in the data; it performs poorly even on the training set.
Overfitting
In overfitting, the results are inaccurate on new data because the model fits the training data too closely, noise included; it looks accurate during training but fails to generalize.
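A toy illustration in plain Python (data and models invented for the example): an underfit model that is too simple to track the pattern, an overfit model that just memorizes the training pairs, and one that captures the actual rule:

```python
# Training data follows the rule y = 2x, for x in 0..4
train = {x: 2 * x for x in range(5)}
test = {10: 20, 11: 22}             # unseen points from the same rule

underfit = lambda x: 4              # too simple: always predicts the training mean
overfit = train.get                 # memorizes training pairs, knows nothing else
good = lambda x: 2 * x              # captures the actual pattern

def err(model, data):
    """Count the points where the model's prediction is wrong."""
    return sum(1 for x, y in data.items() if model(x) != y)

print(err(underfit, train), err(underfit, test))  # wrong almost everywhere
print(err(overfit, train), err(overfit, test))    # perfect on train, lost on test
print(err(good, train), err(good, test))          # generalizes
```

The overfit model's signature is exactly the pattern to watch for: zero training error, large test error.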
ChatGPT offered me some other suggestions that finally worked - I closed Visual Studio, emptied the bin and obj folders, and reopened the project. Then I switched everything over to the Assembly references and not the COM references, using Assemblies > "office" (15.0.0.0). That finally resolved the error and let me build and publish the project.
You do not need to implement actual classes.
@Suppress("KotlinNoActualForExpect")
expect object AppDatabaseConstructor : RoomDatabaseConstructor<AppDatabase> {
    override fun initialize(): AppDatabase
}
You can follow the implementation steps at: https://developer.android.com/kotlin/multiplatform/room
Remove the line - frontend_nodes_modules:/app/node_modules entirely, or make it an anonymous volume: - /app/node_modules.
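For reference, the anonymous-volume variant looks like this in docker-compose (service and path names are examples):

```yaml
services:
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app    # bind-mount the source tree
      - /app/node_modules  # anonymous volume keeps the image's node_modules visible
```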
In my case it was because when defining the client I had testnet=True which doesn't seem to work for futures trading (it's specified in the docstring that this parameter is only currently available for Vanilla options). Removing this parameter solved the issue.
There are multiple ways to apply tint/accent colors to view hierarchies. The method that worked for my document-based app was setting a Global Accent Color in my project's Build Settings. I like this approach because it allows the user to override the accent color on their device if they so desire.
I have been trying to implement this same thing for the last 100 hours, but I'm unable to do it. My code uses the same BeginScope syntax with a dictionary of key-value pairs, but I can't see any custom dimensions in Application Insights. Can someone please help?
It's a very old post, but I needed this as well.
I knew I had made quite a big query earlier today and forgot to save it, and when I came back to my PC, I saw it had been shut off by my son.
I've been looking for a while, but finally found a solution. Here are the steps to find any "lost" query (.sql) made in SQL Server Management Studio within the past 7 days (the default; it can be increased to 30).
I could not find the path where SSMS stored my unsaved queries, so I did the following:
Open SSMS.
Go to Tools > Options > Environment > AutoRecover, set it to 1 minute and 30 days. OK and close.
Create a new query (select * from dbName.schema.tablename) and wait 1 minute (or whatever time is set in AutoRecover).
Force-close SSMS by ending the task or killing the ssms.exe process via Task Manager.
Relaunch SSMS as usual, and you should get a popup asking you to recover your previous query/queries.
There you can see the path where SSMS stores the autorecover files. In my case it was
C:\Users\Admin\AppData\Local\Microsoft\SSMS\BackupFiles\Solution1
Open this folder in Windows Explorer and you'll see all kinds of recovered-monthname-day-year-xxxx.sql files.
If you don't want to open the .sql files one by one to find the query you need, use a tool like Notepad++ (or SSMS itself) with Ctrl+F and "search in files". Specify the directory from above and search for a table name or something specific you remember from the query you're looking for.
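The search-in-files step can also be scripted; a small sketch in Python (the folder path and search term are just examples, adapt to your machine):

```python
from pathlib import Path

def find_sql_files(folder, needle):
    """Return the recovered .sql files whose text contains `needle` (case-insensitive)."""
    hits = []
    for path in Path(folder).rglob("*.sql"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if needle.lower() in text.lower():
            hits.append(path)
    return hits

# Example (path from the answer above):
# for p in find_sql_files(r"C:\Users\Admin\AppData\Local\Microsoft\SSMS\BackupFiles", "tablename"):
#     print(p)
```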
Enjoy! <3
Although @CodeSmith is absolutely right about the ineffectiveness of such approaches in safeguarding your code, I'm personally not a fan of digging into why a user might want to do something when it comes to programming questions, so I'll get straight to the answer.
As of now, keyCode is deprecated, unfortunately. The best alternative in my opinion is code, which has its own limitations too. As mentioned in the official documentation:
“The KeyboardEvent.code property represents a physical key on the keyboard (as opposed to the character generated by pressing the key).”
With that in mind, here's a workaround to block both F12 and Ctrl+Shift+i key combinations.
window.addEventListener('keydown', function(event) {
    if (event.code === 'F12') {
        event.preventDefault();
        console.log('Blocked F12');
    }
    if (event.shiftKey && event.ctrlKey && event.code === 'KeyI') {
        event.preventDefault();
        console.log('Blocked Ctrl + Shift + i');
    }
});
Thanks to KJ for pointing me in the right direction!
A coworker wrote up a different way to fill the fields using pdfrw; example below:
from pdfrw import PdfReader, PdfWriter, PdfDict, PdfObject, PdfName, PageMerge

def fill_pdf_fields(input_path, output_path):
    pdf = PdfReader(input_path)
    # Ensure the viewer regenerates appearances
    if not pdf.Root.AcroForm:
        pdf.Root.AcroForm = PdfDict(NeedAppearances=PdfObject('true'))
    else:
        pdf.Root.AcroForm.update(PdfDict(NeedAppearances=PdfObject('true')))
    for page in pdf.pages:
        annotations = page.Annots
        if annotations:
            for annot in annotations:
                if annot.Subtype == PdfName('Widget') and annot.T:
                    field_name = str(annot.T)[1:-1]
                    if field_name == "MemberName": annot.V = PdfObject('(Test)')
                    if field_name == "Address": annot.V = PdfObject('(123 Sesame St)')
                    if field_name == "CityStateZip": annot.V = PdfObject('(Birmingham, AK 12345-6789)')
                    if field_name == "Level": annot.V = PdfObject('(1)')
                    if field_name == "OfficialsNumber": annot.V = PdfObject('(9999999)')
                    if field_name == "Season2": annot.V = PdfObject('(2025-26)')
                    if field_name == "Season1": annot.V = PdfObject('(2025-2026)')
    PdfWriter().write(output_path, pdf)
    print(f"Filled PDF saved to: {output_path}")

def flatten_pdf_fields(input_path, output_path):
    template_pdf = PdfReader(input_path)
    for page in template_pdf.pages:
        annotations = page.Annots
        if annotations:
            for annot in annotations:
                if annot.Subtype == PdfName('Widget') and annot.T and annot.V:
                    # Remove the interactive field appearance
                    annot.update({
                        PdfName('F'): PdfObject('4'),  # Make field read-only
                        PdfName('AP'): None            # Remove appearance stream
                    })
        # Flatten the page by merging its own content (no overlay)
        PageMerge(page).render()
    PdfWriter(output_path, trailer=template_pdf).write()
    print(f"Flattened PDF saved to: {output_path}")

if __name__ == "__main__":
    # Example paths; adjust to your own files
    template_pdf = "template.pdf"
    filled_pdf = "filled.pdf"
    flattened_pdf = "flattened.pdf"
    fill_pdf_fields(template_pdf, filled_pdf)
    flatten_pdf_fields(filled_pdf, flattened_pdf)
I researched interactions with NeedAppearances and found this Stack Overflow post:
NeedAppearances=pdfrw.PdfObject('true') forces manual pdf save in Acrobat Reader
The answer provides a code snippet that, from what I can tell, acts as a reader generating those appearance streams so the filled-in fields actually show their contents.
Code snippet for reference:
from pikepdf import Pdf

with Pdf.open('source_pdf.pdf') as pdf:
    pdf.generate_appearance_streams()
    pdf.save('output.pdf')
In the end my extractor was correct... after updating from Axum 0.8 to Axum 0.9, everything worked as expected. As far as I've understood, Axum 0.8 does not allow mixing multiple FromRequestParts with a FromRequest in the same handler.
The issue lay elsewhere, not entirely in the docker-compose file. The real problem was the base images I was using: as they are minimal, they don't include the curl command, and therefore the healthcheck was failing. The solution was simply to install curl in the containers via the Dockerfiles.
The base images I was using were python:3.13-slim and node:24-alpine, in case this is useful for someone.
The solution was to add:
In the python:3.13-slim Dockerfile:
RUN apt-get update && apt-get install -y curl
In the node:24-alpine Dockerfile:
RUN apk add --no-cache curl
Then I had to change the port of the healthcheck for the aiservice, because although the port I expose is 8081, internally the app is running on port 8080, so the healthcheck ended up looking like:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080/health"] # Note that the port has changed!
  interval: 10s
  timeout: 30s
  retries: 5
The key was running the docker ps command, as Alex Oliveira and sinuxnet had stated. With that I could see that the containers that started were flagged as unhealthy.
The port issue was discovered thanks to some comments in the staging area, even before the post was made public. Hat tip to wallenborn, who posted that comment there.
PS: I'm sure this post could be phrased better, but it's my first time posting something; I'll try to update it to make it more readable.
Try: bypass_sign_in(user) *
I was also having issues with the config settings not seeming to work, and found that sign_in(user, bypass: true) was deprecated eons ago. See: https://github.com/heartcombo/devise/commit/2044fffa25d781fcbaf090e7728b48b65c854ccb
* This may not solve your root issue, but it should address the most immediate one.
I provided an answer in another similar question: https://stackoverflow.com/a/79761592/15891701
Here's a repeat of that answer:
Conclusion: a Blazor Server app launched from within WPF behaves identically to a Blazor Server project created directly in Visual Studio. This means that launching without a `launchSettings.json` file during debugging causes the "{PACKAGE ID/ASSEMBLY NAME}.styles.css" file to fail to generate. You can also notice this mirrors the effect of double-clicking the exe in the Debug directory of an ASP.NET Core project.
So just create a launchSettings.json file in the Properties folder with content like this, and debugging will work correctly:
{
    "profiles": {
        "YourProjectName": {
            "commandName": "Project",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        }
    }
}
Perfect ✅ I’ll give you the full Python (ReportLab) code so you can generate your luxury-style P.M.B Visionary Manifesto PDF on your own system.
This script:
Uses your logo in the header/footer.
Splits your expanded manifesto into 6–7 pages.
Adds luxury-style colors (green, blue, light gold).
Keeps the layout clean and professional.
🔹 Python Code (ReportLab)
Save this as pmb_manifesto.py and run with python pmb_manifesto.py:
from reportlab.lib.pagesizes import A4
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, PageBreak, Image
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.enums import TA_CENTER, TA_JUSTIFY
from reportlab.lib import colors
# ========= CONFIG =========
logo_path = "pmb_logo.png" # <-- Replace with your logo file path
output_pdf = "PMB_Visionary_Manifesto.pdf"
# ==========================
# Create document
doc = SimpleDocTemplate(output_pdf, pagesize=A4)
styles = getSampleStyleSheet()
# Custom styles
title_style = ParagraphStyle(
    name="TitleStyle",
    parent=styles["Title"],
    alignment=TA_CENTER,
    fontSize=22,
    textColor=colors.HexColor("#004d26"),
    spaceAfter=20
)
heading_style = ParagraphStyle(
    name="HeadingStyle",
    parent=styles["Heading1"],
    fontSize=16,
    textColor=colors.HexColor("#004d80"),
    spaceAfter=12
)
body_style = ParagraphStyle(
    name="BodyStyle",
    parent=styles["Normal"],
    fontSize=12,
    leading=18,
    alignment=TA_JUSTIFY,
    spaceAfter=12
)
tagline_style = ParagraphStyle(
    name="TaglineStyle",
    parent=styles["Normal"],
    alignment=TA_CENTER,
    fontSize=14,
    textColor=colors.HexColor("#bfa14a"),  # light gold accent
    spaceBefore=30
)
# Story
story = []
# Cover Page
story.append(Image(logo_path, width=150, height=150))
story.append(Spacer(1, 20))
story.append(Paragraph("🌿 P.M.B (Pamarel Marngel Barka)", title_style))
story.append(Paragraph("Visionary Manifesto", heading_style))
story.append(Spacer(1, 60))
story.append(Paragraph(
"At P.M.B, we believe that agriculture is the backbone of society, nurturing not just bodies, "
"but communities and futures. Our fields of rice, soya beans, and corn are more than just sources of sustenance; "
"they represent life, dignity, and hope.", body_style
))
story.append(PageBreak())
# Core Values
story.append(Paragraph("Our Core Values", heading_style))
story.append(Paragraph("<b>Integrity:</b> We operate with transparency, honesty, and ethics in all our dealings.", body_style))
story.append(Paragraph("<b>Sustainability:</b> We prioritize environmentally friendly practices, ensuring a healthier planet for future generations.", body_style))
story.append(Paragraph("<b>Quality:</b> We strive for excellence in every aspect of our business, from farming to delivery.", body_style))
story.append(Paragraph("<b>Compassion:</b> We care about the well-being of our customers, farmers, and the broader community.", body_style))
story.append(PageBreak())
# Promise
story.append(Paragraph("Our Promise", heading_style))
story.append(Paragraph(
"We promise to deliver produce that is not only fresh and of the highest quality but also grown and harvested with care and integrity. "
"We strive to create a seamless bridge between nature's abundance and people's needs, ensuring that our products nourish both body and soul.", body_style
))
story.append(PageBreak())
# Purpose
story.append(Paragraph("Our Purpose", heading_style))
story.append(Paragraph(
"At P.M.B, we recognize that our role extends far beyond the boundaries of our business. We believe that every grain we grow carries a responsibility – "
"to the land, to our farmers, to our customers, and to the wider community. That's why we dedicate 5% of our profits to supporting the homeless and vulnerable.", body_style
))
story.append(PageBreak())
# Spirit
story.append(Paragraph("Our Spirit", heading_style))
story.append(Paragraph(
"We embody a unique blend of luxury and humility, playfulness and professionalism, modernity and tradition. "
"Our approach is rooted in the rich soil of our agricultural heritage, yet we are always looking to the future, embracing innovation and creativity.", body_style
))
story.append(PageBreak())
# Vision
story.append(Paragraph("Our Vision", heading_style))
story.append(Paragraph(
"Our vision is to become a symbol of sustainable abundance, empowering communities, impacting lives, and proving that business can be both prosperous and compassionate. "
"We envision a future where agriculture is not just a source of food, but a force for good, driving positive change and uplifting those in need.", body_style
))
story.append(PageBreak())
# Goals
story.append(Paragraph("Our Goals", heading_style))
story.append(Paragraph("<b>Sustainable Growth:</b> To expand our operations while maintaining our commitment to environmental sustainability and social responsibility.", body_style))
story.append(Paragraph("<b>Community Engagement:</b> To deepen our connections with local communities, supporting initiatives that promote food security, education, and economic empowerment.", body_style))
story.append(Paragraph("<b>Innovation:</b> To stay at the forefront of agricultural innovation, adopting new technologies and practices that enhance our productivity and sustainability.", body_style))
# Closing Tagline
story.append(Spacer(1, 30))
story.append(Paragraph("🌿 P.M.B – Freshness in Every Harvest, Hope in Every Heart 🌿", tagline_style))
# Build
doc.build(story)
print(f"PDF created: {output_pdf}")
📌 Instructions
Save your logo as pmb_logo.png in the same folder.
Copy-paste the script above into pmb_manifesto.py.
Run:
python pmb_manifesto.py
It will generate PMB_Visionary_Manifesto.pdf with your brand styling.
👉 Do you also want me to show you how to add a faint luxury-style watermark background (abstract green/blue waves & leaf motifs) behind all pages, so the PDF feels like a real corporate booklet?
Use 127.0.0.1 instead of localhost in pgAdmin.
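The usual reason this helps: localhost may resolve to the IPv6 address ::1 first while the server only listens on IPv4 (or the reverse). A quick diagnostic with the standard library:

```python
import socket

# What does this machine resolve "localhost" to for the Postgres port?
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 5432)}
print(addrs)  # may contain '127.0.0.1', '::1', or both

# '127.0.0.1' is unambiguous: always IPv4 loopback
v4 = {info[4][0] for info in socket.getaddrinfo("127.0.0.1", 5432)}
```

If addrs shows ::1 but the server's listen_addresses only covers IPv4, the explicit 127.0.0.1 sidesteps the mismatch.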
Why is your goal to re-trigger the User Event script after creation, instead of just having it run on creation in the first place? It seems to me that your UE script is filtered to the Edit event type, either on the deployment or within the script itself.
I would check the deployment first and see if there is a value in the "Event Type" field on the deployment.
Alternatively, you can search for usage of context.UserEventType within the script, as this is what would be used to filter the script to run under certain contexts. See the context.UserEventType help article for the list of enum values, but it would likely be either if (context.type != context.UserEventType.CREATE) or if (context.type == context.UserEventType.EDIT).
Like this:
$this->registerJs(
    $this->renderFile('@app/views/path/js/jsfile.js'),
    $this::POS_END
);
Did you manage to fix it? I'm running into a similar error.
If you are installing using the .spec file, you can add PIL._tkinter_finder as a hidden import:
a = Analysis(
    ...
    hiddenimports=['PIL._tkinter_finder'],
)
That solved the issue for me.
It looks like a PostgreSQL permissions issue. Please check the permissions granted to that user.
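If it is a permissions problem, granting the usual privileges looks something like this (the role and schema names are placeholders; adapt them to your setup before running):

```sql
-- Hypothetical role/schema names
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Inspect current grants (\dp in psql does the same):
SELECT grantee, privilege_type, table_name
FROM information_schema.table_privileges
WHERE table_schema = 'public';
```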
So the answer is that the Qt folks treated the appearance of icons in menus as a bug on macOS and "fixed" it in 6.7.3.
There is a sentence in the release notes:
https://code.qt.io/cgit/qt/qtreleasenotes.git/about/qt/6.7.3/release-note.md
For iOS, I've tried the pip package and it worked quite well. See the example code for more info.
Put the annotations in an abstract parent class in another package? You will need one per Domain Object.
I've prepared a version for you that's ready to copy into Word or Google Docs, with a table and colors, so you can easily export it to PDF:
---
Lesson Summary: Negation in English
1️⃣ Negation with He / She / It
Rule: doesn't + base verb
The verb after doesn't does not take -s
Examples:
He doesn't play football.
She doesn't like apples.
He doesn't read a book.
2️⃣ Negation with I / You / We / They
Rule: don't + base verb
Examples:
I don't like tea.
You don't play tennis.
We don't read a story.
They don't watch a movie.
3️⃣ Important notes
With He / She / It: in the affirmative the verb takes -s, but in the negative it's doesn't + the base verb without -s.
With I / You / We / They: the verb always stays in its base form after don't.
Repeat each sentence aloud 3 times to cement the rule.
4️⃣ Daily practice tip
Write 5-10 negative sentences every day about yourself or your friends.
Use the sentences in your daily English conversation, even simple ones.
The copy-ready version for Word or Google Docs is done.
Go to https://github.com/settings/copilot and turn on “Copilot Chat in IDEs”.
In Visual Studio, re-sign in via Extensions → GitHub Copilot → Sign in and authorize the Chat app in your browser.
Restart Visual Studio and reopen the Copilot Chat window.
You also need to change the .DotSettings file.
Same issue. Any ideas? I did install the Fortran compiler and also have the requisite Xcode tools. brew installing gettext did not help.
Thanks to the onlyIf hint from @Cisco, I managed to create this helper method, which now lives inside my convention plugins:
fun allExistingArtifactChecksumsInRepositoryMatch(
    repository: MavenArtifactRepository,
    publication: MavenPublication,
): Boolean {
    //...
    return when (val repositoryScheme = repositoryURI.scheme) {
        "http", "https" -> {
            // Create the HttpClient and use it to fetch existing checksums
        }
        "file" -> {
            // Use Path objects to do the same thing
        }
        else -> {
            // Maybe add support later, if necessary
            println("Unsupported repository scheme $repositoryScheme")
            false
        }
    }
}
After fetching the existing checksums, compute checksums of new artifacts using DigestOutputStream and compare them.
A somewhat important detail seems to be to do this:
val artifactsToCheck = when (publication) {
    is MavenPublicationInternal -> publication.asNormalisedPublication().allArtifacts
    else -> publication.artifacts
}
This downcast to an internal Gradle API seems to be necessary to also check metadata artifacts. This is valid for Gradle 8.13.
Add these lines to wp-config.php:

define('COOKIE_DOMAIN', $_SERVER['HTTP_HOST']);
define('ADMIN_COOKIE_PATH', '/');
define('COOKIEPATH', '/');
define('SITECOOKIEPATH', '/');
The answer was to split out the creation of the app service plans and child web apps into two entirely separate deployment modules, rather than have both app service plans and web apps created in the same deployment module.
Did you find a solution to this ?
It turned out this was a combination of issues that was causing the problem. There were some missing imports and the new standalone: true structure caused another issue. Once those were cleaned up, the code started running.
I'm trying to pay DSTV; the payment is confirmed but not going through, and I didn't get a confirmation of payment. What can I do?
Set the LD_LIBRARY_PATH environment variable to the path containing your .so files before executing the application.
You can display only the BIN and the last 4 digits of the card.
Note that Mastercard/Visa now issue 8-digit BINs.
I don't think displaying 12 digits is considered PCI compliant.
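As a sketch, masking down to BIN + last four might look like this (maskPan is a hypothetical helper, and whether a 6- or 8-digit BIN is acceptable to display is a question for your PCI assessor):

```javascript
// Mask a PAN down to the BIN plus the last 4 digits.
// binLength is 6 for legacy BINs, 8 for the newer 8-digit BINs.
function maskPan(pan, binLength = 6) {
  const digits = pan.replace(/\D/g, '');
  const masked = '*'.repeat(Math.max(0, digits.length - binLength - 4));
  return digits.slice(0, binLength) + masked + digits.slice(-4);
}

console.log(maskPan('4111111111111111'));    // '411111******1111'
console.log(maskPan('4111111111111111', 8)); // '41111111****1111'
```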
Are you using .browserslistrc?
I had a similar issue caused by an old browser target producing compilation warnings.
I updated .browserslistrc with:
not ios_saf <= 12
You need to download the following files and place them in the folder where you are running the PowerShell script:
git-secrets
secrets.1
| header 1 | header 2 |
|---|---|
| Aryan | Kumar |
| Manju | Devi |
My guess is that you are not running the Keycloak server over HTTPS. Your client will then not accept the cookie coming from the server. Either use HTTPS or make the API call with {credentials: 'include'}; in Angular this would be { withCredentials: true }.
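For illustration, a small sketch (withCreds is a hypothetical helper; the fetch and Angular calls are shown as comments because they need a browser and a running server):

```javascript
// Merge the credentials flag into an existing fetch init object so
// cross-origin requests carry the Keycloak session cookie.
function withCreds(init = {}) {
  return { ...init, credentials: 'include' };
}

// Browser usage (not executed here):
//   fetch(tokenUrl, withCreds({ method: 'POST' }));
// Angular equivalent:
//   this.http.get(url, { withCredentials: true });

console.log(withCreds({ method: 'POST' }));
```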
The answer from "didzispetkus" works for me.
FWIW: you could implement something very much like what you want, using gcc or llvm on *nix. Not sure about windows - never looked into it.
In particular: you can step through your instrumented executable in your debugger - and explicitly dump coverage data via the relevant callback.
However: the performance you see (from coverage callback to display update) may be poor for various reasons which may or may not be easy to address.
A bigger issue is that this seems like a pretty unusual use model - so it is unlikely that a vendor would implement it.
I confess I don't understand what you are trying to do, or what questions you want to answer such that your proposal is the best approach.
Not sure if this is going to help you but if the map is static you could render it using MKMapSnapshotter and display the image instead. That helps. I do this in my app as well.
One other idea could be to try UIKit's MKMapView instead of SwiftUI's Map. I haven't tried that myself.
I believe that one of the comments is correct and I want to elevate it.
I would think that it's the route() function in your blade file that's complaining, not anywhere else. You have cam_id there, but does $item['id'] have a value?
I've run into this mysterious error before; it turned out a null value was being passed in a route() call. Even if the key is specified, if it doesn't have a value, it counts as missing.
Not sure when this changed, but it is now possible to add Composer packages from your module using the extension framework.
Follow the instructions on this page.
I also had to downgrade the Jupyter extension to version 2025.6 instead of 2025.7. I have VS Code version 1.103.2 on macOS. After that, the list of Python Environments was loading correctly.
We've run into the same issue in Safari and WebViews. After the keyboard is shown and then dismissed, the position of position: fixed elements becomes incorrect. It seems that the appearance of the keyboard is messing up the viewport's positioning.
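One mitigation we've experimented with (a sketch, not a guaranteed fix) is to recompute the keyboard inset from window.visualViewport and re-anchor the fixed element on every viewport resize; the inset math is pure, so it is factored out here:

```javascript
// Height stolen from the visual viewport by the on-screen keyboard;
// zero once the keyboard is dismissed.
function keyboardInset(layoutHeight, visualHeight) {
  return Math.max(0, layoutHeight - visualHeight);
}

// Browser wiring (assumes visualViewport support; not executed here):
//   window.visualViewport.addEventListener('resize', () => {
//     const inset = keyboardInset(window.innerHeight, window.visualViewport.height);
//     bottomBar.style.transform = inset ? `translateY(-${inset}px)` : '';
//   });

console.log(keyboardInset(800, 500)); // 300 while the keyboard is up
console.log(keyboardInset(800, 800)); // 0 after it is dismissed
```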
This is when the marshall comes in and does not want to leave. The only way to get him to leave is to throw objects at him. This process is called marshalling objects.
Also, do not forget to declare your new entities in your entity reference XML file (entity.xml or other, depending on your frameworks).
One missing entity in this file (the store entity in this example) will cause this confusing error.
Hibernate will not find the "customer" mappedBy field in the store entity because the store entity is simply unknown, even though everything else in the relationship mapping is correct.
Fixed by changing publishing to a direct path:
msstore publish "{DIRECT_URL_TO_MSIX}" -v -id {APP_ID}
A way to suppress the warning (that the snippet in the OP would've raised) is to put a check directive at the top of the Dockerfile:
# syntax=docker/dockerfile:1
# check=skip=InvalidDefaultArgInFrom
It seems that even in recent versions the order matters. The first line shows type warnings in PyCharm 2025.1.3, Python 3.13, SQLAlchemy 2.0.43; the second doesn't:
.filter(MusicLibrary.id == request.id)
.filter(request.id == MusicLibrary.id)
There was an issue with PyCharm that is now closed: https://github.com/sqlalchemy/sqlalchemy/issues/9337 ("Pycharm fixed it at 2024.2.2", posted 2024-10-16).
As I can see, in the question asked by Vic the comparison was also "table object.table field" compared to a simple type:
.where(Albums.Id > user_last_sync_id)
I guess that just flipping the comparison fixes the type warnings.
Please correct me if I'm wrong.
Cheers
You cannot use `this` there because you are in a static method.
Otherwise, `uploadWindow.Owner = this;` would then make it modal.
Just use the MOD function with 1 followed by as many zeros as the number of digits you want to keep. For example:

UPDATE Doctors
SET id = MOD(id, 10);  -- keeps the last digit (10 has one zero)

UPDATE Doctors
SET id = MOD(id, 100); -- keeps the last 2 digits (100 has two zeros)
I have tried all the options and suggestions here, but my Prettier code formatter is still not working. It started just 2 days ago; I'm really exhausted, to be honest...
I have received the text about my account too. How can I exit the app, or what is it? I don't know much about leaks or hacking... can you guys help me on this? Thank you.
Your NG0201: No provider found for _HttpClient error is a classic dependency chain issue! 🔍
The problem: Your StructuredSearchService needs HttpClient, but Spectator's mockProvider() doesn't handle transitive dependencies automatically.
Here's how @halverscheid-fiae.de/angular-testing-factory solves this:
// Spectator setup hell with dependency chain issues
const createService = createServiceFactory({
service: SearchService,
providers: [
provideHttpClient(), // ← Unnecessary complexity
provideHttpClientTesting(), // ← More boilerplate
mockProvider(SimpleSearchService, {
simpleSearch: () => of({ items: [] }),
}),
mockProvider(StructuredSearchService, {
structuredSearch: () => of({ items: [] }),
}),
// Still fails because HttpClient dependency chain is broken! 💥
],
});
import { createServiceFactory } from '@ngneat/spectator/jest';
import { provideHttpClientMock, createCustomServiceProviderMock } from '@halverscheid-fiae.de/angular-testing-factory';
const createService = createServiceFactory({
service: SearchService,
providers: [
// 🎯 One-line HttpClient mock that handles ALL dependency chains
provideHttpClientMock(),
// 🛡️ Type-safe service mocks with compile-time validation
createCustomServiceProviderMock(SimpleSearchService, {
simpleSearch: jest.fn(() => of({ items: [] }))
} satisfies jest.Mocked<Partial<SimpleSearchService>>),
createCustomServiceProviderMock(StructuredSearchService, {
structuredSearch: jest.fn(() => of({ items: [] }))
} satisfies jest.Mocked<Partial<StructuredSearchService>>),
],
});
provideHttpClientMock() provides HttpClient for ALL dependent services.

// For quick setup, mock everything at once:
const createService = createServiceFactory({
service: SearchService,
providers: provideAngularCoreMocks({
httpClient: true, // ← Handles all HttpClient needs
}),
});
// Then override specific methods in your tests:
beforeEach(() => {
const httpMock = spectator.inject(HttpClient);
jest.spyOn(httpMock, 'get').mockReturnValue(of({ items: [] }));
});
npm install --save-dev @halverscheid-fiae.de/angular-testing-factory
Result: No more NG0201 errors, clean dependency resolution, type-safe mocks! 🎉
// If you still have provider issues, use the debug helper:
import { TEST_DATA } from '@halverscheid-fiae.de/angular-testing-factory';
beforeEach(() => {
console.log('Available providers:', spectator.inject(Injector));
// Helps identify missing dependencies
});
P.S. - Your NX + generated services struggle is exactly why I added the createCustomServiceProviderMock function! 😅
Hope this saves you from the NG0201 nightmare! 🎯
// TypeScript catches mock inconsistencies at compile-time!
const provideMyServiceMock = createServiceProviderFactory(MyService, {
registerUser: jest.fn(() => of(mockResponse)),
// ↑ TypeScript validates this matches your real service
} satisfies jest.Mocked<Partial<MyService>>);
npm install --save-dev @halverscheid-fiae.de/angular-testing-factory
// Replace your entire beforeEach setup with:
import { provideHttpClientMock } from '@halverscheid-fiae.de/angular-testing-factory';
TestBed.configureTestingModule({
providers: [provideHttpClientMock()]
});
P.S. - Your 6 hours of frustration inspired this exact use case in the library! 😅
provideRouterMock()
provideActivatedRouteMock({ params: of({ id: '1' }) })
provideFormBuilderMock()
provideAngularCoreMocks({ httpClient: true, router: true })

npm install --save-dev @halverscheid-fiae.de/angular-testing-factory
The goal is simple: Write tests, not mock configuration.
Structured media type suffixes such as +json are now formalized in https://datatracker.ietf.org/doc/html/rfc6838#section-4.2.8
No, don't use Task.Run(); it will just consume an extra thread.
You should use Task.Run() only when you must run CPU-bound synchronous code without blocking the main thread.
When using the scan function, we need to specify the ScanOptions count as well. ScanOptions.scanOptions() in Spring Data Redis does not set a count, so Redis falls back to its internal default, typically 10.

Flux<String> ids = this.reactiveRedisTemplate.scan(
    ScanOptions.scanOptions()
        .match("EGA_ITEM_*")
        .count(Integer.MAX_VALUE)
        .build());
My solution was to change the Analysis scope option ("Show compiler errors and warnings for") from Entire solution to something else, like Open documents or Current document.
You can find this setting in Tools → Options → Text Editor → C# → Advanced → Analysis.
If you’re trying to hook up Prisma data directly into a shadcn-style command menu, you might find this project useful: DataCommand. It’s built on top of shadcn/ui but adds loadItems and loadOneItem hooks so you can fetch command items from your database or API. That way, instead of hardcoding configs, you can just fetch posts dynamically and render them in the command palette.
New App
Review Time used to be at least 1 week. If there was a policy violation, it could take another week.
In 2025, review time has dropped significantly: in Sept 2025 my app went live within 24 hours, and two months earlier another app went live within 48 hours.
App Updates
Production and Beta both can take from 2-3 hours to 48 hours in 2025.
Internal Testing within seconds.
If an app update review is taking too long (3-4 days), just submit another update after bumping the app version. That fixed the issue for me.
If you just want to give the .apk file to other people for testing, upload the build, download the signed APK from the Play Store, and send it over WhatsApp or any other platform.
On iPad, the app icon requirements are slightly different from iPhone, so if you only provided iPhone sizes in your Asset Catalog, iOS will upscale whatever it finds. That’s why you’re seeing a blurry / generic looking icon on iPad.
Here’s what you need to check and fix:
In Xcode, go to Assets.xcassets → AppIcon.
By default, Xcode shows iPhone slots (60pt, 120pt, 180pt, etc.).
For iPad support, you need to switch the AppIcon’s device type.
👉 Select your AppIcon set in the asset catalog. In the right-side Attributes Inspector, under “Devices,” make sure both iPhone and iPad are checked.
Now you’ll see the iPad slots appear (20pt, 29pt, 40pt, 76pt, 83.5pt, 1024pt).
You’ll need to provide images at these sizes (in px):
iPad App Icon
20pt → 20x20 @1x, 40x40 @2x
29pt → 29x29 @1x, 58x58 @2x
40pt → 40x40 @1x, 80x80 @2x
76pt → 76x76 @1x, 152x152 @2x
83.5pt → 83.5x83.5 @2x → 167x167
App Store: 1024x1024 (no alpha, PNG)
If you’re missing, say, the 76pt or 83.5pt iPad icon, iOS will fall back to scaling the iPhone versions, which is what you’re seeing.
After adding the missing sizes, Clean Build Folder (⇧⌘K in Xcode).
Delete the app from the iPad simulator/device.
Re-run and check the new icons.
✅ After doing this, your iPad will display the correct crisp app icon instead of the fallback blurry one.
I don't think you need the exact pixel values, but if you do, reply back. Always happy to help. Have a good day.
Your 6-hour struggle is exactly why I built @halverscheid-fiae.de/angular-testing-factory! 🚀
Your core issue is mock setup complexity - you're spending more time fighting mocks than testing logic. Here's how the library solves this:
// Manual HttpClient mock hell
beforeEach(async () => {
mockCoreService = MockService(CoreService);
// + HttpClientTestingModule import
// + Manual spy setup
// + Mock return value configuration
// + Prayer that it works 🙏
});
beforeEach(async () => {
await TestBed.configureTestingModule({
providers: [
provideHttpClientMock({
post: jest.fn(() => of({ message: 'Registration successful!', status: 'success' }))
}),
// Your CoreService will automatically use the mocked HttpClient
]
}).compileComponents();
});
No more HttpClientTestingModule setup; satisfies jest.Mocked<Partial<T>> prevents mock drift; CoreService automatically gets the mocked HttpClient.

it('should submit form successfully', fakeAsync(() => {
// Form setup (unchanged)
component.contactFormGroup.setValue({
username: 'testuser',
fullname: 'Test User',
email: '[email protected]',
password: 'TestPass123!'
});
// Test execution (unchanged)
component.submitForm(new Event('submit'));
tick(100);
// Assertions work because HttpClient is properly mocked
const httpClientMock = TestBed.inject(HttpClient);
expect(httpClientMock.post).toHaveBeenCalledWith(
'http://localhost:3000/register',
expect.any(Object),
expect.any(Object)
);
}));
// TypeScript catches mock inconsistencies at compile-time!
const provideMyServiceMock = createServiceProviderFactory(MyService, {
registerUser: jest.fn(() => of(mockResponse)),
// ↑ TypeScript validates this matches your real service
} satisfies jest.Mocked<Partial<MyService>>);
npm install --save-dev @halverscheid-fiae.de/angular-testing-factory
// Replace your entire beforeEach setup with:
import { provideHttpClientMock } from '@halverscheid-fiae.de/angular-testing-factory';
TestBed.configureTestingModule({
providers: [provideHttpClientMock()]
});
Result: 80% less boilerplate, 100% more reliable tests!
provideRouterMock()
provideActivatedRouteMock({ params: of({ id: '1' }) })
provideFormBuilderMock()
provideAngularCoreMocks({ httpClient: true, router: true })

P.S. - Your 6 hours of frustration inspired this exact use case in the library! 😅
The goal is simple: Write tests, not mock configuration.
Hope this saves you (and others) those painful mock-setup hours! 🎉
The solutions using InputBindingBehavior did not work reliably for me. I believe this is due to the fact that the Loaded event does not guarantee that all data bindings are evaluated. The above did work for me on one control but did not work on another simply because the key bindings that were copied onto the parent window did not have a command.
What does seem to work reliably for me is to set the focus in Xaml like this:
<UserControl x:Class="MyApp.UI.Controls.FunctionButton"
...
Focusable="True"
FocusManager.FocusedElement="{Binding RelativeSource={RelativeSource Self}}"
>
...
</UserControl>
I've managed to get things working.
In the real code, I was emitting an event from the Stimulus controller after creating the chart but before drawing it. The resulting actions (adding indicators, etc.) were interfering with the rendering.
Qt’s QML engine and tooling expect meta object revisions in .qmltypes files to match their export version.
Your system-installed Qt QML modules (/usr/lib/x86_64-linux-gnu/qt5/qml/QtQuick.2/plugins.qmltypes) have exportMetaObjectRevisions: [0], but the tooling expects [2] (i.e., version 2.0).
This inconsistency triggers the warning in VS Code’s QML extension.
To fix or suppress it
Check Qt version compatibility
Make sure the Qt version used by VS Code extension matches the installed Qt version.
Run qmake -v or qtpaths --version to see your Qt version.
Check the VS Code QML extension documentation or settings for Qt version compatibility.
You can also try reinstalling Qt, since updating or reinstalling the Qt packages sometimes fixes these mismatches.
It turns out that [CaptureConsole] doesn't capture all output. It only captures console output from code running in the same thread as the unit test. If you use a TestHost, the code in the controllers ultimately runs in a separate thread, and that console output is not captured. I created a documentation issue: github.com/xunit/xunit/issues/3399