I am a little late, but I faced the same problem trying to expose the thanos-sidecar service to a remote Thanos querier.
Thanks to your help I managed to get grpcurl list to work, but unfortunately on the querier side I still see this error:
Did you find a way to make it work end to end?
I am also looking for an answer to the OP's question. Anything would be appreciated. :)
As other comments suggest, setting the height to 100% solves the issue but introduces many others. What I found is that it gets worse when the keyboard is opened and closed.
Another thing I noticed on my application's header, which has position: fixed but is being scrolled out of view, is that if I get the bounding client rect, the Y value is negative, so I tried something like this:
const handleScroll = () => {
  const headerElement = document.getElementById("app-header");
  const { y } = headerElement.getBoundingClientRect();
  if (y !== 0) {
    headerElement.style.paddingTop = `${-y}px`;
  }
};
window.addEventListener("scroll", handleScroll);
The problem here is that after roughly 800px of scrolling the Y value is 0 again while the element is still outside the screen, so this fix becomes useless.
I see this issue affecting multiple pages, from Apple's own website to this Stack Overflow page and basically every page with a fixed element on it, but I cannot find this issue being tracked by Apple. Is there any support page where this has been reported?
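For what it's worth, here is a sketch of an alternative workaround using the visualViewport API (the element id is taken from the snippet above); I have not verified that it survives the ~800px case:
const header = document.getElementById("app-header");
const vv = window.visualViewport;
const reposition = () => {
  // offsetTop = how far the visual viewport has slid inside the layout viewport
  header.style.transform = `translateY(${vv.offsetTop}px)`;
};
vv.addEventListener("resize", reposition);
vv.addEventListener("scroll", reposition);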
Online Text Comparison Tool is a free online diff checker with a clean, intuitive interface: it compares two versions of a text and highlights every change instantly, which is useful when editing documents, reviewing content, or checking for plagiarism.
You can also use column comments directly on the model property. If repeating the string is a problem for you, extract it into a common const:
const string usernameDescription = "This is the username";
[Comment(usernameDescription)]
[Description(usernameDescription)]
public string Username { get; set; }
In case there are future travelers: I found this to be because Kafka was disabled in my test configuration. Supposedly this stops the @Introduction method interceptor from being registered during application startup, which then causes this cryptic exception to be thrown.
There is at least one case when one SHOULD NOT squash: renaming files AND changing them "too much" will make git lose the link between the original file and the renamed one.
In this scenario one should:
rename the file (depending on the file type this might introduce slight changes to it, e.g. a Java file will also get changes to the class name) as one commit
do major changes to the file (e.g. refactoring) as a dedicated commit.
For code files I also prefer to separate formatting changes from actual changes, so that it is directly visible which part of the logic changed; this information would get buried within one big change when squashing (and it makes cherry-picking much easier).
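As a sketch, the two-commit flow could look like this (file names are hypothetical):
git mv OldName.java NewName.java
git commit -m "rename OldName to NewName"
# ...now apply the refactoring / content changes...
git commit -am "refactor NewName"
Keeping the rename commit small lets git's similarity detection keep tracking the file across the rename.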
sim - find similarities in C, Java, Pascal, Modula-2, Lisp, Miranda, 8086 assembler code or in text files
Run sim_text (or the appropriate utility for code) in the directory containing the files and it outputs a diff-like report. The Debian package is similarity-tester.
Can I share the widget JSON file? Thank you.
The minimal example was not representative of the issue I had.
- The first comment was right: without using ptr_global, the example is useless for debugging. When using ptr_global, the value will be set. The issue was identical, so I expected the example to be representative. Somehow it was, but this is hard to explain.
- Nevertheless, I was confused about accessing pointers and values in "data" and "prog" memory and mixed them up unintentionally. Now I use the functions of "avr/pgmspace.h" for access.
I want to answer this question, but I have a new account so I cannot answer where the question was posted.
Solution:
Run the following commands one by one in a terminal in your project directory:
flutter upgrade
flutter clean
flutter pub get
Then attach your physical iOS device and run.
You can't export like this; you need to import the component first and then export it.
Just found the solution, as explained in my comment on my original post!
The problem was that one of the IPs my server used by default had been temporarily blacklisted (because it is shared by many clients) on the platform where I deployed my backend (Render).
So I just added a specific IP-rotation system for those export requests and now it's working perfectly. Maybe try it out, or at least check your app's IP status!
This feature is not yet supported; see here.
I had the same problem and found this workaround by chance.
The guest Windows 11 got an Internet connection after these two steps (inside the guest VM):
Edit the classic properties of IP version 4
Set the DNS to the IP address of the router
It was related to this bug; using a space instead of an empty string for the back bar button title solved the problem: https://stackoverflow.com/questions/77764576/setting-backbuttondisplaymode-to-minimal-breaks-large-title-animation#:~:text=So%20we%20think,button%20title%20present%20at%20all
Maybe I should have asked sooner.
var invoiceService = new InvoiceService();
var invoice = await invoiceService.GetAsync(subscription.LatestInvoice.Id, new InvoiceGetOptions
{
    Expand = ["payments.data.payment.payment_intent"]
});
var clientSecret = invoice.Payments.Data.FirstOrDefault()?.Payment?.PaymentIntent?.ClientSecret;
This was my solution. If anybody from Stripe sees this, can you provide a better answer? And maybe update the C# examples: they are written for .NET 6 and we are getting .NET 10; you could also use minimal APIs now to mimic the Node style.
If you are having problems compiling a submodule (for example simple_knn) when using CUDA 11.8 and Visual Studio 2022, the issue is usually caused by an unsupported MSVC compiler version.
CUDA 11.8 officially supports MSVC 14.29 (VS2019) up to MSVC 14.34 (early VS2022). Newer compilers like MSVC 14.43 are not recognized and will trigger an unsupported-compiler error.
SOLUTION:
Open the Visual Studio Installer.
Under Visual Studio 2022, click Modify.
Go to the Individual components tab.
Search for: MSVC v143 - VS 2022 C++ x64/x86 build tools (14.34)
Install it.
Re-run pip install submodules/diff-gaussian-rasterization
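If the build keeps picking up the newest toolset even after installing 14.34, a sketch of forcing it from a developer command prompt (the installer path depends on your VS edition):
call "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=14.34
pip install submodules/diff-gaussian-rasterization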
If you don't mind the temporary working tree change:
git stash && git stash apply
It is 2025, we are awaiting Windows 12, and still we have applications that use Video for Windows API and should be maintained because they "have worked admirably for many years". So I was tasked to write a VfW codec for a novel video compression format.
To help developers like me master this relict technology, Microsoft supplies full reference documentation on the Video for Windows API. The section Using the Video for Windows, subsection Compressing Data, gives a detailed account of how to compress the input data but stops short of teaching how to write the compressed data to the AVI file. To rule out possible errors in my VfW codec, I tried to make an AVI file with RLE-compressed data, but equally failed: in every frame, the count of bytes written by AVIStreamWrite (returned via the plBytesWritten parameter) was a fixed value for all frames, greater than the dwFrameSize value returned by the ICCompress call, which I passed to AVIStreamWrite via the cbBuffer parameter. An Internet search on this problem immediately presented me with a reference to the SO post Is it possible to encode using the MRLE codec on Video for Windows under Windows 8? by David Heffernan. This post immediately solved my problem:
We do still need to create the compressed stream, but we no longer write to it. Instead we write RLE8 encoded data to the raw stream.
As this SO question-and-answer stops short of writing a real RLE8 encoder (`Obviously in a real application, you'd need to write a real RLE8 encoder, but this proves the point`), and being grateful for this helpful Q&A, I post a code excerpt that does use a real RLE8 encoder:
unsigned char* bits = new unsigned char[bmi->biSizeImage];
LPVOID lpInput = (LPVOID)bits;
HRESULT hr;
for (int frame = 0; frame < nframes; frame++)
{
    for (int i = 0; i < bmi->biSizeImage; ++i)
        bits[i] = (frame + 1) * ((i + 5) / 5);
    ICCompress(hIC, 0, lpbiOut, lpOutput, lpbiIn, lpInput,
               &dwCkID, &dwCompFlags, frame, bmi->biSizeImage, dwQuality, NULL, NULL);
    hr = AVIStreamWrite(pStream, frame, 1, lpOutput, lpbiOut->biSizeImage,
                        AVIIF_KEYFRAME, &lSamplesWritten, &lBytesWritten);
    if (hr != S_OK)
    {
        std::cout << "AVIStreamWrite failed" << std::endl;
        return 1;
    }
}
I'm going to propose replacing the comment line `// Write compressed data to the AVI file` in the Using the Video for Windows subsection Compressing Data with this code sample as soon as possible. For completeness, here is the full sample of how to write compressed data to the AVI file:
// runlength_encoding.cpp : This file contains the 'main' function.
// Program execution begins and ends there.
// based on learn.microsoft.com articles on Using the Video Compression Manager
// and SO post https://stackoverflow.com/questions/22765194/
// also see
// https://learn.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader
// why bmi size should be augmented by the color table size
// `However, some legacy components might assume that a color table is present.
// `Therefore, if you are allocating
// `a BITMAPINFOHEADER structure, it is recommended to allocate space for a color table
// `when the bit depth is 8 bpp or less, even if the color table is not used.`
//
#include <Windows.h>
#include <vfw.h>
#include <stdlib.h>
#include <iostream>
#pragma comment(lib, "vfw32.lib")
int main()
{
    RECT frame = { 0, 0, 64, 8 };
    int nframes = 10;
    const char* filename = "rlenc.avi";
    FILE* f;
    errno_t err = fopen_s(&f, filename, "wb");
    if (err)
    {
        printf("couldn't open file for write\n");
        return 0;
    }
    fclose(f);
    AVIFileInit();
    IAVIFile* pFile;
    if (AVIFileOpenA(&pFile, filename, OF_CREATE | OF_WRITE, NULL) != 0)
    {
        std::cout << "AVIFileOpen failed" << std::endl;
        return 1;
    }
    AVISTREAMINFO si = { 0 };
    si.fccType = streamtypeVIDEO;
    si.fccHandler = mmioFOURCC('M', 'R', 'L', 'E');
    si.dwScale = 1;
    si.dwRate = 15;
    si.dwQuality = (DWORD)-1;
    si.rcFrame = frame;
    IAVIStream* pStream;
    if (AVIFileCreateStream(pFile, &pStream, &si) != 0)
    {
        std::cout << "AVIFileCreateStream failed" << std::endl;
        return 1;
    }
    AVICOMPRESSOPTIONS co = { 0 };
    co.fccType = si.fccType;
    co.fccHandler = si.fccHandler;
    co.dwQuality = si.dwQuality;
    IAVIStream* pCompressedStream;
    if (AVIMakeCompressedStream(&pCompressedStream, pStream, &co, NULL) != 0)
    {
        std::cout << "AVIMakeCompressedStream failed" << std::endl;
        return 1;
    }
    BITMAPINFOHEADER bihIn, bihOut;
    HIC hIC;
    bihIn.biSize = bihOut.biSize = sizeof(BITMAPINFOHEADER);
    bihIn.biWidth = bihOut.biWidth = si.rcFrame.right;
    bihIn.biHeight = bihOut.biHeight = si.rcFrame.bottom;
    bihIn.biPlanes = bihOut.biPlanes = 1;
    bihIn.biCompression = BI_RGB;   // standard RGB bitmap for input
    bihOut.biCompression = BI_RLE8; // 8-bit RLE for output format
    bihIn.biBitCount = bihOut.biBitCount = 8; // 8 bits-per-pixel format
    bihIn.biSizeImage = bihIn.biWidth * bihIn.biHeight;
    bihOut.biSizeImage = 0;
    bihIn.biXPelsPerMeter = bihIn.biYPelsPerMeter =
        bihOut.biXPelsPerMeter = bihOut.biYPelsPerMeter = 0;
    bihIn.biClrUsed = bihIn.biClrImportant =
        bihOut.biClrUsed = bihOut.biClrImportant = 256;
    hIC = ICLocate(ICTYPE_VIDEO, 0L,
                   (LPBITMAPINFOHEADER)&bihIn,
                   (LPBITMAPINFOHEADER)&bihOut, ICMODE_COMPRESS);
    ICINFO ICInfo;
    ICGetInfo(hIC, &ICInfo, sizeof(ICInfo));
    DWORD dwKeyFrameRate, dwQuality;
    dwKeyFrameRate = ICGetDefaultKeyFrameRate(hIC);
    dwQuality = ICGetDefaultQuality(hIC);
    LPBITMAPINFOHEADER lpbiIn, lpbiOut;
    lpbiIn = &bihIn;
    DWORD dwFormatSize = ICCompressGetFormatSize(hIC, lpbiIn);
    HGLOBAL h = GlobalAlloc(GHND, dwFormatSize);
    lpbiOut = (LPBITMAPINFOHEADER)GlobalLock(h);
    ICCompressGetFormat(hIC, lpbiIn, lpbiOut);
    LPVOID lpOutput = 0;
    DWORD dwCompressBufferSize = 0;
    if (ICCompressQuery(hIC, lpbiIn, lpbiOut) == ICERR_OK)
    {
        // Find the worst-case buffer size.
        dwCompressBufferSize = ICCompressGetSize(hIC, lpbiIn, lpbiOut);
        // Allocate a buffer and get lpOutput to point to it.
        h = GlobalAlloc(GHND, dwCompressBufferSize);
        lpOutput = (LPVOID)GlobalLock(h);
    }
    DWORD dwCkID;
    DWORD dwCompFlags = AVIIF_KEYFRAME;
    LONG lNumFrames = 15, lFrameNum = 0;
    LONG lSamplesWritten = 0;
    LONG lBytesWritten = 0;
    size_t bmiSize = sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD);
    BITMAPINFOHEADER* bmi = (BITMAPINFOHEADER*)malloc(bmiSize);
    ZeroMemory(bmi, bmiSize);
    bmi->biSize = sizeof(BITMAPINFOHEADER);
    bmi->biWidth = si.rcFrame.right;
    bmi->biHeight = si.rcFrame.bottom;
    bmi->biPlanes = 1;
    bmi->biBitCount = 8;
    bmi->biCompression = BI_RGB;
    bmi->biSizeImage = bmi->biWidth * bmi->biHeight;
    if (AVIStreamSetFormat(pCompressedStream, 0, bmi, bmiSize) != 0)
    {
        std::cout << "AVIStreamSetFormat failed" << std::endl;
        return 1;
    }
    unsigned char* bits = new unsigned char[bmi->biSizeImage];
    LPVOID lpInput = (LPVOID)bits;
    HRESULT hr;
    for (int frame = 0; frame < nframes; frame++)
    {
        for (int i = 0; i < bmi->biSizeImage; ++i)
            bits[i] = (frame + 1) * ((i + 5) / 5);
        ICCompress(hIC, 0, lpbiOut, lpOutput, lpbiIn, lpInput,
                   &dwCkID, &dwCompFlags, frame, bmi->biSizeImage, dwQuality, NULL, NULL);
        hr = AVIStreamWrite(pStream, frame, 1, lpOutput, lpbiOut->biSizeImage,
                            AVIIF_KEYFRAME, &lSamplesWritten, &lBytesWritten);
        if (hr != S_OK)
        {
            std::cout << "AVIStreamWrite failed" << std::endl;
            return 1;
        }
    }
    if (AVIStreamRelease(pCompressedStream) != 0 || AVIStreamRelease(pStream) != 0)
    {
        std::cout << "AVIStreamRelease failed" << std::endl;
        return 1;
    }
    if (AVIFileRelease(pFile) != 0)
    {
        std::cout << "AVIFileRelease failed" << std::endl;
        return 1;
    }
    std::cout << "Succeeded" << std::endl;
    return 0;
}
The given solutions are wrong, as they will match the following and produce a wrong result:
ABCD
ABCDE
They will duly delete both ABCD strings and leave the E.
The correct solution is (obviously, first sort the whole file alphabetically):
^(.*)(\R\1)+\R
and replace with blank (i.e. nothing)
Here are the missing files for your Botanic Bazar e-commerce website. Copy them and save each one as a separate file ⬇️
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Botanic Bazar</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/main.jsx"></script>
</body>
</html>
package.json
{
"name": "botanic-bazar",
"version": "1.0.0",
"private": true,
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"lucide-react": "^0.452.0",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"@vitejs/plugin-react": "^4.2.1",
"tailwindcss": "^3.4.0",
"vite": "^5.2.0"
}
}
main.jsx
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";
import "./index.css";
ReactDOM.createRoot(document.getElementById("root")).render(
<React.StrictMode>
<App />
</React.StrictMode>
);
index.css
(Tailwind setup)
@tailwind base;
@tailwind components;
@tailwind utilities;
body {
font-family: sans-serif;
}
👉 What to do next:
Put all the files in one folder (e.g. botanic-bazar).
Open a terminal and run:
npm install
npm run dev
Open http://localhost:5173 in your browser and the site will start ✅
I faced the same issue today. Steps to solve:
Open a new VS Code window
Disable and remove the WSL extension from Visual Studio Code
Uncheck auto-update for the WSL extension
Click on the settings gear and install the older version
Fixed!
I had maybe similar SqlBuildTask failures, without detailed errors, on VS2019 and after renaming my projects and folders, so this may not be the same case, but...
for me it helped when I deleted all *.dbmdl and *.jfm files within the solution folder and subfolders, and then restarted VS and rebuilt.
😡 This is not secure:
"SmtpSettings": {
"Host": "smtp.office365.com",
"Port": 111,
"Username": "[email protected]",
"Password": "mymymy123*123*123",
"EnableSsl": true
}
😇 This is more secure:
"SmtpSettings": {
"Host": "smtp.office365.com",
"Port": 111,
"Username": "[email protected]",
"Password": "hashedPassword(like: 7uhjk43c356xrer1)",
"EnableSsl": true
}
You should not put critical data like passwords (or even usernames) in your config files, whether that's a Dockerfile or appsettings.json. You should not.
You must store encrypted values instead. When you read the config, you decrypt them back to the raw value.
✍️ See this: https://stackoverflow.com/a/10177020/19262548
We’re seeing this issue in both our Xamarin and .NET MAUI 8 apps; upgrading to .NET MAUI 9 looks like the only option.
On the plus side, you can request an extension until end of May 2026. Simply click the 16 KB support notification and select “Request additional time.”
My solution:
1. PowerShell > docker network ls
2. PowerShell > docker network inspect <network_name>
3. Grafana > Server address '172.17.0.3' (the IPv4Address of the ClickHouse container)
Just spent way too much time trying to figure out this simple problem, so I will share my answer here to save the next guy time. When you run a container from Visual Studio (using the green play button which says "Container (Dockerfile)"), it means that Visual Studio is running that container for you.
In order to change the arguments of that run command, click the drop-down menu attached to that green arrow button, and then select {YOUR PROJECT NAME} Debug Properties. Then a menu should pop up, and under the Docker launch profile you will find Container run arguments. So any help online you find which requires you to change the run command can be applied by adding the arguments there. Hope this helps someone!
The error happens because the original faker package on npm is no longer maintained and is essentially empty.
To fix this, you need to switch to the community-maintained fork @faker-js/faker.
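The commands are the standard npm ones (swap in the yarn/pnpm equivalents if that's what the project uses):
npm uninstall faker
npm install @faker-js/faker
Then import from the new package: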
import { faker } from "@faker-js/faker";
console.log(faker.commerce.productName());
Handling document attachments in Business Central, especially via the /documentAttachments endpoint, can be unexpectedly fragile. That "Read called with an open stream or textreader" error usually points to how the file stream is being processed before the API call. Even if you're encoding the file to base64, the platform may still interpret the stream as open if it hasn't been fully resolved or finalized.
Your current approach using arrayBuffer → Uint8Array → binaryString → btoa() is technically valid, but Axios doesn't always guarantee that the stream is closed in a way Business Central expects. This is especially true when working with binary content in browser environments.
One workaround that’s proven reliable is exposing Page 30080 (Document Attachments) as a custom API page. This lets you bypass the stream error entirely and gives you full control over how attachments are linked to Sales Orders. You can publish it via Web Services and POST directly to it.
Another route is using Power Automate’s HTTP connector instead of the native Business Central connector. It avoids some of the serialization quirks and lets you send the payload in a more predictable format. If you’re open to external storage, uploading the file to Azure Blob or OneDrive first and then linking it in Business Central is also a clean workaround.
Lastly, if you’re sticking with base64, try using FileReader.readAsDataURL()
instead of manually building the binary string. It ensures the stream is closed and the encoding is padded correctly. Just strip the prefix (data:application/pdf;base64,
) before sending.
Helpful Reference
Josh Anglesea – Business Central Attachments via Standard API
Microsoft Docs – Attachments API Reference
Saurav Dhyani – Base64 Conversion in Business Central
Power Platform Community – HTTP Connector Workaround
Dynamics Community – Stream Error Discussion
If you find this helpful, feel free to upvote this answer.
Cheers
Jeffrey
I'm not sure this answer is that useful, but I've seen <iframe> tags with HTML embedded inside them. E.g. you can see this page: https://www.dashlane.com/es and look for the container which includes the radar. This is more like the page loading another document, not putting an <html> inside another one, but I just remembered that case.
I don't know, maybe just add a "\n" at the beginning in a custom toString() method of the class?
@Override
public String toString() {
    return "\nPerson { name = " + this.name + " }";
}
I know it's a bit late, but:
It seems that Apple does not support MIFARE Classic.
However, the only information about this online is:
https://developer.apple.com/videos/play/wwdc2019/715/?utm_source=chatgpt.com&time=744 (minute 12)
If you look at the Apple Core NFC NFCMiFareTag documentation https://developer.apple.com/documentation/corenfc/nfcmifaretag, MIFARE Classic is not included in the list:
MIFARE
MIFARE DESFire
MIFARE Ultralight
MIFARE Plus
There are several ways to deploy a Spring application on AWS, such as using EC2 or Elastic Beanstalk. Often, Nginx is used as a reverse proxy to forward requests to your Spring Boot app running on the instance. It helps with load balancing, handling HTTPS, caching, and improving performance.
In my step-by-step video, I cover:
✔ Packaging the Spring Boot application
✔ Launching and configuring an EC2 instance
✔ Installing Java and running the application
✔ Setting up Nginx as a reverse proxy
✔ Configuring SSL with Let's Encrypt
✔ Setting security groups and permissions
If you’re looking for a complete practical example from setup to deployment, check it out here:
YouTube – Deploying Spring Boot with Nginx on AWS
position: sticky anchors to the nearest scrollable ancestor, and unfortunately, CSS doesn't currently provide a way to override that behavior or explicitly set which scroll context a sticky element should use.
In your example, once .subtable has overflow-y: auto, it becomes the scroll container for its children. That means any sticky element inside it will stick relative to .subtable, not .main, even if .main is the scroll context you want.
There's no native CSS way to "skip" a scroll container and anchor stickiness to a higher-level ancestor. Sticky positioning is strictly scoped to the first scrollable ancestor with a defined height and overflow.
Workarounds
Here are a few approaches developers use to work around this limitation:
Avoid nested scroll containers. If you can restructure your layout so that .main is the only scrollable ancestor, sticky behavior will work as expected across all nested elements.
Move sticky elements outside the scrollable container. You can place the sticky cell outside .subtable and use CSS or JavaScript to visually align it. This keeps it anchored to .main, but may require layout adjustments.
Simulate sticky behavior with JavaScript. You can listen to scroll events on .main and manually adjust the position of the nested sticky cell. This gives you full control over the scroll context, though it's more complex and less performant than native sticky positioning.
Helpful References
Troubleshooting CSS sticky positioning – LogRocket
CSS Tricks: Dealing with overflow and position: sticky
Cheers
Jeffrey
I also had similar issue when installing notebook.
ERROR: Could not find a version that satisfies the requirement puccinialin (from versions: none)
ERROR: No matching distribution found for puccinialin
My solution was to update Python from 3.8 to the latest version.
It seems that puccinialin needs 3.9 or higher.
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
GET /modules/ HTTP/1.1" 200
It appears that the iOS project files (Runner.xcodeproj) are missing from your cloned repository. 🚨 Verify that the repository actually contains the ios/ folder; occasionally .gitignore excludes it. If it's absent, run flutter create . inside the project to regenerate the platform files, and then run the application again. That ought to resolve it. ✅
Let me clarify a couple of things.
There are no two-dimensional arrays in C (at least, not as they are in other languages). A char array[10][20] is just an array of arrays: an array with 10 elements, where each element (i.e. array[x]) is itself an array of 20 contiguous chars. A char array[10*20] is a different thing: a single array of 10x20=200 elements where each element (i.e. array[i]) is a char, and you can access any element using the formula array[y*iwid+x] (in your code, x is your column; y is your row) (assuming you store all columns of row 0, then all columns of row 1, etc.).
"If I pass the original array, image[iwid][ihei], the resulting saved image is garbled." That's because the function stbi_write_png() needs a one-dimensional array of chars, with one char per component (aka channel) per pixel. Your "small images" code is just copying the array of arrays of pixels (image[iwid][ihei]) into a one-dimensional img[iwid*ihei*icha] array of chars (where each pixel is icha=4 chars wide, one char per RGBA component) (image[x][y].rgba[c] is copied into img[(y*iwid+x)*4+c]). You must do this.
You didn't specify the "compile time error" you're getting for "big" arrays. But the problem may be related to allocating too much space on the stack (as local variables are allocated on the stack). You may need to allocate the array on the heap: unsigned char *img = malloc(iwid*ihei*icha). Don't forget to check the return of malloc() and to free(img) when you're done. You may also use a vector (as @Pepijn said).
"How does stbi know the end of a dynamically allocated array?" You just pass a pointer to the first element of the one-dimensional array (statically or dynamically allocated, it's the same). The pixels are arranged one by one: first all "columns" of the first row, then all "columns" of the second row, etc. The total number of pixels is iwid*ihei. The total number of bytes is iwid*ihei*icha (where icha is 4).
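As a sketch, the heap-allocated copy step could look like this (iwid, ihei, icha and image[x][y].rgba[c] are the names from the question):
unsigned char *img = malloc((size_t)iwid * ihei * icha);  // heap, so no stack overflow
if (img == NULL)
    return;  // always check malloc
for (int y = 0; y < ihei; ++y)
    for (int x = 0; x < iwid; ++x)
        for (int c = 0; c < icha; ++c)
            img[(y * iwid + x) * icha + c] = image[x][y].rgba[c];
stbi_write_png("out.png", iwid, ihei, icha, img, iwid * icha);  // stride = bytes per row
free(img);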
Hi, according to the Android documentation they allow more than one category; refer to the supported app categories section on this page.
It was a threading issue: I called the function on a background thread and it didn't work. It should be called on the UI thread.
runOnUiThread(new Runnable() {
    public void run() {
        signalsViewModel.updateSignals(booleen);
    }
});
S3 recently introduced live inventory support, see https://aws.amazon.com/blogs/aws/amazon-s3-metadata-now-supports-metadata-for-all-your-s3-objects/
Google removed App Actions. BIIs like actions.intent.GET_THING do not work now.
What you can still do:
"Hey Google, open TestAppDemo" -> opens the app's main screen.
If you want the "groceries" screen -> you need a deep link or shortcut, e.g. testappdemo://groceries.
But Google Assistant does not pass words like "groceries" to your app anymore; it only opens the app or deep links.
So the answer is: no, you cannot do "Hey Google, open groceries from TestAppDemo" directly. Only the deep link + shortcut workaround.
The same issue happens from time to time in Visual Studio 2022. I just open the web.config, save it, and close the editor tab.
This fixes the issue for me.
After many frustrating hours, I realized that serializing the presigned URL using Gson and then printing the resulting JSON was encoding certain characters. For example, this is part of the presigned URL prior to using Gson and printing it in the logs. I was able to use this presigned URL to upload a file successfully:
https://my-bucket.s3.amazonaws.com/f0329e43-c5ee-4151-87c5-c6736b5c7242?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250912T023806Z...
And this is how it was encoded after serializing with Gson and printing it to the logs:
{
"statusCode": 201,
"headers": {},
"body": "{\n \"id\": \"3da30011-8c20-4463-9f59-a31033276d0e\",\n \"version\": 0,\n \"presignedUrl\": \"https://my-bucket.s3.amazonaws.com/3da30011-8c20-4463-9f59-a31033276d0e?X-Amz-Algorithm\\u003dAWS4-HMAC-SHA256\\u0026X-Amz-Date\\u003d20250912T023806Z...\"\n}"
}
A beginner's mistake, since I'm new to presigned URLs and didn't know to look for this.
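For anyone hitting the same thing: Gson escapes characters like = and & as \uXXXX by default (HTML-safe serialization). If you control the serializer, a minimal sketch of turning that off:
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

Gson gson = new GsonBuilder().disableHtmlEscaping().create();
String json = gson.toJson(response);  // '=' and '&' in the presigned URL stay literal
Note the escaped form is still valid JSON; it only breaks when you copy the URL out of the logs by hand.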
Try running the command like this:
sts-dynamic-incremental -m .. -t ..
It doesn't fully do EER, but in 2025 ERDPlus is seemingly the tool with the most EER features; that still isn't many, so you'll probably find yourself drawing the missing notation in MS Paint.
In views, make sure you are not enabling fitsSystemWindows.
Remove this line of code:
android:fitsSystemWindows="true"
Or replace with android:fitsSystemWindows="false"
I have not tried this yet. Please tell me if this works
I believe that pandas may not be the answer. Perhaps you could use the native csv module?
You could use this code:
import csv

with open('databooks.csv', 'r', newline='') as data:
    csvreader = csv.reader(data)  # Creates an iterator that returns rows as lists
    # If your CSV has a header, you can skip it or store it:
    header = next(csvreader)
    for row in csvreader:
        print(row)  # Each row is a list of strings
Zoho Writer Sign's API doesn't accept recipient_phonenumber and recipient_countrycode as top-level keys.
To enable Email + SMS delivery, the recipient details must be passed inside the verification_type object of each recipient.
signer_data = [ JSON Array of recipients ]

[
  {
    "recipient_1": "[email protected]",
    "recipient_name": "John",
    "action_type": "sign", // approve | sign | view | in_person_sign
    "language": "en",
    "delivery_type": [
      {
        "type": "sms", // sms | whatsapp | email
        "countrycode": "+91",
        "phonenumber": "9876543210"
      }
    ],
    "private_notes": "Hey! Please sign this document"
  },
  {
    "recipient_2": "[email protected]",
    "recipient_name": "Jack",
    "action_type": "in_person_sign",
    "in_person_signer_info": {
      "email": "[email protected]", // Optional, required only when verification_info.type = "email"
      "name": "Tommy"
    },
    "language": "en",
    "verification_info": {
      "type": "email" // email | sms | offline | whatsapp
    }
  }
]
Key Points
The top-level parameter name is always signer_data.
It must be a JSON Array ([ ... ]) of recipient objects.
recipient_1, recipient_2, … are the unique keys used to identify each signer.
action_type defines what the recipient does (sign, approve, view, in_person_sign).
delivery_type is an array of objects:
type: email, sms, or whatsapp
For SMS/WhatsApp → include countrycode and phonenumber.
verification_info controls how the signer is verified (email, sms, offline, whatsapp).
in_person_signer_info is required for in_person_sign action types.
My question here is how you would page the content. Stream currently does not work with pages; you essentially have to load everything. I guess the ask is: if you were building something like Instagram posts, would you use Stream? You obviously can't load all posts, so how would you design and implement this?
You can fix this in Compiler Explorer by disabling the backend singleton check:
quill::BackendOptions backend_options;
backend_options.check_backend_singleton_instance = false;
quill::Backend::start(backend_options);
Quill runs a runtime safety check to ensure there’s only one backend worker thread instance.
On Windows, it uses a named mutex.
On Linux/Unix, it uses a named semaphore.
This helps catch subtle linking issues (e.g. mixing static and shared libraries), but in restricted environments like Compiler Explorer, creating a named semaphore isn’t possible. That’s why you see:
Failed to create semaphore - errno: 2 error: No such file or directory
Since the check is optional, you can safely turn it off in such environments.
👉 The Quill README already includes a working Compiler Explorer example in the Introduction section, with a note about this option.
The only thing I found that worked is Git Sync. I originally dismissed it because it is only mentioned in connection with Android Studio, but I just loaded my project up on my phone: I only had to authenticate my GitHub account and I was able to choose a folder to download to, then import that folder into Godot.
I did some test changes and pulled them to my Windows PC, so I guess I'll answer my own question so people can find it.
I have found that by adding the following to the maven-clean-plugin xml in the project pom, that version 3.5.0 completes the clean process without error.
<configuration>
<force>true</force>
</configuration>
According to the documentation, force=true deletes read-only files. I really don't know why this works, but it does.
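For context, that configuration sits inside the plugin declaration like this (version taken from above):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clean-plugin</artifactId>
  <version>3.5.0</version>
  <configuration>
    <force>true</force>
  </configuration>
</plugin>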
Try selecting the item with .outlet-list .accordion-item .item-content and CSS_SELECTOR:
driver.find_element(By.CSS_SELECTOR, ".outlet-list .accordion-item .item-content")
Then just click the element and the accordion should extend.
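A small sketch with an explicit wait, in case the accordion renders late (the 10-second timeout is an arbitrary choice):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

item = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, ".outlet-list .accordion-item .item-content"))
)
item.click()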
Close the open files. You can programmatically raise the open-file limit before starting the process and lower it afterwards, but the system sets limits so you don't crash the machine.
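If the process happens to be Python, a sketch of raising the soft limit up to the hard cap (Unix only, stdlib resource module):
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))  # soft limit raised to the hard cap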
https://portcheckertool.com/image-converter
This is a fully PHP-based image converter using GD; no external library is needed. Here is the code:
<?php
/**
 * Image Converter (All Types)
 * Supported formats: jpg, jpeg, png, gif, webp, bmp
 * Usage: image_converter.php?file=uploads/picture.png&to=jpg
 */
function convertImage($sourcePath, $targetPath, $format)
{
    // Detect file type
    $info = getimagesize($sourcePath);
    if (!$info) {
        die("Invalid image file.");
    }
    $mime = $info['mime'];
    switch ($mime) {
        case 'image/jpeg':
            $image = imagecreatefromjpeg($sourcePath);
            break;
        case 'image/png':
            $image = imagecreatefrompng($sourcePath);
            break;
        case 'image/gif':
            $image = imagecreatefromgif($sourcePath);
            break;
        case 'image/webp':
            $image = imagecreatefromwebp($sourcePath);
            break;
        case 'image/bmp':
        case 'image/x-ms-bmp':
            $image = imagecreatefrombmp($sourcePath);
            break;
        default:
            die("Unsupported source format: $mime");
    }
    // Save in target format
    switch (strtolower($format)) {
        case 'jpg':
        case 'jpeg':
            imagejpeg($image, $targetPath, 90);
            break;
        case 'png':
            imagepng($image, $targetPath, 9);
            break;
        case 'gif':
            imagegif($image, $targetPath);
            break;
        case 'webp':
            imagewebp($image, $targetPath, 90);
            break;
        case 'bmp':
            imagebmp($image, $targetPath);
            break;
        default:
            imagedestroy($image);
            die("Unsupported target format: $format");
    }
    imagedestroy($image);
    return $targetPath;
}

// Example usage via GET
if (isset($_GET['file']) && isset($_GET['to'])) {
    $source = $_GET['file'];
    $format = $_GET['to'];
    $target = pathinfo($source, PATHINFO_FILENAME) . "." . strtolower($format);
    $converted = convertImage($source, $target, $format);
    echo "✅ Image converted successfully!<br>";
    echo "👉 <a href='$converted' target='_blank'>Download $converted</a>";
}
?>
Requirements:
PHP GD extension enabled (php-gd).
Proper file permissions for saving converted images.
Maybe the quickest (worst) way to solve it
The executed query is too slow. Get a bigger machine with more CPU/RAM/network resources. Measure it; get it down as small as possible. Ensure it has no downstream procedures, triggers, etc.; that's an obvious culprit even if it isn't at this current moment. If it's long, break it up into smaller-cost queries and execute them, collecting intermediate results, following multi-threaded debugging best practices.
Questions
Honestly, it's impossible to know without any debugging info from the database; I'm sure there are docs online about that somewhere. There's really not enough info here. What's your best thought after looking at the MySQL docs?
Does this process run on multiple machines? Can you list things that you can rule out, like out of memory, CPU maxed out, locked tables, or locked rows?
Best Answer
I've come to learn from experience that the answer to "why does my code deadlock" is almost always: that's the way it was written. In the exceedingly rare off chance that there's a library issue, good luck getting that fixed if you're the only person with the problem; it just won't get prioritized. Unless you submit the fix!
In the latest versions, Mac Catalyst seems to be reported as MacOSX, from my testing building a project with Xcode (adding to Kiryl's answer).
In reality you can still run things on a simulator even if you don't have the correct value; it matters mostly for App Store validation. So if you use 3rd-party build tools and it's hard for you to differentiate between device/simulator targets for whatever reason (maybe you use a fixed Info.plist template), you can just hardcode iPhoneOS, XROS, etc.
I don't know if this topic is still relevant for you, but maybe this can help someone else who notices the same issue.
My observation is based on ABB Automation Builder in simulation mode, and it is as follows:
Timer functions based on system time have a limit of about 35 minutes, 47 seconds, 484 milliseconds and 416 microseconds (SysTimeGetMs(), TON, etc.).
As you self-answered, when the overflow occurs at the largest interval, this memory address resets to 0 without any problem, so you don't need to worry as long as your timer interval is smaller than ~49 days.
But the simulator I run behaves differently from the real hardware PLC, so I can only state my observation for simulation mode here.
An alternative approach to overcome this problem in simulation mode is to use the RTC, as shown below.
A further side effect of the timer is that as soon as you start the simulation it begins measuring time, so the memory address creeps closer every minute to that overflow point at ~35 minutes. If your timer needs more than 35 minutes, the only way to test your program is to close ABB Automation Builder and start it again. CODESYS 3.5 behaves differently in simulation mode: it has the ~49-day limit, and I did not actually test whether it resets itself in simulation mode as expected.
--------------------------------------------
VAR
    dwErrorCode : DWORD;
    rtNow : UDINT;
END_VAR

rtNow := SysTimeRtcGet(pResult := dwErrorCode);
--------------------------------------------
With the RTC I did not observe any issues, so UTC epoch time was my alternative workaround for this problem.
With additional logic one can measure the required time interval and take the necessary actions in the program.
The issue is likely with the ECS task's connection to RDS: configure the database security group so the task can reach the database port, make sure the task runs in a subnet that can reach the RDS instance, and test the connection with a simple container.
Importing 4 amazon root Certs from here https://www.amazontrust.com/repository/ into the trust store fixed it for me. -Djavax.net.debug=all, or -Djavax.net.debug=ssl helped to see detailed logs.
I just ran into this issue and couldn't find any answers online. In my particular case, it was because I'd written a script to look at specific pixels on screen. I'd forgotten I'd changed my resolution to 1920x1080, so it was trying to view pixels that were outside of the screen (like 3000x2000) and was providing this error. Changing my resolution back to 3840x2160 has resolved the error. I of course could have modified my script as well.
I think that, by default, this feature doesn't exist unless you mirror the CodeCommit repository to GitHub.
I'm also looking into this, and something I'm testing out right now is passing the flag --env-var to newman's CLI:
newman run file.json --env-var "ApiKey=${test}"
For me, the only solution was to add this option to xcodebuild in the CI config:
xcodebuild -downloadPlatform iOS
Thanks to this message.
Declare another @Component for your closeAdvertisement method. Inject that component into this test class.
Why did everyone answer this question using Expo? I don't use Expo, and I'm getting this error. If I'm making a simple mistake, please forgive me. I use Firebase in every project, but I'm tired of constantly getting this error in version 0.81:
"Native module RNFBAppModule not found. Re-check module install, linking, configuration, build, and install steps."
The problem is that ImageTool expects success at the top level, not inside data. Return this from uploadByFile:
return {
  success: 1,
  file: {
    url: resp.data.data.file.url
  }
};
This will let ImageTool read success and handle the uploaded file correctly.
So yeah, seems like it's a memory issue, thanks to @KellyBundy for the suggestion that I run MemTest86.
There were so many errors in the test that it just gave up when it hit 100000, which is odd, because the system boots just fine and I've never had a problem with crashes (hence why I didn't immediately suspect a hardware problem). Even the simulations run fine (usually) until they reach a certain size. But the memory test was showing a multitude of single-bit errors, always in the first two bytes. I'm not that experienced with this kind of problem, but I tested each of the four modules in each of the four DIMM slots individually, and they all failed all the time, so I think it's probably either a PSU problem or a bad memory controller on the CPU, but until I can find a known-good PSU to swap in, I won't know which (I don't have access to a PSU tester). For reference, there's 128GB of non-ECC UDIMM, which in hindsight may have been a little ambitious. The CPU is a Ryzen 9 3900X.
You have
result = 1
final = result + 1
so final will always be 2.
Did you mean
final = final + 1
?
If we want to see the general concept of underfitting and overfitting:
Underfitting
In underfitting the results are inaccurate because the model doesn't have enough data (or capacity) to learn the underlying pattern; that's called underfitting.
Overfitting
In overfitting the results are inaccurate on new data because the model has fitted the training data too closely, noise included, so it fails to generalize.
ChatGPT offered me some other suggestions that finally worked - I closed Visual Studio, emptied the bin and obj folders, and reopened the project. Then I switched everything over to the Assembly references and not the COM references, using Assemblies > "office" (15.0.0.0). That finally resolved the error and let me build and publish the project.
You do not need to implement actual classes.
@Suppress("KotlinNoActualForExpect")
expect object AppDatabaseConstructor : RoomDatabaseConstructor<AppDatabase> {
override fun initialize(): AppDatabase
}
You can follow the implementation steps at: https://developer.android.com/kotlin/multiplatform/room
Remove the line - frontend_nodes_modules:/app/node_modules entirely, or make it an anonymous volume: - /app/node_modules.
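A sketch of the service definition after the change (service name and paths are assumptions):
services:
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app
      - /app/node_modules   # anonymous volume keeps the image's node_modules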
In my case it was because, when defining the client, I had testnet=True, which doesn't seem to work for futures trading (the docstring specifies that this parameter is currently only available for vanilla options). Removing this parameter solved the issue.
There are multiple ways to apply tint/accent colors to view hierarchies. The method that worked for my document-based app was setting a Global Accent Color in my project's Build Settings. I like this approach because it allows the user to override the accent color on their device if they so desire.
I have been trying to implement this same thing for the last 100 hours, but I'm unable to do it. My code uses the same BeginScope syntax with a dictionary consisting of key-value pairs, but I can't see any custom dimension in Application Insights. Can someone please help?
It's a very old post, but I needed this as well.
I knew I had made quite a big query earlier today and forgot to save it, and when I came back to my PC I saw it had been shut off by my son.
I looked for a while and finally found a solution. Here are the steps to find any "lost" query (.sql) made in SQL Server Management Studio in the past 7 days (the default; it can be increased to 30).
I could not find the path where my "unsaved queries" were stored by SSMS, so I did the following:
Open SSMS
Go to Environment > AutoRecover, set it to 1 minute and 30 days. OK & close
Create a new query, select * from dbName.schema.tableName, and wait 1 minute (or the time set in AutoRecover)
Force close SSMS by ending the task, or killing the ssms.exe process via Task Manager
Relaunch SSMS as usual, and you should get a popup asking you to recover your previous query/queries.
Here you can see the path where SSMS stores the "autorecover" files. In my case it was
C:\Users\Admin\AppData\Local\Microsoft\SSMS\BackupFiles\Solution1
Open this folder in Windows Explorer, and you will see all kinds of recovered-monthname-day-year-xxxx.sql files.
If you don't want to open the .sql files one by one to see which query you need, use a tool like Notepad++ (or even SSMS itself) with Ctrl + F and "search in files": specify the directory from above and search for a table name or something specific you remember from the query you are looking for.
Enjoy! <3
Although @CodeSmith is absolutely right about the ineffectiveness of such approaches in safeguarding your code, I'm personally not a fan of digging into why a user might want to do something when it comes to programming questions, so I'll get straight to the answer.
As of now, keyCode is deprecated, unfortunately. The best alternative in my opinion is code, which has its own limitations too. As mentioned in the official documentation:
"The KeyboardEvent.code property represents a physical key on the keyboard (as opposed to the character generated by pressing the key)."
With that in mind, here's a workaround to block both the F12 and Ctrl+Shift+I key combinations:
window.addEventListener('keydown', function(event) {
  if (event.code === 'F12') {
    event.preventDefault();
    console.log('Blocked F12');
  }
  if (event.shiftKey && event.ctrlKey && event.code === 'KeyI') {
    event.preventDefault();
    console.log('Blocked Ctrl + Shift + i');
  }
});
Thanks to KJ for pointing me in the right direction!
A coworker wrote up a different way to fill the fields using pdfrw, example below:
from pdfrw import PdfReader, PdfWriter, PdfDict, PdfObject, PdfName, PageMerge
from pdfrw.objects.pdfstring import PdfString

def fill_pdf_fields(input_path, output_path):
    pdf = PdfReader(input_path)
    # Ensure viewer regenerates appearances
    if not pdf.Root.AcroForm:
        pdf.Root.AcroForm = PdfDict(NeedAppearances=PdfObject('true'))
    else:
        pdf.Root.AcroForm.update(PdfDict(NeedAppearances=PdfObject('true')))
    for page in pdf.pages:
        annotations = page.Annots
        if annotations:
            for annot in annotations:
                if annot.Subtype == PdfName('Widget') and annot.T:
                    field_name = str(annot.T)[1:-1]
                    if field_name == "MemberName": annot.V = PdfObject('(Test)')
                    if field_name == "Address": annot.V = PdfObject('(123 Sesame St)')
                    if field_name == "CityStateZip": annot.V = PdfObject('(Birmingham, AK 12345-6789)')
                    if field_name == "Level": annot.V = PdfObject('(1)')
                    if field_name == "OfficialsNumber": annot.V = PdfObject('(9999999)')
                    if field_name == "Season2": annot.V = PdfObject('(2025-26)')
                    if field_name == "Season1": annot.V = PdfObject('(2025-2026)')
    PdfWriter().write(output_path, pdf)
    print(f"Filled PDF saved to: {output_path}")

def flatten_pdf_fields(input_path, output_path):
    template_pdf = PdfReader(input_path)
    for page in template_pdf.pages:
        annotations = page.Annots
        if annotations:
            for annot in annotations:
                if annot.Subtype == PdfName('Widget') and annot.T and annot.V:
                    # Remove interactive field appearance
                    annot.update({
                        PdfName('F'): PdfObject('4'),  # Make field read-only
                        PdfName('AP'): None            # Remove appearance stream
                    })
        # Flatten page by merging its own content (no overlay)
        PageMerge(page).render()
    PdfWriter(output_path, trailer=template_pdf).write()
    print(f"Flattened PDF saved to: {output_path}")

if __name__ == "__main__":
    # The original snippet left these paths undefined; example values:
    template_pdf = "template.pdf"
    filled_pdf = "filled.pdf"
    flattened_pdf = "flattened.pdf"
    fill_pdf_fields(template_pdf, filled_pdf)
    flatten_pdf_fields(filled_pdf, flattened_pdf)
I researched interactions with NeedAppearances, and found this Stack post:
NeedAppearances=pdfrw.PdfObject('true') forces manual pdf save in Acrobat Reader
The answer provides a code snippet that from what I can tell, acts as a reader generating those appearance streams so the filled in fields actually show their contents.
Code snippet for reference:
from pikepdf import Pdf

with Pdf.open('source_pdf.pdf') as pdf:
    pdf.generate_appearance_streams()
    pdf.save('output.pdf')
In the end my extractor was correct: after updating from Axum 0.8 to Axum 0.9 everything worked as expected. As far as I understand, Axum 0.8 does not allow mixing multiple FromRequestParts and a FromRequest in the same handler.
The issue lay elsewhere, not entirely in the docker-compose file. The real problem was the base images I was using: as they were minimal, they did not include the curl command, and therefore the healthcheck was failing. The solution was just to install curl in the containers within the Dockerfiles.
The base images I was using were python:3.13-slim and node:24-alpine, just in case this is useful for someone.
And the solution was to add:
In the python:3.13-slim Dockerfile:
RUN apt-get update && apt-get install -y curl
In the node:24-alpine Dockerfile:
RUN apk add --no-cache curl
Then I had to change the port of the healthcheck for the aiservice because, although the port I expose is 8081, internally the app runs on a different port, so the healthcheck ended up looking like:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"] # Note that the port has changed!
  interval: 10s
  timeout: 30s
  retries: 5
The key was running the docker ps command, as Alex Oliveira and sinuxnet had stated. With that I could see that the containers that started were flagged as unhealthy.
The port issue was discovered thanks to some comments in the staging area, even before the post was made public. Hat tip to wallenborn, who posted that comment in the staging area.
PS: I am sure this post could be phrased better, but it is my first time posting something; I'll try to update it to make it more readable.
Try: bypass_sign_in(user)*
I was also having issues with the config settings not seeming to work and found that sign_in(user, bypass: true) was deprecated eons ago. See: https://github.com/heartcombo/devise/commit/2044fffa25d781fcbaf090e7728b48b65c854ccb
* This may not solve your root issue, but it should address the most immediate one.
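For reference, a minimal sketch (user is assumed to be a Devise resource):
bypass_sign_in(user)
# or with an explicit scope:
bypass_sign_in(user, scope: :user)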
I provided an answer in another similar question: https://stackoverflow.com/a/79761592/15891701
Here's a repeat of that answer:
Conclusion: a Blazor server launched within WPF behaves identically to a Blazor server project created directly in Visual Studio. This means that launching without a `launchSettings.json` file during debugging causes generation of the "{PACKAGE ID/ASSEMBLY NAME}.style.css" file to fail. You can also notice this mirrors the effect of double-clicking an exe file in the Debug directory of an ASP.NET Core project.
So just create a launchSettings.json file in the Properties folder with content like this, and debugging will work correctly:
{
  "profiles": {
    "YourProjectName": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
Here is the full Python (ReportLab) code so you can generate the luxury-style P.M.B Visionary Manifesto PDF on your own system.
This script:
Uses your logo on the cover page.
Splits the expanded manifesto across 6–7 pages.
Adds luxury-style colors (green, blue, light gold).
Keeps the layout clean and professional.
🔹 Python Code (ReportLab)
Save this as pmb_manifesto.py and run with python pmb_manifesto.py:
from reportlab.lib.pagesizes import A4
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, PageBreak, Image
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.enums import TA_CENTER, TA_JUSTIFY
from reportlab.lib import colors

# ========= CONFIG =========
logo_path = "pmb_logo.png"  # <-- Replace with your logo file path
output_pdf = "PMB_Visionary_Manifesto.pdf"
# ==========================

# Create document
doc = SimpleDocTemplate(output_pdf, pagesize=A4)
styles = getSampleStyleSheet()

# Custom styles
title_style = ParagraphStyle(
    name="TitleStyle",
    parent=styles["Title"],
    alignment=TA_CENTER,
    fontSize=22,
    textColor=colors.HexColor("#004d26"),
    spaceAfter=20
)
heading_style = ParagraphStyle(
    name="HeadingStyle",
    parent=styles["Heading1"],
    fontSize=16,
    textColor=colors.HexColor("#004d80"),
    spaceAfter=12
)
body_style = ParagraphStyle(
    name="BodyStyle",
    parent=styles["Normal"],
    fontSize=12,
    leading=18,
    alignment=TA_JUSTIFY,
    spaceAfter=12
)
tagline_style = ParagraphStyle(
    name="TaglineStyle",
    parent=styles["Normal"],
    alignment=TA_CENTER,
    fontSize=14,
    textColor=colors.HexColor("#bfa14a"),  # light gold accent
    spaceBefore=30
)

# Story
story = []

# Cover Page
story.append(Image(logo_path, width=150, height=150))
story.append(Spacer(1, 20))
story.append(Paragraph("🌿 P.M.B (Pamarel Marngel Barka)", title_style))
story.append(Paragraph("Visionary Manifesto", heading_style))
story.append(Spacer(1, 60))
story.append(Paragraph(
    "At P.M.B, we believe that agriculture is the backbone of society, nurturing not just bodies, "
    "but communities and futures. Our fields of rice, soya beans, and corn are more than just sources of sustenance; "
    "they represent life, dignity, and hope.", body_style
))
story.append(PageBreak())

# Core Values
story.append(Paragraph("Our Core Values", heading_style))
story.append(Paragraph("<b>Integrity:</b> We operate with transparency, honesty, and ethics in all our dealings.", body_style))
story.append(Paragraph("<b>Sustainability:</b> We prioritize environmentally friendly practices, ensuring a healthier planet for future generations.", body_style))
story.append(Paragraph("<b>Quality:</b> We strive for excellence in every aspect of our business, from farming to delivery.", body_style))
story.append(Paragraph("<b>Compassion:</b> We care about the well-being of our customers, farmers, and the broader community.", body_style))
story.append(PageBreak())

# Promise
story.append(Paragraph("Our Promise", heading_style))
story.append(Paragraph(
    "We promise to deliver produce that is not only fresh and of the highest quality but also grown and harvested with care and integrity. "
    "We strive to create a seamless bridge between nature's abundance and people's needs, ensuring that our products nourish both body and soul.", body_style
))
story.append(PageBreak())

# Purpose
story.append(Paragraph("Our Purpose", heading_style))
story.append(Paragraph(
    "At P.M.B, we recognize that our role extends far beyond the boundaries of our business. We believe that every grain we grow carries a responsibility – "
    "to the land, to our farmers, to our customers, and to the wider community. That's why we dedicate 5% of our profits to supporting the homeless and vulnerable.", body_style
))
story.append(PageBreak())

# Spirit
story.append(Paragraph("Our Spirit", heading_style))
story.append(Paragraph(
    "We embody a unique blend of luxury and humility, playfulness and professionalism, modernity and tradition. "
    "Our approach is rooted in the rich soil of our agricultural heritage, yet we are always looking to the future, embracing innovation and creativity.", body_style
))
story.append(PageBreak())

# Vision
story.append(Paragraph("Our Vision", heading_style))
story.append(Paragraph(
    "Our vision is to become a symbol of sustainable abundance, empowering communities, impacting lives, and proving that business can be both prosperous and compassionate. "
    "We envision a future where agriculture is not just a source of food, but a force for good, driving positive change and uplifting those in need.", body_style
))
story.append(PageBreak())

# Goals
story.append(Paragraph("Our Goals", heading_style))
story.append(Paragraph("<b>Sustainable Growth:</b> To expand our operations while maintaining our commitment to environmental sustainability and social responsibility.", body_style))
story.append(Paragraph("<b>Community Engagement:</b> To deepen our connections with local communities, supporting initiatives that promote food security, education, and economic empowerment.", body_style))
story.append(Paragraph("<b>Innovation:</b> To stay at the forefront of agricultural innovation, adopting new technologies and practices that enhance our productivity and sustainability.", body_style))

# Closing Tagline
story.append(Spacer(1, 30))
story.append(Paragraph("🌿 P.M.B – Freshness in Every Harvest, Hope in Every Heart 🌿", tagline_style))

# Build
doc.build(story)
print(f"PDF created: {output_pdf}")
📌 Instructions
Save your logo as pmb_logo.png in the same folder.
Copy-paste the script above into pmb_manifesto.py.
Run:
python pmb_manifesto.py
It will generate PMB_Visionary_Manifesto.pdf with your brand styling.
Use 127.0.0.1 instead of localhost in pgAdmin.
Why is your goal to re-trigger the User Event script after creation, instead of just having it run on creation in the first place? It seems to me that your UE script is filtered to the Edit event type either on the deployment or within the script itself.
I would check the deployment first and see if there is a value in the "Event Type" field on the deployment.
Alternatively, you can search for usage of context.UserEventType within the script, as this is what would be used to filter the script to run only in certain contexts. See the context.UserEventType help article for the list of enum values, but it would (likely) be either if (context.type != context.UserEventType.CREATE) or if (context.type == context.UserEventType.EDIT).
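For illustration, the usual shape of such a guard in a SuiteScript 2.x User Event script (the surrounding function and logic are hypothetical):
function afterSubmit(context) {
    // a guard like this makes the script run only on Edit, never on Create
    if (context.type !== context.UserEventType.EDIT) {
        return;
    }
    // ...your record logic...
}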
Like this:
$this->registerJs(
    $this->renderFile('@app/views/path/js/jsfile.js'),
    $this::POS_END
);
Did you successfully fix it? I'm running into a similar error.
If you are installing using the .spec file, you can add PIL._tkinter_finder as a hidden import:
a = Analysis(
    ...
    hiddenimports=['PIL._tkinter_finder'],
)
That solved the issue for me.
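If you build from the command line rather than a .spec file, the equivalent flag is:
pyinstaller --hidden-import=PIL._tkinter_finder your_script.py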
It looks like a PostgreSQL permissions issue. Please check the permissions for that user.
So the answer is that the Qt folks treated the appearance of icons in menus as a bug on Mac OS and 'fixed' it in 6.7.3.
There is a sentence in the release notes:
https://code.qt.io/cgit/qt/qtreleasenotes.git/about/qt/6.7.3/release-note.md
For iOS, I've tried the pip package, and it worked quite well. Use the example code for more info.
Put the annotations in an abstract parent class in another package? You will need one per Domain Object.
I have prepared a version ready to copy into Word or Google Docs, with a table and colors, so you can easily export it to PDF:
---
Negation lesson summary – English
1️⃣ Negation with He / She / It
Rule: doesn't + base verb
The verb after doesn't does not take -s
Examples:
He doesn't play football.
She doesn't like apples.
He doesn't read a book.
2️⃣ Negation with I / You / We / They
Rule: don't + base verb
Examples:
I don't like tea.
You don't play tennis.
We don't read a story.
They don't watch a movie.
3️⃣ Important notes
With He / She / It: in the affirmative the verb takes -s; in the negative it is doesn't + the base verb without -s.
With I / You / We / They: the verb always stays in its base form after don't.
Repeat each sentence aloud 3 times to fix the rule in memory.
4️⃣ Daily practice tip
Write 5–10 negative sentences a day about yourself or your friends.
Use the sentences in your daily English speech, even simple ones.
Go to https://github.com/settings/copilot and turn on “Copilot Chat in IDEs”.
In Visual Studio, re-sign in via Extensions → GitHub Copilot → Sign in and authorize the Chat app in your browser.
Restart Visual Studio and reopen the Copilot Chat window.
You also need to change the .DotSettings file.
Same issue. Any ideas? I did install the Fortran compiler and also have the requisite Xcode tools. brew installing gettext did not help.