I know this is an old post, but someone might come across it like I did when seeing the same behaviour on a cPanel hosting server. In my case, I had forgotten that my domain was going through Cloudflare, which was caching my content to speed up performance. When you enter a URL for a resource that you are sure you have deleted and it still comes up, there is caching going on somewhere. That can be at the user (browser) level, the hosting server level, or even at the domain level, as is the case if you use Cloudflare or something similar.
You can use a free API for this. Check this link: https://rapidapi.com/moham3iof/api/email-validation-scoring-api
Is this what you're looking for?
import re
s = "ha-haa-ha-haa"
m = re.match(r"(ha)-(haa)-\1-\2", s)
print(m.group())
This outputs
ha-haa-ha-haa
as expected
There is now a NuGet package that manages the registry for you to allow easy file extension association: dotnet-file-associator
One possible solution I am considering is to use pthread_self() as a pthread_t value that is guaranteed not to be one of the spun-off threads.
This is a known issue with the Samsung Keyboard (not Flutter and not your code). The workaround is to set keyboardType: TextInputType.text for all fields and use persistent FocusNodes.
If none of the solutions here worked for you, here's what finally solved it for me: I had enabled "Emulate a focused page" in Chrome DevTools a few months ago and forgotten about it. I was using DevTools to debug the visibilitychange event, but DevTools itself was preventing the event from firing by emulating constant focus. Two hours of my life gone.
Did you get the solution? Please share it; I am facing the same issue and am not able to figure it out.
When looking into it more, it seems there were no requests to the web app for an extended time period, so the solution was to go to:
web app -> config -> always on -> on
In the Angular SurveyJS Form Library, a question is rendered by the Question component (question.component.ts/question.component.html). This component contains various UI elements: for instance, the component which renders question errors, the header component which may appear above or below the main content, and the component which renders the main question content.
When you register a custom question type and implement a custom renderer, you actually override the question content component.
Based on your explanation, I understand that you wish to customize the appearance of SurveyJS questions. If you wish to modify the appearance of SurveyJS questions, we suggest that you create a custom theme using the embedded Theme Editor. A custom theme contains various appearance settings such as colors, element sizes, border radius, fonts, etc.
If you cannot create 100% identical forms by using a custom theme, the next step to try is to modify the form CSS using corresponding API: Apply Custom CSS Classes.
Unfortunately, the entire question component cannot be overridden at the moment. If you plan to override the entire question component, note that all SurveyJS question UI features will be unavailable; in particular, you would need to handle responsiveness, render question errors, and so on yourself. Please let me know if you would like to proceed with overriding the entire question component.
I suggest that you use default SurveyJS question types, align the question title on the left, and introduce space between the title and the input field by placing the question within a panel and specifying questionTitleWidth. For example: View Demo.
Create a 1/18 scale commercialized figurine of the car in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a transparent acrylic base, with no text on the base. The content on the computer screen is the Blender modeling process of this figurine. Next to the computer screen is a TAMIYA style toy packaging box printed with the original artwork.
You can add an additional hover delay in your settings.json file:
"editor.hover.delay": 2000,
It's not missing, you deleted it.
The file was in your .gitignore and you deleted your local, untracked copy. That's why it's not in git history. This is standard practice.
Your app still runs because Spring is loading config from somewhere else.
Look for application.yml in src/main/resources.
Look for a profile-specific file, like application-dev.properties.
Check your run configuration's VM arguments for --spring.config.location or -Dspring.profiles.active.
Recreate the file and move on.
Just had the same issue on dbt cloud. Seems like a bug to me.
I recently got this working using cookies, where you send the timestamp and MAC in separate cookies, and it works fine.
Use MAX(CASE…) with GROUP BY.
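The pattern can be sketched end to end with SQLite (a minimal illustration; the sales table and its columns are hypothetical, not from the question):

```python
import sqlite3

# Pivot rows into columns with MAX(CASE ...) + GROUP BY.
# The "sales" table and its columns are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, quarter TEXT, amount INT);
    INSERT INTO sales VALUES
        ('North', 'Q1', 100), ('North', 'Q2', 150),
        ('South', 'Q1', 80),  ('South', 'Q2', 120);
""")
rows = conn.execute("""
    SELECT region,
           MAX(CASE WHEN quarter = 'Q1' THEN amount END) AS q1,
           MAX(CASE WHEN quarter = 'Q2' THEN amount END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)  # [('North', 100, 150), ('South', 80, 120)]
```

MAX simply picks the single non-NULL value per group, which is what turns one row per quarter into one column per quarter.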
Pass handle_unknown="ignore" when constructing the encoder so categories unseen at transform time are encoded as all zeros instead of raising an error, and split your data as usual:
OneHotEncoder(handle_unknown="ignore")
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
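A stdlib-only sketch of the handle_unknown="ignore" semantics (not sklearn's actual implementation): categories unseen during fit encode as an all-zero vector instead of raising.

```python
# Mimics OneHotEncoder(handle_unknown="ignore") for a single feature:
# unknown categories become all zeros rather than an error.
def fit(values):
    return sorted(set(values))          # learn the category vocabulary

def transform(categories, value):
    return [1 if value == c else 0 for c in categories]

cats = fit(["red", "green", "blue"])    # -> ['blue', 'green', 'red']
print(transform(cats, "red"))           # [0, 0, 1]
print(transform(cats, "purple"))        # [0, 0, 0]  (unknown: ignored)
```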
When you used InMemoryUserDetailsManager, Spring Security stored not your user object itself but a UserDetails instance, which was safe and needed no serialization. With a JPA-backed authorization server, however, the objects contained in OAuth2Authorization are serialized via Jackson, and Jackson does not trust your custom user class. That leaves two approaches, I guess. The first is a Jackson mixin, like:
public abstract class UserMixin {
@JsonCreator
public UserMixin(@JsonProperty("username") String username,
@JsonProperty("password") String password) {}
}
then register that mixin in your config class. The second approach (much easier) is to add a constructor for the required fields to your class, annotate it with @JsonCreator, and annotate every parameter with @JsonProperty.
Have you tried using an SQL mock?
pgAdmin doesn't create PostgreSQL servers, it provides a GUI to access and manage them, which is why you don't see a "create server" button - such a button existed in earlier versions but was poorly named.
Server groups are folders to organise your registered connections.
To add an existing server, right click your server group of choice, then "Register > Server...", and enter the connection details.
Some installations of pgAdmin may come bundled with a PostgreSQL server too, in which case you will have likely configured this server and set the credentials during installation. Alternatively, you may want to run your server through Docker, a VPS, or a managed Postgres hosting service, then register it in pgAdmin.
I managed to come up with a solution. AWS ElastiCache doesn't seem to support localhost, so I ran the API in a Docker container so we could set up Valkey, and it works like a charm. It also didn't affect the deployed API, which is great.
Here is an alternative method of showing that this generates a uniform distribution of permutations.
Given a certain sequence of randint calls for Fisher-Yates, suppose we got the reversed sequence of calls for our modified algorithm.
The net effect is that we perform the swaps in reverse order. Since swaps are self-inverse, it follows that our modified algorithm produces the inverse permutation of the Fisher-Yates algorithm. Since every permutation has an inverse, it follows that the modified algorithm produces every permutation with equal probability.
(Incidentally, since rand() % N is not actually equiprobable (though the error is very slight for small N), this shows that the standard Fisher-Yates algorithm and the modified algorithm are equally "bad", in that their sets of permutation probabilities are identical (though this still assumes the PRNG is history-independent, which is also not quite true).)
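The reverse-order argument can be checked mechanically. In this sketch (an illustration, not the questioner's exact code) the "modified" algorithm is modeled as replaying Fisher-Yates' swap sequence backwards, and the result is verified to be the inverse permutation:

```python
import random

def fisher_yates_with_swaps(n, seed):
    # Standard Fisher-Yates: for i = n-1 .. 1, swap i with random j in [0, i].
    rng = random.Random(seed)
    swaps = [(i, rng.randint(0, i)) for i in range(n - 1, 0, -1)]
    perm = list(range(n))
    for i, j in swaps:
        perm[i], perm[j] = perm[j], perm[i]
    return perm, swaps

def replay_reversed(swaps, n):
    # The "modified" algorithm: the same transpositions, reverse order.
    perm = list(range(n))
    for i, j in reversed(swaps):
        perm[i], perm[j] = perm[j], perm[i]
    return perm

n = 8
p, swaps = fisher_yates_with_swaps(n, seed=2024)
q = replay_reversed(swaps, n)
# Transpositions are self-inverse, so reversing their order inverts
# the composite permutation: q composed with p is the identity.
assert all(q[p[k]] == k for k in range(n))
```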
I was in a similar situation where I was skeptical about sharing my code, so before sharing it I wanted to create a copy of my repo without its history. Below are the steps I followed.
Create a copy of the project in your local file system
Then go inside the project folder and manually delete the .git and the associated git folders
Now open the project in Visual Studio.
Then, from the extensions at the bottom right, add the project to a Git repository and create a new repository.
Writing up what worked for me in the end, in case it helps anyone. It's probably obvious to people familiar with Doxygen, but it wasn't to me. Many thanks to @Albert for pushing me towards bits of the Doxygen documentation that I didn't know were there!
I have a file Reference.dox and the INPUT tag in my doxyfile points to it. In it I have:
/*! \page REFDOCS Reference Documents
Comments in the code refer to the following documents:
...
*/
There are various possibilities for the "..."
1. \anchor
\par My copy of the Coding Guidelines
\anchor MISRA
\par
Hard-copy version is on Frank's desk.
This works. The Reference Documents page has the title "MISRA guidelines" and the instructions. In the documentation for the code, I get "it's coded like this to comply with rule XYZ in MISRA" and the "MISRA" is clickable. Clicking it takes me to the Reference Documents page. There, "My copy of the Coding Guidelines" is a paragraph heading in bold, and the instructions are indented below.
2. \section
\section MISRA My copy of the Coding Guidelines
Hard-copy version is on Frank's desk.
This works too. In the documentation for the code, I get "it's coded like this to comply with rule XYZ in My copy of the Coding Guidelines", and again that is a clickable link that gets me to the Reference Documents page. There, the heading "My copy..." is followed by the instruction text.
With a couple of documents on the page, I think this is a bit easier to read because I don't have \par all over the place.
There are probably other possibilities that I don't know about but these (probably the latter) will do for me.
Incidentally: if you get to https://www.doxygen.nl/manual/commands.html you can expand the "Special Commands" in the left sidebar to give a clickable list of the commands so you can get to them quickly. There is an alphabetical list of commands at the top of the page, but it's a long page so you don't see it, and the sidebar list is right there. BUT THE SIDEBAR LIST IS NOT ALPHABETICAL! When I was told about e.g. \cite it took me ages to find it because somehow I believed the list was alphabetical, and it was way off the bottom of the screen instead of being near the top of the list. When I found it, \anchor was right there too.
I would suggest you do the following:
Log in to your GitHub account.
Then go to Repositories.
Now go to Settings.
Then go down to the Danger Zone section.
There you will find the option to Delete.
If you get an error like the following from the API: the problem I faced was that I had copied the curl command from the documentation and used it as-is. The copied curl was malformed, and the data in the body should have been passed as query params instead of in the body:
{
"error": {
"code": 1,
"message": "An unknown error occurred"
}
}
Try copying the curl command into an AI (like ChatGPT) and say: "this curl is malformed and I am unable to run it in the terminal, please fix it."
so,
curl -s -X GET \
-F "metric=likes,replies" \
-F "access_token=<THREADS_ACCESS_TOKEN>"
"https://graph.threads.net/v1.0/<THREADS_MEDIA_ID>/insights"
will be converted into something like the following (which works!)
curl -s -X GET "https://graph.threads.net/v1.0/<threads_post_id>/insights?metric=likes,replies&access_token=<token>"
For me the only format that worked was "MM/DD/YYYY HH:MM" with no extra leading zeros and using a 24 hour clock, so "7/14/2022 15:00" worked but "07/14/2022 3:00 PM" did not.
In your package's pubspec.yaml:
flutter:
fonts:
- family: TwinkleStar
fonts:
- asset: assets/fonts/TwinkleStar-Regular.ttf
In your widget:
Text(
'Hello World',
style: TextStyle(
fontFamily: "packages/sdk/TwinkleStar",
fontSize: 24,
),
);
href="javascript:se(param1, param2)" means that clicking the link runs the JavaScript function se(...) instead of navigating to a URL.
se is just a normal function defined in the page's scripts.
n[0], i[0], p[0], d[0] are variables (probably arrays) whose first elements are passed as arguments.
So it's simply: link click → run se(...) with those values.
ggMarginal has groupColour = TRUE and groupFill = TRUE, which allow it to take the groups from the initial ggplot.
library(ggplot2)
library(ggExtra)
p <- ggplot(iris, aes(x = Sepal.Length, y = Sepal.Width, color = Species)) +
geom_point()
ggMarginal(p, type = "density", groupColour = TRUE, groupFill = TRUE)
If your goal is to have two-dimensional, always visible scrollbars, currently Flet doesn’t support this feature out of the box. I ran into the same limitation, so I created an extension for Flet that adds true two-dimensional scrollbars and some additional functionality. It might help in your case: Flet-Extended-Interactive-Viewer.
With normal Flet, you can have two scrollbars, but one of them only appears at the end of the other, which isn’t ideal for some layouts.
After hours of fiddling with this, I posted the question, then found a fix!
I had to SSH in to the box and edit extension.ini to add extension = php_openssl.so
After re-uploading that, it gave me the option in:
Web Station > Script Language Settings > [profile] > Edit > Extensions > openssl
I swear the option was there and was ticked before, but was now unticked. Reselected, saved, and it works....
I figured it out. I had to delete the neon directory and rerun the generation.
You will have to use the existing tags that are already available. There may be some relevant general tags for your topic.
Currently, Flet doesn’t support this feature out of the box. I ran into the same limitation, so I created an extension for Flet that adds two-dimensional scrollbars and some additional functionality. It might help in your case: Flet-Extended-Interactive-Viewer
Actually, I already have an answer that worked, so someone may find it useful.
It helped when I deleted all *.dbmdl and *.jfm files within the solution folder and subfolders, then restarted VS and rebuilt.
It seems this issue was picked up by the maintainer and fixed in version 65.11.2 (big thanks if you are seeing this!) https://github.com/pennersr/django-allauth/commit/5ef542b9004e808253f8cd9f2dbae0bb27365984
Version 48.5.0 of the .NET library is pinned to API version 2025-08-27.basil[0]. This means that every request initiated by the SDK sets that version header[1], which overrides your account's default API version.
API version 2025-03-31.basil introduced a breaking change[2] that removed the payment_intent field from the Invoice object (due to the introduction of support for multiple partial Invoice Payments).
For API versions 2025-03-31.basil and later, the first PaymentIntent's client_secret can be accessed from the Invoice object via invoice.confirmation_secret.client_secret[3]. You should use this instead of invoice.payment_intent.client_secret.
[0] https://github.com/stripe/stripe-dotnet/blob/master/CHANGELOG.md#4850---2025-08-27
cat "./--spaces in this filename--"
That's it
I am able to fetch subscription names based on parameter1:
resourcecontainers
| where type == "microsoft.resources/subscriptions"
| where name contains ({Environment})
| project name
You have to declare a string containing the call, then execute:
cExec = "call sp_change_password('" + password + "','" + id_user + "') ;"
EXECUTE IMMEDIATE :cExec ;
Another location would be Jetbrains' github repo for jdk8: https://github.com/JetBrains/jdk8u_jdk/blob/master/src/share/classes/com/sun/jndi/ldap/LdapCtx.java .
Pros: directly browse the source
Cons: bigger due to git repo if downloaded, no binaries
For the collector in deployment mode: if no log is found, you can exec into the pod and check the failure reason under /var/log/opentelemetry/dotnet.
Depending on where your application picks up the date from, this may help; I had a similar problem and solved it with:
function date { /bin/date -d "yesterday"; }
or could likely use alias.
Good day.
I have the same problem.
I have tried all the solutions in the comments above, but without result.
The image is less than 300 KB and the resolution is 1024x768 pixels.
The lengths of the title and description are within the limits given above, and the image is referenced with its full URL over HTTPS.
I have included a screenshot from my phone.
As you can see, the preview in the sharing phase is displayed correctly, but then on WhatsApp the space appears at the top for a moment and then turns white.
I tried with another site on a different server (which I did not make) and on WhatsApp you can see the preview with title, description and photos.
Could it depend on the chmod of the photo, which is 666?
Open your package manager and upgrade the App UI package to a newer version. Check the Version tab to see the latest versions. In my case the App UI version was 1.3.1 and I upgraded to 2.1.1.
I am a little bit late but I have faced the same problem trying to expose thanos-sidecar service to a remote Thanos querier.
Thanks to your help I managed to make grpcurl list work, but unfortunately on the querier side I still have this error:
Did you find a way to make it work end to end ?
I am also looking for an answer to OP question. Anything would be appreciated. :)
As other comments suggest, setting the height to 100% solves the issue but introduces many others. What I found is that it gets worse when the keyboard is opened and closed.
Another thing I noticed about my application's header, which has position: fixed but is being scrolled out of view: if I get its bounding client rect, the Y value is negative, so I tried something like this:
const handleScroll = () => {
const headerElement = document.getElementById("app-header");
const {y} = headerElement.getBoundingClientRect();
if (y !== 0) {
headerElement.style.paddingTop = `${-y}px`;
}
};
window.addEventListener("scroll", handleScroll);
The problem here is that after ~800px are scrolled down the Y value is 0 again but the element is still outside of the screen, so this fix becomes useless.
I see this issue affecting multiple pages, from their own apple website to this stack overflow page and basically every page with a fixed element on it, but I cannot find this issue being tracked by Apple. Is there any support page where this has been reported?
You can also use Column comments directly on the model property. If repeating the string is a problem for you, extract it into a common const:
const string usernameDescription = "This is the username";
[Comment(usernameDescription)]
[Description(usernameDescription)]
public string Username { get; set; }
In case there are future travelers: I found this to be because Kafka was disabled in my test configuration. Supposedly this stops the @Introduction method interceptor from being registered during application startup, which then causes this cryptic exception to be thrown.
There is at least one case when one SHOULD NOT squash: renaming files AND changing them "too much" will make git lose the link between the original file and the renamed one.
In this scenario one should:
rename the file (depending on the file type this might introduce slight changes to it, e.g. a Java file will get also changes to the class name) as one commit
do major changes to the file (e.g. refactoring) as a dedicated commit.
For code files I also prefer to separate formatting / changes from actual change so that it's directly visible what part of the logic has changed and this information would get buried within one big change when squashing (AND it makes cherry-picking much easier).
sim - find similarities in C, Java, Pascal, Modula-2, Lisp, Miranda, 8086 assembler code or in text files
Run sim_text (or the appropriate utility for code) in the directory containing the files; it outputs a diff-like report. The Debian package is similarity-tester.
Can I share the widget JSON file? Thank you.
The minimal example was not representative of the issue I had.
- The first comment was right: without using ptr_global, the example is useless for debugging. When using ptr_global, the value is set. The issue looked identical, so I expected the example to be representative. Somehow it was, but this is hard to explain.
- Nevertheless, I was confused about accessing pointers and values in "data" and "prog" memory and mixed them up unintentionally. Now I use the functions of "avr/pgmspace.h" for access.
I wanted to answer this question, but my account is new, so I was not able to answer where a similar question was originally posted.
Solution: run the following commands one by one in a terminal in your project directory:
flutter upgrade
flutter clean
flutter pub get
then attach your physical iOS device and run
You can't export like this. You need to import the component first and then export it.
I just found the solution, as explained in my comment on my original post!
The problem was that one of the IPs used by my server by default had been temporarily blacklisted (because shared through many clients) on the platform where I deployed my backend (Render).
So I just added a specific IP rolling system for those export requests and now it's working perfectly. Maybe try that out, or at least check your app's IP status!
This feature is not yet supported; see here.
I had the same problem and found this workaround by chance.
The guest Windows 11 got Internet connection by these two steps (inside the guest vm):
Edit the classic properties of IP version 4
Set DNS to the IP address of the router
It was related with this bug, a space instead of empty string in the back bar button for title did solve the problem: https://stackoverflow.com/questions/77764576/setting-backbuttondisplaymode-to-minimal-breaks-large-title-animation#:~:text=So%20we%20think,button%20title%20present%20at%20all
Maybe I should have asked sooner.
var invoiceService = new InvoiceService();
var invoice = await invoiceService.GetAsync(subscription.LatestInvoice.Id, new InvoiceGetOptions
{
Expand = ["payments.data.payment.payment_intent"]
});
var clientSecret = invoice.Payments.Data.FirstOrDefault()?.Payment?.PaymentIntent?.ClientSecret;
This was my solution. If anybody from Stripe sees this: can you provide a better answer? And maybe update the C# examples; they are written for .NET 6 while we are getting .NET 10, and you could also use minimal APIs now to mimic the Node style.
If you are having problems compiling a submodule (for example simple_knn) when using CUDA 11.8 and Visual Studio 2022, the issue is usually caused by an unsupported MSVC compiler version.
CUDA 11.8 officially supports MSVC 14.29 (VS2019) up to MSVC 14.34 (early VS2022). Newer compilers like MSVC 14.43 are not recognized and will trigger build errors.
SOLUTION :
Open the Visual Studio Installer.
Under Visual Studio 2022, click Modify.
Go to the Individual components tab.
Search for: MSVC v143 - VS 2022 C++ x64/x86 build tools (14.34)
Install it
Re-run pip install submodules/diff-gaussian-rasterization
If you don't mind the temporary working-tree change:
git stash && git stash apply
(Note the &&; a single & on a Unix shell would run git stash in the background instead of sequencing the two commands.)
It is 2025, we are awaiting Windows 12, and still we have applications that use the Video for Windows API and must be maintained because they "have worked admirably for many years". So I was tasked with writing a VfW codec for a novel video compression format.
To help developers like me master this relic technology, Microsoft supplies full reference documentation on the Video for Windows API. The section Using the Video for Windows, subsection Compressing Data, gives a detailed account of how to compress the input data but stops short of teaching how to write the compressed data to the AVI file. To rule out possible errors in my VfW codec, I tried to make an AVI file with RLE-compressed data, but equally failed: in every frame, the count of bytes written by AVIStreamWrite (returned in the plBytesWritten parameter) was a fixed value greater than the dwFrameSize value returned by the ICCompress call, which I pass to the AVIStreamWrite call as the cbBuffer parameter. An Internet search on this problem immediately presented me with a reference to the SO post Is it possible to encode using the MRLE codec on Video for Windows under Windows 8? by David Heffernan. This post immediately solved my problem:
We do still need to create the compressed stream, but we no longer write to it. Instead we write RLE8 encoded data to the raw stream.
As this SO question-and-answer stops short of writing a real RLE8 encoder ("Obviously in a real application, you'd need to write a real RLE8 encoder, but this proves the point"), and being grateful for this helpful Q&A, I post a code excerpt that uses a real RLE8 encoder:
unsigned char* bits = new unsigned char[bmi->biSizeImage];
LPVOID lpInput = (LPVOID)bits;
HRESULT hr;
for (int frame = 0; frame < nframes; frame++)
{
for (int i = 0; i < bmi->biSizeImage; ++i)
bits[i] = (frame + 1) * ((i + 5) / 5);
ICCompress(hIC, 0, lpbiOut, lpOutput, lpbiIn, lpInput,
&dwCkID, &dwCompFlags, frame, bmi->biSizeImage, dwQuality, NULL, NULL);
hr = AVIStreamWrite(pStream, frame, 1, lpOutput, lpbiOut->biSizeImage,
AVIIF_KEYFRAME, &lSamplesWritten, &lBytesWritten);
if (hr != S_OK)
{
std::cout << "AVIStreamWrite failed" << std::endl;
return 1;
}
}
I'm going to propose replacing the comment line `// Write compressed data to the AVI file` in the Using the Video for Windows subsection Compressing Data with this code sample as soon as possible. For completeness, here I attach the full code sample showing how to write compressed data to the AVI file:
// runlength_encoding.cpp : This file contains the 'main' function.
// Program execution begins and ends there.
// based on learn.microsoft.com articles on Using the Video Compression Manager
// and SO post https://stackoverflow.com/questions/22765194/
// also see
// https://learn.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader
// why bmi size should be augmented by the color table size
// `However, some legacy components might assume that a color table is present.
// `Therefore, if you are allocating
// `a BITMAPINFOHEADER structure, it is recommended to allocate space for a color table
// `when the bit depth is 8 bpp or less, even if the color table is not used.`
//
#include <Windows.h>
#include <vfw.h>
#include <stdlib.h>
#include <iostream>
#pragma comment(lib, "vfw32.lib")
int main()
{
RECT frame = { 0, 0, 64, 8 };
int nframes = 10;
const char* filename = "rlenc.avi";
FILE* f;
errno_t err = fopen_s(&f, filename, "wb");
if (err)
{
printf("couldn't open file for write\n");
return 0;
}
fclose(f);
AVIFileInit();
IAVIFile* pFile;
if (AVIFileOpenA(&pFile, filename, OF_CREATE | OF_WRITE, NULL) != 0)
{
std::cout << "AVIFileOpen failed" << std::endl;
return 1;
}
AVISTREAMINFO si = { 0 };
si.fccType = streamtypeVIDEO;
si.fccHandler = mmioFOURCC('M', 'R', 'L', 'E');
si.dwScale = 1;
si.dwRate = 15;
si.dwQuality = (DWORD)-1;
si.rcFrame = frame;
IAVIStream* pStream;
if (AVIFileCreateStream(pFile, &pStream, &si) != 0)
{
std::cout << "AVIFileCreateStream failed" << std::endl;
return 1;
}
AVICOMPRESSOPTIONS co = { 0 };
co.fccType = si.fccType;
co.fccHandler = si.fccHandler;
co.dwQuality = si.dwQuality;
IAVIStream* pCompressedStream;
if (AVIMakeCompressedStream(&pCompressedStream, pStream, &co, NULL) != 0)
{
std::cout << "AVIMakeCompressedStream failed" << std::endl;
return 1;
}
BITMAPINFOHEADER bihIn, bihOut;
HIC hIC;
bihIn.biSize = bihOut.biSize = sizeof(BITMAPINFOHEADER);
bihIn.biWidth = bihOut.biWidth = si.rcFrame.right;
bihIn.biHeight = bihOut.biHeight = si.rcFrame.bottom;
bihIn.biPlanes = bihOut.biPlanes = 1;
bihIn.biCompression = BI_RGB; // standard RGB bitmap for input
bihOut.biCompression = BI_RLE8; // 8-bit RLE for output format
bihIn.biBitCount = bihOut.biBitCount = 8; // 8 bits-per-pixel format
bihIn.biSizeImage = bihIn.biWidth * bihIn.biHeight;
bihOut.biSizeImage = 0;
bihIn.biXPelsPerMeter = bihIn.biYPelsPerMeter =
bihOut.biXPelsPerMeter = bihOut.biYPelsPerMeter = 0;
bihIn.biClrUsed = bihIn.biClrImportant =
bihOut.biClrUsed = bihOut.biClrImportant = 256;
hIC = ICLocate(ICTYPE_VIDEO, 0L,
(LPBITMAPINFOHEADER)&bihIn,
(LPBITMAPINFOHEADER)&bihOut, ICMODE_COMPRESS);
ICINFO ICInfo;
ICGetInfo(hIC, &ICInfo, sizeof(ICInfo));
DWORD dwKeyFrameRate, dwQuality;
dwKeyFrameRate = ICGetDefaultKeyFrameRate(hIC);
dwQuality = ICGetDefaultQuality(hIC);
LPBITMAPINFOHEADER lpbiIn, lpbiOut;
lpbiIn = &bihIn;
DWORD dwFormatSize = ICCompressGetFormatSize(hIC, lpbiIn);
HGLOBAL h = GlobalAlloc(GHND, dwFormatSize);
lpbiOut = (LPBITMAPINFOHEADER)GlobalLock(h);
ICCompressGetFormat(hIC, lpbiIn, lpbiOut);
LPVOID lpOutput = 0;
DWORD dwCompressBufferSize = 0;
if (ICCompressQuery(hIC, lpbiIn, lpbiOut) == ICERR_OK)
{
// Find the worst-case buffer size.
dwCompressBufferSize = ICCompressGetSize(hIC, lpbiIn, lpbiOut);
// Allocate a buffer and get lpOutput to point to it.
h = GlobalAlloc(GHND, dwCompressBufferSize);
lpOutput = (LPVOID)GlobalLock(h);
}
DWORD dwCkID;
DWORD dwCompFlags = AVIIF_KEYFRAME;
LONG lNumFrames = 15, lFrameNum = 0;
LONG lSamplesWritten = 0;
LONG lBytesWritten = 0;
size_t bmiSize = sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD);
BITMAPINFOHEADER* bmi = (BITMAPINFOHEADER*)malloc(bmiSize);
ZeroMemory(bmi, bmiSize);
bmi->biSize = sizeof(BITMAPINFOHEADER);
bmi->biWidth = si.rcFrame.right;
bmi->biHeight = si.rcFrame.bottom;
bmi->biPlanes = 1;
bmi->biBitCount = 8;
bmi->biCompression = BI_RGB;
bmi->biSizeImage = bmi->biWidth * bmi->biHeight;
if (AVIStreamSetFormat(pCompressedStream, 0, bmi, bmiSize) != 0)
{
std::cout << "AVIStreamSetFormat failed" << std::endl;
return 1;
}
unsigned char* bits = new unsigned char[bmi->biSizeImage];
LPVOID lpInput = (LPVOID)bits;
HRESULT hr;
for (int frame = 0; frame < nframes; frame++)
{
for (int i = 0; i < bmi->biSizeImage; ++i)
bits[i] = (frame + 1) * ((i + 5) / 5);
ICCompress(hIC, 0, lpbiOut, lpOutput, lpbiIn, lpInput,
&dwCkID, &dwCompFlags, frame, bmi->biSizeImage, dwQuality, NULL, NULL);
hr = AVIStreamWrite(pStream, frame, 1, lpOutput, lpbiOut->biSizeImage,
AVIIF_KEYFRAME, &lSamplesWritten, &lBytesWritten);
if (hr != S_OK)
{
std::cout << "AVIStreamWrite failed" << std::endl;
return 1;
}
}
if (AVIStreamRelease(pCompressedStream) != 0 || AVIStreamRelease(pStream) != 0)
{
std::cout << "AVIStreamRelease failed" << std::endl;
return 1;
}
if (AVIFileRelease(pFile) != 0)
{
std::cout << "AVIFileRelease failed" << std::endl;
return 1;
}
std::cout << "Succeeded" << std::endl;
return 0;
}
The given solutions are wrong, as they will match the following and produce a wrong result:
ABCD
ABCDE
They will duly delete both ABCD lines and leave the E.
The correct solution is (obviously, first sort the whole file alphabetically):
^(.*)(\R\1)+\R
and replace with blank (i.e. nothing)
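The pattern can be demonstrated in Python (a sketch: \R is Notepad++/PCRE syntax for any line break, so plain \n stands in for it here):

```python
import re

# The ^ anchor plus the \n boundaries force \1 to match whole lines
# only, so "ABCD" cannot match against the prefix of "ABCDE".
text = "ABCD\nABCD\nABCDE\nXYZ\n"

# Replace with nothing, as in the answer: every copy of a duplicated
# line is deleted, while ABCDE survives untouched.
result = re.sub(r"^(.*)(\n\1)+\n", "", text, flags=re.MULTILINE)
print(repr(result))   # 'ABCDE\nXYZ\n'

# Variant: replace with \1 plus a newline to keep one copy instead.
kept = re.sub(r"^(.*)(\n\1)+\n", r"\1\n", text, flags=re.MULTILINE)
print(repr(kept))     # 'ABCD\nABCDE\nXYZ\n'
```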
Here are the missing files for your Botanic Bazar e‑commerce website. Copy each one into its own file ⬇️
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Botanic Bazar</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/main.jsx"></script>
</body>
</html>
package.json
{
"name": "botanic-bazar",
"version": "1.0.0",
"private": true,
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"lucide-react": "^0.452.0",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"@vitejs/plugin-react": "^4.2.1",
"tailwindcss": "^3.4.0",
"vite": "^5.2.0"
}
}
main.jsx
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";
import "./index.css";
ReactDOM.createRoot(document.getElementById("root")).render(
<React.StrictMode>
<App />
</React.StrictMode>
);
index.css (Tailwind setup)
@tailwind base;
@tailwind components;
@tailwind utilities;
body {
font-family: sans-serif;
}
👉 What to do next:
Put all the files in one folder (e.g. botanic-bazar).
Open a terminal and run:
npm install
npm run dev
Then open http://localhost:5173 in your browser and the site will be running ✅
I faced the same issue today. Steps to solve:
Open a new VS Code window
Disable and remove the WSL extension from Visual Studio Code
Uncheck auto-update for the WSL extension
Click on the settings gear and install the older version
Fixed!
I had maybe similar SqlBuildTask failures, without detailed errors, on VS2019 after renaming my projects and folders, so this may not be the same case, but...
For me it helped to delete all *.dbmdl and *.jfm files within the solution folder and subfolders, then restart VS and rebuild.
😡 This is not secure:
"SmtpSettings": {
"Host": "smtp.office365.com",
"Port": 111,
"Username": "[email protected]",
"Password": "mymymy123*123*123",
"EnableSsl": true
}
😇 This is more secure:
"SmtpSettings": {
"Host": "smtp.office365.com",
"Port": 111,
"Username": "[email protected]",
"Password": "hashedPassword(like: 7uhjk43c356xrer1)",
"EnableSsl": true
}
You should not put critical data like passwords (or even usernames) in your config files, whether that's a Dockerfile or appsettings.json. You should not.
Store encrypted values instead (or better, use a secret store or environment variables). When you read the config, decrypt the stored value back to the raw password. Note that a one-way hash won't work here, since SMTP needs the actual password.
✍️ See this: https://stackoverflow.com/a/10177020/19262548
We’re seeing this issue in both our Xamarin and .NET MAUI 8 apps; upgrading to .NET MAUI 9 looks like the only option.
On the plus side, you can request an extension until end of May 2026. Simply click the 16 KB support notification and select “Request additional time.”
My solution:
1. PowerShell > docker network ls
2. PowerShell > docker network inspect <network_name>
3. In Grafana, set the server address to the container's IPv4Address from the inspect output (e.g. '172.17.0.3' for the ClickHouse container).
Just spent way too much time trying to figure out this simple problem, so I will share my answer here to save the next person time. When you run a container from Visual Studio (using the green play button that says "Container (Dockerfile)"), it means that Visual Studio is running that container for you.
In order to change the arguments in that run command, click on the drop-down menu attached to that green arrow button, then select {YOUR PROJECT NAME} Debug Properties. A menu should pop up, and under the Docker launch profile you will find Container run arguments. Any help you find online that requires changing the run command can be applied by adding the arguments there. Hope this helps someone!
The error happens because the original faker
package on npm is no longer maintained and is essentially empty.
To fix this, you need to switch to the community-maintained fork @faker-js/faker
.
First, remove the old package and install the maintained fork:
npm uninstall faker
npm install @faker-js/faker
Then import from the scoped package:
import { faker } from "@faker-js/faker";
console.log(faker.commerce.productName());
Handling document attachments in Business Central, especially via the /documentAttachments
endpoint, can be unexpectedly fragile. That “Read called with an open stream or textreader” error usually points to how the file stream is being processed before the API call. Even if you’re encoding the file to base64, the platform may still interpret the stream as open if it hasn’t been fully resolved or finalized.
Your current approach using arrayBuffer → Uint8Array → binaryString → btoa()
is technically valid, but Axios doesn’t always guarantee that the stream is closed in a way Business Central expects. This is especially true when working with binary content in browser environments.
One workaround that’s proven reliable is exposing Page 30080 (Document Attachments) as a custom API page. This lets you bypass the stream error entirely and gives you full control over how attachments are linked to Sales Orders. You can publish it via Web Services and POST directly to it.
Another route is using Power Automate’s HTTP connector instead of the native Business Central connector. It avoids some of the serialization quirks and lets you send the payload in a more predictable format. If you’re open to external storage, uploading the file to Azure Blob or OneDrive first and then linking it in Business Central is also a clean workaround.
Lastly, if you’re sticking with base64, try using FileReader.readAsDataURL()
instead of manually building the binary string. It ensures the stream is closed and the encoding is padded correctly. Just strip the prefix (data:application/pdf;base64,
) before sending.
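A sketch of that readAsDataURL route (FileReader is browser-only, so the prefix-stripping is pulled into a pure helper; the sendToBusinessCentral call in the comment is a placeholder for your POST):

```javascript
// Pure helper: keep only the base64 payload of a data URL,
// dropping the "data:application/pdf;base64," style prefix.
function stripDataUrlPrefix(dataUrl) {
  return dataUrl.slice(dataUrl.indexOf(",") + 1);
}

// Browser wiring sketch:
// const reader = new FileReader();
// reader.onload = () => sendToBusinessCentral(stripDataUrlPrefix(reader.result));
// reader.readAsDataURL(file);
```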
Helpful Reference
Josh Anglesea – Business Central Attachments via Standard API
Microsoft Docs – Attachments API Reference
Saurav Dhyani – Base64 Conversion in Business Central
Power Platform Community – HTTP Connector Workaround
Dynamics Community – Stream Error Discussion
Cheers
Jeffrey
I'm not sure this answer is that useful, but I've seen <iframe> tags with HTML embedded inside them. E.g. on this page: https://www.dashlane.com/es, look for the container that includes the radar. That is more a case of the page loading another document than of putting an <html> inside another one, but I just remembered that case.
A bit hacky, but you could just add a "\n" at the beginning in a custom toString() method of the class:
@Override
public String toString() {
return "\nPerson { name = " + this.name + " }";
}
I know it's a bit late, but:
It seems that Apple does not support MIFARE Classic.
However, the only information about this online is:
https://developer.apple.com/videos/play/wwdc2019/715/?time=744 (minute 12)
If you look at the Apple Core NFC NFCMiFareTag documentation https://developer.apple.com/documentation/corenfc/nfcmifaretag, MIFARE Classic is not included in the list:
MIFARE
MIFARE DESFire
MIFARE Ultralight
MIFARE Plus
There are several ways to deploy a Spring application on AWS, such as using EC2 or Elastic Beanstalk. Often, Nginx is used as a reverse proxy to forward requests to your Spring Boot app running on the instance. It helps with load balancing, handling HTTPS, caching, and improving performance.
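For reference, a minimal Nginx server block proxying to a Spring Boot app might look like the sketch below (the domain is a placeholder, and 8080 is assumed as Spring Boot's default port):

```nginx
server {
    listen 80;
    server_name example.com;  # your domain

    location / {
        proxy_pass http://127.0.0.1:8080;  # the Spring Boot app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```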
In my step-by-step video, I cover:
✔ Packaging the Spring Boot application
✔ Launching and configuring an EC2 instance
✔ Installing Java and running the application
✔ Setting up Nginx as a reverse proxy
✔ Configuring SSL with Let's Encrypt
✔ Setting security groups and permissions
If you’re looking for a complete practical example from setup to deployment, check it out here:
YouTube – Deploying Spring Boot with Nginx on AWS
position: sticky
anchors to the nearest scrollable ancestor, and unfortunately, CSS doesn't currently provide a way to override that behavior or explicitly set which scroll context a sticky element should use.
In your example, once .subtable
has overflow-y: auto
, it becomes the scroll container for its children. That means any sticky element inside it will stick relative to .subtable
, not .main
, even if .main
is the scroll context you want.
There’s no native CSS way to “skip” a scroll container and anchor stickiness to a higher-level ancestor. Sticky positioning is strictly scoped to the first scrollable ancestor with a defined height and overflow.
Workarounds
Here are a few approaches developers use to work around this limitation:
Avoid nested scroll containers. If you can restructure your layout so that .main is the only scrollable ancestor, sticky behavior will work as expected across all nested elements.
Move sticky elements outside the scrollable container. You can place the sticky cell outside .subtable and use CSS or JavaScript to visually align it. This keeps it anchored to .main, but may require layout adjustments.
Simulate sticky behavior with JavaScript. You can listen to scroll events on .main and manually adjust the position of the nested sticky cell. This gives you full control over the scroll context, though it's more complex and less performant than native sticky positioning.
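A rough sketch of that JavaScript route (the .main and .sticky-cell selectors are assumptions standing in for the question's markup; the offset math is pulled into a pure helper):

```javascript
// How far must the cell be translated so it appears pinned
// to the top of the outer scroll container?
function stickyOffset(containerScrollTop, cellNaturalTop) {
  return Math.max(0, containerScrollTop - cellNaturalTop);
}

// Browser wiring sketch (selectors are assumptions):
// const main = document.querySelector(".main");
// const cell = document.querySelector(".sticky-cell");
// main.addEventListener("scroll", () => {
//   cell.style.transform =
//     `translateY(${stickyOffset(main.scrollTop, cell.offsetTop)}px)`;
// });
```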
Helpful References
Troubleshooting CSS sticky positioning – LogRocket
CSS Tricks: Dealing with overflow and position: sticky
Cheers
Jeffrey
I also had a similar issue when installing notebook:
ERROR: Could not find a version that satisfies the requirement puccinialin (from versions: none)
ERROR: No matching distribution found for puccinialin
My solution was to update Python from 3.8 to the latest version.
It seems that puccinialin needs 3.9 or higher.
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
GET /modules/ HTTP/1.1" 200
It appears that the iOS project files (Runner.xcodeproj) are missing from your cloned repository. 🚨 Verify that the repository really contains the ios/ folder; occasionally .gitignore excludes it. If it's absent, run flutter create . inside the project to regenerate the platform files, then launch the application again. That should resolve it. ✅
Let me clarify a couple of things.
There are no two-dimensional arrays in C (at least, not as they are in other languages). A char array[10][20]
is just an array of arrays. That is, an array with 10 elements, where each element (ie. array[x]
) is an array of chars (ie. a pointer to the first char in a continuous bunch of 20 chars). A char array[10*20]
is a different thing: a single array of 10x20=200 elements where each element (ie. array[i]
) is a char, and you can access any element using the formula array[y*iwid+x]
(in your code, x is your column; y is your row) (assuming you store all columns of row 0, then all columns of row 1, etc).
"If I pass the original array, image[iwid][ihei] the resulting saved image is garbled." That's because the function stbi_write_png()
needs a one-dimensional array of chars, with one char per component (aka channel) per pixel. Your "small images" code is just copying the array of arrays of pixels (image[iwid][ihei]
) into a one dimensional img[iwid*ihei*icha]
array of chars (where each pixel is icha=4
chars wide, one char per RGBA component) (image[x][y].rgba[c]
is copied into img[(y*iwid+x)*4+c]
). You must do this.
You didn't specify the "compile time error" you're getting for "big" arrays. But the problem may be related to allocating too much space on the stack (as local variables are allocated on the stack). You may need to allocate the array on the heap: unsigned char *img = malloc (iwid*ihei*icha)
. Don't forget to check for the return of malloc()
and to free (img)
when you're done. You may also use a vector
(as @Pepijn said).
"How does stbi know the end of a dynamically allocated array?". You just pass a pointer to the first element of the one-dimensional array (statically or dinammically allocated, it's the same). The pixels are arranged one by one, first all "columns" of the same row, then all "columns" of the second row, etc. Total number of pixels is iwid*ihei
. Total number of bytes is iwid*ihei*icha
(where icha is 4).
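The indexing formula described above can be sketched as a tiny helper (JavaScript here just to illustrate the arithmetic; the C expression is identical):

```javascript
// Row-major index of channel c of pixel (x, y) in a flat buffer,
// for an image iwid pixels wide with icha channels per pixel.
function flatIndex(x, y, c, iwid, icha) {
  return (y * iwid + x) * icha + c;
}
```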
Hi, according to the Android documentation they allow more than one category; refer to the "supported app categories" section on this page.
It was a threading issue. I called the function on a background thread and it didn't work; it should be called on the UI thread (note runOnUiThread takes a Runnable directly, no need to wrap it in a Thread):
runOnUiThread(new Runnable() {
    @Override
    public void run() {
        signalsViewModel.updateSignals(booleen);
    }
});
S3 recently introduced live inventory support, see https://aws.amazon.com/blogs/aws/amazon-s3-metadata-now-supports-metadata-for-all-your-s3-objects/
Google removed App Actions. BIIs like actions.intent.GET_THING no longer work.
What you can still do:
"Hey Google, open TestAppDemo" -> opens the app's main screen.
If you want the "groceries" screen -> you need a deep link or shortcut. Example: testappdemo://groceries.
But Google Assistant no longer passes words like "groceries" to your app. It only opens the app or deep links.
So the answer is: no, you cannot do "Hey Google, open groceries from TestAppDemo" directly. Only the deep link + shortcut workaround.
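For completeness, a deep link like testappdemo://groceries is declared with an intent filter on the target activity in AndroidManifest.xml (the activity name below is an assumption):

```xml
<activity android:name=".GroceriesActivity" android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <!-- matches testappdemo://groceries -->
        <data android:scheme="testappdemo" android:host="groceries" />
    </intent-filter>
</activity>
```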
The same issue happens from time to time in Visual Studio 2022. I just open the web.config, save it, and close the editor tab.
This fixes the issue for me.
After many frustrating hours, I realized that serializing the presigned URL using Gson and then printing the resulting JSON was encoding certain characters. For example, this is part of the presigned URL prior to using Gson and printing it in the logs. I was able to use this presigned URL to upload a file successfully:
https://my-bucket.s3.amazonaws.com/f0329e43-c5ee-4151-87c5-c6736b5c7242?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250912T023806Z...
And this is how it was encoded after using Gson and printing it to the logs:
{
"statusCode": 201,
"headers": {},
"body": "{\n \"id\": \"3da30011-8c20-4463-9f59-a31033276d0e\",\n \"version\": 0,\n \"presignedUrl\": \"https://my-bucket.s3.amazonaws.com/3da30011-8c20-4463-9f59-a31033276d0e?X-Amz-Algorithm\\u003dAWS4-HMAC-SHA256\\u0026X-Amz-Date\\u003d20250912T023806Z...\"\n}"
}
Beginner's mistake, since I'm new to presigned URLs and didn't know to look for this. (The escaping comes from Gson's default HTML escaping of characters like = and &; building the instance with new GsonBuilder().disableHtmlEscaping().create() avoids it.)
Try running the command like this:
run sts-dynamic-incremental -m .. -t ..
It doesn't fully do EER, but as of 2025 ERDPlus seems to be the tool with the most EER features; that's still not many of them, so you'll probably find yourself in MS Paint drawing in the missing features.
In views, make sure you are not enabling fitsSystemWindows.
Remove this line of code:
android:fitsSystemWindows="true"
Or replace with android:fitsSystemWindows="false"
I have not tried this yet. Please tell me if this works
I believe that pandas may not be the answer. Perhaps you could use the native csv module?
You could use this code:
import csv
with open('databooks.csv', 'r', newline='') as data:
    csvreader = csv.reader(data)  # creates an iterator that returns rows as lists
    # If your CSV has a header, you can skip it or store it:
    header = next(csvreader)
    for row in csvreader:
        print(row)  # each row is a list of strings
Zoho Writer Sign's API doesn't accept recipient_phonenumber and recipient_countrycode as top-level keys.
To enable Email + SMS delivery, the recipient details must be passed inside the delivery_type (and, for verification, verification_info) objects of each recipient.
signer_data = [ JSON Array of recipients ]
[
{
"recipient_1": "[email protected]",
"recipient_name": "John",
"action_type": "sign", // approve | sign | view | in_person_sign
"language": "en",
"delivery_type": [
{
"type": "sms", // sms | whatsapp | email
"countrycode": "+91",
"phonenumber": "9876543210"
}
],
"private_notes": "Hey! Please sign this document"
},
{
"recipient_2": "[email protected]",
"recipient_name": "Jack",
"action_type": "in_person_sign",
"in_person_signer_info": {
"email": "[email protected]", // Optional, required only when verification_info.type = "email"
"name": "Tommy"
},
"language": "en",
"verification_info": {
"type": "email" // email | sms | offline | whatsapp
}
}
]
Key Points
The top-level parameter name is always signer_data.
It must be a JSON Array ([ ... ]) of recipient objects.
recipient_1, recipient_2, … are the unique keys used to identify each signer.
action_type defines what the recipient does (sign, approve, view, in_person_sign).
delivery_type is an array of objects:
type: email, sms, or whatsapp
For SMS/WhatsApp → include countrycode and phonenumber.
verification_info controls how the signer is verified (email, sms, offline, whatsapp).
in_person_signer_info is required for in_person_sign action types.