Use a custom header component, and specify it in the templates property in the dialog configuration. Then you can style the header in the header component itself!
Example dialog config templates usage
{
templates: {
header: Header,
footer: Footer,
},
}
Working Example
Use dynamic names and prefix namespaces to map one string to another:
env {
arch_32: i386
arch_64: x86_64
}
...
${{ env[ format('arch_{0}', machine_architecture) ] }}
In case you haven't found a solution yet: what worked for me was setting up the proper issuer in the Key Manager.
Go to the admin portal of the API Manager, open Key Stores, and set the issuer to the same value as the iss claim in your token: "iss": "nagah.local:8443/realms/ScrapDev"
Yes - but Django doesn't expose ssl-mode directly; you must pass it through the MySQL driver (mysqlclient or PyMySQL) using OPTIONS.
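As a minimal sketch (database name, credentials, host, and certificate path are all placeholders), the SSL settings go in the OPTIONS dict of the DATABASES entry, which Django hands to the driver unchanged:

```python
# settings.py (sketch; all values below are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "db.example.com",
        "PORT": "3306",
        # OPTIONS is passed straight through to the MySQL driver's connect()
        "OPTIONS": {
            "ssl": {"ca": "/etc/ssl/certs/mysql-ca.pem"},
        },
    },
}
```

Check your driver's connect() documentation for the exact keys it accepts; mysqlclient and PyMySQL both take an ssl dict, but the supported sub-keys differ between versions.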
I fixed the issue by importing GDAL first, which establishes its C-level environment in a clean state. Qt, being a more comprehensive and robust application framework, is designed to initialize successfully around the existing state of core libraries like GDAL, which prevents the fatal conflict.
Bumping this thread because Raymond Chen wrote a blog post about it. In short: undefined behavior is a runtime concept, whereas an ill-formed program is a program that breaks one of the rules for how programs are written.
The difference between undefined behavior and ill-formed C++ programs
Use three ForeignKey fields on the Task model, and use related_name to distinguish the creator, assigner, and verifier.
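A sketch of what that could look like (field and related_name choices are my own, not from the question; this assumes a configured Django project):

```python
# Hypothetical sketch: three foreign keys to the user model, each with
# a distinct related_name so the reverse accessors don't clash.
from django.conf import settings
from django.db import models

class Task(models.Model):
    creator = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="created_tasks")
    assigner = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="assigned_tasks")
    verifier = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="verified_tasks")
```

You can then query each role from the user side: user.created_tasks.all(), user.assigned_tasks.all(), user.verified_tasks.all().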
It would save us all a lot of guesswork if the Mosquitto devs would add the filename to the error message. For TLS there are 3-4 files involved, so the message is rather cryptic.
How do you create a new sender address that's unique and assigns the amounts to it in the live blockchain without going via the banking services provider?
I don't know how to get my ai development tool to insert the transaction values onto the live raw broadcast on blockcypher.
The problem was case sensitivity; I changed grpc to gRPC.
In ECOM:
net.devh.boot.grpc.client.nameresolver.discovery-client.service-metadata-keys=gRPC_port
In NOTIFICATION-SERVICE:
eureka.instance.metadata-map.gRPC_port=${grpc.server.port}
laravelcollective/html has not been updated for Laravel 11, but there’s a community-maintained fork that continues support for the latest Laravel versions.
You can safely use:
"rdx/laravelcollective-html": "^6.7"
This package is a drop-in replacement for the original laravelcollective/html package and fully supports Laravel 11+.
composer require rdx/laravelcollective-html
You don’t need to change your existing code. The namespace and aliases remain the same:
In config/app.php:
'providers' => [
Collective\Html\HtmlServiceProvider::class,
],
'aliases' => [
'Form' => Collective\Html\FormFacade::class,
'Html' => Collective\Html\HtmlFacade::class,
],
In your Blade files:
{!! Form::open(['route' => 'users.store']) !!}
{!! Form::label('name', 'Name:') !!}
{!! Form::text('name') !!}
{!! Form::close() !!}
You can wrap the member in a union.
#include <iostream>
using std::cout;
struct A {
A() { cout << "A's constructor was called!\n"; }
};
struct B {
union { A member; };
B() {
cout << "B's constructor was called!\n";
}
};
int main() {
B b; // will only print "B's constructor was called!"
}
member will not have its constructor or destructor called.
If you want multiple members to be uninitialized, create a separate union for each of them. So, for example, if you wanted B to hold additional members of any types C, D, E, you'd declare it as such:
struct B {
union { A a; };
union { C c; };
union { D d; };
union { E e; };
// ... and so on
B() { cout << "B's constructor was called!\n"; }
};
It's also ok to have multiple members of the same type, just make sure each member is in its own union:
struct B {
union { A x; };
union { A y; };
union { A z; };
// ... and so on
B() { cout << "B's constructor was called!\n"; }
};
Whenever you want to manually initialize any members declared like this, you may do so using the placement-new syntax. The placement-new syntax looks like this: new (&variable) Type(arguments to the constructor); This might look a little weird so I will show an example below.
Going back to the simple case, here's an example where B has a method initializeA to initialize member, using the placement-new syntax.
#include <iostream>
using std::cout;
struct A {
A() { cout << "A's constructor was called!\n"; }
};
struct B {
union { A member; };
B() {
cout << "B's constructor was called!\n";
}
void initializeA() {
new (&member) A();
}
};
int main() {
B b; // will only print "B's constructor was called!"
b.initializeA(); // will print "A's constructor was called!"
}
The placement-new syntax allows you to pass arguments to the constructor.
Finally if you have initialized some member, it's important to remember that you have to do the work of calling the destructor yourself!
Here's an example where B calls the destructor of A to destroy it.
#include <iostream>
using std::cout;
struct A {
A() { cout << "A's constructor was called!\n"; }
~A() { cout << "A's destructor was called!\n"; }
};
struct B {
union { A member; };
B() {
cout << "B's constructor was called!\n";
}
void initializeA() {
new (&member) A();
}
void destroyA() {
member.~A(); // explicitly invoke the destructor on the union member
}
};
int main() {
B b; // will only print "B's constructor was called!"
b.initializeA(); // will print "A's constructor was called!"
b.destroyA(); // will print "A's destructor was called!"
}
But this is a little error-prone since you should only call the destructor of A if you have already initialized A. Perhaps B would have a variable to keep track of this, making it a little safer. And put it in the destructor of B so that we won't forget.
#include <iostream>
using std::cout;
struct A {
A() { cout << "A's constructor was called!\n"; }
~A() { cout << "A's destructor was called!\n"; }
};
struct B {
union { A member; };
bool initialized;
B(): initialized(false) {
cout << "B's constructor was called!\n";
}
void initializeA() {
// constructor should only be called if A is uninitialized
if (!initialized) {
new (&member) A();
initialized = true;
}
}
void destroyA() {
// the destructor should only be called if A is *initialized*
if (initialized) {
member.~A();
initialized = false;
}
}
~B() {
destroyA();
}
};
int main() {
B b; // will only print "B's constructor was called!"
b.initializeA(); // will print "A's constructor was called!"
} // will print "A's destructor was called!" in B's destructor
I hope this was helpful. It's interesting how much you can do with the control C++ gives you over things like this.
It's giving a version conflict error
...
"version_conflicts" : 1,
...
because some update happened at the same time you were trying to delete the documents. Try sending refresh=true in delete_by_query; that will make sure the index is refreshed just before it tries to delete docs, and will reduce your chances of getting version conflicts.
If it still happens and you want a more robust solution, you can write some code to retry delete_by_query 3 (or more) times by catching ConflictError. That will work similarly to how retry_on_conflict works in the case of _update calls.
You can check out this doc: https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-delete-by-query
PS: From personal experience, you can't completely get rid of version conflict issues (it's just something you've got to make peace with when working with Elastic), but you can reduce them by using things like refresh or retry_on_conflict, etc.
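A hedged sketch of the retry idea described above (the retry count, delay, and exception type are placeholders; with the official Python client you would wrap es.delete_by_query in the callable and pass elasticsearch.ConflictError):

```python
import time

def retry_on_conflict(operation, retries=3, delay=1.0,
                      conflict_exc=Exception):
    """Run `operation`, retrying when it raises a conflict error.

    `operation` is any zero-argument callable, e.g. a lambda wrapping
    a delete_by_query call; `conflict_exc` is the exception class to
    retry on (e.g. elasticsearch.ConflictError).
    """
    for attempt in range(retries):
        try:
            return operation()
        except conflict_exc:
            if attempt == retries - 1:
                raise  # out of retries: let the conflict propagate
            time.sleep(delay)
```

Usage would look like retry_on_conflict(lambda: es.delete_by_query(index="my-index", query=q, refresh=True), conflict_exc=ConflictError).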
I’ve read several of your replies about WooCommerce/Stripe and I found them extremely helpful.
I’m currently working on a WooCommerce multi-vendor marketplace using WCFM + Stripe Connect (Direct Charges), and I’m facing an issue with partial payments when customers place orders with multiple vendors. The system still creates the order even when one of the Stripe charges fails.
Before I go too deep into building a full custom solution, I’d really appreciate your technical opinion: what would be the best approach to intercept or block order creation in WooCommerce until all Stripe charges are confirmed?
If you’re open to discussing this further, please let me know the best way to get in touch. Your expertise would be greatly appreciated.
<html>
<head>
<title>This is a title!</title>
</head>
<body style="background: lightblue">
<a href="#" target="my_target" onclick="clicked()">Click Me!</a>
</body>
</html>
To exclude hidden files in all subfolders,
use: --exclude "**/.*"
Try to set sns.pairplot(..., dropna=True).
Two threads contend for the same lock. After the first one releases it, the second will still acquire it. But before the second unlocks, the first removes the lock from the map, leading to a NullPointerException.
Fix: you must hold a reference to the lock from the first get (use computeIfAbsent for simplicity). This ensures that even if the cache is cleared, the threads that are waiting on the lock are not affected.
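The original is Java (ConcurrentHashMap.computeIfAbsent); as a hedged illustration of the same idea in Python, fetch-or-create atomically and keep the returned reference, so a later removal from the map cannot leave a waiter holding None:

```python
import threading

_locks = {}
_guard = threading.Lock()

def get_lock(key):
    # Atomically fetch-or-create: every caller for `key` receives the
    # SAME lock object and keeps its own reference to it, so removing
    # the entry from _locks later cannot break threads already waiting.
    with _guard:
        return _locks.setdefault(key, threading.Lock())
```

Each thread does lock = get_lock(key) once and then works only with its own reference, never re-reading the map.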
Here is the prompt⬇️
1. Face Detection
Detect my gender.
Detect my face shape (oval, square, round, heart, diamond, oblong, etc.).
2. Main Composition
Create a high-quality editorial-style image.
Center: Stylized outline of my actual face with accurate proportions.
Add the detected face shape name in bold modern typography below the outline.
3. Hairstyle Suggestions
Arrange 6 hairstyle suggestions suited to my gender, face shape, and modern style.
Place them evenly around the central face.
Each hairstyle inside a clean white frame with subtle shadow.
4. Design Details
Use thin arrows pointing from each hairstyle to the central face.
Maintain a clean, minimal background.
Keep a consistent illustration style throughout.
5. Output
High resolution, sharp details, professional finish
Are you participating in IEEEXtreme 19.0? LOL
You're almost there. Use the full URL in your fetch request because your backend lives on a completely different address.
So here's what your fetch request should look like:
const response = await fetch('http://localhost:5000/users', {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body)
});
Why http://localhost:5000/users instead of /users ? Because /users will resolve to localhost:3000/users (as indicated in your error), and that address is where the frontend lives, not the backend.
As a side note, your current code would work if you'd configured a proxy to backend URL, but you probably don't need to do that.
I got the same issue after changing PCs and re-downloading the same Flutter SDK (it looks like they still ship bug fixes if the old version is not too old). I'm using Flutter 3.24.3.
Navigate to: <your flutter sdk folder>\flutter\packages\flutter_test\pubspec.yaml
and unpin the path dependency by commenting out its version constraint: path: #1.9.0
Another way to solve the issue is to clone the video_trimmer module separately into your lib and update video_trimmer's pubspec.yaml.
It seems I got this problem on "my" win 10 os 😅...
I got «
Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: OpenGL is not supported by the video driver.
at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.createDisplayPixelFormat(LwjglGraphics.java:356)
at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.setupDisplay(LwjglGraphics.java:250)
at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:146)
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:128)
Caused by: org.lwjgl.LWJGLException: Pixel format not accelerated
at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:247)
at org.lwjgl.opengl.Display.createWindow(Display.java:306)
at org.lwjgl.opengl.Display.create(Display.java:848)
at org.lwjgl.opengl.Display.create(Display.java:757)
at com.badlogic.gdx.backends.lwjgl.LwjglGraphics.createDisplayPixelFormat(LwjglGraphics.java:348)
... 3 more »
...
That same program I was trying to launch always worked before. What exactly should I do to fix it? I have absolutely no idea, since I don't even know anything about command prompts.
👋
I’ve helped quite a few clients in the exact same situation — and here’s the thing:
You don’t need to migrate or mess with PSTs or third-party tools at all.
Instead of treating GoDaddy as a separate email system, there’s a process called Defederation. It lets you safely remove GoDaddy’s control and bring the tenant fully under Microsoft’s management — no data loss, no downtime, and no need to rebuild anything.
✅ You’ll need to:
Purchase new licenses directly from Microsoft (GoDaddy ones aren’t transferable)
Disconnect GoDaddy’s federation
Update DNS & security settings
Optionally update your SharePoint URL
But the result? A clean, native Microsoft 365 tenant with full admin access and no GoDaddy limitations.
I put together a quick guide and video for exactly this situation if it helps you or your techs:
▶️ Walkthrough video
📝 Full blog post
Happy to answer questions if you run into anything tricky.
– Ahmed Masoud
🔗 LinkedIn
Your JS code is a function used to listen for events. It doesn't seem to have any effect on the page layout.
Since you didn't provide all the CSS code, it's possible that the invisible styles for alertpass and userpass, both of which use position: relative, are covering the 'Forgot Password?' link. You may need to adjust their positions.
The fastest way to verify whether these two styles are causing the issue is to temporarily change their position to absolute.
The site title change should have an immediate effect (i.e., should be reflected in the API immediately). Are you sure you have actually changed the site title? How did you do that? Normally, you do that in the site settings, then "Title, description, and logo..." and then change. Could it be that you actually renamed the group, not the site?
Make sure your pygame.quit() is outside the event loop and inside the while loop; otherwise any action will cause the window to close. Hope this helps!
Are HLR tests only black box? Basically, are HLRs tests only integration tests?
I was struggling with this for many hours recently. I have a sort of unique setup, but maybe the solution I arrived at will light a bulb for you. I have a Flask "pseudo"-monorepo with several apps contained in their own root-level directories. I have NPM workspaces configured in this app, and so have two levels of package.json files: one at the root and one in each app. npm install placed the @rollup/plugin-inject package in the root-level package.json, so the package-lock.json never reflected the fact that my apps actually needed it. I tried everything under the sun before I figured this out, and eventually I just placed "devDependencies": { "@rollup/plugin-inject": "^5.0.5" } in my apps' package.json, removed window.$ = window.jquery = jquery from the entry point I was feeding Vite, and voilà! jQuery had been successfully injected.
Make sure you have set up notification handling for both foreground and background; by default, FCM delivers notifications in the background when the app is closed.
You should close the stream after all chunks have been sent:
// Send audio chunks in goroutine
go func() {
const chunkSize = 8192
for i := 0; i < len(audioData); i += chunkSize {
end := min(i+chunkSize, len(audioData))
chunk := audioData[i:end]
if err := client.SendAudio(ctx, chunk); err != nil {
errChan <- fmt.Errorf("failed to send audio chunk: %w", err)
return
}
time.Sleep(200 * time.Millisecond)
}
errChan <- client.stream.CloseSend()
}()
This is a known issue:
https://github.com/actions/runner-images/issues/13135
Maybe this can help you:
I haven't used Java for years, so double-check what I'm saying.
I see that you are using a Transactional annotation in that service, but I don't see any commit for it. Is it handled by Hibernate by default?
I created a simple app to demonstrate how you could detect taps on the back of a device using the z accelerometer. The app samples z-accel 50 times per second and uses a washout filter to approximate the rate of change of acceleration. That way, the value is centered at zero regardless of the orientation of the phone. When the phone is tapped, the filtered acceleration momentarily spikes before returning to zero. The logic looks for a single high value surrounded by low values. The tap does not have to be very hard.
The code can be found here, on GitHub: https://github.com/InvaderZim62/BackTap
I am building the same project for my graduation and I am using the same pipeline, but I didn't find a good text-to-G-code generator script. Could you please share the script you are using? You will save me if you respond. I hope you reply!
You defined train_data with X_train, so just use X_train or do:
train_data, train_answer, test_data, test_answer = train_test_split(data["article"],data["summaries"],test_size=0.20)
The solution is simple: use an alias to carry the column name onto the aggregated column.
ISNULL(CONVERT(VARCHAR,fs.A),'') as 'A'
or
ISNULL(CONVERT(VARCHAR,fs.A),'') 'A'
Documentation
# download the state from remote
terraform state pull > terraform.tfstate
# instruct the terraform to init the new backend
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}" \
-migrate-state
UPDATE: I got it working but I HOPE there is a cleaner/easier way?
Added this to the DataSource to ensure that date fields would be passed as ISO strings.
parameterMap: function (data, type) {
if (type !== "read") {
if (data && data.models && Array.isArray(data.models)) {
// Convert all Date objects to ISO strings
for (var model of data.models) {
for (var [key, value] of Object.entries(model)) {
if (value instanceof Date) {
model[key] = value.toISOString();
}
}
}
}
}
return data;
}
For PHP, using Google's PHP SDK (https://github.com/google/google-api-php-client):
$this->client = new Client();
$this->client->setAuthConfig('your path to service_account.json');
$this->client->addScope(Drive::DRIVE);
//impersonating here
$this->client->setSubject('[email protected]');
$this->service = new Drive($this->client);
The data attribute of a numpy.ndarray object is a memoryview object, which is an object holding enough information to support the Python buffer protocol in a C API (including a C pointer), but does not have a Python level accessor for the buffer address. The numpy.ndarray.ctypes.data attribute is explicitly "A pointer to the memory area of the array as a Python integer" (https://numpy.org/doc/2.1/reference/generated/numpy.ndarray.ctypes.html)
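A quick illustration of the difference between the two attributes (this assumes NumPy is installed):

```python
import numpy as np

a = np.arange(4)

# .data is a memoryview: it satisfies the buffer protocol, but exposes
# no Python-level attribute for the raw buffer address.
assert isinstance(a.data, memoryview)

# .ctypes.data is the address of the array's memory as a plain int,
# suitable for handing to C code via ctypes.
assert isinstance(a.ctypes.data, int)
```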
For API 30+, using enableEdgeToEdge() solves most of the issues regarding the system status bar and system navigation bar.
Try "Go Easy", it's a VS Code extension that works similarly to nodemon, but for Golang.
And it's pretty good: your server logs stay in the terminal after reloading, and it has shortcuts to run it too. Simple and convenient.
The documentation for useState says (emphasis mine):
If the new value you provide is identical to the current state, as determined by an Object.is comparison, React will skip re-rendering the component and its children. This is an optimization. Although in some cases React may still need to call your component before skipping the children, it shouldn’t affect your code.
It seems most likely that in the case of your component, React is calling it an extra time. Why? That's an implementation detail.
In the new code, is the value empty inside fetchCurrent, or only in the return after the semaphore wait? Try putting your ssidResult into some ref box.
Also try wrapping your NEHotspotNetwork.fetchCurrent inside DispatchQueue.global().async { }.
Things seem to work as intended as of DT v0.34.0. The original code, datatable(m) %>% formatCurrency("A", "£", digits=1), yields the following (with the pound currency, rather than the Euro, which is what "\U20AC" will give you):
Instead of using an emulator, try using an Android device for running and testing your apps.
Open the file explorer and navigate to where your Flutter project exists. Copy the path of your project, then open the command prompt and use the cd command with your folder location to land there,
e.g. cd flutter_projects/firstProject
Once you reach your project folder from the command prompt, plug your Android device into your laptop/PC using the charging cable, and turn on USB debugging and related settings from the developer options. Some phones don't have developer options enabled by default, so turn that on first. Then run the flutter devices command
and see if your phone is visible there; it will show its OS alongside a number assigned to it.
Once you see your phone there, run the flutter run command.
It'll take a good 5-7 minutes the first time; don't let your phone go to standby, keep it open.
It'll install an APK of your project onto your phone, which you can then open to see your app in real time and interact with it too.
The process is long the first time, but do it enough and you'll get the hang of it.
For hot reload and refresh options, the command prompt will give you options like q to quit, r for hot reload, etc.
I use this method because emulator usage creates a lot of junk files and makes even good systems lag.
If you’re looking to improve the text formatting or visual styling on the Google Forms submission page, you might find tools like https://onlinefontsgenerator.com/ useful. It lets you create custom text styles that can be copied into form descriptions or confirmation messages to make them stand out a bit more visually. It’s not a direct Google Forms feature, but it helps add a unique look when you need creative text formatting.
I tried different ways, but none of them worked:
jffs2reset
This will erase all settings and remove any installed packages. Are you sure? [N/y]
y
MTD partition 'rootfs_data' not found
firstboot
This will erase all settings and remove any installed packages. Are you sure? [N/y]
y
MTD partition 'rootfs_data' not found
| Memory Available | Suggested MAXTRANSFERSIZE | Suggested BUFFERCOUNT |
|---|---|---|
| < 8 GB | 1 MB | 50-100 |
| 8-16 GB | 2-4 MB | 100-250 |
| > 16 GB | 4 MB | 250-500+ (watch for OOM) |
Alright, if anyone stumbles upon this problem, I'll leave here what solved it for me !
Thanks to @miken32's suggestion I dug a bit more towards the requests' authentication in etherpad and its cookie needs. The sessionID cookie is necessary to allow a user to open a pad, it's a security feature (enabled in settings.json by setting requireSession to true).
The cookie in my case is set in PHP :
setcookie("sessionID", $session_id, 0, '/p/' . $pad_id,"",true);
It turns out that Etherpad reads the path of the cookie from the domain itself : myapp.eu/p/$pad_id. My Etherpad instance is in a sublevel : myapp.eu/etherpad. So I just need to replace the "path" argument in set cookie with the relevant value (WITH a / at the beginning) :
setcookie("sessionID", $session_id, 0, '/etherpad/p/' . $pad_id,"",true);
Note that samesite and secure must be set to "None" and true respectively, be it through the cookie or settings.json.
Thanks miken32 for your suggestion !
Since Doctrine 3, ClassUtils::getClass() has been removed.
Here's my current replacement.
use Doctrine\Persistence\Proxy;
// ...
public static function getRealClass(object|string $entityOrClass): string
{
$class = is_object($entityOrClass)
? $entityOrClass::class
: $entityOrClass;
if (is_subclass_of($class, Proxy::class, true)) {
return get_parent_class($class);
}
return $class;
}
Make sure to have lark and deeplake installed.
%pip install --upgrade --quiet lark
%pip install --upgrade --quiet libdeeplake
After five days or so, I've decided the best solution to my own particular problem is to abandon Jupyter Book and run my app with Quarto (https://quarto.org/). I don't know if my problems stemmed from trying to run Jupyter Book in a virtual environment, the transition to Jupyter Book 2.0, or a combination of the two, but in the end the time it was taking me to figure things out was making the project less and less worthwhile. Quarto seems a more mature setup and it's better suited to my needs right now.
Best of luck to the Jupyter Book team in the further development of their product, and many thanks to everyone who took the time to answer my questions.
I think you'd be better off using this repository.
If you have EDR/antivirus or other security software, your laptop may be quarantined! Also, the rules on your laptop may have been modified!
Have you tried to ping a server or computer connected to the same VPN, to check whether this protocol works at all?
I'm using Elsa Workflows 3 for a project, and there are four types of triggers (source):
HTTP Endpoint: triggers the workflow when a given HTTP request is sent to the workflow server.
Timer: triggers the workflow each given interval based on a TimeSpan expression.
Cron: triggers the workflow each given interval based on a CRON expression.
Event: triggers when a given event is received by the workflow server.
I'm trying to figure out how to implement a Cron based trigger in code, but the only documentation I can find (here and here) is for an HTTP Based Trigger.
How do I implement a Cron based trigger in Elsa Workflows 3?
Apparently, I have to implement the isAllowEmpty method as below to get the desired behavior:
public function isAllowEmpty(\Phalcon\Filter\Validation $validation, $field): bool
{
$value = $validation->getValue($field);
return $this->allowEmpty($field, $value);
}
Now this is my output:
Executing allowEmpty! // from preChecking method in Phalcon\Filter\Validation class.
Executing allowEmpty! // one from my validate method.
Field `foo` is invalid!
I found out that there is a check for the isAllowEmpty method's existence in the Phalcon source code of the Phalcon\Filter\Validation class. But the isAllowEmpty method can only be found in Phalcon\Filter\Validation\Validator\File\AbstractFile and not in AbstractValidator.
Later I will try to open the issue at Phalcon's GitHub repo.
Similar to using :empty, you can also check whether the element has no children. This comes in handy when the element has extra whitespace but no content, which causes :empty to evaluate as false:
.container:not(:has(.grid-element--3 > *)) {
margin-top: -20px;
}
Do I correctly understand that your code looks as follows:
PropertiesConfig.ts
export class PropertiesConfig {
}
index.ts
function f(p: PropertiesConfig){
}
If so, auto imports should operate correctly. Could you please create a new ticket on YouTrack and share your small example and the IDE logs(Help | Collect Logs and Diagnostic Data) there? We will research the issue and follow up with you.
I discovered what the issue was! It was a problem in our dockerfile. We use Liquibase and had frozen our version at 4.28.0, which was fine until recently, because their apt repository stopped supplying it. Because our dockerfile was one huge run statement with a few supporting smaller ones, failing one bit failed the rest, and so the devcontainer did not have docker installed. Upgrading liquibase fixed this issue.
Using @walidtlili's accepted answer (which uses the Dockerfile syntax) as an inspiration, here is something I use for GitHub Actions; my guess is that something very similar can be adapted for other CI/CD workflows.
GitHub has several choices for Linux 'runners' (the virtual machines on Microsoft Azure that run these scripts) and one of them has been the latest LTS version of Ubuntu (at the time of writing, Ubuntu noble 24.04.3 LTS). GitHub already adds lots of packages (the exact list is provided after a successful run), but, of course, you still only get ImageMagick 6.9.
Previously, I avoided compiling everything from scratch. Instead, I used the Debian ('universal') repositories to directly install it. However, recently, I noticed that Debian had released so many upgrades to those .deb packages since I first started to use them, that the versions I had were not only obsolete, but not even available from Debian's repositories (and mirrors!). As such, this method no longer worked.
I made an attempt to bring them up to date, but the problem is that they now have several conflicts with the (also updated) packages installed by Ubuntu: the versions don't match (which is understandable, since, to support ImageMagick 7, released in 2016, you might need post-2016 versions of dependency packages...). You can juggle around a bit with the GitHub runners' configuration (up to a point; a few things are off-limits) and certainly remove mismatched packages, replacing the original Ubuntu packages with shiny new ones from Debian, but... I quickly succumbed to 'dependency hell'.
The solution was to inspire myself on @walidtlili's answer and do the equivalent for GitHub Actions. Note the slight differences in the packages being fetched; @walidtlili, for some reason, does three separate sudo apt-get update, possibly to guarantee that the packages are being installed in the correct order, but, in my case, it suffices listing all that are needed (considering that a few extra ones will be retrieved anyway — dependencies! — and many, such as compilers, are already pre-installed).
name: 'Compile ImageMagick 7 on Ubuntu 24.04.3'
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install packages needed for compilation
        run: |
          sudo apt-get update
          sudo apt-get install -y --quiet wget autoconf pkgconf build-essential curl libbz2-dev libfontconfig-dev libfreetype-dev libgs-dev libgvc6 libjpeg-dev libpng-dev libtiff-dev libxml2-dev
      - name: Download ImageMagick source files to /tmp
        run: |
          cd /tmp
          wget https://github.com/ImageMagick/ImageMagick/archive/refs/tags/7.1.2-7.tar.gz
          tar xzf 7.1.2-7.tar.gz
          rm 7.1.2-7.tar.gz
      - name: Configure the ImageMagick build
        run: |
          cd /tmp/ImageMagick-7.1.2-7
          sh ./configure --prefix=/usr/local --with-bzlib --with-fontconfig --with-freetype --with-gslib --with-gvc --with-jpeg --with-png --with-tiff --with-xml --with-gs-font-dir
      - name: Compile ImageMagick
        run: |
          cd /tmp/ImageMagick-7.1.2-7
          make -j
      - name: Install ImageMagick
        run: |
          cd /tmp/ImageMagick-7.1.2-7
          sudo make install
          sudo ldconfig /usr/local/lib/
      - name: Check if ImageMagick was built successfully
        run: $(which identify) -list configure
      # Moving on to the rest of the build...
Also note that I have broken up the full sequence into several steps, and, for each, I have to force cd /tmp/ImageMagick-7.1.2-7 each time¹ (each step runs independently of the others, albeit in sequence). There is no real "need" for separate steps, but it was more convenient for me to debug. The above lines will also be quite chatty, to show what is being done, but I'd guess you could run them with --quiet flags (or the equivalent) and just ignore all the messages.
As of late 2025, the GitHub 'runners' currently take around 5 minutes to download everything, install the packages, run the ImageMagick autoconf building configuration, and finally compile all the components. I've also done all the checks (with make check) just to be 100% sure everything worked. But, to make things easier, I just list the compilation result by running identify -list configure: if that works, and shows all that I expect it to show, I know it's working correctly! 😉
If you just need to do all the above on your command line, or put it inside a script, then all you need is the following:
sudo apt-get update
sudo apt-get install -y --quiet wget autoconf pkgconf build-essential curl libbz2-dev libfontconfig-dev libfreetype-dev libgs-dev libgvc6 libjpeg-dev libpng-dev libtiff-dev libxml2-dev
cd /tmp
wget https://github.com/ImageMagick/ImageMagick/archive/refs/tags/7.1.2-7.tar.gz
tar xzf 7.1.2-7.tar.gz
rm 7.1.2-7.tar.gz
cd /tmp/ImageMagick-7.1.2-7
sh ./configure --prefix=/usr/local --with-bzlib --with-fontconfig --with-freetype --with-gslib --with-gvc --with-jpeg --with-png --with-tiff --with-xml --with-gs-font-dir
make -j
sudo make install
sudo ldconfig /usr/local/lib/
identify -list configure
(the last step is, again, entirely optional)
Note that you don't need to do the actual compilation as the superuser (I'd seriously recommend not to); it's just the package installation that requires superuser privileges, as well as the installation step. The ldconfig was added mostly because @walidtlili placed it at the end 😂
The point is to help existing applications find the dynamic libraries, especially if you recompile everything and have other things that depend on ImageMagick's dynamically loaded libraries.
Under Ubuntu, you can also do a simple
echo "/usr/local/lib/ImageMagick-7.1.2/modules-Q16HDRI/filters" | sudo tee /etc/ld.so.conf.d/ImageMagick.conf
... which should guarantee that the dynamic loader (which resolves shared libraries for all compiled ELF binaries) correctly finds the location of the libraries. (Note that a plain sudo echo "..." > file would not work, because the redirection is performed by your unprivileged shell before sudo runs; hence the tee.) Just check that the above is the right directory; it was in my case, but YMMV.
This last step cannot be done on GitHub Actions, however, because not even the superuser is allowed to write to /etc on the virtual machine launched by a runner, for obvious security reasons. But sudo ldconfig /usr/local/lib/ works!
One last tip: you can host your own 'runners' locally, even when using GitHub Actions, allowing you full control over the virtual machine being launched. But that is definitely not relevant to the OP's original question!
I hope this can be useful for others as well.
¹ It's not strictly necessary to do it under /tmp. You can build everything locally, i.e., on whatever the virtual machine considers to be your "home". It's not as if everything will be permanently cluttered with litter — the virtual machine, after all, will be completely purged by GitHub Actions once it finishes the run. Technically speaking, even removing the downloaded source code is unnecessary; again, I just tried to keep everything as close as possible to @walidtlili's solution.
For anyone searching for this in the future, PrimeFaces now uses PrimeFaces.current().ajax().update(id) for Server Side dynamic updates.
Solved by changing:
signingOption: default
Exposed does not support CTEs at this time. You can follow Exposed issue 868 to be kept up to date on this feature.
As a temporary solution, this gist adds CTE support to Exposed: https://gist.github.com/SalomonBrys/61fed3a1206d50e9865c3e76f274296b
React Query Builder has had a React Native extension for a couple of years now:
https://www.npmjs.com/package/@react-querybuilder/native
(I maintain React Query Builder.)
I know this is a super old thread, but I wanted to put in the solution that I have been using. I borrowed it from https://github.com/RamblingCookieMonster/PSSQLite/blob/master/PSSQLite/Invoke-SqliteQuery.ps1
That project scrubs out DBNulls using a bit of embedded C# code that is very efficient and clean:
if ($As -eq 'PSObject') {
    # This code scrubs DBNulls. Props to Dave Wyatt
    $cSharp = @'
using System;
using System.Data;
using System.Management.Automation;

public class DBNullScrubber
{
    public static PSObject DataRowToPSObject(DataRow row)
    {
        PSObject psObject = new PSObject();

        if (row != null && (row.RowState & DataRowState.Detached) != DataRowState.Detached)
        {
            foreach (DataColumn column in row.Table.Columns)
            {
                Object value = null;
                if (!row.IsNull(column))
                {
                    value = row[column];
                }
                psObject.Properties.Add(new PSNoteProperty(column.ColumnName, value));
            }
        }

        return psObject;
    }
}
'@

    try {
        if ($PSEdition -eq 'Core') {
            # Core doesn't auto-load these assemblies unlike desktop?
            # Not csharp coder, unsure why
            # by fffnite
            $Ref = @(
                'System.Data.Common'
                'System.Management.Automation'
                'System.ComponentModel.TypeConverter'
            )
        } else {
            $Ref = @(
                'System.Data'
                'System.Xml'
            )
        }
        Add-Type -TypeDefinition $cSharp -ReferencedAssemblies $Ref -ErrorAction stop
    } catch {
        # Note: -notlike is used here; the original `-not $_.ToString() -like "..."`
        # parses as `(-not $_.ToString()) -like "..."`, which is always false.
        if ($_.ToString() -notlike "*The type name 'DBNullScrubber' already exists*") {
            Write-Warning "Could not load DBNullScrubber. Defaulting to DataRow output: $_"
            $As = 'Datarow'
        }
    }
}
# To convert DBNull into $null, I use it like this:
try {
    $CSBuilder = New-Object System.Data.Odbc.OdbcConnectionStringBuilder
    $Cred = Get-Credential -Message "Please provide credentials for $Server"
    $OdbcDriverName = 'SQL Server'
    $CSBuilder['driver'] = $OdbcDriverName
    $CSBuilder['DSURL'] = $Server
    #$CSBuilder['Database'] = 'master'
    $CSBuilder['Database'] = $Database
    $CSBuilder['uid'] = $Cred.UserName
    $CSBuilder['pwd'] = $Cred.GetNetworkCredential().Password
    $CSBuilder['EncryptedPassword'] = 2
    $CSBuilder['ConnectionIdleTimeout'] = 600

    $conn = New-Object System.Data.Odbc.OdbcConnection
    $conn.ConnectionString = $CSBuilder.ConnectionString
    $conn.ConnectionTimeout = 30
    $conn.Open()
    #$conn.ChangeDatabase($Database)

    $cmd = New-Object System.Data.Odbc.OdbcCommand($QryText, $conn)
    $cmd.CommandTimeout = 30
    $ds = New-Object System.Data.DataSet
    $da = New-Object System.Data.Odbc.OdbcDataAdapter($cmd)
    [void]$da.Fill($ds)

    switch ($As) {
        'DataSet' { return $ds }
        'Table' { return $($ds.Tables[0]) }
        'Row' { return $($ds.Tables[0].Rows[0]) }
        'PSObject' {
            foreach ($row in $ds.Tables[0].Rows) {
                #--- DBNull scrubber conversion ---#
                [DBNullScrubber]::DataRowToPSObject($row)
            }
        }
        'SingleValue' {
            return $ds.Tables[0] | Select-Object -ExpandProperty $ds.Tables[0].Columns[0].ColumnName
        }
        default { return $ds }
    }
} catch {
    Write-Warning "Query failed: $($_.Exception.Message)"
}
No, you cannot achieve automatic Azure Web App regional failover using only a Private DNS Zone. Private DNS zones are for internal name resolution and do not provide failover or public routing.
For public apps, use Azure DNS Public Zone with manual or scripted updates, or use Traffic Manager / Front Door for automatic failover. https://learn.microsoft.com/en-us/azure/dns/dns-overview
In VS 2022 the default behaviour is that Enter submits the chat. There's no built-in shortcut documented to insert a new line instead while staying in the chat input.
"In the Copilot Chat window, type a coding related question in the Ask Copilot text box. Press Enter or select Send to ask your question." - Microsoft Learn
Try closing your modal inside a finalize():
onSubmit() {
  let isError = false;
  let result: Bewirtung | undefined;
  ...
  this.updateSub = this.bewirtungService.updateCatering(cateringCreate)
    .pipe(
      finalize(() => {
        if (!isError) {
          this.dialogRef.close(result);
        }
      })
    )
    .subscribe(
      (b: Bewirtung) => {
        result = b;
        ...
        // don't close it here
      },
      () => {
        isError = true;
        this.notificationService.error('Die Bewirtung konnte nicht geändert werden.');
      }
    );
}
A very good example of composition and aggregation.
This answer worked for me: https://stackoverflow.com/a/63022606/415551
sudo apt install xclip
I wonder if this is officially supported, as I cannot find any clear references to it.
By the way, the <Extensions> tag needs to be inside the <Application> tag.
Here's another, more recent solution in case anyone is still wondering. Ref: https://moderniser.repo.cont-aid.com/en/How-to-use-the-latest-latest-AWS-icons-in-Mermaid.html
Example:
flowchart TB
  subgraph ACCOUNT[AWS Account]
    subgraph GRP1[" "]
      ELB@{ img: "https://api.iconify.design/logos/aws-elb.svg", label: "ELB", pos: "b", w: 60, h: 60, constraint: "on" }
    end
    subgraph GRP2[" "]
      EC2@{ img: "https://api.iconify.design/logos/aws-ec2.svg", label: "EC2", pos: "b", w: 60, h: 60, constraint: "on" }
    end
    subgraph GRP3[" "]
      RDS@{ img: "https://api.iconify.design/logos/aws-rds.svg", label: "RDS", pos: "b", w: 60, h: 60, constraint: "on" }
    end
    ELB --- EC2 --- RDS
  end
  classDef vpc fill:none,color:#0a0,stroke:#0a0
  class ACCOUNT vpc
  classDef group fill:none,stroke:none
  class GRP1,GRP2,GRP3 group
When rendered, it looks like this:
Feel free to ask this question in our GitHub Discussions channel; the Langfuse maintainers are happy to help you there.
@Stas Simonov's response contains a key finding: objcopy -O binary doesn't correctly handle multiple sections inside the object file.
However, this is not limited to Windows; it also happens on Linux. It happens even in my example, but it's a bit hidden.
So, if I do
gcc -c
then the sections are in the order .data, .comment and .note.gnu.property.
If I do
gcc -r
then the sections are in the order .note.gnu.property, .data and .comment.
When .data comes first, it's written but then overwritten by .note.gnu.property.
When .note.gnu.property comes first, it's partially overwritten by .data, because .data is smaller. That's why I see 0102030405 when I use -r.
A possible solution to this issue is to use the -j flag so we can select the .data section i.e.
objcopy -j .data -O binary ...
This way, just the .data section is copied to the binary file.
I'm honestly not sure if this is a bug, a limitation, or simply counterintuitive but intended behaviour of objcopy.
For me the issue was resolved by going into Build Phases under "{name} Extension (macOS)": there is a "Copy Bundle Resources" phase that contained all the other files except "content.js" and "background.js"; once I added those two, the error went away.
In a sync context, the user context is now available in tools as of Spring AI 1.1.0-M1
https://github.com/spring-projects/spring-ai/releases/tag/v1.1.0-M1
specifically this commit:
If anyone still has this issue, I found an easy answer that at least solved my problem. The temp directory was full, with 65,536 files. I cleaned it out, and the migration worked fine after that.
You also need the share folder that contains themes and icons
When you run npm ls:
common-components-example@x.y.z
+-- my-api@x.y.z
`-- common-components@x.y.z -> ../common-components
  `-- my-api@x.y.z
my-api is installed in the root of the example app: correct, and it satisfies the peer dependency.
The extra my-api under common-components is not actually installed again as a separate copy; it's just how npm shows the peer dependency link (even though it's using the root version).
In other words: npm ls reports it under both packages, but in reality there's only one copy in use.
To check which copy of my-api is being used at runtime:
1. Use require.resolve (CommonJS) or import.meta.url (ESM)
Since your project is ESM ("type": "module"), you can do:
// In App.tsx or any example file
import * as MyApi from 'my-api';
console.log('my-api path:', import.meta.resolve ? await import.meta.resolve('my-api') : MyApi);
This will show you the absolute path where my-api is being imported from.
If both common-components and common-components-example resolve to the same path, there’s only one copy in use.
2. Compare references at runtime
A more React/JS way:
import * as MyApi from 'my-api';
import { something } from '../common-components/src/SomeComponent';
console.log('Same my-api instance?', MyApi === something.__myApiInstance);
If your library common-components exposes a reference to my-api internally (or you temporarily attach it to window), you can compare the objects.
If they are strictly equal (===), then both the library and your app are using the same copy.
3. Quick hack with node_modules paths
Run this in your example app:
node -p "require.resolve('my-api')"
node -p "require.resolve('../common-components/node_modules/my-api')"
common-components is not installing a separate copy in its own node_modules.
Actually, we can see the wrapper.jar version using:
java -classpath /path/to/jar/gradle-wrapper.jar org.gradle.wrapper.GradleWrapperMain --version
At least it works with 7.1, which was unknown in my case.
I found this out by renaming the gradle directory, so that gradlew --version showed an error with the path of the main wrapper class (not found). So I think this will work with other versions too.
With JavaScript:
/^(.)\1+$/i.test(value)
It works for an upper- and lower-case mix, for a string of at least two characters:
/^(.)\1+$/i.test('a')
false
/^(.)\1+$/i.test('aa')
true
/^(.)\1+$/i.test('AA')
true
/^(.)\1+$/i.test('Aa')
true
/^(.)\1+$/i.test('Abc')
false
Changing the text of a ttk.Labelframe causes it to redraw completely, which makes the window flash. To avoid that, keep the Labelframe title static and show the “Message X of Y” info in a separate Label inside the frame instead. This removes the flicker.
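A minimal Tkinter sketch of that idea (the helper name status_text and the widget layout are my own, not from the original code):

```python
import tkinter as tk
from tkinter import ttk

def status_text(index: int, total: int) -> str:
    """Build the 'Message X of Y' string shown in the inner Label."""
    return f"Message {index} of {total}"

def build_ui(root):
    # Keep the Labelframe title static: changing it forces a full redraw.
    frame = ttk.Labelframe(root, text="Messages")
    frame.pack(fill="both", expand=True, padx=8, pady=8)
    # Put the changing text in a plain Label inside the frame instead.
    status = ttk.Label(frame, text=status_text(1, 10))
    status.pack(anchor="w", padx=4, pady=4)
    return status

if __name__ == "__main__":
    try:
        root = tk.Tk()
    except tk.TclError:
        pass  # no display available (e.g. headless CI); skip the GUI part
    else:
        status = build_ui(root)
        # Updating the Label does not repaint the Labelframe, so no flicker.
        status.config(text=status_text(2, 10))
        root.update_idletasks()
        root.destroy()
```

Only the inner Label is reconfigured on each new message; the Labelframe itself is never touched after creation.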
For me, the simplest code was:
val isTestRun = Thread.currentThread().stackTrace.any { it.className.contains("androidx.test.runner") }
As a workaround, switching to a QListWidget (which has its own model) works just fine:
# getting the actual order
def qlistwidget_iter_items(lst: QListWidget, role=Qt.DisplayRole):
    for i in range(lst.count()):
        list_item = lst.item(i)
        item = list_item.data(role)
        yield item
Adding an item, adapted:
pix = self.get_image(item.filename)
list_item = QListWidgetItem(QIcon(pix), "")
list_item.setData(Qt.ItemDataRole.DecorationRole, pix) # image
list_item.setData(Qt.ItemDataRole.UserRole, item)  # original item object
list_item.setSizeHint(thumbnail_size) # self.ui.lstViewAddedItems.gridSize())
flags = list_item.flags() | Qt.ItemFlag.ItemIsDragEnabled
flags &= ~Qt.ItemFlag.ItemIsDropEnabled
list_item.setFlags(flags)
How do we remake your program? We receive a syntax error on the line: Dim swb As Workbook: Set swb = Set swb = Workbooks.Open( _ Filename:=SRC_FILE_PATH, UpdateLinks:=True, ReadOnly:=True) (note the duplicated Set swb =, which is likely what triggers the syntax error).
No, this is not supported by IBM, but it is a technique used by a lot of people on legacy (11.7) DataStage, though most people prefer to do this in a DSX export instead of an XML export. Also, this is normally done to search & replace code that is already there, while adding a whole new stage is far more complex. For adding a new stage, DataStage is very copy&paste friendly in the UI.
You need to use an older version of react-native-maps that supports the old architecture.
Version 1.20.1 will make the markers appear again.
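If you want to pin that version, the package.json entry might look like this (a sketch; the rest of your dependencies stay as they are):

```json
{
  "dependencies": {
    "react-native-maps": "1.20.1"
  }
}
```

Pinning the exact version (no ^ prefix) prevents npm from silently upgrading you back onto a new-architecture-only release.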
You can try this one
https://www.jsdelivr.com/package/npm/@use-pico/graphql-codegen-zod
It worked straight away!
I only needed to change the plugins: section in codegen.yml.
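For context, the change lives in the plugins list of codegen.yml; a sketch (I'm assuming the plugin is referenced by its npm package name and the output path is arbitrary, so check the package's README for the exact entry):

```yaml
# codegen.yml (fragment)
generates:
  src/generated/zod.ts:
    plugins:
      - '@use-pico/graphql-codegen-zod'
```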
How do I use this? adb shell pm revoke com.android.systemui android.permission.SYSTEM_ALERT_WINDOW doesn't work for me.
This is a verified bug that hasn't been fixed
https://bugs.mysql.com/bug.php?id=108582
quoting here:
[23 Sep 2022 12:43] MySQL Verification Team
Hi Mr. Power Gamer, It turns out that you are correct. You can no longer use ANSI option for mysqldump, due to this bug. This is due to the reason that ANSI option enforces ONLY_FULL_GROUP_BY. We do not know whether that particular query will be changed in mysqldump, or that ANSI mode will be disabled. This, however, has nothing to do with the fact that you are correct regarding the ANSI option. This report is now a verified bug.
This package is for use with an API key. That means you are accessing Firestore unauthenticated, and security rules are applied. So you can only access "public" collections.