SELECT GENDER, BG, COUNT(*) AS total_count
FROM ( SELECT GENDER, BG FROM DONOR
UNION ALL SELECT GENDER, BG FROM ACCEPTOR ) AS combined
GROUP BY GENDER, BG
ORDER BY GENDER, BG;
This error occurs because of NaN values present in your DataFrame. I resolved it as follows:
df = df.fillna('')
df1 = spark.createDataFrame(df)
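Note that fillna('') replaces every NaN with an empty string, which changes the types Spark infers. If you want real nulls on the Spark side instead, a hedged alternative (plain pandas; the column names are made up) is to map NaN to None before calling createDataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"name": ["a", None], "score": [1.0, np.nan]})
# NaN -> None so createDataFrame() sees proper nulls, not empty strings;
# astype(object) first, otherwise pandas coerces None back to NaN in float columns
clean = df.astype(object).where(pd.notnull(df), None)
print(clean.values.tolist())  # [['a', 1.0], [None, None]]
```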
The other answers have adequately addressed your issue; however, I would like to share a novel approach towards the goal of "cleanly acquiring a collection of mutexes". Due to the nature of the synchronized block in Java, it's not feasible to acquire several mutexes in turn (the loop would essentially need to be unrolled).
Object[] mutexes = new Object[4];
for (int i=0; i < 4; i++) mutexes[i] = new Object();
synchronized (mutexes[0]) {
synchronized (mutexes[1]) {
synchronized (mutexes[2]) {
synchronized (mutexes[3]) {
// all acquired
}
}
}
}
However, if you look at the resultant bytecode, you'll see that each synchronized block is opened with a MONITORENTER instruction and explicitly closed with a MONITOREXIT instruction. If we had direct access to these operations, we could iterate once to enter each monitor and then iterate again to exit each monitor. Is it possible to compile valid Java code that does this? Sort of.
JNI exposes these methods in the form of JNIEnv::MonitorEnter and JNIEnv::MonitorExit. With this in mind, we can do the following:
public final class MultiLock {
public static void run(Object[] mutexes, Runnable task) {
monitorEnter(mutexes);
try {
task.run();
} finally {
monitorExit(mutexes);
}
}
private static native void monitorEnter(Object[] arr);
private static native void monitorExit(Object[] arr);
}
#include "MultiLock.h" // Header generated by javac
#include <stdlib.h>
#include <stdbool.h>
static inline bool is_valid_monitor(JNIEnv *env, jobject object) {
return object != NULL;
}
JNIEXPORT void JNICALL Java_MultiLock_monitorEnter(JNIEnv *env, jclass ignored, jobjectArray arr) {
jsize len = (*env)->GetArrayLength(env, arr);
jobject next;
for (jsize i = 0; i < len; i++) {
next = (*env)->GetObjectArrayElement(env, arr, i);
if (!is_valid_monitor(env, next)) continue;
(*env)->MonitorEnter(env, next);
}
}
JNIEXPORT void JNICALL Java_MultiLock_monitorExit(JNIEnv *env, jclass ignored, jobjectArray arr) {
jsize len = (*env)->GetArrayLength(env, arr);
jobject next;
if (len == 0) return;
for (jsize i = len - 1; i >= 0; i--) {
next = (*env)->GetObjectArrayElement(env, arr, i);
if (!is_valid_monitor(env, next)) continue;
(*env)->MonitorExit(env, next);
}
}
And use it like so:
// Load the natives somehow
Object[] mutexes = new Object[4];
for (int i=0; i < 4; i++) mutexes[i] = new Object();
MultiLock.run(mutexes, () -> {
// all acquired
});
You can disable this rule selectively. Here is how to do it in the case of Ionic:
"vue/no-deprecated-slot-attribute": ["error", {
"ignore": ["/^ion-/"],
}],
This way the rule will apply to every tag except those starting with ion-.
I finally got the API-to-database connection working by using the following tutorial.
See step 2. Create a passwordless connection. As the tutorial mentions, I used the Azure Portal to automatically create the connection for a system-assigned managed identity.
In case this happens to anyone else: strangely, it was the variable names. I changed PUBLIC_SUPABASE_URL and PUBLIC_SUPABASE_ANON_KEY to VITE_PUBLIC_SUPABASE_URL and VITE_PUBLIC_SUPABASE_ANON_KEY and it works. Weird, because I believe the Svelte docs say that PUBLIC_... should work.
You have to break out each segment as a capture group and make it optional.
Each segment is self-contained.
"company" \s* : \s*
( \d+ ) # (1), Company req'd
(?:
.*?
"address1" \s* : \s* "
( .*? ) # (2), Addr optional
"
)?
(?:
.*?
"country" \s* : \s* "
( .*? ) # (3), country optional
"
)?
(?:
.*?
"Name" \s* : \s* "
( .*? ) # (4), Name optional
"
)?
It is quite interesting, actually: Google and Facebook use it for faster string manipulation. Because fbstring performs better than the standard std::string provided by C++, the normal implementation is replaced for performance. We create a file named string and in it include <folly/FBString.h>, so when we use std::string, fbstring does the work behind the scenes and boosts performance. In simple terms it is like overriding (don't object that the names and parameters aren't the same; I am just explaining it conceptually): for std::string we supply our own backend code that improves performance. But be careful about where you place the folder, and tell CMake to include it before the system includes. Your aim is for the compiler to give priority to your string file.
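The "include before system include" step could be expressed in CMake roughly as follows (the target and path names are made up for illustration):

```cmake
# Put our replacement string header ahead of the system include paths,
# so `#include <string>` resolves to our file first.
target_include_directories(myapp BEFORE PRIVATE
    ${CMAKE_SOURCE_DIR}/third_party/folly_string)
```

The BEFORE keyword is what prepends the directory instead of appending it, which is the whole point here.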
This was user error. On the Mac I had to set the channel number correctly in the Sniffer dialog. Once I did that, it worked fine. Also, in Wireshark -> Protocols -> IEEE 802.11, when editing the decryption keys, the password had to be entered together with the SSID as follows: <password>:<ssid>
I had the same issue, but recently found out that it is just not possible. As written in this link,
The network inspector only shows requests made through HttpURLConnection. The CIO engine, by design, communicates via TCP/IP sockets.
Requests sent through CIO will not be detectable unless the Android Studio network inspector changes
total is a "reserved word" / "invalid suffix" for Prometheus, see: https://github.com/micrometer-metrics/micrometer/wiki/1.13-Migration-Guide#invalid-meter-suffixes
You might want to look at prometheus-rsocket-proxy.
textFieldDidChangeSelection is one I use a lot
I think the requestMatchers have a higher priority than the method annotations.
I also encountered the same problem. I wanted to validate all resources under the /api/** path, but adding @PermitAll at the /api/abc path was ineffective, and even changing to @PreAuthorize("permitAll()") also didn't work.
Yes, absolutely. Gioui is specifically designed for cross-platform mobile development and works well with Go mobile for building Android apps. You can write your UI in Gioui and use gomobile bind or gomobile build to compile it into an Android APK. This is a fully supported and documented approach.
This is a very elegant solution, but I do have one problem: although the final print sent to either the printer or to preview renders the headers/footers correctly, the view of the pages in the print preview dialog shows the headers/footers upside-down.
Is there perhaps a solution?
If you run the container once with a different password and then restart it with another password, it won’t update the existing database because it’s already initialized. Try removing the old volumes or using a different volume.
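If the database runs under Docker Compose, the reset can look like this (a sketch; it assumes Compose manages the volume and that losing the existing data is acceptable):

```shell
# Stop the container and delete its named volumes so the database
# re-initializes from scratch on the next start
docker compose down -v

# Recreate it; initialization now runs with the new password from the env
docker compose up -d
```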
So we need to remove values from table authorization_role column gws_websites and gws_store_groups based on the deleted stores.
Hi Abbas, I do not have gws_websites and gws_store_groups columns in my authorization_role table. I am on Magento 2.4.7-p6. Would you know where this info would be in a newer version of Magento? I am using the Community edition.
With some regex engines, {-1,} is a possible "lazy" quantifier; another option sometimes is to provide a flag for the pattern, signifying that it should default to "lazy" matching instead of "greedy".
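For comparison, Python's re spells the lazy variant with a trailing ?, which changes how much a quantifier consumes:

```python
import re

html = "<b>bold</b> and <i>italic</i>"
# Greedy: .+ grabs as much as possible, one match spanning everything
print(re.findall(r"<.+>", html))   # ['<b>bold</b> and <i>italic</i>']
# Lazy: .+? grabs as little as possible, so each tag matches separately
print(re.findall(r"<.+?>", html))  # ['<b>', '</b>', '<i>', '</i>']
```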
Yes, you can absolutely create a compelling hierarchy tree from a nested <ul> structure using pure CSS! It's a common and fun challenge. Your current approach is already very close; the key is to fine-tune the positioning and dimensions of the pseudo-elements (::before and ::after) to draw those connecting lines accurately.
Here's a refined CSS solution that should give you the desired command-line tree look, along with an explanation of the adjustments.
HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CSS Hierarchy Tree</title>
<style>
body {
font-family: monospace; /* For that terminal-like feel */
color: #333;
padding: 20px;
}
ul {
list-style: none;
margin: 0;
padding-left: 20px; /* Adjust as needed for initial indent */
position: relative;
}
li {
position: relative;
padding-left: 20px; /* Space for the horizontal line and text */
margin-bottom: 5px; /* Small space between items */
line-height: 1.5; /* Vertical spacing for text */
}
/* Horizontal line for each item */
li::before {
content: '';
position: absolute;
top: 0.75em; /* Adjust to align with the text baseline */
left: -5px; /* Pull the line slightly to the left to connect with parent's vertical line */
width: 15px; /* Length of the horizontal line */
height: 0;
border-top: 1px solid #999;
}
/* Vertical line for connecting child branches */
li::after {
content: '';
position: absolute;
top: 0.75em; /* Start from the same height as the horizontal line */
left: -5px; /* Align with the horizontal line start */
height: calc(100% + 5px); /* Extend beyond the current li to connect to siblings/next parent */
border-left: 1px solid #999;
}
/* Hide vertical line for the last child in a branch */
li:last-child::after {
height: 0.75em; /* Only extend to the horizontal line of the current li */
}
/* Remove the initial horizontal line for the very first item (if desired) */
ul:first-child > li:first-child::before {
border-top: none;
}
/* Remove the initial vertical line for the very first item (if desired) */
ul:first-child > li:first-child::after {
border-left: none;
}
/* Style for the nested ULs to create proper alignment */
li > ul {
padding-left: 20px; /* Indent for nested lists */
margin-top: 5px; /* Adjust spacing between parent and child UL */
}
/* Special handling for the vertical line coming *down* from a parent */
li:not(:last-child) > ul::before {
content: '';
position: absolute;
top: -5px; /* Align with the bottom of the parent's vertical line */
left: 0;
height: 10px; /* Length of the connecting vertical line */
border-left: 1px solid #999;
}
</style>
</head>
<body>
<ul>
<li>1830 Palmyra
<ul>
<li>1837 Kirtland
<ul>
<li>1840 Nauvoo
<ul>
<li>1841 Liverpool
<ul>
<li>1849 Liverpool
<ul>
<li>1854 Liverpool
<ul>
<li>1871 SaltLakeCity
<ul>
<li>1877 SaltLakeCity</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>1842 Nauvoo
<ul>
<li>1858 NewYork
<ul>
<li>1874 Iowa
<ul>
<li>2013 SaltLakeCity</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</body>
</html>
Let's break down the key adjustments and the thought process behind them:
ul padding: padding-left: 20px; on the ul is crucial. This creates the initial indentation for each level and gives space for the vertical connecting lines.
li positioning and padding: position: relative; on the li is still essential for positioning its pseudo-elements. padding-left: 20px; on the li provides the necessary spacing for the ::before (horizontal line) and the visual indent for the list item's text. margin-bottom: 5px; and line-height: 1.5; give better visual spacing between list items.
li::before (horizontal line): top: 0.75em; is critical for vertically aligning the horizontal line with the text of the list item; 0.75em often works well to place it roughly in the middle of a typical line of text. left: -5px; pulls the horizontal line slightly to the left so that it overlaps and connects with the vertical line coming down from the parent li (or the vertical line of a sibling li's ::after pseudo-element). width: 15px; defines the length of the horizontal line extending from the vertical connector to the list item's text; adjust this to control how far the "└───" part extends.
li::after (vertical line): top: 0.75em; ensures the vertical line starts at the same height as the horizontal line, and left: -5px; aligns it precisely with the horizontal line's starting point. height: calc(100% + 5px); is a significant change: 100% makes the line extend to the bottom of the current li, and the extra 5px makes it slightly overshoot, which is vital for connecting neatly with the next sibling li's horizontal line; without it you'd often see a small gap. li:last-child::after: for the last child in a branch, we don't want the vertical line to extend indefinitely, so setting its height to 0.75em (the top value) makes it just reach the horizontal line, effectively stopping the branch.
ul:first-child > li:first-child::before/::after (optional cleanup): these rules remove the leading horizontal and vertical lines for the very first item (1830 Palmyra in your example), so the tree starts cleanly without extraneous lines.
li > ul indentation: padding-left: 20px; on li > ul ensures that nested ul elements are further indented, creating the visual hierarchy. margin-top: 5px; adds a little breathing room between a parent item and its nested child list.
li:not(:last-child) > ul::before (connecting vertical line for nested uls): this is a crucial addition to correctly render the vertical line between a parent item and its child ul when the parent is not itself the last child. It creates a short vertical line (height: 10px;) that bridges the gap from the parent's li::after down to the ul's content, maintaining the continuous vertical branch. top: -5px; adjusts its position to connect seamlessly.
Line thickness & color: easily change the border-top and border-left values from 1px solid #999 to 2px dashed #007bff or whatever suits your design.
Spacing: adjust padding-left on ul and li, margin-bottom on li, and margin-top on li > ul to control the horizontal and vertical density of your tree.
Em vs. px: using em for top values and px for width and height gives you a good balance. em values scale with font size, which is good for vertical alignment with text, while px gives precise control over line lengths.
Accessibility: while this is a visual representation, remember that the underlying HTML <ul> structure is inherently accessible and semantically correct for lists. This CSS merely enhances the presentation.
With these adjustments, you should achieve a much cleaner and more accurate visual representation of your timeline/tree structure using only HTML and CSS. Give it a try!
This issue may persist due to leftover cached routes or client code still referencing the socket API.
Here's what to check:
1. Make sure there's no socket-related code left in your `_app.js` or components.
2. Remove any rewrites in `next.config.js` for `/api/socket`.
3. Delete `.next`, `node_modules`, and `package-lock.json`, then run:
Does anyone know the theme used in the screenshot in this post?
You can find various search parameters listed in YouTube API v2.0 – API Query Parameters, like license, restriction, and paid_content, that can help filter videos restricted for such specific reasons. Also, if you can use YouTube API v3.0, there is one more option, videoSyndicated, that will restrict a search to only videos that can be played outside youtube.com.
This needs to be fixed in the REST API, where the double quotes are incorrectly added.
The owner(s) of the project need to change it. Contact them about it so that they are aware and can fix it.
If they cannot or don't fix the issue in a timely manner, and you are not forced to use this REST API, try to find a more stable REST library instead.
It's a bad idea to try to cater for buggy data by modifying it with more code later on. It just adds unnecessary code and opens the door to unexpected behavior and accidental data modification.
You cannot enforce a range on content length. The way you limit the file size is by allowing the client to specify the desired length when requesting the presigned URL; if the desired length is unacceptable, you just don't give them the presigned URL but error out instead. If it is acceptable, you create the presigned URL with "ContentLength": desired_length as a parameter.
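A sketch of that server-side gating in Python: the bucket name, key, and 10 MiB cap below are made-up assumptions, and the returned dict is what you would pass as Params to boto3's generate_presigned_url for put_object:

```python
MAX_BYTES = 10 * 1024 * 1024  # assumed policy: refuse anything over 10 MiB

def presign_params(desired_length: int) -> dict:
    """Validate the client's declared size BEFORE issuing any URL."""
    if not 0 < desired_length <= MAX_BYTES:
        raise ValueError(f"refusing upload of {desired_length} bytes")
    # These params would then be handed off as:
    #   s3.generate_presigned_url("put_object", Params=params, ExpiresIn=300)
    return {"Bucket": "my-bucket", "Key": "upload.bin",
            "ContentLength": desired_length}

print(presign_params(2048)["ContentLength"])  # 2048
```

Because ContentLength is baked into the signature, an upload with a different body size fails signature verification on the S3 side.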
What about this easy-to-use extension:
extension WidgetExtensions on Widget {
Page<dynamic> Function(BuildContext, GoRouterState) get transitionPage =>
(context, state) => CustomTransitionPage<dynamic>(
key: state.pageKey,
child: this,
transitionsBuilder: (context, animation, secondaryAnimation, child) {
const begin = Offset(1.0, 0.0);
const end = Offset.zero;
const curve = Curves.easeInOut;
final tween = Tween(begin: begin, end: end).chain(CurveTween(curve: curve));
return SlideTransition(position: animation.drive(tween), child: child);
}
);
}
To use
GoRoute(
path: '/',
pageBuilder: HomeScreen().transitionPage
),
This was user error. I had to set LWIP_TCP_KEEPALIVE in the compile options, like -DLWIP_TCP_KEEPALIVE. Once I did this, I no longer got any errors when setting the options.
Many years have passed, but the issue is very likely due to how an HTTPS request is made using WinHTTP or, in general, HttpSendRequest.
After the certificate exchange and encrypted handshake message, Windows will try to verify that what has been received is valid.
To do this, it first checks the certificate in "Trusted Root Certification Authorities" and, in case of failure, will start to "retrieve Third-Party Root Certificate from Network".
So a call to DNS and an external address is performed. The problem is that in some environments these calls may be dropped, so your HTTPS request gets stuck until a timeout.
The timeout should be 15 seconds, after which the request is unblocked.
This behavior is completely independent of the options you can set on HttpSendRequest about ignoring certificates, because that action is executed only later in time.
Knowing the request workflow, there are multiple ways to fix it.
One is discussed in these articles:
basically set at HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot:
"EnableDisallowedCertAutoUpdate"=dword:00000000
"DisableRootAutoUpdate"=dword:00000001
Another way is to fix the certificate (perhaps a self-signed one) and add it correctly to the Windows certificate store under "Trusted Root Certification Authorities" at the machine level.
The 15 seconds are actually a default value that can be overridden via local group policy:
Bonus:
To better understand the certificate process, a specific log can be enabled on Windows by following the instructions here: https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc749296(v%3dws.10)
You've described a classic case of "same code, different behavior" due to a platform change, and that can be tricky to pin down. Let's break it down and explore possibilities:
Alpine Linux is lightweight and uses musl libc, while RHEL 9 uses glibc and has more extensive background services.
The .NET Core runtime may behave differently due to dependencies compiled against glibc. Memory allocation strategies and garbage collection patterns can vary.
RHEL ships more diagnostics and background processes that can subtly increase memory footprint.
Your use of HttpContext and Redirect() looks correct and typically shouldn't hold memory.
But if the redirect endpoint is being hit very frequently, and responses are cached or buffered internally, memory can creep.
Possible culprit: server-side buffering, e.g., response streams not being disposed properly, especially if middlewares interact with HttpContext.
Here are ways to isolate the issue:
| Tool | Use case |
| --- | --- |
| dotnet-counters | Track memory, GC, threadpool, HTTP counters in real time |
| dotnet-gcdump | Snapshot the GC heap and analyze retained objects |
| dotnet-trace | Capture traces to explore what's allocating |
| /proc/<pid>/smaps | Check actual memory usage per process, native vs managed |
| K8s metrics (Prometheus + Grafana) | Trend analysis over time per pod |
Async pitfalls: though your endpoint is marked async, you don't await anything. Consider dropping async unless needed; it may affect thread use.
Middleware or filters: look at upstream middlewares or filters that might buffer HttpContext.Response.
Logging: Excessive logging on redirect calls can gradually consume memory if not batched/flushed.
Connection Leaks: Ensure any downstream calls (not shown here) aren’t holding connections.
Try rolling back to Alpine with .NET 8.0 and compare memory diagnostics side by side with RHEL.
Consider building a minimal service that replicates your redirect pattern. Run identical traffic against both container bases and capture GC/memory snapshots.
Tune GC using environment variables, e.g., set DOTNET_GCHeapHardLimit or DOTNET_GCHeapLimitPercent.
This issue isn't likely caused by one line of code; it's the interaction of the runtime with the new OS environment.
I needed to make this adaptation:
::ng-deep {
.make-ag-grid-header-sticky {
.ag-root-wrapper {
display: unset;
}
.ag-root-wrapper-body {
display: block;
}
.ag-root {
display: unset;
}
.ag-header{
top: 0;
background-color: var(--ag-header-background-color);
position: sticky !important;
z-index: 100;
}
}
}
To install PyTorch and OpenCV using a fixed Conda channel order, you need to:
Set channel priorities explicitly (so Conda doesn't auto-select from multiple sources).
Use conda config to pin preferred channels.
Install packages while preserving that order.
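As a sketch, steps 1 and 2 can be captured in ~/.condarc (the channel names below are assumptions; use whichever channels you actually prefer):

```yaml
# ~/.condarc — with strict priority, every package comes from the
# highest-listed channel that provides it, never a mix of sources
channels:
  - conda-forge
  - defaults
channel_priority: strict
```

With that in place, an install such as conda install pytorch opencv resolves both packages against the pinned channel order.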
The answer by itzmekhokan works, but just in case you need to disable the email later than when the woocommerce_email hook fired:
remove_action( 'woocommerce_created_customer_notification', array( WC()->mailer(), 'customer_new_account' ), 10 );
A comprehensive, cross-platform React Native wrapper for secure key-value storage using native security features of Android and iOS. It supports biometric authentication, hardware-backed encryption, and deep platform integrations such as Android StrongBox, EncryptedSharedPreferences, and iOS Secure Enclave via the Keychain.
Here is the lib:
rn-secure-keystore
Same for me: Quasar is not adding the styles with safe insets. I fixed it temporarily by just adding the iPhone inset utility class to the body, until a proper fix is out.
In the new version of moviepy, the editor module was removed:
https://zulko.github.io/moviepy/getting_started/updating_to_v2.html
from moviepy import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
Try following all your subplot-filling with something like:
ax[0,0].legend_.remove()
handles, labels = ax[0,0].get_legend_handles_labels()
fig.legend(handles, labels, loc='upper left', bbox_to_anchor=(0.9, 1))
I finally found the root cause for ElixirLS not working for the existing project.
$ MIX_ENV=test iex -S mix
Erlang/OTP 27 [erts-15.2.3] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit:ns]
** (File.Error) could not read file "/home/jan/Workspace/project-b/config/test.exs": no such file or directory
I renamed test for that project, but ElixirLS requires it. Adding config/test.exs solved the issue for me.
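For reference, a minimal config/test.exs that makes the file loadable can be as small as this (assuming Elixir 1.9+, where Config replaced Mix.Config):

```elixir
# config/test.exs — an empty config is enough for the file to load
import Config
```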
Nested axis guides have been deprecated in ggh4x; not sure if there is another option left.
Included files are a macro facility of the C language that allows both single and multiple inclusion, as can be seen in the following example. The included file does not necessarily have to have a specific file name extension such as ".h", unless we want to indicate a file with header content, etc.
You cannot send a body with GET; it's a limitation imposed by the HTTP protocol. APIs usually accept data with POST, so I would try that.
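Python's urllib illustrates the point: attach a body and the request is sent as POST (the URL here is a placeholder):

```python
import json
import urllib.request

payload = json.dumps({"q": "search term"}).encode()
req = urllib.request.Request(
    "https://api.example.com/items",  # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib chooses the method from the presence of a body
print(req.get_method())  # POST
```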
Because using Property Let, Get and Set we can have the same name for the read and write methods. Usually in a programming language a function identifier must be unambiguous: you can't use the same name twice in one specific scope. Using Property, the "visible" name is the same, and the "context" of the code "decides" which function is called. I believe that under the hood VBA gives the functions different names.
Hi, I am wondering how I can add numbers without displaying them while in a foreach loop, for example:
@{ int intVariable = 0; }
@foreach(var item in @Model.Collection)
{
@(intVariable += item.MyNumberOfInteres) //-> how can I calculate this without it being rendered to the website?
//some Html code.......
}
In CSS, elements inside a flex container can indeed shrink in a specific order using the flex-shrink property in combination with the natural flow of the layout. However, there's no direct property like "shrink-order".
flex-shrink sets the relative rate at which an element shrinks when there's not enough space in the flex container. A higher value means the element shrinks faster or more than others.
Also, it only works when the container has display: flex;
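A minimal sketch (the class names are invented): giving siblings different flex-shrink factors makes them give up space at different rates, which approximates a shrink order:

```css
.container { display: flex; }
.rigid     { flex-shrink: 0; } /* keeps its size, never shrinks */
.soft      { flex-shrink: 1; } /* default shrink rate */
.squishy   { flex-shrink: 3; } /* loses space 3x faster than .soft */
```

When the container runs out of room, .squishy collapses first, .soft next, and .rigid not at all.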
You can do this client side, on the condition that you can fetch the Icecast stream.
To make client-side playback and ICY metadata extraction work via fetch() in the browser, CORS (Cross-Origin Resource Sharing) requirements must be properly met by the radio stream server.
I wrote the @music-metadata/icy module for this occasion. Credits to Brad, who encouraged me to contribute to StackOverflow, while others gave me plenty of reason to run away.
const STREAM_URL = 'https://audio-edge-kef8b.ams.s.radiomast.io/ref-128k-mp3-stereo';
const trackDisplay = document.getElementById('track');
const audioElement = document.getElementById('player');
const mediaSource = new MediaSource();
audioElement.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', async () => {
const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
try {
// Dynamically import the ESM-only module
const { parseIcyResponse } = await import('https://cdn.jsdelivr.net/npm/@music-metadata/[email protected]/+esm');
const response = await fetch(STREAM_URL, {
headers: { 'Icy-MetaData': '1' }
});
const audioStream = parseIcyResponse(response, metadata => {
for (const [key, value] of metadata.entries()) {
console.log(`Rx ICY Metadata: ${key}: ${value}`);
}
const title = metadata.get('StreamTitle');
if (title) {
trackDisplay.textContent = title;
}
});
const reader = audioStream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value && !sourceBuffer.updating) {
sourceBuffer.appendBuffer(value);
} else {
await new Promise(resolve => {
sourceBuffer.addEventListener('updateend', resolve, { once: true });
});
sourceBuffer.appendBuffer(value);
}
}
mediaSource.endOfStream();
} catch (err) {
console.error('Error streaming audio:', err.message);
trackDisplay.textContent = 'Failed to load stream';
}
});
<html lang="en">
<head>
<title>ICY Stream Player</title>
<style>
body { font-family: sans-serif; text-align: center; margin-top: 2em; }
audio { width: 100%; max-width: 500px; margin-top: 1em; }
#track { font-weight: bold; margin-top: 1em; color: red; font-style: italic}
</style>
</head>
<body>
<h2>ICY Stream Player</h2>
<div>Now playing: <span id="track">...</span></div>
<audio id="player" controls autoplay></audio>
</body>
</html>
Assuming the text to be in cell A1 try:
=LET(t,TEXTBEFORE(TEXTAFTER(A1,"---",1),"---"),IF(LEFT(t,1)="6",t,"N/A"))
Tools such as Valgrind and Clang's MemorySanitizer (clang++ -fsanitize=memory) can check for reads of uninitialised memory (Valgrind warns you by default). Valgrind instruments the unmodified binary at runtime, while MemorySanitizer instruments the code at compile time, so you'll probably want the latter for its lower overhead. The GCC toolchain does not offer an equivalent of MemorySanitizer as far as I know.
I saw that too, and I don't know why there is no reference for it.
And in the updated version on w3.org it is still there: Link
An array is considered a variable length array when its size is set using a variable, and that size is only known at runtime, not at compile time.
Write the output in SOP form. For example, wherever a 1 is present:
A1 = y3'y2y1'y0' + y3y2'y1'y0' (in SOP, 0 is treated as a bar and 1 is taken as it is)
A2 = y3'y2'y1y0' + y3y2'y1'y0'
Then minimize the output using a K-map, and then implement it in hardware.
I did something similar in the past. When you have to manage a CAN device that has a custom interface via SDO (some special SDO, for example), it's better to format your message directly in PLC code. This can be done using the CAN interface object, which gives you a process image where you can place your custom messages.
See Beckhoff documentation docs
I think I found it.
It wasn't super clear to me (or to my searches, and it might actually be valid to issue a warning), but:
management.metrics.distribution.slo.http.server.requests=100ms,500ms,1s,3s
and
management.metrics.distribution.percentiles-histogram.http.server.requests=true
are somewhat mutually exclusive. See io.micrometer.core.instrument.distribution.DistributionStatisticConfig#getHistogramBuckets.
Specifying SLO buckets will actually create them as requested, BUT these will be buried under lots of default ones, which are created upon enabling percentiles-histogram. There is a list of 275 default buckets pre-created, and a subset of them is selected based on the expected minimum and maximum duration. By default (io.micrometer.core.instrument.AbstractTimerBuilder#AbstractTimerBuilder) these are 1 millisecond and 30 seconds respectively, which you can override using org.springframework.boot.actuate.autoconfigure.metrics.MetricsProperties.Distribution#minimumExpectedValue.
I don't understand this fully, and this precision might be needed for some use case. But if you just need to know whether something is slower than some threshold (and usually, if something is slower than 1s, it's bad regardless of by how much), it might be safer to specify just the SLO thresholds.
If I'm still missing something or am wrong altogether, please let me know!
I faced the same issue where the first connection attempt, or a connection after a longer period, would immediately time out. From my observations, the first connection attempt always takes significantly more time. Increasing the timeout and pool size and enabling pooling in the connection string helped in my case. I added these parameters: Timeout=60;Command Timeout=60;Pooling=true;MinPoolSize=1;MaxPoolSize=200
The process of migrating TRX tokens is described in the official documentation on GitHub:
1. Access Your Wallet: Check your TRX balance in the list of Ethereum tokens.
2. Send Tokens: Send your old ERC20 TRX tokens to migration smart contract.
3. Confirmation: After the transaction is confirmed and broadcasted to the blockchain, the migration process will start, and your wallet will be credited with new TRX tokens. Depending on network usage, it may take 5–30 minutes to process the migration.
1- Create HLS chunks where the first chunk is 2 seconds, the second chunk is 3 seconds, the third is 5 seconds, and all subsequent chunks are 5 seconds each.
2- Cache the videos using fs, and delete them once they have been watched.
I am coming at this from a math/physics angle rather than programming, so forgive me if I am focusing on the wrong thing, but I need some clarification on what exactly you are trying to transform here. Are you trying to preserve the ratio of the scale length to the point-scale distance?
By point-scale distance, I mean the length of the perpendicular line/arc from the point to the closest point on the scale. My understanding is that you are trying to satisfy:
It's known from differential geometry (see Theorema Egregium) that you cannot project from a sphere to a plane while preserving both shapes (or angles) and areas, which I suspect is very likely to be the root cause of your problems. I am not really sure if what you are trying to achieve can be done by only preserving one or the other or if you're trying to do something impossible, but it's probably worthwhile to actually carry out the math in 3D rather than a 2D projection. The (two ends of the) scale and the point together form a triangle on the Earth's surface, so you're really trying to transform (rotate, scale, translate) a spherical triangle, which I am not sure would work. Spherical trigonometry might help you here.
The transformations you're composing are:
translation (cyan -> magenta)
scaling (ends up in green point)
rotation (takes the point in question from green point to pink point)
Using regular 2D/3D Cartesian coordinates, these operations do not commute. Chiefly, the reason for this is that translation is not a linear transformation in Cartesian coordinates (tiny proof sketch: the 0 vector does not map back to 0 under a translation). In other words, you'll get a different result if you change the order. In general, you'll apparently need to use homogeneous coordinates, under which translation is linear, to avoid this problem; however, you might end up working with points/lines/areas off the surface of the Earth if you directly convert Cartesian coordinates to homogeneous coordinates in this case. I cannot guess offhand whether the approximation would work better with homogeneous coordinates or not.
This is caused by an open issue in the wix/react-native-ui-lib package, which you have installed: https://github.com/wix/react-native-ui-lib/issues/3594
An immediate solution, if possible for your use case, is to uninstall react-native-ui-lib. This is the solution suggested in these react-native issue threads:
Otherwise you can try to patch react-native-ui-lib using patch-package (the threads above contain some guidance on how to do that), or wait for react-native-ui-lib to publish an update that fixes the issue.
I'll add this here, since it's one of the top results from Google for the question:
Yes, the keyword is $Prompt$ in any of the parameter fields of the run configuration, such as VM options, program arguments, and so on. With $Prompt:Type your birthday$ you can even add a label to the prompt.
I didn't find a solution for a multiple-choice / dropdown prompt, though.
It doesn't seem like opt can be coaxed into producing the output you would like, so most likely you will want to post-process its output; the heavier alternative would be to write a C++ LLVM module.
There may be several ways to go about this. Graphviz files can reference one another (e.g. this is how OTAWA handles it), so you can have the callgraph file referencing each of the CFG files.
If this isn't satisfactory and you must have a single visual, you will indeed want to merge the files. You will need to ensure that some nodes have distinct names while others' names, on the contrary, match up. This StackOverflow question may be a useful read: Merging graphs in Graphviz
It seems that Panda3D requires the .vert and .frag file extensions, and won't accept a .glsl extension.
It didn't give me any output warning me of this, but when I duplicated my files and changed the extensions, the version error went away, and I can now apply the shader to the scene and camera!
Mine was in a different region. Check whether there are volumes in a different region.
Can we configure grafana.ini for this?
A solution from Qt forum user raven-worx, using a stylesheet and some code modifications, is described on the Qt forum.
As I did not find the copyright policy of the Qt forum, I am not copying it here.
It might've changed again. Mine is in /var/cache/dhcp.leases.
using var stream = await FileSystem.OpenAppPackageFileAsync(rawResourcesfileName);
using var fileStream = File.Create(Path.Combine(FileSystem.AppDataDirectory, rawResourcesfileName));
await stream.CopyToAsync(fileStream);
From Storybook 7+, the backgrounds addon was refactored. Now you must:
Define backgrounds in preview.tsx like this:
// preview.tsx
const preview: Preview = {
parameters: {
backgrounds: {
options: {
dark: { name: 'Dark', value: '#000000' },
light: { name: 'Light', value: '#FFFFFF' },
},
},
},
};
Set the background in stories like this:
export const OnDark: Story = {
globals: {
backgrounds: { value: 'dark' },
},
};
For more details: https://storybook.js.org/docs/essentials/backgrounds
Please create a backup of the virtual machine and disks before applying the changes.
Change the value of SupportedFeatures to b (Hexadecimal) or 11 (Decimal) for the following three drivers, then restart the system:
HKLM\SYSTEM\CurrentControlSet\Services\frxdrvvt\SupportedFeatures
HKLM\SYSTEM\CurrentControlSet\Services\frxdrv\SupportedFeatures
HKLM\SYSTEM\CurrentControlSet\Services\frxccd\SupportedFeatures
As a follow-up note on this topic: if you're showing a dialog form with ShowDialog and that form is itself set to TopMost = True, I additionally suggest adding a handler for the dialog form's FormClosing() event and setting Me.TopMost = False there. This helps prevent a secondary bug where, when the child dialog closes, the calling form that spawned it gets kicked back behind windows it was originally on top of.
The current answers still did not do exactly what I wanted, so I just published another solution built on polars dataframes: polarsgrid
from polarsgrid import expand_grid
expand_grid(
fruit=["apple", "banana", "pear"],
color=["red", "green", "blue"],
check=[True, False],
)
Which returns the following data frame:
shape: (18, 3)
┌────────┬───────┬───────┐
│ fruit ┆ color ┆ check │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ bool │
╞════════╪═══════╪═══════╡
│ apple ┆ red ┆ true │
│ banana ┆ red ┆ true │
│ pear ┆ red ┆ true │
│ apple ┆ green ┆ true │
│ banana ┆ green ┆ true │
│ … ┆ … ┆ … │
│ banana ┆ green ┆ false │
│ pear ┆ green ┆ false │
│ apple ┆ blue ┆ false │
│ banana ┆ blue ┆ false │
│ pear ┆ blue ┆ false │
└────────┴───────┴───────┘
You don't need a capturing group. String's replace function will replace only the matched text and leave the remainder of the string unchanged.
let a = "aaa yyy bbb"
console.log(a.replace(/y+/, (match) => match.toUpperCase()))
// 'aaa YYY bbb'
If I restart my machine, the problem is solved, but if I suspend my machine the error comes back!
I think this problem affects many users with AMD processors. While using my Ubuntu machine I faced the same error (after suspending the machine). I simply switched to Windows (I have both operating systems installed as dual boot) and then switched back to Ubuntu, and the problem was solved.
Before switching from Linux to Windows:
If you run the command "sudo prime-select query" and the output is intel, you need to switch it to nvidia using "sudo prime-select nvidia", and then switch to Windows.
You can also use "sudo prime-select on-demand".
Theoretically one could use computer vision to detect what has changed between frames, and then generate the canonical transformations on the SVG elements, producing a very size-efficient animated SVG file compared to a movie or GIF. One of the challenges is making the object tracking work flawlessly; another is the computing requirements to do so. Possibly you would have to train neural networks to solve this. You basically have to make sure that you correctly track, for example, things that moved, grew, changed color, or even temporarily disappeared.
I created a project for this purpose: https://github.com/Nutomic/workspace-unused
I noticed that two of the column names in the 'dat' dataframe were wrong (perhaps a recent update?): middle is now xmiddle.
p + geom_segment(data=dat, aes(x=xmin, xend=xmax,
y=xmiddle, yend=xmiddle), colour="red", size=2)
I'm running the selenium/standalone-chrome image in a VM with 1 GB of memory and got this error. Maybe it is caused by a lack of computing resources.
Try assigning this.ddlDepartments.SelectedValue to a string first:
string ddlDepts = this.ddlDepartments.SelectedValue;
Then use the string ddlDepts in the query string.
After working on this for some time and getting nowhere, I decided to take the advice of Svyatoslav Danyliv and Scott Hannen: forget Interop Excel and use ClosedXML. I have already made the change and it is working great!
All incognito tabs in a single browser window share the same cookie jar.
Since Spring's session management uses a session cookie (typically JSESSIONID), both tabs will send the same cookie to the server, and the server will correctly associate them with the same session.
So, no.
pdftk is installed as a snap, and snaps do not have access to all files by default. Running
snap connect pdftk:removable-media
resolved the issue.
Your datasets package is outdated or an incompatible version. Try this:
pip install --upgrade datasets
I was finally able to understand where the difference comes from. I was using the GPU for TensorFlow/Keras, so the computations are indeed different from NumPy, which runs on the CPU.
Using this to make TensorFlow/Keras run on the CPU got me the same result as in NumPy (note that the variable must be set before TensorFlow is imported):
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
I wrote a page for that. Check it out.
Set the maxParallelForks for both test and integrationTest. You can find more about it here.
Or, if you want to run test and integrationTest simultaneously, set
org.gradle.parallel=true
in your gradle.properties file. docs
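A minimal sketch of the first option in Groovy build-script syntax (the fork-count heuristic is just a common example, not taken from the question; adjust it to your machine):

```groovy
// build.gradle -- applies to every Test-type task, including integrationTest
tasks.withType(Test).configureEach {
    // Use half the available cores, but at least one fork
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
```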
Currently, the most 'modern' approach (with a build.gradle.kts file) would be like this:
android {
packaging {
resources.excludes.add("META-INF/*")
}
}
SCRIPT_STR("
import re
import nltk
from nltk.corpus import stopwords

# Ensure stopwords are downloaded (run once in the TabPy environment)
try:
    stopwords.words('english')
except:
    nltk.download('stopwords')

stop_words = set(stopwords.words('english') + stopwords.words('german') + stopwords.words('spanish') + stopwords.words('italian'))

def clean_text(text):
    if not isinstance(text, str):
        return ''
    text = text.lower()
    text = re.sub(r'[^a-záéíóúüñçßäöëêèàâî\s]', '', text)
    words = [w for w in text.split() if w not in stop_words and len(w) > 1]
    return ' '.join(words)

return [clean_text(t) for t in _arg1]
", [Comments])
I did something in Python, but it is not working. Can anyone help?
The formula must be valid SQL. I would write it with the {alias} placeholder, which references the table in the FROM clause. Hibernate will replace {alias} with the alias of the 'a' table.
@Formula("(SELECT COUNT(*) FROM b WHERE b.a_id = {alias}.id)")
In game development, AI car development for computer players involves creating algorithms that allow non-human drivers to navigate, compete, and respond intelligently to dynamic environments. These AI-controlled cars use pathfinding, decision trees, neural networks, or behavior trees to simulate realistic driving behavior—such as overtaking, braking, avoiding obstacles, and adapting to the player's actions.
Developers often program these virtual drivers to learn from their environment and improve over time, making gameplay more challenging and engaging. Advanced AI even allows cars to mimic human-like driving styles or race strategies.
For those building games or simulations that require intelligent virtual agents, using professional AI development services can help implement sophisticated driving logic, real-time decision-making, and machine learning models that enhance the overall user experience.
Ok, found the problem. We installed DacFX on the target machine, but it turned out to be an older version. Once we used the same version as was installed on the build agent we were able to continue with the deployment.
So if anyone here has the same issue: install DacFX on the TARGET machine as well as on your build agent. DACPAC files need to be copied to the target machine, as SqlPackage executes the deployment on the target server, not from the DevOps agent.
Why not just put a cache variable outside the React code, containing:
validUntilDate: ..
data: ..
Then you can serve cache.data if Date.now() < validUntilDate; otherwise fetch again and repopulate it with new data and a new date.
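The pattern is language-agnostic; here is a sketch of it in Python for illustration (all names are made up for this example; the same idea applies to a JS module-level variable):

```python
import time

# Module-level cache shared by all callers
CACHE = {"valid_until": 0.0, "data": None}
TTL_SECONDS = 60

def fetch_fresh():
    """Stand-in for the real network fetch."""
    return {"value": 42}

def get_data(now=None):
    now = time.time() if now is None else now
    if now < CACHE["valid_until"]:        # cache still valid: serve it
        return CACHE["data"]
    CACHE["data"] = fetch_fresh()         # otherwise refetch and repopulate
    CACHE["valid_until"] = now + TTL_SECONDS
    return CACHE["data"]
```

Because the cache lives outside any component, it survives re-renders and is shared across the whole app; the trade-off is that it is not reactive, so a component will only see fresh data on its next call to get_data.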
If the cursor was created via a CreateCursor/CreateIconIndirect/CreateIconFromResource/CreateIconFromResourceEx call, then there is only one instance of the HICON, and you can get a bitmap from it with a GetIconInfo/GetIconInfoEx call (as you are already doing).
But if the cursor was loaded from a file or resource via LoadCursor/LoadImage, then you can try to get the ICONINFOEX structure with GetIconInfoEx and then use wResID/szModName/szResName in a LoadImage call with the desired size (DPI-aware).
Something like:
UINT dpi = GetDpiForWindow(hWnd);
INT x = GetSystemMetricsForDpi(SM_CXICON, dpi);
INT y = GetSystemMetricsForDpi(SM_CYICON, dpi);
ICONINFOEX iconInfoEx = {0};
iconInfoEx.cbSize = sizeof(iconInfoEx); // cbSize must be set before calling GetIconInfoEx
GetIconInfoEx(hIcon, &iconInfoEx);
HMODULE hModule = GetModuleHandle(iconInfoEx.szModName);
// szResName is an inline buffer; an empty string means the resource is identified by the ordinal wResID
HCURSOR hCursor = (HCURSOR)LoadImage(hModule, iconInfoEx.szResName[0] != TEXT('\0') ? iconInfoEx.szResName : MAKEINTRESOURCE(iconInfoEx.wResID), IMAGE_CURSOR, x, y, LR_DEFAULTSIZE);
Some info on this topic can be found on Raymond Chen's blog.
With Power BI, the OAuth client is maintained by Azure itself.
You will have to initiate the connectivity using Microsoft SSO, and on Snowflake, you will have to create an external OAuth for Power BI.
The link below describes the steps you will have to follow:
https://docs.snowflake.com/en/user-guide/oauth-powerbi
If you are creating multiple clients for different applications on Azure, you might run into an error when creating a new security integration, as Snowflake does not allow creating a security integration with the same issuer.
In that case the following article will be helpful: https://community.snowflake.com/s/article/Solution-Connect-to-Snowflake-From-Power-BI-Using-SSO-When-There-Is-An-Existing-OAuth-Integration
Instead of immediately sending an adaptive card to a channel, first send a normal message to a channel. Afterwards, add a second step which updates an adaptive card but choose the Message-ID from the previous step.
This post helped me, and it shows the necessary steps: Notification of workflows
To resolve this, I disabled the horizontal spacers that AG Grid adds for pinned sections by applying the following CSS:
.ag-horizontal-left-spacer{
display: none !important;
}
.ag-horizontal-right-spacer{
display: none !important;
}
This removed the additional scrollbars under the pinned columns, resulting in a clean layout with only one horizontal scrollbar for the main scrollable area.
I'm working on defining routes in CodeIgniter 4 and want to confirm the best professional way to structure the routes, especially when using both POST and GET methods for editing and deleting records.
If both actions (edit and delete) use the POST method, this is the alternative way I'm defining the routes:
$routes->post('admin/news/edit/(:any)', 'Admin_news::edit/$1');
$routes->post('admin/news/delete/(:any)', 'Admin_news::delete/$1');
Alternatively, if the delete method is accessed via GET, then:
$routes->get('admin/news/delete/(:any)', 'Admin_news::delete/$1');
Please let me know if this approach is acceptable and follows best practices, or if there are any recommended improvements for cleaner or more secure route definitions.
When using AVSpeechSynthesizer and creating multiple utterances, the simulator may drop some of them.
I also faced this issue. I was sending large data via an intent:
ArrayList<MyObj> list = new ArrayList<>();
myIntent.putExtra("data", list); // This line causes the TransactionTooLargeException
Instead of sending the data in the intent, I made a static global variable and used it in both activities:
public static ArrayList<MyObj> list = new ArrayList<>();
This is how I resolved the issue.
Adding both of these was a catch all:
// @match *://*/*
// @match *:///*/*
The second pattern matches local files.
I don’t think this is related to GA4 settings; GTM doesn’t really connect to GA4, it just sends the requests based on your configuration.
Here’s a bit more info about the issue. From what I’ve seen, you likely just need to include the user parameters inside the Event Parameters field, as someone suggested.
I haven’t tested it yet myself, since this is quite new.
TL;DR: First off, remove binding redirects, then despair ahead.
A common solution people recommend for this problem is to create a binding redirect.
Quite often, though, a binding redirect which already exists (and was added in the past) IS the cause of the problem, and removing the binding redirect is the solution. So check all your app.config and web.config files for suspicious binding redirects for the affected module, and try to remove them before you try to change them. If after removing all binding redirects you still get a similar error (often it's about a different version), then you may consider adding a binding redirect back where needed (and creating a new annoying problem for the poor sob who has to update the NuGet packages the next time).
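For reference, a binding redirect entry in app.config/web.config looks like the following sketch (assembly name, public key token, and versions here are placeholders, not taken from the question):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Placeholder assembly: substitute the name/token/versions of the affected module -->
        <assemblyIdentity name="Some.Assembly" publicKeyToken="0123456789abcdef" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```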
Binding redirects are the "DLL hell" of .NET framework. Nobody I know knows what they are and how they work, there is hardly any understandable documentation for them online, and problems with binding redirects always hit you at the worst of times, when the last thing you want to do is to nerd out about some peculiar feature of your language and build system. Not even I am autistic enough for binding redirects to pique my interest. Even Github Copilot refuses to answer questions about binding redirects, it seems.
Another common solution people always keep recommending is to perform some magical incantation that involves the gacutil command.
While this may "work", it is a manual intervention that makes some global change on your personal development system, but not in the sources or build files of your project. Will your built project now work on every target system it is meant to be deployed to? Will other developers on your team have to run the same command? And your customers, too? Who is going to tell them? And will you even remember you ran this command and what the exact command-line was that you copied from StackOverflow (or from some AI answer that scraped it from SO)?
This is how you run into situations where something "works for you" and your manager will respond with that worn out joke: "OK, then we ship your computer".
Running gacutil is almost NEVER the right solution unless you are a consumer and not a developer trying to get your own code to build and run (which is the target audience of this website). When you are looking at a gacutil command line on some Chinese-language website you found with Google, then it's time to turn off your computer, call it a week and head out for drinks.
Yes, it's definitely possible to use AutoIt to handle security code entry in pop-up boxes during Selenium WebDriver automation. AutoIt is particularly useful for handling Windows-native dialogs and pop-ups that Selenium can't interact with directly.