Well, I'll be damned... I used to send it to myself using WhatsApp Web, then open the same chat on my phone, download the file and tap it to execute.
I tried a different method (thanks, @CommonsWare): just plugging in the phone and using the file explorer, and it works.
My best guess is that my PC does something to the file when sending it through WhatsApp Web, something that my work PC does not do for some reason.
Dark magic!
Try sending a manual task (rather than a scheduled one) to the Celery worker; does it work?
Have you checked that the Celery processes have appropriate file permissions? Celery beat writes its schedule to a file by default.
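For reference, a minimal sketch of triggering a task by hand (the task name and broker URL below are just placeholders, not your actual setup):

# tasks.py - a tiny Celery app with one task (broker URL is a placeholder)
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def add(x, y):
    return x + y

# From a Python shell: enqueue the task manually, bypassing celery beat
#   >>> from tasks import add
#   >>> add.delay(2, 3)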
Have you checked RabbitMQ logs?
You seem to be running the Celery worker without a virtual environment; this might be an issue.
wp-content/mu-plugins/jet-wc-orders-status-callback.php
<?php
add_filter( 'jet-engine/listings/allowed-callbacks', static function ( $callbacks ) {
    $callbacks['jet_get_wc_order_status'] = __( 'WC get order status label', 'jet-engine' );
    return $callbacks;
}, 9999 );

/**
 * @param $status
 *
 * @return string
 */
function jet_get_wc_order_status( $status ) {
    if ( ! is_scalar( $status ) ) {
        return '-';
    }

    if ( ! function_exists( 'wc_get_order_statuses' ) ) {
        return $status;
    }

    $labels = wc_get_order_statuses();

    return $labels[ $status ] ?? '-';
}
I ended up doing it this way since "localhost" was already part of the default set...
livenessProbe:
  httpGet:
    path: /login/
    port: http
    httpHeaders:
      - name: Host
        value: localhost
The CSS property box-decoration-break can be used to repeat margins and paddings on all pages.
.page {
  box-decoration-break: clone;
  padding-top: 1.5in;
  padding-bottom: 1.5in;
  page-break-after: always;
}
For Synology DSM or other systems running Entware package manager, Miguel's answer is correct, and the specific command to install CPAN is:
opkg install perlbase-cpan
Per the official GitHub for jsPDF (Support marked content #3096), as of July 2025 the ability to add accessibility tags has not been implemented yet.
I had a similar issue, and I think I managed to solve it by increasing the value of net.netfilter.nf_conntrack_udp_timeout_stream using sysctl. It defaults to 120s, so this may be the setting that causes the timeout error in Spark.
I couldn't get a single command to work on WSL (no Node.js available), so I edited this from @ntshetty:
#!/bin/bash
# ./monitor.sh main.py &
# $1 passes the filename
# source: @ntshetty stackoverflow.com/a/50284224/3426192

python "$1" &   # start the server
while true
do
  mdhash1=$(find "$1" -type f -exec md5sum {} \; | sort -k 2 | md5sum)
  sleep 5
  mdhash2=$(find "$1" -type f -exec md5sum {} \; | sort -k 2 | md5sum)
  if [ "$mdhash1" = "$mdhash2" ]; then
    echo "Identical"
  else
    echo "Change Detected Restarting Web Server"
    pkill -9 -f "$1"   # stop the old process by matching its command line
    python "$1" &      # restart
  fi
done
echo "Ended"
I believe you're looking for a window join. If you post the code to generate the above tables, it will be easier for others to validate.
gcc 7.3, ruby 2.3.7, mysql 5.7, mysql2 gem 0.3.21. I tried all of the above and many more solutions.
The main cause of the segmentation fault error was encoding: utf8mb4 in database.yml. Once I changed it to utf8 only, the error vanished.
Here’s what actually fixed it:
1. Run npm config set legacy-peer-deps false in the terminal.
2. Delete node_modules and package-lock.json.
3. In some cases, you may also need to delete the Firebase functions and set them up again.
4. Run firebase deploy --only functions.
Finally, don’t forget to run npm install inside the functions folder before deploying again.
See the reference where I discovered this: https://stackoverflow.com/a/77823349/23242867
How about:
=COLUMNS(TEXTSPLIT(A1,","))
In the case where a cell might be empty, use this:
=IF(A1="",0,COLUMNS(TEXTSPLIT(A1,",")))
My reason for this error was that I had put:
http_method_names = ["POST", "GET"]
in my view class in app/views.py.
HTTP method names need to be all lowercase:
http_method_names = ["post", "get"]
In Flutter 3.32 you can set enableSplash to false to disable the built-in gesture and make the children clickable.
CarouselView(
enableSplash: false
)
If you wound up here due to having perfectly good JSON but you still get the error above, I have a question for you:
Did you transfer a file from a Windows to a WSL context?
Because if you did, that file is going to be mangled moving from the Windows ANSI \r\n convention to the Linux \n standard. If you are using VS Code or something, copying the contents of the file over should be enough to get it to convert properly.
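If you would rather convert the file in place, a small sketch (the filename is just an example):

# Normalize Windows CRLF line endings to Unix LF in place
path = "data.json"
with open(path, "rb") as f:
    content = f.read()
with open(path, "wb") as f:
    f.write(content.replace(b"\r\n", b"\n"))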
Passing the replacement text through echo -e solved the problem...
$ echo ": HIGHLIGHT some more text " | sed "s/: HIGHLIGHT.\\{1,\\}\$/$(echo -e $_highlight)/g"
[BLUE]
$
The solution for this highly specific problem is to not install the "minidriver" part of the SafeNet Authentication Client package. If it is already installed, uninstall the entire package, then run the installer again, choose a "Custom" install, and make sure the Minidriver feature set is set to "don't install" (the SafeNet installer doesn't offer a "Modify" option).
This is apparently because the "minidrivers" are the ones that allow the badly designed Windows Smart Card logon system to talk to the SafeNet USB smart cards. With the minidrivers removed, all access is through the SafeNet extensions to the CryptoAPI 2 subsystem that is used by signing tools (including old tools based on the classic CryptoAPI 1).
<ul id="playlist" style="display:none;">
<li data-path="http://99.250.117.109:8000/stream" data-thumbpath="thumbnail of whatever" data-downloadable="no" data-duration="00:00">
</li>
</ul>
@ArcSet was on the right path but I think you want:
'raw.asset1' = Get-Item("c:\temp\lenovo.zip")
Did you ever find a solution to this? I'm encountering the same issue.
You will need the Spring GraphQL plugin; see this JetBrains article for its features.
I have the exact same problem with that tool; did you solve it?
Has anyone found a solution to this problem? I access a Windows machine via Remote Desktop and the problem only shows in the Delphi editor. The Object Inspector works fine.
Do not use extend, as it expects a scalar expression, not a tabular function call; use only | myfunction("a","b").
When VS Code created the mcp.json file, it was created as a user config file on the Windows side by default:
/C:/Users/spencer/AppData/Roaming/Code/User/mcp.json
The file needs to be in your workspace:
/{path to project}/.vscode/mcp.json
The parameter set file (.SSV) of the SSP standard (a sibling standard to FMI) is intended for this (https://ssp-standard.org/).
If your importing tool supports the SSP standard, you can put the FMU together with an .SSV parameter file into an SSP file.
The FMI project is also working on a layered standard, FMI-LS-REF (https://github.com/modelica/fmi-ls-ref), with which you will be able to put one or several .SSV files into an FMU at a defined subfolder of the FMU's /extra directory.
Yes, lv_timer_handler() is definitely being called. But thanks to your information, I discovered that my LVGL initialization only called lv_init() but did not set a tick source.
I now also call lv_tick_set_cb([]() -> uint32_t { return (uint32_t) millis(); }) during initialization.
As a result, my image/JPEG is now drawn correctly even without calling lv_refr_now(NULL).
In the vendor demo, I couldn't find where the tick is set, but I suspect it's done via lv_conf.h.
I also had another issue: I had initialized the display with the wrong frequency (18 MHz), which I had taken from another GitHub (ESP-IDF) example. But in the official vendor demo, I saw that 14 MHz is used instead.
Now, with the correct frequency, my display and slideshow run smoothly and without flickering. Great. Thanks!
def login():
    username = input("Inany da Silva Serra: ")
    passw = input("Search the password: ")
    if check(username, passw):
        print("Thank you for logging in.")
    else:
        print("Username or password is incorrect")
    return username, passw

def check(username, passw):
    pword = {}
    for line in open('unames_passwords.txt', 'r'):
        user, password = line.split()
        pword[user] = password
    return pword.get(username) == passw

def main():
    login()

main()
As of 2025, this does seem to work as expected:
demands:
- Agent.ComputerName -equals $(server)
Maybe you are using the cart block on your cart page while customizing and overriding the classic cart template. If you want to customize your cart template, you'll need to replace the cart block with the cart shortcode.
[woocommerce_cart]
In this extended scenario, the sub-sampling is no longer over contiguous memory blocks, so the data streaming approach will also require script-level looping and will probably not be particularly efficient.
On my setup, looping over Slice2 calls is still ~20% faster than the single call to ExprSize, but that may be because my computer has an old processor. I did notice, however, that the timing results varied a great deal and seemed to be connected to foreground activity in support of the Display floating window. For optimum and consistent timing results, I think it is important either to delay calls to ShowImage until the very end of the script, or to make sure the Display floating window is closed. Of course, these issues are side-stepped entirely if one runs the script on the foreground thread, instead of in the background, as you currently seem to do.
SELECT GENDER, BG, COUNT(*) AS total_count
FROM (
    SELECT GENDER, BG FROM DONOR
    UNION ALL
    SELECT GENDER, BG FROM ACCEPTOR
) AS combined
GROUP BY GENDER, BG
ORDER BY GENDER, BG;
This error occurs because of NaN values present in your DataFrame. I resolved it as follows:
df=df.fillna('')
df1 = spark.createDataFrame(df)
The other answers have adequately addressed your issue; however, I would like to share a novel approach towards the goal of "cleanly acquiring a collection of mutexes". Due to the nature of the synchronized block in Java, it's not feasible to acquire several mutexes in turn (the loop would essentially need to be unrolled).
Object[] mutexes = new Object[4];
for (int i=0; i < 4; i++) mutexes[i] = new Object();
synchronized (mutexes[0]) {
    synchronized (mutexes[1]) {
        synchronized (mutexes[2]) {
            synchronized (mutexes[3]) {
                // all acquired
            }
        }
    }
}
However, if you look at the resultant bytecode, you'll see that each synchronized block is opened with a MONITORENTER instruction and explicitly closed with a MONITOREXIT instruction. If we had direct access to these operations, we could iterate once to enter each monitor and then iterate again to exit each monitor. Is it possible to compile valid Java code that does this? Sort of.
JNI exposes these methods in the form of JNIEnv::MonitorEnter and JNIEnv::MonitorExit. With this in mind, we can do the following:
public final class MultiLock {
    public static void run(Object[] mutexes, Runnable task) {
        monitorEnter(mutexes);
        try {
            task.run();
        } finally {
            monitorExit(mutexes);
        }
    }

    private static native void monitorEnter(Object[] arr);
    private static native void monitorExit(Object[] arr);
}
#include "MultiLock.h" // Header generated by javac
#include <stdlib.h>
#include <stdbool.h>
static inline bool is_valid_monitor(JNIEnv *env, jobject object) {
return object != NULL;
}
JNIEXPORT void JNICALL Java_MultiLock_monitorEnter(JNIEnv *env, jclass ignored, jobjectArray arr) {
jsize len = (*env)->GetArrayLength(env, arr);
jobject next;
for (jsize i = 0; i < len; i++) {
next = (*env)->GetObjectArrayElement(env, arr, i);
if (!is_valid_monitor(env, next)) continue;
(*env)->MonitorEnter(env, next);
}
}
JNIEXPORT void JNICALL Java_MultiLock_monitorExit(JNIEnv *env, jclass ignored, jobjectArray arr) {
jsize len = (*env)->GetArrayLength(env, arr);
jobject next;
if (len == 0) return;
for (jsize i = len - 1; i >= 0; i--) {
next = (*env)->GetObjectArrayElement(env, arr, i);
if (!is_valid_monitor(env, next)) continue;
(*env)->MonitorExit(env, next);
}
}
And use it like so:
// Load the natives somehow
Object[] mutexes = new Object[4];
for (int i=0; i < 4; i++) mutexes[i] = new Object();
MultiLock.run(mutexes, () -> {
// all acquired
});
You can disable this rule selectively. This is how to do it in the case of Ionic:
"vue/no-deprecated-slot-attribute": ["error", {
"ignore": ["/^ion-/"],
}],
This way the rule will work for every tag except those starting with ion-.
I finally got the API-to-database connection to work by using the following tutorial.
See step 2, "Create a passwordless connection". As the tutorial mentions, I used the Azure Portal to automatically create the connection for a system-assigned managed identity.
In case this happens to anyone else: strangely, it was the variable names. I changed PUBLIC_SUPABASE_URL and PUBLIC_SUPABASE_ANON_KEY to VITE_PUBLIC_SUPABASE_URL and VITE_PUBLIC_SUPABASE_ANON_KEY and it works. Weird, because I believe the Svelte docs say that PUBLIC_... should work.
You have to bring out each segment as a cluster group and make it optional.
Each segment is self contained.
"company" \s* : \s*
( \d+ ) # (1), Company req'd
(?:
.*?
"address1" \s* : \s* "
( .*? ) # (2), Addr optional
"
)?
(?:
.*?
"country" \s* : \s* "
( .*? ) # (3), country optional
"
)?
(?:
.*?
"Name" \s* : \s* "
( .*? ) # (4), Name optional
"
)?
It is quite interesting, actually: Google and Facebook use it for faster string manipulation. The performance of fbstring is better, so the normal std::string provided by C++ is replaced with it for better performance. You create a file named string and in it you include <folly/FBString.h>, so when you use std::string the fbstring backend does the work, which helps boost performance. In simple terms it is like overriding (conceptually speaking, not in the same-name-and-parameters sense): for std::string you are supplying your own backend code that boosts performance. But be careful about where you place the folder, and tell CMake to include it before the system includes; the aim is for the compiler to give priority to your string file.
This was user error. On the Mac I had to set the channel number correctly in the Sniffer dialog. Once I did that, it worked fine. Also, in Wireshark -> Protocols -> IEEE 802.11, when editing the decryption keys, the password along with the SSID had to be entered as follows: <password>:<ssid>
I had the same issue, but recently found out that it is just not possible. As written in this link,
The network inspector only shows requests made through HttpURLConnection. The CIO engine, by design, communicates via TCP/IP sockets.
Requests sent through CIO will not be detectable unless the Android Studio network inspector changes
total is a "reserved word" / "invalid suffix" for Prometheus, see: https://github.com/micrometer-metrics/micrometer/wiki/1.13-Migration-Guide#invalid-meter-suffixes
You might want to look at prometheus-rsocket-proxy.
textFieldDidChangeSelection is one I use a lot
I think the requestMatchers have a higher priority than the method annotations.
I also encountered the same problem. I wanted to validate all resources under the /api/** path, but adding @PermitAll at the /api/abc path was ineffective, and even changing to @PreAuthorize("permitAll()") also didn't work.
Yes, absolutely. Gioui is specifically designed for cross-platform mobile development and works well with Go mobile for building Android apps. You can write your UI in Gioui and use gomobile bind or gomobile build to compile it into an Android APK. This is a fully supported and documented approach.
This is a very elegant solution but I do have one problem: although the final print generated to either the printer or to preview renders the headers/footers correctly the view of the pages in the print preview dialog shows the headers/footers upside-down.
Is there perhaps a solution?
If you run the container once with a different password and then restart it with another password, it won’t update the existing database because it’s already initialized. Try removing the old volumes or using a different volume.
So we need to remove values from the authorization_role table's gws_websites and gws_store_groups columns based on the deleted stores.
Hi Abbas, I do not have gws_websites and gws_store_groups columns in my authorization_role table. I am on Magento 2.4.7-p6. Would you know where this info would be in the newer version of Magento? I am using the Community edition.
With some regex engines, {-1,} is a possible "lazy" quantifier; another option sometimes is to provide a flag for the pattern, signifying that it should default to "lazy" matching instead of "greedy".
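For illustration, a quick sketch in Python's re module, which spells laziness with a ? suffix rather than {-1,} or an engine-wide flag:

import re

text = "<a><b>"
print(re.findall(r"<.*>", text))   # ['<a><b>']      greedy: match as much as possible
print(re.findall(r"<.*?>", text))  # ['<a>', '<b>']  lazy: match as little as possible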
Yes, you can absolutely create a compelling hierarchy tree from a nested <ul> structure using pure CSS! It's a common and fun challenge. Your current approach is already very close; the key is to fine-tune the positioning and dimensions of the pseudo-elements (::before and ::after) to draw those connecting lines accurately.
Here's a refined CSS solution that should give you the desired command-line tree look, along with an explanation of the adjustments.
HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CSS Hierarchy Tree</title>
<style>
body {
font-family: monospace; /* For that terminal-like feel */
color: #333;
padding: 20px;
}
ul {
list-style: none;
margin: 0;
padding-left: 20px; /* Adjust as needed for initial indent */
position: relative;
}
li {
position: relative;
padding-left: 20px; /* Space for the horizontal line and text */
margin-bottom: 5px; /* Small space between items */
line-height: 1.5; /* Vertical spacing for text */
}
/* Horizontal line for each item */
li::before {
content: '';
position: absolute;
top: 0.75em; /* Adjust to align with the text baseline */
left: -5px; /* Pull the line slightly to the left to connect with parent's vertical line */
width: 15px; /* Length of the horizontal line */
height: 0;
border-top: 1px solid #999;
}
/* Vertical line for connecting child branches */
li::after {
content: '';
position: absolute;
top: 0.75em; /* Start from the same height as the horizontal line */
left: -5px; /* Align with the horizontal line start */
height: calc(100% + 5px); /* Extend beyond the current li to connect to siblings/next parent */
border-left: 1px solid #999;
}
/* Hide vertical line for the last child in a branch */
li:last-child::after {
height: 0.75em; /* Only extend to the horizontal line of the current li */
}
/* Remove the initial horizontal line for the very first item (if desired) */
ul:first-child > li:first-child::before {
border-top: none;
}
/* Remove the initial vertical line for the very first item (if desired) */
ul:first-child > li:first-child::after {
border-left: none;
}
/* Style for the nested ULs to create proper alignment */
li > ul {
padding-left: 20px; /* Indent for nested lists */
margin-top: 5px; /* Adjust spacing between parent and child UL */
}
/* Special handling for the vertical line coming *down* from a parent */
li:not(:last-child) > ul::before {
content: '';
position: absolute;
top: -5px; /* Align with the bottom of the parent's vertical line */
left: 0;
height: 10px; /* Length of the connecting vertical line */
border-left: 1px solid #999;
}
</style>
</head>
<body>
<ul>
<li>1830 Palmyra
<ul>
<li>1837 Kirtland
<ul>
<li>1840 Nauvoo
<ul>
<li>1841 Liverpool
<ul>
<li>1849 Liverpool
<ul>
<li>1854 Liverpool
<ul>
<li>1871 SaltLakeCity
<ul>
<li>1877 SaltLakeCity</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>1842 Nauvoo
<ul>
<li>1858 NewYork
<ul>
<li>1874 Iowa
<ul>
<li>2013 SaltLakeCity</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</body>
</html>
Let's break down the key adjustments and the thought process behind them:
- ul padding: padding-left: 20px; on the ul is crucial. This creates the initial indentation for each level and gives space for the vertical connecting lines.
- li positioning and padding: position: relative; on the li is still essential for positioning its pseudo-elements. padding-left: 20px; on the li provides the necessary spacing for the ::before (horizontal line) and the visual "indent" for the list item's text. margin-bottom: 5px; and line-height: 1.5; are for better visual spacing between list items.
- li::before (horizontal line): top: 0.75em; is critical for vertically aligning the horizontal line with the text of the list item; 0.75em often works well to place it roughly in the middle of a typical line of text. left: -5px; pulls the horizontal line slightly to the left, which allows it to overlap and connect perfectly with the vertical line coming down from the parent li (or the vertical line of a sibling li's ::after pseudo-element). width: 15px; defines the length of the horizontal line extending from the vertical connector to the list item's text; you can adjust this to control how far the "└───" part extends.
- li::after (vertical line): top: 0.75em; ensures, just like ::before, that the vertical line starts at the same vertical position as the horizontal line, and left: -5px; aligns it precisely with the horizontal line's starting point. height: calc(100% + 5px); is a significant change: 100% makes the line extend to the bottom of the current li, and adding + 5px (or a similar small value) makes it slightly overshoot the current li. This overshoot is vital for connecting neatly with the next sibling li's horizontal line; without it, you'd often see a small gap. li:last-child::after handles the last child in a branch, where we don't want the vertical line to extend indefinitely: setting its height to 0.75em (the top value) makes it just reach the horizontal line, effectively stopping the branch.
- ul:first-child > li:first-child::before/::after (optional cleanup): removes the leading horizontal and vertical lines for the very first item (1830 Palmyra in your example). This makes the tree start cleanly without extraneous lines.
- li > ul indentation: padding-left: 20px; on li > ul ensures that nested ul elements are further indented, creating the visual hierarchy. margin-top: 5px; adds a little breathing room between a parent item and its nested child list.
- li:not(:last-child) > ul::before (connecting vertical line for nested ULs): this is a crucial addition to correctly render the vertical line between a parent item and its child ul when the parent is not the last child itself. It creates a short vertical line (height: 10px;) that effectively bridges the gap from the parent's li::after down to the ul's content, maintaining the continuous vertical branch. top: -5px; adjusts its position to connect seamlessly.
- Line thickness & color: easily change the border-top and border-left values from 1px solid #999 to 2px dashed #007bff or whatever suits your design.
- Spacing: adjust padding-left on ul and li, margin-bottom on li, and margin-top on li > ul to control the horizontal and vertical density of your tree.
- Em vs. px: using em for top values and px for width and height gives you a good balance. em values scale with font size, which is good for vertical alignment with text, while px gives precise control over line lengths.
- Accessibility: while this is a visual representation, remember that the underlying HTML <ul> structure is inherently accessible and semantically correct for lists. This CSS merely enhances the presentation.
With these adjustments, you should achieve a much cleaner and more accurate visual representation of your timeline/tree structure using only HTML and CSS. Give it a try! You'll love the results.
This issue may persist due to leftover cached routes or client code still referencing the socket API.
Here's what to check:
1. Make sure there's no socket-related code left in your `_app.js` or components.
2. Remove any rewrites in `next.config.js` for `/api/socket`.
3. Delete `.next`, `node_modules`, and `package-lock.json`, then run:
Does anyone know the theme used in the screenshot in this post?
You can find various search parameters listed in YouTube API v2.0 – API Query Parameters, such as license, restriction, and paid_content, that can help filter videos that are restricted for such specific reasons. Also, if you can use YouTube API v3.0, there is one more option, videoSyndicated, that will restrict a search to only videos that can be played outside youtube.com.
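For the v3 route, a hedged sketch using google-api-python-client (the API key and query are placeholders; videoSyndicated is the filter mentioned above):

from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")
response = youtube.search().list(
    part="snippet",
    q="example query",
    type="video",            # videoSyndicated only applies to video searches
    videoSyndicated="true",  # only videos that can be played outside youtube.com
    maxResults=10,
).execute()

for item in response.get("items", []):
    print(item["snippet"]["title"])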
It needs to be fixed in the REST API, where the double quotes are incorrectly added.
The owner(s) of the project need to change it. Contact them about it so that they are aware and can fix it.
If they cannot or don't fix the issue in a timely manner, and you are not forced to use this REST API, try to find a more stable REST library instead.
It's a bad idea to try to cater for buggy data by modifying it with more code later on. It just adds unnecessary code and opens the door to unexpected behavior and accidental data modification.
You cannot enforce a range on content length. The way you limit the file size is by letting the client specify the desired length when requesting the presigned URL; if the desired length is unacceptable, you just don't give them the presigned URL and return an error instead. If it is acceptable, you create the presigned URL with "ContentLength": desired_length as a parameter.
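A minimal sketch of that flow with boto3 (bucket, key, and the size limit are illustrative):

import boto3

MAX_BYTES = 10 * 1024 * 1024  # whatever upper bound you consider acceptable

def make_upload_url(desired_length: int) -> str:
    if desired_length > MAX_BYTES:
        raise ValueError("requested upload is too large")  # refuse to presign
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params={
            "Bucket": "my-bucket",
            "Key": "uploads/example.bin",
            "ContentLength": desired_length,  # baked into the signed request
        },
        ExpiresIn=300,
    )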
What about this easy-to-use extension:
extension WidgetExtensions on Widget {
Page<dynamic> Function(BuildContext, GoRouterState) get transitionPage =>
(context, state) => CustomTransitionPage<dynamic>(
key: state.pageKey,
child: this,
transitionsBuilder: (context, animation, secondaryAnimation, child) {
const begin = Offset(1.0, 0.0);
const end = Offset.zero;
const curve = Curves.easeInOut;
final tween = Tween(begin: begin, end: end).chain(CurveTween(curve: curve));
return SlideTransition(position: animation.drive(tween), child: child);
}
);
}
To use it:
GoRoute(
path: '/',
pageBuilder: HomeScreen().transitionPage
),
This was user error. I had to set the LWIP_TCP_KEEPALIVE in the compile options like -DLWIP_TCP_KEEPALIVE. Once I did this then I did not get any errors when setting the options.
Many years have passed, but the issue is very likely related to how an HTTPS request is made using WinHTTP or, in general, HttpSendRequest.
After the Certificate exchange and Encrypted Handshake message, Windows will try to verify whether what has been received is valid.
To do this, it first checks the certificate in "Trusted Root Certification Authorities" and, in case of failure, will start to "retrieve Third-Party Root Certificate from Network".
So a call to DNS and an external address is performed. The problem is that in some environments the calls may be dropped, and so your HTTPS request gets stuck until a timeout.
The timeout should be 15 seconds, and then the request is unblocked.
This behavior is completely independent from the options you can set on HttpSendRequest about ignoring certificates, because that action is executed only later in time.
Knowing the request workflow, there can be multiple ways to fix it.
One is discussed in these articles:
Basically, set at HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot:
"EnableDisallowedCertAutoUpdate"=dword:00000000
"DisableRootAutoUpdate"=dword:00000001
Another way is to fix the certificate, maybe a self-signed one, and add it correctly to the Windows certificate store under "Trusted Root Certification Authorities" at the machine level.
The 15 seconds are in fact a default value that can be overridden from local group policy:
Bonus:
To better understand the certificate handling process, a specific log on Windows can be enabled by following the instructions here: https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc749296(v%3dws.10)
You've described a classic case of “same code, different behavior” due to a platform change, and that can be tricky to pin down. Let's break it down and explore possibilities:
Alpine Linux is lightweight and uses musl libc, while RHEL 9 uses glibc and has more extensive background services.
The .NET Core runtime may behave differently due to dependencies compiled against glibc. Memory allocation strategies and garbage collection patterns can vary.
RHEL ships more diagnostics and background processes that can subtly increase memory footprint.
Your use of HttpContext and Redirect() looks correct and typically shouldn't hold memory.
But if the redirect endpoint is being hit very frequently, and responses are cached or buffered internally, memory can creep.
A possible culprit is server-side buffering, e.g., response streams not being disposed properly, especially if middlewares interact with HttpContext.
Here are ways to isolate the issue:
- dotnet-counters: track memory, GC, threadpool, and HTTP counters in real time
- dotnet-gcdump: snapshot the GC heap and analyze retained objects
- dotnet-trace: capture traces to explore what's allocating
- /proc/<pid>/smaps: check actual memory usage per process, native vs managed
- K8s metrics (Prometheus + Grafana): trend analysis over time per pod
Async pitfalls: though your endpoint is marked async, you don't await anything. Consider dropping async unless needed; it may affect thread use.
Middleware or filters: look at upstream middlewares or filters that might buffer HttpContext.Response.
Logging: excessive logging on redirect calls can gradually consume memory if not batched/flushed.
Connection leaks: ensure any downstream calls (not shown here) aren't holding connections.
Try rolling back to Alpine with .NET 8.0 and compare memory diagnostics side by side with RHEL.
Consider building a minimal service that replicates your redirect pattern. Run identical traffic against both container bases and capture GC/memory snapshots.
Tune GC using environment variables, e.g., set DOTNET_GCHeapHardLimit or DOTNET_GCHeapLimitPercent.
This issue isn’t likely caused by one line of code—it’s the interaction of the runtime with the new OS environment. Want help analyzing a memory dump or building test scaffolds to narrow it down? I’d be glad to collaborate.
I needed to make this adaptation:
::ng-deep {
  .make-ag-grid-header-sticky {
    .ag-root-wrapper {
      display: unset;
    }
    .ag-root-wrapper-body {
      display: block;
    }
    .ag-root {
      display: unset;
    }
    .ag-header {
      top: 0;
      background-color: var(--ag-header-background-color);
      position: sticky !important;
      z-index: 100;
    }
  }
}
To install PyTorch and OpenCV using a fixed Conda channel order, you need to:
Set channel priorities explicitly (so Conda doesn't auto-select from multiple sources).
Use conda config to pin preferred channels.
Install packages while preserving that order.
The answer by itzmekhokan works, but just in case you need to disable the email later than when the woocommerce_email hook has fired:
remove_action( 'woocommerce_created_customer_notification', array( WC()->mailer(), 'customer_new_account' ), 10, 3 );
A comprehensive, cross-platform React Native wrapper for secure key-value storage using native security features of Android and iOS. It supports biometric authentication, hardware-backed encryption, and deep platform integrations such as Android StrongBox, EncryptedSharedPreferences, and iOS Secure Enclave via the Keychain.
Here is the library:
rn-secure-keystore
Same for me: Quasar is not adding the styles with the safe-area insets. I fixed it temporarily by just adding the iPhone inset utility class to the body, until a proper fix is out.
In the new version of moviepy the editor module was removed:
https://zulko.github.io/moviepy/getting_started/updating_to_v2.html
from moviepy import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
Try following all your subplot-filling with something like:
ax[0,0].legend_.remove()
handles, labels = ax[0,0].get_legend_handles_labels()
fig.legend(handles, labels, loc='upper left', bbox_to_anchor=(0.9, 1))
I finally found the root cause for ElixirLS not working for the existing project.
$ MIX_ENV=test iex -S mix
Erlang/OTP 27 [erts-15.2.3] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit:ns]
** (File.Error) could not read file "/home/jan/Workspace/project-b/config/test.exs": no such file or directory
I had renamed test for that project, but ElixirLS requires it. Adding config/test.exs back solved the issue for me.
Nested axis guides have been deprecated from ggh4x - not sure if there is another option left.
Including code from other files is a macro facility of the C language (the #include preprocessor directive) that allows both single and multiple inclusion. The included file does not necessarily have to have a specific file name extension such as ".h", unless we want to indicate a file with header content, etc.
You generally cannot send a body with GET; a GET body has no defined semantics in HTTP and most clients and servers do not support it. APIs usually accept data with POST, so I would try that.
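For example, a minimal sketch with the requests library (the URL and payload are placeholders):

import requests

# Send the data as a POST body instead of trying to attach a body to GET
resp = requests.post("https://api.example.com/items", json={"name": "widget"})
print(resp.status_code, resp.json())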
Because by using Property Let, Get and Set we can have the same name for the read and the write method. Usually, in a programming language, your function identifier must be unambiguous: you can't use the same name in one specific scope. Using Property, the "visible" name is the same and the "context" of the code "decides" which procedure is called. I believe that under the hood VBA gives the procedures different names.
Hi, I am wondering how I can add numbers without displaying them while in a foreach loop. Example:
@{ int intVariable = 0; }
@foreach(var item in @Model.Collection)
{
@(intVariable += item.MyNumberOfInteres) // -> how can I calculate this without it being rendered to the website?
//some Html code.......
}
In CSS, elements inside a flex container can indeed shrink in a specific order using the flex-shrink property in combination with the natural flow of the layout. However, there is no direct property like "shrink-order".
flex-shrink sets the relative rate at which an element shrinks when there's not enough space in the flex container. A higher value means the element shrinks faster or more compared to others.
Also, it will only work when the container has display: flex;
You can do this client side, on the condition that you can fetch the Icecast stream.
To make client-side playback and ICY metadata extraction work via fetch() in the browser, CORS (Cross-Origin Resource Sharing) requirements must be properly met by the radio stream server.
I wrote the @music-metadata/icy module for this occasion. Credits to Brad, who encouraged me to contribute to Stack Overflow, while others gave me much reason to run away.
const STREAM_URL = 'https://audio-edge-kef8b.ams.s.radiomast.io/ref-128k-mp3-stereo';
const trackDisplay = document.getElementById('track');
const audioElement = document.getElementById('player');
const mediaSource = new MediaSource();
audioElement.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', async () => {
const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
try {
// Dynamically import the ESM-only module
const { parseIcyResponse } = await import('https://cdn.jsdelivr.net/npm/@music-metadata/[email protected]/+esm');
const response = await fetch(STREAM_URL, {
headers: { 'Icy-MetaData': '1' }
});
const audioStream = parseIcyResponse(response, metadata => {
for (const [key, value] of metadata.entries()) {
console.log(`Rx ICY Metadata: ${key}: ${value}`);
}
const title = metadata.get('StreamTitle');
if (title) {
trackDisplay.textContent = title;
}
});
const reader = audioStream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value && !sourceBuffer.updating) {
sourceBuffer.appendBuffer(value);
} else {
await new Promise(resolve => {
sourceBuffer.addEventListener('updateend', resolve, { once: true });
});
sourceBuffer.appendBuffer(value);
}
}
mediaSource.endOfStream();
} catch (err) {
console.error('Error streaming audio:', err.message);
trackDisplay.textContent = 'Failed to load stream';
}
});
<html lang="en">
<head>
<title>ICY Stream Player</title>
<style>
body { font-family: sans-serif; text-align: center; margin-top: 2em; }
audio { width: 100%; max-width: 500px; margin-top: 1em; }
#track { font-weight: bold; margin-top: 1em; color: red; font-style: italic}
</style>
</head>
<body>
<h2>ICY Stream Player</h2>
<div>Now playing: <span id="track">...</span></div>
<audio id="player" controls autoplay></audio>
</body>
</html>
Assuming the text to be in cell A1 try:
=LET(t,TEXTBEFORE(TEXTAFTER(A1,"---",1),"---"),IF(LEFT(t,1)="6",t,"N/A"))
Tools such as Valgrind and Clang's MemorySanitizer (clang++ -fsanitize=memory) can check for reads of uninitialised memory (Valgrind warns you about them by default). Valgrind instruments the unmodified binary at run time, while MemorySanitizer instruments the code at compile time, so you'll probably want the latter. The GCC toolchain does not offer an equivalent of MemorySanitizer as far as I know.
I saw that too, and I don't know why there is no reference for it.
And in the updated version on w3.org it is still there: Link
An array is considered a variable length array when its size is set using a variable, and that size is only known at runtime, not at compile time.
Write the output in SOP form, i.e., a product term wherever a 1 is present:
A1 = y3'y2y1'y0' + y3y2'y1'y0' (in SOP, a 0 is treated as a complemented/barred variable and a 1 is taken as it is)
A2 = y3'y2'y1y0' + y3y2'y1'y0'
Then minimize the output using a K-map, and implement it in hardware...
I did something similar in the past. When you have to manage a CAN device that has a custom interface via SDO (some special SDOs, for example), it's better to format your message directly in PLC code. This can be done using the CAN interface object, which gives you a process image where you can place your custom messages.
See the Beckhoff documentation: docs
I think I found it.
It wasn't super clear to me (or to my searches; it might actually be worth validating this and issuing a warning), but:
management.metrics.distribution.slo.http.server.requests=100ms,500ms,1s,3s
and
management.metrics.distribution.percentiles-histogram.http.server.requests=true
are kinda mutually exclusive. See io.micrometer.core.instrument.distribution.DistributionStatisticConfig#getHistogramBuckets
Specifying 'slo buckets' will actually create them as requested, BUT these will be buried under lots of default ones, which are created upon enabling percentiles-histogram. There is a list of 275 default buckets pre-created, and a subset of them is selected based on the expected minimum and maximum duration. By default (io.micrometer.core.instrument.AbstractTimerBuilder#AbstractTimerBuilder) these are 1 millisecond and 30 s respectively, which you can override using org.springframework.boot.actuate.autoconfigure.metrics.MetricsProperties.Distribution#minimumExpectedValue.
I don't understand this sufficiently, and this precision might be needed for some use case. But if you just need to know whether something is slower than some threshold (and mostly, if something is slower than 1 s, it's bad regardless of how much), it might be safer to just specify SLO thresholds.
If I'm still missing something or am wrong altogether, please let me know!
I faced the same issue, where the first connection attempt or a connection after a longer idle period would immediately time out. From my observations, the first connection attempt always takes significantly more time. Increasing the timeout and pool size and enabling pooling in the connection string helped in my case. I added these parameters: Timeout=60;Command Timeout=60;Pooling=true;MinPoolSize=1;MaxPoolSize=200
The process of migrating TRX tokens is described in the official documentation on GitHub:
1. Access Your Wallet: Check your TRX balance in the list of Ethereum tokens.
2. Send Tokens: Send your old ERC20 TRX tokens to the migration smart contract.
3. Confirmation: After the transaction is confirmed and broadcasted to the blockchain, the migration process will start, and your wallet will be credited with new TRX tokens. Depending on network usage, it may take 5–30 minutes to process the migration.
1. Create HLS chunks where the first chunk is 2 seconds, the second chunk is 3 seconds, the third is 5 seconds, and all subsequent chunks are 5 seconds each.
2. Cache the videos using fs, and delete them once they have been watched.
I am coming at this from a math/physics angle rather than programming, so forgive me if I am focusing on the wrong thing, but I need some clarification on what exactly you are trying to transform here. Are you trying to preserve the ratio of the scale length to the point-scale distance?
By point-scale distance, I mean the length of the perpendicular line/arc from the point to the closest point on the scale. My understanding is that you are trying to satisfy:
It's known from differential geometry (see Theorema Egregium) that you cannot project from a sphere to a plane while preserving both shapes (or angles) and areas, which I suspect is very likely to be the root cause of your problems. I am not really sure if what you are trying to achieve can be done by only preserving one or the other or if you're trying to do something impossible, but it's probably worthwhile to actually carry out the math in 3D rather than a 2D projection. The (two ends of the) scale and the point together form a triangle on the Earth's surface, so you're really trying to transform (rotate, scale, translate) a spherical triangle, which I am not sure would work. Spherical trigonometry might help you here.
The transformations you're composing are:
translation (cyan -> magenta)
scaling (ends up in green point)
rotation (takes the point in question from green point to pink point)
Using regular 2D/3D Cartesian coordinates, these operations do not commute. Chiefly, the reason for this is that translation is not a linear transformation in Cartesian coordinates (tiny proof sketch: the zero vector does not map back to zero under a translation). In other words, you'll get a different result if you change the order. In general, you'll apparently need to use homogeneous coordinates, under which translation is linear, to avoid this problem; however, you might end up working with points/lines/areas off the surface of the Earth if you directly convert Cartesian coordinates to homogeneous coordinates in this case. I cannot guess offhand whether the approximation would work better with homogeneous coordinates or not.
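To make the non-commutativity point concrete, here is a small numpy sketch in homogeneous coordinates (a purely illustrative 2D example, not your actual geographic transform):

import numpy as np

def translation(tx, ty):
    # Translation as a 3x3 matrix acting on homogeneous 2D points
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

p = np.array([2.0, 1.0, 1.0])  # the point (2, 1) in homogeneous form
print(rotation(np.pi / 2) @ translation(3, 0) @ p)  # translate, then rotate -> [-1.  5.  1.]
print(translation(3, 0) @ rotation(np.pi / 2) @ p)  # rotate, then translate -> [ 2.  2.  1.]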
This is caused by an open issue in the wix/react-native-ui-lib package, which you have installed: https://github.com/wix/react-native-ui-lib/issues/3594
An immediate solution, if possible for your use case, is to uninstall react-native-ui-lib. This is the solution suggested in these react-native issue threads:
Otherwise you can try to patch react-native-ui-lib using patch-package, and there's a little in the above threads about how to do it, or wait for react-native-ui-lib to push an update that will fix the issue.
I'll add this here, since it's one of the top results from Google for the question:
Yes, the keyword is $Prompt$ in any of the parameter fields of the Run Configs, like VM Option, program options and so on. With $Prompt:Type your birthday$ one can even add a label to the prompt.
I didn't find a solution for a multiple choice / dropdown solution though.
It doesn't seem like opt can be piloted to produce the output you would like, so most likely you will want to post-process its output -- the heavier alternative would be to write a C++ LLVM module.
There may be several ways to go about this. Graphviz files can reference one another (e.g. this is how OTAWA handles it), so you can have the callgraph file referencing each of the CFG files.
If this isn't satisfactory and you must have a single visual, you will want to merge the files indeed. You will need to ensure some nodes have different names, and some others' names at the contrary should match. This StackOverflow question may be a useful read: Merging graphs in Graphviz
It seems that Panda3D requires the .vert and .frag file extensions, and won't accept a .glsl extension.
It didn't give me any output to warn me of this or ask me to do this, but when I duplicated my files and changed the extensions, it no longer spat out this version error, and I can now apply the shader to the scene and camera!
Mine was in a different region. Check whether there are volumes in a different region.
Can we configure grafana.ini for this?
A solution from Qt forum user raven-worx, using a stylesheet and some code modifications to handle it, is described on the Qt forum.
As I did not find the copyright policy of the Qt forum, I do not copy it here.
It might've changed again. Mine's in /var/cache/dhcp.leases.
using var stream = await FileSystem.OpenAppPackageFileAsync(rawResourcesfileName);
using var fileStream = File.Create(Path.Combine(FileSystem.AppDataDirectory, rawResourcesfileName));
await stream.CopyToAsync(fileStream);
From Storybook 7+, the backgrounds addon was refactored. Now you must:
1. Define backgrounds in preview.tsx like this:
// preview.tsx
const preview: Preview = {
parameters: {
backgrounds: {
options: {
dark: { name: 'Dark', value: '#000000' },
light: { name: 'Light', value: '#FFFFFF' },
},
},
},
};
2. Configure your stories like this:
export const OnDark: Story = {
globals: {
backgrounds: { value: 'dark' },
},
};
For more details: https://storybook.js.org/docs/essentials/backgrounds
Please create a backup of the virtual machine and disks before applying the changes.
Change the value of SupportedFeatures to b (Hexadecimal) or 11 (Decimal) for the following three drivers, then restart the system:
HKLM\SYSTEM\CurrentControlSet\Services\frxdrvvt\SupportedFeatures
HKLM\SYSTEM\CurrentControlSet\Services\frxdrv\SupportedFeatures
HKLM\SYSTEM\CurrentControlSet\Services\frxccd\SupportedFeatures
As a follow-up note to this topic: If you're showing a dialog form with ShowDialog and that form is itself set to TopMost = True, I additionally suggest adding an event handler for the dialog form's FormClosing() event and set Me.TopMost = False at that time. This helps prevent a secondary bug where the calling form that spawned the dialog is itself kicked back behind windows that it was originally on top of when the child dialog form closes.