This may be late, but you might want to try this, since the accepted answer didn't work for me.
Change your where input like this:
AND: [
  yourWhereInput,
  {
    OR: [
      { email: null },
      { email: { isSet: false } }
    ]
  }
]
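If it helps, here is a hedged sketch of how that combined filter might be assembled in application code. The helper name and the `active: true` sample filter are illustrative, not part of Prisma's API; `isSet` is specific to the MongoDB connector.

```javascript
// Illustrative helper: AND the caller's existing filter with an
// "email is null or missing" condition (MongoDB connector's isSet).
function withNullOrUnsetEmail(yourWhereInput) {
  return {
    AND: [
      yourWhereInput,
      {
        OR: [
          { email: null },
          { email: { isSet: false } },
        ],
      },
    ],
  };
}

const where = withNullOrUnsetEmail({ active: true });
console.log(JSON.stringify(where));
```

You would then pass `where` to e.g. `prisma.user.findMany({ where })`.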
@VLAZ you are totally right. My mistake was that I didn't suspect the consequences of (undefined === undefined) being true, which is dangerous. I have to review my old code, where this may have happened silently.
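A tiny illustration of the pitfall (the object and variable names are made up): a property that was never set compares equal to a variable that was never supplied.

```javascript
const user = { name: "Ada" };   // note: no "email" property at all
const lookup = undefined;       // e.g. a parameter that was never passed

// Both sides are undefined, so this "matches" even though nothing
// was ever actually set on either side:
const matches = (user.email === lookup);
console.log(matches); // true
```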
I'm sure we have a lot on this topic on Stack Overflow, but I like this site for various Spring questions and guides: https://www.baeldung.com/exception-handling-for-rest-with-spring
In your pom.xml, it looks like you are using
<spring-ai.version>1.0.0-SNAPSHOT</spring-ai.version>
GPT-5 support was added in 1.1.0-M1 (or later). The name gpt-5 might not be recognized by Spring AI in your version, causing a fallback to the default gpt-4o-mini. Consider upgrading your version.
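If you are using the milestone repositories, bumping the property in pom.xml would look something like this (which exact milestone to pick is an assumption; check the Spring AI release notes):

```xml
<properties>
    <!-- was: 1.0.0-SNAPSHOT; per the above, gpt-5 support needs 1.1.0-M1 or later -->
    <spring-ai.version>1.1.0-M1</spring-ai.version>
</properties>
```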
Yes, you can connect on-premises storage to cloud servers, but how depends on the type of storage.
A NAS (Network Attached Storage) can share its files with cloud servers over a secure VPN or a dedicated network connection. Cloud providers also offer their own managed file services that work like a NAS.
A traditional SAN (Storage Area Network) cannot plug directly into cloud servers. Instead, you have to use the cloud's own block storage services, which serve the same purpose. In either case, always consider network latency and data transfer costs.
I think I should have clarified what I have right now: I can read binary data from files on demand, put it in CPU memory, upload it to the GPU, and discard the data afterwards to free CPU memory. I was only wondering whether I could do that somehow without needing extra files next to the compiled binary (i.e. all the data required is neatly inside one exe, preferably without consuming extra RAM), that's all.
Adding

esbuild: {
  jsx: "automatic",
  jsxDev: false,
},

in vite.config.ts fixed it for me.
https://stackoverflow.com/a/31635761/4380510
https://stackoverflow.com/a/70431036/4380510
Do this and it will work on all versions.
The key is to create the title as an animated artist:
title = ax.set_title("First title", animated = True)
Then when updating the plot:
title.set_text("Second title")
ax.draw_artist(title)
canvas.blit(ax.bbox)
canvas.flush_events()
There was great research on that in the USSR at around the same time, 1975, when the schema was created.
An attached document is inside.
«Прояв» means "display" ("manifestation").
It is a bit involved.
This answer is not deleted, so it is not even a red flag.
mshajkarami's answer solved my problem. An attribute in my custom view's attrs duplicated one of the existing attributes in the system's values, and the problem was fixed after I renamed it.
Have you found a solution for this?
@chux No, you can't change the type of the question, not even as a moderator.
Thread on Meta: Can an Advice post be changed into a QA post?
Thank you all! I solved it with this:

=IMPORTRANGE("link to sheet1";"A1:A"&CONTA.VALORI(C:C))

(CONTA.VALORI is COUNTA in the Italian locale.)
Thank you for the detailed explanation, that makes perfect sense.
Since I don’t currently have an admin account (and therefore can’t create an App Registration or grant Directory.Read.All consent), I understand that the client secret flow won’t work in my case.
I’ll try switching to the delegated flow with an interactive browser login using my normal user account, so the app can act under my own permissions.
Just to confirm: with that approach, I’ll only be able to read the groups and users that my account has access to, right?
Once we get an admin account later, I can switch back to the app-only (client credentials) approach with full directory scope.
Thanks again for pointing out /groups?$expand=transitiveMembers, that’s very helpful.
Also, just asking: is there any other workaround to read the groups from Azure AD / Entra using C#?
I was on Gradle JDK 17.0.9; I changed it to JDK 21.0.1, and after that it worked again.
(Chat gpt help: https://chatgpt.com/share/690b1ae2-263c-8006-b2fd-7a6e2997c816)
Is that viewer open source? I'd be happy to take a look at it.
Use this:
Attachment::fromData(fn() => $pdf->output(), 'doc.pdf')->withMime('application/pdf');
Can you post this as a question? This new advice feature is counterproductive: even fewer questions will be asked, defeating the voting / reputation system.
I have had a similar issue with other activities. Do you have existing projects that use this activity which you can open, to check whether this is an issue with the packages in the project or whether your entire Studio install might be corrupted?
--color-red-500 is not defined anywhere in your :root variables.
"the naturally expected behavior always seems to be what happens"
For some values of "always" and/or "naturally expected behavior". Exhibit A. Running on 32-bit platforms much lately?
I should have been clearer about what I want to check for, we are using trivy and audit in our pipelines but it's the server stuff that I need to check manually (old school infra). A subscription service posting a report to Slack every day or week would be perfect, so newreleases.io looks promising!
It would have been so nice if there was a standard way to publish release info, EOL's etc. Everyone seems to do it differently.
By the way, if I were a hacker, roave/security-advisories would be the first package I would try to publish backdoors into. 🙂
composer require intervention/image:^2
Run this to install the pure-PHP package without the Laravel wrapper, which I think is unnecessary.
This seems to be a macOS 26.0 specific issue and it is fixed in macOS 26.1.
std::variant already "encodes" the type it currently contains.
If you tell me why you want .pyd files (speed? IP protection?), I can help with the build setup, because the best path differs depending on your actual purpose.
Assuming that these are in different spreadsheets, it would be better to import the entire range, A:C, and then use QUERY or some other method on that data to get what you want. People have enough issues with the connection of one IMPORTRANGE, which is what you would need to use if you wanted to count the values in column C to determine the range size to request for column A.
await in Python is simply a point where the context can switch. It is used in asynchronous functions to pause execution until the awaited task completes, allowing other tasks to run in the meantime.
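A minimal self-contained sketch of that context switching (the coroutine names and delays are made up):

```python
import asyncio

async def fetch(name, delay):
    # `await` suspends this coroutine here, letting the event loop
    # run the other task while the sleep is pending.
    await asyncio.sleep(delay)
    return name

async def main():
    # Both coroutines run concurrently; the total time is roughly the
    # longest delay, not the sum of both.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.05))

results = asyncio.run(main())
print(results)  # ['a', 'b'] (gather preserves argument order)
```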
Yes, Windows 11 prioritizes internet connectivity, so automatically switching from your non-internet Hotspot A to the "auto-connect" internet-enabled Hotspot B is expected behavior. This is often due to the Network Connectivity Status Indicator (NCSI) detecting no internet on A.
To diagnose, check the Event Viewer under Microsoft-Windows-WLAN-AutoConfig/Operational for connection/disconnection events.
To prevent the switch, you can uncheck "Connect automatically" for Hotspot B in the Windows Wi-Fi settings, or try setting Hotspot A as a "Preferred" connection using the netsh wlan command.
@Allan Walker, thanks for your answer. I have tried all of these except the billing one.
Could it take 9 days, like my status shows?
First kill the server, then do this:

adb kill-server
adb start-server
adb devices
adb tcpip 5555
adb shell ip addr show wlan0

It will give you an IP address like 192.XXX.XX.XXX.

adb connect 192.168.18.140:5555 (change it to yours)

And boom!
I faced the same 16 KB page size issue; it turned out my emulator image and Gradle dependency versions weren't in sync. After cleaning the project and re-syncing, it worked.
This is a common issue with the Google Play Integrity API setup — sometimes it gets stuck at “Integration started” for days, especially when linking new apps or Firebase projects.
Here are a few things to check and try:

1. Verify the Google Cloud project link
Make sure your app is correctly linked to the same Google Cloud project where the Play Integrity API is enabled.
Go to Play Console → Setup → App integrity → Play Integrity API and confirm the correct project is selected.

2. Enable billing on the Cloud project
Even though the Integrity API has a free tier, Google requires billing to be enabled to activate the API in some regions or accounts.
Go to Google Cloud Console → Billing → Link billing account.

3. Recheck API enablement
Visit Google Cloud Console → APIs & Services → Enabled APIs & Services.
Confirm Play Integrity API is listed as Enabled.

4. Firebase App Check timing
After setup, the activation process can take anywhere from a few hours to several days.
If it's stuck for more than a week, try unlinking and relinking the API in Play Console.

5. Contact Google support via the Play Console form
Instead of general tickets, use Play Console → Help → Contact support. It routes directly to the Integrity API team.
Mention your App ID, Project Number, and the "Integration started" status in the message.
The error Ad failed to load: 3 means no ad fill, not a coding issue. The adaptive test ad unit /21775744923/example/adaptive-banner often doesn’t serve ads. To test adaptive banners, use Google’s universal AdMob test ID (ca-app-pub-3940256099942544/9214589741) or create your own ad unit in Google Ad Manager with active line items. Also ensure you calculate adaptive size dynamically using AdSize.getCurrentOrientationAnchoredAdaptiveBannerAdSize() and set your test device ID before loading the ad.
This error usually occurs due to a name conflict or an incorrect package: the unofficial mysql-connector package shadows the official mysql-connector-python (which ships the C extension that provides CMySQLConnection). Uninstall the wrong package, install mysql-connector-python, and then import it again; after that, the error AttributeError: module 'mysql.connector' has no attribute 'CMySQLConnection' should be fixed.
Neither Int nor Double is a type. What task are you solving?
My cPanel doesn't even have this file: /etc/dovecot/dovecot-sql.conf.ext
And I am getting these errors.
Now twice in one month. Last time I had to manually recreate the whole server, moving email accounts one by one to the new server... Why!?
server2 dovecot[9970]: auth-worker(10118): Error: conn unix:auth-worker (pid=9978,uid=97): auth-worker<2>: dict([email protected]): Invalid password in passdb: crypt() failed: Invalid argument
As of Ktor Version 3.2.0, Ktor has a dependency injection plugin. You can see the docs here: https://ktor.io/docs/server-dependency-injection.html?topic=rx
This worked for me:
val iFrameHtmlData = """<iframe width="100%" height="100%" src="$iframeUrl" frameborder="0" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>""".trimIndent()
val baseUrl = "https://$packageName"
loadDataWithBaseURL(baseUrl, iFrameHtmlData, "text/html", "utf-8", null)
You may find the documentation helpful:
https://developers.google.com/youtube/terms/required-minimum-functionality#embedded-player-api-client-identity
It is code, and you can put it all together into one program. I have done it many times. My last game is my best so far. When I get a MacBook again, I can use my free developer account to make real first-class games.
Are you looking for a discussion here, or did you mean to create this as a Q&A and get an actual answer?
You can use this tool to edit your fonts.
@chux You are wrong.:) s[-n] is not equivalent to s[UINT_MAX].
Try modifying your query like this (note the missing comma after email in your GROUP BY, and the table alias must match in the join conditions):

WITH duplicate_data AS (
    SELECT email, dob, country
    FROM customers
    GROUP BY email, dob, country
    HAVING COUNT(*) > 1
)
SELECT cus.*
FROM customers cus
JOIN duplicate_data d
  ON cus.email = d.email
 AND cus.dob = d.dob
 AND cus.country = d.country;
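Here is a small self-contained SQLite demo of the duplicate-finding approach (the table contents are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (email TEXT, dob TEXT, country TEXT)")
con.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [
        ("a@x.com", "1990-01-01", "US"),
        ("a@x.com", "1990-01-01", "US"),  # duplicate of the row above
        ("b@x.com", "1985-05-05", "DE"),  # unique, should not be returned
    ],
)

rows = con.execute("""
    WITH duplicate_data AS (
        SELECT email, dob, country
        FROM customers
        GROUP BY email, dob, country
        HAVING COUNT(*) > 1
    )
    SELECT cus.*
    FROM customers cus
    JOIN duplicate_data d
      ON cus.email = d.email AND cus.dob = d.dob AND cus.country = d.country
""").fetchall()

print(rows)  # both copies of the duplicated a@x.com row
```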
"As in how urgently do I need to go thru my code base looking for these instances?" --> Step 1, save time and enable many (if not just about all) compiler warnings.
No, your Masking layer does not mask your loss, because the layers.LSTM(return_sequences=False) layer breaks mask propagation. The built-in loss function will incorrectly try to reconstruct the [0,0,0] padding.
One solution is to use a custom loss function and a custom metric that manually ignore padded steps.
Q1: Is there any way to visualize that this is exactly how it is working?
You can verify the mask is lost by checking model.layers[3].output_mask (the output of your LSTM(return_sequences=False) layer); it will be None.
Adjacent Q2: Is there a simple way to calculate the loss later on with the same masking...
Yes, the custom loss function below is the simple way. It works automatically for both model.fit() and model.evaluate(), which correctly calculate the masked loss at all times.
import tensorflow.keras.backend as K
import tensorflow as tf

def masked_cosine_distance(y_true, y_pred):
    # Timesteps whose target vector is (near) all-zero are treated as padding.
    mask = K.cast(K.greater(K.sum(K.abs(y_true), axis=-1), 1e-9), K.floatx())
    y_true_normalized = K.l2_normalize(y_true, axis=-1)
    y_pred_normalized = K.l2_normalize(y_pred, axis=-1)
    similarity = K.sum(y_true_normalized * y_pred_normalized, axis=-1)
    loss_per_timestep = 1 - similarity
    masked_loss = loss_per_timestep * mask
    sum_of_losses = K.sum(masked_loss)
    num_unmasked_steps = K.sum(mask)
    return sum_of_losses / (num_unmasked_steps + K.epsilon())

def masked_cosine_similarity(y_true, y_pred):
    mask = K.cast(K.greater(K.sum(K.abs(y_true), axis=-1), 1e-9), K.floatx())
    y_true_normalized = K.l2_normalize(y_true, axis=-1)
    y_pred_normalized = K.l2_normalize(y_pred, axis=-1)
    similarity = K.sum(y_true_normalized * y_pred_normalized, axis=-1)
    masked_similarity = similarity * mask
    sum_of_similarity = K.sum(masked_similarity)
    num_unmasked_steps = K.sum(mask)
    return sum_of_similarity / (num_unmasked_steps + K.epsilon())

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(window_size, line_feature_size)),
    tf.keras.layers.Masking(mask_value=0.0),  # still good for LSTM efficiency
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=False)),
    tf.keras.layers.RepeatVector(window_size),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(line_feature_size))
])

model.compile(optimizer="adam",
              loss=masked_cosine_distance,
              metrics=[masked_cosine_similarity])
import requests

url = "http://example.com"
try:
    response = requests.get(url)
    if response.status_code == 200:
        print(f"{url} is reachable")
    else:
        print(f"{url} returned status code: {response.status_code}")
except requests.ConnectionError:
    print(f"Failed to reach {url}")
I've just had the same issue trawling through documentation, but got there!
Refer to https://developer.android.com/google/play/billing/getting-ready#configure-rtdn for Google's guide on this.
It will guide you through how to:
Create the pub/sub
Configure play store to publish events to the topic
Confirm the subscription is getting the events
You can then create a subscription that suits your needs, e.g. one that pushes the messages to a Lambda or provides a subscription for your backend server to listen to.
@ikegami, any thoughts if possible and how to move this question from a general advice / other to trouble shooting / debugging? Or it is too late? Perhaps I should ask a moderator.
I found my fault. What a shame! After the filter method, I had a finally block with a save and dispose, but the problem was that I had two instances of the same book. So, I saved and disposed of one with all the changes, and then I overwrote it when I saved the second instance.
Sorry, guys. But now I know that the instance holding the file in memory is not just a link to the file. I hope someone else learns from my mistake.
Was able to achieve that by giving my user account the 'Storage Blob Data Contributor' role.
Software interrupts come into the picture when a program needs the hardware. Suppose a C program wants to print a value on the screen: the CPU stops its current execution and services that request first, which leads to a system call. A system call is a special kind of function that lives in OS code; user code cannot reach it except through the system call mechanism.

The compiler converts the C code to assembly, so whenever you write printf("hello"), the assembly ends up as something like 'mov rax, 50; syscall', where rax is a register holding the system call number (there are 400+ system calls in the OS). When the CPU sees the syscall instruction, it reads the number from rax and looks up the corresponding entry in the system call table (analogous to the interrupt vector table). In this example, the entry for 50 is the function pointer sys_write. That function also receives the argument "hello" in order to print it on the screen, and the CPU executes its code in kernel mode (the OS); this is the interrupt service routine. Once the routine reached via the function pointer finishes, the CPU returns to user mode.

For fun(), fun is a pointer to that function.

| 50 | sys_write |
| 51 | fork |
It is not throwing an exception because you are not accessing values from the vector; you are just feeding the value to i.

for (int i = 0; i <= v.size(); ++i) -- you are basically saying "repeat this loop 6 times" (it does not access vector v at all).

for (int i = 0; i <= v.size(); ++i) {
    cout << v[i] << "\n";   // this is the problem: on the last iteration it accesses outside the range of vector v (undefined behavior; operator[] does not throw)
    cout << "Success\n";    // this works because it has nothing to do with vector v
}
Understanding your issue

Problem analysis
Your current issue stems from a misunderstanding of CSS positioning:
position: fixed positions elements relative to the viewport (browser window), not your background image
background-size: cover scales your background image dynamically based on viewport size
Your elements are positioned with fixed pixel values, but the background scales proportionally

Best practices
User experience: the elements should remain clickable and interactive at all screen sizes
Performance: use optimized images (WebP format recommended)
Accessibility: add proper alt text and ARIA labels
Cross-browser compatibility: test across Chrome, Firefox, Safari, and Edge

Implementation tips
Calculate your background image's aspect ratio: width ÷ height
Position elements using percentages based on their position in the original design
Test on multiple devices and screen sizes
Consider adding minimum/maximum sizes for very small/large screens

This approach keeps your interactive elements positioned relative to your background image across all screen sizes and devices.
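A rough CSS sketch of the percentage-based positioning idea (the file name, class names, hotspot coordinates, and the 1920×1080 original design size are all assumptions):

```css
/* A container that keeps the background image's aspect ratio, with a
   hotspot that sat at (480px, 540px) in the original 1920x1080 design. */
.scene {
  position: relative;
  width: 100vw;
  aspect-ratio: 1920 / 1080;                 /* matches the image */
  background: url("bg.jpg") center / cover no-repeat;
}
.hotspot {
  position: absolute;                        /* relative to .scene, not the viewport */
  left: 25%;                                 /* 480 / 1920 */
  top: 50%;                                  /* 540 / 1080 */
  transform: translate(-50%, -50%);          /* center the element on the point */
}
```

Because the hotspot is positioned as a percentage of a container that scales with the same aspect ratio as the image, it stays glued to the same spot in the picture at any viewport size.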
This is usually because Grafana cannot uniquely identify the data points due to duplicate labels. Carefully check your raw data for duplicate label sets that make it impossible for Grafana to distinguish between different data points. A web search for the exact error may give you more detailed suggestions.
As mentioned in the comments by Barmar, when the duration contains spaces, everything after the first space spills over into the reason, since time is a positional argument.
+t @user 3d2h1s30m test
A simple fix could be the way that you run the command itself:

As long as the time duration is without spaces, your regex works perfectly!
+t @Nestling Bot#9410 "3d 2h 1s 30m" test
Like Barmar mentioned, quotes work just fine.
Another way to do this is to have one keyword positional argument that parses until the last segment of hour/minute/second/day:
@bot.command(
    name="timeout",
    aliases=["mute", "t"],
    help=""
)
@has_permissions(moderate_members=True)
async def timeout(ctx, user: discord.Member, *, reasonAndTime="No reason provided"):
    print(reasonAndTime)
    # this is an odd way to go about things, but hey, it should work, right?
    time = ''
    reason = ''
    for param in reasonAndTime.split(' '):
        try:
            if param[-1] == 's':    # seconds
                time += param
            elif param[-1] == 'm':  # minutes
                time += param
            elif param[-1] == 'h':  # hours
                time += param
            elif param[-1] == 'd':  # days
                time += param
            else:
                reason += f' {param}'
        except IndexError:  # empty segment (e.g. double spaces), see https://stackoverflow.com/a/3501408/15982771
            reason += f' {param}'
Please ignore the advice you are being given by @Shivraj Gudas, as it's just more of the same generic cut'n'paste nonsense that has been swirling around on this topic for more than six months.
Go to the Jira section of the Atlassian public Community Forum and do a general search on the topic of the new JQL search endpoints. You'll find the topic has been discussed many, many, MANY times there over the past six months, and there are detailed descriptions and links to extensive documentation on how to use the new endpoints and how their pagination mechanism differs from that of their predecessors, which have now all been deprecated.
@EricPostpischil Is not "... the value stored in *quo: it is congruent modulo 2^n to the quotient, ..." more like "... the value stored in *quo: its magnitude is congruent modulo 2^n to the quotient ...", or are negative values -x considered just as congruent as +x?
One simple trick is Ctrl with the Left/Right arrow keys: it skips by words on the command line. It's quite fast.
I have the same issue too, and still could not fix it.
This may be caused by the following reasons:
Accidentally changed htop display settings
Terminal color scheme or theme issues
Font or display scaling settings in the terminal
Solutions to try:
Press F2 in htop to access settings menu and check display options
Exit htop with F10 and restart it
Check your terminal's color configuration
If using remote connections (like putty or VSCode), you might need to adjust terminal color settings
I’m wondering if there has been any update or resolution on this issue. We are facing the same memory increase in both heap and metaspace when using StatelessSession.
(Couldn't comment on your answer despite having enough reputation, so leaving another answer instead)
Your question about \gdef versus \xdef is addressed by https://tex.stackexchange.com/a/353139, which explains that:
With \def<cs>{<replacement text>} you define <cs> to look for its arguments (if any) and to be replaced by <replacement text>, which is not interpreted in any way at definition time. With \edef the replacement text is fully expanded at definition time.
That same distinction applies to \gdef and \xdef, respectively. They function just like \def and \edef except that their definitions are global (persisting after the end of the block in which they were executed).
So, yes, switching from \xdef to \gdef just prevents the immediate expansion of the replacement text.
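A minimal illustration of the difference (the macro names are made up):

```latex
\def\x{old}
{\xdef\snapshot{\x}}  % \snapshot's body is the fully expanded text "old"
{\gdef\deferred{\x}}  % \deferred's body is literally "\x", expanded on use
\def\x{new}
% Using them now: \snapshot yields "old", \deferred yields "new".
% Both definitions are global, so they survive the groups above.
```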
TL;DR:
- .notify_one() / .notify_all() on a destructed std::atomic is UB.
- std::atomic_ref does not help, because the object it points to must outlive the std::atomic_ref object.
- Potential workarounds: std::binary_semaphore, or moving the .wait() and .notify_xxx() to a separate atomic variable that has a longer lifetime.

Before I get to my main question, I'm assuming that
std::mutex::unlock() stops touching this as soon as it puts the mutex in a state where another thread might lock it [...]
That's a valid assumption - the standard even mandates that this type of usage must be supported:
35.5.3.2.1 Class mutex [thread.mutex.class]
(2) [ Note: After a thread A has called unlock(), releasing a mutex, it is possible for another thread B to lock the same mutex, observe that it is no longer in use, unlock it, and destroy it, before thread A appears to have returned from its unlock call. Implementations are required to handle such scenarios correctly, as long as thread A doesn't access the mutex after the unlock call returns. These cases typically occur when a reference-counted object contains a mutex that is used to protect the reference count. — end note ]
Is it possible to build a mutex from C++20 std::atomic without spinning to avoid use-after-free when deleting?
Yes, it is possible.
But not as straight-forward as one might expect, due to the lifetime issues you already mentioned.
See 3. Solutions for a list of potential ways you could work around this limitiation.
Is there any hope that a future version of C++ might provide a safe way to notify on destroyed atomics?
There is a paper addressing this specific issue, that could have been part of C++26:
P2616 - Making std::atomic notification/wait operations usable in more situations
Revisions:
| Paper | Date | Target C++ Version |
|---|---|---|
| P2616R4 | 2023-02-15 | C++26 |
| P2616R3 | 2022-11-29 | C++26 |
| P2616R2 | 2022-11-16 | C++26 |
| P2616R1 | 2022-11-09 | C++26 |
| P2616R0 | 2022-07-05 | C++26 |
The current state of this paper can be seen at cplusplus/papers Issue #1279 on github
(currently the repo is private due to the current committee meeting - archive.org version)
It is stuck with needs-revision since May 23, 2023 - so it's unclear if (and when) it'll ever become part of the standard.
So, my next question is, can I do anything about this?
The best I've come up with is to spin during destruction, but it's offensive to have to spin when we have futexes that are specifically designed to avoid spinning.
There's unfortunately no one-size-fits-all solution for this.
std::binary_semaphore might be an option: its acquire() and release() member functions do the wait / notify atomically, so there should be no lifetime problems (see Solution 1: std::counting_semaphore / std::binary_semaphore).

Am I missing some other trick I could use to do this without spinning - maybe with std::atomic_ref?
std::atomic_ref doesn't help in this case either unfortunately.
One of its rules is that the object it points to must outlive the atomic_ref object.
(which would not be the case if you destruct the object)
31.7 Class template atomic_ref [atomics.ref.generic]
(3) The lifetime ([basic.life]) of an object referenced by *ptr shall exceed the lifetime of all atomic_refs that reference the object. While any atomic_ref instances exist that reference the *ptr object, all accesses to that object shall exclusively occur through those atomic_ref instances. [...]
Is there any benefit to the language making this UB, or is this basically a flaw in the language spec that atomics don't expose the full expressiveness of the underlying futexes on which they are implemented?
TODO
TODO
Solution 1: std::counting_semaphore / std::binary_semaphore

It is relatively straightforward to wrap std::binary_semaphore into a custom mutex implementation that supports unlocking the mutex from a different thread than the one that locked it.
e.g.: godbolt
class my_mutex {
private:
    std::binary_semaphore sem;
public:
    my_mutex() : sem(1) {}

    void lock() {
        sem.acquire();
    }
    void unlock() {
        sem.release();
    }
    bool try_lock() {
        return sem.try_acquire();
    }

    template<class Rep, class Period>
    bool try_lock_for(std::chrono::duration<Rep, Period> const& timeout) {
        return sem.try_acquire_for(timeout);
    }
    template<class Clock, class Duration>
    bool try_lock_until(std::chrono::time_point<Clock, Duration> const& timeout) {
        return sem.try_acquire_until(timeout);
    }
};
Upsides:
Downsides:
libc++ / libstdc++) versions.
Solution 2: std::atomic_ref and the futex wait / wake syscalls

Another option would be to use std::atomic_ref for the atomic read & write operations, and manually handle the waking / sleeping part (by calling the syscalls directly).
e.g.: godbolt
class my_mutex {
private:
    using state_t = std::uint32_t;
    static constexpr state_t StateUnlocked = 0;
    static constexpr state_t StateLocked = 1;
    static constexpr state_t StateLockedWithWaiters = 2;

    static_assert(std::atomic_ref<state_t>::is_always_lock_free);
    state_t state = StateUnlocked;

    void wait() {
        // TODO use WaitOnAddress for windows, ... other oses ...
        syscall(
            SYS_futex,
            &state,
            FUTEX_WAIT_PRIVATE,
            StateLockedWithWaiters,
            NULL
        );
    }
    void wake() {
        // TODO use WakeOnAddress for windows, ... other oses ...
        syscall(
            SYS_futex,
            &state,
            FUTEX_WAKE_PRIVATE,
            1
        );
    }

public:
    void lock() {
        state_t expected = StateUnlocked;
        if(std::atomic_ref(state).compare_exchange_strong(
            expected,
            StateLocked,
            std::memory_order::acquire,
            std::memory_order::relaxed
        )) [[likely]] {
            return;
        }

        while(true) {
            if(expected != StateLockedWithWaiters) {
                expected = std::atomic_ref(state).exchange(
                    StateLockedWithWaiters,
                    std::memory_order::acquire
                );
                if(expected == StateUnlocked) {
                    return;
                }
            }

            // TODO: maybe spin a little before waiting
            wait();

            expected = std::atomic_ref(state).load(
                std::memory_order::relaxed
            );
        }
    }
    bool try_lock() {
        state_t expected = StateUnlocked;
        return std::atomic_ref(state).compare_exchange_strong(
            expected,
            StateLocked,
            std::memory_order::acquire,
            std::memory_order::relaxed
        );
    }
    void unlock() {
        state_t prev = std::atomic_ref(state).exchange(
            StateUnlocked,
            std::memory_order::release
        );
        if(prev == StateLockedWithWaiters) [[unlikely]] {
            wake();
        }
    }
};
Upsides:
Downsides:
- you have to implement wait() and wake() manually, by directly calling the syscalls (there is no equivalent of std::atomic_ref::wait that is safe when the object gets destroyed)

Solution 3: wait / notify on a separate atomic variable that has a longer lifetime

Another option would be to avoid waiting on the atomic variable directly by using another atomic variable (with a longer lifetime) solely for the wait / notify.
This is also how most standard libraries implement std::atomic::wait() for types that are not futex-sized.
(libstdc++ for example has a pool of 16 futexes it uses for waits on atomic variables that are not futex-sized. (__wait_flags::__proxy_wait is the flag used to handle whether the wait will be on the atomic value itself or on one of the 16 proxy futexes.))
e.g.: godbolt
class waiter {
private:
    alignas(std::hardware_destructive_interference_size)
    std::atomic<std::uint32_t> counter = 0;

public:
    void notify_all() {
        counter.fetch_add(1, std::memory_order::release);
        counter.notify_all();
    }
    template <class T>
    void wait(
        std::atomic<T> const& var,
        std::type_identity_t<T> const& oldval
    ) {
        while (true) {
            auto counterval = counter.load(std::memory_order::acquire);
            if (var.load(std::memory_order::relaxed) != oldval) {
                return;
            }
            counter.wait(counterval);
        }
    }
};

template <std::size_t N = 256>
class waiter_pool {
private:
    static_assert(std::has_single_bit(N), "N should be a power of 2");
    waiter waiters[N];

    waiter& waiter_for_address(const void *ptr) {
        std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(ptr);
        addr = std::hash<std::uintptr_t>{}(addr);
        return waiters[addr % N];
    }

public:
    template <class T>
    void notify_all(std::atomic<T> const& var) {
        waiter& w = waiter_for_address(static_cast<const void*>(&var));
        w.notify_all();
    }
    template <class T>
    void wait(
        std::atomic<T> const& var,
        std::type_identity_t<T> const& oldval
    ) {
        waiter& w = waiter_for_address(static_cast<const void*>(&var));
        w.wait(var, oldval);
    }
};

waiter_pool pool;

class my_mutex {
private:
    using state_t = std::uint8_t;
    static constexpr state_t StateUnlocked = 0;
    static constexpr state_t StateLocked = 1;
    static constexpr state_t StateLockedWithWaiters = 2;

    static_assert(std::atomic<state_t>::is_always_lock_free);
    std::atomic<state_t> state = StateUnlocked;

public:
    void lock() {
        state_t expected = StateUnlocked;
        if (state.compare_exchange_strong(
            expected,
            StateLocked,
            std::memory_order::acquire,
            std::memory_order::relaxed
        )) [[likely]] {
            return;
        }

        while (true) {
            if (expected != StateLockedWithWaiters) {
                expected = state.exchange(
                    StateLockedWithWaiters,
                    std::memory_order::acquire
                );
                if (expected == StateUnlocked) {
                    return;
                }
            }

            // TODO: maybe spin a little before waiting
            pool.wait(state, StateLockedWithWaiters);

            expected = state.load(std::memory_order_relaxed);
        }
    }
    bool try_lock() {
        state_t expected = StateUnlocked;
        return state.compare_exchange_strong(
            expected,
            StateLocked,
            std::memory_order::acquire,
            std::memory_order::relaxed
        );
    }
    void unlock() {
        state_t prev = state.exchange(
            StateUnlocked,
            std::memory_order::release
        );
        if (prev == StateLockedWithWaiters) [[unlikely]] {
            pool.notify_all(state);
        }
    }
};
Upsides:
my_mutex::state does not need to be 32 bits because the futex waiting is delegated to the pool, so instances of my_mutex can be much smaller.
Downsides:
The limit for internal testers is 80, which you have hit. You can create an external testers group, which should have a limit of 10,000.
You could use PyYAML to parse YAML files in Python (pip install pyyaml).
Then, in your code:
import yaml
with open('yourfile.yml') as f:
    self.category_sector_mapping = yaml.safe_load(f)
There's likely a version mismatch here between Flask and Werkzeug. Try downgrading with pip install Werkzeug==2.2.2
Try this extension: sprintreportpro. It gives an amazing Sprint Report in Azure DevOps (ADO) in PDF format, with charts, burndown, quality summary, team insights, etc.
Update:
openjdk:11-jdk-slim is deprecated on Docker Hub.
Replace it with
eclipse-temurin:11-jdk-jammy
I found the answer -
test = test.with_columns(
pl.when(pl.col(pl.Float32) > 8)
.then(0)
.otherwise(pl.col(pl.Float32)).name.keep()
)
I needed otherwise and to explicitly keep the column names. I thought the expression expansion was the issue, but this works.
So after much more investigation, I have determined that this is just the difference in space efficiency between Parquet and Pandas for the kind of data in my files: the dataset includes several Date and Decimal columns, which can be handled in a very memory-efficient way by Parquet and Spark, but not by Pandas.
If someone else has a similar issue, I suggest moving your implementation to PySpark, which can handle this data much better. Unfortunately that is not an option for me, so I have had to fundamentally alter my approach.
@0___________, chux posted an advice request, which is SE's attempt to introduce Discussions-style posts. It should have been posted as "Troubleshooting / Debugging" to get the classic format (I think!)
Can someone answer this for me? I want to be able to launch a game like chess, or a block breaker/Breakout.
Try using this; it's working for me.
.buttonBorderShape(.circle)
Button {
// Button actions here.
} label: {
Image(systemName: "chevron.forward")
.fontWeight(.medium)
}
.buttonBorderShape(.circle)
.buttonStyle(.glass)
I forgot to wrap the store. I want the computed property to be in the store so I can just call store.b, instead of having to write the computed ref separately. So I think I'll need to do it the 'c' way, as the computed ref should be referring to a reactive object, but at initialization the reactive object has not been created yet.
And yea, the proper way would be to use a proper store management system, but I was wondering if there's an easy way for it.
I also encountered the same problem and spent a long time troubleshooting it, but I’ve finally solved it.
The root cause of the issue was that Cloudflare returned a very poor-quality IP node for the domain.
From Cloudflare’s publicly listed IP ranges https://www.cloudflare.com/ips-v4/, I selected a high-quality IP node and added it to my Windows hosts file — that completely fixed the problem.
It looks like the issue is with the endpoint path. In Jira API v3, the correct endpoint for a JQL search is /rest/api/3/search, not /rest/api/3/search/jql. When you use /search/jql, Jira doesn't return the paging metadata (startAt, maxResults, total). Try updating your URL to use /search; that should fix the response format and bring back the pagination info.
Question format is different from before; perhaps I set it up wrong.
Use gdb to open the core dump:
gdb /path/to/binary /path/to/core.dump
Then run:
(gdb) info files
@your-common-sense Wouldn't type safety be one reason?
If I had only a set() method, I can't have explicit enforcement for all 3 types. I'd have to use mixed or object|callable.
Having separate methods allows me to be more explicit in how each service is stored.
I think I once read there was a way to clone a repository with just the .git directory in it. I cannot find where I read that, but if it's possible, I could change my parent clone to this and avoid duplicating all the files; the parent would then just hold the .git, which would be shared by the worktrees?
I think I understand the problem. Groovy's Map coercion doesn't work on concrete types. Since the SpringApplicationBuilder is a concrete type, it instead tries to instantiate it, but because there's no default constructor it fails. When I tried Map coercion on a type that has a default constructor, it instantiates fine, but then my mocked methods aren't actually run.
TL;DR? I had to give up.
Well, I faced the same issue, but when using @nomicfoundation/hardhat-toolbox-mocha-ethers. So far I don't understand why Hardhat only recognises the Solidity tests but not the TS tests.
Can anybody help with that?
Actually the only solution seems to be disabling Copilot entirely. At least for me, enabling, disabling, and restarting the IDE doesn't work: NES will run one more time, and code completions will keep increasing their percentage in the Copilot usage statistics. This happens whether I enable or disable auto-triggering for code completions, set it per language, or set all (*) to false; Copilot will still run NES and code completion. It's as if MS said, "it's my IDE and I will make Copilot do whatever, no matter what you do".
I am the maintainer and primary author of the Spring Data DynamoDB community module -> https://github.com/prasanna0586/spring-data-dynamodb
The Spring Data projects page (https://spring.io/projects/spring-data) now points to this actively maintained version. The latest release is fully compatible with Java 21 and Spring Boot 3.5.6, and I am updating the underlying AWS SDK from v1 to v2 to align with current best practices.
The latest version and the compatibility matrix are available here -> https://github.com/prasanna0586/spring-data-dynamodb/?tab=readme-ov-file#version--spring-framework-compatibility
instead of
CFLAGS="-O2 -Wall -march=native" ./configure <options>
use
CFLAGS="-O2 -Wall -march=native -fcommon" ./configure <options>
The question is no longer relevant.
Thanks, everyone.
I think you misunderstood what II means. Are you saying that you want your top-level function to have the same initiation interval as either of the two functions you call? If so, then it should be II=3 in the top level, because the II applies globally. For example, if one clock cycle is 2 ns and II=3, then proc1 and proc2 each accept a new input every 6 ns, and your top-level function also accepts a new input every 6 ns.
Ran into this issue today and found that it was caused by importing the wrong scroll view.
Import like this: import { ScrollView } from 'react-native-gesture-handler';
DON'T IMPORT like this: import { ScrollView } from 'react-native';
All good, but internal testing does not go through a review process. That is why we went with closed testing and are thinking of promoting it to production, but we're not sure about it.
I am officially supporting this project. The project is very much alive and is available here -> https://github.com/prasanna0586/spring-data-dynamodb. The link on the spring-data projects page was updated to point people to the latest library. The latest version of the library uses Java 21 and Spring Boot 3.5.6. I am working on updating the AWS SDK from v1 to v2; it should be available soon.
The following helped me with a TypeError: 'Task' object is not callable error in PyCharm when running the debugger. I was using the scrapy library and it behaved in the same way as your error while debugging, but I suspect that you are seeing a similar issue with the asyncio library.
Press Shift twice to open the search window
From the main bar select Actions
Type in Registry and select Registry...
From here scroll down to python.debug.asyncio.repl and make sure the Value column is deselected
This is taken from the following answer: Cannot debug script with trio_asyncio in PyCharm
brew doctor can be really helpful here to make sure you don't need to clean up, install, link, add to PATH, etc.
The other answers are accurate - it's likely your npm upgrade or reinstalling icu4c.
This error can show up when installing PHP too - I ran into it with brew install php on macOS Monterey 12 (which currently installs PHP 8.4.14). Nothing worked for me, though; I've tried everything.
Does anyone know if it's possible to install a version > 8.1 on this OS? I'm convinced it's not.
You should post the HTML that you claim was working but no longer is.
I pressed <Ctrl><Shift>R once I had highlighted ONLY the commands that I wanted to turn into a function.
My original code block that I wanted to refactor began with a comment and the last line had a comment at the end.
# this code does this
print(a)
print(b) #another comment here.
Refactoring only worked when I highlighted ONLY the print commands:
print(a)
print(b) #another comment here.
I don't know if this is going to let me reply to @apnerve, but I'd have to agree with you that there's nothing technically wrong with skipping h2 and going straight to h3. That being said, it may be prudent to use CSS to restyle your headings so that h1 onwards work the way you want them to; then you have an h2 element that shows up as expected, and you can still have an h3. The tiered heading system is used to group headers by importance: h1 is a main header, the most important. Their importance to the crawler, and ultimately to organic search ranking, is directed by the standard hierarchy of elements. A page of pure, clean HTML will be easier for the crawler to recognize as "a product" or "a reservation page".
My company did a production-quality test release of our Android app as an internal beta release. You can manage your internal testing releases in the Google Play Console under Home > Your App > Test and release > Testing > Internal testing. These internal releases could only be downloaded from the app store by users added to the internal testing user group, who were sent a link that redirected them to the app store. Here is more information about internal testing of Android apps; it works for up to 100 invited users:
I edited the example data. Hopefully this is easier to demonstrate with now.