Building off of Austin's answer, since I was also looking for an example where the (tight) big-O and big-Ω bounds for the worst case differ. Think of this: we have some (horrible) code where we have determined that the runtime function for the worst-case input set is 1 when n is odd and n when n is even. Then the upper bound on the runtime of the worst case of this code is O(n), while the lower bound is Ω(1).
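To make this concrete, here is a toy (entirely hypothetical) procedure whose step count matches that runtime function, counting one "step" per unit of work:

```python
def horrible(n: int) -> int:
    """Returns the number of steps this hypothetical worst-case run takes."""
    steps = 0
    if n % 2 == 1:
        steps += 1          # odd n: constant work -> the Omega(1) lower bound
    else:
        for _ in range(n):  # even n: linear work -> the O(n) upper bound
            steps += 1
    return steps
```

Over all n, the worst-case step count is bounded above by n (even inputs) and below by 1 (odd inputs), so it is O(n) and Ω(1), and neither bound is tight for every n.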
Abandoning the PR helped me. I abandoned my PR, pushed a small change to the branch, started to create a new PR - and the existing one got updated.
No. On Xtensa (ESP32/ESP32-S3), constants that don’t fit in an instruction’s immediate field are materialized from a literal pool and fetched with L32R. A literal-pool entry is a 32-bit word, so each such constant costs 4 bytes even if the value would fit in 16 bits.
Why you’re seeing 4 bytes:
GCC emits L32R to load the constant into a register; L32R is a PC-relative 32-bit load from the pool. There’s no 16-bit “L16R” equivalent for literal pools on these cores. (Small values may be encoded with immediates like MOVI/ADDI, but once the value doesn’t fit, it becomes a pooled literal.)
What you can do instead (to actually use 16-bit storage):
Put thresholds in a table of uint16_t in .rodata (Flash) and load them at run time, instead of writing inline literals in expressions. That lets the linker pack them at 2 bytes each (modulo alignment), and the compiler can load them with 16-bit loads (l16ui) and then compare.
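As a sketch of that idea (the names and values below are invented), a table like this ends up in .rodata at 2 bytes per entry, and on Xtensa the compiler can fetch an entry at run time with a 16-bit l16ui load rather than pooling a 32-bit literal for each constant (whether it does so depends on the compiler and optimization level):

```c
#include <stdint.h>

/* Hypothetical thresholds, packed at 2 bytes each in .rodata (Flash). */
static const uint16_t thresholds[] = { 1200, 4500, 30000 };

/* Compare a sample against a threshold loaded from the table at run time. */
int above_threshold(uint16_t sample, unsigned idx)
{
    return sample > thresholds[idx];
}
```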
You can tag the field with the @JsonProperty annotation.
For example, in myDto the variable name I define matches the name I define in @JsonProperty, yet they are not treated as the same. The problem that leads to this is the auto-generation of @Getter and @Setter by Lombok.
This kind of roadblock is exactly why many organisations prefer something called a Mobile Device Management (MDM) solution. Instead of relying on StageNow or custom scripts, an MDM gives direct visibility into serial numbers, IMEI, and other identifiers across the entire fleet. It not only saves time but also standardizes how these values are pulled and stored, which is critical when you are scaling and testing beyond a few units. There are some really good MDM solutions in the market, like Scalefusion or SOTI.
The error “Error response from daemon: manifest for …” during publish usually occurs when the Docker image you are trying to push or pull does not exist or the tag is incorrect. To resolve it, verify that the image name and tag you reference actually exist before pushing or pulling.
Saving the .bat file using code page 850 solved the problem for me.
850 is the default Windows OEM code page for the UK.
I knew there had to be a trivial solution
Thanks to all who responded, especially @Mark Tolonen.
Let me tell you what went wrong here. My guess is that you allow guest posts on your website, and these posts were draining your link juice, which you are now referring to as spam. You deleted them directly via your CMS. What should have been done is to first use GSC to request removal of each of them, and then delete them. Still, you do not need to worry; it is a matter of days, but they will get deindexed. But yes, the reputation damage is real.
If you really want to force a PDF to be viewed in the browser and to parse the document to get the page count, the way to do it is to implement something like "pdf.js".
After some further thought, I came to the conclusion that the answer is actually very simple: just undo the increment after completion of the foreach loop:
#macro(renderChildItems $item $indentLevel)
  #set($childItems = $transaction.workItems().search().query("linkedWorkItems:parent=$item.fields.id.get AND NOT status:obsolete AND type:(design_decision system_requirement)").sort("id"))
  #foreach($child in $childItems)
    <tr>
      <td style="padding-left:${indentLevel}px">
        $child.render.withTitle.withLinks.openLinksInNewWindow()
      </td>
    </tr>
    #set($indentLevelNew = $indentLevel + $indentSizeInt)
    #renderChildItems($child $indentLevelNew)
  #end
  #set($indentLevelNew = $indentLevel - $indentSizeInt) ##NEW
#end
Name=fires
TypeName=fires
TimeAttribute=time
PropertyCollectors=TimestampFileNameExtractorSPI[timeregex](time)
Schema=*the_geom:Polygon,location:String,time:java.util.Date
CanBeEmpty=true
Here, fires is the name of the data store (and maybe its mapping).
https://pub.dev/packages/keyboard_safe_wrapper
This package solves your problem.
TLDR:
Partial evaluation starts at RootNode.execute() and follows normal Java calls - no reflection on node classes.
Node instance constancy and AST shape are the foundation of performance.
Granularity matters; boundaries matter even more.
DSLs and directives aren’t mandatory, but they encode the performance idioms you’d otherwise have to rediscover.
Inspection with IGV is normal — nearly everyone does it when tuning a language.
Full Answers:
How does Truffle identify the code to optimize?
Truffle starts partial evaluation at RootNode.execute(VirtualFrame). During partial evaluation, the RootNode instance itself is treated as a constant, while the VirtualFrame argument represents the dynamic input to the program.
Beyond that, Truffle does not use reflection or heuristics to discover execute() methods. It simply follows the normal Java call graph starting from the RootNode. Any code reachable from that entry point is a candidate for partial evaluation.
This means you can structure Node.execute(..) calls however you like, but for the compiler to inline and optimize them, the node instances must be constant from the RootNode’s point of view. To achieve that you should:
Make fields final where possible.
Annotate node fields with @CompilationFinal if their value is stable after construction.
Use @Child / @Children to declare child nodes (this tells Truffle the AST shape and lets it treat those nodes as constants).
Granularity and @TruffleBoundary
Granularity matters a lot. Many small, type-specialized Node subclasses typically optimize better than one monolithic execute() method. @TruffleBoundary explicitly stops partial evaluation/inlining across a method boundary (useful for I/O or debugging), so placing it incorrectly can destroy performance. The usual pattern is to keep “hot” interpreter code boundary-free and push any side effects or slow paths behind boundaries.
Truffle DSLs and compiler directives
The DSLs (Specialization, Library, Bytecode DSL) are not strictly required for peak performance. Anything the DSL generates you could hand-write yourself. However, they dramatically reduce boilerplate and encode best practices: specialization guards, cached values, automatic rewriting of nodes, etc. This both improves maintainability and makes performance tuning much easier.
Similarly, compiler directives (@ExplodeLoop, @CompilationFinal(dimensions = ...), etc.) give the optimizer hints. They are incremental: you can start with a naïve interpreter, but expect to add annotations to reach competitive performance. Without them, partial evaluation may not unroll loops or constant-fold as expected.
Performance expectations and inspection
Truffle interpreters are not automatically fast. A naïve tree-walk interpreter can easily be slower under partial evaluation than as plain Java. Understanding how PE works, constants vs. dynamics, call graph shape, guard failures, loop explosion, etc. is essential.
In practice, most language implementers end up inspecting the optimized code. Graal provides two main tools:
Ideal Graph Visualizer (IGV) for looking at the compiler graphs and ASTs.
Compilation logs / Truffle’s performance counters to see node rewriting, inlining, and assumptions.
The Truffle docs have a dedicated “Optimizing Your Interpreter” guide that demonstrates these patterns. I would also recommend checking out the other language implementations for best practices.
Do not use next/head in App Router. Remove it from your components if present.
Make sure your layout.tsx has proper structure:
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}
A convex hull is possible; there are many ways to compute one, though it may not fully match the shape. A concave hull requires some extra processing on top of a convex hull, like splitting an edge with the nearest vertex that lies between its endpoints. I think there is no simple and widely accepted solution for that.
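For the convex part, one of those many ways is Andrew's monotone chain algorithm; a minimal self-contained sketch (points as (x, y) tuples):

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                    # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):          # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # endpoints are shared, drop duplicates
```

Concave-hull approaches would then refine the edges of this hull, e.g. by the edge-splitting idea described above.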
I had the same issue when running dotnet publish for multiple platforms using a bash script (Windows / MacOS). In my case, the fix turned out to be running dotnet clean before the publish.
If the device's traits don't match the reported state, it might not render correctly.
Check if you are really using the correct version of Java in the project. In my case, after downloading the project, JDK 21 was selected by default, but the project was on 8 :) Changing it helped.
You have a typo in your -e PORT:8996. This must be = instead of ::
-e PORT=8996
But this configuration setting is not useful and adds complexity. Best is to remove -e PORT=8996; OpenProject will still listen on 80 within the container, and your -p 8996:80 will expose it on port 8996 on your host.
For your "Invalid host_name configuration" error, did you actually access OpenProject using http://localhost:8996? (We cannot see the whole URL you posted; it seems truncated.)
Right now your animation looks shaky because you're resizing the whole window with window.setAttributes() on every frame, which forces Android to relayout the entire activity and causes stutter. The smoother way is to put your dialog content inside a container view (like a FrameLayout), start it at half the screen height, and then animate that view's height using a ValueAnimator. That way only the container is remeasured, not the whole window, and the animation runs much more smoothly. Also, use an interpolator like DecelerateInterpolator instead of LinearInterpolator to make the motion feel natural.
Using the relative path will solve the problem, i.e. cat ./file\ name\ with\ spaces or cd ./file\ name\ with\ spaces
It'll work.
Instead of typing the escape characters yourself, press Tab after the first two or three characters (do not forget the "./").
Can we integrate it in all kinds of situations?
When deciding whether to use a flowchart or a sequence diagram to describe a process, it really depends on what you want to explain. At Cloudairy, we often suggest starting with a flowchart when you want to give a simple, high-level view of a process. Flowcharts are perfect for showing the steps and decisions in a workflow — for example, “User signs up → Email is verified → Account is created.” They are easy for business teams, managers, and clients to understand because they focus on what happens next and where decisions are made.
A sequence diagram, on the other hand, is more technical. It shows the order of interactions between systems, components, or people over time. If you want to describe how your front-end, back-end, and database communicate during a login process, a sequence diagram is ideal. It helps developers visualize requests, responses, and timing issues so they can build or troubleshoot the system correctly.
In practice, many Cloudairy projects use both: flowcharts to get everyone on the same page, then sequence diagrams to capture the technical details. So, think about your audience — if you’re presenting to business stakeholders, use a flowchart. If you’re documenting for developers, go with a sequence diagram.
Unset the max-width on the parent container and use the full viewport width, centred:
.overflow-slider {
max-width: none !important;
width: 100vw;
margin-left: calc(-50vw + 50%);
margin-right: calc(-50vw + 50%);
}
I've created a post-install script, which can be found here: github.com/firebase/firebase-ios-sdk/issues/15347
You don’t need to declare @JsonKey() anymore. The latest json_serializable updates handle most cases automatically, so your models should work fine without explicitly adding it.
I also resolved this problem by selecting the Business Intelligence option during the SSMS 21 installation.
You can disable it with these steps:
Open the keyboard settings.

Disable the toggle "Use stylus to write in text fields".

Now when you leave the keyboard settings screen, everything should be fine.

I'm in the same position, did you find a solution?
I have exactly the same problem. Except that I don't have tabBar.isTranslucent = false in my code.
The bottom constraint of my ViewController that is displayed is also attached to the view's bottomAnchor, and not to contentLayoutGuide bottomAnchor.
Is anyone else unable to solve this problem with isTranslucent?
I think that a bidirectional association in SysML v2 is a crossing relationship, but I am not sure - can someone please confirm this?
I am having the exact same problem as @vuelicious. I also noticed the latency values and tried delaying the play time by that latency to sync the screen, but the delay is still much higher than the reported output latency value.
Did any of you find an approach to manage these delays?
Thank you so much!
The leading “I” actually stands for “import” so as to sometimes identify imported vehicles.
The accepted answer is correct, but I'd like to add:
The solution in the accepted answer does not work well with sessions, since ordering within sessions will be lost. I don't think though that there is currently a solution that does work well with sessions.
There is currently a GitHub feature request to add an 'abandon with custom delay' feature that would solve this problem: https://github.com/Azure/azure-service-bus/issues/454. It is scheduled to be delivered this year, but there is no hard commitment to that timeline.
Shameless self promotion: https://technology.amis.nl/azure/retries-with-backoff-in-azure-service-bus/
I recommend taking a look at this repository:
https://github.com/AleCucina/chrome-extension-remote-scripts
It shows a way to work around the restriction using JavaScript AST and an interpreter.
And why not use a thread? Is it not better?
kollaR is a newly released R package for eye-tracking analyses. It includes functions for fixation and saccade classification, area-of-interest based analyses, and nice visualizations for publications. kollaR was specifically designed to facilitate comparisons and validations of event classification algorithms. This can help you select an analysis pipeline suitable for your data. kollaR is available on CRAN. A demo can be found here:
It's called Item Number Fields - It's under Menu: Customization > Lists, Records, & Fields > Item Number Fields - here you can add fields to the inventory detail record
Maybe you need to use populate to get the username instead of the ObjectId in your getData function.
Try populating "owner" (selecting "username") in the fetch in getData, and then you should be able to
access {owner?.username}.
In my case I forgot to import
implementation(libs.androidx.ui.tooling)
and only had the tooling preview imported:
implementation(libs.androidx.ui.tooling.preview)
I had this problem and I tried everything, but none of them worked. At last, I uninstalled the GitHub Copilot extensions and disabled the Copilot feature in VS Code, as it seemed this conflict was causing an issue in my workspace. Then, I deleted my project and cloned it again from GitHub, which fixed the issue.
BranchCache is interfering; you can stop it with net stop PeerDistSvc
I recommend taking a look at this repository:
Chrome-Extension-Remote-Scripts-Manifest-V3
It shows a way to work around the restriction using JavaScript AST and an interpreter.
I faced the same problem with onUploadProgress in my Next.js/NestJS app, where the flow is: the user chooses a file, I send it to my backend, and my backend sends it to object storage. It's not exactly the same situation, but it may help you understand. onUploadProgress reached 100% while the upload to object storage had not actually finished. After wasting a lot of time on this, here is what I learned: onUploadProgress shows 100% when the file has finished uploading from your client/browser to your backend (server). In dev mode or on localhost this is very fast, which is why it appears to hit 100% right after choosing the file; when deploying to production you will notice it takes some time. So the 100% shown is not the progress of the upload from the server to object storage (calculating that percentage is up to you to handle).
At the time of writing this answer, you might be interested in reading this post, which gives an elegant solution based on the Community ID.
In ERC-20, the approve function is intended for all token holders and is not an "admin-only" action. This is how it operates:
When you call approve(spender, amount), the contract records: msg.sender allows spender to spend up to amount of their tokens. msg.sender is simply whoever sends the transaction, so you can only approve spending of your own tokens, not someone else's. A random user can't approve tokens from your balance; they can only approve from their own wallet. For this reason, onlyOwner is not used in the majority of ERC-20 tokens. Ownership is not for regular token transfers or approvals, but rather for administrative tasks like pausing, minting, etc.
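For reference, in a typical ERC-20 implementation approve is just a write into a per-holder mapping keyed by msg.sender (a simplified sketch, not a complete token contract):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Erc20ApproveSketch {
    event Approval(address indexed owner, address indexed spender, uint256 value);

    // allowance[owner][spender] = remaining amount spender may transfer
    mapping(address => mapping(address => uint256)) public allowance;

    function approve(address spender, uint256 amount) external returns (bool) {
        // msg.sender can only set allowances on their own balance
        allowance[msg.sender][spender] = amount;
        emit Approval(msg.sender, spender, amount);
        return true;
    }
}
```

Because the mapping is always indexed by msg.sender, there is nothing an onlyOwner modifier would protect here.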
Given that you only have two columns, something like
df.group_by("Letter").all()
would also give you the desired result. If you have more columns, all() would turn everything except for the group_by column into series of lists.
I suggest creating a service that will hold and provide your properties to other beans. It will also have a scheduled method that periodically checks the last-modified time attribute of your properties file and updates the properties in your service if the file was changed.
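A minimal sketch of the idea (the class and method names are my own; in Spring the refresh method would be annotated with @Scheduled and would read the file's last-modified time via Files.getLastModifiedTime before reloading the Properties):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Holds a snapshot of the properties and swaps it when the file changes.
class ReloadablePropertiesService {
    private final Map<String, String> props = new ConcurrentHashMap<>();
    private long lastSeenModified = -1L;

    public String get(String key) {
        return props.get(key);
    }

    // Called periodically; replaces the snapshot only if the file is newer.
    // Returns true when a reload actually happened.
    public boolean refreshIfChanged(long fileLastModified, Map<String, String> freshCopy) {
        if (fileLastModified <= lastSeenModified) {
            return false;
        }
        lastSeenModified = fileLastModified;
        props.clear();
        props.putAll(freshCopy);
        return true;
    }
}
```

Other beans inject this service and always read through get(), so they see the updated values after a reload.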
I got caught by the same issue; I changed moduleResolution from nodenext to bundler and the webpack build succeeded.
# Find the process ID
sudo lsof -i :5000
# Kill the process
sudo kill -9 <PID>
There’s no universal “better” choice, but here are the main practical reasons why an ESP32-S3 is often a better fit for embedded ML/IoT than a Raspberry Pi Zero 2 W:
Power consumption: ESP32-S3: tens of mA when active, µA in deep sleep; designed for battery/low-power IoT nodes. RPi Zero 2 W: ~250–350 mA idle, much higher under load; not practical for battery operation without a big pack. For continuous sensor logging and periodic inference, the S3 is far more efficient.
Real-time behavior: The ESP32-S3 runs bare-metal / RTOS (FreeRTOS), so you can sample sensors deterministically at 10–1000 Hz. The Pi Zero 2 W runs Linux: great for flexibility, but not hard real-time → jitter in sensor sampling. For vibration/RPM sensing, deterministic timing is critical.
Integrated connectivity and peripherals: ESP32-S3: Wi-Fi, BLE, ADC, SPI, I²C, UART, I²S, CAN built in. Pi Zero 2 W: Wi-Fi/Bluetooth, but raw sensor I/O needs extra hardware (USB dongles, HATs). With the S3 you connect sensors directly, without kernel driver overhead.
Built-in AI acceleration: The ESP32-S3 has SIMD + ESP-NN kernels (optimized TFLite Micro ops) and runs small quantized ML models in the ms range. The Pi Zero 2 W can run full TensorFlow/PyTorch, but the inference overhead is much bigger and not power-efficient. If your model is small (RUL classifier, anomaly detection), the S3 handles it natively.
Cost and availability: ESP32-S3 modules (N8R8/N16R8): $4–8 range. Pi Zero 2 W: often hard to find at retail, higher cost (~$15–20 if available).
Simplicity and reliability: ESP32-S3 firmware is a single binary, supports OTA updates, and boots instantly. The Pi needs a full OS image on an SD card, and the filesystem can corrupt if power is lost. For field-deployed IoT nodes, microcontrollers are usually more robust.
When the Raspberry Pi Zero 2 W makes more sense: when you need a general-purpose Linux box, heavier ML frameworks, or a more flexible software stack.
Summary: Choose the ESP32-S3 if you want low-power, real-time, robust edge ML inference with direct sensor I/O. Choose the Pi Zero 2 W if you need a general-purpose Linux box with heavier ML frameworks or a more flexible software stack.
Yeah… this one’s on me.
Last night I forgot to charge my phone. Today, in a moment of pure genius, I plugged it into my laptop to charge. Guess what? Expo saw a shiny new physical device and decided, “Oh cool, let’s use that instead of the emulator!”
I then spent a solid hour reinstalling, reinstalling again, questioning my life choices, and convincing myself I broke everything… only to realize the problem was literally my phone being plugged in.
Moral of the story:
Don’t charge your phone from the same laptop you’re running Expo on (unless you actually want to test on it).
If Expo suddenly ignores your emulator, check your USB cable before reinstalling the universe.
Felt very stupid. Learned something new.
I'm new to Flutter, so here's a solution that worked in my project; it may not be the only way. I just wanted to provide my own version for new users like me.
Notes:
This uses package:web, which replaces the now-deprecated dart:html.
window.open(url, '_blank') attempts to open in a new tab, but if the browser blocks it (due to pop-up blockers or the Google App on iOS), it falls back to navigating in the current tab.
Calling openUrl(answerUrl, newTab: false) opens in the same tab.
import 'package:web/web.dart' as web;
// link to open
const answerUrl = 'https://stackoverflow.com/questions/ask';
// function to open link in new tab or same page
void openUrl(String url, {bool newTab = true}) {
  try {
    if (newTab) {
      // opens new tab
      final newWindow = web.window.open(url, '_blank');
      if (newWindow == null) {
        // Fallback if browser blocks the popup
        web.window.location.href = url;
      }
    } else {
      // Open directly in the same tab
      web.window.location.href = url;
    }
  } catch (_) {
    // Fallback for cases like Google App on iPhone
    web.window.location.href = url;
  }
}
// ... passing the link to the function inside onPressed
ElevatedButton(
  onPressed: () => openUrl(answerUrl),
  child: const Text("Go to questions"),
)
The problem is that you're trying to control both the fill color and the animation separately. The better approach is to use CSS animations for the masking effect and control them through classes that you toggle with JavaScript.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Indonesian Flag Globe Animation</title>
<style>
body {
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
background: linear-gradient(to right, #2c3e50, #4ca1af);
margin: 0;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}
.container {
text-align: center;
padding: 30px;
background: rgba(255, 255, 255, 0.1);
backdrop-filter: blur(10px);
border-radius: 20px;
box-shadow: 0 10px 30px rgba(0, 0, 0, 0.2);
}
h1 {
color: white;
margin-bottom: 30px;
text-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
}
.globe-container {
position: relative;
width: 300px;
height: 300px;
margin: 0 auto 30px;
}
.globe {
width: 100%;
height: 100%;
}
.globe-circle {
fill: lightBlue;
transition: fill 0.5s ease;
}
.mask {
fill: white;
}
#mask-rect {
transform: translateY(-100%);
transition: transform 1s cubic-bezier(0.65, 0, 0.35, 1);
}
.globe-container:hover #mask-rect,
.globe-container.hover #mask-rect {
transform: translateY(0);
}
.globe-container:hover .globe-circle,
.globe-container.hover .globe-circle {
fill: url(#indo-flag);
}
button {
padding: 15px 40px;
font-size: 18px;
background: linear-gradient(to right, #e74c3c, #e67e22);
color: white;
border: none;
border-radius: 50px;
cursor: pointer;
transition: all 0.3s ease;
box-shadow: 0 5px 15px rgba(0, 0, 0, 0.2);
}
button:hover {
transform: translateY(-3px);
box-shadow: 0 8px 20px rgba(0, 0, 0, 0.3);
background: linear-gradient(to right, #c0392b, #d35400);
}
.instructions {
color: white;
margin-top: 20px;
font-size: 16px;
opacity: 0.8;
}
</style>
</head>
<body>
<div class="container">
<h1>Indonesian Flag Globe</h1>
<div class="globe-container">
<svg class="globe" viewBox="0 0 200 200">
<defs>
<!-- Indonesian flag gradient -->
<linearGradient id="indo-flag" x1="0%" y1="0%" x2="0%" y2="100%">
<stop offset="0%" style="stop-color:#e70011;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ffffff;stop-opacity:1" />
</linearGradient>
<!-- Clip path for the globe -->
<clipPath id="globe-clip">
<circle cx="100" cy="100" r="90" />
</clipPath>
</defs>
<!-- Globe circle with initial color -->
<circle class="globe-circle" cx="100" cy="100" r="90" />
<!-- Mask for the animation -->
<g clip-path="url(#globe-clip)">
<rect id="mask-rect" x="0" y="0" width="200" height="200" class="mask" />
</g>
<!-- Globe outline -->
<circle cx="100" cy="100" r="90" fill="none" stroke="rgba(0,0,0,0.2)" stroke-width="2" />
</svg>
</div>
<button id="trigger-button">Hover Me</button>
<p class="instructions">Hover over the button to see the Indonesian flag colors fill the globe</p>
</div>
<script>
const button = document.getElementById("trigger-button");
const globeContainer = document.querySelector(".globe-container");
// We're using CSS for the animation, but we can also control with JS if needed
button.addEventListener('mouseenter', () => {
globeContainer.classList.add('hover');
});
button.addEventListener('mouseleave', () => {
globeContainer.classList.remove('hover');
});
</script>
</body>
</html>
Is the above code sufficient?
For anyone who has a related issue where even the chosen best answer doesn't work and you get stuck at the "Selecting account" step:
Please check the targets' Signing & Capabilities and make sure both your APP and APPTests targets have the correct Team and Bundle Identifier selected.
This issue occurred due to an outdated SDK in the app. It is likely that some support for Objective-C was removed in iOS 26, resulting in the crash.
I too struggled initially setting up Tailwind/PostCSS; then I worked out these steps by combining the documentation and various YouTube tutorials. This will surely work, just go stepwise.
The new Tailwind CSS version 4+ supports auto configuration (learn more about it in the documentation).
We don't have tailwind.config.js and postcss.config.js anymore.
Start a fresh new App
Note: Make sure to use the Command Prompt (CMD) terminal inside your code editor, not PowerShell, Git Bash, or others. I faced no errors when doing this.
npm create vite@latest my-app -- --template <template_name>
Eg. For React:
npm create vite@latest my-app -- --template react
cd my-app
code -r my-app
Opens the app in the current VS Code window (-r reuses the window).
npm install tailwindcss @tailwindcss/vite
Confirm the installation via package.json
"dependencies": {
"@tailwindcss/vite": "^4.1.11",
"react": "^19.1.0",
"react-dom": "^19.1.0",
"tailwindcss": "^4.1.11"
}
(With PostCss)
npm install -D @tailwindcss/postcss
and
npm install tailwindcss @tailwindcss/vite
Confirm the installation via package.json
"devDependencies": {
"@eslint/js": "^9.35.0",
"@tailwindcss/postcss": "^4.1.13",
"@types/react": "^19.1.13",
"@types/react-dom": "^19.1.9",
"@vitejs/plugin-react": "^5.0.2",
"eslint": "^9.35.0",
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-react-refresh": "^0.4.20",
"globals": "^16.4.0",
"vite": "^7.1.6"
}
vite.config.js
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite' //1. Line 1
// https://vite.dev/config/
export default defineConfig({
plugins: [
tailwindcss(), //2. Line 2
react()
],
})
And press CTRL + S to save.
project/src/index.css
@import "tailwindcss";
:root {
  font-family: system-ui, Avenir, Helvetica, Arial, sans-serif;
  /* ... */
}
Remove this default CSS left over from the Vite template.
In App.jsx, type rfc (the React functional component snippet) to scaffold:
export default function App() {
return (
<h1 className='text-lg font-bold underline text-red-500'>
YB-Tutorials
</h1>
)
}
npm run dev
Visit http://localhost:5173/
Done...!
For people following the Nest.js Passport.js tutorial who end up having this issue, what fixed it for me was adding the secret as an option again:
this.jwtService.sign(payload, {
secret: jwtConstants.secret
})
Based on recent open-source benchmarks (https://github.com/chrisgleissner/loom-webflux-benchmarks), virtual threads consistently achieved same or even better performance than Project Reactor. This indicates that if your main concern is simply overcoming performance bottlenecks related to thread overhead or blocking I/O, the answer is YES, virtual threads alone are often sufficient and provide a simpler programming model.
But the reactive programming model brings benefits beyond reducing thread usage. Frameworks like Project Reactor are inherently event-driven, which provides strong support for backpressure, streaming, and composing asynchronous operations.
The model itself—not just the performance—is a key advantage. A concrete example is the recent wave of AI chatbot applications, which must handle massive numbers of concurrent requests, integrate with third-party APIs during a conversation, and stream partial responses back to users in real time. With Reactor, this can be naturally implemented using Flux and Sinks, while with virtual threads you would need to manually manage event emission, which is less straightforward.
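For contrast, here is what the "plain blocking code on virtual threads" side of that trade-off looks like: ordinary imperative fan-out, no reactive operators (a minimal sketch, Java 21+; the names and the simulated API are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

class VirtualThreadSketch {
    // Simulates a blocking third-party API call (e.g. an HTTP request).
    static String fetch(int id) throws InterruptedException {
        Thread.sleep(10); // blocking call parks only the cheap virtual thread
        return "reply-" + id;
    }

    // One virtual thread per request; reads top to bottom like normal code.
    static List<String> fetchAll(int n) {
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = IntStream.range(0, n)
                    .mapToObj(i -> exec.submit(() -> fetch(i)))
                    .toList();
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    out.add(f.get());
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return out;
        }
    }
}
```

What this model does not give you out of the box is exactly what the paragraph above describes: backpressure and incremental streaming of partial results, which Reactor's Flux and Sinks provide naturally.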
Change formFields to be IEnumerable&lt;FormItem&gt;. The default router and JSON parser don't map a JSON array to a .NET array.
This code doesn't have that issue. Does it solve your problem?
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Demo</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
dialog {
position: relative;
border: 1px solid black; /* Default is 2px. */
}
dialog::backdrop {
background-color: salmon;
}
#closeButton {
font-size: 1.5em;
line-height: .75em;
position: absolute;
right: 0;
top: 0;
border-left: 1px solid black;
border-bottom: 1px solid black;
padding: 3px;
}
#closeButton:hover {
cursor: pointer;
background-color: black;
color: white;
}
</style>
</head>
<body>
<dialog>
<div id="closeButton" onclick="this.parentNode.close()">×</div>
<p>This is a dialog box.</p>
</dialog>
<script>
document.getElementsByTagName('dialog')[0].showModal(); // Or show() for a non-modal display.
</script>
</body>
</html>
I went through the same situation.
If there is no org policy, you must create it.
I thought this was an organizational policy because the item default was created for some reason, but it wasn't.
I found that my logs showed after changing my device. For example, they weren't showing on my Sony Xperia, but they were showing on my OnePlus 8.
Any solution for this yet?
I have tried validating the locales, but it still did not help.
Is this for the old v1 and v2 versions too, or only for g2?
From my experience using Polars, I can say that Polars will not load the entire Parquet file into memory if you only select a few columns; it does column pruning under the hood.
But if you use .collect() without care, Polars will try to materialize all the rows from those columns at once, which can blow up RAM on huge data.
When you need to work on very large datasets, use the following:
· Use scan_parquet (lazy mode) with filters before .collect().
· Use streaming=True as you did, but combine it with filters/aggregations so Polars does not need to hold everything in memory at once.
I was dealing with the same issue, but I just found the cause: it was a syntax error in one of the nodes. Make sure your code works before you call the Image function.
Actually, changing the defconfig manually or using menuconfig only lets you select I2C GPIO and the I2C GPIO Fault Injector; the kernel then knows about the driver, but it does not create any bus instance, nor does it know which pins to use.
You need a device tree node that tells the kernel "make a software I2C bus using these GPIO pins." The i2c-gpio driver looks for a specific node in the device tree to understand which GPIO pins to use for the I²C bus. Without this node, the driver has no bus to attach to.
This node must be added to your AST2600's specific .dts file or one of the .dtsi files it includes.
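As an illustration of the shape of that node (the controller label and pin numbers below are placeholders; use GPIOs that are actually free on your AST2600 board, and include dt-bindings/gpio/gpio.h for the flag macros):

```dts
/ {
    i2c-gpio-bus {
        compatible = "i2c-gpio";
        /* Placeholder pins: replace with the GPIOs wired as SDA/SCL */
        sda-gpios = <&gpio0 10 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
        scl-gpios = <&gpio0 11 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
        i2c-gpio,delay-us = <5>;    /* roughly a 100 kHz clock */
        #address-cells = <1>;
        #size-cells = <0>;
    };
};
```

Once the kernel boots with this node, a new i2c bus backed by those GPIOs should appear (e.g. under /sys/bus/i2c/devices/).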
I've just fixed it. The whole problem was the dynamic allocation of page tables at non-mapped addresses. My solution was to statically allocate an array big enough for all the possible page tables plus the page directory. It's not clean, but it works.
I had to run the web app as an admin to be able to launch ChromeDriver
create a new local user account as an admin
assign that to the application pool of the IIS web site
In the new versions of Android studio the shortcut to delete the current line for Windows is Shift + delete
The width/height attributes give the browser the image's natural aspect ratio up front, preventing layout shift. contain-intrinsic-size is only used as a fallback size before the real dimensions load, so using both keeps the layout stable in all cases.
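A minimal sketch combining the two (the file name and dimensions are placeholders; contain-intrinsic-size only takes effect together with size containment, here via content-visibility):

```html
<!-- width/height reserve the aspect ratio immediately;
     contain-intrinsic-size stands in for the size while the
     element is skipped by content-visibility: auto. -->
<img src="hero.jpg" width="800" height="450" alt="Hero"
     style="content-visibility: auto; contain-intrinsic-size: 800px 450px;">
```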
The issue is caused by your component missing the React ref property. In my case I had a customized <FlexCol> component that did not have that property. After I added support for the ref property to my component (just passing it through to the top div), the issue was fixed. Moral: never strip vital React properties from your custom components (key, ref, maybe more). Note that passing ref as a plain prop works in React 19+; in earlier versions you'd wrap the component in React.forwardRef instead.
type FlexColProps = {
  id?: string;
  key?: string | number;
  ref?: React.Ref<HTMLDivElement>; // ref was missing and causing the scrollTop error!
  ...
}
Thanks to Tom Cools' suggestion, I found out how to do it. In the resource class:
@Inject
private ConstraintMetaModel constraintMetaModel;

@GET
@Path("constraints")
@Produces(MediaType.APPLICATION_JSON)
public Collection<String> listConstraintNames() {
    return constraintMetaModel.getConstraints().stream()
            .map(Constraint::getConstraintRef)
            .map(ConstraintRef::constraintName)
            .collect(Collectors.toSet());
}
Here is a custom implementation:
import React, { useState } from "react";

type PieSlice = {
  name: string;
  value: number;
  color: string;
};

interface CustomPieChartProps {
  innerRadius?: number;
  outerRadius?: number;
  gapAngle?: number; // Gap between slices in degrees
  data: PieSlice[];
}

const CustomPieChart: React.FC<CustomPieChartProps> = ({
  innerRadius = 35,
  outerRadius = 60,
  gapAngle = 18,
  data
}) => {
  const total = data.reduce((acc, item) => acc + item.value, 0);
  let cumulativeAngle = -90; // Start from top (12 o'clock position)
  const [tooltip, setTooltip] = useState<{ x: number; y: number; text: string } | null>(null);

  // Function to create donut slice path with asymmetric curved ends
  const createSlice = (startAngle: number, endAngle: number, innerR: number, outerR: number) => {
    const rad = Math.PI / 180;
    const capRadius = 12; // Radius for the rounded caps
    const endCapRadius = capRadius + 10; // Radius for the rounded caps at the end

    // Adjust angles to account for the curved caps
    const adjustedStartAngle = startAngle + gapAngle / 2;
    const adjustedEndAngle = endAngle - gapAngle / 2;

    // Outer arc points
    const x1Outer = outerR + outerR * Math.cos(-adjustedStartAngle * rad);
    const y1Outer = outerR + outerR * Math.sin(-adjustedStartAngle * rad);
    const x2Outer = outerR + outerR * Math.cos(-adjustedEndAngle * rad);
    const y2Outer = outerR + outerR * Math.sin(-adjustedEndAngle * rad);

    // Inner arc points
    const x1Inner = outerR + innerR * Math.cos(-adjustedEndAngle * rad);
    const y1Inner = outerR + innerR * Math.sin(-adjustedEndAngle * rad);
    const x2Inner = outerR + innerR * Math.cos(-adjustedStartAngle * rad);
    const y2Inner = outerR + innerR * Math.sin(-adjustedStartAngle * rad);

    const largeArcFlag = adjustedEndAngle - adjustedStartAngle > 180 ? 1 : 0;

    return `
      M${x1Outer},${y1Outer}
      A${outerR},${outerR} 0 ${largeArcFlag} 0 ${x2Outer},${y2Outer}
      A${capRadius},${capRadius} 0 0 0 ${x1Inner},${y1Inner}
      A${innerR},${innerR} 0 ${largeArcFlag} 1 ${x2Inner},${y2Inner}
      A${endCapRadius},${endCapRadius} 0 0 1 ${x1Outer},${y1Outer}
      Z
    `;
  };

  return (
    <div style={{ position: "relative", width: outerRadius * 2, height: outerRadius * 2 }}>
      <svg width={outerRadius * 2} height={outerRadius * 2}>
        {data.map((slice) => {
          const startAngle = cumulativeAngle;
          const angle = (slice.value / total) * 360;
          cumulativeAngle += angle;
          const endAngle = cumulativeAngle;
          return (
            <path
              key={slice.name}
              d={createSlice(startAngle, endAngle, innerRadius, outerRadius)}
              fill={slice.color}
              onMouseMove={(e) =>
                setTooltip({
                  x: e.nativeEvent.offsetX,
                  y: e.nativeEvent.offsetY,
                  text: `${((slice.value / total) * 100).toFixed(2)}%`,
                })
              }
              onMouseLeave={() => setTooltip(null)}
              style={{ cursor: "pointer" }}
            />
          );
        })}
      </svg>
      {tooltip && (
        <div
          style={{
            position: "absolute",
            top: tooltip.y + 10,
            left: tooltip.x + 10,
            background: "rgba(0,0,0,0.75)",
            color: "white",
            padding: "4px 8px",
            borderRadius: "8px",
            pointerEvents: "none",
            fontSize: "12px",
            zIndex: 10,
          }}
        >
          {tooltip.text}
        </div>
      )}
    </div>
  );
};

export default CustomPieChart;
The purpose of Spring Modulith is to keep everything in the same deployable, since it's a monolithic application, using ArchUnit-based tests to validate architectural rules and boundaries, with events for communication between ApplicationModules.
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
This error happens because phone auth works differently on React Native than on web. On web you can call signInWithPhoneNumber directly, but on mobile it needs extra setup. If you're using the Expo managed workflow, you'll need expo-firebase-recaptcha for verification. If you're on a custom dev build or bare workflow, the easier option is @react-native-firebase/auth, which handles SMS sign-in natively. So your code isn't wrong; it's just that the method you're using only works on web unless you add the right setup for React Native.
// Defer heavy render to the next frame to avoid nav jank
const [ready, setReady] = React.useState(false);
React.useEffect(() => {
  let t;
  const task = InteractionManager.runAfterInteractions(() => {
    // Small timeout so the indicator is visible when the transition is very fast
    t = setTimeout(() => setReady(true), 50);
  });
  // Note: a cleanup function returned from runAfterInteractions is ignored,
  // so clear the timeout in the effect's own cleanup instead.
  return () => {
    task.cancel();
    clearTimeout(t);
  };
}, []);
solution grabbed from chatgpt5
To disable all AI features in the newest release (1.104.1), go to settings and set @id:chat.disableAIFeatures to true. This will immediately hide the chat panel, the status bar icon and all GitHub Copilot code completion features.
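In JSON settings that corresponds to (setting id taken from the answer above):

```json
{
  "chat.disableAIFeatures": true
}
```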

django-filter is built on top of Django's forms.Field, not DRF's serializers.Field, so you can't plug serializer fields in directly. There isn't a first-class "use serializer fields in filters" hook.
That leaves you with two options:
Wrap serializer fields in a forms.Field adapter (like your DRFFormFieldWrapper). This is a reasonable approach if you want to reuse the exact parsing/validation logic you already have in DRF fields. It keeps things DRY, but you’ll need to maintain the wrapper.
Implement the parsing at the forms layer (i.e. write a custom forms.Field for Jalali dates). This is the more idiomatic solution in the Django ecosystem, because filters conceptually belong to the forms layer, not the serializer layer.
If you care about maintainability and alignment with the rest of the Django stack, option #2 is the “best practice”. If avoiding duplication is more important and you’re comfortable with a thin adapter, option #1 is fine.
There’s no built-in way to bridge the two layers, so the choice depends on whether you want to stay idiomatic (forms-based) or DRY (wrapper-based).
Use a fixed-length tuple or vector where each position corresponds to a base unit (e.g., (length, time, mass, ...)). Define constants like:
METER = (1, 0, 0)
SECOND = (0, 1, 0)
METER_PER_SECOND = (1, -1, 0)
This makes operations predictable and easy to validate using basic vector math.
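A minimal sketch of this idea in Python (the `Dim` type and the three base units are illustrative names, not from any library): multiplying quantities adds the exponent vectors, dividing subtracts them.

```python
from typing import NamedTuple

class Dim(NamedTuple):
    """Exponents of the base units (length, time, mass)."""
    length: int
    time: int
    mass: int

    def __mul__(self, other):
        # Multiplying quantities adds their unit exponents.
        return Dim(*(a + b for a, b in zip(self, other)))

    def __truediv__(self, other):
        # Dividing quantities subtracts the exponents.
        return Dim(*(a - b for a, b in zip(self, other)))

METER = Dim(1, 0, 0)
SECOND = Dim(0, 1, 0)
METER_PER_SECOND = METER / SECOND

print(METER_PER_SECOND)  # Dim(length=1, time=-1, mass=0)
```

Since `Dim` is a tuple subclass, dimension checks reduce to plain equality comparisons.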
Turns out BTLS is stricter than others about certificates' metadata.
Starting from the fact that trust-self-signed.sh "https://self-signed.badssl.com:443" worked as expected, I modeled my certificate after the one used there: include organization & locality metadata, and omit X509EnhancedKeyUsageExtension and X509KeyUsageExtension. Worked like a charm.
Ended up with this:
public static X509Certificate2 BuildSelfSignedServerCertificate(string host)
{
    using RSA rsa = RSA.Create(2048);
    CertificateRequest request = CreateRequest(rsa, host);
    X509Certificate2 certificate = CreateCertificate(request);
    return certificate;

    static CertificateRequest CreateRequest(RSA rsa, string host)
    {
        X500DistinguishedNameBuilder distinguishedName = new();
        distinguishedName.AddOrganizationName("Myself");
        distinguishedName.AddLocalityName("Sofia");
        distinguishedName.AddStateOrProvinceName("Sofia");
        distinguishedName.AddCountryOrRegion("BG");
        distinguishedName.AddCommonName(host);

        SubjectAlternativeNameBuilder sanExtension = new();
        sanExtension.AddDnsName(host);

        X509BasicConstraintsExtension constraintsExtension = new(
            certificateAuthority: false,
            hasPathLengthConstraint: false,
            pathLengthConstraint: 0,
            critical: false
        );

        CertificateRequest request = new(distinguishedName.Build(), rsa, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        request.CertificateExtensions.Add(sanExtension.Build());
        request.CertificateExtensions.Add(constraintsExtension);
        return request;
    }

    static X509Certificate2 CreateCertificate(CertificateRequest request)
    {
        X509Certificate2 certificate = request.CreateSelfSigned(
            new DateTimeOffset(DateTime.UtcNow.AddDays(-30)),
            new DateTimeOffset(DateTime.UtcNow.AddDays(3650)) // ~10 years
        );
        string password = $"{Guid.NewGuid():N}";
        byte[] export = certificate.Export(X509ContentType.Pfx, password);
        X509Certificate2 result = X509CertificateLoader.LoadPkcs12(export, password);
        return result;
    }
}
To compare a pair of certificates, you can use openssl s_client -connect $HOST:$PORT to fetch one, save it as a .crt file, and open it with the viewer built into Windows or an online viewer.
from selenium.common.exceptions import WebDriverException

try:
    handles = drv.window_handles
    if handles:
        # Assume a single window: switch to the first handle.
        drv.switch_to.window(handles[0])
except WebDriverException:
    # Simple way to fold this into your code for a small service: the
    # exception fires when the window's URL can't be read.
    # Note: check that your server is still running.
    pass
Great try brother but I see exactly what’s happening.
The 503 Backend fetch failed error is almost never coming from WooCommerce itself - it's your server (PHP-FPM, Apache, or an Nginx proxy) timing out or choking when too many requests or heavy payloads come in quickly.
Here’s how to fix this systematically:
Before touching your code, check:
max_execution_time → at least 300
memory_limit → 512M or higher
max_input_vars → 5000+
post_max_size / upload_max_filesize → bigger than your JSON payload
(500 products × attributes = quite big!)
You can override in .htaccess or php.ini if your host allows:
max_execution_time = 300
memory_limit = 512M
max_input_vars = 10000
post_max_size = 64M
upload_max_filesize = 64M
Even though WooCommerce allows 100 products per batch, in practice chunk size 20–30 is safer when updating stock/price.
Change:
$chunks = array_chunk($products, 50);
to:
$chunks = array_chunk($products, 20); // safer for heavy sites
Instead of sleep() (which blocks PHP execution), you should queue the next request only after the previous AJAX response succeeds.
Example flow:
Upload CSV → store data in an option or transient
First AJAX call updates batch #1
When it completes, JavaScript triggers AJAX call for batch #2
Repeat until done
This avoids overloading PHP with one giant loop.
sendBatchRequest (retry + delay): Sometimes the WooCommerce REST API throttles requests. Add retry logic with exponential backoff:
private function sendBatchRequest($data) {
    $attempts = 0;
    $max_attempts = 3;
    $delay = 2; // seconds
    do {
        $ch = curl_init($this->apiUrl);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POST => true,
            CURLOPT_POSTFIELDS => json_encode($data),
            CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
            CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
            CURLOPT_TIMEOUT => 120,
        ]);
        $response = curl_exec($ch);
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        if ($httpCode >= 200 && $httpCode < 300) {
            return [
                'success' => true,
                'response' => json_decode($response, true),
                'http_code' => $httpCode
            ];
        }
        $attempts++;
        if ($attempts < $max_attempts) {
            sleep($delay);
            $delay *= 2; // exponential backoff
        }
    } while ($attempts < $max_attempts);
    return [
        'success' => false,
        'response' => json_decode($response, true),
        'http_code' => $httpCode
    ];
}
Here let me restructure your class so it processes products in batches via AJAX queue instead of looping all at once.
Here’s a production-ready rewrite (safe for 500–1000+ products):
<?php
/**
* Class StockUpdater
* Processes CSV file and updates WooCommerce products in batches
*/
class StockUpdater {
private $apiUrl;
private $apiKey;
private $apiSecret;
public function __construct($apiUrl, $apiKey, $apiSecret) {
$this->apiUrl = $apiUrl;
$this->apiKey = $apiKey;
$this->apiSecret = $apiSecret;
// AJAX hooks
add_action('wp_ajax_start_stock_update', [$this, 'ajaxStartStockUpdate']);
add_action('wp_ajax_process_stock_batch', [$this, 'ajaxProcessStockBatch']);
}
/**
* Parse CSV into product data
*/
private function parseCSV($csvFile) {
$products = [];
if (($handle = fopen($csvFile, 'r')) !== false) {
while (($data = fgetcsv($handle, 1000, ',')) !== false) {
$sku = trim($data[0]);
$id = wc_get_product_id_by_sku($sku);
if ($id) {
$products[] = [
'sku' => $sku,
'id' => $id,
'stock' => !empty($data[1]) ? (int) trim($data[1]) : 0,
'price' => !empty($data[2]) ? wc_format_decimal(str_replace(',', '.', trim($data[2]))) : 0,
];
}
}
fclose($handle);
}
return $products;
}
/**
* Start the update (first AJAX call)
*/
public function ajaxStartStockUpdate() {
check_ajax_referer('stock_update_nonce', 'security');
$csvFile = ABSPATH . 'wp-content/stock-update.csv'; // adjust path
$products = $this->parseCSV($csvFile);
if (empty($products)) {
wp_send_json_error(['message' => 'No products found in CSV']);
}
// Store products temporarily in transient
$batch_id = 'stock_update_' . time();
set_transient($batch_id, $products, HOUR_IN_SECONDS);
wp_send_json_success([
'batch_id' => $batch_id,
'total' => count($products),
]);
}
/**
* Process next batch (subsequent AJAX calls)
*/
public function ajaxProcessStockBatch() {
check_ajax_referer('stock_update_nonce', 'security');
$batch_id = sanitize_text_field($_POST['batch_id']);
$offset = intval($_POST['offset']);
$limit = 20; // products per batch (safe)
$products = get_transient($batch_id);
if (!$products) {
wp_send_json_error(['message' => 'Batch expired or not found']);
}
$chunk = array_slice($products, $offset, $limit);
if (empty($chunk)) {
delete_transient($batch_id);
wp_send_json_success(['done' => true]);
}
$data = ['update' => []];
foreach ($chunk as $product) {
$data['update'][] = [
'id' => $product['id'],
'sku' => $product['sku'],
'stock_quantity' => $product['stock'],
'regular_price' => $product['price'],
];
}
$response = $this->sendBatchRequest($data);
wp_send_json_success([
'done' => false,
'next' => $offset + $limit,
'response' => $response,
'remaining' => max(0, count($products) - ($offset + $limit)),
]);
}
/**
* Send batch request to WC REST API with retry logic
*/
private function sendBatchRequest($data) {
$attempts = 0;
$max_attempts = 3;
$delay = 2;
do {
$ch = curl_init($this->apiUrl);
curl_setopt_array($ch, [
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POST => true,
CURLOPT_POSTFIELDS => json_encode($data),
CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
CURLOPT_TIMEOUT => 120,
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode >= 200 && $httpCode < 300) {
return json_decode($response, true);
}
$attempts++;
if ($attempts < $max_attempts) {
sleep($delay);
$delay *= 2;
}
} while ($attempts < $max_attempts);
return ['error' => 'Request failed', 'http_code' => $httpCode];
}
}
jQuery(document).ready(function ($) {
$('#start-stock-update').on('click', function () {
$.post(ajaxurl, {
action: 'start_stock_update',
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
processBatch(response.data.batch_id, 0, response.data.total);
} else {
alert(response.data.message);
}
});
});
function processBatch(batch_id, offset, total) {
$.post(ajaxurl, {
action: 'process_stock_batch',
batch_id: batch_id,
offset: offset,
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
if (response.data.done) {
alert('Stock update complete!');
} else {
let remaining = response.data.remaining;
console.log(`Processed ${offset + 20} of ${total}. Remaining: ${remaining}`);
processBatch(batch_id, response.data.next, total);
}
} else {
alert(response.data.message);
}
});
}
});
wp_enqueue_script('stock-update', plugin_dir_url(__FILE__) . 'stock-update.js', ['jquery'], null, true);
wp_localize_script('stock-update', 'stockUpdate', [
'nonce' => wp_create_nonce('stock_update_nonce'),
]);
And here I build you a ready to use mini plugin that does exactly this:
Adds a menu in WooCommerce → Stock Updater
Lets you upload a CSV (sku, stock, price)
Shows a “Start Update” button
Runs the AJAX queue processor to update products in safe batches
<?php
/**
* Plugin Name: WooCommerce Stock Updater (CSV)
* Description: Upload a CSV (sku, stock, price) and batch update products safely via WooCommerce REST API.
* Version: 1.0
* Author: Jer Salam
*/
if (!defined('ABSPATH')) exit;
class WC_Stock_Updater {
private $apiUrl;
private $apiKey;
private $apiSecret;
public function __construct() {
$this->apiUrl = home_url('/wp-json/wc/v3/products/batch');
$this->apiKey = get_option('woocommerce_api_consumer_key');
$this->apiSecret = get_option('woocommerce_api_consumer_secret');
add_action('admin_menu', [$this, 'add_menu']);
add_action('admin_enqueue_scripts', [$this, 'enqueue_scripts']);
// AJAX
add_action('wp_ajax_start_stock_update', [$this, 'ajaxStartStockUpdate']);
add_action('wp_ajax_process_stock_batch', [$this, 'ajaxProcessStockBatch']);
}
public function add_menu() {
add_submenu_page(
'woocommerce',
'Stock Updater',
'Stock Updater',
'manage_woocommerce',
'wc-stock-updater',
[$this, 'render_admin_page']
);
}
public function enqueue_scripts($hook) {
if ($hook !== 'woocommerce_page_wc-stock-updater') return;
wp_enqueue_script('wc-stock-updater', plugin_dir_url(__FILE__) . 'stock-update.js', ['jquery'], '1.0', true);
wp_localize_script('wc-stock-updater', 'stockUpdate', [
'nonce' => wp_create_nonce('stock_update_nonce'),
'ajaxurl' => admin_url('admin-ajax.php'),
]);
}
public function render_admin_page() {
?>
<div class="wrap">
<h1>WooCommerce Stock Updater</h1>
<form method="post" enctype="multipart/form-data">
<?php wp_nonce_field('wc_stock_upload', 'wc_stock_nonce'); ?>
<input type="file" name="stock_csv" accept=".csv" required>
<input type="submit" name="upload_csv" class="button button-primary" value="Upload CSV">
</form>
<?php
if (isset($_POST['upload_csv']) && check_admin_referer('wc_stock_upload', 'wc_stock_nonce')) {
if (!empty($_FILES['stock_csv']['tmp_name'])) {
$upload_dir = wp_upload_dir();
$csv_path = $upload_dir['basedir'] . '/stock-update.csv';
move_uploaded_file($_FILES['stock_csv']['tmp_name'], $csv_path);
echo '<p><strong>CSV uploaded successfully.</strong></p>';
echo '<button id="start-stock-update" class="button button-primary">Start Update</button>';
}
}
?>
<div id="stock-update-log" style="margin-top:20px; font-family: monospace;"></div>
</div>
<?php
}
private function parseCSV($csvFile) {
$products = [];
if (($handle = fopen($csvFile, 'r')) !== false) {
while (($data = fgetcsv($handle, 1000, ',')) !== false) {
$sku = trim($data[0]);
$id = wc_get_product_id_by_sku($sku);
if ($id) {
$products[] = [
'sku' => $sku,
'id' => $id,
'stock' => !empty($data[1]) ? (int) trim($data[1]) : 0,
'price' => !empty($data[2]) ? wc_format_decimal(str_replace(',', '.', trim($data[2]))) : 0,
];
}
}
fclose($handle);
}
return $products;
}
public function ajaxStartStockUpdate() {
check_ajax_referer('stock_update_nonce', 'security');
$upload_dir = wp_upload_dir();
$csvFile = $upload_dir['basedir'] . '/stock-update.csv';
$products = $this->parseCSV($csvFile);
if (empty($products)) {
wp_send_json_error(['message' => 'No products found in CSV']);
}
$batch_id = 'stock_update_' . time();
set_transient($batch_id, $products, HOUR_IN_SECONDS);
wp_send_json_success([
'batch_id' => $batch_id,
'total' => count($products),
]);
}
public function ajaxProcessStockBatch() {
check_ajax_referer('stock_update_nonce', 'security');
$batch_id = sanitize_text_field($_POST['batch_id']);
$offset = intval($_POST['offset']);
$limit = 20;
$products = get_transient($batch_id);
if (!$products) {
wp_send_json_error(['message' => 'Batch expired or not found']);
}
$chunk = array_slice($products, $offset, $limit);
if (empty($chunk)) {
delete_transient($batch_id);
wp_send_json_success(['done' => true]);
}
$data = ['update' => []];
foreach ($chunk as $product) {
$data['update'][] = [
'id' => $product['id'],
'sku' => $product['sku'],
'stock_quantity' => $product['stock'],
'regular_price' => $product['price'],
];
}
$response = $this->sendBatchRequest($data);
wp_send_json_success([
'done' => false,
'next' => $offset + $limit,
'response' => $response,
'remaining' => max(0, count($products) - ($offset + $limit)),
]);
}
private function sendBatchRequest($data) {
$attempts = 0;
$max_attempts = 3;
$delay = 2;
do {
$ch = curl_init($this->apiUrl);
curl_setopt_array($ch, [
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POST => true,
CURLOPT_POSTFIELDS => json_encode($data),
CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
CURLOPT_TIMEOUT => 120,
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode >= 200 && $httpCode < 300) {
return json_decode($response, true);
}
$attempts++;
if ($attempts < $max_attempts) {
sleep($delay);
$delay *= 2;
}
} while ($attempts < $max_attempts);
return ['error' => 'Request failed', 'http_code' => $httpCode];
}
}
new WC_Stock_Updater();
jQuery(document).ready(function ($) {
$('#start-stock-update').on('click', function () {
$('#stock-update-log').html('<p>Starting stock update...</p>');
$.post(stockUpdate.ajaxurl, {
action: 'start_stock_update',
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
processBatch(response.data.batch_id, 0, response.data.total);
} else {
$('#stock-update-log').append('<p style="color:red;">' + response.data.message + '</p>');
}
});
});
function processBatch(batch_id, offset, total) {
$.post(stockUpdate.ajaxurl, {
action: 'process_stock_batch',
batch_id: batch_id,
offset: offset,
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
if (response.data.done) {
$('#stock-update-log').append('<p style="color:green;">Stock update complete!</p>');
} else {
let processed = offset + 20;
$('#stock-update-log').append('<p>Processed ' + processed + ' of ' + total + ' products. Remaining: ' + response.data.remaining + '</p>');
processBatch(batch_id, response.data.next, total);
}
} else {
$('#stock-update-log').append('<p style="color:red;">' + response.data.message + '</p>');
}
});
}
});
Upload the plugin folder (wc-stock-updater) with both files:
wc-stock-updater.php
stock-update.js
Activate it in WP Admin.
Go to WooCommerce → Stock Updater.
Upload your CSV (sku, stock, price).
Click Start Update.
It will process 20 products per batch until all are done without 503 errors.
Cheers Brother.
In my case I had to change from @pytest.fixture to @pytest_asyncio.fixture and it just worked. Of course, keep in mind the tests should be annotated with @pytest.mark.asyncio, and don't forget to install pytest-asyncio.
You can also set the encoding when creating the table's database, which saves extra effort later:
CREATE DATABASE my_database WITH ENCODING 'UTF8';
Are you creating a container for the Spring Boot application too, or is it running locally?
py38 was deprecated a year ago and none of the packages are pinned, so it is expected that Python environment materialization will fail. The base environment/image was also deprecated a while back. As a side note, there is not much value in using a curated environment image for a system-managed environment: it creates an isolated environment, so preinstalled dependencies won't be available. If you just need to install a few dependencies, like optuna, in the existing environment, just do:
FROM mcr.microsoft.com/azureml/curated/acpt-pytorch-1.11-py38-cuda11.3-gpu:9
RUN pip install optuna=={some_compatible_version}
This is the formula in cell G9. The formula is filled up and confirmed with Ctrl+Shift+Enter because I work with legacy Excel 2013. Not 100% sure if this does what you want, but that's how I understand the task.
=SUM(COUNTIFS($B$2:$B$18,F9,$C$2:$C$18,IF($B$2:$B$18=E9,$C$2:$C$18)))
Not really an answer to the original question, but if someone wants a new paragraph starting with some tabulators, this is the only way I found. Basically, to simulate tabulators I had to insert the ASCII 173 character followed by a space and repeat that a few times:

The Google Picker cannot filter by “files created by my app” (drive.file) out of the box. The Picker’s Drive view does not support Drive v3 query syntax (like appProperties) nor any “creatorAppId” filter. The setAppId and setQuery you tried won’t achieve this.
What you can do instead:
Use a dedicated folder for your app’s files, then point the Picker to that folder.
When you create the spreadsheet, place it in a known folder (e.g., “MyApp Sheets”).
In the Picker, use DocsView.setParent(folderId) so users only see files inside that folder.
Found the solution by digging through this sample project https://developer.apple.com/documentation/RealityKit/composing-interactive-3d-content-with-realitykit-and-reality-composer-pro
Note that the accompanying WWDC video presents a different method, which throws compiler errors, so ignore that. Thanks Apple!
struct MyAugmentedView: View {
    private let notificationTrigger = NotificationCenter.default
        .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

    var body: some View {
        // Add the following modifier to your RealityView or ARView:
        .onReceive(notificationTrigger) { output in
            guard let notificationName = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            else { return }

            switch notificationName {
            case "MyFirstNotificationIdentifier":
                // code to run when this notification is received
                break
            case "MySecondNotificationIdentifier":
                // etc.
                break
            default:
                return
            }
        }
    }
}
Your postmeta is huge brother
Before HPOS, WooCommerce stored orders as post_type = shop_order in the wp_posts table and all order data in wp_postmeta.
Now with HPOS, orders live in:
wp_wc_orders
wp_wc_order_addresses
wp_wc_order_operational_data
wp_wc_orders_meta
But WooCommerce does not automatically delete the old shop_order posts or their postmeta (for backward compatibility). That’s why your wp_postmeta is still bloated.
1. Count the old order posts in wp_posts:
SELECT COUNT(*)
FROM wp_posts
WHERE post_type = 'shop_order';
If you’ve fully migrated to HPOS, you don’t need these anymore.
2. Remove old order postmeta. This query deletes all postmeta records tied to old shop_order posts:
DELETE pm
FROM wp_postmeta pm
INNER JOIN wp_posts p ON pm.post_id = p.ID
WHERE p.post_type = 'shop_order';
3. Remove old order posts
DELETE FROM wp_posts
WHERE post_type = 'shop_order';
4. Optimize the table
OPTIMIZE TABLE wp_postmeta;
Staged cleanup (optional):
Because you've got 9M+ rows, deleting everything in one go can lock tables and time out.
Do it in batches. Note that MySQL does not allow LIMIT in a multi-table DELETE, so use a subquery form instead:
DELETE FROM wp_postmeta
WHERE post_id IN (
    SELECT ID FROM wp_posts
    WHERE post_type = 'shop_order'
)
LIMIT 50000;
Run multiple times until rows are gone. Cheers.
IMO, the most flexible way is to use css variables with a potential default value
<symbol id="thing">
  <!-- falls back to red when the variable is not set -->
  <circle r="10" cx="10" cy="10" fill="var(--fill, red)"/>
</symbol>

<svg style="--fill: blue">
  <use xlink:href="sprite.svg#thing"/>
</svg>
The simplest way to preview an HTML file inside a VS Code tab in GitHub Codespaces is to use the Live Preview extension by Microsoft.
Open the Extensions panel in VS Code.
Search for Live Preview (Microsoft) and install it.
Restart VS Code (if needed).
Open your index.html file. You’ll now see a “Live Preview” icon in the top-right corner of the editor.
Click it, and your HTML file will be rendered directly inside a VS Code tab.
This avoids switching to an external browser and lets you preview your page right inside Codespaces.
To fix recurrence rules in ics.js, update your rRule object to use uppercase freq values (e.g., 'WEEKLY', 'MONTHLY') as required by the library. So: fd.get("freq").toUpperCase()