This issue occurred due to an outdated SDK in the app. It is likely that some support for Objective-C was removed in iOS 26, resulting in the crash.
I also struggled to set up Tailwind/PostCSS at first, then worked out these steps by combining the documentation with YouTube tutorials. Follow them in order and it should work.
The new Tailwind CSS v4+ supports automatic configuration (learn more in the documentation).
We don't have tailwind.config.js and postcss.config.js anymore.
Start a fresh new app.
Note: use the Command Prompt (CMD) terminal inside your code editor, not PowerShell, Git Bash, or others. I hit no errors doing it this way.
npm create vite@latest my-app -- --template <template_name>
Eg. For React:
npm create vite@latest my-app -- --template react
cd my-app
code -r my-app
Opens the app in the current VS Code window (-r reuses the last active window).
npm install tailwindcss @tailwindcss/vite
Confirm the installation via package.json
"dependencies": {
"@tailwindcss/vite": "^4.1.11",
"react": "^19.1.0",
"react-dom": "^19.1.0",
"tailwindcss": "^4.1.11"
}
(With PostCss)
npm install -D @tailwindcss/postcss
and
npm install tailwindcss @tailwindcss/vite
Confirm the installation via package.json
"devDependencies": {
"@eslint/js": "^9.35.0",
"@tailwindcss/postcss": "^4.1.13",
"@types/react": "^19.1.13",
"@types/react-dom": "^19.1.9",
"@vitejs/plugin-react": "^5.0.2",
"eslint": "^9.35.0",
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-react-refresh": "^0.4.20",
"globals": "^16.4.0",
"vite": "^7.1.6"
}
vite.config.js
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite' // 1. import the plugin
// https://vite.dev/config/
export default defineConfig({
plugins: [
tailwindcss(), // 2. register the plugin
react()
],
})
And press CTRL + S to save.
project/src/index.css
@import "tailwindcss";
:root {
font-family: system-ui, Avenir, Helvetica, Arial, sans-serif;
/* ... remove the rest of this default Vite CSS ... */
}
In src/App.jsx (you can scaffold it with the rfc snippet if you use the ES7 React snippets extension):
export default function App() {
return (
<h1 className='text-lg font-bold underline text-red-500'>
YB-Tutorials
</h1>
)
}
npm run dev
Visit http://localhost:5173/
Done...!
For people following the NestJS Passport tutorial who end up with this issue: what fixed it for me was adding the secret as an option again.
this.jwtService.sign(payload, {
secret: jwtConstants.secret
})
Based on recent open-source benchmarks (https://github.com/chrisgleissner/loom-webflux-benchmarks), virtual threads consistently achieved the same or even better performance than Project Reactor. This indicates that if your main concern is simply overcoming performance bottlenecks related to thread overhead or blocking I/O, the answer is yes: virtual threads alone are often sufficient and provide a simpler programming model.
But the reactive programming model brings benefits beyond reducing thread usage. Frameworks like Project Reactor are inherently event-driven, which provides strong support for streaming, backpressure, and composing asynchronous pipelines.
The model itself—not just the performance—is a key advantage. A concrete example is the recent wave of AI chatbot applications, which must handle massive numbers of concurrent requests, integrate with third-party APIs during a conversation, and stream partial responses back to users in real time. With Reactor, this can be naturally implemented using Flux and Sinks, while with virtual threads you would need to manually manage event emission, which is less straightforward.
Change formFields to be IEnumerable<FormItem>. The default router and JSON parser don't treat a JSON array as a .NET array.
This code doesn't have that issue. Does it solve your problem?
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Demo</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
dialog {
position: relative;
border: 1px solid black; /* Default is 2px. */
}
dialog::backdrop {
background-color: salmon;
}
#closeButton {
font-size: 1.5em;
line-height: .75em;
position: absolute;
right: 0;
top: 0;
border-left: 1px solid black;
border-bottom: 1px solid black;
padding: 3px;
}
#closeButton:hover {
cursor: pointer;
background-color: black;
color: white;
}
</style>
</head>
<body>
<dialog>
<div id="closeButton" onclick="this.parentNode.close()">×</div>
<p>This is a dialog box.</p>
</dialog>
<script>
document.getElementsByTagName('dialog')[0].showModal(); // Or show() for a non-modal display.
</script>
</body>
</html>
I went through the same situation.
If there is no org policy, you must create it.
I initially thought this was an organization policy because a default item had been created for some reason, but it wasn't.
I found that my logs showed up after changing my device. For example, they weren't showing on my Sony Xperia, but they were showing on my OnePlus 8.
Any solution for this yet?
I have tried validating the locales, but it still did not help.
Does this apply to the old v1 and v2 versions too, or only to G2?
From my experience using Polars, I can say that Polars will not load the entire Parquet file into memory if you only select a few columns; it does column pruning under the hood.
But if you use .collect() without care, Polars will try to materialize all rows of those columns at once, and that can exhaust RAM on huge data.
When you need to work on very large datasets, use the following:
· Use scan_parquet (lazy mode) with filters before .collect().
· Use streaming=True as you did, but combine it with filters/aggregations so Polars does not need to hold everything in memory at once.
I was dealing with the same issue and just found the cause: a syntax error in one of the nodes. Make sure your code works before you call the Image function.
Changing the defconfig manually or via menuconfig lets you select I2C GPIO and the I2C GPIO Fault Injector, so the kernel knows about the driver, but it does not create any bus instance or tell the kernel which pins to use.
You need a device tree node that tells the kernel "make a software I2C bus using these GPIO pins." The i2c-gpio driver looks for a specific node in the device tree to learn which GPIO pins to use for the I²C bus. Without this node, the driver has no bus to attach to.
This node must be added to your AST2600's specific .dts file or one of the .dtsi files it includes.
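A minimal sketch of such a node, following the standard i2c-gpio binding (the GPIO controller phandle, pin numbers, and delay below are placeholders you must adapt to your board):

```dts
/* Requires #include <dt-bindings/gpio/gpio.h> for the flag macros. */
i2c_gpio0: i2c-gpio-bus {
    compatible = "i2c-gpio";
    /* Placeholder pins: replace &gpio0 and the pin numbers with your board's. */
    sda-gpios = <&gpio0 5 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
    scl-gpios = <&gpio0 6 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
    i2c-gpio,delay-us = <2>;    /* roughly 100 kHz */
    #address-cells = <1>;
    #size-cells = <0>;
};
```

Once this node is present, the i2c-gpio driver binds to it and a new /dev/i2c-N bus appears.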
I've just fixed it. The whole problem was the dynamic allocation of page tables at non-mapped addresses. My solution was to statically allocate an array large enough for all possible page tables plus the page directory. It's not clean, but it works.
I had to run the web app as an admin to be able to launch ChromeDriver:
create a new local user account with admin rights
assign that account to the application pool of the IIS web site
In new versions of Android Studio, the shortcut to delete the current line on Windows is Shift + Delete.
The width/height attributes give the browser the image's natural aspect ratio up front, preventing layout shift. contain-intrinsic-size is only used as a fallback size before the real dimensions load, so using both keeps the layout stable in all cases.
The issue is caused by your component missing the React ref property. In my case I had a customized <FlexCol> component that did not have it. After I added support for the ref property (just passing it through to the top-level div), the issue was fixed. Moral: never strip vitally important React properties (key, ref, maybe more?) from your custom components.
type FlexColProps = {
id?: string;
key?: string | number;
ref?: React.Ref<HTMLDivElement>; // ref was missing and causing the scrollTop error!
...
}
Thanks to Tom Cools' suggestion, I found out how to do it. In the resource class:
@Inject
private ConstraintMetaModel constraintMetaModel;
@GET
@Path("constraints")
@Produces(MediaType.APPLICATION_JSON)
public Collection<String> listConstraintNames() {
return constraintMetaModel.getConstraints().stream().map(Constraint::getConstraintRef)
.map(ConstraintRef::constraintName).collect(Collectors.toSet());
}
Here is a custom implementation:
import React, { useState } from "react";
type PieSlice = {
name: string;
value: number;
color: string;
};
interface CustomPieChartProps {
innerRadius?: number;
outerRadius?: number;
gapAngle?: number; // Gap between slices in degrees
data: PieSlice[];
}
const CustomPieChart: React.FC<CustomPieChartProps> = ({
innerRadius = 35,
outerRadius = 60,
gapAngle = 18,
data
}) => {
const total = data.reduce((acc, item) => acc + item.value, 0);
let cumulativeAngle = -90; // Start from top (12 o'clock position)
const [tooltip, setTooltip] = useState<{ x: number; y: number; text: string } | null>(null);
// Function to create donut slice path with asymmetric curved ends
const createSlice = (startAngle: number, endAngle: number, innerR: number, outerR: number) => {
const rad = Math.PI / 180;
const capRadius = 12; // Radius for the rounded caps
const endCapRadius = capRadius + 10; // Radius for the rounded caps at the end
// Adjust angles to account for the curved caps
const adjustedStartAngle = startAngle + gapAngle / 2;
const adjustedEndAngle = endAngle - (gapAngle) / 2;
// Outer arc points
const x1Outer = outerR + outerR * Math.cos(-adjustedStartAngle * rad);
const y1Outer = outerR + outerR * Math.sin(-adjustedStartAngle * rad);
const x2Outer = outerR + outerR * Math.cos(-adjustedEndAngle * rad);
const y2Outer = outerR + outerR * Math.sin(-adjustedEndAngle * rad);
// Inner arc points
const x1Inner = outerR + innerR * Math.cos(-adjustedEndAngle * rad);
const y1Inner = outerR + innerR * Math.sin(-adjustedEndAngle * rad);
const x2Inner = outerR + innerR * Math.cos(-adjustedStartAngle * rad);
const y2Inner = outerR + innerR * Math.sin(-adjustedStartAngle * rad);
const largeArcFlag = adjustedEndAngle - adjustedStartAngle > 180 ? 1 : 0;
return `
M${x1Outer},${y1Outer}
A${outerR},${outerR} 0 ${largeArcFlag} 0 ${x2Outer},${y2Outer}
A${capRadius},${capRadius} 0 0 0 ${x1Inner},${y1Inner}
A${innerR},${innerR} 0 ${largeArcFlag} 1 ${x2Inner},${y2Inner}
A${endCapRadius},${endCapRadius} 0 0 1 ${x1Outer},${y1Outer}
Z
`;
};
return (
<div style={{ position: "relative", width: outerRadius * 2, height: outerRadius * 2 }}>
<svg width={outerRadius * 2} height={outerRadius * 2}>
{data.map((slice) => {
const startAngle = cumulativeAngle;
const angle = (slice.value / total) * 360;
cumulativeAngle += angle;
const endAngle = cumulativeAngle;
return (
<path
key={slice.name}
d={createSlice(startAngle, endAngle, innerRadius, outerRadius)}
fill={slice.color}
onMouseMove={(e) =>
setTooltip({
x: e.nativeEvent.offsetX,
y: e.nativeEvent.offsetY,
text: `${(slice.value / total * 100).toFixed(2)}%`,
})
}
onMouseLeave={() => setTooltip(null)}
style={{ cursor: "pointer" }}
/>
);
})}
</svg>
{tooltip && (
<div
style={{
position: "absolute",
top: tooltip.y + 10,
left: tooltip.x + 10,
background: "rgba(0,0,0,0.75)",
color: "white",
padding: "4px 8px",
borderRadius: "8px",
pointerEvents: "none",
fontSize: "12px",
zIndex: 10,
}}
>
{tooltip.text}
</div>
)}
</div>
);
};
export default CustomPieChart;
The purpose of Spring Modulith is to keep everything in the same deployable unit, since it's a monolithic application, while using ArchUnit-style tests to validate architectural rules and module boundaries, with events for communication between ApplicationModules.
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
This error happens because phone auth works differently on React Native than on the web. On the web you can call signInWithPhoneNumber directly, but on mobile it needs extra setup. If you're using the Expo managed workflow, you'll need expo-firebase-recaptcha for verification. If you're on a custom dev build or the bare workflow, the easier option is @react-native-firebase/auth, which handles SMS sign-in natively. So your code isn't wrong; the method you're using only works on the web unless you add the right setup for React Native.
// Defer heavy render to the next frame to avoid nav jank
const [ready, setReady] = React.useState(false);
React.useEffect(() => {
  let timer;
  const task = InteractionManager.runAfterInteractions(() => {
    // Small timeout so the indicator is visible when the transition is very fast
    timer = setTimeout(() => setReady(true), 50);
  });
  // Clean up both the interaction handle and the timer
  // (a cleanup returned from inside runAfterInteractions would be ignored)
  return () => {
    task.cancel();
    if (timer) clearTimeout(timer);
  };
}, []);
(Solution adapted from ChatGPT.)
To disable all AI features in the newest release (1.104.1), go to settings and set @id:chat.disableAIFeatures to true. This will immediately hide the chat panel, the status bar icon and all GitHub Copilot code completion features.

django-filter is built on top of Django’s forms.fields , not DRF’s serializers.Field, so you can’t plug serializer fields in directly. There isn’t a first-class “use serializer fields in filters” hook.
That leaves you with two options:
Wrap serializer fields in a forms.Field adapter (like your DRFFormFieldWrapper). This is a reasonable approach if you want to reuse the exact parsing/validation logic you already have in DRF fields. It keeps things DRY, but you’ll need to maintain the wrapper.
Implement the parsing at the forms layer (i.e. write a custom forms.Field for Jalali dates). This is the more idiomatic solution in the Django ecosystem, because filters conceptually belong to the forms layer, not the serializer layer.
If you care about maintainability and alignment with the rest of the Django stack, option #2 is the “best practice”. If avoiding duplication is more important and you’re comfortable with a thin adapter, option #1 is fine.
There’s no built-in way to bridge the two layers, so the choice depends on whether you want to stay idiomatic (forms-based) or DRY (wrapper-based).
Use a fixed-length tuple or vector where each position corresponds to a base unit (e.g., (length, time, mass, ...)). Define constants like:
METER = (1, 0, 0)
SECOND = (0, 1, 0)
METER_PER_SECOND = (1, -1, 0)
This makes operations predictable and easy to validate using basic vector math.
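A minimal stdlib sketch of this scheme (the unit names and helper functions here are illustrative, not from any particular library):

```python
# Exponent positions: (length, time, mass).
METER = (1, 0, 0)
SECOND = (0, 1, 0)
KILOGRAM = (0, 0, 1)

def mul_units(a, b):
    # Multiplying two quantities adds their unit exponents.
    return tuple(x + y for x, y in zip(a, b))

def div_units(a, b):
    # Dividing two quantities subtracts the unit exponents.
    return tuple(x - y for x, y in zip(a, b))

METER_PER_SECOND = div_units(METER, SECOND)         # (1, -1, 0)
ACCELERATION = div_units(METER_PER_SECOND, SECOND)  # (1, -2, 0)
NEWTON = mul_units(KILOGRAM, ACCELERATION)          # kg·m/s² -> (1, -2, 1)

def assert_same_units(a, b):
    # Addition/subtraction is only valid between identical unit vectors.
    if a != b:
        raise ValueError(f"unit mismatch: {a} vs {b}")
```

Because every unit is just an exponent vector, checking a formula's dimensions reduces to tuple comparisons.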
Turns out BTLS is stricter than others about certificates' metadata.
Starting from the fact that trust-self-signed.sh "https://self-signed.badssl.com:443" worked as expected, I modeled my certificate after the one used there: I included organization & locality metadata and left out the X509EnhancedKeyUsageExtension and X509KeyUsageExtension, and it worked like a charm.
Ended up with this:
public static X509Certificate2 BuildSelfSignedServerCertificate(string host)
{
using RSA rsa = RSA.Create(2048);
CertificateRequest request = CreateRequest(rsa, host);
X509Certificate2 certificate = CreateCertificate(request);
return certificate;
static CertificateRequest CreateRequest(RSA rsa, string host)
{
X500DistinguishedNameBuilder distinguishedName = new();
distinguishedName.AddOrganizationName("Myself");
distinguishedName.AddLocalityName("Sofia");
distinguishedName.AddStateOrProvinceName("Sofia");
distinguishedName.AddCountryOrRegion("BG");
distinguishedName.AddCommonName(host);
SubjectAlternativeNameBuilder sanExtension = new();
sanExtension.AddDnsName(host);
X509BasicConstraintsExtension constraintsExtension = new(
certificateAuthority: false,
hasPathLengthConstraint: false,
pathLengthConstraint: 0,
critical: false
);
CertificateRequest request = new(distinguishedName.Build(), rsa, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
request.CertificateExtensions.Add(sanExtension.Build());
request.CertificateExtensions.Add(constraintsExtension);
return request;
}
static X509Certificate2 CreateCertificate(CertificateRequest request)
{
X509Certificate2 certificate = request.CreateSelfSigned(
new DateTimeOffset(DateTime.UtcNow.AddDays(-30)),
new DateTimeOffset(DateTime.UtcNow.AddDays(365_0))
);
string password = $"{Guid.NewGuid():N}";
byte[] export = certificate.Export(X509ContentType.Pfx, password);
X509Certificate2 result = X509CertificateLoader.LoadPkcs12(export, password);
return result;
}
}
To compare a pair of certificates, one can use openssl s_client -connect $HOST:$PORT to fetch the remote certificate, save it as a .crt file, and open it with the viewer built into Windows or an online viewer.
try:
    handles = drv.window_handles
    if handles:
        drv.switch_to.window(handles[0])  # assume one window
except WebDriverException:  # from selenium.common.exceptions import WebDriverException
    # Simple to drop into a small service: the exception is raised when the
    # window's URL can't be read. Note: check that your server is running.
    pass
Good try, brother, but I see exactly what's happening.
The 503 Backend fetch failed error almost never comes from WooCommerce itself; it's your server (PHP-FPM, Apache, or an Nginx proxy) timing out or choking when too many requests or heavy payloads come in quickly.
Here's how to fix this systematically.
Before touching your code, check:
max_execution_time → at least 300
memory_limit → 512M or higher
max_input_vars → 5000+
post_max_size / upload_max_filesize → bigger than your JSON payload
(500 products × attributes = quite big!)
You can override in .htaccess or php.ini if your host allows:
max_execution_time = 300
memory_limit = 512M
max_input_vars = 10000
post_max_size = 64M
upload_max_filesize = 64M
Even though WooCommerce allows 100 products per batch, in practice chunk size 20–30 is safer when updating stock/price.
Change:
$chunks = array_chunk($products, 50);
to:
$chunks = array_chunk($products, 20); // safer for heavy sites
Instead of sleep() (which blocks PHP execution), you should queue the next request only after the previous AJAX response succeeds.
Example flow:
Upload CSV → store data in an option or transient
First AJAX call updates batch #1
When it completes, JavaScript triggers AJAX call for batch #2
Repeat until done
This avoids overloading PHP with one giant loop.
sendBatchRequest (retry + delay): sometimes the WooCommerce REST API throttles requests. Add retry logic with exponential backoff:
private function sendBatchRequest($data) {
$attempts = 0;
$max_attempts = 3;
$delay = 2; // seconds
do {
$ch = curl_init($this->apiUrl);
curl_setopt_array($ch, [
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POST => true,
CURLOPT_POSTFIELDS => json_encode($data),
CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
CURLOPT_TIMEOUT => 120,
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode >= 200 && $httpCode < 300) {
return [
'success' => true,
'response' => json_decode($response, true),
'http_code' => $httpCode
];
}
$attempts++;
if ($attempts < $max_attempts) {
sleep($delay);
$delay *= 2; // exponential backoff
}
} while ($attempts < $max_attempts);
return [
'success' => false,
'response' => json_decode($response, true),
'http_code' => $httpCode
];
}
Now let me restructure your class so it processes products in batches via an AJAX queue instead of looping over everything at once.
Here's a production-ready rewrite (safe for 500–1000+ products):
<?php
/**
* Class StockUpdater
* Processes CSV file and updates WooCommerce products in batches
*/
class StockUpdater {
private $apiUrl;
private $apiKey;
private $apiSecret;
public function __construct($apiUrl, $apiKey, $apiSecret) {
$this->apiUrl = $apiUrl;
$this->apiKey = $apiKey;
$this->apiSecret = $apiSecret;
// AJAX hooks
add_action('wp_ajax_start_stock_update', [$this, 'ajaxStartStockUpdate']);
add_action('wp_ajax_process_stock_batch', [$this, 'ajaxProcessStockBatch']);
}
/**
* Parse CSV into product data
*/
private function parseCSV($csvFile) {
$products = [];
if (($handle = fopen($csvFile, 'r')) !== false) {
while (($data = fgetcsv($handle, 1000, ',')) !== false) {
$sku = trim($data[0]);
$id = wc_get_product_id_by_sku($sku);
if ($id) {
$products[] = [
'sku' => $sku,
'id' => $id,
'stock' => !empty($data[1]) ? (int) trim($data[1]) : 0,
'price' => !empty($data[2]) ? wc_format_decimal(str_replace(',', '.', trim($data[2]))) : 0,
];
}
}
fclose($handle);
}
return $products;
}
/**
* Start the update (first AJAX call)
*/
public function ajaxStartStockUpdate() {
check_ajax_referer('stock_update_nonce', 'security');
$csvFile = ABSPATH . 'wp-content/stock-update.csv'; // adjust path
$products = $this->parseCSV($csvFile);
if (empty($products)) {
wp_send_json_error(['message' => 'No products found in CSV']);
}
// Store products temporarily in transient
$batch_id = 'stock_update_' . time();
set_transient($batch_id, $products, HOUR_IN_SECONDS);
wp_send_json_success([
'batch_id' => $batch_id,
'total' => count($products),
]);
}
/**
* Process next batch (subsequent AJAX calls)
*/
public function ajaxProcessStockBatch() {
check_ajax_referer('stock_update_nonce', 'security');
$batch_id = sanitize_text_field($_POST['batch_id']);
$offset = intval($_POST['offset']);
$limit = 20; // products per batch (safe)
$products = get_transient($batch_id);
if (!$products) {
wp_send_json_error(['message' => 'Batch expired or not found']);
}
$chunk = array_slice($products, $offset, $limit);
if (empty($chunk)) {
delete_transient($batch_id);
wp_send_json_success(['done' => true]);
}
$data = ['update' => []];
foreach ($chunk as $product) {
$data['update'][] = [
'id' => $product['id'],
'sku' => $product['sku'],
'stock_quantity' => $product['stock'],
'regular_price' => $product['price'],
];
}
$response = $this->sendBatchRequest($data);
wp_send_json_success([
'done' => false,
'next' => $offset + $limit,
'response' => $response,
'remaining' => max(0, count($products) - ($offset + $limit)),
]);
}
/**
* Send batch request to WC REST API with retry logic
*/
private function sendBatchRequest($data) {
$attempts = 0;
$max_attempts = 3;
$delay = 2;
do {
$ch = curl_init($this->apiUrl);
curl_setopt_array($ch, [
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POST => true,
CURLOPT_POSTFIELDS => json_encode($data),
CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
CURLOPT_TIMEOUT => 120,
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode >= 200 && $httpCode < 300) {
return json_decode($response, true);
}
$attempts++;
if ($attempts < $max_attempts) {
sleep($delay);
$delay *= 2;
}
} while ($attempts < $max_attempts);
return ['error' => 'Request failed', 'http_code' => $httpCode];
}
}
jQuery(document).ready(function ($) {
$('#start-stock-update').on('click', function () {
$.post(ajaxurl, {
action: 'start_stock_update',
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
processBatch(response.data.batch_id, 0, response.data.total);
} else {
alert(response.data.message);
}
});
});
function processBatch(batch_id, offset, total) {
$.post(ajaxurl, {
action: 'process_stock_batch',
batch_id: batch_id,
offset: offset,
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
if (response.data.done) {
alert('Stock update complete!');
} else {
let remaining = response.data.remaining;
console.log(`Processed ${offset + 20} of ${total}. Remaining: ${remaining}`);
processBatch(batch_id, response.data.next, total);
}
} else {
alert(response.data.message);
}
});
}
});
wp_enqueue_script('stock-update', plugin_dir_url(__FILE__) . 'stock-update.js', ['jquery'], null, true);
wp_localize_script('stock-update', 'stockUpdate', [
'nonce' => wp_create_nonce('stock_update_nonce'),
]);
And here I've built you a ready-to-use mini plugin that does exactly this:
Adds a menu in WooCommerce → Stock Updater
Lets you upload a CSV (sku, stock, price)
Shows a “Start Update” button
Runs the AJAX queue processor to update products in safe batches
<?php
/**
* Plugin Name: WooCommerce Stock Updater (CSV)
* Description: Upload a CSV (sku, stock, price) and batch update products safely via WooCommerce REST API.
* Version: 1.0
* Author: Jer Salam
*/
if (!defined('ABSPATH')) exit;
class WC_Stock_Updater {
private $apiUrl;
private $apiKey;
private $apiSecret;
public function __construct() {
$this->apiUrl = home_url('/wp-json/wc/v3/products/batch');
$this->apiKey = get_option('woocommerce_api_consumer_key');
$this->apiSecret = get_option('woocommerce_api_consumer_secret');
add_action('admin_menu', [$this, 'add_menu']);
add_action('admin_enqueue_scripts', [$this, 'enqueue_scripts']);
// AJAX
add_action('wp_ajax_start_stock_update', [$this, 'ajaxStartStockUpdate']);
add_action('wp_ajax_process_stock_batch', [$this, 'ajaxProcessStockBatch']);
}
public function add_menu() {
add_submenu_page(
'woocommerce',
'Stock Updater',
'Stock Updater',
'manage_woocommerce',
'wc-stock-updater',
[$this, 'render_admin_page']
);
}
public function enqueue_scripts($hook) {
if ($hook !== 'woocommerce_page_wc-stock-updater') return;
wp_enqueue_script('wc-stock-updater', plugin_dir_url(__FILE__) . 'stock-update.js', ['jquery'], '1.0', true);
wp_localize_script('wc-stock-updater', 'stockUpdate', [
'nonce' => wp_create_nonce('stock_update_nonce'),
'ajaxurl' => admin_url('admin-ajax.php'),
]);
}
public function render_admin_page() {
?>
<div class="wrap">
<h1>WooCommerce Stock Updater</h1>
<form method="post" enctype="multipart/form-data">
<?php wp_nonce_field('wc_stock_upload', 'wc_stock_nonce'); ?>
<input type="file" name="stock_csv" accept=".csv" required>
<input type="submit" name="upload_csv" class="button button-primary" value="Upload CSV">
</form>
<?php
if (isset($_POST['upload_csv']) && check_admin_referer('wc_stock_upload', 'wc_stock_nonce')) {
if (!empty($_FILES['stock_csv']['tmp_name'])) {
$upload_dir = wp_upload_dir();
$csv_path = $upload_dir['basedir'] . '/stock-update.csv';
move_uploaded_file($_FILES['stock_csv']['tmp_name'], $csv_path);
echo '<p><strong>CSV uploaded successfully.</strong></p>';
echo '<button id="start-stock-update" class="button button-primary">Start Update</button>';
}
}
?>
<div id="stock-update-log" style="margin-top:20px; font-family: monospace;"></div>
</div>
<?php
}
private function parseCSV($csvFile) {
$products = [];
if (($handle = fopen($csvFile, 'r')) !== false) {
while (($data = fgetcsv($handle, 1000, ',')) !== false) {
$sku = trim($data[0]);
$id = wc_get_product_id_by_sku($sku);
if ($id) {
$products[] = [
'sku' => $sku,
'id' => $id,
'stock' => !empty($data[1]) ? (int) trim($data[1]) : 0,
'price' => !empty($data[2]) ? wc_format_decimal(str_replace(',', '.', trim($data[2]))) : 0,
];
}
}
fclose($handle);
}
return $products;
}
public function ajaxStartStockUpdate() {
check_ajax_referer('stock_update_nonce', 'security');
$upload_dir = wp_upload_dir();
$csvFile = $upload_dir['basedir'] . '/stock-update.csv';
$products = $this->parseCSV($csvFile);
if (empty($products)) {
wp_send_json_error(['message' => 'No products found in CSV']);
}
$batch_id = 'stock_update_' . time();
set_transient($batch_id, $products, HOUR_IN_SECONDS);
wp_send_json_success([
'batch_id' => $batch_id,
'total' => count($products),
]);
}
public function ajaxProcessStockBatch() {
check_ajax_referer('stock_update_nonce', 'security');
$batch_id = sanitize_text_field($_POST['batch_id']);
$offset = intval($_POST['offset']);
$limit = 20;
$products = get_transient($batch_id);
if (!$products) {
wp_send_json_error(['message' => 'Batch expired or not found']);
}
$chunk = array_slice($products, $offset, $limit);
if (empty($chunk)) {
delete_transient($batch_id);
wp_send_json_success(['done' => true]);
}
$data = ['update' => []];
foreach ($chunk as $product) {
$data['update'][] = [
'id' => $product['id'],
'sku' => $product['sku'],
'stock_quantity' => $product['stock'],
'regular_price' => $product['price'],
];
}
$response = $this->sendBatchRequest($data);
wp_send_json_success([
'done' => false,
'next' => $offset + $limit,
'response' => $response,
'remaining' => max(0, count($products) - ($offset + $limit)),
]);
}
private function sendBatchRequest($data) {
$attempts = 0;
$max_attempts = 3;
$delay = 2;
do {
$ch = curl_init($this->apiUrl);
curl_setopt_array($ch, [
CURLOPT_RETURNTRANSFER => true,
CURLOPT_POST => true,
CURLOPT_POSTFIELDS => json_encode($data),
CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
CURLOPT_USERPWD => $this->apiKey . ':' . $this->apiSecret,
CURLOPT_TIMEOUT => 120,
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode >= 200 && $httpCode < 300) {
return json_decode($response, true);
}
$attempts++;
if ($attempts < $max_attempts) {
sleep($delay);
$delay *= 2;
}
} while ($attempts < $max_attempts);
return ['error' => 'Request failed', 'http_code' => $httpCode];
}
}
new WC_Stock_Updater();
jQuery(document).ready(function ($) {
$('#start-stock-update').on('click', function () {
$('#stock-update-log').html('<p>Starting stock update...</p>');
$.post(stockUpdate.ajaxurl, {
action: 'start_stock_update',
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
processBatch(response.data.batch_id, 0, response.data.total);
} else {
$('#stock-update-log').append('<p style="color:red;">' + response.data.message + '</p>');
}
});
});
function processBatch(batch_id, offset, total) {
$.post(stockUpdate.ajaxurl, {
action: 'process_stock_batch',
batch_id: batch_id,
offset: offset,
security: stockUpdate.nonce
}, function (response) {
if (response.success) {
if (response.data.done) {
$('#stock-update-log').append('<p style="color:green;">Stock update complete!</p>');
} else {
let processed = offset + 20;
$('#stock-update-log').append('<p>Processed ' + processed + ' of ' + total + ' products. Remaining: ' + response.data.remaining + '</p>');
processBatch(batch_id, response.data.next, total);
}
} else {
$('#stock-update-log').append('<p style="color:red;">' + response.data.message + '</p>');
}
});
}
});
Upload the plugin folder (wc-stock-updater) with both files:
wc-stock-updater.php
stock-update.js
Activate it in WP Admin.
Go to WooCommerce → Stock Updater.
Upload your CSV (sku, stock, price).
Click Start Update.
It will process 20 products per batch until all are done without 503 errors.
Cheers Brother.
In my case I had to change @pytest.fixture to @pytest_asyncio.fixture and it just worked. Of course, keep in mind that the tests should be annotated with @pytest.mark.asyncio, and don't forget to install pytest-asyncio.
Also, you can set the encoding right when creating the database, which saves a lot of extra effort:
CREATE DATABASE my_database WITH ENCODING 'UTF8';
Are you creating a container for the Spring Boot application as well, or is it running locally?
py38 was deprecated a year ago and none of the packages are pinned, so it is to be expected that the Python environment materialization will fail. The base environment/image was also deprecated a while back. As a side note, there is not much value in using a curated environment image for a system-managed environment: it will create an isolated environment, so preinstalled dependencies won't be available. If you just need to install a few dependencies, like optuna, in the existing environment, just do:
FROM mcr.microsoft.com/azureml/curated/acpt-pytorch-1.11-py38-cuda11.3-gpu:9
RUN pip install optuna=={some_compatible_version}
This is the formula in cell G9. It is filled up and confirmed with Ctrl+Shift+Enter because I work with legacy Excel 2013. Not 100% sure this does what you want, but that's how I understand the task.
=SUM(COUNTIFS($B$2:$B$18,F9,$C$2:$C$18,IF($B$2:$B$18=E9,$C$2:$C$18)))
Not really an answer to the original question, but if someone wants a new paragraph starting with some tabulators, this is the only way I found. Basically, to simulate tabulators I had to insert the ASCII 173 character followed by a space character and repeat a few times:

The Google Picker cannot filter by “files created by my app” (drive.file) out of the box. The Picker’s Drive view does not support Drive v3 query syntax (like appProperties) nor any “creatorAppId” filter. The setAppId and setQuery you tried won’t achieve this.
What you can do instead:
Use a dedicated folder for your app’s files, then point the Picker to that folder.
When you create the spreadsheet, place it in a known folder (e.g., “MyApp Sheets”).
In the Picker, use DocsView.setParent(folderId) so users only see files inside that folder.
Found the solution by digging through this sample project https://developer.apple.com/documentation/RealityKit/composing-interactive-3d-content-with-realitykit-and-reality-composer-pro
Note that the accompanying WWDC video presents a different method, which throws compiler errors, so ignore that. Thanks Apple!
struct MyAugmentedView: View {
    private let notificationTrigger = NotificationCenter.default.publisher(for: Notification.Name("RealityKit.NotificationTrigger"))
    var body: some View {
        // Add the following modifier to your RealityView or ARView:
        .onReceive(notificationTrigger) { output in
            guard
                let notificationName = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            else { return }
            switch notificationName {
            case "MyFirstNotificationIdentifier":
                // code to run when this notification is received
                break
            case "MySecondNotificationIdentifier":
                // etc.
                break
            default:
                return
            }
        }
    }
}
Your postmeta is huge brother
Before HPOS, WooCommerce stored orders as post_type = shop_order in the wp_posts table and all order data in wp_postmeta.
Now with HPOS, orders live in:
wp_wc_orders
wp_wc_order_addresses
wp_wc_order_operational_data
wp_wc_orders_meta
But WooCommerce does not automatically delete the old shop_order posts or their postmeta (for backward compatibility). That’s why your wp_postmeta is still bloated.
1. Count the remaining shop_order posts in wp_posts
SELECT COUNT(*)
FROM wp_posts
WHERE post_type = 'shop_order';
If you’ve fully migrated to HPOS, you don’t need these anymore.
2. Remove old order postmeta
This query deletes all postmeta records tied to old shop_order posts:
DELETE pm
FROM wp_postmeta pm
INNER JOIN wp_posts p ON pm.post_id = p.ID
WHERE p.post_type = 'shop_order';
3. Remove old order posts
DELETE FROM wp_posts
WHERE post_type = 'shop_order';
4. Optimize the table
OPTIMIZE TABLE wp_postmeta;
Staged cleanup (optional):
Because you’ve got 9M+ rows, deleting everything in one go can lock tables and time out.
Do it in batches. Note that MySQL does not allow LIMIT in a multi-table DELETE, so batch via a derived table instead:
DELETE FROM wp_postmeta
WHERE meta_id IN (
    SELECT meta_id FROM (
        SELECT pm.meta_id
        FROM wp_postmeta pm
        INNER JOIN wp_posts p ON pm.post_id = p.ID
        WHERE p.post_type = 'shop_order'
        LIMIT 50000
    ) AS batch
);
Run multiple times until rows are gone. Cheers.
IMO, the most flexible way is to use CSS variables with an optional default value:
<symbol id="thing">
<!-- falls back to red when the variable is unset -->
<circle r="10" fill="var(--fill, red)"/>
</symbol>
<svg style="--fill: blue">
<use xlink:href="sprite.svg#thing"/>
</svg>
The simplest way to preview an HTML file inside a VS Code tab in GitHub Codespaces is to use the Live Preview extension by Microsoft.
Open the Extensions panel in VS Code.
Search for Live Preview (Microsoft) and install it.
Restart VS Code (if needed).
Open your index.html file. You’ll now see a “Live Preview” icon in the top-right corner of the editor.
Click it, and your HTML file will be rendered directly inside a VS Code tab.
This avoids switching to an external browser and lets you preview your page right inside Codespaces.
To fix recurrence rules in ics.js, update your rRule object to use uppercase freq values (e.g., 'WEEKLY', 'MONTHLY') as required by the library. So: fd.get("freq").toUpperCase()
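For instance (a Map stands in for the question's FormData here, since it has the same .get() shape; the rRule property name follows the question, not necessarily the library's full API):

```javascript
// Stand-in for the question's FormData; Map exposes the same .get() interface.
const fd = new Map([["freq", "weekly"]]);

const rRule = {
  freq: fd.get("freq").toUpperCase(), // "WEEKLY" — the library expects uppercase
};

console.log(rRule.freq); // "WEEKLY"
```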
I contacted Apple Support and explained the issue. After weeks of back-and-forth emails, during which I repeatedly tried to clarify the problem and even sent them a video demonstrating the steps, I eventually received an email saying the issue had been resolved. When I logged in to check, I found that the agreement was finally there.
Yeah, that's possible.
I am working on my Python skills and am stuck on something I have not been able to resolve.
I am trying to make the buttons shown in the image below:
enter image description here
with the code below:
start_button = Button(text="Start")
start_button.grid(column=0, row=3)
ResetButton = Button(text="Reset")
ResetButton.grid(column=2, row=3)
but I am only able to get the buttons below:
enter image description here
Update firebase tools :
npm install -g firebase-tools
then logout and login again:
firebase logout
firebase login
You can use https://linkhive.tech, a Firebase Dynamic Links alternative.
I am using it like this in my .NET project:
public class News : EndpointGroupBase
{
public override void Map(WebApplication app)
{
var publicGroup = app.MapGroup("/news").WithTags("news");
publicGroup.MapGet("/", ([AsParameters] GetNews.Request request) => GetNews.HandleAsync(request));
}
}
fmt.Scanln() can return an error. If you check that error when inputting 123 456, you'll see it returns expected newline. If you check the documentation for fmt.Scan(), you'll see it actually reads multiple successive space-separated values.
So, what's happening here is Scanln finds your first value (123), then sees a space, which would cause it to scan a second value. However, you've only passed in one pointer, so it's expecting a newline rather than a space. It'll keep reading, trying to find a newline, but once it hits a character that's neither a space nor a newline, it returns an error. Because of that, it's consumed the 4 in 456 in its attempt to find a newline, and the next call to Scanln will no longer have access to that 4.
Your use case would likely be better served by bufio.Scanner or bufio.Reader.
Add the following property to your application.properties.
springdoc.swagger-ui.path=api-docs
I got this error when incorrectly running
uv run app.py
The correct command is:
uv run streamlit run app.py
can you please export the flow and share it with me?
I confirm the issue, but I'm not sure it's Flutter. I upgraded:
- Flutter to 3.35.4
- iOS to 26
- Xcode to work with iOS 26
After that, the debug version of the app became 100x slower to run. It takes about 5 minutes to launch on my phone and it's almost impossible to use after the start. It seems to only affect the debug version; the release version works as expected.
It's hard to provide any code here. It happened for all my apps in iOS.
You can use OrbStack for running services in Docker containers on your local host; it lets you set up a unique URL for each service. OrbStack creates these URLs automatically (such as 'keycloak.devlocal.orb.local'), and they resolve correctly both within containers and on the host machine.
Hey this blog post answers this exact question in great detail.
You can't prevent the browser from throttling an inactive tab - see this article
Good thing is, you don't need to update the timer when the tab is not visible. When user switches back to this tab, your timer should fire. For getting the time, don't rely on how many times the function passed to setInterval fired. You can either get the time from Date object, or, if you want to display the time received from server (because it's in a specific time zone or something), you can save a Date object when you receive the server response and calculate the difference with a new Date() object
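A sketch of that second approach (the names are mine, not from the question): store the offset between the server clock and the local clock once, then derive the display value from the wall clock instead of counting ticks:

```javascript
// Capture the server/local clock offset when the response arrives, then
// compute "server now" on demand — immune to setInterval throttling.
function makeServerClock(serverEpochMs) {
  const offset = serverEpochMs - Date.now();
  return () => Date.now() + offset;
}

// Pretend the server reported a time 5 seconds ahead of us.
const serverNow = makeServerClock(Date.now() + 5000);
console.log(serverNow() - Date.now()); // ≈ 5000, even after the tab slept
```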
In addition you can return the number of products of an order ID that appear in the list with this formula. It works in my Excel 2013 sample sheet and has to be confirmed by pressing ctrl+shift+enter.
=SUM(COUNTIFS($A$2:$A$16,A2,$B$2:$B$16,$D$2:$D$4))
You need to use the TO_TIMESTAMP(observation_time, 'YYYY-MM-DD"T"HH24:MI:SS"Z"') format; Oracle doesn't understand the trailing TZ designator.
You can read this post about this exact error for more help
1. You are generating a new RSA key pair on every run. Save the private/public key to disk, then load it on subsequent runs.
2. You are also splitting the files into 128-byte chunks. With a 2048-bit RSA key, each ciphertext is 256 bytes, so you're cutting ciphertexts in half when you write Files.write(path, (Base64(ct) + "\n").getBytes(...), CREATE, APPEND).
3. Use Base64.getEncoder().encodeToString(bytes) instead of Arrays.toString(bytes).
Python 3.14 adds deferred evaluation of annotations, which should make the code run as-is.
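On earlier versions, `from __future__ import annotations` (PEP 563) gives a close approximation today: annotations are stored unevaluated, so a forward reference no longer raises at import time. A minimal sketch (the Node class is my own illustration, not from the question):

```python
from __future__ import annotations  # PEP 563: don't evaluate annotations eagerly

class Node:
    # "Node" is referenced before the class is fully defined; with deferred
    # evaluation the annotation is never evaluated on import, so this works.
    def __init__(self, value: int, next: Node | None = None):
        self.value = value
        self.next = next

n = Node(1, Node(2))
print(n.next.value)  # 2
```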
If your C++ code is not running, it could be due to compilation errors, missing compiler setup, or incorrect file configuration. Ensure a proper C++ compiler (like GCC or Clang) is installed and your IDE or terminal points to it. Check for syntax mistakes, include necessary headers, and verify the main function exists. Also, make sure to compile the code before running it; running without compiling will result in failure.
You should use @AssistedFactory in your VM.
Here's example: https://github.com/android/nav3-recipes/blob/main/app/src/main/java/com/example/nav3recipes/passingarguments/injectedviewmodels/InjectedViewModelsActivity.kt
React Native official releases ship only the AAR (compiled classes), with no matching -sources.jar. So the IDE can only see the bytecode and you cannot click through to the source.
Handle new users in your frontend/backend after signup instead of using a trigger:
const { data, error } = await supabase
.from('users')
.insert([{ user_id: user.id, email: user.email, role: 'Low' }]);
This avoids breaking the signup flow and is safer for production.
The Microsoft Authentication Library (MSAL) gives you the ability to add support for Azure Active Directory v2 (which serves Microsoft accounts and AAD) and B2C. It supports native clients such as Windows, iOS, macOS, Android, and Linux.
As of Sep 2025, if you just add:
/** @OnlyCurrentDoc */
at the top, and do not add any scopes in appsscript.json
you will get the minimal required permissions limited to the current doc and form for any installable triggers.
In this scenario, we need to introduce a new table to maintain the many-to-many relationship between movies and cast members. A movie can have multiple people involved (such as actors, directors, writers), and a person can be involved in multiple movies in different roles. To model this relationship effectively, we can use a junction table that connects movies, persons, and roles.
cast table :
This table defines the different roles a person can have in a movie (e.g., Director, Actor, Writer).
| id | name |
|---|---|
| 1 | Director |
| 2 | Actor |
| 3 | Writer |
person table :
This table stores the people involved in movies.
| id | name |
|---|---|
| 1 | Martin Scorsese |
| 2 | Christopher Nolan |
movie table :
This table stores information about the movies.
| id | movie name |
|---|---|
| 1 | Inception |
| 2 | Interstellar |
Now the relationship table comes into the picture; it captures the relationship between movie, person, and role in a single view.
movie_cast_relationship_table :
This junction table defines the relationship between movies, persons, and their roles (cast).
| id | movie id | person id | cast id |
|---|---|---|---|
| 1 | 1 | 2 | 1 |
| 2 | 2 | 2 | 1 |
| ... | ... | ... | ... |
This design provides a clear and normalized way to represent the many-to-many relationship between movies and people, with specific roles defined for each connection.
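The schema above can be sketched with SQLite (sqlite3 ships with Python; I renamed the cast table to cast_role since CAST is an SQL keyword, and the column names follow the answer's tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# The three entity tables plus the junction table from the answer.
cur.executescript("""
CREATE TABLE cast_role (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE person    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE movie     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE movie_cast_relationship (
    id        INTEGER PRIMARY KEY,
    movie_id  INTEGER REFERENCES movie(id),
    person_id INTEGER REFERENCES person(id),
    cast_id   INTEGER REFERENCES cast_role(id)
);
INSERT INTO cast_role VALUES (1,'Director'),(2,'Actor'),(3,'Writer');
INSERT INTO person    VALUES (1,'Martin Scorsese'),(2,'Christopher Nolan');
INSERT INTO movie     VALUES (1,'Inception'),(2,'Interstellar');
INSERT INTO movie_cast_relationship VALUES (1,1,2,1),(2,2,2,1);
""")

# One joined view of who did what on which movie.
rows = cur.execute("""
SELECT m.name, p.name, c.name
FROM movie_cast_relationship r
JOIN movie m     ON m.id = r.movie_id
JOIN person p    ON p.id = r.person_id
JOIN cast_role c ON c.id = r.cast_id
ORDER BY r.id
""").fetchall()
print(rows)  # [('Inception', 'Christopher Nolan', 'Director'), ('Interstellar', 'Christopher Nolan', 'Director')]
```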
I recently downgraded my Mac from macOS Sequoia to Ventura because of the “Apple System Data” storage issue. After a fresh install, I couldn't log in to ChatGPT; I kept getting a persistent Error 400 Route. I tried different browsers and troubleshooting, but nothing worked.
After some research, I realized that ChatGPT may no longer support older OS versions, similar to how older iPhones with outdated iOS face restrictions. When I installed the latest macOS Tahoe, I was finally able to log in to my ChatGPT Premium account.
While I understand the need for compatibility and security, it’s frustrating that users are forced to upgrade their systems—even ones that were previously working fine—just to access services they already pay for.
Though I can't reproduce it (and don't want to waste time), it looks like it's missing OpenGL 3.3, as mentioned in an unrelated issue in an unrelated project that does work for OpenGL 3.1 and older. A workaround is at the link provided.
Keyword research is the process of finding relevant words and phrases that your target audience uses when searching on search engines to find information, products, or services.
WebSocket messages can arrive in fragments.
You need to properly handle binary messages.
You should avoid security risks, such as decompression bombs.
The same problem persists after all these 11 years.
The way I found to circumvent it was to create both a class method AND an instance method called "create".
The class method calls:
def self.create(...)
  new(...).create
end
and the logic with run_callbacks lives inside the instance method, also called create.
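A runnable sketch of the pattern in plain Ruby (the Record class is my own stand-in, and the instance method body stubs out where the real run_callbacks logic would go):

```ruby
class Record
  # Class method: delegates to the instance method of the same name.
  def self.create(*args)
    new(*args).create
  end

  def initialize(name)
    @name = name
    @saved = false
  end

  # Instance method: in the real code this is where run_callbacks runs.
  def create
    @saved = true
    self
  end

  def saved?
    @saved
  end
end

record = Record.create("demo")
puts record.saved?  # true
```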
I think the issue might be caused by a mismatch in the userId. To avoid this, try retrieving the user ID using the method below.
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
String userId = auth.getName();
sendNotificationWebSocket(userId,notifica);
The solution was writing to /etc/rancher/k3s/config.yaml:
kubelet-arg:
  - allowed-unsafe-sysctls=net.ipv4.ip_forward
  - allowed-unsafe-sysctls=net.ipv4.conf.all.src_valid_mark
  - allowed-unsafe-sysctls=net.ipv6.conf.all.disable_ipv6
Did you manage to find a solution to your problem? I'm having the same issue right now and can't figure out what's wrong.
The safest way is to first initialize Pythonnet using pythonnet.load before importing clr
We are facing the exact same problem. As a workaround, we replaced the debug_node.mjs with an empty module in our custom-webpack.
However, this seems to be a bug in the system and therefore I will open an issue in the Angular Repo
EDIT: It seems it is "Not a Bug, its a feature": https://github.com/angular/angular/issues/61144
Blast from the future! Seven years later, thousands of CS students are learning computer architecture with the Little Computer 4.
Moving the content transform does exactly this.
scrollRect.content.transform.position -= xPixels * Vector3.right;
scrollRect.content.transform.position += yPixels * Vector3.up;
(notice the signs, they are important for desired functionality)
The option I found is BinPackArguments. Setting it to false prevents packing function arguments and array initializer elements onto the same line.
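In a .clang-format file that looks like this (only the relevant key shown; the rest of your configuration stays as-is):

```yaml
# .clang-format
BinPackArguments: false
```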
As Alex mentions in his comment, I need to configure solid_cable for my development environment too.
Followed this tutorial and the issue is solved now. https://mileswoodroffe.com/articles/super-solid-cable
Steps:
bundle add solid_cable
bin/rails solid_cable:install
# Update config/cable.yml
development:
adapter: solid_cable
connects_to:
database:
writing: cable
polling_interval: 0.1.seconds
message_retention: 1.day
# Update config/database.yml, add cable section for development env.
development:
primary:
<<: *default
database: storage/development.sqlite3
cable:
<<: *default
database: storage/development_cable.sqlite3
migrations_paths: db/cable_migrate
rails db:prepare
var nn:MovieClip = new enemyRed1()
nn.enemyRed1Moving = false
trace(nn.enemyRed1Moving)
2025 : Windows 10
Docker desktop : Settings -> Resources(main menu) -> Advanced -> Disk image location (Docker desktop version Current version: 4.41.2 (191736))
Hmm, as far as I know, docking puts elements in the upper-left corner. I don't know why you use Syncfusion, because Visual Studio has plenty of options to format and align elements, and maybe they let you pay for something you already have in your hands.
But to help you a little bit: Syncfusion docking visual styles
run Remove-Item ".git" -Recurse -Force
Remove-Item Doc: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/remove-item?view=powershell-7.5&viewFallbackFrom=powershell-7.3
$$ f(x) \rightarrow R \quad \text{as} \quad x \rightarrow a^+ $$
Indeed, it does appear that git diff does pass down its arguments to sub-commands. It's not necessarily always true per se, however, if git diff is masking another command, it is a fairly safe assumption to bet that the arguments are passed down.
However, it looks like in the particular case of -i for a -G flag (or even -S) being provided, the sub-command it is getting passed down to, Git pickaxe (pseudo Grep?), didn't have the option documented or explained at all in its documentation / CLI.
I decided to spend time actually reading Git's source code in depth rather than skimming it for keywords. Digging into Git's source, it looks like my overall assumption is indeed correct: the git diff command does pass its arguments down to sub-commands. For example, we see it in the following code when using -G or -S:
void diffcore_std(struct diff_options *options)
{
...
if (options->pickaxe_opts & DIFF_PICKAXE_KINDS_MASK)
diffcore_pickaxe(options);
...
}
Furthermore, inside of the pickaxe sub-command, we see how those options are utilized (and where -i is being checked for):
void diffcore_pickaxe(struct diff_options *o)
{
const char *needle = o->pickaxe;
int opts = o->pickaxe_opts;
...
if (opts & (DIFF_PICKAXE_REGEX | DIFF_PICKAXE_KIND_G)) {
int cflags = REG_EXTENDED | REG_NEWLINE;
if (o->pickaxe_opts & DIFF_PICKAXE_IGNORE_CASE)
cflags |= REG_ICASE;
...
}
With the relevant macros for the masks, defined in diff.h:
#define DIFF_PICKAXE_KIND_S 4 /* traditional plumbing counter */
#define DIFF_PICKAXE_KIND_G 8 /* grep in the patch */
#define DIFF_PICKAXE_KIND_OBJFIND 16 /* specific object IDs */
#define DIFF_PICKAXE_KINDS_MASK (DIFF_PICKAXE_KIND_S | \
DIFF_PICKAXE_KIND_G | \
DIFF_PICKAXE_KIND_OBJFIND)
#define DIFF_PICKAXE_KINDS_G_REGEX_MASK (DIFF_PICKAXE_KIND_G | \
DIFF_PICKAXE_REGEX)
#define DIFF_PICKAXE_KINDS_ALL_OBJFIND_MASK (DIFF_PICKAXE_ALL | \
DIFF_PICKAXE_KIND_OBJFIND)
#define DIFF_PICKAXE_IGNORE_CASE 32
In essence, the issue appears to be undocumented behavior that is undefined on the git diff CLI and/or the git pickaxe CLI. Since I have no idea how Git's internal development works, I cannot make a patch or submit an issue for it (odds are good, since it's a Linus Torvalds project, that it's still on a 1990s-style workflow with ad-hoc patches). Hopefully someone who works on Git and knows how that process is done will see this thread and make the appropriate documentation updates (thank you in advance to whomever that is).
Wanted to do this myself and ended up here. Turns out Skia appears to be the simplest packaged way (for newer Delphi). Tried it myself using FMX (though VCL would be almost identical)
Skia is (as with most things Delphi) not overflowing with documentation
To this end I've left a very low effort demo at https://github.com/IntermediateDelphi/SkiaResampleDemo in the hope that the next person who ends up here finds it useful
I asked Gemini a question and they recommended a helpful video.
https://www.youtube.com/watch?v=BG6EJYSOhfM
Thank you, Gemini.
Have you tried setting the SelectedValue inside the BindingContextChanged (or DataBindingComplete) event, so that the value is applied after the ComboBox has finished binding? For example:
ComboBox comboBox = new ComboBox();
comboBox.DisplayMember = "varName";
comboBox.ValueMember = "varId";
comboBox.DataSource = drs.CopyToDataTable();
comboBox.BindingContextChanged += (s, e) =>
{
comboBox.SelectedValue = 12;
};
Added wdk manually with VC++ directories. Also had to add _AMD_64 to my preprocessor definitions.
I tried using Tortoise CVS on Windows 11 and had troubles with DLL's, anyway using Linux that was easy ;-)
cvs -d $repodir checkout -r $branchname $projectname
Thank you Tim,
But this is not exactly what I am looking for. In fact, my real code is more complex, the example I posted was just a simplification. I am working with time series stacked in a multilayer SpatRaster (dozens of layers). With plet() you can easily navigate through the layers, and for comparison it is essential to use a fixed reference color palette.
plet() handles multilayer SpatRasters very well: plotting the different layers is just a matter of passing an argument. However, leaflet does not accept multilayer rasters directly, so to plot them you need to split the stack and add each layer one by one in a loop.
The problem is with the col argument. In terra 1.7.7x, plet(col=) accepted the output of colorNumeric and painted each layer with the correct range of values from that palette. In terra 1.8.x this no longer works.
There should be a way to provide a value–color mapping object to plet(col=), but I have not been able to figure it out.
Below I attach an extension of my previous example, which in terra 1.7.7x produces exactly the expected map:
library(terra)
library(leaflet)
# data
f <- system.file("ex/elev.tif", package="terra")
r <- rast(f) # to raster
r
# Color palette
raster_colors = c("red", "yellow", "blue")
zlim_v <- c(-100, 1000) #
#
# 2) Build a multilayer raster: r, r+400, r-200
r_plus400 <- r + 400
r_minus200 <- r - 200
r_multi <- rast(c(`Topo (m)` = r,
`Topo +400 m` = r_plus400,
`Topo -200 m` = r_minus200))
#' Create a color palette with colorNumeric function
mypal <- colorNumeric(palette = raster_colors, # color gradient to use
domain = zlim_v, # range of the numeric variable
na.color = NA # no color for NA values
)
## Create map with a multilayer raster and a custom color palette
p_topo <- plet(r_multi, # raster file
y = c(1:nlyr(r_multi)), # raster layers
tiles="OpenStreetMap.Mapnik",
alpha=0.7, # opacity
col = mypal, # color palette
legend = NULL # no legend
) %>%
addLegend(
pal = mypal, # legend colors
values = zlim_v, # legend values
title = "topo (m)", # legend title
opacity = 0.7 # legend opacity
)
p_topo
Did you try this one? It is working fine with 3 buckets from OCI:
https://wordpress.org/plugins/articla-media-offload-lite-for-oracle-cloud-infrastructure/