If you’re using Android Studio, you may need to update your Build Variants. On version 2024.2.1 I had to open the Build Variants panel from the sidebar and click [Re-import with defaults] (the button at the bottom right of the panel).
Thanks to @DavidMaze I solved it. It was just a matter of deleting the strings and passing them as actual paths, and the script did what I wanted (stop running once I closed all the apps).
Thank you also to @furas for simplifying my code; it runs much faster.
Final solution:
from subprocess import Popen

def back_insert(filename):
    # inserts backslashes into paths that have spaces in them
    return filename.replace(" ", "\\ ")

# Runs the emulator, socket, timer and tracker
commands = [
    back_insert("/usr/bin/snes9x-gtk"),
    back_insert("/usr/share/OpenTracker/OpenTracker"),
    back_insert("/usr/bin/QUsb2Snes"),
    back_insert("/home/user/LibreSplit/libresplit"),
]

procs = [Popen(i) for i in commands]
for p in procs:
    p.wait()
Here is a **complete guide to installing MySQL on a Red Hat server (RHEL, CentOS, AlmaLinux, or Rocky Linux)** and creating **two separate MySQL instances** on the same server.
---
## 🛠️ Goal
- Install **MySQL Server**
- Create **two independent MySQL instances**
  - Instance 1: port `3306`
  - Instance 2: port `3307`
- Each instance will have:
  - its own data directory
  - its own configuration
  - its own systemd service
---
## 🔧 Step 1: Install MySQL Server
### 1. Add the official MySQL repository
```bash
sudo rpm -Uvh https://dev.mysql.com/get/mysql80-community-release-el9-7.noarch.rpm
```
> Replace `el9` with your RHEL version (`el7`, `el8`, etc.)
### 2. Install MySQL Server
```bash
sudo dnf install mysql-server
```
---
## ⚙️ Step 2: Start and enable the default instance
```bash
sudo systemctl enable mysqld
sudo systemctl start mysqld
```
### Retrieve the temporary root password
```bash
sudo grep 'temporary password' /var/log/mysqld.log
```
Secure the installation:
```bash
sudo mysql_secure_installation
```
---
## 📁 Step 3: Prepare the second instance
### 1. Create a new data directory
```bash
sudo mkdir /var/lib/mysql2
sudo chown -R mysql:mysql /var/lib/mysql2
```
### 2. Initialize the database for the second instance
```bash
sudo mysqld --initialize --user=mysql --datadir=/var/lib/mysql2
```
> ✅ Save the generated password shown in the logs:
```bash
sudo cat /var/log/mysqld.log | grep "A temporary password"
```
---
## 📄 Step 4: Create a custom configuration file for the second instance
```bash
sudo nano /etc/my-2.cnf
```
Paste in this configuration:
```ini
[client]
port = 3307
socket = /var/lib/mysql2/mysql.sock
[mysqld]
port = 3307
socket = /var/lib/mysql2/mysql.sock
datadir = /var/lib/mysql2
pid-file = /var/lib/mysql2/mysqld.pid
server-id = 2
log-error = /var/log/mysqld2.log
```
Save and close.
### Create the log file
```bash
sudo touch /var/log/mysqld2.log
sudo chown mysql:mysql /var/log/mysqld2.log
```
---
## 🔄 Step 5: Create a systemd service for the second instance
```bash
sudo nano /etc/systemd/system/mysqld2.service
```
Paste this content:
```ini
[Unit]
Description=MySQL Second Instance
After=network.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/bin/mysqld --defaults-file=/etc/my-2.cnf --basedir=/usr --plugin-dir=/usr/lib64/mysql/plugin
ExecStop=/bin/kill -SIGTERM $MAINPID
Restart=always
PrivateTmp=false
[Install]
WantedBy=multi-user.target
```
Reload systemd:
```bash
sudo systemctl daemon-reexec
sudo systemctl daemon-reload
```
Enable and start the service:
```bash
sudo systemctl enable mysqld2
sudo systemctl start mysqld2
```
Check the status:
```bash
sudo systemctl status mysqld2
```
---
## 🔐 Step 6: Secure the second instance
Connect to the second instance using the temporary password:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
Run these SQL commands to change the password:
```sql
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword';
FLUSH PRIVILEGES;
exit
```
---
## 🧪 Step 7: Test both instances
### Check the ports in use:
```bash
ss -tuln | grep -E '3306|3307'
```
### Connect to each instance:
Instance 1:
```bash
mysql -u root -p
```
Instance 2:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
---
## 📌 Summary of the two instances
| Instance | Port | Config File | Data Directory | systemd Service | PID File | Log File |
|----------|------|-----------------|-------------------|-----------------|------------------------------|------------------------|
| Default  | 3306 | `/etc/my.cnf`   | `/var/lib/mysql`  | `mysqld`        | `/var/run/mysqld/mysqld.pid` | `/var/log/mysqld.log`  |
| Second   | 3307 | `/etc/my-2.cnf` | `/var/lib/mysql2` | `mysqld2`       | `/var/lib/mysql2/mysqld.pid` | `/var/log/mysqld2.log` |
---
## ✅ You're done!
You now have **two independent MySQL instances** running on the same Red Hat server.
Each instance can be managed separately via its own commands:
```bash
sudo systemctl start/stop/restart mysqld
sudo systemctl start/stop/restart mysqld2
```
---
## ❓ Want a Bash script to automate this installation?
The steps above can be collected into a single script; a sketch follows below.
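A minimal sketch assembled strictly from the commands above (run as root; it assumes the default instance from Steps 1-2 is already installed, and that paths and ports match the guide):
```bash
#!/usr/bin/env bash
# Sketch: automate Steps 3-5 (second MySQL instance on port 3307).
set -euo pipefail

DATADIR=/var/lib/mysql2
LOG=/var/log/mysqld2.log

# Step 3: data directory and initialization
mkdir -p "$DATADIR"
chown -R mysql:mysql "$DATADIR"
mysqld --initialize --user=mysql --datadir="$DATADIR"

# Step 4: configuration and log file
cat > /etc/my-2.cnf <<'EOF'
[client]
port = 3307
socket = /var/lib/mysql2/mysql.sock

[mysqld]
port = 3307
socket = /var/lib/mysql2/mysql.sock
datadir = /var/lib/mysql2
pid-file = /var/lib/mysql2/mysqld.pid
server-id = 2
log-error = /var/log/mysqld2.log
EOF
touch "$LOG"
chown mysql:mysql "$LOG"

# Step 5: systemd unit
cat > /etc/systemd/system/mysqld2.service <<'EOF'
[Unit]
Description=MySQL Second Instance
After=network.target

[Service]
User=mysql
Group=mysql
ExecStart=/usr/bin/mysqld --defaults-file=/etc/my-2.cnf --basedir=/usr --plugin-dir=/usr/lib64/mysql/plugin
ExecStop=/bin/kill -SIGTERM $MAINPID
Restart=always
PrivateTmp=false

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now mysqld2

# Temporary root password for the second instance (see Step 3)
grep 'A temporary password' /var/log/mysqld.log | tail -n 1
```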
Use the format_source_path() function on your builder:
env_logger::builder()
    .format_source_path(true)
    .init();
The logs will look like
[2025-06-03T20:06:14Z ERROR path/to/file.rs:84 project::module] Log message
Use the other `format_`-prefixed methods to further customize the look of your logs.
After lots of testing I found out the difference was Apache (1st server) vs. LiteSpeed (2nd server). The way to find it was by: <!--#echo var="SERVER_SOFTWARE" -->
How can I get it to fit inside an object that is not spanning the whole screen width?
The issue was that I was using END when I should have been using End.
Note for new TI-84 programmers: if you include extra whitespace (other than newlines) or include a syntax error, you won't be warned; your program will just poop out.
`props.data` should work in your code snippet to get the data of the row. The `cellRenderer` receives props of type `CustomCellRendererProps`, and this is documented in the AG Grid docs.
Thank you very much, man.
I obtain a total of 295,241 calls per second for the CIE ΔE2000 function in SQL. Both C99 and SQL (MariaDB and PostgreSQL) versions are available here.
Adding my 2¢ because everything else here seems overly complex to me (with Python 3.13 typing):
def recursive_subclasses[T](cls: type[T]) -> set[type[T]]:
    """
    Recursively finds all subclasses of a given class.
    """
    return set.union({cls}, *map(recursive_subclasses, cls.__subclasses__()))
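A quick usage sketch (hypothetical hierarchy; note that the result includes the class itself, since the implementation unions {cls} with the subclass sets):
```python
class Base: ...
class A(Base): ...
class B(Base): ...
class C(A): ...

print(recursive_subclasses(Base))  # {Base, A, B, C}, in arbitrary set order
```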
Matthias, this was exactly what I needed! If you run the service manager and point it to the correct version of jvm.dll, you don't need to worry about whether your JAVA_HOME is correct. Since I was using 6.0.0.0, I needed Java 8 even though Java 17 was already installed and set as JAVA_HOME. I opened it up, pointed to the Java 8 JDK's jvm.dll, and it started right up afterwards.
For me it was the Device Simulator window; apparently, to avoid conflicts, it disables some inputs.
https://docs.unity3d.com/Packages/com.unity.inputsystem%401.4/manual/Debugging.html?#device-simulator
When Device Simulator window is in use, mouse and pen inputs on the simulated device screen are turned into touchscreen inputs. Device Simulator uses its own touchscreen device, which it creates and destroys together with the Device Simulator window.
To prevent conflicts between simulated touchscreen inputs and native mouse and pen inputs, Device Simulator disables all native mouse and pen devices.
Closing it resolved my issue. (For cross-platform development I am using both touch and mouse inputs.)
You need to make sure that you run this as a read-only transaction.
I was blocked on 403 as well; I found a fix using Selenium for Python instead of urlopen.
Fix in a fork here:
https://github.com/Rolzad73/UnrealMarketplaceVaultExtractor
Can't you just use Substring, or create an extension method?
https://dotnetfiddle.net/RlDOuh
public static class StringExtensions
{
public static string Slice(this string source, int start, int end) => source.Substring(start, end - start);
}
Also keep in mind that even if auto update is on, it won't update until it considers the version stable:
To ensure the stability of self-hosted integration runtime, we release a new version each month and push an auto-update every three months, using a stable version from the preceding three months. So you may find that the autoupdated version is the previous version of the actual latest version. If you want to get the latest version, you can go to download center and do so manually. Additionally, auto-update to a new version is managed internally. You can't change it.
I don't think there is an easy way out, but this isn't particularly hard provided that you know Python well.
Here is what I was able to whip up in about 2 hours or so (in Jupyter; cells are separated with #========):
import numpy as np
import matplotlib.pyplot as plt
# step 1: load image
img = plt.imread("./your_image.png")
img = img[..., 0] # get rid of rgb channels that are present in the sample image
plt.imshow(img)
# ======================
# step 2, identify relevant area, crop the image to it
img_bin = img > 0.05 # threshold image to get where the roi is
# btw, plt.imread gets pixel values to 0-1 range
where_y, where_x = np.where(img_bin)
xmi, xma = np.quantile(where_x, (0.01, 0.99)) # quantiles are more robust to image noise
ymi, yma = np.quantile(where_y, (0.01, 0.99))
# visualize the region of interest
plt.imshow(img_bin)
plt.gca().add_artist(plt.Rectangle((xmi, ymi), xma-xmi, yma-ymi, facecolor='none', edgecolor='red'))
img2 = img[int(ymi):int(yma), int(xmi):int(xma)].copy() # crop the image
# ========================
# step 3: find some sort of starting point for the algorithm
ci = img2.ravel().argmax() # get brightest point
width = img2.shape[1]
cy, cx = ci//width, ci%width
plt.imshow(img2)
plt.plot([cx, ], [cy, ], marker="o", color='red', markerfacecolor="none")
def get_line_ends(cx, cy, len_pixels, rads):
l = len_pixels/2
cos = np.cos(rads)
sin = np.sin(rads)
y0 = cy-l*sin
y1 = cy+l*sin
x0 = cx-l*cos
x1 = cx+l*cos
return x0, x1, y0, y1
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
# notice that because y axis is inverted, line will be rotating clockwise, instead of counter-clockwise
print(x0, x1, y0, y1)
plt.plot([x0, x1] , [y0, y1], c='red')
# ===========================
# step 4. line sampling prototype
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
plt.imshow(img2)
print(x0, x1, y0, y1)
# plt.plot([x0, x1] , [y0, y1], c='red')
xs = np.linspace(x0, x1, 100).astype(int)
ys = np.linspace(y0, y1, 100).astype(int)
plt.plot(xs, ys, c="red", ls="none", marker="s", markersize=1)
plt.xlim(x0-5, x1+5)
plt.ylim(y0+5, y1-5) # y is still inverted
# ===============================
# step 5 sample pixels along the line at a bunch of angles, and find correct angle,
# to find direction of your lines
def sample_coordinates_along_a_line(img, cx, cy, len_pixels, rads): # same variable names
x0, x1, y0, y1 = get_line_ends(cx, cy, len_pixels, rads)
xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
return img[ys, xs]
rs = np.linspace(-np.pi, 0, 100)
ms = np.empty_like(rs)
for i, r in enumerate(rs):
sample = sample_coordinates_along_a_line(img2, cx, cy, 100, r)
ms[i] = sample.mean()
r_est = r_estimated = rs[ms.argmax()] # will be nearly identical to our guess, so i don't plot it
plt.plot(rs, ms)
plt.axvline(-np.pi/11, c='red') # our guess, should match a maximum
# =================================
# step 6: sample along perpendicular direction, to identify lines
r_90 = r_est + np.pi/2 # perpendicular,
# since r_est is between -pi and 0, r_90 is always between -pi/2, pi/2
def get_line_in_a_box(cx, cy, r, xmi, xma, ymi, yma):
"get line that is inside of rectangular box, that goes through point (cx, cy), at angle r"
is_steep = np.abs(r) > np.pi/4 # if angle is > 45 deg, then line grows faster vertically
if is_steep:
y0 = ymi; y1 = yma
x0 = cx - cy/np.tan(r)
x1 = cx+(yma-cy)/np.tan(r)
else:
x0 = xmi; x1 = xma
y0 = cy - cx*np.tan(r)
y1 = cy + (xma-cx)*np.tan(r)
return x0, x1, y0, y1
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # along lines
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # perpendicular to lines
# ================================
# now we figure out where peaks are from sampling along perpendicular
plt.figure()
def sample_coordinates_along_a_line2(img, x0, x1, y0, y1):
len_pixels = np.sqrt((x1-x0)**2+(y1-y0)**2)
print(len_pixels)
xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
return img[ys, xs]
plt.figure()
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
sampl = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
trend = np.convolve(sampl, [1/100]*100, mode="same")
sampl_detrended = sampl-trend
plt.plot(sampl_detrended)
# ==============================
# step 7: find maxima in detrended sample
# I found this function somewhere in my processing scripts; it's more generic, but a good one
def bool_to_regions2(xs: np.ndarray[bool], which=None) -> np.ndarray[int]:
"""return (start, end) pairs of each continious region as (n, 2) int array (end not inclusive, as usual). example
```
a = np.array([1,0,0,0,0,1,1,1,0,1,1,], dtype=bool)
for b, e in bool_to_regions2(a):
print(b, e)
print("".join(np.where(a,'1','0')))
print(' '*b + '^'+' '*(e-b-1)+'^')
```
set which to True or False to return only regions for these values
"""
heads = np.diff(xs, prepend=~xs[0])
tails = np.diff(xs, append=~xs[-1])
nh = abs(heads.sum()) # if array is 0 and 1 instead of proper boolean
nt = abs(tails.sum())
assert nh == nt, f"A: function `bool_to_regions` {nh=}, {nt=}, nh!=nt"
r = np.stack([np.where(heads)[0], np.where(tails)[0]+1], axis=-1)
if which is None: return r
elif which is True: return r[::2] if bool(xs[0]) else r[1::2]
elif which is False: return r[::2] if not xs[0] else r[1::2]
else: raise Exception("`which` should be True, False or None")
plt.plot(sampl_detrended)
maxima = bool_to_regions2(sampl_detrended>0.05, which=True).mean(axis=-1)
# maxima are positions of your lines along a perpendicular to them
for m in maxima:
plt.axvline(m, color='red', alpha=0.5)
# =======================================
# step 8: project maxima back to image space, by using linear interpolation along the line
plt.imshow(img2)
# remember, x0, y0, x1, y1 are from the perpendicular
line_x_coords = x0 + (x1-x0) * maxima / len(sampl_detrended)
line_y_coords = y0 + (y1-y0) * maxima / len(sampl_detrended)
plt.plot(line_x_coords, line_y_coords, ls="none", c="red", marker="+")
#================================================
# step 9: sample all lines
line_sampls = []
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1,], [y0, y1], c="red") # perpendicular
for line_number in range(len(line_x_coords)):
lcx = line_x_coords[line_number]
lcy = line_y_coords[line_number]
x0, x1, y0, y1 = get_line_in_a_box(lcx, lcy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
if x0 < 0 or x1 > img2.shape[1]-1 or y0 > img2.shape[0]-1 or y1 < 0:
continue # todo: clip line instead of skipping it
else:
plt.plot([x0, x1,], [y0, y1], c="red") # should cover the lines in the img2
sample = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
line_sampls.append(sample)
line_sampls= np.stack(line_sampls)
# ===============================================
# this is how intensity samples look along the lines
# it should be easy to compute center of each line
for sampl in line_sampls:
plt.plot(sampl)
Note: Jupyter starts a new plot in each cell. I'm not sure how to get nice visual feedback in a plain .py script; maybe call plt.show() at the end of each cell? A sketch of that idea follows below.
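For a plain .py script, one option (an untested sketch reusing the arrays from the cells above) is to give each step its own figure and call plt.show() once at the end:
```python
import matplotlib.pyplot as plt

plt.figure("step 1")   # each "cell" gets its own window
plt.imshow(img)

plt.figure("step 2")
plt.imshow(img_bin)

plt.show()             # blocks and shows all figures at once
```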
First check if Neocities allows making a custom 404 page.
If yes, find its location and add HTML/JS content to it.
As for a "very detailed explanation" of JavaScript, please read some documentation; it is impossible to teach it here.
You can find articles on Stack Overflow, for example Random image generation in Javascript.
Naturally, right after I make a StackOverflow question I figure it out!
The answer is to call `.persist()` after the `.reply` call:
agent
.get('https://stackoverflow.com')
.intercept({
path: '/notarealendpoint',
method: 'GET',
})
.reply(200, "foo")
// this is new
.persist();
See https://github.com/nodejs/undici/blob/main/types/mock-interceptor.d.ts#L10.
There's also a `.times` function if you only want the mock to persist N times.
Seeing as a Set IS-A Collection, you get a Collection when you get the Set - you get both, and you can decide which one you want to use. What is the downside of that?
If you'd like to avoid `geoplot` or are experiencing crashing, you should be able to get the desired effect using `seaborn` and `geopandas` (though not with just `geopandas`, to my knowledge).
With data like this:
Polygon GeoDataFrame:
id geometry
0 polygon_1 POLYGON ((-101.7 21.1, -101.6 21.1, -101.6 21....
Points GeoDataFrame:
PointID geometry
0 0 POINT (-101.61326 21.14453)
1 1 POINT (-101.66483 21.18465)
2 2 POINT (-101.61764 21.11646)
3 3 POINT (-101.65355 21.12132)
4 4 POINT (-101.68183 21.17071)
5 5 POINT (-101.61088 21.14948)
6 6 POINT (-101.66336 21.17007)
7 7 POINT (-101.6774 21.14027)
8 8 POINT (-101.66757 21.13169)
9 9 POINT (-101.66333 21.12997)
in any CRS:
fig, ax = plt.subplots(figsize=(3, 3))
assert polygon_gdf.crs == points_gdf.crs
print("crs is", polygon_gdf.crs.coordinate_system)
polygon_gdf.plot(ax=ax, facecolor='none')
points_gdf.plot(ax=ax)
ax.set_axis_off()
crs is ellipsoidal
If the coordinate system is ellipsoidal, you can re-project to a Cartesian (projected) CRS such as an Albers Equal Area projection. Note that assigning to `.crs` only relabels the CRS without transforming the coordinates, so use `to_crs`; and since EPSG:9822 names the Albers projection method rather than a complete CRS, an estimated UTM zone is used here instead:
polygon_gdf = polygon_gdf.to_crs(polygon_gdf.estimate_utm_crs())
points_gdf = points_gdf.to_crs(polygon_gdf.crs)
print("crs is", polygon_gdf.crs.coordinate_system)
crs is cartesian
Then you can plot as follows:
import seaborn as sns
fig, ax = plt.subplots(figsize=(3, 3))
sns.kdeplot(
x=points_gdf.geometry.x,
y=points_gdf.geometry.y,
fill=False,
cmap="viridis",
bw_adjust=0.5,
thresh=0.05,
ax=ax,
)
polygon_gdf.plot(ax=ax, facecolor="none")
ax.set(xlabel="", ylabel="")
fig.tight_layout()
Bro, I have had this issue for a long time. Any solution?
Open the Assistant editor and, right after "Class", change the prefix of the view controller that opens (e.g. HomeView) to the name of the view controller you expect to open (e.g. SignupView). Then open that view controller in a new window; it will show an error on the class name (as two view controllers now have the same name). Now change the class back to what it should be. This will correct the Assistant so it opens the view controller you expected.
For anyone curious, you can now just set MaximumHeight, MinimumHeight, MaximumWidth, and MinimumWidth to a static value so that they won't change, something like this:
protected override Window CreateWindow(IActivationState? activationState)
{
const int newWidth = 1280;
const int newHeight = 640;
var window = new Window(new AppShell())
{
Width = newWidth,
Height = newHeight
};
window.MaximumHeight = newHeight;
window.MinimumHeight = newHeight;
window.MaximumWidth = newWidth;
window.MinimumWidth = newWidth;
return window;
}
Your XPath is not recognized by the browser. Try the normalize-space function:
//a[normalize-space()='Cookie Policy']
OR
Change strategy and use link text (Java below):
driver.findElement(By.linkText("Cookie Policy"));
Does anybody know of a way to disable live chat via the YouTube Data API? I have a Node.js script running pretty well for livestream scheduling, except that live chat is also enabled. I read the YouTube API documentation and searched online; there seems to be NO way of disabling live chat via the API, except to manually disable it before the stream goes live. This kind of defeats the purpose of scheduling the live stream using the API.
Any suggestion would be very much appreciated.
For those wondering what was (most likely) the actual "solution": the certificate is generated for a single, specific McMaster-Carr account, and you have to use that account's credentials in the POST body along with the certificate to be able to log in.
It seems inadvisable to register the name of your gateway filter in application.properties. According to the respective Spring Developer Guide, "this is not a supported naming convention" and should be avoided since it "may be removed in future releases". Here is the official advice:
Naming Custom Filters And References In Configuration
Custom filter class names should end in GatewayFilterFactory.
For example, to reference a filter named Something in configuration files, the filter must be in a class named SomethingGatewayFilterFactory.
I am having trouble making my gateway implementation work, too. In my case, Spring Cloud Starter Gateway (v4.2.3) somehow expects a CSRF token: whenever I hit the gateway with a POST request it answers 403 Forbidden: An expected CSRF token cannot be found.
I haven't figured out why this happens, since I disabled Spring Security in all downstream microservices. Here is what I have so far:
@Slf4j
@Component
public class AuthGatewayFilterFactory extends AbstractGatewayFilterFactory<AuthGatewayFilterFactory.Config> {

    private final RouteValidator routeValidator;
    private final JwtUtil jwtUtil;

    public AuthGatewayFilterFactory(RouteValidator routeValidator, JwtUtil jwtUtil) {
        super(Config.class);
        this.routeValidator = routeValidator;
        this.jwtUtil = jwtUtil;
    }

    @Override
    public GatewayFilter apply(Config config) {
        return ((exchange, chain) -> {
            ServerHttpRequest request = exchange.getRequest();
            if (routeValidator.isSecured.test(request)) {
                String header = request.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
                if (header == null || !header.trim().startsWith("Bearer ")) {
                    log.warn("Invalid authorization header");
                    exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                    return exchange.getResponse().setComplete();
                }
                try {
                    jwtUtil.validateJwtToken(header);
                } catch (JwtTokenMalformentException | JwtTokenMissingException e) {
                    log.error("Error during token validation: {}", e.getMessage());
                    exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                    return exchange.getResponse().setComplete();
                }
                String token = jwtUtil.getToken(header);
                Claims claims = jwtUtil.extractAllClaims(token);
                String csrfToken = request.getHeaders().getFirst("X-CSRF-TOKEN");
                ServerHttpRequest mutatedRequest = exchange.getRequest().mutate()
                        .header("X-User-Id", claims.getSubject())
                        .header("X-User-Username", claims.get("username", String.class))
                        .header("X-User-Email", claims.get("email", String.class))
                        .header("X-User-Roles", claims.get("roles", String.class))
                        .header("X-CSRF-TOKEN", csrfToken)
                        .build();
                return chain.filter(exchange.mutate().request(mutatedRequest).build());
            }
            // Non-secured route: pass through unchanged
            return chain.filter(exchange);
        });
    }

    public static class Config {
        private boolean validateCsrf = false;

        public boolean isValidateCsrf() {
            return validateCsrf;
        }

        public void setValidateCsrf(boolean validateCsrf) {
            this.validateCsrf = validateCsrf;
        }
    }
}
The issue was how I was copying my file structure in this part:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY package.json eslint.config.mjs tsconfig.json ./
COPY . ./frontend
RUN npm run build frontend
I was changing the application structure to try and help with caching, but this wasn't actually necessary and was just causing too many problems. Updating it to this:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npm run build
and changing the rest of the Dockerfile accordingly resolved the issue.
I also recommend re-orienting the image so that the chirp grating is either horizontal or vertical. (For the sample image I get a rotation angle of about 16.1 degrees to obtain horizontal grid lines.)
Now project the image either horizontally or vertically. What you get is a plot like this:
From the local maxima you get the coordinates of the gridlines, i.e. you can position the line selection appropriately and do the desired measurements; a small sketch of this projection step follows below.
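As a rough sketch of that projection step (in Python with NumPy, assuming the re-oriented image is already a 2-D array img with horizontal grid lines; a noisy image may need smoothing or a threshold first):
```python
import numpy as np

# Project horizontally: one mean intensity per image row.
profile = img.mean(axis=1)

# Local maxima of the profile are the row positions of the grid lines.
interior = profile[1:-1]
is_peak = (interior > profile[:-2]) & (interior > profile[2:])
line_rows = np.where(is_peak)[0] + 1
print(line_rows)
```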
Please tell us if this is of any help for you or if you need further help.
I think David_sd has a great answer; I'll add how you might adapt his approach to be smoother, or else how you could potentially use `plotly`.
import numpy as np
from matplotlib import pyplot as plt
# Prepare data
n = 100
cmap = plt.get_cmap("bwr")
theta = np.linspace(-4 * np.pi, 4 * np.pi, n)
z = np.linspace(-2, 2, n)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
T = (2 * np.random.rand(n) - 1) # Values in [-1, 1]
If you don't need the colormap, you can get a 3D curve very easily. It's a shame that you can't pass a `colormap` argument into the plotting function as you can with the scatterplot/surface plot options you mentioned.
ax = plt.figure().add_subplot(projection='3d')
ax.plot(x, y, z)
To apply a colormap, I'd use the same approach as David_sd (with the limitations you identified).
# Build segments for Line3DCollection
points = np.array([x, y, z]).T.reshape(-1, 1, 3)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
where points is a (100, 1, 3) representation of the points in space:
array([
[[ x1, y1, z1 ]],
[[ x2, y2, z2 ]],
[[ x3, y3, z3 ]],
...
])
and segments is a (99, 2, 3) representation of the point-to-point connections:
array([
[[ x1, y1, z1 ],
[ x2, y2, z2 ]],
[[ x2, y2, z2 ],
[ x3, y3, z3 ]],
...
])
Then you run the following, using `T[:-1]` to match the shape of `segments`.
from mpl_toolkits.mplot3d.art3d import Line3DCollection
from matplotlib.colors import Normalize
norm = Normalize(vmin=T.min(), vmax=T.max())
colors = cmap(norm(T[:-1])) # Use T[:-1] to match number of segments
lc = Line3DCollection(segments, colors=colors, linewidth=2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(lc)
# add_collection3d does not update the axis limits, so set them explicitly
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min(), y.max())
ax.set_zlim(z.min(), z.max())
plt.show()
This of course doesn't have the smooth gradient you want. One way to approximate that could be to pump up n (e.g., here I set n=1000). If that's not satisfying, I might switch to plotly, which gets a pretty good gradient even with n=100:
import plotly.graph_objects as go
# Create 3D line plot with color
fig = go.Figure(data=go.Scatter3d(
x=x,
y=y,
z=z,
mode='lines',
line=dict(
color=T,
colorscale='RdBu',
cmin=-1,
cmax=1,
width=6
)
))
fig.show()
I got the same issue because two jobs were reading a pickled file at the same time, and the file somehow got corrupted. I had to recreate the pickle file.
It turns out `setenv LD_LIBRARY_PATH /a/b/c/d/<gtk_version>/lib` fixed this problem.
One more const can be added:
constexpr const int* const DoubleImmutable() const {
    return &mutable_.x;
}
This is a const pointer to a const int.
I find CASE/WHEN easier to construct, as in @john-rotenstein's answer, but Redshift does have a built-in PIVOT function:
SELECT
a,
b,
id
FROM
(SELECT * FROM temp)
PIVOT
(MAX(value) FOR source IN ('A', 'B'))
ORDER BY
id
;
+----+----+---+
|a |b |id |
+----+----+---+
|C123|V123|111|
|C456|V456|222|
+----+----+---+
So I ended up recompiling ktfmt core with kotlin compiler (org.jetbrains.kotlin:kotlin-compiler) rather than kotlin compiler embedded (org.jetbrains.kotlin:kotlin-compiler-embeddable) with required changes. And the conflict goes away. Although I would rather not maintain my own version of ktfmt, I guess this will be my solution.
Make sure you add the CLASSPATH in the ~/.bashrc file and source the file after editing. I had the same issue and it got resolved after adding the CLASSPATH. The value should be the directory path where the connector is installed.
For anyone reaching this thread: instead of doing Results.Ok or simply returning an object from the minimal API endpoint, use Results.Content:
// before
return result;
// after
var json = Newtonsoft.Json.JsonConvert.SerializeObject(result);
return Results.Content(json, "application/json");
IBConsole is not isql; isql is the command line tool. While a lot of the "show Xxx" commands do work, that one has not been implemented at this time. I wasn't able to reproduce the AV in the current version; it just doesn't display anything.
The next release will have this implemented.
This is great and works perfectly, BUT could you (somehow) filter based on other columns as well? I would want to do this based on three columns (non-adjacent)... I have been trying to nest it, but it doesn't seem to be a good approach.
You can use CMachine.GetNCStatus() and check for NCStatusEnum.Run while the machine is in Auto mode, and use CycleComplete to check for the end of the cycle or the cycle time. The time interval between the two is the cycle time that completes the part program without error.
If you're using Get's snackbar, I think it's better and clearer to use this:
if (!Get.isSnackbarOpen) {
Get.snackbar(
title,
message,
snackPosition: SnackPosition.BOTTOM,
// You can add more customization here
);
}
I have a similar problem with boost 1.88.0 installed from the binary release file for Windows available here: https://archives.boost.io/release/1.88.0/binaries/boost_1_88_0-bin-msvc-all-32-64.7z
The above archive has a somewhat strange folder structure that doesn't seem to work very well with CMake. CMake `find_package` in CONFIG mode needs to find the file `BoostConfig.cmake`, which is contained in several different directories; you need to point it to the right one for you. For example, when using VS2022 for 64-bit x86 builds, you would use `lib64-msvc-14.3/cmake/Boost-1.88.0/BoostConfig.cmake`.
I found it easiest to set the environment variable `Boost_ROOT` to this directory rather than to the actual root directory of Boost. But I consider that a hack; I think the Boost developers ought to make this easier, perhaps by revising the folder structure to better match the search procedure used by CMake.
To calculate how much space a text will occupy, you can use:
String word = "My word";
Font font = new Font( "Arial", Font.PLAIN, 14 );
Canvas canvas = new Canvas();
FontMetrics fm = canvas.getFontMetrics( font );
int width = fm.stringWidth( word );
Now let's see how to apply it to a concrete case:
public class TextFormat {
    FontMetrics fm;

    void initialize() {
        Font font = new Font( "Arial", Font.PLAIN, 14 );
        Canvas canvas = new Canvas();
        fm = canvas.getFontMetrics( font );
    }

    // This method is very simple: it creates an array of phrases by dividing the
    // original string into lines, then iterates over the array creating an array
    // of words, adding to **out** on each iteration the return of the **cut** method.
    String formatText( String initialInput, int maxWidth ) {
        String[] lines = initialInput.split( "\n" );
        String out = "";
        for( int i = 0; i < lines.length; i++ ) {
            String[] words = lines[ i ].split( " " );
            out += cut( words, 0, maxWidth );
        }
        return out;
    }

    // This method uses recursion. It iterates over the array of words, appending
    // to **aux** the content of **space** ("" on the first iteration, " " on the
    // following ones) concatenated with the words of the array. It then checks
    // whether the length exceeds the limit, in which case it returns the trimmed
    // value of **aux** (removing the last added word) plus "\n" plus the return
    // of **cut**, passing the array of words, the value of **i** (so that it
    // continues iterating from the first word not added) and **maxWidth**. For
    // the last line, it returns the possible remainder.
    String cut( String[] words, int init, int maxWidth ) {
        String aux = "",
               space = "";
        for( int i = init; i < words.length; i++ ) {
            aux += space + words[ i ];
            int width = fm.stringWidth( aux );
            if( width > maxWidth ) {
                return aux.substring( 0, aux.length() - words[ i ].length() - 1 )
                        + "\n" + cut( words, i, maxWidth );
            }
            if( space.isEmpty() ) space = " ";
        }
        return aux + "\n";
    }
}

class Main {
    String text = "Hi! Please provide code you have tried this with."
            + " Please don't use StackOverflow as a code delivery service.\n"
            + "If you provide a minimal reproducible example we can help debug. "
            + "See how to ask";

    void init() {
        TextFormat tf = new TextFormat();
        tf.initialize();
        tf.formatText( text, 200 );
    }

    public static void main( String x[] ) {
        new Main().init();
    }
}
Out:
Hi👋🏻! Please provide code you have tried this with. Please don't use StackOverflow as a code delivery service. If you provide a minimal reproducible example we can help debug. See how to ask
I encountered the same problem after the Windows 24H2 update; my problem was solved by removing ASP support from the Windows components install/remove menu and reinstalling it. (I tried this for localhost; I don't know about the server side.)
That's a bug. Thanks for finding and reporting it.
Hey, did you find the answer to your question? I'm facing this issue now in 2025. Please help.
The short answer is no — you can’t run Excel VBA macros directly from Node.js, and libraries like xlsx won’t help with that either.
The main reason? Two different systems/mechanics
VBA is Excel’s territory: Macros are interpreted and executed by Excel itself. There’s no standalone VBA runtime you can tap into from Node.js.
Node.js lives outside that world: It can read and write Excel files at the data level, but it can’t talk to Excel as a running application or trigger internal macro execution.
An npm package like xlsx doesn't bridge that gap: it's great for manipulating spreadsheets (cells, styles, structure) and can even preserve embedded VBA code (the raw VBA code, should it exist), but it doesn't run it. That part stays inert unless Excel opens the file and runs it, which cannot be triggered by Node.js.
You would need something that can actually launch Excel, like PowerShell. I hope that helps.
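As an illustration of that last point, a hypothetical sketch (Windows with Excel installed; the workbook path and macro name are placeholders) that launches PowerShell from Node.js and drives Excel over COM:
```js
const { execFile } = require("node:child_process");

// PowerShell drives Excel via COM; Node.js only launches the process.
const ps = `
$excel = New-Object -ComObject Excel.Application
$wb = $excel.Workbooks.Open("C:\\data\\book.xlsm")
$excel.Run("MyMacro")
$wb.Save(); $wb.Close(); $excel.Quit()
`;

execFile("powershell", ["-NoProfile", "-Command", ps], (err) => {
  if (err) throw err;
  console.log("macro finished");
});
```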
As best I can tell the state being reset is just the error and data properties. It doesn't say more (like anything about caching behavior or whatever else). But whether clearing these properties will be useful depends on your app, not react-query's internal state. i.e. if you're using either property being non-nullish as an indicator that some event has occurred - like using the properties in a dependency array - you might want to reset them before firing another mutation.
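For example, a sketch (assuming a TanStack Query v4/v5-style useMutation; createTodo is a placeholder mutation function):
```tsx
import { useMutation } from "@tanstack/react-query";

function NewTodoButton() {
  const mutation = useMutation({ mutationFn: createTodo });

  const handleClick = () => {
    mutation.reset();                // clears data and error from the last run
    mutation.mutate({ title: "x" }); // then fire the next mutation
  };

  return <button onClick={handleClick}>Add</button>;
}
```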
I am going to accept the answer from @EdMorton as best here; the main reason is that it allows me to do:
deserial() {
declare function_to_call="$1"
declare -n assoc_array_ref="$2"
while IFS= read -r -d '' line; do
assoc_array_ref+=( $line )
done < <($function_to_call)
}
and then simply call `deserial func Y` - this populates the associative array Y with what was returned from the function `func`, universally.
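For illustration, a hypothetical producer and call (note the implementation relies on word splitting of $line and, for associative arrays, on bash 5.1+ key/value compound assignment):
```bash
# Prints NUL-delimited "key value" pairs.
func() { printf '%s\0' "alpha 1" "beta 2"; }

declare -A Y
deserial func Y
declare -p Y   # declare -A Y=([alpha]="1" [beta]="2" )
```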
Use the Kotlin wrapper method — it's clean, safe, and doesn't require changes from the library provider.
Alternative Hack (If Kotlin wrapper isn't an option)
You could technically use reflection in Java to instantiate and access the class, but this approach is messy, error-prone, and discouraged unless there's absolutely no other option. Reflection bypasses compile-time checks, meaning you lose type safety, IDE support, and readability — which makes the code much harder to debug, maintain, and scale. It's more of a last-resort workaround than a proper solution.
Actually yes :) I know it is late, but for others interested in this, one can write:
SELECT "{{COLUMN_NAME}}"
FROM MY_TABLE
WHERE "{{COLUMN_NAME}}" > 0
where COLUMN_NAME is a text parameter (or one of another type). This will work as long as the provided value is a column name.
Relevant example for web using Google's Material Icons font:
https://www.w3schools.com/icons/tryit.asp?filename=tryicons_google-voicemail
Alternatively (not great), there's the tape icon: 🖭 / U+1F5AD / Tape Cartridge
I got this error when I tried to add all subnets to the endpoint via Pulumi (IaC).
Apparently, you can't create more than one interface per AZ.
If you try it with the Console UI, it does not give you such an option.
I had the same problem: I couldn't add or find my bank until I selected another country. In my case I couldn't find ING in the Netherlands, but after I selected Belgium and searched for ING, I went back to searching for ING in the Netherlands and was able to find it and add my bank account (with the same info I had tried when it didn't work).
I was wondering if the use of trim function in conditional formatting would mitigate its impact on sheet performance.
I could take a look. Please post the IDs of the work items for which you didn't receive a notification. Note that we give up quickly if the callback URL is unavailable: we retry 5, 15, and 30 seconds later, and then give up.
1. Close Visual Studio completely and delete these cache folders:
%LocalAppData%\Microsoft\VisualStudio\<Version>\ComponentModelCache
%LocalAppData%\Microsoft\SQL Server Data Tools
2. (Replace `<Version>` with your VS version, e.g. `17.0` for VS 2022.)
3. Reopen VS and retry publishing.
I think I found the answer to my question. My goal is to create node embeddings using the random walk algorithm, so I could potentially store only the node_ids in the projected graph. Then, while creating the random walk for each start node, I collect all the node_ids and make a Cypher query to retrieve their properties and labels separately. With this approach I reduce the memory footprint, but lose time by making one additional query per walk. Would this be a viable solution?
This should be fixed in update 7. https://blogs.embarcadero.com/embarcadero-interbase-2020-update-7-released/
It seems to be a threading issue with your polling loop. All API objects must be created by the main thread or by threads created by the main thread. Accessing API objects from other system threads, such as a worker thread or a system timer, might produce undesirable results.
Just a note for others: setting `UIPrefersShowingLanguageSettings` and `CFBundleDevelopmentRegion` in the Info.plist doesn’t reliably show the "App Language" option in Settings on iOS 16. This behavior seems more consistent on iOS 17 and later.
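For reference, the two keys mentioned look like this in Info.plist (values illustrative):
```xml
<key>UIPrefersShowingLanguageSettings</key>
<true/>
<key>CFBundleDevelopmentRegion</key>
<string>en</string>
```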
You are a lifesaver...
I was importing my old project and had been thinking about this (exactly the same) problem the whole day, and could not get why it happened.
Just look at THESE LETTERS xD
I changed the code to set the autofocus:
<ActionButton id="MudButtonSearch"
              AutoFocus="false"
              ButtonType="ButtonType.Submit"
              Disabled="@(!context.Validate() || model.ProviderNumber == null)">
    Search
</ActionButton>
// ----
private bool autoFocus = false;
private async Task<IEnumerable<string>> SearchProviderNumbers(string providerNumber, CancellationToken token)
{
    var result = (IEnumerable<string>) dtoConfig.ProviderNumbersSearchData
        .Where(x => x.StartsWith(providerNumber, StringComparison.InvariantCultureIgnoreCase)).ToList();
    if (!result.Any())
    {
        autoFocus = true;
    }
    return result;
}
The new code executed, but the results were the same.
I don't think changing it makes any difference; I believe this request cannot be satisfied.
Thank you @jonsharpe!
Using an updater function to set the state fixes it.
const [items, setItems] = useState<Item[]>([]);

const createItem = useCallback(async (item: Item) => {
    info("posting new Item");
    fetch(`${API_SERVER}/CreateItem`, { method: "POST", body: JSON.stringify(item) })
        .then(response => response.json())
        .then(created => setItems(items => [...items, created]));
}, []); // the updater function removes the need to depend on items
Another alternative, which may coincidentally be the server's timezone, if you have the proper permissions...
select setting from pg_settings where name ='log_timezone';
I also met the same issue; every process didn't work. In my case (I'm not completely sure why), not using the replace function in VS Code, but copying and pasting the path one by one, even for the same sentence, let it find the file.
In fact, logback has an InvocationGate; it prevents files from rolling too quickly:
- write the entries exceeding the size limit
- wait 60 seconds
- write a new entry
These steps successfully triggered the rolling.
No, SSMS cannot run directly on VMware Fusion running on Apple Silicon Macs because:
VMware Fusion for Apple Silicon supports only ARM-based virtual machines.
SSMS is a Windows application built for x86/x64 architecture.
There is no official Windows x86/x64 VM support on Apple Silicon via VMware Fusion.
Windows ARM version exists but SSMS does not have a native ARM version, so it won’t run properly even on Windows ARM.
There's now a combined VC C++ 2015-2022 redistributable available which will support old and new versions of wkhtmltopdf.exe
As previously mentioned you'll want the x86 version rather than x64:
https://aka.ms/vs/17/release/vc_redist.x86.exe
Full details
Could you please try again with https for the Origin = https://dev.getExample.com?
Make sure it's NOT http://dev.getExample.com.
There is a workaround for this:
Go to Tools => Projects and Solutions => Web Projects,
then uncheck "Stop debugger when browser window is closed, close browser when debugging stops".
But this will keep your project's web page open.
I initially copied logo.png to the wrong location. To fix this, I located the correct path of the default image (superset-logo-horiz.png) inside my Docker container and used it as a reference:
$ docker exec -it <my container id> find / -name "superset-logo-horiz.png"
Then, in the Dockerfile:
COPY logo.png /usr/local/lib/python3.10/site-packages/superset/static/assets/images/logo.png
What worked for me was installing psycopg2 like this:
pip install psycopg2-binary
As this answer suggests: https://stackoverflow.com/a/58984045/11677302
Looks like this is explicitly not supported by PyPI: https://docs.pypi.org/trusted-publishers/troubleshooting/
Reusable workflows cannot currently be used as the workflow in a Trusted Publisher. This is a practical limitation, and is being tracked in warehouse#11096.
Time to refactor our GitHub actions I guess.
How do I clear specific site data in Chrome? Delete specific cookies:
1. On your computer, open Chrome.
2. At the top right, select More > Settings.
3. Select Privacy and security > Third-party cookies.
4. Select See all site data and permissions.
5. At the top right, search for the website's name.
6. To the right of the site, select Delete.
7. To confirm, select Delete.
Has anyone managed to get this to work using v4?
<?xml version="1.0" encoding="utf-8"?><Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
android:keyWidth="10%p"
android:horizontalGap="0px"
android:verticalGap="0px"
android:keyHeight="60dp">
<!-- First row -->
<Row>
\<Key android:codes="-1" android:keyLabel="Ⲁ" /\>
\<Key android:codes="-1" android:keyLabel="Ⲉ" /\>
\<Key android:codes="-1" android:keyLabel="Į" /\>
\<Key android:codes="-1" android:keyLabel="O" /\>
\<Key android:codes="-1" android:keyLabel="Ꞗ" /\>
\<Key android:codes="-1" android:keyLabel="V" /\>
\<Key android:codes="-1" android:keyLabel="G" /\>
\<Key android:codes="-1" android:keyLabel="Ɠ" /\>
\<Key android:codes="-1" android:keyLabel="Đ" /\>
\<Key android:codes="-1" android:keyLabel="X" /\>
</Row>
<!-- Second row -->
<Row>
\<Key android:codes="-1" android:keyLabel="Ⲍ" /\>
\<Key android:codes="-1" android:keyLabel="ꓙ" /\>
\<Key android:codes="-1" android:keyLabel="Ƥ" /\>
\<Key android:codes="-1" android:keyLabel="𐍆" /\>
\<Key android:codes="-1" android:keyLabel="Ӈ" /\>
\<Key android:codes="-1" android:keyLabel="𐌺" /\>
\<Key android:codes="-1" android:keyLabel="Ɫ" /\>
\<Key android:codes="-1" android:keyLabel="Ұ" /\>
\<Key android:codes="-1" android:keyLabel="𐌼" /\>
\<Key android:codes="-1" android:keyLabel="ꓚ" /\>
</Row>
<!-- Third row -->
<Row>
\<Key android:codes="-1" android:keyLabel="Ꙅ" /\>
\<Key android:codes="-1" android:keyLabel="Õ" /\>
\<Key android:codes="-1" android:keyLabel="Ŋ" /\>
\<Key android:codes="-1" android:keyLabel="Ɍ" /\>
\<Key android:codes="-1" android:keyLabel="𐍃" /\>
\<Key android:codes="-1" android:keyLabel="Ⲧ" /\>
\<Key android:codes="-1" android:keyLabel="Ư" /\>
\<Key android:codes="-1" android:keyLabel="Q" /\>
\<Key android:codes="-5" android:keyLabel="⌫" /\> \<!-- Backspace --\>
</Row>
<!-- Fourth row -->
<Row android:rowEdgeFlags="bottom">
\<Key android:codes="-2" android:keyLabel="🌐" /\> \<!-- Language switch --\>
\<Key android:codes="32" android:keyLabel="␣" android:keyWidth="40%p" /\> \<!-- Space --\>
\<Key android:codes="10" android:keyLabel="⏎" android:keyWidth="20%p" /\> \<!-- Enter --\>
</Row>
</Keyboard>
| header 1 | header 2 |
|----------|----------|
| cell 1   | cell 2   |
| cell 3   | cell 4   |
Are you sure the db object is actually not null/undefined? Try using another function instead of transaction and see if it shows you the same error or, alternatively, try doing a console.log() of the db.
Jacob Kaplan's book A Criminologist's Guide to R has a section on downloading different file types. As for .txt files in ASCII (which my file type was), he writes "hopefully you'll never encounter" this "very old file format system" (p. 51). To read it into R, he created the asciiSetupReader package. I installed that package and used it on my ASCII .txt file. It still didn't work. So I downloaded the NIBRS file in SPSS format and tried read_sav() (from the haven package). This still didn't work. So I used the asciiSetupReader function read_ascii_setup() with the SPSS file:
NIBRS2014_1 <- read_ascii_setup("C:/Users/steve/OneDrive/Desktop/RaceEconProfiling/NIBRS/ICPSR_36421_SPSS_2014/DS0002/36421-0002-Data.txt" , "C:/Users/steve/OneDrive/Desktop/RaceEconProfiling/NIBRS/ICPSR_36421_SPSS_2014/DS0002/36421-0002-Setup.sps")
This worked!
Thank you! I think I was just tired and was missing something so obvious.
Something to try:
Make sure you have enable_partition_pruning set to on. I believe the documentation states that this is the default, but I have found it not set before.
https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITION-PRUNING
See how it compares for you:
-- see if EXPLAIN is different when pruning is ON
SET enable_partition_pruning = on;
EXPLAIN analyse select * from "IpLocation" where "IpFrom"<=1503395841 and "IpTo">=1503395841 limit 1;
-- compare when pruning is OFF
SET enable_partition_pruning = off;
EXPLAIN analyse select * from "IpLocation" where "IpFrom"<=1503395841 and "IpTo">=1503395841 limit 1;
Select or Double-click the word you want to edit. This highlights all its occurrences via Smart Highlighting.
Go to the menu:
Edit > Multi-Select All
Here, you will see options like Match Whole Word Only and Match Case & Whole Word.
Choose the appropriate option to select all occurrences of the word as actual selections (multi-cursors), not just highlights.
Now, all occurrences are selected as editable cursors. You can type to replace or edit all of them simultaneously.
Bonus tip: assign a keyboard shortcut to the menu action. This way you can multi-edit all occurrences of selected text with just a press of a shortcut (just like in sublime)
I'm using an older version of MUI in a legacy project and the Select seems to have way less props when compared to the TextField.
For example, the "helperText" doesn't exist in the Select and needs to be added manually as a new component, whereas it's native on the TextField.
Another example would be the color of the error, the Select has a slightly lighter red and doesn't actually make the question label red... Talking about labels, that's manually done in the Select as well using the "InputLabel" component.
Though the Select, by making you do everything manually, gives you more control over how the components render on the screen whereas the TextField with select=true gives you all these features out of the box and gives you cleaner code.
In my personal opinion, use the TextField unless you need to render things weirdly/differently. Though you could do that in the TextField by passing inputprops, helpertextprops, inputlabelprops, etc, it's way more cumbersome IMO.
I'm using colored Emojis when printing to the debug console using plain print statements.
This is not correct:
"types": "./dist/index.js"
You have to use:
"types": "./dist/index.d.ts"
Resolved by setting password (DROPBEAR_SVR_PASSWORD_AUTH=1) instead of key (DROPBEAR_SVR_PUBKEY_AUTH=1)
After looking at the source code, it looks like `git bisect reset` simply runs `checkout` on the contents of `.git/BISECT_START`. So I ended up adding an alias for the following command, which does just that (a sketch of the alias definition follows below): `git checkout $(cat $(git rev-parse --show-toplevel)/.git/BISECT_START)`.
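For reference, the alias might be defined like this (the name bisect-abort is just illustrative):
```bash
git config --global alias.bisect-abort \
  '!git checkout $(cat $(git rev-parse --show-toplevel)/.git/BISECT_START)'
```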
If internal_ref is a string in your database, you have to wrap the value in quotes ('').
Did you manage to fix this? If so, could you please share what you changed, as we currently have this issue.
Thanks!
Sarah
No, you can't change the value of EXPO_PUBLIC_API_URL after building the APK, because Expo embeds the .env values at build time. To change it post-build, you need to rebuild the APK with new .env values.
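So the fix is to update .env and produce a new build, e.g. (assuming an EAS build setup):
```bash
# EXPO_PUBLIC_* values are inlined at build time, so rebuild after editing .env
eas build --platform android
```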
To show another window modally, you `present` the dialog window and call the method `set_transient_for`, passing the parent window.
```
let d = DialogWindow::new();
d.set_transient_for(Some(parent_window));
d.present();
```
You're absolutely right - the issue is that volume_mounts isn't included in the template_fields of KubernetesPodOperator, so Airflow doesn't apply templating to it at all.
I've run into this exact same problem before. Here are a few approaches that actually work:
Monkey patch the template_fields (quick and dirty)
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
# Add volume_mounts to template_fields
KubernetesPodOperator.template_fields = KubernetesPodOperator.template_fields + ('volume_mounts',)
@dag(
dag_id=PIPELINE_NAME,
schedule=None,
params={
"command": "",
"image": "python:3.13-slim",
"shared_data_mount_path": "/mnt/data/"
}
)
def run_arbitary_command_pipeline():
# ... your existing code ...
run_command = KubernetesPodOperator(
task_id="run_arbitrary_command",
cmds=["sh", "-c", "{{ params.command }}"],
image="{{ params.image }}",
volumes=[k8s.V1Volume(name=pvc_name, persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name=pvc_name))],
# Use dict instead of V1VolumeMount object for templating to work
volume_mounts=[{
'name': pvc_name,
'mount_path': "{{ params.shared_data_mount_path }}"
}],
)
Custom operator (cleaner approach)
class TemplatedKubernetesPodOperator(KubernetesPodOperator):
template_fields = KubernetesPodOperator.template_fields + ('volume_mounts',)
# Then use TemplatedKubernetesPodOperator instead of KubernetesPodOperator
The key insight here is that you need to use dictionary format for volume_mounts when templating is involved, not the k8s.V1VolumeMount objects. Airflow will convert the dicts to proper Kubernetes objects after template rendering.
I personally prefer Option 1 for one-off cases since it's simpler, but if you're going to reuse this pattern across multiple DAGs, definitely go with the custom operator approach.
Also make sure you're defining your params in the @dag decorator with the params argument, not as function parameters. That's another common gotcha.
Try using `region` instead of `initialRegion` as a prop on your `MapView`. This will re-render the children of your `MapView` on changes of the `region` state; a sketch follows below.
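A minimal sketch of that pattern (react-native-maps; the initial region values and the markers prop are placeholders):
```tsx
import React, { useState } from "react";
import MapView, { Marker, Region } from "react-native-maps";

const INITIAL_REGION: Region = {
  latitude: 21.1,
  longitude: -101.6,
  latitudeDelta: 0.1,
  longitudeDelta: 0.1,
};

export function Map({ markers }: { markers: { id: string; latitude: number; longitude: number }[] }) {
  const [region, setRegion] = useState<Region>(INITIAL_REGION);
  return (
    <MapView region={region} onRegionChangeComplete={setRegion}>
      {markers.map(m => (
        <Marker key={m.id} coordinate={{ latitude: m.latitude, longitude: m.longitude }} />
      ))}
    </MapView>
  );
}
```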
You have to choose a resampling method with nearest-neighbor interpolation. In `pillow`, use something like
w, h = bw_image.size
scale_factor = 16
img_scaled = bw_image.resize((w * scale_factor, h * scale_factor), resample=Image.NEAREST)
where `scale_factor` is your integer scaling factor. Result:
When using `matplotlib` you can also scale only for display, i.e.
import matplotlib.pyplot as plt
plt.imshow(bw_data, cmap='gray', interpolation='nearest')
You can disable the `performance.rtp.reset` configuration option:
lazy.setup({
...
performance = {
rtp = {
reset = false,
},
},
})
Consider adding an index on transactions(status, id, amount) to reduce the time spent by the SELECT.
This should minimize complete row reads; a sketch follows below.
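A sketch of that index (standard SQL; the index name is illustrative):
```sql
CREATE INDEX idx_transactions_status_id_amount
    ON transactions (status, id, amount);
```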