The proposed setup in my question works perfectly. I realized I hadn't restarted the Caddy container for a while. When I checked the Caddyfile, it actually contained some lines from a previous attempt at getting fonts working:
@fonts {
    path *.woff *.woff2 *.ttf *.eot *.svg
    path_regexp \.(woff|woff2|ttf|eot|svg)$
}
handle @fonts {
    header Cache-Control "public, max-age=31536000"
    header Access-Control-Allow-Origin ""
    file_server
}
Removing this and restarting the Caddy container with the Caddyfile I provided in the question worked.
It should be `--base-href /myapp/`, not `—-base-href=/myapp/`.
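For example, with the Angular CLI (assuming that's the tool in use; note the two plain hyphens and the space before the value):

```bash
ng build --base-href /myapp/
```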
Just for completeness, I'll add the trivial case. The error may be exactly what the message says in its simplest form: a class `YourClass` is declared twice with a statement `class YourClass { ... }`, because you included the file `YourClass.php` twice.
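A minimal sketch of the failure and the usual fix (file names are hypothetical):

```php
<?php
// This reproduces the error (both statements load the same class definition):
// require 'YourClass.php';
// require 'YourClass.php'; // Fatal error: Cannot declare class YourClass

// The fix: require_once loads the file at most once.
require_once 'YourClass.php';
require_once 'YourClass.php'; // second call is silently skipped

$obj = new YourClass();
```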
What you want is not really the same as the datetime standards mentioned in the comments above; however, your code works. I see you have defined a ModelBinder for the `DateOnly` type. If you want the input format to match your output format, you should change this step from:
if (DateOnly.TryParse(value, out var date))
{
bindingContext.Result = ModelBindingResult.Success(date);
}
to
var value = valueProviderResult.FirstValue;
string format = "dd.MM.yyyy";
CultureInfo culture = CultureInfo.InvariantCulture;
if (DateOnly.TryParseExact(value, format, culture, DateTimeStyles.None, out var date))
{
bindingContext.Result = ModelBindingResult.Success(date);
}
In VSCode, you can use the Microsoft Serial Monitor extension to see the serial output of the ESP32.
When you see the label “Internal” under your build in App Store Connect, it indicates that the build was submitted using the “TestFlight (Internal Only)” option in Xcode.
To make the build available for External Testing, you must select “App Store Connect” as the distribution option during the upload process in Xcode—not “TestFlight”. This ensures the build is eligible for submission to Apple for external TestFlight review.
I have the same issue. What is the proper way of connecting an existing database?
Thanks, this quickly helped me with the error I was facing. How can I use it?
Seems to be a known Chromium bug and should be fixed in version 137: https://issues.chromium.org/issues/415729792
Turns out this has nothing to do with AWS, NextJS, or any of the code; it's an MS Word Trust Center setting. I found two possible solutions (depending on your security appetite):
Option 1 - Find the downloaded file in your file explorer, right-click --> Properties, and check the 'Unblock' box at the bottom. This needs to be done on a file-by-file basis.
Option 2 - Open Word and go to File --> Options --> Trust Center --> Trust Center Settings --> Protected View and unselect the 'Enable Protected View for files originating from the Internet' check box. Then restart Word and thereafter, all files will open correctly.
I believe that in order to do that you have to join Apple's developer program. It's $100 yearly.
I posted this also on the marshmallow github page, and was able to get a good response there.
https://github.com/marshmallow-code/marshmallow/issues/2829
from marshmallow import Schema, fields, validate

class RequestNumber(fields.String):
    def __init__(self, *args, **kwargs):
        super().__init__(
            *args,
            metadata={'description': 'Request Number', 'example': 'REQUEST12345'},
            validate=validate.Regexp(
                regex=r"^REQUEST\d{3,9}$",
                error="Input string didn't match required format - REQUEST12345",
            ),
            **kwargs,
        )

class Api1:
    class Input1(Schema):
        request_number = RequestNumber()
For anyone else on this page looking for the answer: I found it, as recommended by RickN in the comments :)
Replace: `const keyBytes = Buffer.from(key, "base64");`
With: `const keyBytes = Buffer.from(key.trim(), "base64url");`
As Ike mentioned in a comment, you don't need to delete the whole column. You can copy the data into a temporary extra column (probably copy, then paste values). Then, in the column with the incorrect column formula, highlight the whole column (minus the table header) and choose "clear contents" (which is the crucial step, not just the Delete key). This removes the computed formula from the column (which you can test by adding an extra row). Then you can copy and paste the data back into your now-cleaned column. All external formulas and other column references will carry on working for that column without needing to recreate those references.
I know this is old thread, but I was looking for this answer, and with the help of Ike's comment, I managed to keep everything working, which was awesome for my complex spreadsheet. Wanted to add this comment for anyone following with the same issue.
Thanks, best answer that I have seen today.
Turns out it's because the new version of flask-session is not compatible with the old version of Airflow. I limited my flask-session < 0.6 and it works just fine!
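For example, pinning it with pip (a sketch; adapt to your own dependency file):

```bash
pip install "flask-session<0.6"
```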
Can I achieve that using MAXScript?
The easiest way to do this is through this VSCode extension: https://marketplace.visualstudio.com/items?itemName=noknownerrors.copy-visible-text (copy-visible-text). It does exactly what it says. Just install it, then select what you need and press Ctrl+Shift+C – and voilà!
I checked 8b5bad8c0b214a1c9eec2bd86aa274c4. The callback failed with BadRequest.
You allude to this in your comment, so I'll add an example of how I've addressed this by manipulating linewidth rather than alpha to get a similar visualization of distribution that still has sharp lines.
Here's a replication of your current approach:
import matplotlib.pyplot as plt
import numpy as np
num_lines = 1000
np.random.seed(42)
xs = np.linspace(0, 10, 100).reshape(1,-1)
ys = xs*np.random.normal(1,1,(num_lines,1)) + np.random.normal(0, 1, (num_lines,100))
for y in ys:
    l = plt.plot(xs.flatten(), y, 'k', alpha=0.01)
    l[0].set_rasterized(False)
plt.savefig('ex.svg')
plt.show()
Here's an alternative -- I also try to explicitly tell matplotlib not to rasterize via `ax.set_rasterization_zorder(None)` (I believe this is the same as your `l[0].set_rasterized(False)` call). The difference is that I switch to manipulating `linewidth` rather than `alpha`. I think the effect is fairly similar.
fig, ax = plt.subplots()
ax.set_rasterization_zorder(None)
num_lines = 1000
np.random.seed(42)
xs = np.linspace(0, 10, 100).reshape(1, -1)
ys = xs * np.random.normal(1, 1, (num_lines, 1)) + np.random.normal(0, 1, (num_lines, 100))
for y in ys:
    ax.plot(xs.flatten(), y, color='black', linewidth=0.01)
fig.savefig('ex_width.svg', format='svg')
When you zoom way in, you can see that the `alpha` approach (left) is fuzzier than the `linewidth` approach (right):
Turns out this was indeed a bug. Fixed by this PR: https://github.com/odin-lang/Odin/pull/5267
I know this question is already answered, but it is missing some main points; these are my findings. The source of this answer is here.
1- Azure AD External Identities was the previous name of Azure AD B2C. Azure AD B2C is a business-to-consumer identity management system.
2- Microsoft Entra External ID is a combination of Azure AD B2C and Azure AD B2B (now Entra ID). When you create a Microsoft Entra External ID tenant, the system creates two types of tenants:
Workforce (B2B)
External (B2C)
The Workforce tenant is used for AD B2B (Entra ID) operations. The External tenant is used for Azure AD B2C operations.
To learn more about Microsoft Entra External ID, check here.
I've managed to create a Bash script that polls the local DynamoDB Stream using the AWS CLI and invokes the local Lambda with an event.
You can integrate it as part of the Docker Compose stack - I suggest using an amazon/aws-cli image.
https://gist.github.com/aldotroiano/69f3aaf900cec845c954329a55620f10
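In essence the loop looks something like this (a rough sketch, not the gist itself; the table name, function name, ports, and the use of `jq` are all assumptions):

```bash
#!/usr/bin/env bash
DDB=http://localhost:8000      # dynamodb-local endpoint
LAMBDA=http://localhost:3001   # local Lambda endpoint

STREAM_ARN=$(aws dynamodbstreams list-streams --endpoint-url "$DDB" \
  --table-name MyTable --query 'Streams[0].StreamArn' --output text)
SHARD_ID=$(aws dynamodbstreams describe-stream --endpoint-url "$DDB" \
  --stream-arn "$STREAM_ARN" --query 'StreamDescription.Shards[0].ShardId' --output text)
ITER=$(aws dynamodbstreams get-shard-iterator --endpoint-url "$DDB" \
  --stream-arn "$STREAM_ARN" --shard-id "$SHARD_ID" \
  --shard-iterator-type TRIM_HORIZON --query 'ShardIterator' --output text)

while true; do
  RESP=$(aws dynamodbstreams get-records --endpoint-url "$DDB" --shard-iterator "$ITER")
  if [ "$(echo "$RESP" | jq '.Records | length')" -gt 0 ]; then
    # forward the records to the local Lambda as a stream-like event
    echo "$RESP" | jq '{Records: .Records}' > /tmp/event.json
    aws lambda invoke --endpoint-url "$LAMBDA" --function-name MyFunction \
      --cli-binary-format raw-in-base64-out --payload file:///tmp/event.json /dev/null
  fi
  ITER=$(echo "$RESP" | jq -r '.NextShardIterator')
  sleep 1
done
```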
This works for me, hope it helps.
DateTime currentTime = DateTime.Now;
if (currentTime.Hour >= 5 && currentTime.Hour < 12)
{
ltWellcome.Text = "Good morning " + strUserName + "! Welcome to the system";
}
else if (currentTime.Hour >= 12 && currentTime.Hour <= 17)
{
ltWellcome.Text = "Good afternoon " + strUserName + "! Welcome to the system";
}
else if (currentTime.Hour >= 18 && currentTime.Hour <= 23)
{
ltWellcome.Text = "Good evening " + strUserName + "! Welcome to the system";
}
else
{
ltWellcome.Text = "Good night " + strUserName + "! Welcome to the system";
}
Try using `interaction` instead of `inter`:
interaction: discord.Interaction
If I understood, what you want is to remove all the `text.style.transform = ...` from your code.
Object value = mDataSnapshot.child("Suhu").getValue();
String suhu;
if (value != null) {
suhu = value.toString();
// Use the 'suhu' variable
} else {
suhu = ""; // Or handle null case appropriately
// Handle the case where the value is null
}
A1: The instance needs to have access to the Internet via a public IP or by using Cloud NAT so it can query the repository.
A2: Please also try installing the requested packages, such as the Google Cloud SDK (`google-cloud-sdk`), before the migration.
A3: Those flags are not related to your issue.
Memory issues like this often happen when the build container doesn't have enough RAM, even if you set `--max-old-space-size=8192`. Try increasing the pipeline memory size if possible, and monitor memory usage during the build to spot where it spikes. You can also test the build locally with the same Node options to see if it fails there, which helps isolate whether it's environment-related. Clearing yarn and Docker caches or temporarily disabling heavy build plugins might help narrow down the cause. Lastly, check if your pipeline environment supports swap space, as that can prevent the build process from being killed early.
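For example, to reproduce the build locally with the same heap limit (assuming a yarn-based build script):

```bash
NODE_OPTIONS="--max-old-space-size=8192" yarn build
```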
Thank you mikasa, you saved me from debugging hell; not even the AIs helped.
The problem seems to be a new setting, "Security: system enforce file extension mime type consistency" (Settings -> Feature Toggles); after disabling it, it works fine.
The best solution for encoding in search params format is:
new URLSearchParams({ [""]: value }).toString().slice(1)
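For example (the leading `=` produced by the empty key is what `slice(1)` removes):

```js
new URLSearchParams({ [""]: "a b&c" }).toString().slice(1); // "a+b%26c"
```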
Don't use `jeprof*.heap` when generating your gif. Use `jeprof.PID_OF_THE_APP.*.heap` instead.
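For example (a sketch; the binary name and PID are placeholders):

```bash
# pick the dumps from one specific process instead of every jeprof*.heap file
jeprof --gif ./myapp jeprof.12345.*.heap > profile.gif
```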
Make sure you're importing `ThemeProvider` from `@mui/material`, not `@emotion/react`. I had the same issue and this fixed it for me.
SQL Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
If you're using Android Studio, you may need to update your Build Variants. On version 2024.2.1 I had to choose, from the sidebar, Build Variants > [Re-import with defaults] (button at the bottom right of the panel).
Thanks to @DavidMaze I solved it. It was just a matter of deleting the strings and passing them as actual paths, and the script did what I wanted (stop running once I closed all the apps).
Thank you also to @furas for simplifying my code; it runs much faster.
Final solution:
import os
import sys
import subprocess
from subprocess import Popen
def back_insert(filename):
    # inserts backslashes into paths that have spaces in them
    fstring = filename.replace(" ", "\\ ")
    return fstring
#Runs the Emulator, socket, Timer and Tracker
commands = [
back_insert("/usr/bin/snes9x-gtk"),
back_insert("/usr/share/OpenTracker/OpenTracker"),
back_insert("/usr/bin/QUsb2Snes"),
back_insert("/home/user/LibreSplit/libresplit")
]
procs = [Popen(i) for i in commands]
for p in procs:
    p.wait()
Here is a complete guide to installing MySQL on a Red Hat server (RHEL, CentOS, AlmaLinux, or Rocky Linux) and creating **two separate MySQL instances** on the same server.
---
## 🛠️ Goal
- Install **MySQL Server**
- Create **two independent MySQL instances**
  - Instance 1: port `3306`
  - Instance 2: port `3307`
- Each instance will have:
  - Its own data directory
  - Its own configuration
  - Its own systemd service
---
## 🔧 Step 1: Install MySQL Server
### 1. Add the official MySQL repository
```bash
sudo rpm -Uvh https://dev.mysql.com/get/mysql80-community-release-el9-7.noarch.rpm
```
> Replace `el9` with your RHEL version (`el7`, `el8`, etc.)
### 2. Install MySQL Server
```bash
sudo dnf install mysql-server
```
---
## ⚙️ Step 2: Start and enable the default instance
```bash
sudo systemctl enable mysqld
sudo systemctl start mysqld
```
### Retrieve the temporary root password
```bash
sudo grep 'temporary password' /var/log/mysqld.log
```
Secure the installation:
```bash
sudo mysql_secure_installation
```
---
## 📁 Step 3: Prepare the second instance
### 1. Create a new data directory
```bash
sudo mkdir /var/lib/mysql2
sudo chown -R mysql:mysql /var/lib/mysql2
```
### 2. Initialize the database for the second instance
```bash
sudo mysqld --initialize --user=mysql --datadir=/var/lib/mysql2
```
> ✅ Save the generated password shown in the logs:
```bash
sudo cat /var/log/mysqld.log | grep "A temporary password"
```
---
## 📄 Step 4: Create a custom configuration file for the second instance
```bash
sudo nano /etc/my-2.cnf
```
Paste in this configuration:
```ini
[client]
port = 3307
socket = /var/lib/mysql2/mysql.sock
[mysqld]
port = 3307
socket = /var/lib/mysql2/mysql.sock
datadir = /var/lib/mysql2
pid-file = /var/lib/mysql2/mysqld.pid
server-id = 2
log-error = /var/log/mysqld2.log
```
Save and close.
### Create the log file
```bash
sudo touch /var/log/mysqld2.log
sudo chown mysql:mysql /var/log/mysqld2.log
```
---
## 🔄 Step 5: Create a systemd service for the second instance
```bash
sudo nano /etc/systemd/system/mysqld2.service
```
Paste in this content:
```ini
[Unit]
Description=MySQL Second Instance
After=network.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/bin/mysqld --defaults-file=/etc/my-2.cnf --basedir=/usr --plugin-dir=/usr/lib64/mysql/plugin
ExecStop=/bin/kill -SIGTERM $MAINPID
Restart=always
PrivateTmp=false
[Install]
WantedBy=multi-user.target
```
Reload systemd:
```bash
sudo systemctl daemon-reexec
sudo systemctl daemon-reload
```
Enable and start the service:
```bash
sudo systemctl enable mysqld2
sudo systemctl start mysqld2
```
Check the status:
```bash
sudo systemctl status mysqld2
```
---
## 🔐 Step 6: Secure the second instance
Connect to the second instance with the temporary password:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
Run these SQL commands to change the password:
```sql
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword';
FLUSH PRIVILEGES;
exit
```
---
## 🧪 Step 7: Test both instances
### Check the ports in use:
```bash
ss -tuln | grep -E '3306|3307'
```
### Connect to each instance:
Instance 1:
```bash
mysql -u root -p
```
Instance 2:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
---
## 📌 Summary of the two instances
| Instance | Port | Config File | Data | Systemd Service | PID File | Log File |
|---------|------|-----------------|-------------------|-----------------|------------------------------|-----------------------|
| Default | 3306 | `/etc/my.cnf` | `/var/lib/mysql` | `mysqld` | `/var/run/mysqld/mysqld.pid` | `/var/log/mysqld.log` |
| Second | 3307 | `/etc/my-2.cnf` | `/var/lib/mysql2` | `mysqld2` | `/var/lib/mysql2/mysqld.pid` | `/var/log/mysqld2.log` |
---
## ✅ You're done!
You now have **two independent MySQL instances** running on the same Red Hat server.
Each instance can be managed separately via its own commands:
```bash
sudo systemctl start/stop/restart mysqld
sudo systemctl start/stop/restart mysqld2
```
Use the `format_source_path()` function on your builder:
env_logger::builder()
.format_source_path(true)
.init();
The logs will look like:
[2025-06-03T20:06:14Z ERROR path/to/file.rs:84 project::module] Log message
Use the other `format_*` methods to further customize the look of your logs.
After lots of testing I found out the difference was Apache (1st server) vs. LiteSpeed (2nd server). The way to find it was: `<!--#echo var="SERVER_SOFTWARE" -->`
How can I get it to fit inside an object that is not spanning the whole screen width?
The issue was that I was using `END` when I should have been using `End`.
Note for new TI-84 programmers: if you include extra whitespace (other than newlines) or include a syntax error, you won't be warned; your program will just poop out.
`props.data` should work in your code snippet to get the data of the row. The `cellRenderer` receives props of type `CustomCellRendererProps`, and this is documented in the AG Grid docs.
Thank you very much, man.
I obtain a total of 295,241 calls per second for the CIE ΔE2000 function in SQL. Both C99 and SQL (MariaDB and PostgreSQL) versions are available here.
Adding my 2¢ because everything else here seems overly complex to me (with Python 3.13 typing):
def recursive_subclasses[T](cls: type[T]) -> set[type[T]]:
    """
    Recursively finds all subclasses of a given class.
    """
    return set.union({cls}, *map(recursive_subclasses, cls.__subclasses__()))
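For example (note that the result includes the class itself):

```python
class Base: ...
class Child(Base): ...
class GrandChild(Child): ...

assert recursive_subclasses(Base) == {Base, Child, GrandChild}
```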
Matthias this was exactly what I needed! If you run the service manager and point it to the correct version of jvm.dll you don't need to worry about if your JAVA_HOME is correct. Since I was using 6.0.0.0 I needed to use Java 8 when Java 17 was already installed and set as JAVA_HOME. I opened this up, pointed to the java 8 JDK jvm.dll and it started right up afterwards.
For me it was the Device Simulator window; apparently, to avoid conflicts it disables some inputs.
https://docs.unity3d.com/Packages/com.unity.inputsystem%401.4/manual/Debugging.html?#device-simulator
When Device Simulator window is in use, mouse and pen inputs on the simulated device screen are turned into touchscreen inputs. Device Simulator uses its own touchscreen device, which it creates and destroys together with the Device Simulator window.
To prevent conflicts between simulated touchscreen inputs and native mouse and pen inputs, Device Simulator disables all native mouse and pen devices.
Closing it resolved my issue. (For cross-platform development I am using both touch and mouse inputs.)
You need to make sure that you run this as a read-only transaction.
I was blocked with 403 as well. I found a fix using Selenium for Python instead of `urlopen`.
Fix on my fork here:
https://github.com/Rolzad73/UnrealMarketplaceVaultExtractor
Can't you just use Substring, or create an extension method?
https://dotnetfiddle.net/RlDOuh
public static class StringExtensions
{
public static string Slice(this string source, int start, int end) => source.Substring(start, end - start);
}
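Usage:

```csharp
var s = "hello world";
Console.WriteLine(s.Slice(0, 5));  // hello
Console.WriteLine(s.Slice(6, 11)); // world
```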
Also keep in mind that even if auto update is on, it won't update until it considers the version stable:
To ensure the stability of self-hosted integration runtime, we release a new version each month and push an auto-update every three months, using a stable version from the preceding three months. So you may find that the autoupdated version is the previous version of the actual latest version. If you want to get the latest version, you can go to download center and do so manually. Additionally, auto-update to a new version is managed internally. You can't change it.
I don't think there is an easy way out, but this isn't particularly hard provided that you know Python well.
Here is what I was able to whip up in about 2 hours or so (in Jupyter; cells are separated with `# ========`):
import numpy as np
import matplotlib.pyplot as plt
# step 1: load image
img = plt.imread("./your_image.png")
img = img[..., 0] # get rid of rgb channels that are present in the sample image
plt.imshow(img)
# ======================
# step 2, identify relevant area, crop the image to it
img_bin = img > 0.05 # threshold image to get where the roi is
# btw, plt.imread gets pixel values to 0-1 range
where_y, where_x = np.where(img_bin)
xmi, xma = np.quantile(where_x, (0.01, 0.99)) # better for images noise
ymi, yma = np.quantile(where_y, (0.01, 0.99))
# visualize the region of interest
plt.imshow(img_bin)
plt.gca().add_artist(plt.Rectangle((xmi, ymi), xma-xmi, yma-ymi, facecolor='none', edgecolor='red'))
img2 = img[int(ymi):int(yma), int(xmi):int(xma)].copy() # crop the image
# ========================
# step 3: find some sort of starting point for the algorithm
ci = img2.ravel().argmax() # get brightest point
width = img2.shape[1]
cy, cx = ci//width, ci%width
plt.imshow(img2)
plt.plot([cx, ], [cy, ], marker="o", color='red', markerfacecolor="none")
def get_line_ends(cx, cy, len_pixels, rads):
    l = len_pixels/2
    cos = np.cos(rads)
    sin = np.sin(rads)
    y0 = cy-l*sin
    y1 = cy+l*sin
    x0 = cx-l*cos
    x1 = cx+l*cos
    return x0, x1, y0, y1
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
# notice that because y axis is inverted, line will be rotating clockwise, instead of counter-clockwise
print(x0, x1, y0, y1)
plt.plot([x0, x1] , [y0, y1], c='red')
# ===========================
# step 4. line sampling prototype
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
plt.imshow(img2)
print(x0, x1, y0, y1)
# plt.plot([x0, x1] , [y0, y1], c='red')
xs = np.linspace(x0, x1, 100).astype(int)
ys = np.linspace(y0, y1, 100).astype(int)
plt.plot(xs, ys, c="red", ls="none", marker="s", markersize=1)
plt.xlim(x0-5, x1+5)
plt.ylim(y0+5, y1-5) # y is still inverted
# ===============================
# step 5 sample pixels along the line at a bunch of angles, and find correct angle,
# to find direction of your lines
def sample_coordinates_along_a_line(img, cx, cy, len_pixels, rads): # same variable names
    x0, x1, y0, y1 = get_line_ends(cx, cy, len_pixels, rads)
    xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
    ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
    return img[ys, xs]
rs = np.linspace(-np.pi, 0, 100)
ms = np.empty_like(rs)
for i, r in enumerate(rs):
    sample = sample_coordinates_along_a_line(img2, cx, cy, 100, r)
    ms[i] = sample.mean()
r_est = r_estimated = rs[ms.argmax()] # will be nearly identical to our guess, so i don't plot it
plt.plot(rs, ms)
plt.axvline(-np.pi/11, c='red') # our guess, should match a maximum
# =================================
# step 6: sample along perpendicular direction, to identify lines
r_90 = r_est + np.pi/2 # perpendicular,
# since r_est is between -pi and 0, r_90 is always between -pi/2, pi/2
def get_line_in_a_box(cx, cy, r, xmi, xma, ymi, yma):
    "get line that is inside of rectangular box, that goes through point (cx, cy), at angle r"
    is_steep = np.abs(r) > np.pi/4 # if angle is > 45 deg, then line grows faster vertically
    if is_steep:
        y0 = ymi; y1 = yma
        x0 = cx - cy/np.tan(r)
        x1 = cx+(yma-cy)/np.tan(r)
    else:
        x0 = xmi; x1 = xma
        y0 = cy - cx*np.tan(r)
        y1 = cy + (xma-cx)*np.tan(r)
    return x0, x1, y0, y1
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # along lines
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # perpendicular to lines
# ================================
# now we figure out where peaks are from sampling along perpendicular
plt.figure()
def sample_coordinates_along_a_line2(img, x0, x1, y0, y1):
    len_pixels = np.sqrt((x1-x0)**2+(y1-y0)**2)
    print(len_pixels)
    xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
    ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
    return img[ys, xs]
plt.figure()
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
sampl = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
trend = np.convolve(sampl, [1/100]*100, mode="same")
sampl_detrended = sampl-trend
plt.plot(sampl_detrended)
# ==============================
# step 7: find maxima in detrended sample
# I've found this function somewhere in my processing scripts; it's more generic, but a good one
def bool_to_regions2(xs: np.ndarray[bool], which=None) -> np.ndarray[int]:
    """return (start, end) pairs of each continuous region as (n, 2) int array (end not inclusive, as usual). example
    ```
    a = np.array([1,0,0,0,0,1,1,1,0,1,1,], dtype=bool)
    for b, e in bool_to_regions2(a):
        print(b, e)
        print("".join(np.where(a,'1','0')))
        print(' '*b + '^'+' '*(e-b-1)+'^')
    ```
    set which to True or False to return only regions for these values
    """
    heads = np.diff(xs, prepend=~xs[0])
    tails = np.diff(xs, append=~xs[-1])
    nh = abs(heads.sum()) # if array is 0 and 1 instead of proper boolean
    nt = abs(tails.sum())
    assert nh == nt, f"A: function `bool_to_regions` {nh=}, {nt=}, nh!=nt"
    r = np.stack([np.where(heads)[0], np.where(tails)[0]+1], axis=-1)
    if which is None: return r
    elif which is True: return r[::2] if bool(xs[0]) else r[1::2]
    elif which is False: return r[::2] if not xs[0] else r[1::2]
    else: raise Exception("`which` should be True, False or None")
plt.plot(sampl_detrended)
maxima = bool_to_regions2(sampl_detrended>0.05, which=True).mean(axis=-1)
# maxima are positions of your lines along a perpendicular to them
for m in maxima:
    plt.axvline(m, color='red', alpha=0.5)
# =======================================
# step 8: project maxima back to image space, by using linear interpolation along the line
plt.imshow(img2)
# remember, x0, y0, x1, y1 are from the perpendicular
line_x_coords = x0 + (x1-x0) * maxima / len(sampl_detrended)
line_y_coords = y0 + (y1-y0) * maxima / len(sampl_detrended)
plt.plot(line_x_coords, line_y_coords, ls="none", c="red", marker="+")
#================================================
# step 9: sample all lines
line_sampls = []
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1,], [y0, y1], c="red") # perpendicular
for line_number in range(len(line_x_coords)):
    lcx = line_x_coords[line_number]
    lcy = line_y_coords[line_number]
    x0, x1, y0, y1 = get_line_in_a_box(lcx, lcy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
    if x0 < 0 or x1 > img2.shape[1]-1 or y0 > img2.shape[0]-1 or y1 < 0:
        continue # todo: clip line instead of skipping it
    else:
        plt.plot([x0, x1,], [y0, y1], c="red") # should cover the lines in the img2
        sample = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
        line_sampls.append(sample)
line_sampls = np.stack(line_sampls)
# ===============================================
# this is how intensity samples look along the lines
# it should be easy to compute center of each line
for sampl in line_sampls:
    plt.plot(sampl)
Note: Jupyter starts a new plot in each cell. I'm not sure how to get nice visual feedback in a plain .py script; maybe call plt.show() at the end of each cell?
First check if Neocities allows making a custom 404 page.
If yes, find its location and add HTML/JS content to it.
As for a "very detailed explanation" of JavaScript, please read some documentation; it is impossible to teach it here.
You can find articles on StackOverflow, for example Random image generation in Javascript.
Naturally, right after I make a StackOverflow question I figure it out!
The answer is to call `.persist()` after the `.reply` call:
agent
.get('https://stackoverflow.com')
.intercept({
path: '/notarealendpoint',
method: 'GET',
})
.reply(200, "foo")
// this is new
.persist();
See https://github.com/nodejs/undici/blob/main/types/mock-interceptor.d.ts#L10.
There's also a `.times` function if you only want the mock to persist N times.
Seeing as a Set IS-A Collection, you get a Collection when you get the Set - you get both, and you can decide which one you want to use. What is the downside of that?
If you'd like to avoid `geoplot` or are experiencing crashing, you should be able to get the desired effect using `seaborn` and `geopandas` (though not with just `geopandas`, to my knowledge).
With data like this:
Polygon GeoDataFrame:
id geometry
0 polygon_1 POLYGON ((-101.7 21.1, -101.6 21.1, -101.6 21....
Points GeoDataFrame:
PointID geometry
0 0 POINT (-101.61326 21.14453)
1 1 POINT (-101.66483 21.18465)
2 2 POINT (-101.61764 21.11646)
3 3 POINT (-101.65355 21.12132)
4 4 POINT (-101.68183 21.17071)
5 5 POINT (-101.61088 21.14948)
6 6 POINT (-101.66336 21.17007)
7 7 POINT (-101.6774 21.14027)
8 8 POINT (-101.66757 21.13169)
9 9 POINT (-101.66333 21.12997)
in any crs:
fig, ax = plt.subplots(figsize=(3, 3))
assert polygon_gdf.crs == points_gdf.crs
print("crs is", polygon_gdf.crs.coordinate_system)
polygon_gdf.plot(ax=ax, facecolor='none')
points_gdf.plot(ax=ax)
ax.set_axis_off()
crs is ellipsoidal
you can just re-project it to a cartesian coordinate reference system, like Albers Equal Area. (Note: assigning to `.crs` only relabels the CRS; `to_crs` actually re-projects the coordinates.)
polygon_gdf = polygon_gdf.to_crs('EPSG:9822')
points_gdf = points_gdf.to_crs('EPSG:9822')
print("crs is", polygon_gdf.crs.coordinate_system)
crs is cartesian
Then you can plot as follows:
import seaborn as sns
fig, ax = plt.subplots(figsize=(3, 3))
sns.kdeplot(
x=points_gdf.geometry.x,
y=points_gdf.geometry.y,
fill=False,
cmap="viridis",
bw_adjust=0.5,
thresh=0.05,
ax=ax,
)
polygon_gdf.plot(ax=ax, facecolor="none")
ax.set(xlabel="", ylabel="")
fig.tight_layout()
I have had this issue for a long time. Any solution?
Open the assistant and, right after Class, change the prefix of the view controller that opens (e.g. HomeView) to the name of the view controller you expect to open (e.g. SignupView). Then open said view controller in a new window; it will show an error in the class name (as two view controllers now have the same name). Now change the class back to what it should be. This will auto-correct the assistant, which should then show up as you expected.
For anyone curious, you can now just set the MaximumHeight, MinimumHeight, MaximumWidth, and MinimumWidth to a static value so that they won't change. something like this:
protected override Window CreateWindow(IActivationState? activationState)
{
const int newWidth = 1280;
const int newHeight = 640;
var window = new Window(new AppShell())
{
Width = newWidth,
Height = newHeight
};
window.MaximumHeight = newHeight;
window.MinimumHeight = newHeight;
window.MaximumWidth = newWidth;
window.MinimumWidth = newWidth;
return window;
}
Your XPath is not recognized by the browser. Try the `normalize-space()` function:
//a[normalize-space()='Cookie Policy']
OR
Change strategy and use link text (Java below):
driver.findElement(By.linkText("Cookie Policy"));
Does anybody know of a way to disable live chat via the YouTube Data API? I have a Node.js script working pretty well for livestream scheduling, except the live chat is also enabled. I read the YouTube API documentation and searched online; there seems to be NO way of disabling the live chat via the API, except to manually disable it before it goes live. This kind of defeats the purpose of scheduling the live stream using the API.
Any suggestion would be very much appreciated.
For those wondering what was (most likely) the actual "solution" - the certificate is generated for a single, specific McMaster-Carr account and you have to use that account credentials in the POST body along with the certificate to be able to log in.
It seems it is not advisable to register the name of your gateway filter in `application.properties`. According to the respective Spring developer guide, "this is not a supported naming convention" and should be avoided since it "may be removed in future releases". Here is the official advice:
Naming Custom Filters And References In Configuration
Custom filters class names should end in `GatewayFilterFactory`. For example, to reference a filter named `Something` in configuration files, the filter must be in a class named `SomethingGatewayFilterFactory`.
I am having trouble making my gateway implementation work, too. In my case the Spring Cloud Starter Gateway (v4.2.3) somehow expects a CSRF token. Whenever I hit the gateway with a POST request it answers 403 Forbidden: An expected CSRF token cannot be found.
I haven't figured out why this happens, since I disabled Spring Security in all downstream microservices. Here is what I got so far:
@Slf4j
@Component
public class AuthGatewayFilterFactory extends AbstractGatewayFilterFactory<AuthGatewayFilterFactory.Config> {
private final RouteValidator routeValidator;
private final JwtUtil jwtUtil;
public AuthGatewayFilterFactory(RouteValidator routeValidator, JwtUtil jwtUtil) {
super(Config.class);
this.routeValidator = routeValidator;
this.jwtUtil = jwtUtil;
}
@Override
public GatewayFilter apply(Config config) {
return ((exchange, chain) -> {
ServerHttpRequest request = exchange.getRequest();
if (routeValidator.isSecured.test(request)) {
String header = request.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
if (header == null || !header.trim().startsWith("Bearer ")) {
log.warn("Invalid authorization header");
exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
return exchange.getResponse().setComplete();
}
try {
jwtUtil.validateJwtToken(header);
} catch (JwtTokenMalformentException | JwtTokenMissingException e) {
log.error("Error during token validation: {}", e.getMessage());
exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
return exchange.getResponse().setComplete();
}
String token = jwtUtil.getToken(header);
Claims claims = jwtUtil.extractAllClaims(token);
String csrfToken = request.getHeaders().getFirst("X-CSRF-TOKEN");
ServerHttpRequest mutatedRequest = exchange.getRequest().mutate()
.header("X-User-Id", claims.getSubject())
.header("X-User-Username", claims.get("username", String.class))
.header("X-User-Email", claims.get("email", String.class))
.header("X-User-Roles", claims.get("roles", String.class))
.header("X-CSRF-TOKEN", csrfToken)
.build();
return chain.filter(exchange.mutate().request(mutatedRequest).build());
}
// Non-secured route — pass through unchanged
return chain.filter(exchange);
});
}
public static class Config {
private boolean validateCsrf = false;
public boolean isValidateCsrf() {
return validateCsrf;
}
public void setValidateCsrf(boolean validateCsrf) {
this.validateCsrf = validateCsrf;
}
}
}
The issue was how I was copying my file structure in this part:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY package.json eslint.config.mjs tsconfig.json ./
COPY . ./frontend
RUN npm run build frontend
I was changing the application structure to try and help with caching, but this wasn't actually necessary and was just causing too many problems. Updating it to this:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npm run build
and changing the rest of the Dockerfile accordingly resolved the issue.
I also recommend re-orienting the image so that the chirp grating is either horizontal or vertical. (For the sample image I get a rotation angle of about 16.1 degrees to obtain horizontal grid lines.)
Now project the image either horizontally or vertically. What you get is a plot like this:
From the local maxima you get the coordinates of the gridlines, i.e. you can position the line selection appropriately and do the desired measurements.
Please tell us if this is of any help for you or if you need further help.
I think David_sd has a great answer; I'll add how you might adapt his approach to be smoother, or else how you could potentially use `plotly`.
import numpy as np
from matplotlib import pyplot as plt
# Prepare data
n = 100
cmap = plt.get_cmap("bwr")
theta = np.linspace(-4 * np.pi, 4 * np.pi, n)
z = np.linspace(-2, 2, n)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
T = (2 * np.random.rand(n) - 1) # Values in [-1, 1]
If you don't need the colormap, you can get a 3D curve very easily. It's a shame that you can't pass a `colormap` argument into the plotting function as you can with the scatterplot/surface plot options you mentioned.
ax = plt.figure().add_subplot(projection='3d')
ax.plot(x, y, z)
To apply a colormap, I'd use the same approach as David_sd (with the limitations you identified).
# Build segments for Line3DCollection
points = np.array([x, y, z]).T.reshape(-1, 1, 3)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
where points is a (100, 1, 3) representation of the points in space:
array([
[[ x1, y1, z1 ]],
[[ x2, y2, z2 ]],
[[ x3, y3, z3 ]],
...
])
and segments is a (99, 2, 3) representation of the point-to-point connections:
array([
[[ x1, y1, z1 ],
[ x2, y2, z2 ]],
[[ x2, y2, z2 ],
[ x3, y3, z3 ]],
...
])
then you run the following, using `T[:-1]` to match the shape of `segments`.
from mpl_toolkits.mplot3d.art3d import Line3DCollection
from matplotlib.colors import Normalize
norm = Normalize(vmin=T.min(), vmax=T.max())
colors = cmap(norm(T[:-1])) # Use T[:-1] to match number of segments
lc = Line3DCollection(segments, colors=colors, linewidth=2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(lc)
plt.show()
This of course doesn't have the smooth gradient you want. One way to approximate that could be to pump up `n` -- e.g. here I set `n=1000`. If that's not satisfying, I might switch to `plotly`, which gets a pretty good gradient even with `n=100`:
import plotly.graph_objects as go
# Create 3D line plot with color
fig = go.Figure(data=go.Scatter3d(
x=x,
y=y,
z=z,
mode='lines',
line=dict(
color=T,
colorscale='RdBu',
cmin=-1,
cmax=1,
width=6
)
))
fig.show()
I got the same issue because two jobs were reading a pickled file at the same time, and the file somehow got corrupted. I had to recreate the pickle file.
It turns out `setenv LD_LIBRARY_PATH /a/b/c/d/<gtk_version>/lib` fixed this problem.
There is also one more `const` that can be added:
constexpr const int* const DoubleImmutable() const {
    return &mutable_.x;
}
This is a `const` pointer to `const int`.
I find CASE / WHEN easier to construct, as in @john-rotenstein's answer, but Redshift does have a built-in PIVOT function:
SELECT
a,
b,
id
FROM
(SELECT * FROM temp)
PIVOT
(MAX(value) FOR source IN ('A', 'B'))
ORDER BY
id
;
+----+----+---+
|a |b |id |
+----+----+---+
|C123|V123|111|
|C456|V456|222|
+----+----+---+
So I ended up recompiling ktfmt core with kotlin compiler (org.jetbrains.kotlin:kotlin-compiler) rather than kotlin compiler embedded (org.jetbrains.kotlin:kotlin-compiler-embeddable) with required changes. And the conflict goes away. Although I would rather not maintain my own version of ktfmt, I guess this will be my solution.
Make sure you add the CLASSPATH in the `~/.bashrc` file and source the file after editing. I had the same issue and it got resolved after adding the CLASSPATH. The value should be the directory path where the connector is installed.
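For example (a sketch; the path is a placeholder for wherever your connector actually lives):

```bash
# in ~/.bashrc
export CLASSPATH=/usr/share/java/my-connector
# then reload it
source ~/.bashrc
```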
For anyone reaching this thread: instead of doing `Results.Ok` or simply returning an object from the minimal API endpoint, use `Results.Content`:
// before
return result;
// after
var json = Newtonsoft.Json.JsonConvert.SerializeObject(result);
return Results.Content(json, "application/json");
IBConsole is not isql; isql is the command-line tool. While a lot of the "show Xxx" commands do work, that one has not been implemented at this time. I wasn't able to reproduce the AV in the current version; it just doesn't display anything.
The next release will have this implemented.
This is great and works perfectly, but could you (somehow) filter based on other columns as well? I would want to do this based on three columns (non-adjacent)... I have been trying to nest it but it doesn't seem to be a good approach.
You can use `CMachine.GetNCStatus()` and check for `NCStatusEnum.Run` while the machine is in Auto mode, and use `CycleComplete` to check for the end of the cycle or the cycle time. The time interval between the two is the cycle time that completes the part program without error.
If you're using Get.snackbar, I think it's better and clearer to use this:
if (!Get.isSnackbarOpen) {
Get.snackbar(
title,
message,
snackPosition: SnackPosition.BOTTOM,
// You can add more customization here
);
}
I have a similar problem with boost 1.88.0 installed from the binary release file for Windows available here: https://archives.boost.io/release/1.88.0/binaries/boost_1_88_0-bin-msvc-all-32-64.7z
The above archive has a somewhat strange folder structure that doesn't seem to work very well with CMake. CMake `find_package` in CONFIG mode needs to find the file `BoostConfig.cmake`, which is contained in several different directories; you need to point it to the right one for you. For example, when using VS2022 for 64-bit x86 builds, you would use `lib64-msvc-14.3/cmake/Boost-1.88.0/BoostConfig.cmake`.
I found it easiest to have the environment variable `Boost_ROOT` set to this directory rather than to the actual root directory of boost. But I consider that a hack. I think the boost developers ought to make this easier, perhaps by revising the folder structure to better match the search procedure used by CMake.
To calculate how much space a text will occupy, you can use:
String word = "My word";
Font font = new Font( "Arial", Font.PLAIN, 14 );
Canvas canvas = new Canvas();
FontMetrics fm = canvas.getFontMetrics( font );
int width = fm.stringWidth( word );
Now let's see how to apply it to a concrete case:
public class TextFormat {
FontMetrics fm;
void initialize() {
Font font = new Font( "Arial", Font.PLAIN, 14 );
Canvas canvas = new Canvas();
fm = canvas.getFontMetrics( font );
}
// this method is very simple: it creates an array of lines by splitting the
// original string, then iterates over that array creating an array of words,
// and on each iteration appends the return of the cut method to out.
String formatText( String initialInput, int maxWidth ) {
String[] lines = initialInput.split( "\n" );
String out = "";
for( int i = 0; i < lines.length; i ++ ) {
String[] words = lines[ i ].split( " " );
out += cut( words, 0, maxWidth );
}
return out;
}
// this method makes use of recursion: it iterates over the array of words, adding
// to aux the content of the variable space (in the first iteration "" and in the
// successive ones " "), concatenated with the current word, then checks whether
// the rendered width exceeds the limit. In that case it returns the trimmed
// value of aux (removing the last added word) plus "\n" and the return of cut,
// passing as parameters the array of words, the value of i (so that it continues
// iterating from the first word not yet added) and maxWidth. For the last line,
// it returns the possible remainder.
String cut( String[] words, int init, int maxWidth ) {
String aux = "",
space = "";
for( int i = init; i < words.length; i++ ) {
aux += space + words[ i ];
int width = fm.stringWidth( aux );
if( width > maxWidth ) {
return aux.substring( 0, aux.length() - words[ i ].length() - 1 )
+ "\n" + cut( words, i, maxWidth );
}
if( space.isEmpty( )) space = " ";
}
return aux + "\n";
}
}
class Main {
String text = "Hi! Please provide code you have tried this with."
+ " Please don't use StackOverflow as a code delivery service.\n"
+ "If you provide a minimal reproducible example we can help debug. "
+ "See how to ask";
void init() {
TextFormat tf = new TextFormat();
tf.initialize();
tf.formatText( text, 200 );
}
public static void main( String x[] ) {
new Main().init();
}
}
Out:
Hi👋🏻! Please provide code you have tried this with. Please don't use StackOverflow as a code delivery service. If you provide a minimal reproducible example we can help debug. See how to ask
I encountered the same problem after the Windows 24H2 update; my problem was solved by removing ASP support from the Windows components install/remove menu and reinstalling it. (I tried it for localhost; I don't know about the server side.)
That's a bug. Thanks for finding and reporting it.
Hey, did you find the answer to your question? I'm facing this issue now in 2025. Please help!
The short answer is no — you can’t run Excel VBA macros directly from Node.js, and libraries like xlsx won’t help with that either.
The main reason? Two different systems/mechanics
VBA is Excel’s territory: Macros are interpreted and executed by Excel itself. There’s no standalone VBA runtime you can tap into from Node.js.
Node.js lives outside that world: It can read and write Excel files at the data level, but it can’t talk to Excel as a running application or trigger internal macro execution.
An npm package like `xlsx` doesn't bridge that gap: such packages are great for manipulating spreadsheets (cells, styles, structure) and can even preserve embedded VBA code (the raw VBA source, should it exist), but they don't run it. That part stays inert unless Excel opens the file and runs it, which cannot be triggered by Node.js.
You would need something that can actually launch Excel, like PowerShell. I hope that helps.
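If that route works for you, here's a minimal sketch (file name and macro name are placeholders), driving Excel through COM:

```powershell
# run-macro.ps1: launch Excel via COM, open the workbook, run a macro, clean up
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$wb = $excel.Workbooks.Open("C:\path\to\book.xlsm")
$excel.Run("MyMacro")   # runs Sub MyMacro() defined inside the workbook
$wb.Save()
$wb.Close()
$excel.Quit()
```

You could then invoke this from Node.js with something like `child_process.exec('powershell -File run-macro.ps1')`.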
As best I can tell the state being reset is just the error and data properties. It doesn't say more (like anything about caching behavior or whatever else). But whether clearing these properties will be useful depends on your app, not react-query's internal state. i.e. if you're using either property being non-nullish as an indicator that some event has occurred - like using the properties in a dependency array - you might want to reset them before firing another mutation.
I am going to accept the answer from @EdMorton as best here; the main reason is that it allows me to:
deserial() {
declare function_to_call="$1"
declare -n assoc_array_ref="$2"
while IFS= read -r -d '' line; do
assoc_array_ref+=( $line )
done < <($function_to_call)
}
and then simply call `deserial func Y` - this populates the associative array Y with what was returned from the function `func`, universally.
Use the Kotlin wrapper method — it's clean, safe, and doesn't require changes from the library provider.
Alternative Hack (If Kotlin wrapper isn't an option)
You could technically use reflection in Java to instantiate and access the class, but this approach is messy, error-prone, and discouraged unless there's absolutely no other option. Reflection bypasses compile-time checks, meaning you lose type safety, IDE support, and readability — which makes the code much harder to debug, maintain, and scale. It's more of a last-resort workaround than a proper solution.
Actually yes :) I know it is late, but for others interested in this, one can write:
SELECT "{{COLUMN_NAME}}"
FROM MY_TABLE
WHERE "{{COLUMN_NAME}}" > 0
where COLUMN_NAME is a text parameter (or of another type). This will work as long as the provided value is a column name.
Relevant example for web using Google's Material Icons font:
https://www.w3schools.com/icons/tryit.asp?filename=tryicons_google-voicemail
Alternatively (not great), there's the tape icon: 🖭 / U+1F5AD / Tape Cartridge
I got this error when I tried to add all subnets to the endpoint via Pulumi (IaC).
Apparently, you can't create more than one interface per AZ.
If you try it with the Console UI, it doesn't give you the opportunity.
I had the same problem; I couldn't add or find my bank until I selected another country. In my case I couldn't find ING in the Netherlands, but after I selected Belgium and searched for ING, I went back to searching for ING in the Netherlands and was able to find it and add my bank account (with the same info that I tried when it didn't work).
I was wondering if the use of trim function in conditional formatting would mitigate its impact on sheet performance.
I could take a look. Please post the ids of the workitems for which you didn't receive notification. Note that we give up quickly if the callback URL is unavailable. We retry 5, 15, 30 seconds later and then give up.
1- Close Visual Studio completely and delete these cache folders:
%LocalAppData%\Microsoft\VisualStudio\<Version>\ComponentModelCache
%LocalAppData%\Microsoft\SQL Server Data Tools
2- Replace `<Version>` with your VS version (e.g., `17.0` for VS 2022)
3- Reopen VS and retry publishing
I think I found the answer to my question. If my goal is to create node embeddings using the random walk algorithm, I could potentially store only the node_ids in the projected graph. Then, while creating the random walk for each start node, I would collect all the node_ids and make a Cypher query to retrieve their properties and labels separately. With this approach I reduce the memory footprint, but lose time by making one additional query per walk. Would this be a viable solution?
This should be fixed in update 7. https://blogs.embarcadero.com/embarcadero-interbase-2020-update-7-released/
It seems to be a threading issue with your polling loop. All API objects must be created by the main thread or by threads created by the main thread. Accessing API objects from other system threads, such as a worker thread or system timer, might produce undesirable results.
Just a note for others: setting `UIPrefersShowingLanguageSettings` and `CFBundleDevelopmentRegion` in the Info.plist doesn't reliably show the "App Language" option in Settings on iOS 16. This behavior seems more consistent on iOS 17 and later.