A simple way to enable the debug log is to add the following to wp-config.php, located in the WordPress root directory:
define('WP_DEBUG', true);
define('WP_DEBUG_LOG', true);
If you also want errors shown on screen, add:
define('WP_DEBUG_DISPLAY', true);
Did the answer above work? I'm trying to do the same thing.
The issue is that your Python version does not support TensorFlow 2.5.
TensorFlow 2.5 requires Python 3.6–3.9. Please ensure that your Python version is compatible with TensorFlow 2.5.
You can check your Python version using:
python --version
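If you prefer checking from within Python, a minimal sketch:
```python
import sys

# TensorFlow wheels are built per interpreter version; check yours first.
print(sys.version_info[:2])
```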
I recently came across the same issue regarding the API rate limits. You can contact the customer success manager responsible for your account on Ariba and request an increase in the rate limits.
I got the limits increased from 270/day to 4,800/day for the Document Management API.
Several points:
Install the doc itself:
$ mkdir raku/ && cd raku/
$ git clone [email protected]:Raku/doc.git
The name of the document should match the document names under the raku/doc directory in step 1.
Some examples:
$ alias 6d='RAKUDOC=raku/doc PAGER=less rakudoc -D'
$ 6d Type/Map
$ 6d .push
You need to either disable SELinux altogether (in the /etc/selinux/config file, change "SELINUX=enforcing" to "SELINUX=permissive" and reboot) or disable it specifically for HTTP (the semanage permissive -a httpd_t command).
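For reference, the two options as shell commands (a sketch; the sed edit assumes the default config layout, so review the file before relying on it):
```bash
# Option 1: switch SELinux to permissive mode persistently, then reboot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
sudo reboot

# Option 2: make only the httpd domain permissive
sudo semanage permissive -a httpd_t
```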
Import expressMiddleware like this:
const { expressMiddleware } = require("@as-integrations/express5");
or
import { expressMiddleware } from "@as-integrations/express5";
<script>
$('#input_text_date').persianDatepicker({
calendar:{
persian: {
leapYearMode: 'astronomical'
}
}
});
</script>
The issue occurs because 1403 is a leap year in the Persian calendar. To fix this, you need to explicitly set leapYearMode: 'astronomical' in your configuration. The default setting (leapYearMode: 'algorithmic') uses a mathematical approximation that causes this one-day discrepancy for Persian leap years.
Use makeHidden:
return $collection->makeHidden(["password", "secret_key"]);
My approach was incorrect from the start. It had to be done with templates, dependency injection, and unique pointers for ownership (in my use case). The things I found useful were the comments and C++ conference talks about DI.
I feel there's only one way to reduce the size of your AAR: reduce the size of the resources and assets used in your library.
I was looking for Enum to Name/Value. Thanks to @Paul Rivera, the `char.IsLetter((char)i)` trick helped me get my result. Here is my code, in case somebody needs it:
System.Collections.IEnumerable EnumToNameValue<T>() where T : struct, Enum
{
    var values = Enum.GetValues<T>().Select(x => (Name: x.ToString(), Value: (int)(object)x));
    var isChar = values.All(x => char.IsLetter((char)x.Value));
    return values.Select(x => new { x.Name, Value = isChar ? (char)x.Value : (object)x.Value }).ToList();
}
In your interface you declared string | null together with a TypeScript function definition. There is no need to declare the function type; you just need to set the type of the value on the state, as you can see in the documentation:
https://react.dev/learn/typescript
const [enabled, setEnabled] = useState<boolean>(false);
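Applied to the string | null case from your interface, that looks like:
```typescript
const [value, setValue] = useState<string | null>(null);
```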
I decide whether to run the command line or the GUI based on the input parameters. Before running the command-line path, I call this function:
#ifdef _WIN32
# include <windows.h>
# include <fcntl.h>
# include <io.h>
#endif
void attachConsole()
{
#ifdef _WIN32
if (AttachConsole(ATTACH_PARENT_PROCESS))
{
FILE* fDummy;
freopen_s(&fDummy, "CONOUT$", "w", stdout);
freopen_s(&fDummy, "CONOUT$", "w", stderr);
freopen_s(&fDummy, "CONIN$", "r", stdin);
std::ios::sync_with_stdio();
}
#endif
}
But the console output is misbehaving: the program should exit immediately after executing the command-line logic, without any extra blank lines or repeated output, but it goes wrong!
Not in Packer directly, but a friend made canga.io, which adds layers to VM image generation. This would get you where you want, I think.
I faced this issue too. To support both PostgreSQL and SQL Server, I switched from JsonBinaryType to JsonType and removed the PostgreSQL-specific columnDefinition = "jsonb". By using JsonType and omitting database-specific definitions, the same entity worked seamlessly across both databases.
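For reference, a minimal sketch of such a mapping (assuming the Hypersistence Utils library, which provides JsonType; the entity and field names are illustrative):
```java
import io.hypersistence.utils.hibernate.type.json.JsonType;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.util.Map;
import org.hibernate.annotations.Type;

@Entity
public class OrderRecord {

    @Id
    private Long id;

    // No columnDefinition = "jsonb": JsonType resolves the proper JSON
    // column type for whichever dialect (PostgreSQL or SQL Server) is active.
    @Type(JsonType.class)
    @Column(name = "properties")
    private Map<String, Object> properties;
}
```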
The reason this happens is that your text field has limited space. You can put the text field into a SizedBox and set the box's height to make sure there is sufficient space for both the text field and the error text.
I just had this same issue. To resolve it, install mongodb v5.9.1:
npm install mongodb@5.9.1
yarn add mongodb@5.9.1
This instantly fixed the issue.
As an HTML developer who tests my websites often, I can say that this post lacks the information we need to guide you to the answer, although that is not entirely your fault. If you want to run static (HTML5, CSS3, and ES6 JavaScript) files from Visual Studio Code, I recommend downloading the Live Server extension from the Extensions tab of your IDE. Then all you have to do is click the "Go Live" button in the bottom right corner of your screen, and it should work just as intended, with live reloading. As a side note, please try to avoid using Microsoft Edge for development; it is a great browser for everyday life, but for development Firefox is recommended because of its powerful debugging tools, and I have found it much more consistent than Microsoft Edge for almost anything.
Finally found the solution, thank you guys:
Opened "intl.cpl" -> some special language settings I had never seen before.
There was a setting "Use Windows display language".
I changed the setting to "German" -> problem solved!
I still don't get it 100%; maybe somebody can explain it.
You need to add "exports files;" to your module-info.java file; then the issue will be fixed.
Try reviewing the formula, because in my case the old formula didn't work.
Old:
If(DataCardValue42.Text=Blank(),false,true)
New :
If(IsBlank(DataCardValue42.Text),false,true)
As @pskink pointed out in the comments, when an event is completely handled, no further emits are allowed. If you still want to emit a new state, create another event that emits the state you want and trigger it where you are currently emitting the new state; then the issue will go away!
After a lot of experimentation I was able to do this with an ADF pipeline, but I don't recommend it, since it is easy to miss fields in this approach and it works only if the schema is fixed. It basically works by bringing the nested field to the root, updating it, and then joining it with the rest of the data.
Step 1: Create two branches for the input data
Branch 1:
Select: Select properties.execution AS execution, OrderID
Select: Select all properties in execution: Select execution.item AS item, OrderID
Derived column: items = Array(item)
Construct execution object - Derived column with subcolumns item, items
Select execution, OrderID
Branch 2:
Join: Branch1, Branch2 on OrderID
Derived column: construct properties with subcolumns execution, and the other fields within properties
Select: finally select only the required fields and output
My fault. I had renamed the rootpath to invocation_path. Solved.
After running the code in Visual Studio Code, and even asking GitHub Copilot to confirm, I have concluded that your Lua program is working just as intended. I went through the following choices in an attempt to trigger the bug, but it worked perfectly for me. I believe this might be an issue with your text editor / IDE of choice. Great job for a first-time project; keep doing what you are doing, and for any further questions, just reply to this comment.
from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip, vfx, concatenate_videoclips, AudioFileClip
import os
# Load the original Fortnite clip
input_path = "/mnt/data/20250603_AltruisticPolishedBarracudaRaccAttack-k-SNrG5_2MfSJIMG_source.mp4"
clip = VideoFileClip(input_path)
# Shorten to the first 50 seconds max for Shorts
short_clip = clip.subclip(0, min(clip.duration, 50)).resize(height=1080) # Resize for vertical output
# Determine width after resizing to vertical
aspect_ratio = short_clip.w / short_clip.h
width = int(1080 * aspect_ratio)
# Create epic intro text
intro_text = TextClip("¡CLUTCH AÉREO EN FORTNITE! 🔥", fontsize=70, color='white', font="Arial-Bold", stroke_color='black', stroke_width=3)
intro_text = intro_text.set_position('center').set_duration(3).fadein(0.5).fadeout(0.5)
# Position intro text overlay on top of video
intro_overlay = CompositeVideoClip([short_clip.set_start(3), intro_text.set_start(0).set_position(('center', 'top'))], size=(width, 1080))
# Export path
output_path = "/mnt/data/fortnite_epic_clutch_edit.mp4"
intro_overlay.write_videofile(output_path, codec="libx264", audio_codec="aac", fps=30)
output_path
Hi, I'm trying to connect to my PostgreSQL instance on Azure, but after deploying I am getting this error, even though I have already installed the requirements there:
# Database clients
psycopg2-binary==2.9.10
asyncpg==0.30.0
requests
SQLAlchemy==2.0.41
pydantic==1.10.13
Exception while executing function: Functions.DbHealthCheck Result: Failure
Exception: ModuleNotFoundError: No module named 'asyncpg.protocol.protocol'
Stack: File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/dispatcher.py", line 674, in _handle__invocation_request
await self._run_async_func(fi_context, fi.func, args)
File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/dispatcher.py", line 1012, in _run_async_func
return await ExtensionManager.get_async_invocation_wrapper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/extension.py", line 143, in get_async_invocation_wrapper
result = await function(**args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/function_app.py", line 115, in db_health
engine = get_async_engine()
^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/function_app.py", line 94, in get_async_engine
return create_async_engine(connection_string, echo=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/.python_packages/lib/site-packages/sqlalchemy/ext/asyncio/engine.py", line 120, in create_async_engine
sync_engine = _create_engine(url, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 2, in create_engine
File "/home/site/wwwroot/.python_packages/lib/site-packages/sqlalchemy/util/deprecations.py", line 281, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/.python_packages/lib/site-packages/sqlalchemy/engine/create.py", line 602, in create_engine
dbapi = dbapi_meth(**dbapi_args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/.python_packages/lib/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 1100, in import_dbapi
return AsyncAdapt_asyncpg_dbapi(__import__("asyncpg"))
^^^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/.python_packages/lib/site-packages/asyncpg/__init__.py", line 9, in <module>
from .connection import connect, Connection # NOQA
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/site/wwwroot/.python_packages/lib/site-packages/asyncpg/connection.py", line 25, in <module>
from . import connect_utils
File "/home/site/wwwroot/.python_packages/lib/site-packages/asyncpg/connect_utils.py", line 30, in <module>
from . import protocol
File "/home/site/wwwroot/.python_packages/lib/site-packages/asyncpg/protocol/__init__.py", line 11, in <module>
from .protocol import Protocol, Record, NO_TIMEOUT, BUILTIN_TYPE_NAME_MAP
In my case, my code can run from VS Code only after I have run it from Xcode (it has to be run at least once from Xcode before running from VS Code). As a reminder, you have to run flutter run --release if you want to keep using the app after you quit flutter run.
Tenfour04's answer doesn't work. It says "Cannot resolve method 'toBigDecimal'"
Yes, sure, you can do that, but you need both hardware and software knowledge to access the delivery rider's location in real time.
The proposed setup in my question works perfectly. I realized I hadn't restarted the Caddy container for a while. When I checked the Caddyfile, it actually contained some lines from a previous attempt at getting fonts working:
@fonts {
path .woff *.woff2 *.ttf *.eot *.svg path_regexp \.(woff|woff2|ttf|eot|svg)$
}
handle @fonts {
header Cache-Control "public, max-age=31536000"
header Access-Control-Allow-Origin ""
file_server
}
Removing this and restarting the Caddy container with the Caddyfile I provided in the question worked.
It should be `--base-href /myapp/`,
not `—-base-href=/myapp/`.
Just for completeness, I'll add the trivial case: the error may be exactly what the message says, in its simplest form. A class YourClass is declared twice with a statement class YourClass { ... }, because you included the file YourClass.php twice.
What you want is not really the same as the datetime standards mentioned in the comments above; however, your code works. I see you have defined a ModelBinder for the DateOnly type. If you want the input format to match the output format, change this step from:
if (DateOnly.TryParse(value, out var date))
{
bindingContext.Result = ModelBindingResult.Success(date);
}
to
var value = valueProviderResult.FirstValue;
string format = "dd.MM.yyyy";
CultureInfo culture = CultureInfo.InvariantCulture;
if (DateOnly.TryParseExact(value, format, culture, DateTimeStyles.None, out var date))
{
bindingContext.Result = ModelBindingResult.Success(date);
}
In VS Code, you can use the Microsoft Serial Monitor extension to see the serial output of the ESP32.
When you see the label “Internal” under your build in App Store Connect, it indicates that the build was submitted using the “TestFlight (Internal Only)” option in Xcode.
To make the build available for External Testing, you must select “App Store Connect” as the distribution option during the upload process in Xcode—not “TestFlight”. This ensures the build is eligible for submission to Apple for external TestFlight review.
I have the same issue. What is the proper way of connecting an existing database?
Thanks bro, this quickly helped me with the error I was facing. How can I use it?
This seems to be a known Chromium bug and should be fixed in version 137: https://issues.chromium.org/issues/415729792
Turns out this has nothing to do with AWS, NextJS or any of the code, it's a MS Word Trust Center setting. I found two possible solutions (depending on your security appetite):
Option 1 - Find the downloaded file in your file explorer, right-click --> Properties, and check the 'Unblock' box at the bottom. This needs to be done on a file-by-file basis.
Option 2 - Open Word and go to File --> Options --> Trust Center --> Trust Center Settings --> Protected View and unselect the 'Enable Protected View for files originating from the Internet' check box. Then restart Word and thereafter, all files will open correctly.
I believe that in order to do that you have to join Apple's developer program. It's $100 yearly.
I posted this also on the marshmallow github page, and was able to get a good response there.
https://github.com/marshmallow-code/marshmallow/issues/2829
from marshmallow import Schema, fields, validate

class RequestNumber(fields.String):
    def __init__(self, *args, **kwargs):
        super().__init__(
            *args,
            metadata={'description': 'Request Number', 'example': 'REQUEST12345'},
            validate=validate.Regexp(regex=r"^REQUEST\d{3,9}$", error="Input string didn't match required format - REQUEST12345"),
            **kwargs,
        )

class Api1():
    class Input1(Schema):
        request_number = RequestNumber()
For anyone else on this page looking for the answer: I found it, as recommended by RickN in the comments :)
Replace: const keyBytes = Buffer.from(key, "base64");
With: const keyBytes = Buffer.from(key.trim(), "base64url");
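In context, a small sketch of the change (the key value is illustrative):
```typescript
// "base64url" accepts the URL-safe alphabet (- and _) and tolerates missing
// padding; .trim() guards against stray whitespace or trailing newlines.
const key = " c29tZS1zZWNyZXQta2V5 ";
const keyBytes = Buffer.from(key.trim(), "base64url");
console.log(keyBytes.length); // decoded byte length
```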
As Ike mentioned in a comment, you don't need to delete the whole column. Copy the data into a temporary extra column (copy, then paste values). Then, in the column with the incorrect column formula, highlight the whole column (minus the table header) and choose "Clear contents" (this is the crucial step: not just the Delete key). This removes the computed formula from the column (which you can test by adding an extra row). Then you can copy and paste the data back into the now-clean column. All external formulas and other column references will keep working for that column without needing to recreate them.
I know this is old thread, but I was looking for this answer, and with the help of Ike's comment, I managed to keep everything working, which was awesome for my complex spreadsheet. Wanted to add this comment for anyone following with the same issue.
Thanks, best answer that I have seen today.
Turns out it's because the new version of flask-session is not compatible with the old version of Airflow. I limited my flask-session to < 0.6 and it works just fine!
Can I achieve that using maxscript?
The easiest way to do this is through this VSCode extension: https://marketplace.visualstudio.com/items?itemName=noknownerrors.copy-visible-text (copy-visible-text). It does exactly what it says. Just install it, then select what you need and press Ctrl+Shift+C – and voilà!
I checked 8b5bad8c0b214a1c9eec2bd86aa274c4. The callback failed with BadRequest.
You allude to this in your comment, so I'll add an example of how I've addressed this by manipulating linewidth rather than alpha to get a similar visualization of distribution that still has sharp lines.
Here's a replication of your current approach:
import matplotlib.pyplot as plt
import numpy as np
num_lines = 1000
np.random.seed(42)
xs = np.linspace(0, 10, 100).reshape(1,-1)
ys = xs*np.random.normal(1,1,(num_lines,1)) + np.random.normal(0, 1, (num_lines,100))
for y in ys:
l = plt.plot(xs.flatten(), y, 'k', alpha=0.01)
l[0].set_rasterized(False)
plt.savefig('ex.svg')
plt.show()
Here's an alternative -- I also explicitly tell matplotlib not to rasterize via ax.set_rasterization_zorder(None) (I believe this is the same as your l[0].set_rasterized(False) call).
The difference is that I switch to manipulating linewidth rather than alpha. I think the effect is fairly similar.
fig, ax = plt.subplots()
ax.set_rasterization_zorder(None)
num_lines = 1000
np.random.seed(42)
xs = np.linspace(0, 10, 100).reshape(1, -1)
ys = xs * np.random.normal(1, 1, (num_lines, 1)) + np.random.normal(0, 1, (num_lines, 100))
for y in ys:
ax.plot(xs.flatten(), y, color='black', linewidth=0.01)
fig.savefig('ex_width.svg', format='svg')
When you zoom way in, you can see that the alpha approach (left) is fuzzier than the linewidth approach (right):
Turns out this was indeed a bug. Fixed by this PR: https://github.com/odin-lang/Odin/pull/5267.
I know this question has already been answered, but the answer is missing some main points; these are my findings, so I am adding them. The source of this answer is here.
1. Azure AD External Identities was the previous name of Azure AD B2C.
Azure AD B2C is a business-to-consumer identity management system.
2. Microsoft Entra External ID
Microsoft Entra External ID is a combination of Azure AD B2C and Azure AD B2B (now Entra ID).
When you create a Microsoft Entra External ID tenant, the system creates two types of tenants:
Workforce (B2B)
External (B2C)
The workforce tenant is used for Azure AD B2B (Entra ID) operations.
The external tenant is used for Azure AD B2C operations.
To learn more about Microsoft Entra External ID, check here.
I've managed to create a Bash script which polls the local DynamoDB stream using the AWS CLI and invokes the local Lambda with an event.
You can integrate it as part of the Docker Compose stack; I suggest using an amazon/aws-cli image.
https://gist.github.com/aldotroiano/69f3aaf900cec845c954329a55620f10
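The gist boils down to something like this rough sketch (table name, endpoints, and function name are illustrative; assumes jq is available):
```bash
#!/usr/bin/env bash
# Poll the first shard of a local DynamoDB stream and forward batches
# of records to a locally running Lambda.
STREAM_ARN=$(aws dynamodb describe-table --table-name MyTable \
  --endpoint-url http://localhost:8000 \
  --query 'Table.LatestStreamArn' --output text)
SHARD_ID=$(aws dynamodbstreams describe-stream --stream-arn "$STREAM_ARN" \
  --endpoint-url http://localhost:8000 \
  --query 'StreamDescription.Shards[0].ShardId' --output text)
ITERATOR=$(aws dynamodbstreams get-shard-iterator --stream-arn "$STREAM_ARN" \
  --shard-id "$SHARD_ID" --shard-iterator-type TRIM_HORIZON \
  --endpoint-url http://localhost:8000 \
  --query 'ShardIterator' --output text)

while true; do
  RESPONSE=$(aws dynamodbstreams get-records --shard-iterator "$ITERATOR" \
    --endpoint-url http://localhost:8000)
  # A real script would skip empty batches before invoking.
  echo "$RESPONSE" | jq '{Records: .Records}' > /tmp/event.json
  aws lambda invoke --function-name my-local-lambda \
    --endpoint-url http://localhost:3001 \
    --payload fileb:///tmp/event.json /tmp/lambda-out.json
  ITERATOR=$(echo "$RESPONSE" | jq -r '.NextShardIterator')
  sleep 1
done
```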
This works for me, hope it helps.
DateTime currentTime = DateTime.Now;
if (currentTime.Hour >= 5 && currentTime.Hour < 12)
{
ltWellcome.Text = "Good morning " + strUserName + "! Welcome to the system";
}
else if (currentTime.Hour >= 12 && currentTime.Hour <= 17)
{
ltWellcome.Text = "Good afternoon " + strUserName + "! Welcome to the system";
}
else if (currentTime.Hour >= 18 && currentTime.Hour <= 23)
{
ltWellcome.Text = "Good evening " + strUserName + "! Welcome to the system";
}
else
{
ltWellcome.Text = "Good night " + strUserName + "! Welcome to the system";
}
Try using interaction instead of inter:
interaction: discord.Interaction
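For context, a minimal slash-command sketch (discord.py 2.x app commands; the command itself is illustrative):
```python
import discord
from discord import app_commands

# discord.py resolves the parameter by its type annotation, so it must be
# annotated as discord.Interaction; the name "interaction" is convention.
@app_commands.command(name="ping", description="Replies with pong")
async def ping(interaction: discord.Interaction):
    await interaction.response.send_message("pong")
```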
If I understood correctly, what you want is to remove all the text.style.transform = ... assignments from your code.
Object value = mDataSnapshot.child("Suhu").getValue();
String suhu;
if (value != null) {
suhu = value.toString();
// Use the 'suhu' variable
} else {
suhu = ""; // Or handle null case appropriately
// Handle the case where the value is null
}
A1: The instance needs to have access to the Internet via a public IP or by using Cloud NAT so it can query the repository.
A2: Please also try installing the requested packages, such as the Google Cloud SDK (`google-cloud-sdk`), before the migration.
A3: Those flags are not related to your issue.
Memory issues like this often happen when the build container doesn't have enough RAM, even if you set --max-old-space-size=8192. Try increasing the pipeline memory size if possible, and monitor memory usage during the build to spot where it spikes. You can also test the build locally with the same Node options to see if it fails there, which helps isolate whether it's environment-related. Clearing yarn and Docker caches or disabling heavy build plugins temporarily might help narrow down the cause. Lastly, check if your pipeline environment supports swap space, as that can prevent the build process from being killed early.
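To reproduce the pipeline's constraints locally (illustrative heap value and build script name):
```bash
# Run the build with the same heap cap the pipeline uses; if it also dies
# here, the problem is the build itself rather than the CI environment.
NODE_OPTIONS="--max-old-space-size=8192" yarn build
```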
Thank you mikasa, you saved me from debugging hell; not even the AIs helped.
The problem seems to be a new setting, "Security: system enforce file extension mime type consistency" (Settings -> Feature Toggles). After disabling it, it works fine.
The best solution for encoding in search params format is:
new URLSearchParams({ [""]: value }).toString().slice(1)
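For example (the input string is illustrative; note that URLSearchParams encodes spaces as +):
```typescript
const value = "a b&c=d";
// Serialize under an empty key, then strip the leading "=".
const encoded = new URLSearchParams({ [""]: value }).toString().slice(1);
console.log(encoded); // "a+b%26c%3Dd"
```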
Don't use jeprof*.heap when generating your GIF. Use jeprof.PID_OF_THE_APP.*.heap instead.
Make sure you're importing ThemeProvider from @mui/material, not @emotion/react. I had the same issue and this fixed it for me.
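A minimal sketch of the correct import (assuming MUI v5+):
```typescript
// Correct: this ThemeProvider injects the full MUI theme context.
import { ThemeProvider, createTheme } from "@mui/material/styles";
// Incorrect for this purpose:
// import { ThemeProvider } from "@emotion/react";

const theme = createTheme();
```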
SQL Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
If you're using Android Studio, you may need to update your Build Variants. On version 2024.2.1 I had to choose from the sidebar: Build Variants > [Re-import with defaults] (button at the bottom right of the panel).
Thanks to @DavidMaze I solved it. It was just a matter of deleting the strings and passing them as actual paths, and the script did what I wanted (stop running once I closed all the apps).
Thank you also to @furas for simplifying my code; it runs much faster.
Final solution:
import os
import sys
import subprocess
from subprocess import Popen
def back_insert(filename):
#inserts backslashes into paths that have spaces in them
fstring = filename.replace(" ", "\\ ")
return fstring
#Runs the Emulator, socket, Timer and Tracker
commands = [
back_insert("/usr/bin/snes9x-gtk"),
back_insert("/usr/share/OpenTracker/OpenTracker"),
back_insert("/usr/bin/QUsb2Snes"),
back_insert("/home/user/LibreSplit/libresplit")
]
procs = [Popen(i) for i in commands]
for p in procs:
p.wait()
Here is a **complete guide to installing MySQL on a Red Hat server (RHEL, CentOS, AlmaLinux, or Rocky Linux)** and creating **two separate MySQL instances** on the same server.
---
## 🛠️ Goal
- Install **MySQL Server**
- Create **two independent MySQL instances**:
  - Instance 1: port `3306`
  - Instance 2: port `3307`
- Each instance will have:
  - Its own data directory
  - Its own configuration
  - Its own systemd service
---
## 🔧 Step 1: Install MySQL Server
### 1. Add the official MySQL repository
```bash
sudo rpm -Uvh https://dev.mysql.com/get/mysql80-community-release-el9-7.noarch.rpm
```
> Replace `el9` with your RHEL version (`el7`, `el8`, etc.)
### 2. Install MySQL Server
```bash
sudo dnf install mysql-server
```
---
## ⚙️ Step 2: Start and enable the default instance
```bash
sudo systemctl enable mysqld
sudo systemctl start mysqld
```
### Retrieve the temporary root password
```bash
sudo grep 'temporary password' /var/log/mysqld.log
```
Secure the installation:
```bash
sudo mysql_secure_installation
```
---
## 📁 Step 3: Prepare the second instance
### 1. Create a new data directory
```bash
sudo mkdir /var/lib/mysql2
sudo chown -R mysql:mysql /var/lib/mysql2
```
### 2. Initialize the database for the second instance
```bash
sudo mysqld --initialize --user=mysql --datadir=/var/lib/mysql2
```
> ✅ Save the generated password shown in the logs:
```bash
sudo cat /var/log/mysqld.log | grep "A temporary password"
```
---
## 📄 Step 4: Create a custom configuration file for the second instance
```bash
sudo nano /etc/my-2.cnf
```
Paste this configuration into it:
```ini
[client]
port = 3307
socket = /var/lib/mysql2/mysql.sock
[mysqld]
port = 3307
socket = /var/lib/mysql2/mysql.sock
datadir = /var/lib/mysql2
pid-file = /var/lib/mysql2/mysqld.pid
server-id = 2
log-error = /var/log/mysqld2.log
```
Save and close.
### Create the log file
```bash
sudo touch /var/log/mysqld2.log
sudo chown mysql:mysql /var/log/mysqld2.log
```
---
## 🔄 Step 5: Create a systemd service for the second instance
```bash
sudo nano /etc/systemd/system/mysqld2.service
```
Paste in this content:
```ini
[Unit]
Description=MySQL Second Instance
After=network.target
[Service]
User=mysql
Group=mysql
ExecStart=/usr/bin/mysqld --defaults-file=/etc/my-2.cnf --basedir=/usr --plugin-dir=/usr/lib64/mysql/plugin
ExecStop=/bin/kill -SIGTERM $MAINPID
Restart=always
PrivateTmp=false
[Install]
WantedBy=multi-user.target
```
Reload systemd:
```bash
sudo systemctl daemon-reexec
sudo systemctl daemon-reload
```
Enable and start the service:
```bash
sudo systemctl enable mysqld2
sudo systemctl start mysqld2
```
Check the status:
```bash
sudo systemctl status mysqld2
```
---
## 🔐 Step 6: Secure the second instance
Connect to the second instance with the temporary password:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
Run these SQL commands to change the password:
```sql
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NouveauMotDePasse';
FLUSH PRIVILEGES;
exit
```
---
## 🧪 Step 7: Test both instances
### Check the ports in use:
```bash
ss -tuln | grep -E '3306|3307'
```
### Connect to each instance:
Instance 1:
```bash
mysql -u root -p
```
Instance 2:
```bash
mysql -u root -p -h 127.0.0.1 -P 3307
```
---
## 📌 Summary of the two instances
| Instance | Port | Config File | Data | Systemd Service | PID File | Log File |
|----------|------|-----------------|-------------------|-----------------|------------------------------|------------------------|
| Default | 3306 | `/etc/my.cnf` | `/var/lib/mysql` | `mysqld` | `/var/run/mysqld/mysqld.pid` | `/var/log/mysqld.log` |
| Second | 3307 | `/etc/my-2.cnf` | `/var/lib/mysql2` | `mysqld2` | `/var/lib/mysql2/mysqld.pid` | `/var/log/mysqld2.log` |
---
## ✅ You're done!
You now have **two independent MySQL instances** running on the same Red Hat server.
Each instance can be managed separately with its own commands:
```bash
sudo systemctl start/stop/restart mysqld
sudo systemctl start/stop/restart mysqld2
```
Use the format_source_path() function on your builder:
env_logger::builder()
.format_source_path(true)
.init();
The logs will look like
[2025-06-03T20:06:14Z ERROR path/to/file.rs:84 project::module] Log message
Use the other format_* methods to further customize the look of your logs.
After lots of testing I found out the difference was Apache (1st server) vs. LiteSpeed (2nd server). The way to find it was by: <!--#echo var="SERVER_SOFTWARE" -->
How can I get it to fit inside an object that is not spanning the whole screen width?
The issue was that I was using END when I should have been using End.
Note for new TI-84 programmers: if you include extra whitespace (other than newlines) or include a syntax error, you won't be warned; your program will just poop out.
props.data should work in your code snippet to get the data of the row. The cellRenderer receives props of type CustomCellRendererProps, and this is documented in the AG Grid docs.
wptrkinh, thank you very much, man.
I obtain a total of 295,241 calls per second for the CIE ΔE2000 function in SQL. Both the C99 and SQL (MariaDB and PostgreSQL) versions are available here.
Adding my 2¢ because everything else here seems overly complex to me (with Python 3.13 typing):
def recursive_subclasses[T](cls: type[T]) -> set[type[T]]:
"""
Recursively finds all subclasses of a given class.
"""
return set.union({cls}, *map(recursive_subclasses, cls.__subclasses__()))
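For instance, with a small hypothetical hierarchy:
```python
class Base: ...
class Child(Base): ...
class GrandChild(Child): ...

# Includes the class itself plus all transitive subclasses.
assert recursive_subclasses(Base) == {Base, Child, GrandChild}
```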
Matthias, this was exactly what I needed! If you run the service manager and point it to the correct version of jvm.dll, you don't need to worry about whether your JAVA_HOME is correct. Since I was using 6.0.0.0, I needed Java 8 even though Java 17 was already installed and set as JAVA_HOME. I opened this up, pointed it to the Java 8 JDK's jvm.dll, and it started right up.
For me it was the Device Simulator window; apparently, to avoid conflicts, it disables some inputs.
https://docs.unity3d.com/Packages/com.unity.inputsystem%401.4/manual/Debugging.html?#device-simulator
When Device Simulator window is in use, mouse and pen inputs on the simulated device screen are turned into touchscreen inputs. Device Simulator uses its own touchscreen device, which it creates and destroys together with the Device Simulator window.
To prevent conflicts between simulated touchscreen inputs and native mouse and pen inputs, Device Simulator disables all native mouse and pen devices.
Closing it resolved my issue. (For cross-platform development I am using both touch and mouse inputs.)
You need to make sure that you run this as a read-only transaction.
I was blocked with a 403 as well. I found a fix using Selenium for Python instead of urlopen.
Fix on fork here:
https://github.com/Rolzad73/UnrealMarketplaceVaultExtractor
Can't you just use Substring, or create an extension method?
https://dotnetfiddle.net/RlDOuh
public static class StringExtensions
{
public static string Slice(this string source, int start, int end) => source.Substring(start, end - start);
}
Also keep in mind that even if auto update is on, it won't update until it considers the version stable:
To ensure the stability of self-hosted integration runtime, we release a new version each month and push an auto-update every three months, using a stable version from the preceding three months. So you may find that the autoupdated version is the previous version of the actual latest version. If you want to get the latest version, you can go to download center and do so manually. Additionally, auto-update to a new version is managed internally. You can't change it.
I don't think there is an easy way out, but this isn't particularly hard, provided that you know Python well.
Here is what I was able to whip up in about 2 hours or so (in Jupyter; cells are separated with # ========):
import numpy as np
import matplotlib.pyplot as plt
# step 1: load image
img = plt.imread("./your_image.png")
img = img[..., 0] # get rid of rgb channels that are present in the sample image
plt.imshow(img)
# ======================
# step 2, identify relevant area, crop the image to it
img_bin = img > 0.05 # threshold image to get where the roi is
# btw, plt.imread gets pixel values to 0-1 range
where_y, where_x = np.where(img_bin)
xmi, xma = np.quantile(where_x, (0.01, 0.99)) # better for images noise
ymi, yma = np.quantile(where_y, (0.01, 0.99))
# visualize the region of interest
plt.imshow(img_bin)
plt.gca().add_artist(plt.Rectangle((xmi, ymi), xma-xmi, yma-ymi, facecolor='none', edgecolor='red'))
img2 = img[int(ymi):int(yma), int(xmi):int(xma)].copy() # crop the image
# ========================
# step 3: find some sort of starting point for the algorithm
ci = img2.ravel().argmax() # get brightest point
width = img2.shape[1]
cy, cx = ci//width, ci%width
plt.imshow(img2)
plt.plot([cx, ], [cy, ], marker="o", color='red', markerfacecolor="none")
def get_line_ends(cx, cy, len_pixels, rads):
l = len_pixels/2
cos = np.cos(rads)
sin = np.sin(rads)
y0 = cy-l*sin
y1 = cy+l*sin
x0 = cx-l*cos
x1 = cx+l*cos
return x0, x1, y0, y1
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
# notice that because y axis is inverted, line will be rotating clockwise, instead of counter-clockwise
print(x0, x1, y0, y1)
plt.plot([x0, x1] , [y0, y1], c='red')
# ===========================
# step 4. line sampling prototype
x0, x1, y0, y1 = get_line_ends(cx, cy, 100, -np.pi/11)
plt.imshow(img2)
print(x0, x1, y0, y1)
# plt.plot([x0, x1] , [y0, y1], c='red')
xs = np.linspace(x0, x1, 100).astype(int)
ys = np.linspace(y0, y1, 100).astype(int)
plt.plot(xs, ys, c="red", ls="none", marker="s", markersize=1)
plt.xlim(x0-5, x1+5)
plt.ylim(y0+5, y1-5) # y is still inverted
# ===============================
# step 5 sample pixels along the line at a bunch of angles, and find correct angle,
# to find direction of your lines
def sample_coordinates_along_a_line(img, cx, cy, len_pixels, rads): # same variable names
x0, x1, y0, y1 = get_line_ends(cx, cy, len_pixels, rads)
xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
return img[ys, xs]
rs = np.linspace(-np.pi, 0, 100)
ms = np.empty_like(rs)
for i, r in enumerate(rs):
sample = sample_coordinates_along_a_line(img2, cx, cy, 100, r)
ms[i] = sample.mean()
r_est = r_estimated = rs[ms.argmax()] # will be nearly identical to our guess, so i don't plot it
plt.plot(rs, ms)
plt.axvline(-np.pi/11, c='red') # our guess, should match a maximum
# =================================
# step 6: sample along perpendicular direction, to identify lines
r_90 = r_est + np.pi/2 # perpendicular,
# since r_est is between -pi and 0, r_90 is always between -pi/2, pi/2
def get_line_in_a_box(cx, cy, r, xmi, xma, ymi, yma):
"get line that is inside of rectangular box, that goes through point (cx, cy), at angle r"
is_steep = np.abs(r) > np.pi/4 # if angle is > 45 deg, then line grows faster vertically
if is_steep:
y0 = ymi; y1 = yma
x0 = cx - cy/np.tan(r)
x1 = cx+(yma-cy)/np.tan(r)
else:
x0 = xmi; x1 = xma
y0 = cy - cx*np.tan(r)
y1 = cy + (xma-cx)*np.tan(r)
return x0, x1, y0, y1
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # along lines
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1] , [y0, y1], c='red') # perpendicular to lines
# ================================
# now we figure out where peaks are from sampling along perpendicular
plt.figure()
def sample_coordinates_along_a_line2(img, x0, x1, y0, y1):
len_pixels = np.sqrt((x1-x0)**2+(y1-y0)**2)
print(len_pixels)
xs = np.linspace(x0, x1, int(len_pixels)).astype(int)
ys = np.linspace(y0, y1, int(len_pixels)).astype(int)
return img[ys, xs]
plt.figure()
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
sampl = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
trend = np.convolve(sampl, [1/100]*100, mode="same")
sampl_detrended = sampl-trend
plt.plot(sampl_detrended)
# ==============================
# step 7: find maxima in detrended sample
# i've found this function somewhere in my processing scripts, it's more generic, but good one
def bool_to_regions2(xs: np.ndarray[bool], which=None) -> np.ndarray[int]:
"""return (start, end) pairs of each continious region as (n, 2) int array (end not inclusive, as usual). example
```
a = np.array([1,0,0,0,0,1,1,1,0,1,1,], dtype=bool)
for b, e in bool_to_regions2(a):
print(b, e)
print("".join(np.where(a,'1','0')))
print(' '*b + '^'+' '*(e-b-1)+'^')
```
set which to True or False to return only regions for these values
"""
heads = np.diff(xs, prepend=~xs[0])
tails = np.diff(xs, append=~xs[-1])
nh = abs(heads.sum()) # if array is 0 and 1 instead of proper boolean
nt = abs(tails.sum())
assert nh == nt, f"A: function `bool_to_regions` {nh=}, {nt=}, nh!=nt"
r = np.stack([np.where(heads)[0], np.where(tails)[0]+1], axis=-1)
if which is None: return r
elif which is True: return r[::2] if bool(xs[0]) else r[1::2]
elif which is False: return r[::2] if not xs[0] else r[1::2]
else: raise Exception("`which` should be True, False or None")
plt.plot(sampl_detrended)
maxima = bool_to_regions2(sampl_detrended>0.05, which=True).mean(axis=-1)
# maxima are positions of your lines along a perpendicular to them
for m in maxima:
plt.axvline(m, color='red', alpha=0.5)
# =======================================
# step 8: project maxima back to image space, by using linear interpolation along the line
plt.imshow(img2)
# remember, x0, y0, x1, y1 are from the perpendicular
line_x_coords = x0 + (x1-x0) * maxima / len(sampl_detrended)
line_y_coords = y0 + (y1-y0) * maxima / len(sampl_detrended)
plt.plot(line_x_coords, line_y_coords, ls="none", c="red", marker="+")
#================================================
# step 9: sample all lines
line_sampls = []
plt.imshow(img2)
x0, x1, y0, y1 = get_line_in_a_box(cx, cy, r_90, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
plt.plot([x0, x1,], [y0, y1], c="red") # perpendicular
for line_number in range(len(line_x_coords)):
lcx = line_x_coords[line_number]
lcy = line_y_coords[line_number]
x0, x1, y0, y1 = get_line_in_a_box(lcx, lcy, r_est, xmi=0, xma=img2.shape[1]-1, ymi=0, yma=img2.shape[0]-1)
if x0 < 0 or x1 > img2.shape[1]-1 or y0 > img2.shape[0]-1 or y1 < 0:
continue # todo: clip line instead of skipping it
else:
plt.plot([x0, x1,], [y0, y1], c="red") # should cover the lines in the img2
sample = sample_coordinates_along_a_line2(img2, x0, x1, y0, y1)
line_sampls.append(sample)
line_sampls= np.stack(line_sampls)
# ===============================================
# this is how intensity samples look along the lines
# it should be easy to compute center of each line
for sampl in line_sampls:
plt.plot(sampl)
Note: Jupyter starts a new plot in each cell. I'm not sure how to get nice visual feedback in a plain .py script; maybe call plt.show() at the end of each cell?
First, check whether Neocities allows making a custom 404 page.
If yes, find its location and add HTML/JS content to it.
As for a "very detailed explanation" of JavaScript, please read some documentation; it is impossible to teach it here.
You can find articles on Stack Overflow, for example Random image generation in Javascript.
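As a tiny illustrative sketch of what the 404 page could run (the element id and file names are hypothetical):
```javascript
// Pick a random image each time the custom 404 page loads.
const images = ["oops1.png", "oops2.png", "oops3.png"];
const pick = images[Math.floor(Math.random() * images.length)];
document.getElementById("notfound-img").src = pick;
```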
Naturally, right after I make a StackOverflow question I figure it out!
The answer is to call .persist() after the .reply call:
agent
.get('https://stackoverflow.com')
.intercept({
path: '/notarealendpoint',
method: 'GET',
})
.reply(200, "foo")
// this is new
.persist();
See https://github.com/nodejs/undici/blob/main/types/mock-interceptor.d.ts#L10.
There's also a .times function if you only want the mock to persist N times.
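A sketch of the .times variant (same illustrative endpoint as above):
```typescript
agent
  .get('https://stackoverflow.com')
  .intercept({
    path: '/notarealendpoint',
    method: 'GET',
  })
  .reply(200, "foo")
  .times(3); // serve this mock for exactly 3 matching requests
```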
Seeing as a Set IS-A Collection, you get a Collection when you get the Set - you get both, and you can decide which one you want to use. What is the downside of that?
If you'd like to avoid geoplot or are experiencing crashing, you should be able to get the desired effect using seaborn and geopandas (though not with just geopandas, to my knowledge).
With data like this:
Polygon GeoDataFrame:
id geometry
0 polygon_1 POLYGON ((-101.7 21.1, -101.6 21.1, -101.6 21....
Points GeoDataFrame:
PointID geometry
0 0 POINT (-101.61326 21.14453)
1 1 POINT (-101.66483 21.18465)
2 2 POINT (-101.61764 21.11646)
3 3 POINT (-101.65355 21.12132)
4 4 POINT (-101.68183 21.17071)
5 5 POINT (-101.61088 21.14948)
6 6 POINT (-101.66336 21.17007)
7 7 POINT (-101.6774 21.14027)
8 8 POINT (-101.66757 21.13169)
9 9 POINT (-101.66333 21.12997)
in any crs:
fig, ax = plt.subplots(figsize=(3, 3))
assert polygon_gdf.crs == points_gdf.crs
print("crs is", polygon_gdf.crs.coordinate_system)
polygon_gdf.plot(ax=ax, facecolor='none')
points_gdf.plot(ax=ax)
ax.set_axis_off()
crs is ellipsoidal
you can just re-project it to a cartesian coordinate reference system, like Albers Equal Area:
# to_crs re-projects the geometries; assigning .crs would only relabel them
polygon_gdf = polygon_gdf.to_crs('EPSG:9822')
points_gdf = points_gdf.to_crs('EPSG:9822')
print("crs is", polygon_gdf.crs.coordinate_system)
crs is cartesian
Then you can plot as follows:
import seaborn as sns
fig, ax = plt.subplots(figsize=(3, 3))
sns.kdeplot(
x=points_gdf.geometry.x,
y=points_gdf.geometry.y,
fill=False,
cmap="viridis",
bw_adjust=0.5,
thresh=0.05,
ax=ax,
)
polygon_gdf.plot(ax=ax, facecolor="none")
ax.set(xlabel="", ylabel="")
fig.tight_layout()
Bro, I have had this issue for a long time. Any solution?
Open the assistant and, right after Class, change the prefix of the view controller that opens (e.g. HomeView) to the same name as the view controller you expect to open (e.g. SignupView). Then open the said view controller in a new window; it will show an error in the class name (as two view controllers now have the same name). Now change the class back to what it should be. This will auto-correct the assistant, which should then show up as you expected.
For anyone curious, you can now just set the MaximumHeight, MinimumHeight, MaximumWidth, and MinimumWidth to static values so that they won't change. Something like this:
protected override Window CreateWindow(IActivationState? activationState)
{
const int newWidth = 1280;
const int newHeight = 640;
var window = new Window(new AppShell())
{
Width = newWidth,
Height = newHeight
};
window.MaximumHeight = newHeight;
window.MinimumHeight = newHeight;
window.MaximumWidth = newWidth;
window.MinimumWidth = newWidth;
return window;
}
Your XPath is not recognized by the browser. Try the normalize-space() function:
//a[normalize-space()='Cookie Policy']
OR
Change strategy and use link text (Java below):
driver.findElement(By.linkText("Cookie Policy"));
Does anybody know of a way to disable live chat via the YouTube Data API? I have a Node.js script that handles livestream scheduling pretty well, except that live chat is also enabled. I've read the YouTube API documentation and searched online, and there seems to be NO way of disabling live chat via the API, except to manually disable it before the stream goes live. This kind of defeats the purpose of scheduling the live stream using the API.
Any suggestion would be very much appreciated.
For those wondering what the actual "solution" (most likely) was: the certificate is generated for a single, specific McMaster-Carr account, and you have to use that account's credentials in the POST body along with the certificate to be able to log in.
It seems inadvisable to register the name of your gateway filter in application.properties. According to the respective Spring developer guide, "this is not a supported naming convention" and it should be avoided, since it "may be removed in future releases". Here is the official advice:
Naming Custom Filters And References In Configuration
Custom filters class names should end in GatewayFilterFactory.
For example, to reference a filter named Something in configuration files, the filter must be in a class named SomethingGatewayFilterFactory.
I am having trouble making my gateway implementation work, too. In my case, Spring Cloud Starter Gateway (v4.2.3) somehow expects a CSRF token: whenever I hit the gateway with a POST request, it answers 403 Forbidden: An expected CSRF token cannot be found.
I haven't figured out why this happens, since I disabled Spring Security in all downstream microservices. Here is what I have so far:
@Slf4j
@Component
public class AuthGatewayFilterFactory extends AbstractGatewayFilterFactory<AuthGatewayFilterFactory.Config> {

    private final RouteValidator routeValidator;
    private final JwtUtil jwtUtil;

    public AuthGatewayFilterFactory(RouteValidator routeValidator, JwtUtil jwtUtil) {
        super(Config.class);
        this.routeValidator = routeValidator;
        this.jwtUtil = jwtUtil;
    }

    @Override
    public GatewayFilter apply(Config config) {
        return ((exchange, chain) -> {
            ServerHttpRequest request = exchange.getRequest();
            if (routeValidator.isSecured.test(request)) {
                String header = request.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
                if (header == null || !header.trim().startsWith("Bearer ")) {
                    log.warn("Invalid authorization header");
                    exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                    return exchange.getResponse().setComplete();
                }
                try {
                    jwtUtil.validateJwtToken(header);
                } catch (JwtTokenMalformentException | JwtTokenMissingException e) {
                    log.error("Error during token validation: {}", e.getMessage());
                    exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                    return exchange.getResponse().setComplete();
                }
                String token = jwtUtil.getToken(header);
                Claims claims = jwtUtil.extractAllClaims(token);
                String csrfToken = request.getHeaders().getFirst("X-CSRF-TOKEN");
                ServerHttpRequest mutatedRequest = exchange.getRequest().mutate()
                        .header("X-User-Id", claims.getSubject())
                        .header("X-User-Username", claims.get("username", String.class))
                        .header("X-User-Email", claims.get("email", String.class))
                        .header("X-User-Roles", claims.get("roles", String.class))
                        .header("X-CSRF-TOKEN", csrfToken)
                        .build();
                return chain.filter(exchange.mutate().request(mutatedRequest).build());
            }
            // Non-secured route - pass through unchanged
            return chain.filter(exchange);
        });
    }

    public static class Config {

        private boolean validateCsrf = false;

        public boolean isValidateCsrf() {
            return validateCsrf;
        }

        public void setValidateCsrf(boolean validateCsrf) {
            this.validateCsrf = validateCsrf;
        }
    }
}
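Regarding the 403: since it is the gateway itself answering (the downstreams are never reached), one thing worth trying is disabling CSRF in the gateway's own reactive security chain. A hedged sketch, assuming Spring Security is on the gateway classpath:
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@Configuration
@EnableWebFluxSecurity
public class GatewaySecurityConfig {

    // Spring Cloud Gateway is reactive, so the WebFlux chain applies here;
    // disabling CSRF stops the gateway from demanding a token on POSTs.
    @Bean
    public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        return http
                .csrf(ServerHttpSecurity.CsrfSpec::disable)
                .build();
    }
}
```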
The issue was how I was copying my file structure in this part:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY package.json eslint.config.mjs tsconfig.json ./
COPY . ./frontend
RUN npm run build frontend
I was changing the application structure to try and help with caching, but this wasn't actually necessary and was just causing too many problems. Updating it to this:
# Copy source code and build
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npm run build
and changing the rest of the Dockerfile accordingly resolved the issue.
I also recommend re-orienting the image so that the chirp grating is either horizontal or vertical. (For the sample image I get a rotation angle of about 16.1 degrees to obtain horizontal grid lines.)
Now project the image either horizontally or vertically. What you get is a plot like this:
From the local maxima you get the coordinates of the grid lines, i.e., you can position the line selection appropriately and do the desired measurements.
Please tell us if this is of any help to you or if you need further help.
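A minimal sketch of that projection step (assuming the re-oriented image is already loaded as a 2-D numpy array named img):
```python
import numpy as np

# Project the image onto the vertical axis: horizontal grid lines become
# local maxima of this 1-D intensity profile.
profile = img.sum(axis=1)

# Crude local-maximum detection; scipy.signal.find_peaks would be more robust.
interior = profile[1:-1]
maxima = np.where((interior > profile[:-2]) & (interior > profile[2:]))[0] + 1
print(maxima)  # row indices of the grid lines
```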