I am working with CATS and am having the same issue. Does the same solution apply, and if so, which file would I need to edit?
For me, resyncing the project with the Gradle files helped. I didn't modify anything, because the code worked before I shut down the system. I am using the latest Android Studio (Meerkat | 2024.3.1 Patch 2), but it is still an issue.
To configure WebClient to use a specific DNS server or rely on the system's resolver, you'll need to customize the underlying HttpClient's resolver. If you want to stick with the system DNS, use DefaultAddressResolverGroup.INSTANCE, which follows your OS-level settings. To set a custom DNS server (like 8.8.8.8), create a DnsAddressResolverGroup using a DnsNameResolverBuilder and a SingletonDnsServerAddressStreamProvider.
If you're working with an HTTP proxy, make sure it's properly set up in HttpClient and supports HTTPS tunneling through the CONNECT method. In more locked-down environments, it's a good idea to combine proxy settings with the system resolver and enable wiretap logging for better reliability and easier debugging.
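For illustration, here is a rough sketch with Reactor Netty (assuming Spring WebFlux is on the classpath; the 8.8.8.8 resolver address is just an example, not a recommendation):

import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.DefaultAddressResolverGroup;
import io.netty.resolver.dns.DnsAddressResolverGroup;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.resolver.dns.SingletonDnsServerAddressStreamProvider;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

import java.net.InetSocketAddress;

public class WebClientDnsConfig {

    // Option 1: follow the OS-level DNS settings
    static WebClient systemResolverClient() {
        HttpClient httpClient = HttpClient.create()
                .resolver(DefaultAddressResolverGroup.INSTANCE);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }

    // Option 2: force a specific DNS server (example address)
    static WebClient customResolverClient() {
        DnsAddressResolverGroup resolverGroup = new DnsAddressResolverGroup(
                new DnsNameResolverBuilder()
                        .channelType(NioDatagramChannel.class)
                        .nameServerProvider(new SingletonDnsServerAddressStreamProvider(
                                new InetSocketAddress("8.8.8.8", 53))));
        HttpClient httpClient = HttpClient.create().resolver(resolverGroup);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}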
I had some annotations with normalized values above 1, like "bottomX": 1.00099
In my case (Apache 2.4 win64, PHP 8.1.1 win32 vs16 x64) the problem was solved by the following: copy libsasl.dll from the PHP directory to apache/bin.
You cannot prevent the way that the Android system works. You need to handle your own session and state per the Designing and Developing Plugins guide.
I decided to just use Umbraco Cloud for hosting and recreated the site there. The most likely issue was that my views referenced content IDs that only existed in my local database. I noticed and resolved this on Umbraco Cloud, which was cheaper for hosting anyway.
Sounds like a bug described in this [github issue](https://github.com/Azure/azure-cli/issues/17179) where `download-batch` doesn't distinguish between blobs and folder entries. It lists everything in the container and then incorrectly attempts to download "config-0000" as a file, writing a file with that name to your destination dir. Then it does a similar thing with "config-0000/scripts", but "config-0000" is now a file, and that's where the "Directory is expected" error message comes from.
A possible workaround that might have worked for you is to specify a pattern that wouldn't match any of your folder entries in blob storage, like: `--pattern *.json`.
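For example (a sketch; the container name and destination are placeholders):

az storage blob download-batch \
    --source mycontainer \
    --destination ./downloads \
    --pattern "*.json"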
With the hint about the functions from @Davide_sd, I made a generic method that lets me control fairly easily how the sub-steps are split up. Basically, I'm manually deriving the functions I split off but, much like cse, keeping the results in a dictionary shared among all occurrences.
The base expressions that make up the calculation are input and never modified; the derivation list is seeded with what you want to derive (multiple expressions are fine), and it recursively derives them, using the expression list as required.
At the end, I can still use cse to a) bring it into that format should you require it, and b) factor out even more common occurrences.
It works decently well with my small example; I may update it as I add more complexity to the function I need derived.
from sympy import *

def find_derivatives(expression):
    derivatives = []
    if isinstance(expression, Derivative):
        #print(expression)
        derivatives.append(expression)
    elif isinstance(expression, Basic):
        for a in expression.args:
            derivatives += find_derivatives(a)
    elif isinstance(expression, MatrixBase):
        for i in range(expression.rows):
            for j in range(expression.cols):
                derivatives += find_derivatives(expression[i, j])
    return derivatives

def derive_recursively(expression_list, derive_done, derive_todo):
    newly_derived = {}
    for s, e in derive_todo.items():
        print("Handling derivatives in " + str(e))
        derivatives = find_derivatives(e)
        for d in derivatives:
            if d in newly_derived:
                #print("Found derivative " + str(d) + " in done list, already handled!")
                continue
            if d in derive_todo:
                #print("Found derivative " + str(d) + " in todo list, already handling!")
                continue
            if d in expression_list:
                #print("Found derivative " + str(d) + " in past list, already handled!")
                continue
            if d.expr in expression_list:
                expression = expression_list[d.expr]
                print(" Deriving " + str(d.expr) + " w.r.t. " + str(d.variables))
                print(" Expression: " + str(expression))
                derivative = Derivative(expression, *d.variable_count).doit().simplify()
                print(" Derivative: " + str(derivative))
                if derivative == 0:
                    e = e.subs(d, 0)
                    derive_todo[s] = e
                    print(" Replacing main expression with: " + str(e))
                    continue
                newly_derived[d] = derivative
                continue
            print("Did NOT find base expression " + str(d.expr) + " in provided expression list!")
    derive_done |= derive_todo
    if len(newly_derived) == 0:
        return derive_done
    return derive_recursively(expression_list, derive_done, newly_derived)

incRot_c = symbols('aX aY aZ')
incRot_s = Matrix(3, 1, incRot_c)
theta_s = Function("theta")(*incRot_c)
theta_e = sqrt((incRot_s.T @ incRot_s)[0, 0])
incQuat_c = [ Function(f"i{i}")(*incRot_c) for i in "WXYZ" ]
incQuat_s = Quaternion(*incQuat_c)
incQuat_e = Quaternion.from_axis_angle(incRot_s/theta_s, theta_s*2)
baseQuat_c = symbols('qX qY qZ qW')
baseQuat_s = Quaternion(*baseQuat_c)
poseQuat_c = [ Function(f"p{i}")(*incRot_c, *baseQuat_c) for i in "WXYZ" ]
poseQuat_s = Quaternion(*poseQuat_c)
# Could also do it like this and in expressions just refer poseQuat_s to poseQuat_e, but output is less readable
#poseQuat_s = Function(f"pq")(*incRot_c, *baseQuat_c)
poseQuat_e = incQuat_s * baseQuat_s

expressions = { theta_s: theta_e } | \
    { incQuat_c[i]: incQuat_e.to_Matrix()[i] for i in range(4) } | \
    { poseQuat_c[i]: poseQuat_e.to_Matrix()[i] for i in range(4) }

derivatives = derive_recursively(expressions, {}, { symbols('res'): diff(poseQuat_s, incRot_c[0]) })
print(derivatives)

elements = cse(list(expressions.values()) + list(derivatives.values()))
pprint(elements)
Try this!
RawRequest: "*\+*" or RawRequest:*\+*
The speedup is insignificant because you only sped up an insignificant part of the overall work. Most time is spent by the primes[...] = False
commands, and they're the same for both wheels.
Official Microsoft documentation:
https://learn.microsoft.com/en-us/nuget/reference/nuget-exe-cli-reference?tabs=windows
I have this code but it won't give me the output; it gets stuck.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main()
{
int[] LICEO = Enumerable.Range(4, 15).ToArray();
List<int> IUSH = Enumerable.Range(18, 43).ToList();
Console.WriteLine("Ingrese las edades de RONDALLA separadas por punto y coma (;):");
ArrayList RONDALLA = new ArrayList(Console.ReadLine().Split(';').Select(double.Parse).ToArray());
Console.WriteLine("LICEO : " + string.Join(", ", LICEO));
Console.WriteLine("IUSH : " + string.Join(", ", IUSH));
Console.WriteLine("RONALLA : " + string.Join(", ", RONDALLA.ToArray()));
int diferencia = (int)RONDALLA.Cast<int>().Max() - LICEO.Min();
Console.WriteLine($"Diferencia entre la edad mayor de RONDALLA y la menor del LICEO es: {diferencia}");
int sumaIUSH = IUSH.Sum();
double promedioIush = IUSH.Average();
Console.WriteLine($"La sumatira de las edades de IUSH es: {sumaIUSH}");
Console.WriteLine($"El promedio de las edades de IUSH es: {promedioIush}");
Console.WriteLine("Ingrese la edad que sea buscar del LICEO:");
int edadBuscadaLICEO = int.Parse(Console.ReadLine());
int posicionLICEO = Array.IndexOf(LICEO, edadBuscadaLICEO);
if (posicionLICEO != -1)
{
Console.WriteLine($"La edad {edadBuscadaLICEO} existe en la posiciÃŗn {posicionLICEO}.");
Console.WriteLine($"La edad en IUSH en la misma posiciÃŗn: {(posicionLICEO < IUSH.Count ? IUSH[posicionLICEO].ToString() : "N/A")}");
Console.WriteLine($"La edad en RONDALLA en la misma posiciÃŗn: {(posicionLICEO < RONDALLA.Count ? RONDALLA[posicionLICEO].ToString() : "N/A")}");
}
else
{
Console.WriteLine($"La edad {edadBuscadaLICEO} no existe en el LICEO.");
}
List<int> SALAZAR = LICEO.Concat(IUSH).ToList();
Console.WriteLine("Edades de SALAZAR: " + string.Join(", ", SALAZAR));
SALAZAR.Sort();
SALAZAR.Reverse();
Console.WriteLine("5 edades mÃĄs altas de SALAZAR: " + string.Join(", ", SALAZAR.Take(5)));
Console.WriteLine("5 edades mÃĄs bajas de SALAZAR: " + string.Join(", ", SALAZAR.OrderBy(x => x).Take(5)));
int[] edadesEntre15y25 = SALAZAR.Where(edad => edad >= 15 && edad <= 25).ToArray();
int cantidad = edadesEntre15y25.Length;
double porcentaje = (double)cantidad / SALAZAR.Count * 100;
Console.WriteLine($"Cantidad de edades entre 15 y 25 aÃąos: {cantidad}");
Console.WriteLine($"Porcentaje de edades entre 15 y 25 aÃąos: {porcentaje:F2}%");
}
}
Well, install works:
winget install --id Microsoft.Powershell
But the MS documentation says my original command should have worked. Frustrating.
Azure Database - I'm including here SQL Database, SQL Elastic Pool and MySQL Flexible Server - scaling cannot be performed in real time because it involves downtime. It can range from a few seconds to a few hours depending on the size of your workload (Microsoft expresses this downtime in terms of "minutes per GB" in some of their articles).
See this post from 2017 where they describe downtimes of up to 6hours with ~250GB databases:
How do you automatically scale-up and down a single Azure database?
You probably know where I'm trying to get here. You automatically scale-up and down on your own. You need to either build your own tools or do it manually. There is no built-in support for this (and with reason).
I have to say that lately, for Azure SQL pools, we are seeing extremely fast tier scaling (i.e. < 1 min) with databases in the range of 100-200 GB, so the Azure team has probably gone to great lengths to improve tier changes since 2017...
For MySQL Flexible Server I've seen it's almost never less than 4-5 minutes, even for small servers. But this is a very new service; I am sure it will get better with time.
The fact that you have this downtime is probably why Azure did not add out-of-the-box autoscaling, instead providing users metrics and APIs so they can choose when and how to scale according to their business needs and applications. Again, depending on your business case and workload, those downtimes might be tolerable if properly handled (at specific times of the day, etc.)
I.e. for our development and staging environments we are using this (disclaimer, I built it):
https://github.com/david-garcia-garcia/azureautoscalerapp
and have set up rules that cater to our staging environment needs: the pool scales automatically between 20 DTU and 800 DTU according to real usage. DTUs are scaled to a minimum of 50 between 6:00 and 18:00 to reduce disruption. Provisioned storage also scales up and down automatically (in the staging pools we get databases added and removed automatically all the time; some are small, others several hundred GB).
It does have a downtime, but it is so small that properly educating our QA team allowed us to cut our MSSQL costs by more than half.
- Resources:
myserver_pools:
ResourceId: "/subscriptions/xxx/resourceGroups/mygroup/providers/Microsoft.Sql/servers/myserver/pool/{.*}"
Frequency: 5m
ScalingConfigurations:
Baseline:
ScaleDownLockWindowMinutes: 50
ScaleUpAllowWindowMinutes: 50
Metrics:
dtu_consumption_percent:
Name: dtu_consumption_percent
Window: 00:05
storage_used:
Name: storage_used
Window: 00:05
TimeWindow:
Days: All
Months: All
StartTime: "00:00"
EndTime: "23:59"
TimeZone: Romance Standard Time
ScalingRules:
autoadjust:
ScalingStrategy: Autoadjust
Dimension: Dtu
ScaleUpCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(3).Average() > 85" # Average DTU > 85% for 3 minutes
ScaleDownCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(5).Average() < 60" # Average DTU < 60% for 5 minutes
ScaleUpTarget: "(data) => data.NextDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
ScaleDownTarget: "(data) => data.PreviousDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
ScaleUpCooldownSeconds: 180
ScaleDownCoolDownSeconds: 3600
DimensionValueMax: "800"
DimensionValueMin: "50"
TimeWindow:
Days: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
Months: All
StartTime: "06:00"
EndTime: "17:00"
TimeZone: Romance Standard Time
ScalingRules:
# Warm up things for office hours
minimum_office_hours:
ScalingStrategy: Fixed
Dimension: Dtu
ScaleTarget: "(data) => (50).ToString()"
# Always have 100 GB or 25% extra space, whichever is greater.
fixed:
ScalingStrategy: Fixed
Dimension: MaxDataBytes
ScaleTarget: "(data) => (Math.Max(data.Metrics[\"storage_used\"].Values.First().Average.Value + (100.1*1024*1024*1024), data.Metrics[\"storage_used\"].Values.First().Average.Value * 1.25)).ToString()"
| header 1 | header 2 |
| --- | --- |
| cell 1 | cell 2 |
| cell 3 | cell 4 |
In my case, it occurred only due to the wrong .jks file selection. After making sure the accurate key was selected, the issue disappeared, and the build was successful.
I am also working with a language that naturally requires Unicode, and the easiest and surest fix is as follows:
1. Download a TrueType font with Unicode support; an example is OpenDyslexic-Regular (there is a GitHub repository for it).
2. Either run Python from the directory where you downloaded the .ttf, or give the complete path.
3.
pdf = FPDF()
pdf.add_page()
pdf.add_font("OpenDyslexic-Regular", "", "./OpenDyslexic-Regular.ttf", uni=True)
pdf.set_font("OpenDyslexic-Regular", "",8)
pdf.multi_cell(0, 10, txt="çiöüşp alekrjgnnselrjgnaej")
pdf.output("##.pdf" )
from fpdf import FPDF
from datetime import datetime
# Current date
today = datetime.today().strftime("%d/%m/%Y")
# Letter content
letter_content = f"""
āĻĒā§āϰāĻžāĻĒāĻ:
āĻŽāϰāĻšā§āĻŽ āĻŽā§āĻ āĻāĻŽāĻŋāύā§āϰ āĻāϏāϞāĻžāĻŽ-āĻāϰ āĻĒāϰāĻŋāĻŦāĻžāϰ/āĻāϤā§āϤāϰāĻžāϧāĻŋāĻāĻžāϰā§āĻāĻŖ
āĻā§āϰāĻžāĻŽ: āĻĒāĻžāĻāĻāĻāĻžāĻā§, āĻāĻĒāĻā§āϞāĻž: āĻā§ā§āĻŋāĻā§āϰāĻžāĻŽ āϏāĻĻāϰ, āĻā§āϞāĻž: āĻā§ā§āĻŋāĻā§āϰāĻžāĻŽāĨ¤
āĻŽā§āĻŦāĻžāĻāϞ: 01712398384
āĻŦāĻŋāώā§: āϧāĻžāϰ āĻĒāϰāĻŋāĻļā§āϧ āϏāĻāĻā§āϰāĻžāύā§āϤ
āĻŽāĻžāύā§āϝāĻŦāϰ,
āĻāĻŽāĻŋ āĻŽā§āĻ āĻāϞāĻāĻŽāĻŋāύ āϏāϰāĻāĻžāϰ, āĻĒāĻŋāϤāĻž: āĻŽā§āϤ- āĻāĻŦā§āϞ āĻšā§āϏā§āύ, āĻā§āϰāĻžāĻŽ: āĻĒāϞāĻžāĻļāĻŦāĻžā§ā§, āĻĄāĻžāĻāĻāϰ: āĻāϞāĻŋāϞāĻāĻā§āĻ, āĻāĻĒāĻā§āϞāĻž: āĻā§ā§āĻŋāĻā§āϰāĻžāĻŽ āϏāĻĻāϰ, āĻā§āϞāĻž: āĻā§ā§āĻŋāĻā§āϰāĻžāĻŽāĨ¤ ⧍ā§Ļā§§ā§Ŧ āϏāĻžāϞ āĻĨā§āĻā§ āĻŽāϰāĻšā§āĻŽ āĻŽā§āĻ āĻāĻŽāĻŋāύā§āϰ āĻāϏāϞāĻžāĻŽ āĻāϰ āϏāĻā§āĻā§ āϏā§āϏāĻŽā§āĻĒāϰā§āĻā§ āĻāĻŋāϞāĻžāĻŽāĨ¤ āĻāĻŽāĻžāĻĻā§āϰ āĻŦā§āϝāĻā§āϤāĻŋāĻāϤ āϏāĻŽā§āĻĒāϰā§āĻā§āϰ āĻāĻŋāϤā§āϤāĻŋāϤā§, āϤāĻŋāύāĻŋ āĻāĻŽāĻžāϰ āύāĻŋāĻāĻ āĻŽā§āĻ āĻāĻžāϰ āϧāĻžāĻĒā§ ā§Ģā§Ž,ā§Ļā§Ļā§Ļ/- āĻāĻžāĻāĻž āϧāĻžāϰ āĻā§āϰāĻšāĻŖ āĻāϰā§āύāĨ¤ āύāĻŋāĻā§ āϧāĻžāϰ āύā§āĻā§āĻžāϰ āϤāĻžāϰāĻŋāĻ āĻ āĻĒāϰāĻŋāĻŽāĻžāĻŖ āĻāϞā§āϞā§āĻ āĻāϰāĻž āĻšāϞā§:
ā§§āĨ¤ [āϤāĻžāϰāĻŋāĻ] - [āĻāĻžāĻāĻžāϰ āĻĒāϰāĻŋāĻŽāĻžāĻŖ]
⧍āĨ¤ [āϤāĻžāϰāĻŋāĻ] - [āĻāĻžāĻāĻžāϰ āĻĒāϰāĻŋāĻŽāĻžāĻŖ]
ā§ŠāĨ¤ [āϤāĻžāϰāĻŋāĻ] - [āĻāĻžāĻāĻžāϰ āĻĒāϰāĻŋāĻŽāĻžāĻŖ]
ā§ĒāĨ¤ [āϤāĻžāϰāĻŋāĻ] - [āĻāĻžāĻāĻžāϰ āĻĒāϰāĻŋāĻŽāĻžāĻŖ]
āĻāĻ āϞā§āύāĻĻā§āύāĻā§āϞ⧠āĻāĻŽāĻžāϰ āĻŦā§āϝāĻā§āϤāĻŋāĻāϤ āύā§āĻāĻŦā§āĻā§ āϞāĻŋāĻāĻŋāϤ āϰā§ā§āĻā§ āĻāĻŦāĻ āĻāĻāĻāĻŋ āĻŦāĻž āĻāĻāĻžāϧāĻŋāĻ āϞā§āύāĻĻā§āύā§āϰ āϏāĻŽā§ āĻāĻāĻāύ āϏāĻžāĻā§āώ⧠āĻāĻĒāϏā§āĻĨāĻŋāϤ āĻāĻŋāϞā§āύāĨ¤
āĻŽāϰāĻšā§āĻŽā§āϰ āĻšāĻ āĻžā§ āĻŽā§āϤā§āϝā§āϤ⧠āĻāĻŽāĻŋ āĻāĻā§āϰāĻāĻžāĻŦā§ āĻļā§āĻāĻžāĻšāϤ, āĻāĻŋāύā§āϤ⧠āĻāĻ āĻāϰā§āĻĨāĻŋāĻ āĻŦāĻŋāώā§āĻāĻŋ āύāĻŋā§ā§ āĻāĻŽāĻŋ āĻŦāĻŋāĻĒāĻžāĻā§ āĻĒā§ā§āĻāĻŋāĨ¤
āĻāĻĒāύāĻžāĻĻā§āϰ āĻāĻžāĻā§ āĻŦāĻŋāύā§āϤ āĻ āύā§āϰā§āϧ, āĻŽāϰāĻšā§āĻŽā§āϰ āϏāĻŽā§āĻĒāϤā§āϤāĻŋāϰ āĻāϤā§āϤāϰāĻžāϧāĻŋāĻāĻžāϰ⧠āĻšāĻŋāϏā§āĻŦā§ āĻāĻ āĻĻā§āύāĻžāϰ āĻŦāĻŋāώā§āĻāĻŋ āĻŦāĻŋāĻŦā§āĻāύāĻžā§ āĻāύ⧠āϤāĻž āĻĒāϰāĻŋāĻļā§āϧā§āϰ āĻŦā§āϝāĻŦāϏā§āĻĨāĻž āĻā§āϰāĻšāĻŖ āĻāϰāĻŦā§āύāĨ¤
āĻāĻĒāύāĻžāĻĻā§āϰ āϏāĻĻā§ āϏāĻšāϝā§āĻāĻŋāϤāĻž āĻĒā§āϰāϤā§āϝāĻžāĻļāĻž āĻāϰāĻāĻŋāĨ¤
āĻāϤāĻŋ,
āĻŽā§āĻ āĻāϞāĻāĻŽāĻŋāύ āϏāϰāĻāĻžāϰ
āĻŽā§āĻŦāĻžāĻāϞ: 01740618771
āϤāĻžāϰāĻŋāĻ: {today}
Ah, I feel your frustration: industrial cameras can definitely be tricky to get working with libraries like EmguCV, especially when they rely on special SDKs or drivers. Let's break it down and see how we can get things moving.
EmguCV (just like OpenCV, which it's based on) uses standard interfaces (like DirectShow on Windows, or V4L on Linux) to access cameras. So if your industrial camera requires a proprietary SDK, or doesn't expose a DirectShow interface, then EmguCV won't be able to see or use it via the usual Capture or VideoCapture class.
Does your camera show up in regular webcam apps?
If it doesn't show up in apps like Windows Camera or OBS, then it's not available via DirectShow, meaning EmguCV can't access it natively.
Check the EmguCV camera index or path:
If the camera does appear in regular apps, you can try:
var capture = new VideoCapture(0); // Try index 1, 2, etc.
But again, if your camera uses its own SDK (like Basler's Pylon, IDS, Daheng SDK, etc.), this won't work.
Most industrial cameras provide their own .NET-compatible SDKs. Use that SDK to grab frames, then feed those images into EmguCV like so:
// Assume you get a Bitmap or raw buffer from the SDK
Bitmap bitmap = GetFrameFromCameraSDK();
// Convert to an EmguCV Mat
Mat mat = bitmap.ToMat(); // or use CvInvoke.Imread if saving to disk temporarily
// Now use EmguCV functions on mat
You'll basically use the vendor SDK to acquire, and EmguCV to process.
If you're feeling ambitious and want to keep using EmguCV's patterns, you could extend Capture or create a custom class to wrap your camera SDK, but that's quite involved.
EmguCV doesn't natively support cameras that require special SDKs.
Most industrial cameras do require their own SDKs to function.
Your best bet: use the SDK to get frames, then convert those into Mat or Image<Bgr, Byte> for processing.
Is there a specific need to use the "product" Model? Maybe an abstract Model would do the trick in your case?
class products(models.Model):  # COMMON base
    product_name = models.CharField(max_length=100)
    product_id = models.PositiveSmallIntegerField()
    product_desc = models.CharField(max_length=512)
    # ... other shared fields and functions
    class Meta:
        abstract = True

class shirt(products):
    class Size(models.IntegerChoices):
        S = 1, "SMALL"
        M = 2, "MEDIUM"
        L = 3, "LARGE"
        # (...)
    size = models.PositiveSmallIntegerField(
        choices=Size.choices,
        default=Size.S,
    )
    product_type = "shirt"
Mermaid.ink has been known to timeout when rendering graphs with very short node names like "A", particularly in Jupyter notebooks using langgraph. This appears to be a bug or parsing edge case in Mermaid.inkâs backend. Longer node names such as "start_node" or "chatbot" tend to work reliably and avoid the issue. Interestingly, the same Mermaid code usually renders fine in the Mermaid Live Editor, suggesting the problem is specific to Mermaid.inkâs API or langgraphâs integration. Workarounds include using longer node names, switching to Pyppeteer for local rendering, or running a local Mermaid server via Docker.
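For instance, a hedged sketch of the local-rendering workaround (it assumes a compiled langgraph graph object named graph and that pyppeteer is installed):

from langchain_core.runnables.graph import MermaidDrawMethod

# Render the diagram locally with Pyppeteer instead of calling the mermaid.ink API
png_bytes = graph.get_graph().draw_mermaid_png(
    draw_method=MermaidDrawMethod.PYPPETEER,
)
with open("graph.png", "wb") as f:
    f.write(png_bytes)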
Boombastick,
I know it's been a while since you asked, but I recently discovered that there are some unanswered questions on Stack Overflow regarding the xeokit SDK. So, just in case it might still be relevant and useful for others too, we always recommend the following steps when trying to tackle an issue:
In your particular case, it would be useful if you could reproduce the bug with one of the SDK or BIM Viewer examples from https://xeokit.github.io/xeokit-sdk/examples/index.html and then post it on the GitHub Issues. There is usually someone there to take care of bugs.
Use @ValidateIf and handle the logic inside it: instead of layering multiple @ValidateIfs and validators, consolidate the validation with a single @ValidateIf() for each conditional branch.
@ValidateIf((o) => o.transactionType === TransactionType.PAYMENT)
@IsNotEmpty({ message: 'Received amount is required for PAYMENT transactions' })
@IsNumber()
receivedAmount?: number;
@ValidateIf((o) => o.transactionType === TransactionType.SALE && o.receivedAmount !== undefined)
@IsEmpty({ message: 'Received amount should not be provided for SALE transactions' })
receivedAmount?: number;
Alternatively, create a custom validator for receivedAmount:
import {
  registerDecorator,
  ValidationOptions,
  ValidationArguments,
} from 'class-validator';

export function IsValidReceivedAmount(validationOptions?: ValidationOptions) {
  return function (object: any, propertyName: string) {
    registerDecorator({
      name: 'isValidReceivedAmount',
      target: object.constructor,
      propertyName: propertyName,
      options: validationOptions,
      validator: {
        validate(value: any, args: ValidationArguments) {
          const obj = args.object as any;
          if (obj.transactionType === 'PAYMENT') {
            return typeof value === 'number' && value !== null;
          } else if (obj.transactionType === 'SALE') {
            return value === undefined || value === null;
          }
          return true;
        },
        defaultMessage(args: ValidationArguments) {
          const obj = args.object as any;
          if (obj.transactionType === 'SALE') {
            return 'receivedAmount should not be provided for SALE transactions';
          }
          if (obj.transactionType === 'PAYMENT') {
            return 'receivedAmount is required for PAYMENT transactions';
          }
          return 'Invalid value for receivedAmount';
        },
      },
    });
  };
}
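Usage could then look roughly like this (the DTO below is a hypothetical example, not taken from the question):

// Apply the custom decorator on the DTO property
class CreateTransactionDto {
  transactionType: string; // e.g. 'PAYMENT' or 'SALE'

  @IsValidReceivedAmount()
  receivedAmount?: number;
}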
Very late to the game, but I have a nice workaround:
Add a container element into the rack, and then add your smaller equipment into it.
This way it will work with other rack elements, but won't be resized.
It seems the answer is to leave out the cross compilation arguments.
export CFLAGS="-arch x86_64"
./configure --enable-shared
Configure gets confused about cross compilation on macOS, because when it tries to execute a cross compiled program, the program does not fail (thanks to Rosetta, I presume).
You should always do your due diligence when adding a new package to your codebase; at the end of the day, it is third-party code.
I think your main worry is your credentials being exposed. This package in particular seems to be popular enough to be battle tested and trusted by a good chunk of the community.
I think you'll be fine. Just remember to keep your credentials a secret and that means not adding them to version control. Use env variables or any of the other methods listed here to set your credentials.
Add this to your functions.php:
function ninja_table_por_post_id() {
$post_id = get_the_ID();
$shortcode = '[ninja_tables id="446" search=0 filter="' . $post_id . '" filter_column="Filter4" columns="name,address,city,website,facebook"]';
return do_shortcode($shortcode);
}
add_shortcode('tabla_post_actual', 'ninja_table_por_post_id');
This code creates a new shortcode called [tabla_post_actual], which, when inserted into any WordPress template or content, displays the table filtered by the ID of the current post.
Usage:
<?php echo do_shortcode('[tabla_post_actual]'); ?>
For all modern versions of the sdk, it's just dotnet fsi myfile.fsx
Based on the available information, Babu89BD appears to be a web-based platform, likely associated with online gaming or betting services. However, the site https://babu89bd.app provides very limited public information about what the app actually does, how to use it, or whether it's secure and legitimate.
If you're trying to figure out its purpose:
It seems to require login access before showing any details, which may be a red flag.
The site doesn't list a privacy policy, terms of service, or contact information, which are important factors for trust and transparency.
The design and naming resemble other platforms often used for online gambling, especially popular in South Asia.
Caution is advised if you're unsure about the legitimacy. Avoid entering personal or financial information until you can verify its credibility.
If anyone has used this app and can confirm its features or authenticity, please share your insights.
From what I could find on the JetBrains website, you can disable it by including an empty file named .noai in IntelliJ.
This worked for me, so it should hopefully work for you as well.
Just run the command:
composer remove laravel/jetstream
I am not sure where these anchor boxes are coming from.
Are they defined during the model training process?
I am doing custom object detection using mediapipe model_maker, but it creates a model that has two outputs, of shape [1 27621 4] and [1 27621 3].
I am totally confused about what is going on and how I can get the four outputs I want: locations, classes, scores, and detections.
Following is my current code; please help me understand what's going on and how to obtain the desired outputs.
# Set up the model
quantization_config = quantization.QuantizationConfig.for_float16()
spec = object_detector.SupportedModels.MOBILENET_MULTI_AVG
hparams = object_detector.HParams(export_dir='exported_model', epochs=30)
options = object_detector.ObjectDetectorOptions(
supported_model=spec,
hparams=hparams
)
# Run retraining
model = object_detector.ObjectDetector.create(
train_data=train_data,
validation_data=validation_data,
options=options)
# Evaluate the model
loss, coco_metrics = model.evaluate(validation_data, batch_size=4)
print(f"Validation loss: {loss}")
print(f"Validation coco metrics: {coco_metrics}")
# Save the model
model.export_model(model_name=regular_output_model_name)
model.export_model(model_name=fp16_output_model_name, quantization_config=quantization_config)
Using %d with sscanf can cause a problem, because %d can expect 2 bytes or 4 bytes.
In earlier days %d used to be 2 bytes, but in more modern environments %d became 4 bytes. To be certain it reads 2 bytes, replace %d with %hu, %hd, or %hi.
Came back to this question years later to offer an update.
Laravel 12 has a new feature called Automatic Eager Loading, which fixed this issue of eager loading in recursive relationships for me.
https://laravel.com/docs/12.x/eloquent-relationships#automatic-eager-loading
The command to install the stable version of PyTorch (2.7.0) with CUDA 12.8 using pip on Linux is:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
Store tenant info somewhere dynamic: instead of putting all your tenant info (like issuer and audience) in appsettings.json, store it in a database or some other place that can be updated while the app is running. This way, when a new tenant is added, you don't need to restart the app.
Figure out which tenant is making the request: when a request comes in, determine which tenant it belongs to. You can do this by:
- Checking a custom header (e.g., X-Tenant-Id)
- Looking at the domain they're using
- Or even grabbing the tenant ID from a claim inside the JWT token
Validate the token dynamically: use JwtBearerEvents to customize how tokens are validated. This lets you check the tenant info on the fly for each request. Here's how it works:
- When a request comes in, grab the tenant ID
- Look up the tenant's settings (issuer, audience, etc.) from your database or wherever you're storing them
- Validate the token using those settings (a rough sketch follows below)
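Here is a hedged sketch of that flow using JwtBearerEvents inside Program.cs; the X-Tenant-Id header name and the ITenantStore service are assumptions for illustration, not a definitive implementation:

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Events = new JwtBearerEvents
        {
            OnTokenValidated = async context =>
            {
                // Resolve the tenant from a custom header (hypothetical header name)
                var tenantId = context.HttpContext.Request.Headers["X-Tenant-Id"].ToString();

                // ITenantStore is a hypothetical service backed by your tenant database
                var store = context.HttpContext.RequestServices.GetRequiredService<ITenantStore>();
                var tenant = await store.FindAsync(tenantId);

                // Compare the token's issuer against the tenant's configured issuer
                var issuer = context.Principal?.FindFirst("iss")?.Value;
                if (tenant is null || issuer != tenant.Issuer)
                {
                    context.Fail("Token issuer does not match the requesting tenant.");
                }
            }
        };
    });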
This could be helpful: https://github.com/mikhailpetrusheuski/multi-tenant-keycloak and this blog post: https://medium.com/@mikhail.petrusheuski/multi-tenant-net-applications-with-keycloak-realms-my-hands-on-approach-e58e7e28e6a3
Shoutout to Mikhail Petrusheuski for the source code and detailed explanation!
Not sure if anyone is still monitoring this thread, but better late than never. We have launched a new unified GitOps controller for ECS (EC2 and Fargate) and Lambda; EKS is also coming soon. Check it out, we would love to engage on this: https://gitmoxi.io
I had the same problem and resolved it by adding .python between tensorflow and keras. So instead of tensorflow.keras, I wrote: tensorflow.python.keras
Adding GeneratedPluginRegistrant.registerWith(flutterEngine) to MainActivity.kt did work for me.
import io.flutter.plugins.GeneratedPluginRegistrant
//...
override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
super.configureFlutterEngine(flutterEngine)
GeneratedPluginRegistrant.registerWith(flutterEngine);
configureChannels(flutterEngine)
}
Source:
https://github.com/firebase/flutterfire/issues/9113#issuecomment-1188429009
I ignored this in my case, since using async/await caused a flash or delay before the page loads.
If you change the mocking method and then cast, you can avoid the ignore comment:
jest.spyOn(Auth, 'currentSession').mockReturnValue({
getIdToken: () => ({
getJwtToken: () => 'mock',
}),
} as unknown as Promise<CognitoUserSession>)
You're stuck in a loop because Google Maps APIs like Geocoding don't support INR (Indian Rupees) billing accounts.
Even if you're not in India, Google might still block the API if your billing account uses INR.
You need to manually create a new billing account using the Google Billing console, and specifically make sure:
The country is set to the U.S. (or another supported one)
The currency is set to USD
It's not created via the Maps API "Setup" flow, because that usually defaults to your local region/currency (e.g., INR)
Then create a new project and link this specific USD billing account manually. After linking the billing account, enable the Geocoding API within that project.
If the issue still persists, please share your setup in the billing account using the Google Billing console and configurations.
How do we get the UserID? I am trying to retrieve the user descriptor and pass it in the approval payload's request body; it returns an ID like aad.JGUENDN......., but when I try to construct the approvers payload it returns an invalid identities error.
I had the same issue. I think the two installed package versions were not compatible with each other.
I ran npm install react-day-picker@latest and it's fixed.
Hope it helps.
In the project explorer, click the 3-dot settings button and go to Behaviour -> Always Select Opened File.
Hi @Alvin Jhao, I've implemented the monitor pipeline as advised, using a Bash-based approach instead of PowerShell. The pipeline does the following:
- Fetches the build timeline using the Azure DevOps REST API
- Identifies stages that are failed, partiallySucceeded, or succeededWithIssues
- Constructs a retry payload for each stage and sends a PATCH request to retry it
- Verifies correct stage-to-job relationships via the timeline structure
Here's where I'm stuck now:
Although my pipeline correctly:
Authenticates with $(System.AccessToken)
Targets the correct stageId and buildId
Sends the payload:
`
{
"state": "retry",
"forceRetryAllJobs": true,
"retryDependencies": true
}
`
I consistently receive: ` Retry failed for: <StageName> (HTTP 204) `
Oddly, this used to work for stages like PublishArtifact in earlier runs, but no longer does, even with identical logic.
I have already verified the following:
- The service connection has Queue builds permission (confirmed in Project Settings)
- Target builds are fully completed
- Timeline output shows the stage identifier is present and correct
- The payload matches Microsoft's REST API spec
- Even test pipelines with result: failed stages return 204
Are there specific reasons a stage becomes non-retryable (beyond completion)?
Could the stage identifier fields being null (sometimes seen) block the retry?
Is there a way to programmatically verify retry eligibility before sending the PATCH?
Any help or insights would be appreciated!
Open Xcode, select the Pods target, then delete React-Core-React-Core_privacy and RCT-Folly-RCT-Folly_privacy, and try again; that should fix it.
I had the same issue on a gradle project and I was able to resolve it by following the instructions given in this link: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-project-gradle.html
I faced the same issue too; the problem is that the file is not saved. Try turning on autosave or use Ctrl+S.
Could you please share how you ended up resolving this? I am having the same problem 7 years later.
The NestJS default for WebSocket connections is Socket.io, so if you want to connect to your WebSocket server, your client must use Socket.io as well. If you try to connect with a plain WebSocket, you will get this error:
Error: socket hang up
Delete the node_modules folder, upgrade Node to the latest version, and run npm install again.
Is it possible to create a field (control?) to search in the logs from within the dashboard?
how does Ambari manage components such as Apache Hadoop, Hive, and Spark?
Ambari now uses Apache Bigtop: https://bigtop.apache.org/
Bigtop is similar to HDP.
Can Ambari directly manage existing Hadoop clusters?
How do I get Ambari to manage and monitor my open source cluster? I already have data on my current Hadoop cluster and don't want to rebuild a new cluster.
Ambari can do this, but it's not an easy process. Much easier to deploy a Bigtop cluster from scratch using Ambari.
Using Ambari on top of an existing cluster requires creating Ambari Blueprints to post cluster info and configurations to Ambari Server. Some details here: https://www.adaltas.com/en/2018/01/17/ambari-how-to-blueprint/
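As a rough sketch of the Blueprints flow (hedged; hostnames, credentials, and file names below are placeholders, and the JSON payloads must describe your actual cluster):

# Register a blueprint describing the existing cluster layout
curl -u admin:admin -H "X-Requested-By: ambari" \
     -X POST -d @blueprint.json \
     http://ambari-server:8080/api/v1/blueprints/my-blueprint

# Create/map the cluster from that blueprint plus a host-mapping template
curl -u admin:admin -H "X-Requested-By: ambari" \
     -X POST -d @cluster-template.json \
     http://ambari-server:8080/api/v1/clusters/my-cluster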
In case you are using other functions like store, storePublicly, etc.
$cover->store($coverAsset->dir, 'public');
$cover->storePublicly($coverAsset->dir, 'public');
Look for my implementation of FlowHStack (fully backported to iOS 13): github.com/c-villain/Flow, a video demo is here: t.me/swiftui_dev/289
class Vision: Detector {
typealias ViewType = Text
func check(mark: some Mark) -> ViewType {
Text("Vision")
}
}
The answer above is correct. For more details, see https://tailwindcss.com/docs/dark-mode.
Wo mans use 48:Notes this is the soluction
Chrome no longer shows this information directly, so you'll have to use a more complicated method, but one that requires no external tools:
- Enter this address in your URL bar: chrome://net-export/ (edge://net-export/ works the same way in Edge)
- Click on Start Logging to Disk, and choose a temporary location for the log file that will be generated
- In a separate window or tab, load the URL of a site that you want to be able to access using the proxy settings,
https://stackoverflow.com/ for example
- Go back to the first window/tab and click on Stop Logging
- Click on Show File
- The export file is shown and selected. Open it with a text editor
- Search for the string "proxy_info":"PROXY inside this file
- The content of the line will show you the proxy parameters you need to use:
{"params":{"proxy_info":"PROXY proxy1.xxxx.xxx:8080;PROXY proxy2.xxxx.xxx:8080"},"phase":0,"source":{"id":2006431,"start_time":"1018634551","type":30},"time":"1018634572","type":32},
- In this example, there are two proxies available: one with name proxy1.xxxx.xxx and port 8080, and the other with name proxy2.xxxx.xxx, also on port 8080.
You need to use the v5 version; it is supported there.
I have thoughts on this that exceeded the character limit for SO, so I posted them on Dev.to (I'm a completely independent blogger, totally unaffiliated).
GOTO is, IMHO, a very slept-on keyword, and I use it heavily when doing row-by-agonizing-row (RBAR) operations in T-SQL. Stack Overflow might not be the place for long-form answers, but given how nuanced the differences between SQL dialects are, where T-SQL lives in that spectrum, and what to do about it, I was left deciding between being concise or being complete. I went with complete, due largely to the fact that the OP and others coming here might want something thoughtful.
Why T-SQL's Most Misunderstood Keyword is Actually Its Safest ~ David Midlo (me/Hunnicuttt)
As I found out, brace expansion happens before variable expansion; you can accomplish the same objective by replacing for i in {${octet[0]:-0}..255} with for ((i=${octet[0]:-0}; i<=255; i++)) (and doing the same for j, k and l) - credit to @markp-fuso
Ok. I thought this only applied to POST php/cgi, but apparently it has to do with allowing anyone anywhere to have access to the script. I had to add this to the php script:
header('Access-Control-Allow-Origin: *');
Or you can try ThreadPoolExecutor:
from concurrent.futures import ThreadPoolExecutor
Since it has the same API, you don't need to change anything, and it runs your code in the same process, so there is no need for a serializer.
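A minimal sketch (the worker function and inputs are placeholders):

from concurrent.futures import ThreadPoolExecutor

def work(item):
    # placeholder for the real task
    return item * 2

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(work, range(10)))

print(results)  # [0, 2, 4, ..., 18]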
Microsoft is deprecating support for kubenet on AKS on March 31, 2028.
Instructions to migrate to Azure CNI can be found here: https://learn.microsoft.com/en-gb/azure/aks/upgrade-azure-cni#kubenet-cluster-upgrade
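From that doc, the upgrade is roughly the following (a hedged sketch; cluster, resource group, and CIDR values are placeholders, so check the linked page for the current flags):

az aks update \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16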
I couldn't find the "default" renderer but it was so easy to recreate it that it's not worth worrying about. Here's the updated code:
headerName: 'My Group',
field: 'id',
cellRendererParams: {
suppressCount: true,
innerRenderer: params => {
if (!params.node.group) {
// Custom rendering logic for individual rows
return <GroupColumnCellRenderer {...params} />;
}
return `${params.node.key} (${params.node.allChildrenCount})`;
},
},
};
Lifesaver! This command just saved me from hours of pain.
I might have found a fix, but without more context on how this is set up I can't be fully confident that it will help.
I found a GitHub post where they seem to have the same issue and got a fix,
and there is also this video.
Try using $form->setEntity($entity) instead of setModel().
Your schema.prisma has the output path set, like this:
generator client {
provider = "prisma-client-js"
output = "../lib/generated/prisma"
}
Remove the output = "../lib/generated/prisma" line:
generator client {
provider = "prisma-client-js"
}
As of May 2023, -metadata rotate has been deprecated.
Use instead:
ffmpeg -display_rotation <rotation_degrees> -i <input_file> -c copy <output_file>
(This of course does not cover all possible options etc.)
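For example, to tag a video as rotated by 90 degrees without re-encoding (filenames are placeholders):

ffmpeg -display_rotation 90 -i input.mp4 -c copy output_rotated.mp4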
I had two problems:
Duplicate Gson configuration (code + yml)... this fixed the Map name.
The keys of the map were used as-is, because Gson applies its naming policy only to fields.
My solution was to copy the code that formats the field names and use it before inserting into the Map.
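A hedged sketch of that idea: apply the same naming convention to the map keys manually before inserting them (the helper below mimics LOWER_CASE_WITH_UNDERSCORES; adjust it to whatever policy you configured):

import java.util.LinkedHashMap;
import java.util.Map;

public class MapKeyNaming {
    // Convert camelCase keys to snake_case, mirroring Gson's field naming policy
    static String toSnakeCase(String key) {
        return key.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        Map<String, Object> raw = Map.of("firstName", "Ada", "birthYear", 1815);
        Map<String, Object> formatted = new LinkedHashMap<>();
        raw.forEach((k, v) -> formatted.put(toSnakeCase(k), v));
        System.out.println(formatted); // e.g. {first_name=Ada, birth_year=1815}
    }
}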
Leaving this here: if anyone needs to describe network rules specifically, you will need USAGE on the schema where the network rule lives, and OWNERSHIP of the network rule.
I encountered the same error in Visual Studio 2022, and updating Entity Framework to version 6.5.1 resolved the issue.
Create an XML document which contains details of cars, like: id, company name, model, engine and mileage, and display it as a table by using XSLT.
Had the same issue, removing org.slf4j.slf4j-simple from the dependencies solved the issue.
Absolutely agree. I have the same problem with Grails 5.x.
Furthermore, there are no examples available of how to customize scaffolding to get the result needed,
and documentation or sources for the fields taglib are also not available. Really sad.
A really good product killed by too many features...
You can use AIRegex in AIUtil.FindText / AIUtil.FindTextBlock.
However, a UFT version from at least 2023 is required.
Set regex = AIRegex("some text (.*)")
AIUtil.FindTextBlock(regex).CheckExists True
You have to use the services to get a real point of view. I've deployed applications in AWS. The administration focus when using R53 is way greater than when using RDS.
just checking in to see if this issue has been resolved. I'm currently encountering the same problem. Thank you!
I'm looking for pretty much the same question. Want one entire task-group to finish before it starts the next parallel task-group (3 parallel task groups at a time). Were you able to find a good solution to this?
After time spent on this, I want to share my findings. Maybe those will be useful.
I was able to run w1-gpio kernel module on STM32MP135F-DK board, by using this simple patch to device tree:
#########################################################################
# Enable w1-gpio kernel module on PF10 GPIO
#########################################################################
diff --git a/stm32mp135f-dk.dts.original b/stm32mp135f-dk.dts
index 0ff8a08..d1ee9ba 100644
--- a/arch/arm/boot/dts/st/stm32mp135f-dk.dts
+++ b/arch/arm/boot/dts/st/stm32mp135f-dk.dts
@@ -152,6 +152,12 @@
compatible = "mmc-pwrseq-simple";
reset-gpios = <&mcp23017 11 GPIO_ACTIVE_LOW>;
};
+
+ onewire: onewire@0 {
+ compatible = "w1-gpio";
+ gpios = <&gpiof 10 GPIO_OPEN_DRAIN>; // PF10
+ status = "okay";
+ };
};
&adc_1 {
When using Yocto and meta-st-stm32 layer, to apply the patch, simply add it to SRC_URI in linux-stm32mp_%.bbappend file.
Enabling certain kernel modules is also required, I have done that by creating w1.config file:
CONFIG_W1=m # 1-Wire core
CONFIG_W1_MASTER_GPIO=m # GPIO-based master
CONFIG_W1_SLAVE_THERM=m # Support for DS18B20
CONFIG_W1_SLAVE_DS28E17=m
In linux-stm32mp_%.bbappend, this w1.config should be added as:
KERNEL_CONFIG_FRAGMENTS:append = "${WORKDIR}/w1.config"
This should be enough to run w1-gpio and read temperatures from the DS18B20 sensor (a quick sysfs check is shown below).
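For reference, once the modules are loaded, a DS18B20 reading can be checked from userspace roughly like this (the 28-* device ID depends on your sensor):

# List detected 1-Wire slaves
ls /sys/bus/w1/devices/
# Read the temperature from a DS18B20 (family code 28)
cat /sys/bus/w1/devices/28-*/w1_slave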
Later on I was able to modify the w1-gpio module to support my custom slaves. I add those slaves manually (via sysfs), all under a non-standard family code. When the w1 core has a slave with a family code that is not supported by any dedicated library, one can use the sysfs file called rw to read/write to that slave. It works with my slaves, although there are a lot of stability problems. I use a C program to read/write to that rw file, but nearly half of the read operations fail, because the master loses timing for some microseconds. I think it's due to some CPU interrupts coming in. I am thinking about using the kernel connector instead of the rw sysfs file, like described here.
I followed @mattrick's example of using an IntersectionObserver, giving a bound on the rootMargin and attaching it to the physical header. I am just answering to add additional information, since @mattrick didn't provide an example.
IntersectionObserver emits an IntersectionObserverEntry when triggered, which has an isIntersecting property that indicates whether or not the actual header is intersecting the viewport or the element.
In this case:
Note that my implementation uses Tailwind and TypeScript, but it can be written in plain CSS and JS.
<!doctype html>
<html>
<head></head>
<body class="flex flex-col min-h-screen">
<header id="header" class="banner flex flex-row mb-4 p-4 sticky top-0 z-50 w-full bg-white"></header>
<main id="main" class="main flex-grow"></main>
<footer class="content-info p-4 bg-linear-footer bottom-0 mt-4"></footer>
</body>
</html>
Note: The <header> requires an id of "header" for the JS to reference the element.
export class Header {
  static checkSticky() {
    const header = document.getElementById("header");
    if (header == null) {
      return; // Abort
    }
    const observer = new IntersectionObserver(
      ([entry]) => this._handleStickyChange(entry, header),
      {
        rootMargin: '-1px 0px 0px 0px',
        threshold: [1],
      }
    );
    observer.observe(header);
  }

  static _handleStickyChange(entry: IntersectionObserverEntry, header: HTMLElement) {
    if (!entry.isIntersecting) {
      header.classList.add("your-class");
      return; // Abort further execution
    }
    header.classList.remove("your-class");
  }
}
Call Header.checkSticky() when the DOM is ready to start observing the header. The observer will trigger _handleStickyChange() reactively based on whether the header is intersecting the viewport.
This allows you to add visual effects (e.g., shadows, background changes) or trigger callbacks when the header becomes sticky.
Thanks @mattrick for your initial contribution.
oldlist = ["Peter", "Paul", "Mary"]
newlist = list(map(str.upper, oldlist))
print(newlist)
['PETER', 'PAUL', 'MARY']
The solution is described in this thread:
https://github.com/expressive-code/expressive-code/issues/330
Duplicate of Intel HAXM is required to run this AVD - Your CPU does not support VT-x
This issue has already been addressed in the post linked above. The error typically occurs when:
Your CPU does not support Intel VT-x / AMD-V, or
VT-x is disabled in the BIOS/UEFI settings.
Let me give some insights into each one of your questions:
Currently, there is no built-in configuration in Datastream or BigQuery to selectively prevent DELETE or TRUNCATE operations from being replicated.
Yes, you have more control on your data transformation when you use the Dataflow pipeline into BigQuery. Feel free to browse this document for more information.
Besides Dataflow, another Google Cloud-native solution could involve using Cloud Functions triggered by Pub/Sub messages from Datastream. The Cloud Function would filter out DELETE/TRUNCATE operations and then write the remaining data to BigQuery. However, for high-volume data, Dataflow is generally more scalable and recommended.
Example tested with UID.
The thing is, you should export the UID variable and then it works:
export UID=${UID}
Put user: "${UID}" in your docker-compose file, then run:
docker compose up
...
profit
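For context, a minimal compose sketch (the service name and image are placeholders; Compose substitutes ${UID} from the exported shell environment):

services:
  app:
    image: alpine:3.19
    # Run the container process as the exported host UID
    user: "${UID}"
    command: id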
The renaming is applied only to the topics; the consumer group names remain the same regardless of the replication policy. When syncing the offsets, the topics are renamed according to the policy as well, but the group is not.
Based on what you've shared, I have two theories about what might be wrong.
(Most likely) Since you didn't provide the full command output from inside the container (i.e. curl vs curl ... | grep ...), I assume that the grep version inside the container works differently than expected. This usually happens with more complex commands (e.g. when using -E), but it is worth checking the full piped pair.
(Less likely) A weird idea, but maybe the YAML itself is not resolved correctly? Try to make it as simple as possible to double-check:
startupProbe:
  exec:
    command: ["sh", "-c", "curl -s -f http://localhost:8080/v1/health | grep -q -e '\"status\":\"healthy\"'"]
If this doesn't work, try to make it verbose and check the Pod logs:
startupProbe:
  exec:
    command:
      - sh
      - -c
      - >
        echo "PROBE DEBUG";
        curl -v http://localhost:8080/v1/health;
        curl http://localhost:8080/v1/health | grep -e '"status":"healthy"';
        echo "$?"
The answer can possibly be found here.
Although this is what solved the issue in my situation:
The strange case of "Data source can't be created" with Reporting Services 2016 in Azure VM | Microsoft Learn
Have you found the answer to this problem?
Based on the suggestions made above, the following worked as required:
program | jq -r '[.mmsi, .rxtime, .speed, .lon, .lat] | @csv'
This also delivered practically the same result:
program | jq -r '[.mmsi, .rxtime, .speed, .lon, .lat] | join(",")'
Thanks for the many contributions.
Answered here.
Now, I am able to resolve the issue by creating another Python function and using Pandas to convert Parquet to JSON data.
This works for replayed builds (not rebuilt builds). The output is the build number of the original build from which your current build was replayed:
def getReplayCauseNumber() {
    // This function is used to access the build number of the build from which a build was replayed
    def cause = currentBuild.rawBuild.getCause(org.jenkinsci.plugins.workflow.cps.replay.ReplayCause)
    if (cause == null) {
        return null
    }
    def originalNum = cause.getOriginalNumber()
    echo "This build was replayed from build #${originalNum}"
    return originalNum
}
}