I did also find I had to quit and restart Xcode after I plugged the phone in
Partition by reference is based on the primary key constraint of the parent table and the foreign key constraint of the child table. The partition key should be a primary key column. In your example you are partitioning by a date column that is not part of the primary key.
Based on what @Grismar said, it sounds like the answer is that the locals() built-in function only shows things defined in the local scope. In other words, if we define Python scoping as LEGB, locals() only displays the "L" part. For what I was trying to do, I need to use the globals() built-in.
As for VS Code, it appears that when using the Python debugger in VS Code that locals() displays more than just the "L" scope. However, I believe pdb is the definitive debugger and that only shows things in the "L" scope.
Finally - is what I'm trying to do a good idea? Maybe, maybe not. In a nutshell: I'm doing cloud-hosted code challenges. The cloud environment defines its own variables (globals) that make sense for it (a Linux-hosted environment). I choose to solve the challenges locally on a Windows environment. My environment is quite a bit different, so I define my own variables that make sense for my environment. I want to do something generic that works in either environment, so I check if my variable is defined: if it is, use it; if not, fall back to the cloud definition. There's probably a better way to do this and I'm open to suggestions.
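For reference, here is a minimal sketch of the pattern I ended up with - the variable names are made up for illustration (CLOUD_DATA_DIR stands in for a platform-defined global, MY_DATA_DIR for my local override):

# Hypothetical names: CLOUD_DATA_DIR stands in for a variable the cloud
# environment predefines; MY_DATA_DIR is my local Windows override.
CLOUD_DATA_DIR = "/workspace/data"   # pretend the platform defined this

# Use my own variable when it exists, otherwise fall back to the platform's:
data_dir = globals().get("MY_DATA_DIR", CLOUD_DATA_DIR)
print(data_dir)                      # "/workspace/data" unless MY_DATA_DIR is set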
Thanks for all the feedback.
So basically, one of the problems was that I was calling Firebase.initializeApp() only in production mode, not in debug mode.
We have to move that function out of the if block, right after WidgetsFlutterBinding.ensureInitialized();.
But the problem persisted. Then I downloaded my project from GitHub into a different, new and clean directory, pasted my code in - and it was working fine?
So basically, two folders: the old one with .git in it, and the new, clean one without git. Both have exactly the same code, but the old one was giving me the same error while the new one was running properly...
I didn't find the cause of this, and moved on.
Here is the proper code, with the Firebase Emulator Suite:
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  if (kIsWeb || Platform.isIOS || Platform.isAndroid) {
    print("Running on Web/iOS/Android - Initializing Firebase...");
    await Firebase.initializeApp(
      options: DefaultFirebaseOptions.currentPlatform,
    );
    if (kDebugMode) {
      try {
        FirebaseFirestore.instance.useFirestoreEmulator('localhost', 8080);
        await FirebaseAuth.instance.useAuthEmulator('localhost', 9099);
        print("Firebase initialized successfully - DEVELOPMENT - for Web/iOS/Android.");
      } catch (e) {
        print(e);
      }
    } else {
      print("Firebase initialized successfully - PRODUCTION - for Web/iOS/Android.");
    }
  } else {
    print("Not running on Web/iOS/Android - Firebase functionality disabled.");
  }
  // ... runApp(...) and the rest of main follow here
}
Without SeDebugPrivilege explicitly granted to your user account or process, it is not possible to enable it programmatically. Even if you manage to obtain a token with SeDebugPrivilege (e.g., through exploitation), the kernel enforces strict access checks that prevent non-admin processes from performing privileged operations.
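To see this behavior concretely, here is a small Windows-only Python/ctypes sketch (an illustration of the point, not part of the original answer): AdjustTokenPrivileges reports success either way, but GetLastError() returns ERROR_NOT_ALL_ASSIGNED (1300) when the token was never granted SeDebugPrivilege in the first place.

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

SE_PRIVILEGE_ENABLED = 0x00000002
TOKEN_ADJUST_PRIVILEGES = 0x0020
TOKEN_QUERY = 0x0008
ERROR_NOT_ALL_ASSIGNED = 1300

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", wintypes.DWORD),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

token = wintypes.HANDLE()
advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                          ctypes.byref(token))

luid = LUID()
advapi32.LookupPrivilegeValueW(None, "SeDebugPrivilege", ctypes.byref(luid))

tp = TOKEN_PRIVILEGES(1, (LUID_AND_ATTRIBUTES * 1)(
    LUID_AND_ATTRIBUTES(luid, SE_PRIVILEGE_ENABLED)))
advapi32.AdjustTokenPrivileges(token, False, ctypes.byref(tp), 0, None, None)

# AdjustTokenPrivileges "succeeds" even when nothing was assigned; the real
# verdict is in GetLastError(): 1300 means the token never held the privilege.
if ctypes.get_last_error() == ERROR_NOT_ALL_ASSIGNED:
    print("SeDebugPrivilege not present in the token - cannot enable it")
else:
    print("SeDebugPrivilege enabled (the token already held it)")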
Thanks a lot @tilman-hausherr and @mkl. I didn't think about filtering the fields and annotations. It took me some time, but I came up with the following version, which works for my test documents. Feel free to give some input/thoughts; hopefully other developers can benefit from it :)
How does it work:
import org.apache.pdfbox.Loader;
import org.apache.pdfbox.cos.COSDictionary;
import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.encryption.AccessPermission;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotation;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationWidget;
import org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm;
import org.apache.pdfbox.pdmodel.interactive.form.PDField;
import org.apache.pdfbox.pdmodel.interactive.form.PDSignatureField;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
public class PdfCleanerUtils {

    private static final String EOF_MARKER = "%%EOF";
    private static final Integer EOF_LENGTH = EOF_MARKER.length();

    // Private constructor
    private PdfCleanerUtils() {
    }

    public static byte[] sanitizePdfDocument(byte[] documentData) throws ServerException {
        // Check if linearized
        boolean isLinearized = isLinearized(documentData);
        // Get the first EOF offset for non-linearized documents and the second EOF offset for linearized documents (quite rare)
        int offset = getOffset(documentData, isLinearized ? 2 : 1);
        // Get the original byte range
        byte[] originalPdfData = new byte[offset + EOF_LENGTH];
        System.arraycopy(documentData, 0, originalPdfData, 0, offset + EOF_LENGTH);
        // Load and parse the PDF document based on the original data we just got
        try (PDDocument pdDocument = Loader.loadPDF(originalPdfData)) {
            // Remove encryption and security protection if required
            AccessPermission accessPermission = pdDocument.getCurrentAccessPermission();
            if (!accessPermission.canModify()) {
                pdDocument.setAllSecurityToBeRemoved(true);
            }
            // Remove certification if required
            COSDictionary catalog = pdDocument.getDocumentCatalog().getCOSObject();
            if (catalog.containsKey(COSName.PERMS)) {
                catalog.removeItem(COSName.PERMS);
            }
            // Check for a remaining signature. This can be the case when the first signature was added with incremental = false.
            // Signatures with incremental = true were already cut away by the EOF range because we drop the revisions
            int numberOfSignatures = getNumberOfSignatures(pdDocument);
            if (numberOfSignatures > 0) {
                // Ensure there is exactly one signature. Otherwise, our EOF marker search was wrong
                if (numberOfSignatures != 1) {
                    throw new ServerException("The original document has to contain exactly one signature because it was not incrementally signed. Signatures found: " + numberOfSignatures);
                }
                // Remove the remaining signature
                removeSignatureFromNonIncrementallySignedPdf(pdDocument);
            }
            // Re-check and ensure no signatures exist
            numberOfSignatures = getNumberOfSignatures(pdDocument);
            if (numberOfSignatures != 0) {
                throw new ServerException("The original document still contains signatures.");
            }
            // Ensure the document has at least one page
            if (pdDocument.getNumberOfPages() == 0) {
                throw new ServerException("The original document has no pages.");
            }
            // Write the original document loaded by pdfbox to filter out smaller issues
            try (ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
                pdDocument.save(byteArrayOutputStream);
                return byteArrayOutputStream.toByteArray();
            }
        } catch (IOException exception) {
            throw new ServerException("Unable to load the original PDF document: " + exception.getMessage(), exception);
        }
    }

    private static boolean isLinearized(byte[] originalPdfData) {
        // Parse the data and search for the linearized value in the first 1024 bytes
        String text = new String(originalPdfData, 0, Math.min(1024, originalPdfData.length), StandardCharsets.UTF_8);
        return text.contains("/Linearized");
    }

    private static int getOffset(byte[] originalPdfData, int markerCount) {
        // Store the number of EOF markers we passed by
        int passedMarkers = 0;
        // Iterate over all bytes and find the n-th marker. Return this as offset
        for (int offset = 0; offset < originalPdfData.length - EOF_LENGTH; offset++) {
            // Sub-search for the EOF marker
            boolean found = true;
            for (int j = 0; j < EOF_LENGTH; j++) {
                if (originalPdfData[offset + j] != EOF_MARKER.charAt(j)) {
                    // Mismatching byte, set found to false and break
                    found = false;
                    break;
                }
            }
            // Check if the EOF marker was found
            if (found) {
                // Increase the passed markers
                passedMarkers++;
                // Check if we found our marker
                if (passedMarkers == markerCount) {
                    return offset;
                }
            }
        }
        // No EOF marker found - corrupted PDF document
        throw new RuntimeException("The PDF document has no EOF marker - it looks corrupted.");
    }

    private static int getNumberOfSignatures(PDDocument pdDocument) {
        // Get the number of signatures
        PDAcroForm acroForm = pdDocument.getDocumentCatalog().getAcroForm();
        return acroForm != null ? pdDocument.getSignatureDictionaries().size() : 0;
    }

    private static void removeSignatureFromNonIncrementallySignedPdf(PDDocument pdDocument) throws IOException {
        // Get the AcroForm or return
        PDAcroForm acroForm = pdDocument.getDocumentCatalog().getAcroForm();
        if (acroForm == null) {
            return; // No AcroForm present
        }
        // Iterate over all fields in the AcroForm and filter out all signatures, but keep visual signature fields
        List<PDField> updatedFields = new ArrayList<>();
        for (PDField field : acroForm.getFields()) {
            // Handle signature fields or just re-add the other field
            if (field instanceof PDSignatureField signatureField) {
                // Get the dictionary and the first potential widget
                COSDictionary fieldDictionary = signatureField.getCOSObject();
                PDAnnotationWidget widget = signatureField.getWidgets().isEmpty() ? null : signatureField.getWidgets().getFirst();
                // Check for visibility. Only re-add visible signature fields and make them re-signable
                if (!isInvisible(widget)) {
                    // Clear the signature field and make it re-usable
                    fieldDictionary.removeItem(COSName.V);
                    fieldDictionary.removeItem(COSName.DV);
                    signatureField.setReadOnly(false);
                    updatedFields.add(signatureField);
                }
            } else {
                // Retain non-signature fields
                updatedFields.add(field);
            }
        }
        // Re-set the filtered AcroForm fields
        acroForm.setFields(updatedFields);
        // Iterate over all pages and their annotations and filter out all signature annotations
        for (PDPage page : pdDocument.getPages()) {
            // Filter the annotations for each page
            List<PDAnnotation> updatedAnnotations = new ArrayList<>();
            for (PDAnnotation annotation : page.getAnnotations()) {
                if (annotation instanceof PDAnnotationWidget widget) {
                    // Check if the widget belongs to an invisible signature
                    if (widget.getCOSObject().containsKey(COSName.PARENT)) {
                        COSDictionary parentField = widget.getCOSObject().getCOSDictionary(COSName.PARENT);
                        if (parentField != null && isInvisible(widget)) {
                            // Skip an invisible signature widget
                            continue;
                        }
                    }
                }
                updatedAnnotations.add(annotation); // Retain all other annotations
            }
            // Re-set the filtered annotations for the page
            page.setAnnotations(updatedAnnotations);
        }
    }

    private static boolean isInvisible(PDAnnotationWidget widget) {
        // A signature without an annotation widget is invisible
        if (widget == null) {
            return true;
        }
        // Check the rectangle for visibility. Null or width/height 0 means invisible
        PDRectangle pdRectangle = widget.getRectangle();
        return pdRectangle == null || pdRectangle.getWidth() == 0 && pdRectangle.getHeight() == 0;
    }
}
I don't see how this can happen other than the aside being an iframe.
Inspect the rendered HTML to see if it is.
If not, check what's preventing your modal from opening.
Try Settings -> Build, Execution, Deployment -> Compiler -> Annotation Processors -> Processor Path -> search and insert your path to Lombok.
I think the rendermessages function is off.
After some research, I concluded that the Arduino framework somehow prevents polling the External Interrupt flag (INTF0). The same hardware and code worked flawlessly when the main function was explicitly defined. I'll leave the "why" to the Arduino experts.
In my case I had to specify the username in lower case.
Limit the collection size for the dropdown:
$collection->setPageSize(5); // Only get what you need for the dropdown
Add a caching layer
Optimize the selected attributes
Add proper indexes
... list continues.
you should use [[var:FirstName:"fakeFirstName"]] instead
CSS-Tricks has an article about auto-growing inputs:
https://css-tricks.com/auto-growing-inputs-textareas/
The one I like uses just one line of JS. I know you said zero JS, but I don't think you have many options, and it's nothing too complicated.
label {
  display: inline-grid;
}

label::after {
  content: attr(data-value) ' ';
  visibility: hidden;
  white-space: pre-wrap;
}

<label>
  <input type="text" name="" value="" oninput="this.parentNode.dataset.value = this.value" size="1">
</label>
While it's true that native GA4-to-BigQuery backfilling isn't currently available, I've built a tool at databackfill.com that helps solve this problem. You're right that the Analytics Data API has limitations, but we've focused on making the backfill process as straightforward as possible through a simple UI - no coding or API scripts needed. Let us know what you think.
In my case, with a very heavy update load, this error occurred because the stored procedure performed updates without using indexes on the search field. The table was not big - 3,000 records at most - but updates were widespread. Creating an index solved the problem on MS SQL Server 2019.
Dude, I would like to thank you from the bottom of my heart for the solution to your problem. I needed to solve the same problem with the equations of motion of a body with 6 degrees of freedom, and by hand it would have taken very long. I split the original system of differential equations into matrices, then multiplied them back out, and everything matches the originals.
Here is an example of my steps as I obtained each matrix:
q = [x; y; z; phi; theta; psi]
qdot = [x_dot;y_dot;z_dot;phi_dot;theta_dot;psi_dot]
qddot = [x_ddot;y_ddot;z_ddot;phi_ddot;theta_ddot;psi_ddot]
% initial equations of motion
eqns = transpose([eqddx, eqddy, eqddz, eqddphi, eqddtheta, eqddpsi])
%Mass and inertia matrix (you can also use the matlab function)
[MM, zbytek] = equationsToMatrix(eqns, qddot)
%Coriolis force and drag force matrix
[C_G, zbytek2] = equationsToMatrix_abatea(-1*zbytek, qdot)
%my some inputs in differential equations
inputs = [Thrust; Tau_phi; Tau_theta; Tau_psi];
%Matrix for inputs L and gravity Q
[L, Q] = equationsToMatrix_abatea(-1*zbytek2, inputs)
Q = -1*Q;
% Multiplication for comparison
vychozi = expand(eqns)
roznasobeni = expand( MM*qddot + C_G*qdot + Q == L*inputs)
Yes, the regex you have created matches /services/data/v, and you are correctly checking the version.
Spectral.js is the best algorithm I found. MixBox is second place.
Comparing the two, when mixing blue (0,0,255) with yellow (255,255,0):
spectral.js: 56, 143, 84
mixbox: 78, 150, 100
As you can see, spectral.js tends to be more vibrant and less grey. When I tested both of them side by side with multiple colors, spectral.js also felt a lot more natural and intuitive, mixbox felt a little disappointing and grey.
Spectral.js is only officially implemented in JS and Python, so I transcribed the script into C++.
Spectral.js still isn't perfect, though. I imagine the best algorithm would be one using supervised machine learning, if someone wanted to take the time to make that training data.
In the example with Stape's User ID power-up, the unique ID is generated and added to the Request Header for each Incoming Request inside the server Google Tag Manager once the Incoming Request is detected.
The ID is generated and added to the request on the Stape's side.
I'm having the same problem, but with a field whose value I'm setting with jQuery val(); the value is cleared as soon as I click on another field.
I have the same problem. It used to be that this button remembered which option you last picked from the drop-down list. But now it gets stuck: sometimes it always stays as "Run" even though you pick Debug from the menu, and sometimes it stays as "Debug" even though you pick plain Run from the menu.
I haven't figured out what the conditions are for why it gets stuck, or how to un-stick it.
If you're encountering the "document is not defined" error even after installing Flowbite, here’s what you can do: Check angular.json:
Ensure Tailwind CSS and Flowbite are properly configured in your angular.json file.
Example configuration -
Check that Tailwind is configured correctly, then run ng serve.
Since I am new to this, if I have said something wrong or if anyone can explain my mistake, I will humbly accept it. Thank you.
This has now been added in EasyAdmin 4.14.0: https://symfony.com/bundles/EasyAdminBundle/current/dashboards.html#pretty-admin-urls
Turns out I just set it up wrong somehow, I removed all of the plugins and such and redid it and it worked. One cause might be that I did app.component() after doing app.use(PrimeVue)?
In Django 5.1 and later, you can set the allow_overwrite flag to True on the FileSystemStorage instance used in the model's ImageField:
from django.core.files.storage import FileSystemStorage

image = models.ImageField(
    upload_to=upload_fn,
    storage=FileSystemStorage(allow_overwrite=True)
)
The issue is related to character encoding. Please ensure you are using UTF-8; once I used this encoding, I could see the correct output.
Steps to check whether the character encoding you are using is UTF-8:
Note: I am using Apache NetBeans IDE 23.
Etherscan is not a 100%-reliable service. Here is a website that shows Etherscan's incident history and status: https://etherscan.freshstatus.io/.
For example, as of December 2024 there were 8 incidents in the last 30 days involving maintenance for the web site and API.
Alternatively, you can use evm explorer to check for the transaction status: https://evmexplorer.com/transactions/mainnet/0xe7d97a52f6396b2e344ecd363b41c600165c81481f9fc482356ac1f3e13d0146
Now there is an easy way using the hideSelectAll property
<Table rowSelection={rowSelection} .... />
and setting rowSelection as
const rowSelection: TableProps['rowSelection'] = {
  hideSelectAll: true,
  ....
}
To elaborate, the VS Code debugger is not attaching with Node.js 23.3.0. There is a ticket: https://github.com/microsoft/vscode/issues/235023
I've downgraded to NodeJS v22.12.0 (LTS) and it works. Cheers.
This was the most hilarious problem I ever ran into. Your problem is not the code; it's your active connections to E-Trade's servers.
Try closing any browsers or services where you might be logged in, and make sure your software's connection is the only session/instance attempting to connect.
If this doesn't immediately solve the problem, try reverting to older code or starting over with the above information in mind. There's a chance you GPT'd your way into a scrambled mess of code.
I think we should show the slug like this: domain.com/1/slug-here, domain.com/2/slug-here.
The 1 and 2 can also be a unique generated ID that is not repeated. This way the final title-matching slug won't look unprofessional with -1 or -2 showing up at the end, and even SEO will not be affected, as sketched below.
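To make that concrete, here is a small hypothetical Python sketch of the scheme (slugify and post_url are made-up helper names):

import re

def slugify(title: str) -> str:
    # Lowercase, replace runs of non-alphanumerics with hyphens, trim extras.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def post_url(post_id: int, title: str) -> str:
    # The numeric id makes the URL unique, so the slug itself never
    # needs a "-1"/"-2" suffix even when titles collide.
    return f"domain.com/{post_id}/{slugify(title)}"

print(post_url(1, "Slug Here"))  # domain.com/1/slug-here
print(post_url(2, "Slug Here"))  # domain.com/2/slug-here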
Whenever we initialize a builder object in Elasticsearch Java, it starts out with unset values, though some builders come with predefined defaults.
For example, a SearchQuery builder may have default values for parameters like from, size, etc., and similar defaults are present in other filters as well. For every builder, there are required parameters that must be provided, such as index, query, etc. If you try to build the query without providing the required parameters, it will throw an error.
In Elasticsearch Java, there are different types of query builders, and one of the main advantages is the ability to build queries using lambda expressions.
! pip install keras==2.10.0 tensorflow==2.10.0
Using seek(), read() and readline(), I can rapidly retrieve the last line of a text file:
with open("My_File", "r") as f:
n = f.seek(0,2)
for i in range(n-2, 1, -1):
f.seek(i)
if f.read(1)=="\n":
s = f.readline()[:-1]
break
By changing the hidden layers from relu to sigmoid, you ensure that each layer applies a nonlinear transformation over the entire input range. With relu, there is the possibility that the model enters a regime where a large portion of the neurons fire linearly (for example, if the values are all in the positive region, relu basically behaves like the identity function). This can lead to the model, in practice, behaving almost linearly, especially if the initialization of the weights and the distribution of the data results in a saturation of the neurons in a linear region of the relu.
In contrast, sigmoid always introduces curvature (nonlinearity), compressing the output values to a range between 0 and 1. This makes it difficult for the network to stagnate in linear behavior, since even with subtle changes in the weights, the sigmoid function maintains a non-linear mapping between input and output.
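A minimal NumPy sketch of the failure mode described above (weights and inputs are made up, chosen to be strictly positive): when every pre-activation stays positive, the relu network collapses to a single linear map.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Made-up weights, strictly positive so every pre-activation stays positive:
W1 = rng.uniform(0.1, 1.0, (4, 3))
W2 = rng.uniform(0.1, 1.0, (1, 4))

x = rng.uniform(0.5, 1.5, (3, 100))  # all-positive input batch
h = relu(W1 @ x)                     # every unit fires in its linear region
y = relu(W2 @ h)

# With no unit ever clipped, the "nonlinear" network equals one matrix product:
y_linear = (W2 @ W1) @ x
print(np.allclose(y, y_linear))      # True -> the network is acting linearly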
As per their official documentation
Each database can only have one user. If you require multiple users, consider a VPS plan
Here is the link providing above information from their documentation page:
https://support.hostinger.com/en/articles/1583542-how-to-create-a-new-mysql-database
Answer: How can I retrieve the attributes of an enum value?
In modern C#, you can efficiently retrieve attributes using the EnumCentricStatusManagement library. It simplifies attribute handling and centralizes enum-based management.
Steps:

1. Install the library via NuGet:

dotnet add package EnumCentricStatusManagement

2. Define your enum with attributes:

using EnumCentricStatusManagement.Attributes;

public enum Status
{
    [Status("Operation successful", StatusType.Success)]
    Success,

    [Status("An error occurred", StatusType.Error)]
    Error
}

3. Retrieve attributes at runtime:

using EnumCentricStatusManagement.Helpers;

var status = Status.Success;
var attribute = EnumHelper.GetAttribute<StatusAttribute>(status);

if (attribute != null)
{
    Console.WriteLine($"Message: {attribute.Message}");
}
This approach eliminates the need for complex reflection logic and provides a clean, centralized solution for managing enums with attributes.
Note: For more details and advanced usage, you can refer to the EnumCentricStatusManagement GitHub repository.
I had the same problem. After I deleted my yarn.lock and my node_modules folder and reinstalled everything, the error no longer occurred.
Try replacing
family="CM Roman"
with
family="CMU Serif"
To parse the body, it is indeed necessary to create some classes with Pydantic to achieve my goal. Here is the final code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
msgs = dict()

class Body(BaseModel):
    title: str

@app.post("/posts/")
async def posts(body: Body):
    number = len(msgs) + 1
    msgs[number] = {
        "type": "PostCreated",
        "data": {
            "id": number,
            "title": body.title
        }
    }
    return msgs

@app.get("/")
async def root():
    return msgs
Well, it would seem TradingView does not allow short and long positions simultaneously, aka hedge mode.
Case closed...
Verify that npm is pointing to the correct registry:
npm config get registry
My mistake was that it was pointing to "http", not "https", so just reset the config:
npm config set registry https://registry.npmjs.org/
Might sound like a stupid solution, but it actually worked.
I just applied filter: brightness(100%); to the container that is rounded and has that overflow hidden, and IT WORKED PERFECTLY!
I solved it by flashing my ESP01 with this firmware and using CoolTerm instead of TeraTerm.
I'm facing a similar problem on my thesis research. I'm wondering what's the best approach to apply clinical BERT models to Portuguese medical data. What solution did you find to your problem?
Starting with version 52, Chrome introduced an optional support flag, and soon default support, for passive scroll event listeners. So, according to this document, to disable scrolling it's enough to specify that your event handler is not passive (passive: false):

window.addEventListener('mousewheel', e => {
  e.preventDefault();
  yourCustomFn();
}, { passive: false });

NOTE: for old browsers you need to use a polyfill.
OK, so in my case I just removed this file, rebuilt the code, and it worked.
Did you fix this error? I have the same problem: after sysops patched the Jenkins agents, git ls-remote doesn't work and returns
Cannot run program "nohup" (in directory "/mnt/jenkins/workspace/servicedir"): error=0, Failed to exec spawn helper: pid: 1224365, exit value: 1
Previously it failed on different commands and restarting the agents multiple times fixed it; now it fails here. Also, in this stage I put in debugging steps like "which nohup", and they failed with the same error - it can't run nohup.
java.io.IOException: error=0, Failed to exec spawn helper: pid: 1224365, exit value: 1
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from ip-10-3-3-94.eu-west-1.compute.internal/10.3.3.94:49196
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1787)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
at hudson.remoting.Channel.call(Channel.java:1003)
at hudson.Launcher$RemoteLauncher.launch(Launcher.java:1121)
at hudson.Launcher$ProcStarter.start(Launcher.java:506)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:180)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:134)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:329)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:323)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:47)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
at org.jenkinsci.plugins.workflow.cps.LoggingInvoker.methodCall(LoggingInvoker.java:105)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:90)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:116)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:85)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:110)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:85)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.CastBlock$ContinuationImpl.cast(CastBlock.java:47)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.dispatch(CollectionLiteralBlock.java:55)
at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.item(CollectionLiteralBlock.java:45)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:147)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:17)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:49)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:180)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:423)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:331)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:295)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$wrap$4(CpsVmExecutorService.java:140)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
at jenkins.util.ErrorLoggingExecutorService.lambda$wrap$0(ErrorLoggingExecutorService.java:51)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:53)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.call(CpsVmExecutorService.java:50)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService.lambda$categoryThreadFactory$0(CpsVmExecutorService.java:50)
at java.base/java.lang.Thread.run(Thread.java:1583)
Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 0e4b091d-0f04-4ada-87fa-3c617e0b0e47
Caused: java.io.IOException: Cannot run program "nohup" (in directory "/mnt/jenkins/workspace/servicedirt"): error=0, Failed to exec spawn helper: pid: 1224365, exit value: 1
You could add buttons around the words and change the CSS so that you can edit each word. You can also make the border/background/specific sides transparent if you want to, that way.
I think you need to use a reverse proxy such as Nginx for that. Personally, I prefer Traefik over Nginx as it can automatically handle TLS certificates for you, so you do not need a separate service for that. (I've also found Traefik to work better in Kubernetes environments, so you can apply your knowledge in both cases; not relevant if you're only using Docker, however.)
A reverse proxy allows you to decouple concern for things like path prefixes from your application and the deployment environment.
I recommend creating a routing rule for your prefix and then routing that request via a stripprefix middleware to your service. The middleware removes the path prefix from the HTTP request before it gets to your service, so it can correctly match the request path again.
You should build an analog of Parsec's chainl1 to, quote:
eliminate left recursion which typically occurs in expression grammars.
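For illustration, here is a minimal chainl1-style combinator sketched in Python rather than Haskell (the parser and token representations are made up): it parses term (op term)* and folds the results left-associatively, so the grammar needs no left recursion.

# Parsers are functions: tokens -> (value, remaining_tokens) or None.

def chainl1(term, op):
    """Parse `term (op term)*`, applying each operator left-associatively."""
    def parse(toks):
        res = term(toks)
        if res is None:
            return None
        acc, toks = res
        while True:
            opres = op(toks)
            if opres is None:
                return acc, toks
            f, rest = opres
            tres = term(rest)
            if tres is None:
                return acc, toks  # dangling operator: stop before it
            rhs, toks = tres
            acc = f(acc, rhs)
    return parse

# Tiny demo grammar: expr = number ('-' number)*, left-associative.
def number(toks):
    return (int(toks[0]), toks[1:]) if toks and toks[0].isdigit() else None

def minus(toks):
    if toks and toks[0] == "-":
        return (lambda a, b: a - b), toks[1:]
    return None

expr = chainl1(number, minus)
print(expr(["10", "-", "3", "-", "2"]))  # (5, []): (10-3)-2, left-associative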
So I was wrong about the types. ResourceLoader::load can load a packed scene directly as a PackedScene; it's not necessary to use the Resource class. Solved by flipping Ref<Resource> to Ref<PackedScene>:
Ref<PackedScene> cpp_sprite = ResourceLoader::load("res://CppSprite.tscn");
I just updated the GeForce Experience driver and it works.
If you have the Flatpak version, first become root:
sudo -i
or
su
then find pycharm.sh and run it using bash. For example, mine was:
bash /var/lib/flatpak/app/com.jetbrains.PyCharm-Community/x86_64/stable/active/files/pycharm/bin/pycharm.sh
availableSizes() returns [] for SVG files, so it is not correct even when the SVG file itself is valid.
CupertinoPageRoute by default prioritizes handling the system back gesture (swipe-to-go-back), which may bypass the logic of WillPopScope or PopScope (per ChatGPT), so the page transition can't be set to Cupertino.
I'm having the same problem - would love to hear an answer to this.
Here's what I did to get around the issue so I could keep working:
import logging

logging.basicConfig(
    level=logging.INFO,  # Set to DEBUG for more detailed logs
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[
        logging.FileHandler("agent.log"),  # Logs to a file
        logging.StreamHandler()            # Logs to the console
    ]
)
And you need to adjust how your errors and whatnot are being logged, but ChatGPT can help you with that!
Good luck!
Steps to check and fix the issue:
If you get the error again, try reinstalling Python and pip. Make sure you set the PATH variable correctly. To reinstall Python and pip, please read this doc.
Just checking: have you closed the col tag? Or inspect the website to check that everything is being built the correct way.
Your second element in the list is smaller, as if it were built inside the first one.
If that's not the case, can you post the whole list structure with the CSS used? Thanks.
It was a permissions issue: in my venv directory, some libraries were installed by the user that runs the program (runcloud), and others were installed by root. For example:
drwxrwxr-x+ 3 runcloud runcloud 4096 Nov 22 17:02 jinja2
drwxrwxr-x+ 3 runcloud runcloud 4096 Nov 22 17:02 jiter
drwxr-xr-x+ 4 root root 4096 Dec 5 20:21 mariadb
drwxrwxr-x+ 3 runcloud runcloud 4096 Nov 22 17:02 markupsafe
drwxr-xr-x+ 24 root root 4096 Dec 5 19:06 numpy
This allowed the python program to recognize the folders (and import the libraries) but it couldn't access anything inside of them. Hence the missing attribute error.
To fix the issue I went through each library installed by root, and copied the permissions and ownership from one of the functional libraries (flask).
Here's an example for mariadb:
chmod -R --reference=flask mariadb
chown -R --reference=flask mariadb
After this the program could import the libraries and access the files inside without any issues.
As suggested by @Bill Karwin, user is a reserved keyword in H2, and that might be causing the issue.
I suggest you change the table name and try again.
Configuring env files for staging, development, and other phases is quite easy.
Create .env.staging and .env.production, and modify your package.json this way:
"dev": "vite --mode staging",
"production": "vite --mode production",
"build": "vite build --mode production",
You did not specify the version and I am not very deep into Razorpay, but I remember that you have to use the prefill param instead of customer.
I think this is what you need. I'm basically hiding the segment when the x-axis is different
https://jsfiddle.net/dt5zLgay
############################# Log Basics #############################
#log.dirs=/tmp/kafka-logs
log.dirs=/var/log/kafka
Then create the directory and make it writable:
sudo mkdir -p /var/log/kafka
sudo chmod 777 /var/log/kafka
PS: the /tmp folder it was pointing to initially is a temporary folder, so not very stable; you might have been missing permissions or something, and Kafka couldn't create logs there.
You won't need to change folders from now on: you created a kafka folder in /var/log/, and Kafka is going to store its logs there from now on.
What do I need to do if the folder is full again?
rm -rf /var/log/kafka/*
+1 Making Magento compatible with Vitess would be a game changer.
Both YouTube and GitHub leverage MySQL for some of their services, with Vitess playing a pivotal role in achieving high scalability. Vitess offers features such as:
What version of react-native do you have? Are you using the "new architecture"?
Can you please provide some code so that I can offer better help?
Looks like https://github.com/stephenjude/filament-jetstream/ helps integrate the two. You could start over or copy its approach into your existing app.
The client code that finally worked is the following.
async function exportarCanvasF1(nombrearchivo, start_) {
    let start = Number(start_);
    var chunkSize = 30000; // size of each chunk
    let start_limite = (start == 0 ? Number(start_) : Number(start_)) + chunkSize;
    if (start_limite > dataURL.length) {
        start_limite = (start_limite - (start_limite - dataURL.length - 1));
    }
    if (Number(start) + chunkSize > dataURL.length) {
        chunkSize = chunkSize - ((Number(start) + chunkSize) - dataURL.length) + 1;
    }
    let dataURL_;
    if (Number(start) < dataURL.length) {
        dataURL_ = dataURL.substr(start, chunkSize);
        dataURL_ = encodeURIComponent(dataURL_);
        $.ajax({
            type: "POST",
            url: "Respuestaajax.aspx/Respuestaaj",
            contentType: "application/json; charset=utf-8",
            data: '{"parametro":"funcion","valor":"TroceadoFileBase64","tabla":"' + dataURL_ + '","campo":"' + nombrearchivo + '","criterio":"' + start_limite + '","v1":""}',
            dataType: "json",
            success: function (devolucion) {
                if (devolucion.d) {
                    var d = JSON.parse(devolucion.d);
                    exportarCanvasF1(d[0][1], d[1][1]);
                }
            },
            error: function (req, status, error) {
            }
        });
    }
    else if (!(Number(start) < dataURL.length)) {
        dataURL_ = dataURL.substr(start, chunkSize);
        console.log("Length chunk: " + dataURL_.length);
        dataURL_ = encodeURIComponent(dataURL_);
        $.ajax({
            type: "POST",
            url: "Respuestaajax.aspx/Respuestaaj",
            contentType: "application/json; charset=utf-8",
            data: '{"parametro":"funcion","valor":"TroceadoFileBase64","tabla":"' + dataURL_ + '","campo":"' + nombrearchivo + '","criterio":"' + start_limite + '","v1":""}',
            dataType: "json",
            success: function (devolucion) {
                if (devolucion.d) {
                    var d = JSON.parse(devolucion.d);
                    $.ajax({
                        type: "POST",
                        url: "Respuestaajax.aspx/Respuestaaj",
                        contentType: "application/json; charset=utf-8",
                        data: '{"parametro":"funcion","valor":"TroceadoFileBase64_fin","tabla":"' + nombrearchivo + '","campo":"","criterio":"","v1":""}',
                        dataType: "json",
                        success: function (devolucion) {
                            if (devolucion.d) {
                            }
                        },
                        error: function (req, status, error) {
                        }
                    });
                }
            },
            error: function (req, status, error) {
                alert("No hubo respuesta desde el servidor. Prueba otra vez.");
            }
        });
    }
}
And the server-side code:
public string TroceadoFileBase64(string base64file, string nombrefile, string start)
{
    string jsonDevolucion = "";
    string base64filedec1 = HttpUtility.UrlDecode(base64file);
    byte[] b = null;
    System.Text.ASCIIEncoding codificador = new System.Text.ASCIIEncoding();
    b = codificador.GetBytes(base64filedec1);
    CrearfiledesdeArray(b, nombrefile);
    string[,] devolucion = new string[2, 2]; // 2 blocks of 2 values
    devolucion[0, 0] = "nombrefile";
    devolucion[0, 1] = nombrefile;
    devolucion[1, 0] = "start";
    devolucion[1, 1] = start;
    jsonDevolucion = JsonConvert.SerializeObject(devolucion);
    return jsonDevolucion;
}

public string TroceadoFileBase64_fin(string nombrefile)
{
    string strtextfile = File.ReadAllText((string)HttpContext.Current.Server.MapPath("~") + "google/archivoseditados/" + Left(nombrefile, nombrefile.Length - 4) + ".txt");
    int mod4 = strtextfile.Length % 4;
    if (mod4 > 0)
    {
        strtextfile += new string('=', 4 - mod4);
    }
    byte[] b = null;
    b = Convert.FromBase64String(strtextfile);
    File.WriteAllBytes((string)HttpContext.Current.Server.MapPath("~") + "google/archivoseditados/" + nombrefile, b);
    return "Finalizado";
}

public void CrearfiledesdeArray(Byte[] array, string nombrefile)
{
    FileStream fs = new FileStream((string)HttpContext.Current.Server.MapPath("~") + "google/archivoseditados/" + Left(nombrefile, nombrefile.Length - 4) + ".txt", FileMode.Append);
    fs.Seek(0, SeekOrigin.End);
    fs.Write(array, 0, array.Length);
    fs.Flush();
    fs.Dispose();
}
Bit of a delayed reaction(!) but I used to run Coherent 3.2, which is a Unix clone that uses 16-bit protected mode. (I was running it on a 386 but I don't think it needed one.)
Use the ACCOUNT_NAME column in the filter:
https://docs.snowflake.com/en/sql-reference/organization-usage/accounts#columns
select * from snowflake.organization_usage.accounts where ACCOUNT_NAME = 'XXXX';
or
select * from snowflake.organization_usage.accounts where ACCOUNT_NAME ilike 'XXXX%';
By default, WordPress does not show subcategories that are empty.
You have no idea how much hassle and time this issue caused me before I found out.
Thanks for sharing your insights regarding the issue with desaturated and faded images in GANs. I’m encountering a similar problem but with an autoencoder model I’m training using TensorFlow 1.15.8 (DirectML). Problem Description:
My model outputs blurry and low-contrast images compared to the expected results. Here’s what I’m working with:
Python Version: 3.7
TensorFlow Version: TensorFlow 1.15.8 (DirectML)
GPU: AMD Radeon RX 6700XT
Model Type: Convolutional Autoencoder for image reconstruction.
Despite data normalization and implementing data augmentation (rotation, brightness adjustment, horizontal flipping), the model struggles to generate high-quality reconstructions. I suspect it might be related to the convolutional layers or loss function settings. What I’ve Tried:
Reducing the learning rate.
Normalizing the dataset ([0,1] range).
Adjusting the number of filters in the encoder and decoder.
Using MSE as the loss function.
Images:
I’ve included comparisons of the input, expected output (target), and the model’s predictions below:
Example 2:
Questions:
1-) Could doubling the filters in the encoder/decoder layers help address the blurriness as it did for the critic in your GAN?
2-) Is there a way to combine MAE loss with MSE during training to prevent this desaturation?
3-) Are there any specific adjustments I can make to the learning process or network architecture to avoid the blurry and faded outputs?
I appreciate your advice and any suggestions you can provide to tackle this issue.
Thanks in advance!
Is it possible, using the VPython library, to create a 3D figure in which each limb can be moved and will perform motion?
Check if the "new architecture" is used in this project. If it is, newArchEnabled=true should be set, and the necessary change should be added to the .env file. If it is not set newArchEnabled=false.
stop metro and also delete package-lock.json/yarn.lock when you reinstall node_modules
The error most likely indicates that the model's architecture isn't directly recognized by the conversion script. Have you tried "python convert-hf-to-gguf.py /path/to/model --outtype f16"?
These links might help: https://github.com/ggerganov/llama.cpp#obtain-and-prepare-the-model-files
and, for transforming the model: https://huggingface.co/docs/transformers/index
Had the same problem on my iMac; your solution (with git) worked for me too. Thanks!!!
I was using the 32-bit version of GCC. So I installed the 64-bit version, along with Python 3.12.7, and now it works.
It could be that git is already initialized by some framework and you simply do not see it; also, the usual bash command does not work here.
From PowerShell at the project level, simply run this:
Get-ChildItem -Force
This will make the .git folder visible.
Azure and Microsoft 365 (Teams/SharePoint/Exchange Online/etc) are best considered separate applications. The only common factor between them is the identity provider, Entra ID/Azure AD. Fundamentally speaking, as long as the Azure subscriptions are set up on the same parent tenant as the users who have the licenses for the M365 services, that is all the integration that there is to do*.
*This is to say, that there is a lot more work to do for a combined Azure/M365 tenant to meet enterprise information security and other such requirements, but the foundation for this is having a single tenant that contains both your users and Azure subscriptions.
I had the same issue until I realised I had not run the qemu system command before jumping to gdb. Make sure qemu is running...
If you use bind() (so the source IP is known) or a connected socket:
With a static route, with IP_MULTICAST_IF, the multicast IP datagram is sent in a unicast Ethernet frame to the gateway.
With a static route, without IP_MULTICAST_IF, the multicast IP datagram is sent in a multicast Ethernet frame.
If you use an unconnected socket (without binding first):
With a static route, the multicast IP datagram is always sent in a unicast Ethernet frame, with or without IP_MULTICAST_IF.
So don't use static multicast routes. Stick with setting IP_MULTICAST_IF.
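For completeness, here is a minimal Python sketch of the recommended approach (the group, port, and interface addresses are placeholders):

import socket

# Hypothetical addresses: 239.1.2.3 is the multicast group,
# 192.168.1.10 is the local interface the datagrams should leave from.
GROUP, PORT = "239.1.2.3", 5000
LOCAL_IF = "192.168.1.10"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Select the outgoing interface explicitly instead of relying on a
# static multicast route in the kernel routing table.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton(LOCAL_IF))

sock.sendto(b"hello", (GROUP, PORT))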
The issue was the default timeout for the real-time inference endpoint being 60 seconds.
It seems that exceeding the timeout threshold caused the request to be retried for some reason (docs).
Switching to an async inference endpoint solved it, as the request takes ~2m.
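Assuming this is an Amazon SageMaker endpoint (the answer doesn't say explicitly), a minimal boto3 sketch of the async invocation looks like this; the endpoint name and S3 URIs are placeholders:

import boto3

runtime = boto3.client("sagemaker-runtime")

# Async inference reads the payload from S3 and writes the result back to S3,
# so the HTTP call returns immediately instead of holding the connection open.
response = runtime.invoke_endpoint_async(
    EndpointName="my-endpoint",                          # hypothetical name
    InputLocation="s3://my-bucket/input/payload.json",   # hypothetical URI
    ContentType="application/json",
)
print(response["OutputLocation"])  # poll this S3 URI for the result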
Found a solution by using the enumerate() function.
The code changes are as follows:
def findtemplate():
    x = None
    for x, template in enumerate(all_templates):
        result = cv2.matchTemplate(work_image, template, method=cv2.TM_CCORR_NORMED)
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
        threshold = 0.945
        if max_val >= threshold:
            break
    return x
Now what the function does is return the index of the item that was found in the list. We can then take that index and get an output that confirms the match:
if findtemplate() == 3:
    print('green image was a match')
Which then results in a successful print of the green image match:
green image was a match
That is all I wanted to achieve - the ability to map each item in the list and get some sort of index matching the item so that I could use it later on.
Thanks a lot !
There's a rule in computing: Don't re-invent the wheel. So please take a look at Algorithm 463 from the Collected Algorithms of the Association for Computing Machinery: https://calgo.acm.org/
listOf, setOf, and mapOf should return a persistent vector, persistent hash set, and persistent hash map, respectively.
In addition there should be a persistent linked list.
Scala does it like that, so does Clojure, which has adopted the concept from Scala.
Without persistent collections, entire immutable collections are copied again and again for every additional value. This results in a runtime complexity of O(n^2), as opposed to O(n). (Per addition, O(n) instead of O(1).)
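A quick way to see the O(n^2) cost - a Python sketch using tuples as a stand-in for an immutable collection that is copied on every addition, since the same copying argument applies:

import time

def build_by_copying(n):
    # Each "append" copies the whole immutable tuple: O(k) at step k,
    # so building n elements costs 1 + 2 + ... + n = O(n^2) overall.
    acc = ()
    for i in range(n):
        acc = acc + (i,)
    return acc

def build_mutable(n):
    # A mutable list appends in amortized O(1), i.e. O(n) overall.
    acc = []
    for i in range(n):
        acc.append(i)
    return acc

for n in (5_000, 10_000):
    t0 = time.perf_counter(); build_by_copying(n); t1 = time.perf_counter()
    build_mutable(n); t2 = time.perf_counter()
    # Doubling n roughly quadruples the copying time but only doubles the list time.
    print(f"n={n}: copy {t1 - t0:.3f}s, list {t2 - t1:.3f}s")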
This issue might be caused by font padding. Try setting the includeFontPadding attribute to false in your TextView:
android:includeFontPadding="false"
I found a workaround for this issue:
I filtered the data using the sensor_discord tag.
Then, I created two queries:
After that, I used Field Overrides in Grafana:
This approach worked perfectly!
I was working on my master's thesis and I made the same mistakes as you... I was so frustrated and had no idea how to restore my files. Did you end up restoring all your files in the end, and how?
The problem was me not specifying the pgp_key argument and using the encrypted_ses_smtp_password_v4 attribute instead of encrypted_secret in the output. I did not read the documentation carefully: it says the attribute will only be generated if pgp_key is specified.
Things seem to be working now and the secret key gets generated.
When is the window.addEventListener() call executed? Does the problem occur on mobile, or desktop? You may want to use the visibilitychange event instead (or both). See this blog post.
That said, I've used the beforeunload event successfully with the below approach:
const unload = useCallback((e) => {
  // ...
}, []);

useEffect(() => {
  window.addEventListener("beforeunload", unload);
  return () => {
    window.removeEventListener("beforeunload", unload);
  };
}, [unload]);
My unload function does not contain an async API call though, which may be a significant difference between our use cases.
You need to install the JDK first.
Career orientation for young people with behavioral problems (ODD)

I chose this target group, young people with behavioral problems such as ODD, because of the challenges these young people are confronted with. They have trouble handling their emotions, which leads to big conflicts. Young people with ODD see their surroundings more as enemies and react from their emotions; they are often irritated and quickly upset. They cannot cope well with setbacks, which produces anger and a lot of sadness, and as a result they start irritating and pestering others. The picture I have is of arguments with adults, or refusing to listen. Usually they are angry, with anger that keeps building; they lie, often blame others for their own mistakes, and therefore refuse to listen or keep to the rules they have to follow.

What I have looked into myself is the support they need, such as coaching in emotional skills and creating a structured environment. The better these young people feel and the more positively they see themselves, the better they can develop, understand that there are different challenges, be offered those opportunities, and achieve them. Some have more trouble developing because certain problems play a large role: they often lie and skip school, and the behavior leads to suspension from school or work. This shows that something in their environment is not right; often it has to do with family or friends, something dear to them. There are many institutions they can turn to, for example therapy or a clinic, where their mistakes and development are examined and a whole process of six months is laid out, which can sometimes take longer.

How do people view young people with ODD behavioral problems? People often look at them in different ways. Some have little understanding of what young people with behavioral problems go through and what they have experienced; this can cause emotional strain for a family member who tries to repair the behavior until it lessens again. There are also people, for example, who do not find it difficult to deal with these young people. What draws my interest to this target group is how people show interest and want to understand what these young people go through in order to follow their process; some do not want to listen at all, or are too angry to develop, and they need help with that. I find it very special how all of that works, for example how they react so angrily and furiously and then go on to develop.

Here are a few more characteristics that belong to this.

The personal qualities I think I can bring:

These qualities are important for building a bond of trust, because you talk about the situations they are going through and support them in those situations. You help the young people make contact more easily and understand the situation and what they are going through, so that they get the feeling that someone is living through the situation with them.
In your code, there are a number of confusing state management patterns between the child and App components.
I have updated your code in the sandbox here https://codesandbox.io/p/sandbox/w9lvkr
After I saw this discussion, I managed to write a generic function for this
const result = await this.db
  .select({
    ...getTableColumns(this.firstSchema),
    [this.firstFieldName]: sql`json_agg(${this.secondSchema})`,
  })
  .from(this.firstSchema)
  .leftJoin(this.secondSchema, eq(this.firstSchema[this.firstFieldId], this.secondSchema.id))
  .groupBy(this.firstSchema.id);