Figured it out, it was related to the work sizes.
By setting the local_work_size to NULL, I think it was iterating through the seed_ranges in a single process; if you set the global_work_size to 28 (the number of cores) and the local_work_size to 1, then it will fully utilise the CPU.
I didn't change the work_dim, though.
uint64_t global = num_seed_ranges; // 28 in my case
uint64_t local = 1;
error = clEnqueueNDRangeKernel(
    commands,       // command queue
    ko_part_b,      // kernel
    1, NULL,        // work dimension stuff
    &global,        // global work size (num of cores)
    &local,         // local work size (1)
    0, NULL, NULL   // event queue stuff
);
Final Results:
C Single thread - 4 mins
C OpenMP - 23 seconds
C OpenCL - 9 seconds
Rust single threaded - 1.5 mins
Rust rayon multiprocess - 7 seconds
Cuda 3072 cores (2000 series) - 9 seconds
It is possible to use nested switch statements in JS, but they are generally not considered a best practice: they quickly become hard to read and maintain.
The better approach is to extract each case into a separate private method.
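For illustration, here is the extraction idea as a Python sketch (all names invented): one small function per case plus a dispatch table, instead of nested branching.

```python
# Each case lives in its own small, testable function.
def handle_start(arg):
    return f"started {arg}"

def handle_stop(arg):
    return f"stopped {arg}"

# The dispatch table replaces the outer switch.
HANDLERS = {"start": handle_start, "stop": handle_stop}

def dispatch(command, arg):
    handler = HANDLERS.get(command)
    if handler is None:
        raise ValueError(f"unknown command: {command}")
    return handler(arg)

print(dispatch("start", "job1"))  # started job1
```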
If someone has the same issue, please refer to the link below:
https://www.youtube.com/watch?v=so6MbkVJOSQ
Is this what you want?
$number = 9;
$bin_no = decbin($number);
$bin_arr = array_map('intval', str_split($bin_no));
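If it helps, the same conversion as a Python sketch (for comparison only), mirroring decbin plus str_split:

```python
# Convert 9 to its binary digits as a list of ints.
number = 9
bin_no = format(number, "b")        # "1001", like PHP's decbin()
bin_arr = [int(d) for d in bin_no]  # like str_split + intval
print(bin_arr)  # [1, 0, 0, 1]
```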
@Kevin But it might slow down the function only on the first call, since each module is only imported once per interpreter session.
Easy enough,
First, create a measure named "SUM Amount",
SUM Amount = SUM( 'DATATABLE'[Amount] )
then,
_Amount =
VAR __r =
    RANK(
        ALLSELECTED( 'DATATABLE'[State], 'DATATABLE'[Company] ),
        ORDERBY( [SUM Amount], DESC, 'DATATABLE'[Company], DESC ),
        PARTITIONBY( 'DATATABLE'[State] )
    )
RETURN
    IF( __r <= 3, [SUM Amount] )
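For intuition, the measure's rank-and-filter logic as a plain Python sketch over invented data: sum Amount per State and Company, rank within each State by the sum descending, and keep the top 3.

```python
from collections import defaultdict

# Invented sample rows: (State, Company, Amount)
rows = [
    ("TX", "Acme", 100), ("TX", "Beta", 250), ("TX", "Gamma", 50),
    ("TX", "Delta", 300), ("CA", "Acme", 80), ("CA", "Beta", 40),
]

# SUM Amount per (State, Company), like the SUM measure
totals = defaultdict(float)
for state, company, amount in rows:
    totals[(state, company)] += amount

# Group by State, like PARTITIONBY
by_state = defaultdict(list)
for (state, company), total in totals.items():
    by_state[state].append((company, total))

# Rank descending within each State and keep the top 3, like RANK + IF
top3 = {
    state: sorted(companies, key=lambda c: -c[1])[:3]
    for state, companies in by_state.items()
}
print(top3["TX"])  # [('Delta', 300.0), ('Beta', 250.0), ('Acme', 100.0)]
```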
Most likely your problem is in this line:
const cmd = message.split("");
since you are not splitting by a space. If you do:
const cmd = message.split(' ');
then everything should work.
Turns out the code wasn't the problem, I just messed up the SQL Room dependencies.
A couple issues off the top of my head:
When you call the function if the library is large it might slow down your algorithm. Whether it matters or not depends on the context and end user. This drawback could be a gain if the goal is to reduce initial script loading time.
If the library isn't installed your code may only fail when the function is called, which could cause a delay in failure. It is often better to fail as soon as the script is loaded so you immediately know there is a problem.
It's easier to read and debug code that adheres to formatting standards.
Potential linter implications.
In the end the drawbacks depend on the context entirely. I think the more important thing to consider is what you can accomplish by doing this, which often is very little.
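A small Python sketch of the trade-off described above (the function name is invented): with a lazy import, the import cost, and any ImportError, moves from script load time to the first call.

```python
def mean_of(data):
    # Lazy import: the cost (and any ImportError if the module were
    # missing) is paid here on first call, not when the script loads.
    import statistics
    return statistics.mean(data)

print(mean_of([1, 2, 3]))
```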
There is a domain specific language for your problem:
https://docs.askalot.io/guide/qml-syntax/
Questionnaire Markup Language (QML)
You should try it with some math:
https://docs.askalot.io/theory/questionnaire-analysis/
Have you tried QML (Questionnaire Markup Language)?
https://docs.askalot.io
I think it is more a "default" to use import at the beginning of the code. I don't see any problem with using it this way.
In fact, all C compilers tend to somewhat ignore restrict.
restrict qualification is local to a block/struct/function/file but is not transmitted to another function (assignment). With a call to an external function that the compiler does not know anything about, restrict does nothing.
I'm not asking anybody to run any of my code. I'm not here to train generative AI. I'm merely asking how to write the condition that I am asking about, hence why I described the context of it. It's a simple question about SQL queries.
PEP 8 explicitly recommends placing imports at the top of the file.
@Brian Berns The code is identical because I have defined (+) between a Var and a float, so I can write expressions like this
let x = Var.create 10.0
let y = 10.0
let z = x + y
where Var.create just creates a new Var.
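The same trick as a Python sketch (the class here is invented): overload the addition operator between Var and a plain number, so both orders work.

```python
class Var:
    """Minimal wrapper type; addition works with Var or plain numbers."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        other_value = other.value if isinstance(other, Var) else other
        return Var(self.value + other_value)

    # Make 10.0 + Var(...) work as well as Var(...) + 10.0
    __radd__ = __add__

x = Var(10.0)
y = 10.0
z = x + y
print(z.value)  # 20.0
```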
When you call cartRepository.deleteById(id), the associated User still holds a reference to the Cart. You will need to fetch the Cart, get its associated User, clear the reference, and then delete the Cart:
@Transactional
@Override
public void clearCart(Long id) {
    Cart cart = cartRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Cart not found for ID: " + id));
    User user = cart.getUser();
    if (user != null) {
        user.setCart(null);
        userRepository.save(user);
    }
    cartItemRepository.deleteByCartId(id);
    cartRepository.delete(cart);
}
After this method, the transaction commits, and both entities are fully detached.
@John Bollinger
It is also very meaningful in the case of structures or arrays. On Windows there is a style of programming called COM, and the well-known graphics API Direct3D also uses COM. COM objects are generally called this way.
typedef struct {
    void (*func1)();
    void (*func2)();
    void (*func3)();
    void (*func4)();
    void (*func5)();
} i_object_vtable;

typedef struct {
    i_object_vtable *vtable;
} i_object;

int object_create(i_object **);

int entry() {
    i_object *p_object;
    object_create(&p_object);
    p_object->vtable->func1();
    p_object->vtable->func2();
    p_object->vtable->func3();
    p_object->vtable->func4();
    p_object->vtable->func5();
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl entry
.type entry, @function
entry:
subq $24, %rsp #,
# /app/example.c:18: object_create(&p_object);
leaq 8(%rsp), %rdi #, tmp114
call object_create #
# /app/example.c:20: p_object->vtable->func1();
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:20: p_object->vtable->func1();
movq (%rax), %rax # p_object.0_1->vtable, p_object.0_1->vtable
# /app/example.c:20: p_object->vtable->func1();
call *(%rax) # _2->func1
# /app/example.c:21: p_object->vtable->func2();
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:21: p_object->vtable->func2();
movq (%rax), %rax # p_object.1_4->vtable, p_object.1_4->vtable
# /app/example.c:21: p_object->vtable->func2();
call *8(%rax) # _5->func2
# /app/example.c:22: p_object->vtable->func3();
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:22: p_object->vtable->func3();
movq (%rax), %rax # p_object.2_7->vtable, p_object.2_7->vtable
# /app/example.c:22: p_object->vtable->func3();
call *16(%rax) # _8->func3
# /app/example.c:23: p_object->vtable->func4();
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:23: p_object->vtable->func4();
movq (%rax), %rax # p_object.3_10->vtable, p_object.3_10->vtable
# /app/example.c:23: p_object->vtable->func4();
call *24(%rax) # _11->func4
# /app/example.c:24: p_object->vtable->func5();
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:24: p_object->vtable->func5();
movq (%rax), %rax # p_object.4_13->vtable, p_object.4_13->vtable
# /app/example.c:24: p_object->vtable->func5();
call *32(%rax) # _14->func5
# /app/example.c:27: }
xorl %eax, %eax #
addq $24, %rsp #,
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
Manually saving these function pointers results in negative optimization
typedef struct {
    void (*func1)();
    void (*func2)();
    void (*func3)();
    void (*func4)();
    void (*func5)();
} i_object_vtable;

typedef struct {
    i_object_vtable *vtable;
} i_object;

int object_create(i_object **);

int entry() {
    i_object *p_object;
    object_create(&p_object);
    i_object_vtable vtable;
    __builtin_memcpy(&vtable, p_object->vtable, sizeof(vtable));
    vtable.func1();
    vtable.func2();
    vtable.func3();
    vtable.func4();
    vtable.func5();
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl entry
.type entry, @function
entry:
subq $72, %rsp #,
# /app/example.c:18: object_create(&p_object);
leaq 8(%rsp), %rdi #, tmp106
call object_create #
# /app/example.c:21: __builtin_memcpy(&vtable, p_object->vtable, sizeof(vtable));
movq 8(%rsp), %rax # p_object, p_object
# /app/example.c:21: __builtin_memcpy(&vtable, p_object->vtable, sizeof(vtable));
movq (%rax), %rax # p_object.0_1->vtable, p_object.0_1->vtable
movdqu (%rax), %xmm0 # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)_2]
movq %xmm0, %rdx # MEM <char[1:40]> [(void *)_2], tmp119
movaps %xmm0, 16(%rsp) # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)&vtable]
movdqu 16(%rax), %xmm0 # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)_2]
movq 32(%rax), %rax # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)_2]
movaps %xmm0, 32(%rsp) # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)&vtable]
movq %rax, 48(%rsp) # MEM <char[1:40]> [(void *)_2], MEM <char[1:40]> [(void *)&vtable]
# /app/example.c:23: vtable.func1();
call *%rdx # tmp119
# /app/example.c:24: vtable.func2();
call *24(%rsp) # vtable.func2
# /app/example.c:25: vtable.func3();
call *32(%rsp) # vtable.func3
# /app/example.c:26: vtable.func4();
call *40(%rsp) # vtable.func4
# /app/example.c:27: vtable.func5();
call *48(%rsp) # vtable.func5
# /app/example.c:30: }
xorl %eax, %eax #
addq $72, %rsp #,
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
I also can't do this for every object. Once the compiler knows that i_object_vtable will not be constantly changed, it is able to optimize it.
typedef struct {
    void (*func1)();
    void (*func2)();
    void (*func3)();
    void (*func4)();
    void (*func5)();
} i_object_vtable;

typedef struct {
    i_object_vtable *vtable;
} i_object;

__attribute__((malloc)) i_object *object_create();

int entry() {
    i_object *p_object;
    p_object = object_create();
    p_object->vtable->func1();
    p_object->vtable->func2();
    p_object->vtable->func3();
    p_object->vtable->func4();
    p_object->vtable->func5();
    // Saved the pointer into the register
    p_object->vtable->func1();
    p_object->vtable->func1();
    p_object->vtable->func1();
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl entry
.type entry, @function
entry:
pushq %rbx #
# /app/example.c:18: p_object = object_create();
call object_create #
# /app/example.c:20: p_object->vtable->func1();
movq (%rax), %rbx # p_object_12->vtable, _1
# /app/example.c:20: p_object->vtable->func1();
call *(%rbx) # _1->func1
# /app/example.c:21: p_object->vtable->func2();
call *8(%rbx) # _1->func2
# /app/example.c:22: p_object->vtable->func3();
call *16(%rbx) # _1->func3
# /app/example.c:23: p_object->vtable->func4();
call *24(%rbx) # _1->func4
# /app/example.c:24: p_object->vtable->func5();
call *32(%rbx) # _1->func5
# /app/example.c:27: p_object->vtable->func1();
call *(%rbx) # _1->func1
# /app/example.c:28: p_object->vtable->func1();
call *(%rbx) # _1->func1
# /app/example.c:29: p_object->vtable->func1();
call *(%rbx) # _1->func1
# /app/example.c:32: }
xorl %eax, %eax #
popq %rbx #
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
Unfortunately, most APIs return error codes instead of pointers, making it impossible to use __attribute__((malloc)).
For anyone else stumbling on this, the latest OpenSCAD development snapshot has added support for center=true in the import parameters, see here: https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/STL_Import_and_Export
I am on 54.0.20, but no success. Still facing the error for iOS - "The 'expo-modules-autolinking' package has been found, but it seems to be incompatible with '@expo/prebuild-config'"
@ssd
In fact, all C compilers tend to somewhat ignore restrict. When do_another_thing and entry are compiled in the same source file, the compiler will perform its own analysis and then assume that xxx might be continuously modified.
int xxx_create(int *p_xxx);
int xxx_do_something(int xxx);

void do_another_thing(int *restrict xxx) {
    xxx_do_something(*xxx);
    xxx_do_something(*xxx);
    xxx_do_something(*xxx);
}

int entry() {
    int xxx;
    xxx_create(&xxx);
    do_another_thing(&xxx);
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl do_another_thing
.type do_another_thing, @function
do_another_thing:
pushq %rbx #
# /app/example.c:4: void do_another_thing(int* restrict xxx) {
movq %rdi, %rbx # xxx, xxx
# /app/example.c:5: xxx_do_something(*xxx);
movl (%rdi), %edi # *xxx_5(D), *xxx_5(D)
call xxx_do_something #
# /app/example.c:6: xxx_do_something(*xxx);
movl (%rbx), %edi # *xxx_5(D), *xxx_5(D)
call xxx_do_something #
# /app/example.c:7: xxx_do_something(*xxx);
movl (%rbx), %edi # *xxx_5(D), *xxx_5(D)
# /app/example.c:8: }
popq %rbx #
# /app/example.c:7: xxx_do_something(*xxx);
jmp xxx_do_something #
.size do_another_thing, .-do_another_thing
.p2align 4
.globl entry
.type entry, @function
entry:
subq $24, %rsp #,
# /app/example.c:13: xxx_create(&xxx);
leaq 12(%rsp), %rdi #, tmp102
call xxx_create #
# /app/example.c:5: xxx_do_something(*xxx);
movl 12(%rsp), %edi # xxx,
call xxx_do_something #
# /app/example.c:6: xxx_do_something(*xxx);
movl 12(%rsp), %edi # xxx,
call xxx_do_something #
# /app/example.c:7: xxx_do_something(*xxx);
movl 12(%rsp), %edi # xxx,
call xxx_do_something #
# /app/example.c:18: }
xorl %eax, %eax #
addq $24, %rsp #,
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
@Eric Postpischil
In fact, the compiler will not know.
int func1(int *);
int func2(int);

int entry() {
    int arr[10];
    {
        int _arr[10];
        func1(_arr);
        __builtin_memcpy(arr, _arr, sizeof(arr));
    }
    for (int i = 0; i < sizeof(arr) / sizeof(int); i++) {
        func2(arr[i]);
    }
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl entry
.type entry, @function
entry:
pushq %rbp #
pushq %rbx #
subq $104, %rsp #,
# /app/example.c:12: func1(_arr);
leaq 48(%rsp), %rdi #, tmp103
movq %rsp, %rbx #, ivtmp.11
leaq 40(%rsp), %rbp #, _19
call func1 #
# /app/example.c:14: __builtin_memcpy(arr, _arr, sizeof(arr));
movdqa 48(%rsp), %xmm0 # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&_arr]
movq 80(%rsp), %rax # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&_arr]
movaps %xmm0, (%rsp) # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&arr]
movdqa 64(%rsp), %xmm0 # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&_arr]
movq %rax, 32(%rsp) # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&arr]
movaps %xmm0, 16(%rsp) # MEM <unsigned char[40]> [(char * {ref-all})&_arr], MEM <unsigned char[40]> [(char * {ref-all})&arr]
.p2align 4
.p2align 3
.L2:
# /app/example.c:18: func2(arr[i]);
movl (%rbx), %edi # MEM[(int *)_17], MEM[(int *)_17]
# /app/example.c:17: for (int i = 0; i < sizeof(arr) / sizeof(int); i++) {
addq $4, %rbx #, ivtmp.11
# /app/example.c:18: func2(arr[i]);
call func2 #
# /app/example.c:17: for (int i = 0; i < sizeof(arr) / sizeof(int); i++) {
cmpq %rbp, %rbx # _19, ivtmp.11
jne .L2 #,
# /app/example.c:22: }
addq $104, %rsp #,
xorl %eax, %eax #
popq %rbx #
popq %rbp #
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
The compiler will still allocate space to save it.
The best optimization method is to dereference each time func2(arr[i]) is called.
The compiler does indeed have the ability to do it:
int func1(int *);
int func2(int);
__attribute__((malloc)) int *func3();

int entry() {
    int *arr = func3();
    for (int i = 0; i < 10; i++) {
        func2(arr[i]);
    }
    return 0;
}
.file "example.c"
# GNU C23 (Compiler-Explorer-Build-gcc--binutils-2.44) version 15.2.0 (x86_64-linux-gnu)
# compiled by GNU C version 11.4.0, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.24-GMP
# GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
# options passed: -mtune=generic -march=x86-64 -g -g0 -Ofast -fno-asynchronous-unwind-tables
.text
.p2align 4
.globl entry
.type entry, @function
entry:
pushq %rbp #
pushq %rbx #
subq $8, %rsp #,
# /app/example.c:9: int *arr = func3();
call func3 #
movq %rax, %rbx # ivtmp.10, ivtmp.10
leaq 40(%rax), %rbp #, _20
.p2align 4
.p2align 3
.L2:
# /app/example.c:12: func2(arr[i]);
movl (%rbx), %edi # MEM[(int *)_18], MEM[(int *)_18]
# /app/example.c:11: for (int i = 0; i < 10; i++) {
addq $4, %rbx #, ivtmp.10
# /app/example.c:12: func2(arr[i]);
call func2 #
# /app/example.c:11: for (int i = 0; i < 10; i++) {
cmpq %rbp, %rbx # _20, ivtmp.10
jne .L2 #,
# /app/example.c:16: }
addq $8, %rsp #,
xorl %eax, %eax #
popq %rbx #
popq %rbp #
ret
.size entry, .-entry
.ident "GCC: (Compiler-Explorer-Build-gcc--binutils-2.44) 15.2.0"
.section .note.GNU-stack,"",@progbits
If you have this problem: Error: You don't have permission to access that port.
Windows has blocked the ports, and you have to restart the NAT service first:
net stop winnat
then :
net start winnat
after that :
python manage.py runserver
That is, we have to guess (assume) that the table contains columns (country, year, Life_Expectancy) and that the years 2000 and 2019 should be hardcoded.
I don't understand these "advice" things... Do you want a concrete answer, or a discussion?
That's wrong. A question should be clear, and therefore sample data is required. But well, if you refuse to help us help you, I also refuse to help. Fair enough.
I'm just asking how to make a certain kind of conditional SQL statement. All the information needed to provide context is provided.
I was able to sort it out; my ng config was not the issue, it was the GitHub workflow file.
Even if I explicitly specify a build command:
npm run build:qa
apparently Azure SWA still builds the app internally using the default config (npm run build).
So I had to add this line in the deployment step:
skip_app_build: true
And also ensure that app_location points to the build output location and that output_location is empty.
Please show both sample input data and expected result as markdown tables in your question.
I tried restarting the kernel; it still doesn't work.
The only way I can figure out is to make a new scrollbar thumb yourself and set the previous scrollbar's width to 0.
vite is not recognized; try npm install vite --save-dev to add it to devDependencies, and then first try to build locally using the command: npm run build
Case 2: check your build scripts; the build script has to run vite build. If it doesn't, correct it.
Conclusion: if it works locally, it will probably work at deployment too.
Happy coding...
The target (action) in a card game (of chance) is the one that has the best chance (odds) of succeeding. The target will vary with the type of "game"; as well as the properties required; e.g. range. One can be too generic (abstract) and lose all touch with the "game" itself.
It turns out that we would replace runtime creation with deep copying?
And then we would not control where the static data that serves as the creation template is, and where its copies with dynamic data obtained in the constructor are?
I actually ended up figuring this out. If you're in the same predicament, use an event named "input" with bubbles: true, dispatched from the element that is troubling you. For other elements it might not be the same, so use getEventListeners(yourElement) to find the listeners and their event names.
Do you really want a discussion? Or do you want a concrete answer?
We are currently upgrading to Spark 4.0.0, Scala 2.13 and Python 3.12, but Spark is not able to write a dataframe to Snowflake: it fails with ArrayStoreException (net.snowflake.spark.io.FileUploadResult) with the jdbc 3.26.1 and spark_snowflake_3.1_2.1.3 jars.
How can I resolve it?
Maybe I'm too dumb, but for me it just tells you that the function doesn't exist on that instance, so:
memberTarget.timeout
This instance object...
Maybe change the target to client and see what happens.
PS: I just created my account to start reading this forum.
@sirtao The first select requires a backtick, so the first argument should be moved to the first line. I also prefer the style where the command is on the first line and all args are indented at the same level. So the backticks are for consistency...
I would like the menu to switch to mobile mode at a breakpoint of 1200px instead of 992px, based on the number of menu items it has. How could I achieve this?
This looks more like something for https://unix.stackexchange.com/
You can add
<script src="{{ url_for('static', filename='scripts/index.js') }}"></script>
to your html template file,
it would also be good to have:
app = Flask(__name__, template_folder="templates", static_folder="static")
that is it :)
I recently developed a small library which validates and parses Telegram InitData: https://github.com/sanvew/telegram-init-data
Will be home in an hour, will delete or convert to a question.
@quuxplusone : "Is there no way to post an answer for this question?" No, there is no spoon. You may not answer - you may only reply.
Maybe your problem, or example problem, is a bit too simplified for folks. It's a perfectly valid spreadsheet problem. Let's assume that I posted this problem:
Given the letter in the header row of each column, how do I calculate its column count, inversely weighted by its row count? Consider the first header, B. It appears 4 times in that column, but each of those needs to be inversely weighted by the number of times it appears in the row: (1/2) + (1/3) + (1/3) + (1/4)
| B | A | C | C | A | B | C | B | A | A |
|---|---|---|---|---|---|---|---|---|---|
| B | B | C | C | C | A | ||||
| C | C | A | A | B | |||||
| A | B | A | B | A | B | B | |||
| B | B | B | C | C | |||||
| A | B | C | A | B | |||||
| B | A | A | B | B | |||||
| B | B | A | C | B | B | C | |||
| B | B | A | A | C |
People would jump on this problem.
Your problem is very similar, but simplified in the fact that your rows all have the same person, and now we want the row count inversely weighted by the column count.
| 9/3 | 9/4 | 9/5 | 9/6 | 9/7 | 9/8 | 9/9 | 9/10 | 9/11 | 9/12 |
|---|---|---|---|---|---|---|---|---|---|
| bob | bob | bob | bob | ||||||
| bob | bob | bob | bob | bob | |||||
| larry | larry | larry | larry | larry | |||||
| bob | bob | bob | bob | bob | |||||
| larry | larry | larry | larry | ||||||
| chuck | chuck | chuck | chuck | ||||||
| chuck | chuck | chuck | chuck | chuck | |||||
| bob | bob | bob | bob | bob |
Here's a formula that will accomplish this task:
=let(
hrsPerDay,9,
hourlyRate,20,
startTimes,A2:A9,
endTimes,B2:B9,
names,C2:C9,
map(startTimes,endTimes,names,
lambda(start,end,name,
hrsPerDay*hourlyRate*sum(map(sequence(end-start+1,1,start),
lambda(t,1/sumproduct(names=name,startTimes<=t,endTimes>=t)))))))
Which gives us the final results.
| start | end | name | cost |
|---|---|---|---|
| 9/3/25 | 9/6/25 | bob | $390 |
| 9/4/25 | 9/8/25 | bob | $360 |
| 9/4/25 | 9/8/25 | larry | $540 |
| 9/5/25 | 9/9/25 | bob | $360 |
| 9/5/25 | 9/8/25 | larry | $360 |
| 9/5/25 | 9/8/25 | chuck | $630 |
| 9/8/25 | 9/12/25 | chuck | $810 |
| 9/8/25 | 9/12/25 | bob | $690 |
If you ever need it on an old version, like Python 3.12 for example:
pyenv uninstall 3.12.12 # uninstall the current version
sudo apt install tk-dev # It includes all dependencies needed
pyenv install 3.12.12 # Run install again to compile
If the CAS fails you need to retry the operation.
int value;
do {
    value = shared_var;
} while (!cas(&shared_var, value, value + 1));
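To illustrate the retry loop, here is a Python sketch where a hypothetical cas() is simulated with a lock (a real CAS is a single atomic instruction): every thread retries until its compare-and-swap succeeds, so no increment is lost.

```python
import threading

shared_var = 0
_lock = threading.Lock()

def cas(expected, new):
    """Atomically: if shared_var == expected, set it to new and return True."""
    global shared_var
    with _lock:
        if shared_var == expected:
            shared_var = new
            return True
        return False

def increment(times):
    for _ in range(times):
        while True:              # retry until the CAS succeeds
            value = shared_var
            if cas(value, value + 1):
                break

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_var)  # 4000: no increments lost despite the contention
```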
I'm not an expert in bots with Python, but I guess this is the solution.
Since your TIMER() function is an async function, you have to use await to get its actual result instead of the coroutine object.
Try:
timer_result = await TIMER()
print(timer_result)
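As a runnable sketch of the same fix (the TIMER body here is a made-up stand-in):

```python
import asyncio

async def TIMER():
    # Hypothetical stand-in for the asker's coroutine
    await asyncio.sleep(0.01)
    return "elapsed"

async def main():
    timer_result = await TIMER()  # await yields the value, not a coroutine object
    print(timer_result)

asyncio.run(main())
```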
Why is it impossible? If there is no mutex / lock, then thread2 can update shared_var as thread1 is doing something else right?
I'll try to answer your question with another question.
If you drag the Capsule up 10 points, and then the View scales accordingly, and then the capsule lands underneath your new finger position... What is the Value.translation.height?
0?
10?
Technically, it's right where it began, underneath of your finger. But it took a trip to get there, so...
I think this confusion is the source of the jitters.
If this were true, then adding the value.translation.height to the viewSize ( instead of setting it equal to ) would solve the issue. And it does.
struct CustomSplitView: View {
    @State private var height: CGFloat = 400

    var body: some View {
        VStack(spacing: 0) {
            Color.blue
                .frame(height: height)
            Capsule()
                .fill(Color.secondary.opacity(0.5))
                .frame(width: 40, height: 6)
                .frame(height: 12)
                .gesture(DragGesture()
                    .onChanged(self.dragChanged)
                )
            Color.green
        }
    }

    private func dragChanged(value: DragGesture.Value) {
        self.height += value.translation.height // <-- here
        self.height = self.height.clamp(lower: 100, upper: 600)
    }
}

public extension Comparable {
    func clamp(lower: Self, upper: Self) -> Self {
        min(max(self, lower), upper)
    }
}
From uvicorn documentation:
Note
The --reload and --workers arguments are mutually exclusive. You cannot use both at the same time.
I created a new project using .NET 9 and followed this guide: https://learn.microsoft.com/en-us/previous-versions/xamarin/xamarin-forms/data-cloud/data/entity-framework and it works perfectly.
It seems a little anti-climactic, but I believe the issues revolved around not knowing .NET 8 MAUI was unsupported, and I may have had some hodgepodge code because I was trying too many different remedies.
I started working on this open-source Python library called Reduino. It basically simplifies your workflow and transpiles the Python code to Arduino C++.
Get Started with
pip install Reduino
Here is what a Led Blink example looks like in Reduino
from Reduino import target
from Reduino.Actuators import Led
from Reduino.Utils import sleep
target("COM4")
led = Led(7)
while True:
    led.toggle()
    sleep(3000)
If you really want a binary that doesn't dynamically link any libraries on macOS, you can use the approach here: https://stackoverflow.com/a/79806805/1925631
Doesn't use the -static flag, but otool -L gives an empty list :)
Result Value. If C is in the collating sequence defined by the codes specified in ISO/IEC 646:1991 (International Reference Version), the result is the position of C in that sequence; it is nonnegative and less than or equal to 127. The value of the result is processor dependent if C is not in the ASCII collating sequence.
For me the feature was disabled, and I enabled it by clicking on the "Open in VS Code" button in this link:
https://docs.github.com/en/copilot/how-tos/get-code-suggestions/get-ide-code-suggestions
Using the JSON parse and error-detection functions by @sln, a Find/Replace can pare the objects down to just the jobID and exec keys and values.
For more information on how these recursive functions work, and for more practical examples, see https://stackoverflow.com/a/79785886/15577665
The input should be a complete and valid JSON string.
Find:
(?:(?=(?&V_Obj)){(?=(?:(?&V_KeyVal)(?&Sep_Obj))*?\s*("jobID"\s*:\s*(?&V_Value)))(?=(?:(?&V_KeyVal)(?&Sep_Obj))*?\s*("exec"\s*:\s*(?&V_Value)))(?:(?&V_KeyVal)(?&Sep_Obj))+})(?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*}))
Replace: { $1, $2 }
https://regex101.com/r/QI0y4i/1
Output:
{
  "2597401": [
    { "jobID": "2597401", "exec": "ft.D.64" },
    { "jobID": "2597401", "exec": "cg.C.64" },
    { "jobID": "2597401", "exec": "mg.D.64" },
    { "jobID": "2597401", "exec": "lu.D.64" }
  ]
}
Rx Explained:
(?:
(?= (?&V_Obj) ) # Assertion : Must be a Valid JSON Object
{ # Open Obj
(?= # Lookahead: Find the 'jobID' key
(?: (?&V_KeyVal) (?&Sep_Obj) )*?
\s*
( "jobID" \s* : \s* (?&V_Value) ) # (1), capture jobID and value
)
(?= # Lookahead: Find the 'exec' key
(?: (?&V_KeyVal) (?&Sep_Obj) )*?
\s*
( "exec" \s* : \s* (?&V_Value) ) # (2), capture exec and value
)
(?: (?&V_KeyVal) (?&Sep_Obj) )+ # Get the entire Object
} # Close Obj
)
# JSON functions - NoErDet
# ---------------------------------------------
(?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*}))
Fixed by downgrading the psutil module to version 5.8.0.
Stack Overflow isn't really the place for architecture-pattern and data-structure discussions. There are multiple ways, and any answer would probably, at least in large part, be opinion-based... Why have separate classes at all? Why can't the config itself already implement that interface?
for key, value in testShelve.items():
    print(key, value)
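For a self-contained sketch (the file path is a throwaway temp file): a shelve is written like a dict and iterated like one.

```python
import os
import shelve
import tempfile

# Throwaway location for the shelve file
path = os.path.join(tempfile.mkdtemp(), "testShelve")

with shelve.open(path) as db:   # keys must be strings
    db["alpha"] = [1, 2, 3]
    db["beta"] = {"x": 1}

with shelve.open(path) as db:
    for key, value in sorted(db.items()):
        print(key, value)
```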
You can deploy LLMs in Azure AI Foundry using either Azure OpenAI models such as GPT-4, GPT-4o or open-source models like Llama 3, Phi-3, Mistral etc.
Some steps below:
Prerequisites
Have an Azure subscription: https://azure.microsoft.com/free
Request Azure OpenAI access: https://aka.ms/oai/access
Create a Hub
Go to https://ai.azure.com → Get started → Create a hub.
Choose a name, subscription, resource group, and region (e.g. East US).
Click Review + Create → Create.
Create a Project
Choose a name (e.g. llm-demo) → Create.
Deploy a Model
Option A: Azure OpenAI model
Go to Models + endpoints → Azure OpenAI → + Deploy model.
Choose a model (e.g. gpt-4o-mini) and click Deploy.
Option B: Open-source model
Test in Playground
If you want, I could also show you how to use your deployed models for downstream tasks.
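Once a model is deployed, you can call it over its REST endpoint. Below is a minimal sketch using only the Python standard library; the endpoint, deployment name, and API version are placeholders (assumptions), not values from any real portal — substitute your own from the deployment's details page:

```python
import json
from urllib import request

# Hypothetical values -- replace with your resource's endpoint,
# deployment name, and key from the Azure AI Foundry portal.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "gpt-4o-mini"      # your deployment name
API_VERSION = "2024-06-01"      # an assumed stable API version

def build_chat_request(prompt: str, api_key: str) -> request.Request:
    """Build (but do not send) a chat-completions request
    for an Azure OpenAI deployment."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return request.Request(
        url,
        data=body.encode(),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

req = build_chat_request("Hello!", api_key="YOUR-KEY")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose reply text sits under `choices[0].message.content`.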
Can you delete this? It is NOT advice but a real question, so ask it as a regular SO question.
When I first started working with Java, building a web application felt heavy. A lot of configuration, XML everywhere, and you had to manually set up servers. That all changed when I discovered Spring Boot.
If you're new to Spring or you want a clean starting point for backend development, this walks you through what Spring Boot is and how to build a simple REST API.
Spring Boot is a framework that makes it easy to create Java applications with almost no configuration.
It handles the heavy lifting for you:
No XML files
No manual server setup
No complicated dependency wiring
You just write your code, run the application, and Spring Boot takes care of the rest.
Here are the things that made me personally appreciate Spring Boot:
It has an embedded server.
Auto-configuration out of the box.
Very fast to get started.
Go to: start.spring.io (the Spring Initializr)
Choose:
Project: Maven
Language: Java
Dependencies: Spring Web
Download the project and open it in your IDE.
Spring Boot generates everything you need to start immediately.
Inside your project, create a new class:
HelloController.java
package com.example.demo.controllers;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello World!";
    }
}
That’s it.
@RestController tells Spring this class will handle HTTP requests
@GetMapping("/hello") exposes a GET endpoint at /hello
The method returns a simple string
You can run the project in two ways:
Simply run the main() method inside DemoApplication.java.
Open your terminal and run:
mvn spring-boot:run
The app starts on port 8080 by default.
Now open: http://localhost:8080/hello
You should see:
Hello World!
Spring Boot uses application.properties or application.yml to configure your app.
For example, to change the port:
server.port=2025
Or using YAML:
server:
port: 2025
Restart the app, and it will now run on http://localhost:2025.
If you're starting your Spring journey, here are the next topics you should take a look at:
Building CRUD APIs
Working with Spring Data JPA
Using profiles (dev, test, prod)
Global exception handling
Testing with JUnit and Mockito
You can try flutter_prunekit — it’s a Dart/Flutter analyzer that finds unused classes and methods.
I built and maintain it for exactly this kind of use case.
What happens to values above 128?
I still get the same problem. My page has two languages, and the WooCommerce settings in cart and checkout are in Turkish. But when I choose the other language, "DE", it still opens in the TR language. I could not find where the problem is.
With the MinGW compiler:
Source code: // me.c
#include <stdio.h>

int main() {
    printf("Hello Bull");
    return 0;
}
In Windows PowerShell or the command line, type:
gcc -S me.c
The output will be me.s.
Open me.s with a text editor; you will see the assembly code there.
Huh. Is there no way to post an answer for this question? All I see are these comment-replies. I'd answer if I could find the button.
Isn't that what move semantics is supposed to do, leave the source in a valid but indeterminate state? Meaning the object can be reused, but the data inside it can't.
How is this an advice question? This looks like standard Q&A.
Append the origin "youtube.com" as a link parameter and it should work again.
For the big buck example, you'd use
https://www.youtube-nocookie.com/embed/ScMzIvxBSi4?origin=youtube.com
Is this really caused because of multithreading? Or is it related to the scope of variables? I'm confused.
Try to call this, after changing the mainImageId to null, which should tell EF to do an update before trying to delete:
dbContext.Entry(imageSeries).Property(x => x.MainImageId).IsModified = true;
Also, use RemoveRange instead of the foreach:
dbContext.Images.RemoveRange(images);
Try
./vendor/bin/pest --bail -v
or
./vendor/bin/pest --bail --verbose
@dbush: The post says they want the compiler to keep the value in a register, not in the register used to pass it to the routine. The argument register would have to be reloaded for each call, but it can be from another register instead of from memory. The post explicitly notes %ebx would be reloaded.
Using @JohnGordon's idea and dividing the work across CPU cores might make the process faster.
If you are using the customtkinter library, you can also do this:
import customtkinter as ctk

app = ctk.CTk()
button = ctk.CTkButton(app, text="Centered Button")
button.place(relx=0.5, rely=0.5, anchor=ctk.CENTER)
app.mainloop()
but it should work just fine with anchor="center". This is just another method :)
You declare a boolean searching_for_a_0 = True, then iterate over the list. Inside the loop: if searching_for_a_0 and the item is 0, set searching_for_a_0 = False; otherwise (when not searching), if the item is greater than S, append it to the result and set searching_for_a_0 = True again. The alternative solution of dividing the list into sublists involves iterating over the list once to create the sublists and then iterating over each of them, so it is less efficient.
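The single-pass approach described above can be sketched as follows (the function name and sample data are mine, chosen for illustration):

```python
def first_large_after_each_zero(items, S):
    """Collect the first element greater than S that follows each 0."""
    result = []
    searching_for_a_0 = True
    for item in items:
        if searching_for_a_0:
            if item == 0:
                searching_for_a_0 = False   # found a 0, now watch for a large item
        elif item > S:
            result.append(item)
            searching_for_a_0 = True        # resume looking for the next 0
    return result

print(first_large_after_each_zero([1, 0, 7, 3, 0, 2, 9, 0, 4], S=5))  # [7, 9]
```

This does one pass and no extra allocations beyond the result list, which is where the efficiency gain over the sublist approach comes from.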
On Mac just type Cmd + ",", then Tools -> Terminal, and adjust the Fonts Settings section according to your needs.
You have to right-click on the pom.xml file, go down to Maven, and then select Sync Project. It should work afterwards, or you might see a notification about vulnerable dependencies.
Thank you guys, it's all completely working now. "Nate Eldridge" nailed it. I was shifting my bits in the wrong direction LOL
And I should be using ldrh, not ldrb, as ldrb only loaded a single byte instead of the full halfword.
I am puzzled here. I do exactly that because my fonts are way too small for a lecture room screen:
div(tableOutput("optim"),style="font-size:120%")
And it does absolutely nothing. No error message, nothing!
uvx pyclean . --debris
This works great for me. No issues and is the simplest without any mess.
@kelly, range of integers is any non-negative integer
@Chris, yes order does matter
I need to update controller. The main problem is to update controller from provider.
React itself doesn't do event delegation.
On the web, event delegation is done by react-dom, not by React itself.
React Native doesn't do any kind of event delegation like react-dom does; the devs have to implement delegation on their own.
I used this package and all my problems were fixed, but please make sure to change the input command for making the video:
ffmpeg_kit_flutter_new_gpl: ^1.6.5
Can you share the complete code, please? I want to implement this on a blog website related to NHS pay bands.
Rails.error.unexpected("Unexpected error") do
my_possible_error_code
end
The error is swallowed in production but not in test/development. However, the error is wrapped in an ActiveSupport::ErrorReporter::UnexpectedError and the original message is hidden, which makes the console/log output useless.
Is there a sweet spot where:
The error is swallowed in production
The error is raised in test/development
Or, what workaround can I use to get this behaviour?
I know this is an old question, but when I tried the method shown it failed.
I eventually found a command that successfully installed eyeD3, as below:
py -m pip install "eyeD3"
I found the above on https://packaging.python.org/en/latest/tutorials/installing-packages/
Showing your for-loop solution and code for generating realistic data would be useful.
When you set exactly 1920×1080 on Windows via cv2.VideoCapture using the DirectShow (CAP_DSHOW) backend, OpenCV negotiates the capture format differently than Linux’s V4L2 backend does.
If the driver’s DirectShow filter graph doesn’t expose that exact combination (MJPG @ 60 FPS @ 1920×1080) as a supported media type, it silently falls back to uncompressed RGB24 or YUY2 — which are huge and slow to transfer (e.g., ~373 MB/s for 1080p60 RGB), so you end up with ~1 FPS.
When you instead request a slightly off resolution like 1900×1080, Windows can’t match it exactly. The backend then asks the driver for the nearest valid format, and the driver internally selects MJPG @ 1920×1080, which is compressed — and therefore fast.
So, the difference is purely about which pixel format ends up being negotiated.
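If you hit this, one workaround (a sketch, not guaranteed for every driver) is to ask for MJPG explicitly via the FOURCC property before setting the resolution, so the negotiation starts from the compressed format. A fourcc code is just four ASCII bytes packed little-endian into one integer, which you can compute yourself:

```python
def fourcc(a, b, c, d):
    """Pack a four-character code the way OpenCV's VideoWriter_fourcc does:
    four ASCII bytes, little-endian, into one integer."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

MJPG = fourcc('M', 'J', 'P', 'G')
print(MJPG)  # 1196444237

# Usage with OpenCV (requires a camera, so shown here as comments;
# property ordering can matter with some DirectShow drivers):
# import cv2
# cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
# cap.set(cv2.CAP_PROP_FOURCC, MJPG)        # ask for compressed frames first
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
# cap.set(cv2.CAP_PROP_FPS, 60)
```

You can verify what was actually negotiated afterwards by reading `cap.get(cv2.CAP_PROP_FOURCC)` back and unpacking it the same way.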