Pyxl was the main culprit slowing down the process a lot; replacing it with fastexcel was very effective, and ditching pandas was absolutely worth it.
I'm late to the game. I recently came across this issue where my Laravel app, when it was initially built, used increments() method. However, Laravel is now using the id() method.
None of the answers provided here pointed out the main difference between the two.
increments() uses the INTEGER column type and id() uses the BIGINT column type in the database. INTEGER and BIGINT differ in how much they can store, but the other key thing is that when you create other tables with foreign keys pointing to id columns, the column types have to match.
The idea is to have a text only interface. I'm using ncurses to display info, etc.
Thanks a lot Dean and Ollie, I am able to achieve 3-4 secs for 700k records. I wouldn't have been able to do it without your guidance. I will work on optimising it more. Thanks a tonne once again.
I just did some more debugging and apparently making a FileWriter wiped the text file. I took it away and now my code works.
The issue for me was the PNG format; it had a transparent layer.
I simply converted it to JPG format and it was sent to review.
Just to let others know in case they are facing the same issue: you can try Kaggle if you face this issue in Colab.
Generally, you can find the different versions of WiX on its official GitHub page via the link below:
Prerequisites
Install WiX Toolset
Download from: https://wixtoolset.org/releases/
Install "WiX Toolset Visual Studio Extension" if using Visual Studio
Your WPF Project should be built and ready
What @deceze said is a good approach. Instead of updating the counter value every 10 seconds, just store the startDateTime somewhere (e.g., in .env or config.json). Then use this snippet:
<script>
const startDateTime = new Date('2025-11-10T00:00:00Z'); // example start time
const now = new Date();
const diffInSeconds = Math.floor((now - startDateTime) / 1000);
const counterValue = Math.floor(diffInSeconds / 10);
console.log(counterValue);
</script>
I ran into this exact issue with webscraper.io — the "next button" pagination is a pain when you have hundreds of pages.
I ended up switching to BrowserAct because it handles this automatically with natural language prompts. You basically tell it "loop through all pages by clicking next" and it does it without needing to manually map each page.
Here's a working example using their Reddit Scraper template — it automatically loops through Reddit posts (which use infinite scroll/next button navigation) and extracts all the data. Same logic works for any pagination.
I am also facing the same issue with my report, and I checked that all the filters in Google Analytics & Power BI are the same. Still, the total does not match.
This issue mainly occurs when mocks set up in the test cases are not actually used; you need to remove the unnecessary mocks.
@Robin Hossain, have you found a solution to this problem?
In my environment, there is rpds.cpython-310-x86_64-linux-gnu.so in the rpds directory.
It's for Python 3.10, but my environment was Python 3.12.
I changed the runtime to 3.10 and then it worked.
It works and lets me download if I try on JSFiddle (My browser blocks downloads from iframes)
Hi, I had this exact error a few minutes ago... It all disappeared when I created a virtual environment, activated it, installed dbt-core and the dbt-postgres adapter, and then ran my dbt command using the activated virtual environment.
Also, check your python version, older versions of dbt seem to have beef with some python versions https://docs.getdbt.com/faqs/Core/install-python-compatibility
Just like this:
bytes_string = b'3\x00\x02\x05\x15\x13GO\xff\xff\xff\xff\xff\xff\xff\xff'
num_string = bytes_string.hex()
print(num_string)
# 330002051513474fffffffffffffffff
# num = int(num_string, 16)  # parse the hex string as base 16
Here are some good resources focused on Python application development that will be useful.
1. Create GUI Application with Python and Qt6 by Martin Fitzpatrick.
2. Mastering GUI Programming with Python by Alan D. Moore.
3. Learning Python Application Development by Ninad Sathaye.
4. Core Python Application Programming by Wesley J. Chun.
5. Hands-On Enterprise Application Development with Python by Riaz Ahmed
Hope this will help.
Try adding these in your Runner.entitlements after <dict>
<key>aps-environment</key>
<string>development</string>
Still doesn't work? Make sure you complete the 3rd step
https://firebase.flutter.dev/docs/messaging/apple-integration/
This is what I use:
/* (A Name Here if you want ;3) */
var code = 'Hello, World!';
alert(code);
/* (An optional Ending sentence.) */
If you don't want a name or ending then just make them both /**/.
And if you want to commentanize it, then just remove the second forward slash:
/**/ <-- This one! :3
/**/
So this would be a comment:
/** <-- '/' missing!
alert(1+2);
/**/
You can also replace the last /**/ with //*/ if you want ;\
I like this a lot because it is toggleable by 1 (one) character, and it also looks pretty nice imo. Yeah! ;3
We are here to provide you the best solution regarding your query; CloudBik offers tenant-to-tenant migration in very easy steps.
Advantage: users can also migrate by themselves, and the CloudBik solution provides proper guidance to the user.
CloudBik is a very secure tool.
It provides migration with zero downtime and without any data loss.
Hello and welcome
I’m inviting you to join CS50's Introduction to Computer Science with Python
The exclusive group where great minds come together to learn, share ideas, and grow together. It’s a space for engaging discussions, valuable insights, and real connections with like-minded people. Don’t miss out click the link below to join and introduce yourself once you’re in! https://chat.whatsapp.com/JXWEtGWuHLvKeGrTUSORkP?mode=ems_copy_t
Thanks for the reply! I'm adding the cost using the AddCost API. But specifically, there are L1NormCost, L2NormCost, QuadraticCost, and some other cost classes in Drake. I'm trying to find out how to construct a cost in my problem and impose it using AddCost.
This would be possible, with a macOS "Installer Plug-In", however it will not be an easy process.
Installer Plug-Ins allow you to create custom actions that are shown during the install process, as an additional step (such as after the "Read Me" or "License" step). However, in recent years Apple has not provided any documentation regarding the creation of custom installer plug-ins, likely because of the possible security risks they could expose by running arbitrary code. This means that while they are still fully supported as of macOS Tahoe, development has long-since stopped on them, and they could very well be removed in the future.
You can find examples on GitHub such as this registration code installer plug-in, but the common theme among any examples you come across will likely be how dated they are. As a result of this, the sample code is in Objective-C using Storyboards. You could possibly write the configuration data to a .plist file somewhere on disk, and then retrieve it later from your installed application. It may be possible to migrate this code to Swift, but this would require additional effort on your part.
I would recommend following the Installer Plug-In tutorial by Stéphane Sudre (the individual behind the incredibly useful Packages app). The resource was last updated in 2012, however almost nothing has changed about installer plug-ins since this guide was written.
You could technically prompt for user input via osascript in a pre/postinstall script, however this would likely result in an even worse end-user experience and could lead to many issues.
Following up, I just need more clarification: was there ever an EntityTypeConfiguration<T> base class? Whatever happened to it? Analogous to a Fluent NHibernate ClassMap<T>, for example. No biggie I guess, but it would be interesting to have such an enriched base-class experience.
I'm a newbie.
When I use the Robinson (54030) projection for a world map I face the same problem: polygons close themselves from one side to the other.
Also, the second scale in km/miles shows as invalid when using the Robinson/Wagner VII projection. Please help me figure out how to rectify this. I have tried many ways, but it hasn't been solved.
Thanks
There is an issue coming up: it appeared when I installed react-native-reanimated. What should I do about it?
The PermissionError: [Errno 13] Permission denied usually means your Python code (or sub-agent) doesn’t have the proper rights or file path access to write to the target file.
Here are the most common causes and fixes:
Make sure your sub-agent is writing to a path it actually has access to.
file_path = "/path/to/output.txt"
with open(file_path, "w") as f:
    f.write("Hello World!")
If this path is inside a restricted system folder (e.g., /root, C:\Program Files, etc.), you’ll get Permission denied.
Fix:
Use a user-writable path like /tmp, ./data/, or os.getcwd()
Example:
import os
file_path = os.path.join(os.getcwd(), "output.txt")
with open(file_path, "w") as f:
    f.write("Works fine!")
Check the data types inferred by Glue when creating the table.
If the “Gender” column was inferred as a string, Glue DQ may treat blanks as valid values.
You can manually adjust the schema in the Glue Catalog or apply a schema mapping transform to ensure null handling works properly.
It's not a feature of the VS Code terminal, but of PowerShell.
You can run Set-PSReadLineOption -PredictionSource None in PowerShell and predictive IntelliSense will turn off.
BTW, this question is off topic. You should have visited Super User on Stack Exchange.
When a Next.js Server Action receives a 401 Unauthorized response from a service like Google Cloud IAP, Next.js's underlying fetch mechanism may not automatically throw an error in the client-side code when used with Server Actions, leading to the observed silent failure and undefined result [1, 2]. This behavior is a known characteristic of how Next.js handles certain server action responses, especially in specific deployment configurations.
Here is a breakdown of why this happens and recommended approaches to handle session expiration:
Why Doesn't Next.js Throw an Error?
The primary reason for the silent failure lies in how Next.js handles the response from the server action's underlying network request:
Server Actions use Fetch: Server actions in Next.js utilize the fetch API under the hood [2].
Next.js Response Handling: Next.js intercepts the response for Server Actions. If the response is a 401, the framework might be processing it in a way that prevents it from bubbling up as a standard JavaScript error that can be caught by the client-side try/catch block [2]. Instead of an error, the result variable is simply undefined.
IAP's Role: The IAP intercepts the request and returns a 401 response before the request even reaches your server action logic. The browser receives the 401, but the Next.js client-side runtime interprets this in a non-error-throwing manner for this specific interaction [1].
How to Detect the Failure on the Client Side
Since the try/catch block fails to catch the error, you need to implement explicit checks within your client component or the server action itself:
1. Check for undefined result in the Client Component
The simplest way is to check if the result is undefined and handle it as an unauthorized state. This approach works because in the broken scenario, the result is always undefined [1].
'use client';
import { myServerAction } from './actions';
export default function MyComponent() {
const handleClick = async () => {
try {
const result = await myServerAction();
// Explicitly check for an undefined result
if (result === undefined) {
console.error('Session expired or unauthorized');
// Trigger a re-authentication flow or display a message
return;
}
console.log('Result:', result);
} catch (err) {
console.error('Caught error:', err);
}
};
// ...
}
2. Implement a Redirect or Session Check in the Server Action
You can add logic within your server action to manually check the session or authentication status and return a specific, informative object.
'use server';
export async function myServerAction() {
// Check auth status here before any main logic
const isAuthenticated = checkSessionStatus(); // Replace with actual session check
if (!isAuthenticated) {
// Return a specific error object
return { success: false, message: 'Unauthorized or session expired' };
}
// Some logic here
return { success: true, message: 'Hello from server' };
}
Then, on the client, check the returned object's properties:
// Client side
const result = await myServerAction();
if (!result.success) {
console.error(result.message);
// Handle unauthorized state
}
Recommended Approach for Handling Session Expiration with IAP
The most robust approach involves a combination of client-side detection and a mechanism to force re-authentication:
Use Client-Side Redirection: The standard IAP flow expects a browser redirect to the Google login page when a 401/403 occurs. However, Server Actions use XHR/fetch requests, which don't automatically trigger a browser-level navigation.
Explicitly Force Re-authentication:
When the client-side code detects an undefined result (as shown in method 1 above), it should assume the session is invalid.
The best user experience is to then force a full page reload or navigate the user to a known protected URL to trigger the IAP login flow.
// Client side
if (result === undefined) {
console.log('Session expired, redirecting to login...');
// Navigating to the current page will trigger IAP's redirect
window.location.reload();
}
Consider a Custom Fetch Wrapper (Advanced): If you find yourself needing a more generic solution across many server actions, you could create a custom utility function that wraps the server action call with enhanced error handling. However, the first two methods are usually sufficient and less complex.
By explicitly checking the result of the server action for undefined on the client side, you can reliably detect IAP's 401 responses and implement the necessary re-authentication flow.
For Material UI 7 you add the colors under colorSchemes in the theme
https://mui.com/material-ui/customization/palette/#color-schemes
const theme = createTheme({
colorSchemes: {
light: {
palette: {
primary: {
main: '#FF5733',
},
},
},
dark: {
palette: {
primary: {
main: '#E0C2FF',
},
},
},
},
});
Web tracking standard/API:
🔹 Google Analytics Measurement Protocol – main standard for sending tracking data (see the sketch after this list).
🔹 Google Tag Manager (GTM) – tool for managing tracking tags.
🔹 Conversion APIs – server-side tracking used by Facebook, Google Ads, TikTok, etc.
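For a concrete flavor of the first item, here is a rough sketch of a server-side hit to the GA4 Measurement Protocol in Python; the measurement ID, API secret, and client ID are placeholders you would replace with your own values:
import requests

# Placeholder credentials - replace with your GA4 measurement ID and API secret
MEASUREMENT_ID = "G-XXXXXXXXXX"
API_SECRET = "your_api_secret"

def send_event(client_id: str, name: str, params: dict) -> int:
    """Send a single event to the GA4 Measurement Protocol collect endpoint."""
    url = (
        "https://www.google-analytics.com/mp/collect"
        f"?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}"
    )
    payload = {"client_id": client_id, "events": [{"name": name, "params": params}]}
    return requests.post(url, json=payload, timeout=10).status_code

# Example: record a purchase-like event for one (placeholder) client
print(send_event("555.1234567890", "purchase", {"value": 19.99, "currency": "USD"}))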
In the identity server project, you need to register the client application. This is done in the application's seed. You can add an additional registration apart from the one that comes by default.
On the client side, you should reference the identity server as you’re already doing; however, I noticed that some configurations are not entirely correct.
To enable the client application to obtain the current user’s information, an extra step may be required — adding the scopes in both the identity server and the client — and in the client, adding a claim mapping so that the user information is displayed correctly when you use the current user.
This is happening because you're plotting the entire graph.
If you plot only the first 8 seconds, you'll probably get the result you want.
I found the solution myself.
The answer provided in the following related question solved my problem: Autodesk Platform Services - ACC - Get Roles IDs
I just discovered that I can copy my files from my old phone to my laptop, then, with my new phone connected to Android studio, I can drag/drop the file from Window File Explorer directly into the Android file explorer!
Problem solved.
Then you'll need to modify your app to request all-files access https://developer.android.com/training/data-storage/manage-all-files. Officially, the Play Store will only grant it to specific apps, which is why I believe it's simpler to just modify your app to accept shared files instead of going through the hoops (unless it's not a published app).
If I understand your reply correctly, it doesn't address what I'm trying to do. I don't have multiple apps trying to share a file.
On my previous phone, some of my apps created/read/updated app-specific files. Those files were located in "Internal Storage" (not in any subfolder of Internal Storage). As a result, those files were accessible from both my phone's file manager and my PC's file manager (when connected to my phone) if I needed to copy/delete/edit them from outside my apps.
It's my understanding that, when I move my apps to the new phone, the apps (which still need to use the info in those files) can only access files that are in "/data/user/0/myApp/files". So I need to copy my files from "Internal Storage" on my previous phone to "/data/user/0/myApp/files" on my new phone.
I guess my first question should be: Is there a way for my apps on my new phone to access files in "Internal Storage"? If so, then I could simply copy the files over to my new phone. But, if my apps can't access "Internal Storage", then how can I copy my files into "/data/user/0/myApp/files" on my new phone so my apps can access them?
Does this clarify my question?
@Wicket - I appreciate your replies.
With that said, I can't think of a way to reduce scope on either of the two issues and still have the Add-On do what it is supposed to do.
Reading from Sheets. Seems like I need the readonly for Sheets to get my data. I can still use their picker to pick the spreadsheet, but to read the data, I'll need sheets readonly.
For your comment on the slides currentonly scope: I love this idea and I want to implement it so users won't be as nervous about the Add-On... however, I cannot think of any way to put pie shapes with varying angles into a Slides presentation with that scope. There is no way to do it with the API; I tried a number of things and researched here and elsewhere. I finally realized I could do it with a public template and was really happy about my idea working... and now I'm realizing that won't work because of openById, even though it's not the user's.
I think I'll have to appeal to the Google team and see what they say. They told me to post here first and from what I'm seeing there aren't any ways around it. I need to have my app do less for narrower scopes or appeal for my original scope request.
There are 2 possibilities to do it:
Rebuilding like this: NIXOS_LABEL="somelabel" nixos-rebuild switch
Configuring system.nixos.label and optionally system.nixos.tags in configuration.nix (See the links for full info)
When you use the 2 possibilities at the same time, the first one will get priority.
Important: Labels don't support all types of chars. Spaces won't work.
It's better to install all significant dependencies explicitly. If you want a better way to manage similar dependencies across subpackages, you could use pnpm's catalogs feature.
extension DurationExt on Duration {
String format() {
return [
inHours,
inMinutes.remainder(60),
inSeconds.remainder(60),
].map((e) => e.toString().padLeft(2, '0')).join(':');
}
}
I know this was a while ago, but I have a lead for you, as I think I've just fixed this issue at my end (same scenario: GitHub Codespaces with Snowflake, using SSO).
I changed this setting in vscode from "hybrid" to "process" (note that process reports as being the default)
remote.autoForwardPortsSource
I was looking for the same thing, and I managed to re-implement Ziggy Routes, and I removed Wayfinder since it's still in beta and I don't know exactly how to use it...
I created a repository, but I forgot the name because I have several. I'll find it and send it to you by email: [email protected]
Or contact me on GitHub: github.com/casimirorocha
Off topic. NB Please lay off the boldface. It doesn't help.
A frame can be applied to a menu item like this:
Menu("Options") {
Button("Option 1") {
}
Button("Option 2") {
}
}
.frame(width: 50)
The output will be as below. (Please ignore the button).
🐞 Problem: Cucumber report generation fails
When trying to generate a report with maven-cucumber-reporting, the following message appears:
net.masterthought.cucumber.ValidationException: No report file was added!
📌 Likely cause
This message means the plugin did not find any valid JSON file to generate the report from. The cause is usually one of:
- Cucumber tests were not run before the verify phase
- The file target/cucumber.json was not created because the tests failed or are missing
- An incorrect or missing path in the pom.xml configuration
✅ Suggested solutions
1. Run the tests before generating the report
mvn clean test
mvn verify
> Make sure mvn test produces the cucumber.json file in the target folder.
2. Check that the JSON file exists
After running the tests, make sure the file is there:
ls target/cucumber.json
3. Correct @CucumberOptions configuration
@CucumberOptions(
features = "src/test/resources/features",
glue = {"steps"},
plugin = {"pretty", "json:target/cucumber.json"},
monochrome = true,
publish = true
)
4. Correct pom.xml configuration
<plugin>
  <groupId>net.masterthought</groupId>
  <artifactId>maven-cucumber-reporting</artifactId>
  <version>5.7.1</version>
  <executions>
    <execution>
      <id>execution</id>
      <phase>verify</phase>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <projectName>cucumber-gbpf-graphql</projectName>
        <skip>false</skip>
        <outputDirectory>${project.build.directory}</outputDirectory>
        <inputDirectory>${project.build.directory}</inputDirectory>
        <jsonFiles>
          <param>/*.json</param>
        </jsonFiles>
        <checkBuildResult>false</checkBuildResult>
      </configuration>
    </execution>
  </executions>
</plugin>
🧪 Manual test (optional)
File reportOutputDirectory = new File("target");
List<String> jsonFiles = Arrays.asList("target/cucumber.json");
Configuration config = new Configuration(reportOutputDirectory, "Project Name");
ReportBuilder reportBuilder = new ReportBuilder(jsonFiles, config);
reportBuilder.generateReports();
🧠 Additional notes
- Make sure the .feature files exist and are actually executed
- Check that the test classes use @RunWith(Cucumber.class) or @Cucumber depending on the JUnit version
- Use mvn clean test verify as a single command to guarantee the correct order
> 💬 If the problem persists, check the execution logs (target/surefire-reports) or enable debug output in Maven for deeper details.
To expand on previous answers, you can get a nice re-usable Group component similar to the one in Mantine like this:
import { View, ViewProps } from "react-native";
export function Group(props: ViewProps) {
return <View {...props} style={[{ flexDirection: "row" }, props.style]} />;
}
Assuming your project is using TypeScript, the ViewProps usage above allows passing through any other properties and preserves type hints; spreading props before style ensures the row direction and any passed-in style are merged rather than overwritten.
Yes. It was impossible to switch directly. So, I made a working switch. Here's the fix:
A post on retrocomputing gives more details. Note that LOADALL apparently couldn't do it, but was wrongly rumored to be able to:
Pm32 -> pm16 -> real 16 (wrap function caller) -> real 16 ( the function call) -> pm32 (resume 32) -> ret to original caller.
uint16_t result = call_real_mode_function(add16_ref, 104, 201); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result);
uint16_t result2 = call_real_mode_function(complex_operation, 104, 201, 305, 43); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result2);
// Macro wrapper: automatically counts number of arguments
#define call_real_mode_function(...) \
call_real_mode_function_with_argc(PP_NARG(__VA_ARGS__), __VA_ARGS__)
// Internal function: explicit argc
uint16_t call_real_mode_function_with_argc(uint32_t argc, ...) {
bool optional = false;
if (optional) {
// This is done later anyway. But might as well for now
GDT_ROOT gdt_root = get_gdt_root();
args16_start.gdt_root = gdt_root;
uint32_t esp_value;
__asm__ volatile("mov %%esp, %0" : "=r"(esp_value));
args16_start.esp = esp_value;
}
va_list args;
va_start(args, argc);
uint32_t func = va_arg(args, uint32_t);
struct realmode_address rm_address = get_realmode_function_address((func_ptr_t)func);
args16_start.func = rm_address.func_address;
args16_start.func_cs = rm_address.func_cs;
args16_start.argc = argc - 1;
for (uint32_t i = 0; i < argc; i++) {
args16_start.func_args[i] = va_arg(args, uint32_t); // read promoted uint32_t
}
va_end(args);
return pm32_to_pm16();
}
GDT16_DESCRIPTOR:
dw GDT_END - GDT_START - 1 ;limit/size
dd GDT_START ; base
GDT_START:
dq 0x0
dq 0x0
dq 0x00009A000000FFFF ; code
dq 0x000093000000FFFF ; data
GDT_END:
section .text.pm32_to_pm16
pm32_to_pm16:
mov eax, 0xdeadfac1
; Save 32-bit registers and flags
pushad
pushfd
push ds
push es
push fs
push gs
; Save the stack pointer in the first 1mb (first 64kb in fact)
; So its accessible in 16 bit, and can be restored on the way back to 32 bit
sgdt [args16_start + GDT_ROOT_OFFSET]
mov [args16_start + ESP_OFFSET], esp ;
mov ax, ss
mov [args16_start + SS_OFFSET], ax ;
mov esp, 0 ; in case i can't change esp in 16 bit mode later. Don't want the high bit to fuck us over
mov ebp, 0 ; in case i can't change esp in 16 bit mode later. Don't want the high bit to fuck us over
cli
lgdt [GDT16_DESCRIPTOR]
jmp far 0x10:pm16_to_real16
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) int16_t complex_operation(uint16_t a, uint16_t b, uint16_t c, uint16_t d) {
return 2 * a + b - c + 3 * d;
}
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) uint16_t add16_ref(uint16_t a, uint16_t b) {
return 2 * a + b;
}
resume32:
; Restore segment registers
mov esp, [args16_start + ESP_OFFSET]
mov ax, [args16_start + SS_OFFSET]
mov ss, ax
mov ss, ax
pop gs
pop fs
pop es
pop ds
; Restore general-purpose registers and flags
popfd
popad
; Retrieve result
movzx eax, word [args16_start + RET1_OFFSET]
; mov eax, 15
ret
The struct is located in the first 64kb of memory, to allow multi-segment data passing.
typedef struct __attribute__((packed)) Args16 {
GDT_ROOT gdt_root;
// uint16_t pad; // (padded due to esp wanting to)
uint16_t ss;
uint32_t esp;
uint16_t ret1;
uint16_t ret2;
uint16_t func;
uint16_t func_cs;
uint16_t argc;
uint16_t func_args[13];
} Args16;
To see a simpler version of this:
commit message: "we are so fucking back. Nicegaga"
Commit date: Nov 7, 3:51 am
(gaga typo accidentally typed)
hash: 309ca54630270c81fa6e7a66bc93
and a more modern and cleaned (The one with the code show above):
commit message: Changed readme.
commit date: Sun Nov 9 18:09:16
commit hash: a2058ca7e3f99e92ea7c76909cc3f7846674dc83
====
Hm, I'm not seeing the used key parts/key length be what it should be. Looks like it is using the sports id as a filter but not actually part of the index lookup. Can you please include output of show create table Bet; and show create table BetSelection;?
Just in case someone searches for the answer like me.
As @jhasse said it is really easy with Clang. All you need to do is just to build clang with openmp runtime support, so working set of build commands would be like
git clone https://github.com/llvm/llvm-project
cd llvm-project
mkdir build
cd build
cmake -DLLVM_ENABLE_PROJECTS="clang;lld" -DLLVM_ENABLE_RUNTIMES="openmp;compiler-rt" -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_USE_SANITIZER="" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/llvm -G "Unix Makefiles" ../llvm
in the build directory inside the llvm-project one. Then you can also run the install step. Or you really can dive into a separate OpenMP build.
See also here (there is an in-build tool called Archer that was mentioned by @Simone Atzeni).
onclick="location.href='About:blank';"
*pseudo code
If both positive:
ABS(a-b) or MAX(a,b) - MIN(a,b)
If both negative:
ABS(ABS(a) - ABS(b)) or MAX(a,b) - MIN(a,b)
If one positive and one negative:
MAX(a,b) - MIN(a,b)
Hence for all situations:
MAX(a,b) - MIN(a,b)
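In Python, for example, the rule above collapses to a one-liner (a plain illustration, independent of any spreadsheet or SQL dialect):
def distance(a: float, b: float) -> float:
    """Absolute difference between two numbers, regardless of sign."""
    return max(a, b) - min(a, b)  # equivalent to abs(a - b)

# Works for all sign combinations
print(distance(7, 3))    # 4
print(distance(-7, -3))  # 4
print(distance(-7, 3))   # 10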
Don't know why you have an error, but you have an issue with your url
There is a missing & between response_type and scope
response_type=code&scope=user_profile,user_media
It may be only a typo.
I figured it out. The search API string for the next call simply needs to be appended with "&nextPageToken=" and then the token, keeping all of the same search criteria and returned fields.
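As a generic illustration of that token-based pagination pattern, here is a sketch using the requests library; the endpoint, field names, and the exact token parameter name are placeholders that depend on the actual API (this sketch uses the parameter name from the answer above):
import requests

BASE_URL = "https://example.com/search"          # placeholder endpoint
params = {"q": "query", "fields": "id,title"}    # same search criteria on every call

items, token = [], None
while True:
    page_params = dict(params)
    if token:
        page_params["nextPageToken"] = token     # append the token from the previous response
    data = requests.get(BASE_URL, params=page_params, timeout=10).json()
    items.extend(data.get("items", []))
    token = data.get("nextPageToken")
    if not token:                                # no token means we've reached the last page
        break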
STEP 1. Find the Dirt.
Start data cleaning by determining what is wrong with your data.
Look for the following:
Are there rows with empty values? Entire columns with no data? Which data is missing and why?
How is data distributed? Remember, visualizations are your friends. Plot outliers. Check distributions to see which groups or ranges are more heavily represented in your dataset.
Keep an eye out for the weird: are there impossible values? Like “date of birth: male”, “address: -1234”.
Is your data consistent? Why are the same product names written in uppercase and other times in camelCase?
STEP 2: SCRUB THE DIRT
Missing Data
Outliers
Contaminated Data
Inconsistent Data: You have to expect inconsistency in your data.
Especially when there is a higher possibility of human error (e.g. when salespeople enter the product info on proforma invoices manually).
The best way to spot inconsistent representations of the same elements in your database is to visualize them.
Plot bar charts per product category.
Do a count of rows by category if this is easier.
When you spot the inconsistency, standardize all elements into the same format.
Humans might understand that ‘apples’ is the same as ‘Apples’ (capitalization) which is the same as ‘appels’ (misspelling), but computers think those three refer to three different things altogether.
Lowercasing as default and correcting typos are your friends here.
Invalid Data
Duplicate Data
Data Type Issues
Structural Errors
The majority of data cleaning is running reusable scripts, which perform the same sequence of actions. For example: 1) lowercase all strings, 2) remove whitespace, 3) break down strings into words.
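For example, a minimal sketch of such a reusable script with pandas (the column name and data are made up):
import pandas as pd

def clean_text_column(series: pd.Series) -> pd.Series:
    """1) lowercase all strings, 2) remove whitespace, 3) break down strings into words."""
    return (
        series.astype(str)
              .str.lower()   # 1) lowercase
              .str.strip()   # 2) remove surrounding whitespace
              .str.split()   # 3) break down strings into words
    )

df = pd.DataFrame({"product": ["  Red Apples ", "GREEN apples", "red apples"]})
df["product_tokens"] = clean_text_column(df["product"])
print(df)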
Problem discovery. Use any visualization tools that allow you to quickly visualize missing values and different data distributions.
Identify the problematic data
Clean the data
Remove, encode, fill in any missing data
Remove outliers or analyze them separately
Purge contaminated data and correct leaking pipelines
Standardize inconsistent data
Check if your data makes sense (is valid)
Deduplicate multiple records of the same data
Foresee and prevent type issues (string issues, DateTime issues)
Remove engineering errors (aka structural errors)
Rinse and repeat
HANDLING MISSING VALUES
The first thing I do when I get a new dataset is take a look at some of it. This lets me see that it all read in correctly and gives me an idea of what's going on with the data. In this case, I'm looking to see if there are any missing values, which will be represented with NaN or None.
nfl_data.sample(5)
Ok, now we know that we do have some missing values. Let's see how many we have in each column.
# get the number of missing data points per column
missing_values_count = nfl_data.isnull().sum()
# look at the # of missing points in the first ten columns
missing_values_count[0:10]
That seems like a lot! It might be helpful to see what percentage of the values in our dataset were missing to give us a better sense of the scale of this problem:
# how many total missing values do we have?
total_cells = np.prod(nfl_data.shape)
total_missing = missing_values_count.sum()
# percent of data that is missing
(total_missing/total_cells) * 100
Wow, almost a quarter of the cells in this dataset are empty! In the next step, we're going to take a closer look at some of the columns with missing values and try to figure out what might be going on with them.
One of the most important questions you can ask yourself to help figure this out is:
Is this value missing because it wasn't recorded or because it doesn't exist?
If a value is missing because it doesn't exist (like the height of the oldest child of someone who doesn't have any children), then it doesn't make sense to try to guess what it might be.
These values you probably do want to keep as NaN. On the other hand, if a value is missing because it wasn't recorded, then you can try to guess what it might have been based on the other values in that column and row.
# if relevant
# replace all NA's with 0
subset_nfl_data.fillna(0)
# replace all NA's with the value that comes directly after it in the same column,
# then replace all the remaining NA's with 0
subset_nfl_data.fillna(method = 'bfill', axis=0).fillna(0)
# The default behavior fills in the mean value for imputation.
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer()
data_with_imputed_values = my_imputer.fit_transform(original_data)
----------
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib.pyplot as plt
# return a dataframe showing the number of NaNs and their percentage
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum() / df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
# replace NaNs with 0
df.fillna(0, inplace=True)
# replace NaNs with the column mean
df['column_name'].fillna(df['column_name'].mean(), inplace=True)
# replace NaNs with the column median
df['column_name'].fillna(df['column_name'].median(), inplace=True)
# linear interpolation to replace NaNs
df['column_name'].interpolate(method='linear', inplace=True)
# replace with the next value
df['column_name'].fillna(method='backfill', inplace=True)
# replace with the previous value
df['column_name'].fillna(method='ffill', inplace=True)
# drop rows containing NaNs
df.dropna(axis=0, inplace=True)
# drop columns containing NaNs
df.dropna(axis=1, inplace=True)
# replace NaNs depending on whether it's a numerical feature (k-NN) or categorical (most frequent category)
from sklearn.impute import SimpleImputer, KNNImputer
missing_cols = df.isna().sum()[lambda x: x > 0]
for col in missing_cols.index:
    if df[col].dtype in ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']:
        imputer = KNNImputer(n_neighbors=5)
        # alternatives:
        # imputer = SimpleImputer(strategy='mean')  # or 'median', 'most_frequent', or 'constant'
        # imputer = SimpleImputer(strategy='constant', fill_value=0)  # replace with 0
        df[col] = imputer.fit_transform(df[col].values.reshape(-1, 1))
        # if test set
        # df_test[col] = imputer.transform(df_test[col].values.reshape(-1, 1))
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
        # if test set
        # df_test[col] = df_test[col].fillna(df_test[col].mode().iloc[0])
PARSING DATES
https://strftime.org/ Some examples:
1/17/07 has the format "%m/%d/%y"
17-1-2007 has the format "%d-%m-%Y"
# create a new column, date_parsed, with the parsed dates
landslides['date_parsed'] = pd.to_datetime(landslides['date'], format = "%m/%d/%y")
One of the biggest dangers in parsing dates is mixing up the months and days. The to_datetime() function does have very helpful error messages, but it doesn't hurt to double-check that the days of the month we've extracted make sense
# get the day of the month from the date_parsed column
day_of_month_landslides = landslides['date_parsed'].dt.day
# remove na's
day_of_month_landslides = day_of_month_landslides.dropna()
# plot the day of the month
sns.distplot(day_of_month_landslides, kde=False, bins=31)
READING FILES WITH ENCODING PROBLEMS
# try to read in a file not in UTF-8
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv")
# look at the first ten thousand bytes to guess the character encoding
import chardet
with open("../input/kickstarter-projects/ks-projects-201801.csv", 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))
# check what the character encoding might be
print(result)
So chardet is 73% confidence that the right encoding is "Windows-1252". Let's see if that's correct:
# read in the file with the encoding detected by chardet
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv", encoding='Windows-1252')
# look at the first few lines
kickstarter_2016.head()
INCONSISTENT DATA
# get all the unique values in the 'City' column
cities = suicide_attacks['City'].unique()
# sort them alphabetically and then take a closer look
cities.sort()
cities
Just looking at this, I can see some problems due to inconsistent data entry: 'Lahore' and 'Lahore ', for example, or 'Lakki Marwat' and 'Lakki marwat'.
# convert to lower case
suicide_attacks['City'] = suicide_attacks['City'].str.lower()
# remove trailing white spaces
suicide_attacks['City'] = suicide_attacks['City'].str.strip()
It does look like there are some remaining inconsistencies: 'd. i khan' and 'd.i khan' should probably be the same.
I'm going to use the fuzzywuzzy package to help identify which string are closest to each other.
# get the top 10 closest matches to "d.i khan"
import fuzzywuzzy
from fuzzywuzzy import process, fuzz
matches = fuzzywuzzy.process.extract("d.i khan", cities, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
# take a look at them
matches
We can see that two of the items in the cities are very close to "d.i khan": "d. i khan" and "d.i khan". We can also see that "d.g khan", which is a separate city, has a ratio of 88. Since we don't want to replace "d.g khan" with "d.i khan", let's replace all rows in our City column that have a ratio of > 90 with "d. i khan".
# function to replace rows in the provided column of the provided dataframe
# that match the provided string above the provided ratio with the provided string
def replace_matches_in_column(df, column, string_to_match, min_ratio=90):
    # get a list of unique strings
    strings = df[column].unique()
    # get the top 10 closest matches to our input string
    matches = fuzzywuzzy.process.extract(string_to_match, strings,
                                         limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
    # only get matches with a ratio > 90
    close_matches = [match[0] for match in matches if match[1] >= min_ratio]
    # get the rows of all the close matches in our dataframe
    rows_with_matches = df[column].isin(close_matches)
    # replace all rows with close matches with the input matches
    df.loc[rows_with_matches, column] = string_to_match
    # let us know the function's done
    print("All done!")
# use the function we just wrote to replace close matches to "d.i khan" with "d.i khan"
replace_matches_in_column(df=suicide_attacks, column='City', string_to_match="d.i khan")
REMOVING A CHARACTER THAT WE DON'T WANT
df['GDP'] = df['GDP'].str.replace('$', "")
TO CONVERT STR TO NUMERICAL
#df['GDP'] = df['GDP'].astype(float)
# If unwanted characters get in the way of the conversion
df['GDP'] = df['GDP'].str.replace(',', '').astype(float)
TO ENCODE CATEGORICAL VARIABLES
# For variables taking more than 2 values
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
df['Country'] = ordinal_encoder.fit_transform(df[['Country']])
# To define the encoding ourselves
custom_categories = [['High School', 'Bachelor', 'Master', 'Ph.D']]  # encoded as 0, 1, 2, 3 in this order
ordinal_encoder = OrdinalEncoder(categories=custom_categories)
It seems like GitHub only detects the specific licenses if they are on the default (nowadays, main) branch.
Here's what I've observed just now
on my dev branch, the tabs next to README all read "License"
on my main branch the tabs next to README read their full license names (e.g. "MIT License")
on any branch, the licenses listed in the About section (top right) depended on the main branch licenses.
When dev had licenses but main didn't, the About section would not list any licenses, but the tabs next to README would read "License". By "tabs" I mean these things:
Should be enough to edit your .Renviron file and add
http_proxy=http://proxy.foo.bar:8080/
http_proxy_user=user_name:password
The android emulator was hanging on startup of older emulator images and after a lot of trying and reading the solution was quite simple: 1. quit Android Studio, 2. delete the directory ".android" (actually I renamed it to ".android_bak") and 3. when restarting Android Studio the directory was recreated and all emulator images were running fine. And the reason was that long ago I did some configuration changes in ".android" which were not compatible anymore with the new Android SDK.
This does not work for me.
My copy of Excel does not have "(Excel Options > Trust Centre > Macro Settings > Trust access to the VBA Project object model)"
BTW, I'm in the USA. Perhaps the version sold in Europe is different than the one sold here.
You overwrote the AX register when doing mov ax, 0x0000, so AH and AL don't have proper values when calling int 13h.
(The 8-bit AH and AL registers are the high and low bytes of the 16-bit AX register.)
I know this is an old request but this may help others with the same requirements to access a SQLite database in AutoCAD with AutoLisp.
See SQLite for AutoLisp on theswamp.org forum. The author of the SQLite for AutoLisp is very responsive and actively maintains it. To get the most out of the forum it is worth creating a user account.
My mistake was using my aws_access_key_id (the ID with letters in it that you get from IAM) instead of the account ID (all numbers, i.e. your billing account). Fixed this and it worked the 1st time!
Thanks! A follow-up question:
If the CreditLine (or Financing) entity only holds stable attributes like the ID, would it also contain a list of its Snapshots?
Should getters and all domain logic (like calculations) stay in the CreditLine class and delegate to the latest snapshot, or live directly in the Snapshot and the Credit Line class just have methods to add and retrieve a snapshot?
I got a segmentation fault when switching (Configure Virtual Device) "Graphics acceleration" from: "Automatic" to "Software". And after a lot of trying and reading the solution was quite simple: 1. quit Android Studio, 2. delete the directory ".android" (actually I renamed it to ".android_bak") and 3. when restarting Android Studio the directory was recreated and all emulator images were running fine. And the reason was that long ago I did some configuration changes in ".android" which were not compatible anymore with the new Android SDK.
sudo apt install python3-scriptforge
For Windows 11, I just added this path to Path Environment Variable: c:\Program Files\Git\mingw64\libexec\git-core ... and restarted VS Code
You can try appending a bunch of coordinate points to an array, making two other arrays around this as loci, and then looking for intersections by expanding a variable over the matches in each array.
To add to @Kimzu's answer, content inside pre tags is still treated as HTML, so you should escape it:
'<pre>' . htmlentities(print_r($array, true), ENT_QUOTES, 'UTF-8') . '</pre>'
1. Open LocalWP
Launch Local (by Flywheel).
2. Go to Your Site’s Settings
In the left sidebar, click on your site (e.g., “MySite”).
Click on the “Overview” tab (sometimes labeled “Site” or “Site Overview” depending on your version).
3. Change the Domain
Find the field labeled “Site Domain” (or “Domain”).
Change it from something like localhost:10005 to:
mysite.local
Click “Apply Changes.”
4. Allow LocalWP to Update Hosts File
Local will prompt you to update your system’s hosts file.
Click “Allow” or “Yes” when it asks for administrator/root access.
This step maps mysite.local → 127.0.0.1 on your computer.
5. Restart the Site
Stop the site (if running) and click Start Site again.
6. Open the Site
Click “Open Site” — it should now open at the new local domain (e.g., mysite.local).
I'm confused. Why would you want to do that?
there are 65 and you make 6 it will need the most of 10 people to make it and need 5 coder to make it
I fixed it by using an option called "Transaction Pooler" from Supabase in the .env. The connection worked fine after that. It seems IPv6 is not fully supported on my Ubuntu, so we have to use something like IPv4.
Use Transaction Pooler and the connection string it provides, usually with port 6543.
As of November 2025, Gitlab CI supports defining dependencies on a granular level when working with parallel/matrix.
stages:
  - prepare
  - build
  - test

.full_matrix: &full_matrix
  parallel:
    matrix:
      - PLATFORM: ["linux", "windows", "macos"]
        VERSION: ["16", "18", "20"]

.platform_only: &platform_only
  parallel:
    matrix:
      - PLATFORM: ["linux", "windows", "macos"]

prepare_env:
  stage: prepare
  script:
    - echo "Preparing $PLATFORM with Node.js $VERSION"
  <<: *full_matrix

build_project:
  stage: build
  script:
    - echo "Building on $PLATFORM"
  needs:
    - job: prepare_env
      parallel:
        matrix:
          - PLATFORM: ['$[[ matrix.PLATFORM ]]']
            VERSION: ["18"]  # Only depend on Node.js 18 preparations
  <<: *platform_only
Source: https://docs.gitlab.com/ci/yaml/matrix_expressions/#use-a-subset-of-values
Ideally, you could combine a Smalltalk image concept with persistence and just store all your objects in a real database.
When designing a domain model for something like a Credit Line that changes over time, a solid approach is to keep a core CreditLine entity with just stable info (like its ID) and store all changing details (conditions, financial entities, contributions) in dated snapshots or Statements. This avoids duplicating static info and keeps complete history, letting you query the state at any date by picking the right snapshot. This matches your Option C and keeps the model clean and easy to maintain.
Another approach is Event Sourcing, where you store every change as an immutable event and rebuild the Credit Line’s state by replaying those events. This offers a detailed audit trail and no data duplication but adds complexity in querying and implementation.
For your case, Option C is usually simpler and practical unless you need fine-grained change tracking that Event Sourcing provides. Both are established patterns for managing evolving, versioned domain data.
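A minimal sketch of what Option C could look like in Python (class and field names are purely illustrative, not taken from your model):
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass(frozen=True)
class CreditLineSnapshot:
    """Immutable, dated picture of the attributes that change over time."""
    effective_date: date
    limit: float
    interest_rate: float

@dataclass
class CreditLine:
    """Stable identity plus a history of snapshots."""
    credit_line_id: str
    snapshots: List[CreditLineSnapshot] = field(default_factory=list)

    def add_snapshot(self, snapshot: CreditLineSnapshot) -> None:
        self.snapshots.append(snapshot)
        self.snapshots.sort(key=lambda s: s.effective_date)

    def state_at(self, on: date) -> Optional[CreditLineSnapshot]:
        """Return the snapshot in force on the given date, if any."""
        applicable = [s for s in self.snapshots if s.effective_date <= on]
        return applicable[-1] if applicable else None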
To remove the first matching element, one may:
xs.extract_if(.., |x| *x == some_x).next();
The resulting option will contain the value, if contained in the vector.
It turns out that the Node.js code is using different data than the browser uses.
So this issue is caused by the different data.
I am working in Apache OpenOffice Calc (spreadsheet). To rotate a chart, I do this:
Select the chart
Select "FORMAT" from the top menu bar
Select "Graphic" from the drop-down menu
Select "Position-Size" from the available options
Select the "Rotation" tab from the Position-Size pop-up
Go To "Rotation Angle on the bottom of the pop-up
Set the Rotation Angle I want for my chart.
I hope this helps!
Solution by alexanderokosten
This guide explains how to install .NET Framework 4.0 and 4.5 on VS 2022, as it doesn't support these versions out of the box. Microsoft says these versions are outdated but many people still need them to maintain legacy apps.
Requirements:
- VS 2022
- 7-zip (I haven't tested WinRar)
Notes:
- When .NET 4.0 is mentioned, it means .NET Framework 4.0
- When it's mentioned to open as an archive or un-zip, use 7-zip
The steps:
- Download the .NET 4.0 Reference Assemblies from https://www.nuget.org/api/v2/package/Microsoft.NETFramework.ReferenceAssemblies.net40/1.0.3
- Once downloaded, open them up as an archive
- Navigate to Build > .NETFramework
- Keep 7-zip open and open the folder C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework
- Delete the existing v4.0 (or make a backup)
- Extract the v4.0 folder from 7-zip into the folder (make sure to not extract the contents but the folder itself)
- Open Visual Studio 2022 and now you should see .NET 4.0 as a .NET Framework target
React Router V6 has a backwards compatibility package. Perhaps that's a middleground worth exploring?
You’re using:
"firebase-tools": "^14.24.1"
"firebase-functions": "6.6.0"
→ Problem: emulator support for firebase-functions v2 CloudEvent format was added only in v14.0.0+, and full support came later (~v15+).
Fix:
Upgrade Firebase CLI and dependencies:
npm install -g firebase-tools@latest
npm install firebase-functions@latest firebase-admin@latest
The Firebase Emulator does not yet fully support Node 22 runtime.
Your package.json says:
"engines": {
"node": "22"
}
Fix:
Downgrade temporarily to a supported version for local testing:
"engines": {
"node": "20"
}
Or use Node 20 in your local environment when emulating.
Separate v1 and v2 triggers into different files to avoid cross-import issues:
// index.js
exports.authTriggers = require('./auth');
exports.firestoreTriggers = require('./firestore');
That avoids old v1 helpers accidentally mixing with v2 definitions.
If you need confirmation that it’s not your logic:
firebase deploy --only functions:onPrivateUserDataUpdateTrigger
Of course the signal approach is better; the main reason is performance. Just add a console log to both the method and the signal and compare how often each runs: you will notice that the method runs multiple times unnecessarily, while the computed signal is aware of its dependencies and only runs again when one of them changes.
AI DeepSeek offered following solution as an option
Procedure PresentationGetProcessing(Data, Presentation, StandardProcessing)
StandardProcessing = False;
Try
If Data.Ref = Undefined Then
StandardProcessing = True;
Return;
EndIf;
If Data.Ref.IsEmpty() Then
StandardProcessing = True;
Return;
EndIf;
fullDescr = Data.Ref.FullDescr();
Presentation = StrReplace(fullDescr, "/", ", ");
Except
// If any error occurs, fall back to standard processing
StandardProcessing = True;
EndTry;
EndProcedure
But I am not sure that this is the correct way...
Looking forward to other answers...
I'm also just getting started with coding and recently came across the idea of contributing to open source. At first, I had no clue where to begin, but after doing a bit of digging, I realized you don’t need to work for a company to contribute.
What I’ve been doing is checking out projects on GitHub that seem interesting—especially ones using the language I’m learning. I look for issues labeled “good first issue” or “beginner friendly.” Those are perfect for people like us who are just starting out.
You don’t need to be an expert. You can help by fixing small bugs, improving documentation, or even just asking if there’s something you can do. Most open source communities are super welcoming and happy to guide you.
So yeah, if you’re thinking of jumping in, go for it! We’re all learning. Good luck!
It should be
template <typename T>
void push_front(T&& value) {
// ...
new_head->value = std::forward<T>(value);
// ...
}
If value is U&& then it will give new_head->value = std::move(value);
If value is const U& then it will give new_head->value = value;
See When is a reference a forwarding reference, and when is it an rvalue reference?, Is there a difference between universal references and forwarding references?
2025 edit:
Great answer to this problem here: Medium blog by Ted Goas
Rather than, say, wrapping an image in <a> tags, you instead have to wrap the <a> tags in table HTML.
I believe this is an iOS 26.1 bug unfortunately :(
I'm also facing the same issue: tests run just fine with the direct class name or even a direct package reference, but as soon as I try to run with groups it fails with a null pointer exception.
I noticed that when groups are included in the XML, it does not even go to the @BeforeClass and @BeforeMethod methods from the superclass which are overridden in the test class, and that is the reason the test tries to execute before the web drivers are initialized and ends up with a null pointer exception.
I've tried renaming the methods in the superclass to avoid collisions, but none of that worked. With the same configuration it runs fine if I remove the groups from the TestNG XML suite.
Note that the group names are correctly specified in the @Test annotations.
If the backend relies on Windows-only features (COM/Crystal Reports/Windows auth), I will tailor this to Windows (EB/ECS-Windows). Otherwise I would proceed with ECS Fargate + ALB host rules + S3/CloudFront - that'll give you per-app autoscaling, isolation, and lower ops without the VM Tetris.
If you are not tracing your project, I would suggest tracing it immediately using LangSmith; LangSmith comes with built-in evaluators to evaluate your document retrieval and more, including an LLM-as-judge option.
Understand that every dataset is unique; we cannot say on first look. I would strongly encourage you to create a dataset in LangSmith and test with multiple versions. By that time you will have a rough idea of which works.
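As a rough sketch of that dataset step (assuming the langsmith Python SDK; exact method names and arguments may differ between versions, and the questions/answers are placeholders):
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment

# Create a small dataset of question/expected-answer pairs to evaluate retrieval against
dataset = client.create_dataset("rag-eval-v1", description="Questions with expected answers")
client.create_examples(
    inputs=[{"question": "What is our refund policy?"}],
    outputs=[{"answer": "Refunds are accepted within 30 days."}],
    dataset_id=dataset.id,
)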
Although this is not an answer to the question (I still hope there is some clever way, as always in Haskell...), for reusing the logic that determines lower and upper bounds I came up with something like this:
data ValueRange = MkValueRange { minValue :: Double, maxValue :: Double }
instance Show ValueRange where
show (MkValueRange minV maxV) = "ValueRange(" ++ show minV ++ ", " ++ show maxV ++ ")"
fmap2 :: (Double -> Double -> Double) -> ValueRange -> ValueRange -> ValueRange
fmap2 f (MkValueRange minA maxA) (MkValueRange minB maxB) =
MkValueRange (minimum results) (maximum results)
where
results = [ f x y | x <- [minA, maxA], y <- [minB, maxB] ]
(<+>) :: ValueRange -> ValueRange -> ValueRange
(<+>) = fmap2 (+)
(<->) :: ValueRange -> ValueRange -> ValueRange
(<->) = fmap2 (-)
(<*>) :: ValueRange -> ValueRange -> ValueRange
(<*>) = fmap2 (*)
(</>) :: ValueRange -> ValueRange -> ValueRange
(</>) = fmap2 (/)
example1 :: ValueRange
example1 = MkValueRange 10 20 <+> MkValueRange 5 15
I got around the problem in the following way:
@Bean
public OpenAiApi openAiApi() {
HttpClient httpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build();
JdkClientHttpRequestFactory jdkClientHttpRequestFactory = new JdkClientHttpRequestFactory(httpClient);
return OpenAiApi.builder()
.apiKey(this.apiKey)
.baseUrl(this.baseUrl)
.restClientBuilder(
RestClient.builder()
.requestFactory(jdkClientHttpRequestFactory)
).build();
};
Credits to: https://github.com/spring-projects/spring-ai/issues/2653#issuecomment-2783528029
You can hide those details in a pure Python bootstrap like this.
It is possible to "do it right" in shell, but it does not evolve well.
Alternatively, consider a bootstrap like this to hide those:
... and other unnecessary details, especially for a simple start.
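A minimal sketch of what such a pure Python bootstrap might look like (the paths, pinned dependency, and entry point are placeholders):
#!/usr/bin/env python3
"""Bootstrap: create a venv, install pinned deps, then hand off to the real entry point."""
import subprocess
import sys
from pathlib import Path

VENV = Path(".venv")
REQUIREMENTS = ["requests==2.32.3"]  # placeholder pinned dependencies

def main() -> None:
    # Create the virtual environment once
    if not VENV.exists():
        subprocess.check_call([sys.executable, "-m", "venv", str(VENV)])
    bin_dir = VENV / ("Scripts" if sys.platform == "win32" else "bin")
    # Install (or update) the pinned dependencies
    subprocess.check_call([str(bin_dir / "pip"), "install", *REQUIREMENTS])
    # Run the real program inside the venv
    subprocess.check_call([str(bin_dir / "python"), "app.py"])

if __name__ == "__main__":
    main()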
Shopify maintains strict control over its checkout process for security, so as far as I know it is not possible to integrate an unapproved payment service provider.
You can consider to implement this approach that might work properly:
Create a Draft Order
Create a PayGreen payment link for the draft’s total and include the draft order ID as metadata
Redirect the customer to PayGreen’s hosted page
On PayGreen webhook success, verify signature, then complete the draft and mark it paid with an external transaction (“PayGreen”)
On failure/timeout, cancel the draft and restock