I found the solution myself.
The answer provided in the following related question solved my problem: Autodesk Platform Services - ACC - Get Roles IDs
I just discovered that I can copy my files from my old phone to my laptop; then, with my new phone connected to Android Studio, I can drag/drop the files from Windows File Explorer directly into the Android Studio file explorer!
Problem solved.
Then you'll need to modify your app to request all-files access (https://developer.android.com/training/data-storage/manage-all-files). Officially, the Play Store will only grant it for specific apps, which is why I believe it's simpler to modify your app to accept shared files instead of jumping through that hoop (unless it's not a published app).
If I understand your reply correctly, it doesn't address what I'm trying to do. I don't have multiple apps trying to share a file.
On my previous phone, some of my apps created/read/updated app-specific files. Those files were located in "Internal Storage" (not in any subfolder of Internal Storage). As a result, those files were accessible from both my phone's file manager and my PC's file manager (when connected to my phone) if I needed to copy/delete/edit them from outside my apps.
It's my understanding that, when I move my apps to the new phone, the apps (which still need to use the info in those files) can only access files that are in "/data/user/0/myApp/files". So I need to copy my files from "Internal Storage" on my previous phone to "/data/user/0/myApp/files" on my new phone.
I guess my first question should be: Is there a way for my apps on my new phone to access files in "Internal Storage"? If so, then I could simply copy the files over to my new phone. But if my apps can't access "Internal Storage", then how can I copy my files into "/data/user/0/myApp/files" on my new phone so my apps can access them?
Does this clarify my question?
@Wicket - I appreciate your replies.
With that said, I can't think of a way to reduce scope on either of the two issues and still have the Add-On do what it is supposed to do.
Reading from Sheets: it seems I need the readonly scope for Sheets to get my data. I can still use their picker to pick the spreadsheet, but to read the data, I'll need Sheets readonly.
As for your comment on the Slides currentonly scope: I love this idea and I want to implement it so users won't be as nervous about the Add-On. However, I cannot think of any way to put pie shapes with varying angles into a Slides presentation with that scope. There is no way to do it with the API; I tried a number of things and researched here and elsewhere. I finally realized I could do it with a public template and was really happy my idea was working... and now I'm realizing that won't work because of openById, even though the template isn't the user's.
I think I'll have to appeal to the Google team and see what they say. They told me to post here first, and from what I'm seeing there aren't any ways around it: I either need my app to do less so it fits narrower scopes, or appeal for my original scope request.
There are two ways to do it:
Rebuilding like this: NIXOS_LABEL="somelabel" nixos-rebuild switch
Configuring system.nixos.label and optionally system.nixos.tags in configuration.nix (see the links for full info)
If you use both at the same time, the first one takes priority.
Important: labels don't support all characters; spaces won't work.
It's better to install all significant dependencies explicitly. If you want a better way to manage similar dependencies across subpackages, you could use pnpm's catalogs feature.
extension DurationExt on Duration {
  // Format as HH:MM:SS, zero-padding each component to two digits
  String format() {
    return [
      inHours,
      inMinutes.remainder(60),
      inSeconds.remainder(60),
    ].map((e) => e.toString().padLeft(2, '0')).join(':');
  }
}
I know this was a while ago, but I have a lead for you, as I think I've just fixed this issue at my end (same scenario: GitHub Codespaces with Snowflake, using SSO).
I changed this setting in VS Code from "hybrid" to "process" (note that "process" reports as being the default):
remote.autoForwardPortsSource
I was looking for the same thing, and I managed to re-implement Ziggy Routes, and I removed Wayfinder since it's still in beta and I don't know exactly how to use it...
I created a repository, but I forgot the name because I have several. I'll find it and send it to you by email: [email protected]
Or contact me on GitHub: github.com/casimirorocha
Off topic. NB Please lay off the boldface. It doesn't help.
A frame can be applied to a menu like this:
Menu("Options") {
Button("Option 1") {
}
Button("Option 2") {
}
}
.frame(width: 50)
The output will be as below. (Please ignore the button).
🐞 Problem: Cucumber report generation fails
When trying to generate a report with maven-cucumber-reporting, the following message appears:
net.masterthought.cucumber.ValidationException: No report file was added!
📌 Likely cause
This message means the plugin did not find any valid JSON file to generate the report from. The cause is usually one of:
- Cucumber tests were not run before the verify phase
- The file target/cucumber.json was not created, because the tests failed or are absent
- A wrong or missing path in the pom.xml configuration
✅ Suggested fixes
1. Run the tests before generating the report
```bash
mvn clean test
mvn verify
```
> Make sure mvn test produces the cucumber.json file in the target folder.
2. Check that the JSON file exists
After running the tests, confirm the file is there:
```bash
ls target/cucumber.json
```
3. Correct @CucumberOptions configuration
```java
@CucumberOptions(
    features = "src/test/resources/features",
    glue = {"steps"},
    plugin = {"pretty", "json:target/cucumber.json"},
    monochrome = true,
    publish = true
)
```
4. Correct pom.xml configuration
```xml
<plugin>
    <groupId>net.masterthought</groupId>
    <artifactId>maven-cucumber-reporting</artifactId>
    <version>5.7.1</version>
    <executions>
        <execution>
            <id>execution</id>
            <phase>verify</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <projectName>cucumber-gbpf-graphql</projectName>
                <skip>false</skip>
                <outputDirectory>${project.build.directory}</outputDirectory>
                <inputDirectory>${project.build.directory}</inputDirectory>
                <jsonFiles>
                    <param>/*.json</param>
                </jsonFiles>
                <checkBuildResult>false</checkBuildResult>
            </configuration>
        </execution>
    </executions>
</plugin>
```
🧪 Manual test (optional)
```java
File reportOutputDirectory = new File("target");
List<String> jsonFiles = Arrays.asList("target/cucumber.json");
Configuration config = new Configuration(reportOutputDirectory, "Project Name");
ReportBuilder reportBuilder = new ReportBuilder(jsonFiles, config);
reportBuilder.generateReports();
```
🧠 Additional notes
- Make sure the .feature files exist and are actually executed
- Check that the test classes use @RunWith(Cucumber.class) or @Cucumber, depending on the JUnit version
- Use mvn clean test verify as a single command to guarantee the right order
> 💬 If the problem persists, check the execution logs (target/surefire-reports) or enable Maven debug output for deeper detail.
To expand on previous answers, you can get a nice re-usable Group component similar to the one in Mantine like this:
import { View, ViewProps } from "react-native";
export function Group({ style, ...props }: ViewProps) {
  // Spread the remaining props first so both the row direction and the caller's style apply
  return <View {...props} style={[{ flexDirection: "row" }, style]} />;
}
Assuming your project is using TypeScript, the ViewProps type above allows passing through any other props and preserves type hints.
Yes. It was impossible to switch directly. So, I made a working switch. Here's the fix:
A post on retrocomputing gives more details. Note that LOADALL apparently couldn't do it, but was wrongly rumored to be able to:
pm32 -> pm16 -> real16 (the wrapper that calls the function) -> real16 (the function call itself) -> pm32 (resume 32) -> ret to the original caller.
uint16_t result = call_real_mode_function(add16_ref, 104, 201); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result);
uint16_t result2 = call_real_mode_function(complex_operation, 104, 201, 305, 43); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result2);
// Macro wrapper: automatically counts number of arguments
#define call_real_mode_function(...) \
call_real_mode_function_with_argc(PP_NARG(__VA_ARGS__), __VA_ARGS__)
// Internal function: explicit argc
uint16_t call_real_mode_function_with_argc(uint32_t argc, ...) {
    bool optional = false;
    if (optional) {
        // This is done later anyway. But might as well for now
        GDT_ROOT gdt_root = get_gdt_root();
        args16_start.gdt_root = gdt_root;
        uint32_t esp_value;
        __asm__ volatile("mov %%esp, %0" : "=r"(esp_value));
        args16_start.esp = esp_value;
    }
    va_list args;
    va_start(args, argc);
    uint32_t func = va_arg(args, uint32_t);
    struct realmode_address rm_address = get_realmode_function_address((func_ptr_t)func);
    args16_start.func = rm_address.func_address;
    args16_start.func_cs = rm_address.func_cs;
    args16_start.argc = argc - 1; // the first vararg was the function pointer itself
    for (uint32_t i = 0; i < argc - 1; i++) {
        args16_start.func_args[i] = va_arg(args, uint32_t); // read promoted uint32_t
    }
    va_end(args);
    return pm32_to_pm16();
}
GDT16_DESCRIPTOR:
dw GDT_END - GDT_START - 1 ;limit/size
dd GDT_START ; base
GDT_START:
dq 0x0
dq 0x0
dq 0x00009A000000FFFF ; code
dq 0x000093000000FFFF ; data
GDT_END:
section .text.pm32_to_pm16
pm32_to_pm16:
mov eax, 0xdeadfac1
; Save 32-bit registers and flags
pushad
pushfd
push ds
push es
push fs
push gs
; Save the stack pointer in the first 1mb (first 64kb in fact)
; So it's accessible in 16-bit mode, and can be restored on the way back to 32-bit
sgdt [args16_start + GDT_ROOT_OFFSET]
mov [args16_start + ESP_OFFSET], esp ;
mov ax, ss
mov [args16_start + SS_OFFSET], ax ;
mov esp, 0 ; in case I can't change esp in 16-bit mode later; don't want stray high bits to cause trouble
mov ebp, 0 ; same precaution for ebp
cli
lgdt [GDT16_DESCRIPTOR]
jmp far 0x10:pm16_to_real16
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) int16_t complex_operation(uint16_t a, uint16_t b, uint16_t c, uint16_t d) {
return 2 * a + b - c + 3 * d;
}
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) uint16_t add16_ref(uint16_t a, uint16_t b) {
return 2 * a + b;
}
resume32:
; Restore segment registers
mov esp, [args16_start + ESP_OFFSET]
mov ax, [args16_start + SS_OFFSET]
mov ss, ax
pop gs
pop fs
pop es
pop ds
; Restore general-purpose registers and flags
popfd
popad
; Retrieve result
movzx eax, word [args16_start + RET1_OFFSET]
; mov eax, 15
ret
The struct is located in the first 64 KB of memory, to allow passing data across segments.
typedef struct __attribute__((packed)) Args16 {
    GDT_ROOT gdt_root;
    // uint16_t pad; // (padded due to esp wanting to)
    uint16_t ss;
    uint32_t esp;
    uint16_t ret1;
    uint16_t ret2;
    uint16_t func;
    uint16_t func_cs;
    uint16_t argc;
    uint16_t func_args[13];
} Args16;
To see a simpler version of this:
commit message: "we are so fucking back. Nicegaga"
commit date: Nov 7, 3:51 am
(the "gaga" was an accidental typo)
hash: 309ca54630270c81fa6e7a66bc93
And a more modern, cleaned-up version (the one with the code shown above):
commit message: "Changed readme."
commit date: Sun Nov 9 18:09:16
commit hash: a2058ca7e3f99e92ea7c76909cc3f7846674dc83
====
Hm, the used key parts/key length aren't what they should be. It looks like it is using the sports id as a filter but not actually as part of the index lookup. Can you please include the output of show create table Bet; and show create table BetSelection;?
Just in case someone searches for the answer like me:
As @jhasse said, it is really easy with Clang. All you need to do is build Clang with OpenMP runtime support, so a working set of build commands would be:
git clone https://github.com/llvm/llvm-project
cd llvm-project
mkdir build
cd build
cmake -DLLVM_ENABLE_PROJECTS="clang;lld" -DLLVM_ENABLE_RUNTIMES="openmp;compiler-rt" -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_USE_SANITIZER="" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/llvm -G "Unix Makefiles" ../llvm
run from the build directory inside llvm-project. Then you can also run the install step. Or you can really dive into a separate OpenMP build.
See also here (there is an in-tree tool called Archer that was mentioned by @Simone Atzeni).
onclick="location.href='about:blank';"
Pseudocode:
If both positive: ABS(a-b), or MAX(a,b) - MIN(a,b)
If both negative: ABS(ABS(a) - ABS(b)), or MAX(a,b) - MIN(a,b)
If one positive and one negative: MAX(a,b) - MIN(a,b)
Hence, for all situations:
MAX(a,b) - MIN(a,b)
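A quick way to convince yourself of the identity is to check all sign combinations; a minimal Python sketch:
for a, b in [(5, 3), (-5, -3), (5, -3), (-5, 3)]:
    # the max/min difference equals the absolute difference in every case
    assert max(a, b) - min(a, b) == abs(a - b)
print("MAX(a,b) - MIN(a,b) matches ABS(a-b) for all sign combinations")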
I don't know why you get that error, but you do have an issue with your URL: there is a missing & between response_type and scope. It should be:
response_type=code&scope=user_profile,user_media
Maybe it's only a typo.
I figured it out. The search API string for the next call simply needs "&nextPageToken=" and then the token appended, keeping all of the same search criteria and returned fields.
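For illustration, a minimal Python sketch of that pattern; search_url is a placeholder for whatever search call you already make, with all criteria and fields baked in, and only the token appended as described:
import requests

def fetch_all_pages(search_url):
    # 'search_url' is hypothetical: your existing search API call
    results, token = [], None
    while True:
        url = search_url + ("&nextPageToken=" + token if token else "")
        page = requests.get(url).json()
        results.extend(page.get("items", []))
        token = page.get("nextPageToken")
        if not token:  # no more pages
            return results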
STEP 1: Find the Dirt
Start data cleaning by determining what is wrong with your data (a quick pandas sketch of these checks follows the list below).
Look for the following:
Are there rows with empty values? Entire columns with no data? Which data is missing and why?
How is data distributed? Remember, visualizations are your friends. Plot outliers. Check distributions to see which groups or ranges are more heavily represented in your dataset.
Keep an eye out for the weird: are there impossible values? Like “date of birth: male”, “address: -1234”.
Is your data consistent? Why are the same product names written in uppercase and other times in camelCase?
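In pandas, most of these checks are one-liners; a minimal sketch (the file name and the product_name column are placeholders for your own data):
import pandas as pd

df = pd.read_csv("data.csv")               # placeholder dataset
print(df.isnull().sum())                   # empty values per column
print(df.describe())                       # distributions; impossible values show up here
print(df["product_name"].value_counts())   # inconsistent spellings/casing show up here
df.hist(figsize=(10, 8))                   # quick plot of how numeric data is distributed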
STEP 2: Scrub the Dirt
Missing Data
Outliers
Contaminated Data
Inconsistent Data: You have to expect inconsistency in your data, especially when there is a higher possibility of human error (e.g. when salespeople enter the product info on proforma invoices manually).
The best way to spot inconsistent representations of the same elements in your database is to visualize them.
Plot bar charts per product category.
Do a count of rows by category if this is easier.
When you spot the inconsistency, standardize all elements into the same format.
Humans might understand that ‘apples’ is the same as ‘Apples’ (capitalization) which is the same as ‘appels’ (misspelling), but computers think those three refer to three different things altogether.
Lowercasing as default and correcting typos are your friends here.
Invalid Data
Duplicate Data
Data Type Issues
Structural Errors
The majority of data cleaning is running reusable scripts, which perform the same sequence of actions. For example: 1) lowercase all strings, 2) remove whitespace, 3) break down strings into words.
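As a sketch, those three example actions as one reusable function (plain Python, no libraries needed):
def clean_string(s):
    s = s.lower()       # 1) lowercase the string
    s = s.strip()       # 2) remove surrounding whitespace
    return s.split()    # 3) break the string down into words

print(clean_string("  Red Apples "))  # ['red', 'apples']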
Problem discovery. Use any visualization tools that allow you to quickly visualize missing values and different data distributions.
Identify the problematic data
Clean the data
Remove, encode, fill in any missing data
Remove outliers or analyze them separately
Purge contaminated data and correct leaking pipelines
Standardize inconsistent data
Check if your data makes sense (is valid)
Deduplicate multiple records of the same data
Foresee and prevent type issues (string issues, DateTime issues)
Remove engineering errors (aka structural errors)
Rinse and repeat
HANDLING MISSING VALUES
The first thing I do when I get a new dataset is take a look at some of it. This lets me see that it all read in correctly and gives me an idea of what's going on with the data. In this case, I'm looking to see whether there are any missing values, which will be represented as NaN or None.
nfl_data.sample(5)
Ok, now we know that we do have some missing values. Let's see how many we have in each column.
# get the number of missing data points per column
missing_values_count = nfl_data.isnull().sum()
# look at the # of missing points in the first ten columns
missing_values_count[0:10]
That seems like a lot! It might be helpful to see what percentage of the values in our dataset were missing to give us a better sense of the scale of this problem:
# how many total missing values do we have?
total_cells = np.prod(nfl_data.shape)
total_missing = missing_values_count.sum()
# percent of data that is missing
(total_missing/total_cells) * 100
Wow, almost a quarter of the cells in this dataset are empty! In the next step, we're going to take a closer look at some of the columns with missing values and try to figure out what might be going on with them.
One of the most important questions you can ask yourself to help figure this out is this:
Is this value missing because it wasn't recorded, or because it doesn't exist?
If a value is missing because it doesn't exist (like the height of the oldest child of someone who doesn't have any children), then it doesn't make sense to try and guess what it might be.
These values you probably do want to keep as NaN. On the other hand, if a value is missing because it wasn't recorded, then you can try to guess what it might have been based on the other values in that column and row.
# if relevant
# replace all NA's with 0
subset_nfl_data.fillna(0)
# replace all NA's with the value that comes directly after it in the same column,
# then replace all the remaining NA's with 0
subset_nfl_data.fillna(method='bfill', axis=0).fillna(0)
# The default behavior fills in the mean value for imputation.
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer()
data_with_imputed_values = my_imputer.fit_transform(original_data)
----------
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib.pyplot as plt
# return a dataframe showing the number of NaNs and their percentage
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum() / df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
# replace NaNs with 0
df.fillna(0, inplace=True)
# replace NaNs with the column mean
df['column_name'].fillna(df['column_name'].mean(), inplace=True)
# replace NaNs with the column median
df['column_name'].fillna(df['column_name'].median(), inplace=True)
# linear interpolation to replace NaNs
df['column_name'].interpolate(method='linear', inplace=True)
# replace with the next value
df['column_name'].fillna(method='backfill', inplace=True)
# replace with the previous value
df['column_name'].fillna(method='ffill', inplace=True)
# drop rows containing NaNs
df.dropna(axis=0, inplace=True)
# drop columns containing NaNs
df.dropna(axis=1, inplace=True)
# replace NaNs depending on whether the feature is numerical (k-NN) or categorical (most frequent category)
from sklearn.impute import SimpleImputer, KNNImputer
missing_cols = df.isna().sum()[lambda x: x > 0]
for col in missing_cols.index:
    if df[col].dtype in ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']:
        # pick one imputer:
        imputer = KNNImputer(n_neighbors=5)
        # imputer = SimpleImputer(strategy='mean')  # or 'median', 'most_frequent', or 'constant'
        # imputer = SimpleImputer(strategy='constant', fill_value=0)  # replace with 0
        df[col] = imputer.fit_transform(df[col].values.reshape(-1, 1))
        # if test set:
        # df_test[col] = imputer.transform(df_test[col].values.reshape(-1, 1))
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
        # if test set:
        # df_test[col] = df_test[col].fillna(df_test[col].mode().iloc[0])
PARSING DATES
See https://strftime.org/ for the format codes. Some examples:
1/17/07 has the format "%m/%d/%y"
17-1-2007 has the format "%d-%m-%Y"
# create a new column, date_parsed, with the parsed dates
landslides['date_parsed'] = pd.to_datetime(landslides['date'], format = "%m/%d/%y")
One of the biggest dangers in parsing dates is mixing up the months and days. The to_datetime() function does have very helpful error messages, but it doesn't hurt to double-check that the days of the month we've extracted make sense.
# remove na's
day_of_month_landslides = day_of_month_landslides.dropna()
# plot the day of the month
sns.distplot(day_of_month_landslides, kde=False, bins=31)
READING FILES WITH ENCODING PROBLEMS
# try to read in a file not in UTF-8
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv")
import chardet
# look at the first ten thousand bytes to guess the character encoding
with open("../input/kickstarter-projects/ks-projects-201801.csv", 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))
# check what the character encoding might be
print(result)
So chardet is 73% confident that the right encoding is "Windows-1252". Let's see if that's correct:
# read in the file with the encoding detected by chardet
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv", encoding='Windows-1252')
# look at the first few lines
kickstarter_2016.head()
INCONSISTENT DATA
# get all the unique values in the 'City' column
cities = suicide_attacks['City'].unique()
# sort them alphabetically and then take a closer look
cities.sort()
cities
Just looking at this, I can see some problems due to inconsistent data entry: 'Lahore' and 'Lahore ', for example, or 'Lakki Marwat' and 'Lakki marwat'.
# convert to lower case
suicide_attacks['City'] = suicide_attacks['City'].str.lower()
# remove trailing white spaces
suicide_attacks['City'] = suicide_attacks['City'].str.strip()
It does look like there are some remaining inconsistencies: 'd. i khan' and 'd.i khan' should probably be the same.
I'm going to use the fuzzywuzzy package to help identify which strings are closest to each other.
# get the top 10 closest matches to "d.i khan"
import fuzzywuzzy.fuzz
import fuzzywuzzy.process
matches = fuzzywuzzy.process.extract("d.i khan", cities, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
# take a look at them
matches
We can see that two of the items in the cities are very close to "d.i khan": "d. i khan" and "d.i khan". We can also see that "d.g khan", which is a separate city, has a ratio of 88. Since we don't want to replace "d.g khan" with "d.i khan", let's replace all rows in our City column that have a ratio > 90 with "d.i khan".
# function to replace rows in the provided column of the provided dataframe
# that match the provided string above the provided ratio with the provided string
def replace_matches_in_column(df, column, string_to_match, min_ratio=90):
    # get a list of unique strings
    strings = df[column].unique()
    # get the top 10 closest matches to our input string
    matches = fuzzywuzzy.process.extract(string_to_match, strings,
                                         limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
    # only get matches with a ratio >= min_ratio
    close_matches = [match[0] for match in matches if match[1] >= min_ratio]
    # get the rows of all the close matches in our dataframe
    rows_with_matches = df[column].isin(close_matches)
    # replace all rows with close matches with the input string
    df.loc[rows_with_matches, column] = string_to_match
    # let us know the function's done
    print("All done!")
# use the function we just wrote to replace close matches to "d.i khan" with "d.i khan"
replace_matches_in_column(df=suicide_attacks, column='City', string_to_match="d.i khan")
REMOVING A CHARACTER THAT WE DON'T WANT
df['GDP'] = df['GDP'].str.replace('$', '', regex=False)
TO CONVERT STR TO NUMERICAL
#df['GDP'] = df['GDP'].astype(float)
# If stray characters get in the way of the conversion:
df['GDP'] = df['GDP'].str.replace(',', '').astype(float)
TO ENCODE CATEGORICAL VARIABLES
# For variables taking more than 2 values
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
df['Country'] = ordinal_encoder.fit_transform(df[['Country']])
# To define the encoding ourselves (the order of the list gives the codes 0..3)
custom_categories = [['High School', 'Bachelor', 'Master', 'Ph.D']]
ordinal_encoder = OrdinalEncoder(categories=custom_categories)
It seems like GitHub only detects the specific licenses if they are on the default (nowadays, main) branch.
Here's what I've observed just now:
on my dev branch, the tabs next to the README all read "License"
on my main branch, the tabs next to the README read their full license names (e.g. "MIT License")
on any branch, the licenses listed in the About section (top right) depended on the main branch's licenses.
When dev had licenses but main didn't, the About section would not list any licenses, but the tabs next to the README would still read "License". By "tabs" I mean the license file links shown next to the README.
It should be enough to edit your .Renviron file and add:
http_proxy=http://proxy.foo.bar:8080/
http_proxy_user=user_name:password
The Android emulator was hanging on startup of older emulator images, and after a lot of trying and reading, the solution was quite simple: 1. quit Android Studio, 2. delete the ".android" directory (actually I renamed it to ".android_bak"), and 3. restart Android Studio; the directory was recreated and all emulator images were running fine. The reason was that long ago I made some configuration changes in ".android" which were no longer compatible with the new Android SDK.
This does not work for me.
My copy of Excel does not have "(Excel Options > Trust Centre > Macro Settings > Trust access to the VBA Project object model)"
BTW, I'm in the USA. Perhaps the version sold in Europe is different than the one sold here.
You overwrote the AX register with mov ax, 0x0000, so AH and AL don't hold proper values when calling int 13h.
(The 8-bit AH and AL registers are the high and low bytes of the 16-bit AX register.)
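The relationship is easy to demonstrate outside assembly; a quick Python sketch of how AH and AL are carved out of AX:
ax = 0x1234
ah = (ax >> 8) & 0xFF    # high byte of AX -> 0x12
al = ax & 0xFF           # low byte of AX  -> 0x34
print(hex(ah), hex(al))  # writing to AX overwrites both halves at once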
I know this is an old request, but this may help others with the same requirement to access a SQLite database in AutoCAD with AutoLISP.
See SQLite for AutoLisp on the theswamp.org forum. Its author is very responsive and actively maintains it. To get the most out of the forum, it is worth creating a user account.
My mistake was using my aws_access_key_id (the ID with letters in it that you get from IAM) instead of the account ID (all numbers; it's your billing account). Fixed this and it worked the first time!
Thanks! A follow-up question:
If the CreditLine (or Financing) entity only holds stable attributes like the ID, would it also contain a list of its Snapshots?
Should getters and all domain logic (like calculations) stay in the CreditLine class and delegate to the latest snapshot, or live directly in the Snapshot, with the CreditLine class just having methods to add and retrieve snapshots?
I got a segmentation fault when switching (Configure Virtual Device) "Graphics acceleration" from "Automatic" to "Software". After a lot of trying and reading, the solution was quite simple: 1. quit Android Studio, 2. delete the ".android" directory (actually I renamed it to ".android_bak"), and 3. restart Android Studio; the directory was recreated and all emulator images were running fine. The reason was that long ago I made some configuration changes in ".android" which were no longer compatible with the new Android SDK.
sudo apt install python3-scriptforge
For Windows 11, I just added this path to Path Environment Variable: c:\Program Files\Git\mingw64\libexec\git-core ... and restarted VS Code
You can try appending a bunch of coordinate points to an array, building two other arrays around it as loci, and then looking for intersects by expanding a variable over the matches in each array.
To add to @Kimzu's answer: HTML inside pre tags is still parsed as HTML, so you should escape the entities:
'<pre>' . htmlentities(print_r($array, true), ENT_QUOTES, 'UTF-8') . '</pre>'
1. Open LocalWP
Launch Local (by Flywheel).
2. Go to Your Site’s Settings
In the left sidebar, click on your site (e.g., “MySite”).
Click on the “Overview” tab (sometimes labeled “Site” or “Site Overview” depending on your version).
3. Change the Domain
Find the field labeled “Site Domain” (or “Domain”).
Change it from something like localhost:10005 to:
mysite.local
Click “Apply Changes.”
4. Allow LocalWP to Update Hosts File
Local will prompt you to update your system’s hosts file.
Click “Allow” or “Yes” when it asks for administrator/root access.
This step maps mysite.local → 127.0.0.1 on your computer.
5. Restart the Site
Stop the site (if running) and click Start Site again.
6. Open the Site
Click "Open Site"; it should now open at http://mysite.local.
I'm confused. Why would you want to do that?
I fixed it by using an option in .env called "Transaction Pooler" from Supabase. The connection worked fine after that. It seems that IPv6 is not fully supported on my Ubuntu, so we have to use something like IPv4 instead.
Enable the Transaction Pooler and use the connection string it provides, usually with port 6543.
As of November 2025, Gitlab CI supports defining dependencies on a granular level when working with parallel/matrix.
stages:
  - prepare
  - build
  - test

.full_matrix: &full_matrix
  parallel:
    matrix:
      - PLATFORM: ["linux", "windows", "macos"]
        VERSION: ["16", "18", "20"]

.platform_only: &platform_only
  parallel:
    matrix:
      - PLATFORM: ["linux", "windows", "macos"]

prepare_env:
  stage: prepare
  script:
    - echo "Preparing $PLATFORM with Node.js $VERSION"
  <<: *full_matrix

build_project:
  stage: build
  script:
    - echo "Building on $PLATFORM"
  needs:
    - job: prepare_env
      parallel:
        matrix:
          - PLATFORM: ['$[[ matrix.PLATFORM ]]']
            VERSION: ["18"] # Only depend on Node.js 18 preparations
  <<: *platform_only
Source: https://docs.gitlab.com/ci/yaml/matrix_expressions/#use-a-subset-of-values
Ideally, you could combine the Smalltalk image concept with persistence and just store all your objects in a real database.
When designing a domain model for something like a Credit Line that changes over time, a solid approach is to keep a core CreditLine entity with just stable info (like its ID) and store all changing details (conditions, financial entities, contributions) in dated snapshots or Statements. This avoids duplicating static info and keeps complete history, letting you query the state at any date by picking the right snapshot. This matches your Option C and keeps the model clean and easy to maintain.
Another approach is Event Sourcing, where you store every change as an immutable event and rebuild the Credit Line’s state by replaying those events. This offers a detailed audit trail and no data duplication but adds complexity in querying and implementation.
For your case, Option C is usually simpler and practical unless you need fine-grained change tracking that Event Sourcing provides. Both are established patterns for managing evolving, versioned domain data.
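For concreteness, a minimal Python sketch of Option C; the names (Snapshot, state_at) are illustrative, not taken from your model:
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Snapshot:
    valid_from: date
    conditions: dict                  # the changing details: conditions, entities, contributions...

@dataclass
class CreditLine:
    credit_line_id: str               # stable attributes live on the entity
    snapshots: list = field(default_factory=list)

    def add_snapshot(self, snap: Snapshot):
        self.snapshots.append(snap)
        self.snapshots.sort(key=lambda s: s.valid_from)

    def state_at(self, on: date) -> Snapshot:
        # the state at a date is the latest snapshot that started on or before it
        candidates = [s for s in self.snapshots if s.valid_from <= on]
        if not candidates:
            raise LookupError("no snapshot in effect at that date")
        return candidates[-1]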
To remove the first matching element, one may:
xs.extract_if(.., |x| *x == some_x).next();
The resulting Option will contain the value if it was present in the vector.
It turns out that the Node.js code was using different data than the browser uses.
So this issue was caused by the differing data.
I am working in Apache OpenOffice Calc (spreadsheet). To rotate a chart, I do this:
Select the chart
Select "Format" from the top menu bar
Select "Graphic" from the drop-down menu
Select "Position-Size" from the available options
Select the "Rotation" tab from the Position-Size pop-up
Go to "Rotation Angle" at the bottom of the pop-up
Set the rotation angle I want for my chart.
I hope this helps!
Solution by alexanderokosten
This guide explains how to target .NET Framework 4.0 and 4.5 in VS 2022, which doesn't support these versions out of the box. Microsoft considers them outdated, but many people still need them to maintain legacy apps.
Requirements:
- VS 2022
- 7-zip (I haven't tested WinRar)
Notes:
- When .NET 4.0 is mentioned, it means .NET Framework 4.0
- When it's mentioned to open as an archive or un-zip, use 7-zip
The steps:
- Download the .NET 4.0 Reference Assemblies from https://www.nuget.org/api/v2/package/Microsoft.NETFramework.ReferenceAssemblies.net40/1.0.3
- Once downloaded, open them up as an archive
- Navigate to Build > .NETFramework
- Keep 7-zip open and open the folder C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework
- Delete the existing v4.0 (or make a backup)
- Extract the v4.0 folder from 7-zip into the folder (make sure to not extract the contents but the folder itself)
- Open Visual Studio 2022 and now you should see .NET 4.0 as a .NET Framework target
React Router V6 has a backwards compatibility package. Perhaps that's a middleground worth exploring?
You’re using:
"firebase-tools": "^14.24.1"
"firebase-functions": "6.6.0"
→ Problem: emulator support for firebase-functions v2 CloudEvent format was added only in v14.0.0+, and full support came later (~v15+).
Fix:
Upgrade Firebase CLI and dependencies:
npm install -g firebase-tools@latest
npm install firebase-functions@latest firebase-admin@latest
The Firebase Emulator does not yet fully support Node 22 runtime.
Your package.json says:
"engines": {
"node": "22"
}
Fix:
Downgrade temporarily to a supported version for local testing:
"engines": {
"node": "20"
}
Or use Node 20 in your local environment when emulating.
Separate v1 and v2 triggers into different files to avoid cross-import issues:
// index.js
exports.authTriggers = require('./auth');
exports.firestoreTriggers = require('./firestore');
That avoids old v1 helpers accidentally mixing with v2 definitions.
If you need confirmation that it’s not your logic:
firebase deploy --only functions:onPrivateUserDataUpdateTrigger
Of course the signal approach is better; the main reason is performance. Just add a console.log into the method and into the computed signal and compare how often each runs: you'll notice the method runs multiple times unnecessarily, while the computed signal is aware of its dependencies and only runs again when one of them changes.
The AI DeepSeek offered the following solution as an option:
Procedure PresentationGetProcessing(Data, Presentation, StandardProcessing)
    StandardProcessing = False;
    Try
        If Data.Ref = Undefined Then
            StandardProcessing = True;
            Return;
        EndIf;
        If Data.Ref.IsEmpty() Then
            StandardProcessing = True;
            Return;
        EndIf;
        fullDescr = Data.Ref.FullDescr();
        Presentation = StrReplace(fullDescr, "/", ", ");
    Except
        // If any error occurs, fall back to standard processing
        StandardProcessing = True;
    EndTry;
EndProcedure
But I am not sure that this is the correct way...
Looking forward to other answers...
I'm also just getting started with coding and recently came across the idea of contributing to open source. At first, I had no clue where to begin, but after doing a bit of digging, I realized you don’t need to work for a company to contribute.
What I’ve been doing is checking out projects on GitHub that seem interesting—especially ones using the language I’m learning. I look for issues labeled “good first issue” or “beginner friendly.” Those are perfect for people like us who are just starting out.
You don’t need to be an expert. You can help by fixing small bugs, improving documentation, or even just asking if there’s something you can do. Most open source communities are super welcoming and happy to guide you.
So yeah, if you’re thinking of jumping in, go for it! We’re all learning. Good luck!
It should be
template <typename T>
void push_front(T&& value) {
    // ...
    new_head->value = std::forward<T>(value);
    // ...
}
If value is bound to an rvalue (U&&), this resolves to new_head->value = std::move(value);.
If it is bound to a const U&, it resolves to new_head->value = value;.
See When is a reference a forwarding reference, and when is it an rvalue reference?, Is there a difference between universal references and forwarding references?
2025 edit:
Great answer to this problem here: Medium blog by Ted Goas
Rather than, say, wrapping an image in <a> tags, you instead have to wrap the <a> tags in table HTML.
I believe this is an iOS 26.1 bug unfortunately :(
I'm also facing the same issue: tests run just fine with a direct class name or even a direct package reference, but as soon as I try to run with groups, it fails with a null pointer exception.
I noticed that when groups are included in the XML, it doesn't even reach the @BeforeClass and @BeforeMethod methods from the superclass that are overridden in the test class, which is why the test tries to execute before the web drivers are initialized and ends up with a null pointer exception.
I've tried renaming the methods in the superclass to avoid collisions, but none of it worked. With the same configuration it runs fine if I remove the groups from the TestNG XML suite.
Note that the group names are correctly specified in the @Test annotations.
If the backend relies on Windows-only features (COM/Crystal Reports/Windows auth), I will tailor this to Windows (EB/ECS-Windows). Otherwise I would proceed with ECS Fargate + ALB host rules + S3/CloudFront; that'll give you per-app autoscaling, isolation, and lower ops without the VM Tetris.
If you are not tracing your project, I would suggest starting to trace immediately using LangSmith; it comes with built-in evaluators to evaluate your document retrieval, and more via the LLM-as-Judge option.
Understand that every dataset is unique; we cannot tell at first look. I would strongly encourage you to create a dataset in LangSmith and test with multiple versions. By that time you will have a rough idea of what works.
Although this is not an answer to the question (I still hope there is some clever way, as always in Haskell...), for reusing the determination of lower and upper bounds I came up with something like this:
import Prelude hiding ((<*>)) -- our (<*>) below would otherwise clash with Applicative's

data ValueRange = MkValueRange { minValue :: Double, maxValue :: Double }

instance Show ValueRange where
    show (MkValueRange minV maxV) = "ValueRange(" ++ show minV ++ ", " ++ show maxV ++ ")"

fmap2 :: (Double -> Double -> Double) -> ValueRange -> ValueRange -> ValueRange
fmap2 f (MkValueRange minA maxA) (MkValueRange minB maxB) =
    MkValueRange (minimum results) (maximum results)
  where
    results = [ f x y | x <- [minA, maxA], y <- [minB, maxB] ]

(<+>) :: ValueRange -> ValueRange -> ValueRange
(<+>) = fmap2 (+)

(<->) :: ValueRange -> ValueRange -> ValueRange
(<->) = fmap2 (-)

(<*>) :: ValueRange -> ValueRange -> ValueRange
(<*>) = fmap2 (*)

(</>) :: ValueRange -> ValueRange -> ValueRange
(</>) = fmap2 (/)
example1 :: ValueRange
example1 = MkValueRange 10 20 <+> MkValueRange 5 15
I got around the problem in the following way:
@Bean
public OpenAiApi openAiApi() {
    HttpClient httpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build();
    JdkClientHttpRequestFactory jdkClientHttpRequestFactory = new JdkClientHttpRequestFactory(httpClient);
    return OpenAiApi.builder()
            .apiKey(this.apiKey)
            .baseUrl(this.baseUrl)
            .restClientBuilder(
                    RestClient.builder()
                            .requestFactory(jdkClientHttpRequestFactory)
            ).build();
}
Credits to: https://github.com/spring-projects/spring-ai/issues/2653#issuecomment-2783528029
You can hide those details in a pure Python bootstrap like this. It is possible to "do it right" in shell, but it does not evolve well. A bootstrap like this hides those and other unnecessary details, which is especially helpful for a simple start.
Shopify maintains strict control over its checkout process for security reasons, so as far as I know it is not possible to integrate an unapproved payment service provider.
You can consider implementing this approach, which might work (a rough sketch of the webhook step follows the list):
Create a Draft Order
Create a PayGreen payment link for the draft’s total and include the draft order ID as metadata
Redirect the customer to PayGreen’s hosted page
On PayGreen webhook success, verify signature, then complete the draft and mark it paid with an external transaction (“PayGreen”)
On failure/timeout, cancel the draft and restock
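For concreteness, a rough Python sketch of the webhook step; the PayGreen signature scheme, header name, and secret are assumptions (check their docs), while the draft-order completion call follows the Shopify Admin REST API:
import hmac, hashlib, requests
from flask import Flask, request, abort

app = Flask(__name__)
PAYGREEN_SECRET = b"..."                     # assumption: PayGreen webhook secret
SHOP = "https://yourshop.myshopify.com"      # placeholder shop domain
HEADERS = {"X-Shopify-Access-Token": "..."}  # your Admin API token

@app.post("/paygreen/webhook")
def paygreen_success():
    # Assumption: PayGreen signs the raw body with HMAC-SHA256 in this header
    sig = request.headers.get("X-Paygreen-Signature", "")
    digest = hmac.new(PAYGREEN_SECRET, request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, digest):
        abort(401)
    # the draft order ID we attached as metadata when creating the payment link
    draft_id = request.json["metadata"]["draft_order_id"]
    # complete the draft and mark it paid
    requests.put(f"{SHOP}/admin/api/2024-01/draft_orders/{draft_id}/complete.json",
                 headers=HEADERS, params={"payment_pending": "false"})
    return "", 200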
Try reposting this as a regular question.
To answer my own question: OpenLayers doesn't allow dispatching custom events. cf. https://github.com/openlayers/openlayers/issues/14667
Turns out, I was using eglChooseConfig the wrong way. The fix was simple:
EGLint total_config_count;
success = eglChooseConfig(egl_display, config_attrs, NULL, 0, &total_config_count);
EGLConfig* all_configs = (EGLConfig*) malloc(sizeof(EGLConfig) * total_config_count);
success = eglChooseConfig(egl_display, config_attrs, all_configs, total_config_count, &total_config_count); //Changed the last argument from NULL
Hope this was useful!
Bro, I finally found the answer, after 5 days of going through everything.
The solution is changing one Build Settings setting named "Approachable Concurrency" to NO.
Or, if you have VS Code, open your project's project.pbxproj file and change it there.
If you have any questions or further additions please ask away.
I have resolved it. I found a project that can bypass it: https://github.com/JustLikeCheese/NexToast.
Ted Lyngmo answered the question (I don't know how to use comments to answer). I did not find a CMakeLists that worked in the GitHub repo at first, but when I looked again I saw one that worked for me. Thanks!
What worked for me was commenting out the code in android/app/src/main/kotlin/<app bundle name>/MainActivity.kt
You can use env variables for MONGO_USER and MONGO_PASSWORD. Remove MONGO_DATABASE and MONGO_CLUSTER from the .env file, and give the database name and cluster in the application properties file instead. Run the code; it worked for me.
When users log off the server, why not automatically run a VS Code Server kill and cleanup process? You could even schedule it daily to prevent multiple users from accumulating bloat over time. That way, system admins don’t have to manage manual cleanups. It’s easy to script and you could even trigger it when a VS Code window closes to make the process fully automated. Honestly, no production server should retain a .vscode-server directory long-term. It should be purged once a session ends or work is complete.
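A minimal sketch of that cleanup in Python, assuming home directories live under /home and using who to detect active sessions; run it from a daily cron job or a logoff hook:
import shutil, subprocess
from pathlib import Path

def purge_idle_vscode_servers():
    # users with an active session according to 'who'
    who_out = subprocess.run(["who"], capture_output=True, text=True).stdout
    logged_in = {line.split()[0] for line in who_out.splitlines() if line.split()}
    for home in Path("/home").iterdir():
        server_dir = home / ".vscode-server"
        if server_dir.is_dir() and home.name not in logged_in:
            shutil.rmtree(server_dir, ignore_errors=True)  # purge the per-user server files

purge_idle_vscode_servers()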
This answer works great! Thank you!
So the complete error message reads:
line 2870967: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 1
I checked out the line, and more parentheses are being closed than opened. But as I have said, this was created via the web UI dump directly from IONOS; it is not phpMyAdmin, and I don't know the exact command it uses. But there should not be any errors in it.
What you said, Allen, makes total sense. I guess before that line there is some data that cannot be parsed, and that way an opened parenthesis cannot be read.
Is there a fix for this?
I have contacted IONOS, and they suggested some fixes. I'll try them out on Monday. If anything works, I'll post it here for future reference.
The form information is visible on the admin page after adding the model to the admin.py file.
The form data was being saved; it just wasn't visible until the admin page was used.
Well, it seems I can use onNodesChange() from useVueFlow(). I simply have to implement logic to distinguish a single node being selected from multiple nodes being selected, because the routine is called for every single selection event. So the problem is in my code, not in vue-flow. Sorry for the noise.
Right now, it creates them all as folders (including main.c), because that's what `create_dir_all` does.
Just create the file separately after creating the directories.
There are some applications and services that remove any dev-certs you add, in order to keep a specific certificate (such as an organization's certificate) always as the first priority. Look for such an app or service on your computer. I had one installed from a previous job and didn't know it was removing them.
How to find the app or process: install Procmon from here. Then add a filter on Path contains Microsoft\SystemCertificates\My\Certificates, and look for the culprit in the Process Name column. Find its exe and remove it from your computer (via Programs and Features).
As @Barmar said in the comments, there is no built-in way to check whether the child process is about to read from the pipe. Even standard terminals allow input during process execution. So when I run the following code, I can enter text while the child runs; since it is not processed by the child, the shell tries to execute it after the process terminates.
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int a[2], b[2], child_read, child_write;
    char output[4] = {0};

    pipe(a);
    pipe(b);
    child_write = a[1];
    child_read = b[0];
    FILE *parent_write = fdopen(b[1], "w");
    FILE *parent_read = fdopen(a[0], "r");
    if (fork() == 0) {
        close(b[1]);
        close(a[0]);
        fclose(parent_read);
        fclose(parent_write);
        dup2(child_read, STDIN_FILENO);
        dup2(child_write, STDOUT_FILENO);
        sleep(3);
        printf("ok\n");
        close(child_read);
        close(child_write);
        exit(0);
    }
    close(child_read);
    close(child_write);
    if (fread(output, 1, 3, parent_read) > 0) {
        printf("%s\n", output);
    }
    fclose(parent_read);
    fclose(parent_write);
    close(b[1]);
    close(a[0]);
}
$ gcc -o test test.c ; ./test
ls
ok
$ ls
test test.c
@herrstrietzel you're right, setting up the correct environment can definitely help reproduce the problem, but I wanted to give a minimal snippet to stay away from complexity. But here you go, the full setup:
Windows 10, latest Edge browser
Latest Next.js + Tailwind CSS + shadcn
CSS code is in globals.css
Pages are in TypeScript
The problem happens with both inputs (the shadcn input and the Next.js default input): they clip this font for no apparent reason.
I think you need to open port 8000 in your EC2 Security Group and UFW, since Django's dev server runs on that port and it's currently blocked externally. Thanks.
This is a wonderful article that I think answers your question: https://betweendata.io/posts/secure-spring-rest-api-using-keycloak/
I updated the code as follows, and the result was successful.
________________________________________________________
<template>
<VueDatePicker v-model="date" class="vue-datepicker"></VueDatePicker>
</template>
<style>
.vue-datepicker .dp__cell_inner {
  height: 56px !important;
  width: 56px !important;
  font-size: 3rem;
  padding: 35px !important;
}
.vue-datepicker .dp__cell_inner:hover {
  background-color: #f0f0f0 !important;
}
.vue-datepicker .dp__cell_inner:active {
  background-color: #e0e0e0 !important;
}
.vue-datepicker .dp__cell_inner:focus {
  background-color: #d0d0d0 !important;
}
</style>
<script setup>
import { ref } from 'vue';
import VueDatePicker from '@vuepic/vue-datepicker';
import '@vuepic/vue-datepicker/dist/main.css'
const date = ref();
</script>
The problem was resolved; there was nothing wrong with settings.py, but a database field conflicted with the previous migrations. Once I filled the field with data, the error stopped and the Django site worked fully.
What a lot of answers, and I've not seen one that exploits the sort function itself to build the list of duplicates as the original list is sorted. The top answer is order n + n log n; this answer is order n log n.
const a = [1, 7, 2, 3, 4, 5, 6, 1, 4];
const dups = new Set();
a.slice().sort((x, y) => { if (x < y) return -1; else if (x > y) return 1; else { dups.add(x); return 0; } });
@Timeless000729 there are still some missing definitions, like grad and Uncertainties.
Then where does HashMap come from? You should include an open statement, and possibly an #r "nuget: ..." directive if it comes from a library. You say diffs is a list of floats; OK, then include it. You don't need the exact definition if it's not relevant, but at least something that makes it compile.
If I take your code, put it in a script, and it doesn't compile, not because of the problem you mention but because of a missing definition, how can I possibly help you?
select a, b, c, sum(diffCnt)
from (select a, b, c, 1 as diffCnt from T1
      union all
      select a, b, c, -1 as diffCnt from T2) tmpt
group by a, b, c
having sum(diffCnt) <> 0
The discrepancy arises because the user is calculating R-squared for the univariate regressions incorrectly by subtracting the mean of Y, which is not appropriate for models without an intercept. To align with the geometric interpretation, R-squared should be computed as the squared ratio of the norm of the predicted values to the norm of Y. Here's how to fix it:
1. Replace costheta1 = np.linalg.norm(Y1 - np.mean(Y)) / np.linalg.norm(Y - np.mean(Y)) with costheta1 = np.linalg.norm(Y1) / np.linalg.norm(Y).
2. Similarly, replace costheta2 = ... with costheta2 = np.linalg.norm(Y2) / np.linalg.norm(Y).
This adjustment ensures that the calculated R-squared values (cos²(theta)) for the univariate regressions match the angles (45° and 120°) set in the script.
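In numpy terms the fix is just a ratio of norms (the Y vectors here are placeholders for the ones in the script):
import numpy as np

Y  = np.array([3.0, 1.0])   # placeholder: the target vector
Y1 = np.array([2.0, 2.0])   # placeholder: fitted values from regressor 1
Y2 = np.array([-1.0, 1.0])  # placeholder: fitted values from regressor 2

# for no-intercept models, R^2 = cos^2(theta) = (|Y_hat| / |Y|)^2, with no mean-centering
costheta1 = np.linalg.norm(Y1) / np.linalg.norm(Y)
costheta2 = np.linalg.norm(Y2) / np.linalg.norm(Y)
print(costheta1**2, costheta2**2)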
If you're using the docker container like me, you have missed to pass:
-c 'config_file=/etc/postgresql/postgresql.conf'
// Source - https://stackoverflow.com/a/50544587
// Posted by Danny Tuppeny
// Retrieved 2025-11-09, License - CC BY-SA 4.0
Future getData() async {
  await new Future.delayed(const Duration(seconds: 5));
  setState(() {
    _data = [
      {"title": "one"},
      {"title": "two"},
    ];
  });
}
\n
If you google "telegram bot add new line", this question is at the top, but there's no \n among the answers.
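For example, a minimal sendMessage call via the Bot API (token and chat ID are placeholders):
import requests

TOKEN, CHAT_ID = "123456:ABC...", "987654321"  # placeholders
requests.post(f"https://api.telegram.org/bot{TOKEN}/sendMessage",
              data={"chat_id": CHAT_ID, "text": "line one\nline two"})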
Is there a way to change the color of the previously active tab?
Addressing render-blocking resources in WordPress without using a plugin requires direct code changes, primarily in your theme's functions.php file and sometimes in your HTML header. It focuses on how the browser prioritizes loading JavaScript and CSS.
For a step-by-step implementation guide that includes the exact code snippets and detailed procedures for handling both JS deferral and Critical CSS extraction without relying on a plugin, I've covered the complete process in this guide:
How to eliminate render blocking resources in wordpress without plugin
Got it — that “flash of light background” when switching to or loading dark mode is a common issue, and you’re right that it’s usually a kind of FOUC (Flash of Unstyled Content).
Before diving into fixes, could you share the following?
1. The way you’re currently handling dark mode (e.g., CSS prefers-color-scheme, JS toggle, or CSS class like .dark on <html>).
2. Whether you’re using a framework (like React, Next.js, Astro, etc.) or plain HTML/CSS/JS.
3. The part of your code that sets the dark/light theme (HTML head + relevant CSS/JS).
Once I see that, I can pinpoint why the flash is happening and give you an exact fix (for example, inlining a script in the <head> that applies the theme before paint).
If you want, you can paste or upload the code for:
your <head> section, and
your dark mode script or CSS toggle code.
Would you like me to explain the general reasons and fixes for this issue first (so you can understand the concept), or do you want to go straight into fixing your specific code?
Dimensionality reduction ended up helping a lot: it stopped my code from crashing and reduced the column count from 34,000 (after encoding with a one-hot encoder) down to 150. I used a pipeline with a ColumnTransformer (sparse output) and truncated SVD; code posted below:
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical = ["developers", "publishers", "categories", "genres", "tags"]
numeric = ["price", "windows", "mac", "linux"]
ct = ColumnTransformer(transformers=[("ohe", OneHotEncoder(handle_unknown='ignore', sparse_output=True), categorical)],
                       remainder='passthrough',
                       sparse_threshold=0.0)
svd = TruncatedSVD(n_components=150, random_state=42)
pipeline = Pipeline([("ct", ct), ("svd", svd), ("clf", BernoulliNB())])
X = randomizedDf[categorical + numeric]
y = randomizedDf['recommendation']
X = randomizedDf[categorical + numeric]
y = randomizedDf['recommendation']
this brought my shape down to (11200, 300) for training data.
Try the below code in your refresh endpoint; this seems to work for me.
if (!Request.Cookies.TryGetValue("refreshToken", out string? refreshToken) ||
    string.IsNullOrEmpty(refreshToken))
{
    return Unauthorized();
}
It's a dead project, and the links for Buildozer are not active; it's a mess, not very centralized.
Mostly, the language (or JS) isn't fast: a download of 2 MB took 12 hours and got corrupted.
Which operating system are you using?
Is the "PHP Server" functionality you are using a built-in feature of your version of Visual Studio Code (if so, please share that version), or does it come from an extension (if so, please share the extension's name, ID, and version)?
And as I read from your question, you don't like to click, which I sympathize with. Would you please be so kind as to share how you start your editor and how you pass in the information about where the project is located on your system (the directory)?