Adding two conditional formatting rules before your 'main' conditional formatting rule, I got this result:
Cell Value < lower limit - no formatting;
Cell Value > upper limit - no formatting;
Make sure to select 'stop if true' on these first 2 rules.
The comment of @eftshift0 on "Rebasing all branches on new initial commit" pointed me in the right direction:
I've just rewritten the history with git-filter-repo, using this example script:
https://github.com/newren/git-filter-repo/blob/main/contrib/filter-repo-demos/insert-beginning
It does not create a new commit at the root of the repository, but just adds the file so it is available in every commit.
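For reference, a rough invocation sketch (the --file flag is my reading of the demo script, so check its --help first; paths are placeholders):

git clone https://github.com/your/repo.git repo-fresh   # git-filter-repo wants a fresh clone
cd repo-fresh
python3 /path/to/git-filter-repo/contrib/filter-repo-demos/insert-beginning --file LICENSE.txt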
For example, if you have three percentages like 70%, 80%, and 90%, you add them up (240) and then divide by 3, which gives you an average percentage of 80%.
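The same computation as a quick Python sketch:

percentages = [70, 80, 90]
average = sum(percentages) / len(percentages)  # 240 / 3 = 80.0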
It sounds like you’re running into the classic challenges of applying GPA + PCA to complex 3D anatomy like vertebrae. From what you describe, there are a few reasons why your ASM fitting is going “off”:
Insufficient or inconsistent correspondences
Active Shape Models (ASM) work best when each landmark has a consistent semantic meaning across all shapes. Vertebrae have complex topology, and even after Procrustes alignment, landmarks may not correspond exactly between meshes.
Using closest points for surface-based fitting can lead to mismatched correspondences, especially on highly curved or irregular regions.
Large shape variability / non-overlapping regions
If parts of your vertebrae are displaced or have high variability, the mean shape may not represent all instances well. PCA will then project shapes onto modes that don’t match the local geometry, producing unrealistic fits.
Scaling / alignment issues
You are doing similarity Procrustes alignment (scaling + rotation + translation), which is generally good, but when using surface points instead of annotated landmarks, slight misalignments can propagate and distort PCA projections.
Step size / iterative fitting
In your iterative ASM, step_size=0.5 may overshoot or undershoot. Sometimes, reducing the step size and increasing iterations helps stabilize convergence.
Too few points / too sparse sampling
Sampling only 1000 points on a vertebra mesh may not capture all the intricate features needed for proper alignment. Denser sampling or using semantically meaningful points (e.g., tips of processes, endplates) improves GPA convergence.
Flattening for PCA
Flattening 3D coordinates for PCA ignores the spatial structure. For complex anatomical shapes, methods like point distribution models (PDM) with mesh connectivity, or non-linear dimensionality reduction, can sometimes work better.
Suggestions:
Increase landmark consistency: Make sure points correspond anatomically across all vertebrae. Consider manual annotation for critical points.
Refine initial alignment: Before fitting ASM, ensure the meshes are roughly aligned (translation, rotation, maybe even rigid ICP). Avoid large initial offsets.
Reduce PCA modes or increase data: If your dataset is small (7 vertebrae for landmarks, 40 for surfaces), PCA may overfit. More training shapes help.
Use robust correspondence methods: Instead of just nearest points, consider geodesic or feature-based correspondences.
Check scaling: Surface-based fitting may benefit from rigid alignment without scaling, to avoid distortion.
Visualize intermediate steps: Plot each iteration to see where it diverges—sometimes only a few points cause the misalignment.
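To make the GPA + PCA pipeline concrete, here is a minimal NumPy sketch that assumes landmarks already correspond across shapes (names and random data are illustrative, not your pipeline):

import numpy as np

def procrustes_align(X, Y):
    # Similarity-align Y (n x 3) onto X (n x 3): optimal scale + rotation + translation.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                          # optimal rotation (reflections not handled)
    s = S.sum() / (Yc ** 2).sum()       # optimal scale
    return s * Yc @ R + X.mean(0)

def gpa(shapes, iters=10):
    # Generalized Procrustes: repeatedly align all shapes to an evolving mean.
    mean = shapes[0]
    for _ in range(iters):
        shapes = [procrustes_align(mean, s) for s in shapes]
        mean = np.mean(shapes, axis=0)
    return shapes, mean

# Illustrative data: 40 shapes, 1000 corresponding landmarks each
shapes = [np.random.randn(1000, 3) for _ in range(40)]
aligned, mean_shape = gpa(shapes)

# PCA on the flattened residuals gives the modes of variation
X = np.array([s.ravel() for s in aligned]) - mean_shape.ravel()
U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes, variances = Vt, (S ** 2) / (len(X) - 1)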
You've divided the screen into 8 parts (flex: 7 + flex: 1). Try 8:2 or 9:1 in flex. If that does not work, wrap your main content (the welcome text) in an Expanded widget and place your button section directly after the Expanded widget in the Column, as sketched below.
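A minimal sketch of that layout (widget contents are placeholders):

Column(
  children: [
    Expanded(
      // main content takes all remaining space
      child: Center(child: Text('Welcome')),
    ),
    // button section keeps its natural height below the Expanded widget
    ElevatedButton(onPressed: () {}, child: Text('Continue')),
  ],
)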
Old question, but I had the same issue. With python3 -v -m pip install .. I saw it got stuck on the netrc import; disabling IPv6 with sysctl -w net.ipv6.conf.all.disable_ipv6=1 fixed my issue.
As one comment pointed out, the problem can be solved by passing the following as a parameter to CallMethod():
Something{ m_something }
So the actual line of code would look like this:
CallMethod( Something{ m_something } );
Use the DataGrid.LoadingRow event and attach it to the DataGrid.
Official documentation: https://learn.microsoft.com/en-us/dotnet/api/system.windows.controls.datagrid.loadingrow
<DataGrid x:Name="DataGrid"
          SelectedItem="{Binding SelectedSupplier, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
          ItemsSource="{Binding SuppliersList, Mode=OneWay}"
          AutoGenerateColumns="False"
          LoadingRow="DataGrid_LoadingRow">
Now define the function DataGrid_LoadingRow and then disable the row.
if (e.Row.GetIndex() == 0) e.Row.IsEnabled = false;
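For completeness, a minimal sketch of that handler with its full signature (standard code-behind assumed):

private void DataGrid_LoadingRow(object sender, DataGridRowEventArgs e)
{
    // Disable only the first row; GetIndex() returns the row's display index.
    if (e.Row.GetIndex() == 0)
        e.Row.IsEnabled = false;
}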
When creating the interaction:
const drawInteraction = new ol.interaction.Draw({
  source: source,
  type: 'Point'
});
drawInteraction.setProperties({ somePropertyName: true });
map.addInteraction(drawInteraction);
When you need to remove this interaction:
const interactions = map.getInteractions().getArray().slice();
interactions.forEach(int => {
  if (int.getProperties().somePropertyName) map.removeInteraction(int);
});
I get what you are requesting. After you have sorted and highlighted all the files whose paths you want to copy, right-click on the selected file at the top of the pack and choose "Copy as path". That should give you the sorted order that you want.
Yes, declaring a variable as Int32 means it always takes up 32 bits (4 bytes) of memory, no matter what value it holds. Even if the value is just 1, it’s still stored using the full 32-bit space. That’s because Int32 is a fixed-size type, and the memory is allocated based on the type, not the value. This helps with performance and consistency in memory layout.
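You can see this in C# with sizeof, which reports the type's size rather than anything about the value:

Console.WriteLine(sizeof(int));  // 4 bytes, whether the value is 1 or int.MaxValue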
In app/build.gradle, add:
android {
    ...
    packagingOptions {
        jniLibs {
            useLegacyPackaging = true
        }
    }
}
https://developer.android.com/guide/topics/manifest/application-element
A clear tutorial to solve the problem: https://dev.to/yunshan_li/setting-up-your-own-github-remote-repository-on-a-shared-server-kom
ADF still does not support deleting records from Salesforce. Still, there might be an alternative (see the latest message on this page):
instanceof seems to work for class types only.
Good to know that Java 24 supports instanceof with primitive types, as introduced in JEP 488.
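A small sketch of what JEP 488 allows (a preview feature, so JDK 24 needs --enable-preview):

public class PrimitivePattern {
    public static void main(String[] args) {
        int value = 42;
        if (value instanceof byte b) {  // matches only when the int fits in a byte
            System.out.println("fits in a byte: " + b);
        }
    }
}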
In my case, it worked by using npx bubblewrap build.
I don't think that not merging the two R segments is any kind of failure; rather, it's for performance. R segment 2 contains infrequently accessed sections and R segment 4 contains frequently accessed sections. Keeping them separate is better for paging and caching.
I had a task: create a row at the next index after the last one, adding a value only to the first field and automatically setting NaN for the rest of the fields. I solved it like this:
df1.loc[df1.index[-1] + 1] = ['2025-08-01' if i == 0 else np.nan for i in range(len(list(df1)))]
How about ContinuousClock.Instant?
It turned out that I had mixed the multipart/form-data and application/octet-stream approaches.
The correct Kotlin code for the Ktor client to upload to Cloudflare R2 is:
suspend fun uploadS3File2(
    url: String,
    file: File
) = client.put(url) {
    setBody(file.readChannel())
    headers {
        append(HttpHeaders.ContentType, ContentType.Application.OctetStream)
        append(HttpHeaders.ContentLength, "${file.length()}")
    }
}
from PIL import Image

# Open the previously saved PNG and convert to JPG
png_path = "/mnt/data/Online_GSS_Lead_Table.png"
jpg_path = "/mnt/data/Online_GSS_Lead_Table.jpg"

# Convert and save
with Image.open(png_path) as img:
    rgb_img = img.convert("RGB")
    rgb_img.save(jpg_path, "JPEG")

jpg_path
Most popular platforms provide their own OAuth 2.0 documentation, which can be integrated directly into a custom plugin or even within your theme’s functions.php file, depending on your project requirements.
Alternatively, you may consider using the Simple JWT Login plugin, which comes with a built-in Google OAuth 2.0 configuration out of the box. This plugin is highly extensible, as it offers multiple hooks and filters that make customization straightforward.
To tailor the functionality to your needs, you can leverage these hooks to modify authentication flows, user handling, or token management. Well-structured documentation is available for these modification points, ensuring developers can adapt the plugin seamlessly without heavy code rewrites.
Reference Links:
Google OAuth 2.0
Facebook OAuth 2.0
Add "use client" to the top of where you initialised you react query provider
Did you find a fix for this? I think I am seeing the same problem. When I add a marker to my array of markers via long press, it doesn't appear until after the next marker is added....
If I add key={markers.length} to MapView this fixes the problem of the newest marker not showing, by forcing a reload of the map. But reloading the map is not ideal because it defaults back to its initial settings and disrupts the user experience.
My code:
import MapView, { Marker } from "react-native-maps";
import { StyleSheet, View } from "react-native";
import { useState } from "react";

function Map() {
  const [markers, setMarkers] = useState([]);

  const addMarker = (e) => {
    const { latitude, longitude } = e.nativeEvent.coordinate;
    setMarkers((prev) => [
      ...prev,
      { id: Date.now().toString() + markers.length, latitude, longitude },
    ]);
  };

  return (
    <View style={styles.container}>
      <MapView
        style={styles.map}
        initialRegion={{
          latitude: 53.349063173157184,
          longitude: -6.27913410975665,
          latitudeDelta: 0.0922,
          longitudeDelta: 0.0421,
        }}
        onLongPress={addMarker}
      >
        {markers.map((m) => {
          console.log(m);
          return (
            <Marker
              key={m.id}
              identifier={m.id}
              coordinate={{ latitude: m.latitude, longitude: m.longitude }}
            />
          );
        })}
      </MapView>
    </View>
  );
}

export default Map;

const styles = StyleSheet.create({
  container: {
    // flex: 1,
  },
  map: {
    width: "100%",
    height: "100%",
  },
  button: {
    position: "absolute",
    top: 10,
    right: 10,
    width: 80,
    height: 80,
    borderRadius: 10,
    overflow: "hidden",
    borderWidth: 2,
    borderColor: "#fff",
    backgroundColor: "#ccc",
    elevation: 5,
  },
  previewMap: {
    flex: 1,
  },
});
This hasn't been mentioned, but one possible solution could be to add the following to the __init__.py of the folder containing the modules (for example, if it's the objects folder inside the project project):
# project/objects/__init__.py
import importlib

homePageLib = importlib.import_module(
    "project.objects.homePageLib"
)
calendarLib = importlib.import_module(
    "project.objects.calendarLib"
)
Then, in each of the modules homePageLib and calendarLib, do the import as follows:
from project.objects import homePageLib
or
from project.objects import calendarLib
and to use it inside:
return calendarLib.CalendarPage()
Try looking at NativeWind as well.
I have a quick solution to this. Update this line with a default parameter EmptyTuple:
inline def makeString[T <: Tuple](x: T = EmptyTuple): String = arg2String(x).mkString(",")
Here it is in scastie:
For now, this is my conclusion on how to access the required value from within MinecraftServer.class:
@Override
@Nullable
public ReloadableServerResources codec$getResources() {
    try {
        Field resources = MinecraftServer.class.getDeclaredField("resources");
        resources.setAccessible(true);
        Method managers = resources.getType().getDeclaredMethod("managers");
        managers.setAccessible(true);
        Object reloadableResources = resources.get(this);
        return (ReloadableServerResources) managers.invoke(reloadableResources);
    } catch (Exception e) {
        return null;
    }
}
public class UITestAttribute : TestAttribute
{
    public new void ApplyToTest(Test test)
    {
        base.ApplyToTest(test);
        new RequiresThreadAttribute(ApartmentState.STA).ApplyToTest(test);
    }
}
I ran into the same error with the shadcn chart and sidebar button components. When the error shows, Next.js displays the offending component and line of code. I went in and added id tags where I call said components, which resolved the hydration server-client mismatch.
.NET Entity Framework 6+: just add the following to the scaffold command:
-NoPluralize
I solved this problem by placing the displays (in Parameters -> System -> Displays) in a straight row.
Unfortunately, no, there’s no safe way to fully hide an OpenAI API key in a frontend-only React app. Any key you put in the client code or request headers can be seen in the browser or network tab, so it’s always exposed.
The standard solutions are:
1. Use a backend (Node.js, serverless functions, Firebase Cloud Functions, etc.) to proxy requests. Your React app calls your backend, which adds the API key and forwards the request. This keeps the key secret.
2. Use OpenAI's client-side tools with ephemeral keys if available (like some limited use cases in OpenAI's examples), but these are temporary and still limited.
Without a backend, there's no fully secure option: anyone could copy the key and make API calls themselves. For production apps, a backend or serverless proxy is mandatory.
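As an illustration of option 1, a minimal Express proxy sketch (endpoint path, port, and model name are placeholders; Node 18+ for global fetch):

import express from "express";

const app = express();
app.use(express.json());

app.post("/api/chat", async (req, res) => {
  // The key lives only on the server, read from an environment variable.
  const r = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages: req.body.messages }),
  });
  res.status(r.status).json(await r.json());
});

app.listen(3000);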
Title: Standardizing showDatePicker date format to dd/MM/yyyy in Flutter
Question / Issue:
Users can manually type dates in mm/dd/yyyy format while most of the app expects dd/MM/yyyy. This causes parsing errors and inconsistent date formats across the app.
I want to standardize the showDatePicker so that either:
The picker respects dd/MM/yyyy based on locale.
Manual input parsing is handled safely in dd/MM/yyyy.
Reference: https://github.com/flutter/flutter/issues/62401
Solution 1: Using Flutter Localization
You can force the picker to follow a locale that uses dd/MM/yyyy (UK or India):
# In pubspec.yaml, under dependencies:
flutter_localizations:
  sdk: flutter
// MaterialApp setup
MaterialApp(
  title: 'APP NAME',
  localizationsDelegates: const [
    GlobalMaterialLocalizations.delegate,
    GlobalWidgetsLocalizations.delegate,
    GlobalCupertinoLocalizations.delegate,
  ],
  supportedLocales: const [
    Locale('en', 'GB'), // UK English = dd/MM/yyyy
    Locale('ar', 'AE'), // Arabic, UAE
    Locale('en', 'IN'), // Indian English = dd/MM/yyyy
  ],
  home: MyHomePage(title: 'Flutter Demo Home Page'),
);
// DatePicker usage
await showDatePicker(
  locale: const Locale('en', 'GB'), // or Locale('en', 'IN')
  context: context,
  fieldHintText: 'dd/MM/yyyy',
  initialDate: selectedDate,
  firstDate: DateTime(1970, 8),
  lastDate: DateTime(2101),
);
✅ Pros: Works with stock showDatePicker.
⚠️ Cons: Requires adding flutter_localizations to pubspec.
Solution 2: Using a Custom CalendarDelegate
You can extend GregorianCalendarDelegate and override parseCompactDate to handle manual input safely:
// Requires intl: import 'package:intl/intl.dart'; for DateFormat
class CustomCalendarGregorianCalendarDelegate extends GregorianCalendarDelegate {
  const CustomCalendarGregorianCalendarDelegate();

  @override
  DateTime? parseCompactDate(String? inputString, MaterialLocalizations localizations) {
    if (inputString == null || inputString.isEmpty) return null;
    try {
      // First, try dd/MM/yyyy
      return DateFormat('dd/MM/yyyy').parseStrict(inputString);
    } catch (_) {
      try {
        // Fallback: MM/dd/yyyy
        return DateFormat('MM/dd/yyyy').parseStrict(inputString);
      } catch (_) {
        return null;
      }
    }
  }
}
Usage:
await showDatePicker(
  context: context,
  fieldHintText: 'dd/MM/yyyy',
  initialDate: selectedDate,
  firstDate: DateTime(1970, 8),
  lastDate: DateTime(2101),
  calendarDelegate: CustomCalendarGregorianCalendarDelegate(),
);
✅ Pros: Full control over manual input parsing, no extra pubspec assets required.
⚠️ Cons: Requires using a picker/widget that supports custom CalendarDelegate.
Recommendation:
Use Flutter localization for a quick standard solution.
Use CustomCalendarGregorianCalendarDelegate for strict manual input handling or if flutter_localizations cannot be added.
Unfortunately, Android Studio doesn't have an option/setting to disable this. It assumes that once you refactor a file, you want to take a look at the result and thus opens it in the editor.
LOVE is the answer:
12 = L (its position in the alphabet)
15 = O
22 = V
05 = E
Go to android/app/build.gradle and change the versions as shown below.
compileOptions {
sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
}
kotlinOptions {
jvmTarget = JavaVersion.VERSION_17
}
Eclipse doesn’t provide a direct global setting to always use Java Compare for Java files, but you can set it per file type:
Go to Window → Preferences → General → Editors → File Associations.
Find .java in the file types list.
In the Associated editors section, select Java Compare and click Default.
After this, whenever you open a Java file for comparison, Eclipse should prefer the Java Compare editor instead of the generic text compare.
If Git still opens the standard compare, a workaround is to right-click the file → Compare With → HEAD, then manually select Java Compare the first time; Eclipse usually remembers it for future comparisons.
Eclipse doesn’t have a built-in preference to force Git staging view to always use Java Compare globally.
You can’t really hide your API key in a React app because anything in the frontend is visible to the user (including the key in the network tab). So, calling OpenAI directly from the frontend will always expose it.
To keep your key safe, the best option is to use a backend (like Node.js/Express or Python) to make the request for you. That way, the API key stays hidden from the user.
If you don’t want to deal with a full backend, you could try using serverless functions (like Vercel or Netlify), which essentially act as tiny backends to handle the API call securely.
In short, you need some kind of backend to protect the key — no way around that for security reasons.
One new algorithm that you might not be aware of is Gloria. It is not neural-network-based like your current approach, but it is state-of-the-art in the sense that it significantly improves on the well-known Prophet.
Online training is not yet available (i.e., updating existing models based on the latest new data points), but a warm start is on our roadmap for the upcoming minor release (see issue #57), which should significantly speed up re-training your models with new data.
As Gloria outputs lower and upper confidence intervals, simple distance-based anomaly detection is very straightforward. Based on the data type you are using, you have a number of different distribution models available (non-negative models, models with upper bounds, count data, ...). These will give you very reliable bounds for precise anomaly detection. With a little extra work, you will even be able to assign a p-value-like probability of being an anomaly to your data points.
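As a generic illustration of the interval check (this is not Gloria's actual API; it works with per-point bounds from any forecaster):

def flag_anomalies(values, lower, upper):
    # A point is anomalous when it falls outside its predicted interval.
    return [not (lo <= v <= hi) for v, lo, hi in zip(values, lower, upper)]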
import torch.multiprocessing as mp
import torch

def foo(worker, tl):
    tl[worker] += (worker + 1) * 1000

if __name__ == '__main__':
    mp.set_start_method('spawn')
    tl = [torch.randn(2,), torch.randn(3,)]
    # for t in tl:
    #     t.share_memory_()
    print("before mp: tl=")
    print(tl)
    p0 = mp.Process(target=foo, args=(0, tl))
    p1 = mp.Process(target=foo, args=(1, tl))
    p0.start()
    p1.start()
    p0.join()
    p1.join()
    print("after mp: tl=")
    print(tl)

# The running result:
# before mp: tl=
# [tensor([1.7138, 0.0069]), tensor([-0.6838, 2.7146, 0.2787])]
# after mp: tl=
# [tensor([1001.7137, 1000.0069]), tensor([1999.3162, 2002.7146, 2000.2787])]
I have another question: as long as mp.set_start_method('spawn') is used, even if I comment out t.share_memory_(), tl is still modified.
suppressScrollOnNewData={true}
getRowId={getRowId}
It looks like hyperlinks in the terminal are broken again in WebStorm 2025 (at least if the path to the file is relative). For those looking for a solution, there is a plugin https://plugins.jetbrains.com/plugin/7677-awesome-console that fixes the problem
Maybe this variant with grouping will do the trick?
df = df.assign(grp=df[0].str.contains(r"\++").cumsum())
res = df.groupby("grp").apply(
    lambda x: x.iloc[-3, 2] if "truck" in x[1].values else None,
    include_groups=False,
).dropna()
Does anyone have a clear idea about this issue, or has anyone found a solution? Kindly share your experience.
AbandonedConnectionTimeout is set to 15 minutes and InactivityTimeout to 30 minutes: will this work?
When I do something like this I usually just use the date command. Perhaps if I run a command that takes a while and I want to see about how long it ran I run something like...
(date && COMMAND && date) > output.txt
Then when I look in the output file, it will show the date before the command starts, and after the command finishes. In Perl the code would look something like this...
$ perl -e '$cmd=q(date && echo "sleeping 3 seconds" && sleep 3 && date); print for(`$cmd`);'
Thu Aug 21 02:54:45 AM CDT 2025
sleeping 3 seconds
Thu Aug 21 02:54:48 AM CDT 2025
So if you wanted to print out the time in a logfile you could do something like this...
#!/usr/bin/perl -w
open(my $fh, ">", "logfile.txt");
my ($dateCommand, $sleepCommand, $date, $sleep);
$dateCommand = "date";
$sleepCommand = "sleep 3";
chomp($date =`$dateCommand`);
print $fh "LOG: Stuff happened at time: $date\n";
chomp($date = `$dateCommand && echo "sleeping for 3 seconds" && $sleepCommand && $dateCommand`);
print $fh "LOG: Following line is command output surrounded by date\n\n$date\n";
if(1){ #this is how you can put the date in error messages
chomp($date = `$dateCommand`);
die("ERROR: something happened at time: $date\n");
}
Output looks like this
$ perl date.in.logfile.pl
ERROR: something happened at time: Thu Aug 21 02:55:54 AM CDT 2025
Compilation exited abnormally with code 255 at Thu Aug 21 02:55:54
$ more logfile.txt
LOG: Stuff happened at time: Thu Aug 21 02:55:51 AM CDT 2025
LOG: Following line is command output surrounded by date
Thu Aug 21 02:55:51 AM CDT 2025
sleeping for 3 seconds
Thu Aug 21 02:55:54 AM CDT 2025
If you only wanted a specific time field instead of the entire date, you could run the date command and separate it with a regular expression like so...
#!/usr/bin/perl -w
$cmd="date";
$date=`$cmd`;
$date=~/(\w+) (\w+) (\d+) ([\d:]+) (\w+) (\w+) (\d+)/;
my ($dayOfWeek, $month, $day, $time, $meridiem, $timeZone, $year) =
($1, $2, $3, $4, $5, $6, $7);
#used printf to align columns to -11 and -8
printf("%-11s : %-8s\n", "Day of week", $dayOfWeek);
printf("%-11s : %-8s\n", "Month", $month);
printf("%-11s : %-8s\n", "Day", $day);
printf("%-11s : %-8s\n", "Time", $time);
printf("%-11s : %-8s\n", "Meridiem",$meridiem );
printf("%-11s : %-8s\n", "Timezone", $timeZone);
printf("%-11s : %-8s\n", "Year", $year);
Output looks like this...
$ perl date.pl
Day of week : Thu
Month : Aug
Day : 21
Time : 03:25:05
Meridiem : AM
Timezone : CDT
Year : 2025
ARG USER
ARG GROUP
RUN useradd "$USER"
USER "$USER":"$GROUP"
I found the explanation myself. It seems the error was triggered not by comments but by file size.
I ended up refactoring the ApexCharts options in a separate file, and that got rid of the error.
So it seems that webpack has some issues with big configuration files (I'm not sure exactly what), but reducing the file size clearly solved the issue.
It does not care about comments directly; most likely the comments get stripped at compilation, which affects the resulting file size, so the effect I saw when playing around with comments in my question above was indirect.
This question is a duplicate of Expo unable to resolve module expo-router.
Try the answer added to that question.
This error means your Android device doesn't have a Lock Screen Knowledge Factor (LSKF) set up - basically, no screen lock protection.
Quick fix:
Go to Settings → Security (or Lock Screen)
Set up a screen lock:
🔸 PIN
🔸 Pattern
🔸 Password
🔸 Fingerprint
🔸 Face unlock
Why does this happen?
Your app is trying to use Android's secure keystore, but the system requires some form of screen lock to protect the keys. Without it, Android won't let apps store sensitive data securely.
Steps:
Open Settings
Find "Security" or "Lock Screen"
Choose "Screen Lock"
Pick any method (PIN is quickest)
Restart your app
That should fix it. The keystore needs device security to work properly.
Right click on the variable and click on the Rename Symbol option, this option will only rename the correct ABC (str vs bool).
You can alternatively press F2 as well to do this.
Just open VS Code in the folder that contains the Scripts folder.
Then activate your virtual environment. Create an .ipynb notebook, put some code in it, and at the top right you can select the kernel. The name of the env will be the same as the name of your folder.
VS Code will auto-detect this environment, even when you restart the editor. Once you activate the environment, click the kernel selector and reselect the environment. I have a cell that shows me the number of libraries installed in the venv, which helps me check whether VS Code is using the correct env or not (in my main Python I have only 20 libraries installed, while in my virtual environments I have over 100).
Alternatively, you can exclude packages by adding a parameter to the upgrade command:
choco upgrade all --except="firefox,googlechrome"
SELECT COUNT(*) FROM (VALUES ('05040'),('7066'),('2035'),('1310')) AS t(val);
ENV PATH="$PATH:/opt/gtk/bin"
No spaces before or after =
I don't know if the quotes are necessary.
I just deleted the gradle.xml file from .idea, closed and reopened the project, and it worked.
For pytest-asyncio >= 1.1.0,
#pyproject.toml
...
[tool.pytest.ini_options]
asyncio_default_fixture_loop_scope = "session"
asyncio_default_test_loop_scope = "session"
If you use a different configuration, see:
https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/change_default_fixture_loop.html
https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/change_default_test_loop.html
This seems to be working now. gemini_in_workspace_apps is part of the API pattern rules for allowed applications.
I think the disconnectedCallback() is what you are looking for.
Try adding it to your class with the logic for destroying your element, like:
disconnectedCallback() {
// here put your logic with killing subscriptions and so on
}
Please check out https://github.com/mmin18/RealtimeBlurView
I think this is the best blur overlay view in the Android world.
Custom property can be used for this, here is the example:
@property --breakpoint-lg {
syntax: "<length>";
initial-value: 1024px;
inherits: true;
}
.container {
  /* some styles */
  @media (min-width: --breakpoint-lg) {
    /* some styles */
  }
}
SOLVED
sudo apt install postgresql-client-common
Okay, I finally managed the sql syntax:
DoCmd.RunSQL "INSERT INTO PlanningChangeLog(" & _
"ID, TimeStampEdit, UserAccount, Datum, Bestelbon, Transporteur, Productnaam, Tank) " & _
"SELECT ID, Now() as TimeStampEdit, '" & user & "' as UserAccount, Datum, " & _
"Bestelbon, Transporteur, Productnaam, Tank FROM Planning " & _
"WHERE Bestelbon = " & Me.txtSearch & ""
This copies the record to a changelog table, and inserts a timestamp and user account field after the index field.
Thanks for all the suggestions!
Use this regular expression to find the invalid pattern. It's flexible enough to match any expressions for a, b, c, and d, not just simple variables.
(.*\s*\?)(.*):\s*(.*)\?\s*:\s*(.*)
Option 1: Fix to (a ? b : c) ?: d
Use this replacement pattern if you want to group the entire first ternary as the condition for the second.
($1 $2 : $3) ?: $4
This pattern wraps the first three capture groups in parentheses, creating a single, valid expression.
Option 2: Fix to a ? b : (c ?: d)
Use this replacement pattern if you want to nest the second ternary inside the first. This is a common and often more readable approach.
$1 $2 : ($3 ?: $4)
Try using one of these.
import { screen } from '@testing-library/react';
screen.debug(undefined, Infinity);
import { prettyDOM } from '@testing-library/react';
console.log(prettyDOM());
I've been working at a company that makes full use of the Spring ecosystem, and in order to use the actor model we needed to integrate Spring and Pekko. So I wrote a library that integrates Pekko (the Akka fork) with the Spring ecosystem. PTAL if you are interested.
Filtering by Apps Script ID fails because Logs Explorer doesn’t index script_id as a resource label. It only allows filtering by types like resource.type="app_script_function". To filter by script ID, you must either log the script ID explicitly in your log messages and filter via jsonPayload—or export your logs to BigQuery or a Logging sink, enabling full querying capabilities.
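For example (the scriptId field name is one I chose; depending on how Apps Script serializes the entry you may need to filter on the text payload instead):

// In the Apps Script project: emit the script ID in a structured log entry.
console.log({ message: "sync finished", scriptId: ScriptApp.getScriptId() });

// Then in Logs Explorer:
// resource.type="app_script_function"
// jsonPayload.scriptId="YOUR_SCRIPT_ID"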
Issue
tsconfig.app.json included the whole src folder, which also contained test files. This meant the tests were type-checked by both tsconfig.app.json and tsconfig.test.json, which caused conflicts and ESLint didn’t recognize Vitest globals.
Fix
Exclude test files from tsconfig.app.json so only tsconfig.test.json handles them.
tsconfig.app.json
{
// Vite defaults...
"exclude": ["src/**/*.test.ts", "src/**/*.test.tsx", "src/tests/setup.ts"]
}
tsconfig.test.json
{
"compilerOptions": {
"types": ["vitest/globals"],
"lib": ["ES2020", "DOM"],
"module": "ESNext",
"moduleResolution": "bundler",
"jsx": "react-jsx"
},
"include": ["src/**/*.test.ts", "src/**/*.test.tsx", "src/tests/setup.ts"]
}
After this change, ESLint recognized describe, it, expect, etc.
(Optional): I also added @vitest/eslint-plugin to my ESLint config. Not required for fixing the globals error, but helpful for extra rules and best practices in tests.
If you are using MVC, use RedirectToAction with a TempData or query string value passed as an error id. Using that error id, display a message box in the target action or view.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudformation.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "*"
    }
  ]
}
Firehose cannot deliver directly to a Redshift cluster in a private VPC without internet access or making the cluster public.
Using an Internet Gateway workaround compromises security.
1. Enabling an Internet Gateway exposes the Redshift cluster to inbound traffic from the internet, dramatically increasing the attack surface.
2. Many compliance frameworks and AWS Security Hub rules (e.g., foundational best practices) discourage making databases publicly accessible.
A best-practice alternative is to have Firehose deliver logs to S3, then use a Lambda or similar within the VPC to COPY into Redshift.
For real-time streaming, consider Redshift's native Streaming Ingestion which fits tightly into private network models.
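The COPY step of that S3-staging pattern looks roughly like this (table, bucket, and role ARN are placeholders):

COPY app_logs
FROM 's3://my-log-bucket/firehose/2025/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS JSON 'auto';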
Did you find any solution for this? I hope it's solved by now.
Imagine you’re designing a lift (elevator) for a building.
Sometimes only 1 person uses it (easy).
Sometimes 20 people rush in together (heavy).
If you want to guarantee safety, you don’t design for the “average” or “best” case.
You design for the worst possible load.
Similarly, in algorithms, we want to know the maximum time it can ever take, so that no matter what input comes, the program won’t surprise or fail.
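A tiny code illustration: linear search is called O(n) precisely because of its worst case, when the target is absent or last:

def linear_search(items, target):
    # Worst case: we inspect every element before answering.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1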
Use the string_split function.
Use this workaround if you have an old SQL Server version.
When you configure an AWS CLI profile for SSO, every command you run—even those against LocalStack—requires authentication via a valid SSO session. The CLI automatically checks for a cached SSO access token and, if missing or expired, prompts you to run aws sso login. Only after that token is retrieved can the CLI issue (mock or real) AWS API calls. This is documented in the AWS CLI behavior around IAM Identity Center sessions and SSO tokens.
AWS Doc: https://docs.aws.amazon.com/cli/latest/reference/sso/login.html
"To login, the requested profile must have first been setup using aws configure sso. Each time the login command is called, a new SSO access token will be retrieved."
For LocalStack, you can bypass this by using a non-SSO profile with dummy static credentials (aws_access_key_id and aws_secret_access_key), since LocalStack does not validate them. This prevents unnecessary SSO logins while still allowing AWS CLI and SDKs to function locally.
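For example, a dummy non-SSO profile (the profile name and values are arbitrary; LocalStack ignores them):

# ~/.aws/credentials
[localstack]
aws_access_key_id = test
aws_secret_access_key = test

# then point the CLI at LocalStack:
# aws --profile localstack --endpoint-url http://localhost:4566 s3 ls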
This is because DocumentDB does not support isMaster; it utilizes hello instead, particularly in newer releases (v5.0). Ensure your driver version is compatible and uses hello, or upgrade the cluster to v5.0 for better API alignment with MongoDB
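A quick way to confirm, from mongosh:

db.runCommand({ hello: 1 })  // replaces the legacy isMaster handshake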
You may try Spectral confocal technology.
Spectral confocal technology is a non-contact method used to measure surface height, particularly for micro and nano-scale measurements. It works by analyzing the spectrum of reflected light from a surface, where different wavelengths correspond to different heights.
Usually it is used to measure heights, but you can also get the intensity of different surfaces from the results, though you may need some normalization to convert it to the intensity of white light.
You can make use of --disableexcludes=all with the yum install command, which overrides all the excludes from the /etc/yum.conf file.
In your case: yum install nginx --disableexcludes=all
SELECT NAME, TYPE, LINE, TEXT
FROM USER_SOURCE
WHERE TYPE = 'PROCEDURE'
AND UPPER(TEXT) LIKE '%PALABRA%';
The problem got resolved.
Follow the steps below if the /usr/share/libalpm/hooks/60-dkms.hook or /usr/share/libalpm/hooks/90-mkinitcpio-install.hook files don't exist and you have the /usr/share/libalpm/hooks/30-systemd-udev-reload.hook file.
Here are the steps that I followed:
sudo mkdir -p /etc/pacman.d/hooks
sudo nano /etc/pacman.d/hooks/30-systemd-udev-reload.hook
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Operation = Remove
Target = usr/lib/udev/rules.d/*
[Action]
Description = Skipping udev reload to avoid freeze
When = PostTransaction
Exec = /usr/bin/true
and the problem is resolved now.
This kind of issue may also occur due to the expiration of your Apple Developer account. If your App Store membership has expired, you may face similar issues. Please make sure your account is renewed.
I had a similar use case a few years ago, so I created a small package that converts trained XGBClassifier and XGBRegressor models into Excel formulas by exporting their decision trees. https://github.com/KalinNonchev/xgbexcel
As far as I understand, there is no point in considering approximations that are slower than the standard acos() or acosf() functions. Achieving the same performance for correctly rounded double-precision values is extremely difficult, if not impossible, but it is quite possible to improve performance for values with an error close to that of the single-precision format. Therefore, even approximations that seem successful should be tested for performance.
Since the arccosine of x has an unbounded derivative at the points x = ±1, the approximated function should be transformed so that it becomes sufficiently smooth. I propose to do this as follows (I think this is not a new approach): an approximation is constructed of the function
f(t) = arccos(t^2)/(1-t^2)^0.5
using the Padé-Chebyshev method, where t = |x|^0.5, -1 <= t <= 1. The function f(t) is even, fairly smooth, and can be well approximated by both polynomial and fractional rational functions. The approximation is as follows:
f(t) ≈ (a0+a1*t^2+a2*t^4+a3*t^6)/(b0+b1*t^2+b2*t^4+b3*t^6) = p(t)/q(t).
Considering the relationship between the variables t and x, we can write:
f(x) ≈ (a0+a1*|x|+a2*|x|^2+a3*|x|^3)/(b0+b1*|x|+b2*|x|^2+b3*|x|^3) = p(x)/q(x).
After calculating the function f(x), the final result is obtained using one of the formulas:
arccos(x) = f(x)*(1-|x|)^0.5 for x >= 0;
arccos(x) = pi - f(x)*(1-|x|)^0.5 for x <= 0.
The coefficients of the fractional rational function f(x), providing a maximum relative error of 8.6E-10, are as follows:
a0 = 1.171233654022217, a1 = 1.301361441612244, a2 = 0.3297972381114960, a3 = 0.01141332555562258;
b0 = 0.7456305027008057, b1 = 0.9303402304649353, b2 = 0.2947896122932434, b3 = 0.01890071667730808.
These coefficients are specially selected for calculations in single precision format.
An example of code implementation using the proposed method can be found in the adjacent topic Fast Arc Cos algorithm?
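A C sketch assembled from the formulas and coefficients above; treat it as a starting point for the performance testing mentioned, not a validated implementation:

#include <math.h>

/* arccos(x) = f(x)*sqrt(1-|x|) for x >= 0, pi - f(x)*sqrt(1-|x|) for x <= 0 */
static float acos_approx(float x)
{
    const float a0 = 1.171233654022217f, a1 = 1.301361441612244f,
                a2 = 0.3297972381114960f, a3 = 0.01141332555562258f;
    const float b0 = 0.7456305027008057f, b1 = 0.9303402304649353f,
                b2 = 0.2947896122932434f, b3 = 0.01890071667730808f;
    const float pi = 3.14159265358979323846f;
    const float ax = fabsf(x);
    const float p  = a0 + ax * (a1 + ax * (a2 + ax * a3));  /* p(|x|) */
    const float q  = b0 + ax * (b1 + ax * (b2 + ax * b3));  /* q(|x|) */
    const float r  = (p / q) * sqrtf(1.0f - ax);
    return (x >= 0.0f) ? r : pi - r;
}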
A workaround to the original code could be:
template<int...n> struct StrStuff {
    template<int...n0> explicit StrStuff(char const(&...s)[n0]) {}
};
template<int...n> StrStuff(char const(&...s)[n]) -> StrStuff<n...>;

int main() {
    StrStuff g("apple", "pie");
}
But I still wonder why the original code can/can't compile in different compilers.
Adding those configurations to `application.properties` just worked, as advised in this GitHub issue.
server.tomcat.max-part-count=50
server.tomcat.max-part-header-size=2048
The issue is that your Docker build does not have your Git credentials.
If it is a private repo, the simplest fix is to make a build argument with a personal access token:
ARG GIT_TOKEN
RUN git clone https://${GIT_TOKEN}@github.com/username/your-repo.git
Then build with:
docker build --build-arg GIT_TOKEN=your_token_here -t myimage .
Just make sure that you are using a personal access token from GitHub, and not your password - GitHub does not allow password auth anymore.
If it is a public repo and is still not working, try:
RUN git config --global url."https://".insteadOf git://
RUN git clone https://github.com/username/repo.git
Sometimes the git:// protocol will mess up Docker images.
Edit: Also, as mentioned in comments, be careful about tokens in build args - because they may appear in image history, and this could pose a risk. For production purposes, consider using Docker BuildKit's --mount=type=ssh option instead.
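A BuildKit sketch of that safer route (base image and repo URL are placeholders):

# syntax=docker/dockerfile:1
FROM alpine/git
RUN mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The agent socket is mounted only for this step; no key lands in a layer.
RUN --mount=type=ssh git clone git@github.com:username/repo.git /src

# build with: docker build --ssh default .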
For multiples of 90°, you can use page.set_rotation(). For arbitrary angles, render the page as an image with a rotation matrix, then insert it back into a PDF if needed; this isn't a true vector transformation but a raster workaround, as MuPDF and most PDF formats do not natively support non-orthogonal page rotations.
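A PyMuPDF sketch of both paths (file names and angles are placeholders):

import fitz  # PyMuPDF

doc = fitz.open("input.pdf")
page = doc[0]

# Multiples of 90°: a true page attribute
page.set_rotation(90)

# Arbitrary angle: rasterize with a rotation matrix, then re-embed
mat = fitz.Matrix(1, 1).prerotate(30)
pix = page.get_pixmap(matrix=mat)
out = fitz.open()
out_page = out.new_page(width=pix.width, height=pix.height)
out_page.insert_image(out_page.rect, pixmap=pix)
out.save("rotated.pdf")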
To meet your requirements in Batch Script:
1. Move a file from one path to another: use the move command.
2. Rename the file and convert the Julian date to DDMMYYYY: you need to extract the Julian date from the name, convert it, and rename the file.
Here is an example Batch Script that performs both tasks. Suppose the original file has a name like archivo_2024165.txt (where 2024165 is the Julian date: year 2024, day 165).
-----------------------------------------------------------------------------------------------------------------------------------
@echo off
setlocal enabledelayedexpansion

REM Configure the paths
set "origen=C:\ruta\origen\archivo_2024165.txt"
set "destino=C:\ruta\destino"

REM Move the file
move "%origen%" "%destino%"

REM Extract the name of the moved file
for %%F in ("%destino%\archivo_*.txt") do (
    set "archivo=%%~nxF"
    REM Extract the Julian date from the name
    for /f "tokens=2 delims=_" %%A in ("!archivo!") do (
        set "fechaJuliana=%%~nA"
        set "anio=!fechaJuliana:~0,4!"
        set "dia=!fechaJuliana:~4,3!"
        REM Convert the Julian day to DDMMYYYY (delayed expansion !anio!/!dia! is needed inside the block)
        powershell -Command "$date = [datetime]::ParseExact('!anio!', 'yyyy', $null).AddDays(!dia! - 1); Write-Host $date.ToString('ddMMyyyy')" > temp_fecha.txt
        set /p fechaDDMMYYYY=<temp_fecha.txt
        del temp_fecha.txt
        REM Rename the file
        ren "%destino%\!archivo!" "archivo_!fechaDDMMYYYY!.txt"
    )
)
endlocal
-----------------------------------------------------------------------------------------------------------------------------------
Modify the source and destination paths as needed.
• The script uses PowerShell to convert the Julian date to DDMMYYYY, since pure Batch has no advanced date functions.
• The final name will be archivo_DDMMYYYY.txt.
Primitives and their object counterparts are not proxyable per the spec. If you need the value to live in the request scope, use a wrapper class that is actually proxyable, as sketched below. If you make it @Dependent, you will be able to inject it as an Integer, but there may be overhead because of the nature of dependent beans.
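A sketch of such a wrapper (names are illustrative):

import jakarta.enterprise.context.RequestScoped;

@RequestScoped
public class RequestCounter {
    private int value;                      // the primitive lives inside a proxyable bean
    public int get()        { return value; }
    public void set(int v)  { value = v; }
}

// injection point elsewhere: @Inject RequestCounter counter;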
You can open up 2 tabs or windows on the same view and have different preview devices showing. Hate this, but it works.
1. Check the project Java build path.
2. Update Installed JREs in Eclipse.
3. Check the project compiler compliance level.
4. Check source and target compatibility.
5. Restart Eclipse / refresh the workspace.
6. Check for errors in the Problems view.
7. Update Content Assist settings.
The build system generates the SDL3 library in the build folder, but imgui was not searching the correct directory because of a wrong command: target_link_directories(imgui PUBLIC SDL3) on the last line of vendors/imgui/CMakeLists.txt needs to be target_link_libraries(imgui PUBLIC SDL3::SDL3).
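That is, the last line of vendors/imgui/CMakeLists.txt becomes:

# link the imported SDL3 target instead of pointing at a directory
target_link_libraries(imgui PUBLIC SDL3::SDL3)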
I can see why you'd want to build this feature, but unfortunately, detecting whether a user has an active screen-sharing session in an external application (like TeamViewer, Zoom, or Google Meet) isn't directly possible from a web-based application using JavaScript. This is a deliberate limitation, for security and privacy reasons.
You can also do the following :
Go to Settings
Type "update mode" in the search bar
Ensure that "Update: Mode" is NOT set as "none"
Then "Check for Updates..." would be in the "Code" menu.