This error means your Android device doesn't have a Lock Screen Knowledge Factor (LSKF) set up - basically, no screen lock protection.
Quick fix:
Go to Settings → Security (or Lock Screen)
Set up a screen lock:
🔸 PIN
🔸 Pattern
🔸 Password
🔸 Fingerprint
🔸 Face unlock
Why does this happen?
Your app is trying to use Android's secure keystore, but the system requires some form of screen lock to protect the keys. Without it, Android won't let apps store sensitive data securely.
Steps:
Open Settings
Find "Security" or "Lock Screen"
Choose "Screen Lock"
Pick any method (PIN is quickest)
Restart your app
That should fix it. The keystore needs device security to work properly.
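If you want to handle this gracefully in code, here is a rough Kotlin sketch (assuming you call it from an Activity; the function name is made up) that checks for a screen lock before touching the keystore and sends the user to Settings otherwise:
import android.app.KeyguardManager
import android.content.Context
import android.content.Intent
import android.provider.Settings

fun ensureDeviceSecure(context: Context): Boolean {
    val keyguard = context.getSystemService(Context.KEYGUARD_SERVICE) as KeyguardManager
    // isDeviceSecure is true only when a PIN, pattern or password is set (API 23+)
    if (keyguard.isDeviceSecure) return true
    // No screen lock yet - open the security settings so the user can add one
    context.startActivity(Intent(Settings.ACTION_SECURITY_SETTINGS))
    return false
}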
Right click on the variable and click on the Rename Symbol option; this option will only rename the correct ABC (`str` vs `bool`). You can alternatively press F2 to do this.
Just open VS Code in the folder that contains the Scripts folder.
Then activate your virtual environment. Create an .ipynb notebook, put some code in it and, at the top right, you can select the kernel. The name of the env will be the same as the name of your folder.
see top right, this is my env name
VS Code will auto-detect this environment, even when you restart the editor. Once you activate the environment, click on this option and reselect the environment. I have a cell that shows me the number of libraries installed in the venv, and this helps to check whether VS Code is using the correct env or not (in my main Python I have only 20 libraries installed, while in my virtual environments I have over 100).
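For reference, a cell along those lines might look like this (just a sketch; importlib.metadata counts whatever packages the active kernel's interpreter can see):
import sys
from importlib.metadata import distributions

print(sys.executable)  # should point inside your venv's Scripts/bin folder
print(len(list(distributions())), "packages visible to this kernel")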
Alternatively, you can exclude packages by adding a parameter to the upgrade command:
choco upgrade all --except="firefox,googlechrome"
SELECT COUNT(*) FROM (VALUES ('05040'),('7066'),('2035'),('1310')) AS t(val);
ENV PATH="$PATH:/opt/gtk/bin"
No spaces before or after =
I don't know if the quotes are necessary.
I just deleted the gradle.xml file from .idea, closed and reopened the project, and it worked.
Request Id: 70e9f474-ee8a-43c5-8dcc-d82e56925400
Correlation Id: bd74ca51-e2ca-47eb-9fca-2e06d01a6763
Timestamp: 2025-08-21T07:47:14.933Z
For pytest-asyncio >= 1.1.0,
#pyproject.toml
...
[tool.pytest.ini_options]
asyncio_default_fixture_loop_scope = "session"
asyncio_default_test_loop_scope = "session"
If you use a different configuration, see:
https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/change_default_fixture_loop.html
https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/change_default_test_loop.html
This seems to be working now. gemini_in_workspace_apps is part of the API pattern rules for allowed applications.
I think that disconnectedCallback() is what you are looking for.
Try adding it to your class with the logic for destroying your element, like:
disconnectedCallback() {
// here put your logic with killing subscriptions and so on
}
Please check out this: https://github.com/mmin18/RealtimeBlurView
I think this is the best blur overlay view in the Android world.
A custom property can be used for this; here is an example:
@property --breakpoint-lg {
syntax: "<length>";
initial-value: 1024px;
inherits: true;
}
.container {
// some styles
@media (min-width: --breakpoint-lg) {
// some styles
}
}
SOLVED
sudo apt install postgresql-client-common
Okay, I finally got the SQL syntax right:
DoCmd.RunSQL "INSERT INTO PlanningChangeLog(" & _
"ID, TimeStampEdit, UserAccount, Datum, Bestelbon, Transporteur, Productnaam, Tank) " & _
"SELECT ID, Now() as TimeStampEdit, '" & user & "' as UserAccount, Datum, " & _
"Bestelbon, Transporteur, Productnaam, Tank FROM Planning " & _
"WHERE Bestelbon = " & Me.txtSearch & ""
This copies the record to a changelog table, and inserts a timestamp and user account field after the index field.
Thanks for all the suggestions!
Use this regular expression to find the invalid pattern. It's flexible enough to match any expressions for `a`, `b`, `c`, and `d`, not just simple variables.
(.*\s*\?)(.*):\s*(.*)\?\s*:\s*(.*)
Option 1: Fix to (a ? b : c) ?: d
Use this replacement pattern if you want to group the entire first ternary as the condition for the second.
Code snippet
($1 $2 : $3) ?: $4
This pattern wraps the first three capture groups in parentheses, creating a single, valid expression.
Option 2: Fix to a ? b : (c ?: d)
Use this replacement pattern if you want to nest the second ternary inside the first. This is a common and often more readable approach.
Code snippet
$1 $2 : ($3 ?: $4)
Try using one of these.
import { screen } from '@testing-library/react';
screen.debug(undefined, Infinity);
import { prettyDOM } from '@testing-library/react';
console.log(prettyDOM());
I've been working at a company that makes full use of the Spring ecosystem, and in order to use the actor model we needed to integrate Spring and Pekko. So I wrote a library that integrates Pekko (the Akka fork) with the Spring ecosystem. PTAL if you are interested.
Filtering by Apps Script ID fails because Logs Explorer doesn't index `script_id` as a resource label. It only allows filtering by types like `resource.type="app_script_function"`. To filter by script ID, you must either log the script ID explicitly in your log messages and filter via `jsonPayload`, or export your logs to BigQuery or a Logging sink, enabling full querying capabilities.
Issue
`tsconfig.app.json` included the whole `src` folder, which also contained test files. This meant the tests were type-checked by both `tsconfig.app.json` and `tsconfig.test.json`, which caused conflicts, and ESLint didn't recognize Vitest globals.
Fix
Exclude test files from `tsconfig.app.json` so only `tsconfig.test.json` handles them.
tsconfig.app.json
{
// Vite defaults...
"exclude": ["src/**/*.test.ts", "src/**/*.test.tsx", "src/tests/setup.ts"]
}
tsconfig.test.json
{
"compilerOptions": {
"types": ["vitest/globals"],
"lib": ["ES2020", "DOM"],
"module": "ESNext",
"moduleResolution": "bundler",
"jsx": "react-jsx"
},
"include": ["src/**/*.test.ts", "src/**/*.test.tsx", "src/tests/setup.ts"]
}
After this change, ESLint recognized `describe`, `it`, `expect`, etc.
(Optional): I also added `@vitest/eslint-plugin` to my ESLint config. Not required for fixing the globals error, but helpful for extra rules and best practices in tests.
If you are using MVC, use RedirectToAction with a TempData or QueryString passed as error id. Using that error id, display a message box in the target action or view.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "cloudformation.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "*"
}
]
}
Firehose cannot deliver directly to a Redshift cluster in a private VPC without internet access or making the cluster public.
Using an Internet Gateway workaround compromises security.
1. Enabling an Internet Gateway exposes the Redshift cluster to inbound traffic from the internet, dramatically increasing the attack surface.
2. Many compliance frameworks and AWS Security Hub rules (e.g., foundational best practices) discourage making databases publicly accessible.
A best-practice alternative is to have Firehose deliver logs to S3, then use a Lambda or similar within the VPC to COPY into Redshift.
For real-time streaming, consider Redshift's native Streaming Ingestion which fits tightly into private network models.
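As a rough sketch of the COPY step described above (bucket, table and IAM role names are made up), the Lambda inside the VPC would run something like:
COPY app_logs
FROM 's3://my-firehose-bucket/logs/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS JSON 'auto';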
Did you find any solution for this? I hope it's solved by now.
Imagine you’re designing a lift (elevator) for a building.
Sometimes only 1 person uses it (easy).
Sometimes 20 people rush in together (heavy).
If you want to guarantee safety, you don’t design for the “average” or “best” case.
You design for the worst possible load.
Similarly, in algorithms, we want to know the maximum time it can ever take, so that no matter what input comes, the program won’t surprise or fail.
Use the STRING_SPLIT function.
Use this workaround if you have an old SQL Server version.
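For illustration, a minimal STRING_SPLIT call looks like this (available from SQL Server 2016 / compatibility level 130 onward; the literal is just sample data):
SELECT value
FROM STRING_SPLIT('red,green,blue', ',');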
When you configure an AWS CLI profile for SSO, every command you run—even those against LocalStack—requires authentication via a valid SSO session. The CLI automatically checks for a cached SSO access token and, if missing or expired, prompts you to run `aws sso login`. Only after that token is retrieved can the CLI issue (mock or real) AWS API calls. This is documented in the AWS CLI behavior around IAM Identity Center sessions and SSO tokens.
AWS Doc: https://docs.aws.amazon.com/cli/latest/reference/sso/login.html
"To login, the requested profile must have first been setup using `aws configure sso`. Each time the `login` command is called, a new SSO access token will be retrieved."
For LocalStack, you can bypass this by using a non-SSO profile with dummy static credentials (`aws_access_key_id` and `aws_secret_access_key`), since LocalStack does not validate them. This prevents unnecessary SSO logins while still allowing the AWS CLI and SDKs to function locally.
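A minimal sketch of such a profile (the profile name and dummy values are assumptions; LocalStack's default edge endpoint is http://localhost:4566):
aws configure set aws_access_key_id test --profile localstack
aws configure set aws_secret_access_key test --profile localstack
aws configure set region us-east-1 --profile localstack

aws --endpoint-url=http://localhost:4566 --profile localstack s3 ls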
This is because DocumentDB does not support `isMaster`; it utilizes `hello` instead, particularly in newer releases (v5.0). Ensure your driver version is compatible and uses `hello`, or upgrade the cluster to v5.0 for better API alignment with MongoDB.
You may try Spectral confocal technology.
Spectral confocal technology is a non-contact method used to measure surface height, particularly for micro and nano-scale measurements. It works by analyzing the spectrum of reflected light from a surface, where different wavelengths correspond to different heights.
It is usually used to measure heights, but you can also get the intensity of different surfaces from the results, though you may need some normalization to convert it to the intensity of white light.
You can make use of `--disableexcludes=all` with the yum install command, which overrides all the excludes from the /etc/yum.conf file.
In your case: `yum install nginx --disableexcludes=all`
SELECT NAME, TYPE, LINE, TEXT
FROM USER_SOURCE
WHERE TYPE = 'PROCEDURE'
AND UPPER(TEXT) LIKE '%PALABRA%';
The problem got resolved.
Follow the steps below if the files /usr/share/libalpm/hooks/60-dkms.hook or /usr/share/libalpm/hooks/90-mkinitcpio-install.hook don't exist and you do have the file /usr/share/libalpm/hooks/30-systemd-udev-reload.hook.
Here are the steps that I followed:
sudo mkdir -p /etc/pacman.d/hooks
sudo nano /etc/pacman.d/hooks/30-systemd-udev-reload.hook
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Operation = Remove
Target = usr/lib/udev/rules.d/*
[Action]
Description = Skipping udev reload to avoid freeze
When = PostTransaction
Exec = /usr/bin/true
and the problem is resolved now.
This kind of issue may also occur due to the expiration of your Apple Developer account. If your App Store membership expires, you may face similar issues. Please make sure your account is renewed.
I had a similar use case a few years ago, so I created a small package that converts trained XGBClassifier and XGBRegressor models into Excel formulas by exporting their decision trees. https://github.com/KalinNonchev/xgbexcel
As far as I understand, there is no point in considering approximations that are slower than the standard acos() or acosf() functions. Achieving the same performance for correctly rounded double-precision values is extremely difficult, if not impossible, but it is quite possible to improve performance for values with an error close to one ulp of the single-precision format. Therefore, even those approximations that seem successful should be tested for performance.
Since the arccosine of x has an unbounded derivative at the points x = ±1, the approximated function should be transformed so that it becomes sufficiently smooth. I propose to do this as follows (I think this is not a new way): an approximation is constructed of the function
f(t) = arccos(t^2)/(1-t^2)^0.5
using the Padé-Chebyshev method, where t=|x|^0.5, -1<=t<=1. The function f(t) is even, fairly smooth, and can be well approximated by both polynomial and fractional rational functions. The approximation is as follows:
f(t) ≈ (a0+a1*t^2+a2*t^4+a3*t^6)/(b0+b1*t^2+b2*t^4+b3*t^6) = p(t)/q(t).
Considering the relationship between the variables t and x, we can write:
f(x) ≈ (a0+a1*|x|+a2*|x|^2+a3*|x|^3)/(b0+b1*|x|+b2*|x|^2+b3*|x|^3) = p(x)/q(x).
After calculating the function f(x), the final result is obtained using one of the formulas:
arccos(x) = f(x)*(1-|x|)^0.5 at x>=0;
arccos(x) = pi-f(x)*(1-|x|)^0.5 at x<=0.
The coefficients of the fractional rational function f(x), providing a maximum relative error of 8.6E-10, are as follows:
a0 = 1.171233654022217, a1 = 1.301361441612244, a2 = 0.3297972381114960, a3 = 0.01141332555562258;
b0 = 0.7456305027008057, b1 = 0.9303402304649353, b2 = 0.2947896122932434, b3 = 0.01890071667730808.
These coefficients are specially selected for calculations in single precision format.
An example of code implementation using the proposed method can be found in the adjacent topic Fast Arc Cos algorithm?
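For reference, here is a straightforward Python transcription of the formulas above (just a sketch for checking the coefficients, not a tuned C implementation):
import math

# Padé-Chebyshev coefficients quoted above (tuned for single precision)
A = (1.171233654022217, 1.301361441612244, 0.3297972381114960, 0.01141332555562258)
B = (0.7456305027008057, 0.9303402304649353, 0.2947896122932434, 0.01890071667730808)

def fast_acos(x):
    ax = abs(x)
    p = A[0] + ax * (A[1] + ax * (A[2] + ax * A[3]))   # p(x)
    q = B[0] + ax * (B[1] + ax * (B[2] + ax * B[3]))   # q(x)
    r = (p / q) * math.sqrt(1.0 - ax)                   # f(x) * (1 - |x|)^0.5
    return r if x >= 0.0 else math.pi - r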
i think the issue You are asked to design a program that displays a message box showing a custom message entered by the user. The box should include options such as OK, Cancel, Retry, and Exit. How would you implement this?
A workaround to the original code could be:
template<int...n> struct StrStuff {
template<int...n0> explicit StrStuff(char const(&...s)[n0]) {}
};
template<int...n> StrStuff(char const(&...s)[n])->StrStuff<n...>;
int main() {
StrStuff g("apple", "pie");
}
But I still wonder why the original code can/can't compile in different compilers.
Adding those configurations to `application.properties` just worked as advised in this Github issue.
server.tomcat.max-part-count=50
server.tomcat.max-part-header-size=2048
The issue is that your Docker build does not have your Git credentials.
If it is a private repo, the simplest fix is to make a build argument with a personal access token:
ARG GIT_TOKEN
RUN git clone https://${GIT_TOKEN}@github.com/username/your-repo.git
Then build with:
docker build --build-arg GIT_TOKEN=your_token_here -t myimage .
Just make sure that you are using a personal access token from GitHub, and not your password - GitHub does not allow password auth anymore.
If it is a public repo and is still not working, try:
RUN git config --global url."https://".insteadOf git://
RUN git clone https://github.com/username/repo.git
Sometimes the git:// protocol will mess up Docker images.
Edit: Also, as mentioned in comments, be careful about tokens in build args - because they may appear in image history, and this could pose a risk. For production purposes, consider using Docker BuildKit's --mount=type=ssh option instead.
For multiples of 90°, you can use page.set_rotation(). For arbitrary angles, render the page as an image with a rotation matrix, then insert it back into a PDF if needed—this isn't a true vector transformation, but a raster workaround, as MuPDF and most PDF formats do not natively support non-orthogonal page rotations.
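A rough PyMuPDF sketch of both cases (file names and the 17° angle are just placeholders, and the method names assume a recent PyMuPDF with the snake_case API):
import fitz  # PyMuPDF

src = fitz.open("input.pdf")
page = src[0]

# Multiples of 90 degrees: a true page rotation
page.set_rotation(90)
src.save("rotated90.pdf")

# Arbitrary angle: rasterize with a rotation matrix, then re-insert as an image
pix = page.get_pixmap(matrix=fitz.Matrix(1, 1).prerotate(17))
out = fitz.open()
new_page = out.new_page(width=pix.width, height=pix.height)
new_page.insert_image(new_page.rect, pixmap=pix)
out.save("rotated17.pdf")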
To meet your requirements in a Batch script:
1. Move a file from one path to another: use the move command.
2. Rename the file and change the Julian date to DDMMYYYY: you need to extract the Julian date from the name, convert it, and rename the file.
Here is an example Batch script that performs both tasks. Suppose the original file has a name like archivo_2024165.txt (where 2024165 is the Julian date: year 2024, day 165).
-----------------------------------------------------------------------------------------------------------------------------------
@echo off
setlocal enabledelayedexpansion
REM Configure the paths
set "origen=C:\ruta\origen\archivo_2024165.txt"
set "destino=C:\ruta\destino"
REM Move the file
move "%origen%" "%destino%"
REM Extract the name of the moved file
for %%F in ("%destino%\archivo_*.txt") do (
set "archivo=%%~nxF"
REM Extract the Julian date from the file name
for /f "tokens=2 delims=_" %%A in ("!archivo!") do (
set "fechaJuliana=%%~nA"
set "anio=!fechaJuliana:~0,4!"
set "dia=!fechaJuliana:~4,3!"
REM Convert the Julian day to a DDMMYYYY date (use !anio! and !dia! here - delayed expansion - since they are set inside the loop)
powershell -Command "$date = [datetime]::ParseExact('!anio!', 'yyyy', $null).AddDays(!dia! - 1); Write-Host $date.ToString('ddMMyyyy')" > temp_fecha.txt
set /p fechaDDMMYYYY=<temp_fecha.txt
del temp_fecha.txt
REM Rename the file
ren "%destino%\!archivo!" "archivo_!fechaDDMMYYYY!.txt"
)
)
endlocal
-----------------------------------------------------------------------------------------------------------------------------------
• Modify the source and destination paths as needed.
• The script uses PowerShell to convert the Julian date to DDMMYYYY, since pure Batch has no advanced date functions.
• The final name will be archivo_DDMMYYYY.txt.
Primitives, and their object counterparts are not proxyable as per the spec. If you need the value to live in the request scope, use a wrapper class that is actually proxyable. If you make it @Dependent, you will be able to inject it as an Integer, but there may be overhead because of the nature of dependent beans.
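A minimal sketch of such a proxyable wrapper (assuming Jakarta CDI; the class and field names are made up):
import jakarta.enterprise.context.RequestScoped;

@RequestScoped
public class RequestCounter {
    private Integer value = 0;   // the Integer lives inside a normal-scoped, proxyable bean

    public Integer getValue() { return value; }
    public void setValue(Integer value) { this.value = value; }
}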
You can open up 2 tabs or windows on the same view and have different preview devices showing. Hate this, but it works.
1. Check Project Java Build Path
2. Update Installed JREs in Eclipse
3. Project Compiler Compliance Level
4. Check Source and Target Compatibility
5. Restart Eclipse/Refresh Workspace
6. Check for Errors in Problems View
7. Update Content Assist Settings
The build system generates the SDL3 library in the build folder, but imgui was not searching in the correct directory due to the command issue. `target_link_directories(imgui PUBLIC SDL3)` on the last line of the vendors/imgui/CMakeLists.txt file needs to be `target_link_libraries(imgui PUBLIC SDL3::SDL3)`.
I can see why you'd want to build this feature, but unfortunately, detecting whether a user has an active screen-sharing session via any external application (like TeamViewer, Zoom, or Google Meet) isn't directly possible from a web-based application using JavaScript. This is a deliberate limitation, for security and privacy reasons.
You can also do the following :
Go to Settings
Type "update mode" in the search bar
Ensure that "Update: Mode" is NOT set as "none"
Then "Check for Updates..." would be in the "Code" menu.
I couldn't find any other way to create a shortcut to the shared folder than creating it manually using PowerShell. This is what I did in my code. Thank you everyone for your replies.
- name: 'Create Folder shortcut on G: drive'
ansible.windows.win_shell: |
$WshShell = New-Object -ComObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut("G:\\Folder1.lnk")
$Shortcut.TargetPath = "\\\\SERVER1\\Folder1"
$Shortcut.Save()
You can refer to my article on MySQL data types (Các kiểu dữ liệu trong MySQL): https://webmoi.vn/cac-kieu-du-lieu-trong-mysql/
This works great with custom fonts and with layout changes caused by updates to the view's frame:
Here's the code:
public struct FirstLineCenterID: AlignmentID {
static func defaultValue(in d: ViewDimensions) -> CGFloat {
d[VerticalAlignment.center]
}
}
/// Custom vertical alignment used to coordinate views against the **first line center**.
extension VerticalAlignment {
static let firstLineCenter = VerticalAlignment(FirstLineCenterID.self)
}
// MARK: - FirstLineCenteredLabel
/// A `Label`-like view that displays a leading icon and a text label, aligning the icon
/// to the **vertical midpoint of the text’s first line**.
struct FirstLineCenteredLabel<Icon>: View where Icon : View {
let text: String
let spacing: CGFloat?
let icon: Icon
/// Cached measured height of a single line for the current font.
@State private var firstLineHeight: CGFloat?
/// The effective font pulled from the environment; used by both visible and measuring text.
@Environment(\.font) var font
init(
_ text: String,
spacing: CGFloat? = nil,
@ViewBuilder icon: () -> Icon
) {
self.text = text
self.spacing = spacing
self.icon = icon()
}
var body: some View {
HStack(alignment: .firstLineCenter, spacing: spacing) {
let text = Text(text)
icon
// aligns by its vertical center
.alignmentGuide(.firstLineCenter) { d in d[.top] + d.height / 2 }
.font(font)
text
.font(font)
.fixedSize(horizontal: false, vertical: true)
// aligns by the first line's vertical midpoint
.alignmentGuide(.firstLineCenter) { d in
let h = firstLineHeight ?? d.height
return d[.top] + h / 2
}
// Measure the natural height of a single line **without impacting layout**:
// a 1-line clone in an overlay with zero frame captures `geo.size.height` for the
// current environment font. This avoids the `.hidden()` pitfall which keeps layout space.
.overlay(alignment: .topLeading) {
text.font(font).lineLimit(1).fixedSize()
.overlay(
GeometryReader { g in
Color.clear
.onAppear { firstLineHeight = g.size.height }
.onChange(of: g.size.height) { firstLineHeight = $0 }
}
)
.opacity(0)
.frame(width: 0, height: 0)
.allowsHitTesting(false)
.accessibilityHidden(true)
}
}
}
}
Usage:
var body: some View {
VStack {
FirstLineCenteredLabel(longText, spacing: 8) {
Image(systemName: "star.fill")
}
FirstLineCenteredLabel(shortText, spacing: 8) {
Image(systemName: "star.fill")
}
Divider()
// Just to showcase that it can handle various font sizing.
FirstLineCenteredLabel(longText, spacing: 8) {
Image(systemName: "star.fill")
.font(.largeTitle)
}
.font(.caption)
}
}
private var longText: String {
"This is a new label view that places an image/icon to the left of the text and aligns it to the text's first line vertical midpoint."
}
private var shortText: String {
"This should be one line.
}
import time
def slow_print(text, delay=0.04):
for char in text:
print(char, end='', flush=True)
time.sleep(delay)
print()
def escolha_personagem():
slow_print("Escolha sua classe:")
slow_print("1 - Guerreiro")
slow_print("2 - Mago")
slow_print("3 - Ladino")
classe = input("Digite o número da sua escolha: ")
if classe == "1":
return "Guerreiro"
elif classe == "2":
return "Mago"
elif classe == "3":
return "Ladino"
else:
slow_print("Escolha inválida. Você será um Aventureiro.")
return "Aventureiro"
def inicio_historia(classe):
slow_print(f"\nVocê acorda em uma floresta sombria. Como um(a) {classe}, seu instinto o guia.")
slow_print("De repente, um velho encapuzado aparece diante de você...")
slow_print("Kael: 'Você finalmente despertou
You've said in the comments that
I could see some extra bytes added in the corrupted file.
... well there's your problem, rather than "flushing/closing".
What extra bytes? I wonder if it is the "Byte Order Mark". Read about it here https://stackoverflow.com/a/48749396/1847378 - this article is about how to overcome a file/input stream with a BOM that you don't want. That's the opposite problem, though.
Maybe the use of `stream` is unhelpfully messing around with the stream. Just for a test at least, how about passing a `ByteArrayOutputStream()` to `outputDocument(..)` and then passing the `byte[]` (from `ByteArrayOutputStream.toByteArray()`) to the JAX-RS `Response`?
According to this, the maximum deepsleep is 2^45 microseconds, or just over 407 days.
Turns out that having Claude test things in Chrome on its own made copies of Chrome in a temp folder that were never deleted. The path was this:
/private/var/folders/c3/6s_l3vn96s5b8b9f08szx05w0000gn/X/com.google.Chrome.code_sign_clone/
I deleted all the temp files in here and got back 200 GB of space.
For Android you can use React Native's own permissions API (PermissionsAndroid), and for iOS the library you are using is one of the best libraries, but it has some minor issues.
For iOS you can use 3 libraries separately:
react-native-geolocation-service (for location)
react-native-documents (for documents)
@react-native-camera-roll/camera-roll (for the camera roll)
Some `.so` library files in the command output are 2**12 == 4 KB aligned, hence the message that the ELF alignment check failed.
Please check this link for a detailed answer. I am posting a summary of what you need to do here:
Steps to support 16 KB page alignment in a Flutter app (first of all, create a backup of your original project and try the following steps in a copy of the project; it's always good to have a backup):
As stated in the official documentation, you need to update AGP to version 8.5.1 or higher to be able to compile with a 16 KB page size. The documentation says to upgrade the NDK version to 28, but versions 26 and 27 are also compatible, so you may use any of 26, 27 or 28. The respective files to edit are android/settings.gradle, where you look for a line like `id "com.android.application" version "8.7.3" apply false` and change it to a compatible version, and android/app/build.gradle, where you may change the line `ndkVersion "27.0.12077973"`.
If your project contains native code, it must be updated to support the 16 KB page size.
Your project dependencies listed in the pubspec.yaml file, both direct and dev dependencies, may need to be updated if they depend on native code. If you can identify the individual ones, you may update only those to the appropriate version; otherwise you should update all the dependency packages in your pubspec.yaml file. The transitive dependencies should also be updated. How: to update the direct and dev dependencies, update the corresponding version number for each package in pubspec.yaml and then run `flutter clean; flutter pub get` from the project root. To upgrade the transitive dependencies, you can see the table with the command `flutter pub outdated` and update them with the `flutter pub upgrade` command (or `flutter pub upgrade --major-versions`).
After you've updated the dependencies, try to run the project. Additional configuration changes may be required and will be displayed as error messages; do as suggested. You may update your question if you need help with that.
After you fix these, check the 16 KB alignment of the `.so` files again and also do a test run on a 16 KB-compliant emulator or device.
Note: to update the dependencies in your pubspec.yaml, VS Code extensions like Version Lens can ease the process. Similar extensions should also exist for Android Studio.
The emoji is a non-existing topic, so publishing messages also fails with the following message: InvalidTopicException: Invalid topics: [😃]. Everything is good so far.
The problem is that I now have an infinite loop.
I honestly don't get what else you expect.
Let's see: you use the DefaultErrorHandler as is, which works at the Spring container level and covers events happening BEFORE (and AFTER) whatever happens in the listener POJO.
Then, when it exhausts all attempts prescribed by your BackOff, it executes a component that itself throws an error.
It's executing in the same container context, so where does it fall then? Back into your DefaultErrorHandler!
And here you go again.
I'm not quite sure what the reason for an "experiment" like this is, besides sheer boredom and nothing else to do.
But IF there's an engineering reason for this, you can extend the DefaultErrorHandler by overriding the appropriate method of it, likely handleOne() (or implement a whole CommonErrorHandler yourself) and deal with the situation.
And to top all that...
I throw an exception in the Kafka listener code, so messages will fail and this error handler will be used.
... if there's an exception in @KafkaListener POJO (means YOUR processing part, not framework's), another error handler is to be utilized - the KafkaListenerErrorHandler implementation.
Yes, the Vault API and DLLs change between versions, so code compiled against AutoCAD Vault 2012 libraries will not work reliably with Vault 2015 or newer. You’ll need to reference and build against the matching Vault SDK for each version. There’s no true “write once, run everywhere” approach with Vault, but you can structure your code to use abstraction/interfaces and compile separate builds per version, or use conditional references to target multiple Vault releases.
Xcode 16:
I just waited 10-20 seconds and the code appeared.
Try to put `CREATE OR REPLACE TABLE ... AS` before your query.
CREATE OR REPLACE TABLE your_dataset.your_table AS
WITH my_cte AS (
SELECT ...
)
SELECT * FROM my_cte;
This keeps your query the same, but saves the result into a table.
I've been having the exact same problem in Next.js with Tailwind and TypeScript, dynamically swapping between `sm:flex-row` and `sm:flex-row-reverse` (in my case using template literals and a constant defined by the index of the item in a map).
It has an inconsistent bug where suddenly, after making a change anywhere on the page, all the items that should resolve to `flex-row-reverse` instead revert to being styled as the default breakpoint's `flex-col`. This issue will then persist through more changes, reloads, etc., until it randomly fixes itself again. I have tried other methods of dynamically setting the direction; I've tried just putting `sm:flex-row` and then only dynamically appending `-reverse` to the end, with no success. I truly don't know why it's happening or how to fix it at this point.
Here's a rough outline of the code:
<ul className="flex flex-col list-none">
{arr.map((entry, index) => {
const direction = index % 2 === 0 ? 'flex-row' : 'flex-row-reverse';
return (
<li key={entry.id} className={`flex flex-col sm:${direction} border-2 p-[1rem] my-[1rem]`}>
<Image
className="object-contain border border-black"
width={300}
height={300}
priority
/>
<div>
<div className={`flex ${direction}`}>
<h2 className="border px-[0.5rem]">text</h2>
<div className="w-auto h-1/3 p-[0.25rem] border">text</div>
</div>
<div className={`flex ${direction} p-[0.5rem]`}>
<div className="mx-[1rem]">
text
</div>
<ul className="flex flex-col px-[2rem] py-[1rem] mx-[0.5rem] list-disc border">
<li>
text
</li>
<li>
text
</li>
<li>
text
</li>
</ul>
</div>
</div>
</li>
)
})}
</ul>
To solve this issue, I had to restart my computer; my system runs on Ubuntu.
System settings, Google settings, other Google apps, Assistant settings, transportation
Set default transportation mode to walking
Now Google maps will always start in walking mode
It's reported as an issue on azure-cli as well: https://github.com/Azure/azure-cli/issues/23643
"I ran into this as well while using a "fine grained PAT." I fixed it by enabling webhook read/write access."
Does using a token with more permissions resolve this issue?
Answer my question using the knowledge of networking; a full answer please. The answer should contain the related answers from other works.
Turns out application is a reserved word, or rather it's not allowed as part of a form field name. The Parameter interceptor sets fields on the Action but, for security reasons, excludes any parameter name that matches the excluded parameters regex pattern. More is found at https://struts.apache.org/core-developers/parameters-interceptor#excluding-parameters
The documentation is wrong and, for 6.4.0, the exclusion patterns are the following pair of hideous monstrosities.
(^|\%\{)((#?)(top(\.|\['|\[\")|\[\d\]\.)?)(dojo|struts|session|request|response|application|servlet(Request|Response|Context)|parameters|context|_memberAccess)(\.|\[).*
.*(^|\.|\[|\'|\"|get)class(\(\.|\[|\'|\").*
Application is an object on the Value Stack and a bad person might edit parameter names to hack it.
A different exclusion pattern can be set per Action or for the whole application but, as you've discovered, it's just easier to use a different form name.
What worked for me was
camel.main.run-controller = true
This is printed in the log at startup and in https://github.com/apache/camel-spring-boot-examples/blob/main/activemq/src/main/resources/application.properties.
You can mark the containing directory of the header files as system include. Usually compilers do not complain about those (gcc, clang, MSVC).
Either using SYSTEM in
target_include_directories(<target> [SYSTEM] [AFTER|BEFORE]
<INTERFACE|PUBLIC|PRIVATE> [items1...]
[<INTERFACE|PUBLIC|PRIVATE> [items2...] ...])
or
set_target_properties(xxxDepXxx PROPERTIES INTERFACE_SYSTEM_INCLUDE_DIRECTORIES $<TARGET_PROPERTY:xxxDepXxx,INTERFACE_INCLUDE_DIRECTORIES>)
See more info on this so question:
How to suppress Clang warnings in third-party library header file in CMakeLists.txt file?
There are no errors; just update to 8.3.1 in Tools → AGP.
Just a hint for anyone still having the same issue. If the circumstances of the process death aren't important (exit code and/or signal), there is a workaround: one can detect a process death by secondary indicators. One can rely on the *nix feature that guarantees that on exit each process closes all of its descriptors. Knowing this, one can create a simple pipe, where each end is held by one of the parties.
Then, a simple select on one end of the pipe will detect the event of the other end being closed (process exit). It's very common in this case to already have some communication line between parent and child (stdin/stdout/stderr or a custom pipe/unix socket), in which case that is all that is needed here.
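A minimal POSIX sketch of the idea (Python used here only for brevity; the same pattern applies to pipe()/select() in C):
import os, select, sys, time

r, w = os.pipe()
pid = os.fork()

if pid == 0:          # child: keep only the write end open
    os.close(r)
    time.sleep(2)     # ... real work would happen here ...
    sys.exit(0)       # all descriptors, including w, are closed on exit

os.close(w)           # parent: keep only the read end
select.select([r], [], [])   # wakes up when the child's end is closed (EOF)
print("child exited")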
I suggest using this plugin developed by me, vim-simple-guifont: https://github.com/awvalenti/vim-simple-guifont
Then you can do:
" This check avoids loading plugin when Vim is running on terminal
if has('gui_running')
silent! call simple_guifont#Set(
\['Cascadia Code PL', 'JetBrains Mono', 'Hack'], 'Consolas', 14)
endif
For more details, please check the plugin page on GitHub.
Facing the same issues - what combination of recent packages works? Any idea?
I suggest using this plugin developed by me, vim-simple-guifont: https://github.com/awvalenti/vim-simple-guifont
Then you can simply do:
silent! call simple_guifont#Set(['Monaco New'], 'monospace', 11)
For more details, please check the plugin page on GitHub.
I've written the occasional extension method in situations where it seemed to make sense, and I absolutely love what you can do with Linq, but I think there's one major drawback to the overuse of extension methods that no one else here has mentioned.
They can be a nightmare for Unit Testing, as they can severely complicate the process of Mocking public interfaces.
Numerous times now, I've approached writing a unit test that seems like it will be pretty straightforward. The method I'm testing has a dependency on an external service (ISomeService) and is calling the .Foo() method on that service.
So I create a mock ISomeService object using Moq or any other mocking framework, and go ahead and attempt to mock the Foo() call, only to discover that Foo() is an extension method. So now I have to dig into third-party extension method code to try and figure out what actual member of the ISomeService interface is ultimately being called by the Foo() extension method. And maybe Foo() calls Foo(int, string), which is also an extension method, and that calls Foo(int, string, SomeEnumType, object[]), and on and on it goes until eventually I find where it's actually calling a method that's actually a member of the ISomeService interface.
And that's if I'm lucky enough to find that my original Foo() call ultimately maps to just one invocation on the actual interface. If I'm unlucky, I may dig through a tangled web of third party extension method code to ultimately find calls to ISomeService.Bar(), ISomeService.Blam(string, bool), ISomeService.Fizzle(object[]), and ISomeService.Whomp(IEumerable<Task<bool>>). And now I need to figure out how to mock all those invocations just to adequately mock my one simple call to the Foo() extension method.
And that's not even the worst case scenario. Sometimes that tangled web of extension methods ends up referencing and passing around instances of types that are internal to those third party libraries, so I don't even have access to directly reference the types in my mocks. And all this time I'm screaming in my head, "Did you REALLY need to put all this stuff in extension methods??? Just make Foo() part of the interface!"
So, I would say if you're working on library code that's meant to be consumed by third parties, have mercy on those of us who just want to write good, well-tested code and use extension methods sparingly.
You are right. As of now, this feature is not yet supported by the WhatsApp Business API.
Exactly what I've been searching for forever. Thanks so much.
The error message you've encountered might be due to your bot not returning a response in time. Take note that the chat app must send a response within 30 seconds of receiving the event.
Another reason could be that the received response was not formatted correctly. Make sure that your bot follows the JSON format or else it will reject it and return that error message. You can refer to the documentation about responding to interaction events for more information.
This project's configure file will build shared libraries by default. If you run the configure script with the `--disable-shared` flag, then it will build the static (*.a) version instead.
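In other words, a typical out-of-the-box build would be (the prefix is just an example):
./configure --disable-shared --prefix=/usr/local
make
make install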
Of course, we could come up with a solution that uses 'c4', but I see the solution this way.
We can bring the data into the form you need with the help of an auxiliary column 'group'. It will help us index the values for the coming transformation.
Now we will write a function that creates a pd.Series. We take the values and place them into one array using flatten().
def grouping(g):
return pd.Series(g[['c1', 'c2', 'c3']].values.flatten(),
index=[f'c{i+1}' for i in range(9)])
We apply the function to the DataFrame grouped by the auxiliary column 'group'.
Full code:
import pandas as pd
data = {
'c1': [1, 10, 100, 1, 10, 100],
'c2': [2, 20, 200, 2, 20, 200],
'c3': [3, 30, 300, 3, 30, 300],
'c4': [0, 1, 2, 0, 1, 2]
}
df = pd.DataFrame(data)
df['group'] = df.index // 3
def grouping(g):
return pd.Series(g[['c1', 'c2', 'c3']].values.flatten(),
index=[f'c{i+1}' for i in range(9)])
transformed_df = df.groupby('group').apply(grouping).reset_index(drop=True)
I have updated the sub posted by @xiaoyaosoft using the CompactDatabase method as suggested by @June7. Tested for all combinations of setting changing, or removing a password, and also encrypting or decrypting the database. I didn't look at the database files at a low level after any of these changes to see what effect the encryption process has - I merely verified that it could be opened and read in Access after each change.
' Procedure : Set_DBPassword
' Author : Daniel Pineault, CARDA Consultants Inc.
' Website : http://www.cardaconsultants.com
' Purpose : Change the password of any given database
' Copyright : The following may be altered and reused as you wish so long as the
' copyright notice is left unchanged (including Author, Website and
' Copyright). It may not be sold/resold or reposted on other sites (links
' back to this site are allowed).
'
' Input Variables:
' ~~~~~~~~~~~~~~~~
' sDBName : Full path and file name with extension of the database to modify the pwd of
' sOldPwd : Existing database pwd - use "" if db is unprotected
' sNewPwd : New pwd to assign - Optional, leave out if you wish to remove the
' existing pwd
' bEncrypt : Encrypt the database if adding a new password, or decrypt it if removing
' (has no effect if only changing an existing password)
'
' Usage:
' ~~~~~~
' Set a pwd on a db which never had one
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "", "test"
'
' Clear the password on a db which previous had one
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "test", "" 'Clear the password
'
' Change the pwd of a pwd protected db
' Set_DBPassword "C:\Users\Daniel\Desktop\db1.accdb", "test", "test2"
'
' Revision History:
' Rev Date(yyyy/mm/dd) Description
' **************************************************************************************
' 2 2025-Aug-20 Made it work for changing to or from a blank password (by MEM)
' 1 2012-Sep-10 Initial Release
'---------------------------------------------------------------------------------------
Private Sub Set_DBPassword(sDBName As String, sOldPwd As String, Optional sNewPwd As String = "", Optional bEncrypt As Boolean = False)
On Error GoTo Error_Handler
Dim db As DAO.Database
'Password can be a maximum of 20 characters long
If Len(sNewPwd) > 20 Then
MsgBox "Your password is too long and must be 20 characters or less." & _
" Please try again with a new password", vbCritical + vbOKOnly
GoTo Error_Handler_Exit
End If
'Could verify pwd strength
'Could verify ascii characters
If sOldPwd = vbNullString And sNewPwd <> vbNullString Then ' use temporary file
If bEncrypt Then
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, dbEncrypt
Else
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd
End If
Kill sDBName
Name sDBName & ".$$$" As sDBName
ElseIf sOldPwd <> vbNullString And sNewPwd = vbNullString Then ' use temporary file database
If bEncrypt Then
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, dbDecrypt, ";pwd=" & sOldPwd
Else
DBEngine.CompactDatabase sDBName, sDBName & ".$$$", dbLangGeneral & ";pwd=" & sNewPwd, , ";pwd=" & sOldPwd
End If
Kill sDBName
Name sDBName & ".$$$" As sDBName
Else
Set db = OpenDatabase(sDBName, True, False, ";PWD=" & sOldPwd) 'open the database in exclusive mode
db.NewPassword sOldPwd, sNewPwd 'change the password
End If
Error_Handler_Exit:
On Error Resume Next
Kill sDBName & ".$$$"
db.Close 'close the database
Set db = Nothing
Exit Sub
Error_Handler:
'err 3704 - not able to open exclusively at this time, someone using the db
'err 3031 - sOldPwd supplied was incorrect
'err 3024 - couldn't locate the database file
MsgBox "The following error has occurred." & vbCrLf & vbCrLf & _
"Error Number: " & Err.Number & vbCrLf & _
"Error Source: Set_DBPassword" & vbCrLf & _
"Error Description: " & Err.Description, _
vbCritical, "An Error has Occurred!"
Resume Error_Handler_Exit
End Sub
'from :https://www.devhut.net/ms-access-vba-change-database-password/
Use OAuth instead; don't use a PAT, as it's not recommended and not secure.
Here are the steps:
https://docs.databricks.com/aws/en/admin/users-groups/manage-service-principals
Not all of the Mapbox examples include all the elements in a form, but the documentation states that "Eligible inputs must be a descendant of a <form> element".
I had missed the form requirement the first time and got inconsistent performance. Once I wrapped those fields in a form, it all worked well.
This might be an Xcode issue. I found an open GitHub thread that seems related to your issue, and it includes a list of workarounds from other developers that might help you gain insights into the issue you're encountering.
I have the same problem. Since Anaconda managed my Python environment, I generated a requirements.txt file using pip and created a virtual environment as indicated by PzKpfwlVB, including auto_py_to_exe. As a result, I was able to generate the executable file including PySide6.
I was able to correctly convert diacritic characters to their lowercase counterparts by creating the following auxiliary table with two columns and inserting pairs of diacritic characters in lower and upper case.
CREATE TABLE Latin1Accents (
UCASE STRING,
LCASE STRING
);
INSERT INTO Latin1Accents (UCASE, LCASE) VALUES
('À', 'à'),
('Á', 'á'),
('Â', 'â'),
('Ã', 'ã'),
('Ä', 'ä'),
('Å', 'å'),
('Ç', 'ç'),
('È', 'è'),
('É', 'é'),
('Ê', 'ê'),
('Ë', 'ë'),
('Ì', 'ì'),
('Í', 'í'),
('Î', 'î'),
('Ï', 'ï'),
('Ñ', 'ñ'),
('Ò', 'ò'),
('Ó', 'ó'),
('Ô', 'ô'),
('Õ', 'õ'),
('Ö', 'ö'),
('Ù', 'ù'),
('Ú', 'ú'),
('Û', 'û'),
('Ü', 'ü'),
('Ý', 'ý');
A CASE is used to verify if the UNICODE representation of the first character in the string is above 127, in order to find out if the character could be a diacritic one. If not, the lower function is used to convert the character. However, if the UNICODE value is above 127, a subquery is used to look for the lowercase representation of that character in the Latin1Accents auxiliary table. If the lowercase character could not be found in that table, the original character is returned.
SELECT Customer_Name,
SUBSTR (Customer_Name,1,1) as "First Letter", UNICODE (SUBSTR (Customer_Name,1,1)) ,
CASE
WHEN UNICODE (SUBSTR (Customer_Name,1,1)) > 127 THEN
(SELECT CASE WHEN LCASE IS NULL THEN SUBSTR (Customer_Name,1,1) ELSE LCASE END
FROM Latin1Accents
WHERE UCASE = SUBSTR (Customer_Name,1,1) )
ELSE
LOWER (SUBSTR (Customer_Name,1,1))
END as "First Letter in Lowercase"
FROM Customer
2-Install the package:
pip install supervision
# or
pip3 install supervision
In https://deepai.org/machine-learning-glossary-and-terms/affine-layer I have found the following:
"Mathematically, the output of an affine layer can be described by the equation:
output = W * input + b
where:
W is the weight matrix.
input is the input vector or matrix.
b is the bias vector.
This equation is the reason why the layer is called "affine" – it consists of a linear transformation (the matrix multiplication) and a translation (the bias addition)."
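A tiny NumPy illustration of that equation (the numbers are arbitrary):
import numpy as np

W = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # weight matrix
b = np.array([0.5, -0.5])    # bias vector
x = np.array([1.0, 1.0])     # input vector

output = W @ x + b           # linear transformation plus translation
print(output)                # [3.5 6.5]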
Solution does not work for forge 1.21.8
cannot find symbol
symbol: method getModEventBus()
location: variable context of type FMLJavaModLoadingContext
Maybe it's the new version of API Platform (4.1 now), but the config parameter worked for me like a charm, just where you'd written it.
The main point for me was to find all the ManyToMany relations.
When you are using the Bramus Router in PHP and defining routes that point to a static method, that static method must be declared with public visibility.
Example:
$router->get(
pattern: '/hr/users',
fn: [UserController::class, 'index']
);
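So the matching controller would look something like this (the class body and output are illustrative):
class UserController
{
    // Must be declared public; a private or protected static method will not be callable by the router
    public static function index()
    {
        echo 'user list';
    }
}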
When installing Git and Visual Studio Code, please refer to their official documentation to install them. I would recommend this rather than installing Ubuntu's snap packages, as they tend to be slower in performance.
And of course, you are using Ubuntu which is Debian-based so installing a debian package should be great and smooth for you.
For Git:
https://git-scm.com/downloads/linux
#For the latest stable version for your release of Debian/Ubuntu
apt-get install git
# For Ubuntu, this PPA provides the latest stable upstream Git version
add-apt-repository ppa:git-core/ppa
apt update; apt install git
For Visual Studio Code:
Download the .deb (Debian, Ubuntu) under Linux category.
https://code.visualstudio.com/download
# After downloading it, install it using the following command
sudo apt install ./code-latest.deb
After doing all of these, it should be working out of the box now. Enjoy coding!
JEP-488, released with Java 24 (as a preview feature), adds support for primitive types in `instanceof` operators. This means you can now simply write:
b instanceof byte
To upload a Next app to IIS, you can follow these steps:
1- Install the following modules
IIS NODE
https://github.com/Azure/iisnode/releases/tag/v0.2.26
URL REWRITE
https://www.iis.net/downloads/microsoft/url-rewrite
Application Request Routing
https://www.iis.net/downloads/microsoft/application-request-routing
2- Create a folder on your C: drive and copy the following into it:
The .next folder you get when you do npm run build.
The public folder
The node_modules
3- Create a server.js file in your folder with the following information.
const { createServer } = require("http");
const { parse } = require("url");
const next = require("next");
const dev = process.env.NODE_ENV !== "production";
const port = process.env.PORT ;
const hostname = "localhost";
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();
app.prepare().then(() => {
createServer(async (req, res) => {
try {
const parsedUrl = parse(req.url, true);
const { pathname, query } = parsedUrl;
if (pathname === "/a") {
await app.render(req, res, "/a", query);
} else if (pathname === "/b") {
await app.render(req, res, "/b", query);
} else {
await handle(req, res, parsedUrl);
}
} catch (err) {
console.error("Error occurred handling", req.url, err);
res.statusCode = 500;
res.end("internal server error");
}
})
.once("error", (err) => {
console.error(err);
process.exit(1);
})
.listen(port, async () => {
console.log(`> Ready on http://localhost:${port}`);
});
});
4- Configuration in IIS
Check that the modules are installed by clicking on the IIS server node,
then click on Modules to confirm iisnode is there.
After that, select Feature Delegation
and verify that Handler Mappings is set to Read/Write.
Then create the website in IIS, point it to the folder we created, click on the website, and open Handler Mappings.
Once inside, click Add Module Mapping: in Request Path put the name of the JS file (in this case "server.js"), in Module select iisnode, and in Name put iisnode.
Click OK; this will create a configuration file called web.config in our folder. Open it and place this:
<system.webServer>
<rewrite>
<rules>
<!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
<rule name="StaticContent">
<action type="Rewrite" url="public{REQUEST_URI}"/>
</rule>
<!-- All other URLs are mapped to the node.js site entry point -->
<rule name="DynamicContent">
<conditions>
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
</conditions>
<action type="Rewrite" url="server.js"/>
</rule>
</rules>
</rewrite>
<!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
<security>
<requestFiltering>
<hiddenSegments>
<add segment="node_modules"/>
</hiddenSegments>
</requestFiltering>
</security>
<!-- Make sure error responses are left untouched -->
<httpErrors existingResponse="PassThrough" />
<iisnode node_env="production"/>
<!--
You can control how Node is hosted within IIS using the following options:
* watchedFiles: semi-colon separated list of files that will be watched for changes to restart the server
* node_env: will be propagated to node as NODE_ENV environment variable
* debuggingEnabled - controls whether the built-in debugger is enabled
See https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config for a full list of options
-->
<!--<iisnode watchedFiles="web.config;*.js"/>-->
</system.webServer>
Finally, stop the site in IIS, refresh, and start the site again.
Not necessarily. You can override the certificate check with a change to the sys_properties... helps with troubleshooting connections when certs are an issue...
com.glide.communications.httpclient.verify_hostname = false
You need an __eq__ in Point. Otherwise, Python is comparing the two objects by their identities (addresses), and always coming up False.
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __eq__(self, other):
if not isinstance(other, Point):
return False
return self.x == other.x and self.y == other.y
$sql = "SELECT SUBSTRING(SUBSTRING_INDEX(`COLUMN_TYPE`,'\')',1),6) as set_str FROM `information_schema`.`COLUMNS` WHERE `TABLE_SCHEMA` = 'db' AND `TABLE_NAME` = 'tbl' AND `COLUMN_NAME` = 'col'";
$set_h = mysqli_query($c,$sql);
if (!$set_h){
echo "ERROR: ".$sql;
return;
}
if (mysqli_num_rows($set_h)!=0){
$row = mysqli_fetch_assoc($set_h); // mysqli replacement for the removed mysql_result()
$set_str = $row['set_str'];
echo $set_str.'<br>';
$type_a = explode("','",$set_str);
print_r($type_a);
}
return;
I'm also facing the same issue; please, can someone help?
Getting the below error with WebDriverIO v9.19.1.
By default the credentials are being set with my company's user profile. How do I fix this?
Please remove "incognito" from the "goog:chromeOptions" args, as it is not supported when running Chrome with WebDriver.
WebDriver sessions are always incognito mode and do not persist across browser sessions.