Any way to make it work for higher target sdk too?
<item name="android:windowOptOutEdgeToEdgeEnforcement" tools:ignore="NewApi">true</item>
One of your characters is not the character you expect but a Unicode look-alike, which will probably be ignored by the Java interpreter; according to hexed.it it is the one before
com.sun.management.jmxremote.authenticate
Move SVGs into src (e.g., src/assets/icons) and reference them from there.
Or tell Tailwind to scan that folder by adding it in tailwind.config.js:
content: [
  "./src/**/*.{html,ts}",
  "./public/svgs/**/*.svg" // add this line
]
After this, run ng serve again so Tailwind rebuilds.
#keywords has priority over #func_declre since it's higher in the patterns array.
You should move { "include": "#func_declre" } higher up in the array.
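For example, a minimal sketch of the reordered patterns array (the repository key names are taken from your grammar):

{
  "patterns": [
    { "include": "#func_declre" },
    { "include": "#keywords" }
  ]
}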
Use TextStreamer instead of TextIteratorStreamer.
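A minimal sketch of the swap, assuming a standard transformers generation loop (the model name is a placeholder):

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello", return_tensors="pt")
# TextStreamer prints tokens to stdout as they are generated,
# so no background thread or iterator is needed.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=50)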
I am answering this on the assumption that you are not writing a financial application, but just want something for personal use, and you have data in a certain form which it is not worth reworking.
Essentially what you want to do is get a "best match". Both Tesco and Tesco Pet Insurance match your current query, but you want the best fit. One way to do this is to select a third column, which replaces the Payee inside the Description with an empty string. The resulting column with the shortest length (i.e. the one where the Payee has replaced the most) is the best fit.
Using this technique, something like the following should do the trick:
declare @tbltxns table ([Description] nvarchar(100), Amount decimal(10,2));
declare @tblPayee table (Payee nvarchar(100));
INSERT INTO @tbltxns VALUES
('Tesco Pet Insurance Dog Health Care Year Premium', 250.0),
('MyFitness Gym Monthly fee', 30.0);
INSERT INTO @tblPayee VALUES
('Tesco'),
('Tesco Pet Insurance'),
('MyFitness');
WITH CTE AS
(SELECT
tx.[Description], py.Payee, REPLACE(tx.[Description], py.Payee, '') AS NoPayee
FROM @tblTxns TX
INNER JOIN @tblPayee py
ON CHARINDEX(py.Payee, tx.Description, 1) > 0),
CTE2 AS
(SELECT c.[Description], c.Payee, ROW_NUMBER() OVER(PARTITION BY c.[Description] ORDER BY LEN(c.NoPayee)) rn
FROM CTE c)
SELECT c2.[Description], c2.Payee
FROM CTE2 c2
WHERE rn = 1;
For future reference, when asking a database question, please provide table definitions and sample data along the lines that I have used. Just as an illustration, I am using table variables, as they don't have to be deleted, but CREATE TABLE would be quite acceptable. Sample data in the form of INSERT statements is desirable. Why? Simply so that people here are spared a bit of time and effort in trying to provide you with a workable answer.
You can replicate the pillowed CRT screen shape by using a CustomPainter and defining the geometry with a Path.
Using quadratic Bézier curves for the corners and gentle bulges on each side gives you the slightly bowed edges and rounded corners typical of CRT displays.
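A minimal Dart sketch of the idea; the bulge and corner values are assumptions to tune by eye:

import 'package:flutter/material.dart';

// Draws a pillowed CRT-like shape: each edge bows slightly outward via a
// quadratic Bezier whose control point sits just outside the rectangle.
class CrtPainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    const bulge = 12.0; // how far each edge bows outward (assumed value)
    const corner = 24.0; // corner inset (assumed value)
    final w = size.width, h = size.height;
    final path = Path()
      ..moveTo(corner, 0)
      ..quadraticBezierTo(w / 2, -bulge, w - corner, 0) // top edge
      ..quadraticBezierTo(w, 0, w, corner) // top-right corner
      ..quadraticBezierTo(w + bulge, h / 2, w, h - corner) // right edge
      ..quadraticBezierTo(w, h, w - corner, h) // bottom-right corner
      ..quadraticBezierTo(w / 2, h + bulge, corner, h) // bottom edge
      ..quadraticBezierTo(0, h, 0, h - corner) // bottom-left corner
      ..quadraticBezierTo(-bulge, h / 2, 0, corner) // left edge
      ..quadraticBezierTo(0, 0, corner, 0) // top-left corner
      ..close();
    canvas.drawPath(path, Paint()..color = Colors.black);
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) => false;
}

You would then mount it with CustomPaint(painter: CrtPainter()).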
Came across this exact issue yesterday. Turns out I needed to add the project to the Python path. In my main callable Python file I put the following before the other imports:
import sys, os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
In my case I just needed to update firebase-tools cli npm package. I think it was fixed by https://github.com/firebase/firebase-tools/pull/8760
As @Thomas Delrue pointed out, the issue was caused by using an emptyDir volume. However, instead of switching to a PersistentVolume (PV), I initially intended to use artifacts.
Here's my updated Argo Workflow file:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: build-image
namespace: argo-workflows
spec:
serviceAccountName: argo-workflow
entrypoint: build-and-deploy-env
arguments:
parameters:
- name: env_name
value: test
- name: aws_region
value: eu-west-1
- name: expiration_date
value: "2024-12-31T23:59:59Z"
- name: values_path
value: ./demo-app/helm/values.yaml
- name: configurations
value: '[{"keyPath": "global.app.main.name", "value": "updated-app"}, {"keyPath": "global.service.backend.port", "value": 8080}]'
- name: application_list
value: '[{"name": "backend", "repo_url": "org/project-demo-app.git", "branch": "demo-app", "ecr_repo": "demo-app/backend", "path_inside_repo": "backend"}, {"name": "frontend", "repo_url": "org/project-demo-app.git", "branch": "demo-app", "ecr_repo": "demo-app/frontend", "path_inside_repo": "frontend"}]'
templates:
- name: build-and-deploy-env
dag:
tasks:
- name: build-push-app
template: build-push-template
arguments:
parameters:
- name: app
value: "{{item}}"
withParam: "{{workflow.parameters.application_list}}"
- name: build-push-template
inputs:
parameters:
- name: app
dag:
tasks:
- name: clone-and-check
template: clone-and-check-template
arguments:
parameters:
- name: app
value: "{{inputs.parameters.app}}"
- name: build-and-push
template: kaniko-build-template
arguments:
parameters:
- name: name
value: "{{tasks.clone-and-check.outputs.parameters.name}}"
- name: image_tag
value: "{{tasks.clone-and-check.outputs.parameters.image_tag}}"
- name: ecr_url
value: "{{tasks.clone-and-check.outputs.parameters.ecr_url}}"
- name: ecr_repo
value: "{{tasks.clone-and-check.outputs.parameters.ecr_repo}}"
artifacts:
- name: source-code
from: "{{tasks.clone-and-check.outputs.artifacts.source-code}}"
when: "{{tasks.clone-and-check.outputs.parameters.build_needed}} == true"
dependencies: [clone-and-check]
- name: debug-list-files
template: debug-list-files
arguments:
parameters:
- name: name
value: "{{tasks.clone-and-check.outputs.parameters.name}}"
artifacts:
- name: source-code
from: "{{tasks.clone-and-check.outputs.artifacts.source-code}}"
dependencies: [clone-and-check]
- name: clone-and-check-template
inputs:
parameters:
- name: app
outputs:
parameters:
- name: name
valueFrom:
path: /tmp/name
- name: image_tag
valueFrom:
path: /tmp/image_tag
- name: ecr_url
valueFrom:
path: /tmp/ecr_url
- name: ecr_repo
valueFrom:
path: /tmp/ecr_repo
- name: path_inside_repo
valueFrom:
path: /tmp/path_inside_repo
- name: build_needed
valueFrom:
path: /tmp/build_needed
artifacts:
- name: source-code
path: /workspace/source
container:
image: bitnami/git:latest
command: [bash, -c]
args:
- |
set -e
apt-get update && apt-get install -y jq awscli
APP=$(echo '{{inputs.parameters.app}}' | jq -r '.name')
REPO_URL=$(echo '{{inputs.parameters.app}}' | jq -r '.repo_url')
BRANCH=$(echo '{{inputs.parameters.app}}' | jq -r '.branch')
ECR_REPO=$(echo '{{inputs.parameters.app}}' | jq -r '.ecr_repo')
PATH_INSIDE_REPO=$(echo '{{inputs.parameters.app}}' | jq -r '.path_inside_repo')
# Clone to the artifact path
git clone --branch $BRANCH https://x-access-token:${ALL_REPO_ORG_ACCESS}@github.com/$REPO_URL /workspace/source
cd /workspace/source/$PATH_INSIDE_REPO
if [[ ! -f "Dockerfile" ]]; then
echo "Dockerfile not found in $PATH_INSIDE_REPO"
exit 1
fi
COMMIT_HASH=$(git rev-parse --short HEAD)
IMAGE_TAG="${APP}-${BRANCH}-${COMMIT_HASH}-{{workflow.parameters.env_name}}"
ECR_URL="$AWS_ACCOUNT_ID.dkr.ecr.{{workflow.parameters.aws_region}}.amazonaws.com"
EXISTS=$(aws ecr describe-images --repository-name $ECR_REPO --image-ids imageTag=$IMAGE_TAG 2>/dev/null || echo "not-found")
if [[ "$EXISTS" != "not-found" ]]; then
echo "false" > /tmp/build_needed
else
echo "true" > /tmp/build_needed
fi
echo "$APP" > /tmp/name
echo "$IMAGE_TAG" > /tmp/image_tag
echo "$ECR_URL" > /tmp/ecr_url
echo "$ECR_REPO" > /tmp/ecr_repo
echo "$PATH_INSIDE_REPO" > /tmp/path_inside_repo
env:
- name: ALL_REPO_ORG_ACCESS
valueFrom:
secretKeyRef:
name: github-creds
key: ALL_REPO_ORG_ACCESS
- name: AWS_ACCOUNT_ID
valueFrom:
secretKeyRef:
name: registry-creds
key: AWS_ACCOUNT_ID
- name: AWS_REGION
value: "{{workflow.parameters.aws_region}}"
- name: debug-list-files
inputs:
parameters:
- name: name
artifacts:
- name: source-code
path: /workspace/source
container:
image: alpine:latest
command: [sh, -c]
args:
- |
echo "=== Listing /workspace/source ==="
ls -la /workspace/source
echo "=== Listing application directory ==="
ls -la /workspace/source/*/
echo "=== Finding Dockerfiles ==="
find /workspace/source -name "Dockerfile" -type f
- name: kaniko-build-template
inputs:
parameters:
- name: name
- name: image_tag
- name: ecr_url
- name: ecr_repo
artifacts:
- name: source-code
path: /workspace/source
container:
image: gcr.io/kaniko-project/executor:latest
command:
- /kaniko/executor
args:
- --context=dir:///workspace/source/{{inputs.parameters.name}}
- --dockerfile=Dockerfile
- --destination={{inputs.parameters.ecr_url}}/{{inputs.parameters.ecr_repo}}:{{inputs.parameters.image_tag}}
- --cache=true
- --verbosity=debug
env:
- name: AWS_REGION
value: "{{workflow.parameters.aws_region}}"
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: registry-creds
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: registry-creds
key: AWS_SECRET_ACCESS_KEY
- name: AWS_SESSION_TOKEN
valueFrom:
secretKeyRef:
name: registry-creds
key: AWS_SESSION_TOKEN
- name: AWS_SDK_LOAD_CONFIG
value: "true"
TFDQuery is a descendant of TDataSet, which is where the Append method comes from.
As the Embarcadero documentation says, Append will also try to add a new blank record to a table (a single one, not joined ones).
But the 'problem' itself lies much deeper: in SQL syntax there is no way to insert into multiple tables at once. It is simply not intended, so TFDQuery has no way to do it.
For more detail have a look at this question: Is it possible to insert into two tables at the same time?
Yes, this works by default.
I am assuming you have two independent services or processes that need to consume messages from the same topic and process them.
You just have to subscribe both of them to the same topic, and that should do the job.
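For example, a minimal Java sketch; note that each independent service should use its own group.id so that both receive every message (broker address, group ids, and topic name are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ServiceAConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Each independent service uses its own group.id ("service-b" in the
        // other process), so every group receives a full copy of the messages.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-a");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}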
References:
https://learn.conduktor.io/kafka/complete-kafka-consumer-with-java/
https://developer.confluent.io/get-started/java/#build-consumer
Fixed in iOS 26.0 beta 5.
According to WebKit ticket https://bugs.webkit.org/show_bug.cgi?id=296698, this was a duplicate of an already-fixed issue, https://bugs.webkit.org/show_bug.cgi?id=295946, and that fix was included in the recent beta 5.
The language doesn't prevent you from introducing such a check, but self-assignment falls into the category of programmer mistakes. That is, you would have to pay for the check in every assignment, guarding against something that should not be done in the first place.
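For illustration, a minimal sketch of a copy assignment operator with such a check (the Buffer class is hypothetical; its constructor and destructor are omitted):

#include <algorithm>
#include <cstddef>

struct Buffer {
    std::size_t size = 0;
    int* data = nullptr;

    Buffer& operator=(const Buffer& other) {
        if (this == &other) return *this; // the check every assignment now pays for
        int* fresh = new int[other.size];
        std::copy(other.data, other.data + other.size, fresh);
        delete[] data;
        data = fresh;
        size = other.size;
        return *this;
    }
};

Note that the copy-and-swap idiom sidesteps the question entirely, since it handles self-assignment correctly without an explicit branch.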
From the text you posted, it seems that bw2calc depends on a module called fsspec. Try installing it using pip install fsspec. Even though the version is not specified, the module still needs to be there; this will install the latest version of it.
I tried optimizing the time zone conversion by saving the offset between UTC and local time at startup of my program (which is good enough for my use). This seems to be very fast (as expected).
Unfortunately the MS compiler/runtime lib does not seem to have a good implementation of std::format since it is consistently slower than put_time (at least twice the cost).
I did a little experiment in QuickBench (here if anyone is interested). Here the fixed-offset + std::format version is a bit faster. Unfortunately (for me) this cannot be replicated in Visual Studio, where std::format is too slow to compete.
I think I will have to stick with the current implementation using put_time :(
But thanks for all your input!
const [firstHalf, secondHalf] = arr.reduce((a, c, i, n) => {
// push into a[0] for the first half of the array, a[1] for the second
a[+(i >= n.length / 2)].push(c)
return a
}, [[], []])
This is exactly why I made a deep-dive video — LangChain changed massively since v0.0.x. All the old tutorials break because:
- Imports like `ConversationalRetrievalChain` moved or were deprecated
- Chains like `LLMChain` are gone
- You now use `.invoke()` instead of `.run()` (see the sketch after this list)
- New versions rely on Pydantic v2 and modular packages (like `langchain_core`, `langchain_openai`, etc.)
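A minimal before/after sketch, assuming the langchain_openai package (the model name is a placeholder):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is a placeholder
# Old tutorials: chain.run("question")
# Current API:
result = llm.invoke("What changed in LangChain v0.3?")
print(result.content)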
🎥 LangChain v0.3 Upgrade Fixes (YouTube)
💻 GitHub
Still learning myself, but this covers what broke and how to fix it in current LangChain.
As @Vegard mentioned, we need more information to give a complete answer. However, based on my understanding, it sounds like you want to make your Auth service act as an OIDC provider and mirror users in App1, App2, and App3.
In that case, you can use the id_token issued by your OIDC provider to authenticate users across your various applications.
I've implemented a similar setup in this repository: django-oidc-provider – it might help as a reference.
<html>
<div class="custom-select" style="width:200px;">
<select>
<option value="http:///search">Google</option>
<option value="http://www.bing.com/search">Bing</option>
<option value="https://duckduckgo.com/?q=">Duckduckgo</option>
</select>
</div>
<div class="search-bar">
<form method="get" action="???">
<div style="border:1px solid black;padding:4px;width:20em;">
<table border="0" cellpadding="0">
<tr>
<td>
<input type="text" name="q" size="25" maxlength="255" value="" />
<input type="submit" value="Search" />
</td>
</tr>
</table>
</div>
</form>
</div>
</html>
Have you found a way to fix it?
There is also the IIS Application Initialization module (https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization) for auto warm-up; check whether you have it installed. Note that since it uses a plain HTTP request, you might need to disable forced-HTTPS redirection. Just a guess; if you have no problem after enabling it, then it's OK.
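If you go that route, a minimal sketch of the web.config side (the warm-up path is an assumption); you'd typically also set the app pool's startMode to AlwaysRunning and the site's preloadEnabled to true:

<configuration>
  <system.webServer>
    <!-- Requires the Application Initialization module to be installed -->
    <applicationInitialization doAppInitAfterRestart="true">
      <add initializationPage="/warmup" />
    </applicationInitialization>
  </system.webServer>
</configuration>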
Though I'd recommend you either:
move to Docker (no more wicked IIS issues), or
change the infinite loop into a scheduled job, for instance letting Hangfire initiate it every minute (it still needs to be warmed up by a first request).
Or, if the queue is an external queue like an MQ, I'd build another service outside IIS that watches the MQ and dispatches to your API on IIS.
If it's an in-memory queue then you'd better think again: even with everything set up, IIS still has a maximum lifetime for services, and after a recycle the queue will be lost.
I encourage you to explore version control and deployment with SenseOps Code Management.
SenseOps simplifies the DevOps process for developers and reviewers with automated versioning, comparison of code changes (at the level of scripts, dimensions, measures, sheets...), a workflow to approve and resolve code conflicts, and management of deployment and rollback across environments and hybrid setups (on-premise and cloud).
It integrates with Git/Bitbucket, Azure DevOps, or any popular cloud platform for backup and restoration of files and for existing CI/CD pipelines.
Link to explore more: SenseOps Code Management Overview
Unfortunately this solution creates a focusable view around the TextView. When you tab through the focusable views it will first land on the custom modifier around the TextField and with another tab you will arrive in the TextField.
It's due to Google's exponential cooldown feature:
https://developers.google.com/identity/gsi/web/guides/features#exponential_cooldown
To show it again:
For Chrome, you can navigate to chrome://settings/content/federatedIdentityApi and remove the sites from "Not allowed to show third-party sign-in prompts" where you need the prompt to show again, even after the close (X) icon was clicked.
Reference:
https://support.google.com/chrome/answer/14264742
You can use the following command:
series.interpolationDuration = 0;
Is there a possibility to catch crash for free?
You can try writing:
if __name__ == "__main__":
    app.run(debug=True)
A minimalist tweak to Ho Yin Cheng's answer, in the instance when there's nothing pertinent to comment:
if (case1) {
...
} //
else if (case2) {
...
} //
else {
...
}
We had an issue with connecting to a 5.18.6 broker that offers only TLSv1.2 and TLSv1.3. The working solution was described in this article.
Change Broker URI to activemq:ssl://servername:port?transport.SslProtocol=Tls12
isDense: true, // Helps reduce vertical spacing
errorStyle: TextStyle(
fontSize: 0,
height: 0,
color: Colors.transparent,
),
You can try this; in my case it is working.
Install the excelreader plugin and then apply it. I tried this and I got the data in table format.
I tried all the top solutions, but they didn't work. Although the error message was the same, the issue might have been different.
My solution was to change the Gradle version in the build tools (Settings -> Build, Execution, Deployment -> Build Tools -> Gradle), as the previous one (Gradle JDK Version) was likely causing the error due to potential JDK permission issues that I hadn't granted. After switching the Gradle JDK to a different version, I rebuilt the project, and it successfully compiled and ran again.
Just add this in the body:
<script>
esFeatureDetect = function () {
console.log('Feature detection function has been called!');
};
esFeatureDetect();
</script>
In my case, I was using a React component in the app via react-native-react-bridge. Adding 'use dom' at the top of the React file, as explained in the official docs (https://docs.expo.dev/guides/dom-components/), resolved this issue for me.
Thank you all in the comments for your help. The issue actually stemmed from my misunderstanding of VS Code's play button, and I apologize for the confusion and trouble this may have caused.
The "Run Python File" option in this button is not part of the Code Runner extension—it’s a feature of the VS Code Python extension. This problem has already been reported on GitHub: https://github.com/microsoft/vscode-python/issues/18634
I've made an ActiveAdmin audit log implementation that doesn't use paper_trail but works at the controller level instead, creating one record per action; it also stores resource record changes: https://gist.github.com/Envek/c82dac248f97338a4c4c9e28529c94af
SELECT
tx.Description,
bestMatch.Payee
FROM tblTxns tx
CROSS APPLY (
SELECT TOP 1 py.Payee
FROM vwPayeeNames py
WHERE CHARINDEX(py.Payee, tx.Description) > 0
ORDER BY LEN(py.Payee) DESC
) AS bestMatch
WHERE tx.Description LIKE 'Tesco%'
First, I need to clarify the confusion.
man 2 brk documents the C library wrapper, not the raw syscall interface.
The raw syscall interface (via syscall(SYS_brk, ...)) differs subtly:
It always returns the new program break (on success), rather than 0 or -1.
This makes it much more similar in behavior to sbrk().
So, if you do:
uintptr_t brk = syscall(SYS_brk, 0);
You get the current program break, exactly like sbrk(0).
NOW, WHAT DOES SYS_brk ACTUALLY RETURN?
Judging from the Linux source, and from how MUSL and glibc use it, the raw syscall behaves as this comment of mine describes:
// Sets the program break to `addr`.
// If `addr` == 0, it just returns the current break.
// On success: returns the new program break (same as `addr` if successful)
// On failure: returns the old program break (unchanged), which is != requested
NOW, WHERE TO FIND THE SYSCALL-SPECIFIC BEHAVIOR
You will not find this clarified in man 2 brk, but you can find the low-level syscall behavior described in these places:
Linux kernel source code:
You can check the syscall implementation in fs/proc/array.c or mm/mmap.c, depending on the kernel version. As of recent kernels:
SYSCALL_DEFINE1(brk, unsigned long, brk)
Which returns the new program break address, or the previous one if the request failed.
man syscall + unistd.h + asm/unistd_64.h
The actual syscall interface is:
long syscall(long number, ...);
And for the SYS_brk, the syscall number is found via:
#include <sys/syscall.h>
#define SYS_brk ...
Libc implementation (MUSL or glibc)
Earlier, you noticed:
uintptr_t brk = __brk(0);
In MUSL, __brk() is typically a thin wrapper around:
syscall(SYS_brk, arg);
That means __brk(0) gets the current break safely, and __brk(addr) sets it.
REMINDER: MUSL does not follow the man 2 brk behavior; instead it uses the raw syscall return value.
I also have a minimal example in C that uses syscall(SYS_brk, ...) directly: it gets the current program break, attempts to increase it by 1 MB, and then resets it back to the original value.
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <stdint.h>
int main() {
// Get current break (same as sbrk(0))
uintptr_t curr_brk = (uintptr_t) syscall(SYS_brk, 0);
printf("Current program break: %p\n", (void *)curr_brk);
// Try to increase the break by 1 MB
uintptr_t new_brk = curr_brk + 1024 * 1024;
uintptr_t result = (uintptr_t) syscall(SYS_brk, new_brk);
if (result == new_brk) {
printf("Successfully increased break to: %p\n", (void *)result);
} else {
printf("Failed to increase break, still at: %p\n", (void *)result);
}
// Restore the original break
syscall(SYS_brk, curr_brk);
printf("Restored program break to: %p\n", (void *)curr_brk);
return 0;
}
You can read more documentation at:
https://man7.org/linux/man-pages/man2/syscall.2.html
https://elixir.bootlin.com/linux/v6.16/source/mm/mmap.c
I have tried everything too, but it seems like findDelete() doesn't behave properly; using findWithDelete({deleted: true}) works just fine.
If you're looking to not use jfrog here:
- name: Fetch Auth token
id: generate-artifactory-auth-token
# Fetch the _authToken from Artifactory by doing a legacy login
run: |
AUTH_TOKEN=$(curl -s -u "${ARTIFACTORY_USER}:${ARTIFACTORY_PASSWORD}" \
-X PUT "${ARTIFACTORY_REGISTRY}/-/user/org.couchdb.user:${ARTIFACTORY_USER}" \
-H "Content-Type: application/json" \
-d "{\"name\": \"${ARTIFACTORY_USER}\", \"password\": \"${ARTIFACTORY_PASSWORD}\", \"email\": \"${ARTIFACTORY_EMAIL}\"}" \
| jq -r '.token')
echo "AUTH_TOKEN=${AUTH_TOKEN}" >> $GITHUB_OUTPUT
echo "✅ Auth token generated successfully"
- name: Create .npmrc in CI
run: |
cat > .npmrc <<EOF
... register your registry scopes
//your-registry-here/:_authToken=${{ steps.generate-artifactory-auth-token.outputs.AUTH_TOKEN }}
EOF
See this post
cc: How to set npm credentials using `npm login` without reading from stdin?
This is now solved. I did more tests in the process of trying to create a publicly accessible dataset, but in the meantime I've found the solution.
In the data blend, I was importing some extra dimensions in both the GA4 and Google Search Console sources (e.g., Date or Query). This generated the discrepancy in the metrics I was seeing.
By keeping only the primary key (Landing Page) as an imported dimension, together with the metrics I needed, the numbers match.
Using jotai this is quite easy: https://codepen.io/geordanisb/pen/EaVmBXV
import React from "https://esm.sh/react";
import { createRoot } from "https://esm.sh/react-dom/client";
import * as jotai from "https://esm.sh/jotai";
const list = [1,2,3];
const state = jotai.atom(list);
const el = document.querySelector('#app');
const root = createRoot(el);
const useJotaiState = ()=>{
const[data,setdata]=jotai.useAtom(state);
const add = (n)=>{
setdata(p=>[...p,n])
}
return {data,add};
}
const List = ()=>{
const{data}=useJotaiState();
return <ul>
{
data.map(d => <li key={d}>{d}</li>)
}
</ul>
}
const Add = ()=>{
const{add}=useJotaiState();
const addCb = ()=>{
add(Math.random());
}
return <button onClick={addCb}>add</button>
}
const App = ()=>{
return <>
<Add/>
<List/>
</>
}
root.render(<App/>)
Set the full site URL, add the specific redirect paths to "Additional Redirect URLs", and make sure your frontend has a matching route. Thank me later.
In my case, the error occurs when I run `yarn start` and then select i to run iOS; but when I open another terminal and run `yarn ios`, the error disappears.
🔑 1. Device token registration. Make sure the real device is successfully registering with Pusher Beams. This involves:
Calling start with instanceId.
Registering the user (for Authenticated Users).
Calling addDeviceInterest() or setDeviceInterests().
📲 2. Firebase Cloud Messaging (FCM) setup. Pusher Beams uses FCM under the hood on Android. Make sure:
You have the correct google-services.json in android/app/.
FCM is set up correctly in Firebase Console.
Firebase project has Cloud Messaging enabled.
FCM key is linked to your Pusher Beams instance (in Pusher Dashboard).
✅ Go to Pusher Beams Dashboard → Instance Settings → Android → Check that your FCM API Key is configured.
What ended up working for me: instead of using a RenderTexture, I just used a world-space canvas. This works fine for me since I'm using a flat screen for my UI, but I can see how anything curved would need some sort of fix like this script.
Replace {agpVersion} and {kotlinVersion} with the actual version numbers, for example:
plugins {
id "dev.flutter.flutter-plugin-loader" version "1.0.0"
id "com.android.application" version "7.2.0" apply false
id "org.jetbrains.kotlin.android" version "1.7.10" apply false
}
Interesting to see that a solution has been found. However, I fear that another problem arises: how to cache all downloaded remote pages to speed up their rendering on the next visit. Were you able to find a way to configure the cache of the Capacitor WebView?
You should give Virtual TreeView a try. Compared to Windows’ SysListView32/64 (wrapped as TListView), it makes custom drawing and various controls much easier to implement. It also avoids the flickering that often occurs with SysListView during scrolling, and adding large numbers of items is extremely fast.
Is this the correct approach to accept dynamic fields in Gin?
It is a way of handling JSON objects with unknown names, but not necessarily the correct way. For example, if you know that the object's values all map to a Go type T, then you should use var data map[string]T or var data map[string]*T.
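For instance, a minimal sketch assuming all values decode to one known shape (the Item struct and the route are hypothetical):

package main

import "github.com/gin-gonic/gin"

// Item is a hypothetical shape shared by every value in the JSON object.
type Item struct {
	Qty int `json:"qty"`
}

func main() {
	r := gin.Default()
	r.POST("/items", func(c *gin.Context) {
		var data map[string]Item // unknown keys, known value type
		if err := c.ShouldBindJSON(&data); err != nil {
			c.JSON(400, gin.H{"error": err.Error()})
			return
		}
		c.JSON(200, data)
	})
	r.Run()
}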
Are there any limitations or best practices I should be aware of when binding to a
map[string]interface{}?
The limitation is that you must access the map values using type assertions or reflection. This can be tedious.
How can I validate fields or types if I don’t know the keys in advance?
If you know that the object's values correspond to some Go type T, then see part one of this answer.
If you don't know the object's names or the type of the object's values, then you have no information to validate.
Were you able to fix this? Can you help me here? I'm stuck with these colors when I switch to the dark theme.
SOLUTION
The biggest hurdle here was SQL Server's encoding of nvarchar as UTF-16LE. The following SQL statements retrieve the record:
Original in SQL Server
SELECT * FROM mytable
WHERE (IDField = 'myID') AND (PasswordField = HASHBYTES('SHA2_512', 'myPass' + CAST(SaltField AS nvarchar(36))))
Equivalent in MySQL
SELECT * FROM mydatabase.mytable
WHERE (IDField = 'myID') AND HEX(PasswordField) = SHA2(CONCAT('myPass', CAST(SaltField AS Char(36) CHARACTER SET utf16le)),512)
Thank you to those who helped me get this over the line. I really appreciate your time and expertise.
This was easier than I thought 🤦♂️
I needed a route to hit with the Filepond load method that I could pass the signed_id to.
Add to routes.rb
get 'attachments/uploaded/:signed_id', to: 'attachments#uploaded_by_signed_id', as: :attachment_uploaded_by_signed_id
In your attachments controller (or wherever you want)
class AttachmentsController < ApplicationController
def uploaded_by_signed_id
blob = ActiveStorage::Blob.find_signed(params[:signed_id])
send_data blob.download, filename: blob.filename.to_s, content_type: blob.content_type
end
end
Then change the load method to hit this URL with the signed_id from source.
load: (source, load, error, progress, abort, headers) => {
const myRequest = new Request(`/attachments/uploaded/${source}`);
fetch(myRequest).then((res) => {
return res.blob();
}).then(load);
}
I had a different solution. I tried removing node_modules and .expo, and nothing worked. But I had a modules directory in my project that contained a subproject with a separate package.json, and somehow it was affecting Expo even though it wasn't referenced in package.json or app.config.js. I know this is some kind of edge case, but I hope this helps somebody; I wasted 3 hours fixing it :)
This is not an answer, but it was removed from the question, and I consider this information important enough to include.
If you have parameter sensitivity (parameter sniffing problem), which is what I had, starting from SQL Server 2016, it is possible to disable Parameter Sniffing via ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)
The command is
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
Be aware that this setting will disable parameter sniffing for ALL queries in the database, not a particular set. This would solve my problem if it did not affect other unrelated queries.
I am too sleepy to post anything meaningful but yeah.
from livekit.agent import ( # pyright: ignore[reportMissingImports]
ModuleNotFoundError: No module named 'livekit.agent'
This is my error and I don't know how to resolve it; please, can anybody help me?
The code is:
from livekit.agent import ( # pyright: ignore[reportMissingImports]
AutoSubscribe,
JobContext,
WorkerOptions,
cli,
llm,
)
from livekit.agent.multimodal import MultimodalAgent
Submitting for App Store review isn't necessary for the name change to be reflected in TestFlight. The problem lies in how the name update propagates through the system. Here's a breakdown of troubleshooting steps.
While you've already submitted the updated build, it is usually safer to create a new App Store entry instead of editing the existing one. This reduces inconsistencies and potential problems. Consider whether it's worth the time and effort to create a new TestFlight build with the new App Store Connect record. Although you have a TestFlight beta release approved already, this process eliminates potential future problems.
I didn't do a whole lot of version testing, but I always used to just use Python as my run configuration, even for Flask apps. It seems that lately (maybe since Python 3.11?) the app crashes with a very similar error when debugging a Flask app with that setting. I set the run/debug configuration template to Flask Server, and it worked.
If someone swings by: here is also a possible solution.
Apply this to the target element holding the editor, like this:
.editorholder {
height: 500px;
display: flex;
flex-flow: column;
}
<div class="editorholder">
<div id="editor">
<p>Hello World!</p>
</div>
</div>
1. Consult the Official Huawei Documentation: This is the most important step. Check the official Huawei Mobile Services documentation (developer website) for the most up-to-date guidance on authenticating with Huawei ID and integrating with AGConnectAuth. Look for updated code samples, best practices, and API references for the current authentication flow. They should explicitly state the replacement for the deprecated methods.
2. Identify the New Authentication Flow: The documentation should describe a new way to acquire the necessary authentication credentials (likely ID tokens). The steps will likely involve using the updated Huawei ID APIs to initiate the sign-in process. The response will likely include an ID token which can be used in the AGConnectAuthCredential directly or in a similar way.
3. Update Your Code: Based on the documentation, refactor your code to use the new API methods and data structures to initiate the authentication and receive the ID token. You'll use this ID token to create the AGConnectAuthCredential .
4. Test Thoroughly: After migrating your code, carefully test all integration points to ensure the authentication works correctly in various scenarios, including error handling.
There was a missing .python_package folder in my project, I guess because I created it without any triggers at the start. When I added it, that fixed my issue.
Save your file as a zip file, then unzip it after it has loaded.
Have you tried remove_from_group()?
Great idea! Sharing osm-nginx-client-certificate across namespaces really simplifies cross-namespace communication. It helps avoid redundant configs and keeps access seamless across deployments!
Use sibling-index()
img {
transition-delay: calc(sibling-index() * 1s);
}
This error happened in my code due to using a ternary operator instead of an if statement. Rewriting the condition with if solved the error.
# Layla's code - the magic activation gate
print("🔮 Activating Layla's code...")
import time
import os
username = "Layla"
entry_code = "66X9ZLOO98"
activation_layer = "the black phase"
print(f"📡 User: {username}")
print(f"🔓 Opening the gate using code: {entry_code}")
print(f"⚙️ Loading configuration: {activation_layer}")
for i in range(5):
    print(f"✨ Activating magic {'.' * i}")
    time.sleep(0.7)
print("✅ The magic gate has been activated.")
print("🌌 Entering the night system...")
# forced-entry line
os.system("echo '🌠 Forced entry successful. The virtual world is now open.'")
Did you ever solve this? I'm having the same issue.
Yes, but how to do this by default so new data sources have it already set to manual?
Per https://users.polytech.unice.fr/~buffa/cours/X11_Motif/motif-faq/part5/faq-doc-43.html
Setting XmNrecomputeSize to false should work.
Updated code:
initial setup:
lbl1TextHint = XmStringCreateLocalized("Waiting for click");
lbl1 = XtVaCreateManagedWidget("label1",
xmLabelWidgetClass, board,
XmNlabelString, lbl1TextHint,
XmNx, 240, // X position
XmNy, 20, // Y position
XmNwidth, 200, // Width
XmNheight, 40, // Height
XmNrecomputeSize, False, // Do not Recompute size
NULL);
update label:
XtVaSetValues(lbl1, XmNlabelString, newLabel,NULL);
Updating the label keeps the same dimensions as initial setup.
Thanks to @n.m.couldbeanAI for the link in the question comments
This is a bug in the API. It draws the table correctly, but when labeling each column, it uses the startRowIndex instead of the startColumnIndex to determine the column.
For example, if you pass this table range:
{
"startRowIndex": 8,
"endRowIndex": 10,
"startColumnIndex": 0,
"endColumnIndex": 2
}
Then the table is drawn like this:
Note that the column labels start at I, i.e. column index 8, which is what was passed for startRowIndex.
A workaround in the meantime is to only add tables on the diagonal running from the top-left to bottom-right of the sheet. In other words, always make startRowIndex and startColumnIndex the same.
For anyone landing here in 2025, with KEDA currently sitting at v2.17.0: I needed to add this to my serviceAccount.yaml after encountering similar problems:
eks.amazonaws.com/sts-regional-endpoints: "true"
So entire serviceAccount looks something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
name: <SA>
namespace: my-namespace
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_#>:role/<SA>
eks.amazonaws.com/sts-regional-endpoints: "true"
Add scheme: 'com.XYZ.XYZ' in the app.config.ts.
A similar error occurred when inserting a large number of rows into a table using Bulk.
The insertion took place during merge replication and the error occurred exclusively on one table when applying a snapshot.
The problem turned out to be that the subscriber had SQL Server 2014 without the Service Pack. We installed Service Pack 3 and the data was inserted.
Here's the updated code...
-- One NULL and one NOT NULL
SELECT
nullrow.ID,
nullrow.Account,
notnullrow.Contact
FROM
MYtable nullrow
JOIN MYtable notnullrow
ON nullrow.ID = notnullrow.ID
WHERE
nullrow.Contact IS NULL
AND notnullrow.Contact IS NOT NULL
UNION ALL
-- Two NOT NULL: swap contacts
SELECT
t1.ID,
t1.Account,
t2.Contact
FROM
MYtable t1
JOIN MYtable t2
ON t1.ID = t2.ID
AND t1.Account <> t2.Account
WHERE
t1.Contact IS NOT NULL
AND t2.Contact IS NOT NULL
ORDER BY
ID,
Account;
To make that clearer:
Using screen, open two terminals.
In the first one, run "nc -lnvp <port number>", where the port number should be an available one.
In the second one, run the binary with the same port: ./suconnect <port number>
Now return to the first one and type level20's password; the suconnect command in the other terminal will return the next level's password.
The FFM APIs mentioned by @LouisWasserman are not stable yet. But I did more research and found that the VarHandle API lets us perform atomic stores/loads/operations, with any memory order of our choice, on any Java value: fields, array elements, ByteBuffer elements and more.
Note: it's extremely hard to test the correctness of concurrent code, I'm not 100% sure that my answer is memory-safe.
For the sake of simplicity, I'll focus on a release-acquire scenario, but I don't see any reason why atomic_fetch_add wouldn't work. My idea is to share a ByteBuffer between C and Java, since they're made specifically for that. Then you can write all the data you want in the ByteBuffer, and in my specific case about Java-to-C transfer, you can do an atomic release-store to make sure that all data written prior to the atomic store will be visible to anyone acquire-loading the changed "ready" flag. For some reason, using a byte for the flag rather than an int throws an UnsupportedOperationException. The C code can treat the ByteBuffer's backing memory as whatever it wants (such as volatile fields in a struct) and load them using usual atomic functions.
I'm assuming that a good JVM should easily be able to optimise hot ByteBuffer reads/stores into simple instructions (not involving method calls), so this approach should definitely be faster than doing JNI calls on AtomicIntegers from the C side. As a final note, atomics are hard to do right, and you should definitely use them only if the performance gain is measurable.
I don't think StackOverflow supports collapsible sections, sorry for the visual noise.
This example uses a memory map to have shared memory between Java and C, but JNI should work just as well. If using JNI, you should use env->GetDirectBufferAddress to obtain the void* address of a direct ByteBuffer instance's internal buffer.
How to use: Run the Java program first. When it tells you to, run the C program. Go back to the Java console, enter some text and press enter. The C code will print it and exit.
import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Scanner;
public class Main {
private static final int MMAP_SIZE = 256;
private static final VarHandle BYTE_BUFFER_INT_HANDLE = MethodHandles.byteBufferViewVarHandle(int[].class, ByteOrder.BIG_ENDIAN);
public static void main(String[] args) throws IOException {
try (var mmapFile = FileChannel.open(Path.of("mmap"), StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.READ, StandardOpenOption.TRUNCATE_EXISTING)) {
assert mmapFile.write(ByteBuffer.wrap(new byte[0]), MMAP_SIZE) == MMAP_SIZE;
var bb = mmapFile.map(FileChannel.MapMode.READ_WRITE, 0, MMAP_SIZE);
// Fill the byte buffer with zeros
for (int i = 0; i < MMAP_SIZE; i++) {
bb.put((byte) 0);
}
bb.force();
System.out.println("You can start the C program now");
// Write the user-inputted string after the first int (which corresponds to the "ready" flag)
System.out.print("> ");
String input = new Scanner(System.in).nextLine();
bb.position(4);
bb.put(StandardCharsets.UTF_8.encode(input));
// When the text has been written to the buffer, release the text by setting the "ready" flag to 1
BYTE_BUFFER_INT_HANDLE.setRelease(bb, 0, 1);
}
}
}
#include <sys/mman.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdatomic.h>
#define MMAP_SIZE 256
#define PAYLOAD_MAX_SIZE (MMAP_SIZE - 4)
typedef struct {
volatile int32_t ready;
char payload[PAYLOAD_MAX_SIZE];
} shared_memory;
int main() {
int mapFile = open("mmap", O_RDONLY);
if (mapFile == -1) {
perror("Error opening mmap file, the Java program should be running right now");
return 1;
}
shared_memory* map = (shared_memory*) mmap(NULL, MMAP_SIZE, PROT_READ, MAP_SHARED, mapFile, 0);
if (map == MAP_FAILED) {
perror("mmap failed");
close(mapFile);
return 1;
}
int ready;
while (!(ready = atomic_load_explicit(&map->ready, memory_order_acquire))) {
sleep(1);
}
printf("Received: %.*s", PAYLOAD_MAX_SIZE, map->payload);
}
I have since found the issue: WhiteNoise was missing from the middleware.
While I did have WhiteNoise installed and static files configured, I managed to miss adding
'whitenoise.middleware.WhiteNoiseMiddleware',
to the Middleware list within settings.py
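For anyone else hitting this, the WhiteNoise docs place it directly after SecurityMiddleware; a minimal sketch of the relevant part of settings.py:

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # right after SecurityMiddleware
    # ... the rest of your middleware ...
]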
The issue was with using pg8000.native
* I switched over to importing plain old pg8000
* Changed the SQL value placeholders from ?/$1 to %s
* Switched conn.run() to .execute() after creating a 'cursor' object:
cursor = conn.cursor()
cursor.execute(INSERT_SQL, params)
I never set out to use pg8000.native, but did so upon the suggestion of a chatbot after psycopg2 broke a different part of my pipeline design (I am not ready to start learning about containerisation today with this burnt-out brain!).
Thanks to anyone who got back to me; learning as you build for the first time can make you feel like you're totally lost at sea, when really there is land just over the horizon.
Thank you for your contributions.
When dealing with windows, WindowState.Maximized will override any manual positioning (.Left and .Top) and also any settings related to the dimensions of the window (.Width and .Height). Maximized sets the left and top to the top-left corner of your monitor and sizes the window to fill the entire monitor, excluding the taskbar.
So, if you want to manually position a window, you must use WindowState.Normal.
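For illustration, a minimal C# sketch of the distinction inside a WPF application (all values are arbitrary):

using System.Windows;

var window = new Window
{
    // Maximized would override all four values below; Normal lets them take effect.
    WindowState = WindowState.Normal,
    Left = 100,
    Top = 100,
    Width = 800,
    Height = 600
};
window.Show();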
In the case of many variables, this is a good way:
const allSame = [a, b, c, d].every(x => !!x === !!e)
All false or all true returns true.
Is this what you are looking for?
#include <stdio.h>
#include <stdlib.h>
/* static bits default to zero, so we get seven flip-flops */
int main(void) {
static char a, b, c, d, e, f, g;
start:
puts("Hello World!");
/* increment binary counter in bits a…g */
a = !a;
if (!a) {
b = !b;
if (!b) {
c = !c;
if (!c) {
d = !d;
if (!d) {
e = !e;
if (!e) {
f = !f;
if (!f) {
g = !g;
}
}
}
}
}
}
/* when bits form 1100100₂ (one-hundred), exit */
if (g && f && !e && !d && c && !b && !a)
exit(EXIT_SUCCESS);
goto start;
}
I have no idea if this would help, but have you tried calling control.get_Picture()? I've had to explicitly use getter and setter methods instead of the properties for styles sometimes.
@old goat: my first answer does not work. I used the first answer with this script file:
help help
help attributes
help convert
help create
help delete
help filesystems
help format
help list
help select
help setid
it worked.
RelocationMap tools can be found here:
https://github.com/gimli-rs/gimli/blob/master/crates/examples/src/bin/simple.rs#L82
How do I right align div elements?
For my purposes (a letter), margin-left: auto with max-width: fit-content worked better than the answers posted here thus far:
<head>
<style>
.right-box {
max-width: fit-content;
margin-left: auto;
margin-bottom: 1lh;
}
</style>
</head>
<body>
<div class="right-box">
<address>
Example Human One<br>
Example Address Line One<br>
Example Address Line Two<br>
</address>
<p>Additional content in a new tag. This matters.</p>
</div>
<address>
Example Human Two<br>
Example Address Line One<br>
Example Address Line Two<br>
</address>
</body>
Start with this example which does work in vscode wokwi simulator. Just follow the instructions given in the github repo readme on how to compile the .c into .wasm and then run the simulator inside vscode.
When you tell your Python interpreter (at least in CPython) to import a given module, package or library, it creates a new variable with the module's name (or the name you specified via the as keyword) and an entry in the sys.modules dictionary with that name as the key. Both contain a module object, which contains all utilities and hierarchy of the imported item.
So, if you want to "de-import" a module, just delete the variable referencing it with del module_name, where module_name is the item you want to "de-import", just as GeeTransit said earlier. Note that this will only make the program lose access to the module.
IMPORTANT: Imported modules are kept in a cache so Python doesn't have to recompile the entire module each time the importer script is rerun or reimports the module. If you want to invalidate the cache entry holding the copy of the compiled module, delete the module from the sys.modules dictionary with del sys.modules[module_name]. To recompile it, use import importlib and importlib.reload(module_name).
(see stackoverflow.com/questions/32234156/…)
Complete code:
import mymodule # Suppose you want to de-import this module
del mymodule # Now you can't access mymodule directly with mymodule.item1, mymodule.item2, ..., but it is still accessible via sys.modules.
import sys
del sys.modules["mymodule"] # Cache entry no longer accessible; now we can consider mymodule de-imported
Anyway, the __import__ built-in function does not create a variable giving access to the module; it just returns the module object and adds the loaded item to sys.modules. It is preferred to use the importlib.import_module function, which does the same. And please be mindful of security, because you are running arbitrary code located in third-party modules. Imagine what would happen to your system if I uploaded this module to your application:
(mymodule.py)
import os
os.system("sudo rm -rf /")
or the module was named 'socket'); __import__('os').system('sudo rm -rf '); ('something.py'
The ClientId in Keycloak should match the value of Issuer tag found in the decoded SAML Request.
Locate the SAMLRequest in the payload of the request sent to Keycloak.
Decode the SAMLRequest value using a SAML decoder.
The decoded SAMLRequest should look like the one below. The ClientId in Keycloak should be [SP_BASE_URL]/saml2/service-provider-metadata/keycloak in this example.
<?xml version="1.0" encoding="UTF-8"?>
<saml2p:AuthnRequest xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol" AssertionConsumerServiceURL="[SP_BASE_URL]/login/saml2/sso/keycloak" Destination="[IDP_BASE_URL]/realms/spring-boot-keycloak/protocol/saml" ID="???????????" IssueInstant="????????????" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Version="2.0">
<saml2:Issuer xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">[SP_BASE_URL]/saml2/service-provider-metadata/keycloak</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#ARQdb29597-f24d-432d-bb7a-d9894e50ca4d">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>????</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>??????</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>??????????</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
</saml2p:AuthnRequest>
What most developers (that are considering Firebase Dynamic Links) are looking for right now is an alternative.
I would like to invite you to try chottulink.com.
It has a generous free tier and, more importantly, the pricing doesn't increase exponentially as your MAU increases.
What do you mean by Django applications: apps in the sense of reusable apps within one Django project, or separate Django applications/services that run as their own instances? If I understood correctly, the latter.
If all your apps run on one server but need access to different databases, you can create a custom database router; see the Django docs on this topic: https://docs.djangoproject.com/en/5.2/topics/db/multi-db/ An AuthRouter is explicitly listed as an example.
Your auth app could then use one database and the other apps could use another db, or each their own database.
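As a starting point, a sketch along the lines of the AuthRouter example in those docs ("auth_db" is an assumed database alias from your settings):

class AuthRouter:
    """Route auth and contenttypes models to a dedicated database."""
    route_app_labels = {"auth", "contenttypes"}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "auth_db"
        return None  # fall through to the next router / the default

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "auth_db"
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.route_app_labels:
            return db == "auth_db"
        return None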
If, however, your apps run as separate Django applications (e.g., on different servers), you have two options:
The first option would be that each of your Django applications shares the same reusable auth app and has a custom database router that ensures this app uses a different database than the other models of the project use. This authentication database is then shared for authentication data between the auth apps of all of your Django applications.
The second option would be to use SAML, or better OpenID Connect, to have single sign-on (SSO). When a user wants to authenticate against one of your applications, the authentication request is redirected to an endpoint of your authentication service. There, the user is presented with a login form and authenticates using their credentials. On successful authentication, the authentication service issues a token (for example, an ID Token and/or Access Token) and redirects the user back to the original client application with this token. The client application verifies the token (usually via the authentication service's public keys or another endpoint of your auth application) and establishes a session for the user.
In this particular case, using the null coalescing operator may be a good option.
$host = $s['HTTP_X_FORWARDED_HOST'] ?? $s['HTTP_HOST'] ?? $s['SERVER_NAME'];
I was able to fix it by adding an extra path to ${MY_BIN_DIR} in the fixup_bundle command that includes the DLL directly. I'm not sure why it worked fine with msbuild and not with ninja, but that may just remain a mystery.
Sadly, these theoretically very useful static checks appear to only be implemented for Google's Fuchsia OS. So you're not "holding it wrong"; it just doesn't work, and what little documentation there is doesn't mention it.