Install the excelreader plugin and then apply it. I tried this and got the data in table format.
I tried all the top solutions, but they didn't work. Although the error message was the same, the issue might have been different.
My solution was to change the Gradle JDK in the build tools settings (Settings -> Build, Execution, Deployment -> Build Tools -> Gradle), as the previously selected Gradle JDK version was likely causing the error due to JDK permissions I hadn't granted. After switching the Gradle JDK to a different version, I rebuilt the project, and it compiled and ran successfully again.
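If you prefer to pin the JDK outside the IDE dialog, Gradle also reads the `org.gradle.java.home` property from `gradle.properties` (the path below is a placeholder, not from the original answer):

```properties
# Point Gradle at a specific JDK instead of the IDE-selected one.
# Replace the path with a real JDK installation on your machine.
org.gradle.java.home=/path/to/your/jdk-17
```

Unlike the Settings dialog, this also affects command-line builds.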
Just add this in the body:
<script>
esFeatureDetect = function () {
console.log('Feature detection function has been called!');
};
esFeatureDetect();
</script>
In my case, I was using a React component in the app via react-native-react-bridge. By adding the use-dom directive at the top of the React file, as explained in the official docs here https://docs.expo.dev/guides/dom-components/, I was able to resolve this issue.
Thank you all in the comments for your help. The issue actually stemmed from my misunderstanding of VS Code's play button, and I apologize for the confusion and trouble this may have caused.
The "Run Python File" option in this button is not part of the Code Runner extension—it’s a feature of the VS Code Python extension. This problem has already been reported on GitHub: https://github.com/microsoft/vscode-python/issues/18634
I've made an ActiveAdmin audit log implementation that doesn't use paper_trail but works at the controller level instead, creating one record per action. It also stores resource record changes: https://gist.github.com/Envek/c82dac248f97338a4c4c9e28529c94af
SELECT
    tx.Description,
    bestMatch.Payee
FROM tblTxns tx
CROSS APPLY (
    SELECT TOP 1 py.Payee
    FROM vwPayeeNames py
    WHERE CHARINDEX(py.Payee, tx.Description) > 0
    ORDER BY LEN(py.Payee) DESC
) AS bestMatch
WHERE tx.Description LIKE 'Tesco%'
Let me clarify the confusion first.
man 2 brk documents the C library wrapper, not the raw syscall interface.
The raw syscall interface (via syscall(SYS_brk, ...)) differs subtly: it always returns the new program break on success, rather than 0 or -1. This makes it much closer in behavior to sbrk().
So, if you do:
uintptr_t brk = syscall(SYS_brk, 0);
you get the current program break, exactly like sbrk(0).
What SYS_brk actually returns
Judging from the Linux kernel source (and the musl and glibc wrappers), the raw syscall behaves as described in this comment I wrote:
// Sets the program break to `addr`.
// If `addr` == 0, it just returns the current break.
// On success: returns the new program break (same as `addr` if successful)
// On failure: returns the old program break (unchanged), which is != requested
The syscall-specific behavior
You will not find this clarified in man 2 brk, but the low-level syscall behavior is described in these places:
Linux kernel source code:
You can check the syscall implementation in mm/mmap.c (the exact file can vary by kernel version).
In recent kernels it is defined as:
SYSCALL_DEFINE1(brk, unsigned long, brk)
which returns the new program break address, or the previous one if the request failed.
man syscall
+ unistd.h
+ asm/unistd_64.h
The actual syscall interface is:
long syscall(long number, ...);
And for SYS_brk, the syscall number is found via:
#include <sys/syscall.h>
#define SYS_brk ...
Libc implementation (musl or glibc)
Earlier, you noticed:
uintptr_t brk = __brk(0);
In musl, __brk() is typically a thin wrapper around:
syscall(SYS_brk, arg);
That means __brk(0) gets the current break safely, and __brk(addr) sets it.
Reminder: musl does not follow the man 2 brk behavior; instead it uses the raw syscall return value.
Here's a minimal example in C that directly uses the raw syscall(SYS_brk, ...) to get the current program break, attempt to increase it by 1 MB, and then reset it back to the original value.
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <stdint.h>
int main() {
// Get current break (same as sbrk(0))
uintptr_t curr_brk = (uintptr_t) syscall(SYS_brk, 0);
printf("Current program break: %p\n", (void *)curr_brk);
// Try to increase the break by 1 MB
uintptr_t new_brk = curr_brk + 1024 * 1024;
uintptr_t result = (uintptr_t) syscall(SYS_brk, new_brk);
if (result == new_brk) {
printf("Successfully increased break to: %p\n", (void *)result);
} else {
printf("Failed to increase break, still at: %p\n", (void *)result);
}
// Restore the original break
syscall(SYS_brk, curr_brk);
printf("Restored program break to: %p\n", (void *)curr_brk);
return 0;
}
You can read more in the documentation:
https://man7.org/linux/man-pages/man2/syscall.2.html
https://elixir.bootlin.com/linux/v6.16/source/mm/mmap.c
I have tried everything too, but it seems findDelete() doesn't behave properly, while using findWithDelete({deleted: true}) works just fine.
If you're looking to not use the jfrog CLI here:
- name: Fetch Auth token
id: generate-artifactory-auth-token
# Fetch the _authToken from Artifactory by doing a legacy login
run: |
AUTH_TOKEN=$(curl -s -u "${ARTIFACTORY_USER}:${ARTIFACTORY_PASSWORD}" \
-X PUT "${ARTIFACTORY_REGISTRY}/-/user/org.couchdb.user:${ARTIFACTORY_USER}" \
-H "Content-Type: application/json" \
-d "{\"name\": \"${ARTIFACTORY_USER}\", \"password\": \"${ARTIFACTORY_PASSWORD}\", \"email\": \"${ARTIFACTORY_EMAIL}\"}" \
| jq -r '.token')
echo "AUTH_TOKEN=${AUTH_TOKEN}" >> $GITHUB_OUTPUT
echo "✅ Auth token generated successfully"
- name: Create .npmrc in CI
run: |
cat > .npmrc <<EOF
... register your registry scopes
//your-registry-here/:_authToken=${{ steps.generate-artifactory-auth-token.outputs.AUTH_TOKEN }}
EOF
See this post
cc: How to set npm credentials using `npm login` without reading from stdin?
This is now solved. I did more tests while trying to create a publicly accessible dataset, and in the meantime I found the solution.
In the data blend, I was importing some extra dimensions in both the GA4 and Google Search Console sources (e.g. Date or Query). This generated the discrepancy in the metrics I was seeing.
By keeping only the primary key (Landing Page) as an imported dimension, plus the metrics I needed, the numbers match.
Using jotai is quite easy: https://codepen.io/geordanisb/pen/EaVmBXV
import React from "https://esm.sh/react";
import ReactDOM,{createRoot} from "https://esm.sh/react-dom/client";
import * as jotai from "https://esm.sh/jotai";
const list = [1,2,3];
const state = jotai.atom(list);
const el = document.querySelector('#app');
const root = createRoot(el);
const useJotaiState = ()=>{
const[data,setdata]=jotai.useAtom(state);
const add = (n)=>{
setdata(p=>[...p,n])
}
return {data,add};
}
const List = ()=>{
const{data}=useJotaiState();
return <ul>
{
data.map(d=><li key={d}>{d}</li>)
}
</ul>
}
const Add = ()=>{
const{add}=useJotaiState();
const addCb = ()=>{
add(Math.random());
}
return <button onClick={addCb}>add</button>
}
const App = ()=>{
return <>
<Add/>
<List/>
</>
}
root.render(<App/>)
Set the full site URL, add specific redirect paths to "Additional Redirect URLs", and make sure your frontend has a matching route. Thank me later.
In my case, the error occurs when I run `yarn start` and then press i to run iOS, but when I open another terminal and run `yarn ios`, the error disappears.
🔑 1. Device Token Registration Make sure the real device is successfully registering with Pusher Beams. This involves:
Calling start with instanceId.
Registering the user (for Authenticated Users).
Calling addDeviceInterest() or setDeviceInterests().
📲 2. Firebase Cloud Messaging (FCM) Setup Pusher Beams uses FCM under the hood on Android. Make sure:
You have the correct google-services.json in android/app/.
FCM is set up correctly in Firebase Console.
Firebase project has Cloud Messaging enabled.
FCM key is linked to your Pusher Beams instance (in Pusher Dashboard).
✅ Go to Pusher Beams Dashboard → Instance Settings → Android → Check that your FCM API Key is configured.
What ended up working for me was, instead of using a RenderTexture, I just used a world-space canvas. This works fine for me since I'm using a flat screen for my UI, but I can see where any curve would need some sort of fix of this script.
Replace {agpVersion} and {kotlinVersion} with the actual version numbers, for example:
plugins {
id "dev.flutter.flutter-plugin-loader" version "1.0.0"
id "com.android.application" version "7.2.0" apply false
id "org.jetbrains.kotlin.android" version "1.7.10" apply false
}
Interesting to see that a solution has been found. However, I fear that another problem arises: how to cache all downloaded remote pages to speed up their rendering on the next visit. Were you able to find a way to configure the cache of the Capacitor webview?
You should give Virtual TreeView a try. Compared to Windows’ SysListView32/64 (wrapped as TListView), it makes custom drawing and various controls much easier to implement. It also avoids the flickering that often occurs with SysListView during scrolling, and adding large numbers of items is extremely fast.
Is this the correct approach to accept dynamic fields in Gin?
It is a way of handling JSON objects with unknown names, but not necessarily the correct way. For example, if you know that the object's values all map to Go type T, then you should use var data map[string]T or var data map[string]*T.
Are there any limitations or best practices I should be aware of when binding to a map[string]interface{}?
The limitation is that you must access the map values using type assertions or reflection. This can be tedious.
How can I validate fields or types if I don’t know the keys in advance?
If you know that the object's values correspond to some Go type T, then see part one of this answer.
If you don't know the object's names or the type of the object's values, then you have no information to validate.
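To make the difference concrete, here is a minimal sketch. It uses encoding/json directly rather than Gin's binding (both decode JSON the same way), and the Item type and payload are invented for the example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Item is a hypothetical value type: if every value in the JSON object
// shares this shape, decode into map[string]Item instead of
// map[string]interface{}.
type Item struct {
	Price int `json:"price"`
}

// decodeTyped decodes into a map with a known value type.
func decodeTyped(payload []byte) (map[string]Item, error) {
	var m map[string]Item
	err := json.Unmarshal(payload, &m)
	return m, err
}

// decodeUntyped decodes into a map of unknown value types; callers
// must use type assertions on every access.
func decodeUntyped(payload []byte) (map[string]interface{}, error) {
	var m map[string]interface{}
	err := json.Unmarshal(payload, &m)
	return m, err
}

func main() {
	payload := []byte(`{"apple": {"price": 3}, "pear": {"price": 5}}`)

	typed, _ := decodeTyped(payload)
	fmt.Println(typed["apple"].Price) // direct field access, no assertion

	untyped, _ := decodeUntyped(payload)
	apple := untyped["apple"].(map[string]interface{}) // assertion needed
	fmt.Println(apple["price"].(float64))              // JSON numbers decode as float64
}
```

The typed version pushes all the shape checking into Unmarshal; the untyped version defers it to assertion sites scattered through your code.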
Were you able to fix this? Can you help me here? I'm stuck with these colors when I switch to the dark theme.
SOLUTION
The biggest hurdle here was SQL Server's encoding of nvarchar as UTF-16LE. The following SQL statements retrieve the record:
Original in SQL Server
SELECT * FROM mytable
WHERE (IDField = 'myID') AND (PasswordField = HASHBYTES('SHA2_512', 'myPass' + CAST(SaltField AS nvarchar(36))))
Equivalent in MySQL
SELECT * FROM mydatabase.mytable
WHERE (IDField = 'myID') AND HEX(PasswordField) = SHA2(CONCAT('myPass', CAST(SaltField AS Char(36) CHARACTER SET utf16le)),512)
Thank you to those who helped me get this over the line. I really appreciate your time and expertise.
This was easier than I thought 🤦♂️
I needed a route to hit with the Filepond load method that I could pass the signed_id to.
Add to routes.rb
get 'attachments/uploaded/:signed_id', to: 'attachments#uploaded_by_signed_id', as: :attachment_uploaded_by_signed_id
In your attachments controller (or wherever you want)
class AttachmentsController < ApplicationController
def uploaded_by_signed_id
blob = ActiveStorage::Blob.find_signed(params[:signed_id])
send_data blob.download, filename: blob.filename.to_s, content_type: blob.content_type
end
end
Then change the load method to hit this URL with the signed_id from source.
load: (source, load, error, progress, abort, headers) => {
const myRequest = new Request(`/attachments/uploaded/${source}`);
fetch(myRequest).then((res) => {
return res.blob();
}).then(load);
}
I had a different solution. I tried removing node_modules and .expo, and nothing worked. But I had a modules directory in my project that contained a subproject with a separate package.json, and somehow it was affecting Expo even though it wasn't referenced in package.json or app.config.js.
I know this is some kind of edge case, but I hope it helps somebody - I wasted 3 hours fixing that :)
This is not an answer, but has been removed from the question, and I consider this information important enough to include.
If you have parameter sensitivity (the parameter sniffing problem), which is what I had, then starting from SQL Server 2016 it is possible to disable parameter sniffing via ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL).
The command is
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
Be aware that this setting disables parameter sniffing for ALL queries in the database, not a particular set. This would have solved my problem, were it not for the fact that it affects other unrelated queries.
from livekit.agent import ( # pyright: ignore[reportMissingImports]
ModuleNotFoundError: No module named 'livekit.agent'
This is my error and I don't know how to resolve it. Please, can anybody help?
The code is:
from livekit.agent import ( # pyright: ignore[reportMissingImports]
AutoSubscribe,
JobContext,
WorkerOptions,
cli,
llm,
)
from livekit.agent.multimodal import MultimodalAgent
Submitting for App Store review isn't necessary for the name change to be reflected in TestFlight. The problem lies in how the name update propagates through the system. Here's a breakdown of troubleshooting steps.
While you've already submitted the updated build, it is usually safer to create a new App Store entry instead of editing the existing one. This reduces inconsistencies and potential problems. Consider whether it's worth the time and effort to create a new TestFlight build with the new App Store Connect record. Although you have a TestFlight beta release approved already, this process eliminates potential future problems.
I didn't do a whole lot of version testing, but I always used to just use Python as my run configuration, even for Flask apps. It seems that lately (maybe since Python 3.11?) the app crashed with a very similar error when debugging a Flask app with that setting. I set the run/debug configuration template to Flask Server, and it worked.
If someone swings by: here is also a possible solution.
Apply this to the target element holding the editor, like this:
.editorholder {
height: 500px;
display: flex;
flex-flow: column;
}
<div class="editorholder">
<div id="editor">
<p>Hello World!</p>
</div>
</div>
1. Consult the Official Huawei Documentation: This is the most important step. Check the official Huawei Mobile Services documentation (developer website) for the most up-to-date guidance on authenticating with Huawei ID and integrating with AGConnectAuth. Look for updated code samples, best practices, and API references for the current authentication flow. They should explicitly state the replacement for the deprecated methods.
2. Identify the New Authentication Flow: The documentation should describe a new way to acquire the necessary authentication credentials (likely ID tokens). The steps will likely involve using the updated Huawei ID APIs to initiate the sign-in process. The response will likely include an ID token which can be used in the AGConnectAuthCredential directly or in a similar way.
3. Update Your Code: Based on the documentation, refactor your code to use the new API methods and data structures to initiate the authentication and receive the ID token. You'll use this ID token to create the AGConnectAuthCredential .
4. Test Thoroughly: After migrating your code, carefully test all integration points to ensure the authentication works correctly in various scenarios, including error handling.
There was a missing .python_package folder in my project, I guess because I created it without any triggers at the start. When I added it, my issue was fixed.
Save your file as a zip file, then unzip it after it has loaded.
Have you tried remove_from_group()?
Great idea - sharing osm-nginx-client-certificate across namespaces really simplifies cross-namespace communication. It helps avoid redundant configs and keeps access seamless across deployments!
Use sibling-index()
img {
transition-delay: calc(sibling-index() * 1s);
}
This error happened in my code due to using a ternary operator instead of an if statement. Rewriting the condition with if solved the error.
# Laila's code - the magic activation gate
print("🔮 Activating Laila's code...")
import time
import os
username = "Laila"
access_code = "66X9ZLOO98"
activation_layer = "the black phase"
print(f"📡 User: {username}")
print(f"🔓 Opening the gate with code: {access_code}")
print(f"⚙️ Loading configuration: {activation_layer}")
for i in range(5):
    print(f"✨ Activating the magic {'.' * i}")
    time.sleep(0.7)
print("✅ The magic gate has been activated.")
print("🌌 Entering the night system...")
# forced-entry line
os.system("echo '🌠 Forced entry successful. The virtual world is now open.'")
Did you ever solve this? I'm having the same issue.
Yes, but how to do this by default so new data sources have it already set to manual?
Per https://users.polytech.unice.fr/~buffa/cours/X11_Motif/motif-faq/part5/faq-doc-43.html
Setting XmNrecomputeSize to false should work.
Updated code:
initial setup:
lbl1TextHint = XmStringCreateLocalized("Waiting for click");
lbl1 = XtVaCreateManagedWidget("label1",
xmLabelWidgetClass, board,
XmNlabelString, lbl1TextHint,
XmNx, 240, // X position
XmNy, 20, // Y position
XmNwidth, 200, // Width
XmNheight, 40, // Height
XmNrecomputeSize, False, // Do not Recompute size
NULL);
update label:
XtVaSetValues(lbl1, XmNlabelString, newLabel,NULL);
Updating the label keeps the same dimensions as initial setup.
Thanks to @n.m.couldbeanAI for the link in the question comments
This is a bug in the API. It draws the table correctly, but when labeling each column, it uses the startRowIndex instead of the startColumnIndex to determine the column.
For example, if you pass this table range:
{
"startRowIndex": 8,
"endRowIndex": 10,
"startColumnIndex": 0,
"endColumnIndex": 2
}
Then the table is drawn with its column labels starting at I, i.e. column index 8, which is what was passed for startRowIndex.
A workaround in the meantime is to only add tables on the diagonal running from the top-left to the bottom-right of the sheet. In other words, always make startRowIndex and startColumnIndex the same.
For anyone landing here in 2025, where Keda is currently sitting at v2.17.0, I needed to add this to my serviceAccount.yaml after encountering similar problems:
eks.amazonaws.com/sts-regional-endpoints: "true"
So entire serviceAccount looks something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
name: <SA>
namespace: my-namespace
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_#>:role/<SA>
eks.amazonaws.com/sts-regional-endpoints: "true"
Add scheme: 'com.XYZ.XYZ'
in the app.config.ts.
A similar error occurred when inserting a large number of rows into a table using bulk insert.
The insertion took place during merge replication, and the error occurred exclusively on one table when applying a snapshot.
The problem turned out to be that the subscriber had SQL Server 2014 without a Service Pack. We installed Service Pack 3 and the data was inserted.
Here's the updated code...
-- One NULL and one NOT NULL
SELECT
nullrow.ID,
nullrow.Account,
notnullrow.Contact
FROM
MYtable nullrow
JOIN MYtable notnullrow
ON nullrow.ID = notnullrow.ID
WHERE
nullrow.Contact IS NULL
AND notnullrow.Contact IS NOT NULL
UNION ALL
-- Two NOT NULL: swap contacts
SELECT
t1.ID,
t1.Account,
t2.Contact
FROM
MYtable t1
JOIN MYtable t2
ON t1.ID = t2.ID
AND t1.Account <> t2.Account
WHERE
t1.Contact IS NOT NULL
AND t2.Contact IS NOT NULL
ORDER BY
ID,
Account;
To make that clearer:
Using screen, open 2 terminals.
In the first one, run "nc -lnvp <port number>", where the port number should be an available one.
In the 2nd one, run the binary with the same port: ./suconnect <port number>
Now return to the 1st one and type level20's password; the suconnect command in the other terminal will return the next level's password.
The FFM APIs mentioned by @LouisWasserman are not stable yet. But I did more research and found that the VarHandle API lets us perform atomic store/loads/ops with any memory order of our choice on any Java value: fields, array elements, bytebuffer elements and more.
Note: it's extremely hard to test the correctness of concurrent code, I'm not 100% sure that my answer is memory-safe.
For the sake of simplicity, I'll focus on a release-acquire scenario, but I don't see any reason why atomic_fetch_add wouldn't work. My idea is to share a ByteBuffer between C and Java, since they're made specifically for that. Then you can write all the data you want in the ByteBuffer, and in my specific case of Java-to-C transfer, you can do an atomic release-store to make sure that all data written prior to the atomic store will be visible to anyone acquire-loading the changed "ready" flag. For some reason, using a byte for the flag rather than an int throws an UnsupportedOperationException. The C code can treat the ByteBuffer's backing memory as whatever it wants (such as volatile fields in a struct) and load the fields using the usual atomic functions.
I'm assuming that a good JVM should easily be able to optimise hot ByteBuffer read/stores into simple instructions (not involving method calls), so this approach should definitely be faster than doing JNI calls on AtomicIntegers from the C side. As a final note, atomics are hard to do right, and you should definitely use them only if the performance gain is measurable.
I don't think StackOverflow supports collapsible sections, sorry for the visual noise.
This example uses a memory map to have shared memory between Java and C, but JNI should work just as well. If using JNI, you should use env->GetDirectBufferAddress to obtain the void* address of a direct ByteBuffer instance's internal buffer.
How to use: Run the Java program first. When it tells you to, run the C program. Go back to the Java console, enter some text and press enter. The C code will print it and exit.
import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Scanner;
public class Main {
private static final int MMAP_SIZE = 256;
private static final VarHandle BYTE_BUFFER_INT_HANDLE = MethodHandles.byteBufferViewVarHandle(int[].class, ByteOrder.BIG_ENDIAN);
public static void main(String[] args) throws IOException {
try (var mmapFile = FileChannel.open(Path.of("mmap"), StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.READ, StandardOpenOption.TRUNCATE_EXISTING)) {
assert mmapFile.write(ByteBuffer.wrap(new byte[]{0}), MMAP_SIZE - 1) == 1; // extend the file to MMAP_SIZE bytes
var bb = mmapFile.map(FileChannel.MapMode.READ_WRITE, 0, MMAP_SIZE);
// Fill the byte buffer with zeros
for (int i = 0; i < MMAP_SIZE; i++) {
bb.put((byte) 0);
}
bb.force();
System.out.println("You can start the C program now");
// Write the user-inputted string after the first int (which corresponds to the "ready" flag)
System.out.print("> ");
String input = new Scanner(System.in).nextLine();
bb.position(4);
bb.put(StandardCharsets.UTF_8.encode(input));
// When the text has been written to the buffer, release the text by setting the "ready" flag to 1
BYTE_BUFFER_INT_HANDLE.setRelease(bb, 0, 1);
}
}
}
#include <sys/mman.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdatomic.h>
#define MMAP_SIZE 256
#define PAYLOAD_MAX_SIZE (MMAP_SIZE - 4)
typedef struct {
volatile int32_t ready;
char payload[PAYLOAD_MAX_SIZE];
} shared_memory;
int main() {
int mapFile = open("mmap", O_RDONLY);
if (mapFile == -1) {
perror("Error opening mmap file, the Java program should be running right now");
return 1;
}
shared_memory* map = (shared_memory*) mmap(NULL, MMAP_SIZE, PROT_READ, MAP_SHARED, mapFile, 0);
if (map == MAP_FAILED) {
perror("mmap failed");
close(mapFile);
return 1;
}
int ready;
while (!(ready = atomic_load_explicit(&map->ready, memory_order_acquire))) {
sleep(1);
}
printf("Received: %.*s", PAYLOAD_MAX_SIZE, map->payload);
}
I have since found the issue: Whitenoise was missing in Middleware.
While I did have Whitenoise installed and static files configured, I had missed adding
'whitenoise.middleware.WhiteNoiseMiddleware',
to the Middleware list within settings.py
The issue was with using pg8000.native:
* I switched over to importing plain old pg8000
* Changed the SQL value placeholders from ?/$1 to %s
* Switched conn.run() to cursor.execute() after creating a cursor object:
cursor = conn.cursor()
cursor.execute(INSERT_SQL, params)
I never set out to use pg8000.native, but did it upon the suggestion of a chatbot after psycopg2 broke a different part of my pipeline design (I am not ready to start learning about containerisation today with this burnt-out brain!).
Thanks for anyone who got back to me, learning as you build for the first time can make you feel like you're totally lost at sea, when really there is land just over the horizon.
thank you for your contributions
When dealing with windows, WindowState.Maximized will override any manual positioning (.Left and .Top) and also any settings related to the window's dimensions (.Width and .Height). Maximized sets the left and top to the top-left of your monitor and sizes the window to fill the entire monitor, excluding the taskbar.
So, if you want to manually position a window, you must use WindowState.Normal.
In the case of many values, this is a good way:
const allSame = [a, b, c, d].every(x => !!x === !!e);
allSame is true when a, b, c, d, e are all falsy or all truthy.
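A generalized helper for any number of values (a small sketch of my own, not part of the original answer):

```javascript
// True when every value in the list has the same truthiness.
function allSameTruthiness(...values) {
  if (values.length === 0) return true; // vacuously true
  const first = Boolean(values[0]);
  return values.every(v => Boolean(v) === first);
}

console.log(allSameTruthiness(true, 1, "yes"));  // true  (all truthy)
console.log(allSameTruthiness(false, 0, ""));    // true  (all falsy)
console.log(allSameTruthiness(true, 0, "yes"));  // false (mixed)
```

Coercing both sides with Boolean() avoids the loose-equality surprises of `!!x == e` when e is a truthy non-boolean like "yes".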
is this what you are looking for?
#include <stdio.h>
#include <stdlib.h>
/* static bits default to zero, so we get seven flip-flops */
int main(void) {
static char a, b, c, d, e, f, g;
start:
puts("Hello World!");
/* increment binary counter in bits a…g */
a = !a;
if (!a) {
b = !b;
if (!b) {
c = !c;
if (!c) {
d = !d;
if (!d) {
e = !e;
if (!e) {
f = !f;
if (!f) {
g = !g;
}
}
}
}
}
}
/* when bits form 1100100₂ (one-hundred), exit */
if (g && f && !e && !d && c && !b && !a)
exit(EXIT_SUCCESS);
goto start;
}
I have no idea if this would help, but have you tried calling control.get_Picture()? I've had to explicitly use getter and setter methods instead of the properties for styles sometimes.
My first answer does not work; I used the first answer together with this script file:
help help
help attributes
help convert
help create
help delete
help filesystems
help format
help list
help select
help setid
it worked.
RelocationMap tools can be found here:
https://github.com/gimli-rs/gimli/blob/master/crates/examples/src/bin/simple.rs#L82
How do I right align div elements?
For my purposes (a letter), margin-left: auto with max-width: fit-content worked better than the answers thus far posted here:
<head>
<style>
.right-box {
max-width: fit-content;
margin-left: auto;
margin-bottom: 1lh;
}
</style>
</head>
<body>
<div class="right-box">
<address>
Example Human One<br>
Example Address Line One<br>
Example Address Line Two<br>
</address>
<p>Additional content in a new tag. This matters.</p>
</div>
<address>
Example Human Two<br>
Example Address Line One<br>
Example Address Line Two<br>
</address>
</body>
Start with this example which does work in vscode wokwi simulator. Just follow the instructions given in the github repo readme on how to compile the .c into .wasm and then run the simulator inside vscode.
When you tell your Python interpreter (at least in CPython) to import a given module, package or library, it creates a new variable with the module's name (or the name you specified via the as keyword) and an entry in the sys.modules dictionary with that name as the key. Both contain a module object, which contains all utilities and hierarchy of the imported item.
So, if you want to "de-import" a module, just delete the variable referencing it with del [module_name], where [module_name] is the item you want to "de-import", just as GeeTransit said earlier. Note that this will only make the program lose access to the module.
IMPORTANT: Imported modules are kept in a cache so Python doesn't have to recompile the entire module each time the importing script is rerun or reimports the module. If you want to invalidate the cache entry holding the copy of the compiled module, delete the module from the sys.modules dictionary with del sys.modules[module_name]. To recompile it, use import importlib and importlib.reload([module_name]) (see stackoverflow.com/questions/32234156/…)
Complete code:
import mymodule # Suppose you want to de-import this module
del mymodule # Now you can't access mymodule directly via mymodule.item1, mymodule.item2, ..., but it is still accessible via sys.modules.
import sys
del sys.modules["mymodule"] # Cache entry no longer accessible; now we can consider mymodule de-imported
Anyway, the __import__ built-in function does not create a variable referencing the module; it just returns the module object and adds the loaded item to sys.modules. It is preferred to use the importlib.import_module function, which does the same. And please be mindful about security, because you are running arbitrary code located in third-party modules. Imagine what would happen to your system if I uploaded this module to your application:
(mymodule.py)
import os
os.system("sudo rm -rf /")
or the module was named 'socket'); __import__('os').system('sudo rm -rf '); ('something.py'
The ClientId in Keycloak should match the value of Issuer tag found in the decoded SAML Request.
Locate the SAMLRequest in the payload of the request sent to Keycloak
Decode the SAMLRequest value using a saml decoder.
The decoded SAMLRequest should be as below. The ClientId in Keycloak should be [SP_BASE_URL]/saml2/service-provider-metadata/keycloak in this example.
<?xml version="1.0" encoding="UTF-8"?>
<saml2p:AuthnRequest xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol" AssertionConsumerServiceURL="[SP_BASE_URL]/login/saml2/sso/keycloak" Destination="[IDP_BASE_URL]/realms/spring-boot-keycloak/protocol/saml" ID="???????????" IssueInstant="????????????" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Version="2.0">
<saml2:Issuer xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">[SP_BASE_URL]/saml2/service-provider-metadata/keycloak</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#ARQdb29597-f24d-432d-bb7a-d9894e50ca4d">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>????</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>??????</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>??????????</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
</saml2p:AuthnRequest>
What most developers that are considering Firebase Dynamic Links are looking for right now is an alternative.
I would like to invite you to try chottulink.com.
It has a generous free tier, and, more importantly, the pricing doesn't increase exponentially as your MAU increases.
What do you mean by Django applications: apps in the sense of reusable apps of a Django project, or separate Django applications/services that run as their own instances? If I understood correctly, the latter.
If all your apps run on one server but need access to different databases, you can create a custom database router; see the Django docs on this topic: https://docs.djangoproject.com/en/5.2/topics/db/multi-db/ An AuthRouter is explicitly listed as an example.
Your auth app could then use one database, and the other apps could use another db, or each their own database.
If, however, your apps run as separate Django applications (e.g., on different servers), you have two options:
The first option would be that each of your Django applications shares the same reusable auth app and has a custom database router that ensures this app uses a different database than the project's other models. This authentication database is then shared for authentication data between the auth apps of all of your Django applications.
The second option would be to use SAML or, better, OpenID Connect to have single sign-on (SSO). When a user wants to authenticate against one of your applications, the authentication request is redirected to an endpoint of your authentication service. There, the user is presented with a login form and authenticates using their credentials. On successful authentication, the authentication service issues a token (for example, an ID token and/or access token) and redirects the user back to the original client application with this token. The client application verifies the token (usually via the authentication service's public keys or another endpoint of your auth application) and establishes a session for the user.
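To make the "client verifies the token" step concrete, here is a standard-library-only sketch of verifying an HS256-signed, JWT-style token. This is illustrative only: in a real OIDC setup you would use a maintained library (for example PyJWT or Authlib) with RS256 verification against the auth service's published public keys, and also check claims such as exp, iss, and aud.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token: str, secret: bytes) -> dict:
    """Return the payload of an HS256 token, or raise ValueError if tampered."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

Only after this verification succeeds should the client application create a session for the user named in the payload.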
In this particular case, using the null coalescing operator may be a good option.
$host = $s['HTTP_X_FORWARDED_HOST'] ?? $s['HTTP_HOST'] ?? $s['SERVER_NAME'];
I was able to fix it by adding an extra path to ${MY_BIN_DIR} in the fixup_bundle command that includes the DLL directly. I'm not sure why it worked fine with msbuild and not with ninja, but that may just remain a mystery.
Sadly, these theoretically very useful static checks appear to only be implemented for Google's Fuchsia OS. So you're not "holding it wrong"; it just doesn't work, and what little documentation there is doesn't mention it.
@Rajeev KR thanks for providing the clue.
table:not(:has(thead>tr))>tbody>tr:first-child,
table:has(thead>tr)>thead>tr:first-child
You can go to Windows Credentials and remove everything related to GitHub.
After restarting VS Code (or another program), it should ask you to authenticate to Copilot again.
For me, it helped to run:
db.getName()
This will display the name of the database you're currently working in.
I am using Swift packages; make sure you pass .sandbox:
func application(
_ application: UIApplication,
didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data
) {
Auth.auth().setAPNSToken(deviceToken, type: .sandbox) // Use `.prod` for release builds
}
Not a direct solution to your question, but you could also use the localrules
keyword to specify that a rule is local in your main smk file.
You wouldn't have to edit each rule; just add localrules: my_rule1, my_rule2
in the first lines, which might make it easier to add and remove the local behaviour.
@NlaakALD did you ever figure out what caused the 404s? I'm using NextJS + Convex and am having the exact same issue... While you think it's not Convex related, I do find it suspicious that we both have this problem while using the same setup :/
current_zone()->to_local(now) does time zone database lookups and DST calculations. That's why it takes more time than localtime_s. std::format is slower because of its heavy formatting logic.
You can edit the label:
chatbot = gr.Chatbot(label="My new title")
or outright remove it cleanly with cssjs50's solution.
import shutil
import piexif

def embed_metadata_no_image_change(image_path, title, description, save_path, keyword_str):
    try:
        shutil.copy2(image_path, save_path)
        try:
            exif_dict = piexif.load(save_path)
        except Exception:
            exif_dict = {"0th": {}, "Exif": {}, "GPS": {}, "1st": {}, "Interop": {}, "thumbnail": None}

        exif_dict["0th"][piexif.ImageIFD.ImageDescription] = b""
        exif_dict["0th"][piexif.ImageIFD.XPTitle] = b""
        exif_dict["0th"][piexif.ImageIFD.XPKeywords] = b""

        print(f"DEBUG keywords (to be embedded): '{keyword_str}'")

        exif_dict["0th"][piexif.ImageIFD.ImageDescription] = description.encode("utf-8")
        exif_dict["0th"][piexif.ImageIFD.XPTitle] = title.encode("utf-16le") + b'\x00\x00'
        exif_dict["0th"][piexif.ImageIFD.XPKeywords] = keyword_str.encode("utf-16le") + b'\x00\x00'

        exif_bytes = piexif.dump(exif_dict)
        piexif.insert(exif_bytes, save_path)
        return title, description, keyword_str
    except Exception as e:
        print(f"Error embedding metadata: {e}")
        return None, None, None
I used this code. I wanted to add cat, pet, animal,
but I ended up with cat; pet; animal.
Is there another way? The website doesn't accept the tags separated by semicolons.
It seems like the full wiki.js graphql schema that their API implements is available in their source code at https://github.com/requarks/wiki/tree/main/server/graph/schemas.
Could you please share the solution to the problem? We have the same issue as you.
Thanks in advance.
Installing Discourse with Bitnami is no longer supported and is now deprecated. See this Meta post for more info.
I had the same issue. Here's how I corrected it by adding { } as shown on SFML's website
sf::RenderWindow window(sf::VideoMode({200, 200}), "SFML works!");
I've had this pandas issue, and it was resolved by deleting all folders relating to pandas within the python Lib/site-packages folder, then reinstalling
for reinstalling I had to use pip install <pandas.whl file> --user --force-reinstall --no-deps
and I also needed Numpy version less than 2.0 (so 1.26.4 in my case)
Based on my understanding, the HandleHttpRequest.java servlet configuration currently uses the path "/" as the base. If we change this to "/api/", then all API endpoints will be handled under the /api/ path, meaning requests like /api/yourendpoint will be routed correctly by default.
final ServletContextHandler handler = new ServletContextHandler();
handler.addServlet(standardServlet, "/");
server.setHandler(handler);
this.server = server;
server.start();
I tried to implement what @premkumarravi proposed. The MidStep part worked very well, but the return section was causing me problems, since the VALUES line didn't accept the [fullKey] argument as valid.
Inspired by his proposal, I finally did this for the return statement:
SUMMARIZE(
    MidStep,
    [date],
    "liste mandats",
    CONCATENATEX(
        FILTER(MidStep, [date] = EARLIER([date])),
        [fullKey], " ** "
    )
)
I have double-checked my projects, and the parameters are included in the files you mentioned. I think the issue might be that you are searching for the global parameter in the adf_publish branch. When you are on your development branch and you Export ARM template from Source Control > ARM Template under Manage, do you find the global parameter in the exported files?
Use an absolute path instead of just a single directory name.
import pathlib
script_directory = pathlib.Path().absolute()
options.add_argument(f"user-data-dir={script_directory}\\userdata")
I hope this will fix the selenium.common.exceptions.SessionNotCreatedException exception in most cases.
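A slightly more portable variant of the same idea, building the argument with pathlib so the path separator is correct on both Windows and POSIX (the commented-out Options usage assumes Selenium's Chrome options API):

```python
import pathlib

def user_data_arg(profile_name: str = "userdata") -> str:
    # Resolve the profile directory relative to the current working directory,
    # always yielding an absolute path for Chrome's user-data-dir switch.
    profile_dir = pathlib.Path().absolute() / profile_name
    return f"user-data-dir={profile_dir}"

# Usage (assuming selenium is installed):
# from selenium.webdriver.chrome.options import Options
# options = Options()
# options.add_argument(user_data_arg())
```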
A variant with native JS:
const scrollHandler = useCallback(() => {
const content = document.getElementsByClassName('js-tabs-ingredient');
Array.from(content).forEach((el) => {
const rect = el.getBoundingClientRect();
const elemTop = rect.top;
const elemBottom = rect.bottom;
const isVisible =
elemTop < window.innerHeight / 2 && elemBottom > window.innerHeight / 2;
if (isVisible) {
const type = el.dataset.id;
setCurrentTab(type);
}
});
}, []);
<div className="js-tabs-ingredient" data-id={currentTab}>
<h3 className="text text_type_main-medium mb-6" ref={tabRefs[currentTab]}>
{title}
</h3>
</div>
You're on the right track with your local Python proxy, but accessing it from outside your residence without opening ports isn’t feasible with a traditional server approach.
Networks that offer this kind of functionality typically use a reverse connection model—instead of waiting for inbound connections, your proxy node initiates an outbound connection to a central relay server, maintaining a persistent tunnel. This allows external clients to route traffic through your proxy without requiring open ports on your router.
To implement something similar:
Use reverse proxy tunneling techniques such as reverse SSH tunnels or tunnel services that create outbound connections from your machine and expose them via a public URL or port.
Build or integrate with a custom relay system where each proxy client connects out to a central hub, which then forwards traffic back and forth.
In short, to avoid port forwarding, the key is to reverse the connection direction — have your proxy client connect out, not listen in.
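The direction reversal can be shown with a toy, localhost-only socket sketch (the ports, the single-exchange protocol, and the "handled:" reply are made up for illustration; a real relay adds framing, auth, reconnection, and multiplexing):

```python
import socket

def relay(control_port: int, public_port: int) -> None:
    """Runs on a host that CAN accept inbound connections (the relay server)."""
    ctrl_srv = socket.create_server(("127.0.0.1", control_port))
    ctrl_srv.settimeout(5)
    node, _ = ctrl_srv.accept()          # the proxy node dialed OUT to us
    pub_srv = socket.create_server(("127.0.0.1", public_port))
    pub_srv.settimeout(5)
    client, _ = pub_srv.accept()         # an external client connects here
    node.sendall(client.recv(1024))      # forward client -> node over the tunnel
    client.sendall(node.recv(1024))      # forward node -> client
    for s in (node, client, ctrl_srv, pub_srv):
        s.close()

def proxy_node(control_port: int) -> None:
    """The proxy node behind NAT: it only ever makes OUTBOUND connections."""
    tunnel = socket.create_connection(("127.0.0.1", control_port), timeout=5)
    request = tunnel.recv(1024)
    tunnel.sendall(b"handled:" + request)  # stand-in for real proxying
    tunnel.close()
```

Nothing listens on the node's network: the node dials the relay, and external clients only ever talk to the relay's public port.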
Also, if you're focused on reliability and residential IP quality, looking into the Best Residential Proxies can help improve performance and success rates for your use case.
I have the same requirement. Were you able to find a working solution? Specifically, I'm looking to enforce a Conditional Access (CA) policy only for SharePoint and OneDrive without impacting other services like Teams.
Let me know if you were able to achieve this successfully.
I hope this script may help you
#!/bin/bash
set -euo pipefail
usage() {
echo "Usage: $0 -u <clone_url> -d <target_dir> -b <branch>"
exit 1
}
while getopts "u:d:b:" opt; do
case $opt in
u) CLONE_URL=$OPTARG ;;
d) TARGET_DIR=$OPTARG ;;
b) BRANCH=$OPTARG ;;
*) usage ;;
esac
done
if [[ -z "${CLONE_URL-}" || -z "${TARGET_DIR-}" || -z "${BRANCH-}" ]]; then
usage
fi
git clone --filter=blob:none --no-checkout "$CLONE_URL" "$TARGET_DIR"
cd "$TARGET_DIR"
git config core.sparseCheckout true
{
echo "/*"
echo "!/*/"
} > .git/info/sparse-checkout
git checkout "$BRANCH"
git ls-tree -r -d --name-only HEAD | xargs -I{} mkdir -p "{}"
exit 0
This script performs a sparse checkout with the following behavior:
1. Clone the repository without downloading file contents (--filter=blob:none) and without checking out files initially (--no-checkout).
2. Enable sparse checkout mode in the cloned repository.
3. Set sparse checkout rules that include all root-level files (/*) and exclude every directory (!/*/).
4. Checkout the specified branch, applying the sparse checkout rules. Only root-level files appear in the working directory.
5. Create empty directories locally: git ls-tree -r -d --name-only HEAD lists all directories in the repo, and mkdir -p recreates those folders. This rebuilds the directory structure without file contents, because Git does not track empty directories.
6. Exit after completing these steps.
https://ohgoshgit.github.io/posts/2025-08-04-git-sparse-checkout/
To fully reset Choices.js when reopening a Bootstrap modal, you should destroy and reinitialize the Choices instance each time the modal is shown. This ensures no cached state or UI artifacts persist:
$('#customTourModal').on('show.bs.modal', function () {
const selectors = [
'select[name="GSMCountryCode"]',
'select[name="NumberOfAdult"]',
'select[name="HowDidYouFindUsID"]'
];
selectors.forEach(sel => {
const el = this.querySelector(sel);
if (el) {
if (el._choicesInstance) {
el._choicesInstance.destroy();
}
el._choicesInstance = new Choices(el, {
placeholder: true,
removeItemButton: true,
shouldSort: false
});
}
});
});
This approach ensures the Choices.js UI is reset cleanly every time the modal is reopened.
If you really don't want to rely on third-party APIs, you can get an IP address via DNS. It's not technically a web request, so I guess that counts.
It works by bypassing your local DNS resolver and querying external DNS servers directly: this way you can get your public IP address in a record.
Here's a demo. It's a bit verbose, but you'll get the idea.
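A sketch of the idea, with two assumptions to flag: OpenDNS's resolver1.opendns.com (208.67.222.222) answers queries for myip.opendns.com with the querier's public address, and the last-4-bytes parse is a shortcut that relies on the reply containing exactly one A record (a proper parser would walk the answer section per RFC 1035).

```python
import socket
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    # DNS header: id, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def public_ip() -> str:
    query = build_query("myip.opendns.com")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(query, ("208.67.222.222", 53))   # resolver1.opendns.com
        reply = s.recvfrom(512)[0]
    # With exactly one A answer, the 4-byte RDATA (your IP) ends the packet.
    return socket.inet_ntoa(reply[-4:])

if __name__ == "__main__":
    print(public_ip())   # requires network access
```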
Maybe this could help you to see valid times by adjusting the skew.
I found that downgrading Python from 3.13 to 3.12.7 worked for me. It must be a bug with the newer release of Python. Hope this helps!
If you’re also looking for a tool that can convert your images or documents to TIFF format, then you should use the BitRecover TIFF Converter tool. This tool comes with many advanced features, such as bulk mode, which allows you to convert not just a single file but multiple files in bulk at once. There is no data loss during the conversion process. This tool saves both your time and effort, and it makes the entire process much faster.
We opened a support request with AWS, and it seems that if you make changes to the ECR repository policy or the IAM policy, you must redeploy the Lambda.
In our case, it seems that CloudFormation performed a DeleteRepositoryPolicy action, which caused the loss of permission.
Even if you restore the permission, it seems to have no effect.
Hope this helps, thanks
I have some excellent news for you: my timep bash profiler does exactly what you want. It will give you per-command runtime (both wall-clock time and CPU time, i.e. combined user+sys time) and, so long as you pass it the -F flag, will generate a bash-native flamegraph for you that shows actual bash commands, code structure, and colors based on runtime.
timep is extremely simple to use: download and source the timep.bash script from the GitHub repo (which loads the timep function and sets everything up for using it), and then run
timep -F codeToProfile
And that's it: timep handles everything, with no need to change anything in the code you want to profile.
As an example, using timep to profile this test script from the timep repo (by running timep -F timep.tests.bash) gives the following profile:
LINE.DEPTH.CMD NUMBER COMBINED WALL-CLOCK TIME COMBINED CPU TIME COMMAND
<line>.<depth>.<cmd>: ( time | cur depth % | total % ) ( time | cur depth % | total % ) (count) <command>
_________________________ ________________________________ ________________________________ ____________________________________
12.0.0: ( 0.006911s | 0.51% ) ( 0.012424s | 5.37% ) (1x) : | cat 0<&0 | cat | tee
14.0.0: ( 0.008768s | 0.65% ) ( 0.014588s | 6.31% ) (1x) printf '%s\n' {1..10} | << (SUBSHELL): 148593 >> | tee | cat
16.0.0: ( 0.000993s | 0.07% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 16.1.0: ( 0.000076s |100.00% | 0.00% ) ( 0.000090s |100.00% | 0.03% ) (1x) |-- echo
17.0.0: ( 0.002842s | 0.21% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 17.1.0: ( 0.000253s | 8.17% | 0.01% ) ( 0.000296s |100.00% | 0.12% ) (1x) |-- echo B
|-- 17.1.1: ( 0.002842s | 91.82% | 0.21% ) ( 0.000001s | 0.33% | 0.00% ) (1x) |-- << (BACKGROUND FORK) >>
19.0.0: ( 0.000069s | 0.00% ) ( 0.000083s | 0.03% ) (1x) echo 0
20.0.0: ( 0.000677s | 0.05% ) ( 0.000521s | 0.22% ) (1x) echo 1
21.0.0: ( 0.000076s | 0.00% ) ( 0.000091s | 0.03% ) (1x) << (SUBSHELL) >>
|-- 21.1.0: ( 0.000076s |100.00% | 0.00% ) ( 0.000091s |100.00% | 0.03% ) (1x) |-- echo 2
22.0.0: ( 0.000407s | 0.03% ) ( 0.000432s | 0.18% ) (1x) echo 3 (&)
23.0.0: ( 0.000745s | 0.05% ) ( 0.000452s | 0.19% ) (1x) echo 4 (&)
24.0.0: ( 0.001000s | 0.07% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 24.1.0: ( 0.000090s |100.00% | 0.00% ) ( 0.000110s |100.00% | 0.04% ) (1x) |-- echo 5
25.0.0: ( 0.000502s | 0.03% ) ( 0.000535s | 0.23% ) (1x) << (SUBSHELL) >>
|-- 25.1.0: ( 0.000502s |100.00% | 0.03% ) ( 0.000535s |100.00% | 0.23% ) (1x) |-- echo 6 (&)
26.0.0: ( 0.001885s | 0.14% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 26.1.0: ( 0.000075s |100.00% | 0.00% ) ( 0.000090s |100.00% | 0.03% ) (1x) |-- echo 7
27.0.0: ( 0.000077s | 0.00% ) ( 0.000091s | 0.03% ) (1x) << (SUBSHELL) >>
|-- 27.1.0: ( 0.000077s |100.00% | 0.00% ) ( 0.000091s |100.00% | 0.03% ) (1x) |-- echo 8
28.0.0: ( 0.002913s | 0.21% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 28.1.0: ( 0.000967s |100.00% | 0.07% ) ( 0.001353s |100.00% | 0.58% ) (1x) |-- echo 9 (&)
29.0.0: ( 0.003014s | 0.22% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 29.1.0: ( 0.000083s | 12.44% | 0.00% ) ( 0.000105s | 14.34% | 0.04% ) (1x) |-- echo 9.1
|-- 29.1.1: ( 0.000584s | 87.55% | 0.04% ) ( 0.000627s | 85.65% | 0.27% ) (1x) |-- echo 9.2 (&)
30.0.0: ( 0.002642s | 0.19% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 30.1.0: ( 0.000471s | 76.21% | 0.03% ) ( 0.000501s | 75.79% | 0.21% ) (1x) |-- echo 9.1a (&)
|-- 30.1.1: ( 0.000147s | 23.78% | 0.01% ) ( 0.000160s | 24.20% | 0.06% ) (1x) |-- echo 9.2a
31.0.0: ( 0.002324s | 0.17% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 31.1.0: ( 0.000071s | 12.63% | 0.00% ) ( 0.000086s | 14.09% | 0.03% ) (1x) |-- echo 9.1b
|-- 31.1.1: ( 0.000491s | 87.36% | 0.03% ) ( 0.000524s | 85.90% | 0.22% ) (1x) |-- echo 9.2b (&)
32.0.0: ( 0.002474s | 0.18% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 32.1.0: ( 0.000474s | 85.71% | 0.03% ) ( 0.000498s | 84.40% | 0.21% ) (1x) |-- echo 9.1c (&)
|-- 32.1.1: ( 0.000079s | 14.28% | 0.00% ) ( 0.000092s | 15.59% | 0.03% ) (1x) |-- echo 9.2c
33.0.0: ( 0.000575s | 0.04% ) ( 0.000610s | 0.26% ) (1x) << (SUBSHELL) >>
|-- 33.1.0: ( 0.000492s | 85.56% | 0.03% ) ( 0.000516s | 84.59% | 0.22% ) (1x) |-- echo 9.3 (&)
|-- 33.1.1: ( 0.000083s | 14.43% | 0.00% ) ( 0.000094s | 15.40% | 0.04% ) (1x) |-- echo 9.4
33.0.0: ( 0.008883s | 0.66% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 33.1.0: ( 0.004729s | 98.41% | 0.35% ) ( 0.005165s | 98.28% | 2.23% ) (1x) |-- echo 9.999
|-- 33.1.1: ( 0.000076s | 1.58% | 0.00% ) ( 0.000090s | 1.71% | 0.03% ) (1x) |-- echo 9.5
34.0.0: ( 0.004234s | 0.31% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 34.1.0: ( 0.001349s |100.00% | 0.10% ) ( 0.001443s |100.00% | 0.62% ) (1x) |-- echo 10 (&)
36.0.0: ( 0.000069s | 0.00% ) ( 0.000083s | 0.03% ) (1x) echo 11
37.0.0: ( 0.000752s | 0.05% ) ( 0.000438s | 0.18% ) (1x) echo 12 (&)
38.0.0: ( 0.000975s | 0.07% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 38.1.0: ( 0.000076s |100.00% | 0.00% ) ( 0.000092s |100.00% | 0.03% ) (1x) |-- echo 13
39.0.0: ( 0.000290s | 0.02% ) ( 0.000339s | 0.14% ) (1x) << (SUBSHELL) >>
|-- 39.1.0: ( 0.000290s |100.00% | 0.02% ) ( 0.000339s |100.00% | 0.14% ) (1x) |-- echo 14
41.0.0: ( 0.000132s | 0.00% ) ( 0.000160s | 0.06% ) (1x) << (FUNCTION): main.ff 15 >>
|-- 1.1.0: ( 0.000058s | 43.93% | 0.00% ) ( 0.000072s | 45.00% | 0.03% ) (1x) |-- ff 15
|-- 8.1.0: ( 0.000074s | 56.06% | 0.00% ) ( 0.000088s | 55.00% | 0.03% ) (1x) |-- echo "${*}"
42.0.0: ( 0.000263s | 0.01% ) ( 0.000314s | 0.13% ) (1x) << (FUNCTION): main.gg 16 >>
|-- 1.1.0: ( 0.000059s | 22.43% | 0.00% ) ( 0.000071s | 22.61% | 0.03% ) (1x) |-- gg 16
| 8.1.0: ( 0.000069s | 26.23% | 0.00% ) ( 0.000082s | 26.11% | 0.03% ) (1x) | echo "$*"
| 8.1.1: ( 0.000135s | 51.33% | 0.01% ) ( 0.000161s | 51.27% | 0.06% ) (1x) | << (FUNCTION): main.gg.ff "$@" >>
| |-- 1.2.0: ( 0.000058s | 42.96% | 0.00% ) ( 0.000071s | 44.09% | 0.03% ) (1x) | |-- ff "$@"
|-- |-- 8.2.0: ( 0.000077s | 57.03% | 0.00% ) ( 0.000090s | 55.90% | 0.03% ) (1x) |-- |-- echo "${*}"
44.0.0: ( 0.001767s | 0.13% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 44.1.0: ( 0.000533s |100.00% | 0.03% ) ( 0.000556s |100.00% | 0.24% ) (1x) |-- echo a (&)
45.0.0: ( 0.001520s | 0.11% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 45.1.0: ( 0.001520s |100.00% | 0.11% ) ( 0.000001s |100.00% | 0.00% ) (1x) |-- << (BACKGROUND FORK) >>
|-- |-- 45.2.0: ( 0.000127s |100.00% | 0.00% ) ( 0.000149s |100.00% | 0.06% ) (1x) |-- |-- echo b
47.0.0: ( 0.001245s | 0.09% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 47.1.0: ( 0.001245s |100.00% | 0.09% ) ( 0.000001s |100.00% | 0.00% ) (1x) |-- << (BACKGROUND FORK) >>
|-- |-- 47.2.0: ( 0.000095s |100.00% | 0.00% ) ( 0.000113s |100.00% | 0.04% ) (1x) |-- |-- echo A3
47.0.0: ( 0.001248s | 0.09% ) ( 0.001308s | 0.56% ) (1x) << (SUBSHELL) >>
|-- 47.1.0: ( 0.000557s | 44.63% | 0.04% ) ( 0.000584s | 44.64% | 0.25% ) (1x) |-- echo A2 (&)
| 47.1.1: ( 0.000596s | 47.75% | 0.04% ) ( 0.000618s | 47.24% | 0.26% ) (1x) | << (SUBSHELL) >>
| |-- 47.2.0: ( 0.000596s |100.00% | 0.04% ) ( 0.000618s |100.00% | 0.26% ) (1x) | |-- << (SUBSHELL) >>
| |-- |-- 47.3.0: ( 0.000596s |100.00% | 0.04% ) ( 0.000618s |100.00% | 0.26% ) (1x) | |-- |-- echo A5 (&)
|-- 47.1.2: ( 0.000095s | 7.61% | 0.00% ) ( 0.000106s | 8.10% | 0.04% ) (1x) |-- echo A1
47.0.1: ( 0.001398s | 0.10% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 47.1.0: ( 0.001398s |100.00% | 0.10% ) ( 0.000001s |100.00% | 0.00% ) (1x) |-- << (BACKGROUND FORK) >>
| |-- 47.2.0: ( 0.001398s |100.00% | 0.10% ) ( 0.000001s |100.00% | 0.00% ) (1x) | |-- << (BACKGROUND FORK) >>
|-- |-- |-- 47.3.0: ( 0.000112s |100.00% | 0.00% ) ( 0.000131s |100.00% | 0.05% ) (1x) |-- |-- |-- echo A4
50.0.0: ( 0.005058s | 0.37% ) ( 0.008785s | 3.80% ) (1x) cat <<EOF$'\n'foo$'\n'bar$'\n'baz$'\n'EOF | grep foo | sed 's/o/O/g' | wc -l
56.0.0: ( 0.000535s | 0.04% ) ( 0.000412s | 0.17% ) (1x) echo "today is $(date +%Y-%m-%d)"
56.0.1: ( 0.002812s | 0.21% ) ( 0.002812s | 1.21% ) (1x) << (SUBSHELL) >>
|-- 56.1.0: ( 0.002812s |100.00% | 0.21% ) ( 0.002812s |100.00% | 1.21% ) (1x) |-- date +%Y-%m-%d
57.0.0: ( 0.000762s | 0.05% ) ( 0.000643s | 0.27% ) (1x) x=$( ( echo nested; echo subshell ) | grep sub)
57.0.1: ( 0.000162s | 0.01% ) ( 0.000189s | 0.08% ) (1x) << (SUBSHELL) >>
|-- 57.1.1: ( 0.000162s |100.00% | 0.01% ) ( 0.000189s |100.00% | 0.08% ) (1x) |-- << (SUBSHELL) >>
| |-- 57.2.0: ( 0.000077s | 47.53% | 0.00% ) ( 0.000090s | 47.61% | 0.03% ) (1x) | |-- echo nested
|-- |-- 57.2.1: ( 0.000085s | 52.46% | 0.00% ) ( 0.000099s | 52.38% | 0.04% ) (1x) |-- |-- echo subshell
59.0.0: ( 0.000591s | 0.04% ) ( 0.000431s | 0.18% ) (1x) diff <(ls /) <(ls /tmp)
59.0.1: ( 0.006895s | 0.51% ) ( 0.006895s | 2.98% ) (2x) << (SUBSHELL) >>
|-- 59.1.0: ( 0.003547s |100.00% | 0.26% ) ( 0.003547s |100.00% | 1.53% ) (1x) |-- ls /
|-- 59.1.0: ( 0.003348s |100.00% | 0.25% ) ( 0.003348s |100.00% | 1.44% ) (1x) |-- ls /tmp
60.0.0: ( 0.000651s | 0.04% ) ( 0.000462s | 0.19% ) (1x) grep pattern <(sed 's/^/>>/' > /dev/null)
60.0.1: ( 0.002869s | 0.21% ) ( 0.002869s | 1.24% ) (1x) << (SUBSHELL) >>
|-- 60.1.0: ( 0.002869s |100.00% | 0.21% ) ( 0.002869s |100.00% | 1.24% ) (1x) |-- sed 's/^/>>/' > /dev/null
62.0.0: ( 0.043012s | 3.22% ) ( 0.000001s | 0.00% ) (1x) << (BACKGROUND FORK) >>
|-- 62.1.0: ( 0.000206s | 0.59% | 0.01% ) ( 0.000250s | 4.94% | 0.10% ) (3x) |-- for i in {1..3}
| 62.1.1: ( 0.000210s | 0.60% | 0.01% ) ( 0.000254s | 5.02% | 0.10% ) (3x) | echo "$i"
|-- 62.1.2: ( 0.034470s | 98.80% | 2.58% ) ( 0.004554s | 90.03% | 1.97% ) (3x) |-- sleep .01
63.0.0: ( 0.037336s | 2.79% ) ( 0.014949s | 6.46% ) (4x) read -r n <&${CO[0]}
63.0.1: ( 0.000235s | 0.01% ) ( 0.000277s | 0.11% ) (3x) printf "got %s\n" "$n"
65.0.0: ( 0.000094s | 0.00% ) ( 0.000112s | 0.04% ) (1x) let "x = 5 + 6"
66.0.0: ( 0.000101s | 0.00% ) ( 0.000117s | 0.05% ) (1x) arr=(one two three)
66.0.1: ( 0.000112s | 0.00% ) ( 0.000133s | 0.05% ) (1x) echo ${arr[@]}
67.0.0: ( 0.000092s | 0.00% ) ( 0.000111s | 0.04% ) (1x) ((i=0))
67.0.1: ( 0.000313s | 0.02% ) ( 0.000372s | 0.16% ) (4x) ((i<3))
67.0.2: ( 0.000237s | 0.01% ) ( 0.000284s | 0.12% ) (3x) echo "$i"
67.0.3: ( 0.000225s | 0.01% ) ( 0.000274s | 0.11% ) (3x) ((i++))
80.0.0: ( 0.000065s | 0.00% ) ( 0.000079s | 0.03% ) (1x) cmd="echo inside-eval"
81.0.0: ( 0.000069s | 0.00% ) ( 0.000085s | 0.03% ) (1x) eval "$cmd"
81.0.1: ( 0.000074s | 0.00% ) ( 0.000088s | 0.03% ) (1x) echo inside-eval
82.0.0: ( 0.000069s | 0.00% ) ( 0.000083s | 0.03% ) (1x) eval "eval \"$cmd\""
82.0.1: ( 0.000069s | 0.00% ) ( 0.000084s | 0.03% ) (1x) eval "echo inside-eval"
82.0.2: ( 0.000072s | 0.00% ) ( 0.000087s | 0.03% ) (1x) echo inside-eval
84.0.0: ( 0.019507s | 1.46% ) ( 0.019455s | 8.41% ) (1x) trap 'echo got USR1; sleep .01' USR1
85.0.0: ( 0.000080s | 0.00% ) ( 0.000095s | 0.04% ) (1x) kill -USR1 $BASHPID
-53.0.0: ( 0.016088s | 1.20% ) ( 0.006087s | 2.63% ) (1x) -'TRAP (USR1): echo got USR1\; sleep .01'
-48.0.0: ( 0.000075s | 0.00% ) ( 0.000089s | 0.03% ) (1x) -'TRAP (USR1): echo got USR1\; sleep .01'
86.0.0: ( 0.000074s | 0.00% ) ( 0.000089s | 0.03% ) (1x) echo after-signal
88.0.0: ( 0.001005s | 0.07% ) ( 0.000638s | 0.27% ) (1x) cat <(echo hi) <(echo bye) <(echo 1; echo 2; echo 3)
88.0.1: ( 0.000227s | 0.01% ) ( 0.000258s | 0.11% ) (1x) << (SUBSHELL) >>
|-- 88.1.0: ( 0.000227s |100.00% | 0.01% ) ( 0.000258s |100.00% | 0.11% ) (1x) |-- echo hi
88.0.2: ( 0.000118s | 0.00% ) ( 0.000139s | 0.06% ) (1x) << (SUBSHELL) >>
|-- 88.1.0: ( 0.000118s |100.00% | 0.00% ) ( 0.000139s |100.00% | 0.06% ) (1x) |-- echo bye
88.0.3: ( 0.000415s | 0.03% ) ( 0.000491s | 0.21% ) (1x) << (SUBSHELL) >>
|-- 88.1.0: ( 0.000274s | 66.02% | 0.02% ) ( 0.000322s | 65.58% | 0.13% ) (1x) |-- echo 1
| 88.1.1: ( 0.000071s | 17.10% | 0.00% ) ( 0.000085s | 17.31% | 0.03% ) (1x) | echo 2
|-- 88.1.2: ( 0.000070s | 16.86% | 0.00% ) ( 0.000084s | 17.10% | 0.03% ) (1x) |-- echo 3
90.0.0: ( 0.001466s | 0.10% ) ( 0.001541s | 0.66% ) (3x) for i in {1..3} (&)
90.0.1: ( 0.001271s | 0.09% ) ( 0.001361s | 0.58% ) (1x) << (SUBSHELL) >>
|-- 90.1.0: ( 0.001196s | 94.09% | 0.08% ) ( 0.001271s | 93.38% | 0.54% ) (1x) |-- seq 1 4
|-- 90.1.1: ( 0.000075s | 5.90% | 0.00% ) ( 0.000090s | 6.61% | 0.03% ) (1x) |-- :
90.0.1: ( 0.001415s | 0.10% ) ( 0.001505s | 0.65% ) (1x) << (SUBSHELL) >>
|-- 90.1.0: ( 0.001332s | 94.13% | 0.09% ) ( 0.001406s | 93.42% | 0.60% ) (1x) |-- seq 1 4
|-- 90.1.1: ( 0.000083s | 5.86% | 0.00% ) ( 0.000099s | 6.57% | 0.04% ) (1x) |-- :
90.0.1: ( 0.001578s | 0.11% ) ( 0.001653s | 0.71% ) (1x) << (SUBSHELL) >>
|-- 90.1.0: ( 0.001503s | 95.24% | 0.11% ) ( 0.001562s | 94.49% | 0.67% ) (1x) |-- seq 1 4
|-- 90.1.1: ( 0.000075s | 4.75% | 0.00% ) ( 0.000091s | 5.50% | 0.03% ) (1x) |-- :
91.0.0: ( 0.003792s | 0.28% ) ( 0.001403s | 0.60% ) (15x) read x
92.0.0: ( 0.004530s | 0.33% ) ( 0.003861s | 1.67% ) (12x) (( x % 2 == 0 ))
93.0.0: ( 0.000448s | 0.03% ) ( 0.000530s | 0.22% ) (6x) echo even "$x"
95.0.0: ( 0.000075s | 0.00% ) ( 0.000089s | 0.03% ) (1x) << (SUBSHELL) >>
|-- 95.1.0: ( 0.000075s |100.00% | 0.00% ) ( 0.000089s |100.00% | 0.03% ) (1x) |-- echo odd "$x"
95.0.0: ( 0.000076s | 0.00% ) ( 0.000089s | 0.03% ) (1x) << (SUBSHELL) >>
|-- 95.1.0: ( 0.000076s |100.00% | 0.00% ) ( 0.000089s |100.00% | 0.03% ) (1x) |-- echo odd "$x"
95.0.0: ( 0.000109s | 0.00% ) ( 0.000128s | 0.05% ) (1x) << (SUBSHELL) >>
|-- 95.1.0: ( 0.000109s |100.00% | 0.00% ) ( 0.000128s |100.00% | 0.05% ) (1x) |-- echo odd "$x"
95.0.0: ( 0.000162s | 0.01% ) ( 0.000188s | 0.08% ) (1x) << (SUBSHELL) >>
|-- 95.1.0: ( 0.000162s |100.00% | 0.01% ) ( 0.000188s |100.00% | 0.08% ) (1x) |-- echo odd "$x"
95.0.0: ( 0.000176s | 0.01% ) ( 0.000199s | 0.08% ) (1x) << (SUBSHELL) >>
|-- 95.1.0: ( 0.000176s |100.00% | 0.01% ) ( 0.000199s |100.00% | 0.08% ) (1x) |-- echo odd "$x"
100.0.0: ( 0.000438s | 0.03% ) ( 0.000460s | 0.19% ) (1x) sleep 1 (&)
101.0.0: ( 1.002439s | 75.04% ) ( 0.001653s | 0.71% ) (1x) wait -n $!
104.0.0: ( 0.018994s | 1.42% ) ( 0.018969s | 8.20% ) (1x) << (SUBSHELL) >>
|-- 104.1.0: ( 0.017245s | 90.79% | 1.29% ) ( 0.017204s | 90.69% | 7.44% ) (1x) |-- trap 'echo bye' EXIT
| 105.1.0: ( 0.000075s | 0.39% | 0.00% ) ( 0.000085s | 0.44% | 0.03% ) (1x) | exit
|-- -53.1.0: ( 0.001674s | 8.81% | 0.12% ) ( 0.001680s | 8.85% | 0.72% ) (1x) |-- -'TRAP (EXIT): echo bye'
109.0.0: ( 0.025747s | 1.92% ) ( 0.025759s | 11.14% ) (1x) << (SUBSHELL) >>
|-- 109.1.0: ( 0.020312s | 78.89% | 1.52% ) ( 0.020265s | 78.67% | 8.76% ) (1x) |-- trap 'echo bye' RETURN EXIT
| 110.1.0: ( 0.003594s | 13.95% | 0.26% ) ( 0.003662s | 14.21% | 1.58% ) (1x) | << (FUNCTION): main.gg 1 >>
| |-- 1.2.0: ( 0.000063s | 1.75% | 0.00% ) ( 0.000072s | 1.96% | 0.03% ) (1x) | |-- gg 1
| | 8.2.0: ( 0.000068s | 1.89% | 0.00% ) ( 0.000081s | 2.21% | 0.03% ) (1x) | | echo "$*"
| | 8.2.1: ( 0.001806s | 50.25% | 0.13% ) ( 0.001841s | 50.27% | 0.79% ) (1x) | | << (FUNCTION): main.gg.ff "$@" >>
| | |-- 1.3.0: ( 0.000059s | 3.26% | 0.00% ) ( 0.000074s | 4.01% | 0.03% ) (1x) | | |-- ff "$@"
| | |-- 8.3.0: ( 0.001747s | 96.73% | 0.13% ) ( 0.001767s | 95.98% | 0.76% ) (2x) | | |-- echo "${*}"
| |-- 8.2.2: ( 0.001657s | 46.10% | 0.12% ) ( 0.001668s | 45.54% | 0.72% ) (1x) | |-- echo "${*}"
| 111.1.0: ( 0.000076s | 0.29% | 0.00% ) ( 0.000086s | 0.33% | 0.03% ) (1x) | exit
|-- -53.1.0: ( 0.001765s | 6.85% | 0.13% ) ( 0.001746s | 6.77% | 0.75% ) (1x) |-- -'TRAP (EXIT): echo bye'
115.0.0: ( 0.038002s | 2.84% ) ( 0.038024s | 16.44% ) (1x) << (SUBSHELL) >>
|-- 115.1.0: ( 0.017389s | 45.75% | 1.30% ) ( 0.017356s | 45.64% | 7.50% ) (1x) |-- trap 'echo exit' EXIT
| 116.1.0: ( 0.015303s | 40.26% | 1.14% ) ( 0.015258s | 40.12% | 6.60% ) (1x) | trap 'echo return' RETURN
| 117.1.0: ( 0.003589s | 9.44% | 0.26% ) ( 0.003668s | 9.64% | 1.58% ) (1x) | << (FUNCTION): main.gg 1 >>
| |-- 1.2.0: ( 0.000057s | 1.58% | 0.00% ) ( 0.000071s | 1.93% | 0.03% ) (1x) | |-- gg 1
| | 8.2.0: ( 0.000081s | 2.25% | 0.00% ) ( 0.000093s | 2.53% | 0.04% ) (1x) | | echo "$*"
| | 8.2.1: ( 0.001805s | 50.29% | 0.13% ) ( 0.001852s | 50.49% | 0.80% ) (1x) | | << (FUNCTION): main.gg.ff "$@" >>
| | |-- 1.3.0: ( 0.000056s | 3.10% | 0.00% ) ( 0.000069s | 3.72% | 0.02% ) (1x) | | |-- ff "$@"
| | |-- 8.3.0: ( 0.001749s | 96.89% | 0.13% ) ( 0.001783s | 96.27% | 0.77% ) (2x) | | |-- echo "${*}"
| |-- 8.2.2: ( 0.001646s | 45.86% | 0.12% ) ( 0.001652s | 45.03% | 0.71% ) (1x) | |-- echo "${*}"
| 118.1.0: ( 0.000069s | 0.18% | 0.00% ) ( 0.000082s | 0.21% | 0.03% ) (1x) | exit
|-- -53.1.0: ( 0.001652s | 4.34% | 0.12% ) ( 0.001660s | 4.36% | 0.71% ) (1x) |-- -'TRAP (EXIT): echo exit'
123.0.0: ( 0.017856s | 1.33% ) ( 0.017835s | 7.71% ) (1x) << (SUBSHELL) >>
|-- 123.1.0: ( 0.017783s | 99.59% | 1.33% ) ( 0.017749s | 99.51% | 7.67% ) (1x) |-- trap '' RETURN EXIT
|-- 124.1.0: ( 0.000073s | 0.40% | 0.00% ) ( 0.000086s | 0.48% | 0.03% ) (1x) |-- exit
129.0.0: ( 0.014348s | 1.07% ) ( 0.014318s | 6.19% ) (1x) << (SUBSHELL) >>
|-- 129.1.0: ( 0.014272s | 99.47% | 1.06% ) ( 0.014233s | 99.40% | 6.15% ) (1x) |-- trap - EXIT
|-- 130.1.0: ( 0.000076s | 0.52% | 0.00% ) ( 0.000085s | 0.59% | 0.03% ) (1x) |-- exit
133.0.0: ( 0.000933s | 0.06% ) ( 0.001064s | 0.46% ) (1x) << (SUBSHELL) >>
|-- 133.1.0: ( 0.000213s | 22.82% | 0.01% ) ( 0.000242s | 22.74% | 0.10% ) (1x) |-- echo $BASHPID
| 133.1.1: ( 0.000720s | 77.17% | 0.05% ) ( 0.000822s | 77.25% | 0.35% ) (1x) | << (SUBSHELL) >>
| |-- 133.2.0: ( 0.000312s | 43.33% | 0.02% ) ( 0.000367s | 44.64% | 0.15% ) (1x) | |-- echo $BASHPID
| | 133.2.1: ( 0.000408s | 56.66% | 0.03% ) ( 0.000455s | 55.35% | 0.19% ) (1x) | | << (SUBSHELL) >>
| | |-- 133.3.0: ( 0.000103s | 25.24% | 0.00% ) ( 0.000102s | 22.41% | 0.04% ) (1x) | | |-- echo $BASHPID
| | | 133.3.1: ( 0.000305s | 74.75% | 0.02% ) ( 0.000353s | 77.58% | 0.15% ) (1x) | | | << (SUBSHELL) >>
|-- |-- |-- |-- 133.4.0: ( 0.000305s |100.00% | 0.02% ) ( 0.000353s |100.00% | 0.15% ) (1x) |-- |-- |-- |-- echo $BASHPID
TOTAL RUN TIME: 1.335700s
TOTAL CPU TIME: 0.231161s
and generates this flamegraph (which shows both the wall-clock-time flamegraph and the CPU-time flamegraph).
Note: Stack Overflow doesn't support SVG images, so I've converted it to a PNG image below. The SVG image I linked (on GitHub) has tooltips, zooms in when you click a box, supports search, and so on. The best way to ensure all the "extras" work is to download the SVG image and then open the local copy.
Visual Studio Code was cropping the results, leading to me thinking that something in the code wasn't working.
Welp.
Update your react-native-screens package to
3.33.0
This will solve the problem