You need to generate an SSH key and upload the public key to your GitHub account.
Example of the problem: my user has no keys in .ssh:
root@surgai:~# ls -lah .ssh/
drwx------ 2 root root 4,0K ноя 13 09:24 .
drwx------ 8 root root 4,0K ноя 13 09:24 ..
-rw-r--r-- 1 root root 142 ноя 13 09:24 known_hosts
Check login:
root@surgai:~# ssh -T git@github.com
git@github.com: Permission denied (publickey).
Check clone:
root@surgai:~# git clone git@github.com:kortina/dotfiles.git
Cloning into «dotfiles»...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
Fix: generate a key:
root@surgai:~# ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/root/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ed25519
Your public key has been saved in /root/.ssh/id_ed25519.pub
Now upload /root/.ssh/id_ed25519.pub to your GitHub account - https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
And check:
root@surgai:~# ssh -T git@github.com
Hi surgai! You've successfully authenticated, but GitHub does not provide shell access.
Cloning now succeeds:
root@surgai:~# git clone git@github.com:kortina/dotfiles.git
Cloning into «dotfiles»...
remote: Enumerating objects: 5072, done.
remote: Counting objects: 100% (1761/1761), done.
remote: Compressing objects: 100% (577/577), done.
remote: Total 5072 (delta 1133), reused 1687 (delta 1078), pack-reused 3311 (from 1)
I think an enum like this is used as the type of a model property, so defining it in the model's file is best.
// Post.swift
struct Post {
    let id: String
    let status: PublishStatus
}

enum PublishStatus {
    case draft
    case published
    case archived
}
If you don't have this file on your computer:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\15.0\Bin\MSBuild.exe
Try installing the correct version of MSBuild.exe using the link from this question: Are Visual Studio 2017 Build Tools still available for download?
If you get another error after that, like The imported project "X:\Microsoft.Cpp.Default.props" was not found., try the command:
npm config set msvs_version 2019
<input type="color" name="color" [(ngModel)]="selectedColor" value="#ff0000" />
export class AppComponent {
selectedColor: string = '#ff0000'; // default color red
}
I achieved this by setting flex-grow: 1 on the last element.
In my case the Installation Directory field in the Build Settings section was missing (empty).
I fixed it by filling in: Installation Directory: $(LOCAL_APPS_DIR)
If you are looking for an alternate way to list all the ids from a collection in Milvus (Milvus 2 - get list of ids in a collection):
Milvus has a query() method to fetch entities from a collection.
Assume there is a collection named aaa with a field named id, and assume all id values are greater than 0.
collection = Collection("aaa")
result = collection.query(expr="id >= 0")
print(result)
The result is a list, you will see all the ids are in this list.
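For instance, a minimal sketch to pull just the id values out of that result (assuming the primary-key field is named id, as above):
ids = [item["id"] for item in result]
print(len(ids), ids[:10])
The full test script below builds such a collection and verifies the count.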
import random
from pymilvus import (
connections,
utility,
FieldSchema, CollectionSchema, DataType,
Collection,
)
from random import choice
from string import ascii_uppercase
print("start connecting to Milvus")
connections.connect("default", host="localhost", port="19530")
collection_name = "aaa"
if utility.has_collection(collection_name):
    utility.drop_collection(collection_name)

fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=128),
    FieldSchema(name="name", dtype=DataType.VARCHAR, max_length=100)
]
schema = CollectionSchema(fields, "aaa")
print("Create collection", collection_name)
collection = Collection(collection_name, schema)

print("Start inserting entities")
num_entities = 10000
for k in range(50):
    print('No.', k)
    entities = [
        # [i for i in range(num_entities)],  # duplicate id, the query will get 10000 ids
        [i + num_entities*k for i in range(num_entities)],  # unique id, the query will get 500000 ids
        [[random.random() for _ in range(128)] for _ in range(num_entities)],
        [''.join(choice(ascii_uppercase) for i in range(100)) for _ in range(num_entities)],
    ]
    insert_result = collection.insert(entities)
print(f"Number of entities: {collection.num_entities}")
print("Start loading")
collection.load()
result = collection.query(expr="id >= 0")
print("query result count:", len(result))
But if you are looking for: Is there any way to retrieve these embeddings from a Milvus collection? (Retrieve data from Milvus collection)
In Milvus, search() does ANN search, while query() retrieves data.
Since Milvus is optimized for ANN search, it loads index data into memory, but the original embedding data stays on disk. So retrieving embeddings is a heavy operation and not fast.
The following script is a simple example of how to use query():
import random
from pymilvus import (
connections,
FieldSchema, CollectionSchema, DataType,
Collection,
utility,
)
_HOST = '127.0.0.1'
_PORT = '19530'
if __name__ == '__main__':
    connections.connect(host=_HOST, port=_PORT)

    collection_name = "demo"
    if utility.has_collection(collection_name):
        utility.drop_collection(collection_name)

    # create a collection with these fields: id, tag and vector
    dim = 8
    field1 = FieldSchema(name="id_field", dtype=DataType.INT64, is_primary=True)
    field2 = FieldSchema(name="tag_field", dtype=DataType.VARCHAR, max_length=64)
    field3 = FieldSchema(name="vector_field", dtype=DataType.FLOAT_VECTOR, dim=dim)
    schema = CollectionSchema(fields=[field1, field2, field3])
    collection = Collection(name="demo", schema=schema)
    print("collection created")

    # each vector field must have an index
    index_param = {
        "index_type": "HNSW",
        "params": {"M": 48, "efConstruction": 500},
        "metric_type": "L2"}
    collection.create_index("vector_field", index_param)

    # insert 1000 rows, each row has an id, a tag and a vector
    count = 1000
    data = [
        [i for i in range(count)],
        [f"tag_{i%100}" for i in range(count)],
        [[random.random() for _ in range(dim)] for _ in range(count)],
    ]
    collection.insert(data)
    print(f"insert {count} rows")

    # must load the collection before any search or query operations
    collection.load()

    # method to retrieve vectors from the collection by filter expression
    def retrieve(expr: str):
        print("===============================================")
        result = collection.query(expr=expr, output_fields=["id_field", "tag_field", "vector_field"])
        print("query result with expression:", expr)
        for hit in result:
            print(f"id: {hit['id_field']}, tag: {hit['tag_field']}, vector: {hit['vector_field']}")

    # get items whose id = 10 or 50
    retrieve("id_field in [10, 50]")

    # get items whose id <= 3
    retrieve("id_field <= 3")

    # get items whose tag = "tag_25"
    retrieve("tag_field in [\"tag_25\"]")

    # drop the collection
    collection.drop()
Output of the script:
collection created
insert 1000 rows
===============================================
query result with expression: id_field in [10, 50]
id: 10, tag: tag_10, vector: [0.053770524, 0.83849007, 0.04007046, 0.16028273, 0.2640955, 0.5588169, 0.93378043, 0.031373363]
id: 50, tag: tag_50, vector: [0.082208894, 0.09554817, 0.8288978, 0.984166, 0.0028912988, 0.18656737, 0.26864904, 0.20859942]
===============================================
query result with expression: id_field <= 3
id: 0, tag: tag_0, vector: [0.60005647, 0.5609647, 0.36438486, 0.10851263, 0.65043026, 0.82504696, 0.8862855, 0.79214275]
id: 1, tag: tag_1, vector: [0.3711398, 0.0068489416, 0.004352187, 0.36848867, 0.9881858, 0.9160333, 0.5137728, 0.16045558]
id: 2, tag: tag_2, vector: [0.10995998, 0.24792045, 0.75946856, 0.6824144, 0.5848432, 0.10871549, 0.81346315, 0.5030568]
id: 3, tag: tag_3, vector: [0.38349515, 0.9714319, 0.81812894, 0.387037, 0.8180231, 0.030460497, 0.411488, 0.5743198]
===============================================
query result with expression: tag_field in ["tag_25"]
id: 25, tag: tag_25, vector: [0.8417967, 0.07186894, 0.64750504, 0.5146622, 0.68041337, 0.80861133, 0.6490419, 0.013803678]
id: 125, tag: tag_25, vector: [0.41458654, 0.13030894, 0.21482174, 0.062191084, 0.86997706, 0.4915581, 0.0478688, 0.59728557]
id: 225, tag: tag_25, vector: [0.4143869, 0.26847556, 0.14965168, 0.9563254, 0.7308634, 0.5715891, 0.37524575, 0.19693129]
id: 325, tag: tag_25, vector: [0.07538631, 0.2896633, 0.8130047, 0.9486398, 0.35597774, 0.41200536, 0.76178575, 0.63848394]
id: 425, tag: tag_25, vector: [0.3203018, 0.8246632, 0.28427872, 0.3969012, 0.94882655, 0.7670139, 0.43087512, 0.36356103]
id: 525, tag: tag_25, vector: [0.52027494, 0.2197635, 0.14136001, 0.081981435, 0.10024931, 0.40981093, 0.92328817, 0.32509744]
id: 625, tag: tag_25, vector: [0.2729753, 0.85121, 0.028014379, 0.32854447, 0.5946417, 0.2831049, 0.6444559, 0.57294136]
id: 725, tag: tag_25, vector: [0.98359156, 0.90887356, 0.26763296, 0.33788496, 0.9277225, 0.4743232, 0.5850919, 0.5116082]
id: 825, tag: tag_25, vector: [0.90271956, 0.31777886, 0.8150854, 0.37264413, 0.756029, 0.75934476, 0.07602229, 0.21065433]
id: 925, tag: tag_25, vector: [0.009773289, 0.352051, 0.8339834, 0.4277803, 0.53999937, 0.2620487, 0.4906858, 0.77002776]
Process finished with exit code 0
Currently, it is not possible to use a dynamically changed label in an Apache Superset chart.
Apple started using a shared cache and that might be part of the issue. I have tried setting LSMinimumVersion to different versions, reinstalling the SDK, restarting the Mac, and a few other things - NOTHING works yet. I do not know how to fix it yet; what I do know:
If I run ls to check for the file, it is not there:
ls -l /usr/lib/swift/libswiftCloudKit.dylib
ls: /usr/lib/swift/libswiftCloudKit.dylib: No such file or directory
However:
dyld_info -platform /usr/lib/swift/libswiftCloudKit.dylib
/usr/lib/swift/libswiftCloudKit.dylib [arm64e]:
-platform:
platform minOS sdk
zippered(macOS/Catalyst) 15.1 15.1
Locally my app runs just fine. It is an iPhone app but works on a Mac with an M-series processor. iCloud also works fine. I wonder if that is something Apple needs to fix in the store. While it runs on the Mac (see pic), I am still getting emails from Apple about the issue. The sad thing is that it blocks the app from being available on Mac via TestFlight, so QA engineers cannot test it on a Mac. iPhone/iPad are not affected by this.
Try not using SSH. You can use your password or a personal access token instead. Follow these steps:
git clone https://github.com/kortina/dotfiles/
When it asks for your username, provide it. When asked for your password, provide it, or use a personal access token as shown in the other way below.
Another way:
I have the same issue. Did you solve it?
Check with type(X) is type
>>> type(Exception) is type
True
>>> type(3) is type
False
>>> class Dummy: pass
>>> type(Dummy) is type
True
Inspired by S.Lott's answer.
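One caveat worth adding (not part of the original answer): classes created with a custom metaclass have that metaclass as their type, so type(X) is type returns False for them, while isinstance(X, type) still works. A small sketch:
class Meta(type): pass
class WithMeta(metaclass=Meta): pass

print(type(WithMeta) is type)      # False - its type is Meta, not type
print(isinstance(WithMeta, type))  # True - Meta is a subclass of type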
Did you get a way to connect the feedback directly to a message?
The issue referenced by Dou Xu-MSFT has been updated, and a fix has been implemented in the latest release (17.12.0). You can update the line limit by going to Tools -> Options -> Text Editor -> Advanced, and update "TextMate parser line limit (required restart)".
I experienced the same issue, but it was fixed when I removed the '&' symbol from the project folder name. After that, everything worked smoothly.
I think the issue is how you are handling the MAIL_DRIVER in the .env file.
In the .env file your MAIL_DRIVER=sendmail, so please change it to MAIL_DRIVER=smtp.
Also, please cross-check the SMTP credentials once again with an online tool like https://www.gmass.co/smtp-test.
The following solved the issue for me on Ubuntu 22:
sudo apt install libtiff-dev
Solution taken from here https://github.com/r-lib/ragg/issues/86
Create the file again with the exact same name by clicking New File and typing in the correct name. Then, click the history button at the bottom right corner and use the arrow buttons/slider to go to the version just before the file was deleted (it should be the version before the current one). Click the "restore to here" button. Voilà! Your file will now be restored.
One additional thing,
You seem to think that you are talking to official Replit support. This is actually a user-to-user Q&A site. It's not your fault. :)
I answered a similar question here:
In summary, this is currently still an issue in next-auth v5 beta. You can read up on them at:
The two workarounds from the first link are:
git clone -b fix-base-path https://github.com/k3k8/next-auth.git
cd next-auth
corepack enable pnpm
pnpm install
pnpm build
cd packages/next-auth/
pnpm pack
This will generate a package file named something like next-auth-5.0.0-beta.20.tgz. Move this file to your project directory and install it by updating your package.json:
{
  "dependencies": {
    "next-auth": "./next-auth-5.0.0-beta.20.tgz"
  }
}
Then run pnpm install.
@ThangHuuVu's solution (source):
// app/api/auth/[...nextauth]/route.ts
import { handlers } from "auth";
import { NextRequest } from "next/server";

const reqWithBasePath = (req: NextRequest, basePath = "/authjs") => {
  const baseUrl = new URL(
    (req.headers.get("x-forwarded-proto") ?? "https") +
      "://" +
      (req.headers.get("x-forwarded-host") ?? req.nextUrl.host)
  );
  const urlWithBasePath = new URL(
    `${basePath}${req.nextUrl.pathname}${req.nextUrl.search}`,
    baseUrl
  );
  return new NextRequest(urlWithBasePath.toString(), req);
};

export const GET = (req: NextRequest) => {
  const nextReq = reqWithBasePath(req);
  return handlers.GET(nextReq);
};

export const POST = (req: NextRequest) => {
  const nextReq = reqWithBasePath(req);
  return handlers.POST(nextReq);
};
I think there is a scrolling conflict between the CustomScrollView and the WebView. To resolve this, you can disable scrolling in the CustomScrollView by using NeverScrollableScrollPhysics.
@override
Widget build(BuildContext context) {
return CustomScrollView(
physics: NeverScrollableScrollPhysics(), // Disables scrolling in CustomScrollView
slivers: <Widget>[
SliverAppBar(
title: const Text("Heading"),
floating: true,
),
SliverFillRemaining(
child: WebView(initialUrl: "http://stackoverflow.com"),
),
],
);
}
Thanks! Removing the .next folder worked for me too.
A fix was implemented in the latest release (17.12.0). I found the answer here: https://developercommunity.visualstudio.com/t/Javascript-Files-Lose-Color-Coding-after/10687814. You'll need to go to Tools -> Options -> Text Editor -> Advanced, and update "TextMate parser line limit (required restart)".
Make sure the file is not in use:
Right-click the audit, then disable it.
Stop all tools that are reading from that file.
Then, from the properties, you can limit the number of files and the maximum number of lines inside each audit log file.
Why do we need to include ( -name '*.mp4' -o -name '*.mkv' )? Please advise.
I'm trying to do the same thing; I don't think there's an answer to what the possible values are or how the string gets created in the first place. I read this in the documentation for html_instructions here: https://developers.google.com/maps/documentation/directions/get-directions#DirectionsStep
Contains formatted instructions for this step, presented as an HTML text string. This content is meant to be read as-is. Do not programmatically parse this display-only content.
You can use RedZwitch.com; they offer seamless, zero-downtime Redis migration with real-time sync that ensures data consistency, plus an easy web UI for monitoring. Simply enter the source/target details, start the migration, and let it handle data consistency and progress tracking.
sudo systemctl restart jenkins.service
Cheers bro xD
Great job; please continue with the good work! Thank You........
LinkageError occurred while loading main class asbdj: java.lang.UnsupportedClassVersionError: asbdj (class file version 66.65535) was compiled with preview features that are unsupported. This version of the Java Runtime only recognizes preview features for class file version 67.65535. Please help me fix this error.
"np.float64(1000.0)" is a variable in Python and it won't be accepted by MySQL.
Parameters :
(2.0, np.float64(1000.0))
Maybe, it should be:
(2.0, 1000.0)
I'm not sure. It might be a problem with "numpy".
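If that is the cause, the usual fix is to convert NumPy scalars to plain Python numbers before passing them as query parameters. A minimal sketch (cursor and sql are hypothetical placeholders for your own connection and statement):
import numpy as np

params = (2.0, np.float64(1000.0))
# Convert any NumPy scalar to the corresponding native Python type.
clean_params = tuple(p.item() if isinstance(p, np.generic) else p for p in params)
print(clean_params)  # (2.0, 1000.0)
# cursor.execute(sql, clean_params)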
Try adjusting fileMaxSize in ueditor.config.json.
I think you need to review the IP ranges for Bitbucket from the following link - https://confluence.atlassian.com/bitbucket/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall-343343385.html - and then whitelist the outgoing IPs from Bitbucket on Snowflake.
If that doesn't work either, then you will need to check with the Bitbucket team to get the right list of IP ranges.
OK, I finally figured out how to fix this issue. I had to rewrite the code as follows:
# Iterate through all .xlsx files in the current directory
for file in os.listdir(script_dir):
    if file.endswith('.xlsx'):
        # Load the spreadsheet
        df = pd.read_excel(file)
        # Remove rows with "cancelled by system", "cancelled by trader" or "Maximum position exceeded" in any cell
        df = df[~df.apply(lambda row: row.astype(str).str.contains('cancelled by system|cancelled by trader|Maximum position exceeded', case=False).any(), axis=1)]
        # Remove unwanted columns by position
        df = df.drop(df.columns[[0, 1, 2, 3, 4, 6, 7, 8, 10, 11, 12, 16]], axis=1)
        # Save the updated spreadsheet without a header/title row
        output_file_path = os.path.join(output_folder, file)
        df.to_excel(output_file_path, index=False, header=False)
print("Reformatted data in xlsx files")
The OP asks
Why did the authors of nginx choose eight, and not a higher number? What could go wrong if I add more buffers, or a bigger buffer size?
This was answered by one of the authors in the nginx forum
Some hints about seveal big vs. many small buffers:
- Using bigger buffers may help a bit with CPU usage, but usually it's not a big deal.
- Using bigger buffers means less disk seeks if reply won't fit into memory and will be buffered to disk.
- On the other hand, small buffers allow better granularity (and less memory wasted for last partially filled buffer).
Since there are different load patterns (and different hardware) it's hard to recommend some particular setting, but the about hints should allow you to tune it for your system appropriately.
In general, you should probably search the forum first if you’re looking for an answer from the nginx developers.
Since the amount of data going back and forth will be very low overall, I may have gotten away with this 'hack' job. Instead of messing with the complexity of Python serial communications, I edited the Arduino sketch to only transmit data when the serial buffer is free (Serial.available() == 0) and to add a 50 ms delay after sending a packet to help stabilize the communications.
int sensorValue = analogRead(A5);
if (Serial.available() == 0) {
    Serial.println(sensorValue);
    delay(50);
}
Thanks everyone for your answers!
After careful consideration, I've decided to use class-validator and class-transformer for validation. This allows me to validate required properties and their types without manually specifying each property's name and type. The example here demonstrates how class-validator works, but in my project I'll use class-transformer to convert data with plainToInstance before running validations.
import { IsNotEmpty, IsString, IsInt } from "class-validator";
import "reflect-metadata";
export class Car {
@IsNotEmpty()
@IsString()
model: string;
@IsInt()
year: number;
@IsString()
price: string;
}
Call validate
validate(car).then(errors => {
if (errors.length > 0) {
throw new Error("Errors: " + errors.join("\n"));
}
});
const arr = ["hello",100,"world"] as const
// error:Cannot assign to '1' because it is a read-only property.(2540)
arr[1] = 2
I ended up using a custom hook with useEffect(). It actually isn't so messy after all.
export const functions = getFunctions(app);
export const useEmulator = () => {
useEffect(() => {
if (window?.location?.hostname === "localhost") {
connectFunctionsEmulator(functions, '127.0.0.1', 5001)
}
}, [])
}
You can wrap your view in a ScrollView so your content gets pushed up without being clipped:
ScrollView {
ZStack(alignment: .top) {
...
}
}
I had the same question, and downgrading opencv-python worked for me:
pip uninstall opencv-python
pip install opencv-python==4.1.2.30
I've read through all of the Preferences in Acrobat and figured out the solution myself.
Preferences > Security (Enhanced) > Uncheck Enable Protected Mode at startup
Done! No more "file might be in use" error.
I just needed to upgrade to the latest version; gem update rails did the trick on Ubuntu.
Use these commands:
sudo apt install libxml2-utils libxml2-dev xsltproc libjson-c-dev libtirpc-dev
sudo meson setup build -Ddriver_remote=enabled -Ddriver_libvirtd=enabled -Dsystem=true -Ddriver_qemu=enabled --reconfigure
The existing answer didn’t really answer the question.
It has to be noted that much of the useful advice on tuning resides not in the docs, but in forum postings by nginx developers.
On the OP's question, Maxim Dounin gave excellent guidance as early as 2009:
Some hints about seveal big vs. many small buffers:
- Using bigger buffers may help a bit with CPU usage, but usually it's not a big deal.
- Using bigger buffers means less disk seeks if reply won't fit into memory and will be buffered to disk.
- On the other hand, small buffers allow better granularity (and less memory wasted for last partially filled buffer).
Since there are different load patterns (and different hardware) it's hard to recommend some particular setting, but the about hints should allow you to tune it for your system appropriately.
I am not familiar with AEM, but maybe you could change the code.
HTML - You do not need to wrap the text in a span, unless you want to change the color of certain text in it.
<a href="/content/travel/us/en/addticket.html" class="button-link">
<button class="add-ticket">ADD LEADS</button>
</a>
Also, do you really need the button?
<a href="/content/travel/us/en/addticket.html" class="button-link add-ticket">ADD LEADS</a>
With this code, you do not need JS for it. Ping me if you do want to do it in JS. Also, here are some docs from Adobe about embedding links in AEM.
What about paramiko?
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='your_host', port=22, username='your_username', password='your_password')
stdin, stdout, stderr = ssh.exec_command('your_command')
result = stdout.read()
ssh.close()
print(result.decode())
Make sure your consumer_key and consumer_secret have the necessary permissions to delete coupons.
Use CURL to test:
curl -X DELETE \
https://mypetstoryapp.com/wp-json/wc/v3/coupons/3601 \
-u consumer_key:consumer_secret
The target needed to be set to "target": "ESNext", and then the extension is no longer needed to run the code.
I'm using React Native 0.74.5 with Expo 51.0.28, and at the moment the property textContentType="oneTimeCode" is working for me!
I found a way to solve this problem. This page & this page explain the phenomenon. I am using .NET Framework 4.7.2, and HttpClient's POST sends two packets: one for the header and the other for the body. But somehow the target machine cannot handle the separated POST packets correctly. When it receives the first packet it responds and will not process the posted data as expected. I cannot change the .NET version, so I solved this by platform-invoking another DLL, wininet.dll.
Just in case anyone finds themselves here still looking for a solution: I've also got a FastAPI setup, though I don't need it to stay alive for 10 hours, just 5-10 minutes.
Here's an extension of Abdul's answer that also adds a line of dashed separators to help guide the eye for column widths. It was much more complicated to write than I expected, but that's why it was fun (? 🥴).
Saved into table.jq:
# Convert a list of similar objects into a table of values
# Modified from: https://stackoverflow.com/a/77054233/2613780
# Example usage:
# jq --raw-output --from-file table.jq <<< '[ {"a": "able"}, {"b": "bacon"}, {"c": "cradle"} ]' | column -ts $'\t'
# Convert all values to strings
map( map_values(tostring) )
# Get column headings
| ( map(keys) | add | unique ) as $column_titles
# Create an object from column_titles where all values are set to null
| ( reduce $column_titles[] as $title ({}; . + {$title: null}) ) as $null_object
# Add all "columns" to all objects (the order is important, we want any actual values to override nulls)
| map($null_object + .)
# Create column separators
| (
# Add an object with the column headings as values
. + [reduce $column_titles[] as $column ({}; . + {$column: $column})]
# Sort all object keys alphabetically
| map(to_entries | sort_by(.key) | from_entries)
# Create a row of dashes matched to max column widths
| ( map(map(length)) | transpose | map(max * "-") )
) as $seps
# Convert all values to table rows
| map(. as $row | $column_titles | map($row[.])) as $rows
# Output table
| $column_titles, $seps, $rows[]
| @tsv
Also available in this GitHub Gist.
This is too long for a "one-liner" now, but I enjoyed working on it. It's probably a little inefficient (I just love using variables), so I'd love to hear some feedback - it seems to run in roughly O(n) time as far as I can tell!
Adding | column -ts $'\t' to the end of every usage is annoying enough that I've made a whole script just to use it, called json-list-to-table, which you can also find in the Gist above.
Tested with a few "interesting" cases:
The example above:
jq --raw-output --from-file table.jq <<< '[ {"a": "able"}, {"b": "bacon"}, {"c": "cradle"} ]' | column -ts $'\t'
Using the reqres.in test API:
curl https://reqres.in/api/users?per_page=12 | jq '.data' | jq --raw-output --from-file table.jq | column -ts $'\t'
A bunch of image layers from the DockerHub API:
curl --location --silent --show-error "https://registry.hub.docker.com/v2/namespaces/library/repositories/python/tags" | jq '[.results[].images[]]' | jq --raw-output --from-file table.jq | column -ts $'\t'
I know this is a four-year-old thread, but I wanted to add to @Jescanellas's solution.
Pick up at var hours:
var hours = (b1 - a1) / 3.6e+6 / 24;
e.source.getActiveSheet().getRange(dataRow,14).setNumberFormat('h:mm:ss')
e.source.getActiveSheet().getRange(dataRow,14).setValue(hours);
By adding the setNumberFormat line, it will display as regular time in hours, minutes and seconds. (In my .getRange, dataRow is the variable I am using.) Thought this might help someone who sees this thread. It took me a while to figure it out.
@JustMe, I wound up here looking for an answer to the same question (so it wasn't just you). I suspect you've either found your answer by now or moved on, but I'll share what I found in case it might help someone else.
I found this bindparam example in the SQLAlchemy documentation for Using UPDATE and DELETE Statements, but found the syntax to work for SELECT statements as well.
First, bindparam needs to be added to the sqlalchemy imports:
from sqlalchemy import bindparam
Then, it can be used to create the placeholders in the WHERE clause:
sql = select(User).where(User.c.first_name == bindparam("username"), User.c.age == bindparam("age"))
Then, a dictionary that maps the values to the placeholders gets passed to the execute function:
user = session.execute(sql, {"username": "Tester", "age": 18})
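Putting the pieces together, here is a minimal self-contained sketch; the users table, the in-memory engine, and the insert step are assumptions for illustration, not from the original answer:
from sqlalchemy import create_engine, select, bindparam, Table, Column, Integer, String, MetaData
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///:memory:")  # hypothetical engine, just for this sketch
metadata = MetaData()
User = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("first_name", String),
    Column("age", Integer),
)
metadata.create_all(engine)

# bindparam() declares named placeholders that are filled in at execution time.
sql = select(User).where(User.c.first_name == bindparam("username"), User.c.age == bindparam("age"))

with Session(engine) as session:
    session.execute(User.insert(), {"first_name": "Tester", "age": 18})
    user = session.execute(sql, {"username": "Tester", "age": 18}).first()
    print(user)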
from datetime import datetime, timezone

birthday = datetime(2024, 11, 12, 6, 1, 0, tzinfo=timezone.utc)  # Should be in UTC
cur_date = datetime.now(timezone.utc)  # Current time in UTC
elapsed_time = cur_date - birthday
print(elapsed_time)
I know this is late, but: use the print statement.
Helpful documentation:
This is the solution I used, which condenses Zekarias's answer into a single line:
await expect(page.getByRole("button", { name: "Save" })).toBeDisabled();
Doesn't address the question exactly, but this came up first in Google when I was wondering how to check if a button was disabled - so hope it helps someone else out ;)
Have you declared the Restlet Maven repository in your POM as explained here? https://restlet.talend.com/downloads/current/
Note that starting with version 2.5 RC1, Restlet is now published directly in Maven Central.
I have always thought the RFCs got this totally wrong. The Most Significant Bit should have the highest 'bit number', i.e. in a byte, bit 7 is the MSB. This was how DEC drew diagrams of in-memory representations of things, control and status registers, and so on. We are stuck with suboptimal representations and, in my insignificant opinion, we are stuck with really awful protocols like SIP/RTP. Yes, I had a TELCO background!
I had a problem with a very basic SELECT * FROM that wasn't working, only because I imported mysql2 as import {createPool} from 'mysql2'. This code didn't work because we have to import it in a way that tells Node we will be working with asynchronous functions; by writing import {createPool} from 'mysql2/promise' instead, we call promise and tell Node that we'll be working with async functions throughout the file. Regards.
After some experimentation, the user variables are expanded in alphabetical order as they are created without backtracking for undefined variables. Ergo, any variables starting with A through PASZZZZ... will be expanded in PATH, but not any variables that come after PATH.
I found this when PYENV wasn't expanded in PATH. The workaround was creating an AAA_PYENV, which did get expanded.
I have the same question; have you found the answer?
I decided to use only loadHTMLString(_:baseURL:), and for local images I put a base64-encoded Data URL string in the img tag.
I was trying to build such a test framework with auto-registration. With MSVC it runs well, but with GCC the static member in the test-case .cpp, which is supposed to run its constructor and call the register function, fails to run the constructor at all. I think this has something to do with static initialization order, but I can't get any clue about how to fix it.
Microsoft explains the differences between string and String, and its usage recommendations, in the official documentation, which you can refer to.
If you do not want to receive IDE0049 warnings, you can also suppress the warning in the configuration file. You can refer to this document.
Luke M. is right. Medo64 is the best option for this - download it from https://www.medo64.com/vhdattach/ and install it. After this, add the vhdx as auto-mount and restart the OS. VhdAttach starts before Explorer, and the whole virtual drive is accessible from the user environment. The OneDrive target directory is accessible too.
Thanks Luke :)
Some older versions of Windows are incompatible with the adb.exe file that is included in the platform-tools folder of the more recent Android SDKs. I ran into this issue with Windows 7. The fix in this case is to download an older version of adb.exe. This may require downloading an older Android SDK to your computer, then replacing the adb.exe file in the platform-tools folder of the Android SDK used by your version of Android Studio (or whichever IDE you are using) with the one from the platform-tools folder of your download.
How about calling blur() in byEnter?
byEnter(e) {
if (e.type === "keydown") {
if (e.key === 'Enter' || e.keyCode === 13) {
e.preventDefault();
e.stopPropagation();
e.target.blur();
}
}
}
The proto folder should be stored in src/main/proto, not in src/main/java/proto as I did before. I was reminded of this by a great person in this issue.
Resolved Issues
Fixed crashes and other anomalies that could occur in the String Catalog editor when multiple editor tabs are open to the same file with different sort or filter criteria. (FB14665353) (136257968)
It seems Xcode 16.2 will solve the problem.
To upgrade the SSRS edition you just need to re-run the SSRS installer and choose "Upgrade Edition". There you can enter your license and upgrade the edition.
To upgrade to a newer version of SQL AND upgrade to a new edition you'll need to run the setup twice:
I just dealt with this issue today; the whole upgrade process was a bit frustrating and not nearly as streamlined or seamless as the MS Documentation implied.
Any solution found? I am facing the same problem.
Since your attribute
private static ApplicationContext ctx;
is static, you need to reference it in your setter as:
ApplicationContextUtils.ctx = appContext;
I tried asm("jump 0x10080000"), but I get the message "unknown opcode or format name 'jump'". I am using the ESP32 in the Arduino IDE.
What could my mistake be?
Thank you very much in advance for your help.
I discovered that there are two kernel parameters that control this behavior: /proc/sys/kernel/sched_rt_period_us and sched_rt_runtime_us. Their default values are 1000000 and 950000, which exactly correlates with a 50 ms interruption every second. This behavior can be controlled by modifying those values, including setting them to -1 to defeat it entirely.
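For reference, a minimal sketch (assuming Linux; writing requires root) that reads those two knobs through procfs:
def read_sysctl(name: str) -> int:
    # The RT throttling knobs live under /proc/sys/kernel/.
    with open(f"/proc/sys/kernel/{name}") as f:
        return int(f.read().strip())

print("sched_rt_period_us  =", read_sysctl("sched_rt_period_us"))
print("sched_rt_runtime_us =", read_sysctl("sched_rt_runtime_us"))

# Disabling the throttling entirely (equivalent to `sysctl -w kernel.sched_rt_runtime_us=-1`):
# with open("/proc/sys/kernel/sched_rt_runtime_us", "w") as f:
#     f.write("-1")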
I already had a similar problem and the solution was to set the Authorization token in the Postman headers
In my case I found it was just an old image tag that was not in the registry anymore.
Just use sort_link @q, :b_id.
The best way to do that is to use Django Allauth. You can do anything with it, securely and easily, and Allauth has many other options for you as well.
I had this problem today, too. My solution was changing np.float to np.float64 in the code that causes the error. Check the error message (traceback) to see which code caused the problem. After renaming it, the error was fixed.
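If the offending code lives in a third-party package you cannot easily edit, a quick (admittedly hacky) workaround is to restore the removed alias yourself before importing that package; np.float was removed in NumPy 1.24, so this shim only matters there:
import numpy as np

# Restore the alias removed in NumPy 1.24 so legacy code using np.float keeps working.
if not hasattr(np, "float"):
    np.float = np.float64  # type: ignore[attr-defined]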
I’ve resolved the issue. I noticed that the deployment for the main branch kept getting stuck, even after multiple attempts to rerun all jobs. Since I couldn’t view what was happening in the backend, I tried deploying a gh-pages branch instead.
Here’s what I did:
"scripts": {
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
}
In my case I simply stopped and then started the service and that solved my problem.
1. brew services stop postgresql
2. brew services start postgresql
So I kind of found the issue and a fix for it.
The problem is that the Collabora and Nextcloud servers are being proxied by Cloudflare. Specifying the Nextcloud server in aliasgroup1 therefore doesn't allow the actual requesting IP. Meaning: a1.a2.a3.a4 maps to example.com and is listed in aliasgroup1 as the domain of the Nextcloud server. The outgoing request from that server comes from b1.b2.b3.b4, which is not mapped to that domain and is also not allowed to make WOPI requests, just like any Cloudflare IPs aren't, except the one that maps to example.com.
The following (especially the Collabora allow-list for WOPI requests part) for some reason did solve my problem, maybe because the Cloudflare IP also gets parsed? Just adding all Cloudflare IP ranges to the allow list for WOPI requests (in the Nextcloud instance) fixed the problem for me. I am not sure if this is secure (at all):
https://www.domsky.cz/nextcloud-with-collabora/
Just wanted to leave this here for anyone confused by the mess this thread has become over time. @Red's answer is still kind of correct and would work for static IPs.
Try these changes (add them to the existing properties):
.sticky-header {
z-index:6;
width:100%;
background: #00060D;
top: 0;
}
You can also do killall Xcode. It removes all Xcode processes.
The formula which can help is:
[B2]=LET(src,A2:A8,u,UNIQUE(src),XLOOKUP(src,SORT(u,,-1),SEQUENCE(ROWS(u))))
If the above solutions don't work, check whether your IP has been blacklisted at spamhaus.com. They will tell you if it has and, if so, how to go about resolving it.
On line 10 it says "Display": symbols := Map("Headphones", "🎧", "Speakers", "🔊", "Display", "🖥️"), but on line 52 you have "Monitor": ChangeAudioOutput("Monitor").
Make sure to change "Monitor" to "Display" or vice versa.
Thank you for this!
One solution is to uninstall Notepad++.
For people with a locked-down install that won't allow making .bat files or access to the path, you can add this to your PowerShell profile:
New-Alias vi Vim-Launch
function Vim-Launch([string]$FILE) {
$path = Join-Path $PWD $FILE
& "{Your path to the vim.exe}" $path
}
I think the obvious answer is to simply sort the JSON object BEFORE you pass it to React-Table, or to whatever state React-Table is loading from.
It's going to be nearly impossible, and it will overcomplicate your code, to try messing around with the package. As someone who has used React-Table a lot in the past: simply sort the data before you pass it to React-Table.
Can anybody tell my how to do this right?
Your code looks quite confusing. Why don't you simply create two Properties objects, one containing the logon parameters for system A and the other containing the logon parameters for system B? Your implementation of MyDestinationDataProvider.getDestinationProperties() then just returns the one or the other, depending on the destinationName. That's it, and everything works fine! No need for complicated mechanisms that "change" something dynamically at runtime, etc.
Thanks everyone, I found the answer: there is a column security setting for the missing columns, and I am not part of the readable team.
Apple is excellent at always putting new obstacles in the way of developers. Since Xcode 14, a new setting in the Build Settings is necessary for Repose's script to continue working.
Oleh Sherenhovskyi's script version no longer works because the Info.plist no longer contains any version information at all.
I decided to follow the st documents which at least resulted in the image being built correctly.
If you are having problems installing TensorFlow in VS Code, have tried upgrading your pip, have set your Python version to something supported for the non-WSL version (which I believe is 3.9 and lower), and are still having an issue, it may be what the above user posted.
I installed Python 3.9.13 but was still receiving the error "Could not find a version that satisfies the requirement tensorflow". The above poster mentions not allowing Python to add itself to the path, which I had done when I installed 3.13. I removed that from my path, and TensorFlow installed fine at that point.
Is there any optimization happening during compile time?
There are many, but if you touch the array, the whole array will be there.
Is the compiler (I use gcc at the moment) smart enough to realize that only c[0] is used?
No, the compiler is not a programmer; that is the programmer's job. Also, if you take a reference to this array, the compiler does not know how it will be used. You may not use the array at all in your code - but it can still be used as a DMA buffer, for example.
Is the operating system (name one) smart enough to not waste the memory?
It is not related to the OS or bare metal. The OS does not deal with static storage duration objects.
When I code for embedded microcontrollers, is there any difference to running this program on another OS (memory-wise)?
No.
You have just presented a bad programming pattern leading to an unnecessary waste of memory.
The next-themes doc includes a section about how to avoid this: Avoid Hydration Mismatch
Thanks for the replies. I was going to provide an edit but decided to add my own answer. There seems to be a few different ideas and opinions here, and after looking into this question further, I've realized that the core of the discussion revolves around how we define the scope of our analysis—what we care about when we talk about algorithmic complexity. Does the underlying representation of the input matter in our analysis, or should we only focus on the abstract, logical behavior of the algorithm?
Here’s an adjacent question that helps us explore this idea:
If I have a program that copies an input integer to another variable, this is typically assumed to be a constant space operation. But since the input integer is likely represented as a binary number under the hood, why is this not considered an O(log n) operation, since we are copying all log(n) bits from one variable to another?
One possible answer is that we are dealing with a fixed-size input, such as a 32-bit integer, which would make the operation constant in terms of space because the input is capped at a maximum size (regardless of the actual value of the integer). But is this truly the reason we consider the operation constant space, or does the space complexity analysis depend more on what we measure the complexity with respect to?
The key insight, I believe, is that the answer depends on our model of computation—on what we are willing to count as "space" or "time" in our complexity analysis. Complexity analysis is inherently tied to the assumptions we make about the nature of the input and the environment. As one comment puts it, "All the computations of complexity depend on your model of computation, in other words on what you would like to count." Are we counting only the space that our algorithm explicitly allocates, or are we including the underlying machine representation (e.g., the binary encoding of integers)?
In this context, the RAM model of algorithmic analysis is usually what we are concerned with. In this model, the focus is on high-level operations, such as additions and comparisons, and assumes a fixed-size machine word (e.g., 32 or 64 bits). The time and space complexity are measured based on the number of operations required to solve a problem, ignoring the details of machine word size or arbitrary precision. For most algorithmic problems and competitive programming algorithms, this model is used, as it abstracts away the details of the underlying hardware and focuses on the algorithm’s efficiency with respect to input size.
In short, the real question boils down to what you’re measuring your algorithmic complexity with respect to—the input value itself or the representation of the input value on the machine. If we consider the input value in its raw form, with a fixed bit size (say 32 or 64 bits), then copying it would be an O(1) operation. But if we delve into the details of how the input is represented on the machine (in binary, with a potentially varying bit length), we might argue that the operation could take O(log n) space, reflecting the number of bits involved.
Ultimately, it’s less about whether the input is “arbitrary precision” versus “fixed size” and more about what assumptions we make when measuring algorithmic complexity. These assumptions about the model of computation—what we count and how we count it—determine whether we consider the operation constant space or logarithmic.
In Short: O(1) time with respect to what we are generally concerned about. We can do complexity analysis with respect to other lower-level details, but this generally is outside the scope of our model of analysis.
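As a small, concrete illustration of how much the representation model matters, here is a Python sketch (Python integers are arbitrary precision, so their storage grows with the value, unlike a fixed 32- or 64-bit machine word):
import sys

# Under the word-RAM model a copy is O(1); with arbitrary-precision integers,
# both the bit length and the in-memory size grow with the magnitude of the value.
for n in (1, 2**31, 2**63, 2**1024):
    print(f"bit_length={n.bit_length()}, bytes={sys.getsizeof(n)}")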