Use https://developers.cloudflare.com/pages/how-to/www-redirect/.
192.0.2.1
I faced the same error when I tried to run a Maven project in the Eclipse IDE. What solved it for me was deleting the .metadata folder from the eclipse-workspace. I hope this helps!
As of 1/2025, just adding the following to devcontainer.json fixed the issue for me:
"forwardPorts": [5173]
Here is a hacky solution that does the job for now.
if (knitr::is_html_output()) {
knitreg. = texreg::htmlreg
} else {
knitreg. = texreg::knitreg
}
0stone0 is absolutely right. But if you want to use constants instead of typing strings you can do it this way:
from telegram import ReactionTypeEmoji, constants
bot.set_message_reaction(message.chat.id, message.id, [ReactionTypeEmoji(constants.ReactionEmoji.ALIEN_MONSTER)], is_big=False)
Using the cryptography library (the linked ed25519 library looks abandoned at this point):
from cryptography.hazmat.primitives import serialization

with open('private_key.pem', 'rb') as f:
    # returns Ed25519PrivateKey
    ed_priv = serialization.load_pem_private_key(f.read(), password=None)

signature = ed_priv.sign(b"my authenticated message")
For other methods see https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ed25519/
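As a small addition that is not part of the original answer, verifying the signature with the corresponding public key uses the same library and the ed_priv and signature objects from above:

# verify() raises cryptography.exceptions.InvalidSignature if the check fails
ed_pub = ed_priv.public_key()
ed_pub.verify(signature, b"my authenticated message")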
How to fix image loading error in Flutter web project when running on Chrome?
I encountered an issue while loading images in my Flutter web project. I was getting an error and the images weren't displaying as expected. After searching and trying a few things, I found a solution that worked for me.
Solution:
You can run your Flutter web project on Chrome with the following command:
flutter run -d chrome --web-renderer html
This command resolved the issue of loading images in my project. If you're facing a similar problem, try using this command to see if it helps!
Here is the link I checked to fix the error: https://www.youtube.com/watch?v=NljAhIQXcjw
In my case the issue was with the sheet names. I was using a sheet name with no spaces in it when the actual names had spaces. Please keep in mind that the message "The requested resource doesn't exist" doesn't always mean the ID of the file is incorrect; it can also happen if the sheet name is incorrect.
How should I use the Microsoft Power BI API to delete workspaces? When I created a workspace linked to a service principal profile, I did it like this:
public function createWorkspace($accessToken, $profileId, $workspaceName)
{
$response = Http::withHeaders([
'Authorization' => 'Bearer ' . $accessToken,
'X-PowerBI-Profile-Id' => $profileId,
'Content-Type' => 'application/json',
])->post($this->baseUrl . 'groups', [
'name' => $workspaceName,
]);
if ($response->successful()) {
return $response->json();
}
throw new \Exception('Error creating workspace: ' . $response->body());
}
Then I grant access permissions like this:
public function grantAccessPermissions($accessToken, $workspaceId, $profileId)
{
$response = Http::withHeaders([
'Authorization' => 'Bearer ' . $accessToken,
'Content-Type' => 'application/json',
'X-PowerBI-Profile-Id' => $profileId,
])->post($this->baseUrl . 'groups/' . $workspaceId . '/users', [
'groupUserAccessRight' => 'Admin',
'identifier' => '487eabe2-22ea-4419-b8f9-e7f09fa0b875',
'principalType' => 'User',
'emailAddress' => '###@####.onmicrosoft.com'
]);
if ($response->successful()) {
Log::info('grantAccessPermissions response: ' . json_encode($response->json()));
return $response->json();
}
throw new \Exception('Error granting access permissions: ' . $response->body());
}
Here I send the 'emailAddress' parameter in the request body, which is the 'Admin' of the Premium capacity assigned to the service principal. And the workspace shows up correctly in the Power BI Service.
But I can't delete the workspace I created.
public function deleteWorkspace($accessToken, $workspaceId)
{
if (empty($workspaceId)) {
throw new \Exception('The Workspace ID cannot be empty.');
}
Log::info('Workspace ID received by deleteWorkspace: ' . $workspaceId);
$response = Http::withHeaders([
'Authorization' => 'Bearer ' . $accessToken,
'Content-Type' => 'application/json',
])->delete($this->baseUrl . 'groups/' . $workspaceId);
if ($response->successful()) {
return $response->json();
}
throw new \Exception('Error deleting workspace: ' . $response->body());
}
I receive this response: PowerBINotAuthorizedException
Did you solve the problem? I have the same issue and can't find a solution. It would be great if you could give some advice.
You did not show the error logs. If you can share them, it will help us figure out how to help you.
If the container has already been created:
docker stop <container_name>
docker commit keycloak keycloak2
docker run -p 28080:28080 -td keycloak2 start-dev --http-port=28080
Amazon has released a new feature that can convert a Windows 11 ISO to an AMI: https://aws.amazon.com/about-aws/whats-new/2025/01/ec2-image-builder-converting-windows-iso-files-amis/
Yes, this can happen. Just like prefetching (where email servers/providers pre-load email content before the user has opened the email) can inflate your open rates, links can also be "clicked" by email servers before they are actually clicked by a user. This is usually a security measure, where the system is checking the link for safety. But these types of clicks can usually be identified by the system, and thus excluded from the rates they show you.
I have 9 years of experience working at companies that allow people to send marketing emails, in roles ranging from tech support to product management, and I have seen instances where the filtering didn't happen as it's supposed to. These have been low-impact bugs, not very common and pretty quickly fixed. For the email campaigns where this happened, I don't recall the specific numbers in terms of normal vs. abnormal click rates.
If you're sending emails and seeing way more clicks than usual, happening almost immediately, this might be the case for you. I'd suggest comparing like-to-like when looking into this, though - a "reset password" email campaign is going to have way more fast clicks than your standard marketing promo campaign, for example, and audience engagement varies. So look at the history for your particular emails and audience to get an idea of what's likely happening for you.
# Create a requirements file for all installed packages
pip freeze > requirements.txt

# Create a requirements file for only the imported packages
pip install pipreqs
pipreqs . --force
Do you maybe have code for this photo? (smiley). I need this for my maths project.
The issue might be a merge conflict caused by working on the project simultaneously on different machines. This happened to me by having the project open on a laptop and a desktop at the same time. Digging around, I found this Reddit post with instructions on how to edit the .tscn file with a text editor. The trick is correcting the duplicate IDs of loaded resources caused by the merge conflict.
https://www.reddit.com/r/godot/comments/6ntejk/tips_how_to_fix_merge_conflict_with_tscn_files/
Old question, but it is still hard to find an answer to it. The only way I could find to prevent the AcceptJS UI from scrolling was to override the window.scrollTo() function before the AcceptJS UI script was loaded.
window.scrollTo = function(x, y) {
    console.log("prevented scroll to " + x + ", " + y);
};
Google definitely messed up this email
Just found a solution accidentally; I have no idea why, but it works as below:
(LOWER(e.description) LIKE LOWER(CONCAT('%', :text, '%')) OR :text IS NULL)
THIS IS A LIFESAVER! I had been using Java 1.8 (Intel version) for about a year, with 110% CPU usage.
I updated to Java 17 Corretto (Apple Silicon version) and now it is lightning fast (7% CPU).
As I typed this out and looked at the documentation, I realized the answer is actually embarrassingly simple.
{
"ContainerOverrides": {
"Environment": [
{
"Name": "FILE_NAME",
"Value": "<object_key>"
}
]
}
}
It simply required capitalising the first letter of each key (ContainerOverrides, Environment, Name, Value).
I don't really get what you mean exactly by "check for H6 <>". What is <> in your description? The exact values, or a hint at a possible string interpolation? If I understood your question correctly, you can basically do:
=IF(F6="YES", EDATE(B6,3),
IF(F6="NO", EDATE(H6,3),
IF(F6="MAYBE", EDATE(B6,6),
IF(F6="UNKNOWN", EDATE(H6,6),
"[Error]: Invalid Condition")
)
)
)
If anyone else is still dealing with legacy software and encountering this, go to Credential Manager under the Control Panel and see if you have credentials for the web server under your "Windows Credentials". If you do, you can remove the credential and it should no longer log in with your admin account.
PythonAnywhere does not support MSSQL ODBC driver installation due to licensing issues & required user consent - link here to forum
We have also experienced this issue and have implemented a workaround. I have documented the solution in this repository: https://github.com/Matdata-eu/jena-riot-literal-as-subject-issue-workaround
In summary: use a Python script with rdflib to filter out the triples that have a literal as subject.
The problem was in the property I used to declare the response URI:
Calendar.Events.Watch watched = calendar.events().watch(
"primary",
new Channel()
.setId(uuid)
.setType("web_hook")
.setAddress(uri.toURL().toString())
);
Notice the change: setAddress(uri) instead of setResourceUri(uri).
In dbt 1.4 the flags warn_error and warn_error_options are defined in profiles.yml:
config:
  warn_error: true
  warn_error_options:
    include: all
    exclude:
      - NoNodesForSelectionCriteria
Note that version 1.4 is no longer supported; see https://docs.getdbt.com/docs/dbt-versions/core
Have you managed to find a solution for this?
The problem was the identifier in tauri.conf.json. It was com.myproject.app, which Mac was treating as an executable. I changed it:
"identifier": "com.myproject",
There are a few steps to build geometrical objects. It is all well described in Gmsh's docs.
The issue is most likely related to how ARIA attributes interact with focus management in nested accordions. When expanding or collapsing an accordion, focus can inadvertently move to a descendant and cause the parent to interpret this as a collapse.
There are a few ways of handling this; I'm not sure what your use case is, but you can try either of these methods to suit your needs.
import { Component, ElementRef, ViewChild } from '@angular/core';
@Component({
selector: 'app-accordion',
template: `
<button #accordionHeader (click)="toggleAccordion()">Accordion Header</button>
@if (isExpanded) {
<div role="region">
<p>Content here...</p>
</div>
}
`,
})
export class AccordionComponent {
@ViewChild('accordionHeader') accordionHeader!: ElementRef<HTMLButtonElement>;
isExpanded = false;
toggleAccordion() {
this.isExpanded = !this.isExpanded;
this.accordionHeader.nativeElement.focus();
}
}
<button
(click)="toggleAccordion()"
[attr.aria-expanded]="isExpanded"
aria-controls="panel"
>
Accordion Header
</button>
<div id="panel" [attr.aria-hidden]="!isExpanded" *ngIf="isExpanded">
<p>Content...</p>
</div>
These two solutions may not be exactly accurate, since you did not provide any code whatsoever. Either provide us with some code to look at, or try these two and leave a comment about which of them helped you out.
CQS: It is a programming principle that says you should separate operations that change data (commands) from those that read data (queries). If you have a method, for instance, it should either return something or update something, but not both.
CQRS: By dividing the design of the entire system into two sections, one for managing commands (writing or modifying data) and another for managing queries (reading data), CQRS expands on this idea. Each side can have its own database or model to optimize how they work.
So, CQS is the basic rule, and CQRS is like an advanced version of it used for bigger systems where you want to handle reading and writing differently.
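As a minimal illustration of the CQS rule (a toy example of my own, not tied to any framework), commands change state and return nothing, while queries return data and change nothing:

class BankAccount:
    def __init__(self):
        self._balance = 0

    # Command: changes state, returns nothing
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

    # Query: returns data, changes nothing
    def balance(self):
        return self._balance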
My name is Michel, and I am a Full Stack Developer & DevOps. I know the question was posted some time ago, but it remains relevant. Based on my experience with WordPress, AWS, and large-scale, high-performance projects, as well as frequent contact with smaller projects and questions from other professionals in the field, I can confidently say that the issue is still valid.
My recommendation for hosting a WordPress application on AWS, based on various tests and projects I manage, is to use AWS Elastic Beanstalk to orchestrate EC2 instances, even if it is a project with just one instance. Elastic Beanstalk significantly simplifies infrastructure management, the deployment process, and maintenance, while also offering advanced scalability control. In some projects, I use auto-scaling, which adds or removes EC2 instances as needed, ensuring scalability and cost optimization.
Regarding architecture, it is entirely possible to run WordPress on multiple load-balanced instances. The key lies in implementing a distributed architecture. While EC2 instances process PHP with NGINX, WordPress media files can be stored in EFS (Elastic File System), allowing sharing and synchronization across all instances. For the database, the best approach is to use Amazon RDS with MySQL or Aurora.
Additionally, to improve performance, I recommend using ElastiCache to optimize database queries and AWS CloudFront to efficiently serve static content. By applying appropriate caching rules for different types of files, it is possible to achieve high performance and significantly reduce the page processing load on each request.
This is an overview of the architecture I have been using and refining over the years, with excellent results in content portals, e-commerce sites, and SaaS platforms based on WordPress on AWS.
If you would like to discuss this topic further, I am available. I hope I have helped clarify your question!
If by project you mean an opened folder or repository, then just put a different .vscode/settings.json inside each project and that should be it.
Where did you put empty_values=()?
On MySQL Workbench the solution is:
SELECT idcategoriaespecifica, nombre FROM table ce WHERE MATCH (ce.column1) AGAINST ('VALUE1 VALUE2')
In the AGAINST, just put the values you want to search for, separated by spaces.
You can use
summary::marker {
content: none;
}
From my testing, it seems that Railway doesn't support hosting WebSocket connections. If you're looking for an alternative, I recommend using Render: https://render.com/
Use this IP, http://10.0.2.2/, instead of http://localhost/ or http://127.0.0.1/.
The stage you mentioned, sfc-eu-ds1-9-customer-stage, is an internal Snowflake stage assigned to your account. This endpoint is used in the backend to connect to the cloud storage service Snowflake uses for your account. Clients use it for PUT/GET operations and to fetch persisted or large query results.
Please note that these S3 buckets are not directly accessible, so Snowflake cannot share the credentials for them.
However, to upload and download files from a Snowflake internal/external stage, you must use the following minimum versions of the Node.js driver: version 1.6.2 to upload files (using the PUT command) and version 1.6.6 to download files (using the GET command).
References:
https://docs.snowflake.com/developer-guide/node-js/nodejs-driver#uploading-a-file-to-a-snowflake-stage
https://docs.snowflake.com/en/sql-reference/sql/create-stage
Portable Nucleotide String Compression: Part I, Endian Enigmas
The following is a discussion of some issues that arise when writing portable code in the context of compressing nucleotide text strings and producing the reverse complement of such strings, i.e. the reverse DNA strand.
Portability across both big- and little-endian architectures is required, so before we discuss compression schemes, let's first look at a little code that exposes the issue of "which"-endian:
#include <stdio.h>

int main()
{
    char c[] = "acgta";
    unsigned int a[2], i;

    for(i=0; i<5; i++) printf("address of c[%d] = %X\n", i, &c[i]);
    for(i=0; i<2; i++) printf("%X\n", ((int *) c)[i]);

    a[0] = ((unsigned int *) c)[0] >> 8;
    printf("%X\n", a[0]);
    printf("%s\n", (char *) a);
    return 0;
}
The code says to print the address of each byte (%X = hex) of the text string acgta. Then cast the address of the c array to an unsigned int and display how the string is stored in two 4-byte words. Then we take the first 4-byte word, shift it to the right by 8 bits, store it in a[0], and look at it again. Finally we cast the address of the unsigned int a array containing that shifted word to be a character pointer, and see what string we get. On a big-endian machine we get
address of c[0] = 7FFF2F00
address of c[1] = 7FFF2F01
address of c[2] = 7FFF2F02
address of c[3] = 7FFF2F03
address of c[4] = 7FFF2F04
61636774
61002F4C
616367
As one might expect, each successive letter of the string occupies the next higher byte of memory. When we examine the string as a word of memory, we see 61636774 as the first word, e.g. acgt where a=0x61, c=0x63, g=0x67, and t=0x74. The second word is 61002F4C, which is an a, the last letter of the string, followed by the string null terminator (0x00) and whatever junk happened to be in the rest of the word (0x2F4C). The last entry, 0x00616367, is the shifted word with 0s filling in from the left. The 0s look like a null terminator so that when we ask to print out the string that (char *) a points to, we get only a newline from the printf statement.
Now let's look at output from the same code on a little-endian machine:
address of c[0] = BFFFFAD0
address of c[1] = BFFFFAD1
address of c[2] = BFFFFAD2
address of c[3] = BFFFFAD3
address of c[4] = BFFFFAD4
74676361
40000061
746763
cgt
Again, each successive letter of the string occupies the next higher byte of memory. When we examine the string as a word of memory, however, we see the letters reversed. This is sometimes explained by saying that the little endian scheme stores the least significant byte of an integer in the lowest address of a word, and the most significant byte of an integer in the highest address of a word. Big endian schemes do just the opposite. Since my computer prints the contents of a memory word (a number) on the screen in English (from left to right), the most significant byte will always be on the left, followed by the lesser significant bytes to the right. For the programmer, it is conceptually easier to think of big endian machines as starting their first word of memory on the left and continuing to the right (like English), while little endian machines start their first word of memory on the right and continue to the left (like Hebrew). On a 32-bit machine this looks like:
Big-endian:
|<--------word0-------->|<--------word1-------->|<--etc-->|
byte0 byte1 byte2 byte3 byte4 byte5 byte6 byte7
a c g t a null
Little-endian:
|<--etc-->|<--------word1-------->|<--------word0-------->|
byte7 byte6 byte5 byte4 byte3 byte2 byte1 byte0
null a t g c a
With this scheme in mind, we can now see why word0 prints out the way it does, and we can interpret the rest of the code output on a little-endian machine. Shifting the first word to the right results in byte0 falling off the word, instead of byte3 falling off as in a big-endian machine. The result is what we see, 0x00746763. Now when we ask for the string pointed to by (char *) a, we get cgt because the byte0 a fell off the word, and the 0s that filled in from the left became the null terminator.
The above example looked at something that was already in memory, placed there byte-by-byte from a text string. What happens when we want to put some value into a word ourselves? Look at the next code:
#include <stdio.h>

int main()
{
    unsigned int i = 0x00006100;
    printf("%X\n", i);
    i = i >> 8;
    printf("%X\n", i);
    printf("%c\n", *((char *)&i));
    return 0;
}
The output on a big endian machine is:
6100
61

The output on a little endian machine is:
6100
61
a
What happened? Let's look at our mental picture when i is declared and assigned:
Big-endian:
|<--etc-->|<--------word0-------->|
byte3 byte2 byte1 byte0
00 00 61 00
Little-endian:
|<--etc-->|<--------word0-------->|
byte3 byte2 byte1 byte0
00 00 61 00
Both architectures print out the same thing for the integer as initially stored, and when the integer is shifted to the right. The difference happens when the address of i is cast to a (char *), dereferenced, and printed as a character. The big-endian machine prints out its byte0, which is null (we get a newline from the printf statement), while the little-endian machine prints out its byte0, which is the letter a.
Code similar to this can be used in portability scenarios when it is important to determine which endianness an application is running on.
Note that I've used unsigned int in these examples. If an int is signed, then 1s will be shifted in from the left if the leftmost bit is a 1, and we want 0s shifted in no matter what bit pattern is in the word. 0s fill in from the right during a left shift no matter if the variable is signed or unsigned.
As mentioned at the start of this article, these issues are being discussed in the context of encoding nucleotide text strings on different architectures. The nucleotide alphabet consists of only 4 characters, A, C, G, & T. When manipulating strings of these letters, it is desirable to compress the text strings such that each letter occupies only 2 bits in a word instead of 8. Let's look at some compression code for 2-bit encoding of nucleotide text strings on each architecture, and how to produce the reverse complement of a nucleotide string, i.e. the reverse DNA strand.
Portable Nucleotide String Compression: Part II, Shifty Characters
In Part I, we looked at Endian "Enigmas" in the context of bit shifting on different architectures. We did this because we want to be able to compress nucleotide text strings, made up entirely of just the four letters A, C, G, & T, into words containing 2-bit representations of each nucleotide. Thus a 32-bit word will contain 16 nucleotides, and a 64-bit word will contain 32, in both cases a compression factor approaching 4.
The compression is done simply by taking the 2-bit representation for each 8-bit ascii character and shifting it into its proper position within a word. If the bit patterns are A=00, C=01, G=11, & T=10, then on a big-endian machine the string acgta looks like:
uncompressed representation
|<--------------word0-------------->|<--------------word1-------------->|
|byte0---|byte1---|byte2---|byte3---|byte4---|byte5---|byte6---|byte7---|
01100001 01100011 01100111 01110100 01100001 00000000 <------8-bit ascii
a c g t a null
compressed representation, 4 letters/byte
|<--------------word0-------------->|<--------------word1-------------->|
|byte0---|byte1---|byte2---|byte3---|byte4---|byte5---|byte6---|byte7---|
00011110 00000000 00000000 00000000 <------ compressed 2-bit string
a c g t a null padded zeros
Each 2-bit representation was shifted to the left on a big-endian machine. On a little-endian machine, we have to shift the other way:
uncompressed representation
|<--------------word1-------------->|<--------------word0-------------->|
|byte7---|byte6---|byte5---|byte4---|byte3---|byte2---|byte1---|byte0---|
8-bit ascii -----> 00000000 01100001 01110100 01100111 01100011 01100001
null a t g c a
compressed representation, 4 letters/byte
|<--------------word1-------------->|<--------------word0-------------->|
|byte7---|byte6---|byte5---|byte4---|byte3---|byte2---|byte1---|byte0---|
compressed 2-bit string ----------> 00000000 00000000 00000000 10110100
padded zeros null a t g c a
Note that in both cases, an a is 00, composed of the same zero bits used to pad the word. Thus when decompressing, one must know the number of letters that were compressed.
The 2-bit representations could have been found with a lookup table that translates each ascii character into its equivalent 2-bit representation, something like:
int array[256];
array['A'] = array['a'] = 0x0; /* bit pattern 00 */
array['C'] = array['c'] = 0x1; /* bit pattern 01 */
array['G'] = array['g'] = 0x3; /* bit pattern 11 */
array['T'] = array['t'] = 0x2; /* bit pattern 10 */
Doing it this way is slow, however, having to do a lookup for each 8-bit character. An examination of the 8-bit ascii representations for the characters reveals that the desired 2-bit patterns for each letter are already unique within each ascii representation, and are case insensitive:
A 0100 0001
a 0110 0001
C 0100 0011
c 0110 0011
G 0100 0111
g 0110 0111
T 0101 0100
t 0111 0100
^^
||
these two columns contain the desired 2-bit code
It is much faster to simply mask out the undesired bits and shift the desired bits to their proper location. If unc is an array of nucleotides in 8-bit ascii, then the following code fragment shows how to create one word of compressed data from 4 words of uncompressed on a big-endian machine, doing everything in the registers without additional load/stores:
mask shift logical "or"
========== ===== ============
compressed[0] = ( (0x06000000 & unc[0]) << 5) |
( (0x00060000 & unc[0]) << 11) |
( (0x00000600 & unc[0]) << 17) |
( (0x00000006 & unc[0]) << 23) |
((unsigned long)(0x06000000 & unc[1]) >> 3) |
( (0x00060000 & unc[1]) << 3) |
( (0x00000600 & unc[1]) << 9) |
( (0x00000006 & unc[1]) << 15) |
((unsigned long)(0x06000000 & unc[2]) >> 11) |
((unsigned long)(0x00060000 & unc[2]) >> 5) |
( (0x00000600 & unc[2]) << 1) |
( (0x00000006 & unc[2]) << 7) |
((unsigned long)(0x06000000 & unc[3]) >> 19) |
((unsigned long)(0x00060000 & unc[3]) >> 13) |
((unsigned long)(0x00000600 & unc[3]) >> 7) |
((unsigned long)(0x00000006 & unc[3]) >> 1);
Masking turns bits off, while a logical "or" turns bits on.
The equivalent code on a little-endian machine looks like:
mask shift logical "or"
========== ===== ============
compressed[0] = ((unsigned long)(0x00000006 & unc[0]) >> 1) |
((unsigned long)(0x00000600 & unc[0]) >> 7) |
((unsigned long)(0x00060000 & unc[0]) >> 13) |
((unsigned long)(0x06000000 & unc[0]) >> 19) |
( (0x00000006 & unc[1]) << 7) |
( (0x00000600 & unc[1]) << 1) |
((unsigned long)(0x00060000 & unc[1]) >> 5) |
((unsigned long)(0x06000000 & unc[1]) >> 11) |
( (0x00000006 & unc[2]) << 15) |
( (0x00000600 & unc[2]) << 9) |
( (0x00060000 & unc[2]) << 3) |
((unsigned long)(0x06000000 & unc[2]) >> 3) |
( (0x00000006 & unc[3]) << 23) |
( (0x00000600 & unc[3]) << 17) |
( (0x00060000 & unc[3]) << 11) |
( (0x06000000 & unc[3]) << 5);
Note the mirror symmetry between the two code fragments.
Decompression is a little trickier because the prefix for a T (0101) is different than for the other letters (0100). We can determine T-ness in a register by doing an xor (^: exclusive or) of the 2-bit extraction with the 2-bit representation for T:
A^T = 00^10 = 10
C^T = 01^10 = 11
G^T = 11^10 = 01
T^T = 10^10 = 00
Only T xor T yields a false bool value. Using this fact, the code on a big-endian machine to decode the first 1/4 of a compressed string could look like:
unc[0] = (((0xC0000000 & compressed[0]) ^ 0x80000000)? /* is it a T? */
(((unsigned long)(0xC0000000 & compressed[0]) >> 5) | 0x41000000):
(0x54000000)) /* the letter T */
|
(((0x30000000 & compressed[0]) ^ 0x20000000)? /* is it a T? */
(((unsigned long)(0x30000000 & compressed[0]) >> 11) | 0x00410000):
(0x00540000)) /* the letter T */
|
(((0x0C000000 & compressed[0]) ^ 0x08000000)? /* is it a T? */
(((unsigned long)(0x0C000000 & compressed[0]) >> 17) | 0x00004100):
(0x00005400)) /* the letter T */
|
(((0x03000000 & compressed[0]) ^ 0x02000000)? /* is it a T? */
(((unsigned long)(0x03000000 & compressed[0]) >> 23) | 0x00000041):
(0x00000054)); /* the letter T */
Each successive 2-bit pair is masked out of the compressed input and xor-ed with 10. If that boolean is true, it is not a T, and we just shift the 2 bits into their proper place and add the remaining common bits. If it is a T, then we just return a T in the proper location. unc[1], unc[2], and unc[3] are similarly computed with the other 3/4 of the compressed word. For a little-endian machine, the same idea prevails with only different masking and shifting.
The same ideas presented above can also be used to do 4-bit and 5-bit compression, where 5-bit covers the entire alphabet.
Finally, note that it is easy to produce the reverse complement of a compressed nucleotide string. Along a double helix of dna, each nucleotide is paired with its complement, A with T, and C with G. A reverse complement string is the original string read backwards, replacing each letter with its complement. Reading backwards is accomplished by reversing the ordering of the 2-bit units within a word, and reversing the ordering of the words. Producing the complement is accomplished by xor-ing each 2-bit pattern with 10, incidentally the same thing we did above to determine T-ness:
A^10 = 00^10 = 10 = T
C^10 = 01^10 = 11 = G
G^10 = 11^10 = 01 = C
T^10 = 10^10 = 00 = A
Code to produce the reverse complement for a string that exactly fills its final word looks like:
for(i=0; i<length; i++)
{
/* reverse and complement */
rc[i] = (((unsigned long)(0xCCCCCCCC & compressed[length-1-i])) >> 2) |
( (0x33333333 & compressed[length-1-i]) << 2);
rc[i] = ( (unsigned long)(0xF0F0F0F0 & rc[i])>>4) | ((0x0F0F0F0F & rc[i])<<4);
rc[i] = ( (unsigned long)(0xFF00FF00 & rc[i])>>8) | ((0x00FF00FF & rc[i])<<8);
rc[i] = (((unsigned long)(rc[i]) >> 16) | (rc[i] << 16)) ^ 0xAAAAAAAA;
}
^ 0xAAAAAAAA complements the entire word at once after it has been reversed. Note that this code works for both big- and little-endian machines. Additional architecture-dependent shifting must be done for strings that do not exactly fill their final word; see https://github.com/jlong777/cbl and associated links for more "coding to the metal".
When trying to create a minimal reproducible example as suggested by trincot and ggorlen, I took another look at my original code and realized that the cause of the issue was having a helper function invoke the Puppeteer instance methods without binding this when calling them. Whether or not I was using Puppeteer within a module turned out to be irrelevant.
E.g.,
// I was doing this
function callMethod (func, ...args) {
func(...args);
}
callMethod(browser.newPage);
// And what I really needed to do was something like this
function callMethod (obj, method, ...args) {
obj[method].call(obj, ...args);
}
callMethod(browser, 'newPage');
During my original troubleshooting, I replaced the first couple of calls to callMethod() with a direct call to the respective Puppeteer instance method but left all of the other calls to callMethod() intact, obscuring the fact that callMethod() was the true culprit all along.
Lesson learned: Always make a minimum reproducible example, no matter how "simple" you believe your existing code is.
I would like to know if you managed to find a solution for this problem. We are experiencing the same challenge of having over 1000 clients' e-commerce sites hosted on our single domain, and most are asking for Meta's Pixel integration.
@Mahdi Ahmadifard's suggestion worked for me. Make sure you are using your broker's terminal.
Thank you Santiago. Here is the exact syntax I used. Note: I'm only posting for clarity for PowerShell rookies like me. I have no idea if this exact syntax differs from what Santiago gave me; I wager they both work.
Import-Module Activedirectory
$Attribcsv=Import-csv "C:\Temp\ADUsers_BulkEdit.csv"
ForEach ($User in $Attribcsv)
{
Get-ADUser -Identity $User.samAccountName | set-ADUser -replace @{costCenter=$($User.costCenter)}
}
I am with you, Mark Longmire. Six years later and still no solution found. Here is the lay of the land...
Someone else's binary: a .dll provided by a 3rd party who intentionally did not want it debugged.
Our own code: C# running in VS in Debug mode so we can step through our own work.
What we do NOT want: to debug the .dll (that is absolutely NOT our need or care).
What we DO want: to leave the .dll as an UNdecompiled UNexamined UNdebugged black box, exactly as its original creator intended.
What we DO want: to debug our OWN C# code withOUT Captain Obvious (Microsoft Visual Studio Debug mode IDE) halting execution by declaring "Symbol file not loaded" or "No symbol file loaded for .dll" or "Binary was not built with debug information"
We will gladly shower praise on anyone who knows of a way to make VS IDE "Shut Up And Proceed Anyway" withOUT attempting to debug the black box .dll which does not want to be debugged.
Our forever admiration and thanks in advance, Johann
I've been having the same problem! Did you find any solution to this?
How about this? https://github.com/dany74q/ctap-keyring-device
This library provides an implementation of a virtual CTAP2 (client-to-authenticator-protocol) device, which uses the keyring library as its backend.
One may use this implementation as a reference for CTAP2-compatible devices, or to use one's host machine as an authenticator, rather than using an external one.
A common use-case would be to use this library as an authenticator for a webauthn flow, storing keys and retrieving assertions on a machine's configured keyring.
Notepad++ has a plugin named "JSON Tools" that will generate a random json document for the schema file in the active document tab. There are not a lot of options, but it does work well in a pinch.
public class Example {
    public static void main(String[] args) {
        int i = Integer.parseInt(args[0]);
        double d = Double.parseDouble(args[1]);
        String msg = args[2];
        System.out.println("i=" + i + " d=" + d + " msg=" + msg);
    }
}
I ended up getting a solution through aynber's suggestion here.
In createApplicationRequest, I added
protected function failedValidation(Validator $validator)
{
throw new HttpResponseException(response($validator->errors(), Response::HTTP_UNPROCESSABLE_ENTITY));
}
This returned a response with the error and worked perfectly.
Just as an addendum for GUI-preferring users: I developed a stream multiplexing capability for ffmpeg that makes it easy to add audio streams to a given video. The free open-source application is FFmpeg Batch Converter.
Assuming fruit has an id column, you need:
select sum(p.numberPurchased * f.price)
from fruit f
join purchase p on f.id = p.fruitId;
You can omit prefixes (p., f.) if the field names do not repeat.
Just an update which I feel is worth mentioning: SQS FIFO increased its in-flight limit from 20K to 120K last year.
Using the VLOOKUP function with the IG column name, as JB999 answered, also worked on my APEX page. Thanks!
I am experiencing the same error as you are with my Golang code. I would appreciate your assistance in resolving this issue.
The code executes successfully and returns a response. However, I am encountering an error at the end of the process. Please provide your insights and guidance in resolving this matter.
Response: { "errorType": "Runtime.ExitError", "errorMessage": "RequestId: a3042e1e-2b0c-4c9f-b2fd-e69bdda70eb0 Error: Runtime exited without providing a reason" }
I encountered the same problem under Windows 10/11. After a thorough study of the application stack and its interactions, I concluded that this is not a JavaFX problem. The path led to the operating system itself: the driver is waiting for a reaction to a long-press interaction, which never comes from the program side, or at least not from the Java VM. A quick solution is to disable touch visualization in the touch driver settings: open the Pen and Touch settings, disable the "Show visual feedback" option, and hit Apply.
I have a spark version 3.2.1 with scala 2.12.15 and I have downloaded "azure-eventhubs-spark_2.12-2.3.22.jar" from maven repos. I still get the same error even after following all the answers mentioned above. Any help will be highly appreciated.
import React from 'react'
const Blog = () => {
return (
<div>Blog</div>
)
}
export default Blog
There is no deduplication ID in the SQL transport.
This is what I came up with using generics. Not sure if this is what the commenter had in mind. I can pass in different types of Tables and I get the correct output. How do I resolve MyKeyType to float?
import std/strformat
import std/tables

type
  Rect = ref object
    x: int
    y: int
  RectTableStringKey = Table[string, Rect]
  RefRectTableStringKey = ref Table[string, Rect]
  MyKeyType = float
  MyValueType = string
  XTableYKey = Table[MyKeyType, MyValueType]
  RefXTableYKey = ref Table[MyKeyType, MyValueType]

proc `$`[K,V](table: Table[K,V]): string =
  result = fmt"hello from $Table[{$K}, {$V}]"

proc `$`[K,V](table: ref Table[K,V]): string =
  result = fmt"hello from $refTable[{$K}, {$V}]"

var myTable1: RectTableStringKey
myTable1["one"] = Rect(x:10, y:20)
myTable1["two"] = Rect(x:15, y:25)

var myTable2: RefRectTableStringKey
new myTable2
myTable2["three"] = Rect(x:99, y:100)
myTable2["four"] = Rect(x:909, y:109)

var myTable3: XTableYKey
myTable3[3.14159] = "hello"
myTable3[2.78183] = "bye"

var myTable4: RefXTableYKey
new myTable4
myTable4[1.2345] = "dog"
myTable4[9.9998] = "horse"

echo myTable1
echo myTable2[]
echo myTable3
echo myTable4[]
Output:
hello from $Table[string, Rect]
hello from $Table[string, Rect]
hello from $Table[MyKeyType, MyValueType]
hello from $Table[MyKeyType, MyValueType]
I found a solution: using ctypes, it is possible to control the mouse once on the phone.
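For anyone wondering what the ctypes part looks like, here is a minimal sketch for Windows; these are standard WinAPI calls but my own illustration, not code from the original setup, so the details will differ on other platforms or when driving a mirrored phone screen:

import ctypes

user32 = ctypes.windll.user32           # Windows-only entry point
user32.SetCursorPos(500, 300)           # move the cursor to (500, 300)
user32.mouse_event(0x0002, 0, 0, 0, 0)  # MOUSEEVENTF_LEFTDOWN
user32.mouse_event(0x0004, 0, 0, 0, 0)  # MOUSEEVENTF_LEFTUP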
Turns out I was not doing anything wrong (for this specific problem). This was a new bug in BCP, for which there is now a pull request. Applying this patch to my 1.87.0 checkout makes Boost build now after running BCP against it.
Redshift now has a VARBYTE type and functions. So you can now convert a base64 string to VARBYTE with TO_VARBYTE(str, 'base64'). Then you can do stuff with this VARBYTE; for example, in my case I needed to see bit 3 of the first byte:
SELECT GETBIT(SUBSTRING(TO_VARBYTE(str, 'base64'), 1, 1), 3) FROM ...
Finally found it - the field firsstname was defined twice in the form resource.
The problem was me creating my own sockaddr_un. I thought I could create my own structure, and I was wrong. I thought I could because the man page says:
The actual structure passed for the addr argument will depend on the address family. The sockaddr structure is defined as something like:
struct sockaddr {
    sa_family_t sa_family;
    char        sa_data[14];
};
The only purpose of this structure is to cast the structure pointer passed in addr in order to avoid compiler warnings. See EXAMPLE below.
(I don't know how to accept the comments as answers; I would if I could.)
Yes, Snowflake has introduced a connector for SharePoint, available through the Snowflake Marketplace.
This connector allows you to link a Microsoft 365 SharePoint site with Snowflake to ingest files and manage user permissions while keeping them up to date. It also supports the Cortex Search service, enabling ingested files to be prepared for conversational analysis, making them accessible for use in AI assistants through SQL, Python, or REST APIs.
Refer: https://docs.snowflake.com/en/connectors/unstructured-data-connectors/sharepoint/about
Before restoring default settings, sign out of Postman: signing out removes your synced history, collections, and environments from local storage. Otherwise, when you re-install it, it is likely to use the old data from the previous installation.
There is no VBA key under HKEY_CURRENT_USER\Software\Microsoft\. Also, regardless of whether I have Docked checked in Options, I can drag a window over the edge of the editor and onto another monitor; nothing docks.
Just install the "tidyverse" package and run library(tidyverse); this will work.
This appears to be a navigation menu. In that case, you do not want to use ARIA menu roles. Get rid of all the role="menu" and role="menuitems". ARIA menu roles are only intended to be used in very specific circumstances, primarily when you are trying to recreate a native operating system menu, which a navigation menu is not. Reference: Don't Use ARIA Menu Roles for Site Nav
Implementing a navigation menu is actually much simpler than people make it out to be. But rather than trying to re-explain the wheel, I'll just give you a link to the master: Link + Disclosure Widget Navigation. Seriously, this guy is one of the foremost experts in web accessibility and his website is considered the Accessibility Bible by many. I would follow his example very closely.
I resolved this issue by adding a space between ServiceType in the RetrieveCounter() method.
Well, that's just reality... you can't always have the latest data in caches; there's the mechanism of the cache miss (vs. the cache hits, which are of course more popular, but joking aside...), so a cache that is asked for something and doesn't find it should make it (or you) query the central backend and then cache that result to update itself.
Is Redis or Ignite failing for you in that aspect? And what's the actual backend DB, and are you perhaps concentrating too much on this one mechanism of caching? There are always multiple ways of optimizing database performance. First off the top of my hat are indexes, materialized views, load balancing, SAN, memory storage engines...
Is there a way to do so?
Indeed there is, although I don't know if it is documented anywhere and how well. Found this here: https://steamcommunity.com/groups/SteamClientBeta/discussions/0/154644787621730542/
C:\>set SHIM_MCCOMPAT=0x800000001
C:\>your_program_to_launch.exe
If you replace 0x800000001 with 0x800000000, then this will become equivalent to choosing the "Integrated graphics" item. And an important note: this solution is almost certainly specific to NVIDIA Optimus, so you'll need something different for AMD GPUs.
def run_proxy(self, addon):
    asyncio.run(self.start_proxy(addon))

async def start_proxy(self, addon):
    ...
    await self.proxy.run()
    return self.proxy
I encountered Cannot find module 'tapable' with Next.js and resolved it with:
yarn add -D eslint-import-resolver-typescript
From my experience, every company decides the format for itself. Sometimes it's enough to rely on HTTP status codes and return an empty body. Sometimes the team chooses to use their own error codes (and in that case the error details can be found in the API docs). In my current company we use "message" and "description" JSON fields in all our APIs, but I can't say there is really a standard or best practice.
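As an illustration only (a made-up body, not any standard), an error response in that style might look like:

{
  "message": "Validation failed",
  "description": "The 'email' field must be a valid email address"
}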
I use the following Notepad++ command to compile Visual C++:
cmd /k "D:\Programs\VC\VC\Auxiliary\Build\vcvarsall.bat x86 & cd /d $(CURRENT_DIRECTORY) & cl $(FILE_NAME) & $(NAME_PART).exe"
"D:\Programs\VC" should be replaced by the path to Visual Studio Build Tools on your system; the second "VC" and what follows remains. The vcvarsall.bat script with x86 argument (the argument depends on your system) sets the necessary variables and allows the compiler to find its libraries. This script ships with Visual Studio and is adjusted to your system during its installation.
& $(NAME_PART).exe is not necessary, it runs the compiled program.
Note also that I have the directory with cl.exe on PATH.
Changing the lombok version to 1.18.30 resolved my issue. I'm using JDK 17 in my application.
After some reading I found a solution to my issue. So, if anyone has this trouble:
http://10.0.2.2:8000/api/register
npx expo start --tunnel
because your machine and emulator need to be on the same network. And now it's working.
Other Stack Overflow solutions (that didn't work for me or for multiple devs in the responses) advise changing the emulator proxy to 10.0.2.2 or to the machine's IP address.
I found it. We had the WP/Woo cron disabled and ran the cron on our server less often (there were many server spikes, since some WP/Woo cron jobs run with every page load). After enabling it again to run through WP/Woo and going to the order edit page, it then saved to the table. Sigh.
If the spikes return, we'll need to figure out how to get this table working with the WP/Woo crons running on our server.
MSK or Kafka is not a destination option in Amazon Data Firehose. Reference: https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
None of these worked for me. Apparently my Git had to be updated. For Mac, see this link: https://git-scm.com/downloads/mac
If you want to develop a Windows UWP App, create a new JavaScript project in Visual Studio Code and start coding. It is preinstalled, since UWP apps have their own browser. I lost the download link for Visual Studio 2017.
I ran sudo apt install openjdk-21-jre-headless but I didn't know the password.
Using LinkedIn-provided example values:
secret = "iDzHQuN810pKNCHi".encode()
code = "f59fcbe0-d2e2-49cc-8c08-0ec02a2b8b0d".encode()
response = hmac.digest(secret, code, "sha256").hex()
correct = "52ff30198b4e72cdc69849b1634d0b0e78f165cee7771a235b6cc825eb10fbd9"
print(response, correct)
Thanks for the tip!
I had to make one small change because my callback path was "/signin-microsoft"
On the RegEx I modified it to the following:
Regex.Replace(context.RedirectUri, "redirect_uri=(.)+%2Fsignin-", "redirect_uri=https%3A%2F%2Fwww.yourcustomdomain.com%2Fsignin-")
Note the "signin-" instead of "signin-oidc"
This is the solution I ended up developing:
For each operation that I want to classify, I take 25 samples and measure the number of relevant operations. For example, I add 25 elements to the middle of an array list and measure how many memory writes occurred each time. Then I look at the relationship between X (1 to 25), Y (operations), delta Y (change in Y), and delta delta Y (change in change in Y).
For example:
Memory writes when adding an element to the middle of an array list:
+----+----+---------+---------+
| x | y | dy | ddy |
+----+----+---------+---------+
| 1 | 1 | | |
| 2 | 2 | 1.000 | |
| 3 | 2 | 0.000 | -1.000 |
| 4 | 3 | 1.000 | 1.000 |
| 5 | 3 | 0.000 | -1.000 |
| 6 | 4 | 1.000 | 1.000 |
| 7 | 4 | 0.000 | -1.000 |
| 8 | 5 | 1.000 | 1.000 |
| 9 | 13 | 8.000 | 7.000 |
| 10 | 6 | -7.000 | -15.000 |
| 11 | 6 | 0.000 | 7.000 |
| 12 | 7 | 1.000 | 1.000 |
| 13 | 7 | 0.000 | -1.000 |
| 14 | 8 | 1.000 | 1.000 |
| 15 | 8 | 0.000 | -1.000 |
| 16 | 9 | 1.000 | 1.000 |
| 17 | 25 | 16.000 | 15.000 |
| 18 | 10 | -15.000 | -31.000 |
| 19 | 10 | 0.000 | 15.000 |
| 20 | 11 | 1.000 | 1.000 |
| 21 | 11 | 0.000 | -1.000 |
| 22 | 12 | 1.000 | 1.000 |
| 23 | 12 | 0.000 | -1.000 |
| 24 | 13 | 1.000 | 1.000 |
| 25 | 13 | 0.000 | -1.000 |
+----+----+---------+---------+
slope of y = 0.525 (increasing)
slope of dy = -0.026 (about constant)
slope of ddy = 0.000 (about constant)
category: O(n)
The only parameter that has to be tuned in this approach is what it means for slope to be "about 0." I have to use different values for different data structures and algorithms. For example, all of my tests with various lists use a value of +-0.05, meaning that slopes between 0.05 and -0.05 are considered a slope of "about 0." For trees, I had to use 0.03, and for hashmaps I had to use 0.2.
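For concreteness, here is a rough sketch of the classification step described above; the function, the names, and the use of numpy are my own, not the original implementation:

import numpy as np

def classify(ys, tol=0.05):
    # ys[i] is the measured operation count for input size i + 1
    ys = np.asarray(ys, dtype=float)
    dy = np.diff(ys)    # first differences  (delta Y)
    ddy = np.diff(dy)   # second differences (delta delta Y)

    def slope(values):
        # least-squares slope of the values against their index
        idx = np.arange(len(values), dtype=float)
        return np.polyfit(idx, values, 1)[0]

    if abs(slope(ys)) <= tol:
        return "O(1)"    # y is about constant
    if abs(slope(dy)) <= tol:
        return "O(n)"    # y grows, dy is about constant
    if abs(slope(ddy)) <= tol:
        return "O(n^2)"  # dy grows, ddy is about constant
    return "unclassified"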
Obviously this method is not perfect. As others have pointed out, asymptotic complexity is a theoretical problem, not an empirical one.
However, I was able to successfully unit test the performance of most common operations on common data structures. For example, I can show that insert, find, and remove from array lists and linked lists all have the right linear or constant time performance. I've also tested binary search trees, AVL trees, hash maps, binary heaps, adjacency list and adjacency matrix graphs, disjoint set forests, minimum spanning tree algorithms, and sorting algorithms (though this method proved very brittle for sorting).
This method also works when the theoretical performance relies on amortized analysis. For example, adding to the end of an array list still shows constant time performance even when the array needs to be periodically re-allocated. Same for re-hashing hashtables, though I had to raise the bounds for what counts as "about 0" for the slope quite a bit for it to work properly.
Hope this approach helps somebody else!
Any update on adding a proxy in the Camel URI?
Looks like I needed to set publishWebProjects: true
I know this is kinda old, but this page addresses that and other common questions on this topic: https://github.com/dbeaver/dbeaver/wiki/MongoDB#executing-javascript
You have a trailing comma on the third line from the bottom that shouldn’t be there
While checking the SDK documentation in detail for the same purpose, I noticed that the JavaScript-based Web SDK can only be used for enrollment, not for verification. I've attached the specific portion from the SDK document.
You can do
colMeans(type.convert(test, as.is = TRUE))
Fruits - Apples Fruits - Oranges Fruits - Bananas
2.5 4.0 4.0
What do we gain from replicating?
The solution was to use a node, instead of a stage.
def stagesMap = [:]
for (int i = 1; i < 3; i++) {
stagesMap[testName] = {
node('nodeNameOrLabel') { //use node here instead of stage.
script {
someFunctionCall()
}
}
}
}
parallel stagesMap
Based on Eran Zimmerman Gonen's answer, what about this flow graph:
G=(V,E), s.t. V={s,a,b,c,t}, E={(s,a),(a,b),(b,c),(c,t)}.
The capacity of all edges is 1, and so is the flow.
s-(1/1)->a-(1/1)->b-(1/1)->c-(1/1)->t
So, based on the residual graph, I have a min-cut S={s}.
Starting from t I am getting S'={t,c,b,a}.
Looking at the 7th label, V\S' = {s} = S, which means I have only one min-cut, but I have many more: S={s}, S'={s,a}, S''={s,a,b}, S'''={s,a,b,c}
Check out this project. It has more or less everything implemented from the original eShopOnContainers project.
https://github.com/harshaghanta/springboot-eshopOnContainers
WinDbg now supports the SetThreadDescription API, which is the new-ish properly supported way to name threads. See this for details: https://stackoverflow.com/a/43787005/434413