def push(stack, e):
    stack.append(e)

def pop(stack):
    return stack.pop()

def is_empty(stack):
    return len(stack) == 0

def is_palindrome(arr):
    stack = []
    for i in arr:
        push(stack, i)
    for i in arr:
        if pop(stack) != i:
            return False
    return True
I am running sudo 1.9.13p2 on macOS, and sudo has a -l option to do that, so in a bash script:
if sudo -l > /dev/null; then
echo "This is sudoed"
else
echo "This is NOT sudoed"
fi
This page helped me set up PrimeNG 19 and Tailwind CSS 4 and still retain some SCSS:
https://medium.com/@daniel.codrea/setting-up-a-primeng-v19-and-tailwindcss-v4-project-f1b550c8e2d0
I know the question was about the Bitbucket Cloud API, but I'm leaving the answer here for those who, like me, searched for this with a Server installation and found this page.
There's a default-branch endpoint on the repo which serves the default branch name without fetching all the branches:
rest/api/1.0/projects/{project_key}/repos/{repo_slug}/default-branch
Response example:
{
  "id": "refs/heads/master",
  "displayId": "master",
  "type": "BRANCH"
}
Tested on v8.19.
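For reference, a minimal curl sketch of calling that endpoint (the base URL, project key, repo slug, and token variable are placeholders of my own):

curl -s -H "Authorization: Bearer $BITBUCKET_TOKEN" \
  "https://bitbucket.example.com/rest/api/1.0/projects/MYPROJ/repos/my-repo/default-branch"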
It's not working on my image. The watermark is not properly removed, and it also blurs the image sometimes.
As of today, I was able to display my plot while running in the debugger when I called plt.show()
on the debug console with a breakpoint in my plotting code.
My versions of stuff:
I believe this has been answered in Modify Cas overlay 7.1.3 to add custom RestController
Please note especially the last comment from Petr Bodnár, Jan 21 at 8:17
You can try either of the following:
I have the same problem. Did you manage to solve it?
It turns out that the preferred solution is to use apt
(docs https://github.com/GoogleContainerTools/rules_distroless/blob/main/docs/apt.md).
Not sure if you've figured it out yet, but AWS OpenSearch should work fine as long as you're using the elasticsearch gem < 7.14. I would also stay on OpenSearch 1.x.
Then find a different map, as you said. Your question isn't relevant, doesn't benefit SO at all, and can be answered with a bit of googling.
This issue usually happens when using Kendo UI for jQuery in an Angular project. Our team is looking into it and hopes to find a solution to make the Kendo jQuery package work with the latest Angular versions. However, we can't guarantee a fix, as this integration isn't officially supported by the Angular framework, Kendo UI for Angular, or Kendo UI for jQuery.
I am more concerned about exposing your AWS access and secret keys to the public!
NEXT_PUBLIC_AWS_ACCESS_KEY_ID
NEXT_PUBLIC_AWS_SECRET_ACCESS_KEY
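Anything prefixed with NEXT_PUBLIC_ is inlined into the client bundle, so anyone can read it in the browser. A minimal sketch of the usual fix (the route path and bucket env var are illustrative names of my own): drop the prefix and keep the AWS calls in a server-side API route.

// pages/api/upload.ts - hypothetical route; this code runs only on the server
import type { NextApiRequest, NextApiResponse } from "next";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// No NEXT_PUBLIC_ prefix: these values never reach the client bundle
const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET!, // hypothetical bucket env var
      Key: "example.txt",
      Body: String(req.body ?? ""),
    })
  );
  res.status(200).json({ ok: true });
}

The client then calls fetch('/api/upload') and the keys never leave the server.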
For exact matches, a hashmap is the winner, but there is setup time.
For range-based matches, binary search is better, but the data must be sorted beforehand.
Source: https://machinelearningx.com/algorithms/binary-search-hashmap-linear-search
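A minimal Python sketch of the trade-off described above (the data is illustrative): build a dict once for O(1) exact lookups, or sort once and use bisect for O(log n) range queries.

import bisect

data = [("b", 2), ("a", 1), ("d", 4), ("c", 3)]

# Exact matches: one-time O(n) setup, then O(1) average per lookup
index = {key: value for key, value in data}
assert index["c"] == 3

# Range matches: one-time O(n log n) sort, then O(log n) per query
keys = sorted(key for key, _ in data)
lo = bisect.bisect_left(keys, "b")
hi = bisect.bisect_right(keys, "c")
assert keys[lo:hi] == ["b", "c"]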
The following answer on GitHub presented a relatively easy way to do the transfer; I couldn't find anything better:
Moving keys that are encrypted using the default mechanism is probably something that will never be supported / documented because of how fragile and error-prone it is. The easiest and most fool-proof way to migrate a live web app would be what @blowdart suggests: configure the Data Protection system to use the file system as the key repository, and also configure it to use an X.509 certificate to protect keys at rest. You can even do this using a console application and watch the key files get dropped on disk. Then change your web app's startup config to use the same repository / protection mechanism. After a few days (default 48 hours) the key rotation policy will kick in and the web application will start using the new keys on disk rather than the old keys from the registry. (The old keys will still be able to decrypt existing auth tokens, but they won't be used to issue new auth tokens.) Wait a few more days to make sure that all existing logged-on users have had their auth tokens refreshed. Then you can move the web application - keys and all - to the new machine. You'll lose the ability to decrypt with the old keys, but this shouldn't result in service interruption since all logged-on users should have had their auth tokens refreshed over the waiting period.
If you're hitting the k8s service directly then it will round robin the requests (k8s default), and as your deployment is not sidecar injected, you can't use load balancing algorithms from the DR to configure the client.
When the billing request flow returns success, call your function to set up the payment. You don't need any further customer interaction so just go ahead.
Revert the commit locally, discard the changes, and force push to the remote: git push -f
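A minimal sketch of the full sequence (the branch name is illustrative; only force-push branches you own):

# Drop the last commit locally and discard its changes
git reset --hard HEAD~1
# Overwrite the remote branch with the rewritten local history
git push -f origin my-branch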
Generally it's really bad practice to store any authentication passwords in plain text. Consider hashing them if you plan to store them in a MySQL/MariaDB backend. Both PHP and MySQL have functions to store and verify hashed data - don't be tempted to just stick them in a table in the clear.
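A minimal PHP sketch using the built-in password functions (the example password is obviously illustrative):

<?php
// Hash once at registration; store only the hash (a VARCHAR(255) column is plenty)
$hash = password_hash('s3cret-example', PASSWORD_DEFAULT);

// Verify at login; password_verify reads the salt embedded in the hash
var_dump(password_verify('s3cret-example', $hash)); // bool(true)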
Fetching user skills, education, and positions from the LinkedIn API has become challenging due to the deprecation of r_fullprofile and r_emailaddress scopes. Currently, only r_liteprofile and r_emailaddress are available, which provide limited access.
For user skills and positions, you may need to explore alternative API solutions or check LinkedIn's latest documentation for updated endpoints.
I have the same problem as well. Have you found a solution for this? Thank you!
I created an account just to update this and say that you no longer have to despair if you need to update a connector: AWS added this feature. https://docs.aws.amazon.com/msk/latest/developerguide/mkc-update-connector.html
For JetBrains Rider (2024.3.5) on Windows (11) the only thing that worked for me was this Plugin:
In Rider go to Settings (Ctrl + Alt + S) > Plugins > Marketplace > Search 'BrowseWordAtCaret' > Install.
After installing, check that the plugin is enabled, then go to Settings > Editor > Appearance > scroll down to 'Browse Word at Caret' and check all the options.
(It didn't work without the step above for me)
Then use Ctrl + Alt + Up/Down in the editor to cycle between highlighted usages.
You can change the keymap in Settings > Keymap > Plugins > BrowseWordAtCaret.
Do you request memory anywhere? I don't see a helm flag like requestsMemory in your config. I guess you have to give the pod some memory. In your Docker config you only limited how much memory the application may use, but the pod does need some to start your application, so I guess your pod is not configured correctly.
In a way, yes, you cannot access or edit the DNS records because the domain is not yours. The platform is "lending" a specific subdomain for use.
If you'd like to edit the DNS to enable Google Search Console, you need to use a custom domain. That way you can have a domain that you fully control. If you use Vercel's Nameservers, you should be able to edit any DNS records directly in your dashboard.
Same here with maven dependency:
Maybe this post can help you: https://techcommunity.microsoft.com/blog/analyticsonazure/workarounds-for-maven-json-smart-2-5-2-release-breaking-azure-databricks-job-dep/4377517
Good luck!
Have they added a Shortcut or an Extension to open the Microsoft Documentation online yet? 3 years later.
I'm using VS Code and the C# Dev Kit extension.
If you want a permanent fix, run:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
defined('ABSPATH'): This part checks if the constant ABSPATH is defined. In WordPress, ABSPATH is a constant that represents the absolute path to the WordPress directory. It is typically defined in the main WordPress configuration file (wp-config.php).
The line effectively serves as a security measure. It ensures that the script can only be executed if the ABSPATH constant is defined. If someone tries to access the script directly (for example, through a web browser) and ABSPATH is not defined, the script will terminate immediately, preventing unauthorized access or execution of the code.
In summary, this line of code is a safeguard to ensure that the script is being run in the context of a WordPress environment, and it prevents direct access to the script, which could lead to security vulnerabilities.
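For reference, the guard being described is usually written like this at the top of a plugin or theme file:

<?php
// Stop right here if WordPress core did not load this file
if (!defined('ABSPATH')) {
    exit;
}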
For the record, this might be helpful for a noob like me in Reactor testing.
I struggled a lot to figure out how to test an ExchangeFilterFunction. It really took me many hours: I tried all the suggestions from Stack Overflow, read the complete Project Reactor documentation about testing, and so on. I even tried using AI, but it failed absolutely spectacularly.
And just before I wanted to give up, I stumbled upon this gem. It is a unit test from the Spring WebFlux source code and it really opened my eyes.
ExchangeFilterFunctionsTests.java
I speak about this particular one:
@Test
void basicAuthenticationUsernamePassword() {
ClientRequest request = ClientRequest.create(HttpMethod.GET, DEFAULT_URL).build();
ClientResponse response = mock();
ExchangeFunction exchange = r -> {
assertThat(r.headers().containsHeader(HttpHeaders.AUTHORIZATION)).isTrue();
assertThat(r.headers().getFirst(HttpHeaders.AUTHORIZATION)).startsWith("Basic ");
return Mono.just(response);
};
ExchangeFilterFunction auth = ExchangeFilterFunctions.basicAuthentication("foo", "bar");
assertThat(request.headers().containsHeader(HttpHeaders.AUTHORIZATION)).isFalse();
ClientResponse result = auth.filter(request, exchange).block();
assertThat(result).isEqualTo(response);
}
Once I put this in my codebase and started playing around with it (I had to adapt it a little and make it compile, probably I have an older Spring version), I could debug and modify it and very soon I was able to test my own implementation of an ExchangeFilterFunction.
This is such an elegant way of testing in isolation just the mutation done on a request inside an ExchangeFilterFunction. I will give the full example: I need to test an ExchangeFilterFunction which reads some headers from a supplier and adds them to the outgoing request.
@Test
void should_propagate_headers_from_supplier_to_outgoing_request() {
ClientRequest request = ClientRequest.create(HttpMethod.GET, URI.create("https://example.com")).build();
ClientResponse response = Mockito.mock();
ExchangeFilterFunction exchangeFilterFunction = getPropagateHeadersExchangeFilter(
() -> {
HttpHeaders headers = new HttpHeaders();
headers.putAll(getTestHeaders());
return Mono.just(headers);
});
ExchangeFunction exchange = clientRequest -> {
getTestHeaders().forEach((headerName, headerValues) -> {
assertThat(clientRequest.headers()).containsEntry(headerName, headerValues);
});
return Mono.just(response);
};
ClientResponse result = exchangeFilterFunction.filter(request, exchange).block();
assertThat(result).isEqualTo(response);
}
I have found the answer. We have to configure values for the attributes below as environment variables in the local IDE to establish the connection with the GCC:
DEPLOYMENT_ID
OKTA_CLIENT_SECRET
OKTA_CLIENT_ID
OKTA_AUTH_SERVER_URL
After configuring the above attributes, I am able to run the server and pull the APD products into my local environment to visualize.
Does anyone have another solution for the problem described above by F.Nik? I get the exact same errors as described above, and the recommendation by Morteza does not help me because there is only one instance running in my case (Ubuntu).
Thank you very much for your responses.
Using ORDS 24 also means using Java 21 or higher. Don't forget to use that version for Tomcat.
I installed ORDS 24 on Linux with APEX 24.1 and used Tomcat 9, but I hadn't started Tomcat with that Java 21 version. After fixing that, it worked fine. And yes, Tomcat didn't show any errors.
As I was advised to use lazy initialization for the static variable, I did so and it worked flawlessly; I didn't even have to update the test cases. Thanks a lot!
private static BlobServiceClient BLOB_SERVICE_CLIENT;
public static BlobServiceClient getBlobServiceClient() {
if(BLOB_SERVICE_CLIENT == null){
BLOB_SERVICE_CLIENT = new BlobServiceClientBuilder()
.connectionString(CONNECTION_STRING)
.buildClient();
}
return BLOB_SERVICE_CLIENT;
}
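One caveat worth noting: this getter is not thread-safe; two threads can both see null and build two clients. A minimal sketch of a synchronized variant of the same getter, if that matters in your setup:

private static final Object LOCK = new Object();

public static BlobServiceClient getBlobServiceClient() {
    // Serialize the null check so only one client is ever built
    synchronized (LOCK) {
        if (BLOB_SERVICE_CLIENT == null) {
            BLOB_SERVICE_CLIENT = new BlobServiceClientBuilder()
                    .connectionString(CONNECTION_STRING)
                    .buildClient();
        }
        return BLOB_SERVICE_CLIENT;
    }
}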
you can return svg in the component this way:
const Svg = () => (
<svg width="320" height="130" xmlns="http://www.w3.org/2000/svg">
<rect width="300" height="100" x="10" y="10" style={{ fill: "rgb(0,0,255)", strokeWidth: 3, stroke: "red" }} />
</svg>
);
export default Svg;
Fedora users can run the following, as explained here.
sudo dnf install python3.13-freethreading
This will install the interpreter at /usr/bin/python3.13t.
You may use: dbcv_score = dbcv(X, labels, check_duplicates=False)
I'm not quite familiar with the process and am still experimenting, but inspecting the code, I found that it has that parameter as well, and setting it to False seems to have worked for me!
In such a scenario, you probably need a better design or architecture for your project. Writing code that works isn't everything. I can't say how or what to do, because I don't know what your code looks like. And one last piece of advice: these things happen frequently to beginners, so don't be discouraged.
I've found the solution at https://github.com/authts/oidc-client-ts/issues/1650. There is a parameter which can be set in the UserManagerSettings:
const settings: UserManagerSettings = {
authority: ...,
client_id: ...,
redirect_uri: ...,
revokeTokenAdditionalContentTypes: ['text/plain; charset=utf-8'],
};
I ran the container in Podman; restarting the machine fixed this exception for me.
def create():
    count = int(input("enter number of employees"))  # how many records to enter
    employees = {}  # build the dict once, outside the loop
    for i in range(count):
        code = int(input("code:"))
        name = input("name:")
        salary = int(input("salary:"))
        employees[code] = {'code': code, 'name': name, 'salary': salary}
    print(employees)
    return employees

employees = create()

def search():
    print("employees more than 80k: ")
    for code, emp in employees.items():
        if emp["salary"] > 80000:
            print(f"code: {code}, name: {emp['name']}, salary: {emp['salary']}")

search()
You can update the linker file by adding "(READONLY)" to these sections:
.fini_array (READONLY) :
.init_array (READONLY) :
.preinit_array (READONLY) :
.ARM (READONLY) :
.ARM.extab (READONLY) :
If you forked or cloned, check package.json for the repository property and the package name. They might still refer to the old origin. I had the same error and fixed it by updating the repository and the name of the package.
I got this error related to a <s:select> on one of my JSP pages. The error went away after I corrected the "value" attribute.
It sounds like what you're looking for is the SnackBar widget. https://docs.flutter.dev/cookbook/design/snackbars
Hello @user688291, I have the same problem even though I put the AID into the Info.plist; I don't know why.
You need to embed fonts in the exported PDF that support the specific character. Check the official documentation for more details:
https://www.telerik.com/kendo-angular-ui/components/pdf-export/embedded-fonts
https://www.telerik.com/kendo-angular-ui/components/grid/export/pdf-export#embedding-custom-fonts
The error is not resolved with just "flutter clean" and "flutter pub get".
The solution was to uninstall and reinstall the HID Device Driver under Human Interface Devices in the Device Manager. This solved my problem after 2 hours of trials.
input[data-autocompleted] {
background-color: transparent !important;
}
input:-webkit-autofill,
input:-webkit-autofill:focus {
transition: background-color 0s 0s, color 0s 0s;
transition-delay: calc(infinity * 1s);
}
For those using VS Code: I deleted the Microsoft.VisualStudio.FallbackLocation.config file present at
C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.FallbackLocation.config
foreachPartition works at the executor level, so you will see detailed logs in the executor logs rather than the driver logs. Worth checking the executor logs. Also, do you see the data getting updated in the PostgreSQL table?
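For illustration, a minimal PySpark sketch of why the logs show up there (the DataFrame is a stand-in for your own):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(100)

def write_partition(rows):
    # Runs on an executor: print() lands in the executor's stdout, not the driver console
    batch = list(rows)
    print(f"writing {len(batch)} rows from this partition")
    # open one PostgreSQL connection per partition here and write `batch`

df.foreachPartition(write_partition)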
The original question was asked in 2020; today, SonarQube correctly checks short-circuit conditionals in Java.
Here is the issue tracking this situation, which was resolved in October 2024:
https://sonarsource.atlassian.net/issues/SONARJAVA-496
And a screenshot showing that it does not raise any issue now:
You can do this by changing the 'measure' from all logs to a specific attribute:
DataDog will then allow you to select an aggregation like an average on the attribute:
Google Cloud API keys must be in your front-end code; even the Google Maps sample code has an unprotected API key, and it is safe to include as long as it is properly restricted.
Sample code: https://github.com/googlemaps-samples/codelab-maps-platform-101-react-js/blob/main/solution/src/app.tsx
You can see more ways to restrict your API key access here: https://developers.google.com/maps/documentation/javascript/get-api-key#restrict_key - make sure to restrict your API key to the specific referrer URLs.
You can also implement rate limiting: https://developers.google.com/maps/documentation/javascript/usage-and-billing#set-caps
I had the exact same problem. See my problem and solution here: ASP.NET Core 8 Web API : JWT token always invalid
TL;DR - Installing the package Microsoft.IdentityModel.JsonWebTokens solved the problem for me.
I know this thread is old, but in case someone is still struggling with this, I'll share my solution. For fun, I was looking for a possible workaround and came up with an idea—I can check what user-agent Outlook uses and then block access to the page if it performs prefetching with that user-agent.
I used a simple script:
file_put_contents("SERVER PATH TO PREFETCH LOG FILE",
date("Y-m-d H:i:s") . " - User-Agent: " . ($_SERVER['HTTP_USER_AGENT'] ?? 'None') . PHP_EOL, FILE_APPEND);
Thanks to this, I found out that Outlook doesn’t send a user-agent at all. So, it was enough to block access to all "non-browsers" (i.e., requests without a user-agent). In my case, I placed the following code at the beginning of my PHP file:
if (!isset($_SERVER['HTTP_USER_AGENT']) || empty($_SERVER['HTTP_USER_AGENT'])) {
    http_response_code(403);
    exit;
}
All known browsers send a User-Agent, and so far no one has had any issues.
Had the same issue after migrating to Angular 19. It works again after updating to the latest angular version (19.1.6).
ng update @angular/cli@^19 @angular/core@^19
Any updates on this? All the answers are deprecated as of today: 12 February 2025.
Now, on macOS Sequoia 15.3 with Excel v16.94, the whole command line below does not work at all:
Mynumber = "1234"
Myfile = Application.ActiveWorkbook.Path & Application.PathSeparator & "Invoice-" & Mynumber & ".pdf"
ActiveSheet.ExportAsFixedFormat Type:=xlTypePDF, Filename:=Myfile _
    , Quality:=xlQualityMinimum, IncludeDocProperties:=True, IgnorePrintAreas:=False, OpenAfterPublish:=False
From the WS spec:
If the |Sec-WebSocket-Accept| value does not match the expected
value, if the header field is missing, or if the HTTP status code is
not 101, the connection will not be established, and WebSocket frames
will not be sent.
So the client verifies the digest.
Thanks for coming to my ted talk
Another module was using a class named "LogService" thus probably conflicting with the one described above. Removing the class solved the issue.
Is there anyone available to develop a REST middleware to connect multiple devices on to Impinj R420?
Use the following command to view all contents from collections:
db.<collectionName>.find()
for example, assuming a collection named users:
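db.users.find()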
An addition to @Denis' answer: if you're only going to use your scripts in a modern Linux environment, you might consider using getopt instead of getopts, as Linux's getopt has additional support for --long-flags!
Here's a working proof of concept modified from this Wikipedia article:
#!/bin/bash
# Formatting
bold=$(tput bold)
normal=$(tput sgr0)
function test_arg_parse(){
flag_set=0
flag_with_arg=""
opt_flag=""
# Exit if arg parsing fails
args=$(getopt --options 'fF:O::' \
--longoptions 'flag,flagWithArg:,argument:,optionalFlag::' \
-- "${@}") || exit
eval "set -- ${args}"
while true; do
case "${1}" in
(-f | --flag)
((flag_set++))
shift
;;
(-F | --flagWithArg | --argument)
flag_with_arg=${2}
shift 2
;;
(-O | --optionalFlag)
# handle optional: getopt normalizes it (the argument)
# into an empty string
if [[ -n ${2} ]] ; then
opt_flag=${2}
echo "optional flag is present!"
fi
shift 2
;;
(--)
shift
break
;;
(*)
exit 1 # error
;;
esac
done
remaining_args=("${@}")
echo "args are: \"${args[@]}\""
cat <<EOF
===args===
flag_set is "$bold$flag_set$normal"
flag_with_arg is "$bold$flag_with_arg$normal"
opt_flag is "$bold$opt_flag$normal"
===~~~~===
remaining_args is "$bold${remaining_args[@]}$normal"
EOF
}
### Usages: ###
echo "Test with: $bold--flag -F\"Hello\"$normal"
test_arg_parse --flag -F"Hello"
echo "Test with: $bold--flagWithArg=\"Hola\" -O\"Bird is a Fake\" some_extra_args$normal"
test_arg_parse --flagWithArg="Hola" -O"Bird is a Fake" some_extra_args
echo "You can chain some of the short options together (but why tho?)"
echo "Test with: $bold-fF\"Hello!\" -O\"Bird is a Fake\" some_extra_args$normal"
test_arg_parse -fF"Hello!" --optionalFlag="Bird is a Fake" some_extra_args
echo -e "\n**NOTE: short flags and their argument must not be separated by any white space:"
echo "Test with: $bold--flagWithArg \"This is fine\" -O \"This is not :(\"$normal"
test_arg_parse --flagWithArg "This is fine" -O "This is not :("
Test with: --flag -F"Hello"
args are: " --flag -F 'Hello' --"
===args===
flag_set is "1"
flag_with_arg is "Hello"
opt_flag is ""
===~~~~===
remaining_args is ""
Test with: --flagWithArg="Hola" -O"Bird is a Fake" some_extra_args
optional flag is present!
args are: " --flagWithArg 'Hola' -O 'Bird is a Fake' -- 'some_extra_args'"
===args===
flag_set is "0"
flag_with_arg is "Hola"
opt_flag is "Bird is a Fake"
===~~~~===
remaining_args is "some_extra_args"
You can chain some of the short options together (but why tho?)
Test with: -fF"Hello!" -O"Bird is a Fake" some_extra_args
optional flag is present!
args are: " -f -F 'Hello!' --optionalFlag 'Bird is a Fake' -- 'some_extra_args'"
===args===
flag_set is "1"
flag_with_arg is "Hello!"
opt_flag is "Bird is a Fake"
===~~~~===
remaining_args is "some_extra_args"

**NOTE: short flags and their argument must not be separated by any white space:
Test with: --flagWithArg "This is fine" -O "This is not :("
args are: " --flagWithArg 'This is fine' -O '' -- 'This is not :('"
===args===
flag_set is "0"
flag_with_arg is "This is fine"
opt_flag is ""
===~~~~===
remaining_args is "This is not :("
See also: this post
So I couldn't figure it out on my own, but thanks to the initial response here and then feeding the code to ChatGPT, I got some results. It looks quite different now, but it actually works both ways, reading from the JSON and the HTML file respectively.
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class HttpServer implements Runnable {
private final int port;
// private final ExecutorService executor = Executors.newFixedThreadPool(5);
public HttpServer(final int port) {
this.port = port;
}
public void run() {
try (var serverSocket = new ServerSocket(port)) {
System.out.println("Server started on port " + port);
/* while (true) {
var socket = serverSocket.accept();
executor.submit(() -> handleClient(socket));
}*/
var socket = serverSocket.accept();
handleClient(socket);
} catch (IOException e) {
e.printStackTrace();
}
}
private void handleClient(Socket socket) {
try (socket;
var inputStream = new BufferedReader(new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
var outputStream = new DataOutputStream(socket.getOutputStream())) {
String line;
int contentLength = 0;
while (!(line = inputStream.readLine()).isBlank()) {
System.out.println("Header: " + line);
if (line.toLowerCase().startsWith("content-length:")) {
contentLength = Integer.parseInt(line.split(":")[1].trim());
}
}
if (contentLength > 0) {
char[] buffer = new char[contentLength];
inputStream.read(buffer, 0, contentLength);
System.out.println("Received body: " + new String(buffer));
}
Path filePath = Path.of("src/main/resources/site.html");
byte[] body = Files.readAllBytes(filePath);
outputStream.write((
"HTTP/1.1 200 OK\r\n" +
"Content-Type: text/html\r\n" +
"Content-Length: " + body.length + "\r\n" +
"\r\n"
).getBytes(StandardCharsets.UTF_8));
outputStream.write(body);
outputStream.flush();
} catch (IOException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Thread(new HttpServer(8082)).start();
}
}
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import static java.net.http.HttpRequest.BodyPublishers.ofFile;
public class HttpClientRunner {
public static void main(final String[] args) throws IOException, InterruptedException {
var httpClient = HttpClient.newBuilder()
.version(HttpClient.Version.HTTP_1_1)
.build();
Path jsonFilePath = Path.of("src/main/resources/example.json");
var request = HttpRequest.newBuilder()
.uri(URI.create("http://localhost:8082"))
.header("Content-Type", "application/json")
.POST(ofFile(jsonFilePath))
.build();
var response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println("Response headers: " + response.headers());
System.out.println("Response body: " + response.body());
}
}
I have a very similar situation: when we execute a stored procedure in Snowflake, the result does not appear in Data Factory. For me it happens in a Copy activity inside a ForEach loop.
Did you find any solution?
It seems I have found an answer. It may seem strange, but if I remove the * here:
Error *OperationError `json:"error,omitempty" graphql:"... on Error"`
the error is gone. Anyway, thanks everyone for the commitment.
I spent a while working on this, and just found a solution.
I selected "Run only when user is logged on" (under the General tab of the task's properties).
I think this is related to Task Scheduler using a headless environment when you select "Run whether user is logged on or not", which caused issues for xlwings.
I cannot build a surfacePoint data structure; the meshlib documentation confused me. I only have a numpy array with the xyz coordinates of the points.
I had to remove the com directory from the WEB-INF directory and it worked.
I found that using Round() helped:
Sub num_test()
Dim num As Double
num = 5.92427068015303E-10
Debug.Print num
num = Round(num, 11)
Debug.Print num
End Sub
Did you change your simulation type to "real device"? It's easy to overlook, done that myself a couple of times.
The best way to accomplish this is by creating a specialized image from one of your configured VMs. You can then store it in the Azure Compute Gallery and use it as the source for your VM scale set.
This approach ensures that all machines in the VMSS have the same image and configuration, aligning with Azure best practices.
I would strongly recommend you take a closer look at the following Azure documentation:
Create an image of a VM - https://learn.microsoft.com/en-us/azure/virtual-machines/capture-image-portal
Store or share images - https://learn.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries?tabs=vmsource%2Cazure-cli
Create and use a custom image - https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-custom-image-powershell
Why is the default prop not being set when the parent component doesn't pass that prop? It only works when the parent passes data.
import React, { Fragment } from "react";
import PropTypes from "prop-types";
const Props1= (prop) => {
return (
<Fragment>
<br />
I am child {prop.name}
<br />
{prop.children} {/* Render children */}
</Fragment>
);
}
// Set default value for name if it's not passed
Props1.defaultProps = {
name: "rajuuuu", // Default value for 'name'
};
export default Props1;
It was my fault: the magicmodule repository's pubspec.yaml contained further references to the wrong Git repository. (I did not expect the self-reference.)
If you are here and the above solutions did not work for you, then please check if your newly defined custom class has blank lines after its definition - it needs those lines to work.
The issue was that double quotes were used to specify OPENAI_API_KEY in the .env file. If the key is written in double quotes, e.g.
OPENAI_API_KEY="sk-proj-...................."
it works in the desktop app, but it results in an INVALID_API_KEY error in a docker image.
I removed the double quotes from the .env file and specified the key as follows:
OPENAI_API_KEY=sk-proj-....................
Now it works both in desktop application as well as in docker image.
Thank you!
Construct the URL Manually: https://dev.azure.com/{organization}/{project}/_git/{repository}/branchCompare?baseVersion=GC{commitA}&targetVersion=GC{commitB}&_a=files
You specify the base and target versions in the URL query string.
You can also mix and match version types: GB for a branch name, GT for a tag.
Once you enter the URL and load the page, you will see a full diff view that lists all file changes between the two commits.
XML does allow an option to specify ordered sequence, JSON does not. This may be an option for you if your application is able to switch from JSON to XML.
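For instance, a minimal XSD sketch (the element names are illustrative): xs:sequence constrains the child elements to appear in the declared order, which JSON Schema cannot express for object members.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <!-- Validators reject documents where these children appear out of order -->
        <xs:element name="id" type="xs:string"/>
        <xs:element name="createdAt" type="xs:dateTime"/>
        <xs:element name="total" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>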
Were you able to resolve this issue?
Switch to GET request instead of POST. It still works.
Ah, I just solved it. It seems I needed to do this instead:
app.get("/api/data/:serialNumber", async (req, res): Promise<any> => {
Here is the whole file as it stands now:
import express from "express";
import axios from "axios";
import { BEARER_TOKEN } from './config';
import { URL } from './config';
const app = express();
const PORT = 3000;
app.use(express.static("public"));
interface SerialNumberParams {
serialNumber: string;
}
app.get("/api/data/:serialNumber", async (req, res): Promise<any> => {
try {
const serialNumber = req.params.serialNumber;
if (!serialNumber || serialNumber.length < 5) {
return res.status(400).json({ error: "Invalid serial number" });
}
const API_URL = `${URL}${serialNumber}/events?page=1`;
const response = await axios.get(API_URL, {
headers: { Authorization: `Bearer ${BEARER_TOKEN}` }
});
return res.json(response.data);
} catch (error: any) {
return res.status(500).json({ error: error.response?.data || "Error fetching data" });
}
});
app.listen(PORT, () => {
console.log(`Server running at http://localhost:${PORT}`);
});
This is from my notes on the topic of atomic modification order and sequentially consistent operation order. Please suggest corrections if you find something off the mark.
In the paper P0668R5: Revising the C++ memory model and in this question, a specific execution of the presented code snippet is discussed. This execution was not legal pre-C++20 but is allowed since C++20. In this execution, the way we can derive the modification order of y, as I see it, is as follows.
Quoting the following for reference.
If an operation A that modifies an atomic object M happens before an operation B that modifies M, then A is earlier than B in the modification order of M.
If a side effect X on an atomic object M happens before a value computation B of M, then the evaluation B takes its value from X or from a side effect Y that follows X in the modification order of M.
(x) writes a value that is read by (1), and so (x) synchronizes with (1). (1) is also an RMW operation. So according to intro.races/15, (x) is before (1) in the modification order of y.
(1) happens before y.load(relaxed), yet that load doesn't read the value written by (1); it instead reads a value written by (2). So according to intro.races/18, (1) is before (2) in the modification order of y.
From the above ordering requirements, the modification order of y in this execution turns out to be (x) -> (1) -> (2), corresponding to the value sequence 0, 1, 2, 3.
Further, to derive the sequentially consistent operations order, in addition to the above observations, we can observe the following.
Quoting the following for reference.
There is a single total order S on all memory_order::seq_cst operations ... if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S.
An atomic operation A on some atomic object M is coherence-ordered before another atomic operation B on M if ... A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M.
for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, ... if A and B are both memory_order::seq_cst operations, then A precedes B in S.
(2) strongly happens before (3). So according to atomics.order/4, (2) precedes (3) in the total order S of seq_cst operations.
(3) is coherence-ordered before (4). So according to atomics.order/4.1, (3) precedes (4) in S.
Combining all the above ordering requirements, the sequentially consistent operation order turns out to be (1) -> (2) -> (3) -> (4).
Is there any better solution for this question? I was also trying to call mapRef.current.resize() when changing the width of the sidebar, but it can only resize the map to the correct size after the width transition ends. So the blank part will be visible for the duration of the transition.
As mentioned in the following post, path aliases are an old way of replacing relative imports in monorepos. Path aliases are a purely compile-time construct; to run the application, you will need a runtime plugin.
With workspace concepts coming in pnpm/yarn/etc., it is recommended to use workspaces + project references, as mentioned in the following post:
https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-scheduler.html
I think my problem was caused by the private node pool on GKE; my Jenkins agent was created on the private pool and had no internet access.
We have to get the dimensions first using the get API, for example let windowWidth = Dimensions.get('window').width. After this we can use this variable (windowWidth) in a ternary or an if condition, as you wish. We also have to import Dimensions; it comes from react-native.
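A minimal React Native sketch of the idea (the 768 breakpoint is an arbitrary choice of mine):

import { Dimensions, Text } from 'react-native';

const windowWidth = Dimensions.get('window').width;

// Pick a layout based on the current window width
export default function ResponsiveLabel() {
  return <Text>{windowWidth > 768 ? 'wide layout' : 'narrow layout'}</Text>;
}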
Stream is a great option for building a chat app in Flutter, offering real-time messaging, scalability, and rich UI components. However, you might also want to check out this Flutter Chat SDK.
An example of DOI extraction from a PDF file using pdftotext, grep and sed:
pdftotext -f 1 -l 1 my.pdf - 2>/dev/null | grep -o -a -m 1 -E '\b10\.[0-9]{3,}([.][0-9]+)*/([[:graph:]])+\b' | sed -e 's/[^[A-Za-z0-9]*$//' -e 's/ﬁ/fi/g' -e 's/ﬂ/fl/g'
Have you tried importing your glb into https://gltf.pmnd.rs/ to see what the glb's structure looks like after export?
I didn't run your code, but I can say that the error occurs because your code creates a static field initialized at class loading time, which can cause issues with mocking frameworks and unit testing. The static initialization happens before test methods run, making it difficult for mocking frameworks to intercept and mock the BlobServiceClient creation.
Try Lazy initialization and see if that works.
In Magento, a cart is considered abandoned when a customer adds items to their shopping cart but does not complete the purchase within a specified time.
The exact time limit for when a cart is considered abandoned is not set by default in Magento's core functionality. However, this feature is typically managed by third-party extensions or custom code.
Key Points: Default Behavior: Magento itself does not have a built-in setting to define when a cart is abandoned. It relies on extensions or custom logic to determine this.
Extensions: Many Magento extensions (e.g., for cart recovery or email reminders) allow you to set a time limit for cart abandonment. Common timeframes are 1 hour, 24 hours, or 48 hours, depending on the store's preferences.
Custom Logic: If you are implementing custom functionality, you can define the time limit in your code. For example, you might consider a cart abandoned if no activity occurs for 24 hours (see the sketch after this list).
Admin Configuration: If you are using an extension, the time limit is usually configurable in the Magento admin panel under the extension's settings.
Example: If you are using a cart abandonment extension, you might find a setting like:
Abandoned Cart Time Limit: Set the time (e.g., 1 hour, 24 hours) after which the cart is considered abandoned.
How to Check: Go to Stores > Configuration in the Magento admin panel.
Look for the settings related to your cart abandonment extension (if installed).
Configure the time limit as per your business needs.
If you are not using an extension, you would need to implement custom logic to define and track abandoned carts.
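If you go the custom-logic route, a minimal sketch of the idea as a query (assuming Magento's standard quote table and a 24-hour cutoff; verify the schema on your version):

SELECT entity_id, customer_email, updated_at
FROM quote
WHERE is_active = 1          -- cart was never converted to an order
  AND items_count > 0        -- ignore empty carts
  AND updated_at < NOW() - INTERVAL 24 HOUR;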
I solved it by adding cat('\n\n') directly after printing the plot.
DynamoDB doesn't have a batch update API, but you can use TransactWriteItems to update multiple items in one go (up to 100 per request). It's atomic: either all updates succeed or none do, which is perfect for keeping data consistent.
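A minimal boto3 sketch (the table name, key schema, and attribute are illustrative):

import boto3

client = boto3.client("dynamodb")

# All three updates commit together or not at all (max 100 items per call)
client.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "orders",
                "Key": {"pk": {"S": f"ORDER#{i}"}},
                "UpdateExpression": "SET #s = :s",
                "ExpressionAttributeNames": {"#s": "status"},
                "ExpressionAttributeValues": {":s": {"S": "shipped"}},
            }
        }
        for i in range(3)
    ]
)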