Finally, after trying some other solutions, I found something that works on my side. Here's the code, with help from another Stack Overflow answer.
dynamic parsed = JsonConvert.DeserializeObject(MyJSON);
var jObj = (JObject)parsed;
foreach (JToken token in jObj.Children())
{
    if (token is JProperty)
    {
        var prop = token as JProperty;
        Console.WriteLine("hello {0}={1}", prop.Name, prop.Value);
    }
}
The solution came from another Stack Overflow question: dynamic JContainer (JSON.NET) & Iterate over properties at runtime
Question : Deprecated: Assert\that(): Implicitly marking parameter $defaultPropertyPath as nullable is deprecated, the explicit nullable type must be used instead in
To solve this issue, follow the steps below.
Open the php.ini file and add this line: error_reporting = E_ALL & ~E_DEPRECATED
Restart the Apache server on Windows.
This suppresses all the Deprecated warnings in phpMyAdmin. Note that this only hides the warning; the proper long-term fix is for the code to declare the parameter with an explicit nullable type (e.g. ?string $defaultPropertyPath = null) instead of relying on an implicit null default.
I also have the same problem. I have downgraded my livewire version, but the problem still persists.
This seems very similar to this answer from 2021:
Consul API, Retrieve services instances list from all nodes
Executive summary:
Consul doesn't have this feature. The only solution is to fetch all services, then filter the list yourself
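To make the "fetch everything, then filter yourself" approach concrete, here is a minimal Python sketch. The instance data below is made up to mimic the shape of Consul's /v1/catalog/service/&lt;name&gt; response; against a real agent you would GET that endpoint (e.g. from http://localhost:8500) and filter the returned JSON the same way:

```python
import json

# Made-up sample mimicking Consul's catalog response shape,
# so the client-side filtering step can be shown without a live agent.
SAMPLE_INSTANCES = [
    {"Node": "node-1", "ServiceName": "web", "ServiceAddress": "10.0.0.1", "ServicePort": 80},
    {"Node": "node-2", "ServiceName": "web", "ServiceAddress": "10.0.0.2", "ServicePort": 80},
    {"Node": "node-2", "ServiceName": "db", "ServiceAddress": "10.0.0.2", "ServicePort": 5432},
]

def instances_for(service_name, instances):
    """Client-side filter: keep only the instances of one service."""
    return [i for i in instances if i["ServiceName"] == service_name]

web = instances_for("web", SAMPLE_INSTANCES)
print(json.dumps(web, indent=2))
```

The same pattern extends to filtering by node, tag, or address, since Consul hands you the full instance list and leaves the slicing to the client.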
When using react-router to create a single page app (SPA), you usually have to make a change to the server to make it serve the files correctly. Otherwise the server will try to respond with a file that matches your literal sub route address instead of your main index file. Hence the 404.
GitHub Pages offers limited configuration for SPAs, but it looks like some people have found workarounds, including modifying the 404 template file.
See this page for some example configurations when setting up your app: page not found - react/vite app not routing correctly on github pages
You can also reference this repo as an example SPA: https://github.com/rafgraph/spa-github-pages
I was able to figure out the issue. There was a section of my code where I was calling state and stringifying the JSON object, which removed the actual function.
let options = JSON.parse(JSON.stringify(this.state.options));
I updated my code to remove the stringify:
let options = this.state.options
It's working as intended now.
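The same pitfall exists in any JSON round-trip: JSON can only carry data, never functions. A small Python illustration (an analogy only; the original issue is JavaScript's JSON.stringify silently dropping function-valued properties, whereas Python raises an error):

```python
import json

state = {"label": "ok", "on_click": lambda: "clicked"}

# Functions are not JSON-serializable; a deep copy via JSON either
# fails loudly (Python raises TypeError) or drops them silently (JavaScript).
try:
    json.loads(json.dumps(state))
except TypeError as exc:
    print("round-trip failed:", exc)

# Copying the reference (like `let options = this.state.options`)
# keeps the function intact.
options = state
print(options["on_click"]())
```

So any "deep copy via stringify/parse" trick is only safe for plain data objects.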
public void swipeLeft(WebElement element) {
    // Get the element's dimensions and coordinates
    int startX = element.getLocation().getX() + (int) (element.getSize().width * 0.8); // 80% from the left
    int endX = element.getLocation().getX() + (int) (element.getSize().width * 0.2);   // 20% from the left
    int y = element.getLocation().getY() + (element.getSize().height / 2);             // Center Y of the element

    // Define a PointerInput for gestures
    PointerInput finger = new PointerInput(PointerInput.Kind.TOUCH, "finger");
    Sequence swipe = new Sequence(finger, 1);

    // Move to start position
    swipe.addAction(finger.createPointerMove(Duration.ofMillis(0), PointerInput.Origin.viewport(), startX, y));
    // Press down
    swipe.addAction(finger.createPointerDown(PointerInput.MouseButton.LEFT.asArg()));
    // Move to end position
    swipe.addAction(finger.createPointerMove(Duration.ofMillis(600), PointerInput.Origin.viewport(), endX, y));
    // Release
    swipe.addAction(finger.createPointerUp(PointerInput.MouseButton.LEFT.asArg()));

    // Perform the swipe
    driver.perform(Arrays.asList(swipe));
}
In my case I was using @JsonManagedReference and @JsonBackReference, which are wrong here since those are only for parent-child relationships.
For @ManyToMany use: @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id"). Hope this helps someone.
Figured it out! I downloaded the Royal Elementor Addons plugin, which includes a checkout section, and it allowed me to change the style colors of the checkout (even though I was using the WooCommerce built-in checkout)!
The problem is that you have multiple elements with the same id. The ids "project", "link", and "images" are each used on multiple elements. Unlike a class, an id must be unique to a single element.
Either change your ids to classes, or make each one a unique id.
An unpleasant fix for me was to generate .editorconfig files in all the projects I use. My version of VS is 17.12.2.
First I checked that my settings had not been overwritten:
Then I clicked on the Code Style tab and from there generated an .editorconfig file into the project folder of each of my projects:
This is an ugly solution; I never had to put this inside a project before. These were my local settings, and I don't understand why this file should travel with the project (for git, adding it to .gitignore can be a solution).
To import an ECMAScript Module (ESM) in a TypeScript CommonJS project, you can use load-esm:
import {loadEsm} from 'load-esm';
/**
* Import ES-Module in CommonJS TypeScript module
*/
(async () => {
const esmModule = await loadEsm('esm-module');
})();
UPDATE: I reviewed the thread that was returning and tracked it down to a .dll that is also built by us. Double-checking showed that the _USRDLL and WINDOWS compile definitions were missing. I added these and it is now working as intended.
Here is some additional information I figured out; it seems this is something we will have to live with for a considerable time. https://issues.chromium.org/issues/40254754
The key is to pivot your data and do a Matrix report.
I used to work with ORACLE, where a primary key with two columns worked. Why not here too? The claim "A table can never contain two primary keys. – Luuk, commented 3 January 2021 at 15:32"
seems somewhat dubious.
Possible solution:
First define a PK. Log into the database via phpMyAdmin and, in the table view, open the indexes. In the PK you can then add a second column to the primary key. It works! Have fun.
I have this same problem. Curious if a solution, explanation or deeper understanding was ever gained.
searchSimilar in Spring Data Elasticsearch tries to use Elasticsearch's similarity-search features, but entities have to be correctly mapped for query creation to work. Some of the fields or relationships saved in Items, particularly @OneToMany, @ManyToOne, or @ElementCollection, are relational and therefore incompatible with Elasticsearch; this shows up for itemindex especially when its mapping mismatches what Items actually contains. Fields that support similarity-related queries may be missing or incorrectly mapped in the index, so consider improving how the index mappings are defined to support searchSimilar. Also, if you use JpaRepository and ElasticsearchRepository together, there may be conflicts: when an entity's lifecycle differs between these repositories, it can cause errors when saving or searching. Thus it's key to keep the repositories separated, with only the necessary operations delegated to each.
Simply call @viteReactRefresh
before @vite('resources/js/app.jsx')
in your html file.
select * from table limit 800 offset 200; where there is no id to sort on. This skips the first 200 rows and returns the next 800. Note that without an ORDER BY, the row order is not guaranteed.
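To see how LIMIT/OFFSET slices a table, here is a quick sketch using Python's built-in sqlite3 module (the table name and sizes are made up for the demo; an explicit ORDER BY is added so the slice is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(i,) for i in range(1, 1001)])

# OFFSET skips rows, LIMIT caps how many are returned; with no ORDER BY
# the slice you get back is not guaranteed, so sort on something explicit.
rows = conn.execute(
    "SELECT val FROM t ORDER BY val LIMIT 200 OFFSET 800"
).fetchall()

print(rows[0], rows[-1])  # first and last of the final 200 rows
```

So to get the "last 200 rows" of a 1000-row table you would use LIMIT 200 OFFSET 800, and ideally sort descending on a real column instead of relying on insertion order.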
You can check whether the container is running in the listener itself, at the most critical point of message handling for your business logic, and throw an error if it is not running. If I were you, I wouldn't interfere with a thread managed by the container.
I am able to get the DUT to respond to the fabricated packet.
It appears the checksums were computed incorrectly. I updated the TCP checksum computation, since I learned that TCP needs a "pseudo IP" header included in the computation. It's explained here: Calculation of TCP Checksum
I also restructured the code to build it from the inside out (TCP-> IP-> Ethernet) and the DUT responds to the SYN.
I also disabled "Checksum offload" on the Linux PC to be sure, and to allow me to see and verify the checksums.
So the result: it puts me back to my first reported challenge trying to fabricate a test for RFC5961:
The problem is that after the ACK to the SYN, Linux is sending a RST on its own. I learned that this is because the socket has nothing listening or connected. I don't know how to get around that, since it closes before my test even has a chance to issue a recvfrom().
For anyone who is interested, here is the updated code for "main.cpp". It's not meant to be robust or defensive, just to test fabricating a packet from the Ethernet level.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stddef.h>
#include <unistd.h>
#include <netinet/in.h>
#include <netinet/ip.h> // struct ip and IP_MAXPACKET (which is 65535)
#include <netinet/in.h> // IPPROTO_RAW, IPPROTO_IP, IPPROTO_TCP, INET_ADDRSTRLEN
#define __FAVOR_BSD // Use BSD format of tcp header
#include <netinet/tcp.h> // struct tcphdr
#include <arpa/inet.h> // inet_pton() and inet_ntop()
#include <errno.h>
#include "Packet.h"
int BuildEthernetHdr(unsigned char **buffer, uint8_t *src_mac, uint8_t *dst_mac);
int BuildIPHdr(unsigned char **buffer, const char *src_ip, const char *dst_ip);
int BuildTCPHdr(unsigned char **buffer, const char *, const char *);
uint16_t checksum (uint8_t *addr, int len);
unsigned char buffer[2048];
int main() {
int tcplen, iplen, maclen, len;
unsigned char *eth, *tcp, *ip, *pkt;
CPacket packet;
uint8_t srcMac[6];
uint8_t dstMac[6];
tcplen = BuildTCPHdr(&tcp, "192.168.1.211","192.168.1.94");
iplen = BuildIPHdr(&ip, "192.168.1.211","192.168.1.94");
// Know your MAC addresses...
memcpy(srcMac, "\x60\xa4\x4c\x63\x4d\x9e", 6);
memcpy(dstMac, "\xa4\x9b\x13\x00\xfe\x0e", 6);
maclen = BuildEthernetHdr(&eth, srcMac, dstMac);
packet.Initialize();
pkt = buffer;
memcpy(pkt,eth, maclen);
pkt += maclen;
memcpy(pkt,ip, iplen);
pkt += iplen;
memcpy(pkt,tcp, tcplen);
pkt += tcplen;
len = pkt - buffer;
packet.SendMessage(buffer, len);
free(tcp);
free(ip);
free(eth);
packet.Cleanup();
return EXIT_SUCCESS;
}
#define IP4_HDRLEN 20
#define TCP_HDRLEN 20
#define ETH_HDRLEN 14
int BuildEthernetHdr(uint8_t **buffer, uint8_t *src_mac, uint8_t *dst_mac) {
ETHERHDR * ethhdr;
ethhdr = (ETHERHDR * )malloc(sizeof(ETHERHDR));
memcpy(ethhdr->srcMac, src_mac,6);
memcpy(ethhdr->dstMac, dst_mac,6);
ethhdr->etherType = htons(0x0800);
*buffer = (uint8_t*) ethhdr;
return sizeof(ETHERHDR);
}
int BuildIPHdr(uint8_t **buffer, const char *src_ip, const char *dst_ip) {
struct ip *iphdr;
int status;
unsigned int ip_flags[4];
iphdr = (struct ip*) malloc(sizeof(struct ip));
memset(iphdr,0,sizeof(struct ip));
iphdr->ip_hl = IP4_HDRLEN / sizeof (uint32_t);
// Internet Protocol version (4 bits): IPv4
iphdr->ip_v = 4;
// Type of service (8 bits)
iphdr->ip_tos = 0;
// Total length of datagram (16 bits): IP header + TCP header
iphdr->ip_len = htons (IP4_HDRLEN + TCP_HDRLEN);
// ID sequence number (16 bits): unused, since single datagram
iphdr->ip_id = htons (0);
// Flags, and Fragmentation offset (3, 13 bits): 0 since single datagram
// Zero (1 bit)
ip_flags[0] = 0;
// Do not fragment flag (1 bit)
ip_flags[1] = 1;
// More fragments following flag (1 bit)
ip_flags[2] = 0;
// Fragmentation offset (13 bits)
ip_flags[3] = 0;
iphdr->ip_off = htons ((ip_flags[0] << 15)
+ (ip_flags[1] << 14)
+ (ip_flags[2] << 13)
+ ip_flags[3]);
// Time-to-Live (8 bits): default to maximum value
iphdr->ip_ttl = 64;
// Transport layer protocol (8 bits): 6 for TCP
iphdr->ip_p = IPPROTO_TCP;
// Source IPv4 address (32 bits)
if ((status = inet_pton (AF_INET, src_ip, &(iphdr->ip_src))) != 1) {
fprintf (stderr, "inet_pton() failed for source address.\nError message: %s", strerror (status));
exit (EXIT_FAILURE);
}
// Destination IPv4 address (32 bits)
if ((status = inet_pton (AF_INET, dst_ip, &(iphdr->ip_dst))) != 1) {
fprintf (stderr, "inet_pton() failed for destination address.\nError message: %s", strerror (status));
exit (EXIT_FAILURE);
}
// IPv4 header checksum (16 bits): set to 0 when calculating checksum
iphdr->ip_sum = 0;
iphdr->ip_sum = checksum ((uint8_t*) iphdr, IP4_HDRLEN);
printf("IP Chk %x\n", iphdr->ip_sum);
*buffer = (uint8_t *)iphdr;
return sizeof(struct ip);
}
typedef struct {
uint32_t srcIP[1];
uint32_t dstIP[1];
uint8_t res[1];
uint8_t proto[1];
uint16_t len[1];
} IP_PSEUDO;
uint8_t * PseudoHeader(uint8_t * packet, uint16_t len, uint32_t dst, uint32_t src) {
IP_PSEUDO * iphdr;
memmove(&packet[12], packet, len);
iphdr = (IP_PSEUDO*)packet;
iphdr->dstIP[0] = dst; // 5e = 94
iphdr->srcIP[0] = src; // d3 = 211
iphdr->res[0] = 0;
iphdr->proto[0] = 6;
iphdr->len[0] = htons(len);
return &packet[20];
}
int BuildTCPHdr(uint8_t **buffer, const char * src, const char *dest) {
struct tcphdr *tcphdr;
int optsize = 0;
unsigned int tcp_flags[8];
unsigned char optbuffer[20];
tcphdr = (struct tcphdr *) malloc(sizeof(struct tcphdr));
memset(tcphdr,0,sizeof(struct tcphdr));
if (false) {
// Option length (with itself) value
optbuffer[0] = 2; optbuffer[1] = 4; optbuffer[2] = 5; optbuffer [3] = 0xb4; //Max Seg Size
optbuffer[4] = 4; optbuffer[5] = 2; // SACK permitted
uint32_t time1 = 0x12345678; uint32_t time2 = 0x87654321;
optbuffer[6] = 8; optbuffer[7] = 10; memcpy(&optbuffer[8], &time1, 4); memcpy(&optbuffer[12], &time2, 4);
optbuffer[16] = 1; // NoOp
optbuffer[17] = 3; optbuffer[18] = 3; optbuffer[19] = 7; // Shift Multiplier
optsize = 20;
}
// Source port number (16 bits)
tcphdr->th_sport = htons (32500);
// Destination port number (16 bits)
tcphdr->th_dport = htons (80);
// Sequence number (32 bits)
tcphdr->th_seq = htonl (5);
// Acknowledgement number (32 bits): 0 in first packet of SYN/ACK process
tcphdr->th_ack = htonl (0);
// Reserved (4 bits): should be 0
tcphdr->th_x2 = 0;
// Data offset (4 bits): size of TCP header in 32-bit words
tcphdr->th_off = (TCP_HDRLEN + optsize) / 4;
// Flags (8 bits)
// FIN flag (1 bit)
tcp_flags[0] = 0;
// SYN flag (1 bit): set to 1
tcp_flags[1] = 1;
// RST flag (1 bit)
tcp_flags[2] = 0;
// PSH flag (1 bit)
tcp_flags[3] = 0;
// ACK flag (1 bit)
tcp_flags[4] = 0;
// URG flag (1 bit)
tcp_flags[5] = 0;
// ECE flag (1 bit)
tcp_flags[6] = 0;
// CWR flag (1 bit)
tcp_flags[7] = 0;
tcphdr->th_flags = 0;
for (int i=0; i<8; i++) {
tcphdr->th_flags += (tcp_flags[i] << i);
}
// Window size (16 bits)
tcphdr->th_win = htons (8192);
// Urgent pointer (16 bits): 0 (only valid if URG flag is set)
tcphdr->th_urp = htons (0);
// TCP checksum (16 bits)
uint8_t temp[64];
memset(temp,0,64);
uint32_t ip_src, ip_dest;
inet_pton (AF_INET, src, &ip_src);
inet_pton (AF_INET, dest, &ip_dest);
memcpy(temp,tcphdr,sizeof(struct tcphdr));
PseudoHeader(temp,20, ip_src,ip_dest);
tcphdr->th_sum = checksum(temp, sizeof(struct tcphdr) + 12);
printf("TCP Chk %x\n", tcphdr->th_sum);
*buffer = (uint8_t*) tcphdr;
return sizeof(struct tcphdr);
}
// Computing the internet checksum (RFC 1071).
// Note that the internet checksum is not guaranteed to preclude collisions.
uint16_t checksum(uint8_t *addr, int len) {
int count = len;
register uint32_t sum = 0;
uint16_t answer = 0;
// Sum up 2-byte values until none or only one byte left.
while (count > 1) {
printf(" Adding %04x \n", *((unsigned short *)(addr)));
sum += htons(*((unsigned short *)(addr)));
addr += 2;
count -= 2;
}
// Add left-over byte, if any.
if (count > 0) {
sum += *(uint8_t *)addr;
}
// Fold 32-bit sum into 16 bits; we lose information by doing this,
// increasing the chances of a collision.
// sum = (lower 16 bits) + (upper 16 bits shifted right 16 bits)
sum = htonl(sum);
while (sum >> 16) {
sum = (sum & 0xffff) + (sum >> 16);
}
// Checksum is the one's complement of the sum.
answer = ~sum;
return (answer);
}
You didn't provide your UserService class, but I think it uses a User Model, which means it tries to connect to a database. This fails since in the test environment the database details (server, port, db name, user, password) are unknown. Please note that the PHPUnit script will not bootstrap all these »magic« environment details like Laravel does. So, the Model class can't set up the connection (it equals null). You have to mock a database connection for testing purposes. In general, though, it's difficult to test classes or methods that require a database connection.
You can use SWIG to create a wrapper for C code in order to use it in C#.
I am not sure if things have changed in the 9+ years since this question last saw activity, but the current reading of the DELETE method indicates times have changed, based on the accepted answer:
The DELETE method requests that the origin server remove the association between the target resource and its current functionality. In effect, this method is similar to the "rm" command in UNIX: it expresses a deletion operation on the URI mapping of the origin server rather than an expectation that the previously associated information be deleted. (emphasis mine).
HTTP request methods are semantic intentions. The intent of DELETE is to remove the association between the URI and the target resource (not "actually get rid of the object" per the accepted answer).
If I receive a DELETE request, regardless of how I remove that association (actually deleting the record or marking it 'inactive/deleted/whatever'), the GET response (which ultimately satisfies the intention) should not return the resource from the requested URI. Does it matter if it physically exists or not?
Based on the current spec, anything that removes the association between the target resource and its associated URI mapping is the intent of the DELETE method.
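A tiny Python sketch of that reading of DELETE: whether the handler hard-deletes or soft-deletes, what matters is that a later GET no longer resolves the URI. The in-memory store, URI, and status-code handling here are made up purely for illustration:

```python
# Hypothetical in-memory resource store; names are made up for illustration.
store = {"/widgets/1": {"name": "widget one", "deleted": False}}

def handle_delete(uri: str) -> int:
    rec = store.get(uri)
    if rec is None or rec["deleted"]:
        return 404
    rec["deleted"] = True   # soft delete: the record survives physically...
    return 204

def handle_get(uri: str) -> int:
    rec = store.get(uri)
    # ...but the URI-to-resource association is gone,
    # which is all that DELETE promises.
    if rec is None or rec["deleted"]:
        return 404
    return 200

print(handle_get("/widgets/1"))     # 200 before the delete
print(handle_delete("/widgets/1"))  # 204 on success
print(handle_get("/widgets/1"))     # 404 afterwards
```

Replacing the `rec["deleted"] = True` line with `del store[uri]` (a hard delete) leaves every observable status code identical, which is the point: the client cannot and need not tell the difference.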
Just ran into this issue when SQLServer was upgraded from 2014 to 2022 (without my knowledge) so leaving this here in case it helps someone. The error was:
"Can't open lib '/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so' : file not found (0)"
The solution was to install the latest driver https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16&tabs=alpine18-install%2Calpine17-install%2Cdebian8-install%2Credhat7-13-install%2Crhel7-offline
and update the .ini files that reference them, to include the new connection variables.
Conceptually you are looking at a structure of drivers having devices having addresses. But all devices from all drivers are in just one big list, numbered 0 to dwNumDevs (what you got back from lineInitializeEx).
So you don't yet know which devices are your Avaya ones. Usually the OS has several built-in devices taking up the first few slots; your errors are probably from trying to access those. You first need to look through the device list to find the ones you want.
Use lineGetDevCaps https://learn.microsoft.com/en-us/windows/win32/api/tapi/nf-tapi-linegetdevcaps to ask details for each device in the list. You want to look at field like 'DeviceName' and 'ProviderInfo' to spot your extensions. Only then do you know which devices to use for lineOpen.
Please note that if you are doing this from C++ you will want to negotiate version 2.2, not 3.0 (the 3.x versions use COM).
I love dylansweb's answer because it discusses the fact that you need access to an SMTP server. However, to connect to one you will need a client library. Recently I too needed to send an email programmatically, albeit in Python, not C#. I brought up an SMTP server on my own computer, but it failed to transport the email and I don't know why. Perhaps the server wasn't actually reachable from the internet, for which it would need a publicly accessible IP address, not just a home-network IP. I decided to go with Google's server instead, and got it to work! Please note that to log into my Gmail account to send the email, I had to set up an app password.
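As a sketch of that approach in Python using the standard library's smtplib, here is roughly what worked for me. The addresses and the app password below are placeholders, not real credentials:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Assemble a simple plain-text email."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_gmail(msg: EmailMessage, app_password: str) -> None:
    # Gmail's SMTP submission endpoint; logging in requires an app
    # password when the account has 2-factor authentication enabled.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(msg["From"], app_password)
        server.send_message(msg)

msg = build_message("me@gmail.com", "you@example.com", "Hello", "Sent from Python.")
# send_via_gmail(msg, "abcd efgh ijkl mnop")  # placeholder app password
```

The send call is commented out since it needs real credentials and network access; the message-building part runs as-is.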
FAILURE: Build failed with an exception.
Exception java.lang.NoClassDefFoundError: Could not initialize class org.codehaus.groovy.reflection.ReflectionCache [in thread "Daemon worker"]
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 2s Error: Gradle task assembleDebug failed with exit code 1
So after spending a lot of time figuring out what was wrong (I too was stuck here), I finally fixed it. The problem is using fontWeight or similar modifiers with a regular font file. I know it is frustrating, but just comment out the fontWeight parameter and everything works as usual.
You won't believe it, but I got the same error because I had some extra spaces in client_secret. I know this sounds silly, but removing those spaces fixed it.
-- Now create a user with the proper privileges
CREATE USER 'security'@'%' IDENTIFIED BY 'MySQL@12345';
GRANT ALL PRIVILEGES ON *.* TO 'security'@'%';
CREATE DATABASE IF NOT EXISTS employee_directory;
USE employee_directory;
If you want to download files via gh cli without having to have the raw link on hand:
curl -L $(gh api /repos/<owner>/<repo>/contents/<file> --jq .download_url) > <output_file>
I am getting this error
ERROR: (gcloud.compute.disks.create) Missing required argument [--replica-zones]: --replica-zones is required for regional disk creation
perform a NOT operation on the bool
bool = !bool;
I opened the project in explorer and deleted the entire .vs folder. Then, I reopened the project in visual studio and it worked again.
I am in a similar situation. I am using Selenium to access a webpage, navigate to a specific page, and then click on a link that opens the print dialog. I am planning on then connecting to the print dialog using pywinauto to set up the printing options and print the document, but I am having plenty of issues locating the elements. When I use Inspect, I can locate the elements just fine, but when I execute my code that connects to the already opened Chrome window and then print, I am unable to locate the UI elements.
When executing the following code:
from pywinauto.application import Application
app = Application(backend="uia").connect(title_re=".*- Google Chrome -.*")
win = app.window(best_match="using pywinauto, how can I draw a rectangle arround a ui element?")
win.print_control_identifiers()
The control identifiers do not display anything that looks remotely similar to the ui elements I need to access.
The most I have been able to locate in the control identifiers tree is the following:
| Pane - 'Print' (L299, T78, R1622, B955)
| ['Print', 'PrintPane', 'Pane22']
| child_window(title="Print", control_type="Pane")
| |
| | Pane - '' (L299, T78, R1622, B955)
| | ['Pane23']
| | |
| | | Pane - '' (L299, T78, R1622, B955)
| | | ['Pane24']
| | | |
| | | | Pane - '' (L0, T0, R0, B0)
| | | | ['Pane25']
| | | |
| | | | Pane - '' (L306, T83, R1615, B946)
| | | | ['Pane26']
Which seems to have something to do with the Print dialog, but I can't locate anything that looks like any of the drop-downs I need to interact with.
Any pointers will be greatly appreciated, since I have been struggling with this for days.
Check the error logs; there has to be an error showing why the auth process is failing.
In my case, a similar error occurred because the credit card set up for my billing account had expired. That was my case; I hope it helps.
from datetime import datetime
year = datetime.now().year
month = datetime.now().month
OR
import datetime
year = datetime.datetime.now().year
month = datetime.datetime.now().month
I hadn’t annotated the relations in another class which contained a collection of Grade
I have the same problem, all HTTP calls blocked... I have PHP 8.3 and Laravel 10... Any solutions?
At the moment, it's not supported, so please add your vote to the corresponding feature request: https://youtrack.jetbrains.com/issue/RUBY-33686/Add-jb-to-the-New-view-file-menu
I managed to achieve what I want but I consider this a workaround more than a robust solution. This seems like a known issue.
Firstly, I updated the code at App.razor to the following
private IComponentRenderMode? RenderModeForPage => new InteractiveServerRenderMode(prerender: false);
Now, the login page will keep reloading indefinitely, so I updated the AccountLayout.razor like this in order to control the continuous reload
@if (HttpContext is null && !IsInteractive)
{
<p>@localizer["Loading"]</p>
}
else
{
@Body
}
@code {
[CascadingParameter] private HttpContext? HttpContext { get; set; }
private bool IsInteractive
{
get
{
return NavigationManager.Uri.Contains("interactive=true") || NavigationManager.Uri.Contains("interactive%3Dtrue");
}
}
protected override void OnParametersSet()
{
if (HttpContext is null && !IsInteractive)
{
NavigationManager.NavigateTo($"{NavigationManager.Uri}?interactive=true", forceLoad: true);
}
}
}
Nothing fancy here, just adding a query string to stop the reload.
Now, I thought I should be able to access the protectedLocalStorage
after the successful login but I still find it equals null
at GetAuthenticationStateAsync()
So, I added a new razor component RedirectComponent.razor & redirected from login.razor to it, then redirect from the new component to the ReturnUrl
@page "/RedirectComponent"
@inject NavigationManager NavigationManager
@inject IStringLocalizer<Resource> localizer
<p>@localizer["Loading"]</p>
@code {
protected override async Task OnAfterRenderAsync(bool firstRender)
{
if (firstRender)
{
var uri = new Uri(NavigationManager.Uri);
var query = System.Web.HttpUtility.ParseQueryString(uri.Query);
var returnUrl = query["ReturnUrl"];
StateHasChanged();
NavigationManager.NavigateTo(returnUrl ?? "Account/Login", replace: true); //to prevent the page from registering in the browser history
}
}
}
This is login.razor code after successful login
var uri = new Uri(NavigationManager.Uri);
var query = System.Web.HttpUtility.ParseQueryString(uri.Query);
ReturnUrl = query["ReturnUrl"];
ReturnUrl = !string.IsNullOrEmpty(ReturnUrl) && ReturnUrl.Contains("?") ? ReturnUrl.Split("?")[0] : ReturnUrl;
StateHasChanged();
NavigationManager.NavigateTo("RedirectComponent?ReturnUrl=" + ReturnUrl ?? "", forceLoad: true);
Now it's working as intended, but I'm not satisfied with all these workarounds. I believe there should be a much more straightforward solution, as this is a very common use case when using a third-party API to authenticate.
You need to set mimetype to text/*
The accepted answer might have been good for 2014, but today most people are on a multi-core architecture that supports parallel thread execution.
Downloading takes time to process, and if your internet speed is indeed the limiting factor, no one can help you; but there are scenarios where this multi-segment downloading helps.
In fact, it might make a great product if a download manager supported a proxy setup to download a single resource in multiple segments from different IP addresses.
You can override the Illuminate\Foundation\Exceptions\Handler::prepareException method.
I was also expecting a ModelNotFoundException rather than NotFoundHttpException. I had to override the default prepareException method in app\Exceptions\Handler.php to catch ModelNotFoundException and return a "404 not found" JSON response.
In my current architecture, I use RabbitMQ as the message broker. I chose RabbitMQ over other alternatives because it’s already integrated into the system and provides a simple solution for handling WebSocket sessions beyond the in-memory sessions offered by spring websocket library.
A key feature of RabbitMQ is its ability to share user sessions across multiple servers connected to the same RabbitMQ broker. This is crucial for scenarios where users are connected to different servers in a distributed system.
For example, if an admin is connected to Server A via a WebSocket and sends a notification, RabbitMQ ensures that all users connected to the application, regardless of whether their WebSocket connection is open on Server A or on another server, can receive the notification. This is because RabbitMQ shares user sessions across your application's servers.
To achieve this, the admin publishes the notification to a specific queue (e.g., "notify-queue"). All users subscribed to this queue, no matter which server they are connected to, will receive the message.
If you're considering Kafka as an alternative to RabbitMQ, you should evaluate whether Kafka provides a similar mechanism for distributing messages to WebSocket clients in a multi-server architecture.
Since version 1.7.5 (Feb 2024) they added the argument "data_sorter"; now you can disable XML tag sorting (see the project readme):
import dict2xml
data = {'QQQ': 1, 'SSS': 30, 'AAA': 100}
data_sorter = dict2xml.DataSorter.never()
xml = dict2xml.dict2xml(data, data_sorter=data_sorter)
I had this issue so I created a little script to erase and save dep in requirements.txt file automatically. You can have a look here : https://github.com/Romeo-mz/clean_all_venv
I cannot comment on the previous answer by @rukmini.
To get cost by resource:
for row in result.as_dict()['rows']:
resource_id = row[2]
resource = resource_id.split('/')[-1]
cost_sum = row[1]
print(f"Cost for [{time_period_in_past}] days for resource [{resource}] is [{cost_sum}] USD")
Try modifying your model to return a list of dictionaries instead of a pd.DataFrame.
Example:
return [{
'answers': a,
'sources': s,
'prompts': p
} for a, s, p in zip(answers, sources, prompts)]
Just wrap it with a SizedBox widget. Also, it's not mandatory that a BottomAppBar be passed to that widget; it can be any widget, including a custom one.
How do I reference that column, which contains a computation, e.g. (P1.Id_Factura - LAG(P1.Id_Factura, 1, P1.Id_Factura) OVER (ORDER BY P1.Id_Factura) - 1), in a nested SELECT statement? Thanks.
I don't get it either, because for some reason I get the same error. I tried to make a text pop up from the bottom of the screen but the tween doesn't work. Why?
Here's my code:
game.Players.PlayerAdded:Connect(function(plr)
local PopUpTweenInfo = TweenInfo.new(2, Enum.EasingStyle.Back)
local PopUpTweenGoal = {
Position = 0.196,0,0.136,0
}
local PopUpTween = TS:Create(PopUpFrame, PopUpTweenInfo, PopUpTweenGoal)
task.wait(8)
PopUpTween:Play()
end)
2024, using lodash.debounce
useEffect(() => {
const debouncedResize = debounce(() => {
mapRef.current?.resize();
}, 0);
const handleResize = () => {
debouncedResize();
};
const resizer = new ResizeObserver(handleResize);
if (mapContainerRef.current) {
resizer.observe(mapContainerRef.current);
}
return () => {
resizer.disconnect();
debouncedResize.cancel();
};
}, []);
decoration: BoxDecoration(image: DecorationImage(image: AssetImage(AppIcons().balls), scale: 2.5)),
The higher the scale, the smaller the image will be.
It has been implemented in version 2024.12, and the docs are available at document operations.
You can set an update period in the AppWidgetProviderInfo; however, this doesn't support intervals smaller than 30 minutes. The proposed WorkManager workaround will also not work for intervals less than 15 minutes.
Frequent updates are not allowed because they would use too much power; sadly, widgets are quite inefficient.
I suggest you read the guide on optimizing widget updates, but I don't think what you're trying to achieve is feasible.
Can you please try whether
SELECT *, (SELECT (SELECT *, ->has->api.* AS api FROM ->collection_to_folder->folder) AS folders FROM ->workspace_to_collection->collection) AS collections FROM workspaces:4yuie4oj7jrrhqvs3m7m;
returns the desired structure?
Another vendor/chip family: For TI's MSPM0 family, the Technical Reference Manual states in Sect. 1 Architecture, 1.2 Bus Organization: "All arbitration between the CPU and DMA is done on a round-robin basis."
Four years later, your solution helps again, thanks Kumar!
Has anybody noticed that open -g
doesn't seem to work anymore (Monterey/Sequoia)? It used to work fine, but seems to have stopped in recent upgrades.
I have this as the last line in a script I run from iTerm, but focus is immediately taken by VSCode:
open -g "/Users/sparker/dev/IL_IM.code-workspace"
I do want the VSCode window visible, just not foreground.
make_celery.py
import os
from flask.cli import load_dotenv
from app import create_app
load_dotenv() # add this !
flask_app = create_app(os.getenv('FLASK_CONFIG') or 'default')
celery_app = flask_app.extensions["celery"]
Problem solved with all details here: https://github.com/control-toolbox/OptimalControl.jl/issues/375.
I think the resource below would be a good guide for folks wanting to move from the VS Code to the WebStorm world:
good question, waiting for a solution
Thank you for responding to my question. Upon reading the documentation, there is a waiting time (latency) of about 24 hours after the account is made an org admin.
Well, actually the error was that rsp uses a custom toolchain defined in another related project. I needed to update it with sp1up.
I am working on Blazor .NET Core. I faced this error, did some research, and found I hadn't installed the Syncfusion.Blazor NuGet package. Installing the package resolved the error. In a broader sense, you are probably missing a NuGet package.
The fix I was looking for was simpler than I hoped. Thanks to BigBen for the reply: set
RawDataMR.Range("A1").Resize(.Rows.Count, .Columns.Count).NumberFormat = "@"
before transferring the value. – BigBen
This "Prepared by" author field is not the same as the "username" field. The problem occurs when uploading an .xlsx file to SharePoint as a template. We want each user who submits an instance of the template to have their username appear in the footer. Instead, it always shows the name of the original author of the template document.
I had to use this artifact (notice this is Android Specific)
implementation("androidx.compose.material:material-icons-extended-android:1.7.5")
There is a way to see your PHP code, but you have to make an .htaccess change on the web server to show embedded PHP code in your HTML pages.
Inside VS Code, as far as I know, it is not possible; it doesn't have that feature.
But you can install a PHP server in VS Code to preview your PHP files in your preferred browser.
You are expecting to deserialize a string to an enum. Are you actually sending a string?
I'm asking because I don't see the enum being converted to a string in your API example. By default, enums are serialized to their numeric values.
This solved for me:
model.output_names=['output']
This is an IDE bug. To fix it, find Repair IDE in the File menu, run it, and rescan the project; after reopening the project, it will be fixed.
After getting some help from AWS, I was able to create a connection. Here is what was recommended for the above setup: add SecretsManagerReadWrite to the IAM role.
Add the following VPC endpoints to the VPC and subnet where your Redshift cluster is configured:
This did not work for me fully; I was getting "Wrong Username / Invalid Credentials". I could get it to work by prefixing the user name with "AzureAD\", like "AzureAD\<username>@.onmicrosoft.com". This link helped me: https://www.benday.com/2022/05/17/fix-cant-log-in-to-azure-vm-using-azure-ad-credentials/
It's been a while since I last wrote an e-mail template, but I suggest doing this with tables and using some sort of e-mail boilerplate to help you "normalize" the CSS across clients. Something like this: https://htmlemailboilerplate.com/
So I made the metadata parameters accept data from the params in the layout.jsx file, and made sure the parameters are taken from the page.mdx file itself, as below, and it works. Can't believe I was stuck here for so long. I'm just leaving the code here so it helps anyone like me, because I haven't seen a reply on Stack Overflow that helped.
(inside mdx.js)
export async function getArticleBySlug(slug, type) {
const filePath = path.join(process.cwd(), `src/app/${type}/${slug}/page.mdx`);
if (!fs.existsSync(filePath)) {
return null; // Return null if the file does not exist
}
const fileContent = fs.readFileSync(filePath, 'utf8');
const { data, content } = matter(fileContent);
return {
metadata: data, // Frontmatter metadata
content: await serialize(content), // Serialized MDX content
};
}
(inside layout.jsx)
export async function generateMetadata({ params }) {
// Dynamically load content based on the route
const { slug } = params;
let pageMetadata = {
title: 'page title',
description: 'page description',
url: 'https://page/',
image: 'https://page/defaultimage.png',
};
if (slug) {
// Example for blog articles
const articles = await loadArticles();
const article = articles.find((a) => a.slug === slug);
if (article) {
pageMetadata = {
title: `${article.title} - page`,
description: article.description || pageMetadata.description,
url: `https://page/${slug}`,
image: article.image || pageMetadata.image,
};
}
}
return {
title: {
template: '%s - page',
default: 'page description',
},
openGraph: {
title: pageMetadata.title,
description: pageMetadata.description,
url: pageMetadata.url,
type: 'website',
images: [
{
url: pageMetadata.image,
width: 800,
height: 600,
alt: 'image',
},
],
},
};
}
I have the same problem. This is my code, can you verify it with me please?
{
  "expo": {
    "name": "Fettecha",
    "slug": "reactexpo",
    "privacy": "public",
    "version": "1.0.0",
    "orientation": "portrait",
    "icon": "./assets/icon.png",
    "userInterfaceStyle": "light",
    "splash": {
      "image": "./assets/splash.png",
      "resizeMode": "contain",
      "backgroundColor": "#ffffff"
    },
    "ios": {
      "supportsTablet": true,
      "bundleIdentifier": "com.salem.kerkeni.reactexpo"
    },
    "android": {
      "adaptiveIcon": {
        "foregroundImage": "./assets/icon.png",
        "backgroundColor": "#ffffff"
      },
      "package": "com.salem.kerkeni.reactexpo",
      "config": {
        "googleMaps": {
          "apiKey": "apikey"
        }
      }
    },
    "plugins": [
      [
        "expo-updates",
        {
          "username": "salem.kerkeni"
        }
      ],
      [
        "expo-build-properties",
        {
          "android": {
            "usesCleartextTraffic": true
          },
          "ios": {
            "flipper": true
          }
        }
      ]
    ],
    "package": "com.salem.kerkeni.reactexpo",
    "web": {
      "favicon": "./assets/favicon.png"
    },
    "extra": {
      "eas": {
        "projectId": "2140de56-9d4e-4b36-86e2-869ebc074982"
      }
    },
    "runtimeVersion": {
      "policy": "sdkVersion"
    },
    "updates": {
      "url": "https://u.expo.dev/2140de56-9d4e-4b36-86e2-869ebc074982"
    },
    "owner": "salem.kerkeni"
  }
}
As mentioned by @andreban, query-parameters may be used to pass info from native to the web-app.
Starting with Chrome 115, TWAs can also utilize postMessage
to communicate between the web and native app at runtime.
See the official docs here: https://developer.chrome.com/docs/android/post-message-twa
PIL's loop = 1 makes it loop twice in total, loop = 2 loops thrice, etc. To get it to loop exactly once, remove the loop argument completely; it defaults to playing once.
def find(text, __sub, skip=0):
    # Return the index of the (skip+1)-th occurrence of __sub in text.
    if skip == 0:
        return text.find(__sub)
    # Move just past the start of the current match, then search the remainder.
    index = text.find(__sub) + 1
    return index + find(text[index:], __sub, skip - 1)
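A quick sanity check of the helper (redefined here so the snippet runs on its own): skip counts how many earlier matches to pass over. Note that, like the original, it assumes the substring occurs often enough; if a match is missing, str.find returns -1 and the recursion misbehaves.

```python
def find(text, sub, skip=0):
    # Index of the (skip+1)-th occurrence of sub in text.
    if skip == 0:
        return text.find(sub)
    index = text.find(sub) + 1
    return index + find(text[index:], sub, skip - 1)

print(find("abcabcabc", "abc"))           # -> 0 (first occurrence)
print(find("abcabcabc", "abc", skip=1))   # -> 3 (second occurrence)
print(find("abcabcabc", "abc", skip=2))   # -> 6 (third occurrence)
```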
ax1.text(-0.1, ratio.iloc[0].sum() + 0.5, 'N=515', fontsize=9, color='black', weight='bold', ha='center')
ax1.text(0.9, ratio.iloc[1].sum() + 0.5, 'N=303', fontsize=9, color='black', weight='bold', ha='center')
https://youtu.be/29qBvRGMnp4 Made a video to explain. Hope it helps.
It looks like the issue is happening because Excel treats any cell that starts with === as a formula, which causes the error.
You can loop through the cells and check if they start with = (which includes ===), then add an apostrophe (') at the start so Excel treats the value as plain text instead of a formula.
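A minimal, library-agnostic sketch of that escaping rule in Python (the helper name is made up; wire it into whatever spreadsheet library you use before writing each cell):

```python
def escape_formula_like(value):
    # Excel treats a leading "=" (including "===") as a formula;
    # a leading apostrophe forces the cell to be stored as text.
    if isinstance(value, str) and value.startswith("="):
        return "'" + value
    return value

print(escape_formula_like("===Title==="))  # -> '===Title===
print(escape_formula_like("plain text"))   # -> plain text (unchanged)
```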
This "non-boolean truth table", as the OP has named it, can be converted into 3 (3 bits to cover the 6 types of Activities) truth tables, each with 5 bits of input (note Location requires 2 bits). This will contain some don't care values since there are only 6 types of Activities vs. 2^3 = 8, and since there are only 3 types of Locations vs. 2^2 = 4. From these truth tables, the kmaps can be constructed. From these kmaps, the boolean minimized equations can be constructed. From these boolean equations, the efficient code can be written. Note that this is a lot of mental work, which might be error prone. Based on this, and the fact that the OP merely asked for guidance, I will leave this work for the OP.
Worked like a charm. Thanks A-Boogie18
I got the same error. I got a hint in the Windows Event Viewer and as it turned out, an external library was not properly included in the published Build. Adding the NuGet package that provides the missing library fixed the issue for me.
This question is old, so my answer is probably only relevant for recent versions.
As of today, tested on PostgreSQL 15, the function trunc does the trick:
SELECT round(cast (41.0255 as numeric),3), --> 41.026
trunc(cast (41.0255 as numeric),3) --> 41.025
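As an aside (pure Python, not PostgreSQL), the decimal module draws the same round-versus-truncate distinction, which can help when post-processing query results:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

n = Decimal("41.0255")
# ROUND_HALF_UP mirrors SQL round(); ROUND_DOWN mirrors trunc().
print(n.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))  # -> 41.026
print(n.quantize(Decimal("0.001"), rounding=ROUND_DOWN))     # -> 41.025
```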
This might be a more Pythonic way:
list1.index(list1[-1])
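One caveat worth checking before relying on this: list.index returns the position of the first element equal to its argument, so with duplicate values it does not return the last index:

```python
list1 = ["a", "b", "c"]
print(list1.index(list1[-1]))  # -> 2 (works: last element is unique)

list2 = ["a", "b", "a"]
print(list2.index(list2[-1]))  # -> 0, not 2: "a" also appears first
```

If you just want the index of the last element, len(list1) - 1 is unambiguous.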
This works:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data"
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
The best way to get the issue solved is to add chromadb-pysqlite3 to your requirements.txt file.
Right-click on the database and select Properties. Click Files under "Select a page". Under Owner, just below the Database Name in the right-hand pane, select sa as the owner.
Another way: I think it's because your computer name is different from your Windows authentication account. You can delete the login account and recreate a new authentication account with the current computer name.
I had created the directory "public/storage" manually. Then "php artisan storage:link" always showed me the error "Link already exists", and in the browser I saw error 403. When I deleted the "storage" directory and ran "php artisan storage:link" again, it started to work. I was using a local server (XAMPP) on Windows 10.