Drag your SVG or JPEG file into the same folder (or same sub-folder) as your HTML file. If the image lives elsewhere, the browser is not able to resolve the file's path and shows "file not found" — you can read the error message when you inspect the URL. I did this and my code worked.
passenger start --instance-registry-dir /tmp
I used yarn build and it works, thank you so much.
I had the same issue, but it worked when I replaced the function createEmailSession with createEmailPasswordSession.
Create Custom Middleware in Laravel
Laravel middleware is a mechanism that filters HTTP requests entering your application. It sits between the request and the application's core, allowing you to intercept, modify, or reject requests based on certain conditions. Middleware can be used for tasks like authentication, logging, and request manipulation. It provides a flexible way to handle cross-cutting concerns in your application's HTTP lifecycle, keeping code organization clean and modular.
I've been having the same issue for a bit in PyCharm. The solution is quite odd: simply install keras as a package by itself, then change your imports from tensorflow.keras to keras.api, and it will compile just fine.
I'd appreciate it if anyone could explain further why this works.
JanVerbeke, your approach of using either a table alias or the actual tableName.field works. However, the challenge I am having is when I want to do this dynamically: my frontend doesn't know the tableName.fieldName, only the fieldName. I have a DTO object that contains all the fields I expose to the frontend, which I want to use for sorting on both the frontend and the backend, but due to the nested associations I have, I can't get this to work.
Any suggestions will be appreciated.
Could you please try this? Extract it and use the DLLs in the bin folder.
Also, you can try deleting dbgcore.dll and dbghelp.dll, and check your PATH environment variable to see if there's another dbgcore.dll on your computer that could result in "DLL Hell".
I faced the same problem on my Mac today. I also noticed that Docker didn't work at all. The fix that worked for me was completely removing Docker Desktop from my Mac and reinstalling a fresh version. I use Homebrew for Docker Desktop, so there might have been an issue with some upgrade.
I am having the same issue. In my case, a Raspberry Pi 2B was running Jessie. I wrote a PHP program accessing two solar charger controllers. It has worked since 2018.
The 2B's SD card crashed. I cannot rebuild a card for Jessie, so now I am running Bookworm. The Modbus commands fail. I get the "Watchdog time expired [ 5 sec]!!! Connection to 192.168.1.1 is not established." error.
Any suggestions?
The Creative Pencil is a creative drawing channel where you can see wonderful artwork. This time I am bringing a special artwork of Lord Krishna to my channel, trying to portray his divine image and blessings through pencil art. If you like art and creativity, do visit and support the channel!
To what extent did your parents encourage you to pursue a career in nursing?
The FirstPersonController comes with its own "cursor" as a crosshair of sorts. You can change it via player.cursor.color, player.cursor.scale, and so on; you can even change its texture.
Here is the documentation for the FirstPersonController
https://www.ursinaengine.org/api_reference.html#FirstPersonController
Here is the discord if you need any more help
Not ideal but does exactly what is wanted:
git commit -p   # small change (along with already staged changes)
git stash
git reset HEAD^
git commit -p   # small change
git add -u
git stash pop
We end up where we started, with the same material staged and unstaged, except the small change is committed.
I know this question is old, but I just came across the same problem. While the solution from this answer did not work directly, I tried setting the config http-host=127.0.0.1 (not publicly documented) in keycloak.conf, and it seems to work for me. It probably also works via Keycloak's other config variants. If I were to guess, Keycloak manually overrides quarkus.http.host with its own internal default.
I found the link below helpful:
In short: Navigate in the console to "Spot Requests" and then: "actions" --> "cancel request".
It's used to collapse sections of code for better readability; it's especially useful when dealing with long code blocks. Here's an example:
You can specify the version of dart to use in pubspec.yaml:
environment:
sdk: '>=3.6.0 <4.0.0'
Self plug: I wrote a library to do this because TC39 recently abandoned the proposal to introduce it: https://www.npmjs.com/package/@nano-utils/op
User-side looks like:
import { op } from '@nano-utils/op';
class Vector {
constructor(...elems) {
this.arr = elems;
}
'operator+'(other) {
return new Vector(...this.arr.map((e, i) => e + other.arr[i]));
}
}
const a = new Vector(1, 2),
b = new Vector(3, 4);
console.log(op`${a} + ${b}`); // -> Vector(4, 6)
data = {'a': [3, 2], 'b': [[4], [7, 2]]}
df = pd.DataFrame(data)
df['c'] = df.apply(lambda row: [row['a'] * x for x in row['b']], axis=1)
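A self-contained version of the snippet above (the stray ** around axis=1 look like leftover bold markers and are omitted here), showing the resulting column:

```python
import pandas as pd

data = {'a': [3, 2], 'b': [[4], [7, 2]]}
df = pd.DataFrame(data)

# For each row, scale every element of list 'b' by that row's scalar 'a'.
df['c'] = df.apply(lambda row: [row['a'] * x for x in row['b']], axis=1)

print(df['c'].tolist())  # [[12], [14, 4]]
```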
I had the same problem: after such an upgrade, the WAR was deployed but not running.
"Step -1 (minus 1): Do not replace all javax with jakarta"
It looks like you also replaced javax.ws.rs.
Good luck
Did you really say a list of tuples by list comprehension? Okay, you can have it if you really, really have to:
n = 3783780  # e.g.
print(
    [(k, v) for k, v in {k: l.count(k) for l in [(lambda f, p, i: [] if i < p else f(f, p+1, i) if i % p else [p] + f(f, p, i//p))(lambda g, q, j: [] if j < q else g(g, q+1, j) if j % q else [q] + g(g, q, j//q), 2, n)] for k in l}.items()]
)
It could probably be written a little shorter. Here the core is a list of prime factors (I had that written earlier), and that was just modified into tuples of prime+exponent pairs. It would be better to generate the desired pairs directly and put them as tuples in a list, or at least in a dict and then convert that to a list.
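Following that last suggestion, here is a sketch that builds the (prime, exponent) tuples directly by plain trial division (readable, if less of a party trick):

```python
def prime_factor_pairs(n):
    """Return a list of (prime, exponent) tuples for n, by trial division."""
    pairs = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            pairs.append((p, exp))
        p += 1
    if n > 1:
        pairs.append((n, 1))  # any leftover factor > 1 is prime
    return pairs

print(prime_factor_pairs(3783780))
# [(2, 2), (3, 3), (5, 1), (7, 2), (11, 1), (13, 1)]
```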
I believe I have it figured out:
def plot2(scores, mean_scores, times, mean_times):
    display.clear_output(wait=True)
    display.display(plt.gcf())
    plt.clf()
    plt.title('Some Title')
    plt.xlabel('Number of games')
    plt.ylabel('Score')
    ax1 = plt.gca()
    ax2 = ax1.twinx()
    ax2.plot(times)
    ax2.plot(mean_times)
    ax2.set_ylabel('Game Time')
    plt.plot(scores)
    plt.plot(mean_scores)
    plt.ylim(bottom=0)
    plt.text(len(scores)-1, scores[-1], str(scores[-1]))
    plt.text(len(mean_scores)-1, mean_scores[-1], str(mean_scores[-1]))
    plt.show()
    plt.pause(0.1)
I had a similar issue. Support for Blazor WebAssembly projects isn't in Aspire yet, because Aspire has no way to pass the service discovery information to the client application except over HTTP.
I made a Nuget package that passes the service discovery information from the AppHost to the client by writing it to the client's appsettings.json files. Hopefully one day the feature will be baked into Aspire.
The package is here: https://www.nuget.org/packages/Aspire4Wasm/#readme-body-tab
The GitHub source repo is here: https://github.com/BenjaminCharlton/Aspire4Wasm/blob/master/README.md (If you would like to contribute improvements please send me a pull request!)
You use it like this:
Example Program.cs in AppHost
var builder = DistributedApplication.CreateBuilder(args);

var inventoryApi = builder.AddProject<Projects.AspNetCoreWebApi>("inventoryapi");
var billingApi = builder.AddProject<Projects.SomeOtherWebApi>("billingapi");

builder.AddProject<Projects.Blazor>("blazorServer")
    .AddWebAssemblyClient<Projects.Blazor_Client>("blazorWasmClient")
    .WithReference(inventoryApi)
    .WithReference(billingApi);

builder.Build().Run();
Example Program.cs in your Blazor WebAssembly Client
Install the Microsoft.Extensions.ServiceDiscovery NuGet package on the WebAssembly client to get the official Aspire service discovery functionality, which reads your resource information from your app settings.
builder.Services.AddServiceDiscovery();

builder.Services.ConfigureHttpClientDefaults(static http =>
{
    http.AddServiceDiscovery();
});

builder.Services.AddHttpClient<IInventoryService, InventoryService>(
    client =>
    {
        client.BaseAddress = new Uri("https+http://inventoryapi");
    });

builder.Services.AddHttpClient<IBillingService, BillingService>(
    client =>
    {
        client.BaseAddress = new Uri("https+http://billingapi");
    });
I hope it solves your problem.
I just installed VS2022 on Windows10, and didn't know that I had to install Windows 11 SDK. I got the ctype.h error.
I searched the laptop for Visual Studio Installer, ran it, hit Modify, and checked Windows 11 SDK (10.0.22621.0). Then I could build and run my C++ solution.
Can someone advise how to block or prevent this? illegalwebsite.com is hiding behind Cloudflare, so I cannot see their host IP.
If you go to Dashboard > Websites > Add a domain, as if you were adding a domain of your own, and enter the "illegalwebsite.com" domain there, Cloudflare will show you the A record with the IP address (I know it's weird, but that's how CF works, even with our own domains), along with all the other records. You can work from there to understand what is going on.
Ciao!
I had the same problem: on localhost I could access MongoDB Atlas, but when I accessed it from another computer it gave me this error. I was able to solve it by changing the axios URL: instead of localhost, I used the IP of the computer running the server, i.e. http://localhost:3000 became http://192.168.1.2:3000. That resolved it.
I have found an answer: Turn off "Enable Chrome V8 runtime" for your Apps Script. When this is enabled the failure occurs within 10 transactions, often within the first 3. When this is disabled I have run 400 transactions without a single failure.
After changing this setting you will need to Deploy your script again, even though you have not changed any code.
Yes, find the CUDA version with nvcc --version and run pip install cupy-cuda{major_version}x (e.g. pip install cupy-cuda12x for CUDA 12).
lrx!
My goal is to host this second Wordpress website on https://domain/myfolder.
...
I am able to redirect subdomain.domain.com to https://domain/myfolder but then I get a 404 error...
I think this is because this URL structure can be either a normal page/post URL or a sub-directory subsite URL in a WordPress multisite installation (which is actually easily supported by Runcloud, though dealing with WP multisite on subdirectories is trickier than multisite on subdomains).
I have many WP single sites and several multisites on Runcloud, and I will encourage you to reflect on the goals you are trying to achieve, because what you are saying you're trying to do is unusual and maybe a little bit hacky (although I admit I've tried to do it when I started dealing with the WP beast, around 13 years ago).
You'd better open a ticket with Runcloud; they usually answer promptly.
God bless!
I had a similar issue and contacted Twilio. It turns out the "connect to Stripe" button in the Twilio console does not currently work: Twilio is unable to get the OAuth token from Stripe, and they are working on this issue. https://status.twilio.com/incidents/21lnz91yrxpv
Based on your description and the provided code, the issue seems to lie in the configuration or the connection process itself rather than with the Oracle database services since SQL Developer can successfully connect. Here are some steps to diagnose and resolve the issue:
Ensure your DBConnection connection string in the configuration file (e.g., App.config or Web.config) is correct. The connection string for Oracle typically looks like:
<connectionStrings>
<add name="DBConnection"
connectionString="User Id=<username>;Password=<password>;Data Source=<datasource>"
providerName="Oracle.ManagedDataAccess.Client" />
</connectionStrings>
Replace <username> and <password> with the correct Oracle credentials.
Replace <datasource> with the appropriate Oracle Data Source, such as:
hostname:port/service_name
localhost:1521/XEPDB1
or a TNS alias defined in tnsnames.ora.
Ensure you have the Oracle.ManagedDataAccess NuGet package installed in your project. It can be added using:
dotnet add package Oracle.ManagedDataAccess
Make sure the version of this library is compatible with the Oracle database version.
Update the catch block in your constructor and log detailed exception information:
catch (OracleException oracleEx)
{
    Console.WriteLine($"OracleException: {oracleEx.Message}");
    Console.WriteLine($"Error Code: {oracleEx.ErrorCode}");
    Console.WriteLine($"Stack Trace: {oracleEx.StackTrace}");
}
catch (Exception ex)
{
    Console.WriteLine($"General Exception: {ex.Message}");
    Console.WriteLine($"Stack Trace: {ex.StackTrace}");
}
This will provide more specific details about why the connection fails.
Check TNS Listener: Use the tnsping command to verify the Oracle listener is reachable.
tnsping <datasource>
Replace <datasource> with your TNS alias or hostname.
Firewall: Ensure no firewall rules block the connection on port 1521 (or the port used by Oracle).
Try connecting with a minimal example to isolate the problem:
using Oracle.ManagedDataAccess.Client;
class TestOracleConnection
{
    static void Main(string[] args)
    {
        string connectionString = "User Id=<username>;Password=<password>;Data Source=<datasource>";
        try
        {
            using (var connection = new OracleConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connection successful!");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }
    }
}
If the database connection is slow, increase the Connection Timeout in the connection string:
<add name="DBConnection"
connectionString="User Id=<username>;Password=<password>;Data Source=<datasource>;Connection Timeout=60"
providerName="Oracle.ManagedDataAccess.Client" />
The Dispose method in your DBManager class is not implemented. Ensure proper disposal of resources:
public void Dispose()
{
    if (_connection != null)
    {
        if (_connection.State != ConnectionState.Closed)
        {
            _connection.Close();
        }
        _connection.Dispose();
        _connection = null;
    }
}
Sometimes, mismatches in versions can cause issues. Verify the correct version of Oracle.ManagedDataAccess assembly is loaded using:
Console.WriteLine(typeof(OracleConnection).Assembly.FullName);
Look at the Oracle Database logs for any relevant errors, or enable tracing for your Oracle client to gather more information.
If none of the above resolves the issue, please share the exact error message and stack trace for further assistance.
Just use this and your GUI will close only after you click on the window.
win.exitonclick()
Just for completeness:
alik@linux:~/people/didalik/dak/cd/ci(main)$cat << HD | cat
> This works too.
>
> Cheers,
>
> Дід Alik
> HD
This works too.
Cheers,
Дід Alik
alik@linux:~/people/didalik/dak/cd/ci(main)$
Flexibility!
After turning off SIP (System Integrity Protection) everything worked.
As a general rule, in order to redirect ALL the traffic of www.somnovozamcan.eu to your main site somnovozamcan.eu (the canonical URL), you do it via the DNS settings of your domain: Record type: CNAME, Name: www, Target: somnovozamcan.eu. It's as simple as that, if you have access to your domain's DNS.
Maybe, since you are using shared hosting - and perhaps they are keeping you away from your DNS settings - you should look for these settings in your admin/control panel.
BUT, all that said, with an important CAVEAT and WARNING (precisely because you're using a shared hosting service): since you have your site on a "www" subdomain, all your files (apart from the database) are stored in a specific folder on your shared hosting provider's server. So before making any changes, you should understand your website's setup, and the best way to do that is to ask your hosting provider.
Take care.
ScheduledTaskRegistrar can be used this way (Spring Boot version 3.4.1):
import org.springframework.context.annotation.Configuration;
import org.springframework.lang.NonNull;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;
import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;
import org.springframework.scheduling.support.CronTrigger;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
@Configuration
@EnableScheduling
public class SchedulingConfig implements org.springframework.scheduling.annotation.SchedulingConfigurer {

    private final CustomCronTrigger customCronTrigger = new CustomCronTrigger("0/10 * * * * ?"); // Runs every 10 seconds.

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(taskScheduler());
        taskRegistrar.addTriggerTask(
                new Runnable() {
                    @Override
                    public void run() {
                        LocalDateTime scheduledTime = customCronTrigger.getScheduledTime();
                        System.out.println("Method was scheduled at: " + scheduledTime.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")));
                    }
                },
                customCronTrigger
        );
    }

    public Executor taskScheduler() {
        return Executors.newScheduledThreadPool(1); // Use the executor of your liking.
    }

    class CustomCronTrigger implements Trigger {

        private LocalDateTime scheduledTime;
        private final CronTrigger cronTrigger;

        public CustomCronTrigger(String cronExpression) {
            this.cronTrigger = new CronTrigger(cronExpression);
        }

        public LocalDateTime getScheduledTime() {
            return scheduledTime;
        }

        @Override
        public Instant nextExecution(@NonNull TriggerContext triggerContext) {
            Instant nextExecutionInstant = cronTrigger.nextExecution(triggerContext);
            if (nextExecutionInstant != null) {
                scheduledTime = nextExecutionInstant.atZone(java.time.ZoneId.systemDefault()).toLocalDateTime();
            }
            return nextExecutionInstant;
        }
    }
}
Output:
2025-01-12T00:56:04.804+0200 INFO Initializing ProtocolHandler ["http-nio-8080"]
2025-01-12T00:56:04.805+0200 INFO Starting service [Tomcat]
2025-01-12T00:56:04.805+0200 INFO Starting Servlet engine: [Apache Tomcat/10.1.34]
2025-01-12T00:56:04.825+0200 INFO Initializing Spring embedded WebApplicationContext
2025-01-12T00:56:04.827+0200 INFO Root WebApplicationContext: initialization completed in 398 ms
2025-01-12T00:56:05.033+0200 INFO Starting ProtocolHandler ["http-nio-8080"]
2025-01-12T00:56:05.043+0200 INFO Tomcat started on port 8080 (http) with context path '/'
2025-01-12T00:56:05.050+0200 INFO Started CrudCoreApplication in 0.847 seconds (process running for 1.127)
Method was scheduled at: 2025-01-12 00:56:10
Method was scheduled at: 2025-01-12 00:56:20
Method was scheduled at: 2025-01-12 00:56:30
Please move CustomCronTrigger into a separate class file, and if possible the Runnable as well, for better code organization. The class and Runnable are inlined here for demonstration purposes only.
Out of curiosity, why do you need to be that exact though?
Try installing this library manually; it worked for me: https://www.npmjs.com/package/@splinetool/runtime
Have you found out how to do that?
For those who are not able to get a correct HH:mm:ss format from java.sql.Date: just use java.sql.Timestamp as the field's class type.
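To illustrate the difference, here is a minimal sketch (the dates and the helper method are just examples): java.sql.Date is date-only, so the time-of-day part formats as 00:00:00, while java.sql.Timestamp keeps it.

```java
import java.sql.Date;
import java.sql.Timestamp;
import java.text.SimpleDateFormat;

public class Main {
    // Format only the time-of-day part of any java.util.Date subclass.
    static String timeOf(java.util.Date d) {
        return new SimpleDateFormat("HH:mm:ss").format(d);
    }

    public static void main(String[] args) {
        // java.sql.Date drops the time part: it formats as midnight.
        System.out.println(timeOf(Date.valueOf("2024-01-15")));               // 00:00:00
        // java.sql.Timestamp preserves HH:mm:ss.
        System.out.println(timeOf(Timestamp.valueOf("2024-01-15 13:45:30"))); // 13:45:30
    }
}
```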
For persistent occurrence of this error, you may need to clear your cache
expo start -c
Hi there! Try calling newDevice.register() after creating the instance :D I hope it works for you!
If you have custom validation, you have to add this in your JavaScript once your field is filled out correctly:
YOUR_FIELD.setCustomValidity('');
1.) Run the touch command followed by the name of the file:
2.) $ touch main.rs
3.) $ ls main.rs
4.) Open VS > open file main.rs
Solved by bsimmo over at GitHub.
I was able to add the SMTP env variables in the docker compose file under:

x-airflow-common:
  environment:
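For example (a sketch only: the host and credentials are placeholders, and the variable names follow Airflow's standard AIRFLOW__SECTION__KEY convention for the [smtp] config section):

```yaml
x-airflow-common:
  environment:
    AIRFLOW__SMTP__SMTP_HOST: smtp.example.com
    AIRFLOW__SMTP__SMTP_PORT: "587"
    AIRFLOW__SMTP__SMTP_STARTTLS: "True"
    AIRFLOW__SMTP__SMTP_USER: user@example.com
    AIRFLOW__SMTP__SMTP_PASSWORD: app-password
    AIRFLOW__SMTP__SMTP_MAIL_FROM: user@example.com
```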
FYI, Airflow has a default connection id 'smtp_default' that can be edited with the user account credentials. You can also refer to the other ways I tried to test in this answer: airflow smtp not working with docker - section/key [smtp/smtp_user] not found in config / OSError: [Errno 99] Cannot assign requested address
Also, as an alternative, you can send email using Python's smtplib in a PythonOperator callable function; refer to https://docs.python.org/3/library/email.examples.html
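A rough sketch of that alternative (the SMTP host, port, and credentials are placeholders; smtplib and email.message are standard library):

```python
import smtplib
from email.message import EmailMessage

def build_message(subject, sender, recipient, body):
    """Assemble a plain-text email message."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

def send_report():
    # Callable for a PythonOperator; replace host/credentials with your own.
    msg = build_message("Airflow report", "airflow@example.com",
                        "me@example.com", "Task finished.")
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("user", "app-password")
        server.send_message(msg)
```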
Answering my own question here: I had to compile GLFW myself, then compile using gcc instead of gcc -x c++, THEN add -lgdi32 to the compile command. This solved the issue for me on two separate machines. Thanks to @drescherjm, @Alan Birtles, and @Brecht Sanders for their help resolving the issue.
The problem was in the home controller:

@RestController
public class HomeController {

    @RequestMapping("/{path:[^\\.]*}")
    public String redirectToReact() {
        return "forward:/index.html";
    }
}

It was blocking requests from the websocket, so check whether you have the same thing if you hit the same problem.
Mine says "failed to parse lock file at" my directory, then "caused by: lock file version 4 requires -Znext-lockfile-bump".
I faced the same problem, and since there wasn't a solution available, I decided to create a package: https://github.com/dazza-dev/Laravel-Batch-Validation that solves it. This is the first version, so it may have some bugs, but it would be great if you could try it out and help improve it further. It works well for my use case, but I hope it can be helpful to the Laravel community.
Not the best, but it works. Formula in B3, dragged right:
=LET(a,TAKE(FILTER($A$2:A2,$A$1:A1=1),1,-3),IF(COUNTA(a)>=3,SUM(a),0))
Result:
In my case, the webserver on the EC2 instance was not running; I had to log in to the instance and start it.
You're just missing the initialized notification, which comes immediately after the initialize response is received.
{
"jsonrpc": "2.0",
"method": "initialized",
"params": {}
}
Incidentally, the idea that notifications are always sent and received one at a time is mistaken, e.g.:
src: https://iwanabethatguy.github.io/language-server-protocol-inspector/
The problem is solved (after exploring the Django source code): the permissions field in the AbstractGroup model must have related_query_name set to "group".
class AbstractGroup(models.Model):
    name = models.CharField(_("name"), max_length=150, unique=True)
    permissions = models.ManyToManyField(
        Permission,
        verbose_name=_("permissions"),
        blank=True,
        related_name="custom_group",
        related_query_name="group",
    )
    ...
Thanks.
After some not-so-short research, I found that my problem is actually (almost) the same as the uniform-machine scheduling problem formulated in the context of computer task scheduling. Instead of assigning packages to couriers, uniform-machine scheduling assigns tasks to processors. To my surprise, the problem is much more difficult than I expected. For those who want to learn more about it, please refer to the wiki and the paper "Exact and Approximate Algorithms for Scheduling Nonidentical Processors". Many thanks to @bsraskr.
A) A short-short answer: in some file like
x:\MSYS2\home\some_name\.bash_profile ,
add these lines:
# ---- modify : j is j-Windows drive
tt=/j/MSYS2/mingw64/bin
PATH="${tt}:${PATH}"
B) A short answer: in some file like
x:\MSYS2\home\some_name\.bash_profile ,
add these lines:
# pathz was defined in Windows using e.g. set pathz=%cd% before loading MSYS2
# You can also set it here (in Windows-format like 'c:\my\bin)
tt="${pathz}"
tt=\/"${tt}"
tt="${tt/:/}"
tt="${tt//\\//}"
PATH="${tt}:${PATH}"
# ---- modify these 2 lines :
mkdir -p "${tt}"/../2WORK
cd "${tt}"/../2WORK
#some tests :
input="a\b\c\d"
echo ${input}
echo "${input/\\//}"
echo "${input//\\//}"
echo "Note that : only the last echo with '//', not '/', to begin, give us global replace !!"
C) An instructive answer:
1) Create a dir. In this dir, create a file named qqq.cmd that contains:
rem cd = current dir
set pathz=%cd%
set path=%pathz%;%path%
set pathzz=%path%
:: pathz and pathzz can be used in cygwin and MSYS2 , but must be translated.
if "%1" == "" (
rem ---- modify this line :
I:\MSYS2\msys2_shell.cmd -ucrt64
exit
)
set ZZZZZZ=123aBBC1111111111111
set zZZz=123aBBC
rem ---- modify this line :
I:\CYG\bin\mintty.exe -i /Cygwin-Terminal.ico -
exit
2)In this dir, create another file named ss that contains :
#ss
# In Notepad++: click Edit, choose EOL Conversion, then choose Unix for this file ss (no extension).
echo $PATH
echo $path"assss----------"
echo $ZZZZZZ
echo $zZZz
echo $ZZZZ--=====++++++
echo $zZZz
echo $PATHZ --------
echo $pathz --====
cd /c
echo --------Note current dir :
pwd
if false; then
# ... Code I want to skip here ...
fdsgfsdsgfs
dfsdgfdsgfds
fi
exit
3) Do as in B).
4) Create a shortcut for qqq.cmd and move the shortcut anywhere.
5) Then execute qqq.cmd by double-clicking the shortcut, and now you have MSYS2. Type ss, check the output, and note the current dir now!
NB: if you modify qqq.cmd to load Cygwin, then 3) above is not needed.
The only way I found to bind multiple keys or combinations is to bind them separately:
root.bind("<Control-Alt-g>", close_app)
root.bind("<F9>", close_app)
# Remaining code
root.mainloop()
Besides the implementation in statsmodels as described in @Josef's answer, the E-test is also implemented in the SciPy library, with the function scipy.stats.poisson_means_test.
Usage example:
import scipy.stats as stats
count1, n1, count2, n2 = 0, 100, 3, 100
res = stats.poisson_means_test(count1, n1, count2, n2)
res.statistic, res.pvalue
(-1.7320508075688772, 0.08837900929018157)
I think you're providing the wrong path as an input to your workflow. You're passing local-vars: '.github/workflows/variables/local-vars.env' as an input, but your file seems to be at the following path: .github/variables/local-vars.env.
Try it with: local-vars: '.github/variables/local-vars.env'
To accept keys, you have to use the imported <Fragment> instead of <>, as <> cannot accept keys.
i.e.
import { Fragment } from 'react';
...
<Fragment key={yourKey}>...</Fragment>
Sorry for the brevity in the comment, Your Common Sense. Still getting used to Stack Overflow. Thank you for your reply; it has been very helpful. How about:
<?php
require_once("db_fns2025.php");

// Query to get table names
$sql = "SHOW TABLES";
$rs = mysqli_query($link, $sql);

// Check if there are tables
$tblCnt = 0;
if ($rs->num_rows > 0) {
    echo "<h2>Tables in the $dbname database:</h2>";
    echo "<ul>";
    // Output data from each row
    while ($row = $rs->fetch_array()) {
        $tblCnt++;
        echo "<li>" . $row[0] . "</li>";
    }
    echo "</ul>";
    if ($tblCnt == 1) {
        echo "There is $tblCnt table<br />\n";
    } else if ($tblCnt > 1) {
        echo "There are $tblCnt tables<br />\n";
    }
} else {
    echo "No tables found in the $dbname database.";
}
mysqli_close($link);
?>
I had a similar issue when I updated Flutter.
This video helped me to solve it https://www.youtube.com/watch?v=mC25tCXdPY8
After running the command "dart pub upgrade --major-versions" I managed to run the app on my emulated device.
Hope it can help.
Regards.
Try this and it should help you out.
For the column add the valueFormatter property
{
headerName: "Earned", field: "revenue", sortingOrder: ['asc', 'desc'], valueFormatter: currencyFormatter
}
Use this function for the formatting
function currencyFormatter(params) {
    var usd = new Intl.NumberFormat('en-US', {
        style: 'currency',
        currency: 'USD',
        minimumFractionDigits: 2
    });
    return usd.format(params.value);
}
@Hellen, Did you find a solution?
As an alternative, it seems like there actually is a library (record) that allows to record in all platforms in flutter: See https://pub.dev/packages/record .
Found the perfect function to turn off drop-highlighting: self.setDropIndicatorShown(False).
However, for curiosity's sake: is there a stylesheet selector for the drop indicator to customize its look, or is the only option to use QStyledItemDelegate or something similar?
With modern eslint (from v8.21.0), if you're using the new config file eslint.config.js, the --ext option is no longer supported. Instead, specify the files to lint in the config, and then run eslint . to lint all specified files.
Example eslint.config.js:
/** @type {import('eslint').Linter.Config[]} */
export default [
{ files: ["**/*.{js,mjs,cjs,ts,jsx,tsx}"] },
{ ignores: ["dist" ] },
// Other config options here
];
Instead of creating multiple workflows, you could create one workflow with multiple jobs that need each other via the needs: keyword.
https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds
on:
  pull_request:

jobs:
  tf-plan-upload:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This job does TF plan and uploads"

  tf-plan-download-opa-scan:
    needs: [tf-plan-upload]
    runs-on: ubuntu-latest
    steps:
      - run: echo "This job downloads the TF plan and runs the OPA scan"
You can do :
use App\Http\Controllers as Controllers;
and then in routes
Route::get('/user', [Controllers\UserController::class, 'index']);
def programmersLife():
    while asAlive:
        eat()
        # sleep()
        code()

# if __name__ == "__main__":
Ok, so after banging my head against this question for a while, I have found something interesting.
When using the document ID starting with 2PACX, we get a 404. At first I thought it was something related to the service account not having proper permissions. Just for the record:
Still I got a 404.
So I tried a different method: I used the Drive API to fetch all the documents, and I found that the document I wanted to fetch was associated with a different ID.
The document ID from the doc was "2PACX-1vTuwWllBnBa9StNEd1JzUI0sFi2jqOHG0sjL6WeN8j0Nv2nvP0UAETpEKx3zZDt2FqDKIdseJLOdKhT"
The document ID from the Drive fetch API was "1Hk-WnhWbE3yjfx0jKmrtbBN3CJsMtRIsCDauELOma2o"
I am not sure if there is some mismatch or if that is how these IDs are intended to work. Anyway, I modified your code with this knowledge and it seems to work. (FYI, just like the Google Docs API, you will need to enable the Drive API for this to work.)
Please let me know if this helps achieve what you want, or whether we strictly need to do it using the Drive API.
const { google } = require("googleapis")

const auth = new google.auth.GoogleAuth({
    keyFile: "<path_to_service_account_key_json_file>",
    scopes: [
        "https://www.googleapis.com/auth/documents",
        "https://www.googleapis.com/auth/drive.readonly"
    ]
})

async function getDocumentContents(documentId) {
    try {
        const authClient = await auth.getClient();
        const response = await google.docs({ version: "v1", auth: authClient }).documents.get({ documentId: documentId })
        return response.data;
    } catch (error) {
        console.error(error)
    }
}

async function getDocumentID() {
    try {
        const authClient = await auth.getClient();
        const drive = google.drive({ version: "v3", auth: authClient });
        /*
        In my case I know that I want to get the file named "Test"; otherwise you can use something like -
        q: `name = ${fileName} and mimeType = 'application/vnd.google-apps.document'`
        You can also fetch specific fields only by passing the fields parameter:
        fields: 'files(id, name)',
        */
        const specificFileData = await drive.files.list({
            q: `name = "Test" and mimeType = 'application/vnd.google-apps.document'`
        });
        return specificFileData.data.files[0].id;
    } catch (error) {
        console.error(error);
    }
}

(async () => {
    const documentId = await getDocumentID();
    const documentContents = await getDocumentContents(documentId)
    console.log('documentContents', documentContents.body.content)
})()
Did you solve this problem? I am currently struggling with the same one and don't know how to fix it.
I'm getting "security alarm: tampering detected (000c10), please reboot." What can I do to make it operate normally? It's a Verifone VX 520. Thanks.
I used an incompatible version of the KMP plugin for my Android Studio version. So I installed Android Studio Meerkat and the latest KMP plugin, and now it's OK.
It looks like you're facing an issue with getting your subdomain to work with NGINX as a reverse proxy for your Node.js app. Let’s walk through the process step by step to make sure everything’s set up correctly. Here are some things to check:
First, double-check that the DNS for your subdomain is properly set up. If you’re pointing app.example.com to your server, make sure you have the correct A record. This should point directly to the IP address of the server running NGINX.
You can verify DNS resolution by using dig or nslookup:
dig app.example.com
This should return the correct IP address. If it’s not showing up, there could be a delay in DNS propagation, or something might be off in the DNS configuration.
Next, check your NGINX configuration. It should look something like this:
server {
    listen 80;
    server_name app.example.com;  # Replace with your subdomain

    location / {
        proxy_pass http://127.0.0.1:3000;  # Address of your Node.js app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
A few things to make sure of:
- server_name is pointing to your subdomain (e.g., app.example.com).
- proxy_pass is pointing to where your Node.js app is running (e.g., 127.0.0.1:3000).
Then test the configuration and apply it:
sudo nginx -t               # To test the configuration
sudo systemctl reload nginx # To apply changes
Make sure that the firewall is allowing traffic through ports 80 (HTTP) and 443 (HTTPS). This can be critical if your firewall settings are restrictive.
You can check your NGINX status by running:
sudo systemctl status nginx
This will tell you if NGINX is running smoothly and listening on the expected ports.
Double-check that your Node.js app is listening on the correct interface. If it’s set to localhost or 127.0.0.1, NGINX might not be able to reach it. You want it to listen on 0.0.0.0 so it can accept connections from anywhere, including your NGINX reverse proxy.
In your Node.js app, you should have something like this:
app.listen(3000, '0.0.0.0', () => {
  console.log('Node.js app is running on port 3000');
});
Once everything is set up, you can test your setup by running a curl request to check if it’s working:
curl http://app.example.com
If you see the expected result from your Node.js app, then everything is good to go!
If it’s still not working, check your NGINX error logs:
sudo tail -f /var/log/nginx/error.log
Any errors or misconfigurations will likely show up here and can help point you in the right direction.
DNS changes can sometimes take time to propagate. If you've recently updated your DNS, it might take a few hours or even up to 48 hours for the changes to fully propagate. If you’ve been testing with a cached version of DNS, try clearing it on your local machine or use a different network to test.
I'm late on this, but the sizeof operator uses the declared type of ch (int, char*, struct, etc.) as its guide to what value to return. So isn't ch the same as &ch[0]? Just sayin'.
AWS support replied that Private DNS is not yet supported for DynamoDB.
Please check my course: Cisco CUCM Automation (Bulk Provisioning) with Python Zeep & Pandas
Course Link (with discount code) https://shasoft.thinkific.com/
In the console tab: type sel, press Tab, type your table name, and hit Enter.
Coming from MySQL, I was looking for something quick like this for DataGrip too. Yeah, it takes a few more actions, but it's actually quicker than clicking, so I can accept this; plus the JetBrains suite overall is amazing, so I have no problem doing this now.
The previous answer by @supputuri works; however, I needed to make a little modification that took hours to figure out. Here is the script that worked for me, in case anyone else can't figure it out:
# navigate to chrome downloads page first
driver.get("chrome://downloads/")
driver.execute_script(
    "document.querySelector('downloads-manager').shadowRoot"
    ".querySelector('#downloadsList downloads-item').shadowRoot"
    ".querySelector(\"button[id='cancel']\").click()"
)
The main changed part is button[id='cancel']
I am using Xcode 16. Perhaps it's because these solutions worked for an earlier version, but whatever the reason, none worked for me. Then I came upon the issue "CocoaPods did not set the base configuration ...", which was another error that was seen. The solution there was to remove the configurations; see the link for more.
spaCy doesn't explicitly mark the imperative mood, but you can infer it using syntactic clues.
Clues you can use: the sentence's root token (dep_ == "ROOT") is a verb in its base form, and it has no nsubj child, as in commands like "Close the door."
Hi, this is a DFC version problem. For YOLO11 you need the latest version, which unfortunately doesn't work under WSL because of hailo_platform, which we can't install. So for the moment, apart from switching to Linux, we're limited to YOLOv10. And no, you didn't make a mistake by writing yolov11m with a "v", because Hailo uses the "v" (an aberration, in my opinion).
You can follow the styling solution provided in this blog.
It will align the components to the center and is a cleaner version. Note that it's applied on the parent div. Play around with the CSS.
(HY000/1130): Host 'xxx.xxx.xxx' is not allowed to connect to this MariaDB server
It indicates the database server doesn't recognize the host you're connecting from as a valid source: the user account ('user'@'xxx.xxx.xxx') does not have the appropriate privileges to connect from that IP/hostname.
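A hedged sketch of the usual fix, run on the database server as an admin (the user name, password, host, and database below are placeholders; use '%' in place of the host to allow any host):

```sql
-- Allow the account to connect from that specific host:
CREATE USER 'app_user'@'xxx.xxx.xxx' IDENTIFIED BY 'strong_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'xxx.xxx.xxx';
FLUSH PRIVILEGES;
```

You can check which hosts an account may connect from with SELECT user, host FROM mysql.user;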
I would also add some padding to the inside area of the buttons, for example:
padding: 10px;
Alignment will work, of course, but it needs space inside the button area, which you can create by adding paddings and margins.
Actually, I was able to resolve the issue I had with Deno by importing the aws-sdk from esm.sh, which basically translates the library to ES Modules.
Kudos to esm.sh for providing an easy way of Deno-fying this SDK :)
deno.json:
{
  "imports": {
    "@aws-sdk/client-s3": "https://esm.sh/@aws-sdk/[email protected]"
  }
}
We cannot place our localStorage calls directly into our initial state (like below), because those will also run on the server side, throwing an undefined error.
That's the main problem! Next.js runs everything outside of a useEffect on the server side as well, so you cannot initialize the token with const [token, setToken] = useState(window.localStorage.getItem("token")) directly, because localStorage is undefined in the server scope.
You can add a flag to your AuthContext named "initializing" (or something like that) and pass it through the AuthContext.Provider value. That way, the consumers of this context value can wait until the AuthContext useEffect has run at least once.
Here is an example:
const AuthContext = createContext<AuthContextProps>({
  user: null,
  token: null,
  login: () => {},
  logout: () => {},
  initializing: true,
});

const AuthProvider: React.FC<{ children: React.ReactNode }> = ({
  children,
}) => {
  const [user, setUser] = useState<User | null>(null);
  const [token, setToken] = useState<string | null>(null);
  const [initializing, setInitializing] = useState(true);
  const router = useRouter();

  useEffect(() => {
    const storedToken = localStorage.getItem("token");
    if (storedToken) {
      if (isTokenExpired(storedToken)) {
        setToken(null);
        setUser(null);
        localStorage.removeItem("token");
        setInitializing(false); // don't leave consumers waiting forever
        return;
      }
      setToken(storedToken);
      const decodedToken = jwtDecode<User>(storedToken);
      setUser(decodedToken);
    }
    setInitializing(false);
  }, []);

  const login = (token: string) => {
    setToken(token);
    localStorage.setItem("token", token);
    const decodedToken = jwtDecode<User>(token);
    setUser(decodedToken);
  };

  const logout = () => {
    setToken(null);
    setUser(null);
    localStorage.removeItem("token");
    router.push("/login");
  };

  const isTokenExpired = (token: string) => {
    const decodedToken = jwtDecode<{ exp: number }>(token);
    return Date.now() >= decodedToken.exp * 1000;
  };

  return (
    <AuthContext.Provider value={{ user, token, login, logout, initializing }}>
      {children}
    </AuthContext.Provider>
  );
};

const useAuth = () => {
  const context = React.useContext(AuthContext);
  if (context === undefined) {
    throw new Error("useAuth must be used within an AuthProvider");
  }
  return context;
};

export { AuthContext, AuthProvider, useAuth };
You can then reuse that initializing flag in your protected pages to know whether it is still too early to check if the user is logged in.
export const useLeads = () => {
  const router = useRouter();
  const { user, token, initializing } = useAuth();
  const [data, setData] = useState<Lead[]>([]);
  const [totalRecords, setTotalRecords] = useState(0);
  const [pageIndex, setPageIndex] = useState(0);
  const [pageSize, setPageSize] = useState(10);
  const [filters, setFilters] = useState({});
  const [loading, setLoading] = useState(false);

  useEffect(() => {
    if (initializing) {
      return;
    }
    if (!token) {
      router.push("/login");
      return;
    }
    const fetchData = async () => {
      setLoading(true);
      try {
        const result = await getLeads(token, pageIndex, pageSize, filters);
        setData(result);
        const count = await countLeads(token, filters);
        setTotalRecords(count);
      } catch (error) {
        console.error("Error fetching data:", error);
      } finally {
        setLoading(false);
      }
    };
    fetchData();
  }, [user, token, pageIndex, pageSize, router, filters, initializing]);

  return {
    data,
    totalRecords,
    pageIndex,
    pageSize,
    filters,
    loading,
    setPageIndex,
    setPageSize,
    setFilters,
  };
};
I encountered this solution in this article https://dev.to/ivandotv/protecting-static-pages-in-next-js-application-1e50
Kube Startup CPU Boost is a controller that increases CPU resource requests and limits during Kubernetes workload startup time. Once the workload is up and running, the resources are set back to their original values.
To resolve this issue, simply use "with" instead of "assert" in the import attribute, e.g. import data from "./data.json" with { type: "json" };
It's the client you use. You need to use the document client.
Have a read of this blog post:
https://aws.amazon.com/blogs/database/exploring-amazon-dynamodb-sdk-clients/
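The difference is easiest to see in the data shapes. Below is a toy sketch (illustrative names, not the real SDK internals) of the unwrapping that the document client (DynamoDBDocumentClient from @aws-sdk/lib-dynamodb) does for you, which the low-level DynamoDBClient does not:

```javascript
// The low-level client returns attribute values wrapped in type
// descriptors like { S: ... } and { N: ... }:
const lowLevelItem = { id: { S: "user-1" }, visits: { N: "5" }, active: { BOOL: true } };

// Minimal sketch of the unwrapping the document client performs
// (the real SDK handles many more types: lists, maps, sets, ...):
function unmarshal(item) {
  const out = {};
  for (const [key, av] of Object.entries(item)) {
    if ("S" in av) out[key] = av.S;
    else if ("N" in av) out[key] = Number(av.N);
    else if ("BOOL" in av) out[key] = av.BOOL;
  }
  return out;
}

console.log(unmarshal(lowLevelItem)); // { id: 'user-1', visits: 5, active: true }
```

With the document client you read and write plain JavaScript objects, and it does this conversion in both directions for you.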
Ubuntu doesn't have the Times New Roman font by default. This is the default font used by Graphviz, so I was getting poor results when dot layout was done on Ubuntu as compared to Windows or Mac (which come natively with the correct fonts).
The most important thing to do if you are generating Graphviz images on Ubuntu is:
sudo apt install ttf-mscorefonts-installer
Beware that a Microsoft EULA will pop up in your terminal, and you may not see the prompt asking you to respond 'yes'.
However, that may not be enough to get consistent results if you have built Graphviz completely by yourself. You may still get anomalous results unless you have pre-installed certain libraries. So you should also do:
apt-get install --no-install-recommends -y build-essential clang-format cmake git pkg-config autoconf bison libtool dh-python flex d-shlibs debhelper fakeroot freeglut3-dev libgts-dev swig libgtkglext1-dev libglade2-dev libqt5gui5 qt5-qmake qtbase5-dev libann-dev libaa1-dev libdevil-dev libgd-dev libgtk-3-dev ghostscript libgs-dev liblasi-dev libpoppler-dev libpoppler-glib-dev librsvg2-dev libwebp-dev ruby golang-go guile-3.0 guile-3.0-dev lua5.3 liblua5.3-dev libperl-dev php-dev libsodium-dev libargon2-0-dev libpython3-dev ruby-dev tcl-dev python3-venv gcovr lcov shellcheck
The guidance that helped me determine the packages needed came from the GitLab page with the Dockerfile that is used to build Graphviz for testing: https://gitlab.com/graphviz/graphviz/-/blob/main/ci/ubuntu-22.04/Dockerfile
Once I had done the above apt installs and rebuilt Graphviz, it generated the same layouts on Ubuntu as it does on Mac and Windows.
The above was tested with Graphviz 12.2.1. I am running on ARM architecture (which is why I needed to rebuild Graphviz from source, as there was no up-to-date deb available).
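To sanity-check that the fonts were actually picked up, you can ask fontconfig what "Times New Roman" resolves to (this assumes the fc-match tool from the fontconfig package is installed; the exact substitute font reported will vary by system):

```shell
# Before installing the MS core fonts this usually reports a substitute
# such as DejaVu Serif; afterwards it should report Times New Roman.
fc-match "Times New Roman"
```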
For Jan 11, 2025: go to android > build.gradle and update ext.kotlin_version like below:
buildscript {
    ext.kotlin_version = '1.8.22' // only change this line
    repositor.... // here, do not touch
}
There's a workaround for those cases. Let's say you want to make a button like this:
<Button WidthRequest="20" HeightRequest="20" />
This may result in the following output: Button not showing properly
So to fix this, you could make the button big enough that it shows properly, let's say:
<Button WidthRequest="30" HeightRequest="30" />
output: Button now showing properly but bigger than what we want
But here's the trick: to make it smaller, you can play around with the ScaleX and ScaleY properties like this:
<Button WidthRequest="30" HeightRequest="30" ScaleX="0.7" ScaleY="0.7" />
And we get: Button now is smaller and showing properly
There may be other ways to get the desired output, but this is the one I know. I hope this can be of help to you.
You can use the calc() function in CSS to compute the width, and adjust the padding and width accordingly.
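A minimal sketch (the class name and the 20px padding are just assumptions for illustration):

```css
/* Keep the element responsive while reserving room for fixed padding. */
.panel {
  box-sizing: border-box;
  width: calc(100% - 40px); /* full width minus 20px padding on each side */
  padding: 20px;
}
```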
It's very important to test. Note the difference: ${input/\\//} replaces only the first backslash, while ${input//\\//} replaces them all:
input="a\b\c\d"
echo ${input}
echo "${input/\\//}"
echo "${input//\\//}"
Updating to the newest Docker Desktop version (v4.37.2) fixed the problem for me.
I found your script for backing up databases, and I have one question: how can I use this script to back up databases either to a separate folder, for example /backups/$Date$Time$MYSQL_backup, or to pack all databases into a .tar.gz named, for example, $Date$Time$MYSQL_backup.tar.gz?