In short, in upgradeable smart contracts you need to extend the Initializable contract and use an initialize function, because there is no constructor.
I have the same issue using Storybook 8.4.5 with Angular 19.0.0. In another project where I installed Storybook 8.4.4 with Angular 18.2.0, it is working.
@RequestBody expects JSON/XML as the body; if you are sending key/value pairs, you need to either use @ModelAttribute to bind directly to your FilterObj, or use a MultiValueMap to (somewhat manually) parse the pairs and use them in your controller.
Or, if you control the client of your REST service, use JSON to send objects to your endpoint.
Looks like you are using Next.js 14 with the App Router. You can retrieve id from the params prop directly in a Server Component.
'use server'

export default async function Dashboard({
  params: { id },
}: {
  params: { id: string }
}) {
  const data = await fetchData(id)
  return (...);
}
The issue arises because my UI test scripts and API test scripts are organized in different folders and at different directory levels, but they share the same before() method in the e2e.ts file, and in the before() method I use jsonPath to analyze data.
The before() method in the e2e.ts file; the readDataFile() task uses jsonPath:
before(() => {
  const conditionFile = path.join('data', Constant.PreTest);
  cy.task('readDataFile', conditionFile).then((data) => {
    .....
  });
})
Solution 1: Consolidate API and UI test scripts into the same folder to ensure consistent execution. Solution 2: Modify the before() method in e2e.ts to handle both API and UI test scripts seamlessly, regardless of their folder structure or directory level.
Rui, be professional: if you're going to give this two thumbs down, explain coherently why. You're a DevOps specialist; why keep silent?
Try using the OCSID namespace:
// "conn" is an instance of java.sql.Connection:
conn.setClientInfo("OCSID.CLIENTID", "Alice_HR_Payroll");
conn.setClientInfo("OCSID.MODULE", "APP_HR_PAYROLL");
conn.setClientInfo("OCSID.ACTION", "PAYROLL_REPORT");
Set your programs to start at login through the task scheduler.
The Explorer startup sequence has a number of phases, carefully arranged to get visible things ready first, and less visible things ready later. And one of the lowest priority items is the Startup group.
Other references you should read:
Performance gains at the cost of other components
Very simple answer: I just had to hit Ctrl+Shift+R to force a reload that bypasses the browser's cache.
If it's a script inside package.json, you can try:
"build": "export NODE_OPTIONS=--openssl-legacy-provider; REST OF YOUR COMMAND"
or try to add a .npmrc file with
node-options=--openssl-legacy-provider
I have also encountered this problem.
I read the source code of decode_packet_trace.py: if the cmd is 1 or 4, it means ReadReq or WriteReq; if the cmd is 30 or 32, it means MemRead or MemWrite. However, decode_packet_trace.py only marks ReadReq and WriteReq as 'r'/'w', while MemRead and MemWrite are marked as 'u'.
Since you placed the monitor over the membus, all traces are MemRead/MemWrite, so they are all marked as 'u'.
You can modify decode_packet_trace.py to fix this problem.
The line pd.to_numeric(read['Gross'], errors='coerce') is correct for converting a column with potentially non-numeric values to numeric values in pandas. However, when errors='coerce' is used, any non-numeric value in the column will be replaced with NaN.
For example:

import pandas as pd

data = {'Gross': ['1234', '$4567', '789a', '12,345', None]}
read = pd.DataFrame(data)
# Strip thousands separators and currency symbols first (regex=False so '$' is treated literally)
read['Gross'] = read['Gross'].astype(str).str.replace(',', '').str.replace('$', '', regex=False).replace('nan', '')
read['Gross'] = pd.to_numeric(read['Gross'], errors='coerce')  # anything still non-numeric becomes NaN
read['Gross'].fillna(0, inplace=True)
This is a non-trivial task, because from a single photo you can't determine the object's silhouette from the incoming light direction. A quite common approach is to take the contour of the object from the camera's perspective, flip it horizontally, and apply some shear to match the light direction. A slightly more advanced technique involves "extruding" the shadow in the direction of the object's depth, as if you had a brush in the shape of the sheared shadow and drew a line with it. Then give it some opacity and Gaussian blur to make it look like a real shadow.
You can project the 3D up, front, and side vectors into the 2D space of your image and then use the resulting vectors for the transforms of your shadow; the result will be valid.
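For what it's worth, here is a rough sketch of that flip-shear-blur idea in Python with Pillow (version 9.1+ assumed to be available); mask.png, the shear factor and the blur radius are all hypothetical placeholders, not anything from the original post:

from PIL import Image, ImageFilter, ImageOps

# Hypothetical grayscale silhouette of the object, extracted beforehand
silhouette = Image.open("mask.png").convert("L")
flipped = ImageOps.mirror(silhouette)  # flip horizontally

# Shear toward the light direction; the 6-tuple maps output pixels back to input pixels
shear = 0.5
sheared = flipped.transform(
    (flipped.width + int(shear * flipped.height), flipped.height),
    Image.Transform.AFFINE,
    (1, shear, 0, 0, 1, 0),
)

# Soften and fade so it reads as a shadow
shadow = sheared.filter(ImageFilter.GaussianBlur(radius=8))
shadow = shadow.point(lambda p: int(p * 0.5))  # roughly 50% strength when used as an alpha mask
shadow.save("shadow.png")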
Just in case anyone is still coming across this issue...
I found that it was an issue with Jupyter notebooks. I turned transparent off in my imported function for saving figures, and later, when updating arguments like transparent=False, facecolor='white', etc., it didn't turn the background back on after re-importing the functions. I just needed to restart the Jupyter kernel and everything worked again without needing to set transparent etc., as the default is on.
Am I correct to understand that if you use an APNs authentication key, which never expires, there is nothing to be done, but if you use an Apple APNs certificate you do have to create a new one?
Or, if I'm using Firebase, am I good no matter which auth option I gave to Apple's APNs?
Finally found a hotfix using a watcher on from_date: whenever the date picker changes to the format '2024-01', I add '-01' to the end of it.
watch: {
  from_date(val) {
    if (val && val.length === 7) {
      this.from_date = val + '-01'
    }
  }
}
Your Sprintf call is incorrect. It should be:
str := fmt.Sprintf("SELECT cidr FROM %v WHERE asn=?;", conf.SQLiteASN2CIDR)
1. Switch to the jenkins user on the master node: sudo su - jenkins
2. Generate the key pair using ssh-keygen.
3. Copy the output of cat id_rsa into your Jenkins credentials.
4. Copy the output of cat id_rsa.pub into authorized_keys on the agent node: ubuntu@agent:~/.ssh$ vi authorized_keys
5. Now test the connection.
Hope this will work for you.
I know this isn't the best solution, but it works.
inputStream.pipe(decipherStream).pipe(outputStream);
decipherStream.on("error", e => console.log("error", e));
decipherStream.once("close", () => outputStream.writableEnded || outputStream.end());
I finally found the documentation to achieve this: https://developers.cloudflare.com/speed/optimization/content/prefetch-urls/
python.exe -m pip install --upgrade pip
Try running the above command to resolve the error "Cannot uninstall numpy 1.21.2, RECORD file not found"; it worked for me.
As of 2024, there is now an experimental Document Picture-in-Picture API.
But check the browser compatibility before using it.
After submitting the form, abuse is now off for my subscription. Can I turn it back on if I want to, and how do I do so?
This is an anecdote of when we use 425 Too Early; it differs from the IETF 425 definition.
A) A fast legacy system, better requested seldomly.
B) A smoothly scaling API connected to a standard database (MySQL).
Whilst the legacy system gets updates within seconds, the API relies on the database, which gets its updates 3 to 5 hours later.
ELSE: Requesting only the general existence of the data.
No. 1, obviously, delivers the full dump of the requested data, whilst No. 3 only indicates that there will be data available in the near future, without specifying the ETA.
Have you tried the Webcomponent-based Native Federation?
You can find an article about this topic here: https://www.angulararchitects.io/blog/micro-frontends-with-modern-angular-part-2-multi-version-and-multi-framework-solutions-with-angular-elements-and-web-components/
I have a similar problem (until I found this question I felt like the only SwiftUI dev targeting macOS…). I think it's a SwiftUI bug. I have the Table backed by FetchedResults, and the performance issue also happens when I change the sortDescriptors.
Anyway, I am using an ugly workaround where I set a tableIsRefreshing state variable to true and replace the Table with a ProgressView in the meantime. Adapted to your code, this might look like this:
struct TableTest: View {
    @State private var tableIsRefreshing = false
    @State private var items: [Person] = (0..<500000).map { _ in Person() }

    var body: some View {
        VStack {
            if tableIsRefreshing {
                VStack {
                    Spacer()
                    ProgressView()
                    Spacer()
                }
            } else {
                Table(items) {
                    TableColumn("Name", value: \.name)
                    TableColumn("Age") { person in Text(String(person.age)) }
                }
            }
        }
        .toolbar {
            ToolbarItem() {
                Button("Clear") {
                    tableIsRefreshing = true
                    items = []
                    tableIsRefreshing = false
                }
            }
        }
    }

    struct Person: Identifiable {
        let id = UUID()
        let name: String = ["Olivia", "Elijah", "Leilani", "Gabriel", "Eleanor", "Sebastian", "Zoey", "Muhammad"].randomElement()!
        let age: Int = Int.random(in: 18...110)
    }
}
You might need to add a DispatchQueue.main.asyncAfter delay if this doesn't work.
I hope the bug gets fixed, please submit a bug report similar to mine: http://www.openradar.me/FB13639482
I have the same problem; it's not due to Apache POI, but to editing with LibreOffice! When you delete a row with LibreOffice, the last row index is set to the maximum number of rows (1048575).
If you use Excel, you don't have the problem...
For the moment, I don't have a solution...
In my case, I exclude my check on the number of rows when getLastRowNum() returns 1048575... (I assume the file was edited with LibreOffice and my check is not possible!).
May I ask how you solved this problem?
I have a small question on this topic. Is it possible here not to count the answers but to show percentages? For example, for climate change it would show me 50%, 25% and 25%.
I guess I need to replace the add_count code with something else to get this output?
Thanks in advance.
Rui, why did you give me a thumbs down?
This is a peer dependency issue. create-react-app is trying to install a version of @testing-library/react that doesn't support React 19. You can either use yarn/pnpm/bun or downgrade to React 18.
npm install -g yarn
yarn create react-app <your_app_name>
The --collect flag is an option for code coverage. If you prefer to work in the CLI, you can exclude projects as follows:
dotnet test --no-build --collect "Code Coverage;Exclude=[Exclude.Project.*],[Python.*]"
For example, this will exclude all projects with names starting with Exclude.Project and Python.
Just to post an answer and close the question: the issue was related to an error in the build that was not handled correctly, causing the service to never create its endpoint. For some reason Aspire still thought it was running fine and didn't notify me of any errors.
I am having the same issue trying to deploy my Next.js app using cPanel.
The issue seems to be related to Maven not being installed or configured on your system. Please check if Maven is installed.
To keep a div's height and width flexible while making sure the background image stretches with the div, you need to use background-size: cover or contain. However, there are pros and cons to both approaches.
background-size: cover stretches the image to fill the container while maintaining its aspect ratio; however, it clips the image if the image's aspect ratio does not match the container's.
background-size: contain stretches the image to fit the container while maintaining aspect ratio AND ensures the entire image is visible at all times; however, it leaves empty space beside or above/below the image if the aspect ratios do not match.
Lastly, background-size: 100% 100% keeps the image stretched to the container size, but it can distort the image, as it does not try to preserve aspect ratio.
Here is some code you can try out:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Background Image Example</title>
  <style>
    body {
      height: 100vh;
    }
    .background-div {
      width: 100%;
      height: 100%;
      background-image: url('https://via.placeholder.com/400x300');
      /* Change to contain to see both effects */
      background-size: cover; /* Scale the image to cover the entire div */
      background-repeat: no-repeat;
    }
  </style>
</head>
<body>
  <div class="background-div"></div>
</body>
</html>
I have created a full HTML background images course playlist on YouTube, which has detailed coverage of all these options with code examples. Most of the code examples in the common-issues portion use a div to hold the background image, so I think you would get good coverage of what you are trying to accomplish and more. I would strongly encourage you to watch it so you can handle all of these cases and end up with a professional-grade webpage that looks good in all scenarios. It has concept videos as well as individual videos for common issues with background images. Please let me know if it solved your query. If you prefer reading to watching, you can refer to the article-series version here.
You can open the page like this:
window.open("https://google.com", "_blank", "popup=true");
according to the MDN documentation.
Press Ctrl+Shift+P to bring up the command palette, the keyword is "Python: Select Interpreter".
You can see a list of Python Virtual Environments to choose from.
After you have chosen one, the selected Python Interpreter will take effect immediately.
The tabs aren't designed to wrap. They already take 100% of the width, and will scroll to show the currently selected tab.
I'm curious what behavior you wanted if there are too many tabs to fit. Presumably the tabs would word wrap? So the first tab would be "Looooong\nTab\n0" on three lines?
There are many ways you can improve readability. Some of them:
labels: {style: {color: 'color matching series color'}},
Demo of first 2 steps: https://jsfiddle.net/BlackLabel/v89r6jh4/
Can you see all fields in the view form and in the default view? Is it a critical field? Create a new field, copy all values to the new field, and delete the corrupted field.
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/yoursite" -Interactive
Get-PnPField -List "YourListName"
Remove-PnPField -List "YourListName" -Identity "FieldInternalName"
When I came here I didn't check the comments, but I finally found the answer myself, and then I saw the comments. To save other users from the same mistake, here is the query to find all the counted actions:
fields @timestamp, @message
| parse @message ',"nonTerminatingMatchingRules":[{"ruleId":"*","action":"*"' as rule, action
| filter action = "COUNT"
| sort @timestamp desc
| limit 20
You have three columns to filter on in this query (sku_id, package_sku_id, version), so what would really help is a combined index on those three columns. Separate single-column indexes are not very effective for this query.
create index idx on rb (sku_id, package_sku_id, version)
sender = "[email protected]"
I use the API https://graph.microsoft.com/v1.0/users/{sender}/sendMail to send mail
and get this response:
{"error":{"code":"MailboxNotEnabledForRESTAPI","message":"The mailbox is either inactive, soft-deleted, or is hosted on-premise."}}
What's wrong?
This is due to your decorator order:
@jwt_required
@app.route('/test')
def test():
    return "Hello"
will result in the same runtime error.
You have to call get_current_user() only after the token has been extracted and decoded by the flask-jwt-extended library. There is a working example here: Implementing roles with Flask-JWT-Extended.
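For reference, a minimal sketch of the corrected order, assuming a standard flask-jwt-extended 4.x setup (where jwt_required is called with parentheses); the secret key is just a placeholder:

from flask import Flask
from flask_jwt_extended import JWTManager, get_jwt_identity, jwt_required

app = Flask(__name__)
app.config["JWT_SECRET_KEY"] = "change-me"  # placeholder secret
jwt = JWTManager(app)

# app.route goes on the outside so the JWT-protected wrapper is what gets registered
@app.route('/test')
@jwt_required()
def test():
    # The token has already been extracted and decoded at this point,
    # so identity lookups are safe here.
    return f"Hello {get_jwt_identity()}"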
You need to go to the "Capabilities Management" tab of the Chrome Driver config and specify your user dir and profile dir there.
The answer was in the log:
cannot alter type of a column used by a view or rule
I dropped the view which was blocking the ALTER on the columns. After that, Hibernate updated the database structure and I recreated the view. Since then the application starts without errors with the parameter "spring.jpa.hibernate.ddl-auto" set to "update".
Following @M.Deinum's advice, I'll set this parameter to 'validate' and use Flyway for database structure updates.
The issue ended up being with my Mac, and not with Xcode itself.
In my case I was returning response.json(), and before this I was also console-logging response.json() (the response body can only be consumed once).
Based on @Piotr Siupa's answer, I find it more convenient to use it as a callback:
import { useRef } from 'react';

export default function useFirstRender(callback) {
  const ref = useRef(true);
  if (ref.current) {
    callback();
  }
  ref.current = false;
}
In a component:
...
useFirstRender(() => {
console.log("first");
});
...
You added the annotation processor, but you forgot to add the dependency:
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
The SDK product website typically provides information on whether it can be integrated with other SDKs within the same app. If you're looking to monetize beyond just ads, you might want to try PacketSDK. It generates income for you by leveraging the idle resources of app users. For more information, you can search for "packetsdk" or DM me.
Click on the (!) icon on the left to display a description of all errors in the file.
Unfortunately, Keycloak doesn’t provide a direct endpoint to list scopes assigned to a specific role. However, you can use the GET /admin/realms/{realm}/roles-by-id/{role-id} endpoint to fetch details about a role. If the role is composite, the response will include all associated roles and permissions (scopes). Alternatively, you can use GET /admin/realms/{realm}/clients/{client-id}/scope-mappings to retrieve all scope mappings for a client and cross-reference them with your role.
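As an illustration, a rough sketch of those two calls in Python with the requests library; the base URL, realm, role/client IDs and the admin access token are placeholders you would substitute:

import requests

BASE = "https://keycloak.example.com"  # placeholder host
REALM = "myrealm"                      # placeholder realm
HEADERS = {"Authorization": "Bearer <admin-access-token>"}

# Role details; composite roles list their associated roles and permissions
role = requests.get(
    f"{BASE}/admin/realms/{REALM}/roles-by-id/<role-id>",
    headers=HEADERS,
).json()

# All scope mappings for a client, to cross-reference with the role
scope_mappings = requests.get(
    f"{BASE}/admin/realms/{REALM}/clients/<client-uuid>/scope-mappings",
    headers=HEADERS,
).json()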
In a hashtable, each key can only appear once. If you want to store several values under the same key, use an array list or something similar as the value; it doesn't matter what you put in it.
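The same idea sketched in Python, purely for illustration: the key appears once, and the value is a list that collects the "duplicates":

from collections import defaultdict

entries = defaultdict(list)
entries["host"].append("server1")
entries["host"].append("server2")  # second value for the same key, nothing is overwritten
print(entries["host"])             # ['server1', 'server2']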
● Objective: To visualize the distribution and relationship of numerical and categorical variables in the dataset
Check your .env file. It should look like: SECRET_KEY="your secret key here"
I have the same challenge when authorising a Service Account to access Google Sheets. I only get the id_token in the response, not the access_token.
I suspect it may be something I need to add or change in the JWT Claim Set; therefore, I'm going to add the JSON output I receive in the browser when I post the id_token to https://www.googleapis.com/oauth2/v3/tokeninfo?id_token=... in a browser's address field. (I purposely changed some of the values in the "azp", "kid", and "sub" fields so as not to expose the actual values.)
JSON
alg: RS256
aud: https://www.googleapis.com/auth/worksheets,https://www.googleapis.com/auth/drive
azp: [email protected]
email: [email protected]
email_verified: true
exp: 1733472669
iat: 1733469069
iss: https://accounts.google.com
kid: 8c2a80af3fc12f13f44b168b6d5d0226eb0173c2
sub: 903102654606255428851
typ: JWT
I have to use Groovy as the development platform because it is part of the ScriptRunner (from Adaptavist) in an Atlassian Jira Cloud environment.
I would greatly appreciate help or ideas to try to solve this challenge.
Thank you so much for your attention and participation. Ben
I had the same problem when debugging a project in Visual Studio 2019 Community. The error was "A debugger operation is taking longer than expected". I went to IIS Manager and stopped and then started the application pool. You can also recycle the application pool.
We do have the same exact problem. We also tried submitting the report with a system user and the authorization for the user to create them. This works for not printing the spool request, but if users submit the report at the same time an identical spool-id is given, which leads to other problems.
Is there a solution for this?
The error is due to the UPDATE statement. When updating values in MySQL, you cannot use the VALUES keyword like you would in an INSERT statement. Instead, you need to assign new values directly to the columns.
Use this:
import mysql.connector

item_id = 123
query = (
    "UPDATE MyTable "
    "SET col1 = %s, col2 = %s, col3 = %s, col4 = %s "
    "WHERE item_id = %s"
)
items = (val1, val2, val3, val4, item_id)

cursor.execute(query, items)
Is there a way to break the line when using a variable instead of directly adding the text?
This is a peer dependency issue. You are using React 19, but your installed version of @testing-library/react doesn't support it. You can either upgrade @testing-library/react to at least version 16.1.0 or downgrade to React 18.
Reference: https://github.com/testing-library/react-testing-library/releases/tag/v16.1.0
To address this very problem, I have created spring-boot-starter-spark.
It has the latest Spark version 3.5.3 and Spring Boot version 3.4.0.
In case you need different versions, you can check out the source code on GitHub, update the versions, and build the starter jar.
Along with the dependency descriptors, it gives you customizable SparkSession auto-configuration and Spring Boot properties autocompletion assistance in IDEs.
I have also included a demo Spark job implemented as a Spring Cloud Task using this starter.
Try upgrading your @react-navigation/* packages to version 7+; it should resolve this error.
Labeling more documents will not improve the performance of pre-trained Generative AI processors, as they rely on fixed, pre-built models. However, for custom-trained processors, labeling more documents is essential as it helps the model learn and adapt to your specific use case, improving its accuracy over time.
If you're using the pre-trained foundation model and find its performance insufficient, consider switching to a custom processor where labeled data can make a meaningful difference.
For now, I fixed it by wrapping debug in waitFor:
await waitFor(() => {
screen.debug(undefined, 10000000);
});
What happens when you use only one "with open" with 'w+' ?
with open(r"\\path\to\file\datafile.txt", 'w+') as file:
    data = json.load(file)
    data.append(new_vars)
    json.dump(data, file)
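For comparison, a small sketch of a single-open variant that can actually read the existing JSON first; it assumes the file already contains a JSON list and that new_vars is defined elsewhere, and uses 'r+' because 'w+' truncates the file on open:

import json

with open(r"\\path\to\file\datafile.txt", "r+") as file:
    data = json.load(file)   # read the existing list
    data.append(new_vars)    # new_vars is assumed to exist in the surrounding code
    file.seek(0)             # rewind before writing the updated list
    json.dump(data, file)
    file.truncate()          # drop any leftover bytes from the old content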
I don't know exactly, but I guess that .knm files are Kotlin Native and/or Multiplatform compiled artifacts.
You need to reference the JVM version of the library:
<dependency>
    <groupId>dev.inmo</groupId>
    <artifactId>krontab-jvm</artifactId>
    <version>2.6.1</version>
</dependency>
After trying to fix it for two days, I found out that I had created metro.config.js manually, so I deleted it and ran the command below in the terminal:
npx expo customize metro.config.js
This generates the metro.config.js file for you in the root folder.
Then I pasted the code below into the file:
const { getDefaultConfig } = require("expo/metro-config");
const config = getDefaultConfig(__dirname);
// Added this line:
config.resolver.assetExts.push("bin");
module.exports = config;
and it started working.
I have to implement the same thing you asked about. Did you find anything useful for this? It would be a great help if anyone could assist me with it.
This is a look back from a later question, since some searches only show this post instead of that one.
You can mark the accents in a colour different from the base letters, without the text-shadowing hack (demonstrated in other answers), by changing ñ to n͏̃, where the combining grapheme joiner (U+034F) is inserted after the base letter and before the combining diacritic marks. There is another workaround, not as evil as text-shadowing, but it will introduce some offsets. More on that down below.
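As a small illustration of the sequence just described, here is how the codepoints line up (a tiny Python sketch; whether the colouring then works still depends on the browser, as noted below):

base = "n"
cgj = "\u034F"    # COMBINING GRAPHEME JOINER
tilde = "\u0303"  # COMBINING TILDE
print(base + cgj + tilde)  # renders as n͏̃ where the font and rendering engine support it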
This is 10 years after the question was posed, but allow me to answer it.
I ran into this problem on my journey through learning Sanskrit, as the Indic scripts mark the vowel on top. The fonts work the same as Latin ones, with ते drawn as त and a े above.
Then there was the fiddle in the aforementioned post, whose expected behaviour is red diacritics and blue base letters, as shown in the image. Happy after some tests, I decided to make a colourful grammar chart and shared it with my mates. Later on I got complaints that no one other than me was able to see the correctly rendered chart.
It turns out that Chromium somehow doesn't support this. I tested the said fiddle on both my Win10 and Win11 machines, with Chrome 131.0.6778.109, Edge 114.0.1823.86, Brave 1.73.97, Chromium 131.0.6778.108 and Opera 115.0.5322.77. All showed just blue; only Firefox works.
As this other answer from the mentioned related post suggests, there is a workaround using another typeface/weight. The visual effect is worse than shadowing though.
As mentioned by @Damiao, we can use the syntax below in the query, as described in the Google documentation.
Query syntax format -
[PROJECT_ID.]region-REGION.INFORMATION_SCHEMA.JOBS[_BY_PROJECT]
Example: myproject.region-us-west1.INFORMATION_SCHEMA.JOBS
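For instance, a minimal sketch with the google-cloud-bigquery Python client; the project ID, region and the column list are placeholders for illustration:

from google.cloud import bigquery

client = bigquery.Client(project="myproject")  # placeholder project ID
sql = """
    SELECT job_id, user_email, creation_time, state
    FROM `myproject`.`region-us-west1`.INFORMATION_SCHEMA.JOBS
    ORDER BY creation_time DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.job_id, row.user_email, row.state)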
We can refer to the Google documentation for an example. Posting this answer as a community wiki for the benefit of others who might encounter this use case in the future. Feel free to edit this answer with additional information.
Check if any of these answers work for you
The issue was that the way I was adding the section that overrides the webpack settings was incorrect, and I also needed to check the if (!isServer) condition.
plugins: [
  function customWebpackPlugin() {
    return {
      name: 'custom-webpack-plugin',
      configureWebpack(config, isServer, utils) {
        // Apply this configuration only for client-side builds
        if (!isServer) {
          return {
            optimization: {
              runtimeChunk: {
                name: () => `runtime.main`, // Custom runtime file name
              },
            },
            output: {
              ...config.output,
              filename: 'assets/js/[name].[contenthash:8].js', // Custom path for main files
              chunkFilename: 'assets/js/[name].[contenthash:8].chunk.js', // Custom path for chunk files
            },
          };
        }
        return {};
      },
    };
  },
],
Once I added the above code to docusaurus.config.ts, it no longer adds the tilde.
Thank you, Bharat.
Not sure about the first error, but the second one can be resolved by adding scriptType to module.exports:
module.exports = {
  output: {
    uniqueName: "v17",
    publicPath: "auto",
    scriptType: "text/javascript",
  },
  ....
You could consider using the @Generated annotation to indicate that the code in your model classes is auto-generated and should be excluded from SonarQube's analysis.
You should take a look at this question: what is the use of @Generated Lombok annotation
I am using both an APNs auth key and an APNs certificate for two of my Firebase projects. Will this update affect that in any way?
If I may ask, how did you resolve the MTU size issue? We are having a similar issue where it worked on all previous versions of Android we tested, but it does not work on Android 14.
Regards, Chris
What would be the most feasible way to update these?
This blog provides recommended actions when your project uses a package with a known vulnerability. Our recommendation is to prefer updates to packages “closest” to your direct references.
For example Package A has a dependency on package B, which in turn has a dependency on package C. In this example, we’ll consider that package C version 1.0.0 has a known vulnerability, fixed in version 2.0.0.
Recommendation steps:
If you want to upgrade transitive packages, you can do:
1. Add the fixed package version as a direct package reference.
2. Use Central Package Management with the transitive pinning functionality.
What would happen if the 3rd party upgrades the vulnerable package themselves and I've already installed another version?
Updating the top-level package can automatically update the vulnerable transitive package as well. I think it is possible to have two versions of the same NuGet package installed in a project, one as a transitive package and another as a direct reference; NuGet then picks the version that satisfies the most constraints.
Docs Referred:
NuGetAudit 2.0: Elevating Security and Trust in Package Management
Some people have asked similar questions. You can refer to the following; Firebase will help to update it: https://stackoverflow.com/a/79203819/5957749
I also faced the same issue, but the trick that worked for me is as follows:
For test automation reporting you can use testreport.io; it also has direct Jira and Slack integration.
https://stackoverflow.com/a/79024366/21133532
This works, but make sure the framework path is correct.
Hello, community!
I've been struggling with this issue for almost two weeks now and need some help. I'm hosting a simple app on my server to test it online. To set it up properly, I decided to use Nginx as a reverse proxy for my Node.js application.
I want to be able to connect to my app using HTTPS (port 443) without having to specify any port in the URL. Currently, I can connect to the server on different ports (e.g., 8080 or 443), but I always need to include the port number explicitly. I would like to make this seamless and work as expected for HTTPS.
Network Configuration:
Public 80 -> Private 80
Public 443 -> Private 443
Public 8080 -> Private 8080
Example of the port forwarding configuration:
Public Private Protocol
80-80 80-80 TCP
443-443 443-443 TCP
8080-8080 8080-8080 TCP
Node.js Express Server:
The server/index.js file:
const express = require('express');
const routes = require('./routes');
const path = require('path');

const app = express();
const httpPort = 8080; // Port to listen for HTTP

// Middleware for processing JSON data
app.use(express.json());

// API routes
app.use('/api', routes);

// Static files served by Vue.js
app.use(express.static(path.join(__dirname, "../client", "dist")));

// Error handling for static files
app.use((err, req, res, next) => {
  res.status(500).send('Something went wrong!');
});

// Vue.js default route
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, "../client", "dist", "index.html"));
});

// Start the HTTP server
app.listen(httpPort, () => {
  console.log(`Server running on http://localhost:${httpPort}`);
});
Nginx Configuration:
server {
    listen 80;
    server_name MY_SERVER_NAME;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name MY_SERVER_NAME;

    # SSL certificates (Let's Encrypt or your own certs)
    ssl_certificate /home/<user>/certificates/fullchain.pem;
    ssl_certificate_key /home/<user>/certificates/privkey.pem;

    # Configure the proxy to pass traffic to Node.js (HTTP)
    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Firewall Rules (UFW):
To Action From
443/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
8080/tcp ALLOW Anywhere
Checking Sockets:
With ss -tuln, I can see:
tcp LISTEN 0 511 0.0.0.0:443 0.0.0.0:*
tcp LISTEN 0 511 *:8080 *:*
Despite all this, I cannot connect to my app seamlessly on HTTPS (port 443). I still need to manually specify the port in the URL for the connection to work.
Thanks in advance for your help! Any insights would be greatly appreciated.
The issue can occur because there is already an artefact deployed on the server. Are you able to start the Payara server normally (not in debug mode), undeploy the artefact, stop it, and then restart it again in debug mode?
As it says, "Failed to bundle asset files. Build failed."
If you have files inside the assets folder, you have to add the correct path so they can be located. For example, if you have
assets/icons/files.png
assets/icons/files2.png
etc., then the assets section in your pubspec.yaml file should look similar to this:
assets:
- assets/icons/
Then it should work.
A simpler method:

gaps = []
with open("SCB_earthquakes.xml", "r") as infile:
    lines = infile.read().split("\n")

for line in lines:
    if '<azimuthalGap>' in line:
        gaps.append(line.split('<azimuthalGap>')[1].split('<')[0])
OK, there is a solution, plus I have additional information regarding the error in Visual Studio when attempting to connect to your Azure DevOps repo and getting "We could not refresh your credentials".
On the issue of invites not being sent: yesterday, as mentioned, a new organization was created, users were added, and invites were sent out but never received. Attempting to log in to Azure DevOps yesterday was denied. As of today, 12+ hours later, I still had not received the invite; however, I attempted to log in and was granted access. Not sure if that was a fluke; it is definitely a bug in Azure DevOps.
Regarding the second issue relating to the picture above: cloning a repo in Visual Studio is generally pretty straightforward, Team Explorer -> Connect to Project. If you're already signed into your account, you should see a list of servers available with associated repos. Another bug in Azure DevOps and VS: if/when servers are not found, you can enter the URL directly, but this results in an error advising you to select an account from the dropdown list, which will inevitably tell you no servers were found.
Solution: don't muck around in VS. Open a Git Bash, create a directory where the project should live, then run git clone <link to repo>, which will force Git Credential Manager to ask for a PAT.
Don't waste your time trying to remove credentials from Windows Credential Manager.
The error was fixed by creating the UBIFS image with a LEB size that does not include the UBI headers, running ubinize on that image, and then flash-erasing and writing the resulting UBI image.
So basically, whenever you change the state of an element in React, if you have initialised the state as an empty array and then set the state to a single value/string, it will give this error. If you want the user to select only one value, you can initialise the state as an empty string.
I am also working on a React Native project. My development machine is Windows, and I am getting an error when I try to open the project with Expo Go on my iPhone. How did you achieve this with tunneling, and can you explain it to me?
I run it fine with:
ruby userinput.rb
What are you using to run your source code?
I've been working on this recently.
If you need real query execution, you need to replace expo-sqlite with Node.js's sqlite3, because Jest runs on your machine, not on the mobile device.
I'm trying to create a wrapper around sqlite3 that behaves like expo-sqlite.
You don't need a full wrapper, but expo-sqlite and sqlite3 have very different APIs; sqlite3 can still be wrapped for tests, it just doesn't have a Promise API and doesn't provide the same API style.
I need some help or guidance. I want to run Debian-based Docker images on a GKE cluster node pool; with Ubuntu it is not working, and I cannot modify/change the Docker images. Is there any way to start a custom node in GKE with a Debian OS, or to handle it somehow with the Ubuntu one? For example, could I deploy a second disk with Debian and then use it as the primary disk for each node pool? Please help if anyone has an idea on this. Thanks.
For Solr version 8.11.0, I just did an update to log4j 2.24.2 by simply exchanging the five log4j*.jar files under /opt/solr-8.11.0/server/lib/ext. But after that Solr doesn't respond; it returns HTTP ERROR 404 Not Found (URI: /solr/, STATUS: 404, MESSAGE: Not Found, SERVLET: -). Please help me.
Try this
<button @contextmenu.prevent="handleClick"> hold to play</button>
function handleClick() {
  setTimeout(() => {
    plyavideo.value = true
  }, 1000)
}