Ok, apparently I'm not the only one to think this is not good, and many others had the same issue:
There is extra overhead when using cgo to call C functions from Go, which causes the performance problems. It can slow things down even if you're using SIMD in C.
Try comparing the performance of plain C functions (without SIMD) via cgo with Go's native performance to see how much overhead cgo adds. Also, make sure you're using compiler optimizations like -O3 and that your memory is aligned properly for SIMD.
You can also try parallelizing the work or look for Go libraries that use SIMD directly, avoiding cgo, in case the cgo overhead is still a problem.
Try compiling your model after loading it and before calling the evaluate function.
In the latest version of react-router-dom (v6), use the useLocation() hook to access the state or data passed to the child component.
import { useLocation } from "react-router-dom";
const location = useLocation();
console.log(location.FiltersDataList1) // To access the value
I know it's an old question, but as of 2025, I'd solve it like this:
int count = myEnumerable?.Count() ?? 0;
You mentioned that:
On each page, you need to make authenticated API calls to Google Drive. When you log in on a page, you obtain an access token via Google login, and your API requests work correctly. However, refreshing or navigating to a different page forces a re-login every time.
You are implementing an automatic Google login to solve the problem on the main page and storing the access token in Redis. The idea is to reuse the same token across pages so that users don’t have to log in again.
Upon researching your problem, I found in this documentation that access tokens have a limited lifetime.
Access tokens have limited lifetimes. If your application needs access to a Google API beyond the lifetime of a single access token, it can obtain a refresh token. A refresh token allows your application to obtain new access tokens.
By implementing refresh tokens in your OAuth 2.0 flow, you can ensure uninterrupted access to Google APIs for your application without requiring the user to authenticate every time the access token expires. But you should keep in mind the reason for the refresh token expiration.
You may also refer to this SO post: Why does Oauth v2 have both access and refresh tokens
Additionally, the following documentation might help you understand your current limitations:
You can also check out this article for a practical implementation guide:
For further understanding, refer to the official specification:
You can type the navigation props like this:
import { NavigationProp, ParamListBase } from "@react-navigation/native";
type HomeScreenProps = {
  navigation: NavigationProp<ParamListBase>;
};

export default function HomeScreen({ navigation }: HomeScreenProps) {
  ...
}
"typescript": "3.2.4", "@types/lodash-es": "4.17.5" can't upgrade typescript as project is in angular 7 can someone suggest a solution
I don't know why Apple seems to love to change this setting with every update, but in my case, I had to change the version numbers in those fields to make it work:
Select your project, select your target, open info tab and edit the "Bundle Version" and "Bundle version string (short)" values.
function setUp() {
  // Save the h3 element
  var element = document.getElementById("initial");
  // Create the interval, saving its id so we can clear it later
  var timer = setInterval(function() {
    // Current element value
    var value = element.innerHTML;
    // Decrease
    value--;
    // Set value
    element.innerHTML = value;
    // Clear the interval once we reach 0
    if (value === 0) {
      clearInterval(timer);
    }
  }, 1000);
}
<button onclick="setUp()">Activate timer</button>
<h3 id="initial">60</h3>
In my case, I tried many things but nothing worked. My project was quite old and its minimum iOS version was deprecated, which was the main cause of this issue. I updated the minimum supported iOS version to 15.6 in Xcode and in the Podfile, and all my pods finally updated to the latest versions.
In Podfile e.g:
platform :ios, '15.6'
Note: 15.6 works for my case. Always check Minimum iOS supported version.
post this on stackexchange.com
I have the same problem. Were you able to solve it? If so, how did you fix it?
Use @RequestPart in Spring Boot. Below is an example:

@PostMapping
public ResultModel sendNotification(
        @RequestPart("baseNotificationModel") EmailNotificationModel baseNotificationModel,
        @RequestPart(value = "file", required = false) MultipartFile[] file)

In Postman, make the following changes to call the API: choose a Content-Type for both the model (DTO) and the file. If the Content-Type column isn't displayed, click Bulk Edit and add the Content-Type column.
I have similar requirements to the OP. I have tried to find the template "Update SharePoint list item when linked work item is changed in DevOps | Microsoft Power Automate)" but I cannot seem to locate it.
Any ideas?
I'd recommend reaching out to Stripe support to see if they have a solution for you.
Computer\HKEY_CURRENT_USER\Software\Microsoft\FTP\Accounts\
Look for the FTP-related subfolder there and make your changes.
For Chrome, you can also force this setting through a command line switch:
--force-prefers-reduced-motion
--force-prefers-no-reduced-motion
See https://peter.sh/experiments/chromium-command-line-switches/ for a list of all Chrome command line switches.
This can be very useful when you're running Chrome headless or in CI.
First, calculate the download speed over a 5-10 second period:

speed = downloaded_bytes_from_all_8_connections_in_last_10_seconds / 10

This gives you the average speed over the last 10 seconds. Then use simple math to get the remaining time:

remaining_number_of_seconds = total_remaining_bytes / speed
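A minimal sketch of that calculation (the function and variable names here are my own):

```python
def eta_seconds(bytes_in_window: int, window_seconds: float, remaining_bytes: int) -> float:
    """Estimate remaining download time from bytes observed in a recent window."""
    speed = bytes_in_window / window_seconds  # average bytes per second over the window
    if speed == 0:
        return float("inf")  # nothing downloaded yet; no meaningful estimate
    return remaining_bytes / speed

# 10 MB observed over the last 10 s, 5 MB left -> 5 s remaining
print(eta_seconds(10_000_000, 10, 5_000_000))  # 5.0
```

Recomputing this every second over a sliding window keeps the estimate responsive to speed changes.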
Was there a fix on this issue?
"Thanks Bill. This worked... I just changed one character in the header and got the result – Mandarb Gune "
Which character in the header did you change?
Edit
Change from
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
to
headers = {'User-agent': 'Mozilla/5.0'}
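For reference, here is a self-contained sketch of how the simplified header is attached to a request (the stdlib urllib is shown here, though the original snippet was presumably using the requests library; the URL is a placeholder):

```python
from urllib.request import Request

# Only the User-agent header is set; no network call is made here.
headers = {'User-agent': 'Mozilla/5.0'}
req = Request('https://example.com/page', headers=headers)
print(req.get_header('User-agent'))  # Mozilla/5.0
```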
I couldn't even uninstall it, because the processes were blocked. To nobody's surprise, the antivirus was the culprit. Internal IT reinstalled the antivirus to solve it.
Check out this tutorial on YouTube: https://youtu.be/Okf0dzCINXg?si=ADiBkd8ADytinb1U
As h=2, when updating w_1^2 we have G_{0:2}^\lambda = (1-\lambda)G_{0:1} + \lambda G_{0:2}, where G_{0:1} = R_1 + \gamma\hat{v}(S_1, w_1^1) and G_{0:2} = R_1 + \gamma R_2 + \gamma^2\hat{v}(S_2, w_1^1); we then update with G_{1:2}^\lambda = R_2 + \gamma\hat{v}(S_2, w_1^2).
Debugging a randomly unresponsive React.js web page can be tricky, but here's a structured approach to identify and fix the issue:
Open the browser developer tools (F12 or Ctrl+Shift+I in Chrome).
Review your useEffect hooks to prevent infinite loops.
Add console.log inside event handlers to check if they are firing too frequently.
Use useCallback or useMemo to optimize function calls.
Return a cleanup function from useEffect to clean up side effects.
Wrap your app in <React.StrictMode> in index.js to catch potential issues in development mode.
Avoid while(true) {} loops or heavy calculations in the main thread.
In the cppcheck GUI, you can configure your project settings under [File], then check the Qt checkbox.
I think I got the answer.
Though the table did contain 20,000 records, with the joins on the other 2 tables the total number of rows produced is only 16,994, so when the limit starts at 17,000 I get no results.
I should have tried the query without the limit beforehand; that's how I finally found this.
I needed to use Qt 5.12.12 with MSVC 2017 and downloaded the common dependencies from the FTP mirror, but got an error in the Kits tab in Qt Creator: "Qt version is not properly installed, please run make install". I then installed the offline Qt 5.12.12 and compared the two qmake.exe files. They differ in the internal variables qt_prfxpath, qt_epfxpath, and qt_hpfxpath: in the FTP build they all equal 'c:/Users/qt/work/install', while in the offline installer case they are 'C:\Qt\Qt5.12.12\5.12.12\msvc2017_64'. To get rid of the error, patch qmake.exe so these variables point to your actual Qt path.
My experience shows that order by does not serialize the row set, but sort by does.
Your Flask app is designed to take a username and a game link from a form, process them using the GameReview class, and display the results:
The home route (/) serves the index.html template.
The /review route processes form submissions, retrieves the username and game link, and uses the GameReview class to fetch game reviews.
If successful, it renders review.html with the results; if an error occurs, it renders error.html.
The app runs in debug mode for easier development.
I used the same version. Why isn't there an error?
You can read this blog about the react-native-responsive-screen library, how to use it, and how it works: https://reactnativeresponsive.hashnode.dev/making-react-native-apps-fully-responsive-with-react-native-responsive-screen
It worked for me leaving it like this and restarting PostgreSQL. Don't forget to first assign a password to the postgres user:
local all postgres md5
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
local replication all peer
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
The following steps resolved the issue for me:
VS Code >> View >> Command Palette >> Clear Cache and Reload Window
Does anyone by any chance have a cached version of the 2.11.3 or 2.11.4 AAR file? Can you share it? I can't find it anywhere and I am also unable to build from sources.
It turned out not to be an issue in my code but an issue with the low-code environment I was working in, which didn't allow passing entire datasources via the parameters.
And it will execute all the commands stored in the module/program/script that you have opened and show you the complete output in a separate Python shell window. [fig. 5.3(b)]
I had the same issue when the network was disconnected. As a workaround, we developed our own SID/username cache, which improved responsiveness and latency.
Check with Native Android Theme in Manifest - Use DayNight
Can we access ODBC drivers from an Azure web service application? For example, we have Simba Spark installed on a server and the Azure web service is trying to access this driver.
I removed the seq from the source query and added it directly in the update statement; it is working now, thanks.

MERGE INTO tab_a tgt
USING (SELECT col1, col2 FROM tab_a f) src
ON (joining condition)
WHEN MATCHED THEN
  UPDATE SET tgt.seq = seq.nextval;
Just changing
<style name="AppTheme" parent="Theme.AppCompat.DayNight.NoActionBar">
to
<style name="AppTheme" parent="Theme.MaterialComponents.DayNight.NoActionBar">
Use the table element for what you want to do. When you use the table element, you have the index of each column and row of the table in the DOM.
I spent all week looking for the answer to pre-filling hidden fields in Google Forms, and found the answer in another post.
Here's how to do it
The link will look similar to this:
https://docs.google.com/forms/d/e/<your form id>/viewform?usp=pp_url&entry.798192315=Foo1&entry.798192331=Foo2
In this example, entry.798192315 relates to the first field, and entry.798192331 relates to the second field.
Replace the values of Foo1 and Foo2 with the values you want to inject as hidden fields, as shown below (I've used UserID and ResponseID).
https://docs.google.com/forms/d/e/<your form id>/viewform?usp=pp_url&entry.798192315=MyUserID&entry.798192331=ThisResponseID
Finally, you need to add &pageHistory=0,1,2 to the end of the URL; this indicates that the form has 3 sections.
https://docs.google.com/forms/d/e/<your form id>/viewform?usp=pp_url&entry.798192315=MyUserID&entry.798192331=ThisResponseID&pageHistory=0,1,2
Now when the user fills the form and clicks submit, the hidden field data will be stored in the form response.
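The prefilled URL above can also be built programmatically; here is a small sketch using Python's standard library (the form id and entry ids are the placeholders from the example):

```python
from urllib.parse import urlencode

base = "https://docs.google.com/forms/d/e/<your form id>/viewform"
params = {
    "usp": "pp_url",
    "entry.798192315": "MyUserID",        # first hidden field
    "entry.798192331": "ThisResponseID",  # second hidden field
    "pageHistory": "0,1,2",               # the form has 3 sections
}
url = base + "?" + urlencode(params, safe=",")
print(url)
```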
I spent all week looking for the answer to pre-filling hidden fields in Google Forms, and found the answer here: https://stackoverflow.com/a/79299369/29738588
Try the steps below to see if that helps.
Have you tried to reproduce the issue in a newly created project, or is the issue specific to the current project?
Quick fix with a pragma:

#pragma warning(disable:4996)

Tested and working :))
I experienced this on macOS, and I had to make sure I pointed xcode-select -s to the Xcode.app I am using.
I would like to know if PJSIP partial porting can be done without using ioqueue and PJSocket. If yes, could you please provide any reference?
Maybe too late to answer, but the Roslyn library does support interprocedural analysis. It is implemented in the base class DataFlowOperationVisitor, and many built-in analyses, like value content analysis, consume these utilities. You need to configure what kind of analysis you need (context-sensitive or context-insensitive) and then invoke the TryGetResult method of ValueContentAnalysis to see the results.
This explicitly uses the $regex operator. It searches for all names that start with "S". It is a more flexible approach because you can add additional regex options like case insensitivity ($options: 'i').
db.Employees.find({name: {$regex: /^S/}});
db.Employees.find({name: {$regex: /^s/, $options: 'i'}});
I had the same issue. I upgraded the ConstraintLayout version to 2.0.2 and it's working fine now. Thanks.
Could you give more details about this device, like whether it is hybrid or AAD joined? While the entries of Intune locations with the old name here seem concerning, I still feel we need to look at DNS if it is hybrid, or check for any duplicate entries (with the old and new name) in the Intune/Entra portal that need cleanup. You can try re-enrolling the device rather than directly cleaning the registry.
Hope that helps!
Understanding OEM_PAID and OEM_PRIVATE Networks in Android: https://www.devgem.io/posts/understanding-oem-paid-and-oem-private-networks-in-android
Sometimes all you need to do is uninstall the extension, reload extensions, and reinstall. That's what worked for me.
Consider these alternatives, which might be more cost-effective:
Pre-populated Lookup Table: Create a table with common public email domains. Join your subscriptions_table with this lookup table to classify domains. This avoids AI calls altogether.
Rule-Based Classification: Develop a set of rules based on domain patterns (e.g., contains "gmail", ends with ".edu"). You can implement this using SQL within BigQuery.
Hybrid Approach: Use generative AI for a smaller subset of ambiguous domains that cannot be easily classified with rules or a lookup table.
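The rule-based option can be sketched in a few lines; the domain list and rules below are illustrative, not exhaustive:

```python
# Classify an email domain without any AI calls.
PUBLIC_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def classify_domain(domain: str) -> str:
    domain = domain.lower()
    if domain in PUBLIC_DOMAINS:
        return "public"
    if domain.endswith(".edu"):
        return "education"
    return "business"  # fallback for everything the rules don't cover

print(classify_domain("gmail.com"))  # public
```

The same logic translates directly into a SQL CASE expression or a lookup-table join in BigQuery.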
General Best Practices
Start Small: Begin with a small sample of data to test your prompts and estimate costs.
Monitor Usage: Track your token consumption closely using BigQuery's monitoring tools.
Explore Alternatives: Always consider whether simpler, non-AI solutions can achieve similar results.
Stay Updated: Keep an eye on BigQuery's documentation for any pricing changes or new features that can help you optimize costs.
MERGE INTO employees e
USING (SELECT 101 AS emp_code, 'John Doe' AS emp_name FROM dual) src
ON (e.emp_code = src.emp_code)
WHEN MATCHED THEN
  UPDATE SET e.emp_name = src.emp_name
WHEN NOT MATCHED THEN
  INSERT (emp_id, emp_code, emp_name)
  VALUES (emp_seq.NEXTVAL, src.emp_code, src.emp_name);
In the MySQL config file (my.ini), commenting out the "innodb_force_recovery = 1" line helped me solve the error. Make sure to restart the server after making changes.
I have reviewed the Zoom SDK documentation and your code, and I suspect that the meeting you are trying to join is not in an active state; that might be the reason you are not able to join.
Kindly verify that. Also, if you are starting the meeting, make sure to provide a zak (Zoom Access Key) token.
I encountered the same issue working on a Google Cloud Function to read from a BigQuery table.
I found this question: "from google.cloud import bigquery ModuleNotFoundError: No module named 'db_dtypes'", where they referenced using google-cloud-bigquery[all] instead of google-cloud-bigquery. I updated my requirements.txt file to use google-cloud-bigquery[all], and the error disappeared.
If you are using Windows, the commands are: for copy, Ctrl + Fn + Insert or Ctrl + Insert; for paste, Ctrl + V.
Here is the coin-collecting animation code. It animates the image on a timer.

func animateCoin() {
    count = count + 1  // `count` is an Int property on the view controller
    let moneyView = UIImageView(image: UIImage(named: "Your image name"))
    moneyView.frame = CGRect(x: 100, y: self.view.frame.size.height - 350, width: 100, height: 100)
    view.addSubview(moneyView)
    UIView.animate(withDuration: 1.0, animations: {
        moneyView.frame.origin.x = 250
        moneyView.frame.origin.y = 100
        AudioServicesPlaySystemSound(1122) // requires import AudioToolbox
    }) { _ in }
}

func startCoinAnimation() {
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
        self.animateCoin()
        if self.count == 10 {
            timer.invalidate()
            self.count = 0
        }
    }
}

Here is a demo link showing how the animation looks: https://www.youtube.com/watch?v=GR9hIpNeL-U
Well, I navigated to the GitHub link, downloaded the npsql MSI file, and installed it. I refreshed my browser and it worked.
Steps to create the key.json file:
Access Google Cloud Console: Go to the Google Cloud Platform console.
Navigate to Service Accounts: Under "IAM & Admin", select "Service Accounts".
Select your Service Account: Choose the service account you want to access the key for.
Create a new key: Click "Keys" > "Add Key" > "Create new key".
Choose JSON format: Select "JSON" as the key type.
Download the file: Click "Create" to download the "my-key.json" file.
Uploading to Colab:
Open your Colab notebook: Access your Google Colab notebook.
Use the file upload feature: In the left sidebar, click on the folder icon to access the file explorer and select "Upload".
Select "my-key.json": Choose the downloaded "my-key.json" file from your computer.
Here is the curated list of providers https://github.com/learnk8s/free-kubernetes
Is there a way to display weight and volume in the same line?
For example: weight, volume or weight|volume
The above code helped me a lot.
Your problem is most likely due to environment variables.
As an alternative answer, try installing Node.js using Volta, because Volta will handle the variables for you: https://docs.volta.sh/guide/getting-started
For the Class ID:
This ID must be unique across all classes from an issuer. It needs to follow the format issuerID.identifier, where issuerID is issued by Google and identifier is chosen by you. The unique identifier can only include alphanumeric characters, ., _, or -.
Google Wallet - Generic Class reference
For the issuerID, you can get it from your Google Pay & Wallet Console.
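As a quick sanity check of the issuerID.identifier format, a sketch like this can validate ids before creating classes (it assumes the issuer id is numeric, which is how the console typically issues them; the sample id below is made up):

```python
import re

# issuerID (digits) + "." + identifier (alphanumerics, '.', '_', '-')
CLASS_ID_RE = re.compile(r"\d+\.[A-Za-z0-9._-]+")

def is_valid_class_id(class_id: str) -> bool:
    return CLASS_ID_RE.fullmatch(class_id) is not None

print(is_valid_class_id("3388000000000000000.my-pass_class"))  # True (made-up issuer id)
```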
After 10+ years of generating the view on the server (Perl, JSP, servlets), I'm coming down on the side of generating the view on the client, and I'm putting a significant amount of energy into doing this in my latest project (using raw JavaScript, by choice). However, Stack Overflow's rules suggest that posts intended to generate discussion are not suitable for this forum.
You're right, the documentation on Vertex AI quotas primarily focuses on the what (the limits themselves) rather than the how (the specific algorithm for calculating and enforcing those limits). This is a common practice for cloud providers, as the exact implementation details of rate limiting are often considered internal and subject to change.
However, you can infer some general principles based on common rate-limiting strategies and the information available:
General Rate Limiting Principles
Sliding Window Rate Limiting: It's highly probable that Vertex AI uses a sliding-window algorithm rather than fixed time windows. This means the rate limit is calculated over a moving time interval rather than discrete, fixed intervals. For example, instead of resetting at the top of each minute, the system might calculate the number of requests or tokens consumed over the last 60 seconds. This provides a smoother and more flexible rate-limiting mechanism, and it aligns with your second scenario, where limits move as requests are made.
Token Bucket Algorithm: Another common approach is the token bucket algorithm. In this model, a "bucket" is filled with "tokens" at a certain rate. Each request consumes a token, and when the bucket is empty, requests are rejected. This allows for bursty traffic while still enforcing an average rate limit.
Per-Project and Per-Region Limits: Vertex AI quotas are typically enforced on a per-project and per-region basis. This means that your usage in one project or region won't affect your limits in another.
Quota Increases: Google Cloud provides mechanisms to request quota increases for Vertex AI. This indicates that the limits are configurable and can be adjusted based on your specific needs.
API-Specific Limits: The quota limits are also API-specific, meaning that different APIs within Vertex AI will have their own separate limits.
Why the Exact Algorithm Isn't Public:
Security and Abuse Prevention: Publishing the exact details of the rate-limiting algorithm could make it easier for malicious actors to circumvent the limits.
Flexibility and Optimization: Google Cloud may need to adjust the algorithm over time to optimize performance and handle changing traffic patterns. Keeping the details internal allows for greater flexibility.
Complexity: The systems that manage rate limiting within a large cloud platform are very complex; providing a full explanation of them would be very difficult.
What You Can Do:
Monitor Your Usage: Use the Google Cloud Console and Cloud Monitoring to track your Vertex AI usage and stay within your quotas.
Request Quota Increases: If you anticipate exceeding your quotas, submit a quota increase request through the Google Cloud Console.
Implement Retry Logic: In your application, implement retry logic to handle rate-limiting errors (e.g., HTTP 429 Too Many Requests).
Optimize Your Requests: Minimize the number and size of your requests to reduce your consumption of tokens and other resources.
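The retry-logic suggestion can be sketched as follows; the exception type, callable, and delays are placeholders you would map onto your client library's actual 429/ResourceExhausted error:

```python
import random
import time

def call_with_retries(request_fn, max_attempts=5, base_delay=1.0):
    """Retry a callable on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except RuntimeError:  # stand-in for your client's rate-limit exception
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # exponential backoff (1x, 2x, 4x, ...) plus random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```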
Oh goodness, I've been on this for about a week now; I was scared for my job. But now it's good after I uninstalled Language Support for Java(TM) by Red Hat. Thank you so much @Jinto Joseph and @antrollingsid.
Although mine was showing: "The supplied phased action failed with an exception. A problem occurred evaluating settings 'android'. assert localPropertiesFile.exists() | | | false C:\Users\Emmanuel\Documents\BD\android\local.properties Java(0)"
I want to use conditional formatting to replace empty or blank cells in a column with a dash, and highlight formatting for non-empty or dashed cells.
Perhaps you should change the executor handler number from 1 to 2?
Before adding a public IP, you need to attach your VPC to an Internet Gateway (IGW); this creates a bridge between your VPC router and the IGW router. When you add a public IP, it sits outside of your VPC on the IGW router, and AWS automation NATs it to your VPC router with your EC2 instance's private IP. When a request is sent from the internet to your EC2 instance, the public IP terminates at the IGW router, then the request is forwarded to your VPC router and on to your EC2 private IP. You can see the communication in the VPC flow logs: the end user's public IP reaching the EC2 private IP. AWS documentation does not explain all of this.
After updating firebase_messaging from ^14.6.9 to ^15.0.0 and firebase_core from ^2.17.0 to ^3.0.0, the issue was resolved. There was no need to add a module interface, and the project builds successfully.
Well, the problem was that while the XML files were loading, everything else stopped, which means it was a code problem.
user@ubuntu:~/backend$ docker logs 5d6e26ae2686
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v3.3.3)
2025-02-25 13:08:00 - Starting BackendApplication v0.0.1-SNAPSHOT using Java 17-ea with PID 1 (/app/app.jar started by root in /app)
2025-02-25 13:08:00 - The following 1 profile is active: "test"
2025-02-25 13:08:02 - Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2025-02-25 13:08:02 - Finished Spring Data repository scanning in 189 ms. Found 24 JPA repository interfaces.
2025-02-25 13:08:03 - Bean 'webServiceConfigSoap' of type [com.backend.backend.addons.smart_meter.config.WebServiceConfigSoap$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected into a currently created BeanPostProcessor [annotationActionEndpointMapping]? Check the corresponding BeanPostProcessor declaration and its dependencies.
2025-02-25 13:08:03 - Bean 'org.springframework.ws.config.annotation.DelegatingWsConfiguration' of type [org.springframework.ws.config.annotation.DelegatingWsConfiguration$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). The currently created BeanPostProcessor [annotationActionEndpointMapping] is declared through a non-static factory method on that class; consider declaring it as static instead.
2025-02-25 13:08:03 - Supporting [WS-Addressing August 2004, WS-Addressing 1.0]
2025-02-25 13:08:04 - Tomcat initialized with port 8080 (http)
2025-02-25 13:08:04 - Initializing ProtocolHandler ["http-nio-0.0.0.0-8080"]
2025-02-25 13:08:04 - Starting service [Tomcat]
2025-02-25 13:08:04 - Starting Servlet engine: [Apache Tomcat/10.1.28]
2025-02-25 13:08:04 - Initializing Spring embedded WebApplicationContext
2025-02-25 13:08:04 - Root WebApplicationContext: initialization completed in 3299 ms
2025-02-25 13:08:04 - HHH000204: Processing PersistenceUnitInfo [name: default]
2025-02-25 13:08:04 - HHH000412: Hibernate ORM core version 6.5.2.Final
2025-02-25 13:08:04 - HHH000026: Second-level cache disabled
2025-02-25 13:08:05 - No LoadTimeWeaver setup: ignoring JPA class transformer
2025-02-25 13:08:05 - HikariPool-1 - Starting...
2025-02-25 13:08:05 - HikariPool-1 - Added connection ConnectionID:1 ClientConnectionId: 128acbed-92c3-4650-b5ed-e8cd5370b87c
2025-02-25 13:08:05 - HikariPool-1 - Start completed.
2025-02-25 13:08:08 - HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
2025-02-25 13:08:08 - Initialized JPA EntityManagerFactory for persistence unit 'default'
2025-02-25 13:08:09 - Hibernate is in classpath; If applicable, HQL parser will be used.
2025-02-25 13:08:09 - Successfully executed sql script.
2025-02-25 13:08:10 - Resolved watchPath: /mnt/clouhes/
2025-02-25 13:08:10 - folderToWatch path: /mnt/clouhes
2025-02-25 13:08:10 - archiveFolder path: /mnt/clouhes/archived
2025-02-25 13:08:10 - Directory already exists: /mnt/clouhes
2025-02-25 13:08:10 - Directory already exists: /mnt/clouhes/archived
2025-02-25 13:08:10 - Processing existing XML files in: /mnt/clouhes
2025-02-25 13:08:10 - Found existing XML file: /mnt/clouhes/minutely2502222121_11548_0.xml
2025-02-25 13:08:23 - Processed 100 SmartMeter records so far.
2025-02-25 13:08:39 - Processed 200 SmartMeter records so far.
2025-02-25 13:09:02 - Processed 300 SmartMeter records so far.
2025-02-25 13:09:25 - Processed 400 SmartMeter records so far.
2025-02-25 13:09:46 - Processed 500 SmartMeter records so far.
2025-02-25 13:10:18 - Processed 600 SmartMeter records so far.
2025-02-25 13:11:04 - Processed 700 SmartMeter records so far.
2025-02-25 13:11:44 - Processed 800 SmartMeter records so far.
2025-02-25 13:12:27 - Processed 900 SmartMeter records so far.
2025-02-25 13:12:54 - Spent time: 4.73 minutes
Moved file to archive: /mnt/clouhes/archived/20250225/minutely2502222121_11548_0.xml
2025-02-25 13:12:54 - File archived: /mnt/clouhes/minutely2502222121_11548_0.xml
2025-02-25 13:12:54 - Found existing XML file: /mnt/clouhes/minutely2502221521_12695_0.xml
2025-02-25 13:13:00 - Processed 100 SmartMeter records so far.
2025-02-25 13:13:02 - Processed 200 SmartMeter records so far.
2025-02-25 13:13:05 - Processed 300 SmartMeter records so far.
2025-02-25 13:13:07 - Processed 400 SmartMeter records so far.
2025-02-25 13:13:12 - Processed 500 SmartMeter records so far.
2025-02-25 13:13:14 - Processed 600 SmartMeter records so far.
2025-02-25 13:13:19 - Processed 700 SmartMeter records so far.
2025-02-25 13:13:23 - Processed 800 SmartMeter records so far.
2025-02-25 13:13:27 - Processed 900 SmartMeter records so far.
2025-02-25 13:13:30 - Processed 1000 SmartMeter records so far.
2025-02-25 13:13:35 - Processed 1100 SmartMeter records so far.
2025-02-25 13:13:39 - Processed 1200 SmartMeter records so far.
2025-02-25 13:13:42 - Processed 1300 SmartMeter records so far.
2025-02-25 13:13:44 - Processed 1400 SmartMeter records so far.
2025-02-25 13:13:47 - Processed 1500 SmartMeter records so far.
2025-02-25 13:13:50 - Processed 1600 SmartMeter records so far.
2025-02-25 13:13:53 - Processed 1700 SmartMeter records so far.
2025-02-25 13:13:56 - Processed 1800 SmartMeter records so far.
2025-02-25 13:14:01 - Processed 1900 SmartMeter records so far.
2025-02-25 13:14:05 - Processed 2000 SmartMeter records so far.
2025-02-25 13:14:10 - Processed 2100 SmartMeter records so far.
2025-02-25 13:14:14 - Processed 2200 SmartMeter records so far.
2025-02-25 13:14:17 - Processed 2300 SmartMeter records so far.
2025-02-25 13:14:19 - Processed 2400 SmartMeter records so far.
2025-02-25 13:14:24 - Processed 2500 SmartMeter records so far.
2025-02-25 13:14:27 - Processed 2600 SmartMeter records so far.
2025-02-25 13:14:30 - Processed 2700 SmartMeter records so far.
It's working as intended here, loading the XML files and all.
And here I tried netstat:
user@ubuntu:~/backend$ docker exec -it backend_app netstat -tulpn | grep 8080
user@ubuntu:~/backend$
Here I made an intentional path error to see what happens.
user@ubuntu:~/backend$ docker logs fd39d0d688ad
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v3.3.3)
2025-02-25 13:19:39 - Starting BackendApplication v0.0.1-SNAPSHOT using Java 17-ea with PID 1 (/app/app.jar started by root in /app)
2025-02-25 13:19:39 - The following 1 profile is active: "test"
2025-02-25 13:19:41 - Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2025-02-25 13:19:41 - Finished Spring Data repository scanning in 180 ms. Found 24 JPA repository interfaces.
2025-02-25 13:19:42 - Bean 'webServiceConfigSoap' of type [com.backend.backend.addons.smart_meter.config.WebServiceConfigSoap$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected into a currently created BeanPostProcessor [annotationActionEndpointMapping]? Check the corresponding BeanPostProcessor declaration and its dependencies.
2025-02-25 13:19:42 - Bean 'org.springframework.ws.config.annotation.DelegatingWsConfiguration' of type [org.springframework.ws.config.annotation.DelegatingWsConfiguration$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). The currently created BeanPostProcessor [annotationActionEndpointMapping] is declared through a non-static factory method on that class; consider declaring it as static instead.
2025-02-25 13:19:42 - Supporting [WS-Addressing August 2004, WS-Addressing 1.0]
2025-02-25 13:19:43 - Tomcat initialized with port 8080 (http)
2025-02-25 13:19:43 - Initializing ProtocolHandler ["http-nio-0.0.0.0-8080"]
2025-02-25 13:19:43 - Starting service [Tomcat]
2025-02-25 13:19:43 - Starting Servlet engine: [Apache Tomcat/10.1.28]
2025-02-25 13:19:43 - Initializing Spring embedded WebApplicationContext
2025-02-25 13:19:43 - Root WebApplicationContext: initialization completed in 3314 ms
2025-02-25 13:19:43 - HHH000204: Processing PersistenceUnitInfo [name: default]
2025-02-25 13:19:43 - HHH000412: Hibernate ORM core version 6.5.2.Final
2025-02-25 13:19:43 - HHH000026: Second-level cache disabled
2025-02-25 13:19:44 - No LoadTimeWeaver setup: ignoring JPA class transformer
2025-02-25 13:19:44 - HikariPool-1 - Starting...
2025-02-25 13:19:44 - HikariPool-1 - Added connection ConnectionID:1 ClientConnectionId: facf366b-49ba-4a18-9011-3fd2f42f6787
2025-02-25 13:19:44 - HikariPool-1 - Start completed.
2025-02-25 13:19:47 - HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
2025-02-25 13:19:48 - Initialized JPA EntityManagerFactory for persistence unit 'default'
2025-02-25 13:19:48 - Hibernate is in classpath; If applicable, HQL parser will be used.
2025-02-25 13:19:48 - Successfully executed sql script.
2025-02-25 13:19:50 - Resolved watchPath: /mnt/clouhed/
2025-02-25 13:19:50 - folderToWatch path: /mnt/clouhed
2025-02-25 13:19:50 - archiveFolder path: /mnt/clouhed/archived
2025-02-25 13:19:50 - Processing existing XML files in: /mnt/clouhed
2025-02-25 13:19:50 - Either watch or archive folder does not exist. Skipping file processing.
2025-02-25 13:19:50 - Global AuthenticationManager configured with UserDetailsService bean with name authService
2025-02-25 13:19:51 - Starting ProtocolHandler ["http-nio-0.0.0.0-8080"]
2025-02-25 13:19:51 - Tomcat started on port 8080 (http) with context path '/'
2025-02-25 13:19:51 - Started MdmBackendApplication in 13.09 seconds (process running for 14.287)
2025-02-25 13:19:51 - Processing existing XML files in: /mnt/clouhed
2025-02-25 13:19:51 - Either watch or archive folder does not exist. Skipping file processing.
2025-02-25 13:19:51 - Given path does not exist: /mnt/clouhed
And here the port is assigned:
user@ubuntu:~/mdm-backend$ docker exec -it backend_app netstat -tulpn | grep 8080
tcp 0 0 :::8080 :::* LISTEN 1/java
user@ubuntu:~/mdm-backend$
So, I think the problem was that the backend was blocking everything else while loading and parsing the XML files. I am talking to the developers to get this fixed.
One last thing: what do I do with the question? Do I delete it or something?
I had this issue and it drove me crazy; this is pretty much down to NetSuite's error messages, where you never know what to do. In my case I needed to change the header from SuiteScript 2.x to 2.1.
Hope this helps someone.
Use .toLocaleString(); you can find more about it on W3Schools. It is quite efficient when you want to do number formatting and makes for seamless conversion. However, a regular expression is also a good approach here, and you can read about that as well.
There are two types of image compression techniques: lossy and lossless. A lossy technique removes details of the image permanently and can't be reversed, which means the decompressed image will not be identical to the original image. Lossless compression, on the other hand, compresses the image without affecting its quality, allowing the original image to be fully restored after decompression.
JPEG (Bitmap.CompressFormat.JPEG) is always a lossy compression, which means some details are discarded. If you want to compress the image and retrieve it in the same form without any loss in quality, try using PNG (Bitmap.CompressFormat.PNG) or lossless WebP (Bitmap.CompressFormat.WEBP_LOSSLESS) instead.
For reference: Does BitmapFactory.decodeFile() decompress the JPEG image when decoding it?
I'm a maintainer of a JupyterLab extension called Package Manager. It provides a GUI for managing packages. The code for the extension is available at https://github.com/mljar/package-manager
You can list all packages with their versions.
You can also filter packages by name, and delete packages or add new ones in the manager. You can read more about the list feature in Jupyter in the article.
One more possible solution, which seems clean.
This will change the file to CRLF.
Hello @Ahmad Faizan, if you are using Livewire 3 then try this:
<script>
Livewire.hook('morph.updated', ({ component, el }) => {
$('#select_state_update_country').select2({
width: '100%',
});
})
</script>
Here in 2025, and adding myself as a user worked, as the last comment suggested!
Deleting the node_modules folder and restarting the system worked for me.
FIX: This is to do with MFA, though Microsoft doesn't seem to tell you that anywhere. If Edge has lost its MFA auth, you need to sign out, sign back in, and set up MFA again. Then the rest of the SharePoint folders will sync.
(Personally, I'd still suggest moving to Google if your business just needs email and document sharing.)
After doing even more searching, I tried getting the emulator to reboot (not just saving state and shutting down), and lo and behold that worked. After the emulator restarted, it was able to make API calls over https.
Leaving this here in case anyone else runs into the same issue.
If you add the scroll view and pin all four edges, it will give the ambiguity error. Here are the steps to fix this issue.
Step 1: Add a UIScrollView to the view controller's XIB or storyboard and add constraints to all four edges.
Step 2: Add a UIView inside the UIScrollView and constrain it to the scroll view on all four edges.
Step 3: Set an equal width between the UIView and the scroll view, and give the UIView a height (if the height is greater than the height of the device, it automatically becomes scrollable). The view inside the scroll view now has six constraints.
Step 4: Add whatever other UI you want inside the UIView.
Here is a live demo: https://www.youtube.com/watch?v=nvNjBGZDf80
Downgrade to the most recent working version:
npm install [email protected]
I was having the same problem. It turns out there is a low_memory parameter to BERTopic. I set low_memory=True when defining my model, and that solved it for me!
If it doesn't HAVE to be a scatter plot, the general consensus is to just use a bubble plot: basically the exact same code, but with twoway bubble instead of twoway scatter.
sysuse auto2, clear
gen weight2 = sqrt(weight)
twoway bubble price mpg, size(weight2)
In VS Code you can rearrange your keybindings to avoid conflicts between these extension functions:
Ctrl+Shift+P > type "Open Keyboard Shortcuts" > type "Tab" and change it to your preference.
Adding a check before calling variablePie() helped:
if (!Highcharts.Chart.prototype.addSeries) {
variablePie(Highcharts);
}
I found "How to pass variables into shell script" on r/shortcuts, and there is a screenshot there which suggests that you can put variables in the script and it will copy/paste the values automatically:
Image from answer on Reddit (author: Infamous_Pea6200):
Here's what worked for me.
Yeah, this issue can be frustrating because even after updating your manifest, Google Play Console still flags the error. The problem is likely an older app bundle that’s still active on a test track, even if that track is paused. Google Play still considers it “active,” which is why you're seeing the warning.
Follow the steps here: https://stackoverflow.com/a/79453095/10613082
I don't know why, but I tried python "C:\Python39\Scripts\cppclean" <path>
and it worked.
Switching to the powerlevel10k theme solved my problem.
Turns out I needed to add a resolver pointing at the AWS DNS server inside the server block:
server {
resolver 169.254.169.253 valid=10s;
}
After that, the issue never happened again.
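For context, nginx normally resolves upstream hostnames only once at startup; the resolver directive takes effect at request time when the upstream is referenced through a variable. A hedged sketch of how that combination might look (the upstream hostname below is purely illustrative):

```nginx
server {
    # AWS VPC DNS resolver; re-resolve cached names every 10 seconds
    resolver 169.254.169.253 valid=10s;

    location / {
        # Using a variable forces nginx to resolve the name per request
        # (hypothetical hostname for illustration)
        set $upstream "internal-service.example.com";
        proxy_pass http://$upstream;
    }
}
```

This matters when the upstream's IP can change (for example, behind an AWS load balancer), since a statically resolved address would eventually go stale.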
For the cardTitle, you may want to try using the wideLogo. Somehow adding this hides the cardTitle. See Google Wallet Generic Pass.
All of your code looks fine from the standpoint of not freezing up. I put it in a playground and it works fine. I think maybe a Chrome browser popup or something is freezing your computer. Look for any popups in your Chrome toolbar, maybe an autofill or a "passwords not secure" popup.
The code seems fine, though.