I'm using TCP to capture XML data, including plate numbers, and JPG images. Why does the arming screen list the vehicle number, but I'm unable to capture it in the TCP socket for some vehicles? Can anyone help me with this? It seems that some vehicles trigger the transmission of the packet, while others do not, resulting in intermittent capture.
I solved this problem: the AXI BRAM controller does not adjust the address size properly.
Since I can't add a "Comment" yet due to reputation, I have to write here.
Writing this in Immediate window:
?Cells(Rows.Count, 1).End(xlUp).Row
Filtered data in plain sheet cells: shows the last filtered ROW with data (it does not return a hidden row's number).
Filtered data in a "Table": shows the last ROW of data as if there were no filter (it does return hidden row numbers).
So yes, a "Table" would be the better solution for your needs.
But my need is the opposite of his: I need the last "filtered" row in a "Table"! Is there a simple solution for this?
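For what it's worth, a sketch of one way to get the last visible (filtered) row of a table; the table name "Table1" is an assumption:
Dim lo As ListObject, visible As Range, lastArea As Range
Set lo = ActiveSheet.ListObjects("Table1")
' Visible cells of the data body come back as one or more areas;
' the last area's bottom row is the last filtered row.
Set visible = lo.DataBodyRange.SpecialCells(xlCellTypeVisible)
Set lastArea = visible.Areas(visible.Areas.Count)
Debug.Print lastArea.Row + lastArea.Rows.Count - 1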
npm install --save-dev @types/react@latest
This solved the issue for me: install the type definitions for the latest React version (in your case, React 19).
Enable Web Inspector on the iPhone: go to Settings > Safari on your iPhone and scroll down to Advanced. Make sure the "Web Inspector" option is toggled on.
Connect your iPhone to your MacBook: use a USB cable to connect the two. You may need to trust the device on both the iPhone and the Mac to establish the connection.
Open Safari on your MacBook: go to Safari > Preferences > Advanced and make sure "Show Develop menu in menu bar" is checked. Click "Develop" in the Safari menu bar on your MacBook; the Develop menu should list your connected iPhone (or iPad). Click on your device's name. The menu should now list any open URL in the Safari browser on your iPhone; click on that URL to open the Web Inspector.
Use the Web Inspector: you'll find the JavaScript console within the Web Inspector. Use it to execute JavaScript, set breakpoints, view logs, and inspect the elements on your iPhone's Safari page.
Running automated tests: you can integrate the Web Inspector with test automation tools to run tests and inspect the console output, and you can capture and analyze the JavaScript console logs generated during your automated tests.
I know this is an old question, but I think it is still relevant, and I don't believe it was worth the downvote it got. The "Should..." phrasing is still very prevalent when writing unit tests, and in my opinion it is an incorrect format:
Apart from the correct observation by @Estus that it pollutes the test results, it also conveys an intent of uncertainty.
When you test functionality, you do it because you want the functionality to actually work. It has to work, or it is a hard error. Using more deterministic language, without "should", conveys this intent. "Should" indicates that you are unsure whether it works or not, while writing the phrase in a more commanding tone conveys certainty and determinism.
Continuing @Estus's examples:
- Should have a button
- Should do a request
vs
- Has a button
- Requests the data
In the first examples, the sentiment you get when reading is one of uncertainty. You timidly avoid taking a stance and saying this is how it works. It works... maybe... if you want. I guess it can be argued that a test is uncertain by nature, but in general, what you want to verify is that the code does what you want it to do. No question. This is how it has to work; otherwise it is a failure. Do or die! That is better conveyed by the counter-examples above.
So, in short, I think the use of "should" is not precise enough and should (correct usage of the word, to convey that you do as you see fit ;)) not be used, but in the end it is a question of taste, as it has no real impact on the final test.
May 2025, same situation for me.
Spring Boot app: outgoing connection times go from 0 to more than 100 seconds. These are connections to two different systems, and when one system slows down, so does the other, so it is definitely Cloud Run related. I tried everything on the code side, but it is not a code issue.
I'm thinking of moving away from Cloud Run, or from GCP entirely.
Here's a concise solution for updating WooCommerce product stock via API:
1. Use the WooCommerce REST API to update stock:
$product = wc_get_product($product_id);
$product->set_stock_quantity($new_stock_value);
$product->save();
2. Hook into template_redirect:
add_action('template_redirect', function() {
if (is_product()) {
// Your stock check/update logic here
}
});
For API integration, consider caching responses to avoid hitting rate limits.
Remember to optimize: don't make API calls on every page load; maybe use transients to cache stock status for 5-10 minutes, as sketched below.
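For illustration, a sketch of that transient pattern; the cache key name is my own:
// Hypothetical example: cache a product's stock quantity for 10 minutes.
$stock = get_transient('stock_' . $product_id);
if (false === $stock) {
    $product = wc_get_product($product_id);
    $stock = $product->get_stock_quantity();
    set_transient('stock_' . $product_id, $stock, 10 * MINUTE_IN_SECONDS);
}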
Wikipedia has an article on logic levels that includes common naming conventions (https://en.wikipedia.org/wiki/Logic_level). The ones that could be used in program code are (for an active-low pin Q):
a lower-case n prefix or suffix (nQ, Qn or Q_n)
an upper-case N suffix (Q_N)
an _B or _L suffix (Q_B or Q_L)
I just encountered this issue. Were you able to solve it?
Create a schema and validate based on a key (isEdit: boolean):
field1: Yup.number().when('isEdit', { is: true, then: Yup.number()/* other conditions */ }),
So what will happen here is that field1 is only validated when isEdit is true.
FocusManager.instance.primaryFocus?.unfocus();
It's not possible to directly extract a username or password from CredentialCache.DefaultCredentials because the password is not stored in a way that can be directly retrieved.
DefaultCredentials is used for authentication by the operating system and represents the system's credentials for the current security context.
For more control over credentials, use NetworkCredential or try impersonation techniques.
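For comparison, a minimal sketch of supplying explicit credentials with NetworkCredential; the endpoint and account values are placeholders:
using System.Net;
using System.Net.Http;

// Hypothetical values, for illustration only
var credentials = new NetworkCredential("someUser", "somePassword", "SOMEDOMAIN");

var handler = new HttpClientHandler { Credentials = credentials };
using var client = new HttpClient(handler);
var response = await client.GetAsync("https://intranet.example.com/api");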
Brother, you have put the IP as localhost on both computers.
localhost means the IP of the computer you are writing the code on.
To connect, enter the server's IP; the client's IP does not matter at all in the client's code.
Also create the socket like this (name the variable something other than socket, so you don't shadow the module): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
Note: SOCK_STREAM is for TCP, SOCK_DGRAM is for UDP.
That's why it worked when you tried it on one computer: the client IP was the same as the server IP, since localhost gives the current computer's IP.
If you want to know the server's IP, type ipconfig in the command prompt and copy the WLAN IP (you can also copy the Wi-Fi IP), but do not copy the VirtualBox Ethernet IP.
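To illustrate, a minimal client sketch; the IP and port below are placeholders for your server's actual WLAN IP and listening port:
import socket

# Hypothetical server address: replace with the server's WLAN IP and port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP
sock.connect(("192.168.1.10", 5000))  # the server's IP, not localhost
sock.sendall(b"hello")
print(sock.recv(1024))
sock.close()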
I am young (14) but I know this well.
Please ask again if you have any doubt, brother/sister.
With bare minimax + alpha/beta pruning, transpositions are ignored and treated as if they were completely separate nodes. This means that node G will be visited twice, once as a child of B and once as a child of C. Therefore, the traversal order will be:
J-F-K-F-B-G-L-G-O-G-B-A-C-G-L-G-O-G-C-...
Unable to resolve key vault values in local environment
Thanks @Skin, you were absolutely right. After reproducing this locally and digging into the docs, I came to the same conclusion.
Key Vault references using the @Microsoft.KeyVault(...) syntax do not work locally when using Azure Functions and local.settings.json. This syntax only works in Azure, where the App Service platform resolves it using the Function App's Managed Identity.
The repro fails locally when using a @Microsoft.KeyVault(...) key vault reference:
{
  "IsEncrypted": false,
  "Values": {
    "APIBaseUrl": "@Microsoft.KeyVault(SecretUri=https://TestVault.vault.azure.net/secrets/APIBaseUrl/)"
  }
}
When I run func start locally, the value of APIBaseUrl is not resolved; it is treated as a literal string. This only works in an Azure App Service or Function App, where we configure a system-assigned managed identity and grant it access to the key vault.
We can fix this by putting the actual secret values directly in local.settings.json while working locally. Since Key Vault references don't work outside Azure, hardcoding the secrets is the easiest way to make things run smoothly during development. Replace the Key Vault reference in local.settings.json with the actual secret value for local testing:
{
  "IsEncrypted": false,
  "Values": {
    "APIBaseUrl": "https://api.example.com/"
  }
}
Then the function will output the real secret locally. Note: make sure this file is never committed to git, as it may contain sensitive information like secrets and connection strings.
Please refer to the provided Microsoft docs (Doc1, Doc2) for reference.
1. Check if the key is loaded:
console.log(process.env.OPENAI_API_KEY);
If it's undefined, dotenv didn't load it correctly.
2. Check if your network blocks access to external APIs by using curl:
curl https://api.openai.com/v1/models -H "Authorization: Bearer your-api-key"
If this fails, it's a network issue, not your code.
Please provide the complete error message and your config to analyze the problem.
Whilst @m-Elghamry didn't actually solve my problem, he did force me to take another look at the issue, and it turns out there was a separate field that also needed to be initialized that was actually causing it. The compiler was just sending me on a wild goose chase after the wrong property.
Essentially, the record required 11 constructor arguments and the mapping only catered for 9 of them. So I had to use the [MapValue(...)] attribute on the missing fields and map them to a function call to supply the appropriate value. Case closed.
I have found a pattern: if I use a page with a WebView in which the microphone is used, then upon exiting I hit a bug and am forced to restart the phone. If I restart the phone it works without a problem. Any help?
If you are following the normal Jitsi setup without Docker, then follow this on the Jibri server:
Update the /etc/hosts file with the hostname of the JVB.
Update the /etc/jitsi/jibri/config.json file with the IP address or domain name of the JVB; this is how XMPP connects to the JVB from Jibri.
Reboot the server to apply the /etc/hosts changes.
https://dev.azure.com/your_org/_pulls
Follow that link or click on "Show more" in the PR bucket list. It takes you to the active PRs. There select on the top right:
Customize View -> Add section
In the menu, select Status: All. The newly added section also contains the completed PRs.
We can now use the pipe (|) like this:
$date = DateTime::createFromFormat('Y-m-d|', '2025-05-14');
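The | resets every field not given in the format to its Unix-epoch value, so the time comes out as midnight instead of the current time:
echo $date->format('Y-m-d H:i:s'); // 2025-05-14 00:00:00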
File upload done.
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Failed to create cloud build: API key expired. Please renew the API key..
Same here....
What the hell is going on!!!??? I deployed yesterday without any issues!!!
https://stackoverflow.com/questions/79604979/condensing-a-query-into-a-single-better-formatted-query
Updated Query
==========
SELECT
students.DriverLicense,
SUM(CASE WHEN students.QuizTitle LIKE 'THEORY%' THEN students.Earned ELSE 0 END) AS Theory,
SUM(CASE WHEN students.QuizTitle LIKE 'HAZMAT%' THEN students.Earned ELSE 0 END) AS Hazmat,
SUM(CASE WHEN students.QuizTitle LIKE 'PASS%' THEN students.Earned ELSE 0 END) AS Pass,
SUM(CASE WHEN students.QuizTitle LIKE 'SCHOOL%' THEN students.Earned ELSE 0 END) AS Bus
FROM students
WHERE students.DriverLicense = 'D120001102'
GROUP BY students.DriverLicense;
This query does the following:
1. It sums Earned only for matching QuizTitle values using CASE.
2. All results are returned in one row, grouped by DriverLicense.
3. It avoids using multiple subqueries or UNION.
I used a MAX3232 connected to the CH340, because RS-232 signaling is different from the CH340's TTL levels, and it doesn't work if you try to connect RS-232 directly to the CH340.
Update: the issue was fixed in docker.io/bitnami/airflow:3.0.1-debian-12-r1.
Same here, getting the same error today
Same here! with Cloud Build for an App Engine deploy...
I created a header file called python, allowing you to use input and print like in Python but in C++. I ran into the same problem as you and have not managed to solve it. I will give you the link to the GitHub repo; I posted it quickly not long ago.
Same issue here too. Only noticed an hour or two ago
Same here. GAE deployment failed.
Cloud Run etc. seem to have no problem.
Same issue with Google App Engine (Cloud Run is looking good).
In my case this issue was solved by defining the user and group in www.conf:
[www]
user = www-data
group = www-data
...
Just found the easy solution: it is to actually set quarkus.datasource.username:
quarkus.flyway.migrate-at-start=true
quarkus.flyway.schemas=oracletest
quarkus.datasource.username=oracletest
That may be obvious when comparing it with a production environment where schema name and user name are the same. In my case of an integration-test environment based on Dev Services, it took me some time to find out.
Experiencing the same issue while using gcloud app deploy with no solution so far
I got the same problem just now, could be an error on their end.
I’m getting the same error too—in my case it happens when I try to deploy to App Engine through Cloud Build.
In Flutter, there are two main options to share content on WhatsApp:
1. share_plus
✅ Allows sharing text, images, and files.
❌ Does not support opening chat with a specific WhatsApp contact.
❌ Shows a share sheet — user has to manually select WhatsApp and contact.
2. url_launcher with WhatsApp deep link (https://wa.me/)
✅ Allows opening chat with a specific contact using phone number.
✅ Sends pre-filled text message.
❌ Cannot attach files/images — only plain text or file links.
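For option 2, a minimal sketch using url_launcher; the phone number and message are placeholders:
import 'package:url_launcher/url_launcher.dart';

Future<void> openWhatsAppChat() async {
  // Hypothetical number and pre-filled text, for illustration only
  final uri = Uri.parse(
      'https://wa.me/15551234567?text=${Uri.encodeComponent("Hello!")}');
  if (await canLaunchUrl(uri)) {
    await launchUrl(uri, mode: LaunchMode.externalApplication);
  }
}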
🔚 Conclusion:
You can’t share both file + text directly to a specific contact using Flutter unless you use the WhatsApp Business API, which is server-based and not suitable for typical mobile apps.
Thanks for sharing your views @Sampath, I totally agree with you.
Forward Geocoding Pricing:
As you've mentioned, you are using the Gen2 (Maps & Location Insights) pricing tier. Gen2 does not include the free 5,000 monthly transactions; all requests are billed from the first one.
The official pricing table still mentions a free quota, but it is specific to the Gen1 (S0) pricing tier. So your charge is correct, and the pricing table is not outdated, but the free tier is not applicable under Gen2 usage-based billing.
Small fluctuations (e.g., €3.90 vs €3.96) are due to currency rounding or real-time exchange rates, not pricing errors.
Cause of Unexpected Charges:
This results in a cost of about €3.91 per 1,000 requests, which aligns with the standard Gen2 rate without a free tier.
So, your interpretation is correct: you are being billed without any free tier, most likely due to your pricing tier setup.
How transactions are counted - Understanding Azure Maps Transactions
Route Matrix Strategy:
You're also planning to compute distances between 284 origins × 17 destinations. Azure Maps Route Matrix is billed as:
(284 × 17) / 4 = 1,207 transactions
Hence, your optimization of splitting the work into 17 separate API calls (one per destination) is valid: it keeps billing the same but makes tracking and retrying easier, making it a smart optimization strategy.
Details on calculating matrices of route - Post Route Matrix
Here are some Recommendations to Avoid Extra Cost:
Kindly refer - Manage Your Azure Maps Account's Pricing Tier
Can't debug without seeing the code.
The way I solved this was to check my .gitignore file: I noticed that /build was listed there, so I had to un-ignore it.
Reading through the Telethon documentation, this looks to be a known issue.
Telethon Docs
First and foremost, this is not a problem exclusive to Telethon. Any third-party library is prone to cause the accounts to appear banned. Even official applications can make Telegram ban an account under certain circumstances. Third-party libraries such as Telethon are a lot easier to use, and as such, they are misused to spam, which causes Telegram to learn certain patterns and ban suspicious activity.
It looks like the usage of Telethon trips the Telegram API's anti-spam measures. For sensitive countries, using a proxy might circumvent the ban. However, this use case does seem like it might be in breach of the Telegram API TOS.
In the first instance, I would read through the Telegram API TOS to consider this use case for Telethon.
No module named 'common' (and later 'util')

When you run python -m main from inside app/, Python sets sys.path[0] to app/ itself. So common/ (a sibling of main.py) is visible and from common.do_common import foo works.

But when you call from common.do_common import foo inside do_common.py and then call foo(), Python still considers app/ the top-level package. It never adds app/.. to the search path, so util/ (another sibling of main.py) isn't on sys.path, and you get ModuleNotFoundError: No module named 'util'.

Relative imports (with leading dots) only work inside packages, and your "main" script isn't actually being run as part of the app package (its name is __main__); see the Python documentation.
1) Restructure your invocation so that app is a true package:
project/
└── app/
├── __init__.py
├── main.py
├── common/
│ ├── __init__.py
│ └── do_common.py
└── util/
├── __init__.py
└── do_util.py
Then, from the project/ directory, run:
python -m app.main
Now app/ is on sys.path, so both:
from common.do_common import foo
from util.do_util import bar
resolve correctly (Real Python).
2) Keep the app prefix. If you keep running python -m main from inside app/, change your imports to:
# in main.py
from app.common.do_common import foo
# in do_common.py
from app.util.do_util import bar
This works because you're explicitly naming the top-level package (app), and it avoids any reliance on relative-import magic (Real Python).
3) Adjust PYTHONPATH or sys.path. If you really want to keep from common… / from util… without any prefix:
export PYTHONPATH="/path/to/project/app":$PYTHONPATH
python -m main
Or add this at the top of your entry-point script (main.py):
import sys
from pathlib import Path
# add project/app to the import search path
sys.path.insert(0, str(Path(__file__).resolve().parent))
Either way, you're telling Python "look in app/ for top-level modules," so both common and util become importable without dots (Stack Overflow).
4) Make the project installable. Create a minimal setup.py in the project/ root:
# setup.py
from setuptools import setup, find_packages
setup(
name="my_app",
version="0.1",
packages=find_packages(),
)
Then, from project/, run:
pip install -e .
Now everywhere you run Python (inside or outside of app/), common and util are found as part of your installed package, and you can continue writing:
from common.do_common import foo
from util.do_util import bar
with no more ModuleNotFoundError.
If you just want the quickest fix and don't mind changing your working directory, go with (1) and run python -m app.main from the project root.
If you prefer clarity and PEP 8's recommendation of absolute imports, use (2) to always name your top-level package.
For scripts that must remain portable without changing how you invoke them, (3) (adjusting PYTHONPATH or sys.path) works fine.
For larger projects you plan to publish or reuse elsewhere, (4) (making it installable) is the most scalable, robust solution.
All of these remove the need to prepend a dot for every import and will eliminate the ModuleNotFoundError once and for all.
This is fixed in version v3.22 of Thruk. I quote the changelog:
- Apache:
- add UnsafeAllow3F for Ubuntu packages
You hit this bug https://github.com/sni/thruk/issues/1433 after an apache update.
As a workaround, you can add the flag UnsafeAllow3F manually in /usr/share/thruk/thruk_cookie_auth.include as well.
If you are getting the error when opening it, it could be a slow internet connection. It would help if you showed the actual error message.
Modify the Controller.php file to this (it extends BaseController):
<?php
namespace App\Http\Controllers;
use Illuminate\Routing\Controller as BaseController;
abstract class Controller extends BaseController
{
//
}
Thanks for the details here!
I've been facing an issue with the following error on my Keycloak 26.1.4 container app deployment:
"TargetPort 8080 does not match any of the listening port"
My image exposes 8080 and the ingress is set up with all traffic and TargetPort=8080.
Any help would be appreciated.
The certificate is created using the CSR file, which contains enough information about the DNS names for which the certificate will be authorized. You can decode the CSR using this link: https://www.sslshopper.com/csr-decoder.html. The CSR also contains the public key generated from the keystore (.jks) file, and the keystore contains the private and public key. The alias is a kind of unique tag for a keystore entry. You can download and install KeyStore Explorer to explore more keystore options: https://keystore-explorer.org. After installing it, when you want to update a certificate that was generated with the same keystore and CSR, you can simply use Import CA Reply > From File and select the updated certificate file to update the certificate in the keystore.
How did you calculate the dead time?
As far as I can tell you cannot achieve such a long dead time.
I assume you calculated it based on the timer clock frequency divided by the prescaler.
The dead time generator uses tDTS, which is not derived from the prescaled tim_psc_ck; it is based on the kernel timer clock, tim_ker_ck.
The reference manual gives examples based on an 8 MHz clock, and the maximum dead time that can be inserted with this method is 31750 ns. There is a register to prescale tDTS, but only by a maximum of 4. I don't think you will be able to see it with a camera; maybe if you slow down the entire clock tree to some extreme values.
For a plain servlets environment, you should use the jakartaee-pac4j implementation: https://github.com/pac4j/jee-pac4j
Setting pointer-events: none; doesn't solve the problem entirely, as the select can still be controlled using the keyboard.
One quick, HTML-only way to fix that is the newly available (at the time of writing) inert attribute, which blocks all interactions, as shown below.
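A tiny sketch of that approach:
<!-- inert blocks mouse, keyboard and focus interaction entirely -->
<select inert>
  <option>Read-only choice</option>
</select>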
Make sure you don't have an instance of Postgres running locally. If you run psql postgres and you can connect, that means Postgres is already running on your local machine, and when you run `psql -h localhost -U username -d dbname` you will be trying to connect to the local instance, where the username and db you created in the container do not exist.
In 8086 assembly, the **segment register used is determined by the presence of BP, not the order of operands**.
For effective address `[SI + BP]`, the **SS (Stack Segment)** register is used — because **BP is involved**. Any address calculation using BP or SP implies stack-relative addressing, which defaults to SS.
> Rule of thumb:
> - If the effective address uses **BP or SP**, the default segment = **SS**
> - Otherwise, it's **DS** (Data Segment)
The order `SI + BP` doesn’t change the segment selection logic — the 8086 doesn’t prioritize operands by order, only by type.
### Reference:
Intel 8086 Programmer’s Manual – Effective Address Calculations (EA)
See: [Intel 8086 Docs – Segment Override Defaults](https://www.cs.uaf.edu/2000/fall/cs301/notes/segmentation.html#effective-address)
So your **first instinct was right** — `[SI + BP]` uses SS by default.
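To make this concrete, a short illustrative snippet (MASM-style syntax); the override prefix is what changes the segment, not the operand order:
mov ax, [bp+si]     ; BP present -> default segment is SS
mov ax, ds:[bp+si]  ; explicit DS segment override prefix
mov ax, [si]        ; no BP -> default segment is DS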
Script A: a clean build without source maps, but they were generated first and then deleted. Build time is longer because source maps are created and then removed.
Script B: sets an environment variable to prevent react-scripts from generating source maps at all, skipping .map file generation during the build. The result is a clean build without source maps, but more efficient and faster. Better for production, where you don't want to expose your source code.
Use Script B (GENERATE_SOURCEMAP=false) if you want to avoid source maps efficiently and reduce build time. Script A is redundant and slower; it is only useful if you're using a toolchain that requires source maps to exist during the build, but not after.
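For reference, a sketch of what Script B might look like in a create-react-app package.json; the script name is an assumption:
{
  "scripts": {
    "build": "GENERATE_SOURCEMAP=false react-scripts build"
  }
}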
This could be an issue with the code in your providers. If you're using Scope.REQUEST, this can lead to undefined providers when you're calling forwardRef.
This is something that was deeply regretted later, but it serves the request in the question.
In the database layer, date-time components are stored in separate columns as integers (Year | Month | Day | Hour | Min). This way, it is possible to add and remove time (mostly through a custom function), and everything is stored without timezone info.
Because it works with integers, it is quicker than parsing strings.
The regret came later, when we built additional functionality on top of this, and it was a nightmare to parse these data back into a proper DateTime. Therefore I suggest storing at least the date-time ticks as additional info.
This may depend on your locale, but I found that if dayPeriod is omitted altogether then it outputs "AM" or "PM".
new Intl.DateTimeFormat('en', {
hourCycle: 'h12',
hour: "2-digit",
minute: "2-digit",
}).format(new Date())
// => "12:48 AM"
I know this is an old thread but there is now a very good way to determine when DllMain has exited.
In DllMain, or if you are using MFC, in InitInstance, create a waitable timer and give it a due time of about 1 millisecond in the future and, this is important, give it a completion routine. Your completion routine is queued as an APC, but it can't run until the thread is done initializing.
The loader code checks for queued APCs right after it releases the loader lock, so your completion routine gets executed almost immediately after DllMain returns to the loader.
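A rough C sketch of that idea, with error handling omitted; the routine name and timing value are my own:
#include <windows.h>

// Runs as an APC almost immediately after DllMain returns to the loader.
static VOID CALLBACK AfterDllMain(LPVOID arg, DWORD lowValue, DWORD highValue)
{
    // Safe place for work that must wait until loader initialization is done.
}

BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_ATTACH) {
        HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
        LARGE_INTEGER due;
        due.QuadPart = -10000LL;  // ~1 ms from now (negative = relative, 100 ns units)
        SetWaitableTimer(timer, &due, 0, AfterDllMain, NULL, FALSE);
    }
    return TRUE;
}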
You can define your own DynamicResourceExtension
[MarkupExtensionReturnType(typeof(Color))]
[Localizability(LocalizationCategory.NeverLocalize)]
public class ColorFromBrushResourceExtension(string resourceKey) : DynamicResourceExtension(resourceKey)
{
public override object ProvideValue(IServiceProvider serviceProvider)
{
return base.ProvideValue(serviceProvider) is SolidColorBrush brush
? brush.Color
: Colors.Transparent;
}
}
And use it this way:
<SolidColorBrush
x:Key="MyBrush"
Color="{utils:ColorFromBrushResource OtherBrush}"
Opacity="0.56"
po:Freeze="True" />
You have to install python extensions on remote ssh for this to work.
@zumarta did you find a solution to this?
That's because libsoup is not a Maven artifact, but a package in Oracle's Linux image distribution:
https://security.snyk.io/vuln/SNYK-ORACLE8-LIBSOUP-10062725
How to fix?
Upgrade Oracle:8 libsoup to version 0:2.62.3-8.el8_10 or higher.
This issue was patched in ELSA-2025-4560.
export const config = {
matcher: ['/:slug', '/:slug/*'], // Protect both /[slug] and /[slug]/nested
};
Explanation: /[slug]/* doesn’t match /[slug] (no trailing slash). You must explicitly match /:slug if you want to include it.
If your table has both a partition (hash) key and a sort key and you are trying to get an item based on the partition key alone, try the Query command instead of the GetItem command. Query lets you search based on the partition key alone.
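A sketch with the AWS SDK for JavaScript v3 document client; the table name and key attribute are assumptions:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Query by partition key alone: returns every item sharing that key,
// regardless of its sort key.
const result = await client.send(new QueryCommand({
  TableName: "MyTable",
  KeyConditionExpression: "pk = :pk",
  ExpressionAttributeValues: { ":pk": "user#123" },
}));
console.log(result.Items);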
The following flag is available in Chrome and Edge, and thus likely in Chromium products in general:
chrome://flags/#unsafely-treat-insecure-origin-as-secure
In the text field, you should be able to add each of your localhost origins to treat them as secure:
http://localhost,http://127.0.0.1,http://[::1]
Tested on Chrome 136 and Edge 136, the current latest stable versions. Feedback is appreciated.
I made a build with Azul JDK 1.8.0_452 (which includes JavaFX), but when I export it into a runnable jar and execute it with the JRE given by Oracle, I get the same error: JavaFX has been removed from JDK 8.
Thanks brother, I was bothered by this problem for so long.
This happens due to the order in which you multiply the matrices. As a recommendation, you should multiply first the translation, then the rotation, and finally the scale; that will help you get the form you want. Remember that matrix multiplication is not commutative.
Here is an example:
model = glm::translate(model, glm::vec3(-40.0f, -28.0f, 0.0f));
model = glm::rotate(model, glm::radians(-90.0f), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::scale(model, glm::vec3(0.02f));
Possibly you can use Notepad++ (or any other editor that shows the characters in hex) and try changing the encoding to see the plain text. Or, based on the hex values alone, search for which language or encoding typically uses that character.
If you open a site like YouTube, it handles pausing/destroying media on visibility changes in its JavaScript code, so you should also handle this case yourself.
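A sketch of that handling, assuming a <video> element on the page:
const video = document.querySelector('video');

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    video.pause();   // page went to background
  } else {
    video.play();    // page is visible again
  }
});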
The setting also needs to be enabled at the project level under the Build tab.
This is in Visual Studio 2019.
About the pytest resources, why don't you use this: pytest-resource-path · PyPI
pip install pytest-resource-path
Then, you'll be able to code in pytest, like:
def test_method(resource_path_root):
text_test_resource = (resource_path_root / 'config1.csv').read_text()
https://issuetracker.google.com/issues/157926129
You can build with the latest R8 version by making the following change to build.gradle:
buildscript {
dependencies {
classpath 'com.android.tools:r8:8.9.35' // Must be before the Gradle Plugin for Android.
classpath 'com.android.tools.build:gradle:X.Y.Z' // Your current AGP version.
}
}
I answered my own question. You can't always get what you want, but if you try sometimes the Java gods will screw you in the ___.
I encountered exactly the same problem of Miniforge Prompt failing to show the environment with Miniforge 25.3.0-1 or with 25.3.0-3. Installing the two-month-old Miniforge 25.1.1-2 appears to work as expected, so something apparently broke recently.
.
├── common_lib          # shared Python library
├── services
│   ├── a_service
│   │   ├── pyproject.toml
│   │   └── uv.lock
│   ├── b_service
│   │   ├── pyproject.toml
│   │   └── uv.lock
│   ├── c_service
│   │   ├── pyproject.toml
│   │   └── uv.lock
├── pyproject.toml
└── uv.lock
You have a shared library (common_lib) and several services, each with its own pyproject.toml and uv.lock. There is also a pyproject.toml and uv.lock at the root. Your goals include:
Shared dependencies (such as FastAPI) managed efficiently.
Per-service dependencies, keeping isolation.
Shared configuration for tools such as ruff and pre-commit.
Support for smaller Docker images and independent rebuilds.
Answer: Yes, having a separate uv.lock for each service is a valid approach and, in most cases, the recommended one for your scenario. Here is why:
Dependency isolation: each service having its own uv.lock guarantees that dependencies are resolved independently. This avoids version conflicts between services (for example, if a_service needs fastapi==0.115.0 and b_service needs fastapi==0.120.0).
Smaller Docker images: with a uv.lock per service, you can install only the dependencies each service needs in its Docker image, reducing image size and build time.
Independent builds: if a service changes its dependencies, only that service's uv.lock needs to be updated, and only the corresponding Docker image needs to be rebuilt. This is crucial for efficient CI/CD pipelines.
Preventing dependency leakage: per-service virtual environments, managed by uv, keep one service's dependencies from "leaking" into another, preserving modularity.
Aligned with microservices practice: in microservice architectures, isolation is a fundamental principle. Managing dependencies separately with a uv.lock per service aligns with that philosophy.
SELECT
Product_ID,
Day_Of_Sales,
Units_Sold
FROM
Table
UNPIVOT(Units_Sold FOR Day_Of_Sales IN (Mon_Sales, Tue_Sales, Wed_Sales))
I have a question about this, but from the user end. I'm getting the render://init-bundle before an email link sent by a school. Outlook on Android won't open it, saying "no app installed to open link". Is this something I can fix to stop it happening, or is it an issue on their end only? Thanks.
I have been having the same issue and have not found any solution. It happened a few months ago and I gave up on it; not sure how to resolve it.
No, you cannot create a Binance or Coinbase account programmatically. Account creation requires manual KYC verification and there's no public API for it.
Reason:
Because Binance and Coinbase require users to complete KYC (Know Your Customer) verification — including document uploads and identity checks — which cannot be automated through their APIs. Also, they do not provide any public API for account creation to comply with legal and regulatory requirements.
from matplotlib import pyplot as plt
import matplotlib.patches as patches
# Create a grid template for a 30x30 cm base
fig, ax = plt.subplots(figsize=(8, 8))
# Set grid and labels
ax.set_xlim(0, 30)
ax.set_ylim(0, 30)
ax.set_xticks(range(0, 31, 5))
ax.set_yticks(range(0, 31, 5))
ax.grid(True, which='both', color='lightgray', linestyle='--', linewidth=0.5)
# Draw outer border
rect = patches.Rectangle((0, 0), 30, 30, linewidth=2, edgecolor='black', facecolor='none')
ax.add_patch(rect)
# Add title and labels
ax.set_title('Popsicle Stick Earthquake Structure Base (30 cm x 30 cm)', fontsize=12)
ax.set_xlabel('Width (cm)')
ax.set_ylabel('Length (cm)')
# Save the grid as an image
plt.tight_layout()
plt.savefig("popsicle_base_grid.png")
plt.show()
This way of importing works for me:
@vite(['resources/css/app.css', 'resources/js/app.js'])
The error indicates that Python cannot find the variable healthuse in the current scope. This typically occurs when you're trying to access an instance variable without the self. prefix inside a class method.
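For illustration, a minimal sketch of the difference; the class and attribute names are made up to match the error:
class Player:
    def __init__(self):
        self.healthuse = 3      # instance variable

    def broken(self):
        return healthuse        # NameError: 'healthuse' is not defined

    def fixed(self):
        return self.healthuse   # correct: access via self.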
import java.util.*;

public class ExceptionSample {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        int dividend, divisor, quotient;
        System.out.print("Enter dividend: ");
        dividend = s.nextInt();
        System.out.print("Enter divisor: ");
        divisor = s.nextInt();
        try {
            quotient = dividend / divisor;
            System.out.println(dividend + " / " + divisor + " = " + quotient);
        }
        catch (ArithmeticException ex) {
            System.out.println("Divisor cannot be 0.");
            System.out.println("Try again.");
        }
        finally {
            System.out.println("Thank you.");
        }
    }
}
I had success reducing the amount of memory taken up by ng build by changing some settings in angular.json. These are the ones that reduced memory:
vendorChunk: false
optimization: true
I then removed the sourceMap option, which should stop Angular from emitting source maps for vendor files.
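For reference, a sketch of where those settings live in angular.json; the project name and exact layout will differ per workspace:
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "configurations": {
            "production": {
              "vendorChunk": false,
              "optimization": true
            }
          }
        }
      }
    }
  }
}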
How can I remove the script so they stop managing my Apple device as if I were an employee? That is how they control me, set parental controls, and above all track me by GPS. One more thing: before I sent my question, the page was already showing that my answer could not be sent and that the body text in this window had disappeared.
Export the columns to a frame, convert the Price values to title case, and recreate the MultiIndex columns:
cols = df.columns.to_frame().assign(Price=lambda x: x['Price'].str.title())
df.columns = pd.MultiIndex.from_frame(cols)
If you're trying to opt for performance: Runtime.getRuntime().totalMemory()/freeMemory()/maxMemory() are inlined native calls, whereas MemoryMXBean.getHeapMemoryUsage() involves more indirection and object allocation.
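For comparison, a small sketch showing both approaches side by side:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    public static void main(String[] args) {
        // Cheap, inlined native calls
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();

        // More indirection: getHeapMemoryUsage() allocates a MemoryUsage snapshot
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();

        System.out.println(used + " vs " + heap.getUsed());
    }
}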
Version 1.1.12 works perfectly on Windows 11, and I will stick with it until Corey releases a new version. I found some discussions on the GitHub repo where Corey said, "I would stick with version 1.1.12."
The final function is below. If you have a better way to code it, please share :-). Thanks!
def get_booksurl_in_theme_page(htmlfile: Path):
    bookslinks = []
    if htmlfile.is_file() and htmlfile.suffix == '.html':
        bs4_soup = BeautifulSoup(htmlfile.read_text(), 'html.parser')
        bs4_bookslinks = bs4_soup.select('h4.title > a')
        for tag_a in bs4_bookslinks:
            bookslinks.append(tag_a.get('href'))
    else:
        logging.debug("get_booksurl_in_theme_page called: param is not a file")
    return bookslinks
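Since you asked for alternatives: a slightly more compact variant with the same behavior, using an early return and a list comprehension:
def get_booksurl_in_theme_page(htmlfile: Path) -> list:
    if not (htmlfile.is_file() and htmlfile.suffix == '.html'):
        logging.debug("get_booksurl_in_theme_page called: param is not a file")
        return []
    soup = BeautifulSoup(htmlfile.read_text(), 'html.parser')
    # Collect the href of every <a> directly under an <h4 class="title">
    return [tag_a.get('href') for tag_a in soup.select('h4.title > a')]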
Use $search with to, recipients, or perhaps participants, depending on your search needs:
https://learn.microsoft.com/en-us/graph/search-query-parameter?tabs=http
e.g.
https://graph.microsoft.com/v1.0/me/messages?$search="recipients=[[email protected]]"
Uhm, actually... ☝️🤓
It is possible to emulate telnet from the browser to a port. A working example is at the end of this comment. I tried it against the Postgres port 5432, which should not accept any protocols but psql ones.
I found an article [1] with a direction. Long story short: 'ERR_CONNECTION_REFUSED' and other meaningful messages are impossible to catch in JS. The browser has them, but JS does not. We have to do something other than catching.
The only thing we can do is measure time. In article [1] the author measures image loading time with the server URL as the href. His results are not consistent, because the image loading engine does not provide enough time difference between open and closed ports.
Then I found article [2], which improves this idea: the author suggests using the browser caching mechanism. He says that if the port is closed and you load an iframe with http://host:port, it will not create a proper browser cache item, so the next call to http://host:port# will be fired again, causing an onload event. The hash at the end kinda fools the caching mechanism.
It's 3 o'clock in the morning, and I have nothing better to do in my 30's but mess around with the browser. So, anyway.
But if the port is open, http://host:port# will be read from cache like http://host:port and will not cause an onload event. So, one event versus two. Enough? No.
His idea did not work for me. Maybe something had been fixed in the browser, maybe I wrote wrong code, but still. He uses an iframe with the server URL to check the port, and this loading time works with enough resilience.
If the port is closed, the iframe drops immediately. If the port is open, the iframe tries for a couple of seconds. Cool. But do not schedule 100500 tasks that telnet to 500100 ports. Do it one by one only, with significant breaks in between; otherwise the tab lags like hell and destroys all the magic.
The code below is just a PoC, but I'm fine with it as is. Of course, there are a couple of details: you could calibrate it by measuring responses from predefined open ports, or move it to a service worker, but that does not matter here.
So: until the iframe behavior gets fixed, it is possible to telnet from the browser to host:port.
GNU/GPLv3
[1] https://incolumitas.com/2021/01/10/browser-based-port-scanning/
[2] https://portswigger.net/research/exposing-intranets-with-reliable-browser-based-port-scanning
type PortResponse = Readonly<{
port: number;
timeMs: number;
}>;
const measurePortResponse = (host: string, port: number): Promise<PortResponse> => {
return new Promise((res, rej) => {
const start = performance.now();
const iframe = document.createElement('iframe');
iframe.name = 'probe' + Date.now().toString();
iframe.src = `http://${host}:${port.toString()}`;
iframe.onload = () => {
const end = performance.now();
iframe.remove();
res({ port, timeMs: end-start, });
};
iframe.onerror = (e) => {
rej(e as unknown as Error);
};
document.body.appendChild(iframe);
});
};
export const isPortOpen = async (host: string, port: number): Promise<boolean> => {
const response = await measurePortResponse(host, port);
const isPortOpen = response.timeMs <= 1000;
if (!isPortOpen) console.log('hehe', response);
return isPortOpen;
};
You can iterate through a list of lists using a simple for loop, and use nested loops to compare each sublist (circle) with the others, as in the sketch below. Also, avoid using list as a variable name since it's a built-in type.
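A minimal sketch, assuming each inner list is a circle as [x, y, radius]:
import math

# Hypothetical data: each sublist is one circle as [x, y, radius]
circles = [[0, 0, 5], [3, 4, 2], [20, 20, 1]]

for i, (x1, y1, r1) in enumerate(circles):
    # Compare each circle only with the ones after it (no duplicate pairs)
    for x2, y2, r2 in circles[i + 1:]:
        distance = math.hypot(x2 - x1, y2 - y1)
        if distance <= r1 + r2:
            print(f"overlap: {[x1, y1, r1]} and {[x2, y2, r2]}")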