Here's how to adapt Quandl data for Highstock:
// Assuming Quandl returns data in format: [[date, open, high, low, close, volume],...]
const formattedData = quandlData.dataset.data.map(item => ({
x: new Date(item[0]).getTime(), // Convert date to timestamp
open: item[1],
high: item[2],
low: item[3],
close: item[4],
volume: item[5]
}));
Then feed it to a Highstock candlestick series:
series: [{
type: 'candlestick',
name: 'Stock Price',
data: formattedData
}]
Highstock also provides a dataParser callback to transform data on load if you're loading directly from a Quandl URL.
In my case changing the parameter sort_buffer_size from 256K to 512MB did the trick.
This is the context:
I got this error:
SQLSTATE[HY000]: General error: 3 Error writing file '/rdsdbdata/tmp/MYfd=117' (OS errno 28 - No space left on device)'
It was due to this query running several times per minute:
SELECT c.code AS code_1,
c.code AS code_2
FROM clients c
INNER JOIN clients_branch ch ON (c.id = ch.client_id)
WHERE ((CONCAT(',', ch.branch_path, ',') LIKE '%,2555,%'
OR ch.id = 2555)
AND ch.client_id <> 5552)
OR c.id = 5552
ORDER BY c.name ASC;
This query would take around 25 seconds and the number of active sessions was piling up:
It was all solved when we updated the sort_buffer_size parameter from 256K (the default) to 512MB
(you can see the drop in the bars).
We were able to check that when removing the ORDER BY from the query, its execution took much less time.
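For reference, a sketch of how to inspect and raise the setting in a MySQL session (on RDS the permanent change goes in the DB parameter group; 512MB per session is aggressive, so tune the value to your workload):

```sql
-- current value, in bytes
SHOW VARIABLES LIKE 'sort_buffer_size';

-- raise it for subsequent sessions (on RDS, persist via the parameter group)
SET GLOBAL sort_buffer_size = 536870912;  -- 512MB
```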
Apparently, there are options in the filament-shield config that solve the problem. Set the discovery options to true depending on your needs:
'discovery' => [
'discover_all_resources' => true,
'discover_all_widgets' => true,
'discover_all_pages' => true,
],
I know that this question is old, but the most important thing here is to have downloaded the ffmpeg binaries and set them up as a system variable. With pip install ffmpeg-python
you are just downloading the Python bindings, not the ffmpeg binaries themselves. You can download the binaries from here: https://www.ffmpeg.org/download.html
The previous answer has been AI Generated.
The library simple-salesforce does not provide access to MarketingCloud
In case someone has the same headscratch as I did:
If you're doing a geom_line plot with facet_wrap and just one of the groups that you're faceting by has too few observations, you're going to get this message (ggplot2 3.5.2):
`geom_line()`: Each group consists of only one observation.
ℹ Do you need to adjust the group aesthetic?
It took me a while to realize what was going on since I had a lot of groups and the plot looked mostly fine. The message concerns the panel which has only one observation. Simple example:
foo <- tibble(
value = c(1:4,1:4,1),
year = c(2001L:2004L, 2001L:2004L, 2002L),
g = c(rep("G1", 4), rep("G2", 4), "G3")
)
ggplot(foo, aes(year, value)) +
geom_line() +
facet_wrap(~g)
Have you found a solution to the problem? I have "Urovo rt40" with the same problem.
Subtract from a multiple of 10: find the minimum multiple of 10 from which subtracting the number gives a positive result. For example:
for 9: 1*10 - 9 = 1
for 18: 2*10 - 18 = 2
for 27: 3*10 - 27 = 3
for 36: 4*10 - 36 = 4
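A minimal sketch of that rule in Python (the function name is mine):

```python
def gap_to_next_multiple_of_10(n):
    """Smallest positive value of m*10 - n for integer m."""
    m = n // 10 + 1   # index of the next multiple of 10 strictly above n
    return m * 10 - n

print(gap_to_next_multiple_of_10(9))   # 1*10 - 9  = 1
print(gap_to_next_multiple_of_10(18))  # 2*10 - 18 = 2
print(gap_to_next_multiple_of_10(27))  # 3*10 - 27 = 3
print(gap_to_next_multiple_of_10(36))  # 4*10 - 36 = 4
```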
I get this error in vs 2022 and a blazor wasm standalone project. I can resolve it by this solution :
I actually found out that you can see the logs. Tizen extension tool opens a browser inspect window, when you run the application on a real tv, and it is possible to see the logs. I just had to restart to make it work
I have faced this issue while publishing in Sitecore 10.1 update 3. For me, the issue came up when making any change to a rendering and then publishing. On making changes in a rendering, the workflow for the related test lab item was getting cleared in the web db. On debugging, I got to know that the item:saved pipeline invokes Sitecore.ContentTesting.Events.PersonalizationTrackingHandler.OnItemSaved, which queries the Web database to check the workflow status. Since the workflow is cleared, the handler encounters an error. If we publish the test lab items first and then the page item, the error won't come.
Try to debug the code at each step and see what values you are getting from container stats.
The above link addresses this issue nicely; below is my implementation.
// JTS imports (org.locationtech for current JTS; older versions use com.vividsolutions)
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.locationtech.jts.geom.Envelope;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.geom.GeometryFactory;

public class GeojsonSplitterUtil {
private static final double MAX_CELL_DEGREE = 0.5;
private static final GeometryFactory geometryFactory = new GeometryFactory();
public static Collection<Geometry> split(Geometry geom, int maxSize) {
List<Geometry> answer = new ArrayList<>();
if (size(geom) > maxSize) {
answer.addAll(subdivide(geom));
} else {
answer.add(geom);
}
return answer;
}
public static Collection<Geometry> split(Geometry geom) {
return new ArrayList<>(subdivide(geom));
}
private static int size(Geometry geom) {
return geom.getCoordinates().length;
}
private static List<Geometry> subdivide(Geometry geom) {
List<Geometry> result = new ArrayList<>();
Envelope env = geom.getEnvelopeInternal();
double minX = env.getMinX();
double maxX = env.getMaxX();
double minY = env.getMinY();
double maxY = env.getMaxY();
double width = maxX - minX;
double height = maxY - minY;
int gridX = (int) Math.ceil(width / MAX_CELL_DEGREE);
int gridY = (int) Math.ceil(height / MAX_CELL_DEGREE);
double dx = width / gridX;
double dy = height / gridY;
for (int i = 0; i < gridX; i++) {
for (int j = 0; j < gridY; j++) {
double cellMinX = minX + i * dx;
double cellMaxX = minX + (i + 1) * dx;
double cellMinY = minY + j * dy;
double cellMaxY = minY + (j + 1) * dy;
Envelope cellEnv = new Envelope(cellMinX, cellMaxX, cellMinY, cellMaxY);
Geometry cellBox = geometryFactory.toGeometry(cellEnv);
try {
Geometry intersection = geom.intersection(cellBox);
if (!intersection.isEmpty() && intersection.isValid()) {
result.add(intersection);
}
} catch (Exception e) {
// no logger is defined in this class; report and skip the failing cell
System.err.println("Failed to intersect grid cell: " + e.getMessage());
}
}
}
return result;
}
}
have you found a solution? I would be very interested ;-)
Thanks Roland
You can use a saved query as the form's RecordSource and have the parameter(s) set by referencing field(s) on the form, but you can only reference the default form instance; there is no way to reference another instance of the form that you may have created.
Here's what's wrong with your Alpha Vantage API request:
Incorrect URL - You're using alphavantage.co.query when it should be alphavantage.co/query
Better Practice - Use parameters with requests instead of hardcoding the URL:
params = {
'function': 'TIME_SERIES_INTRADAY',
'symbol': 'TSLA',
'interval': '1min',
'apikey': 'YOUR_KEY'
}
response = requests.get('https://www.alphavantage.co/query', params=params)
If it still fails, try:
Verifying your internet connection
Checking if Alpha Vantage is down
Adding a timeout parameter: requests.get(url, timeout=10)
Alpha Vantage should work fine for basic testing.
You can set it like I did. I use Python, C++, CUDA and shell scripts, so for me it should be true; otherwise set it to false.
Create a simple mobile application with Flutter consisting of 3 pages about frozen food, for a business named Arctic Delights: a Home page, a Catalog ListView page, and a Profile page. The three pages must be connected with a Navigator, and every page must have an icon, an image, text, and a button. One of the pages must use a Column/Row. Grading criteria:
a. Home page
b. Catalog ListView page
c. Profile page
d. Connect the three pages with a Navigator
e. Every page contains an icon, image, text, and button
f. A Column/Row is used on one of the pages
I had the same issue; after installing Tools > Get Tools and Features > Modify > Other Toolsets > Data storage and processing, the issue was fixed.
For anyone who has a similar problem in the future: try printing out the dictionary and checking the names again to make sure you got them right.
For me, this works for a few hours until my token expires, but the Msal library doesn't seem to automatically get a new token. Do you have the same experience? Even after I manually try to sign out with await SignOutManager.SetSignOutState();, any pages that I decorate with an [Authorize] attribute still get routed to my NotAuthorized view. The only way I can get a new token is to completely clear localStorage in my browser.
Did you find a solution to that related comment? I'm finding exact the same issue.
I think you can read this article.
You should visit the Apache Doris Third Party page to find the source code for all third-party libraries. You can directly download doris-thirdparty-source.tgz
I'm using TCP to capture XML data, including plate numbers, and JPG images. Why does the arming screen list the vehicle number, but I'm unable to capture it in the TCP socket for some vehicles? Can anyone help me with this? It seems that some vehicles trigger the transmission of the packet, while others do not, resulting in intermittent capture.
I solved this problem: the AXI BRAM controller was not adjusting the address size properly.
Since I can't add a "Comment" yet due to reputation, I have to write here.
Writing this in Immediate window:
?Cells(Rows.Count, 1).End(xlUp).Row
Filtered data in pure Sheet cells: Will show last filtered ROW with data (doesn't show hidden row number)
Filtered data in "Table": Shows last ROW of data like there is no filter (shows hidden row number)
So yes, a "Table" would be the better solution for your needs.
But my need is the opposite of his: I need the last "filtered" row in a "Table"! Is there any simple solution like this?
npm install --save-dev @types/react@latest
solved the issue for me: install the types for the latest React version, in your case React 19.
1. Enable Developer Tools on the iPhone: open Settings > Safari, scroll down to Advanced, and make sure the "Show Develop Menu" option is toggled on.
2. Connect your iPhone to your MacBook: use a USB cable, and trust the device on both the iPhone and the Mac to establish the connection.
3. Open Safari on your MacBook: go to Safari > Preferences > Advanced and make sure "Show Develop menu in menu bar" is checked. Click "Develop" in the Safari menu bar; the menu should list your connected iPhone (or iPad). Click your device's name, then click the URL open in Safari on your iPhone to open the Web Inspector.
4. Use the Web Inspector: you'll find the JavaScript console within the Web Inspector. Use it to execute JavaScript, set breakpoints, view logs, and inspect the elements on your iPhone's Safari page.
5. Running automated tests: you can integrate the Web Inspector with test automation tools to run tests and inspect the console output, capturing and analyzing the JavaScript console logs generated during your automated tests.
I know this is an old question but I think it is still relevant and I don't believe the question is worth the down vote it got. It seems like the "Should..." phrasing is still very prevalent when writing unit tests. And in my opinion it is an incorrect format:
Apart from the correct observation by @Estus that it pollutes the test results, it also conveys an intent of uncertainty.
When you test functionality, you do it because you want the functionality to actually work. It has to work or it is a hard error. So using a more deterministic language, where you don't use "should" conveys this intent. Using "should" indicates that you are unsure if it works or not, while writing the phrase in a more commanding tone, you convey certainty and determinism.
continuing the examples of @Estus:
- Should have a button
- Should do a request
vs
- Has a button
- Requests the data
In the first examples, the sentiment you get when reading is one of uncertainty. You timidly don't really want to take a stance and say that this is how it works. It works... maybe... if you want. I guess it can be argued that a test is uncertain by nature, but in general, what you want to verify is that it does what you want it to do. No question. This is how it has to work. Otherwise it is a failure. Do or die! Which is better conveyed by the counterexamples above.
So, in short, I think the use of "should" is not precise enough and should (correct usage of the word, to convey that you do as you see fit ;)) not be used, but in the end it is a question of taste as well, as it has no real impact on the final test.
May 2025, same situation for me.
Spring Boot app: outgoing connection times go from 0 to more than 100 seconds. These are connections to two different systems, and when one system goes slow, so does the other: so it is definitely Cloud Run related. I tried everything on the code side, but it is not a code issue.
I'm thinking of moving away from Cloud Run, or GCP entirely.
Here's a concise solution for updating WooCommerce product stock via API:
Use WooCommerce REST API to update stock:
1. Update the stock directly with the WooCommerce CRUD functions:
$product = wc_get_product($product_id);
$product->set_stock_quantity($new_stock_value);
$product->save();
2. Hook into template_redirect to run your stock check on product pages:
add_action('template_redirect', function() {
if (is_product()) {
// Your stock check/update logic here
}
});
For API integration, consider caching responses to avoid hitting rate limits. Remember to optimize: don't make API calls on every page load; use transients to cache the stock status for 5-10 minutes.
Wikipedia has an article on logic levels that includes common naming conventions (https://en.wikipedia.org/wiki/Logic_level). The ones that could be used in program code are (for an active-low pin Q):
a lower-case n prefix or suffix (nQ, Qn or Q_n)
an upper-case N suffix (Q_N)
an _B or _L suffix (Q_B or Q_L)
i just encountered this issue. were you able to solve it?
Create a schema and validate on the basis of a key (isEdit: boolean):
field1: Yup.number().when('isEdit', { is: true, then: Yup.number()/* other conditions */ })
What happens here is that field1 is only checked this way when isEdit is true.
FocusManager.instance.primaryFocus?.unfocus();
It's not possible to directly extract a username or password from CredentialCache.DefaultCredentials because the password is not stored in a way that can be directly retrieved.
DefaultCredentials is used for authentication by the operating system and represents the system's credentials for the current security context.
For more control over credentials, use NetworkCredential or try impersonation techniques.
Brother, you have put the IP as localhost on both computers.
localhost means the IP of the computer you are writing the code on.
To connect, enter the server's IP in the client's code; the client's own IP does not matter at all there.
Also specify this: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
(note: SOCK_STREAM is for TCP, SOCK_DGRAM is for UDP).
That's why it worked when you tried it on one computer: the client IP was the same as the server IP, since localhost gives the current computer's IP.
If you want to know the server's IP, type ipconfig in the command prompt and copy the WLAN/Wi-Fi IP; do not copy the VirtualBox Ethernet IP.
I am young (14) but I know this well.
Please ask again if you have any doubt, brother/sister.
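To see the client/server split in one runnable sketch (loopback address and an ephemeral port are used here just so it runs on a single machine; on two machines the client would use the server's LAN IP instead):

```python
import socket
import threading

# SOCK_STREAM = TCP (SOCK_DGRAM would be UDP)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # loopback + OS-chosen free port, demo only
srv.listen(1)
host, port = srv.getsockname()  # on two machines: the SERVER's LAN IP goes here

def serve_once():
    conn, _addr = srv.accept()
    conn.sendall(b"hello from server")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((host, port))       # the client only needs the server's address
data = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(data.decode())
```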
With bare minimax + alpha/beta pruning, transpositions are ignored and treated as if they are completely separate nodes. This means that node G will be visited twice: once as a child of B, and once as a child of C. Therefore, the traversal order will be:
J-F-K-F-B-G-L-G-O-G-B-A-C-G-L-G-O-G-C-...
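To make the double visit concrete, here's a tiny sketch counting naive DFS visits; the adjacency is my reconstruction from the traversal above (an assumption), with G reachable from both B and C:

```python
from collections import Counter

# hypothetical game tree reconstructed from the traversal; G is the transposition
children = {
    "A": ["B", "C"],
    "B": ["F", "G"],
    "C": ["G"],
    "F": ["J", "K"],
    "G": ["L", "O"],
    "J": [], "K": [], "L": [], "O": [],
}

visits = Counter()

def dfs(node):
    # plain minimax-style traversal: with no transposition table, shared
    # nodes are expanded once per path that reaches them
    visits[node] += 1
    for child in children[node]:
        dfs(child)

dfs("A")
print(visits["G"])  # expanded twice: once under B, once under C
```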
Unable to resolve key vault values in local environment
Thanks @Skin you were absolutely right. After reproducing this locally and digging into the docs, I came to the same conclusion.
Key Vault references using the @Microsoft.KeyVault(...) syntax do not work locally when using Azure Functions and local.settings.json. This syntax only works in Azure, where the App Service platform resolves it using the Function App's Managed Identity.
The repro fails locally when using a @Microsoft.KeyVault(...) key vault reference:
{
"IsEncrypted": false,
"Values": {
"APIBaseUrl": "@Microsoft.KeyVault(SecretUri=https://TestVault.vault.azure.net/secrets/APIBaseUrl/)"
}
}
When I run func start locally, the value of APIBaseUrl is not resolved; it is treated as a literal string.
The reference only works in an Azure App Service / Function App where we configure a system-assigned managed identity and grant it access to the key vault.
We can fix this by putting the actual secret values directly in local.settings.json while working locally. Since Key Vault references don't work outside Azure, hardcoding the secrets is the easiest way to make things run smoothly during development. Replace the Key Vault reference in local.settings.json with the actual secret value for local testing:
{
"IsEncrypted": false,
"Values": {
"APIBaseUrl": "https://api.example.com/"
}
}
Then, the function will output the real secret locally. Note: make sure this file is never committed to git, as it may contain sensitive information like secrets and connection strings.
Please refer to the provided Microsoft Doc1, Doc2 for reference.
1. Check if the key is loaded:
console.log(process.env.OPENAI_API_KEY);
If it's undefined, dotenv didn't load it correctly.
2. Check if your network blocks access to external APIs by using curl:
curl https://api.openai.com/v1/models -H "Authorization: Bearer your-api-key"
If this fails, it's a network issue, not your code.
Please provide the complete error message and your config to analyze the problem.
Whilst @m-Elghamry didn't actually solve my problem, he did force me to relook at the issue and it turns out there was a separate field that also needed to be initialized that was actually causing the issue. The compiler was just sending me on a wild goose chase after the wrong property.
Essentially the issue was that the record required 11 constructor arguments and the mapping only catered for 9 of them. So I had to use the [MapValue(...)] attribute on the missing fields and map them to a function call to supply the appropriate value. Case closed.
I have found a pattern: if I use a page with a WebView in which the microphone is used, then upon exiting I hit a bug that forces me to restart the phone. After restarting the phone it works without a problem. Any help?
If you are following the normal Jitsi setup without Docker, then follow this on the Jibri server:
1. Update the /etc/hosts file with the hostname of the JVB, so that XMPP from Jibri connects to the JVB.
2. Update /etc/jitsi/jibri/config.json with the IP address or domain name of the JVB.
3. Reboot the server to apply the /etc/hosts change.
https://dev.azure.com/your_org/_pulls
Follow that link or click on "Show more" in the PR bucket list. It takes you to the active PRs. There, select on the top right:
Customize View -> Add section
In the menu select Status: All. The newly added section also contains the completed PRs.
We can now use the pipe (|) format character, which resets all remaining unparsed fields (hour, minute, second, etc.) to zero-like values, like this:
$date = DateTime::createFromFormat('Y-m-d|', '2025-05-14');
File upload done.
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Failed to create cloud build: API key expired. Please renew the API key..
Same here....
What the hell is going on!!!??? I deployed yesterday without any issues!!!
https://stackoverflow.com/questions/79604979/condensing-a-query-into-a-single-better-formatted-query
Updated Query
==========
SELECT
students.DriverLicense,
SUM(CASE WHEN students.QuizTitle LIKE 'THEORY%' THEN students.Earned ELSE 0 END) AS Theory,
SUM(CASE WHEN students.QuizTitle LIKE 'HAZMAT%' THEN students.Earned ELSE 0 END) AS Hazmat,
SUM(CASE WHEN students.QuizTitle LIKE 'PASS%' THEN students.Earned ELSE 0 END) AS Pass,
SUM(CASE WHEN students.QuizTitle LIKE 'SCHOOL%' THEN students.Earned ELSE 0 END) AS Bus
FROM students
WHERE students.DriverLicense = 'D120001102'
GROUP BY students.DriverLicense;
This query does the following:
1. It sums Earned only for matching QuizTitle values using CASE.
2. All results are returned in one row, grouped by DriverLicense.
3. It avoids using multiple subqueries or UNION.
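The conditional-aggregation pattern is easy to sanity-check against an in-memory SQLite table (the sample rows below are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (DriverLicense TEXT, QuizTitle TEXT, Earned INTEGER)")
con.executemany(
    "INSERT INTO students VALUES (?, ?, ?)",
    [
        ("D120001102", "THEORY 1", 10),
        ("D120001102", "THEORY 2", 5),
        ("D120001102", "HAZMAT A", 7),
        ("D120001102", "PASS B",   3),
    ],
)
# one SUM(CASE ...) column per quiz category, all in a single row
theory, hazmat, passed = con.execute("""
    SELECT
        SUM(CASE WHEN QuizTitle LIKE 'THEORY%' THEN Earned ELSE 0 END),
        SUM(CASE WHEN QuizTitle LIKE 'HAZMAT%' THEN Earned ELSE 0 END),
        SUM(CASE WHEN QuizTitle LIKE 'PASS%'   THEN Earned ELSE 0 END)
    FROM students
    WHERE DriverLicense = 'D120001102'
    GROUP BY DriverLicense
""").fetchone()
print(theory, hazmat, passed)
```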
I used a MAX3232 to connect to the CH340, because RS232 signaling levels are different from the CH340's TTL levels, and it doesn't work if you try to connect RS232 directly to the CH340.
Update: the issue was fixed in docker.io/bitnami/airflow:3.0.1-debian-12-r1.
Same here, getting the same error today
Same here! with Cloud Build for an App Engine deploy...
I created a header file called python, allowing you to use input and print like in Python but in the C++ language, after running into the same problem as you, which I have not managed to solve. I will give you the link to the GitHub repo; I just posted it recently.
Same issue here too. Only noticed an hour or two ago
Same here. GAE deployment failed.
Seem CloudRun..etc no pbm
Same issue with Google App Engine (Cloud Run is looking good).
In my case this issue was solved by defining the user and group in www.conf:
[www]
user = www-data
group = www-data
...
Just found the easy solution: actually set quarkus.datasource.username:
quarkus.flyway.migrate-at-start=true
quarkus.flyway.schemas=oracletest
quarkus.datasource.username=oracletest
That may be obvious when comparing it with a production environment where the schema name and user name are the same. In my case of an integration test environment based on Dev Services, it took me some time to find out.
Experiencing the same issue while using gcloud app deploy with no solution so far
I got the same problem just now, could be an error on their end.
I’m getting the same error too—in my case it happens when I try to deploy to App Engine through Cloud Build.
In Flutter, there are two main options to share content on WhatsApp:
1. share_plus
✅ Allows sharing text, images, and files.
❌ Does not support opening chat with a specific WhatsApp contact.
❌ Shows a share sheet — user has to manually select WhatsApp and contact.
2. url_launcher with WhatsApp deep link (https://wa.me/)
✅ Allows opening chat with a specific contact using phone number.
✅ Sends pre-filled text message.
❌ Cannot attach files/images — only plain text or file links.
🔚 Conclusion:
You can’t share both file + text directly to a specific contact using Flutter unless you use the WhatsApp Business API, which is server-based and not suitable for typical mobile apps.
Thanks for sharing your views @Sampath, I totally agree with you.
Forward Geocoding Pricing:
As you've mentioned, you are using the Gen2 (Maps & Location Insights) pricing tier. Gen2 does not include the free 5,000 monthly transactions; all requests are billed from the first one.
The official pricing table still mentions a free quota, but it is specific to Gen1 (S0) pricing tier. So, your charge is correct, and the pricing table is not outdated, but the free tier is not applicable under Gen2 usage-based billing.
Small fluctuations (e.g., €3.90 vs €3.96) are due to currency rounding or real-time exchange rates, not pricing errors.
Cause of Unexpected Charges:
This results in a cost of about €3.91 per 1,000 requests, which aligns with the standard Gen2 rate without a free tier.
So, your interpretation is correct: you are being billed without any free tier, most likely due to your pricing tier setup.
How transactions are counted - Understanding Azure Maps Transactions
Route Matrix Strategy:
You're also planning to compute distances between 284 origins × 17 destinations. Azure Maps Route Matrix is billed as:
(284 × 17) / 4 = 1,207 transactions
Hence, your optimization splitting into 17 separate API calls (one per destination) is valid and keeps billing the same but makes tracking and retrying easier making it a Smart Optimization Strategy.
Details on calculating matrices of route - Post Route Matrix
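The billing arithmetic above can be checked directly (the divide-by-4 follows the (284 × 17) / 4 calculation quoted in the answer):

```python
origins, destinations = 284, 17

cells = origins * destinations   # 4828 origin-destination cells in the matrix
transactions = cells // 4        # billed as (284 * 17) / 4
print(transactions)
```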
Here are some Recommendations to Avoid Extra Cost:
Kindly refer - Manage Your Azure Maps Account's Pricing Tier
Can't debug without seeing the code.
The way I solved this was to check my .gitignore file: I noticed that /build was listed there, so I had to remove that entry.
Reading through the Telethon documentation, this looks to be a known issue.
Telethon Docs
First and foremost, this is not a problem exclusive to Telethon. Any third-party library is prone to cause the accounts to appear banned. Even official applications can make Telegram ban an account under certain circumstances. Third-party libraries such as Telethon are a lot easier to use, and as such, they are misused to spam, which causes Telegram to learn certain patterns and ban suspicious activity.
It looks like the usage of Telethon can trigger the Telegram API anti-spam measures. For sensitive countries, using a proxy might circumvent the ban. However, this use case does seem like it might be in breach of the Telegram API TOS.
In the first instance, I would read through the Telegram API TOS to consider this use-case for Telethon.
No module named 'common' (and later 'util')

When you run python -m main from inside app/, Python sets sys.path[0] to app/ itself. So common/ (a sibling of main.py) is visible and from common.do_common import foo works.

But when you call from common.do_common import foo inside do_common.py and then call foo(), Python still considers app/ the top-level location. It never adds app/.. to the search path, so util/ (another sibling of main.py) isn't on sys.path, and you get ModuleNotFoundError: No module named 'util'.

Relative imports (with leading dots) only work inside packages, and your "main" script isn't actually being run as part of the app package (its name is __main__); see the Python documentation.
Option 1: restructure your invocation so that app is a true package:
project/
└── app/
├── __init__.py
├── main.py
├── common/
│ ├── __init__.py
│ └── do_common.py
└── util/
├── __init__.py
└── do_util.py
Then, from the project/ directory run:
python -m app.main
Now app/ is on sys.path, so both:
from common.do_common import foo
from util.do_util import bar
resolve correctly.
Option 2: use the app prefix. If you keep running python -m main from inside app/, change your imports to:
# in main.py
from app.common.do_common import foo
# in do_common.py
from app.util.do_util import bar
This works because you're explicitly naming the top-level package (app), and it avoids any reliance on relative-import magic.
Option 3: adjust PYTHONPATH or sys.path. If you really want to keep from common… / from util… without any prefix:
export PYTHONPATH="/path/to/project/app":$PYTHONPATH
python -m main
Or do it at the top of your entry script (main.py):
import sys
from pathlib import Path
# add project/app to the import search path
sys.path.insert(0, str(Path(__file__).resolve().parent))
Either way, you're telling Python "look in app/ for top-level modules," so both common and util become importable without dots.
Option 4: make the project installable. Create a minimal setup.py in the project/ root:
# setup.py
from setuptools import setup, find_packages
setup(
name="my_app",
version="0.1",
packages=find_packages(),
)
Then, from project/ run:
pip install -e .
Now everywhere you run Python (inside or outside of app/), common and util are found as part of your installed package, and you can continue writing:
from common.do_common import foo
from util.do_util import bar
with no more ModuleNotFoundError.
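For newer tooling, the same editable-install approach can be expressed with a minimal pyproject.toml instead of setup.py (a sketch, reusing the same hypothetical project name):

```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "my_app"
version = "0.1"
```

pip install -e . then works exactly as with setup.py.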
If you just want the quickest fix and don't mind changing your working directory, go with (1) and run python -m app.main from the project root.
If you prefer clarity and PEP 8's recommendation of absolute imports, use (2) to always name your top-level package.
For scripts that must remain portable without changing how you invoke them, (3) (adjusting PYTHONPATH or sys.path) works fine.
For larger projects you plan to publish or reuse elsewhere, (4) (making the project installable) is the most scalable, robust solution.
All of these remove the need to prepend a dot for every import and will eliminate the ModuleNotFoundError once and for all.
This is fixed in version 3.22 of Thruk. I quote the changelog:
- Apache:
- add UnsafeAllow3F for Ubuntu packages
You hit this bug https://github.com/sni/thruk/issues/1433 after an apache update.
As a workaround, you can add the flag UnsafeAllow3F
manually in /usr/share/thruk/thruk_cookie_auth.include
as well.
If you are getting the error when opening it, it could be a slow internet connection. It would be easier to help if you shared the exact error shown.
modify Controller.php file to this : ( extends BaseController )
<?php
namespace App\Http\Controllers;
use Illuminate\Routing\Controller as BaseController;
abstract class Controller extends BaseController
{
//
}
Thanks for the details here!
I've been facing an issue with the following error on my Keycloak 26.1.4 container app deployment:
"TargetPort 8080 does not match any of the listening port"
My image exposes 8080 and the ingress is set up with all traffic and targetPort=8080.
Any help would be appreciated.
The certificate is created using the CSR file, which contains enough information about the DNS names for which the certificate will be authorized. You can decode the CSR using this link: https://www.sslshopper.com/csr-decoder.html. The CSR also contains the public key generated from the keystore (.jks) file, and that keystore holds the private & public key pair. An alias is a kind of unique tag for an entry in a keystore file. You can download & install KeyStore Explorer to explore more keystore options: https://keystore-explorer.org. After installing it, when you want to update a certificate that was generated with the same keystore & CSR, you can simply use Import CA Reply > From File and select the updated certificate file to update the certificate in the keystore.
How did you calculate the dead time?
As far as I can tell, you cannot achieve such a long dead time.
I assume you calculated it based on the timer clock frequency divided by the prescaler.
The dead-time generator uses tDTS, which is not derived from the prescaled tim_psc_ck; it is based on the kernel timer clock, tim_ker_ck.
The reference manual gives examples based on an 8 MHz clock, and the maximum dead time that can be inserted with this method is 31750 ns. There is a register to prescale tDTS, but only by a maximum of 4. I don't think you will be able to see it with a camera. Maybe if you slow down the entire clock tree to some extreme values.
For a plain servlets environment, you should use the jakartaee-pac4j
implementation: https://github.com/pac4j/jee-pac4j
Setting pointer-events: none;
doesn't solve the problem entirely, as the select can still be controlled with the keyboard.
One quick, HTML-only way to fix that is the newly available (at the time of writing) inert
attribute, which blocks all interactions.
Make sure you don't have an instance of Postgres running locally. If you run `psql postgres`
and you can connect, that means Postgres is already running on your machine, and when you run `psql -h localhost -U username -d dbname` you will be connecting to that local instance, where the username and database you created in the container do not exist.
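One quick way to check for such a clash is to probe the port before connecting. A small Python sketch (5432 is the default Postgres port; adjust if you remapped it in your container):

```python
import socket

def port_open(host="localhost", port=5432, timeout=0.5):
    """Return True if something is already listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_open():
    print("Something is already listening on 5432 -- psql -h localhost "
          "may be reaching that instance, not your container.")
```

If the port is taken by a local install, either stop it or publish the container on a different host port (e.g. `-p 5433:5432`) and connect with `-p 5433`.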
In 8086 assembly, the **segment register used is determined by the presence of BP, not the order of operands**.
For effective address `[SI + BP]`, the **SS (Stack Segment)** register is used — because **BP is involved**. Any address calculation using BP or SP implies stack-relative addressing, which defaults to SS.
> Rule of thumb:
> - If the effective address uses **BP or SP**, the default segment = **SS**
> - Otherwise, it's **DS** (Data Segment)
The order `SI + BP` doesn’t change the segment selection logic — the 8086 doesn’t prioritize operands by order, only by type.
### Reference:
Intel 8086 Programmer’s Manual – Effective Address Calculations (EA)
See: [Intel 8086 Docs – Segment Override Defaults](https://www.cs.uaf.edu/2000/fall/cs301/notes/segmentation.html#effective-address)
So your **first instinct was right** — `[SI + BP]` uses SS by default.
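The rule of thumb above can be captured in a few lines (illustrative Python, not an emulator-accurate model of the 8086):

```python
def default_segment(*regs):
    """Default segment register for an 8086 effective address built
    from the given base/index registers: BP or SP anywhere -> SS,
    otherwise DS. Operand order plays no role."""
    return "SS" if {"BP", "SP"} & set(regs) else "DS"

print(default_segment("SI", "BP"))  # [SI + BP] -> SS
print(default_segment("BP", "SI"))  # [BP + SI] -> SS (same, order irrelevant)
print(default_segment("BX", "SI"))  # [BX + SI] -> DS
```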
Script A:
- A clean build without source maps, but they were generated first and then deleted.
- Build time is longer because source maps are created and then removed.

Script B:
- Sets an environment variable to prevent react-scripts from generating source maps at all.
- Skips .map file generation during build.
- A clean build without source maps, but more efficient and faster.
- Better for production, where you don't want to expose your source code.
Use Script B (GENERATE_SOURCEMAP=false) if you want to avoid source maps efficiently and reduce build time.
Script A is redundant and slower — only useful if you're using a toolchain that requires source maps to exist during build, but not after.
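For reference, in Create React App the same setting can go in a `.env` file instead of being prefixed on the command line (a sketch, assuming react-scripts):

```
# .env in the project root -- picked up automatically by react-scripts
GENERATE_SOURCEMAP=false
```

The inline form `"build": "GENERATE_SOURCEMAP=false react-scripts build"` works on Unix-like shells; on Windows cmd you'd need something like cross-env, which makes the `.env` approach the more portable choice.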
This could be an issue with the code in your providers. If you're using Scope.REQUEST, this can lead to undefined providers when you're calling forwardRef.
This is something that was deeply regretted later, but it serves the request in the question.
In the database layer, the date-time components are stored in separate integer columns (Year | Month | Day | Hour | Min). This way it is possible to add and remove time (mostly through a custom function), and everything is stored without timezone info.
Because it works with integers, it is quicker than parsing strings.
The regret came later, when we built additional functionality on top of this, and it was a nightmare to parse the data back into a proper DateTime. I therefore suggest storing at least the date-time ticks as additional info.
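A sketch of that suggestion, with hypothetical column names and Python on the application side:

```python
from datetime import datetime, timezone

# Hypothetical row with the date-time split across integer columns.
row = {"year": 2024, "month": 3, "day": 15, "hour": 9, "minute": 30}

# Reassembling a proper DateTime from the columns (assuming UTC,
# since the columns carry no timezone info).
dt = datetime(row["year"], row["month"], row["day"],
              row["hour"], row["minute"], tzinfo=timezone.utc)

# Storing a single tick value (here a Unix timestamp) alongside the
# columns makes the round trip trivial later.
ticks = int(dt.timestamp())
assert datetime.fromtimestamp(ticks, tz=timezone.utc) == dt
```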
It may depend on your locale, but I found that if dayPeriod is omitted altogether, it outputs "AM" or "PM".
new Intl.DateTimeFormat('en', {
hourCycle: 'h12',
hour: "2-digit",
minute: "2-digit",
}).format(new Date())
// => "12:48 AM"
I know this is an old thread, but there is now a very good way to determine when DllMain has exited.
In DllMain (or, if you are using MFC, in InitInstance), create a waitable timer, give it a due time of about 1 millisecond in the future and, this is important, give it a completion routine. Your completion routine is queued as an APC, but it can't run until the thread is done initializing.
The loader code checks for queued APCs right after it releases the loader lock, so your completion routine gets executed almost immediately after DllMain returns to the loader.
You can define your own DynamicResourceExtension
[MarkupExtensionReturnType(typeof(Color))]
[Localizability(LocalizationCategory.NeverLocalize)]
public class ColorFromBrushResourceExtension(string resourceKey) : DynamicResourceExtension(resourceKey)
{
public override object ProvideValue(IServiceProvider serviceProvider)
{
return base.ProvideValue(serviceProvider) is SolidColorBrush brush
? brush.Color
: Colors.Transparent;
}
}
And use it this way:
<SolidColorBrush
x:Key="MyBrush"
Color="{utils:ColorFromBrushResource OtherBrush}"
Opacity="0.56"
po:Freeze="True" />
You have to install the Python extension on the remote SSH host for this to work.
@zumarta did you find a solution to this?
That's because libsoup is not a Maven artifact, but a package in Oracle's Linux image distribution:
https://security.snyk.io/vuln/SNYK-ORACLE8-LIBSOUP-10062725
How to fix?
Upgrade Oracle:8 libsoup to version 0:2.62.3-8.el8_10 or higher.
This issue was patched in ELSA-2025-4560.
export const config = {
matcher: ['/:slug', '/:slug/*'], // Protect both /[slug] and /[slug]/nested
};
Explanation: /[slug]/* doesn’t match /[slug] (no trailing slash). You must explicitly match /:slug if you want to include it.
If your table has both a partition (hash) key and a sort key, and you are trying to fetch items based on the partition key alone,
then you can try the Query command instead of the GetItem command. Query lets you search on the partition key alone.
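A minimal sketch of such a Query in the low-level DynamoDB request format (the table and attribute names are hypothetical); unlike GetItem, the sort key can be omitted entirely:

```python
def build_query_params(table_name, partition_key, value):
    """Request parameters for a DynamoDB Query that matches on the
    partition key alone -- no sort-key condition required."""
    return {
        "TableName": table_name,
        "KeyConditionExpression": "#pk = :pk",
        "ExpressionAttributeNames": {"#pk": partition_key},
        "ExpressionAttributeValues": {":pk": {"S": value}},
    }

params = build_query_params("Orders", "customerId", "C-1001")
# Pass params to boto3's client.query(**params) or the equivalent SDK call.
```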
The following flag is available in Chrome and Edge, and thus likely in Chromium products in general:
chrome://flags/#unsafely-treat-insecure-origin-as-secure
In the text field, you should be able to add each of your localhost origins to treat them as secure:
http://localhost,http://127.0.0.1,http://[::1]
Tested on Chrome 136 and Edge 136, the current latest stable versions. Feedback is appreciated.
I made a build with Azul 1.8.0_452 (JDK 8.0.452), which includes JavaFX, but when I export it into a runnable JAR and execute it with the JRE provided by Oracle, I get the same error: JavaFX has been removed from JDK 8.
Thanks brother, I was bothered by this problem for so long.
This happens because of the order in which you multiply the matrices. As a recommendation, you should apply the translate first, then the rotate, and finally the scale; that will help you get the form you want. Remember that matrix multiplication is not commutative.
I'll give you an example:
model = glm::translate(model, glm::vec3(-40.0f, -28.0f, 0.0f)); // translate first
model = glm::rotate(model, glm::radians(-90.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // then rotate around Y
model = glm::scale(model, glm::vec3(0.02f)); // scale last
Possibly you can use Notepad++ (or any other editor that shows the characters in hex) and try changing the encoding to see the plain text. Or, based on the hex values alone, search for which language or encoding typically uses that character.
If you open a site like YouTube, you'll see they handle pausing/destroying media on visibility change in JavaScript code, so you should handle this case as well.
The setting also needs to be enabled at the project level under the Build tab.
This is in Visual Studio 2019.
Regarding the pytest resources, why don't you use this: pytest-resource-path · PyPI
pip install pytest-resource-path
Then, you'll be able to code in pytest, like:
def test_method(resource_path_root):
    text_test_resource = (resource_path_root / 'config1.csv').read_text()
https://issuetracker.google.com/issues/157926129
You can build with the latest version of R8 by making the following change to build.gradle:
buildscript {
dependencies {
classpath 'com.android.tools:r8:8.9.35' // Must be before the Gradle Plugin for Android.
classpath 'com.android.tools.build:gradle:X.Y.Z' // Your current AGP version.
}
}
I answered my own question. You can't always get what you want, but if you try sometimes the Java gods will screw you in the ___.