I tried to use JSDoc for an old project that was large and complex. The verbosity killed me from a DX point of view: the amount of crap you have to type, especially when dealing with generics, just adds even more complexity to the project. I switched to TS and it more than halved the amount of effort required and was CONSIDERABLY better at dealing with generics. And I believe the extra build step is a worthwhile tradeoff. In my opinion, for big projects TS is the more appropriate tool: it helps you avoid overdocumenting and keeps your code quite elegant.
You have to use "|" instead of "/" in the metric name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metrics-gmp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: custom-metrics-gmp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: prometheus.googleapis.com|custom_prometheus|gauge
      target:
        type: AverageValue
        averageValue: 20
Check the official docs example:
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric
You can put a SequentialAgent in the sub_agents parameter of a root agent.
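For illustration, a minimal sketch, assuming the Python google.adk API; the agent names, model string, and instructions below are placeholders:

import optuna  # not needed here; see the google.adk imports below
from google.adk.agents import Agent, SequentialAgent

# A fixed two-step pipeline: its sub-agents always run in order.
pipeline = SequentialAgent(
    name="research_pipeline",
    sub_agents=[
        Agent(name="gather", model="gemini-2.0-flash",
              instruction="Collect raw facts about the user's question."),
        Agent(name="summarize", model="gemini-2.0-flash",
              instruction="Summarize the gathered facts."),
    ],
)

# The SequentialAgent is itself passed in the root agent's sub_agents list,
# so the root can delegate to the whole pipeline as a single sub-agent.
root_agent = Agent(
    name="root",
    model="gemini-2.0-flash",
    instruction="Delegate research requests to the research_pipeline sub-agent.",
    sub_agents=[pipeline],
)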
1. From q0:
δ(q0, a, Z0) = (q1, Z0, 1Z0)
2. From q1:
δ(q1, a, Z0) = (q2, Z0, 1Z0)
δ(q1, a, 1) = (q2, 1, 11)
3. From q2:
δ(q2, a, 1) = (q1, 1, 1)
δ(q2, b, 1) = (q3, 1, λ)
4. From q3:
δ(q3, b, 1) = (q3, 1, λ)
δ(q3, λ, Z0) = (q4, Z0, Z0)
q0 is the initial state and q4 is the final state.
First we make sure we have at least two a's followed by one b; then we can use a loop for more a's.
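A small Python sketch (not part of the original construction) that simply replays the transitions above on an input string, writing Z0 as 'Z' and keeping the stack top at the left end of the string:

# Transition table: (state, input symbol, stack top) -> (next state, string replacing the top)
DELTA = {
    ("q0", "a", "Z"): ("q1", "1Z"),
    ("q1", "a", "Z"): ("q2", "1Z"),
    ("q1", "a", "1"): ("q2", "11"),
    ("q2", "a", "1"): ("q1", "1"),
    ("q2", "b", "1"): ("q3", ""),
    ("q3", "b", "1"): ("q3", ""),
}

def accepts(word):
    state, stack = "q0", "Z"
    for ch in word:
        move = DELTA.get((state, ch, stack[0])) if stack else None
        if move is None:
            return False          # no transition defined: reject
        state, replacement = move
        stack = replacement + stack[1:]
    # lambda-move: delta(q3, lambda, Z0) = (q4, Z0, Z0)
    if state == "q3" and stack and stack[0] == "Z":
        state = "q4"
    return state == "q4"

print(accepts("aabb"))  # True with this transition table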
Export all metadata
I guess you may use the Report Builder from ACS Commons. It allows you to export data into an Excel file:
https://adobe-consulting-services.github.io/acs-aem-commons/features/report-builder/index.html
You will have to extend it with a custom report type so it can handle asset metadata; please follow https://adobe-consulting-services.github.io/acs-aem-commons/features/report-builder/extending.html
Export delta
Run the report programmatically and save the Excel file in the filesystem or the /bin folder of AEM
Update the Upload Asset workflow so it appends metadata to that file
Add a remove listener so it removes the record from the file
Implement an API to download the Excel file from the filesystem or /bin folder
Frame0 is a sleek and modern Balsamiq alternative for hand-drawn-style wireframing and diagramming, including flowcharts, ERDs, UML, etc. There is a free version, so try it out to see if it fits your purpose.
Rust equivalent of Java's System.currentTimeMillis() using the time-now crate:
time_now::now_as_millis()
This crate also provides methods to get duration and current time as secs/millis/micros/nanos:
now_as_secs()
now_as_millis()
now_as_micros()
now_as_nanos()
duration_since_epoch()
I used to face a similar issue using Rufus. Try using Ventoy: it creates a separate bootable partition and a separate partition for you to dump all the ISOs into, so you can have multiple installable OSes on one drive. You can also configure the installation methods.
I solved this by checking my Postman version, then going to the Postman directory and deleting the other versions and update.exe.
When you set n_jobs > 1, Optuna runs your objective function in multiple threads at the same time.
Hugging Face models (like GPT-2) and PyTorch don’t like being run in multiple threads in the same Python process. They share some internal data, and the threads end up stepping on each other’s toes.
That’s why you get the weird meta tensor error.
Once it happens, the Python session is “polluted” until you restart it (because that broken shared state is still there).
That’s why:
With n_jobs=1 → works (because only one thread runs).
With n_jobs=2 → fails (threads clash).
Even after switching back to n_jobs=1 → still fails until you restart (because the clash already broke the shared state).
Instead of running trials in threads, you need to run them in separate processes (so they don’t share memory/state).
There are two simple ways:
Keep n_jobs=1 in Optuna, but run multiple copies of your script:
# terminal 1
python tune.py
# terminal 2
python tune.py
Both processes will write their results into the same Optuna storage (e.g., a SQLite database).
Example in code:
import optuna

def objective(trial):
    # your Hugging Face model code here
    ...

if __name__ == "__main__":
    study = optuna.create_study(
        storage="sqlite:///optuna.db",  # shared DB file
        study_name="gpt2_tuning",
        load_if_exists=True
    )
    study.optimize(objective, n_trials=10, n_jobs=1)  # <- keep n_jobs=1
Now you can run as many parallel processes as you want, and they won’t interfere.
n_jobs > 1 → uses threads → Hugging Face breaks.
Solution = use processes instead of threads.
The easiest way: keep n_jobs=1 and launch the script multiple times, all writing to the same Optuna storage (SQLite file or database).
Microsoft says that mailbox provisioning usually takes less than half an hour but can take up to 24 hours in some cases.
In my experience it's generally at least ten to fifteen minutes, and an hour is not unusual. The unpredictability of the time frame is down to the fact that it's a shared platform used by many other companies, and someone else's activity could actually make yours take longer. Given that Microsoft is not highly motivated to overprovision their infrastructure to ensure maximum performance at all times, it's not really surprising.
Drawing inspiration from the wonderful answer given by @anubhava, you can do the above using the positive lookbehind assertion as well like below:
import re
lines = """water
I have water
I never have water
Where is the water.
I never have food or water
I never have food but I always have water
I never have food or chips. I like to walk. I have water"""
for line in lines.split("\n"):
    if not re.search(r"(?<=never).{,20}\bwater\b", line):
        print(line)
# OUTPUT:
water
I have water
Where is the water.
I never have food but I always have water
I never have food or chips. I like to walk. I have water
1. How to quickly add SHA1 fingerprint to Firebase?
You can generate the SHA-1 and SHA-256 fingerprints by using this command:
keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android
After that, add the generated fingerprint to your Firebase project.
2. How to test Google Play Billing 8.0.0?
flutter build appbundle
Use in_app_purchase plugin; listen to purchaseStream and complete purchases properly:
in_app_purchase: ^3.2.3
// 1. Listen early in initState
_subscription = InAppPurchase.instance.purchaseStream.listen(_onPurchaseUpdated);

// 2. Handle updates (the handler must be async because it awaits)
Future<void> _onPurchaseUpdated(List<PurchaseDetails> purchases) async {
  for (var purchase in purchases) {
    if (purchase.status == PurchaseStatus.purchased ||
        purchase.status == PurchaseStatus.restored) {
      final valid = await verifyPurchase(purchase);
      if (valid) {
        deliverProduct(purchase);
      }
    }
    if (purchase.pendingCompletePurchase) {
      await InAppPurchase.instance.completePurchase(purchase);
    }
  }
}
You can also follow this youtube video for your reference:
3. Can I go to production in 36 hours?
Plan:
4. Can I submit to store without login system?
For any arbitrary MSIX, here's what you need:
$volume = "\\?\Volume{ce76ba5a-3887-4f8a-84de-3aefe64b7691}"
Add-AppxPackage -Volume $volume -Path $adobePremiere
You can get the volume information with Get-AppxVolume.
To create a new volume on the D drive, you will need MSIX Hero.
Through MSIX Hero, you can create a new volume and set it as the default.
Also, just to be sure, go to Settings -> Storage -> Change where new content is saved, and then change the following:
I’ve been using a MATCH.AGAINST query with SQL_CALC_FOUND_ROWS for a while now, but since that’s deprecated I’ve been looking into better options. From what I’ve found, it’s best to drop SQL_CALC_FOUND_ROWS and just run a separate COUNT(*) query for pagination. Also, switching to IN BOOLEAN MODE with FULLTEXT search fixes the issue with common words not showing up and gives more flexibility with things like +required, -excluded, and partial matches using *. It seems like that’s the cleanest way to handle search in MySQL today, unless you move up to something like Elasticsearch or Meilisearch for heavier search features!!
The braces ({ ... }) are directives in XQuery to evaluate their contents. If you look at what your working attempt actually stored, you will probably see that the fn:concat() function was evaluated. Similarly, in your first attempt, the reference to $content is being evaluated, but the declaration, outside the braces, is not evaluated and so is unavailable.
You need to escape the braces by doubling them: {{ ... }}.
I am not sure what you are doing with the <module> element, however. It’s not a construct I’ve ever seen, and I get an error trying to use an XQuery module that is not plain text. I recommend defining the module content as a string and inserting that.
c² = a² + a² [right angle triangle having equal length legs]
c = sqrt(2) * a
---
a = c * sin(45°) [same triangle but with trigonometric relation]
---
c = sqrt(2) * c * sin(45°)
1 = sqrt(2) * sin(45°)
sqrt(2) = 1 / sin(45°)
---
x = 2^log_2(x)
log_2(x) = a
---
sqrt(x) = sqrt(2^a) = sqrt(2) ^ a
sqrt(2) ^ a = (1 / sin(45°)) ^ a = (1 / sin(45°)) ^ log_2(x)
sqrt(x) = (1 / sin(45°)) ^ log_2(x)
---
sin(n) = 2 * sin(n/2) * cos(n/2)
sin(90°) = 2 * sin(45°) * cos(45°)
sin(90°) = 1
---
sqrt(x) = (1 / sin(45°)) ^ log_2(x) = (2 * cos(45°)) ^ log_2(x) = x * (cos(45°) ^ log_2(x))
---
cos(45°) = 0.707106781
---
Test:
sqrt(500) = 22.360
500 * (0.707106781 ^ log_2(500)) = 22.360
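A quick numerical check of the final identity in plain Python (just verifying the algebra above):

import math

def sqrt_via_cos(x):
    # sqrt(x) = x * cos(45 degrees) ** log2(x), per the derivation above
    return x * math.cos(math.radians(45)) ** math.log2(x)

print(math.sqrt(500))     # 22.360679...
print(sqrt_via_cos(500))  # same value, up to floating-point rounding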
For anyone who might still be interested, I use the following
isLastDayOfMonth() =>
    last_day = str.format_time(time_close("1M"), "yyyy-MM-dd", syminfo.timezone)
    this_day = str.format_time(time, "yyyy-MM-dd", syminfo.timezone)
    this_day == last_day
You can also use the “Insert Current Date” and “Insert Current Time” commands and assign your own hotkeys. I used Alt+9 and Alt+8 for my convenience. You can choose your own.
Then if you use the Alt+9 hot key in any note the Date will appear and accordingly if Alt+8 the time will appear.
Credits to @TheLizzard too for this.
The fix is to set cleanresize=False in self.run():
self.run(cleanresize=False)
But now, the frames are not expanding vertically.
That is because, after setting cleanresize to False, we handle everything manually, including the columns and the rows.
The problem was, you were not using grid_rowconfigure, so just add this line:
self.root.grid_rowconfigure(0, weight=1)
So your final code:
import tkinter as tk
import TKinterModernThemes as TKMT

class App(TKMT.ThemedTKinterFrame):
    def __init__(self, theme, mode, usecommandlineargs=True, usethemeconfigfile=True):
        super().__init__("Switch", theme, mode, usecommandlineargs=usecommandlineargs, useconfigfile=usethemeconfigfile)

        self.switchframe1 = self.addLabelFrame("Switch Frame 1", sticky=tk.NSEW, row=0, col=0)
        self.switchvar = tk.BooleanVar()
        self.switchframe1.SlideSwitch("Switch1", self.switchvar)

        self.switchframe2 = self.addLabelFrame("Switch Frame 2", sticky=tk.NSEW, row=0, col=1)
        self.switchvar = tk.BooleanVar()
        self.switchframe2.SlideSwitch("Switch2", self.switchvar)

        self.root.grid_columnconfigure(0, weight=0)
        self.root.grid_columnconfigure(1, weight=1)
        self.root.grid_rowconfigure(0, weight=1)

        self.run(cleanresize=False)

if __name__ == "__main__":
    App("park", "dark")
SELECT pet.Name, pet.Type, AVG(Basic_Cost), MIN(Basic_Cost), MAX(Basic_Cost), SUM(Basic_Cost)
FROM visit, pet
where visit.pet_id = pet.pet_id
and visit.pet_id = 'P0001'
and visit.vet_id = 'V04'
GROUP BY pet.Name, pet.Type;
It looks like it is necessary to modify your app to use QuotaGuard as a proxy. Might https://devcenter.heroku.com/articles/quotaguardshield#https-proxy-python-django help?
If you're on Windows, check if Yarn appears under Settings > Apps > Installed Apps. If it does, uninstall it from there and then try again.
I have a similar issue when using a laptop from my university. The RStudio/Quarto shortcut for inserting a new code chunk (Ctrl + Alt + I) doesn't work because the university has set that shortcut to open a certain application, and I may not have the authority to change that setting.
However, I can still use the (Alt + C) shortcut to access the "Code" menu, in which "Insert Chunk" is luckily the first option. So the next step is to just press "Enter", and I get the same result.
So, the alternative I always use when (Ctrl + Alt + I) doesn't work is: (Alt + C), then "Enter".
A bad joke: got a Google phone and Google Chrome, and what for?
Have to send them with mail, and then the download gets to download.
In MySQL 8, you must put the subquery in a derived table:
UPDATE details_audit a
JOIN ( SELECT COUNT(*) AS cnt FROM details_audit WHERE sort_id < 100 ) b
SET a.count_change_below_100 = b.cnt
Instead of mixing async I/O and ProcessPool workers in the same container, could you split them into dedicated processes? Both the async I/O and the ProcessPool workers use a lot of CPU.
What I'm saying is, let's split your application in two.
In the first application, async I/O pulls messages from Kafka topic A and writes them to Redis.
In the second application, ProcessPool workers read the topic A messages from Redis, run the algorithm, and write the results to Redis. They also increment the Redis counter.
Back in the first application, async I/O reads the results from Redis and pushes them to Kafka topic B.
This way, you can run each application in a different container, reducing the performance issues. But with this approach you need more RAM for Redis.
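As a rough sketch of the second application's worker loop, assuming redis-py; the queue names, counter key, and run_algorithm are placeholders, and the Kafka-facing first application is not shown:

import json
import redis

r = redis.Redis(host="localhost", port=6379)

def run_algorithm(msg):
    ...  # the CPU-heavy work, executed inside a ProcessPool worker

def worker_loop():
    while True:
        # Block until the async I/O application has pushed a message from topic A.
        _key, raw = r.blpop("topic_a_messages")
        msg = json.loads(raw)

        result = run_algorithm(msg)

        # Hand the result back for the async I/O application to push to topic B,
        # and bump the shared counter.
        r.rpush("results_for_topic_b", json.dumps(result))
        r.incr("processed_counter")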
Update - I've tested this in Adobe Illustrator 2025, and the following works fine:
#include 'includedFile.jsx';
I did not need to add anything to manifest.xml for this to work.
When you enable Windows Authentication in IIS, the client must first authenticate before the request body is read.
If your request body is too large (bigger than IIS’s maxAllowedContentLength), IIS rejects it before authentication completes.
Because of that, instead of giving you a clean HTTP 413 Payload Too Large or 404.13 Content Length Too Large, the client sometimes sees a Windows auth challenge popup (because IIS re-challenges authentication when it can’t properly process the request).
Solution: in web.config (IIS limit in bytes):
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="10485760" /> <!-- 10 MB; increase to fix the issue -->
    </requestFiltering>
  </security>
</system.webServer>
We can throw an exception on null using the code below; change myObj to your object.
object myObj = null;
ArgumentNullException.ThrowIfNull(myObj);
These instructions still work in 2025, but the mongo command is now mongosh, which, if not present, can be installed with apt-get install mongodb-mongosh.
You can then use the instructions from here, or the two one-liners from Reilly Chase via hostfi 3561102:
mongo --port 27117 ace --eval "db.admin.find().forEach(printjson);"
Find the line with your admin account name:
name: "<youradminname>"
then use the following line to set your password:
mongo --port 27117 ace --eval 'db.admin.update( { "name" : "<youradminname>" }, { $set : { "x_shadow" : "$6$ybLXKYjTNj9vv$dgGRjoXYFkw33OFZtBsp1flbCpoFQR7ac8O0FrZixHG.sw2AQmA5PuUbQC/e5.Zu.f7pGuF7qBKAfT/JRZFk8/" } } )'
Thank you very much to Reilly C and @Dan Awesome
The reference documentation (https://www.toshiba-sol.co.jp/en/pro/griddb/docs-en/v4_5/GridDB_SQL_Reference.html#over) states: "Multiple use of the WINDOW function/OVER clause in the same SELECT clause, and ... are not allowed."
Your second approach is the right way
I just use the "is" operator which matches against types.
A simple example
if (objectThatCanBeNull is null) {
Console.WriteLine("It is NULL!");
} else {
Console.WriteLine("It is NOT NULL!");
}
Use https://faucet.circle.com/.
It is the official USDC devnet faucet for Solana.
I was able to get in contact with the MS support for gipdocs and they said this:
The implementation of the protocol within Microsoft Windows sends these packets in various scenarios (such as device idle timeout, etc.), but we don’t have any public way for an application to send them. However, there is a private API, which no one is currently using, that can do this:
IGameControllerProviderPrivate has a PowerOff method that would cause this packet to be sent to gip devices including the Xbox one devices, and also the series controller you are interested in. You may QueryInterface this from the public interface GipGameControllerProvider Class (Windows.Gaming.Input.Custom) - Windows apps | Microsoft Learn.
Which gives me hope that this is viable. But I feel like I am in over my head here.
You should avoid testing mocks with React Testing Library. The philosophy of RTL is to test your components the way users interact with them, not to verify implementation details like mock call counts. Now, I think this code is highly problematic for a few reasons. Firstly, the act, as you said, is unnecessary for fireEvent - React Testing Library automatically wraps fireEvent in act. An async beforeEach can cause timing issues with test isolation - and why do you click buttons in the beforeEach?
I just need to find out what to replace "$* = 1;" with, so that my page works properly again.
I think you need to call plt.close(). This will free the memory used by matplotlib.
Ended up using ChatGPT; it walked me through a ton of solutions. Turns out that Swift on non-Mac platforms does not respect the .cache directory in all phases. I was given a more specific version and the issues cleared up. During the build it was trying to use root.
A stack grows like stalactites, which both grow from the ceiling.
For this to work g would have to be the index (what you're calling x1) and x1 should be a vector of parameters of length equal to the number of categories in this first factor. Same for the second factor.
In mine it says "Application's system version 8.4-104 does not match IdentityIQ database's system version 8.4-87".
Can anybody help me with it?
I just got this same error and, for me, it meant that I did not have a local copy of that branch (tfmain/master). To fix it, I just checked out a local copy by running git checkout tfmain/master
Not part of open-source Trino, but Starburst has a tool called Schema Discovery that allows you to do this.
https://docs.starburst.io/latest/insights/schema-discovery.html#column-type-conversion
https://docs.starburst.io/starburst-galaxy/working-with-data/explore-data/schema-discovery.html
For those who want to know the answer after the moderator deleted the answer and the OP didn't repost it:
UPDATE: I may have found a possible cause for the "Failed connect to 'XXX' error: error = 11, message = Server not connected".
During my investigation, I identified the log directory for the 4 EMS queues, in which I saw:
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/users.conf
Administrator user not found, created with default password
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/groups.conf
Administrator user not found, created with default password
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/stores.conf
Administrator user not found, created with default password
FATAL: Exception in startup, exiting.
Problem: on "ems-oss-1y.adb.XXX.XXX.com" there is no directory tree of the form /opt/data/tibco.
I'm going to pass this information to the technical team. I thank you anyway.
I think user3666197's answer provided a lot of extremely useful technical context, so I will highlight some other points at a higher level in my answer. If you are looking for a general rule of thumb for whether numpy or native Python will be faster, the rule of thumb is:
Numpy speeds up CPU bound tasks, but performs slower on IO bound tasks.
The context of this is that numpy does a ton of setup when executing code; every time you execute a numpy function, it is equally equipped to perform extremely complex computation on a 10-exabyte n-dimensional array running on a supercomputer as it is to do a simple scalar addition on a Chromebook. Thus, each time you run a numpy function it requires a little bit of time to set itself up. In user3666197's answer they highlighted the details of such overhead. The other thing I would add is whether your problem is more CPU bound or more IO bound. The more CPU bound the problem, the more it will gain from using numpy.
Travis Oliphant, the creator of numpy, seems to regularly address this and basically comes back to the fact that numpy will always beat out other solutions on much larger and more computationally intensive problems. Otherwise, pure python solutions are much faster for smaller and more IO bound problems. Here is Travis addressing your question directly in an interview from a few years ago:
https://youtu.be/gFEE3w7F0ww?si=mfTO-uJQRIZdMKoL&t=6080
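If you want to see the pattern yourself, here is a rough timing comparison (absolute numbers will vary by machine; only the relative pattern matters):

import timeit
import numpy as np

a, b = 3.0, 4.0
# Tiny scalar operation: numpy's per-call setup overhead dominates, plain Python wins.
print(timeit.timeit("a + b", globals=globals(), number=1_000_000))
print(timeit.timeit("np.add(a, b)", globals=globals(), number=1_000_000))

# Large array operation: the fixed overhead is amortized and numpy wins easily.
xs = list(range(1_000_000))
arr = np.arange(1_000_000)
print(timeit.timeit("[v + 1 for v in xs]", globals=globals(), number=10))
print(timeit.timeit("arr + 1", globals=globals(), number=10))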
In manifest.json, you probably have numbers or spaces in "name". Check this.
I ordered something from this site two months ago but it hasn't been delivered yet, so it's better not to order anything from this site. https://zerothought.in/jetflux-pressure-washer/
I think I love the solution from @Werner Sauer; however, today I needed to do it on a pre-2017 SQL Server -- no STRING_AGG()! Here's what I landed on:
/* Dynamic Insert Statement generator */
DECLARE @SchemaName SYSNAME = 'dbo';
DECLARE @TableName SYSNAME = 'myTableName';
SET NOCOUNT ON;
SET TEXTSIZE 2147483647;
DECLARE @ColumnList NVARCHAR(MAX) = '';
DECLARE @ValueList NVARCHAR(MAX) = '';
DECLARE @SQL NVARCHAR(MAX);
-- Store column metadata in a table variable
DECLARE @Cols TABLE (
ColumnName SYSNAME,
DataType SYSNAME,
ColumnId INT
);
INSERT INTO @Cols (ColumnName, DataType, ColumnId)
SELECT
c.name AS ColumnName,
t.name AS DataType,
c.column_id
FROM sys.columns c
INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName));
-- Build comma-separated column names
SELECT @ColumnList = STUFF((
SELECT ', ' + QUOTENAME(ColumnName)
FROM @Cols
ORDER BY ColumnId
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '');
-- Build concatenation logic for each column
SELECT @ValueList = STUFF((
SELECT ' + '','' + ' +
CASE
WHEN DataType IN ('char','nchar','varchar','nvarchar','text','ntext')
THEN 'COALESCE('''''''' + REPLACE(' + QUOTENAME(ColumnName) + ', '''''''', '''''''''''') + '''''''', ''NULL'')'
WHEN DataType IN ('datetime','smalldatetime','date','datetime2','time')
THEN 'COALESCE('''''''' + CONVERT(VARCHAR, ' + QUOTENAME(ColumnName) + ', 121) + '''''''', ''NULL'')'
ELSE 'COALESCE(CAST(' + QUOTENAME(ColumnName) + ' AS VARCHAR), ''NULL'')'
END
FROM @Cols
ORDER BY ColumnId
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 8, ''); -- remove first " + ',' + "
-- Build the final SQL
SET @SQL =
'SELECT ''INSERT INTO ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) +
' (' + @ColumnList + ') VALUES ('' + ' + @ValueList + ' + '') ;'' AS InsertStatement ' + CHAR(13) +
'FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ';';
-- Execute the generated SQL
EXEC sp_executesql @SQL;
What you're trying to do is: when you drag the container on the right, it gets wider and the .aside gets narrower. However, you gave .aside a fixed width of 500px, which stops it from shrinking.
You need to change your stylesheet of .aside into:
.aside {
background: #aaa;
height: 100%;
min-width: 320px;
flex-grow: 1;
}
That way the flex layout can shrink its width to give the container space to grow.
Meanwhile, you passed an empty dependency array to the useLayoutEffect hook, so it only executes once, before the browser paints the screen, while container.current is still null.
Add the dependency to the useLayoutEffect:
useLayoutEffect(() => {...}, [container.current]);
This way it will execute when the container ref is bound to the element.
Refer to this; it will solve your problem:
Wayback Machine Save Page Now 2 API:
https://docs.google.com/document/d/1Nsv52MvSjbLb2PCpHlat0gkzw0EvtSgpKHu4mk0MnrA/edit
I remember it was quite a short piece of code to read the mouse position in C under DOS. The mouse was connected via a serial RS-232 interface. I used this technique in DOS, in Turbo C with the BGI interface.
https://ic.unicamp.br/~ducatte/mc404/Lampiao/docs/dosints.pdf
I don't have this code anymore, but look at the link above; I think I used interrupt 33h (INT 33h), but it was almost 30 years ago.
Here is example code:
#include <dos.h>    // For geninterrupt() and the _AX/_BX/_CX/_DX pseudo-registers (Turbo C)
#include <stdio.h>  // For printf()

void main() {
    int mouse_installed;
    int x, y, buttons;

    // Initialize the mouse driver
    _AX = 0;
    geninterrupt(0x33);      // Call INT 33h
    mouse_installed = _AX;   // Check the return value in AX

    if (mouse_installed) {
        // Turn the mouse cursor on
        _AX = 1;
        geninterrupt(0x33);

        // Get the mouse position and button status
        _AX = 3;
        geninterrupt(0x33);
        buttons = _BX;  // Get button info
        x = _CX;        // Get X-coordinate
        y = _DX;        // Get Y-coordinate

        printf("Mouse is installed. Cursor at %d,%d, button %d\n", x, y, buttons);

        // Turn the mouse cursor off (optional)
        _AX = 2;
        geninterrupt(0x33);
    } else {
        printf("Mouse driver not found.\n");
    }
}
i wish i can choose an image that isnt so... uuuugggllyyy.....
Multithreading is inherent to web development. I suggest you download Django and start developing some web applications. This will give you some basic experience with multithreaded program development.
1- Exit Android Studio
2- Clear C:\Users\M\.gradle\wrapper\dists\caches\x.x (x.x: Gradle version)
3- Rerun Android Studio and build/ run the app.
Wouldn't something like this work? Just check for empty strings?
def read_loop():
    while True:
        chunk = r_stream.readline()
        if not chunk:  # Empty string = pipe closed
            break
        if chunk:
            print('got chunk', repr(chunk))
            got_chunks.append(chunk)
I solved this problem by this graph.
I agree with you that doSomethingMocked should only run once. I copied your code and ran the unit test, but the test passed in my environment:
I guess it's an issue with the Jest configuration?
Here is my demo repo.
import React from "react";
import { View, Text, Linking, Button, ScrollView } from "react-native";
import { WebView } from "react-native-webview";
import { createBottomTabNavigator } from "@react-navigation/bottom-tabs";
import { NavigationContainer } from "@react-navigation/native";

function HomeScreen() {
  return (
    <ScrollView>
      <Text style={{ fontSize: 22, textAlign: "center", margin: 10 }}>
        🎥 My YouTube Channel
      </Text>
      <View style={{ height: 300, margin: 10 }}>
        <WebView
          source={{ uri: "https://www.youtube.com/@PranavVharkate" }}
        />
      </View>
    </ScrollView>
  );
}

function SocialScreen() {
  return (
    <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
      <Text style={{ fontSize: 20, marginBottom: 20 }}>🌐 My social links</Text>
      {/* https://youtube.com/@pranavvharkate?si=hTu85mvCYp0hujl5 */}
      <Button title="Open Facebook" onPress={() => Linking.openURL("https://facebook.com/yourLink")} />
      {/* https://www.facebook.com/profile.php?id=100091967667636&mibextid=ZbWKwL */}
      <Button title="Open Instagram" onPress={() => Linking.openURL("https://instagram.com/yourLink")} />
      {/* https://www.instagram.com/pranavvharkate2?igsh=MW5hdjRsdHh1eDhsdA== */}
    </View>
  );
}

function CommunityScreen() {
  return (
    <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
      <Text style={{ fontSize: 20 }}>👥 Community Page (Demo)</Text>
      <Text>Firebase can be added here later so posts can be published.</Text>
    </View>
  );
}

const Tab = createBottomTabNavigator();

export default function App() {
  return (
    <NavigationContainer>
      <Tab.Navigator>
        <Tab.Screen name="Home" component={HomeScreen} />
        <Tab.Screen name="Social" component={SocialScreen} />
        <Tab.Screen name="Community" component={CommunityScreen} />
      </Tab.Navigator>
    </NavigationContainer>
  );
}
I am also trying to build a Postgres MCP server that provides table metadata based on the user's natural-language query: loading resources at runtime like tools, getting a specific table's metadata, and using it before writing the SQL query.
I think, for the meantime, we can expose that functionality as a tool as well: it takes a schema and table name and reads a schema_table.json for extra context.
For now, you have to do it that way, but there is a pull request, "Add character casing to TextBox control", that will do it for you; it hasn't been merged into Avalonia yet.
homebrew emacs is fast enough...
There was a new PowerPC reference added recently (simple design and all), for anyone who hates the official manual from the IBM website.
Index:
https://fenixfox-studios.com/manual/powerpc/index.html
Registers:
https://fenixfox-studios.com/manual/powerpc/registers.html
Syntax:
https://fenixfox-studios.com/manual/powerpc/syntax.html
Instructions, like:
add - https://fenixfox-studios.com/manual/powerpc/instructions/add.html
mflr - https://fenixfox-studios.com/manual/powerpc/instructions/mflr.html
sorry I am landing late .... 😁🙈
Replace the capital A with a lowercase a in "declare -a".
It works (for me) ✌🏼
It looks like Flutter wants you to move the value "0" that you are using in your code to a member variable, and reference it using the member variable you created. The compiler is trying to prevent you from hard-coding numeric values.
In my case,
Xcode > Settings > Accounts
removing my current account with “-” and logging in again solved the problem.
Try setting compileSdk like this: compileSdk = flutter.compileSdkVersion. Or, as the error says, try compileSdkVersion = flutter.compileSdkVersion (or a hardcoded value) if you are using an older version of Flutter.
I had the same issue and was able to fix it by adding the UTF-8 encoding option while parsing the JSON.
Hey were you ever able to figure out how to do this?
You can leverage the query implementation in the Coffee Bean Library. This library translates GraphQL queries into SQL queries on the fly. It can be customized and does not require any vendor coupling. I am the author.
I was messing around because the font I use for my website is thicker, and this works perfectly.
It's the same idea as the border CSS.
u{text-decoration: underline 2px;}
<p>
Not underlined text <br>
<u>Underlined text</u><br>
<u>qwertyuiopasdfghjklzxcvbnm</u>
</p>
I had a similar issue; I renamed the file from postcss.config.js to postcss.config.mjs.
Adding the working directory as $(ProjectPath) worked for me.
You can refer to the following KB article
https://community.snowflake.com/s/article/Snowflake-JDBC-throws-error-while-fetching-large-data-JDBC-driver-internal-error-Timeout-waiting-for-the-download-of-chunk0
I have two projects using different versions of an Okta library (cuz there've been 5 revs since I implemented the previous one), and the mismatched versions stashed somewhere ended up causing this issue. Using the same version in the other project fixed the issue. I've never had this issue before with any other nuGet packages and running this, that, or the other version.
Yeah, this is a common issue when you mix RAG and chat memory. The retriever keeps adding the same info every turn, and the memory just stores it blindly so you end up with repeated chunks bloating the prompt.
Quick fix: either dedupe the content before adding it, or use something like mem0 or flumes ai that tracks memory as structured facts and avoids repeating stuff across turns.
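A minimal sketch of the dedupe idea in Python; the hashing approach and message format are just one illustrative way to do it:

import hashlib

seen_chunk_hashes = set()

def add_context(messages, retrieved_chunks):
    """Append only retrieved chunks we haven't already injected into the conversation."""
    for chunk in retrieved_chunks:
        digest = hashlib.sha256(chunk.strip().lower().encode()).hexdigest()
        if digest in seen_chunk_hashes:
            continue  # already in the prompt from an earlier turn
        seen_chunk_hashes.add(digest)
        messages.append({"role": "system", "content": chunk})
    return messages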
-----BEGIN PGP MESSAGE-----
0v8AAABJAY86Lka0Nnre6q9F7/9raOI/XetXWsGjOpeqXCtL7evUvWJVV/oN4IGkDCLdlhzMT7tX
WJVfKGu9/29lXc2GRB8hi0HxxF/mBA==
=WYHR
-----END PGP MESSAGE-----
Looks like you are always setting count to 1
const response = await fetch('/add-crusher-columns', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ count: 1 })
});
Here many years later to point out that as of 2022, Perforce provides Stream components (https://help.perforce.com/helix-core/server-apps/p4v/current/Content/P4V/stream-components.html), which seem to be able to achieve this.
In short, on the Components section of the Advanced tab of a client stream's property page (assuming you're using P4V), you'd specify a line like:
readonly dirX //streams/X
where stream X can itself contain other components, etc. These components can be made writable, as well as point to specific changelists in the source stream rather than just the head. They look pretty similar to git submodules, although I haven't yet had the chance to use them myself, so I cannot comment much further.
Create a .bat file:
@echo off
set "no_proxy=127.0.0.1,localhost"
set "NO_PROXY=127.0.0.1,localhost"
start "" "C:\Program Files\pgAdmin 4\pgAdmin4.exe"
I am working on an astrology application with the following directory structure. I am running a test in it (.\run_pytest_explicit.ps1) and getting many errors:
1. ModuleNotFoundError: No module named 'app.main'
2. ModuleNotFoundError: No module named 'M2_HouseCalculation'
3. ModuleNotFoundError: No module named 'src'
4. ModuleNotFoundError: No module named 'app.core.time_location_service'
5. ModuleNotFoundError: No module named 'app.pipeline.julian_day'
6. ModuleNotFoundError: No module named 'app.pipeline.time_utils'
Please tell me, in a beginner-friendly way, how to solve them.
astro-backend
├── src/
│   ├── __init__.py              # optional, usually src is not a package root
│   ├── app/
│   │   ├── __init__.py          # marks app as a package
│   │   ├── app.py               # your main FastAPI app entrypoint
│   │   ├── core/
│   │   │   ├── __init__.py
│   │   │   └── ...              # core utilities, helpers
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   └── ...              # app-wide service logic
│   │   ├── routes/
│   │   │   ├── __init__.py
│   │   │   └── ...              # route definitions (optional)
│   │   └── ai_service/
│   │       ├── __init__.py
│   │       └── main.py          # AI microservice router
│   ├── modules/
│   │   ├── __init__.py
│   │   ├── module3/
│   │   │   ├── __init__.py
│   │   │   ├── service.py
│   │   │   └── ai_service/
│   │   │       ├── __init__.py
│   │   │       └── main.py      # AI microservice alternative location
│   │   └── other_modules/
│   └── tests/
│       ├── __init__.py          # marks tests as a package
│       └── ...                  # all test files and folders
├── .venv/                       # your pre-existing virtual environment folder
├── PYTHONPATH_Set.ps1           # your PowerShell script to run tests
└── other project files...
It seems that this is a bug, either in Qt Creator (not generating the correct escaped sequence) or in PySide (pyside6-uic doesn't generate the correct escaped sequence for QCoreApplication.translate(), or QCoreApplication.translate() doesn't accept 16-bit escape sequences).
A bug that seems to be related (QTBUG-122975, as pointed by @musicamante in the discussion) seems to be open since 2024.
As a workaround, for the time being, if your app doesn't need translation, you can deselect the translatable property in the QAbstractButton properties.
stage("One of the parallel Stage") {
script {
if ( condition ) {
...
} else {
catchError(buildResult: 'SUCCESS', stageResult: 'NOT_BUILT') {
error("Stage skipped: conditions not met")
}
}
}
}
In our case we deleted the apps in both slots and re-deployed.
Before that we tried a number of cleanup operations in Azure using the Kudu debug console without any progress. The warning message showed up when we activated the staging slot in our TEST environment; we don't use the staging slot in DEV and there we didn't get the message. We had had this warning message for 4 days, so to us it looks like it wasn't going away on its own.
I'm unsure if it is the expected behavior, but as of Apache Superset 5.0.0, you can create a virtual dataset by specifying the table_name to any value (a dataset with that name should not exist) and setting the desired sql query.
Solved. There might be a more elegant way, but this worked:
DECLARE @ID Table (ID int);

INSERT INTO Table1 (FIELD1, FIELD2, FIELD3)
OUTPUT Inserted.IDFIELD INTO @ID
SELECT 1,2,3
WHERE NOT EXISTS (SELECT 'x' FROM Table1 T1 WHERE T1.FIELD1 = 1 AND T1.FIELD2 = 2);

INSERT INTO Table2 (Other1_theID, Other2, Other3)
(SELECT ID,'A','B' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'A' AND T2.Other3 = 'B')) UNION ALL
(SELECT ID,'C','D' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'C' AND T2.Other3 = 'D')) UNION ALL
(SELECT ID,'E','F' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'E' AND T2.Other3 = 'F'))
.payload on ActiveNotification is only set for notifications that your app showed via flutter_local_notifications.show(..., payload: '...').
It does not read the APNs/FCM payload of a remote push that iOS displayed for you. So for a push coming from FCM/APNs, activeNotifications[i].payload will be null.
Why? In the plugin, payload is a convenience string that the plugin stores inside the iOS userInfo when it creates the notification. Remote pushes shown by the OS don’t go through the plugin, so there’s nothing to map into that field.
Option A (recommended): carry data via FCM data and read it with firebase_messaging.
{
"notification": { "title": "title", "body": "body" },
"data": {
"screen": "chat",
"id": "12345" // your custom fields
},
"apns": {
"payload": { "aps": { "content-available": 1 } }
}
}
FirebaseMessaging.onMessageOpenedApp.listen((RemoteMessage m) {
  final data = m.data; // {"screen":"chat","id":"12345"}
  // navigate using this data
});

final initial = await FirebaseMessaging.instance.getInitialMessage();
if (initial != null) { /* use initial.data */ }
Option B: Convert the remote push into a local notification and attach a payload.
Take the custom data from RemoteMessage.data, then call:

await flutterLocalNotificationsPlugin.show(
  1001,
  m.notification?.title,
  m.notification?.body,
  const NotificationDetails(
    iOS: DarwinNotificationDetails(),
    android: AndroidNotificationDetails('default', 'Default'),
  ),
  payload: jsonEncode(m.data), // <— this is what ActiveNotification.payload reads
);
Now getActiveNotifications() will return an ActiveNotification whose .payload contains your JSON string.
Gotcha to avoid: Adding a payload key inside apns.payload doesn’t populate the plugin’s .payload—that’s a different concept. Use RemoteMessage.data or explicitly set the payload when you create a local notification.
Bottom line: For FCM/APNs pushes, read your custom info from RemoteMessage.data (and onMessageOpenedApp/getInitialMessage). If you need .payload from ActiveNotification, you must show the notification locally and pass payload: yourself.
Experience shows that this happens when there are too many non-versioned files.
Unchecking "Show Unversioned Files" helped me.
You can also use “add to ignore list” to exclude directories that should not be captured with git.
OR would have worked too -- logically speaking: NOT (A) AND NOT (B) = NOT (A OR B)
Oh, I've figured out the problem. It turns out that changing a variable solved my problem.
From this:
var decoded;
for (const key of objectKeys) {
  if (originalText.includes(key)) {
    continue;
  } else {
    decoded = result.replaceAll(key, replaceObject[key])
  }
}
To this:
var decoded = result;
for (const key of objectKeys) {
  if (originalText.includes(key)) {
    continue;
  } else {
    decoded = decoded.replaceAll(key, replaceObject[key])
  }
}
Thank you so much, this worked perfectly for me! It also resolves problems with the design view of WindowBuilder.
This is due to Iconify Intellisense. There is already an Issue open with exactly this question in the Github repo.
In a monorepo this error can happen when there are multiple Vite versions; you need to install the same version. Source: https://github.com/vitest-dev/vitest/issues/4048
When you’re talking about a 20 GB log file, you’ll definitely want to lean on S3’s multipart upload API. That’s what it’s built for: breaking a large file into smaller chunks (up to 10,000 parts), uploading them in parallel, and then having S3 stitch them back together on the backend. If any part fails, you can just retry that one chunk instead of the whole file.
Since the consuming application doesn’t want to deal with pre-signed URLs and can’t drop the file into a shared location, one pattern I’ve used is to expose an API endpoint in front of your service that acts as a broker:
The app calls your API and says “I need to send logs.”
Your service kicks off a multipart upload against S3 using your AWS credentials (so the app never touches S3 directly).
The app streams the file (or pushes chunks through your API), and your service forwards them to S3 using the multipart upload ID.
Once all parts are in, your service finalizes the upload with S3.
That gives you a central place to send back success/failure notifications:
On successful completion, your service can push a message (SNS, SQS, webhook, whatever makes sense) to both your system and the caller.
On error, you can emit a corresponding failure event.
The trade-off is that your API tier is now in the data path, so you’ll need to size it appropriately (20 GB uploads aren’t small), and you’ll want to handle timeouts, retries, and maybe some form of flow control. But functionally, this avoids presigned URLs, avoids shared locations, and still gives you control over how/when to notify both sides of the result.
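For the S3 side of that broker, here is a rough boto3 sketch of the multipart flow; the bucket, key, and chunk source are placeholders, and retry/backoff logic is omitted:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-log-bucket", "uploads/big.log"

def upload_stream(chunks):
    """chunks: an iterable of byte strings (each part must be >= 5 MB except the last)."""
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = mpu["UploadId"]
    parts = []
    try:
        for number, chunk in enumerate(chunks, start=1):
            resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                                  PartNumber=number, Body=chunk)
            parts.append({"ETag": resp["ETag"], "PartNumber": number})
        s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                     MultipartUpload={"Parts": parts})
        # success -> notify both sides here (SNS, SQS, webhook, ...)
    except Exception:
        s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
        raise  # failure -> emit the corresponding failure event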
self.addEventListener('fetch')
is never called. NEVER!
WHY???
I just received this email after I couldn't log in anymore.
After that I reset my account password and it still didn't let me log in.
But I was finally able to get in like this:
Log in as root at https://signin.aws.amazon.com/
When asked for 2FA/MFA, click "Trouble signing in?" at the bottom
Then click on Re-sync with AWS Servers
Then enter two 2FA codes, waiting approx. 30 s between them
Finally, sign in again
Done ✅
I face the same issue, exactly as you described it. Have you found a fix?
If your API is running correctly and returning a status code of 200, the basic solution is to first send a message from your number to the WhatsApp number where you expect to receive messages. Once you've done this initial message exchange, you will start receiving messages from WhatsApp.