I just use the "is" operator, which performs pattern matching (here, matching against the null constant pattern rather than a type).
A simple example:
if (objectThatCanBeNull is null) {
    Console.WriteLine("It is NULL!");
} else {
    Console.WriteLine("It is NOT NULL!");
}
Use https://faucet.circle.com/.
It is the official USDC devnet faucet for Solana.
I was able to get in contact with the MS support for gipdocs and they said this:
The implementation of the protocol within Microsoft Windows sends these packets for various scenarios (such as device idle timeout, etc.), but we don't have any public way for an application to send them. However, there is a private API, currently unused by anyone, that can do this:
IGameControllerProviderPrivate has a PowerOff method that would cause this packet to be sent to gip devices including the Xbox one devices, and also the series controller you are interested in. You may QueryInterface this from the public interface GipGameControllerProvider Class (Windows.Gaming.Input.Custom) - Windows apps | Microsoft Learn.
Which gives me hope that this is viable. But I feel like I am in over my head here.
You should avoid testing mocks with React Testing Library. The philosophy of RTL is to test your components the way users interact with them, not to verify implementation details like mock call counts. Beyond that, I think this code is problematic for a few reasons. First, the act is, as you said, unnecessary for fireEvent: React Testing Library automatically wraps fireEvent in act. Second, an async beforeEach can cause timing issues with test isolation. Why do you click buttons in the beforeEach at all?
I just need to find out what to replace "$* = 1;" with, so that my page works properly again.
I think you need to call plt.close(). This will free the memory used by Matplotlib.
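A minimal sketch of the idea (assuming a script-style workflow and the non-interactive Agg backend):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts and servers
import matplotlib.pyplot as plt

# Create and save several figures, closing each one so its memory can be reclaimed.
for i in range(3):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, i])
    fig.savefig(f"plot_{i}.png")
    plt.close(fig)  # release this figure; plt.close("all") closes every open figure

print(len(plt.get_fignums()))  # 0 -> no figures remain open
```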
I ended up using ChatGPT; it walked me through a ton of solutions. It turns out that Swift on non-Mac platforms does not respect the .cache directory in all phases; during the build it was trying to use root. I was given a more specific version and the issues cleared up.
A stack grows like a stalactite: both grow downward from the ceiling.
For this to work, g would have to be the index (what you're calling x1), and x1 should be a vector of parameters whose length equals the number of categories in this first factor. The same goes for the second factor.
In mine it says: Application's system version 8.4-104 does not match IdentityIQ database's system version 8.4-87.
Can anybody help me with it?
I just got this same error and, for me, it meant that I did not have a local copy of that branch (tfmain/master). To fix it, I just checked out a local copy by running git checkout tfmain/master
Not part of open-source Trino, but Starburst has a tool called Schema Discovery that allows you to do this.
https://docs.starburst.io/latest/insights/schema-discovery.html#column-type-conversion
https://docs.starburst.io/starburst-galaxy/working-with-data/explore-data/schema-discovery.html
For those who want to know the answer after the moderator deleted the answer and the OP didn't repost it:
UPDATE: I may have found a possible cause for the "Failed connect to 'XXX' error: error = 11, message = Server not connected" error.
During my investigation, I identified the log directory for the 4 EMS queues, in which I saw:
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/users.conf
Administrator user not found, created with default password
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/groups.conf
Administrator user not found, created with default password
Failed to create file '/opt/data/tibco/ems/ems_msg/config/shared/stores.conf
Administrator user not found, created with default password
FATAL: Exception in startup, exiting.
Problem: In "ems-oss-1y.adb.XXX.XXX.com" there is not a tree structure of the type /opt/data/tibco.
I'm going to pass this information to the technical team. I thank you anyway.
I think user3666197's answer provided a lot of extremely useful technical context, so I will highlight some other points at a higher level. If you are looking for a general rule of thumb for whether NumPy or native Python will be faster, it is:
Numpy speeds up CPU bound tasks, but performs slower on IO bound tasks.
The context is that NumPy does a lot of setup when executing code; every time you execute a NumPy function, it is equally equipped to perform an extremely complex computation on a 10-exabyte n-dimensional array on a supercomputer as it is to do a simple scalar addition on a Chromebook. Thus, each time you run a NumPy function, it needs a little time to set itself up. user3666197's answer highlights the details of that overhead. The other thing I would add is to consider whether your problem is more CPU bound or more IO bound: the more CPU bound the problem, the more it will gain from using NumPy.
Travis Oliphant, the creator of NumPy, seems to address this regularly, and it basically comes back to the fact that NumPy will always beat other solutions on much larger and more computationally intensive problems, while pure Python solutions are much faster for smaller and more IO-bound problems. Here is Travis addressing your question directly in an interview from a few years ago:
https://youtu.be/gFEE3w7F0ww?si=mfTO-uJQRIZdMKoL&t=6080
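To make the overhead concrete, here is a small benchmark sketch (timings are machine-dependent; the point is the ordering, not the absolute numbers):

```python
import timeit
import numpy as np

# Tiny input: NumPy's per-call setup dominates, so plain Python is typically faster.
small_py = timeit.timeit("3.0 + 4.0", number=100_000)
small_np = timeit.timeit("np.add(3.0, 4.0)", globals={"np": np}, number=100_000)

# Large input: the O(n) work dominates, so NumPy's compiled loop wins easily.
xs = list(range(1_000_000))
arr = np.arange(1_000_000)
big_py = timeit.timeit("sum(xs)", globals={"xs": xs}, number=10)
big_np = timeit.timeit("arr.sum()", globals={"arr": arr}, number=10)

print(f"scalar add: python={small_py:.4f}s numpy={small_np:.4f}s")
print(f"1e6 sum:    python={big_py:.4f}s numpy={big_np:.4f}s")
```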
In manifest.json, you probably have numbers or spaces in "name". Check this.
I ordered something from this site two months ago but it hasn't been delivered yet, so it's better not to order anything from this site: https://zerothought.in/jetflux-pressure-washer/
I love the solution from @Werner Sauer; however, today I needed to do it on a pre-2017 SQL Server, so no STRING_AGG()! Here's what I landed on:
/* Dynamic INSERT statement generator */
DECLARE @SchemaName SYSNAME = 'dbo';
DECLARE @TableName SYSNAME = 'myTableName';

SET NOCOUNT ON;
SET TEXTSIZE 2147483647;

DECLARE @ColumnList NVARCHAR(MAX) = '';
DECLARE @ValueList NVARCHAR(MAX) = '';
DECLARE @SQL NVARCHAR(MAX);

-- Store column metadata in a table variable
DECLARE @Cols TABLE (
    ColumnName SYSNAME,
    DataType SYSNAME,
    ColumnId INT
);

INSERT INTO @Cols (ColumnName, DataType, ColumnId)
SELECT
    c.name AS ColumnName,
    t.name AS DataType,
    c.column_id
FROM sys.columns c
INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName));

-- Build comma-separated column names
SELECT @ColumnList = STUFF((
    SELECT ', ' + QUOTENAME(ColumnName)
    FROM @Cols
    ORDER BY ColumnId
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '');

-- Build concatenation logic for each column
SELECT @ValueList = STUFF((
    SELECT ' + '','' + ' +
        CASE
            WHEN DataType IN ('char','nchar','varchar','nvarchar','text','ntext')
                THEN 'COALESCE('''''''' + REPLACE(' + QUOTENAME(ColumnName) + ', '''''''', '''''''''''') + '''''''', ''NULL'')'
            WHEN DataType IN ('datetime','smalldatetime','date','datetime2','time')
                THEN 'COALESCE('''''''' + CONVERT(VARCHAR, ' + QUOTENAME(ColumnName) + ', 121) + '''''''', ''NULL'')'
            ELSE 'COALESCE(CAST(' + QUOTENAME(ColumnName) + ' AS VARCHAR), ''NULL'')'
        END
    FROM @Cols
    ORDER BY ColumnId
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 8, ''); -- remove the leading " + ',' + "

-- Build the final SQL
SET @SQL =
    'SELECT ''INSERT INTO ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) +
    ' (' + @ColumnList + ') VALUES ('' + ' + @ValueList + ' + '') ;'' AS InsertStatement ' + CHAR(13) +
    'FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ';';

-- Execute the generated SQL
EXEC sp_executesql @SQL;
What you are trying to do: when you drag the container on the right, it gets wider and the .aside gets narrower. However, you gave .aside a fixed width of 500px, which stops it from shrinking.
You need to change your stylesheet for .aside to:
.aside {
    background: #aaa;
    height: 100%;
    min-width: 320px;
    flex-grow: 1;
}
so that the flex layout can shrink its width and give the container space to grow.
Meanwhile, you passed an empty dependency array to the useLayoutEffect hook, so it executes only once, before the browser paints the screen, while container.current is still null.
Add the dependency to the useLayoutEffect:
useLayoutEffect(() => {...}, [container.current]);
This way it will execute when the container ref is bound to the element.
Referring to this will solve your problem.
Wayback Machine Save Page Now 2 API:
https://docs.google.com/document/d/1Nsv52MvSjbLb2PCpHlat0gkzw0EvtSgpKHu4mk0MnrA/edit
I remember there was quite a short piece of code to read the mouse position in C under DOS. The mouse was connected via a serial RS-232 interface. I used this technique in DOS, in Turbo C, with the BGI interface.
https://ic.unicamp.br/~ducatte/mc404/Lampiao/docs/dosints.pdf
I don't have this code anymore, but look at the link above. I think I used interrupt 33 (INT 33h), though it was almost 30 years ago. Here is example code:
#include <dos.h>   // Turbo C: _AX pseudo-registers and geninterrupt()
#include <stdio.h>

void main() {
    int mouse_installed;
    int x, y, buttons;

    // Initialize the mouse driver (INT 33h, AX=0); AX is non-zero if a driver is installed
    _AX = 0;
    geninterrupt(0x33);
    mouse_installed = _AX;

    if (mouse_installed) {
        // Show the mouse cursor (AX=1)
        _AX = 1;
        geninterrupt(0x33);

        // Get the mouse position and button status (AX=3)
        _AX = 3;
        geninterrupt(0x33);
        buttons = _BX; // button state
        x = _CX;       // X coordinate
        y = _DX;       // Y coordinate
        printf("Mouse is installed. Cursor at %d,%d, button %d\n", x, y, buttons);

        // Hide the mouse cursor again (AX=2, optional)
        _AX = 2;
        geninterrupt(0x33);
    } else {
        printf("Mouse driver not found.\n");
    }
}
I wish I could choose an image that isn't so... ugly...
Multithreading is inherent to web development. I suggest you download Django and start developing some web applications. This will give you some basic experience with multithreaded program development.
1- Exit Android Studio
2- Clear C:\Users\M\.gradle\wrapper\dists\caches\x.x (x.x: Gradle version)
3- Rerun Android Studio and build/ run the app.
Wouldn't something like this work? Just check for empty strings?
def read_loop():
    while True:
        chunk = r_stream.readline()
        if not chunk:  # empty string = pipe closed
            break
        print('got chunk', repr(chunk))
        got_chunks.append(chunk)
I solved this problem by this graph.
I agree with you that doSomethingMocked should only run once. I copied your code and ran the unit test, but the test passed in my environment. I guess it's an issue with your Jest configuration? Here is my demo repo:
import React from "react";
import { View, Text, Linking, Button, ScrollView } from "react-native";
import { WebView } from "react-native-webview";
import { createBottomTabNavigator } from "@react-navigation/bottom-tabs";
import { NavigationContainer } from "@react-navigation/native";

function HomeScreen() {
  return (
    <ScrollView>
      <Text style={{ fontSize: 22, textAlign: "center", margin: 10 }}>
        🎥 My YouTube Channel
      </Text>
      <View style={{ height: 300, margin: 10 }}>
        <WebView
          source={{ uri: "https://www.youtube.com/@PranavVharkate" }}
        />
      </View>
    </ScrollView>
  );
}

function SocialScreen() {
  return (
    <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
      <Text style={{ fontSize: 20, marginBottom: 20 }}>🌐 My social links</Text>
      {/* https://youtube.com/@pranavvharkate?si=hTu85mvCYp0hujl5 */}
      <Button title="Open Facebook" onPress={() => Linking.openURL("https://facebook.com/yourLink")} />
      {/* https://www.facebook.com/profile.php?id=100091967667636&mibextid=ZbWKwL */}
      <Button title="Open Instagram" onPress={() => Linking.openURL("https://instagram.com/yourLink")} />
      {/* https://www.instagram.com/pranavvharkate2?igsh=MW5hdjRsdHh1eDhsdA== */}
    </View>
  );
}

function CommunityScreen() {
  return (
    <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
      <Text style={{ fontSize: 20 }}>👥 Community Page (Demo)</Text>
      <Text>Posts can be added here later by hooking up Firebase.</Text>
    </View>
  );
}

const Tab = createBottomTabNavigator();

export default function App() {
  return (
    <NavigationContainer>
      <Tab.Navigator>
        <Tab.Screen name="Home" component={HomeScreen} />
        <Tab.Screen name="Social" component={SocialScreen} />
        <Tab.Screen name="Community" component={CommunityScreen} />
      </Tab.Navigator>
    </NavigationContainer>
  );
}
I am also trying to build a Postgres MCP server that provides table metadata based on the user's natural-language query, loading resources at runtime like tools, fetching a specific table's metadata, and using it before writing the SQL query. For the meantime, I think we can expose that functionality as a tool that takes a schema and table name and reads a schema_table.json for extra context.
For now, you have to do it that way, but there is a pull request, "Add character casing to TextBox control", that will do it for you; it just hasn't been merged into Avalonia yet.
homebrew emacs is fast enough...
There was a new PowerPC reference added recently (simple design and all), for anyone who dislikes the official manual on the IBM website.
Index:
https://fenixfox-studios.com/manual/powerpc/index.html
Registers:
https://fenixfox-studios.com/manual/powerpc/registers.html
Syntax:
https://fenixfox-studios.com/manual/powerpc/syntax.html
Instructions like:
add - https://fenixfox-studios.com/manual/powerpc/instructions/add.html
mflr - https://fenixfox-studios.com/manual/powerpc/instructions/mflr.html
Sorry, I am landing late 😁🙈
Replace the capital A with a lowercase a: use "declare -a" instead of "declare -A".
It works (for me) ✌🏼
It looks like Flutter wants you to move the value "0" that you are using in your code into a member variable, and to reference it through that member variable. The compiler is trying to prevent you from hard-coding numeric values.
In my case,
Xcode > Settings > Accounts
removing my current account with “-” and logging in again solved the problem.
Try putting compileSdk like this: compileSdk = flutter.compileSdkVersion. Or, as the error says, try compileSdkVersion = flutter.compileSdkVersion (or a hardcoded value) if you are using an older version of Flutter.
I had the same issue and was able to fix it by adding the UTF-8 encoding option while parsing the JSON.
Hey, were you ever able to figure out how to do this?
You can leverage the query implementation of the Coffee Bean Library. This library translates GraphQL queries into SQL queries on the fly. It can be customized and does not require any vendor coupling. I am the author.
I was messing around because the font I use for my website is thicker, and this works perfectly. It's the same idea as the border CSS:
u { text-decoration: underline 2px; }
<p>
Not underlined text <br>
<u>Underlined text</u><br>
<u>qwertyuiopasdfghjklzxcvbnm</u>
</p>
I had a similar issue, I updated filename from postcss.config.js -> postcss.config.mjs
Adding Working directory as $(ProjectPath) worked for me
You can refer to the following KB article:
https://community.snowflake.com/s/article/Snowflake-JDBC-throws-error-while-fetching-large-data-JDBC-driver-internal-error-Timeout-waiting-for-the-download-of-chunk0
I have two projects using different versions of an Okta library (because there have been 5 revisions since I implemented the previous one), and the mismatched versions stashed somewhere ended up causing this issue. Using the same version in both projects fixed it. I've never had this issue with any other NuGet packages, whatever version I was running.
Yeah, this is a common issue when you mix RAG and chat memory. The retriever keeps adding the same info every turn, and the memory just stores it blindly, so you end up with repeated chunks bloating the prompt.
Quick fix: either dedupe the content before adding it, or use something like mem0 or Flumes AI that tracks memory as structured facts and avoids repeating things across turns.
Looks like you are always setting count to 1:
const response = await fetch('/add-crusher-columns', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({ count: 1 })
});
Here many years later to point out that as of 2022, Perforce provides Stream components (https://help.perforce.com/helix-core/server-apps/p4v/current/Content/P4V/stream-components.html), which seem to be able to achieve this.
In short, on the Components section of the Advanced tab of a client stream's property page (assuming you're using P4V), you'd specify a line like:
readonly dirX //streams/X
where stream X can itself contain other components, etc. These components can be made writable, and can point to specific changelists in the source stream rather than just the head. They look pretty similar to Git submodules, although I haven't yet had the chance to use them myself, so I cannot comment much further.
create bat file:
@echo off
set "no_proxy=127.0.0.1,localhost"
set "NO_PROXY=127.0.0.1,localhost"
start "" "C:\Program Files\pgAdmin 4\pgAdmin4.exe"
I am working on an astrology application with the following directory structure. I am running a test with .\run_pytest_explicit.ps1 and getting many errors:
1. ModuleNotFoundError: No module named 'app.main'
2. ModuleNotFoundError: No module named 'M2_HouseCalculation'
3. ModuleNotFoundError: No module named 'src'
4. ModuleNotFoundError: No module named 'app.core.time_location_service'
5. ModuleNotFoundError: No module named 'app.pipeline.julian_day'
6. ModuleNotFoundError: No module named 'app.pipeline.time_utils'
Please tell me, in a beginner-friendly way, how to solve them?
astro-backend
├── src/
│   ├── __init__.py          # optional; usually src is not a package root
│   ├── app/
│   │   ├── __init__.py      # marks app as a package
│   │   ├── app.py           # your main FastAPI app entrypoint
│   │   ├── core/
│   │   │   ├── __init__.py
│   │   │   └── ...          # core utilities, helpers
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   └── ...          # app-wide service logic
│   │   ├── routes/
│   │   │   ├── __init__.py
│   │   │   └── ...          # route definitions (optional)
│   │   └── ai_service/
│   │       ├── __init__.py
│   │       └── main.py      # AI microservice router
│   ├── modules/
│   │   ├── __init__.py
│   │   ├── module3/
│   │   │   ├── __init__.py
│   │   │   ├── service.py
│   │   │   └── ai_service/
│   │   │       ├── __init__.py
│   │   │       └── main.py  # AI microservice alternative location
│   │   └── other_modules/
│   └── tests/
│       ├── __init__.py      # marks tests as a package
│       └── ...              # all test files and folders
├── .venv/                   # your pre-existing virtual environment folder
├── PYTHONPATH_Set.ps1       # your PowerShell script to run tests
└── other project files...
It seems that is a bug, either in Qt Creator (not generating the correct escaped sequence) or in PySide (pyside6-uic doesn't generate the correct escaped sequence for QCoreApplication.translate() or QCoreApplication.translate() doesn't accept 16bit escape sequences).
A bug that seems to be related (QTBUG-122975, as pointed by @musicamante in the discussion) seems to be open since 2024.
As a workaround, for the time being, if your app doesn't need translation, you can deselect the translatable property in the QAbstractButton properties.
stage("One of the parallel Stage") {
    script {
        if ( condition ) {
            ...
        } else {
            catchError(buildResult: 'SUCCESS', stageResult: 'NOT_BUILT') {
                error("Stage skipped: conditions not met")
            }
        }
    }
}
In our case we deleted the apps in both slots and re-deployed.
Before that, we tried a number of cleanup operations in Azure using the Kudu debug console without any progress. The warning message turned up when we activated the staging slot in our TEST environment; we don't use the staging slot in DEV, and there we didn't get the message. We had this warning message for 4 days, so to us it looks like it wouldn't have gone away on its own.
I'm unsure if it is the expected behavior, but as of Apache Superset 5.0.0, you can create a virtual dataset by setting table_name to any value (a dataset with that name must not already exist) and setting the desired SQL query.
Solved. There might be a more elegant way, but this worked:
DECLARE @ID TABLE (ID int);

INSERT INTO Table1 (FIELD1, FIELD2, FIELD3)
OUTPUT Inserted.IDFIELD INTO @ID
SELECT 1, 2, 3
WHERE NOT EXISTS (SELECT 'x' FROM Table1 T1 WHERE T1.FIELD1 = 1 AND T1.FIELD2 = 2);

INSERT INTO Table2 (Other1_theID, Other2, Other3)
(SELECT ID, 'A', 'B' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'A' AND T2.Other3 = 'B')) UNION ALL
(SELECT ID, 'C', 'D' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'C' AND T2.Other3 = 'D')) UNION ALL
(SELECT ID, 'E', 'F' FROM @ID
 WHERE NOT EXISTS (SELECT 'x' FROM Table2 T2 WHERE T2.Other2 = 'E' AND T2.Other3 = 'F'))
.payload on ActiveNotification is only set for notifications that your app showed via flutter_local_notifications.show(..., payload: '...').
It does not read the APNs/FCM payload of a remote push that iOS displayed for you. So for a push coming from FCM/APNs, activeNotifications[i].payload will be null.
Why? In the plugin, payload is a convenience string that the plugin stores inside the iOS userInfo when it creates the notification. Remote pushes shown by the OS don’t go through the plugin, so there’s nothing to map into that field.
Option A (recommended): carry data via FCM data and read it with firebase_messaging.
{
"notification": { "title": "title", "body": "body" },
"data": {
"screen": "chat",
"id": "12345" // your custom fields
},
"apns": {
"payload": { "aps": { "content-available": 1 } }
}
}
FirebaseMessaging.onMessageOpenedApp.listen((RemoteMessage m) {
    final data = m.data; // {"screen":"chat","id":"12345"}
    // navigate using this data
});

final initial = await FirebaseMessaging.instance.getInitialMessage();
if (initial != null) { /* use initial.data */ }
Option B: Convert the remote push into a local notification and attach a payload.
Take the RemoteMessage.data, then call:
await flutterLocalNotificationsPlugin.show(
    1001,
    m.notification?.title,
    m.notification?.body,
    const NotificationDetails(
        iOS: DarwinNotificationDetails(),
        android: AndroidNotificationDetails('default', 'Default'),
    ),
    payload: jsonEncode(m.data), // <- this is what ActiveNotification.payload reads
);
Now getActiveNotifications() will return an ActiveNotification whose .payload contains your JSON string.
Gotcha to avoid: adding a payload key inside apns.payload does not populate the plugin's .payload; that's a different concept. Use RemoteMessage.data, or explicitly set the payload when you create a local notification.
Bottom line: For FCM/APNs pushes, read your custom info from RemoteMessage.data (and onMessageOpenedApp/getInitialMessage). If you need .payload from ActiveNotification, you must show the notification locally and pass payload: yourself.
Experience shows that this happens when there are too many unversioned files.
Unchecking "Show Unversioned Files" helped me.
You can also use “add to ignore list” to exclude directories that should not be captured with git.
OR would have worked too; logically speaking: NOT(A) AND NOT(B) = NOT(A OR B).
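The identity (De Morgan's law) is easy to spot-check with a truth table; a quick sketch:

```python
from itertools import product

# De Morgan: NOT(A) AND NOT(B) == NOT(A OR B), for every combination of A and B.
for a, b in product([False, True], repeat=2):
    assert ((not a) and (not b)) == (not (a or b))
print("holds for all inputs")
```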
Oh, I've figured out the problem. It turns out that changing a variable solved my problem.
From this:
var decoded;
for (const key of objectKeys) {
    if (originalText.includes(key)) {
        continue;
    } else {
        decoded = result.replaceAll(key, replaceObject[key])
    }
}
To this:
var decoded = result;
for (const key of objectKeys) {
    if (originalText.includes(key)) {
        continue;
    } else {
        decoded = decoded.replaceAll(key, replaceObject[key])
    }
}
Thank you so much, this worked perfectly for me! It also resolves problems with the design view of WindowBuilder.
This is due to Iconify Intellisense. There is already an Issue open with exactly this question in the Github repo.
In monorepo this error can happen when there is multiple vite versions, you need to install the same version, source: https://github.com/vitest-dev/vitest/issues/4048
When you’re talking about a 20 GB log file, you’ll definitely want to lean on S3’s multipart upload API. That’s what it’s built for: breaking a large file into smaller chunks (up to 10,000 parts), uploading them in parallel, and then having S3 stitch them back together on the backend. If any part fails, you can just retry that one chunk instead of the whole file.
Since the consuming application doesn’t want to deal with pre-signed URLs and can’t drop the file into a shared location, one pattern I’ve used is to expose an API endpoint in front of your service that acts as a broker:
The app calls your API and says “I need to send logs.”
Your service kicks off a multipart upload against S3 using your AWS credentials (so the app never touches S3 directly).
The app streams the file (or pushes chunks through your API), and your service forwards them to S3 using the multipart upload ID.
Once all parts are in, your service finalizes the upload with S3.
That gives you a central place to send back success/failure notifications:
On successful completion, your service can push a message (SNS, SQS, webhook, whatever makes sense) to both your system and the caller.
On error, you can emit a corresponding failure event.
The trade-off is that your API tier is now in the data path, so you’ll need to size it appropriately (20 GB uploads aren’t small), and you’ll want to handle timeouts, retries, and maybe some form of flow control. But functionally, this avoids presigned URLs, avoids shared locations, and still gives you control over how/when to notify both sides of the result.
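Two hard S3 limits worth planning around are the 10,000-part maximum and the 5 MiB minimum part size (for every part but the last). A small helper (hypothetical naming, not part of any SDK) to pick a part size for a given file:

```python
MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (all parts except the last)
MAX_PARTS = 10_000           # S3 multipart upload part-count limit

def choose_part_size(file_size: int) -> int:
    """Smallest part size (bytes) that keeps the upload within MAX_PARTS."""
    needed = -(-file_size // MAX_PARTS)  # ceil(file_size / MAX_PARTS)
    return max(MIN_PART, needed)

twenty_gb = 20 * 1024**3
part = choose_part_size(twenty_gb)
parts = -(-twenty_gb // part)            # ceil division: parts actually used
print(part, parts)  # -> 5242880 4096 (5 MiB parts, well under the 10,000 cap)
```

For a 20 GB log the minimum part size already suffices; only files above ~48.8 GiB (10,000 x 5 MiB) force larger parts.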
self.addEventListener('fetch') is never called. Why?
I just received this email after I couldn't log in anymore.
After that I reset my account password, and it still didn't let me log in.
But I was finally able to get in like this:
Log in as root at https://signin.aws.amazon.com/
When asked for 2FA/MFA, click "Trouble signing in?" at the bottom
Then click on Re-sync with AWS Servers
Then enter two consecutive 2FA codes, waiting approx. 30s between them
Finally, log in again
Done ✅
I face the same issue, exactly as you described it. Have you found a fix?
If your API is running correctly and returning a status code of 200, the basic solution is to first send a message from your number to the WhatsApp number where you expect to receive messages. Once you've done this initial message exchange, you will start receiving messages from WhatsApp.
To call a stored procedure, first create a procedure like this:
Create Procedure CallStoredProcedure(parameters)
    Language Database
    External Name "Your_Stored_Procedure_Name"
Then just call this procedure with the required parameters.
With the help of Anthropic I found the issue. In the first kernel I was defining the swap space as DenseXY, while in the second the 3D matrix was declared DenseZY. I did not think this could make any difference except for how many cache misses I might encounter; in fact, if I change all the declarations to DenseXY, it compiles and runs.
By the way, for the sake of good order, I also learned that the direction of the stride is the opposite of what my intuition suggested:
Stride3D.DenseXY:
Memory order: X → Y → Z (X changes fastest, Z changes slowest).
For array[z][y][x], consecutive X elements are adjacent in memory.
Memory layout: [0,0,0], [0,0,1], [0,0,2], ..., [0,1,0], [0,1,1], ..., [1,0,0]
Stride3D.DenseZY:
Memory order: Z → Y → X (Z changes fastest, X changes slowest).
For array[x][y][z], consecutive Z elements are adjacent in memory.
Memory layout: [0,0,0], [1,0,0], [2,0,0], ..., [0,1,0], [1,1,0], ..., [0,0,1]
This is an old post, but I've had the same problem just now (using the Squish for Windows toolkit). It was caused by Squish not using QA automation. This fixed it:
https://qatools.knowledgebase.qt.io/squish/windows/howto/automating-modern-ui-windows-store-apps/
I use :g/hello/s//world/g but I have been using vi forever. :/
I cannot reproduce your issue. If I set up a project according to your description, it works just fine.
I created a default Next.js 15 project:
npx create-next-app@latest
Added an MP3 at public/audio/sample.mp3
Replaced the page.tsx with:
"use client";

const playDeleteSound = async () => {
  try {
    const audio = new Audio("/audio/sample.mp3");
    await audio.play();
  } catch (error) {
    console.log("Audio playback error:", error);
  }
};

export default function Home() {
  return (
    <div className="flex items-center justify-center min-h-screen bg-gray-100">
      <button
        onClick={playDeleteSound}
        className="px-6 py-3 rounded-2xl bg-blue-600 text-white text-lg font-semibold shadow-md hover:bg-blue-700 transition"
      >
        ▶ Play Sound
      </button>
    </div>
  );
}
It shows a play button, when I click that, it starts playing the file.
Full project code: https://github.com/Borewit/serve-mp3-with-nextjs
Using Boost 1.89.0 solved the issue.
I had the same problem and I see that it still does not have an answer. If someone has the same error in leave-one-out:
Error in round(x, digits) : non-numerical argument to mathematical function
Update your package. I used the meta package v8.1.0; after updating to v8.2.0, it now works fine.
You just need to do $('#mySelect').empty();.
You can go like this:
import { Op } from 'sequelize'
where: { id: { [Op.in]: [1,2,3,4]} }
I found an answer on this https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx.doc.neutrino_lib_ref/s/spawnv.html site, where it says
P_NOWAIT — execute the parent program concurrently with the new child process.
P_NOWAITO — execute the parent program concurrently with the new child process. You can't use wait() to obtain the exit code.
but I'm not sure if this applies to Windows (because it doesn't have wait(), only WaitForSingleObject()), or whether by using P_NOWAIT I am obliged to call wait() on the pid.
I have found a workaround for this.
A custom meter (System.Diagnostics.Metrics.Meter) named "MyTest" is visible, and from it a System.Diagnostics.Metrics.Counter named "asas" has been created with Meter.CreateCounter().
But I don't think this is intended to work, so it might get patched.
I have solved this problem using Ctrl+Shift+P → TypeScript: Restart TS Server, because my node_modules were not being read properly by the TypeScript server.
@Marks's solution works if the panel never opens, but if it sometimes opens that can be annoying since it's a toggle. As far as I can tell there's no action to open the panel, but I cobbled something together with closePanel, togglePanel, and compound tasks:
{
"label": "close Panel",
"command": "${command:workbench.action.closePanel}",
"type": "shell",
"problemMatcher": [],
},
{
"label": "open Panel",
"command": "${command:workbench.action.togglePanel}",
"type": "shell",
"problemMatcher": [],
"dependsOn": [
"close Panel"
],
"runOptions": {
"runOn": "folderOpen"
}
},
Not the prettiest, but it gets the job done.
Thank you, @tino for your solution! I had to make a minor adjustment as well in order for it to work on my end (Django 5.1.1).
Instead of the callback function that was originally proposed here was my minor tweak:
def skip_static_requests(record):
if record.args.__len__() and str(record.args[0]).startswith("GET /static/"):
return False
return True
Sometimes, the record.args[0] was not a string and was therefore running into problems with calling the startswith method.
I'm unable to find this option and I don't have GitLens installed.
react-datepicker alone doesn't mimic native segmented typing, but you can achieve it with customInput plus a mask library like react-input-mask.
Sorry for the question. I overlooked that there is already a PR asking for enterprise account support.
Use this for reference: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Put your model file in the models/Stable-diffusion folder and run it through the UI.
For more information, see the link above.
Keep in mind that setting environment variables applies to all the agents on the host as a system capability, while using the API creates a USER capability that is local to that specific agent. This is useful if you have multiple agents on a single host.
For details on **Yoosee for Windows**, check out Yoosee Windows
This site returns a valid HTTP status code for any request type:
https://httpstatus.io/mocking-data
For me, downgrading the Electron version to make it compatible with the nan version used by my Node version worked. I used Electron 30.x.
It turns out the error message was too long (95,000 chars).
Not sure where the boundary is; 5,000 chars still works OK.
I would have expected different behaviour, so I guess this is a bug.
This problem is impossible to solve for the given rules and a random number shuffle. To better visualize it, imagine that your traveling point is the head of a snake in the snake game (but the snake always grows, so its tail stays at the start).
At some point you may enclose yourself, and then you can only spiral inwards until you crash into your own body. The same thing happens here: if the visited cells form a closed area and your traveling point is inside of it, you can't escape that area (because you can visit each cell only once). So after some number of moves it will reach a cell whose every neighbour has already been visited.
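The trapping effect is easy to reproduce with a tiny simulation (a sketch only; picking a random unvisited neighbour each move is an approximation of the original puzzle's movement rules):

```python
import random

def walk_until_stuck(n=25, seed=1):
    """Self-avoiding walk on an n x n grid: move to a random unvisited
    neighbour until none is left, i.e. until the walk traps itself."""
    random.seed(seed)
    pos = (0, 0)
    visited = {pos}
    while True:
        x, y = pos
        options = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= x + dx < n and 0 <= y + dy < n
                   and (x + dx, y + dy) not in visited]
        if not options:  # every in-bounds neighbour already visited: stuck
            return visited
        pos = random.choice(options)
        visited.add(pos)

visited = walk_until_stuck()
print(f"stuck after visiting {len(visited)} of {25 * 25} cells")
```

Typically the walk dead-ends long before covering the whole grid, which is exactly the enclosure effect described above.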
def is_even():
    x = input("Enter number: ")
    x = int(x)
    if x % 2 == 0:
        return f" {x} is an even number"
    else:
        return f" {x} is an odd number"

print(is_even())
Public Function FileTxtWr(ByVal sFile As String, _
                          ByRef sRow() As String) As Boolean
    Dim sUTF As String
    Dim iChn As Integer
    Dim i As Integer
    sUTF = Chr$(239) & Chr$(187) & Chr$(191)  ' UTF-8 byte order mark (BOM)
    sRow(1) = sUTF & sRow(1)
    iChn = FreeFile
    On Local Error GoTo EH
    Open sFile For Output Shared As iChn
    On Local Error GoTo 0
    For i = 1 To UBound(sRow)
        Print #iChn, sRow(i)
    Next i
    Close iChn
    FileTxtWr = True
    Exit Function
EH:
    FileTxtWr = False
End Function
For now, I'm just going with option 2 and silencing "reportExplicitAny" project-wide until I find a better solution. In my pyproject.toml:
[tool.basedpyright]
reportExplicitAny = "none"
Working with FilamentPHP repeaters can definitely get tricky when reactivity starts overriding field values. I’ve faced similar issues where fields reset unexpectedly, and it can be frustrating to debug. Sometimes separating calculations into a dedicated function or handling them after all inputs are set helps a bit. It reminded me of the rotmg dps calculator I once used, where real-time updates needed to balance accuracy without breaking existing inputs — kind of the same challenge here with keeping values stable while calculations run.
Reposting the answer from @GeorgeFields, because it solved the issue for me:
"...It ended up being a Gradle daemon that had been running before I started Docker.
...So, I just did gradle --stop and then the next time it worked."
Here is a small class which does exactly that, based on CrazySqueak's answer. So please upvote his answer, not mine!
import threading

class AdvTimer():
    def __init__(self, interval, callback):
        self.interval = interval
        self.callback = callback
        self.timer = None

    def restart(self):
        if self.timer is not None:
            self.timer.cancel()
        self.start()

    def start(self):
        self.timer = threading.Timer(self.interval, self.callback)
        self.timer.start()
That sounds like a tough situation to deal with, especially since managing multiple access tokens for the same institute can get really messy for both you and your users. Having to split products like investments and loans into separate configs feels like a workaround rather than a proper solution. The key seems to be balancing user experience against Plaid's current limitations; hopefully Plaid adds more flexibility soon.