With GNOME 47, move emacs.desktop from /usr/share/applications/ or /usr/share/emacs/30.1/etc/ to ~/.local/share/applications/. Then the Emacs PATH will be the same whether Emacs is launched from Activities or from a terminal running the default shell; exec-path-from-shell is not needed.
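For example, a minimal shell sketch (copying is enough, since ~/.local/share/applications takes precedence over the system-wide file; adjust the source path if your distribution ships it elsewhere):
mkdir -p ~/.local/share/applications
cp /usr/share/applications/emacs.desktop ~/.local/share/applications/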
My exact issue was that Python was 32-bit (Python38-32) for some old project, and it wasn't working with the latest installation of GTK. I was also getting the same error. Once I updated Python to 3.11, reinstalled GTK, and set PATH to point to C:\Program Files\GTK3-Runtime Win64\bin, it worked (note that GTK's bitness also has to match the Python version).
Have you tried this kind of syntax with referencedTable?
const { data, error } = await supabase
.from('friends')
.select('recieved!inner(*), sent!inner(*)')
.eq('accepted', true)
.or(`id.eq.${user?.id}`, { referencedTable: 'sent' })
.or(`id.eq.${user?.id}`, { referencedTable: 'recieved' })
You don't even need a printf format:
seq -w 0 0010
Downgrading to @auth/prisma-adapter 2.7.2 should solve it. There is an open issue for this; see https://github.com/nextauthjs/next-auth/issues/12731
OK, found the culprit!
When maxing out LZMADictionarySize, LZMABlockSize must be left out or commented out, because in the newest LZMA SDK this directive heavily degrades the compression ratio. I don't know what changed in this newest version, but the LZMABlockSize directive really degrades compression with the settings above.
Hope this helps...
Regards
We can simplify Esteban Filardi's proposal with this code:
const oldDate = new Date('July 21, 2001 01:15:23');
const todayDate = new Date();
const oneYear = 1000 * 60 * 60 * 24 * 365;
const isPastOneYear = todayDate - oldDate > oneYear;
I have a hack for this, but you'll have to wait a few more seconds after starting the server.
The hack is to create a .bat file and write in it the routes you are working on (you can write all of them, but it will take more time).
Let's say I'm working on the dashboard today, like this:
curl http://localhost:3000/dashboard/home
curl http://localhost:3000/dashboard/help
curl http://localhost:3000/dashboard/something.....
Now just run your .bat file right after starting your dev server.
Pro tip: you can make files like dashboard-prerender.bat, landingpage.bat, api-routes.bat, and all.bat for different kinds of needs.
It seems you mentioned having "same three file" which I interpret as having three similar files (e.g., three Excel or CSV files containing payment data like the one you shared). To create an effective data model in Power BI for analyzing the 24-hour payment trend across these files, you’ll need to combine and structure the data properly. Below is a step-by-step guide to perform data modeling in Power BI with multiple files.
---
### Assumptions
- You have three files (e.g., `File1.xlsx`, `File2.xlsx`, `File3.xlsx`) with similar structures, each containing columns like `T_TRANSACTION`, `T_DATE_VALUE`, `T_FILTERED`, `T_AMOUNT`, `T_ENTITY`, and `T_TYPE`.
- Each file represents a subset of your payment data (e.g., different days, batches, or entities).
- The goal is to create a unified 24-hour payment trend dashboard as discussed earlier.
---
### Step-by-Step Guide to Data Modeling in Power BI
#### 1. **Load the Three Files into Power BI**
- Open **Power BI Desktop**.
- Click **Get Data** > **Excel** (or **Folder** if the files are in the same directory).
- If using **Excel**, load each file individually:
- Select `File1.xlsx` > Click **Load** or **Transform Data**.
- Repeat for `File2.xlsx` and `File3.xlsx`.
- If using **Folder** (recommended for multiple files):
- Choose **Get Data** > **Folder**, browse to the directory containing your files, and click **Combine & Transform**.
- Power BI will detect similar tables across the files and create a single table by appending the data. Ensure the column names and data types match across all files.
- In the Power Query Editor:
- Check that `T_FILTERED` is set to **Date/Time** type.
- Remove any unnecessary columns (e.g., if some files have extra metadata).
- Rename the table (e.g., `PaymentData`) if needed.
#### 2. **Append the Data**
- If you loaded each file separately, append them into a single table:
- In the Power Query Editor, click **Home** > **Append Queries**.
- Select `File1`, `File2`, and `File3` to combine them into one table (e.g., `PaymentData`).
- Click **OK** and ensure the data aligns correctly (e.g., same column order and types).
- Click **Close & Apply** to load the combined data into the model.
#### 3. **Create a Date Table (Calendar Table)**
- To enable time intelligence and ensure all 24 hours are represented (even with no data), create a separate Date table:
- Go to **Modeling** > **New Table**.
- Use the following DAX to generate a Date table:
```
DateTable = CALENDAR(DATE(2024, 12, 1), DATE(2024, 12, 31))
```
- Adjust the date range based on your data (e.g., if it spans multiple months).
   - Note: `CALENDAR` produces one row per date with no time component, so an hour column on this table would always read 12:00 AM. Derive the hour bucket from the transaction timestamp in `PaymentData` instead (see Step 5), or use a dedicated hour table such as `HourSort` in Step 8; either way you get hourly buckets like "1:00 AM", "2:00 AM", etc.
- Mark this as a **Date Table**:
- Go to **Modeling** > **Mark as Date Table**, and set the `Date` column as the unique identifier.
#### 4. **Create a Relationship**
- Ensure a relationship exists between your `PaymentData` table and the `DateTable`:
- Go to the **Model** view.
- Drag the `T_DATE_VALUE` (or a derived date column from `T_FILTERED`) from `PaymentData` to the `Date` column in `DateTable`.
- Set the relationship to **1-to-many** (one date in `DateTable` can relate to many transactions in `PaymentData`).
   - Use the `HourBucket` column from `PaymentData` (Step 5) or the `HourSort` table (Step 8) for hourly aggregation.
#### 5. **Add HourBucket to PaymentData (Optional)**
- If you want to keep the hour logic in the `PaymentData` table (instead of relying on `DateTable`):
- In Power Query Editor, add a custom column for `HourBucket` using:
```
Text.From(if Number.Mod(Time.Hour([T_FILTERED]), 12) = 0 then 12 else Number.Mod(Time.Hour([T_FILTERED]), 12)) & ":00 " & (if Time.Hour([T_FILTERED]) < 12 then "AM" else "PM")
```
- This ensures each transaction is tagged with its hour.
#### 6. **Create a Measure for Payment Count**
- Go to **Modeling** > **New Measure**.
   - Enter:
```
NumberOfPayments = COUNTROWS('PaymentData')
```
- This counts the number of transactions based on the filters applied (e.g., by date and hour).
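   - Note: `COUNTROWS` returns BLANK when no rows match. If you'd rather see literal zeros in the visuals, a variant like this (a sketch, assuming the same table name) forces 0:
```
NumberOfPaymentsZero = COALESCE(COUNTROWS('PaymentData'), 0)
```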
#### 7. **Build the 24-Hour Trend Dashboard**
- **Add a Matrix Visual**:
- Drag a **Matrix** visual.
- Set:
     - **Rows**: `Date` (from `DateTable`) and `HourBucket` (from `PaymentData` or `HourSort`).
- **Values**: `NumberOfPayments`.
- Enable "Show items with no data" in the visual options to display all 24 hours, even if no payments occurred.
- **Add a Line Chart**:
- Drag a **Line Chart** visual.
- Set:
- **Axis**: `HourBucket`.
- **Values**: `NumberOfPayments`.
- **Legend**: `Date` (to show trends for each day).
- Sort `HourBucket` using a custom sort order (see Step 8).
- **Add a Slicer**:
- Drag a **Slicer**, set it to `Date`, and allow users to filter by day.
#### 8. **Sort HourBucket**
- Create a sorting table to ensure hours are in the correct order (1:00 AM to 12:00 AM):
- Go to **Modeling** > **New Table**.
- Enter:
```
HourSort = DATATABLE("HourBucket", STRING, "SortOrder", INTEGER,
{
{"1:00 AM", 1}, {"2:00 AM", 2}, {"3:00 AM", 3}, {"4:00 AM", 4},
{"5:00 AM", 5}, {"6:00 AM", 6}, {"7:00 AM", 7}, {"8:00 AM", 8},
{"9:00 AM", 9}, {"10:00 AM", 10}, {"11:00 AM", 11}, {"12:00 PM", 12},
{"1:00 PM", 13}, {"2:00 PM", 14}, {"3:00 PM", 15}, {"4:00 PM", 16},
{"5:00 PM", 17}, {"6:00 PM", 18}, {"7:00 PM", 19}, {"8:00 PM", 20},
{"9:00 PM", 21}, {"10:00 PM", 22}, {"11:00 PM", 23}, {"12:00 AM", 24}
}
)
```
   - Relate `HourBucket` in `PaymentData` to `HourBucket` in `HourSort`.
- Set the sort order of `HourBucket` to use the `SortOrder` column.
#### 9. **Verify and Test**
- Check that the Matrix and Line Chart show all 24 hours for each day, with zeros where no payments occurred.
- Use the Slicer to filter by specific days and confirm the trends.
#### 10. **Publish and Share**
- Save your Power BI file and publish it to the Power BI Service if needed.
---
### Expected Output
Your Matrix visual might look like this:
| Date | HourBucket | Number of Payments |
|------------|------------|--------------------|
| 12/11/2024 | 1:00 AM | 33 |
| 12/11/2024 | 2:00 AM | 0 |
| 12/11/2024 | 3:00 AM | 0 |
| ... | ... | ... |
| 12/11/2024 | 12:00 AM | 0 |
| 12/12/2024 | 1:00 AM | 25 |
| 12/12/2024 | 2:00 AM | 10 |
| ... | ... | ... |
The Line Chart will plot the 24-hour trend for each selected day.
---
### Tips for Success
- **Consistency**: Ensure column names and data formats are identical across the three files before appending.
- **Missing Hours**: The `DateTable` with `HourBucket` ensures all 24 hours are displayed, even if your data is sparse.
There's an easier way: iOS has a hidden enum that lets you do this using NSData: https://objectionable-c.com/posts/brotli-ios/
use this:
function foo {
typeset x=2 # Makes x local to the function
}
x=1
foo
echo $x # This will print 1
Try setting the security protocol to:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
My issue was that the Working directory inside the Run/Debug Configurations was not set to the folder where I have package.json.
Looks like you have the Dialogflow embed code in Google Tag Manager. I am having issues with GTM not recognizing the <df-messenger> HTML component, saying it's not standard. Could you describe how you set up GTM with Dialogflow? Thanks for the help.
In 2025, you need to disable the --turbopack option in the dev run configuration for debugging to work in WebStorm. I made a dev-no-turbo script in package.json for use in debugging:
"dev": "next dev --turbopack",
"dev-no-turbo": "next dev",
Two solutions offered here didn't work for me, since labelStyle and renderLabel don't exist on TabBar or TabView. After looking at the docs, I found that you can set the fontSize by passing
commonOptions={{ labelStyle: { fontSize: 16 } }}
on the TabView.
flutter clean
flutter pub get
flutter run
I have exactly the same problem using mdns_dart when the WLAN is disconnected; I can't catch the exception. Any news?
From List children of a driveItem:
If a collection exceeds the default page size (200 items), the @odata.nextLink property is returned in the response to indicate more items are available and provide the request URL for the next page of items. You can control the page size through optional query string parameters.
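For illustration, a minimal sketch of following that link until all pages are read (TypeScript with fetch; the starting URL and access token are placeholders you'd supply):

async function listAllChildren(startUrl: string, token: string): Promise<unknown[]> {
  const items: unknown[] = [];
  let next: string | undefined = startUrl;
  while (next) {
    const res = await fetch(next, { headers: { Authorization: `Bearer ${token}` } });
    const page = await res.json();
    items.push(...(page.value ?? []));  // each page holds up to the page-size limit
    next = page["@odata.nextLink"];     // absent on the last page
  }
  return items;
}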
| A | B | C |
|---|---|---|
| 1122 | 24_hhgf | =IF(ISNUMBER(MATCH(A1&"*";B1:B10;0));"Yes";"No") |
| 1354 | 1122_hfff | |

=IF(ISNUMBER(MATCH(A1&"*";B1:B10;0));"Yes";"No")

With the MATCH function you will find the cell that contains the number plus extra characters:
=MATCH(112&"*";B1:B10;0)
Can anybody tell me what's wrong with this query? I'm getting the error "Incorrect syntax near ','", but it doesn't tell me where it is.
mysql = mysql + " SET @sql = N'"
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' +"
mysql = mysql + " (SELECT STRING_AGG(''ISNULL('' + QUOTENAME(MonthYear) + '', 0) AS '' + QUOTENAME(MonthYear), '', '')"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT DISTINCT"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS MonthYear,"
mysql = mysql + " MIN(bolh_shp_or_prt_date) As MinDate"
mysql = mysql + " From sisl_data04.dbo.so_bol_headers"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS MonthList"
mysql = mysql + " ) +"
mysql = mysql + " '',"
mysql = mysql + " ISNULL('' + (SELECT STRING_AGG(''ISNULL('' + QUOTENAME(MonthYear) + '', 0)'', '' + '') "
mysql = mysql + " FROM ("
mysql = mysql + " SELECT DISTINCT"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS MonthYear,"
mysql = mysql + " MIN(bolh_shp_or_prt_date) As MinDate"
mysql = mysql + " From sisl_data04.dbo.so_bol_headers"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS MonthList) + '', 0) AS [TOTAL],"
mysql = mysql + " SortOrder, SortKey"
mysql = mysql + " INTO #PivotResult"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT"
mysql = mysql + " bolh_salesrep_id AS [REP CODE],"
mysql = mysql + " bolh_cust_id AS [CUST CODE],"
mysql = mysql + " UPPER(sr.sr_salesrep_name) AS [REP NAME],"
mysql = mysql + " UPPER(c.cu_name) AS [CUSTOMER NAME],"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS [MonthYear],"
mysql = mysql + " SUM(ISNULL(bolh_taxinclship_amt, 0)) AS TOTALSales,"
mysql = mysql + " 0 AS SortOrder, -- Regular rows (SortOrder = 0)"
mysql = mysql + " bolh_salesrep_id + ISNULL(bolh_cust_id, '''') AS SortKey"
mysql = mysql + " FROM so_bol_headers bh WITH (NOLOCK)"
mysql = mysql + " INNER JOIN sales_reps sr WITH (NOLOCK) ON bh.bolh_salesrep_id = sr.sr_salesrep_id"
mysql = mysql + " INNER JOIN customers c WITH (NOLOCK) ON bh.bolh_cust_id = c.cu_cust_id"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY bolh_salesrep_id, bolh_cust_id, sr.sr_salesrep_name, c.cu_name, DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS SourceTable"
mysql = mysql + " PIVOT ("
mysql = mysql + " SUM (TOTALSales)"
mysql = mysql + " FOR [MonthYear] IN ('' + @cols + '')"
mysql = mysql + " ) AS PivotTable;"
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' + @cols + '',"
mysql = mysql + " [Total]"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' + @cols + '',"
mysql = mysql + " [TOTAL], 0 AS SortOrder, SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " Union ALL"
mysql = mysql + " SELECT"
mysql = mysql + " '''' AS [REP CODE],"
mysql = mysql + " '''' AS [CUST CODE],"
mysql = mysql + " '''' AS [REP NAME],"
mysql = mysql + " [REP CODE] + '' TOTAL'' AS [CUSTOMER NAME],"
mysql = mysql + " '' + @TOTALCols + '',"
mysql = mysql + " ISNULL(SUM([TOTAL]),0) AS [TOTAL],"
mysql = mysql + " 1 AS SortOrder,"
mysql = mysql + " [REP CODE] + ''ZZZZZZ'' AS SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " GROUP BY [REP CODE]"
mysql = mysql + " Union ALL"
mysql = mysql + " SELECT"
mysql = mysql + " '''' AS [REP CODE],"
mysql = mysql + " '''' AS [CUST CODE],"
mysql = mysql + " '''' AS [REP NAME],"
mysql = mysql + " ''GRAND TOTAL'' AS [CUSTOMER NAME],"
mysql = mysql + " '' + @TOTALCols + '',"
mysql = mysql + " ISNULL(SUM([TOTAL]), 0) AS [TOTAL],"
mysql = mysql + " 2 AS SortOrder,"
mysql = mysql + " ''ZZZZZZZZZZ'' AS SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " ) AS FinalResult"
mysql = mysql + " ORDER BY SortKey, SortOrder, [CUST CODE];"
mysql = mysql + " DROP TABLE #PivotResult;';"
mysql = mysql + " EXEC sp_executesql @sql, N'@salesrep_id NVARCHAR(MAX), @from_date DATE, @to_date DATE, @bolh_stage_flg NVARCHAR(10)', @salesrep_id, @from_date, @to_date, @bolh_stage_flg;"
Some of your underlying code may be using the legacy Places API instead.
As a test, I would try to activate the legacy Places API as well via this direct link:
https://console.cloud.google.com/apis/library/places-backend.googleapis.com
Because the legacy API is going away, it is currently hidden, so the deep link is the only way to activate it. Once it's active, it's visible again.
If it's a windows box and you can install diff somewhere, then install diff. Check it by running diff in a dos shell. Then add the path to the diff.exe to your path environment variable and restart emacs. That worked for me.
This is the way that I understand it.
Let me first illustrate the forward pass.
Self-attention is Softmax(QK^T)V (ignoring the scaling factor in the FLOP calculation, and sorry for using the same notation for different things!).
Since we only care about the FLOPs of a single token, our query Q has size (1xQ). K and V have size (TxQ), which the query uses to interact with the neighboring tokens.
If we focus on just 1 head of 1 layer, we can ignore L and H for now. QK^T is a multiplication between (1xQ) and (QxT), which takes ~2QT operations and yields a single vector of size (1xT).
But there is still the product between Softmax(QK^T) and V: a (1xT) vector times a (TxQ) matrix, which is again ~2QT operations.
Combining both steps, we get 2(2QT). Then we scale by the number of heads (H) and the number of layers (L), giving 2LH(2QT) for the forward pass. If we take the backward pass to be twice the FLOPs of the forward pass, we get:
2LH(2QT) x (1 + 2) = 6LH(2QT) = 12LHQT FLOPs per token.
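As a sanity check, here is a small Python sketch of the same count (names follow the post: L layers, H heads, Q head dimension, T context length; the numbers in the assert are made up):

def attention_flops_per_token(L, H, Q, T):
    # Forward: two matrix products of ~2*Q*T ops each, per head, per layer
    forward = L * H * 2 * (2 * Q * T)
    # Backward counted as twice the forward pass: (1 + 2) * forward
    return 3 * forward  # = 12 * L * H * Q * T

assert attention_flops_per_token(12, 12, 64, 1024) == 12 * 12 * 12 * 64 * 1024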
In my case, when I tried to run any flutter command, it said
Waiting for another flutter command to release the startup lock
so run this command to shut down all running Dart processes:
killall -9 dart
Then it works.
It was actually very easy, and the answer was actually in the description: the optional part regarding the cancellation token:
emailClientMock.Setup(c => c.SendAsync(WaitUntil.Started, It.IsAny<EmailMessage>(), It.IsAny<CancellationToken>()))
Please pay attention to the problem that may happen with awaitables that migrate between threads: https://github.com/chriskohlhoff/asio/issues/1366.
Excerpt from Linux Device Drivers, Third Edition, Chapter 7: Time, Delays, and Deferred Work
A device driver, in many cases, does not need its own workqueue. If you only submit tasks to the queue occasionally, it may be more efficient to simply use the shared, default workqueue that is provided by the kernel. If you use this queue, however, you must be aware that you will be sharing it with others. Among other things, that means that you should not monopolize the queue for long periods of time (no long sleeps), and it may take longer for your tasks to get their turn in the processor.
Otherwise, it may be better to create your own workqueue.
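For illustration, a minimal sketch of the two options (LDD3-era kernel API; my_handler, my_work, and my_wq are placeholder names):

#include <linux/workqueue.h>

static void my_handler(struct work_struct *work)
{
        /* deferred work runs here, in process context */
}

static DECLARE_WORK(my_work, my_handler);

static int __init my_init(void)
{
        /* occasional tasks: submit to the shared, default workqueue */
        schedule_work(&my_work);

        /* heavy or long-sleeping tasks: create a private workqueue */
        struct workqueue_struct *wq = create_workqueue("my_wq");
        queue_work(wq, &my_work);
        return 0;
}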
If you create a fragment without adding it to the activity using the fragment transaction manager it will show up as a false positive.
Syntax like ${env.MY_VAR} works.
Did I find it in the docs? No, I just guessed :D
I'm having the same problem. I'm using OAuth for Facebook. I've already verified my business, but it still says the app isn't active. What can I do?
You can check out this website; maybe this example helps you to solve your issue. A JSChart is deployed on this link.
I have no explanation for why Jenkins jobs do not react to PowerShell errors as described in the docs: https://www.jenkins.io/blog/2017/07/26/powershell-pipeline/
Either try -ErrorAction Stop after each command (see "Everything you wanted to know about exceptions"), or set $ErrorActionPreference = 'Stop' at the beginning.
Example:
try {
    powershell '''
        $ErrorActionPreference = 'Stop'
        throw "ERROR: This is a test Exception."
    '''
} catch (err) {
    echo "PowerShell step failed: ${err}"
    throw
}
This is also the exact answer for keeping the console process open by default in Microsoft Visual Studio 2022: Tools → Options → Node.js Tools → check "Wait for input when process exits normally".
PhoneGap is now deprecated. Use https://volt.build/
This worked for me:
WHERE user_id::uuid = :tenantId::uuid
If you are getting this when inserting a row, the solution for me was that I'd failed to add AUTOINCREMENT to the SQLite primary keys while creating the temporary test DB. Ordinal 0 is the first column (usually the integer PK)
When running Nest in start:dev mode, make sure to set deleteOutDir to false in the nest-cli.json configuration file:
{
"compilerOptions": {
"deleteOutDir": false
}
}
Also, add the following to your .gitignore file to avoid committing build artifacts:
dist // or your chosen output directory
*.tsbuildinfo
This is important because, in watch mode (--watch), TypeScript relies on the .tsbuildinfo file. This file helps TypeScript recompile only the files that have changed. If the output directory were deleted on every build, you'd lose the cached, unmodified files, slowing down the build process.
You can access the legacy API and enable it here: https://console.cloud.google.com/apis/library/places-backend.googleapis.com Once enabled, you will be able to manage the API as usual in the Maps console.
It seems pretty similar to https://github.com/ansible-collections/ansible.netcommon/issues/551, which is open.
I get the following error when enabling commit, with or without confirm_commit:
false, "msg": "Argument must be bytes or unicode, got 'int'"}
I have seen bug reports on the same for previous Ansible versions 2.9 and 2.7; we are on 2.13.
It is the waiting time specified under "commit: 10" that fails, but it also fails to recognize "confirm_commit: true" or "confirm_commit: yes".
There is a fix for it, but I'm not sure when it is going to be merged: https://github.com/ansible-collections/ansible.netcommon/pull/670
I don't see a workaround for it, other than not using "confirm".
It seems that the Google Cloud engineering team has temporarily rolled back the changes to the TLS protocol versions for the App Engine Standard and Flexible Environments. They may send another email regarding the update. At this time, I would suggest keeping an eye on the issue tracker link you shared in the comments or review the Google App Engine release notes for the most recent updates.
When the ModelContainer is being built, make sure CalData is in the schema, or Swift will not be able to find it:
private var models: [any PersistentModel.Type] = [
CalData.self, CalDataVal.self,
]
lazy public var container: ModelContainer = {
    let schema = Schema(models)
    //...
}()
Did you look at what it's complaining about? When I got that error, the issue was that PostgreSQL had escaped the backslashes within the strings twice, which you can fix as follows:
sed -e 's/\\\\/\\/g'< input.json |jq . >output.json
Save the existing file if you need some of the variables written there. Copy .env.example and rename it to .env. Finally, try running it again.
Another scenario with Sail: check whether the file is already present in the container and that you have permission to change it.
For me, what worked was using the right path when calling the web socket from the front end, something like "wss://a1d5-83-39-106-145.ngrok-free.app/ws/frontend". The "/ws/frontend" part is very important.
Try your change with this repo: https://github.com/HydraDragonAntivirus/AutoNuitkaDecompiler. If your goal is to detect malicious payloads from Nuitka, then you need to look at https://github.com/HydraDragonAntivirus/HydraDragonAntivirus; otherwise AutoNuitkaDecompiler is enough to make some progress.
Apparently, this is a limitation of the platform Android itself, when the dark mode is enabled: https://github.com/MaikuB/flutter_local_notifications/issues/2469
Be careful: if you cut/paste the controls as @Tony H said, you will keep the properties but lose all the events of each control.
Use PHP's built-in solution for this:
$username = getenv('USERNAME');
Or on Unix:
$username = getenv('LOGNAME');
You have a bigger problem: the order column needs a UNIQUE constraint. To move an item you need to:
1. Remove the UNIQUE constraint.
2. Move the item.
3. Reactivate the UNIQUE constraint.
Another way is to use strings ("a", "b", "c", etc.) or integers spaced by 100 (100, 200, 300, etc.). That makes individual moves simple, but adds complexity when reordering and automating once you end up with items at 100, 101, and 102.
Is this happening each time you deploy? Also, if the functions are not visible, are they still working? As in, does your HTTP trigger still work?
I've encountered this issue before in App Service; however, I was missing these two environment variables:
SCM_DO_BUILD_DURING_DEPLOYMENT : true
ENABLE_ORYX_BUILD : true
Here's the entire list of settings with explanations: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings
For me it was not having an await in front of screen.findByRole("row").
While O. Jones (https://stackoverflow.com/users/205608/o-jones) has a good answer, there's scope for further optimisation. Your query:
SELECT *
FROM account
WHERE client_id = ?
AND disabled IN (?)
AND (username LIKE '%?%' OR name LIKE '%?%' OR surname LIKE '%?%')
ORDER BY name, surname
LIMIT ?;
Your constraint: client_id might match a high number of rows.
How do we optimize this query?
No index allows a binary search on username LIKE '%?%' or the other similar LIKE clauses; a leading wildcard forces a full index scan (this is different from the full table scan your query is probably doing now, and it is orders of magnitude faster).
The conjunction between the clauses in your query is OR, i.e. username, name OR surname. That implies the composite index has to contain all three columns.
The other clauses filter on client_id and disabled. However, the selectivity of either of these is not as high, so an individual index on any of these columns is of no use. A composite index containing these columns plus the three above does make sense, but is optional [index selection would only work if they are present]. So, till now we are at: client_id, disabled, username, name, surname [I1]
You want to optimize the ORDER BY clause as well. The only way to do that is to keep the data physically sorted in the order of your clause; remember, an index is always physically sorted. So, let's change
client_id, disabled, username, name, surname
to:
name, surname, username, client_id, disabled
Consider partitioning your data.
The above index will speed up your query but has negative consequences, so you might want to drop the last two columns and use an index hint.
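For concreteness, a sketch of that final index and a hinted query (assuming MySQL and the account table from your query; the literal values are placeholders):

-- composite index in the ORDER BY-friendly column order discussed above
CREATE INDEX idx_account_name_search
    ON account (name, surname, username, client_id, disabled);

-- optional hint, in case the optimizer doesn't pick the index on its own
SELECT * FROM account FORCE INDEX (idx_account_name_search)
WHERE client_id = 42
  AND disabled IN (0)
  AND (username LIKE '%foo%' OR name LIKE '%foo%' OR surname LIKE '%foo%')
ORDER BY name, surname
LIMIT 100;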
It doesn't make sense, but what actually solved my problem in a simple way was simply changing
jniLibs.srcDirs = ['src/main/jniLibs']
To
jniLibs.srcDirs = ['src/main/libs']
Your issue is due to the UWP class library not being properly registered. Deploy the UWP project first (Right-click → Deploy), or package the WPF app using MSIX to ensure the UWP component is registered. Let me know if you need setup guidance!
I would recommend using a basic marker and refactoring the function to instantiate it with L.marker:
this.routingControl = L.Routing.control({
waypoints: [
L.latLng(pickup.location.latitude, pickup.location.longitude),
L.latLng(dropOff.location.latitude, dropOff.location.longitude),
],
routeWhileDragging: true,
createMarker: (i: number, waypoint: L.Routing.Waypoint, n: number) => {
return L.marker(waypoint.latLng, {
icon: this.createIcon(i.toString()), // Custom icon if needed
});
}
}).addTo(this.mapAdmin);
You can check this documentation to prevent any syntax errors: https://leafletjs.com/reference.html#marker
I didn't know the error code 0x80004005 could be this confusing.
In my case, the error was caused by a duplicated key, like this:
Error: 0xC0202009 at Data Flow Task, ACR_ARP_DATA Client_ACR_AFW [65]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft OLE DB Provider for SQL Server" Hresult: 0x80004005 Description: "Duplicate key was ignored.".
Once I cleared out the duplicated-PK conflict, it was resolved.
I had a similar issue that was really a pain: I was getting an error because of the domain mismatch. I didn't necessarily want to remove the domain parameter from the return response, so I went ahead and created a certificate for the custom subdomain I wanted in my naming scheme, updated the DNS records, then created a custom API domain name with the same subdomain and mapped it to the API I was having issues with. I hope this helps, so you don't have to remove the domain parameter, because removing it does pose a security risk.
It seems to me that all that you have to do is just move the files (if it's a git project, do not move the .git folder...). After that you need a clean build.
As @sonle mentioned, this is the expected behavior. I call it the "Most Permissive Group Rule". This means that when a tester belongs to multiple groups, they will automatically receive builds distributed to the most inclusive group they're part of, regardless of individual group settings.
That's why I prefer setting manual distribution for every group so I have a more granular control.
My key recommendations to mitigate this issue would be:
Create all tester groups with manual distribution by default
Use programmatic scripts integrated into CI pipelines to control build distribution (see the sketch after this list)
Carefully manage group memberships to minimize overlap
Implement a dynamic distribution system that provides precise control over which builds are sent to specific groups
By using manual distribution and custom scripts, developers can achieve granular control over build access for different tester groups.
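For example, a sketch of the scripted approach, assuming Firebase App Distribution and its CLI (the app ID, APK name, and group alias are placeholders):

firebase appdistribution:distribute app-release.apk \
  --app 1:1234567890:android:abc123 \
  --groups "qa-team" \
  --release-notes "CI build"

Run it as a CI step after the build so only the intended group receives each build.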
I've worked on this article that covers topics like the one you mentioned. Hope you can use it as a reference!
I was getting a very similar error but with error code -532462766.
In Visual Studio (2022), doing Build | Clean Solution fixed the problem.
When configuring Firebase Hosting you mistakenly overwrote public/index.html. To correct it, just copy public/index.html from any other project and replace it, then run npm run build and firebase deploy --only hosting.
I found the issue here. The culprit is 'All workspace users' having default 'Can Manage' access. This cannot be modified for the 'Shared' folder inside the 'Workspace' folder; we have to create notebooks outside the 'Shared' folder to impose such permissions. It's preferable to keep them in Git.
Here is a component in the ESP Component Registry with support for the official C++ google-protobuf library:
https://components.espressif.com/components/albkharisov/esp_google_protobuf
Usage example included.
Once the key is generated, it can happen that the new data is not loaded because it is already registered in the cache. What you could try is running php artisan optimize to clear the route cache as well, not just the configuration cache.
Did you find any solution? Going through the exact same issue :/
Where did you get "org.zaproxy.zap.extension.script.CryptoJS" from? That class does not appear to be in the ZAP codebase.
ZAP does not provide its own SHA256 implementation; it uses the standard Java MessageDigest class. Any standard Java class can be accessed via ZAP scripts.
I am having the same issue with a vue + typescript setup.
The problem seems to be the standard plugins not loading correctly, since only the buttons that are considered plugins fail to load. I will keep you updated if I ever figure this out.
I came across the same situation, where I don't want the test report to say pass or fail for that particular test. I tried Assert.Warn("This scenario not required"), which helps me identify the tests that were not executed. - @BernardV
Another option is viewer.js. Might be worth a try.
I would go with form objects. The unique rule allows you to ignore a given ID like so:
Rule::unique('recipes', 'name')->ignore($existingId),
As I have a problem with the suggested solution, I'm not sure if I can ask my question here or if I have to open a new thread, in which case I apologise for my error.
I usually use the function suggested above, but in our company we have many users with identical names:
for those, first and last name are the same, but (of course) they have different email, user ID, etc.
When I resolve a name (e.g. "Pinco Pallino") and multiple "Pinco Pallino" entries exist, myRecipient.Resolved returns false.
Can you suggest a way to get, let's say, an array as the result with one (or all) of the users found?
Just ask copilot how to disable copilot in vscode
I also have an issue with the memory leak in LangGraph.
I created a GitHub issue for it: https://github.com/langchain-ai/langgraph/issues/3898
After the top solution didn't work for me, I finally found my issue:
I had opened the .xcodeproj instead of the .xcworkspace.
In summary, try:
1- flutter clean && flutter pub get && cd ios && pod install && cd .. && flutter build ios
2- Check that Xcode has the .xcworkspace open and not the .xcodeproj.
That worked here too, thanks!!
A better approach is to use the DataStream API.
Use a FlatMap function to extract the rows from the list and then do the aggregation, or use a MapFunction to aggregate over each List of rows.
These operations are not stateful, so you are not using Flink state. You also process each element individually, so you do not need to set up a window at all.
You can't really use Yandex Disk or Google Drive as a proper CDN for hosting static assets like CSS, JS, or images. These services aren't designed for direct web asset delivery and come with limitations:
- No proper HTTP headers for static files.
- Rate limiting and download restrictions.
- Lack of custom domains or CNAME support.
- No edge caching or optimization, which defeats the purpose of a CDN.
Here are some CDNs that have servers in the Russian Federation and are known to be budget-friendly: Selectel CDN, CDNvideo, Timeweb CDN.
In short, don't use Yandex Disk or Google Drive for static file hosting; it's not viable as a CDN. Use a proper CDN like Selectel, CDNvideo, or G-Core Labs for Russian delivery and Bitrix compatibility.
I would like to do this as well. I can't find any documentation on a connection string.
Check the settings related to the commit window. There may be an option that determines how diffs are opened: in a new window/tab or in an area within the commit window. Look for settings related to "Commit Dialog" or "Commit Window". Pay attention to options related to "Show Diff", "Open Diff", "Diff Viewer"
You can resolve the shortcut conflict using VSCode [Main menu] > File > Preferences > Keyboard Shortcuts.
Great answer! This approach is essential in most cases. However, if using commonly available orbit controls, you can simplify it by accessing the Spherical theta directly via controls.getAzimuthalAngle()
A few approaches to fix this issue:
1. Cascade the persist from the parent so the parent is saved first:
@OneToMany(mappedBy = "test1Id", fetch = FetchType.LAZY, cascade = CascadeType.ALL, orphanRemoval = true)
private Set<Test2> contacts = new HashSet<>();
2. Modify the persistence code to explicitly control the order:
db.persist(test1);
db.flush(); // Force flush to ensure Test1 is in the database
db.persist(test2);
3. Use @MapsId on the child entity:
@ManyToOne(fetch = FetchType.LAZY)
@MapsId("objectId") // assuming your embedded ID has an objectId field
@JoinColumn(name = "object_id")
private Test1 test1Id;
4. Check your Hibernate configuration: in Hibernate 6, check whether you have changed any settings related to operation ordering or batch processing. The property hibernate.order_inserts might be relevant here.
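For reference, a sketch of the relevant settings (Spring Boot property names are assumed here; with plain Hibernate, drop the spring.jpa.properties prefix, and the batch size is illustrative):

# Group INSERTs/UPDATEs by entity type so parents flush before children when batching
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_size=20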
I encountered the same problem and I resolved it with these settings:
openFileDialog1.RestoreDirectory = false;
openFileDialog1.ShowReadOnly = false;
What you are looking for has been answered here: https://stackoverflow.com/a/79517166/9365246
It loads a large file n rows at a time, without loading the whole file from S3.
Sounds like the type definitions for Jest are not installed in your project. Try adding them with:
npm install --save-dev @types/jest
Calling functions with arguments from the template is not normally possible in Django; however, this call templatetag provides that capability for any method you would like to evaluate. It is available in the dj-angles library and can be installed via pip install dj-angles:
{% call obj.get_something(...) as something %}
{{ something }}
While this couldn't have been your issue when you asked this, if anyone is getting this issue now it is because the Places API is in Legacy and can't be used for new usage. It's been replaced by the Places API (new).
Maybe less relevant here, but I used an older version of the code generation and the solution was to use typeMappings in the (pom) configuration.
Go to Android Studio → Settings → Build → Build Tools → Gradle
Then, change the JDK version according to your project requirements.
Good morning,
sorry, but I would need a more detailed explanation: do you want the circle to always be blurry?
In the meantime, I analyzed your HTML and CSS code and found some errors.
In the HTML:
At line 9: tag must be paired, no start tag: [< /style>] (should be written without the space)
At line 18: the HTML element name [ feGaussianBlur ] must be in lowercase.
At line 19: the HTML element name [ feColorMatrix ] must be in lowercase.
For the CSS:
Unexpected duplicate selector ".container .circle", first used at line 22.
Here is the corrected code:
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
.container {
position: relative;
width: 500px;
display: flex;
filter: url(#flt);
background-color: bisque;
}
.container .circle {
background-color: #000;
width: 200px;
height: 200px;
position: relative;
border-radius: 50%;
animation: leftM 5s ease-in-out 2s forwards;
}
svg {
display: none;
}
@keyframes leftM {
0% {
transform: translateX(0px);
}
100% {
transform: translateX(600px);
}
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title></title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container">
<div class="circle"></div>
</div>
<svg>
<filter id="flt">
<fegaussianblur in="SourceGraphic" stdDeviation="8"></fegaussianblur>
<fecolormatrix type="matrix" values="
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 2 -1
"></fecolormatrix>
</filter>
</svg>
</body>
</html>
You could actually use AnnotatedString.Companion.fromHtml.
developer.android.com: fromHtml
val cleanDescription = AnnotatedString.Companion.fromHtml(bookdata?.description ?: "")
If the text contains the delimiter symbol, there is no way for a machine or a person to distinguish it (except by deeper analysis). The only way is to ask the source team to enclose the complete text in " quotes if it contains |.
In our case, the source team did not have a way to enclose text in quotes selectively, so we asked them to enclose every field in ", whether it contains | or not :-) . And that works.
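For instance, a short Python sketch (the file name is hypothetical; the csv module is just one convenient way to demonstrate it) showing why quoting every field makes a pipe-delimited file unambiguous:

import csv

rows = [["id", "comment"], ["1", "text with a | pipe inside"]]

# write pipe-delimited output, quoting every field as agreed with the source team
with open("out.psv", "w", newline="") as f:
    csv.writer(f, delimiter="|", quoting=csv.QUOTE_ALL).writerows(rows)

# the embedded pipe survives the round trip
with open("out.psv", newline="") as f:
    print(list(csv.reader(f, delimiter="|")))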
Your problems are likely because the Distance Matrix API, the Directions API, and the Places API have been in "Legacy" status since March 1. From the developer docs for each of them (only present in the English version):
This product or feature is in Legacy status and cannot be enabled for new usage.
The Distance Matrix API and the Directions API have been replaced by the Routes API. The Places API has also been replaced, by the Places API (new).
:/opt/jfrog/xray/var/etc$ sudo systemctl status xray.service
● xray.service - Xray service
Loaded: loaded (/lib/systemd/system/xray.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2025-03-18 10:19:57 UTC; 6min ago
Process: 156136 ExecStart=/opt/jfrog/xray/app/bin/xray.sh start (code=exited, status=0/SUCCESS)
Main PID: 156136 (code=exited, status=0/SUCCESS)
CPU: 32ms
Mar 18 10:19:21 db systemd[1]: Starting Xray service...
Mar 18 10:19:57 db systemd[1]: Finished Xray service.
Why is this error occurring? I am trying to connect Xray with Artifactory and the database; all are hosted on different servers.
There are separate settings for new projects. You can update it once and it will be fine.
Select -> Settings for New Project:
And configure Maven home path there:
For anyone looking for the arrow ref, here is the doc for MUI v6 with the whole implementation.
You can read the count straight from the RETURNING CTE; note that PL/pgSQL has no SET for variables, so use SELECT ... INTO (or :=) instead:
DECLARE deletedRowCount integer;
WITH deleted AS
(
    DELETE FROM MyTable "t"
    USING tmp_closed_positions
    WHERE condition
    RETURNING "t"."Id"
)
SELECT COUNT(*) INTO deletedRowCount FROM deleted;
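Alternatively, PL/pgSQL can read the affected-row count of the last statement directly, without the CTE (same placeholder table and condition as above):

DELETE FROM MyTable "t"
USING tmp_closed_positions
WHERE condition;
GET DIAGNOSTICS deletedRowCount = ROW_COUNT;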