By default, CSV files store table data as plain text and do not support column formatting.
You need a tool like Excel's text-to-columns function to do it. But if you convert it using Excel, you'll need to save it as an Excel file, since CSV does not support columns.
@user8581670
Yeah, changing the file path to a shorter one worked for me!
For anyone else stumbling upon this and wondering how to fix this with Google Sheets:
When it's an integer you don't have to do anything, as @Diego Queiroz said.
The problem with number fields in Google Sheets is fixed by selecting the column and choosing Format > Number > 0 (near the bottom of the list).
Then remove the data source in Google Data Studio and reimport it. After that it was fixed for me.
I found this on YouTube Shorts; it's about NullReferenceException and might help you.
If it's a fixed device, maybe you could hardcode the MAC in and connect without looking for the advertisement? That reduces it to near the minimum and is common practice for hardware that connects over Bluetooth (like speakers that connect to each other via Bluetooth).
The original documentation is very poor. You can learn from the tutorials on this website: https://www.pythonguis.com/pyqt5-tutorial/
I think the problem is in the way you structure your POST route and the context in which you are using it.
You can't just type "/add" in your browser address bar. That's a GET request, but your code only handles POST.
You need to add a form like this to your index.ejs file:
<form action="/add" method="POST">
<input type="text" name="country" placeholder="Enter a country">
<button type="submit">Add Country</button>
</form>
Just put this form on your page, fill in a country name, and submit. That will trigger the POST request your route is expecting. Browser address bars can only make GET requests.
@Extra: If you want to test your POST route directly without creating a form, Insomnia is perfect for that. Just download Insomnia, create a new POST request to http://localhost:3000/add, set the body to 'Form URL Encoded' format, add a key-value pair with 'country' as the key and your country name as the value, then hit send. It's way easier for testing API endpoints than trying to build forms every time. You can test routes almost in real time, which will boost your productivity.
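For reference, here is a minimal sketch of what the server side might look like. This assumes an Express app with EJS and the built-in urlencoded body parser, and a simple in-memory countries array (those names are just stand-ins for whatever your app actually uses):
const express = require("express");
const app = express();

app.set("view engine", "ejs");
app.use(express.urlencoded({ extended: true })); // parses form-encoded POST bodies

const countries = []; // hypothetical in-memory store

app.get("/", (req, res) => {
  res.render("index", { countries }); // index.ejs contains the form above
});

app.post("/add", (req, res) => {
  countries.push(req.body.country); // "country" matches the form field's name attribute
  res.redirect("/");
});

app.listen(3000);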
These are local variables, and they are automatically managed by Dart's memory management system. Note that the garbage collector will eventually reclaim them once they are no longer referenced anywhere. On the upside, no memory leaks. On the downside, each time the function runs (like a callback execution), the variables are re-created and do not retain any previous state.
2025-03-18 21:15:26 INFO root - Rasa server is up and running. Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> what is chat gpt
I'm here to assist with any questions related to forms. If you need help with our services, features, or how to fill out a form, feel free to ask!
Your input -> hello
Hey there! What's your name?
Your input -> my name is John
In which city do you live?
Your input -> im from new york
Can i get your phone number? Your input -> 0123456789
Hey vishwa,New York is a very beautifull place. Your phone number is 0123456789
Your input -> sdas
Your input ->
I have some issues:
1. Out of context
Messages like "what is chat gpt" or "what is world" are out of context. For these I need to respond with something like "I'm here to assist with any questions related to forms. If you need help with our services, features, or how to fill out a form, feel free to ask!", because users can ask all kinds of questions and this bot should only focus on form filling.
2. Unclear input
For input like "sdas", or when a slot value doesn't match anything defined in the intent examples, the bot should say "Sorry, I didn't understand that. Can you rephrase?". For example, the training data has "my name is [John](name)", but if I type "my name is new york" I need the bot to say it can't understand and ask me to rephrase. This isn't only for the name slot, and I need a solution without hardcoding. What is the best way to do this in Rasa?
How can I get a solution for both scenarios?
Another late reply, but the above (apparently now "below") answer is still the best answer according to Google search results (it should be awarded "best answer"). I should add that it depends on the mouse you're using. I'm using a Logitech M100 and it doesn't exhibit this quirk, but if I attach one of my other gaming mice to my computer (a Zowie S2 and a Microsoft Intellimouse Pro) it has this problem. It's easy enough to solve; you just have to figure out which keyboard HID device to disable, since apparently there can be multiple "HID Keyboard Device" entries in Device Manager!
I just registered here today, and I'm not sure whether answering very old threads on Stack Overflow is encouraged, so apologies in advance!
I'm working on something similar but I'm not understanding what data is in the A, B, and D reference cells. Can you clarify that? Thank you!
There is no off-the-shelf support for nanoseconds. You will be limited to microseconds even when you use the Stopwatch class.
To set a max-width (like 1200px) and center your container in the free Elementor:
Select the Section.
Go to Layout tab.
Set "Content Width" to "Boxed".
Set "Width" to 1200 px.
Set "Horizontal Align" to "Center".
This uses Elementor's built-in settings, no custom CSS needed.
For me deleting the lock file mentioned in the error message worked.
Note: deleting that file results in losing your plugins and any other custom settings you have added, since it resets them.
Try updating your TypeScript; that may solve your problem, because it seems like you're using an older version of TypeScript.
I managed to resolve this. It was caused by a difference in environment variables.
Specifically, the CI job set AWS_ACCOUNT_ID, which I didn't have set locally. That env var was having some interaction with a change made in botocore 1.37.0 related to DynamoDB URLs.
Setting that env var locally allowed me to reproduce the error locally. Pinning botocore to 1.36.x fixed the issue locally and on the CI server.
IP-checking websites like https://www.whatismyip.com are blocked by Oxylabs by default.
Use https://ip.oxylabs.io instead.
I also ran into this issue. I did not want to download the pre-built wheel as Rajeesh suggested; instead, I ran the installation instructions found in the package's installation documentation: https://pillow-heif.readthedocs.io/en/latest/installation.html
The error probably comes from an outdated libheif-dev version. Following those installation instructions, I now have version 1.19.5 of libheif-dev; the old version that came with my system was 1.14.2.
After updating libheif-dev, the installation of pillow-heif with pip worked like a charm.
Resolved by removing editor.stickyScroll.maxLineCount. Thanks!
I just applied the loader prop, i.e. loader={customLoader}, and it was fixed.
Please, can someone help me by explaining this part of the code?
if [Index.1]>0
then if [Index.1] - 1 and [Index] = [Index]
then 0
else 1
else 1
I'm having the same issue as the example, but I was not able to use the M code.
Thanks.
I am not an expert in PINNs, but I think the problem is about extrapolation. Like a polynomial, a neural network is NOT a generic function approximator. You may use a Fourier layer; this can help.
Nice piece of code. How can I see only directories in the selection?
I have been facing this challenge for almost a week and I am so frustrated... I have tried everything here, but still no luck.
I wanted to know if you could help me by giving me the code for how you logged in. I have the same problem and I don't know how to solve it.
You can change @ng-bootstrap/ng-bootstrap from 13 to "@ng-bootstrap/ng-bootstrap": "^12.0.0". This will resolve your problem.
The simplest solution is probably to create a script (.bat/.sh/...) which will run a process for each bot by passing different tokens to main(). What's more, you can put them into different tmux sessions, cmd windows, etc.
Try using "wrap-table-header"
Specifies how the table header is wrapped. Set to 'all' to wrap all column headers. Set to 'none' to clip all column headers. Set to 'by-column' to wrap/clip column headers based on the wrap/clip setting for that individual column. The default is 'none'.
https://developer.salesforce.com/docs/component-library/bundle/lightning-datatable/specification
It seems that although the __getitem__ docs list slice support as optional, the typing information requires support for both integer indices and slices.
Implementing them resolves both warnings.
The code below no longer produces warnings for me.
from collections.abc import Sequence
from typing import override, overload

class MySeq(Sequence[float]):
    def __init__(self):
        self._data: list[float] = list()

    @override
    def __len__(self) -> int:
        return len(self._data)

    @overload
    def __getitem__(self, key: int) -> float:
        pass

    @overload
    def __getitem__(self, key: slice) -> Sequence[float]:
        pass

    @override
    def __getitem__(self, key: int | slice) -> float | Sequence[float]:
        return self._data[key]
Thanks to @jonrsharpe and @chepner for their comments pointing me in the right direction with this.
I'm having the same issue and was wondering if you were able to resolve it?
Bootstrap 5 has a reserved $zindex-fixed: 1030; variable which places elements below $zindex-modal: 1055;. It took me a while to wrap my head around, but this actually helped me solve this same issue in my Vue project. My child component also has the modal inside it, and I just assign this variable to the element that I want to be "above everything else … but not modals".
@use 'bootstrap' as bs;
.my-fullscreen {
position: fixed;
z-index: bs.$zindex-fixed;
// …
}
Temporarily deactivating all plugins except Elementor and Elementor Pro (if you have it) can help rule out plugin conflicts. If the issue resolves, reactivate plugins one by one to identify the culprit.
This is the easy way:
<%= form_with url: destroy_user_session_path, method: :delete do %>
<%= submit_tag 'Log Out' %>
<% end %>
@Ben Richards, I know this method and I like it a lot; the code looks very clear. Your code didn't work for me, though; I applied minor improvements and now it works.
# Define variables for readability
$IDMPath = "C:\Program Files (x86)\Internet Download Manager\IDMan.exe"
$DownloadPath = "C:\Users\Andrzej\Desktop\qap 22"
$FileName = "foo bar.exe"
$DownloadURL = "https://www.foobar2000.org/files/foobar2000-x64_v2.24.3.exe"
# Build the argument list as an array to avoid quoting issues
$Arguments = @(
'/p'
"""$DownloadPath"""
'/h'
'/n'
'/q'
'/f'
"""$FileName"""
'/d'
"""$DownloadURL"""
)
# Start IDM with arguments using Start-Process
Start-Process -FilePath $IDMPath -ArgumentList $Arguments -NoNewWindow -Wait
or
# Define variables for readability
$IDMPath = "C:\Program Files (x86)\Internet Download Manager\IDMan.exe"
$DownloadPath = "C:\Users\Andrzej\Desktop\qap 22"
$FileName = "foo bar.exe"
$DownloadURL = "https://www.foobar2000.org/files/foobar2000-x64_v2.24.3.exe"
# Build the argument list as an array to avoid quoting issues
$Arguments = @(
"/p ""$DownloadPath"""
'/h'
'/n'
'/q'
"/f ""$FileName"""
"/d ""$DownloadURL"""
)
# Start IDM with arguments using Start-Process
Start-Process -FilePath $IDMPath -ArgumentList $Arguments -NoNewWindow -Wait
(setq make-backup-files nil) ; stop creating backup~ files
(setq auto-save-default nil) ; stop creating #autosave# files
If you're using SQL Server, you're looking for UNPIVOT:
select Location, Type, Date, Account, Value
from t unpivot (Value for Account in (Quantity, Net)) av;
See it running on some of your data.
I would like to add that if you don't want to use migrationBuilder.Sql() and are trying to change the column type via standard EF Core methods, there is a workaround. The only way to get around this without SQL is to create a new column, drop the old one, and rename the new one.
I'm seeing the same, unfortunately. I haven't reached any quotas, and I have the necessary speech permissions.
Uninstall and reinstall the app;
that should resolve the issue.
After debugging, this was the answer:
EAS_LOCAL_BUILD_SKIP_CLEANUP=1 eas build --platform ios --profile preview --local
Having similar problems. How did you solve the problem?
In case anyone else runs into the issue, there was a problem in the reverse proxy where the whole request wasn't being properly forwarded to the Firebase Auth server - just the URL. We had to change from fetch(updatedURL) to `fetch(new Request(updatedUrl, oldRequest))`. And then it worked :)
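For anyone wondering what that change looks like in context, here is a rough sketch of the proxy handler; rewriteAuthUrl is a hypothetical stand-in for whatever URL rewriting your reverse proxy does:
// Forward the whole original request (method, headers, body), not just the rewritten URL
async function proxyToFirebaseAuth(oldRequest) {
  const updatedUrl = rewriteAuthUrl(oldRequest.url); // hypothetical URL-rewriting helper
  // new Request(url, oldRequest) carries the method, headers and body over to the new URL
  return fetch(new Request(updatedUrl, oldRequest));
}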
In case it helps anyone, this error was solved by setting the AbstractFileSystem for gs URIs per [this](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/master/gcs/INSTALL.md).
.config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
I have no idea why downloading the JAR file from https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop3-latest.jar works though
I believe that you need to use no buffer -N or --no-buffer in curl for this.
curl -N XYZ | jq -r -c '.results[0].data[0].row[]'
In older versions of Android Studio there was an option for "Empty Activity" (as well as "No Activity" and other Activity options). These activities used the Views XML layout format. Now in Android Studio, all the older activities are called Views Activity as they use the older Views format. There is now a choice for "Empty Activity" which uses the Compose layout, which is declarative (programmatic) layout, not XML. The old "Empty Activity" is now called "Empty Views Activity".
The views are for backward compatibility. For new projects it is suggested you use the new Compose declarative layout which is much easier to handle in code.
Since version 3.12.0 it's possible to omit the version of the annotation processor library if it is defined inside the dependency management.
See MCOMPILER-391
You likely can't directly make an element inside a closed accordion tab visible before it's opened, due to Elementor's optimization. Consider these shorter alternatives:
With Gnome 47, move emacs.desktop from /usr/share/applications/ or /usr/share/emacs/30.1/etc/ to ~/.local/share/applications/. Then the Emacs path will be the same whether launched from Activities or from a Terminal in the default shell. exec-path-from-shell is not needed.
My exact issue was that Python was 32-bit (Python38-32), some old project, and it wasn't working with the latest installation of GTK. I was also getting the same error. Once I updated the Python version to 3.11, reinstalled GTK, and set PATH to point to C:\Program Files\GTK3-Runtime Win64\bin, it worked (note that GTK also has to match the Python version).
Did you try this kind of syntax with "referencedTable"?
const { data, error } = await supabase
.from('friends')
.select('recieved!inner(*), sent!inner(*)')
.eq('accepted', true)
.or(`id.eq.${user?.id}`, { referencedTable: 'sent' })
.or(`id.eq.${user?.id}`, { referencedTable: 'receive' })
You don't even need printf format
seq -w 0 0010
Downgrading to @auth/prisma-adapter 2.7.2 should solve it. There is an open issue for this, see https://github.com/nextauthjs/next-auth/issues/12731
OK. Found the culprit!
When maxing out LZMADictionarySize, LZMABlockSize must be left out or commented out, because in the newest LZMA SDK this directive heavily degrades the compression ratio. I don't know what has changed in this newest version, but the LZMABlockSize directive really degrades compression with the settings above.
Hope this helps...
Regards
We can simplify Esteban Filardi's proposal with this code:
const oldDate = new Date('July 21, 2001 01:15:23');
const todayDate = new Date();
const oneYear = 1000 * 60 * 60 * 24 * 365;
const isPastOneYear = todayDate - oldDate > oneYear;
I have a hack for this.
But you'll have to wait a few more seconds after starting the server.
The hack is to create a .bat file and write the routes you are working on. (you can write all of them, but it will take more time)
Let's say I'm working on the dashboard today, like this:
curl http://localhost:3000/dashboard/home
curl http://localhost:3000/dashboard/help
curl http://localhost:3000/dashboard/something.....
Now just run your bat file just after starting your dev server.
Pro tip: you can make files like dashboard-prerender.bat, landingpage.bat, api-routes.bat, and all.bat for all the different kinds of needs.
It seems you mentioned having "same three file" which I interpret as having three similar files (e.g., three Excel or CSV files containing payment data like the one you shared). To create an effective data model in Power BI for analyzing the 24-hour payment trend across these files, you’ll need to combine and structure the data properly. Below is a step-by-step guide to perform data modeling in Power BI with multiple files.
---
### Assumptions
- You have three files (e.g., `File1.xlsx`, `File2.xlsx`, `File3.xlsx`) with similar structures, each containing columns like `T_TRANSACTION`, `T_DATE_VALUE`, `T_FILTERED`, `T_AMOUNT`, `T_ENTITY`, and `T_TYPE`.
- Each file represents a subset of your payment data (e.g., different days, batches, or entities).
- The goal is to create a unified 24-hour payment trend dashboard as discussed earlier.
---
### Step-by-Step Guide to Data Modeling in Power BI
#### 1. **Load the Three Files into Power BI**
- Open **Power BI Desktop**.
- Click **Get Data** > **Excel** (or **Folder** if the files are in the same directory).
- If using **Excel**, load each file individually:
- Select `File1.xlsx` > Click **Load** or **Transform Data**.
- Repeat for `File2.xlsx` and `File3.xlsx`.
- If using **Folder** (recommended for multiple files):
- Choose **Get Data** > **Folder**, browse to the directory containing your files, and click **Combine & Transform**.
- Power BI will detect similar tables across the files and create a single table by appending the data. Ensure the column names and data types match across all files.
- In the Power Query Editor:
- Check that `T_FILTERED` is set to **Date/Time** type.
- Remove any unnecessary columns (e.g., if some files have extra metadata).
- Rename the table (e.g., `PaymentData`) if needed.
#### 2. **Append the Data**
- If you loaded each file separately, append them into a single table:
- In the Power Query Editor, click **Home** > **Append Queries**.
- Select `File1`, `File2`, and `File3` to combine them into one table (e.g., `PaymentData`).
- Click **OK** and ensure the data aligns correctly (e.g., same column order and types).
- Click **Close & Apply** to load the combined data into the model.
#### 3. **Create a Date Table (Calendar Table)**
- To enable time intelligence and ensure all 24 hours are represented (even with no data), create a separate Date table:
- Go to **Modeling** > **New Table**.
- Use the following DAX to generate a Date table:
```
DateTable = CALENDAR(DATE(2024, 12, 1), DATE(2024, 12, 31))
```
- Adjust the date range based on your data (e.g., if it spans multiple months).
- Create additional columns for the hour:
```
HourBucket = FORMAT(TIME(HOUR([Date]), 0, 0), "h:mm AM/PM")
```
- This generates hourly buckets like "1:00 AM", "2:00 AM", etc.
- Mark this as a **Date Table**:
- Go to **Modeling** > **Mark as Date Table**, and set the `Date` column as the unique identifier.
#### 4. **Create a Relationship**
- Ensure a relationship exists between your `PaymentData` table and the `DateTable`:
- Go to the **Model** view.
- Drag the `T_DATE_VALUE` (or a derived date column from `T_FILTERED`) from `PaymentData` to the `Date` column in `DateTable`.
- Set the relationship to **1-to-many** (one date in `DateTable` can relate to many transactions in `PaymentData`).
- Ensure the `HourBucket` from `DateTable` will be used for hourly aggregation.
#### 5. **Add HourBucket to PaymentData (Optional)**
- If you want to keep the hour logic in the `PaymentData` table (instead of relying on `DateTable`):
- In Power Query Editor, add a custom column for `HourBucket` using:
```
let h = Time.Hour([T_FILTERED]), h12 = Number.Mod(h + 11, 12) + 1 in Text.From(h12) & ":00 " & (if h < 12 then "AM" else "PM")
```
- This ensures each transaction is tagged with its hour.
#### 6. **Create a Measure for Payment Count**
- Go to **Modeling** > **New Measure**.
- Enter:
NumberOfPayments = COUNTROWS('PaymentData')
- This counts the number of transactions based on the filters applied (e.g., by date and hour).
#### 7. **Build the 24-Hour Trend Dashboard**
- **Add a Matrix Visual**:
- Drag a **Matrix** visual.
- Set:
- **Rows**: `Date` (from `DateTable`) and `HourBucket` (from `DateTable` or `PaymentData`).
- **Values**: `NumberOfPayments`.
- Enable "Show items with no data" in the visual options to display all 24 hours, even if no payments occurred.
- **Add a Line Chart**:
- Drag a **Line Chart** visual.
- Set:
- **Axis**: `HourBucket`.
- **Values**: `NumberOfPayments`.
- **Legend**: `Date` (to show trends for each day).
- Sort `HourBucket` using a custom sort order (see Step 8).
- **Add a Slicer**:
- Drag a **Slicer**, set it to `Date`, and allow users to filter by day.
#### 8. **Sort HourBucket**
- Create a sorting table to ensure hours are in the correct order (1:00 AM to 12:00 AM):
- Go to **Modeling** > **New Table**.
- Enter:
```
HourSort = DATATABLE("HourBucket", STRING, "SortOrder", INTEGER,
{
{"1:00 AM", 1}, {"2:00 AM", 2}, {"3:00 AM", 3}, {"4:00 AM", 4},
{"5:00 AM", 5}, {"6:00 AM", 6}, {"7:00 AM", 7}, {"8:00 AM", 8},
{"9:00 AM", 9}, {"10:00 AM", 10}, {"11:00 AM", 11}, {"12:00 PM", 12},
{"1:00 PM", 13}, {"2:00 PM", 14}, {"3:00 PM", 15}, {"4:00 PM", 16},
{"5:00 PM", 17}, {"6:00 PM", 18}, {"7:00 PM", 19}, {"8:00 PM", 20},
{"9:00 PM", 21}, {"10:00 PM", 22}, {"11:00 PM", 23}, {"12:00 AM", 24}
}
)
```
- Relate `HourBucket` in `PaymentData` (or `DateTable`) to `HourBucket` in `HourSort`.
- Set the sort order of `HourBucket` to use the `SortOrder` column.
#### 9. **Verify and Test**
- Check that the Matrix and Line Chart show all 24 hours for each day, with zeros where no payments occurred.
- Use the Slicer to filter by specific days and confirm the trends.
#### 10. **Publish and Share**
- Save your Power BI file and publish it to the Power BI Service if needed.
---
### Expected Output
Your Matrix visual might look like this:
| Date | HourBucket | Number of Payments |
|------------|------------|--------------------|
| 12/11/2024 | 1:00 AM | 33 |
| 12/11/2024 | 2:00 AM | 0 |
| 12/11/2024 | 3:00 AM | 0 |
| ... | ... | ... |
| 12/11/2024 | 12:00 AM | 0 |
| 12/12/2024 | 1:00 AM | 25 |
| 12/12/2024 | 2:00 AM | 10 |
| ... | ... | ... |
The Line Chart will plot the 24-hour trend for each selected day.
---
### Tips for Success
- **Consistency**: Ensure column names and data formats are identical across the three files before appending.
- **Missing Hours**: The `DateTable` with `HourBucket` ensures all 24 hours are displayed, even if your data is sparse.
There's an easier way: iOS has a hidden enum that lets you do this using NSData: https://objectionable-c.com/posts/brotli-ios/
use this:
function foo {
typeset x=2 # Makes x local to the function
}
x=1
foo
echo $x # This will print 1
Try setting the security protocol to:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
My issue was that the Working directory inside the Run/Debug Configurations was not set to the folder where I have package.json.
Looks like you have got the Dialogflow embed code in Google Tag Manager. I am having issues with GTM not recognizing the <df-messenger> HTML component, saying it's not standard. Could you describe how you have set up GTM with Dialogflow? Thanks for the help.
In 2025, you need to disable the --turbopack option in the dev run configuration for debugging to work in WebStorm.
I added a dev-no-turbo script in package.json to use for debugging:
"dev": "next dev --turbopack",
"dev-no-turbo": "next dev",
Two solutions offered here didn't work for me, since labelStyle and renderLabel don't exist on TabBar or TabView, so after looking at the docs I found that you can set the fontSize with:
commonOptions={{ labelStyle: { fontSize: 16 } }}
on the TabView
flutter clean
flutter pub get
flutter run
I have exactly the same problem using mdns_dart when the WLAN is disconnected. I can't catch the exception. Any news?
From List children of a driveItem
If a collection exceeds the default page size (200 items), the @odata.nextLink property is returned in the response to indicate more items are available and provide the request URL for the next page of items. You can control the page size through optional query string parameters.
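As a rough sketch of what following @odata.nextLink can look like (this assumes the Microsoft Graph children endpoint and a fetch-capable runtime; the IDs and token are placeholders):
async function listAllChildren(driveId, itemId, accessToken) {
  let url = `https://graph.microsoft.com/v1.0/drives/${driveId}/items/${itemId}/children?$top=200`;
  const items = [];
  while (url) {
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    const page = await response.json();
    items.push(...page.value);
    url = page["@odata.nextLink"]; // undefined on the last page, which ends the loop
  }
  return items;
}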
| A | B | C |
|---|---|---|
| 1122 | 24_hhgf | =IF(ISNUMBER(MATCH(A1&"*";B1:B10;0));"Yes";"No") |
| 1354 | 1122_hfff | |

=IF(ISNUMBER(MATCH(A1&"*";B1:B10;0));"Yes";"No")
With the MATCH function you will find the cell containing the number plus extra characters:
=MATCH(112&"*";B1:B10;0)
Can anybody tell me what's wrong with this query? I'm getting the error "Incorrect syntax near ','." but it doesn't tell me where it is.
mysql = mysql + " SET @sql = N'"
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' +"
mysql = mysql + " (SELECT STRING_AGG(''ISNULL('' + QUOTENAME(MonthYear) + '', 0) AS '' + QUOTENAME(MonthYear), '', '')"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT DISTINCT"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS MonthYear,"
mysql = mysql + " MIN(bolh_shp_or_prt_date) As MinDate"
mysql = mysql + " From sisl_data04.dbo.so_bol_headers"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS MonthList"
mysql = mysql + " ) +"
mysql = mysql + " '',"
mysql = mysql + " ISNULL('' + (SELECT STRING_AGG(''ISNULL('' + QUOTENAME(MonthYear) + '', 0)'', '' + '') "
mysql = mysql + " FROM ("
mysql = mysql + " SELECT DISTINCT"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS MonthYear,"
mysql = mysql + " MIN(bolh_shp_or_prt_date) As MinDate"
mysql = mysql + " From sisl_data04.dbo.so_bol_headers"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS MonthList) + '', 0) AS [TOTAL],"
mysql = mysql + " SortOrder, SortKey"
mysql = mysql + " INTO #PivotResult"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT"
mysql = mysql + " bolh_salesrep_id AS [REP CODE],"
mysql = mysql + " bolh_cust_id AS [CUST CODE],"
mysql = mysql + " UPPER(sr.sr_salesrep_name) AS [REP NAME],"
mysql = mysql + " UPPER(c.cu_name) AS [CUSTOMER NAME],"
mysql = mysql + " DATENAME(MONTH, bolh_shp_or_prt_date) + '' '' + CAST(YEAR(bolh_shp_or_prt_date) AS NVARCHAR) AS [MonthYear],"
mysql = mysql + " SUM(ISNULL(bolh_taxinclship_amt, 0)) AS TOTALSales,"
mysql = mysql + " 0 AS SortOrder, -- Regular rows (SortOrder = 0)"
mysql = mysql + " bolh_salesrep_id + ISNULL(bolh_cust_id, '''') AS SortKey"
mysql = mysql + " FROM so_bol_headers bh WITH (NOLOCK)"
mysql = mysql + " INNER JOIN sales_reps sr WITH (NOLOCK) ON bh.bolh_salesrep_id = sr.sr_salesrep_id"
mysql = mysql + " INNER JOIN customers c WITH (NOLOCK) ON bh.bolh_cust_id = c.cu_cust_id"
mysql = mysql + " WHERE (bolh_salesrep_id = @salesrep_id OR @salesrep_id='''')"
mysql = mysql + " AND bolh_shp_or_prt_date BETWEEN @from_date AND @to_date"
mysql = mysql + " AND bolh_stage_flg >= @bolh_stage_flg"
mysql = mysql + " GROUP BY bolh_salesrep_id, bolh_cust_id, sr.sr_salesrep_name, c.cu_name, DATENAME(MONTH, bolh_shp_or_prt_date), YEAR(bolh_shp_or_prt_date)"
mysql = mysql + " ) AS SourceTable"
mysql = mysql + " PIVOT ("
mysql = mysql + " SUM (TOTALSales)"
mysql = mysql + " FOR [MonthYear] IN ('' + @cols + '')"
mysql = mysql + " ) AS PivotTable;"
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' + @cols + '',"
mysql = mysql + " [Total]"
mysql = mysql + " FROM ("
mysql = mysql + " SELECT [REP CODE], [CUST CODE], [REP NAME], [CUSTOMER NAME],"
mysql = mysql + " '' + @cols + '',"
mysql = mysql + " [TOTAL], 0 AS SortOrder, SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " Union ALL"
mysql = mysql + " SELECT"
mysql = mysql + " '''' AS [REP CODE],"
mysql = mysql + " '''' AS [CUST CODE],"
mysql = mysql + " '''' AS [REP NAME],"
mysql = mysql + " [REP CODE] + '' TOTAL'' AS [CUSTOMER NAME],"
mysql = mysql + " '' + @TOTALCols + '',"
mysql = mysql + " ISNULL(SUM([TOTAL]),0) AS [TOTAL],"
mysql = mysql + " 1 AS SortOrder,"
mysql = mysql + " [REP CODE] + ''ZZZZZZ'' AS SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " GROUP BY [REP CODE]"
mysql = mysql + " Union ALL"
mysql = mysql + " SELECT"
mysql = mysql + " '''' AS [REP CODE],"
mysql = mysql + " '''' AS [CUST CODE],"
mysql = mysql + " '''' AS [REP NAME],"
mysql = mysql + " ''GRAND TOTAL'' AS [CUSTOMER NAME],"
mysql = mysql + " '' + @TOTALCols + '',"
mysql = mysql + " ISNULL(SUM([TOTAL]), 0) AS [TOTAL],"
mysql = mysql + " 2 AS SortOrder,"
mysql = mysql + " ''ZZZZZZZZZZ'' AS SortKey"
mysql = mysql + " FROM #PivotResult"
mysql = mysql + " ) AS FinalResult"
mysql = mysql + " ORDER BY SortKey, SortOrder, [CUST CODE];"
mysql = mysql + " DROP TABLE #PivotResult;';"
mysql = mysql + " EXEC sp_executesql @sql, N'@salesrep_id NVARCHAR(MAX), @from_date DATE, @to_date DATE, @bolh_stage_flg NVARCHAR(10)', @salesrep_id, @from_date, @to_date, @bolh_stage_flg;"
Some of your underlying code may be using the legacy Places API instead.
For a test, I would try and activate the legacy Places API as well via this direct link:
https://console.cloud.google.com/apis/library/places-backend.googleapis.com
Because the legacy API is going away, it has currently been made not visible, so the deep link is the only way to activate it. Once it's active, it's visible again.
If it's a Windows box and you can install diff somewhere, then install diff. Check it by running diff in a DOS shell. Then add the path to diff.exe to your PATH environment variable and restart Emacs. That worked for me.
This is the way that I understand it.
Let me first illustrate the forward pass.
Given that self-attention is Softmax(QK^T)V (ignoring the scaling factor in the FLOP calculation, and sorry for the use of the same notation for different things!).
Since we only care about the FLOPs of a single token, our query Q has size (1xQ). K and V have size (TxQ), which the query will use to interact with the neighboring tokens.
If we focus on just 1 head of 1 layer, we can ignore L and H for now. QK^T is a multiplication between (1xQ) and (QxT), which takes ~2QT operations. This operation yields a single vector of size (1xT).
But there is still the operation of computing the product between Softmax(QK^T) and V. The product is between a vector (1xT) and a matrix (TxQ), which again takes ~2QT operations.
Combining both steps, we get 2(2QT). Then finally we scale by the number of heads (H) and the number of layers (L), giving us 2LH(2QT) for the forward pass. If we take the backward pass to be twice the FLOPs of the forward pass, we get:
2LH(2QT) (1 + 2) = 6LH(2QT) = 12LHQT FLOPs per token.
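Putting the same counts together in one line (same notation as above):
$$
\underbrace{2QT}_{QK^{\top}} + \underbrace{2QT}_{\mathrm{softmax}(QK^{\top})\,V} = 4QT \ \text{per head, per layer}
\;\Longrightarrow\;
\text{forward} \approx 4LHQT,\qquad
\text{forward} + \text{backward} \approx 3 \cdot 4LHQT = 12LHQT .
$$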
In my case, when I tried to run any flutter command it was saying
"Waiting for another flutter command to release the startup lock", so enter this command to shut down all Dart processes:
killall -9 dart
Then it works.
It was actually very easy, and the answer was actually in the description: the optional part regarding the cancellation token:
emailClientMock.Setup(c => c.SendAsync(WaitUntil.Started, It.IsAny<EmailMessage>(), It.IsAny<CancellationToken>()))
Please pay attention to the problem that may happen with awaitables that migrate between threads: https://github.com/chriskohlhoff/asio/issues/1366.
Excerpt from Linux Device Drivers, Third Edition, Chapter 7: Time, Delays, and Deferred Work
A device driver, in many cases, does not need its own workqueue. If you only submit tasks to the queue occasionally, it may be more efficient to simply use the shared, default workqueue that is provided by the kernel. If you use this queue, however, you must be aware that you will be sharing it with others. Among other things, that means that you should not monopolize the queue for long periods of time (no long sleeps), and it may take longer for your tasks to get their turn in the processor.
Otherwise, it may be better to create your own workqueue.
If you create a fragment without adding it to the activity using the fragment transaction manager it will show up as a false positive.
Syntax like ${env.MY_VAR} works.
Did I find it in the docs? No, I just guessed :D
I'm having the same problem. I'm using OAuth for Facebook. I've already verified my business, but it still says the app isn't active. What can I do?
You can check out this website. Maybe this example helps you to solve your issue. A JSChart is deployed at this link.
I have no explanation why Jenkins jobs do not react to PowerShell error as described in the docs: https://www.jenkins.io/blog/2017/07/26/powershell-pipeline/
Either try -ErrorAction Stop after each command (Everything you wanted to know about exceptions)
Or set $ErrorActionPreference = 'Stop' at the beginning
Example
try {
    powershell '''
        throw "ERROR: This is a test Exception."
    '''
} catch (err) {
    echo "PowerShell step failed: ${err}"
    throw err
}
This is also the exact answer for keeping the console process open by default: in Microsoft Visual Studio 2022, go to Tools > Options > Node.js Tools and check "Wait for input when process exits normally".
PhoneGap is now deprecated. Use https://volt.build/ instead.
This worked for me:
WHERE user_id::uuid = :tenantId::uuid
If you are getting this when inserting a row, the solution for me was that I'd failed to add AUTOINCREMENT to the SQLite primary keys while creating the temporary test DB. Ordinal 0 is the first column (usually the integer PK)
When running Nest in start:dev mode, make sure to set deleteOutDir to false in the nest-cli.json configuration file:
{
"compilerOptions": {
"deleteOutDir": false
}
}
Also, add the following to your .gitignore file to avoid committing build artifacts:
dist // or your chosen output directory
*.tsbuildinfo
This is important because, in watch mode (--watch), TypeScript relies on the .tsbuildinfo file. This file helps TypeScript to recompile only the files that have changed. If the output directory were deleted on every build, you’d lose the cached, unmodified files, slowing down the build process.
You can access the legacy API and enable it here: https://console.cloud.google.com/apis/library/places-backend.googleapis.com Once enabled, you will be able to manage the API as usual in the Maps console.
It seems pretty similar to https://github.com/ansible-collections/ansible.netcommon/issues/551, which is open.
Getting the following error when enabling commit with or without confirm_commit:
false, "msg": "Argument must be bytes or unicode, got 'int'"}
I have seen bug reports on the same issue on previous Ansible versions 2.9 and 2.7; we are on 2.13.
It is the waiting time specified under "commit: 10" that fails.
But it also fails to recognize "confirm_commit: true" or "confirm_commit: yes".
There is a fix for it, but I'm not sure when it is going to be merged: https://github.com/ansible-collections/ansible.netcommon/pull/670
I don't see a workaround for it, other than not using "confirm"
It seems that the Google Cloud engineering team has temporarily rolled back the changes to the TLS protocol versions for the App Engine Standard and Flexible Environments. They may send another email regarding the update. At this time, I would suggest keeping an eye on the issue tracker link you shared in the comments or review the Google App Engine release notes for the most recent updates.
When the ModelContainer is being built, make sure CalData is in the schema or Swift will not be able to find it:
private var models: [any PersistentModel.Type] = [
CalData.self, CalDataVal.self,
]
lazy public var container: ModelContainer = {
let schema = Schema(models)
//...
}
Did you look at what it's complaining about? When I got that error, the issue was that PostgreSQL had escaped the backslashes within the strings twice, which you can fix as follows:
sed -e 's/\\\\/\\/g'< input.json |jq . >output.json
Save the existing file if you need some of the variables written there. Copy .env.example, rename it to .env, and try running it again.
Another scenario: with Sail, check whether the file is already present in the container and whether you have permission to change it.
For me, what worked was using the right path when calling the web socket from the front end. Something like "wss://a1d5-83-39-106-145.ngrok-free.app/ws/frontend". The "/ws/frontend" part is very important.
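For illustration, a minimal sketch of how the front end opens that socket (the host is just the ngrok URL from my setup; the "/ws/frontend" path is the part that matters):
// The path must match the WebSocket route the backend exposes
const socket = new WebSocket("wss://a1d5-83-39-106-145.ngrok-free.app/ws/frontend");
socket.addEventListener("open", () => socket.send("hello"));
socket.addEventListener("message", (event) => console.log(event.data));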
Try your change with this repo: https://github.com/HydraDragonAntivirus/AutoNuitkaDecompiler. If your goal is to detect malicious payloads from Nuitka, then you need to look at https://github.com/HydraDragonAntivirus/HydraDragonAntivirus;
otherwise AutoNuitkaDecompiler is enough to make some progress.
Apparently, this is a limitation of the platform Android itself, when the dark mode is enabled: https://github.com/MaikuB/flutter_local_notifications/issues/2469
Be careful, because if you cut/paste the controls like @Tony H said, you will keep the properties but lose all the events of each control.
Use PHP's built-in solution for this:
$username = getenv('USERNAME');
Or on Unix: $username = getenv('LOGNAME');
You have a big problem: the order column needs a UNIQUE constraint. To move an item you need to:
1. Remove the UNIQUE constraint.
2. Move the item.
3. Reactivate the UNIQUE constraint.
Another way is to use strings ("a", "b", "c", etc.) or an integer list spaced by 100 (100, 200, 300, etc.). That way simple moves are easier, but it adds complexity for reordering and automation once you end up with items at 100, 101 and 102.
Is this happening each time you deploy? Also, if the functions are not visible are they still working? As in does your http trigger still work?
I've encountered this issue before in App Service; however, I was missing these two environment variables:
SCM_DO_BUILD_DURING_DEPLOYMENT : true
ENABLE_ORYX_BUILD : true
here's the entire list of settings with explanation: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings
For me it was not having an await in front of screen.findByRole("row").
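In other words, something like this (a minimal sketch; the findBy* queries return a promise, so they need await, and toBeInTheDocument assumes jest-dom is set up):
// findByRole is async: it resolves once a matching element appears
const row = await screen.findByRole("row");
expect(row).toBeInTheDocument();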
While https://stackoverflow.com/users/205608/o-jones has a good answer, there's scope for further optimisation.
Your query:
SELECT *
FROM account
WHERE client_id = ?
AND disabled IN (?)
AND (username LIKE '%?%' OR name LIKE '%?%' OR surname LIKE '%?%')
ORDER BY name, surname
LIMIT ?;
Your constraints: client_id might match a high number of rows.
How to optimize this query?
No index can binary-search on username LIKE '%?%' or the other similar LIKE clauses. One has to do a full index scan (this is different from the full table scan your query might currently be doing; it's orders of magnitude faster).
The connective between the clauses in your query is OR, i.e. username, name OR surname. That implies you have to build a composite index containing these three columns.
The other clauses reference client_id and disabled. However, the selectivity of either of these is not very high, so an individual index on either column alone is of no use. A composite index containing these columns plus the three columns above does make sense, but is optional (index selection will only work if these columns are present). So, up to now we are at: client_id, disabled, username, name, surname [I1]
You want to optimize the ORDER BY clause as well. The only way to do that is to keep the data physically sorted in the order of your clause. Remember, an index is always physically sorted. So, let's change
client_id,disabled,username,name,surname
To:
name,surname,username,client_id,disabled
Consider partitioning your data.
The above index will speed up your query but has negative consequences (such a wide index is costly to maintain on writes). So, you might want to drop the last two columns and use an index hint.