The SslHostCertificateFingerprint was outdated; I used the new key and WinSCP continued to work as normal. Thanks!
I had the same error except it was gwt-dev.jar that had the javax.xml.parsers class that was conflicting with the JRE. It was shown by using CTRL-SHIFT-T and typing javax.xml.parsers.
I opened the gwt-dev.jar in 7zip and removed the javax.xml.parsers folder and the error went away from my GWT project.
The SCIM endpoint must be internet-facing for use with the Entra provisioning service unless you use Entra's on-premises provisioning agent. The agent requires outbound internet connectivity and opens a connection to Entra, eliminating the need for inbound connections to be allowed. The agent also doesn't need to be on the same server as the application, as long as it has internal (intranet) network connectivity to the application.
Finally, UNIX_TIMESTAMP worked, and my column of type timestamp(3) can be cast to a string:
select myts_col, UNIX_TIMESTAMP(CAST(myts_col AS STRING)), CAST(myts_col AS STRING), DATE_FORMAT(myts_col, 'yyyy-MM-dd HH:mm:ss') from mytable
myts_col                   conv_col
2024-11-06 06:34:53.316    1730874893
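For reference, the same conversion can be reproduced outside Hive with Python's standard library (this assumes the session interprets the timestamp as UTC, which matches the output above; note that UNIX_TIMESTAMP drops the .316 fractional part):

```python
import calendar
import time

# The fractional seconds are truncated, as in the Hive output above.
ts = "2024-11-06 06:34:53"
epoch = calendar.timegm(time.strptime(ts, "%Y-%m-%d %H:%M:%S"))
print(epoch)  # 1730874893
```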
Have you tried using the appsrc element? https://gstreamer.freedesktop.org/documentation/app/appsrc.html?gi-language=c
You can feed data from your RAM into an appsrc element using its need-data signal: https://gstreamer.freedesktop.org/documentation/app/appsrc.html#appsrc::need-data
There are many examples of how the appsrc element can be used. I suggest looking into the official GStreamer tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
Rather than doing it through code, you can set the button's opacity to zero and, once someone touches that table row (or anything), set it to one for that moment. This should solve your case.
Move semantics are very well explained in the Rust Book. It's a general concept, so if you read it you will also understand how moves work in C++ and other languages. The one thing I will note is that in C++ there are no safety checks like in Rust.
Personally, I understood how moves work in C++, Rust, etc. after reading these two sentences (I couldn't manage it after reading dozens of articles on the topic of moves in C++):
If you’ve heard the terms shallow copy and deep copy while working with other languages, the concept of copying the pointer, length, and capacity without copying the data probably sounds like making a shallow copy. But because Rust also invalidates the first variable, instead of being called a shallow copy, it’s known as a move.
I believe the issue is as Michael Logothetis suggested. You are having issues because of the %2F. Flask automatically decodes the parameter you're passing as a string by default (see https://flask.palletsprojects.com/en/stable/api/#url-route-registrations). You can specify a converter to accommodate the possibility that your string will include a / by using /getExposeIds/<path:cookie> as your endpoint.
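As a minimal sketch (hypothetical route and handler names, assuming Flask is installed), the path converter keeps the route matching even when the decoded value contains a slash:

```python
from flask import Flask

app = Flask(__name__)

# The default string converter stops at "/", so a decoded %2F can break
# the match; the path converter accepts embedded slashes.
@app.route("/getExposeIds/<path:cookie>")
def get_expose_ids(cookie):
    return cookie

# Quick check with Flask's built-in test client
client = app.test_client()
resp = client.get("/getExposeIds/abc%2Fdef")
print(resp.status_code)  # 200 - the route still matches
```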
I am struggling with Glue and Great Expectations. What version are you using? Did you just add it as %additional_python_modules great_expectations? I managed to do all the steps, but in the end I get
"ConfigNotFoundError: Error: No great_expectations directory was found here!
- Please check that you are in the correct directory or have specified the correct directory.
- If you have never run Great Expectations in this project, please run `great_expectations init` to get started."
but I am running in a Glue notebook, so that's not an option.
Would you be so kind as to share your setup and some simple examples? Please!
Simply disabling my VPN solved this issue for me.
As of the 2023-09-28 release of Docker Desktop 4.24, Resource Saver is no longer a feature in development, as Alexander describes in his answer.
You can disable or configure Resource Saver in Docker Desktop for Windows (Mac is documented as the same) by going to Settings / Resources and unchecking the Enable Resource Saver checkbox.
Cnt = CALCULATE( DISTINCTCOUNT( BRAND[pIn_nbr] ), ALLSELECTED( BRAND[pIn_nbr] ) )
Delete your existing sha1 keys from both firebase and Google Cloud Credentials and then just add the upload and signing sha1 keys from google play console. This should generate a new google-services.json. Use this to build the app bundle. This worked for me.
Thank you for everyone's input. It helped me better understand how the SQL was being calculated.
I was able to resolve my issue by overriding precedence using parentheses, as shown below.
SELECT (24/24*16.34+6) * 48 * 40 / 1728 as CUBIC_FT
FROM ...
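The effect of the parentheses can be checked in plain Python with the same numbers (note Python's / is float division, which changes nothing here since 24/24 is 1 either way):

```python
# With the parentheses, the +6 is applied before the remaining factors:
with_parens = (24 / 24 * 16.34 + 6) * 48 * 40 / 1728

# Without them, * and / bind tighter than +, so the 6 picks up its own
# 6 * 48 * 40 / 1728 term instead:
without = 24 / 24 * 16.34 + 6 * 48 * 40 / 1728

print(round(with_parens, 4), round(without, 4))  # 24.8222 23.0067
```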
Thanks for the tip! I was doing some work translating Excel calculations that used Excel's IsError and Mid functions. I had already replaced all the IfError functions in Access with a combination of IIf and IsError, and was still getting the same #Func! error. It turns out the Mid function in Access does not return a true error the way Excel does, but a zero, which kind of makes sense but was completely breaking my original logic, which as I mentioned came from Excel. In the end, I substituted this
Iif(IsError(Mid(Afield,Start,End)),AlternateValue,Mid(Afield,Start,End))
with this other formula
Iif(Mid(Afield,Start,End)=0,AlternateValue,Mid(Afield,Start,End))
This is a much more compact and easier-to-understand formula in Access, and it actually worked! All thanks to your answer, which took 6 years to figure out! It is now another 6 years later, in 2024, as I read your comments, so it is my time to give something back. Thank you very much!
The most straightforward way I could come up with is the join method for strings.
>>> a = ('x','y','z')
>>> print(f'{" ".join(a)}')
x y z
If your tuple elements aren't strings, they have to be converted (if possible).
>>> a = (0,-1,1)
>>> print(f'{" ".join(map(str, a))}')
0 -1 1
Not to self-promote, but you should take a look at my repository, unstatistical/engine. When I started this project, I was fairly new to Wayland and EGL development, but I believe that some code may still be relevant. Wayland code can be found in src/wayland.
I believe the best way to control window position is using a wl_region object. You can create a region using this function:
wlContext.region = wl_compositor_create_region(wlContext.compositor);
And then, you can set the window's position like so:
wl_region_add(wlContext.region, x, y, w, h);
Best of luck on your project.
EDIT: the original code that replaces the string (not the version simplified for Stack Overflow) is:
$entry=preg_replace('/[^-a-z0-9_ßüÜöÖäÄßÜüÖöÄä_@:\.\, ]/i', '', $entry);
and in my question I used this simplified code:
$entry = preg_replace( '/[^a-z0-9 ]/i', '', $entry );
That should do it, as gre_gor commented. So the problem is in the non-simplified preg_replace above. Solved. Thank you.
If you wanted it to be all one element, you would definitely need an SVG. This question has roots in another question relating to masking/cutting out parts of an element. I've come up with a way to create the shape you have drawn in 2 div shapes wrapped under a div, based on the accepted answer to the previously stated question, but it currently relies on a fixed height and width. Each div is given a different color to differentiate between them:
:root {
--rounding-factor: 20px;
}
body {
margin: 20px;
}
#wrapperDiv {
background-color: pink;
height: calc(200px + var(--rounding-factor));
width: 400px;
position: relative;
}
#lDiv {
position: absolute;
top: var(--rounding-factor);
width: 50%;
height: 200px;
border-top-left-radius: var(--rounding-factor);
overflow:hidden;
z-index: 1;
}
#lDiv::after {
content: '';
position: absolute;
left: 0px;
bottom: 0px;
width: 200px;
height: var(--rounding-factor);
border-top-left-radius: var(--rounding-factor);
box-shadow: 0px 0px 0px 2000px orange;
}
#rDiv {
position: absolute;
right: 0;
width: 50%;
height: 200px;
border-bottom-right-radius: var(--rounding-factor);
overflow:hidden;
z-index: 1;
}
#rDiv::after {
content: '';
position: absolute;
left: 0px;
top: 0px;
width: 200px;
height: var(--rounding-factor);
border-bottom-right-radius: var(--rounding-factor);
box-shadow: 0px 0px 0px 2000px yellow;
}
#content {
position: absolute;
z-index: 2;
width: 100%;
top: var(--rounding-factor);
height: calc(200px - var(--rounding-factor));
background-color: rgba(100, 255, 200, 0.5);
display: flex;
align-items: center;
justify-content: center;
font-size: 2rem;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="stylesheet" href="style.css" />
<title>Browser</title>
</head>
<body>
<div id="wrapperDiv">
<div id="lDiv">
</div>
<div id="rDiv">
</div>
<div id="content">
<p>Some content here.</p>
</div>
</div>
</body>
</html>
The dx2 and dy2 may be wrong: d/dx of cos = -sin, but you have +sin.
Also check num_points, as the cast to int might drop precision, or have other issues when it rounds up or down.
Is "duration" defined in scope? It is not listed on this page.
Check your parametric equation and that your trig identities are correct.
Best of luck!
Same issue here...any solutions?
As you suspected, events is the way to do this.
execution.events during past 7d
| summarize number_of_users = user.count() by context.device_platform
<sandbox>
com.sandbox.system.stackoverflow
https://www.sanbox.com/privacy/page
</sandbox>
Before throwing an exception, it is worth asking a few questions, such as: Do I only need a custom error returned to the user in a specific controller or endpoint? Or do I have to set consistent error responses across the API by centralizing error handling?
Depending on the need, you may or may not throw an exception. So you have two approaches:
1 - Returning a ResponseEntity with an error code directly (for quick responses with minimal logic): let's say all we want is to return an error because we know that the response is not the right one:
@GetMapping("/example-failure")
public ResponseEntity<String> getExampleFailure() {
//Return a response of type String instead of MyResponseType.
String response = "This is an invalid response";
return new ResponseEntity<>(response, HttpStatus.NOT_ACCEPTABLE);
}
2 - If you really need exception handling: exceptions are often reserved for truly exceptional cases or errors that need to propagate up the stack, but they can also be used for consistent error responses across your API.
One of the best practices for handling exceptions globally is to use @ControllerAdvice :
@GetMapping("/example-failure")
public ResponseEntity<String> getExampleFailure() {
// **Throw a new exception**
throw new InvalidResponseTypeException("Item not found");
}
...
// Here the GlobalExceptionHandler, declared in the controller advice class, catches the thrown exception and responds with a consistent error
@ControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(InvalidResponseTypeException.class)
public ResponseEntity<ErrorResponse> handleNotFoundException(InvalidResponseTypeException ex) {
ErrorResponse error = new ErrorResponse("NOT_ACCEPTABLE", ex.getMessage());
return new ResponseEntity<>(error, HttpStatus.NOT_ACCEPTABLE);
}
// You can also have in the same class other exception handlers...
}
The answer given by @Bob only returns the imaginary part of the transform. If you'd like to return a complex number (more similar to the scipy hilbert function), you can return a complex tensor like so.
def hilbert_transform(data):
    # Allocates memory on the GPU with the size/dimensions of the signal
    data = torch.from_numpy(data).to('cuda')
    # torch.fft functions take dim, not axis
    transforms = -1j * torch.fft.rfft(data, dim=-1)
    transforms[..., 0] = 0  # zero the DC bin along the transform axis
    imaginary = torch.fft.irfft(transforms, dim=-1)
    real = data
    return torch.complex(real, imaginary)
This will play nicely with functions like torch.angle to get signal phase and torch.abs to get the envelope using both real and imaginary components.
Curl now has --etag-save and --etag-compare
https://curl.se/docs/manpage.html#--etag-compare
You should be able to use these together as of 7.70.0
curl --etag-compare etag.txt --etag-save etag.txt http://example.org
Well, this is not strictly possible with RBAC. From the docs :(
Note: You cannot restrict create or deletecollection requests by their resource name. For create, this limitation is because the name of the new object may not be known at authorization time. If you restrict list or watch by resourceName, clients must include a metadata.name field selector in their list or watch request that matches the specified resourceName in order to be authorized. For example, kubectl get configmaps --field-selector=metadata.name=my-configmap
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
I will try allowing create for any resource name, but restricting other verbs with resourceNames
Your destination needs to be a filename, not an existing folder. And you need to specify a raw string, as shown below.
For example:
url = "https://github.com/CharlesKnell/folder-display/blob/main/dist/folder_display_2.2.1.exe"
destination = r"C:\Users\charl\Downloads\folder_display_2.2.1.exe"
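To see why the raw string matters: in a normal Python string literal, backslash sequences such as \n are interpreted as escapes, silently corrupting Windows paths (hypothetical path below):

```python
plain = "C:\new_folder"   # "\n" is parsed as a newline character
raw = r"C:\new_folder"    # the backslash is kept literally

print(len(plain), len(raw))   # the plain string is one character shorter
print("\n" in plain, "\n" in raw)
```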
I created a fully working example with Angular 18, custom fonts, and no need to customize the HTML toolbar of the Quill editor.
Apple's tech site for all iPhones
So I believe you are missing the gist here. Lazy loading simply means nothing is worked on until the compute() function is called, which then takes care of execution.
So you need to have all your logic in place; at the end you can call the compute function.
If your data is too large, compute internally merges everything and brings the entire data into the caller, which may still kill your application due to OOM, so it is better to use map_partitions with compute, to write it into separate files or push the output to some database.
The solution is to make the whole callback function async:
mediaRecorderAudio.addEventListener("dataavailable", async (stream) => {
  // Send stream data while mediaRecorderAudio is active
  // (Blob takes an array of parts)
  let blob = new Blob([stream.data], { type: "audio/ogg; codecs=opus" });
  let buffer = await blob.arrayBuffer();
  let data_to_send = new Uint8Array(buffer);
  socket.emit('socket_audio', JSON.stringify(data_to_send));
});
mediaRecorderAudio.start(100);
The crash is likely due to an unrecognised selector error, common when a method is called on an object that doesn't support it. Check your extensions or modifications to standard controllers, and ensure all objects are initialised properly. Also, confirm compatibility with any external libraries.
The problem is that this feature is only for Premium and Ultimate users, as you can see here at the top:
https://docs.gitlab.com/ee/ci/secrets/#use-vault-secrets-in-a-ci-job
Although I don't know why, they left Free on this page, which confused me too.
See how it looks: https://docs.gitlab.com/ee/ci/secrets/
PS: In 17.+ versions you are no longer allowed to use CI_JOB_JWT; use ID tokens instead: https://docs.gitlab.com/ee/ci/yaml/index.html#id_tokens
The simplest correct answer is:
Get-Process SourceTree | Stop-Process -Force
"SourceTree" is a process name.
I finnaly solved it. I created a fully working git repository with Angular 18
I got the same error:
3061: Too few parameters. Expected 2.
Can anyone help me, please? Here is the VBA code:
Private Sub Form_AfterUpdate()
    On Error GoTo ErrorHandler
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim calculatedSalesPercents As Double
    Dim salesPercentsID As Variant
    ' Store the value of the SalesPercents control and calculate the result
    calculatedSalesPercents = IIf([RePrice] = 0 Or [BDDUTY] = 0, 0, Round(1 - ([BDDUTY] / [RePrice]), 2))
    salesPercentsID = Forms![Item Menu Suplier(03)]![SalesPercents] ' Value from the SalesPercents textbox
    ' Set up the database and recordset
    Set db = CurrentDb()
    ' Add the value from the SalesPercents textbox directly to the SQL statement
    Set rs = db.OpenRecordset("SELECT * FROM ItemName WHERE ID = " & salesPercentsID, dbOpenDynaset)
    ' If a record is found, update it
    If Not rs.EOF Then
        rs.Edit
        rs("SalesPercents") = calculatedSalesPercents
        rs.Update
    End If
    rs.Close
    Set rs = Nothing
    Set db = Nothing
    Exit Sub
ErrorHandler:
    MsgBox "Error " & Err.Number & ": " & Err.Description
End Sub
In addition to @Quentin's answer, this issue also occurs when the for attribute in your <label> element does not reference any <input> or <select> element id.
For example:
<label for="filter">Filter Options</label>
<select name="filter">
<option value="asc">Ascending</option>
<option value="desc">Descending</option>
<option value="def">Default</option>
</select>
To fix this, make sure the input or select element id matches the for attribute.
Essentially:
<label for="filter">Filter Options</label>
<select name="filter" id="filter"> <!-- id="filter" now references for="filter" -->
<option value="asc">Ascending</option>
<option value="desc">Descending</option>
<option value="def">Default</option>
</select>
Unfortunately, there is no feature like this. But you can try some plugins instead:
https://plugins.jetbrains.com/plugin/7793-markdown/versions/stable
try:
sizeof(int16_t)
instead of
sizeof(uint16_t)
I did exactly what you wrote:
60f346c11edf_venv/bin# systemctl status gunicorn.service
× gunicorn.service - gunicorn daemon for Django project
     Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: en>
     Active: failed (Result: exit-code) since Wed 2024-11-06 21:35:51 CET; 1min 7s ago
   Main PID: 268969 (code=exited, status=200/CHDIR)
        CPU: 2ms
Nov 06 21:35:51 vps-etus-relizane systemd[1]: Started gunicorn daemon for Django proj>
Nov 06 21:35:51 vps-etus-relizane systemd[268969]: gunicorn.service: Changing to the >
Nov 06 21:35:51 vps-etus-relizane systemd[268969]: gunicorn.service: Failed at step C>
Nov 06 21:35:51 vps-etus-relizane systemd[1]: gunicorn.service: Main process exited, >
Nov 06 21:35:51 vps-etus-relizane systemd[1]: gunicorn.service: Failed with result 'e>
lines 1-11/11 (END)
React Router has built-in support since v6.12.0 by enabling the v7_startTransition feature flag:
https://reactrouter.com/en/6.22.3/guides/api-development-strategy#react-router-future-flags
<BrowserRouter future={ { v7_startTransition: true } }>
<Routes> ... </Routes>
</BrowserRouter>
//OR
<RouterProvider
router={ router }
future={ { v7_startTransition: true } }
/>
Well, in my case there were empty brackets [] which I then removed, and the problem was solved.
I found the main reason. When I installed the @twilio/conversations package, its modules contained lots of unnecessary symbols, so I just removed them and patched those files.
In the end I wrote a js script in another file.
const ghpages = require('gh-pages');
const { execSync } = require('child_process');
try {
const commitHash = execSync('git rev-parse HEAD').toString().trim();
ghpages.publish('build', {
dotfiles: true,
message: `Deploying commit ${commitHash}`
}, (err) => {
if (err) console.error('Deployment error:', err);
else console.log('Deployed successfully');
});
} catch (error) {
console.error('Error fetching commit hash:', error);
}
Called from package.json with:
"deploy": "node scripts/deploy.cjs"
I was able to sort of solve this in an Excel template by making the first sheet hold all rows; the second sheet uses an Excel formula to decide whether to keep an id/name. I'm not sure this is a robust solution, though, and it doesn't answer my question if the goal was to do it in one sheet, since I created a dependency on the data sheet rather than doing it in place.
Does anyone have a suggestion for a single-sheet-only method?
Here's how I did the two-sheet method. Note this is pseudocode, since otherwise I'd need to attach the Excel workbook.
Dynamic List Sheet
A1.Comment = jx:each (items="data" var="row" lastCell="C2")
A2.Comment = jx:each(items="data" var="row" lastCell="C2")
A2.Value = ${row.ID}
B2.Value = ${row.NAME}
C2.Value = ${row.TERM}
Compute Sheet
A1.Comment = jx:each (items="data" var="row" lastCell="C2")
A2.Comment = jx:area(lastCell="C2")
A2.Formula=IF(COUNTIF('Dynamic list'!$A$2:A2, 'Dynamic list'!A2) > 1, "", 'Dynamic list'!A2)
B2.Formula=IF(COUNTIF('Dynamic list'!$A$2:B2, 'Dynamic list'!B2) > 1, "", 'Dynamic list'!B2)
C2.Formula='Dynamic list'!C2
I found the problem: a Promise has 3 states.
pending - the initial state of a promise.
fulfilled - the state of a promise representing a successful operation.
rejected - the state of a promise representing a failed operation.
A promise can hold the pending state forever (until the page is closed).
Therefore the code after await Swal.fire... in the example above never runs, because the promise is always in the pending state. (This is a bug that was resolved in SweetAlert version 11.)
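The same effect can be illustrated in Python with asyncio (as an analogy only): awaiting a future that is never resolved blocks the rest of the coroutine forever, just like the forever-pending promise above.

```python
import asyncio

async def never_settles():
    # This future is never resolved, so it stays "pending" forever.
    await asyncio.get_running_loop().create_future()
    print("never reached")  # code after the await never runs

async def main():
    try:
        # Give up after 0.1s instead of hanging forever.
        await asyncio.wait_for(never_settles(), timeout=0.1)
    except asyncio.TimeoutError:
        return "still pending"

print(asyncio.run(main()))  # still pending
```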
Figured it out!
foo <- <a gt_tbl object>
library(ggplot2)
library(ggplotify)
library(rsvg) #required to render SVG cells
foo_1 <- as_gtable(foo)
foo_2 <- as.ggplot(foo_1)
ggsave('path/to/file.png', foo_2)
SELECT Name, SUM(Marks) FROM table1 GROUP BY Name;
Should do it?
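For a quick sanity check, the same aggregation can be run against SQLite from Python's standard library (made-up sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (Name TEXT, Marks INT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("a", 10), ("a", 5), ("b", 7)])

# One row per Name, with that name's total marks
rows = con.execute(
    "SELECT Name, SUM(Marks) FROM table1 GROUP BY Name").fetchall()
print(sorted(rows))  # [('a', 15), ('b', 7)]
```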
Have you found the solution? I'm using the same theme. I would like the product category page to show the description, not override it with the content block.
I also had this problem! The solution I found was to update the log4net package from version 2.0.8 to version 2.0.12; I also updated all the Crystal Reports packages to the latest version. After that I reinstalled the SAP Crystal Reports Runtime, as shown in the image.
The developers of kableExtra were able to resolve this issue. Details can be found here. add_header_above must be placed before kable_styling.
---
title: "Test"
output: pdf_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
library(tidyverse)
library(knitr)
library(kableExtra)
```
Some text to anchor the first table.
```{r test-table}
tibble(col1 = rep("Random text", 80),
n1 = 1:80,
perc1 = 1:80,
n2 = 1:80,
perc2 = 1:80) %>%
kbl(col.names = c("Category", "n", "\\%", "n", "\\%"),
caption = "Reproducible caption",
longtable = TRUE, escape = FALSE, booktabs = TRUE, linesep = "") %>%
add_header_above(c(" " = 1,
"Group 1" = 2,
"Group 2" = 2)) %>%
kable_styling(latex_options = c("HOLD_position", "repeat_header"),
position = "left")
```
I lost two hours on this problem; it was solved using:
collectionView.contentInset = .init(top: 1, left: 15, bottom: 1, right: 15)
collectionView.scrollIndicatorInsets = .init(top: 1, left: 1, bottom: 1, right: 1)
I made a project that does exactly this; I put it on my GitHub for you to take a look: https://github.com/LeonardoQueres/Integration-Docker---.NET---SQL-SERVER
In the Program.cs file, add the lines of code below.
builder.Services.AddHttpsRedirection(options =>
{
    options.RedirectStatusCode = StatusCodes.Status308PermanentRedirect;
    options.HttpsPort = 3001;
});

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();

    var application = app.Services.CreateScope().ServiceProvider.GetRequiredService<ApplicationDbContext>();
    var pendingMigrations = await application.Database.GetPendingMigrationsAsync();
    if (pendingMigrations.Any()) // requires using System.Linq; the original null check was always true
        await application.Database.MigrateAsync();
}
Update your Dockerfile to match the code below, adding the migration lines.
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 3000
EXPOSE 3001
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["project/project.csproj", "project/"]
COPY ["Thunders_Repositories/Thunders_Repositories.csproj", "Thunders_Repositories/"]
COPY ["Thunders_Borders/Thunders_Borders.csproj", "Thunders_Borders/"]
COPY ["Thunders_UseCases/Thunders_UseCases.csproj", "Thunders_UseCases/"]
RUN dotnet restore "./project/project.csproj"
COPY . .
WORKDIR "/src/project"
RUN dotnet tool install --global dotnet-ef
ENV PATH="$PATH:/root/.dotnet/tools"
RUN dotnet build "./project.csproj" -c $BUILD_CONFIGURATION -o /app/build
CMD dotnet ef database update --environment Development --project src/project_Repositories
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./project.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "project.dll"]
The code below belongs to docker-compose; update yours as needed.
services:
  project:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_HTTP_PORTS=3000
      - ASPNETCORE_HTTPS_PORTS=3001
    container_name: project
    image: ${DOCKER_REGISTRY-}project
    build:
      context: .
      dockerfile: project/Dockerfile
    ports:
      - "3000:3000"
      - "3001:3001"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/home/app/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/home/app/.aspnet/https:ro
    networks:
      - compose-bridge
    depends_on:
      sqlserver:
        condition: service_healthy
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2022-preview-ubuntu-22.04
    container_name: sqlserver
    ports:
      - "1433:1433"
    environment:
      - SA_PASSWORD=passwork # must not be a weak password or SQL Server won't work; don't use 123456 hehehehe
      - ACCEPT_EULA=Y
    volumes:
      - ./sqlserver/data:/var/opt/mssql/data
      - ./sqlserver/log:/var/opt/mssql/log
    networks:
      - compose-bridge
    healthcheck:
      test: /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "leoQueres123" -Q "SELECT 1" -b -o /dev/null
      interval: 10s
      timeout: 3s
      retries: 10
      start_period: 10s
volumes:
  sqlserver:
networks:
  compose-bridge:
    driver: bridge
I hope I helped.
You should use default import of dynamic:
import dynamic from 'next/dynamic';
'''
Future<void> getLocation() async {
setState(() {
_isLoading = true;
});
// Optimized location settings with reduced accuracy for faster fetching
LocationSettings locationSettings;
if (defaultTargetPlatform == TargetPlatform.android) {
locationSettings = AndroidSettings(
accuracy: LocationAccuracy.reduced, // Using reduced accuracy for faster results
forceLocationManager: true,
);
} else if (defaultTargetPlatform == TargetPlatform.iOS || defaultTargetPlatform == TargetPlatform.macOS) {
locationSettings = AppleSettings(
accuracy: LocationAccuracy.reduced, // Using reduced accuracy for faster results
activityType: ActivityType.other,
// Reduced timeout since we expect faster response
);
} else {
locationSettings = LocationSettings(
accuracy: LocationAccuracy.reduced, // Using reduced accuracy for faster results
);
}
try {
// Check if location services are enabled
bool serviceEnabled = await Geolocator.isLocationServiceEnabled();
if (!serviceEnabled) {
serviceEnabled = await Geolocator.openLocationSettings();
if (!serviceEnabled) {
setState(() {
_isLoading = false;
});
AwesomeDialog(
context: context,
dialogType: DialogType.error,
animType: AnimType.scale,
title: 'Location Services Disabled',
titleTextStyle: TextStyle(
color: Color(0XFF0068B3),
fontWeight: FontWeight.bold,
fontSize: 16.sp,
),
desc: 'Please enable location services to continue.',
descTextStyle: TextStyle(
color: Color(0XFF585F65),
fontWeight: FontWeight.w500,
fontSize: 12.sp,
),
btnOkText: 'Open Settings',
buttonsTextStyle: TextStyle(
fontSize: 14.sp,
color: Colors.white,
),
btnOkColor: Colors.blue,
btnOkOnPress: () async {
await Geolocator.openLocationSettings();
},
).show();
return;
}
}
// Check and request permissions
LocationPermission permission = await Geolocator.checkPermission();
if (permission == LocationPermission.denied) {
permission = await Geolocator.requestPermission();
if (permission == LocationPermission.denied) {
setState(() {
_isLoading = false;
});
AwesomeDialog(
context: context,
dialogType: DialogType.warning,
animType: AnimType.scale,
title: 'Location Permission Required',
titleTextStyle: TextStyle(
color: Color(0XFF0068B3),
fontWeight: FontWeight.bold,
fontSize: 16.sp,
),
desc: 'Please grant location permission to use this feature.',
descTextStyle: TextStyle(
color: Color(0XFF585F65),
fontWeight: FontWeight.w500,
fontSize: 12.sp,
),
btnOkText: 'Request Permission',
buttonsTextStyle: TextStyle(
fontSize: 14.sp,
color: Colors.white,
),
btnOkColor: Colors.blue,
btnOkOnPress: () async {
await getLocation();
},
).show();
return;
}
}
if (permission == LocationPermission.deniedForever) {
setState(() {
_isLoading = false;
});
AwesomeDialog(
context: context,
dialogType: DialogType.error,
animType: AnimType.scale,
title: 'Location Permission Denied',
titleTextStyle: TextStyle(
color: Color(0XFF0068B3),
fontWeight: FontWeight.bold,
fontSize: 16.sp,
),
desc: 'Location permission is permanently denied. Please enable it from app settings.',
descTextStyle: TextStyle(
color: Color(0XFF585F65),
fontWeight: FontWeight.w500,
fontSize: 12.sp,
),
btnOkText: 'Open Settings',
buttonsTextStyle: TextStyle(
fontSize: 14.sp,
color: Colors.white,
),
btnOkColor: Colors.blue,
btnOkOnPress: () async {
await Geolocator.openAppSettings();
},
).show();
return;
}
// Try to get the last known location first
Position? lastKnownPosition = await Geolocator.getLastKnownPosition(
forceAndroidLocationManager: true,
);
if (lastKnownPosition != null ) {
// Use last known position if it's recent
latitude = lastKnownPosition.latitude;
longitude = lastKnownPosition.longitude;
print("lat and lon from lastknwnlocation ${latitude}${longitude}");
} else {
// Get current position with reduced accuracy settings
Position position = await Geolocator.getCurrentPosition(
locationSettings: locationSettings,
);
latitude = position.latitude;
longitude = position.longitude;
print("lat and lon from currentlocation ${latitude}${longitude}");
}
if (latitude != null && longitude != null) {
await getCurrentPlace();
} else {
setState(() {
_isLoading = false;
});
AwesomeDialog(
context: context,
dialogType: DialogType.error,
animType: AnimType.scale,
title: 'Location Not Found',
desc: 'Unable to fetch your current location. Please try again.',
btnOkText: 'Retry',
btnOkColor: Colors.blue,
btnOkOnPress: () async {
await getLocation();
},
).show();
}
} catch (e) {
print('Error fetching location: $e');
setState(() {
_isLoading = false;
});
// More specific error handling
String errorMessage = 'Failed to fetch location. Please try again.';
if (e is TimeoutException) {
errorMessage = 'Location fetch is taking too long. Please check your GPS signal and try again.';
}
AwesomeDialog(
context: context,
dialogType: DialogType.error,
animType: AnimType.scale,
title: 'Error',
titleTextStyle: TextStyle(
color: Color(0XFF0068B3),
fontWeight: FontWeight.bold,
fontSize: 16.sp,
),
desc: errorMessage,
descTextStyle: TextStyle(
color: Color(0XFF585F65),
fontWeight: FontWeight.w500,
fontSize: 12.sp,
),
btnOkText: 'Retry',
buttonsTextStyle: TextStyle(
fontSize: 14.sp,
color: Colors.white,
),
btnOkColor: Colors.blue,
btnOkOnPress: () async {
await getLocation();
},
).show();
} finally {
setState(() {
_isLoading = false;
});
}
}
'''
If your host computer is running Ubuntu: changing the variable PACKAGE_CLASSES in local.conf from package_rpm to package_deb made it work.
#PACKAGE_CLASSES ?= "package_rpm"
PACKAGE_CLASSES ?= "package_deb"
That was a lot of headache!
Ever since I first read about it, it has seemed to me that autodiff really is symbolic differentiation implemented in an efficient way for computer programs.
From my point of view, if you are not approximating the derivative with a small, non-zero "h", then from a mathematical perspective it is symbolic differentiation.
However, in the practical world there is a big difference in performance between working with symbolic expressions and their derivatives on a computer (for instance in Mathematica or SymPy) and using autodiff.
That is why my opinion today is that from a "mathematical" point of view there is no difference, while from a "computer science" point of view there is, which I summarize as "autodiff is an efficient implementation of symbolic derivatives".
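That summary can be made concrete with a toy forward-mode autodiff in Python (a sketch, not a real library): dual numbers give the exact derivative, with no finite-difference h and no symbolic expression tree ever built.

```python
import math

class Dual:
    """A value paired with its derivative, propagated by the chain rule."""
    def __init__(self, val, eps=0.0):
        self.val = val   # function value
        self.eps = eps   # derivative carried alongside
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f g' + f' g
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule for sin: d(sin u) = cos(u) * du
    return Dual(math.sin(x.val), math.cos(x.val) * x.eps)

# d/dx [x * sin(x)] at x = 2: the exact value is sin(2) + 2*cos(2)
x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
y = x * sin(x)
print(y.val, y.eps)
```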
I was looking for an open-source Java solution to convert DOC/DOCX to PDF without losing the design of the document.
I came to know about using LibreOffice on a Linux machine for this in the following post. Thank you, Anmol.
https://stackoverflow.com/a/73711219/3085879
So, I gave a try installing LibreOffice on windows machine, and running the following command.
C:\Users\ABCD\Desktop\sofficetest>"C:\Program Files\LibreOffice\program\soffice.exe" --convert-to pdf:writer_pdf_Export --outdir "C:\Users\ABCD\Desktop\sofficetest\output" *.docx
Please note that I have used double quotes for paths to avoid space related issues. It works like a charm. :)
I found the syntax of the command when I ran the following command on the command prompt.
C:\Users\ABCD\Desktop\sofficetest>"C:\Program Files\LibreOffice\program\soffice.exe" --help
Please refer to the highlighted section in the command run result screenshot.
The environment and software details are as follows.
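If you need to drive the same conversion from code, a minimal sketch of shelling out to soffice follows. Python is used here just for illustration; the same idea applies with Java's ProcessBuilder. The install path is an assumption, so adjust it to your machine.

```python
import subprocess

# Assumed default Windows install location; adjust to your machine.
SOFFICE = r"C:\Program Files\LibreOffice\program\soffice.exe"


def build_convert_command(soffice, docx_path, out_dir):
    """Build the soffice argument list; passing a list avoids manual quoting of spaces."""
    return [
        soffice,
        "--headless",
        "--convert-to", "pdf:writer_pdf_Export",
        "--outdir", str(out_dir),
        str(docx_path),
    ]


def docx_to_pdf(docx_path, out_dir):
    # check=True raises CalledProcessError if the conversion fails
    subprocess.run(build_convert_command(SOFFICE, docx_path, out_dir), check=True)
```

Because the arguments are passed as a list rather than one string, paths with spaces need no extra double quotes.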
In my case I had this issue because I was logged in as Administrator instead of my usual user id. Environment variable for path did not include location of VSCode for Administrator user. Updating the path variable fixed the issue.
As per AWS, MongoDB Compass uses the Node.js driver, which is not currently supported for IAM authentication.
Drivers that support Amazon DocumentDB 5.0 and the MONGODB-AWS authentication mechanism should work with the IAM authentication implementation in Amazon DocumentDB. There is a known limitation with NodeJS drivers which are currently not supported by Amazon DocumentDB for IAM authentication. NodeJS driver support will be updated once the limitation is resolved
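For drivers that do support MONGODB-AWS (PyMongo, for instance), the connection string looks roughly like the sketch below. The host is a placeholder, the helper name is my own, and this assumes AWS credentials are available in the environment (e.g. from an IAM role); the actual PyMongo call is commented out since it needs a live cluster.

```python
def build_docdb_iam_uri(host, port=27017):
    """Sketch of a DocumentDB IAM (MONGODB-AWS) connection string.

    authSource must be $external for MONGODB-AWS, and DocumentDB
    requires retryWrites=false; TLS is typically required as well.
    """
    return (f"mongodb://{host}:{port}/"
            "?authMechanism=MONGODB-AWS&authSource=%24external"
            "&tls=true&retryWrites=false")


# With pymongo installed and a reachable cluster:
# from pymongo import MongoClient
# client = MongoClient(build_docdb_iam_uri("mycluster.cluster-xxxx.us-east-1.docdb.amazonaws.com"))
```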
@Query("SELECT p FROM Partner p JOIN p.partnerIdentifier")
List<Partner> findAfterDates();
You don't need a NOT NULL check; a plain (inner) JOIN already filters out partners without an identifier, whereas with LEFT JOIN you'd just get null values.
When working with big-file editing, I would simply save the file I'm working on to a duplicate with :w somefile, do whatever editing I need by yanking, deleting, and putting, working on only one file, then do a bash diff or compare on the two files to see if the changes are indeed correct. If they are and you still have extensive editing ahead, save the working file to a second :w some2ndfile, and so forth. Keep track of your changes in a Note.file just in case you have to go back and reconstruct something; this is best done with line numbering turned on. Also, use NERDTree to navigate between files. Some mappings I put in .vimrc are: mapping the keystroke jj to go to command mode when in insert mode, a mapping to get a command prompt : when in command mode, mappings to save a file in both command and insert modes, and a mapping to open NERDTree in a side pane; ctrl-w navigates through the NERDTree panes. I also set my default window nearly as big as the screen so I have plenty of room navigating between and into files. If you are using vim as a quasi-IDE, another handy tool is to map a function key to generate a tag file in your /home/projects/directory/file, so that if you can't quite remember some command you used some time ago, it is easy to find without leaving vim using NERDTree.
TylerH, I think this happens when you try to use the v3 link instead of the v2 in the package sources. I did that in one project, then changed the link to v2 and the Package Manager wouldn't load. Used this fix and all is good.
Looks OK to me; this may be throwing it:
--border-radius: 50px;
That is a custom property declaration, which renders nothing by itself; it only takes effect where it's used, e.g.
border-radius: var(--border-radius);
But that should not matter. Copy the code into another editor or run it in another browser. Good luck!
I have figured it out! As it turns out, I had this in my web.php file:
Route::get('/starter', function () {
return view('logged.start');
})->name('start');
After deleting it everything started working as it should.
https://medium.com/@6386391ritesh/unity-websocket-support-ea2420df452c
You might find the answer there, as this article describes how to connect to a WebSocket server.
It is important to note that there are TWO SymmetricDS Docker images: one open-source and another for the PRO version (paid license). https://hub.docker.com/u/jumpmind
The open-source SymmetricDS Docker image depends on YOU to provide details of database connection in the .properties engine file (as described in the User guide). So this is working as designed. https://symmetricds.sourceforge.net/doc/3.15/html/user-guide.html
The PRO version of SymmetricDS has many more features, including a web console UI. This image works out of the gate and allows web console user to configure all database connections (multiple if you wish), provide license details and reboot to apply changes. https://downloads.jumpmind.com/symmetricds/doc/3.15/html/user-guide.html
@null doesn't work for me. I use
@android:color/transparent
Solved it: the call to UseStatusCodePagesWithReExecute needs to go before the call to UseRouting. That way, I can use UseExceptionHandler without 404s going unhandled by the ErrorController. It's worth noting that if you use UseExceptionHandler, the controller will need logic to log the exception; otherwise, it just gets swallowed.
After generating the document test.html (typically located at C:\xampp\htdocs\test.html or where your xampp is installed), simply try to access it using the address http://localhost/test.html
Login: Generate and Store the Token. On the React frontend, log in and store the token.
Token Verification and Auto-Logout: Use Axios interceptors in React to catch 401 Unauthorized responses globally. This way, if the token is expired or invalid, you can automatically log out the user and redirect them to the login page.
Token Deletion on Logout: In your Laravel API, use the logout function to revoke all tokens. When logging out on the frontend, clear the token from localStorage and remove it from the Axios defaults.
Token Expiration/Invalidation on Laravel: With Laravel Sanctum, tokens do not expire by default. To achieve auto-expiration, you can:
- Set a custom expiration time for tokens.
- Use a middleware to handle token expiration.
Complete Flow:
- User logs in: the token is generated by Laravel and saved in the React frontend's localStorage.
- User navigates protected routes: Axios sends the token with each request.
- Token verification: Laravel verifies the token; if it's expired, it responds with a 401 Unauthorized.
- Auto logout on expiration: the Axios interceptor in React catches the 401 and logs the user out automatically.
- User logout: on manual logout, the token is removed from both the frontend and backend, ensuring it cannot be reused.
There is now an implementation of RoPE at torchtune.modules.RotaryPositionalEmbeddings; torchtune is authored by the official pytorch team.
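For intuition, here is a from-scratch NumPy sketch of what a rotary positional embedding does. This mirrors the concept, not torchtune's actual code, and the split-half pairing convention used below is an assumption (some implementations interleave the pairs instead).

```python
import numpy as np


def rope(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, head_dim).

    Feature pairs (x[i], x[i + half]) are rotated by an angle that grows
    with the position index; head_dim must be even. Sketch only, mirroring
    the idea behind torchtune.modules.RotaryPositionalEmbeddings.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)          # per-pair frequencies
    angles = np.outer(np.arange(seq_len), freqs)       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Standard 2D rotation applied pairwise
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Because each pair is only rotated, the embedding preserves vector norms, and position 0 (angle 0) leaves the input unchanged; those two properties make quick sanity checks easy.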
How did you install it manually? Is there a site, or the Microsoft Store?
From my understanding (I'm not a dev), when faced with those kinds of errors (FP, Cocoa), it usually points to a few things:
I know that's not a lot, but hopefully, it will point you in the right direction.
Only app:layout_optimizationLevel="barrier" needs to be set on the corresponding parent ConstraintLayout.
If we use app:layout_optimizationLevel="none", layout rendering might take longer, as each constraint will be re-evaluated more thoroughly; for complex layouts, this can cause performance issues.
Thanks to @Diego who opened an issue with geopandas (https://github.com/geopandas/geopandas/issues/3433), I came across two different ways to solve this problem (which originates from pyproj):
As described at https://github.com/pyproj4/pyproj/issues/705, add these lines to the preamble:
import pyproj
pyproj.network.set_network_enabled(False)
As described at https://pyproj4.github.io/pyproj/stable/transformation_grids.html, add the proj-data package to the conda environment. It's also safest to use a single channel.
Either run
conda install -c conda-forge proj-data
or use this environment:
name: geotest
channels:
- conda-forge
dependencies:
- python
- geopandas
- proj-data
Add to Gemfile
gem 'activerecord-import'
I ran into something that is not quite the same (it caused an EOF error instead), but I changed my config in php.ini in the xdebug section as follows:
xdebug.connect_timeout_ms=20000
xdebug.max_nesting_level=2048
I think this might work; give it a try.
REG_MATCH(POSTAL_CODE, '^(?:\\d{3}|[A-Za-z]\\d[A-Za-z])')
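The same pattern (three digits, or letter-digit-letter, anchored at the start) can be verified outside the tool with Python's re module. Note that the doubled backslashes in the REG_MATCH expression are escaping inside the tool's string literal, not part of the regex itself.

```python
import re

# Single backslashes here; \\d above is just the tool's string escaping.
POSTAL_RE = re.compile(r'^(?:\d{3}|[A-Za-z]\d[A-Za-z])')


def matches_postal(code):
    """True if the string starts with 3 digits or letter-digit-letter."""
    return bool(POSTAL_RE.match(code))


print(matches_postal("123 Main"))  # True  (three digits)
print(matches_postal("K1A 0B1"))   # True  (letter-digit-letter)
print(matches_postal("AB123"))     # False
```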
Please make sure that the ConnectorExporter is selected in the CompositeExporter module when you are exporting the CAR file / running inside the embedded Micro Integrator.
if (_allowances[from][to] < value) {
revert ERC20InsufficientAllowance(to, _allowances[from][to], value); //error
}
_spendAllowance(from, to, value);
Here you are using to as the spender, but the spender is actually msg.sender.
Not sure what the difference is; maybe the version. But to make it work, I explicitly created a Docker network and ran the containers on it. They were then able to connect.
The only EXIF info I want is the GPS location and the direction the camera is pointing; mine has both.
EXIF info can be displayed by some of my software and by pics on the web. My thought: there is OCR software that lets you select an area and convert it to text; this might be a place to start.
If you want to continue using truffle/ganache, then you have to change the solidity version to 0.8.19.
I suggest, however, moving off Truffle/Ganache since they are deprecated; try Hardhat or Foundry (forge).
I was having the exact same error with a fairly simple dataset. Apparently the CRAN version of prophet is pretty outdated, so I installed the latest version with:
remotes::install_github('facebook/prophet@*release', subdir = 'R')
After re-running my code, the error simply went away.
Force a Full Build on Each Run:
You can configure Android Studio to always perform a clean build before running:
- Go to Run > Edit Configurations.
- Select your app configuration and scroll down to Before Launch.
- Click Add (+), then select Gradle-aware Make.
This ensures that a Gradle build task runs before launching the app, forcing it to rebuild and apply any XML changes. It works 100%.
I don't think you posted enough info to solve this problem, but that error means that somewhere in the code there is an object called identity_trans, and you are trying to access an attribute it doesn't have.
I would look for identity_trans in the code and see where it's trying to reference dataspace_is_numerical as a start.
You posted some of your code but you need to show the part of the code that has dataspace_is_numerical in it for us to really help you.
Alternatively you could append the following to the end of the line that is generating the error:
# type: ignore
That will silence any Pylance errors on that line only
I have been able to get around this limitation by adding a few checks for when the shift end time moves into the next day's morning. Here is the modified code, which works well for all intervals along with break deductions:
function calculateHoursGroups(startShift, endShift) {
const parseTime = (timeStr) => {
const [hours, minutes] = timeStr.split(":").map(Number);
return hours * 60 + minutes;
};
const formatHours = (minutes) => (minutes / 60).toFixed(2);
var check = false;
var endTime;
const NORMAL_START = parseTime("06:00");
const NORMAL_END = parseTime("18:00");
const EVENING_START = parseTime("18:00");
const EVENING_END = parseTime("21:00");
const NIGHT_START = parseTime("21:00");
const NIGHT_END = parseTime("06:00");
const BREAK_TIMES = {
normal: { start: parseTime("12:00"), end: parseTime("12:30") },
evening: { start: parseTime("20:00"), end: parseTime("20:30") },
night: { start: parseTime("04:00"), end: parseTime("04:30") },
midnightBreak: { start: 1680, end: 1710 }, // Midnight break from 28:00 to 28:30
};
const startTime = parseTime(startShift);
if(parseTime(endShift) < startTime) {
endTime = parseTime(endShift) + 1440;
check = true;
}
else {
endTime = parseTime(endShift);
}
const shiftDuration = endTime - startTime;
let breakDuration = 0;
if (shiftDuration == 6 * 60) breakDuration = 20;
else if (shiftDuration > 6 * 60) breakDuration = 30;
const calculateOverlap = (start, end, rangeStart, rangeEnd) => {
// console.log(start)
// console.log(end)
const overlapStart = Math.max(start, rangeStart);
const overlapEnd = Math.min(end, rangeEnd);
return Math.max(0, overlapEnd - overlapStart);
};
const calculateShiftHoursInRange = (rangeStart, rangeEnd) => {
return calculateOverlap(startTime, endTime, rangeStart, rangeEnd);
};
let normalHours = calculateShiftHoursInRange(NORMAL_START, NORMAL_END);
let eveningHours = calculateShiftHoursInRange(EVENING_START, EVENING_END);
let nightHours = calculateShiftHoursInRange(NIGHT_START, NIGHT_END + 1440);
if (breakDuration > 0) {
if (calculateOverlap(startTime, endTime, BREAK_TIMES.normal.start, BREAK_TIMES.normal.end) > 0) {
normalHours -= breakDuration;
} if (calculateOverlap(startTime, endTime, BREAK_TIMES.evening.start, BREAK_TIMES.evening.end) > 0) {
eveningHours -= breakDuration;
} if (calculateOverlap(startTime, endTime, BREAK_TIMES.night.start, BREAK_TIMES.night.end) > 0) {
nightHours -= breakDuration;
}
}
// console.log(shiftDuration);
if(check){
console.log(startTime);
console.log(endTime)
console.log("Into the Next Day");
// console.log(BREAK_TIMES.midnightBreak.start)
// console.log(BREAK_TIMES.midnightBreak.end)
if(nightHours == 360 && startTime < BREAK_TIMES.midnightBreak.start && endTime > BREAK_TIMES.midnightBreak.end){
nightHours = nightHours - 20;
}
if(nightHours > 360 && startTime < BREAK_TIMES.midnightBreak.start && endTime > BREAK_TIMES.midnightBreak.end){
nightHours = nightHours - 30;
}
if(endTime > 1800){
normalHours = endTime - 1800;
}
}
if (startTime < NORMAL_START) { // Handle shifts starting before 06:00
console.log("Into before normal start time")
console.log(startTime);
console.log(endTime);
nightHours = Math.min(NORMAL_START, endTime) - startTime;
normalHours = Math.max(0, endTime - NORMAL_START);
var totalHours = nightHours + normalHours;
console.log("Sum: ", totalHours)
if(totalHours == 360 && startTime < 240 && endTime > 270){
nightHours = nightHours - 20;
}
if(totalHours > 360 && startTime < 240 && endTime > 270){
nightHours = nightHours - 30;
}
console.log(nightHours);
console.log(normalHours)
}
return {
normal: formatHours(normalHours),
evening: formatHours(eveningHours),
night: formatHours(nightHours)
};
}
function testShiftCalculation(){
const result = calculateHoursGroups("16:00", "00:00");
console.log(result);
}
These added checks explicitly get the start and end times, compare them with the break intervals, and then compile the results. Any suggestions to improve the overall efficiency of the script would be much appreciated.
Using update.effective_chat.send_message instead of update.message.reply_text solved a similar problem for me. Thanks, Никита К.
The issue ended up being with Puppeteer; Heroku has an issue with it. To fix it, we have to add a flag to the build.
When Puppeteer is installed, it triggers a download of a large file such as Chrome. Depending on the network, this can cause a build to timeout. We've found that users no longer see timeouts after adding a PUPPETEER_SKIP_DOWNLOAD config var and setting it to true. This skips the Chrome download and has improved build time.
.slider-position-prev works; .slider-position-next does not.
Code (main.js):
const swiper = new Swiper('.slider', {
  loop: true,
  grabCursor: true,
  spaceBetween: 30,
  observer: true,
  observeParents: true,
  parallax: true,
  pagination: {
    el: '.swiper-pagination',
    clickable: true,
    dynamicBullets: true
  },
  navigation: {
    prevEl: '.slider-position-prev',
    nextEl: '.slider-position-next',
  },
  breakpoints: {
    0: { slidesPerView: 1 },
  },
});
I got the answer, thanks to the SmartBear support team.
The answer lies in TestExecute's behavior: the TestExecute application checks license availability before starting, and only if a license is available in the pool will it open.
Hence, in a loop, I kept running the command below until TestExecute opened:
Start-Process -FilePath $testExecutePath -ArgumentList "/SilentMode /Exit /AccessKey:$AccessKey" -PassThru -Wait
Your code is well-organized for processing keyboard and mouse events using chromedp. Since you're dealing with user-input events in JSON format from an external source, security checks are essential to prevent unexpected or malicious inputs from interfering with Chrome's behavior. Here are some recommendations for security checks and improvements:
For example:
- DispatchKeyEventParams: validate fields like Type, Modifiers, Key, etc., to ensure they are within the expected range or type.
- DispatchMouseEventParams: check the X, Y coordinates and the event Type to ensure they align with your application's requirements.
Using a JSON schema validator can help automate this, but a manual check for critical fields is also viable.
Limit Event Rate: if the input source can flood your application with a large volume of events, it might overwhelm your application or Chrome. Consider rate-limiting or debouncing these events, and implement a mechanism to limit the number of events processed per second or per client session.
Restrict Allowed Key Combinations: for keyboard events, you might want to restrict certain key combinations or sequences that could be harmful. For example:
- Disallow system keys: prevent triggering keys like F12, Alt+Tab, or Ctrl+Alt+Delete that might interfere with Chrome's operation or the host system.
- Block keys with side effects: prevent actions that could change Chrome's state, like Ctrl+W (close tab) or Ctrl+N (new window), unless these are explicitly required for your use case.
You can filter keys by validating the keyevt.Key, keyevt.Modifiers, and keyevt.Type fields, and reject combinations that could pose issues.
Coordinate Bounds Check for Mouse Events: for DispatchMouseEventParams, ensure that the X and Y values are within the bounds of the intended viewport. Large values, whether accidental or malicious, could cause erratic behavior or even unintended scrolling.
Limit Event Types for Both Keyboard and Mouse: ensure that only supported event types are allowed through. For example:
- For keyevt.Type, allow only legitimate types like "keyDown", "keyUp", or "char".
- For ptrevt.Type, allow only types like "mousePressed", "mouseReleased", "mouseMoved", etc.
Rejecting unsupported or unusual event types can reduce potential abuse.
Example: Implement an API token or other identity checks when clients connect.
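The original code is Go/chromedp, but the type and bounds checks described above are language-agnostic. Here is a sketch in Python; the dict field names mirror the CDP dispatchKeyEvent/dispatchMouseEvent params, and the allowed-type sets and viewport size are assumptions to adapt to your setup.

```python
# Allowed-type sets and viewport bounds are assumptions; adjust as needed.
ALLOWED_KEY_TYPES = {"keyDown", "keyUp", "rawKeyDown", "char"}
ALLOWED_MOUSE_TYPES = {"mousePressed", "mouseReleased", "mouseMoved", "mouseWheel"}
VIEWPORT_W, VIEWPORT_H = 1920, 1080


def validate_key_event(evt):
    """Reject keyboard events with an unsupported type."""
    return evt.get("type") in ALLOWED_KEY_TYPES


def validate_mouse_event(evt):
    """Reject mouse events with an unsupported type or out-of-viewport coords."""
    if evt.get("type") not in ALLOWED_MOUSE_TYPES:
        return False
    x, y = evt.get("x", -1), evt.get("y", -1)
    return 0 <= x <= VIEWPORT_W and 0 <= y <= VIEWPORT_H
```

In the Go code, the same checks would run right after unmarshalling the JSON and before calling Do(ctx), dropping any event that fails validation.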
I found the problem. When adding angle, geom_image seems to add a blank square around the image. The solution is to remove the background through the image_fun argument :)
geom_image(
aes(
image = image,
angle = orientation_degree
),
image_fun = \(img) {
img %>%
magick::image_background('none')
}
)
Help, I need answers to this too. What I've tried is the tile approach: since it takes 8 tiles to complete a 'circle' at radius 1, I did something like this (fixed so it doesn't divide by zero at r = 0, and converting degrees to radians for sin/cos):
radius = 10;
r = 1;                          // start at ring 1; ring 0 is just the origin
i = 0;
offset = 360 / (r * 8);         // 8 tiles per ring at radius 1
i = i + offset;
if (i >= 360) { i = 0; r = r + 1; }
if (r > radius) { r = 1; }
posx = Math.sin(i * Math.PI / 180) * r + originx;
posy = Math.cos(i * Math.PI / 180) * r + originy;