I just tested your case on macOS with Docker Desktop v27.4.0 and Docker Compose v2.31.0-desktop.2 (since you didn't specify your versions) and everything just works:
$ echo $ADC
/Users/mikalai/Documents/personal/compose/adc_creds.json
$ cat .env # just to have it
$ cat compose.yml
services:
  web:
    build: .
    ports:
      - "8000:5000"
    env_file:
      - .env
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/adc_creds.json
    volumes:
      - ${ADC}:/tmp/keys/adc_creds.json:ro
$ docker compose up -d
[+] Running 1/1
✔ Container compose-web-1 Started 0.1s
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
678b8a3058af compose-web "flask run --debug" 7 minutes ago Up 3 seconds 0.0.0.0:8000->5000/tcp compose-web-1
$ docker exec -it compose-web-1 /bin/sh
/code # ls -l $GOOGLE_APPLICATION_CREDENTIALS
-rw-r--r-- 1 root root 0 Feb 7 12:39 /tmp/keys/adc_creds.json
Env var is indeed not set. I can assume that you are either not exporting your env var (a plain ADC=... assignment is not visible to child processes, which is why docker compose does not see it; confirm by setting the var with export ADC=...), or running the docker compose command in a different shell session (e.g. in a different terminal) where the ADC var is not set. Could you please confirm that by running docker compose up ... and echo $ADC sequentially in one terminal? A third possibility is that your Docker / Compose version doesn't support direct host env var reading. Since I'm not sure whether this "feature" was actually added at some point (I think it should have always worked), I'm not going to go over the versions right now, but will just wait for your answer.
I am facing the same issue. In AWS Academy's AWS Details it mentions that students can use LabRole to assume roles, but in practice I found that LabRole does not have the AssumeRole policy. I haven't found a workaround yet, so I hope someone here can help.
How the code works

Function logic: the function print_backwards() reads a character from the user. If the character is not '.', the function calls itself recursively before printing the character. This means the function keeps calling itself until it reaches '.', without printing anything.

How recursion works in this case: each function call waits until the next character is processed. Once the recursion reaches '.', it stops calling itself and starts returning. As the function calls return in reverse order, the characters are printed in reverse order.

Step-by-step execution: for example, if the user enters A B C D . the recursive calls and returns will happen as follows:
Step Input Character Action Stack (Call Stack)
1 'A' Calls print_backwards(), waits. A
2 'B' Calls print_backwards(), waits. A → B
3 'C' Calls print_backwards(), waits. A → B → C
4 'D' Calls print_backwards(), waits. A → B → C → D
5 '.' Stops recursion, prints "Output in reverse order:". A → B → C → D
6 - Prints 'D'. A → B → C
7 - Prints 'C'. A → B
8 - Prints 'B'. A
9 - Prints 'A'. (Empty)
Final Output:
Enter a character ('.' to end program): ABCD.
Output in reverse order: DCBA
Each function call reads a character but does not print it immediately. Recursive calls keep adding to the stack until '.' is reached. Once the base case ('.') is reached, function calls start returning. Characters are printed in reverse order as the stack unwinds. This is why the input is displayed in reverse when printed.
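The original function is not shown in this answer; as an illustration, here is a minimal Python sketch of the same idea (hypothetical, reading characters from an iterator instead of stdin):

```python
def print_backwards(chars):
    # Read one character; if it is not '.', recurse BEFORE printing,
    # so the character is only printed while the stack unwinds.
    ch = next(chars)
    if ch != '.':
        print_backwards(chars)
        print(ch, end='')

print("Output in reverse order: ", end='')
print_backwards(iter("ABCD."))
print()  # prints: Output in reverse order: DCBA
```

Printing after the recursive call is the whole trick: swapping the two lines would print the characters in their original order instead.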
Debug -> Windows -> Show Diagnostic Tools
Alternatively, Ctrl + Alt + F2
You may need to click "Select tools" to have all your options show up in that window again. Next time you Debug, it should pop up automatically.
I just deleted the volume, then ran everything again, and it worked.
From SDK 33 (Tiramisu), instead of using updateConfiguration, the recommended way to change the language is:
AppCompatDelegate.setApplicationLocales(locale);
It is worth noting that this API should always be called after Activity.onCreate(), apart from any exceptions explicitly mentioned in the documentation.
Reference: setApplicationLocales
I used this code and it works fine; I tested it locally. But sometimes, instead of adding one, it adds several to the visitor count. For example, on the first refresh it adds 11, and on subsequent refreshes more than 15 are added per refresh. Is there a way to solve this problem?
I have a workaround so you do not have to create a relationship.
If you just want to be able to select a project on a dropdown and see the budget and the partners ID, I would just link the tables by a combined field name and use a slicer.
If you have a field with an identical name and data type, Looker should link the two together automatically, so that if you add a slicer to one it will filter the other table as well.
If the fields do not automatically join, you can create a calculated field in each data table that references the ID you want to use, and give both fields exactly the same name.
aaptOptions { noCompress("tflite") }
Use this instead of
aaptOptions { noCompress "tflite" }
This is not an answer, but I cannot add comments yet. I also faced some challenges when developing a React Native app. Installing and running expo doctor helped a lot. If you still face problems after successfully building the APK, it's good to connect a phone to Android Studio and check the logs in the terminal.
Try using 14.0 or 15.0; you also need to clear the cache and clean the build folder (Product -> Clean Build Folder).
"plugins": [
  [
    "expo-build-properties",
    {
      "ios": {
        "deploymentTarget": "14.0"
      }
    }
  ]
]
It has also happened to me. Sometimes it's due to certain extensions, or to the number of extensions you have installed. In my case there were over 30 extensions installed on my PC; I had to uninstall some and it worked just right for me. You can try the same.
Borrowing the new java.time classes:
Date.from(Instant.parse("2025-02-07T07:53:59Z"))
Or if still using joda:
DateTime.parse("2025-02-07T07:53:59Z").toDate()
If you're using Node.js to SSH into a server, you have a couple of solid options: ssh2 and hivessh. ssh2 has a callback-based approach that can become unmaintainable and ugly if you use a lot of nested or dependent SSH operations.
Hivessh is an ssh2 wrapper that provides a promise-based approach with some nice utilities, like checking whether a command exists.
https://github.com/NobleMajo/hivessh https://github.com/mscdex/ssh2
If you need to get the localised name of a specific service, so you can check its running status, in PowerShell you can use the following command:
Get-Service | Where-Object {$_.Name -like "*scard*" -or $_.DisplayName -like "*tarj*" -or $_.DisplayName -like "*card*"}
This can also be set using the Git CLI:
git config --global --add safe.directory '*'
This is not a solid / all-encompassing solution; however, a colleague of mine suggested setting a minimum height/width in pixels on the other controls and a minimum on the image panel. This allows the app to sit nicely and it scales quite well. Don't go all in on bindings; sometimes just having some simple base values in pixels is good enough.
There could be a situation where packets are duplicated by network load balancers without IGMP snooping, where somehow, in a complex network, a duplicated packet is not dropped even though it was already picked up by one of the nodes handling the load-balancing traffic. When that packet, due to routing issues, is delayed but still delivered, it could trigger an error message (almost a false positive) like this.
I see this, and it still does not work:
loca4-7132795:~/myapp$ flutter build apk
Checking the license for package Android SDK Build-Tools 34 in /nix/store/gvspaq4wcw58ld00ygansidsb3akpkpw-android-sdk-platform-tools-34.0.5/libexec/android-sdk/licenses
Warning: License for package Android SDK Build-Tools 34 not accepted.
Checking the license for package Android SDK Platform 35 in /nix/store/gvspaq4wcw58ld00ygansidsb3akpkpw-android-sdk-platform-tools-34.0.5/libexec/android-sdk/licenses
Warning: License for package Android SDK Platform 35 not accepted.
FAILURE: Build failed with an exception.
Failed to install the following Android SDK packages as some licences have not been accepted.
  build-tools;34.0.0 Android SDK Build-Tools 34
  platforms;android-35 Android SDK Platform 35
To build this project, accept the SDK license agreements and install the missing components using the Android Studio SDK Manager.
All licenses can be accepted using the sdkmanager command line tool:
  sdkmanager.bat --licenses
Or, to transfer the license agreements from one workstation to another, see https://developer.android.com/studio/intro/update.html#download-with-gradle
Using Android SDK: /nix/store/gvspaq4wcw58ld00ygansidsb3akpkpw-android-sdk-platform-tools-34.0.5/libexec/android-sdk
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. Get more help at https://help.gradle.org.
BUILD FAILED in 12s
Running Gradle task 'assembleRelease'... 14.7s
Gradle task assembleRelease failed with exit code 1
For those reading this and wondering how to solve it: I did it like this (of course there are multiple solutions to the same issue).
I added --service-account="myOwnServiceAccount@gcp..." to the YAML file that creates the Cloud Run container. This indeed seems to change the default service account into the one I specified, and then I could grant the needed roles to myOwnServiceAccount to make sure it works as expected.
All I had to do was delete the node_modules folder and the package-lock.json file and run npm install again. It built alright when I next ran the npm build command.
This issue is not a problem with VS Code; it depends on your computer's configuration. VS Code may be unresponsive because of low RAM, a slow processor, or too many background processes.
If your computer's configuration is good enough, check for background processes.
I came across the same problem. What I did was simply uninstall the extension from VS Code and install it again.
Theme Builder is not available in the free version of Elementor. In this new version of Elementor many things have moved around: instead of seeing Header and Footer settings under the Appearance menu, we now have to go to the plugin called Ultimate Addons for Elementor Lite. Here, even after creating my template as a Footer (dropdown option), I find that it treats my headers and footers as pages and they do not show up elsewhere. It is really frustrating trying to fix this! Let me now try the permalinks solution above and keep my fingers crossed.
In my case, where I moved the directory on the host system, I actually had to restart the Docker Engine completely to pick it up. It seems the old path hung around in some cache.
Based on this reference: https://medium.com/creative-technology-concepts-code/detect-device-browser-and-version-using-javascript-8b511906745
Package used: universal_html
import 'package:flutter/material.dart';
import 'package:universal_html/js.dart' as js;
class Home extends StatefulWidget {
const Home({super.key});
@override
State<Home> createState() => _HomeState();
}
class _HomeState extends State<Home> {
String data = "Tap on button below.";
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
crossAxisAlignment: CrossAxisAlignment.center,
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text(data),
ElevatedButton(
onPressed: () async {
final result =
await js.context.callMethod('showDeviceData', [""]);
setState(() {
data = result;
});
},
child: const Text("get os")),
],
),
),
);
}
}
Update your index.html (web/index.html):
<!DOCTYPE html>
<html>
<head>
<!--
If you are serving your web app in a path other than the root, change the
href value below to reflect the base path you are serving from.
The path provided below has to start and end with a slash "/" in order for
it to work correctly.
For more details:
* https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base
This is a placeholder for base href that will be replaced by the value of
the `--base-href` argument provided to `flutter build`.
-->
<base href="$FLUTTER_BASE_HREF">
<meta charset="UTF-8">
<meta content="IE=Edge" http-equiv="X-UA-Compatible">
<meta name="description" content="A new Flutter project.">
<!-- iOS meta tags & icons -->
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<meta name="apple-mobile-web-app-title" content="cric_live">
<link rel="apple-touch-icon" href="icons/Icon-192.png">
<!-- Favicon -->
<link rel="icon" type="image/png" href="favicon.png"/>
<title>cric_live</title>
<link rel="manifest" href="manifest.json">
</head>
<body>
<script src="flutter_bootstrap.js" async></script>
<script>
var os = [
{ name: 'Windows Phone', value: 'Windows Phone', version: 'OS' },
{ name: 'Windows', value: 'Win', version: 'NT' },
{ name: 'iPhone', value: 'iPhone', version: 'OS' },
{ name: 'iPad', value: 'iPad', version: 'OS' },
{ name: 'Kindle', value: 'Silk', version: 'Silk' },
{ name: 'Android', value: 'Android', version: 'Android' },
{ name: 'PlayBook', value: 'PlayBook', version: 'OS' },
{ name: 'BlackBerry', value: 'BlackBerry', version: '/' },
{ name: 'Macintosh', value: 'Mac', version: 'OS X' },
{ name: 'Linux', value: 'Linux', version: 'rv' },
{ name: 'Palm', value: 'Palm', version: 'PalmOS' }
]
var browser = [
{ name: 'Chrome', value: 'Chrome', version: 'Chrome' },
{ name: 'Firefox', value: 'Firefox', version: 'Firefox' },
{ name: 'Safari', value: 'Safari', version: 'Version' },
{ name: 'Internet Explorer', value: 'MSIE', version: 'MSIE' },
{ name: 'Opera', value: 'Opera', version: 'Opera' },
{ name: 'BlackBerry', value: 'CLDC', version: 'CLDC' },
{ name: 'Mozilla', value: 'Mozilla', version: 'Mozilla' }
]
var header = [
navigator.platform,
navigator.userAgent,
navigator.appVersion,
navigator.vendor,
window.opera
];
// Match helper function
function matchItem(string, data) {
var i = 0,
j = 0,
regex,
regexv,
match,
matches,
version;
for (i = 0; i < data.length; i += 1) {
regex = new RegExp(data[i].value, 'i');
match = regex.test(string);
if (match) {
regexv = new RegExp(data[i].version + '[- /:;]([\\d._]+)', 'i');
matches = string.match(regexv);
version = '';
if (matches) { if (matches[1]) { matches = matches[1]; } }
if (matches) {
matches = matches.split(/[._]+/);
for (j = 0; j < matches.length; j += 1) {
if (j === 0) {
version += matches[j] + '.';
} else {
version += matches[j];
}
}
} else {
version = '0';
}
return {
name: data[i].name,
version: parseFloat(version)
};
}
}
return { name: 'unknown', version: 0 };
}
// must match the method name called from the Dart file
function showDeviceData() {
var agent = header.join(' ');
var detectedOS = matchItem(agent, os);
var detectedBrowser = matchItem(agent, browser);
return `Browser: ${detectedBrowser.name} ${detectedBrowser.version}, OS: ${detectedOS.name} ${detectedOS.version}`;
}
</script>
</body>
</html>
You can deploy and test it, or test it locally.
We were also looking for a similar framework when we moved from Django to Node. First we tried AdminJS, but we were not satisfied with the quality and customization, so we built our own free and open-source framework: https://adminforth.dev/
You have to complete KYC; that is mandatory. Then enable Test mode and you can create API keys without website verification and make dummy payments without real money.
Just complete KYC selecting "ecommerce"; you will get a call for verification. Just tell them you are going to create an ecommerce website and for that you need a payment gateway.
As of today, on the latest 2.2 version of pandas, this correctly returns an empty Series if you apply on an empty DataFrame. I had the same problem with the 1.2 version, where I had to explicitly add the result_type='reduce'
argument, while I don't have any problems with the 2.2 version.
E.g.,
import pandas as pd
df = pd.DataFrame(columns=['A', 'B'])
# only works in 1.2 version if I add result_type='reduce'
df['C'] = df.apply(lambda x: {x['A'], x['B']}, axis=1, result_type='reduce')
# works fine in 2.2 without any additional args
df['C'] = df.apply(lambda x: {x['A'], x['B']}, axis=1)
Must have been a bug or something in earlier versions.
I was facing the same issue with playwright version 1.48
My fix was:
npm uninstall playwright
npm install -D @playwright/test
I would say you should go for both: one text-based search and one vector-based search, so you get to know both of them like the back of your hand. Then, once you are comfortable, you can choose either.
By the way, in the latest Solr versions vector search is also available, and Pinecone is configurable with Elasticsearch too :)
import torch
import torch.nn.functional as F
import math
def get_proj(volume, right_angle, left_angle, distance_to_obj=4, surface_extent=3, N_samples_per_ray=200, H_out=128, W_out=128, grid_sample_mode='bilinear'):
"""
Generates a 2D projection of a 3D volume by casting rays from a specified camera position.
This function simulates an orthographic projection of a 3D volume onto a 2D plane. The camera is positioned on a sphere
centered at the origin, with its position determined by the provided right and left angles. Rays are cast from the camera
through points on a plane tangent to the sphere, and the volume is sampled along these rays to produce the projection.
Args:
volume (torch.Tensor): A 5D tensor of shape (N, C, D, H, W) representing the 3D volume to be projected.
right_angle (float): The azimuthal angle (in radians) determining the camera's position around the z-axis.
left_angle (float): The polar angle (in radians) determining the camera's elevation from the xy-plane.
distance_to_obj (float, optional): The distance from the camera to the origin. Defaults to 4.
surface_extent (float, optional): The half-extent of the tangent plane in world units. Defines the plane's size. Defaults to 3.
N_samples_per_ray (int, optional): The number of sample points along each ray. Higher values yield more accurate projections. Defaults to 200.
H_out (int, optional): The height (in pixels) of the output 2D projection. Defaults to 128.
W_out (int, optional): The width (in pixels) of the output 2D projection. Defaults to 128.
Returns:
torch.Tensor: A 4D tensor of shape (1, 1, H_out, W_out) representing the 2D projection of the input volume.
Raises:
ValueError: If the input volume is not a 5D tensor.
RuntimeError: If the sampling grid is out of the volume's bounds.
Example:
```python
import torch
# Create a sample 3D volume
volume = torch.zeros((1, 1, 32, 32, 32))
volume[0, 0, 16, :, :] = 1 # Add a plane in the middle
# Define camera angles
right_angle = 0.5 # radians
left_angle = 0.3 # radians
# Generate the projection
projection = get_proj(volume, right_angle, left_angle)
# Visualize the projection
import matplotlib.pyplot as plt
plt.imshow(projection.squeeze().cpu().numpy(), cmap='gray')
plt.show()
```
Note:
- Ensure that the input volume is normalized to the range [-1, 1] for proper sampling.
- The function assumes an orthographic projection model.
- Adjust `N_samples_per_ray` for a trade-off between performance and projection accuracy.
"""
device = volume.device
ra = right_angle
la = left_angle
# Compute camera position p on the unit sphere.
p = torch.tensor([
math.cos(la) * math.cos(ra),
math.cos(la) * math.sin(ra),
math.sin(la)
]).to(device)
p *= distance_to_obj
# p is of shape (3,). (It lies at distance distance_to_obj from the origin.)
# The camera is at position p and always looks to the origin.
# Define the opposite point on the sphere:
q = -p # This will be the point of tangency of the projection plane.
# -------------------------------------------------------------------
# 3. Define an orthonormal basis for the projection plane tangent to the unit sphere at q.
# We need two vectors (right, up) lying in the plane.
# One way is to choose a reference vector not colinear with q.
# -------------------------------------------------------------------
ref = torch.tensor([0.0, 0.0, 1.0]).to(device)
if torch.allclose(torch.abs(q), torch.tensor([1.0, 1.0, 1.0]).to(device) * q[0], atol=1e-3):
    ref = torch.tensor([0.0, 1.0, 0.0]).to(device)
# Compute right as the normalized cross product of ref and q.
right_vec = torch.cross(ref, q, dim=0)
right_vec = right_vec / torch.norm(right_vec)
# Compute up as the cross product of q and right.
up_vec = torch.cross(q, right_vec, dim=0)
up_vec = up_vec / torch.norm(up_vec)
# -------------------------------------------------------------------
# 4. Build the image plane grid.
#
# We want to form an image on the plane tangent to the sphere at q.
# The plane is defined by the equation: q · x = 1.
#
# A convenient parameterization is:
#
# For (u, v) in some range, the 3D point on the plane is:
# P(u,v) = q + u * right_vec + v * up_vec.
#
# Note: Since q is a unit vector, q · q = 1 and q is perpendicular to both right_vec and up_vec,
# so q · P(u,v) = 1 automatically.
#
# Choose an output image resolution and an extent for u and v.
# -------------------------------------------------------------------
# Choose an extent so that the sampled points remain in [-1,1]^3.
# (Since our volume covers [-1,1]^3, a modest extent is needed.)
extent = surface_extent # you may adjust this value
u_vals = torch.linspace(-extent, extent, W_out).to(device)
v_vals = torch.linspace(-extent, extent, H_out).to(device)
grid_v, grid_u = torch.meshgrid(v_vals, u_vals, indexing='ij') # shapes: (H_out, W_out)
# For each pixel (u,v) on the plane, compute its world coordinate.
# P = q + u * right_vec + v * up_vec.
plane_points = q.unsqueeze(0).unsqueeze(0) + \
grid_u.unsqueeze(-1) * right_vec + \
grid_v.unsqueeze(-1) * up_vec
# plane_points shape: (H_out, W_out, 3)
# -------------------------------------------------------------------
# 5. For each pixel, sample along the ray from the camera p through the point P.
#
# Since the camera is at p and the ray passing through a pixel is along the line from p to P,
# the ray can be parameterized as:
#
# r(t) = p + t*(P - p), for t in [0, 1]
#
# t=0 gives the camera position, t=1 gives the intersection with the image plane (P).
# -------------------------------------------------------------------
N_samples = N_samples_per_ray
t_vals = torch.linspace(0, 1, N_samples).to(device) # shape: (N_samples,)
# Expand plane_points to sample along t:
# plane_points has shape (H_out, W_out, 3). We want to combine it with p.
# Compute (P - p): note that p is a vector; we can reshape it appropriately.
P_minus_p = plane_points - p.unsqueeze(0).unsqueeze(0) # shape: (H_out, W_out, 3)
# Now, for each t, compute the sample point:
# sample_point(t, u, v) = p + t*(P(u,v) - p)
# We can do:
sample_grid = p.unsqueeze(0).unsqueeze(0).unsqueeze(0) + \
t_vals.view(N_samples, 1, 1, 1) * P_minus_p.unsqueeze(0)
# sample_grid now has shape: (N_samples, H_out, W_out, 3).
# Add a batch dimension (batch size 1) so that grid_sample sees a grid of shape:
# (1, N_samples, H_out, W_out, 3)
sample_grid = sample_grid.unsqueeze(0)
# IMPORTANT: grid_sample expects the grid coordinates in the normalized coordinate system
# of the input volume. Here our volume is defined on [-1, 1]^3. Make sure that the computed
# sample_grid falls in that range. (Depending on extent, p, etc., you may need to adjust.)
# For our setup, choose the parameters so that sample_grid is within [-1, 1].
# -------------------------------------------------------------------
# 6. Use grid_sample to sample the volume along each ray and integrate.
# -------------------------------------------------------------------
# grid_sample expects input volume of shape [N, C, D, H, W] and grid of shape [N, D_out, H_out, W_out, 3].
proj_samples = F.grid_sample(volume, sample_grid, mode=grid_sample_mode, align_corners=False)
# proj_samples has shape: (1, 1, N_samples, H_out, W_out)
# For a simple projection (like an X-ray), integrate along the ray.
# Here we simply sum along the sample (ray) dimension.
proj_image = proj_samples.sum(dim=2) # shape: (1, 1, H_out, W_out)
return proj_image
It can be used like this:
import matplotlib.pyplot as plt
# this is volume that defines 3d object
volume = torch.zeros(1, 1, 32, 32, 32, device='cuda', requires_grad=True)  # create directly on the GPU so the tensor stays a leaf
def make_cube(volume):
volume[0, 0, :, 0, 0] = 1
volume[0, 0, :, -1, 0] = 1
volume[0, 0, :, 0, -1] = 1
volume[0, 0, :, -1, -1] = 1
volume[0, 0, 0, :, 0] = 1
volume[0, 0, -1, :, 0] = 1
volume[0, 0, 0, :, -1] = 1
volume[0, 0, -1, :, -1] = 1
volume[0, 0, 0, -1, :] = 1
volume[0, 0, 0, 0, :] = 1
volume[0, 0, -1, 0, :] = 1
volume[0, 0, -1, -1, :] = 1
with torch.no_grad():
make_cube(volume)
# Create a figure and axis
fig, ax = plt.subplots()
right_angle =0.5
left_angle = 0.2
proj_image = get_proj(volume, right_angle, left_angle,surface_extent=4)
proj_image=proj_image.cpu().detach()[0, :].transpose(0,-1)
# Display the new image
plt.imshow(proj_image, cmap='gray')
This took me a lot of time, but for me this worked:
@update:search="searchInput = $event"
instead of the :search-input.sync attribute.
I can't legally suggest a way for you to bypass EDR, but as for the rundll32.exe C:\windows\System32\comsvcs.dll technique you have used here: there is no EDR left on the market that will not catch this command. Also, opening a file, writing into it and saving it is not ignored by an EDR either; you need to do this through a process that is already doing it, so that the EDR ignores a process that is in its exclusions. In short, when attempting an EDR bypass you need to fully understand the working logic first. I can leave you a few links for this:
https://www.vaadata.com/blog/antivirus-and-edr-bypass-techniques/ https://medium.com/@ankitsinha81195_47457/a-deep-dive-into-edr-bypass-strategies-ed25b3929bb1 https://github.com/tkmru/awesome-edr-bypass
Did you add the required permission to use the internet? You are using the Image.network() widget, so if not, add this permission to your AndroidManifest.xml file:
<uses-permission android:name="android.permission.INTERNET"/>
The "hci_inquiry: No such device" error is produced by this example code:
// now it is device descriptor!
dev_id = hci_get_route(NULL);
if (dev_id < 0) {
perror("hci_get_route");
exit(EXIT_FAILURE);
}
else
{
text = "SUCCESS constructor " ;
qDebug()<< text;
}
if (dev_id < 0) {
perror("hci_get_route");
exit(EXIT_FAILURE);
}
else
{
text = "SUCCESS hci_get_route constructor \n" ;
text += " dev_id \t" ;
text += QString::number(dev_id);
//debug->append(text);
qDebug()<< text;
}
// back to device descriptor ??
int dd = hci_open_dev(dev_id);
//if(dd != 0 )
{
text = " device descriptor ";
text += QString::number(dd);
qDebug()<< text;
}
//debug->append(text);
int sock = hci_open_dev( dev_id );
if (dev_id < 0 || sock < 0) {
perror("opening socket");
exit(1);
}
if (sock < 0) {
text = "FAILURE hci_open_dev \n" ;
text += " sock \t" ;
text += QString::number(sock);
//debug->append(text);
qDebug()<< text;
perror("hci_open_dev");
exit(1);
}
else
{
text = "SUCCESS sock = hci_open_dev constructor \n" ;
text += " sock \t" ;
text += QString::number(sock);
//debug->append(text);
qDebug()<< text;
}
//debug->append(text);
// Perform inquiry (scan for devices)
num_rsp = hci_inquiry(sock, 1, 10, NULL, &ii, 0);
if (num_rsp < 0) {
text = "FAILURE num_rsp = hci_inquiry constructor \n" ;
text += " num_rsp \t" ;
text += QString::number(num_rsp);
//debug->append(text);
qDebug()<< text;
//perror("hci_open_dev");
//exit(1);
perror("hci_inquiry");
//exit(1);
}
else
{
text = "SUCCESS num_rsp = hci_inquiry constructor \n" ;
text += " num_rsp \t" ;
text += QString::number(num_rsp);
//debug->append(text);
qDebug()<< text;
//perror("hci_open_dev");
//exit(1);
perror("hci_inquiry");
//exit(1);
}
This is the actual MODIFIED partial code, and hci_inquiry FAILS for an unknown reason. The Bluetooth service is verified and running. Please help to resolve this. PS: I am not allowed to ask my own question yet.
I mean encoding the entire dict to a string, e.g.
"{'A': 123.02, 'B': 12.3}"
These dicts are the values in a pandas column.
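For illustration, a minimal sketch (the column name 'data' and its contents are made-up examples): str() produces exactly that repr form, while json.dumps would give JSON instead.

```python
import pandas as pd

# Hypothetical column of dicts
df = pd.DataFrame({'data': [{'A': 123.02, 'B': 12.3}, {'A': 1.5, 'B': 2.5}]})

# Encode each dict to its string representation
df['data_str'] = df['data'].apply(str)

print(df['data_str'].iloc[0])  # {'A': 123.02, 'B': 12.3}
```

Note that str() uses Python-style single quotes, which json.loads cannot parse back; if you ever need to round-trip the values, json.dumps/json.loads is the safer pair.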
You will need to use an SPA (Single Page Application) architecture.
Try using a framework like React with React Router, which is helpful for seamless page transitions in an SPA.
Simply restarting the computer helped in this scenario for me. I didn't have an index.lock file in the .git folder.
I’ve been working with Vue.js and local databases for a while, and I recommend giving RxDB in your vue app a try. It’s a great fit for Vue because it’s easy to integrate, supports reactivity out of the box, and lets you keep your data in sync between the browser and the server. One of RxDB’s standout features is its offline-first approach—it can store data locally and then sync seamlessly once the device is back online. Plus, it has plugins for encryption, conflict resolution, and server replication, which are all really helpful if you need to handle more complex use cases.
If RxDB isn’t your style or you want to explore alternatives, you could also look at PouchDB for a simpler offline-first approach, or Dexie.js if you prefer a more lightweight IndexedDB wrapper. Gun.js might be another interesting choice, especially if you’re building real-time collaborative applications.
I should have waited a bit longer, because it can be done with GQL, which may be helpful to others:
query productById(
$productId: Int!
# Use GraphQL Query Variables to inject your product ID
) {
site {
product(entityId: $productId) {
id
entityId
name
variants {
edges {
node {
sku
isPurchasable
}
}
}
}
}
}
Just note that the product needs to be 'visible on storefront'
For anyone using coc.nvim, :CocInstall coc-emoji would be the best option. Simply type : in insert mode and search for an emoji by its shortcode.
A similar, older plugin is vim-emoji, but I was not able to use it, possibly due to conflicts with coc.
The following few statements seem to resolve the issue where the variable page carries the previous reference. After loading the new page (by a click in this case):
current_url = await page.evaluate("window.location.href") # Get the new url
page = await driver.get(current_url)
As @Yoann commented, the overscroll-behavior CSS property does the trick.
However, if you include his solution as is
html, body {
overscroll-behavior: none;
}
you are disabling trackpad gestures navigation, which in my case were important.
You can disable elastic scrolling only on the y-axis, allowing users to navigate with their trackpad by doing:
html, body {
overscroll-behavior: auto none;
}
Unfortunately, there is no ready-to-use plugin to achieve this, but it can still be done with a custom-made ESLint plugin.
Please have a look at the eslint-plugin-import plugin and its newline-after-import rule (see this link). That rule only supports adding new lines (line count > 0), but you could create your own plugin; it is totally achievable using this plugin's source code as a starting point.
Never mind, I managed to find a workaround. It turned out the problem was one of the extensions I was using, Code Runner, which launched python instead of py. Maybe that's not the solution, but it's an acceptable workaround for now. If anyone else has any ideas, still let me know, and thank you!
Verify installation: ensure the module is installed in the environment you're using. Run pip list to confirm the module is present.
Use a specific Python: run the script by specifying the Python executable directly:
/path/to/python -u "path-to-file"
Replace /path/to/python with the full path to your Python executable.
Activate the virtual environment: if you're using a virtual environment, activate it:
source venv/bin/activate   (Linux/macOS)
venv\Scripts\activate      (Windows)
PYTHONPATH: ensure the PYTHONPATH variable includes the paths to your modules. You can set it temporarily in the terminal:
export PYTHONPATH=$PYTHONPATH:/path/to/your/modules
I hope this resolves your issue. If you're still having trouble, please share more details about your setup and the specific errors you're encountering.
The code snippet below solved it for me:
$teams = Get-Team
$count = 0
foreach ($team in $teams) {
$owner = Get-TeamUser -GroupId $team.GroupId -Role Owner
if ($owner.User -eq "<UPN of your user>") {
$team.DisplayName
$count++
}
}
"found $count teams where the target user is owner"
Not exactly what you are looking for, because it only works for the interval [0, 1):
from sympy import symbols
x = symbols('x', nonnegative=True)
x = x/(x+1)
(x >= 0, x < 1)
(True, True)
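The reason this substitution works can be sanity-checked numerically: t = x/(x+1) maps any nonnegative x into [0, 1).

```python
# Check that t = x/(x+1) stays in [0, 1) for a range of nonnegative inputs.
samples = [0.0, 0.5, 1.0, 10.0, 1e6]
mapped = [x / (x + 1) for x in samples]
assert all(0 <= t < 1 for t in mapped)
print(mapped[:3])  # → [0.0, 0.3333333333333333, 0.5]
```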
I was experiencing a similar problem on Ubuntu 20.04 while using Angular 17. I had previously developed an application that worked fine on Ubuntu 20.04, so I built my new project on top of the old one, but I still faced the same problem. I had the same problem with new applications created from scratch with different package managers. I think the package-lock.json file causes some incompatibilities when it is created. You can try using the package-lock.json file from an old working project; I solved the problem this way.
Yes! You can achieve this in Alpine.js without a MutationObserver by keeping the state in Alpine itself and updating the data-foo attribute together with fooEnabled, instead of watching the attribute from outside.
<div x-data="{ fooEnabled: false }">
<button @click="fooEnabled = !fooEnabled; fooEnabled ? $el.setAttribute('data-foo', '123') : $el.removeAttribute('data-foo')">
Toggle data-foo
</button>
<p x-text="fooEnabled ? 'Enabled' : 'Disabled'"></p>
</div>
Here the click handler toggles fooEnabled and, in the same step, sets or removes the data-foo attribute via $el.setAttribute('data-foo', '123') / $el.removeAttribute('data-foo'), so the reactive state and the attribute can never drift apart.
This is a clean and simple Alpine.js solution without needing an external MutationObserver.
If you want to connect XSOAR with Elasticsearch, XSOAR already has an Elastic integration; all you need to do here is enter the Server URL you get from Elastic into the integration. Afterwards, you can assign it to a playbook you want. You don't need to pull any special API or anything. This video may work for you;
Use ~/Library/Android/sdk/emulator/emulator
instead of ~/Library/Android/sdk/tools/emulator
I had the same issue, and changing the quality from 1 to 0 also solved it for me! Thanks a lot, I was losing my hair over this one :D
const checkedArray = await Promise.all(
originalArray.map(async (elem) => {
const someCheck = await myAsyncCheckingMethod(elem);
return someCheck ? elem : null;
})
);
const finalArray = checkedArray.filter((elem) => elem !== null);
This works if you are not expecting nulls as regular elements in the original array.
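For comparison, the same run-all-checks-then-filter pattern in Python with asyncio (a sketch; `is_even` is a stand-in for whatever async check you actually have):

```python
import asyncio

async def is_even(n):
    # Stand-in for a real asynchronous check (e.g. a network call).
    await asyncio.sleep(0)
    return n % 2 == 0

async def filter_async(items):
    # Run every check concurrently, then drop items whose check failed.
    results = await asyncio.gather(*(is_even(i) for i in items))
    return [item for item, keep in zip(items, results) if keep]

evens = asyncio.run(filter_async([1, 2, 3, 4]))
print(evens)  # → [2, 4]
```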
I have made the two SIP registrations. I have attached one to scenario A (OpenAI) and the other to a new scenario B (a testing scenario). I have a routing rule for each scenario; do I need just one that applies to both?
scenario A:
require(Modules.ASR)
async function sendUtterance(query,session){
const url = "xxxxxxxx"
Logger.write("Session: " + session)
const options = {
headers: {
"Content-Type": "application/json",
"Ocp-Apim-Subscription-Key": "xxxxxxx"
},
method: "POST",
postData: JSON.stringify({
"query": query,
"session": session,
"origin": "phone"
})
}
const result = await Net.httpRequestAsync(url, options);
Logger.write(result.text);
const response = JSON.parse(result.text)
var textresponse = ""
response.responses.forEach(element => {
textresponse += element.text + ". "
});
return textresponse
}
var waitMessagePlaying = false;
var isProcessing;
var isResponding = false;
var waitingMessages = [
"Un momento, por favor.",
"Deme un instante.",
"Permítame un momento.",
"Necesito un segundo.",
"Deme un momento.",
"Un instante si es tan amable.",
"Un segundito, por favor.",
"Espere un momento, por favor.",
"Un segundito, si es tan amable.",
];
var waitingIndex = 0;
async function waitMessage(){
if (waitMessagePlaying){
return
}
waitMessagePlaying = true;
function recursiveWait() {
if (isProcessing && !isResponding) {
var message = waitingMessages[waitingIndex];
player = VoxEngine.createTTSPlayer(message, {
language: defaultVoice,
progressivePlayback: true
})
player.sendMediaTo(call)
player.addMarker(-300)
player.addEventListener(PlayerEvents.PlaybackMarkerReached, (ev) => {
player.removeEventListener(PlayerEvents.PlaybackMarkerReached)
setTimeout(recursiveWait,6000)
waitingIndex++;
if (waitingIndex == waitingMessages.length) {
waitingIndex = 0;
}
})
} else {
waitMessagePlaying = false;
}
}
recursiveWait();
}
async function queryProc(messages,session){
if (messages == ""){
return
}
if (messages != "/start"){
timer = setTimeout(waitMessage,2000)
}
isProcessing=true;
try {
let ts1 = Date.now();
var res = await sendUtterance(messages, session)
let ts2 = Date.now();
Logger.write("Petición completada en " + (ts2 - ts1) + " ms")
isResponding = true;
if (timer){clearTimeout(timer);}
if (waitMessagePlaying) {
player.stop()
waitMessagePlaying = false;
}
player = VoxEngine.createTTSPlayer(res,
{
language: defaultVoice,
progressivePlayback: true
})
player.sendMediaTo(call)
player.addMarker(-300)
} catch(err){
player = VoxEngine.createTTSPlayer('Disculpe, no le escuché, ha habido un error en sistema, ¿me lo puede repetir?',
{
language: defaultVoice,
progressivePlayback: true
})
player.sendMediaTo(call)
player.addMarker(-300)
}
player.addEventListener(PlayerEvents.PlaybackMarkerReached, (ev) => {
player.removeEventListener(PlayerEvents.PlaybackMarkerReached)
call.sendMediaTo(asr)
isProcessing=false;
isResponding = false;
})
}
var call, player, asr, timer;
const defaultVoice = VoiceList.Microsoft.Neural.es_ES_LiaNeural
// Handle the incoming call
VoxEngine.addEventListener(AppEvents.CallAlerting, (e) => {
call = e.call
const session = uuidgen()
asr = VoxEngine.createASR({
profile: ASRProfileList.Microsoft.es_ES,
singleUtterance: true
})
// Process the ASR result
asr.addEventListener(ASREvents.Result, async (e) => {
messages = e.text
Logger.write("Enviando query '" + messages + "' al dto")
// Response time
if (!isProcessing){
await queryProc(messages,session)
}
})
call.addEventListener(CallEvents.Connected, async (e) => {
await queryProc('/start',session)
})
call.addEventListener(CallEvents.Disconnected, (e) => {
VoxEngine.terminate()
})
call.answer()
})
scenario B:
require(Modules.ASR);
VoxEngine.addEventListener(AppEvents.CallAlerting, e => {
e.call.startEarlyMedia();
e.call.say("Hola melón, soy el contestador de las clínicas", VoiceList.Microsoft.Neural.es_ES_ElviraNeural);
e.call.answer();
});
In the documentation I see this method:
const call = VoxEngine.callSIP("sips:[email protected]:5061", {
callerid: "5510",
displayName: "Steve Rogers",
password: "shield",
authUser: "captain",
});
But I don't know how to integrate it into scenario A. Can you help?
I have data in sheet1 I want to place a few data based on multiple criteria onto another sheet I tried excel but couldn’t Can you guide with VBA
I started using an Espressif ESP32-C3-Mini and I'm not able to get any data in the monitor! To see any output in the monitor window I need to change the baud rate, e.g. from 115200 to 9600, and then the monitor works. Can anyone test this code (using the NimBLE-Arduino library; see GitHub to download it)?
#include <NimBLEDevice.h>
void setup()
{
Serial.setTxTimeoutMs(0); // Use this when USB CDC On Boot is enabled.
Serial.begin(115200);
// delay(3000); // Wait serial monitor
Serial.println("Init the NimBLE...");
NimBLEDevice::init(""); // Init the NimBLE
Serial.println("NimBLE initialized!");
}
void loop()
{
}
I tried many setups in the IDE (I use 1.8.19) but nothing works... Any idea? Thanks.
I did something like this:
var culture = System.Globalization.CultureInfo.CreateSpecificCulture("pl");
DateTime.ParseExact("Luty", "MMMM", culture).Month
You can specify any other culture to match your preferences :)
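The same idea in Python for comparison (a sketch; %B parses full month names under the active locale, so this example assumes the default English/C locale rather than Polish):

```python
from datetime import datetime

# %B matches a full month name in the current locale.
month = datetime.strptime("February", "%B").month
print(month)  # → 2
```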
Create a new XML file, res/drawable/list_divider.xml, and apply it in your ListView.
IntelliJ IDEA 2024.1.4 (Build #IU-241.18034.62, built on June 20, 2024)
It is 2025 and this issue is still here... I have to use this product because my company imposes it. Such a simple thing, just refactoring the package name... I'm upset.
To @Berbardo,
It is possible to get the correct answer into a cell in the spreadsheet.
To do that, enable quiz settings on the form through newForm.setIsQuiz(true), where newForm is a Form instance (such as newForm = FormApp.create('new Form')).
Only this worked... I tried the top 3 answers and modified skyho's answer.
services:
  redis-stack:
    image: redis/redis-stack:latest
    ports:
      - "127.0.0.1:6379:6379"
      - "127.0.0.1:8001:8001"
    environment:
      REDIS_ARGS: "--requirepass ${REDIS_HOST_PASSWORD}"
    env_file:
      - .env
    volumes:
      - ./data:/data
    restart: unless-stopped
.env file
REDIS_HOST_PASSWORD=YourStrongPassword
Had a lot of issues with this, found a native solution provided by Apple after three days: https://stackoverflow.com/a/79420711/11890457
This is because you are mixing "ZYX" (intrinsic rotations) with "zyx" (extrinsic rotations). Use the same format everywhere and you should get consistent results.
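The relationship can be checked directly from the composition order of the rotation matrices: intrinsic "ZYX" matches extrinsic "xyz", not extrinsic "zyx". A self-contained sketch in plain Python (no rotation library assumed):

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

ax, ay, az = 0.3, -0.7, 1.1

# Intrinsic Z-Y'-X'' ("ZYX"): each rotation is about the new body axes,
# which composes as Rz * Ry * Rx in the fixed frame.
intrinsic_ZYX = matmul(rz(az), matmul(ry(ay), rx(ax)))
# Extrinsic z, then y, then x ("zyx"): fixed axes, opposite order: Rx * Ry * Rz.
extrinsic_zyx = matmul(rx(ax), matmul(ry(ay), rz(az)))
# Extrinsic x, then y, then z ("xyz"): Rz * Ry * Rx, identical to intrinsic "ZYX".
extrinsic_xyz = matmul(rz(az), matmul(ry(ay), rx(ax)))

same = all(abs(intrinsic_ZYX[i][j] - extrinsic_xyz[i][j]) < 1e-12
           for i in range(3) for j in range(3))
different = any(abs(intrinsic_ZYX[i][j] - extrinsic_zyx[i][j]) > 1e-6
                for i in range(3) for j in range(3))
print(same, different)  # → True True
```

So mixing "ZYX" in one place with "zyx" in another silently reverses the rotation order, which is exactly the inconsistency described above.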
I've had a similar question to yours in the last few days, and here is what I found:
A transaction is not created, but a synchronization (and a Hibernate session) is. Because the method is annotated with @Transactional, Spring creates a Hibernate session for the whole method's duration. Upon creation, Hibernate reserves a DB connection from the pool.
Because the propagation setting is NEVER, no transaction is created (no BEGIN issued).
So the mistake we made is to think that the transaction and the Spring synchronization have the same scope. They do not, and because Hibernate is used, some extra resources may get tied up.
I suggest extra reading here: https://github.com/spring-projects/spring-framework/issues/31209 The user had a similar problem, and there are actually some Hibernate settings suggested to not reserve a connection without an active transaction (like DELAYED_ACQUISITION_AND_RELEASE_AFTER_STATEMENT).
There are other consequences: because the Hibernate session is active, its L1 cache is also enabled, so any queries made via JpaRepository or other Hibernate beans will be cached (even without a transaction). This can add another layer of problems (which I have run into personally).
Suggestion for readers: if you can, drop Hibernate (spring-data-jpa) in favour of JDBC (spring-data-jdbc). I prefer to get what I see and not be surprised by framework magic.
Did you find a solution? I have the exact same issue.
https://github.com/tmds/Tmds.Ssh is a modern .NET SSH client library that supports certificates.
After further investigation, I found that the error was caused by an incorrect value in the component_config table. Specifically, the provider_id for the Keycloak AES encryption was mistakenly updated with an invalid value.
The correct provider_id should have been aes-generated, but it was replaced by an invalid value, causing the encryption key error.
Once I reverted the change and restored the correct value for the provider_id, the issue was resolved, and the Keycloak console page now loads without errors.
Thanks for reading!
curl https://tradeit.gg/api/steam/v1/steams/float-item-finder?inspectLink=steam:%2F%2Frungame%2F730%2F76561202255233023%2F%2Bcsgo_econ_action_preview%2520M5176417937479079495A40112471881D4784131275388679845
Now I just need to replace the M..., A..., and D... values, and I get the info.
Did you get any solution for this? I have to work on the same kind of requirement. Can you share your implementation if yours is working?
Currently, the answer seems to be that you can't.
There is a recent suggestion on their GitHub to add such a feature: https://github.com/vercel/next.js/discussions/51672
As far as I understand, you are trying to set up an automation on a playbook that runs a script you have written at certain intervals. Creating it as a job, not as a task, will meet your request: jobs in XSOAR are exactly the feature that exists for this. You can look here for details;
You could try this one : https://github.com/sami-fennich/TextTool
or this one : https://github.com/sami-fennich/Edit
At least I was able to figure this question out. Here is the code that works:
import nodriver as uc
import asyncio
async def b_start_chrome_and_log_in():
driver = await uc.start()
await driver.main_tab.maximize()
FIND IT!
In the response to an interactive message there are two wamids: the first is the one of the message received, the second is the one of the message sent.
In your case, you are passing different objects using polymorphism; they are created based on extensions of an abstract class. While invoking those methods you are referring to the abstract type instead of the concrete one, and that abstract type may or may not declare the method you are calling.
In the manifest file I was using the wrong user:
readOnly: true
securityContext:
allowPrivilegeEscalation: false
runAsUser: 10001
AutoML object detection can be approached by considering several factors that influence performance. If optimizing the current setup doesn't yield the desired efficiency, consider deploying the model for online predictions. Online predictions handle individual requests in real-time, which can be more efficient for smaller datasets or when immediate results are required. However, this approach may not be suitable for large-scale batch processing due to potential scalability constraints.
By systematically addressing these areas, you can enhance the efficiency of your batch predictions in AutoML object detection tasks.
a) will always return false, and it will set this.restricted to false. The test to prove this would be to set this.restricted to true and expect the output to be true.
The description you provided looks like it would be covered by the ConveyorBelt
block rather than the pipeline. The conveyor belt assumes that any bulk material (and fluids are bulk materials in AnyLogic) takes a specific amount of time to reach the other side, based on the speed and length of the belt. You can have it be fed for 1 h, then idle for 15 min, then fed again for 1 h, and it will show up at the output end of the conveyor belt after a fixed delay (based on speed/length).
but... our problem is a little more complicated, as we want to use column definitions for a datatable. With c:set the value will not be displayed, because it refers to the "var" attribute of the datatable.
So the first datatable doesn't display data, and the second datatable displays the data, but we can't implement a loop for each column inside the datatable.
Is there any other solution?
Here is a short sample:
TestView.xhtml
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import at.sozvers.kgkk.faces.ViewScoped;
import jakarta.annotation.PostConstruct;
import jakarta.inject.Named;
@ViewScoped
@Named("testView")
public class TestView implements Serializable
{
private static final long serialVersionUID = 4290918565613185179L;
private List<Product> products = new ArrayList<>();
@PostConstruct
public void init()
{
if (products.isEmpty())
{
products.add(new Product(1000, "f230fh0g3", "Bamboo Watch"));
products.add(new Product(1001, "nvklal433", "Black Watch"));
products.add(new Product(1002, "zz21cz3c1", "Blue Band"));
}
}
public List<Product> getProducts()
{
return products;
}
public void setProducts(List<Product> products)
{
this.products = products;
}
}
test.xhtml
<h:head>
<title>PF TEST VIEW</title>
</h:head>
<h:body id="body">
<!-- bean and dto definitions -->
<ui:param name="bean" value="#{testView}" />
<ui:param name="DTO_List" value="#{bean.products}" />
<ui:param name="count_columns_max" value="3" />
<ui:param name="updateViewSpecificComponents" value="#{datatableId}" />
<h:form id="form">
<!-- initialize all columns for 1st datatable -->
<c:forEach begin="1" end="#{count_columns_max}" var="idx">
<c:set var="#{'column'.concat(idx).concat('_label')}" value="" scope="view" />
<c:set var="#{'column'.concat(idx).concat('_value')}" value="" scope="view" />
</c:forEach>
<!-- define view specific columns for 1st datatable -->
<c:set var="column1_label" value="Id" scope="view" />
<c:set var="column1_value" value="#{data.id}" scope="view" />
<c:set var="column2_label" value="Code" scope="view" />
<c:set var="column2_value" value="#{data.code}" scope="view" />
<c:set var="column3_label" value="Name" scope="view" />
<c:set var="column3_value" value="#{data.name}" scope="view" />
<ui:param name="datatable_rowKey" value="#{data.id}" />
<ui:param name="datatableId" value="dataTable1" />
<h2>DATATABLE 1: with c:set vor value</h2>
<p></p>
<div>
<p:dataTable id="#{datatableId}" value="#{DTO_List}" var="data" rowKey="#{datatable_rowKey}">
<c:forEach begin="1" end="#{count_columns_max}" var="idx">
<p:column headerText="#{viewScope['column' += idx += '_label']}">
<h:outputText value="#{viewScope['column' += idx += '_value']}" />
</p:column>
</c:forEach>
</p:dataTable>
</div>
<!-- initialize all columns for 2nd datatable -->
<c:forEach begin="1" end="#{count_columns_max}" var="idx">
<c:set var="#{'column'.concat(idx).concat('_label')}" value="" scope="view" />
<ui:param name="#{'column'.concat(idx).concat('_value')}" value="" />
</c:forEach>
<!-- define view specific columns for 2nd datatable -->
<c:set var="column1_label" value="Id" scope="view" />
<ui:param name="column1_value" value="#{data.id}" />
<c:set var="column2_label" value="Code" scope="view" />
<ui:param name="column2_value" value="#{data.code}" />
<c:set var="column3_label" value="Name" scope="view" />
<ui:param name="column3_value" value="#{data.name}" />
<ui:param name="datatable_rowKey" value="#{data.id}" />
<ui:param name="datatableId" value="dataTable2" />
<h2>DATATABLE 2: with ui:param for value</h2>
<p></p>
<div>
<p:dataTable id="#{datatableId}" value="#{DTO_List}" var="data" rowKey="#{datatable_rowKey}">
<p:column headerText="#{viewScope['column1_label']}">
<h:outputText value="#{column1_value}" />
</p:column>
<p:column headerText="#{viewScope['column2_label']}">
<h:outputText value="#{column2_value}" />
</p:column>
<p:column headerText="#{viewScope['column3_label']}">
<h:outputText value="#{column3_value}" />
</p:column>
</p:dataTable>
</div>
</h:form>
</h:body>
I've had to switch the library to ramani-maps, which is still a wrapper for maplibre-gl but more complete.
The map filtering was easy to implement. I've also uploaded the extension to a GitHub gist under the MIT License:
// Cache each layer's original filter so it can be restored later.
private val originalFilters = mutableMapOf<String, Expression?>()
fun Style.filterLayersByDate(date: LocalDate) {
val dateRange = DateRange.fromDate(date)
for (layer in this.layers) {
when (layer) {
is LineLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
is FillLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
is CircleLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
is SymbolLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
is HeatmapLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
is FillExtrusionLayer -> {
if (!originalFilters.containsKey(layer.id)) {
originalFilters[layer.id] = layer.filter
}
layer.resetFilter()
layer.setFilter(constrainExpressionFilterByDateRange(originalFilters[layer.id], dateRange))
}
else -> null
}
}
}
private fun constrainExpressionFilterByDateRange(
filter: Expression? = null,
dateRange: DateRange,
variablePrefix: String = "maplibre_gl_dates"
): Expression {
val startDecimalYearVariable = "${variablePrefix}__startDecimalYear"
val startISODateVariable = "${variablePrefix}__startISODate"
val endDecimalYearVariable = "${variablePrefix}__endDecimalYear"
val endISODateVariable = "${variablePrefix}__endISODate"
val dateConstraints = Expression.all(
Expression.any(
Expression.all(
Expression.has("start_decdate"),
Expression.lt(
Expression.get("start_decdate"),
Expression.`var`(endDecimalYearVariable)
)
),
Expression.all(
Expression.not(Expression.has("start_decdate")),
Expression.has("start_date"),
Expression.lt(
Expression.get("start_date"),
Expression.`var`(startISODateVariable)
)
),
Expression.all(
Expression.not(Expression.has("start_decdate")),
Expression.not(Expression.has("start_date"))
)
),
Expression.any(
Expression.all(
Expression.has("end_decdate"),
Expression.gte(
Expression.get("end_decdate"),
Expression.`var`(startDecimalYearVariable)
)
),
Expression.all(
Expression.not(Expression.has("end_decdate")),
Expression.has("end_date"),
Expression.gte(
Expression.get("end_date"),
Expression.`var`(startISODateVariable)
)
),
Expression.all(
Expression.not(Expression.has("end_decdate")),
Expression.not(Expression.has("end_date"))
)
)
)
val finalExpression = if (filter != null) {
Expression.all(dateConstraints, filter)
} else {
dateConstraints
}
return Expression.let(
Expression.literal(startDecimalYearVariable), Expression.literal(dateRange.startDecimalYear),
Expression.let(
Expression.literal(startISODateVariable), Expression.literal(dateRange.startISODate),
Expression.let(
Expression.literal(endDecimalYearVariable), Expression.literal(dateRange.endDecimalYear),
Expression.let(
Expression.literal(endISODateVariable), Expression.literal(dateRange.endISODate),
finalExpression
)
)
)
)
}
fun Layer.resetFilter() {
originalFilters[this.id]?.let { originalFilter ->
when (this) {
is LineLayer -> setFilter(originalFilter)
is FillLayer -> setFilter(originalFilter)
is CircleLayer -> setFilter(originalFilter)
is SymbolLayer -> setFilter(originalFilter)
is HeatmapLayer -> setFilter(originalFilter)
is FillExtrusionLayer -> setFilter(originalFilter)
else -> {}
}
}
}
The profile has been validated, but I experienced the same problem. I used the support URL below: "https://support.google.com/accounts/thread/218373393/identity-verifications-with-play-console-keeps-saying-the-uploaded-document-is-poorly-lit?hl=en" and entered the name and address exactly as they appear on the document, with the same case (upper/lower). Upload an image rather than a document.
If you are using Visual Studio Code on Windows, first install Node.js on your device; then in VS Code run create-react-app with your app name and you are good to go. Then also install react-router-dom in your app.
as this link says:
Use caret requirements for dependencies, such as "1.2.3", for most situations. This ensures that the resolver can be maximally flexible in choosing a version while maintaining build compatibility.
Avoid overly narrow version requirements if possible. For example, if you specify a tilde requirement like bar="~1.3", and another package specifies a requirement of bar="1.4", this will fail to resolve, even though minor releases should be compatible.
Using tilde requirements doesn't allow compatibility between different minor versions. This suggests it works the same as for npm, because caret requirements allow a wider range of versions, spanning different minors.
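A toy model of the two range semantics makes the difference concrete (simplified; real Cargo/npm resolution has more rules, e.g. around 0.x versions):

```python
def caret_ok(req, ver):
    # Caret-style ("1.2.3" / "^1.2.3"): same major version, at or above req.
    return ver[0] == req[0] and ver >= req

def tilde_ok(req, ver):
    # Tilde-style ("~1.3"): same major AND minor version, at or above req.
    return ver[:2] == req[:2] and ver >= req

# A caret requirement on 1.3 can co-resolve with another crate's bar = "1.4":
print(caret_ok((1, 3, 0), (1, 4, 0)))  # → True
# A tilde requirement on 1.3 cannot, which is the failure described above:
print(tilde_ok((1, 3, 0), (1, 4, 0)))  # → False
```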
I just wrote a big rant about this on Reddit:
https://www.reddit.com/r/arm/comments/1igprj8/arm_branch_prediction_hardware_is_fubar/
I present an example there where the condition codes are set 14 instructions in advance, and at least 40 clock cycles.
In addition to adding the resolver package as a dev dependency, I also needed to add this setting in .eslintrc.json:
npm install -D eslint-import-resolver-typescript
"settings": {
"import/resolver": { "typescript": { "alwaysTryTypes": true } }
},
What migration options and tools are available for Microsoft Entra External ID if one wishes to migrate LDAP users and groups to a Microsoft Entra external tenant? There is no information provided by Microsoft on this; the only information is about migrating users from on-prem AD to the cloud.
The problem arises when your date columns are not in datetime64[ns] format and your non-datetime columns are. So make sure you correct your column data types before passing the DataFrame into the ydata_profiling module:
for i in ['effective_date']:
df[i] = pd.to_datetime(df[i])
Found that it works perfectly declaring a module:
declare module '@mui/material/styles' {
interface TypeText {
light?: string;
}
interface Palette {
text: TypeText;
}
interface PaletteOptions {
text?: Partial<TypeText>;
}
}
@Jreppe I have the same problem with "value of formula". Can you show me how you solved the problem, please? :)
OK, it works like this: @import "../node_modules/bootstrap/scss/functions";
Array.prototype.find() uses a simple linear search algorithm. It is not optimized for sorted arrays and does not use binary search or any other advanced search algorithm. It is designed to be general-purpose and works with any array and any testing function.
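If the array is known to be sorted, you can do better yourself with a binary search; a quick contrast in Python (the bisect module standing in for a hand-written binary search):

```python
from bisect import bisect_left

def linear_find(arr, target):
    # What Array.prototype.find() effectively does: scan left to right, O(n).
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

def binary_find(arr, target):
    # Valid only on sorted input: O(log n) comparisons.
    i = bisect_left(arr, target)
    return i if i < len(arr) and arr[i] == target else -1

arr = list(range(0, 100, 2))  # sorted even numbers 0..98
print(linear_find(arr, 42), binary_find(arr, 42))  # → 21 21
print(linear_find(arr, 43), binary_find(arr, 43))  # → -1 -1
```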
A few frameworks that could help
I know this is an old post, but I just encountered this problem, and it took me some time to solve it.
Here’s the deal: I’m using a custom theme, and in that theme, text buttons have infinite width. As a result, the Stepper widget tries to use text buttons, but it throws an exception due to the infinite width.
To fix this, either modify your theme or use controlsBuilder to create custom buttons.
I had the same issue and managed to delete rg.exe easily by running Windows in safe mode. After that, you can install VS Code again.