I got the errors "failed to open stream: No such file or directory" and "The file or directory is not a reparse point. (code: 4390)" for file, file_get_contents, scandir, etc. Long story short: the script and the file to be read were both in the same Dropbox directory.
I didn't try to 'solve' the problem; maybe it could be made to work within the Dropbox framework. I just moved the project out of Dropbox, and now it works as expected.
Try commentwipe.com which allows you to sync all videos and comments. It also allows you to search.
In particular, I trained a YOLOv11 segmentation model to detect the positions of Rubik's cubes.
First, the data has to be prepared in the YOLOv11 dataset format, and a data.yaml file has to be created:
train: ../train/images
val: ../valid/images
test: ../test/images
nc: 1 # number of classes; must match the length of names
names: ['Cube']
Then install ultralytics and train the model:
!pip install ultralytics
from ultralytics import YOLO

# Start from pretrained segmentation weights (or your own checkpoint such as 'best.pt')
model = YOLO('yolo11n-seg.pt')
model.train(data='./data/data.yaml', epochs=100, batch=64, device='cuda')
After running the segmentation model on a frame, I do some checks to see whether the detected object is actually a Rubik's cube:
import cv2
import numpy as np
from ultralytics import YOLO

def is_patch_cube(patch, epsilon=0.2):
    # Accept the patch only if it is roughly square.
    h, w = patch.shape[:2]
    ratio, inverse = h / w, w / h
    if ratio < 1 - epsilon or ratio > 1 + epsilon:
        return False
    if inverse < 1 - epsilon or inverse > 1 + epsilon:
        return False
    return True

def is_patch_mostly_colored(patch, threshold=0.85):
    # After masking, background pixels are zero; require mostly non-zero pixels.
    h, w, c = patch.shape
    num_pixels = h * w * c
    num_colored_pixels = np.sum(patch > 0)
    return num_colored_pixels / num_pixels > threshold

def check_homogenous_color(patch, color, threshold):
    # color_ranges is an HSV lookup table {color_name: (lower, upper)} defined elsewhere.
    if color not in color_ranges:
        return False
    h, w = patch.shape[:2]
    patch = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    lower, upper = color_ranges[color]
    thres = cv2.inRange(patch, np.array(lower), np.array(upper))
    return (np.count_nonzero(thres) / (h * w)) > threshold

def find_segments(seg_model: YOLO, image):
    return seg_model(image, verbose=False)
def get_face(results, n, homogenity_thres=0.6):
    for i, r in enumerate(results):
        original_img = r.orig_img
        img_h, img_w, c = original_img.shape
        if r.masks is None:
            continue
        for obj_i, mask_tensor in enumerate(r.masks.data):
            mask_np = (mask_tensor.cpu().numpy() * 255).astype(np.uint8)
            # The mask may come back at model resolution; resize it to the frame.
            if mask_np.shape[0] != img_h or mask_np.shape[1] != img_w:
                mask_np = cv2.resize(mask_np, (img_w, img_h), interpolation=cv2.INTER_NEAREST)
            # simplify_mask (defined elsewhere) smooths the contour and returns a bounding box.
            mask_np, box = simplify_mask(mask_np, eps=0.005)
            obj = cv2.bitwise_and(original_img, original_img, mask=mask_np)
            x1, y1, w, h = box
            x2, y2 = x1 + w, y1 + h
            # Clamp the box to the image bounds.
            x1, y1 = max(0, x1), max(0, y1)
            x2, y2 = min(img_w, x2), min(img_h, y2)
            cropped_object = obj[y1:y2, x1:x2]
            if not is_patch_cube(cropped_object):
                continue
            if not is_patch_mostly_colored(cropped_object):
                continue
            colors, homogenity = find_colors(cropped_object, n)
            if sum([sum(row) for row in homogenity]) < homogenity_thres * len(homogenity) * len(homogenity[0]):
                continue
            return colors, cropped_object, mask_np, box
    return None, None, None, None
def find_colors(patch, n):
    # Split the face into an n x n grid and classify the color of each cell.
    h, w, c = patch.shape
    hh, ww = h // n, w // n
    colors = [['' for _ in range(n)] for _ in range(n)]
    homogenity = [[False for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pp = patch[i * hh:(i + 1) * hh, j * ww:(j + 1) * ww]
            colors[i][j] = find_best_matching_color_legacy(
                get_median_color(pp), tpe='bgr')  # whatever function you want to detect colors
            homogenity[i][j] = check_homogenous_color(pp, colors[i][j], threshold=0.5)
    return colors, homogenity
We can use this as follows:
results = find_segments(model, self.current_frame)
face, obj, mask, box = get_face(results, n=self.n, homogenity_thres=0.6)
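For a quick standalone sanity check of the two geometric filters, here is a self-contained version of the same checks exercised on synthetic NumPy patches (the patch sizes and fill values are invented for the demo):

```python
import numpy as np

def is_patch_cube(patch, epsilon=0.2):
    # Accept the patch only if it is roughly square.
    h, w = patch.shape[:2]
    ratio, inverse = h / w, w / h
    if ratio < 1 - epsilon or ratio > 1 + epsilon:
        return False
    if inverse < 1 - epsilon or inverse > 1 + epsilon:
        return False
    return True

def is_patch_mostly_colored(patch, threshold=0.85):
    # After masking, background pixels are zero; require mostly non-zero pixels.
    h, w, c = patch.shape
    return np.sum(patch > 0) / (h * w * c) > threshold

square = np.full((100, 105, 3), 200, dtype=np.uint8)   # nearly square, fully colored
wide = np.full((50, 200, 3), 200, dtype=np.uint8)      # far from square
half_masked = np.zeros((100, 100, 3), dtype=np.uint8)
half_masked[:, :50] = 200                               # only half colored

print(is_patch_cube(square))                # True
print(is_patch_cube(wide))                  # False
print(is_patch_mostly_colored(square))      # True
print(is_patch_mostly_colored(half_masked)) # False
```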
Thanks to @ChristophRackwitz for recommending the use of semantic segmentation models.
Function Enter-AdminSession {
    <#
    .SYNOPSIS
        Self-elevate the script if required
    .LINK
        Source: https://stackoverflow.com/questions/60209449/how-to-elevate-a-powershell-script-from-within-a-script
    #>
    $scriptInvocation = (Get-Variable MyInvocation -Scope 1).Value.Line
    if (-Not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
        # We need to `cd` to keep the working directory the same as before the elevation; -WorkingDirectory $PWD does not work
        Start-Process -FilePath PowerShell.exe -Verb Runas -ArgumentList "cd $PWD; $scriptInvocation"
        Exit
    }
}
With this function in a common module that you import, or directly in your script, you can call Enter-AdminSession at the right point in your script to gain admin rights.
<script>
function duplicate() {
    var action = "CreationBoard";
    $.ajax({
        type: "POST",
        url: "file.php",
        data: { action: action },
        success: function (output) {
            alert("Response from PHP: " + output);
        }
    });
}
</script>
Yes, an abstract class in Java can extend another abstract class. This is a common and valid practice in object-oriented design, particularly when dealing with hierarchies of related concepts where each level introduces more specific abstract behaviors or implements some common functionality.
When an abstract class extends another abstract class:
- **Inheritance of members:** it inherits all the members (fields, concrete methods, and abstract methods) of its parent abstract class.
- **Abstract method implementation (optional):** the child abstract class is not required to implement the abstract methods inherited from its parent; it can leave them abstract, forcing subsequent concrete subclasses to provide the implementation.
- **Adding new abstract methods:** the child abstract class can declare new abstract methods specific to its level of abstraction.
- **Providing concrete implementations:** it can also provide concrete implementations for some or all of the inherited abstract methods, or for its own newly declared abstract methods.
This allows for a gradual refinement of abstract behavior down the inheritance hierarchy, with concrete classes at the bottom of the hierarchy ultimately providing the full implementation for all inherited abstract methods.
This is an easy task; I created an example at festivos en calendario:
create table calendar (dt, holiday) as
select trunc(sysdate, 'yy') + level - 1,
case when trunc(sysdate, 'yy') + level - 1 in ( select holiday_date
from holidays
) then 'Y'
else 'N'
end
from dual
connect by level <= trunc(sysdate) - trunc(sysdate, 'yy') + 1;
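The same idea translates to Python if you'd rather build the calendar in application code; this sketch uses pandas, with an invented two-holiday list standing in for the HOLIDAYS table:

```python
import pandas as pd

# Hypothetical holiday list standing in for the HOLIDAYS table.
holidays = {pd.Timestamp('2024-01-01'), pd.Timestamp('2024-12-25')}

# One row per day of the year, like the CONNECT BY LEVEL trick.
days = pd.date_range('2024-01-01', '2024-12-31', freq='D')
calendar = pd.DataFrame({
    'dt': days,
    'holiday': ['Y' if d in holidays else 'N' for d in days],
})

print(calendar['holiday'].value_counts())
```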
A custom Gradle task may help; see this article on how to do it yourself: https://medium.com/@likeanyanorigin/say-goodbye-to-hardcoded-deeplinks-navigation-component-xmls-with-manifest-placeholders-3efa13428cb4
location.absolute
Location value for the plotshape and plotchar functions. The shape is plotted on the chart using the indicator value as a price coordinate.
I am a beginner too, but this is something I have used in the past.
If you are using Expo, use this package I created. Works for both iOS and Android
https://www.npmjs.com/package/expo-exit-app
I am trying to do the same with Spring Boot 3.4.4, but it is not working for me.
I migrated to reactive programming with Spring WebFlux, removing the Tomcat dependency in order to deploy on Netty. I have included the following in my pom:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<!-- Exclude the Tomcat dependency -->
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-boot-starter</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
<version>2.8.9</version>
</dependency>
In application.properties I have this:
springdoc.api-docs.enabled=true
springdoc.api-docs.path=/api-docs
The issue I have is, if I launch my application, when I call to: http://localhost:8080/api-docs, it returns an error:
java.lang.NoSuchMethodError: 'void io.swagger.v3.oas.models.OpenAPI.<init>(io.swagger.v3.oas.models.SpecVersion)'
at org.springdoc.core.service.OpenAPIService.build(OpenAPIService.java:243) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.api.AbstractOpenApiResource.getOpenApi(AbstractOpenApiResource.java:353) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiResource.openapiJson(OpenApiResource.java:123) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiWebfluxResource.openapiJson(OpenApiWebfluxResource.java:119) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[na:na]
at org.springframework.web.reactive.result.method.InvocableHandlerMethod.lambda$invoke$0(InvocableHandlerMethod.java:208) ~[spring-webflux-6.2.5.jar:6.2.5]
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:297) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:478) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:139) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onSubscribe(MonoZip.java:470) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:152) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55) ~[reactor-core-3.7.4.jar:3.7.4]
Can anyone help me with this issue?
Thanks a lot!
Important
If you are using an absolutely positioned element inside a column item, you have to make each item container inline-block with width 100% (the width is optional); then it will work fine. Otherwise you may face layout issues.
I'm having the same issue, but electron-builder works for me.
Verify that Neovim has clipboard support: :echo has('clipboard')
Run the built-in health check: :checkhealth
Install a clipboard provider: sudo apt install xclip (X11) or sudo apt install wl-clipboard (Wayland)
file:// reads from the local filesystem; http:// sends the request to a web server and returns its response.
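In Python terms, urllib makes the distinction concrete: the same call reads a local file when given a file:// URL, with no server involved (the temp file below exists only for the demo):

```python
import pathlib
import tempfile
import urllib.request

# Create a throwaway local file to read back through a file:// URL.
tmp = pathlib.Path(tempfile.mkdtemp()) / "hello.txt"
tmp.write_text("local content")

# file:// reads straight from the local filesystem; an http:// URL
# passed to the same function would go to a web server instead.
with urllib.request.urlopen(tmp.as_uri()) as resp:
    data = resp.read().decode()

print(data)  # local content
```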
Here’s what could be happening:
---
1. **Default WordPress Behavior:**
WordPress often uses `wp_redirect()` for multisite sub-site resolution. If a site isn’t fully set up or mapped properly, WordPress may default to a temporary 302 redirect.
2. **ELB-HealthChecker/2.0 (from AWS):**
This request is from **AWS Elastic Load Balancer (ELB)** health checks. ELB makes a plain `GET /` request. If the root site (or sub-site) is not fully responding or mapped, WordPress may redirect it with a 302 temporarily.
3. **Multisite Rewrite Rules:**
Your `.htaccess` rewrite rules seem mostly correct, but the custom rules at the end (`wptmj/$2`) may be misrouting requests, especially if `wptmj` is not a valid subdirectory or symlinked path.
---
### ✅ What You Can Try:
#### 1. **Force WordPress to Use 301 Redirects:**
You can try modifying redirection functions using `wp_redirect_status` hook in `functions.php`:
```php
add_filter('wp_redirect_status', function($status) {
    return 301; // Force 301 instead of 302
});
```
To prevent Android from killing your app during GPS tracking for field team purposes, consider running your tracking service as a foreground service with a persistent notification—this signals Android that your app is actively doing something important, reducing the likelihood of it being shut down. Also, ensure battery optimization is disabled for your app in device settings.
If you need a reliable and ready-made solution, tools like Workstatus offer robust background GPS tracking for field teams without being interrupted, ensuring continuous location logging even when the app isn’t actively used.
Following a suggestion from @dan1st to use github.event.workflow_run.artifacts_url to fetch artifacts via the GitHub API, here are the updated files with the required changes. The Deploy workflow now uses a script to download the artifact dynamically, replacing the failing Download Build Artifact step.
name: Deploy to Firebase Hosting on successful build
'on':
  workflow_run:
    workflows: [Firebase Deployment Build]
    types:
      - completed
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    permissions:
      actions: read # added to fix the 403 error
      contents: read # added to allow repository checkout
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }} # explicitly pass the token
          repository: tabrezdal/my-portfolio-2.0 # ensure the correct repo
      - name: Debug Workflow Context
        run: |
          echo "Triggering Workflow Run ID: ${{ github.event.workflow_run.id }}"
          echo "Triggering Workflow Name: ${{ github.event.workflow_run.name }}"
          echo "Triggering Workflow Conclusion: ${{ github.event.workflow_run.conclusion }}"
      - name: Install jq
        run: sudo apt-get update && sudo apt-get install -y jq
      - name: Fetch and Download Artifacts
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Get the artifacts URL from the workflow_run event
          ARTIFACTS_URL="${{ github.event.workflow_run.artifacts_url }}"
          echo "Artifacts URL: $ARTIFACTS_URL"
          # Use the GitHub API to list artifacts
          ARTIFACTS=$(curl -L -H "Authorization: token $GITHUB_TOKEN" "$ARTIFACTS_URL")
          echo "Artifacts: $ARTIFACTS"
          # Extract the artifact name (assuming 'build' as the name)
          ARTIFACT_NAME=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].name' || echo "build")
          echo "Artifact Name: $ARTIFACT_NAME"
          # Download the artifact using the GitHub API
          DOWNLOAD_URL=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].archive_download_url')
          if [ -z "$DOWNLOAD_URL" ]; then
            echo "No download URL found; the artifact may not exist or access is denied."
            exit 1
          fi
          curl -L -H "Authorization: token $GITHUB_TOKEN" -o artifact.zip "$DOWNLOAD_URL"
          unzip artifact.zip -d build
          rm artifact.zip
      - name: Verify Downloaded Artifact
        run: ls -la build || echo "Build artifact not found after download"
      - name: Debug Deployment Directory
        run: |
          echo "Current directory contents:"
          ls -la
          echo "Build directory contents:"
          ls -la build || echo "Build directory not found"
      - name: Deploy to Firebase
        uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: ${{ secrets.GITHUB_TOKEN }}
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          channelId: live
          projectId: tabrez-portfolio-2
For a general pbar from tqdm.auto, the easiest working solution I found is:
pbar.n = pbar.total
pbar.close()
break
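As a self-contained sketch (a plain counting loop stands in for the real workload), snapping the bar to 100% before an early break looks like this:

```python
from tqdm.auto import tqdm

items = range(100)
pbar = tqdm(total=len(items))
for i in items:
    pbar.update(1)
    if i == 49:  # some early-exit condition
        # Snap the bar to 100% so it doesn't render as abandoned at 50%.
        pbar.n = pbar.total
        pbar.refresh()
        pbar.close()
        break

print(pbar.n)  # 100
```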
good day mates,
Now, I am sure many of you have wanted to take a hammer to the Phomemo M08 Bluetooth thermal printer, as I nearly did a few days ago. But... I had a glass of Old Pulteney, meditated for a bit, and decided this printer was not going to get the best of me.
The script in the PPD from the Phomemo Ubuntu/CentOS driver is broken and will not work regardless of what you try. Why? It is made for the 4" x 6" label printer, whereas the M08F is an A4-format printer (or any other size you want to stick into it).
Workaround:
I installed a driver called Generic Thermal Printer Driver. This gave me a Generic.PPD, which did activate the printer through CUPS, but it still held print jobs because of the margins and settings in the script. So I removed all the settings for smaller printers, making it default to A4 format only and bypassing the print output. This worked: the printer got the command and printed everything through CUPS. However... it wanted to play hardball and gave me an enlarged A4 page on a 4" x 3" layout.
I let Phomemo know that the driver online does not work under Ubuntu, but does with the changes I made. Barry got back to me; he was surprised that someone had actually done some research on this and given feedback. With my changes he added what was needed to complete the driver settings, and this is what came out.
The new drivers are at the bottom for linux. https://pages.phomemo.com/#/m08f
I would suggest downloading it, unpacking it, opening the folder in a shell, installing the file with sudo, and rebooting.
I haven't gotten Bluetooth printing to work yet; it keeps disconnecting (an Ubuntu issue), so I'll get back on that if I can. In the meantime: delete the existing settings for this printer, install the file, connect directly over USB, reboot, add the printer, and install it. This should work. You'll have to set up your printer's output margins to get the print you want.
Problem solved... I think I deserve a beer, lol.
Have a good weekend
Looks like it's this issue: https://github.com/angular/components/pull/31560. It's related to OS settings.
This is a sample Perlin noise generator I developed a year ago in Python; I used bitwise operations for faster sampling.
I think return [self.cdf_inv(p) for p in u] is the root cause of the slowdown, because each call to cdf_inv(p) performs scalar root finding, which runs on the CPU and is not GPU-accelerated even when wrapped in TensorFlow. So you should try vectorizing the inverse CDF.
You might see a significant speedup if you eliminate the scalar root finding.
import av
import io

with open('video.mp4', 'rb') as fp:
    video_data = fp.read()

video_buffer = io.BytesIO(video_data)
container = av.open(video_buffer, mode='r', format='mp4')
duration = container.duration / av.time_base  # seconds (float)
In my personal opinion, this isn't an issue with TypeORM itself, but rather with the database design. You need to identify the bottleneck first. Perhaps you could try optimizing your queries initially. Typically, you'd start by checking how long a query takes to execute directly from your database. If the database itself takes, say, 9 seconds, it's perfectly normal for TypeORM to receive the data in around 10 seconds due to the latency involved in communicating with your NestJS application.
A good first step might be to select only the columns you're actually using. If it's still slow, then consider adding indexes.
Check whether the Pylance extension is installed. If not, install it, then check the items below in your settings:
"python.languageServer": "Pylance",
"jupyter.enableExtendedPythonKernelCompletions": true,
"python.autoComplete.extraPaths": [
    "C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe"
]
import java.time.Duration;
driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
import pandas as pd, requests, io
# Set location and time
lat, lon = -6.000, 38.758
start, end = '20150101', '20241231'
# Fetch daily rainfall (PRECTOT) from NASA POWER
url = (f"https://power.larc.nasa.gov/api/temporal/daily/point?"
f"parameters=PRECTOT&start={start}&end={end}&latitude={lat}&longitude={lon}&format=CSV")
csv = requests.get(url).text.splitlines()
# Load into DataFrame
df = pd.read_csv(io.StringIO("\n".join(csv[10:]))) # skip metadata
# NASA POWER names the date columns YEAR/MO/DY; rename them so pandas can assemble a datetime
df['DATE'] = pd.to_datetime(df[['YEAR','MO','DY']].rename(columns={'YEAR':'year','MO':'month','DY':'day'}))
# Monthly total rainfall
monthly = df.groupby(df['DATE'].dt.to_period('M'))['PRECTOT'].sum().reset_index()
monthly.columns = ['Month', 'Rainfall_mm']
monthly['Month'] = monthly['Month'].astype(str)
# Annual total rainfall
annual = df.groupby(df['DATE'].dt.year)['PRECTOT'].sum().reset_index()
annual.columns = ['Year', 'Rainfall_mm']
# Export
monthly.to_csv("saadani_monthly_rainfall_2015_2024.csv", index=False)
annual.to_csv("saadani_annual_rainfall_2015_2024.csv", index=False)
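If you want to verify the aggregation logic without calling the API, the same groupby steps can be exercised on a few invented rows:

```python
import pandas as pd

# Two months of fake daily rainfall to exercise the same groupby logic.
df = pd.DataFrame({
    'DATE': pd.to_datetime(['2024-01-01', '2024-01-02', '2024-02-01']),
    'PRECTOT': [5.0, 7.0, 3.0],
})

# Monthly total rainfall, as in the script above
monthly = df.groupby(df['DATE'].dt.to_period('M'))['PRECTOT'].sum().reset_index()
monthly.columns = ['Month', 'Rainfall_mm']

# Annual total rainfall
annual = df.groupby(df['DATE'].dt.year)['PRECTOT'].sum().reset_index()
annual.columns = ['Year', 'Rainfall_mm']

print(monthly)
print(annual)
```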
I did some digging, and it seems this has been fixed in a new macOS beta version. See HERE on the Apple forum.
I asked whether this is the Public or Developer Beta version and am still waiting on a response.
Hope this fixes our issues 🙏🏼
Add style="flex-direction: row !important;" to the UL element.
<ul class="splide__list" style="flex-direction: row !important;">
</ul>
I gave up; I use "‘" instead, which looks similar.
{
"Scale": true,
"Scale X": 4.9625,
"Scale Y": -5.11875,
"Scale Z": 5.9625,
"Position": true,
"Position X": -0.5749999,
"Position Y": -1.9,
"Position Z": -4.975,
"Rotation": false,
"Rotation X": -9.0,
"Rotation Y": -6.75,
"Rotation Z": -2.25,
"Change Swing": false
}
If JHipster JDL isn't generating entities, it's usually due to syntax issues or incorrect setup, which can disrupt the structure of generated modules like clients and orders. Double-check your JDL file and project configuration to resolve the issue.
The solution is to display the app-open ad after app launch, once your splash screen is dismissed and your MaterialApp/CupertinoApp/WidgetsApp has been displayed. That's the accepted approach, and you can't do much else.
Use:
from .manager import userclassname
objects = userclassname()
at the end of your custom user model in models.py.
It overrides the default manager and tells Django about your manager.
Hope it helps :)
If you want to commit your whole repository, then you can just remove all contents of the .gitignore file
The suggestions about creating a proxy server for this are completely out of the question for mobile apps. Things you can put on the server you should absolutely put on the server, but if your goal is to show a Google map on the mobile device using native controls, a proxy is not an option.
If you're using things like the Places API to look up addresses, you should absolutely put that on the server.
In the Google Cloud console, restrict the API key to the bundle ID on iOS, and to the package name and SHA-1 fingerprint on Android.
Then you should rotate your keys from time to time.
The code labs provided by Google on this integrate with the underlying Maps SDKs the same way the Flutter package does.
https://developers.google.com/codelabs/maps-platform/maps-platform-ios-swiftui#5
I also ran into this issue once: the EventBridge rule was being triggered but was not invoking my Lambda function (I was using Terraform as the IaC tool). To troubleshoot it, I added the EventBridge rule as a trigger for my Lambda manually, with SNS as the destination, to test the setup. This worked fine for me.
Use PowerShell and run the commands below.
Enable script execution:
Set-ExecutionPolicy RemoteSigned
Activate the env:
.\env\Scripts\Activate.ps1
Please help me. Are there any other methods?
"How can I send a message to a Telegram bot using Python and the requests library?"
"How do I use a Telegram bot token to send a message via the API?"
If you have specific code or error messages, include them for better help.
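For the first question, a minimal sketch with requests looks like the following. The bot token and chat id are placeholders, and the request-building helper is split out so it can be checked without network access:

```python
import requests

def build_send_message(token: str, chat_id: str, text: str):
    """Return the (url, payload) pair for the Bot API sendMessage call."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    return url, {"chat_id": chat_id, "text": text}

def send_message(token: str, chat_id: str, text: str) -> dict:
    # One POST to the Bot API is all that's needed to send a message.
    url, payload = build_send_message(token, chat_id, text)
    resp = requests.post(url, data=payload, timeout=10)
    return resp.json()

# Placeholder credentials; substitute your real bot token and chat id.
url, payload = build_send_message("123:ABC", "42", "hello")
print(url)
```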
Replying to umesh: that shader is still available via the Wayback Machine:
https://web.archive.org/web/20210617032819/http://wiki.unity3d.com/index.php?title=SkyboxBlended
Add the --host option to your package.json start script, like:
react-native start --host localhost
In my situation, I followed the steps above and tried the whole night with Chrome; the JSON file just wouldn't download, even though the console showed it "had" been downloaded. The next day I switched to Safari and it worked. Just try another browser if you get stuck.
For Mac users (macOS tabs in 10.12 and newer), you need to set "native_tabs" to "preferred". Sublime Text opens a new window, but the OS organizes the windows in native tabs.
The following response might be helpful to you.
SMSMobileAPI did it — if you're interested, take a look here: https://smsmobileapi.com/receive-sms/
I have kind of the same problem: I have some SVG icons and I want to replace the default Ant Design Vue Tree icons with them. Can someone help me with that?
This is my code. I am using Tailwind and TypeScript, and this is a component rendered in app.vue. How do I change the default icons?
<template>
  <Toolbar class="mt-16" />
  <a-tree
    class="mt-4 rounded-3xl p-2 w-2/3 text-[#171717] bg-[#D9D9D9]"
    v-model:expandedKeys="expandedKeys"
    v-model:selectedKeys="selectedKeys"
    show-line
    :tree-data="treeData"
  >
    <template #switcherIcon="{ switcherCls }"><down-outlined :class="switcherCls" /></template>
  </a-tree>
</template>
<script lang="ts" setup>
import { ref } from 'vue'
import Toolbar from './Toolbar.vue'
import { DownOutlined } from '@ant-design/icons-vue'
import type { TreeProps } from 'ant-design-vue'

const expandedKeys = ref<string[]>(['0-0-0'])
const selectedKeys = ref<string[]>([])

const treeData: TreeProps['treeData'] = [
  {
    title: 'parent 1',
    key: '0-0',
    children: [
      {
        title: 'parent 1-0',
        key: '0-0-0',
        children: [
          { title: 'leaf', key: '0-0-0-0' },
          { title: 'leaf', key: '0-0-0-1' },
          { title: 'leaf', key: '0-0-0-2' },
        ],
      },
      {
        title: 'parent 1-1',
        key: '0-0-1',
        children: [{ title: 'leaf', key: '0-0-1-0' }],
      },
      {
        title: 'parent 1-2',
        key: '0-0-2',
        children: [
          { title: 'leaf', key: '0-0-2-0' },
          { title: 'leaf', key: '0-0-2-1' },
        ],
      },
    ],
  },
]
</script>
Using a build tool like Maven or Gradle is the most maintainable and professional way.
There are also several shortcuts for personal R&D and quick use from the CMD command prompt:
javac MyApp.java // Compile
java MyApp // Run
java -jar myApp.jar // Run a JAR file
javac -d out src\my\pkg\*.java // Compile to specific folder
OR
jar cfe MyApp.jar my.pkg.MainClass -C out .
Today I finally got around to creating multiple schemes for my different environments (local, staging, prod) so I could be a real dev and stop commenting out my different server urls depending on which environment I was building for.
My previews stopped working with the error "Cannot find previews. Check whether the preview is compiled for the current scheme and OS of the device used for previewing...".
Wut.
I must have looked at every possible answer to anything related and tried dozens of "fixes" that didn't work to fix the issue.
I finally figured it out.
The mistake that ultimately broke the previews was that I gave each scheme a different "Product Name" (Target -> Build Settings -> Packaging -> Product Name). I wanted each scheme (local, staging, prod) to show up with a different name on the device; if I had three app icons all named MyApp, I wouldn't be able to tell them apart. Previews did not like this.
My solution was to keep all "Product Names" the same. Now the previews work for all of my schemes. And the fix to having each scheme show up on device named different was actually to update the "Bundle Display Name" setting: Target -> Build Settings -> Info.plist Values -> Bundle Display Name.
Now my previews work for all schemes and each scheme's app shows up on device with a different name.
:)
cheers
Use a do-while loop in the main function, or a while loop in the language function.
Close your eyes and try using your app.
In other words, try getting into unsighted users' minds and understand how they use the web. I'd start by watching some short and sweet videos, e.g.:
Keyboard navigation
https://www.youtube.com/watch?v=N9Q8oF0Lx2M
(!!) Screen reader:
https://www.youtube.com/watch?v=Hp8dAkHQ9O0
https://www.youtube.com/watch?v=q_ATY9gimOM
https://www.youtube.com/watch?v=dEbl5jvLKGQ
https://www.youtube.com/watch?v=7Rs3YpsnfoI
Screen magnification:
Install the slugify package: npm install slugify
and use it like this:
var slugify = require('slugify')
// slugify your string:
let yourSlug = slugify('the article title')
When using the Google Maps Embed API, setting the iframe height to less than 200px will hide most of the default UI elements.
<iframe
style={{
border: 0,
width: '100%',
// Using a height under 200px hides Google Maps Embed UI elements
height: '199px',
}}
tabIndex={-1}
loading="lazy"
referrerPolicy="no-referrer-when-downgrade"
src="https://www.google.com/maps/embed/v1/place?key=API_KEY
&q=Space+Needle,Seattle+WA"
/>
Rename the column in the mapping with #[ORM\Column(name: "new_name")]:
#[ORM\Column(name: "new_name", type: "string")] private string $newName;
Then add a backward-compatibility shim in the repository so the old criteria key keeps working (assuming a standard Doctrine EntityRepository base class):
class DummyEntityRepository extends EntityRepository
{
    public function findBy(array $criteria, array $orderBy = null, $limit = null, $offset = null)
    {
        if (isset($criteria['new_name'])) {
            trigger_deprecation('my-lib', '1.0', '"new_name" is deprecated. Use "newName".');
            $criteria['newName'] = $criteria['new_name'];
            unset($criteria['new_name']);
        }
        return parent::findBy($criteria, $orderBy, $limit, $offset);
    }
}
#[ORM\Entity(repositoryClass: DummyEntityRepository::class)]
No breaking changes: old keys keep working, a deprecation warning is emitted, and it is future-safe.
I was facing the same issue earlier; downgrading google_sign_in to 6.3.0 fixed it, but I am still curious how to use the constructor in the latest version of google_sign_in.
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Sat Jul 19 11:31:43 HKT 2025
There was an unexpected error (type=Not Found, status=404).
No message available
Try disabling the extension “Python Environments”
I switched my ISP from Jio to Airtel and, surprisingly, it just worked.
The method above doesn't work in Nuxt 4. How do you configure cross-domain settings in Nuxt 4?
How to switch between Light, Dark, and Tinted appearances in the Xcode Simulator (iOS 18+)
If you're looking to test your app icon or UI under different system appearances in the Xcode Simulator, here's how to toggle between modes. This is especially useful when checking how your app icons or assets behave in Light, Dark, or the new Tinted mode (iOS 18 only):
Long press on an empty space on the Home Screen or on any App Icon.
Tap the Edit button in the top-left corner (occasionally appears top-right depending on Simulator version).
Tap Customize.
Choose between Light, Dark, Automatic, or Tinted.
📝 Note: Tinted mode is only available starting in iOS 18.
There may be an issue with your Tailwind CSS version; it happened to me too. You can continue with the previous version.
Collect 3 or more thread dumps from the driver and any active executors, spacing them about 5 seconds apart. This should help identify where the Spark job is getting stuck.
Found there was a similar question already answered from a few years back:
Another way to disable key binding for it:
Xcode -> Preferences -> 'Key Bindings' Tab -> Search for 'Quick Help' -> Find associated shortcut -> Clear or change the keybinding field
In modern versions of MySQL (8.4+) you just need to wrap it in parentheses:
CREATE TABLE FOO (
id CHAR(36) PRIMARY KEY DEFAULT (UUID())
);
See also the knowledge base article on storing UUID values:
https://dev.mysql.com/blog-archive/storing-uuid-values-in-mysql-tables/
Well, I'll be damned... I used to send it to myself using WhatsApp web, then I just open the same chat on my phone, download the file and click it again to execute.
I tried a different method (thanks, @CommonsWare): just plugging the phone in and using the file explorer, and it works.
My best guess is that my PC does something to the file when sending it through WhatsApp web. Something that my work PC does not do for some reason.
Dark magic!
Try sending a manual task (rather than a scheduled one) to the Celery worker; does it work?
Have you checked that the Celery processes have appropriate file permissions? Celery beat writes its schedule to a file by default.
Have you checked the RabbitMQ logs?
You seem to be running the Celery worker without a virtual environment; this might be an issue.
wp-content/mu-plugins/jet-wc-orders-status-callback.php
<?php
add_filter( 'jet-engine/listings/allowed-callbacks', static function ( $callbacks ) {
    $callbacks['jet_get_wc_order_status'] = __( 'WC get order status label', 'jet-engine' );
    return $callbacks;
}, 9999 );

/**
 * @param $status
 *
 * @return string
 */
function jet_get_wc_order_status( $status ) {
    if ( ! is_scalar( $status ) ) {
        return '-';
    }
    if ( ! function_exists( 'wc_get_order_statuses' ) ) {
        return $status;
    }
    $labels = wc_get_order_statuses();
    return $labels[ $status ] ?? '-';
}
I ended up doing it this way since "localhost" was already part of the default set...
livenessProbe:
  httpGet:
    path: /login/
    port: http
    httpHeaders:
      - name: Host
        value: localhost
The CSS property box-decoration-break can be used to repeat margins and paddings on all pages.
.page {
  box-decoration-break: clone;
  padding-top: 1.5in;
  padding-bottom: 1.5in;
  page-break-after: always;
}
For Synology DSM or other systems running Entware package manager, Miguel's answer is correct, and the specific command to install CPAN is:
opkg install perlbase-cpan
Per the official GitHub for jsPDF (Support marked content #3096), as of July 2025 the ability to add accessibility tags has not been implemented yet.
I had a similar issue, and I think I managed to solve it by increasing the value of net.netfilter.nf_conntrack_udp_timeout_stream using sysctl. It defaults to 120 seconds, so this may be the setting that causes the timeout error in Spark.
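For example (the value 600 is my assumption; tune it so it exceeds your job's idle periods):

```shell
# One-off: raise the UDP stream conntrack timeout from the 120 s default
sysctl -w net.netfilter.nf_conntrack_udp_timeout_stream=600

# To persist across reboots, add the same key to /etc/sysctl.conf:
# net.netfilter.nf_conntrack_udp_timeout_stream = 600
```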
I couldn't get a single command to work on WSL (no Node.js available), so I edited this from @ntshetty:
#!/bin/bash
# Usage: ./monitor.sh main.py &
# $1 is the file to run and watch
# source: @ntshetty stackoverflow.com/a/50284224/3426192
python "$1" &    # start the server
while true
do
    mdhash1=$(find "$1" -type f -exec md5sum {} \; | sort -k 2 | md5sum)
    sleep 5
    mdhash2=$(find "$1" -type f -exec md5sum {} \; | sort -k 2 | md5sum)
    if [ "$mdhash1" = "$mdhash2" ]; then
        echo "Identical"
    else
        echo "Change detected, restarting web server"
        pkill -9 -f "python $1"    # kill the running server (matching "python $1" avoids killing this monitor)
        python "$1" &              # restart
    fi
done
echo "Ended"    # unreachable: the loop above never exits
I believe you're looking for a window join. If you post the code that generates the above tables, it's easier for others to validate.
gcc 7.3, ruby 2.3.7, mysql 5.7, mysql2 gem 0.3.21. I tried all of the above and many more solutions. The main cause of the Segmentation Fault error was encoding: utf8mb4 in database.yml. Once I changed it to utf8 only, the error vanished.
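For reference, the relevant fragment of database.yml (a sketch; the adapter, database name, and other keys depend on your app):

```yaml
production:
  adapter: mysql2
  encoding: utf8            # was utf8mb4, which segfaulted with mysql2 0.3.21
  database: myapp_production   # hypothetical name
```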
Here’s what actually fixed it:
Run npm config set legacy-peer-deps false in the terminal.
Delete node_modules and package-lock.json.
In some cases, you may also need to delete the Firebase functions and set them up again.
Run firebase deploy --only functions.
Finally, don’t forget to run npm install inside the functions folder before deploying again.
See the reference where I discovered this: https://stackoverflow.com/a/77823349/23242867
How about:
=COLUMNS(TEXTSPLIT(A1,","))
In the case where a cell might be empty, use this:
=IF(A1="",0,COLUMNS(TEXTSPLIT(A1,",")))
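The same counting logic as a quick Python sketch (the function name is mine):

```python
def count_items(cell: str) -> int:
    """Count comma-separated entries, treating an empty cell as 0,
    mirroring =IF(A1="",0,COLUMNS(TEXTSPLIT(A1,",")))."""
    return 0 if cell == "" else len(cell.split(","))

print(count_items("red,green,blue"))  # 3
print(count_items(""))                # 0
```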
My reason for this error was that I had put
http_method_names = ["POST", "GET"]
in my view class in app/views.py. HTTP method names need to be all lowercase:
http_method_names = ["post", "get"]
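The reason is that Django lowercases the incoming request method before checking it against http_method_names; here is a simplified, Django-free sketch of that check (not Django's actual code):

```python
def is_allowed(request_method: str, http_method_names: list[str]) -> bool:
    # Django's View.dispatch compares request.method.lower() against the list,
    # so uppercase entries like "POST" can never match.
    return request_method.lower() in http_method_names

print(is_allowed("POST", ["POST", "GET"]))  # False: "post" is not in the list
print(is_allowed("POST", ["post", "get"]))  # True
```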
In Flutter 3.32 you can set enableSplash to false to disable the built-in gesture and make children clickable.
CarouselView(
enableSplash: false
)
If you wound up here due to having perfectly good JSON but you still get the error above, I have a question for you:
Did you transfer a file from a Windows to a WSL context?
Because if you did, that file can get mangled moving from the Windows CRLF (\r\n) line-ending convention (often with an ANSI encoding or a UTF-8 BOM) to the Linux LF (\n) convention. If you are using VS Code or something similar, copying the contents of the file over should be enough to get it converted properly.
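If you'd rather fix it in code, here is a hedged Python sketch that normalizes a Windows-saved file before parsing (the byte string below stands in for such a file):

```python
import json

raw = b'\xef\xbb\xbf{"ok": true}\r\n'   # UTF-8 BOM + CRLF, as a Windows editor might save it
text = raw.decode("utf-8-sig")          # "-sig" strips the BOM that a plain utf-8 decode would keep
text = text.replace("\r\n", "\n")       # normalize CRLF to LF
data = json.loads(text)
print(data)  # {'ok': True}
```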
Passing the replacement text through echo -e
solved the problem...
$ echo ": HIGHLIGHT some more text " | sed "s/: HIGHLIGHT.\\{1,\\}\$/$(echo -e $_highlight)/g"
[BLUE]
$
The solution for this highly specific problem is to not install the "minidriver" part of the SafeNet Authentication Client package. If it is already installed, uninstall the entire package, then run the installer again, choose "Custom" install, and make sure the Minidriver feature set is set to "don't install" (the SafeNet installer doesn't offer a "Modify" option).
This is apparently because the "minidrivers" are the ones that allow the badly designed Windows Smart Card logon system to talk to the SafeNet USB smart cards. With the minidrivers removed, all access is through the SafeNet extensions to the CryptoAPI 2 subsystem that is used by signing tools (including old tools based on the classic CryptoAPI 1).
<ul id="playlist" style="display:none;">
<li data-path="http://99.250.117.109:8000/stream" data-thumbpath="thumbnail of whatever" data-downloadable="no" data-duration="00:00">
</li>
</ul>
@ArcSet was on the right path but I think you want:
'raw.asset1' = Get-Item("c:\temp\lenovo.zip")
Did you ever find a solution to this? I'm encountering the same issue.
You will need the Spring GraphQL plugin , see this Jetbrains article for features.
I have the exact same problem with that tool; did you solve it?
Has anyone found a solution to this problem? I access a Windows machine via Remote Desktop, and the problem only shows up in the Delphi editor. The Object Inspector works fine.
Do not use extend, as it expects a scalar expression, not tabular function calls; invoke the function directly with | myfunction("a","b").
When VS Code created the mcp.json file, it was created as a user config file on the Windows side by default:
/C:/Users/spencer/AppData/Roaming/Code/User/mcp.json
The file needs to be in your workspace:
/{path to project}/.vscode/mcp.json
The parameter set file (.SSV) of the SSP standard (a sibling standard to FMI) is intended for this (https://ssp-standard.org/).
If your importing tool supports the SSP standard, you can put the FMU together with an .SSV parameter file in an SSP file.
The FMI project is also working on a layered standard, FMI-LS-REF (https://github.com/modelica/fmi-ls-ref), with which you will be able to put one or several .SSV files in an FMU at a defined subfolder, the /extra directory.
Yes, lv_timer_handler() is definitely being called. But thanks to your information, I discovered that my LVGL initialization only called lv_init() but did not set a tick source.
I now also call lv_tick_set_cb([]() -> uint32_t { return (uint32_t) millis(); }) during initialization.
As a result, my image/JPEG is now drawn correctly even without calling lv_refr_now(NULL).
In the vendor demo, I couldn't find where the tick is set, but I suspect it's done via lv_conf.h.
I also had another issue: I had initialized the display with the wrong frequency (18 MHz), which I had taken from another GitHub (ESP-IDF) example. But in the official vendor demo, I saw that 14 MHz is used instead.
Now, with the correct frequency, my display and slideshow run smoothly and without flickering. Great, thanks!
def login():
    username = input("Enter your username: ")
    passw = input("Enter your password: ")
    if check(username, passw):
        print("Thank you for logging in.")
    else:
        print("Username or password is incorrect")
    return username, passw

def check(username, passw):
    pword = {}
    with open('unames_passwords.txt', 'r') as f:
        for line in f:
            user, password = line.split()
            pword[user] = password
    return pword.get(username) == passw

def main():
    login()

main()
As of 2025, this does seem to work as expected:
demands:
- Agent.ComputerName -equals $(server)
Maybe you are using the cart block on your cart page while customizing and overriding the classic cart template. If you want to customize your cart template, you'll need to replace the cart block with the cart shortcode:
[woocommerce_cart]
In this extended scenario, the sub-sampling is no longer over contiguous memory blocks, so the data streaming approach will also require script-level looping and will probably not be particularly efficient.
On my setup, looping over Slice2 calls is still ~20% faster than the single call to ExprSize, but that may be because my computer has an old processor. I did notice, however, that the timing results varied a great deal and seemed to be connected to foreground activity in support of the Display floating window. For optimum and consistent timing results, I think it is important either to delay calls to ShowImage until the very end of the script or to make sure the Display floating window is closed. Of course, these issues are side-stepped entirely if one runs the script on the foreground thread instead of in the background, as you currently seem to do.