I am wondering if this can be used to implement environment separation inside the same database, like:
dev.customers
qas.customers
prd.customers
If a user logs on to the QAS environment, I would just run:
ALTER USER [yourUser] WITH DEFAULT_SCHEMA = qas;
and then the user will run the app using a test dataset.
Is this a good idea, or am I utterly mistaken?
Okay, I'm stupid: I got tripped up by pointer arithmetic because the buffer was int16_t. I changed it to void* and it worked flawlessly.
I have the same issue. Actually, it is not working at all.
Strange thing is I was using https://github.com/StefH/McpDotNet.Extensions.SemanticKernel
await _kernel.Plugins.AddMcpFunctionsFromSseServerAsync("McpServer", new Uri(url), httpClient: _httpClient);
And that worked fine. I wanted to switch to the native SK code, but now it does not work anymore.
First-class callable syntax, introduced in PHP 8.1, should cover what you want to know:
https://www.php.net/manual/en/functions.first_class_callable_syntax.php
Having the same issue. Have you had any luck solving this?
In my case, I just needed to update my Node version from 18 to 20, then restart and run npm run dev again.
Make sure to use the exact same casing of the mode name when you use --mode. I ran Vite with --mode "Development" when my file is actually .env.development.
If your Fedora VM freezes during startup, it could be due to over-allocating resources 10 GB RAM and 2×6 CPUs might be pushing your host too hard, even with 32 GB total. Check for host system bottlenecks like CPU or disk I/O, and make sure VMware tools and Fedora packages are up to date. Also, look into guest OS logs for any driver or service hiccups. Try reducing VM specs or booting from a fresh ISO to rule out corruption.
This code doesn't work on macOS 15.5, Xcode 16.6.
I tried many methods, but none of them worked properly.
I don't know how BetterTouchTool is implemented. Does anyone know?
If you are running a script file (e.g. rush.scpt or rush.app) via Automator.app, all those solutions will return "Automator.app" as the name, not the name of the script file itself (e.g. "rush.scpt", "rush.app", or "rush.workflow").
In my case, when Resource Not Found appeared in the browser, the problem was the path to the build folder in the quarkus.quinoa.build-dir property: it previously had the value dist/, so I changed it to dist/angular and it worked. (I use Angular 19.)
but it removes the reveal brush effect
I usually delete the DAG from the Airflow UI itself. You can open your DAG, and at the top right you will see a delete option to remove it from the UI. It won't actually delete it from your DAG folder.
Also, you might still see the DAG in the UI list even after deletion, because Airflow runs its refresh cycle periodically; the DAG will be removed from the list after the next refresh.
Yes, this is definitely possible, but there are a few tricky parts to get right.
To fix this, you can run the child process and listen to its output on a separate thread. Then, send those outputs through a channel and print them in your main thread before you call readline() again. Here's a working example using Python as the child REPL:
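A minimal sketch of that pattern in Python (the child process here is a stand-in echo loop, not the asker's actual REPL):

```python
import queue
import subprocess
import sys
import threading

def start_child(cmd):
    """Spawn the child process and pump its stdout into a queue from a background thread."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            text=True, bufsize=1)
    out_q = queue.Queue()

    def pump():
        for line in proc.stdout:      # blocks in the reader thread, not the main thread
            out_q.put(line.rstrip("\n"))
        out_q.put(None)               # sentinel: child closed its stdout

    threading.Thread(target=pump, daemon=True).start()
    return proc, out_q

# Stand-in child "REPL": echoes every input line back upper-cased.
child_code = "import sys\nfor line in sys.stdin: print(line.strip().upper(), flush=True)"
proc, out_q = start_child([sys.executable, "-u", "-c", child_code])

proc.stdin.write("hello\n")
proc.stdin.flush()
reply = out_q.get(timeout=5)          # drain the child's output before prompting again
proc.stdin.close()
proc.wait()
```

The key point is that only the pump thread ever blocks on the child's stdout; the main thread decides when to drain the queue, so it can print pending output before prompting the user again.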
The answer by @Chip Jarred did not work for me on macOS 15.5 Sequoia. None of the Dock windows have a "Fullscreen Backdrop" kCGWindowName.
What worked for me was the simple check for multiple Dock windows that have a negative (around MIN_INT64) value as kCGWindowLayer. If there are more than one of those Dock windows, the app is running in fullscreen:
func isFullScreen() -> Bool {
    guard let windows = CGWindowListCopyWindowInfo(.optionOnScreenOnly, kCGNullWindowID) else {
        return false
    }
    var dockCount = 0
    for window in windows as NSArray {
        guard let winInfo = window as? NSDictionary else { continue }
        if winInfo["kCGWindowOwnerName"] as? String == "Dock" {
            let windowLayer = winInfo["kCGWindowLayer"]
            if let layerValue = windowLayer as? Int64, layerValue < 0 {
                dockCount += 1
                if dockCount > 1 {
                    return true
                }
            }
        }
    }
    return false
}
I saw this error when running Kafka in the Ubuntu app inside Windows and trying to connect to the running Kafka server from my application on Windows; it failed to connect.
When I ran my application from the Ubuntu app inside Windows instead, it connected to the Kafka server successfully and the error was gone.
WARNING: This is a development server. Do not use it in a production setting. Use a production WSGI or ASGI server instead.
For more information on production servers see: https://docs.djangoproject.com/en/5.2/howto/deployment/
[19/Jul/2025 17:46:18] "GET / HTTP/1.1" 200 12068
[19/Jul/2025 18:15:04] "GET / HTTP/1.1" 200 12068
Not Found: /favicon.ico
[19/Jul/2025 18:15:06] "GET /favicon.ico HTTP/1.1" 404 2216
You can use aiogram-dialog, a brilliant library for managing all this underlying menu logic.
I found the following link on how to use threading with the Pika library on GitHub.
https://github.com/pika/pika/blob/main/examples/basic_consumer_threaded.py
This can happen when you have enabled a proxy in Insomnia.
It would be nice if Insomnia had an indicator in the interface showing that a proxy is being used.
Such a small thing can save a lot of frustration :-)
It’s possible they started using Azure Load Balancer internally as part of an architectural change. This might be for scaling ingestion endpoints, handling control plane traffic, or improving internal traffic routing. Unfortunately, there doesn’t seem to be any official documentation or announcements about this.
If you’re trying to dig deeper, a few things that might help:
Activity Logs: Check the Azure Activity Logs in the resource group to see if any backend resources are being provisioned or associated with the cluster.
Network Watcher: If enabled, use it to inspect traffic flows and confirm what’s going through the Load Balancer.
If it’s driving up costs, and you're not explicitly using a load balancer in your design, it’s definitely worth raising with support to see if there's an optimization or config workaround.
Would be great to hear what you find out.
I am in the same situation. I need to limit the consumption speed of my Kafka consumer as the external financial API I am calling from the consumer has a rate limit setting. Plus the rate limit in my case can be different per data type.
Based on this thread, I am thinking to implement the following pattern:
Set max.poll.records to one (1). I don't think I need to tune any other Kafka parameters.
Create a counter in a distributed cache (e.g. Hazelcast) where I save the timestamp of the first message I received and the count of messages received since then. I need to keep it in a distributed cache because I use a microservice architecture and can have multiple Kafka consumer groups attached to the same topic.
Let's say the external API can handle 3 requests per second.
The Kafka consumers quickly consume those 3 messages, and the counter state in Hazelcast then shows 3.
After receiving the 3rd message, the Kafka consumer pauses itself before it starts processing the received message. Together with the pause, I start a Spring timer waiting for 1 second, as this is the rate limit window. The timer schedule can depend on the data type and can be read at runtime from the Spring property file.
Then, when the timer fires after 1 second, it resumes the Kafka consumer.
I think this process can work. The concurrency of Kafka consumers plus a different rate limit per data type complicates the data I need to keep in the distributed cache, but it is not super hard to manage properly.
I think this way I can limit the rate of calls to the external API. Plus, if the rate limit is 3 requests per second, the first three messages will be served quickly, and then the consumer(s) will wait 1 second before continuing to listen for the next data from Kafka.
I have not implemented this yet, but I will do it soon.
I think it can work. Any thoughts are appreciated.
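The counting logic above (ignoring Kafka and Hazelcast, which only make the counter distributed and shared) can be sketched as a fixed-window limiter. This is a Python illustration of the idea, not the Spring implementation; the class and method names are made up:

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` calls per `window` seconds (the counter the post keeps in Hazelcast)."""

    def __init__(self, limit, window=1.0):
        self.limit, self.window = limit, window
        self.window_start, self.count = time.monotonic(), 0

    def acquire(self):
        """Return 0.0 if the call may proceed now, else how long to pause (like consumer.pause())."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0   # new window: reset the counter
        if self.count < self.limit:
            self.count += 1
            return 0.0                               # under the limit, no pause needed
        return self.window - (now - self.window_start)  # pause until the window rolls over

limiter = FixedWindowLimiter(limit=3, window=1.0)
delays = [limiter.acquire() for _ in range(4)]
# The first three calls go through immediately; the fourth must wait out the window.
```

In the Spring setup described above, a positive return value corresponds to pausing the consumer and scheduling the timer that later resumes it.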
I actually had the same issue a while back when I was trying to switch from my old Gmail account to a new one. I wanted everything to move with labels intact, and doing it manually via forwarding or IMAP was just super messy and time-consuming.
If you're wondering how to download old emails from Gmail and move them over with all the original labels, I'd recommend using a tool like Email Backup Wizard. It lets you download all your old emails locally and then import them into another Gmail account. The best part? It preserves labels during the transfer, which was a game-changer for me.
Hope this helps! Let me know if you need a quick step-by-step; I still have the process noted down somewhere. 😊
application()
A high-level function.
Automatically starts the Compose event loop.
You define your UI (e.g., Window {}) inside the block.
The app exits automatically when all windows are closed.
Ideal for simple apps.
awaitApplication()
A suspend function — gives more control over the app lifecycle.
You need to manually call exitApplication() to terminate the app.
Useful when you need to suspend main(), perform async setup, or manage multiple windows manually.
In JS there is a standard method for locale-formatted dates: toLocaleDateString().
For the Persian (Jalali) calendar it looks like:
yourDate.toLocaleDateString("fa-IR");
A sudden jump in Flutter app bundle size from 19.4 MB to 139 MB usually means new dependencies or assets were added, or build settings changed. Check for large assets, added packages, or debug vs. release build differences. Running flutter build apk --release --split-per-abi can help reduce size by generating separate APKs per architecture.
I was able to do this for buttons with a FlowRow parent:
FlowRow(
    horizontalArrangement = Arrangement.spacedBy((-1).dp),
) {
    // Buttons here
}
For my 1.dp borders, this works excellently and is about as simple as I believed this task would be.
Got errors "failed to open stream: No such file or directory" and "The file or directory is not a reparse point. (code: 4390)" for file, file_get_contents, scandir, etc.. Long story short, the script and the file to be read were both in the same Dropbox directory.
I didn't try to 'solve' the problem, maybe this could be made to work in the Dropbox framework. I just moved the project out of DropBox, now it works as expected.
Try commentwipe.com, which lets you sync all videos and comments. It also lets you search.
In particular, I trained a YOLOv11 segmentation model to detect the positions of Rubik's cubes.
First of all, the data has to be prepared in the YOLOv11 dataset format, and a data.yaml file has to be created:
train: ../train/images
val: ../valid/images
test: ../test/images
nc: 1  # number of classes must match the length of `names`
names: ['Cube']
Then, install ultralytics and train the model
!pip install ultralytics
from ultralytics import YOLO
model = YOLO('best.pt')  # a previously trained checkpoint; to train from scratch start from a pretrained one such as 'yolo11n-seg.pt'
model.train(data='./data/data.yaml', epochs=100, batch=64, device='cuda')
After using the segmentation model on a frame, I run some checks to see whether the object is a Rubik's cube or not:
import cv2
import numpy as np
from ultralytics import YOLO

def is_patch_cube(patch, epsilon=0.2):
    # A cube face should be roughly square: both aspect ratios near 1.
    h, w = patch.shape[:2]
    ratio, inverse = h / w, w / h
    if ratio < 1 - epsilon or ratio > 1 + epsilon:
        return False
    if inverse < 1 - epsilon or inverse > 1 + epsilon:
        return False
    return True

def is_patch_mostly_colored(patch, threshold=0.85):
    h, w, c = patch.shape
    num_pixels = h * w * c
    num_colored_pixels = np.sum(patch > 0)
    return num_colored_pixels / num_pixels > threshold

def check_homogenous_color(patch, color, threshold):
    # color_ranges: dict of color name -> (lower, upper) HSV bounds, defined elsewhere
    if color not in color_ranges:
        return False
    h, w = patch.shape[:2]
    patch = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    lower, upper = color_ranges[color]
    thres = cv2.inRange(patch, np.array(lower), np.array(upper))
    return (np.count_nonzero(thres) / (h * w)) > threshold

def find_segments(seg_model: YOLO, image):
    return seg_model(image, verbose=False)

def get_face(results, n, homogenity_thres=0.6):
    for i, r in enumerate(results):
        original_img = r.orig_img
        img_h, img_w, c = original_img.shape
        if r.masks is not None:
            for obj_i, mask_tensor in enumerate(r.masks.data):
                mask_np = (mask_tensor.cpu().numpy() * 255).astype(np.uint8)
                if mask_np.shape[0] != original_img.shape[0] or mask_np.shape[1] != original_img.shape[1]:
                    mask_np = cv2.resize(mask_np, (img_w, img_h), interpolation=cv2.INTER_NEAREST)
                mask_np, box = simplify_mask(mask_np, eps=0.005)  # simplify_mask: author's helper
                obj = cv2.bitwise_and(original_img, original_img, mask=mask_np)
                x1, y1, w, h = box
                x2, y2 = x1 + w, y1 + h
                x1 = max(0, x1)
                y1 = max(0, y1)
                x2 = min(original_img.shape[1], x2)
                y2 = min(original_img.shape[0], y2)
                cropped_object = obj[y1:y2, x1:x2]
                if not is_patch_cube(cropped_object):
                    continue
                if not is_patch_mostly_colored(cropped_object):
                    continue
                colors, homogenity = find_colors(cropped_object, n)
                if sum([sum(row) for row in homogenity]) < homogenity_thres * len(homogenity) * len(homogenity[0]):
                    continue
                return colors, cropped_object, mask_np, box
    return None, None, None, None

def find_colors(patch, n):
    # Split the face into an n x n grid and classify each cell's color.
    h, w, c = patch.shape
    hh, ww = h // n, w // n
    colors = [['' for _ in range(n)] for __ in range(n)]
    homogenity = [[False for _ in range(n)] for __ in range(n)]
    for i in range(n):
        for j in range(n):
            pp = patch[i*hh:(i+1)*hh, j*ww:(j+1)*ww]
            colors[i][j] = find_best_matching_color_legacy(
                get_median_color(pp), tpe='bgr')  # whatever function you want to detect colors
            homogenity[i][j] = check_homogenous_color(pp, colors[i][j], threshold=0.5)
    return colors, homogenity
We can use this as follows:
results = find_segments(model, self.current_frame)
face, obj, mask, box = get_face(results, n=self.n, homogenity_thres=0.6)
Thanks to @ChristophRackwitz for recommending the use of semantic segmentation models.
Function Enter-AdminSession {
    <#
    .SYNOPSIS
        Self-elevate the script if required
    .LINK
        Source: https://stackoverflow.com/questions/60209449/how-to-elevate-a-powershell-script-from-within-a-script
    #>
    $scriptInvocation = (Get-Variable MyInvocation -Scope 1).Value.Line
    if (-Not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
        # we need to `cd` to keep the working directory the same as before the elevation; -WorkingDirectory $PWD does not work
        Start-Process -FilePath PowerShell.exe -Verb Runas -ArgumentList "cd $PWD; $scriptInvocation"
        Exit
    }
}
With this function in a common module that you imported or directly in your script, you can call Enter-AdminSession at the right point in your script to gain admin rights.
<script>
function duplicate() {
    var action = "CreationBoard";
    $.ajax({
        type: "POST",
        url: "file.php",
        data: {action: action},
        success: function(output) {
            alert("Response from php: " + output);
        }
    });
}
</script>
Yes, an abstract class in Java can extend another abstract class. This is a common and valid practice in object-oriented design, particularly when dealing with hierarchies of related concepts where each level introduces more specific abstract behaviors or implements some common functionality.
When an abstract class extends another abstract class:
Inheritance of Members:
It inherits all the members (fields, concrete methods, and abstract methods) from its parent abstract class.
Abstract Method Implementation (Optional):
The child abstract class is not required to implement the abstract methods inherited from its parent. It can choose to leave them abstract, forcing subsequent concrete subclasses to provide the implementation.
Adding New Abstract Methods:
The child abstract class can declare new abstract methods specific to its level of abstraction.
Providing Concrete Implementations:
It can also provide concrete implementations for some or all of the inherited abstract methods, or for its own newly declared abstract methods.
This allows for a gradual refinement of abstract behavior down the inheritance hierarchy, with concrete classes at the bottom of the hierarchy ultimately providing the full implementation for all inherited abstract methods.
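As a quick illustration of this layered refinement, here is the same idea sketched in Python with the abc module rather than Java (the class names are made up for illustration; in Java the mechanics are the same with the abstract keyword):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self): ...          # abstract method declared at the top level

class Polygon(Shape):            # an abstract class extending an abstract class
    def describe(self):          # concrete method added at this level
        return f"polygon with area {self.area()}"

    @abstractmethod
    def sides(self): ...         # new abstract method; area() is left abstract too

class Square(Polygon):           # concrete class at the bottom implements everything
    def __init__(self, s):
        self.s = s

    def area(self):
        return self.s * self.s

    def sides(self):
        return 4

sq = Square(3)
```

Polygon inherits area() without implementing it, adds sides(), and contributes the concrete describe(); only Square, the concrete leaf, can actually be instantiated.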
It is a very easy task; I created one at festivos en calendario:
create table calendar (dt, holiday) as
select trunc(sysdate, 'yy') + level - 1,
       case when trunc(sysdate, 'yy') + level - 1 in (select holiday_date
                                                      from holidays)
            then 'Y'
            else 'N'
       end
from dual
connect by level <= trunc(sysdate) - trunc(sysdate, 'yy') + 1;
A custom Gradle task may help; see this article on how to do it on your own: https://medium.com/@likeanyanorigin/say-goodbye-to-hardcoded-deeplinks-navigation-component-xmls-with-manifest-placeholders-3efa13428cb4
location.absolute
Location value for the plotshape and plotchar functions. The shape is plotted on the chart using the indicator value as the price coordinate.
I am a beginner too, but this is something I have used in the past.
If you are using Expo, use this package I created. It works on both iOS and Android:
https://www.npmjs.com/package/expo-exit-app
I am trying to do the same with Spring Boot 3.4.4, but it is not working for me.
I migrated to reactive programming with Spring WebFlux, removing the dependency on Tomcat in order to deploy with Netty. I have included the following in my pom:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <!-- Exclude the Tomcat dependency -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-boot-starter</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
    <version>2.8.9</version>
</dependency>
In application.properties I have this:
springdoc.api-docs.enabled=true
springdoc.api-docs.path=/api-docs
The issue I have is: if I launch my application and call http://localhost:8080/api-docs, it returns an error:
java.lang.NoSuchMethodError: 'void io.swagger.v3.oas.models.OpenAPI.<init>(io.swagger.v3.oas.models.SpecVersion)'
at org.springdoc.core.service.OpenAPIService.build(OpenAPIService.java:243) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.api.AbstractOpenApiResource.getOpenApi(AbstractOpenApiResource.java:353) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiResource.openapiJson(OpenApiResource.java:123) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiWebfluxResource.openapiJson(OpenApiWebfluxResource.java:119) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[na:na]
at org.springframework.web.reactive.result.method.InvocableHandlerMethod.lambda$invoke$0(InvocableHandlerMethod.java:208) ~[spring-webflux-6.2.5.jar:6.2.5]
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:297) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:478) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:139) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onSubscribe(MonoZip.java:470) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:152) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55) ~[reactor-core-3.7.4.jar:3.7.4]
Can anyone help me with this issue?
Thanks a lot!!
Important
If you are using something absolutely positioned inside a column item, then you have to make each item container inline-block with width 100% (the width is optional); then it will work fine. Otherwise you may face layout issues.
I'm having the same issue, but electron-builder works for me.
Please verify that Neovim has clipboard support, :echo has('clipboard')
Run the built-in health check using :checkhealth
Install a clipboard provider using "sudo apt install xclip" (X11) or "sudo apt install wl-clipboard" (Wayland)
file:// reads from the local filesystem; http sends a request to a web server and gets its response back.
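A small Python demonstration of the difference (the temp file is only there so the file:// URL points at something real):

```python
import pathlib
import tempfile
import urllib.request

# file:// resolves against the local filesystem: no network is involved.
tmp = pathlib.Path(tempfile.mkdtemp()) / "hello.txt"
tmp.write_text("local content")
with urllib.request.urlopen(tmp.as_uri()) as resp:   # as_uri() -> "file:///..."
    local_data = resp.read().decode()

# http:// would instead open a socket to a server and return its response, e.g.:
# urllib.request.urlopen("http://example.com")
```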
Here’s what could be happening:
---
1. **Default WordPress Behavior:**
WordPress often uses `wp_redirect()` for multisite sub-site resolution. If a site isn’t fully set up or mapped properly, WordPress may default to a temporary 302 redirect.
2. **ELB-HealthChecker/2.0 (from AWS):**
This request is from **AWS Elastic Load Balancer (ELB)** health checks. ELB makes a plain `GET /` request. If the root site (or sub-site) is not fully responding or mapped, WordPress may redirect it with a 302 temporarily.
3. **Multisite Rewrite Rules:**
Your `.htaccess` rewrite rules seem mostly correct, but the custom rules at the end (`wptmj/$2`) may be misrouting requests, especially if `wptmj` is not a valid subdirectory or symlinked path.
---
### ✅ What You Can Try:
#### 1. **Force WordPress to Use 301 Redirects:**
You can try modifying redirection functions using `wp_redirect_status` hook in `functions.php`:
```php
add_filter('wp_redirect_status', function($status) {
    return 301; // Force 301 instead of 302
});
```
To prevent Android from killing your app during GPS tracking for field team purposes, consider running your tracking service as a foreground service with a persistent notification—this signals Android that your app is actively doing something important, reducing the likelihood of it being shut down. Also, ensure battery optimization is disabled for your app in device settings.
If you need a reliable and ready-made solution, tools like Workstatus offer robust background GPS tracking for field teams without being interrupted, ensuring continuous location logging even when the app isn’t actively used.
Following a suggestion from @dan1st to use github.event.workflow_run.artifacts_url to fetch artifacts via the GitHub API, here are the updated files with the required changes. The Deploy workflow now uses a script to download the artifact dynamically, replacing the failing Download Build Artifact step.
name: Deploy to Firebase Hosting on successful build
'on':
  workflow_run:
    workflows: [Firebase Deployment Build]
    types:
      - completed
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    permissions:
      actions: read # Added to fix 403 error
      contents: read # Added to allow repository checkout
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }} # Explicitly pass the token
          repository: tabrezdal/my-portfolio-2.0 # Ensure correct repo
      - name: Debug Workflow Context
        run: |
          echo "Triggering Workflow Run ID: ${{ github.event.workflow_run.id }}"
          echo "Triggering Workflow Name: ${{ github.event.workflow_run.name }}"
          echo "Triggering Workflow Conclusion: ${{ github.event.workflow_run.conclusion }}"
      - name: Install jq
        run: sudo apt-get update && sudo apt-get install -y jq
      - name: Fetch and Download Artifacts
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Get the artifacts URL from the workflow_run event
          ARTIFACTS_URL="${{ github.event.workflow_run.artifacts_url }}"
          echo "Artifacts URL: $ARTIFACTS_URL"
          # Use the GitHub API to list artifacts
          ARTIFACTS=$(curl -L -H "Authorization: token $GITHUB_TOKEN" "$ARTIFACTS_URL")
          echo "Artifacts: $ARTIFACTS"
          # Extract the artifact name (assuming 'build' as the name)
          ARTIFACT_NAME=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].name' || echo "build")
          echo "Artifact Name: $ARTIFACT_NAME"
          # Download the artifact using the GitHub API
          DOWNLOAD_URL=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].archive_download_url')
          if [ -z "$DOWNLOAD_URL" ]; then
            echo "No download URL found, artifact may not exist or access is denied."
            exit 1
          fi
          curl -L -H "Authorization: token $GITHUB_TOKEN" -o artifact.zip "$DOWNLOAD_URL"
          unzip artifact.zip -d build
          rm artifact.zip
      - name: Verify Downloaded Artifact
        run: ls -la build || echo "Build artifact not found after download"
      - name: Debug Deployment Directory
        run: |
          echo "Current directory contents:"
          ls -la
          echo "Build directory contents:"
          ls -la build || echo "Build directory not found"
      - name: Deploy to Firebase
        uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: ${{ secrets.GITHUB_TOKEN }}
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          channelId: live
          projectId: tabrez-portfolio-2
For a general pbar from tqdm.auto, the easiest working solution I found is:
pbar.n = pbar.total
pbar.close()
break
Good day mates,
Now, I am sure many of you have been wanting to put a hammer to the Phomemo M08 Bluetooth thermal printer, as I would have done a few days ago. But... I managed to have a glass of Old Pulteney rum and meditated for a bit. I decided... F#$%^ IT!! This printer is not going to get the best of me.
The script in the PPD from the Phomemo Ubuntu/CentOS driver is jacked and will not work regardless of what you try. Why? It is made for the 4" x 6" label printer, whereas the M08F is an A4-format printer (or any other thing you want to stick into it to print at that size).
Workaround:
I installed a driver called Generic Thermal Printer Driver. This gave me a Generic.PPD, which did activate the printer through CUPS. But it still gave issues, as it held the print jobs due to the margins and settings in the script. So I removed all the settings for smaller printers to make it default to A4 format only, bypassing the print output. This worked: the printer got the command and printed all info through CUPS. However... it wanted to play hardball and gave me an enlarged A4 format on a 4" x 3" layout.
I decided to let Phomemo know about the issue, as the driver online is not functioning under Ubuntu but does with the changes I made. So... I got hit up by Barry... he was surprised that someone actually did some research on this and gave feedback. With my changes he added what was necessary to complete the driver settings, and this is what came out.
The new Linux drivers are at the bottom of https://pages.phomemo.com/#/m08f
I would suggest downloading it, unpacking, opening the folder in a shell, sudo-installing the file, and rebooting.
Now, I haven't gotten Bluetooth to print a file yet... it keeps disconnecting (Ubuntu issue), so I'll get back on that if I can. Meaning: delete the settings you have for this printer, install the file, and connect directly through USB. Reboot, add the printer, and install it... this should work. You'll have to set up your printer output margins to get the print you want.
Problem is solved... think I deserve a beer... lol
Have a good weekend
Looks like it's this issue: https://github.com/angular/components/pull/31560. It's related to OS settings.
This is a sample Perlin noise generator I developed a year ago for Python. I used bitwise operations for faster sampling.
I think "return [self.cdf_inv(p) for p in u]" is the root cause of the slowdown, because each call to cdf_inv(p) performs scalar root finding, which runs on the CPU and is not GPU-accelerated even after being wrapped in TensorFlow. So you should try vectorizing the inverse CDF.
You might see a significant speedup if you eliminate the scalar root finding.
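One common way to vectorize an inverse CDF is to evaluate the forward CDF on a grid once and invert it by interpolation, instead of root-finding per sample. A NumPy sketch with a stand-in exponential CDF (substitute your distribution's CDF and a grid covering its support):

```python
import numpy as np

# Stand-in CDF: exponential with rate 1 (replace with your distribution's CDF).
def cdf(x):
    return 1.0 - np.exp(-x)

# Precompute the CDF on a grid once; a monotone CDF means interpolating the
# swapped axes (cdf values -> x) gives its inverse.
grid = np.linspace(0.0, 20.0, 4001)
cdf_vals = cdf(grid)

def cdf_inv_vectorized(u):
    return np.interp(u, cdf_vals, grid)

u = np.random.default_rng(0).uniform(0.001, 0.999, size=100_000)
samples = cdf_inv_vectorized(u)   # one vectorized call, no Python-level loop
```

Accuracy is bounded by the grid spacing, so a denser grid (or a non-uniform one concentrated where the CDF is steep) trades a one-time setup cost for fast, loop-free sampling.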
import av
import io

with open('video.mp4', 'rb') as fp:
    video_data = fp.read()

video_buffer = io.BytesIO(video_data)
container = av.open(video_buffer, mode='r', format='mp4')
duration = container.duration / av.time_base  # seconds [float]
In my personal opinion, this isn't an issue with TypeORM itself, but rather with the database design. You need to identify the bottleneck first. Perhaps you could try optimizing your queries initially. Typically, you'd start by checking how long a query takes to execute directly from your database. If the database itself takes, say, 9 seconds, it's perfectly normal for TypeORM to receive the data in around 10 seconds due to the latency involved in communicating with your NestJS application.
A good first step might be to select only the columns you're actually using. If it's still slow, then consider adding indexes.
Check whether the Pylance extension is installed. If not, install it, and check the items below in your settings; then it will work:
"python.languageServer": "Pylance",
"jupyter.enableExtendedPythonKernelCompletions": true,
"python.autoComplete.extraPaths": [
    "C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe"
]
import java.time.Duration;

driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
import io
import pandas as pd
import requests

# Set location and time
lat, lon = -6.000, 38.758
start, end = '20150101', '20241231'

# Fetch daily rainfall (PRECTOT) from NASA POWER
url = (f"https://power.larc.nasa.gov/api/temporal/daily/point?"
       f"parameters=PRECTOT&start={start}&end={end}&latitude={lat}&longitude={lon}&format=CSV")
csv = requests.get(url).text.splitlines()

# Load into a DataFrame, skipping the metadata header lines
df = pd.read_csv(io.StringIO("\n".join(csv[10:])))
# to_datetime assembles dates from columns named year/month/day, so rename YEAR/MO/DY first
df['DATE'] = pd.to_datetime(
    df.rename(columns={'YEAR': 'year', 'MO': 'month', 'DY': 'day'})[['year', 'month', 'day']])

# Monthly total rainfall
monthly = df.groupby(df['DATE'].dt.to_period('M'))['PRECTOT'].sum().reset_index()
monthly.columns = ['Month', 'Rainfall_mm']
monthly['Month'] = monthly['Month'].astype(str)

# Annual total rainfall
annual = df.groupby(df['DATE'].dt.year)['PRECTOT'].sum().reset_index()
annual.columns = ['Year', 'Rainfall_mm']

# Export
monthly.to_csv("saadani_monthly_rainfall_2015_2024.csv", index=False)
annual.to_csv("saadani_annual_rainfall_2015_2024.csv", index=False)
I did some digging and it seems that this has been fixed in a new beta macOS version. See HERE on apple forum.
I did ask them to specify whether this is the Public or Developer Beta version; still waiting on a response.
Hope this fixes our issues 🙏🏼
Add style="flex-direction: row !important;" to the UL element.
<ul class="splide__list" style="flex-direction: row !important;">
</ul>
I gave up; I use "‘" instead, which looks similar.
{
    "Scale": true,
    "Scale X": 4.9625,
    "Scale Y": -5.11875,
    "Scale Z": 5.9625,
    "Position": true,
    "Position X": -0.5749999,
    "Position Y": -1.9,
    "Position Z": -4.975,
    "Rotation": false,
    "Rotation X": -9.0,
    "Rotation Y": -6.75,
    "Rotation Z": -2.25,
    "Change Swing": false
}
If JHipster JDL isn't generating entities, it's usually due to syntax issues or incorrect setup. Double-check your JDL file and project configuration to resolve the issue.
The solution is to display the app open ad after app launch, once your splash screen has been dismissed and your MaterialApp/CupertinoApp/WidgetsApp has been displayed. That's the accepted approach and there's nothing else you can do.
Use:

from .manager import userclassname  # note: no ".py" in an import path
objects = userclassname()

at the end of your custom user model in models.py. It overrides the default manager and tells Django about your custom one.
Hope it helps :)
If you want to commit your whole repository, then you can just remove all contents of the .gitignore file
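As a self-contained sketch in a throwaway repository (file names are made up), showing that a previously ignored file gets staged once .gitignore is emptied:

```shell
# Demo in a temporary repository
cd "$(mktemp -d)"
git init -q .
echo '*.log' > .gitignore        # ignore all .log files
echo 'data' > build.log
git add -A                       # build.log is skipped because it is ignored
> .gitignore                     # empty the ignore rules (or delete the file)
git add -A                       # build.log is now staged as well
git ls-files                     # lists .gitignore and build.log
```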
The suggestions about creating a proxy server for this are completely out of the question for mobile apps. Things you can put on the server, you absolutely should put on the server, but if your goal is to show a Google map on the mobile device using native controls, a proxy is not an option.
If you're using something like the Places API to look up addresses, you should absolutely put that on the server.
In the Google Cloud console, restrict the API key to the bundle ID on iOS, and to the package name and SHA-1 fingerprint on Android.
Then you should rotate your keys from time to time.
The codelabs Google provides on this integrate with the underlying Maps SDKs in the same way the Flutter package does.
https://developers.google.com/codelabs/maps-platform/maps-platform-ios-swiftui#5
I also hit this issue once: the EventBridge rule was being triggered but was not invoking my Lambda function (I was using Terraform as the IaC tool).
What did I do to tackle this?
I added the EventBridge rule as a trigger for my Lambda manually, with SNS as the destination, to test the setup. That worked fine for me.
See the attached image for reference.
Use PowerShell and run the commands below.

Enable script execution:

Set-ExecutionPolicy RemoteSigned

Activate the environment:

.\env\Scripts\Activate.ps1
Please help me. Are there any other methods?
"How can I send a message to a Telegram bot using Python and the requests library?"
"How do I use a Telegram bot token to send a message via the API?"
If you have specific code or error messages, include them for better help.
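For the first question, here is a minimal sketch using requests. The token and chat ID below are placeholders (you'd get a real token from @BotFather, and the chat ID e.g. from the getUpdates method):

```python
import requests

BOT_TOKEN = "123456:ABC-placeholder-token"  # hypothetical token from @BotFather
CHAT_ID = "123456789"                       # hypothetical chat id

def send_message(text: str) -> dict:
    """Send `text` to CHAT_ID via the Bot API's sendMessage method."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    resp = requests.post(url, data={"chat_id": CHAT_ID, "text": text}, timeout=10)
    resp.raise_for_status()  # surface HTTP errors (e.g. a bad token)
    return resp.json()

# send_message("Hello from Python!")  # uncomment once you have real credentials
```

With real credentials, the returned JSON has "ok": true and a "result" object describing the sent message.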
Replying to umesh: that shader is still available via a Wayback Machine snapshot:
https://web.archive.org/web/20210617032819/http://wiki.unity3d.com/index.php?title=SkyboxBlended
Add the --host option to your package.json start script, like:
react-native start --host localhost
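For instance, the scripts section of package.json might look like this (a sketch; merge it with your existing scripts):

```json
{
  "scripts": {
    "start": "react-native start --host localhost"
  }
}
```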
In my situation, I followed the steps above and tried the whole night with Chrome, but the JSON file just wouldn't download, even though the console showed it "had" been downloaded. The next day I switched to Safari and it worked. Just try another browser if you get stuck.
For Mac users (macOS native tabs, in 10.12 and newer), you need to set "native_tabs" to "preferred". Sublime Text still opens a new window, but the OS organizes the windows into native tabs.
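That is, in Preferences → Settings, add (a config fragment):

```json
{
    "native_tabs": "preferred"
}
```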
The following response might be helpful to you.
SMSMobileAPI did it — if you're interested, take a look here: https://smsmobileapi.com/receive-sms/
I have kind of the same problem.
I have some SVG icons, and I want to replace the default Ant Design Vue Tree icons with them.
Can someone help me with that?
This is my code. I'm using Tailwind and TypeScript, and this is a component that is shown in app.vue.
How do I change the default icons?
<template>
<Toolbar class="mt-16" />
<a-tree
class="mt-4 rounded-3xl p-2 w-2/3 text-[#171717] bg-[#D9D9D9]"
v-model:expandedKeys="expandedKeys"
v-model:selectedKeys="selectedKeys"
show-line
:tree-data="treeData"
>
<template #switcherIcon="{ switcherCls }"><down-outlined :class="switcherCls" /></template>
</a-tree>
</template>
<script lang="ts" setup>
import { ref } from 'vue'
import Toolbar from './Toolbar.vue'
import { DownOutlined } from '@ant-design/icons-vue'
import type { TreeProps } from 'ant-design-vue'
const expandedKeys = ref<string[]>(['0-0-0'])
const selectedKeys = ref<string[]>([])
const treeData: TreeProps['treeData'] = [
{
title: 'parent 1',
key: '0-0',
children: [
{
title: 'parent 1-0',
key: '0-0-0',
children: [
{
title: 'leaf',
key: '0-0-0-0',
},
{
title: 'leaf',
key: '0-0-0-1',
},
{
title: 'leaf',
key: '0-0-0-2',
},
],
},
{
title: 'parent 1-1',
key: '0-0-1',
children: [
{
title: 'leaf',
key: '0-0-1-0',
},
],
},
{
title: 'parent 1-2',
key: '0-0-2',
children: [
{
title: 'leaf',
key: '0-0-2-0',
},
{
title: 'leaf',
key: '0-0-2-1',
},
],
},
],
},
]
</script>
Using a build tool like Maven or Gradle is the most maintainable and professional way.
For personal R&D use, there are also several shortcuts via the CMD command prompt:
javac MyApp.java // Compile
java MyApp // Run
java -jar myApp.jar // Run a JAR file
javac -d out src\my\pkg\*.java // Compile to specific folder
OR
jar cfe MyApp.jar my.pkg.MainClass -C out .
Today I finally got around to creating multiple schemes for my different environments (local, staging, prod) so I could be a real dev and stop commenting out my different server urls depending on which environment I was building for.
My previews stopped working with the error "Cannot find previews. Check whether the preview is compiled for the current scheme and OS of the device used for previewing...".
Wut.
I must have looked at every possible answer to anything related and tried dozens of "fixes" that didn't work to fix the issue.
I finally figured it out.
The mistake that ultimately broke the previews was that I gave each scheme a different "Product Name" (Target -> Build Settings -> Packaging -> Product Name). I wanted each scheme (local, staging, prod) to show up with a different name on device; if I had 3 app icons all named MyApp, I wouldn't be able to tell them apart. Previews did not like this.
My solution was to keep all "Product Names" the same. Now the previews work for all of my schemes. And the fix for having each scheme show up on device with a different name was actually to update the "Bundle Display Name" setting: Target -> Build Settings -> Info.plist Values -> Bundle Display Name.
Now my previews work for all schemes and each scheme's app shows up on device with a different name.
:)
cheers
Use a do-while loop in the main function, or a while loop in the language function.
Close your eyes and try using your app.
In other words, try getting into unsighted users' minds and understand how they use the web. I'd start by watching some short and sweet videos, e.g.:
Keyboard navigation
https://www.youtube.com/watch?v=N9Q8oF0Lx2M
(!!) Screen reader:
https://www.youtube.com/watch?v=Hp8dAkHQ9O0
https://www.youtube.com/watch?v=q_ATY9gimOM
https://www.youtube.com/watch?v=dEbl5jvLKGQ
https://www.youtube.com/watch?v=7Rs3YpsnfoI
Screen magnification:
Install the slugify package: npm install slugify

and use it like this:

var slugify = require('slugify')
// slugify your string:
let yourSlug = slugify('the article title')
When using the Google Maps Embed API, setting the iframe height to less than 200px will hide most of the default UI elements.
<iframe
style={{
border: 0,
width: '100%',
// Using a height under 200px hides Google Maps Embed UI elements
height: '199px',
}}
tabIndex={-1}
loading="lazy"
referrerPolicy="no-referrer-when-downgrade"
src="https://www.google.com/maps/embed/v1/place?key=API_KEY
&q=Space+Needle,Seattle+WA"
/>
Rename the column on the property:

#[ORM\Column(name: "new_name", type: "string")]
private string $newName;

Point the entity at a custom repository:

#[ORM\Entity(repositoryClass: DummyEntityRepository::class)]

Then override findBy() in the repository so the old criteria key keeps working:

class DummyEntityRepository extends EntityRepository
{
    public function findBy(array $criteria, ?array $orderBy = null, $limit = null, $offset = null)
    {
        if (isset($criteria['new_name'])) {
            trigger_deprecation('my-lib', '1.0', '"new_name" is deprecated. Use "newName".');
            $criteria['newName'] = $criteria['new_name'];
            unset($criteria['new_name']);
        }

        return parent::findBy($criteria, $orderBy, $limit, $offset);
    }
}
No breaking changes. Old keys work. Deprecation warning works. Future-safe.
I faced the same issue earlier; downgrading google_sign_in to 6.3.0 fixed it, but I'm still curious how to use the constructor in the latest version of google_sign_in.
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Sat Jul 19 11:31:43 HKT 2025
There was an unexpected error (type=Not Found, status=404).
No message available
Try disabling the extension “Python Environments”
I switched my ISP from Jio to Airtel and, surprisingly, it just worked.
The above method doesn't work in Nuxt 4. How do you configure cross-domain requests in Nuxt 4?
How to switch between Light, Dark, and Tinted appearances in the Xcode Simulator (iOS 18+)
If you're looking to test your app icon or UI under different system appearances in the Xcode Simulator, here's how to toggle between modes. This is especially useful when checking how your app icons or assets behave in Light, Dark, or the new Tinted mode (iOS 18 only):
Long press on an empty space on the Home Screen or on any App Icon.
Tap the Edit button in the top-left corner (occasionally appears top-right depending on Simulator version).
Tap Customize.
Choose between Light, Dark, Automatic, or Tinted.
📝 Note: Tinted mode is only available starting in iOS 18.
There may be an issue with your Tailwind CSS version; it happened to me too. You can continue with the previous version.
Collect 3 or more thread dumps from the driver and any active executors, spacing them about 5 seconds apart. This should help identify where the Spark job is getting stuck.
I found a similar question that was already answered a few years back:
Another way to disable key binding for it:
Xcode -> Preferences -> 'Key Bindings' Tab -> Search for 'Quick Help' -> Find associated shortcut -> Clear or change the keybinding field
In modern versions of MySQL (8.0.13 and newer), you just need to wrap the expression in parentheses:
CREATE TABLE FOO (
id CHAR(36) PRIMARY KEY DEFAULT (UUID())
);
See also knowledge base on UUID:
https://dev.mysql.com/blog-archive/storing-uuid-values-in-mysql-tables/
Well, I'll be damned... I used to send it to myself using WhatsApp web, then I just open the same chat on my phone, download the file and click it again to execute.
I tried a different method (thanks, @CommonsWare), just plugging the phone and using the file explorer and it works.
My best guess is that my PC does something to the file when sending it through WhatsApp web. Something that my work PC does not do for some reason.
Dark magic!
Try sending a manual task (rather than a scheduled one) to the Celery worker; does it work?
Have you checked that the Celery processes have appropriate file permissions? Celery beat writes its schedule to a file by default.
Have you checked the RabbitMQ logs?
You seem to be running the Celery worker without a virtual environment; this might be an issue.
wp-content/mu-plugins/jet-wc-orders-status-callback.php
<?php
add_filter( 'jet-engine/listings/allowed-callbacks', static function ( $callbacks ) {
$callbacks['jet_get_wc_order_status'] = __( 'WC get order status label', 'jet-engine' );
return $callbacks;
}, 9999 );
/**
* @param $status
*
* @return string
*/
function jet_get_wc_order_status( $status ) {
if ( ! is_scalar( $status ) ) {
return '-';
}
if ( ! function_exists( 'wc_get_order_statuses' ) ) {
return $status;
}
$labels = wc_get_order_statuses();
return $labels[ $status ] ?? '-';
}
I ended up doing it this way since "localhost" was already part of the default set...
livenessProbe:
httpGet:
path: /login/
port: http
httpHeaders:
- name: Host
value: localhost
The CSS property box-decoration-break can be used to repeat margins and paddings on all pages.
.page {
box-decoration-break: clone;
padding-top: 1.5in;
padding-bottom: 1.5in;
page-break-after: always;
}
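A minimal markup sketch to pair with that rule (the class name follows the CSS above). The padding is repeated because box-decoration-break: clone applies the box decoration to every fragment the element is split into:

```html
<!-- One long element that fragments across printed pages;
     with box-decoration-break: clone, the 1.5in top and bottom
     padding is drawn on each page's fragment, not just the first
     and last. -->
<div class="page">
  ... long content spanning several printed pages ...
</div>
```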