.${categoryName} { display: flex; flex-direction: row-reverse; }
Removing flex-wrap: wrap; helped.
It's because stackoverflow, like many sites, uses a widely accepted 960px page container width standard.
Within your full-width footer container, put a second, invisible container for the content, whose width is the same as the width of the main document body, e.g. 960px. Here is stackoverflow's body container:
#content {
    margin: 0 auto;
    width: 960px;
    min-height: 450px;
}
And here is the container for the footer content:
.footerwrap {
    margin: 0 auto;
    text-align: left;
    width: 960px;
}
Additionally, I don't think the footer 'sticks' to anything. It's simply at the end of the document, so it's rendered at the bottom of the page. I might be wrong as I haven't looked at stackoverflow's html source in that much detail, but anything else seems like overkill.
I faced this issue earlier, and I tried the following steps:
flutter clean // or fvm flutter clean
flutter pub get // or fvm flutter pub get
flutter run
But after running the app, I faced another issue where Flutter recommended updating the Kotlin version.
🔹 It seems like even after cleaning, Flutter still expects the latest Kotlin version in both settings.gradle and build.gradle.
What I’ve already done:
Updated org.jetbrains.kotlin.android to 2.2.0 inside settings.gradle
Tried cleaning and rebuilding
What I want to know:
Is there anything else I need to update in build.gradle or gradle-wrapper.properties?
Does Kotlin 2.2.0 require any specific Android Gradle Plugin or Gradle version?
Is there a recommended way to handle this Kotlin version mismatch with Flutter?
Binary search and binary search tree (BST) are two different concepts, though they sound similar. Binary search is an algorithm used to find an element in a sorted array or list. It works by repeatedly dividing the search range in half — comparing the middle element with the target, and then deciding whether to search in the left or right half.
On the other hand, a binary search tree is a data structure, specifically a type of binary tree where each node has at most two children. The left child of a node contains values smaller than the node, and the right child contains values greater.
While both use the "binary" idea of halving or branching into two parts, binary search is an algorithm, and a binary search tree is a structure that stores data in a way that supports binary-search-like operations. Binary search applies to sorted linear structures like arrays, whereas a BST organizes data hierarchically using nodes. So, they are related by concept but are not the same.
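To make the distinction concrete, here is a short illustrative Python sketch of the algorithm side (binary search over a sorted list):

```python
def binary_search(sorted_list, target):
    """Binary search: repeatedly halve the search range of a sorted list."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1              # not found

print(binary_search([2, 5, 8, 12, 16], 12))  # → 3
```

A BST, by contrast, would store those same values in linked nodes, with smaller values down the left branches and larger values down the right.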
You're right to be cautious. Microsoft 365 does require the device to connect to the internet about every 30 days to keep the license active. If it doesn't, Office apps go into reduced functionality mode (basically read-only).
Unfortunately, Microsoft doesn’t expose the actual license validation date or the number of days remaining in any way that's accessible via VBA or even PowerShell. It's all managed internally by the Click-to-Run service, and there’s no registry key or public API that provides a countdown.
That said, you can work around it by using VBA to log the last time the machine was online. On workbook open, check for an internet connection. If it's online, write the current date to a local text file or a hidden worksheet. Each time the workbook opens, compare today's date to that last online date. If you're getting close to 30 days, display a warning message.
You can probably find example scripts online that do this, or write a simple one yourself. It’s not too complex.
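The same last-online bookkeeping can be sketched in Python (illustrative only; the original suggestion is VBA run on workbook open, and the file name and warning threshold here are made up):

```python
import datetime
from pathlib import Path

STATE = Path("last_online.txt")  # hypothetical log file next to the workbook

def record_online(today=None):
    """Call this when a connectivity check succeeds: refresh the stamp."""
    today = today or datetime.date.today()
    STATE.write_text(today.isoformat())

def offline_warning(today=None, warn_after=25):
    """Return a warning string when nearing the 30-day window, else None."""
    today = today or datetime.date.today()
    if not STATE.exists():
        return None
    last = datetime.date.fromisoformat(STATE.read_text().strip())
    days = (today - last).days
    if days >= warn_after:
        return f"Offline for {days} days; Office may drop to read-only at 30."
    return None

record_online(datetime.date(2025, 6, 1))
print(offline_warning(datetime.date(2025, 6, 28)))
```

The VBA version would do the same thing: write the date on open when online, and compare on each subsequent open.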
To answer my own question, mock uses the rpm from the target OS to build the package.
The issue with the checkBalance function not appearing in the Candid UI or responding to CLI calls is likely due to its incorrect declaration using async in a shared query function, which isn't necessary and can cause it to be excluded from the generated interface. To fix this, change the method signature to public shared query func checkBalance(): Nat (removing async), then stop the DFX server, delete the .dfx folder, and restart everything cleanly using dfx stop, rm -rf .dfx, dfx start --clean, and dfx deploy. After redeploying, the method should appear in the Candid UI and respond properly to dfx canister call commands.
With the new constraints imposed in the edited question, the answer is "there is no implementation possible."
Please accept this answer so I can claim the meaningless 50 points.
sudo ln -s /usr/bin/nodejs /usr/bin/node
This will make the system recognize the node command by linking it to nodejs.
The answer above is right; please mark it as useful.
I am just adding a screenshot for better visualisation.
Having the same issue with Python 3.12 and Python 3.13 on Windows 11 x64.
In my case, as it turns out, it's because I have the following in my PYTHONPATH:
C:\Program Files\QGIS 3.28.2\apps\qgis\python
C:\Program Files\QGIS 3.28.2\apps\Python39\Lib\site-packages
After removing these entries, there was no need to do anything special with upgrading pip or choosing a specific numpy version.
None of these solutions worked for me; this was a waste of time.
Did anyone find a solution to this on Windows using Python/Anaconda?
I'm getting:
(deepfacelive) C:\DeepFaceLive-master>python main.py run "DeepFaceLive"--- user data-dir "C:\DeepFaceLive-master"
usage: main.py run [-h] {DeepFaceLive} ...
main.py run: error: argument {DeepFaceLive}: invalid choice: 'DeepFaceLive---\u200auser' (choose from 'DeepFaceLive')
I don't understand where it is getting "u200auser" from.
Did you find a solution? I just did the same exact thing. Anything helps!
Please double check your .vscode\launch.json.
Check this reference.
Had the same problem. Looked in the Event Viewer and it suggested installing .NET 5 using this link:
Executing the exe at that link worked.
If you can't find the IDL using anchor idl fetch <PROGRAM_ID> or by visiting https://solscan.io/account/<PROGRAM_ID>, I suggest using IDL reverse-engineering tools like Solvitor or idlGuesser: https://solvitor.xyz/public_idl/<PROGRAM_ID>
You mentioned “I was imagining using one, database-wide, unit of work class”.
I have just begun to design a Unit of Work pattern and had the same concern.
I had been considering one approach which might solve the concern quoted above, but I'm not yet sure of it and have yet to try it. Basically, it utilises the CQRS pattern. Here it is:
Similar to the CQRS pattern, where each use case has a unique pair of command class and command-handler class, how about creating a different UnitOfWork class for each use case? Thus, each use case will now have three things: a command, its handler, and a UoW class.
In CQRS, usually, the handler depends only on the repositories it requires. In my proposal, the repositories needed by that handler would be moved to its UoW class, and the UoW would be injected into the handler instead of the repositories.
In this way:
we avoid instantiating unnecessary repositories,
that overhead is removed, and
it keeps all the pros of CQRS: the UoW is more granular (the selling point of CQRS) and more maintainable as business logic evolves.
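A minimal Python sketch of the proposal (all class names are illustrative): each use case gets its own narrow UoW owning only the repositories its handler needs, and the handler receives the UoW instead of the repositories.

```python
from dataclasses import dataclass

class OrderRepository:
    """Illustrative repository; a real one would wrap a DB session."""
    def __init__(self):
        self.saved = []
    def add(self, order):
        self.saved.append(order)

class PlaceOrderUnitOfWork:
    """Narrow UoW for one use case: only the repositories this handler needs."""
    def __init__(self):
        self.orders = OrderRepository()  # no unrelated repositories instantiated
        self.committed = False
    def commit(self):
        self.committed = True            # stand-in for a real transaction commit

@dataclass
class PlaceOrderCommand:
    order_id: int

class PlaceOrderHandler:
    def __init__(self, uow: PlaceOrderUnitOfWork):
        self.uow = uow                   # inject the UoW instead of repositories
    def handle(self, cmd: PlaceOrderCommand):
        self.uow.orders.add(cmd.order_id)
        self.uow.commit()

uow = PlaceOrderUnitOfWork()
PlaceOrderHandler(uow).handle(PlaceOrderCommand(order_id=42))
print(uow.orders.saved, uow.committed)  # → [42] True
```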
Anyone buying?
spring.session.store-type=jdbc
spring.session.jdbc.table-name=SPRING_SESSION
spring.session.jdbc.schema=classpath:org/springframework/session/jdbc/schema-postgresql.sql
spring.session.jdbc.initialize-schema=always
spring.session.timeout.seconds=900
spring.session.jdbc.cleanup-interval=3600
To solve this, I added .modelContainer(for: DrinkModel.self) to the <AppName>App.swift App struct like so:
import SwiftUI

@main
struct DrinkLess_watchOS_Watch_AppApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }.modelContainer(for: DrinkModel.self)
    }
}
You need not do this in double precision: single precision is enough! The trouble is what you're using the precision for, and by trying to represent cosines near 1 rather than near 0 you're throwing away all the precision available to you. Navigators in days of yore—with much less precision than modern IEEE 754 binary32 single precision!—had the same problem, and what did they do?
Let's write this as a formula without all the unit conversions—you have latitudes 𝜑₁ and 𝜑₂ and longitudes 𝜆₁ and 𝜆₂ and you want the great-circle angle 𝜃, which you've decided to approach with the relation (probably derived from converting the spherical law of cosines naively to a longitude/latitude coordinate system):
cos 𝜃 = sin 𝜑₁ sin 𝜑₂ + cos 𝜑₁ cos 𝜑₂ cos(𝜆₂ − 𝜆₁).
The crux of the problem is that you're trying to find an angle whose cosine is very close to 1, and you're doing it by computing that cosine. In your example, sin 𝜑₁ sin 𝜑₂ + cos 𝜑₁ cos 𝜑₂ cos(𝜆₂ − 𝜆₁) comes out to roughly 0.999999937 = 1 − 6.3×10⁻⁸, but the single-precision floating-point numbers in [½, 1] can only discern increments of about 6×10⁻⁸, so you've essentially wasted all your precision on those leading nines. Navigators in days of yore didn't even have eight digits (for example, the 1913 tables of E.R. Hedrick and the 1800 tables of Josef de Mendoza y Ríos provide only around five digits), so how did they do it?
You probably grew up on the three trig functions sine, cosine, and tangent if you're much less than a century old, but there are others like haversine (hav 𝜃 = ½ (1 − cos 𝜃)) and exsecant (exsec 𝜃 = sec 𝜃 − 1 = 1/cos 𝜃 − 1)—invented not to torture students in school, but because they made calculations feasible with limited precision! The trick is not to compute the cosine 1 − 𝛿 for very small 𝛿, but instead compute the versine 𝛿, or the haversine ½𝛿, so you can use the full precision available to you to represent it.
The venerable haversine formula relates the latitudes 𝜑₁ and 𝜑₂ and longitudes 𝜆₁ and 𝜆₂ to the great-circle angle 𝜃 by:
hav 𝜃 = hav(𝜑₂ − 𝜑₁) + cos 𝜑₁ cos 𝜑₂ hav(𝜆₂ − 𝜆₁).
This is better for nearby angles because for small angle differences, haversine preserves most of the precision of the input to return a result near 0, while cosine throws it away to return a result near 1.
Of course, the Android math library might be missing a haversine function (pity!), but you can recover it from the trigonometric identity:
hav 𝜃 = ½ (1 − cos 𝜃) = sin²(𝜃/2).
You can always divide by two exactly in floating-point arithmetic (unless it underflows), and squaring won't amplify the error much, so the formula sin²(𝜃/2) will work reliably, and you can recover the inverse archav 𝑥 = 2 arcsin(√𝑥), using functions that probably are in the Android math library. And this identity—together with coordinate transformation between longitude/latitude and spherical triangle angles—is how you can derive the haversine formula from (e.g.) the spherical law of cosines, as the Wikipedia article shows.
Once you use the haversine formula, even if you compute everything in single precision, you get within about 30cm of the exact result in your example, or <0.014% relative error—which is smaller than the error of about 60cm arising from using a spherical approximation to the oblate spheroid of Earth!
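A short Python sketch of the haversine formula described above (EARTH_RADIUS_M and the sample coordinates are illustrative):

```python
import math

def haversine_angle(lat1, lon1, lat2, lon2):
    """Great-circle angle via hav θ = hav(φ2−φ1) + cos φ1 cos φ2 hav(λ2−λ1),
    using hav x = sin²(x/2) and archav x = 2 arcsin(√x)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    hav = (math.sin(dphi / 2) ** 2
           + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * math.asin(math.sqrt(hav))

EARTH_RADIUS_M = 6371e3  # mean Earth radius; spherical approximation
# Two points roughly 13 m apart: the naive arccos formula would lose most
# of its precision here, but the haversine form keeps it.
d = haversine_angle(52.5200, 13.4050, 52.5201, 13.4051) * EARTH_RADIUS_M
print(round(d, 2))
```

Even in double precision the arccos version degrades badly at these scales, while the haversine version stays well conditioned.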
Select/option elements in Chrome don't "drop down" if they are inside a draggable element, as explained in: No possibility to select text inside <input> when parent is draggable
Some workarounds are suggested in the link.
I swear to God, VBA just likes to screw with me.
refStr = "=" & colName & "[" & colHdr & "]"
This works fine and I have no idea why it wasn't working before. It's like VBA doesn't update to reflect your changes or something.
It turns out that I missed the null checking in the Case class.
private static void OnPropChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    if (d is Case _case && _case.SwitchCase != null && _case.SwitchValue != null)
    {
        _case.Content = _case.SwitchCase.ToString() == _case.SwitchValue.ToString() ? _case.SwitchContent : null;
    }
}
Looks like you aren't awaiting the pool.end(). This could mean your test runner may exit before pool.end() resolves, which could potentially leave open sockets.
afterAll(async () => {
    await pool.end();
});
There was a woodcutter who chopped trees every day. One day, a tree would not fall...
The first blow, a second, a third... ten... twenty... fifty...
But he did not stop.
On the 100th blow, the tree fell!
People said, "He felled it in a single blow!"
But he knew...
the strength was not in that one blow, but in the 99 blows that refused to give up.
This is currently a feature missing from rocket_ws. There is an open ticket regarding it.
Yes, I had the same issue; then I came across this GitHub issue: to simplify the build, the Proj4 projections were removed, and only the EPSG:4326 and EPSG:3857 projections are supported.
Here is the link to the GitHub issue.
I tried all the suggestions from that short-lived AI answer, and the only thing that ended up working for me was installing Python 3.11 using MacPorts, setting up a venv, and reinstalling all dependencies; now the code works like a charm again.
I wish I knew why my old installation broke when I didn't change any of the code files, but I suspect it has something to do with the packages I installed yesterday when toying around with Flask.
It taught me to use a venv for different projects.
Open the VS Code settings and search for 'exclude'; the first option will be Files: Exclude.
or in user settings json file:
"files.exclude": {
"**/*.d.ts": true,
}
Did you make any more progress on this?
After a bit of research, I found this construction:
my ($a, @b, $c);
:($a is copy, @b, $c) := \(42, <a b c>, 42);
$a = 1;
say [$a, @b, $c]; # OUTPUTS: [1 (a b c) 42]
While this is still technically binding, it works the way I wanted: I can reassign $a afterwards.
Unfortunately, this only works without any declarators. If we write my :($a is copy, @b, $c) := \(42, <a b c>, 42);, it throws a Cannot assign to a readonly variable or a value exception when attempting to reassign $a.
Don't run the form while still encountering errors. It seems like it will just display the last stable version.
template<typename T>
struct Loop {
    template<typename U>
    struct X {
        using type = typename Loop<X>::type;
    };
    using type = typename X<T>::type;
};

int main()
{
    Loop<int>::type;
}
We are on Expo SDK 51 and not planning to upgrade any time soon.
[
  "expo-build-properties",
  {
    "android": {
      "compileSdkVersion": 34,
      "targetSdkVersion": 35,
      "buildToolsVersion": "35.0.0"
    }
  }
],
Will this work for us?
I tried the above-mentioned answer but got a build error, so will just setting compileSdkVersion: '34' work? We are on the free EAS plan; it already takes a long time to start a build, and I build directly with EAS (no prebuild etc.).
{
  "expo": {
    ...
    "plugins": [
      [
        "expo-build-properties",
        {
          "android": {
            "compileSdkVersion": 35,
            "targetSdkVersion": 35,
            "buildToolsVersion": "35.0.0"
          }
        }
      ]
    ]
  }
}
So apparently I messed the math up: instead of
(img_tk.width() / int(2) + 1)
I needed to add the 1 to the 2, not to width / 2, meaning I had to add brackets like this:
(img_tk.width() / (int(2) + 1))
Answering as a comment.
I have a .NET 8 API with VueJs frontend using <SpaRoot> setup.
Firstly, check if you're using the SpaRoot in the csproj of the server or not.
The SpaRoot should be pointing to your front end UI folder and there should be an execute command near that SpaRoot element.
You don't need a multi-app start config. When you run the server on its own it should boot up and display "waiting for SPA".
Then VS will run your launch command; I think it defaults to npm run dev. You'll see a console populate, and it will build and run the server host for your UI on a port.
Your page is displayed at your API port and will redirect to the port your UI is at.
If you get debug port errors, it could be a config issue or a port mismatch between your launch settings JSON, your front-end config file, and the csproj Spa settings.
If all else fails, check port availability in case something is using it. It happens sometimes.
Aha! I found the problem. The actual problem was not the units; it was the range() function. I forgot that range() doesn't include the final number. To solve it, I replaced
range(1, int(2)+1)
with
range(1, int(2)+2)
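The off-by-one is easy to see in isolation:

```python
# range() excludes its stop value, so reaching the last number needs stop + 1
print(list(range(1, 3)))      # → [1, 2]
print(list(range(1, 3 + 1)))  # → [1, 2, 3]
```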
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Container(
      decoration: BoxDecoration(
        gradient: LinearGradient(
          colors: [Colors.blue.shade300, Colors.blue.shade900],
          begin: Alignment.topCenter,
          end: Alignment.bottomCenter,
        ),
      ),
      child: Center(
        child: Padding(
          padding: const EdgeInsets.all(24.0),
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              Text(
                'Versículo do Dia',
                style: TextStyle(
                  fontSize: 28,
                  color: Colors.white,
                  fontWeight: FontWeight.bold,
                ),
              ),
              SizedBox(height: 40),
              Container(
                padding: EdgeInsets.all(20),
                decoration: BoxDecoration(
                  color: Colors.white.withOpacity(0.9),
                  borderRadius: BorderRadius.circular(20),
                ),
                child: Text(
                  versiculoAtual,
                  style: TextStyle(
                    fontSize: 20,
                    fontStyle: FontStyle.italic,
                    color: Colors.black87,
                  ),
                  textAlign: TextAlign.center,
                ),
              ),
              SizedBox(height: 30),
              ElevatedButton.icon(
                onPressed: gerarVersiculo,
                icon: Icon(Icons.refresh),
                label: Text('Novo Versículo'),
                style: ElevatedButton.styleFrom(
                  backgroundColor: Colors.white,
                  foregroundColor: Colors.blueAccent,
                ),
              ),
            ],
          ),
        ),
      ),
    ),
  );
}
Another thing I want to suggest: since you are using the fish shell, try this once and see whether it works:
"code-runner.executorMap": {
    "python": "source .venv/bin/activate.fish && python"
}
You don’t need to retrain your entire model to adapt it to each user’s gesture style. A common approach is to collect a few samples of the user’s swipe gestures (e.g., 10–20 during a calibration step) and use these to fine-tune the detection threshold or train a small classifier on top of your pre-trained model’s embeddings.
For example:
Use your current model as a feature extractor.
Capture user-specific gesture data and adjust the decision threshold based on the model’s confidence scores.
If you need better accuracy, train a lightweight classifier (like logistic regression or SVM) on-device using the captured embeddings.
Frameworks like TensorFlow Lite or Core ML allow on-device personalization, so you can adapt without redeploying the entire model. If on-device retraining isn’t possible, you can collect user data (with consent) and periodically fine-tune the model server-side.
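A toy sketch of the threshold-adjustment step in Python (the calibration scores and the calibrate_threshold logic are illustrative, not a real personalization API):

```python
import statistics

def calibrate_threshold(calib_scores, floor=0.5, margin=2.0):
    """Derive a per-user decision threshold from the model's confidence
    scores on ~10-20 calibration swipes.

    Accept anything within `margin` standard deviations below this user's
    typical confidence, but never drop below a global floor."""
    mu = statistics.mean(calib_scores)
    sigma = statistics.pstdev(calib_scores)
    return max(floor, mu - margin * sigma)

# Hypothetical confidence scores from one user's calibration swipes.
user_scores = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92]
threshold = calibrate_threshold(user_scores)
print(round(threshold, 3))
```

A lightweight on-device classifier over embeddings would replace this heuristic when more accuracy is needed.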
from collections import deque

def reverse_lines(filename):
    try:
        with open(filename, 'r') as f:
            lines = deque()
            for line in f:
                lines.append(line)
            while lines:
                yield lines.pop()
    except FileNotFoundError:
        print("File not found")

for line in reverse_lines(r'/content/example.txt'):
    print(line)
The following blog post is on chunk parallel download of files using Python and curl:
https://remotalks.blogspot.com/2025/07/download-large-files-in-chunks_19.html
I think this does not work now, as I am getting 0 for text.length:
import { YoutubeTranscript } from "youtube-transcript";
var transcript_obj = await YoutubeTranscript.fetchTranscript("_cY5ZD9yh2I");
const text = transcript_obj.map((t) => t.text).join(" ");
console.log(text);
a. const should be let
b. greeting === undefined
c. "${this.greeting} ${this.name}" should be `${this.greeting} ${this.name}`
domain="[('id', 'in', duplicate_ids)]"
duplicates = self.env['hr.applicant'].with_context(active_test=False).search(domain)
Ensure the widget context does not implicitly override active_test=False. Currently, your field looks like this:
<field name="selected_duplicate_id" widget="many2one" options="{'no_create': True, 'no_open': True}" context="{'active_test': False, 'hr_force_show_archived': True, 'search_default_duplicates': True}" domain="[('id', 'in', duplicate_ids)]" class="oe_inline w-100"/>
You are very close; your backend logic is correct. The final fix likely lies in ensuring the context is properly passed into name_search() from the form, and confirming that duplicate_ids truly includes inactive records at runtime.
Would you like help checking your _get_similar_applicants_domain() method too? Sometimes the filter ('active', 'in', [True, False]) might be missing there.
Yeray's solution above is good enough for a bar chart, but not suitable for a line chart where the working series (lines) might have zero values.
In that case, when using OnAfterDraw, the working line series are hidden by the custom-drawn horizontal line.
In my case I actually needed a solution for a line chart. For this, the best solution seems to be to use the OnBeforeDrawSeries event instead. The remaining code is the same.
If anyone uses the TAChart component in Lazarus instead of TeeChart in Delphi, the approach there is similar. But because the OnBeforeDrawSeries event is missing in TAChart, OnAfterCustomDrawBackWall appears to be the best event for the job.
Another possible solution which I've found myself in the meantime is to use a dummy zero line series.
Other solutions specific to TAChart might be available in the Lazarus forum, where I asked the same question:
TAChart how to make different width and/or color only for a specific grid line
Good day.
(Sorry, my English is not very good.)
According to the GitHub issues (#395 and #272), this problem can be solved using this method.
Solution:
Change this parameter in Default Settings (JSON):
"code-runner.executorMap": { ... "python": "$pythonPath -u $fullFileName" ... }
You can set a static path to the Python from your virtual environment. On the next run, the logic will be as in the screenshot below. (This way you won't encounter the error.)
Could not build wheels for ____ which use PEP 517 and cannot be installed directly
pip install <package-name> --no-binary :all:
This tells pip not to use pre-built wheels and to build the package from source instead.
Also, make sure your build tools are up-to-date:
pip install setuptools wheel build
If this helped you, please upvote — it might help someone else too! 👍
You need to add this to your manifest.json file
"web_accessible_resources": [
  {
    "resources": ["page.html"],
    "matches": ["http://YourUrl/*"]
  }
],
I am wondering whether this can be used to implement environment separation inside the same database, like:
dev.customers
qas.customers
prd.customers
If a user logs on to the QAS environment, I would just run:
ALTER USER [yourUser] WITH DEFAULT_SCHEMA = qas;
and then the user will run the app using a test dataset.
Is this a good idea, or am I utterly mistaken?
Okay, I am stupid: I got screwed up by pointer arithmetic because the buffer is int16_t. Changing it to void* worked flawlessly.
I have the same issue; actually, it is not working at all.
Strange thing is I was using https://github.com/StefH/McpDotNet.Extensions.SemanticKernel
await _kernel.Plugins.AddMcpFunctionsFromSseServerAsync("McpServer", new Uri(url), httpClient: _httpClient);
And that worked fine. I wanted to switch to the native SK code but now it does not work anymore.
This should cover what you want to know: first-class callable syntax was introduced in PHP 8.1.
https://www.php.net/manual/en/functions.first_class_callable_syntax.php
Having the same issue. Have you had any luck solving this?
In my case, I just needed to update my version from 18 to 20, then simply restart and run npm run dev again.
Make sure to use the same exact casing on the mode name when you use --mode. I ran vite with --mode "Development" when instead my file is .env.development
If your Fedora VM freezes during startup, it could be due to over-allocating resources: 10 GB RAM and 2×6 CPUs might be pushing your host too hard, even with 32 GB total. Check for host system bottlenecks like CPU or disk I/O, and make sure VMware Tools and Fedora packages are up to date. Also, look into the guest OS logs for any driver or service hiccups. Try reducing the VM specs or booting from a fresh ISO to rule out corruption.
This code doesn't work on macOS 15.5, Xcode 16.6.
I tried many methods, but none of them worked properly.
I don't know how BetterTouchTool is implemented. Does anyone know?
Thanks,
Regards!
If you are running a script file (e.g. rush.scpt or rush.app) using Automator.app, all those solutions will return the name "Automator.app", not the name of the script file itself (e.g. "rush.scpt", "rush.app", or "rush.workflow").
In my case, when Resource Not Found appeared in the browser, the problem was the path to the build folder, in the quarkus.quinoa.build-dir property, it previously had the value dist/, so I put dist/angular and it worked. (I use version 19 of Angular)
but it removes the reveal brush effect
I usually delete the DAG from the Airflow UI itself. Open your DAG, and at the top right you will see a delete option to remove the DAG from the UI. It won't actually delete it from your DAG folder.
You might also still see the DAG in the UI list even after deletion, because Airflow runs its refresh cycle periodically, after which the DAG will be removed from the list.
Yes, this is definitely possible, but there are a few tricky parts to get right.
To fix this, you can run the child process and listen to its output on a separate thread. Then, send those outputs through a channel and print them in your main thread before you call readline() again. Here's a working example using Python as the child REPL:
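A minimal sketch of that idea, assuming Python as the child REPL (start_child and the queue-as-channel wiring are illustrative, not the original answer's exact code):

```python
import queue
import subprocess
import sys
import threading

def start_child(cmd):
    """Spawn the child process and pump its stdout into a queue from a
    separate thread, so the main thread's readline() prompt never blocks
    on the child's output."""
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,  # the Python REPL writes its prompts here
        text=True,
        bufsize=1,
    )
    out_q = queue.Queue()

    def pump():
        for line in proc.stdout:
            out_q.put(line)         # the "channel": thread-safe handoff
        out_q.put(None)             # sentinel: child closed its stdout

    threading.Thread(target=pump, daemon=True).start()
    return proc, out_q

if __name__ == "__main__":
    # Use the current interpreter in interactive mode as the child REPL.
    proc, out_q = start_child([sys.executable, "-i", "-q"])
    proc.stdin.write("print(2 + 2)\n")
    proc.stdin.flush()
    line = out_q.get(timeout=10)    # drain child output before prompting again
    print("child said:", line.strip())
    proc.stdin.close()
    proc.wait()
```

In the main loop you would drain the queue (non-blocking) before each readline() call, so child output is printed ahead of the next prompt.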
The answer by @Chip Jarred did not work for me on macOS 15.5 Sequoia. None of the Dock windows have a "Fullscreen Backdrop" kCGWindowName.
What worked for me was a simple check for multiple Dock windows that have a negative (around INT64_MIN) value as kCGWindowLayer. If there is more than one such Dock window, the app is running in fullscreen:
func isFullScreen() -> Bool {
    guard let windows = CGWindowListCopyWindowInfo(.optionOnScreenOnly, kCGNullWindowID) else {
        return false
    }
    var dockCount = 0
    for window in windows as NSArray {
        guard let winInfo = window as? NSDictionary else { continue }
        if winInfo["kCGWindowOwnerName"] as? String == "Dock" {
            let windowLayer = winInfo["kCGWindowLayer"]
            if let layerValue = windowLayer as? Int64, layerValue < 0 {
                dockCount += 1
                if dockCount > 1 {
                    return true
                }
            }
        }
    }
    return false
}
I saw this error when running Kafka in the Ubuntu app inside Windows and trying to connect to the running Kafka server from an application on Windows; it was failing to connect.
When running my application from the Ubuntu app inside Windows and connecting to the Kafka server, it connected successfully and the error was gone.
WARNING: This is a development server. Do not use it in a production setting. Use a production WSGI or ASGI server instead.
For more information on production servers see: https://docs.djangoproject.com/en/5.2/howto/deployment/
[19/Jul/2025 17:46:18] "GET / HTTP/1.1" 200 12068
[19/Jul/2025 18:15:04] "GET / HTTP/1.1" 200 12068
Not Found: /favicon.ico
[19/Jul/2025 18:15:06] "GET /favicon.ico HTTP/1.1" 404 2216
You can use aiogram-dialog, a brilliant library to manage all this underlying menu logic.
I found the following link on how to use threading with the Pika library on GitHub.
https://github.com/pika/pika/blob/main/examples/basic_consumer_threaded.py
This can happen when you have enabled a proxy in Insomnia.
It would be nice if Insomnia had an indicator in the interface showing that a proxy is being used.
Such a small thing can save a lot of frustration :-)
It’s possible they started using Azure Load Balancer internally as part of an architectural change. This might be for scaling ingestion endpoints, handling control plane traffic, or improving internal traffic routing. Unfortunately, there doesn’t seem to be any official documentation or announcements about this.
If you’re trying to dig deeper, a few things that might help:
Activity Logs: Check the Azure Activity Logs in the resource group to see if any backend resources are being provisioned or associated with the cluster.
Network Watcher: If enabled, use it to inspect traffic flows and confirm what’s going through the Load Balancer.
If it’s driving up costs, and you're not explicitly using a load balancer in your design, it’s definitely worth raising with support to see if there's an optimization or config workaround.
Would be great to hear what you find out.
I am in the same situation. I need to limit the consumption speed of my Kafka consumer as the external financial API I am calling from the consumer has a rate limit setting. Plus the rate limit in my case can be different per data type.
Based on this thread, I am thinking to implement the following pattern:
Set max.poll.records to one (1); I don't think I need to tune any other Kafka parameters.
Create a counter in a distributed cache (e.g. Hazelcast) where I save the timestamp of the first message received and the count of messages received since then. I need to save it in a distributed cache because I use a microservice architecture and can have multiple Kafka consumer groups attached to the same topic.
Let's say the external API can handle 3 requests per second.
The Kafka consumers quickly consume those 3 messages, and the counter state in Hazelcast then shows 3.
After receiving the 3rd message, the Kafka consumer pauses itself before it starts processing the received message. At the same time as the pause, I start a Spring timer that waits 1 second, as this is the rate-limit window. The timer schedule can depend on the data type and can be read at runtime from the Spring property file.
Then when the timer fires after 1 second, it resumes the Kafka consumer.
I think this process can work. The concurrency of Kafka consumers plus different rate limits per data type can complicate the data I need to keep in the distributed cache, but it's not super hard to manage properly.
I think this way I can limit the speed of the calls to the external API. Plus, if the rate limit is 3 requests per second, the first three messages will be served quickly, and then the consumer(s) will wait 1 second before continuing to listen for the next data from Kafka.
I have not implemented this yet, but I will do it soon.
I think it can work. Any thoughts are appreciated.
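A single-process Python sketch of the fixed-window counter described above (illustrative; the real counter would live in a distributed cache like Hazelcast, and the sleep stands in for the consumer pause()/resume() driven by the timer):

```python
import time

class FixedWindowLimiter:
    """Allow `limit` calls per `window` seconds; report how long to pause."""
    def __init__(self, limit=3, window=1.0):
        self.limit, self.window = limit, window
        self.window_start, self.count = time.monotonic(), 0

    def acquire(self):
        """Return 0 if the call may proceed now, else seconds to wait."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # New window: reset the counter, like the cache entry expiring.
            self.window_start, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return 0.0
        return self.window - (now - self.window_start)

limiter = FixedWindowLimiter(limit=3, window=1.0)
for msg in range(5):
    wait = limiter.acquire()
    while wait > 0:
        time.sleep(wait)        # a real consumer would pause() here,
        wait = limiter.acquire()  # and the timer would resume() it
    print("processed message", msg)
```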
I actually had the same issue a while back when I was trying to switch from my old Gmail account to a new one. I wanted everything to move with labels intact, and doing it manually via forwarding or IMAP was just super messy and time-consuming.
If you're wondering how to download old emails from Gmail and move them over with all the original labels, I'd recommend using a tool like Email Backup Wizard. It lets you download all your old emails locally and then import them into another Gmail account. The best part? It preserves labels during the transfer, which was a game-changer for me.
Hope this helps! Let me know if you need a quick step-by-step; I still have the process noted down somewhere. 😊
application()
A high-level function.
Automatically starts the Compose event loop.
You define your UI (e.g., Window {}) inside the block.
The app exits automatically when all windows are closed.
Ideal for simple apps.
awaitApplication()
A suspend function — gives more control over the app lifecycle.
You need to manually call exitApplication() to terminate the app.
Useful when you need to suspend main(), perform async setup, or manage multiple windows manually.
In JS there is a standard method for locale-formatted dates: toLocaleDateString().
For the Persian (Jalali) calendar it looks like:
yourDate.toLocaleDateString("fa-IR");
A sudden jump in Flutter app bundle size from 19.4MB to 139MB usually means new dependencies or assets were added, or build settings changed. Check for large assets, added packages, or debug vs. release build differences. Running flutter build apk --release --split-per-abi can help reduce size by generating separate APKs per architecture.
I was able to do this for buttons with a FlowRow parent with:
FlowRow(
    horizontalArrangement = Arrangement.spacedBy((-1).dp),
) {
    // Buttons here
}
For my 1.dp borders, this works excellently and is about as simple as I believed this task would be.
Got errors "failed to open stream: No such file or directory" and "The file or directory is not a reparse point. (code: 4390)" for file, file_get_contents, scandir, etc. Long story short, the script and the file to be read were both in the same Dropbox directory.
I didn't try to 'solve' the problem; maybe it could be made to work within the Dropbox framework. I just moved the project out of Dropbox, and now it works as expected.
Try commentwipe.com, which allows you to sync all videos and comments. It also allows you to search.
In particular, I trained a YOLOv11 segmentation model to detect the positions of Rubik's cubes.
First of all, the data has to be prepared in the YOLOv11 dataset format, and a data.yaml file has to be created:
train: ../train/images
val: ../valid/images
test: ../test/images
nc: 6
names: ['Cube']
Then install ultralytics and train the model:
!pip install ultralytics
from ultralytics import YOLO
model = YOLO('best.pt')
model.train(data='./data/data.yaml', epochs=100, batch=64, device='cuda')
After running the segmentation model on a frame, I do some checks to see whether the object is a Rubik's cube:
import cv2
import numpy as np
from ultralytics import YOLO

def is_patch_cube(patch, epsilon=0.2):
    h, w = patch.shape[:2]
    ratio, inverse = h/w, w/h
    if ratio < 1 - epsilon or ratio > 1 + epsilon:
        return False
    if inverse < 1 - epsilon or inverse > 1 + epsilon:
        return False
    return True

def is_patch_mostly_colored(patch, threshold=0.85):
    h, w, c = patch.shape
    num_pixels = h*w*c
    num_colored_pixels = np.sum(patch > 0)
    return num_colored_pixels/num_pixels > threshold

def check_homogenous_color(patch, color, threshold):
    if color not in color_ranges:
        return False
    h, w = patch.shape[:2]
    patch = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    lower, upper = color_ranges[color]
    thres = cv2.inRange(patch, np.array(lower), np.array(upper))
    return (np.count_nonzero(thres)/(h*w)) > threshold

def find_segments(seg_model: YOLO, image):
    return seg_model(image, verbose=False)

def get_face(results, n, homogenity_thres=0.6):
    for i, r in enumerate(results):
        original_img = r.orig_img
        img_h, img_w, c = original_img.shape
        if r.masks is not None:
            for obj_i, mask_tensor in enumerate(r.masks.data):
                mask_np = (mask_tensor.cpu().numpy() * 255).astype(np.uint8)
                if mask_np.shape[0] != original_img.shape[0] or mask_np.shape[1] != original_img.shape[1]:
                    mask_np = cv2.resize(mask_np, (img_w, img_h), interpolation=cv2.INTER_NEAREST)
                mask_np, box = simplify_mask(mask_np, eps=0.005)  # helper not shown here
                obj = cv2.bitwise_and(original_img, original_img, mask=mask_np)
                x1, y1, w, h = box
                x2, y2 = x1 + w, y1 + h
                x1 = max(0, x1)
                y1 = max(0, y1)
                x2 = min(original_img.shape[1], x2)
                y2 = min(original_img.shape[0], y2)
                cropped_object = obj[y1:y2, x1:x2]
                if not is_patch_cube(cropped_object):
                    continue
                if not is_patch_mostly_colored(cropped_object):
                    continue
                colors, homogenity = find_colors(cropped_object, n)
                if sum([sum(row) for row in homogenity]) < homogenity_thres * len(homogenity) * len(homogenity[0]):
                    continue
                return colors, cropped_object, mask_np, box
    return None, None, None, None

def find_colors(patch, n):
    h, w, c = patch.shape
    hh, ww = h//n, w//n
    colors = [['' for _ in range(n)] for __ in range(n)]
    homogenity = [[False for _ in range(n)] for __ in range(n)]
    for i in range(n):
        for j in range(n):
            pp = patch[i*hh:(i+1)*hh, j*ww:(j+1)*ww]
            colors[i][j] = find_best_matching_color_legacy(
                get_median_color(pp), tpe='bgr')  # whatever function you want to detect colors
            homogenity[i][j] = check_homogenous_color(pp, colors[i][j], threshold=0.5)
    return colors, homogenity
We can use this as follows:
results = find_segments(model, self.current_frame)
face, obj, mask, box = get_face(results, n=self.n, homogenity_thres=0.6)
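One loose end: check_homogenous_color above reads from a color_ranges dict that isn't shown. A minimal sketch of what it might look like, as (lower, upper) HSV bounds in OpenCV's H:0-179 scale. These exact numbers are my assumption and would need tuning for your camera and lighting:

```python
# Hypothetical HSV bounds per sticker color, OpenCV convention (H in 0-179).
color_ranges = {
    'white':  ((0, 0, 180),    (179, 60, 255)),
    'yellow': ((20, 100, 100), (35, 255, 255)),
    'orange': ((8, 100, 100),  (20, 255, 255)),
    'red':    ((0, 100, 100),  (8, 255, 255)),
    'green':  ((40, 100, 100), (85, 255, 255)),
    'blue':   ((90, 100, 100), (130, 255, 255)),
}
```

Note that red wraps around the hue axis in HSV, so a production version would typically use two red ranges and OR the masks together.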
Thanks to @ChristophRackwitz for recommending the use of semantic segmentation models.
Function Enter-AdminSession {
    <#
    .SYNOPSIS
        Self-elevate the script if required
    .LINK
        Source: https://stackoverflow.com/questions/60209449/how-to-elevate-a-powershell-script-from-within-a-script
    #>
    $scriptInvocation = (Get-Variable MyInvocation -Scope 1).Value.Line
    if (-Not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
        # we need to `cd` to keep the working directory the same as before the elevation; -WorkingDirectory $PWD does not work
        Start-Process -FilePath PowerShell.exe -Verb Runas -ArgumentList "cd $PWD; $scriptInvocation"
        Exit
    }
}
With this function in a common module that you import, or directly in your script, you can call Enter-AdminSession at the right point in your script to gain admin rights.
<script>
function duplicate() {
    var action = "CreationBoard";
    $.ajax({
        type: "POST",
        url: "file.php",
        data: { action: action },
        success: function(output) {
            alert("Response from PHP: " + output);
        }
    });
}
</script>
Yes, an abstract class in Java can extend another abstract class. This is a common and valid practice in object-oriented design, particularly when dealing with hierarchies of related concepts where each level introduces more specific abstract behaviors or implements some common functionality.
When an abstract class extends another abstract class:
Inheritance of Members:
It inherits all the members (fields, concrete methods, and abstract methods) from its parent abstract class.
Abstract Method Implementation (Optional):
The child abstract class is not required to implement the abstract methods inherited from its parent. It can choose to leave them abstract, forcing subsequent concrete subclasses to provide the implementation.
Adding New Abstract Methods:
The child abstract class can declare new abstract methods specific to its level of abstraction.
Providing Concrete Implementations:
It can also provide concrete implementations for some or all of the inherited abstract methods, or for its own newly declared abstract methods.
This allows for a gradual refinement of abstract behavior down the inheritance hierarchy, with concrete classes at the bottom of the hierarchy ultimately providing the full implementation for all inherited abstract methods.
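To make this concrete, here is a small sketch; the Shape/Polygon/Square names are made up for illustration:

```java
// Hypothetical hierarchy illustrating the points above.
abstract class Shape {
    abstract double area();             // abstract method

    String describe() {                 // concrete method, inherited as-is
        return "area=" + area();
    }
}

// An abstract class extending another abstract class: it is free to
// leave area() abstract and to declare a new abstract method of its own.
abstract class Polygon extends Shape {
    abstract int sides();
}

// The concrete class at the bottom provides all remaining implementations.
class Square extends Polygon {
    private final double side;

    Square(double side) { this.side = side; }

    @Override double area() { return side * side; }
    @Override int sides() { return 4; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        Square s = new Square(3);
        System.out.println(s.describe() + ", sides=" + s.sides());
    }
}
```

Note that Polygon compiles fine without implementing area(); only Square, the first concrete class, must implement everything.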
It is a very easy task; I created one at festivos en calendario:
create table calendar (dt, holiday) as
select trunc(sysdate, 'yy') + level - 1,
       case when trunc(sysdate, 'yy') + level - 1 in (select holiday_date
                                                      from holidays)
            then 'Y'
            else 'N'
       end
from dual
connect by level <= trunc(sysdate) - trunc(sysdate, 'yy') + 1;
A custom Gradle task may help; see this article on how to do it yourself: https://medium.com/@likeanyanorigin/say-goodbye-to-hardcoded-deeplinks-navigation-component-xmls-with-manifest-placeholders-3efa13428cb4
location.absolute
A location value for the plotshape and plotchar functions. The shape is plotted on the chart using the indicator value as the price coordinate.
I am a beginner too, but this is something I have used in the past.
If you are using Expo, use this package I created. It works on both iOS and Android:
https://www.npmjs.com/package/expo-exit-app
I am trying to do the same with Spring Boot 3.4.4, but it is not working for me.
I migrated to reactive programming with Spring WebFlux, removing the dependency on Tomcat in order to deploy with Netty. I have included the following in my pom:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <!-- Exclude the Tomcat dependency -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-boot-starter</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
    <version>2.8.9</version>
</dependency>
In application.properties I have this:
springdoc.api-docs.enabled=true
springdoc.api-docs.path=/api-docs
The issue I have is, if I launch my application, when I call to: http://localhost:8080/api-docs, it returns an error:
java.lang.NoSuchMethodError: 'void io.swagger.v3.oas.models.OpenAPI.<init>(io.swagger.v3.oas.models.SpecVersion)'
at org.springdoc.core.service.OpenAPIService.build(OpenAPIService.java:243) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.api.AbstractOpenApiResource.getOpenApi(AbstractOpenApiResource.java:353) ~[springdoc-openapi-starter-common-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiResource.openapiJson(OpenApiResource.java:123) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at org.springdoc.webflux.api.OpenApiWebfluxResource.openapiJson(OpenApiWebfluxResource.java:119) ~[springdoc-openapi-starter-webflux-api-2.8.9.jar:2.8.9]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[na:na]
at org.springframework.web.reactive.result.method.InvocableHandlerMethod.lambda$invoke$0(InvocableHandlerMethod.java:208) ~[spring-webflux-6.2.5.jar:6.2.5]
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:297) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:478) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:139) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoZip$ZipInner.onSubscribe(MonoZip.java:470) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:152) ~[reactor-core-3.7.4.jar:3.7.4]
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55) ~[reactor-core-3.7.4.jar:3.7.4]
Can anyone help me with this issue?
Thanks a lot!
Important
If you are using something absolutely positioned inside a column item, then you have to make each item container inline-block with width 100% (the width is optional); then it will work fine. Otherwise you may face layout issues.
If you still face any issue, I can help you; you can contact me anytime.
https://github.com/kamrannazir901/
I'm having the same issue, but electron-builder works for me.
Please verify that Neovim has clipboard support: :echo has('clipboard')
Run the built-in health check using :checkhealth
Install a clipboard provider using sudo apt install xclip (X11) or sudo apt install wl-clipboard (Wayland)
file:// reads from the local filesystem; http:// sends a request over the network and gets a response.
Here’s what could be happening:
---
1. **Default WordPress Behavior:**
WordPress often uses `wp_redirect()` for multisite sub-site resolution. If a site isn’t fully set up or mapped properly, WordPress may default to a temporary 302 redirect.
2. **ELB-HealthChecker/2.0 (from AWS):**
This request is from **AWS Elastic Load Balancer (ELB)** health checks. ELB makes a plain `GET /` request. If the root site (or sub-site) is not fully responding or mapped, WordPress may redirect it with a 302 temporarily.
3. **Multisite Rewrite Rules:**
Your `.htaccess` rewrite rules seem mostly correct, but the custom rules at the end (`wptmj/$2`) may be misrouting requests, especially if `wptmj` is not a valid subdirectory or symlinked path.
---
### ✅ What You Can Try:
#### 1. **Force WordPress to Use 301 Redirects:**
You can try modifying the redirect status using the `wp_redirect_status` filter in `functions.php`:
```php
add_filter('wp_redirect_status', function($status) {
    return 301; // Force 301 instead of 302
});
```
To prevent Android from killing your app during GPS tracking for field team purposes, consider running your tracking service as a foreground service with a persistent notification—this signals Android that your app is actively doing something important, reducing the likelihood of it being shut down. Also, ensure battery optimization is disabled for your app in device settings.
If you need a reliable and ready-made solution, tools like Workstatus offer robust background GPS tracking for field teams without being interrupted, ensuring continuous location logging even when the app isn’t actively used.
Following @dan1st's suggestion to use github.event.workflow_run.artifacts_url to fetch artifacts via the GitHub API, here are the updated files with the required changes. The Deploy workflow now uses a script to download the artifact dynamically, replacing the failing Download Build Artifact step.
name: Deploy to Firebase Hosting on successful build
'on':
  workflow_run:
    workflows: [Firebase Deployment Build]
    types:
      - completed
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    permissions:
      actions: read   # Added to fix 403 error
      contents: read  # Added to allow repository checkout
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}      # Explicitly pass the token
          repository: tabrezdal/my-portfolio-2.0  # Ensure correct repo
      - name: Debug Workflow Context
        run: |
          echo "Triggering Workflow Run ID: ${{ github.event.workflow_run.id }}"
          echo "Triggering Workflow Name: ${{ github.event.workflow_run.name }}"
          echo "Triggering Workflow Conclusion: ${{ github.event.workflow_run.conclusion }}"
      - name: Install jq
        run: sudo apt-get update && sudo apt-get install -y jq
      - name: Fetch and Download Artifacts
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Get the artifacts URL from the workflow_run event
          ARTIFACTS_URL="${{ github.event.workflow_run.artifacts_url }}"
          echo "Artifacts URL: $ARTIFACTS_URL"
          # Use the GitHub API to list artifacts
          ARTIFACTS=$(curl -L -H "Authorization: token $GITHUB_TOKEN" "$ARTIFACTS_URL")
          echo "Artifacts: $ARTIFACTS"
          # Extract the artifact name (assuming 'build' as the name)
          ARTIFACT_NAME=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].name' || echo "build")
          echo "Artifact Name: $ARTIFACT_NAME"
          # Download the artifact using the GitHub API
          DOWNLOAD_URL=$(echo "$ARTIFACTS" | jq -r '.artifacts[0].archive_download_url')
          # jq prints the literal string "null" when the field is missing, so check for both
          if [ -z "$DOWNLOAD_URL" ] || [ "$DOWNLOAD_URL" = "null" ]; then
            echo "No download URL found; the artifact may not exist or access is denied."
            exit 1
          fi
          curl -L -H "Authorization: token $GITHUB_TOKEN" -o artifact.zip "$DOWNLOAD_URL"
          unzip artifact.zip -d build
          rm artifact.zip
      - name: Verify Downloaded Artifact
        run: ls -la build || echo "Build artifact not found after download"
      - name: Debug Deployment Directory
        run: |
          echo "Current directory contents:"
          ls -la
          echo "Build directory contents:"
          ls -la build || echo "Build directory not found"
      - name: Deploy to Firebase
        uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: ${{ secrets.GITHUB_TOKEN }}
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          channelId: live
          projectId: tabrez-portfolio-2
For a general pbar from tqdm.auto, the easiest working solution I found is:
pbar.n = pbar.total
pbar.close()
break
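Putting those three lines in context, a small sketch (assuming tqdm is installed; the item list and the stop condition are made up for illustration):

```python
from tqdm.auto import tqdm

def consume_until(items, sentinel):
    """Process items with a progress bar, finishing the bar on early exit."""
    pbar = tqdm(items, total=len(items))
    seen = []
    for item in pbar:
        if item == sentinel:
            pbar.n = pbar.total  # jump the bar to 100% before closing
            pbar.close()
            break
        seen.append(item)
    return seen

print(consume_until([1, 2, 3, 4, 5], 3))  # → [1, 2]
```

Without the pbar.n = pbar.total line, the bar would be left at the partial count when the loop breaks.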