I was so close as well. My handler just was not working at all. Was pulling my hair out. Thank you so much for also posting your answer :-)
This regex in Notepad++ works exactly as you expect on a list:
(.*)\|(.*)
with replace pattern:
<tr><td>\1</td><td>\2</td></tr>
I modified my esbuild.js file to include the following code in the esbuild.context options:
loader: {
'.html': 'text', // 👈 treat HTML imports as plain text
},
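For context, here is a minimal sketch of where that loader option sits in an esbuild.context call (the entry point and outdir are placeholders, not from the original post):

```javascript
const esbuild = require('esbuild');

async function main() {
  const ctx = await esbuild.context({
    entryPoints: ['src/extension.ts'], // placeholder entry point
    bundle: true,
    outdir: 'dist',                    // placeholder output dir
    loader: {
      '.html': 'text', // treat HTML imports as plain text
    },
  });
  await ctx.watch(); // or: await ctx.rebuild(); await ctx.dispose();
}

main();
```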
from gtts import gTTS
# Hindi narration text (kept in Hindi: this is the text to be synthesized)
hindi_text = """
एक सुनहरी दोपहर… एलिस अपनी बहन के साथ नदी किनारे बैठी थी। किताब बेमज़ेदार लग रही थी… तभी उसकी नज़र पड़ी एक अजीब से खरगोश पर… सफेद खरगोश, जिसने कोट पहना था और हाथ में जेब घड़ी पकड़ी थी।
जिज्ञासा से भरी एलिस उसके पीछे दौड़ी… और धड़ाम! खरगोश के बिल में जा गिरी।
लंबी सुरंग से गिरती हुई, वह एक अजीब गलियारे में पहुँची, जहाँ दरवाज़ों की कतार थी… और मेज़ पर रखी थी सोने की एक छोटी चाबी।
‘पी लो’ लिखा हुआ बोतल… और ‘खा लो’ लिखा हुआ केक… कभी वह छोटी हो जाती, कभी बहुत बड़ी।
आखिरकार, वह उस अद्भुत बगीचे में पहुँच गई।
वहीं मिली… रहस्यमयी मुस्कान वाली चेशायर बिल्ली।
फिर पहुँची… पागलपन से भरी मैड हैटर की चाय पार्टी।
और आखिरकार… गुस्सैल क्वीन ऑफ हार्ट्स के सामने, जिसने ज़ोर से चिल्लाया —
‘Off with their heads!’
लेकिन एलिस ने हिम्मत दिखाई, झूठे इल्ज़ामों के ख़िलाफ़ डटकर खड़ी हो गई।
और तभी… सबकुछ धुंधला पड़ गया…
आँख खुली तो एलिस फिर से नदी किनारे थी।
वह मुस्कुराई… और समझ गई…
कि वंडरलैंड की यह सारी रोमांचक यात्रा… बस एक अजीब-सा… ख्वाब थी।
"""
# Generate the audio
tts = gTTS(text=hindi_text, lang="hi")
tts.save("hindi_narration.mp3")
print("✅ हिंदी नैरेशन ऑडियो (hindi_narration.mp3) तैयार हो गया!")
You don’t need to read the refresh cookie with JS (and shouldn’t). Instead, pair it with a separate CSRF token mechanism (double-submit cookie pattern) or rely on SameSite cookies. Django already supports this workflow out of the box.
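As a rough sketch of the SameSite route in Django (these are the standard settings; the values are illustrative, not the poster's config):

```python
# settings.py: keep auth cookies out of JS and lean on SameSite
SESSION_COOKIE_HTTPONLY = True   # JS cannot read the session/refresh cookie
SESSION_COOKIE_SAMESITE = "Lax"  # blocks most cross-site sends
SESSION_COOKIE_SECURE = True     # HTTPS only
CSRF_COOKIE_SAMESITE = "Lax"
CSRF_COOKIE_SECURE = True
# CSRF_COOKIE_HTTPONLY stays False (the default) so the frontend can read the
# CSRF cookie and echo it back in the X-CSRFToken header (double-submit pattern).
```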
The same problem occurred at my GW ClaimCenter server start, since one field of gradle.properties was blank, for example ado.password=
path C:\..claimcenter\gradle.properties
Once I set the password, the problem got resolved.
The error `POST http://localhost:4000/auth/login 404 (Not Found)` means your frontend is trying to access a backend route that doesn't exist or isn't set up correctly. First, make sure your backend server is running on port 4000. Then, check that the `/auth/login` route is correctly defined and mounted: for example, if you're using Express, ensure `app.use('/auth', authRoutes)` is set and `authRoutes` includes a `POST /login` handler. Also, confirm that you're not using a different base path like `/api/v1/auth/login`, in which case your Axios URL should match that. You can test the route with Postman or curl to make sure it's working independently of the frontend.
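A minimal Express sketch of that wiring (file names and the handler body are illustrative):

```javascript
// routes/auth.js
const express = require('express');
const router = express.Router();

router.post('/login', (req, res) => {
  res.json({ ok: true }); // placeholder handler
});

module.exports = router;

// server.js
const authRoutes = require('./routes/auth');
const app = express();

app.use(express.json());
app.use('/auth', authRoutes); // together these expose POST /auth/login

app.listen(4000);
```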
I upgraded the version of octokit and it seems to have resolved the problem:
"@octokit/rest": "^22.0.0"
It turned out that this keybinding was in my file. It looked like this:
{
"key": "ctrl+backspace",
"command": "deleteWordLeft",
"when": "textInputFocus && && !editorReadonly &&inlineDiffs.activeEditorWithDiffs"
},
Quite unnoticeable, isn't it?
The problem is the syntax error in the condition: `... textInputFocus && && !editorReadonly ...`. The condition is considered absent, and the keystroke cannot be removed because of the incomplete `when` expression.
The "moral" of this is that syntax errors in the keybindings.json file can lead to unpredictable results.
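For reference, the corrected entry would look like this (assuming the doubled `&&` and the missing space were the only mistakes):

```json
{
  "key": "ctrl+backspace",
  "command": "deleteWordLeft",
  "when": "textInputFocus && !editorReadonly && inlineDiffs.activeEditorWithDiffs"
},
```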
Short answer: you’re close, but a few tweaks will save you a lot of pain. The biggest risks are (1) letting “system test” become a grab-bag of half-ready features and (2) not having a crisp, repeatable RC process with versioned release branches and back-merges.
Here’s a pragmatic way to refine what you have.
What you need:
- A dedicated pre-prod/RC branch separate from `main`.
- A place to integrate features before RC.
- A clear "promote, don't rebuild" path: preprod → main.
Risks in your current plan

System test as a playground
- If it includes both current and future features, you'll get "hidden dependencies" and late surprises when you try to cherry-pick only some changes into RC.
- Rolling back is hard if future work bleeds in.

No versioned release branches
- A `preprod` branch that mutates over time makes it hard to track exactly what's in RC1 vs RC2, generate clean release notes, or hotfix a specific release.

Hotfix path ambiguity
Keep your branch names, add a few guardrails.

Branches
- `main` → production only.
- `preprod` → acts as the current release candidate, but create versioned RC branches when you cut a release: `release/1.8` (or `release/2025.09`). You can keep `preprod` as a pointer (or alias) to the active release branch if the name helps your team.
- `develop` (your system test) → integration of all features for next releases, but protected with feature flags for anything not planned for the current RC.
- Short-lived `feature/*` branches → merge into `develop` via MR.
Flow

1. Cut RC: when you're ready to stabilize, branch from `develop` to `release/x.y`. Only allow bug-fix merges into `release/x.y` (no new features). Tag candidates `vX.Y.0-rc.1`, `-rc.2`, etc.
2. Stabilize RC: run full regression in the pre-prod environment from `release/x.y`. Any fixes are merged into `release/x.y` and back-merged into `develop` (to avoid regressions next cycle).
3. Release: when green, fast-forward or merge `release/x.y` → `main`, tag `vX.Y.0`, deploy. Optionally, merge `main` → `develop` to ensure post-release parity (if your GitLab settings don't auto-sync).
4. Hotfixes: create `hotfix/x.y.z` from `main`, merge back to `main`, tag `vX.Y.Z`, deploy. Then cherry-pick to any open `release/x.y` (if applicable) and merge to `develop`. Keep a checklist so hotfixes don't get lost.
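A minimal sketch of that flow as git commands (version numbers are illustrative):

```bash
# 1. Cut RC from develop
git checkout -b release/1.8 develop
git push -u origin release/1.8
git tag v1.8.0-rc.1 && git push origin v1.8.0-rc.1

# 3. Release: merge to main and tag
git checkout main
git merge --ff-only release/1.8
git tag v1.8.0 && git push origin main v1.8.0

# 4. Hotfix from main, then propagate back
git checkout -b hotfix/1.8.1 main
# ...commit the fix...
git checkout main && git merge hotfix/1.8.1
git tag v1.8.1 && git push origin main v1.8.1
git checkout develop && git merge main
```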
Why this helps
- You still use your "system test" branch, but release hardening happens in a clean, versioned branch.
- You prevent the "playground" effect from polluting RC by cutting RC from a known commit and controlling what gets cherry-picked.

A leaner variant: make `main` the only long-lived branch, keep features behind flags, and cut `release/x.y` only during stabilization. This reduces long-lived divergence but requires strong CI + feature flag discipline.

Protected branches & approvals
Protect `main` and all `release/*` branches. Require MR approvals (e.g., code owner + QA). Disable direct pushes.

Merge rules
- Use "Merge when pipeline succeeds", and enable merge trains on `develop`/`main` to reduce flaky integration breaks.
- Prefer squash merges for feature branches to keep history clean.
Pipelines by branch
- `feature/*`: unit + component tests, static analysis.
- `develop`: full integration + e2e on a Review App or shared "system test" env.
- `release/*`: full regression, perf/smoke, DB migration dry-run, security scans.
- `main`: deploy to prod, post-deploy smoke, rollback job.
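A sketch of those per-branch rules in .gitlab-ci.yml (job names and scripts are placeholders):

```yaml
stages: [test, regression, deploy]

unit-and-component:
  stage: test
  script: ./ci/run-tests.sh        # placeholder
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^feature\//'
    - if: '$CI_COMMIT_BRANCH == "develop"'

full-regression:
  stage: regression
  script: ./ci/run-regression.sh   # placeholder
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^release\//'

deploy-prod:
  stage: deploy
  script: ./ci/deploy.sh production  # placeholder
  environment: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```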
Environments & tagging
Use GitLab Environments: system-test
for develop
, preprod
for release/*
, production
for main
.
Tag RCs (vX.Y.0-rc.N
) and releases (vX.Y.Z
) for traceability and release notes.
Feature flags
Features merge into `develop` but stay disabled by default. Only features planned for `release/x.y` get their flags enabled in that branch/env.

Back-merge automation
After merging to `main` (hotfix), auto-open MRs to `develop` and active `release/*` branches (a GitLab CI job or a small bot).

MR templates

Database migrations
Run them in the `release/*` pipeline (dry-run). Include a down/rollback plan.

Release freeze
Freeze `release/*` before GA; only severity-rated fixes allowed.

On your specific points:
- "System test includes current + later features": OK if and only if those later features are behind flags and you cut RC from a known good commit (or cherry-pick only the features intended for the release). Otherwise, create a `next` branch to park future features separately.
- "Preprod as RC branch": better to make it a versioned `release/x.y` and map your Preprod environment to whichever release branch is active. You can keep a `preprod` alias branch, but the versioned branch is what you merge and tag.
- "Push the feature branch to RC": always via MRs (no direct push) with approvals, and ideally cherry-pick or merge only the specific commits intended for the RC to avoid dragging unrelated changes.
Example naming
- Branches: `feature/login-otp`, `develop`, `release/2025.09`, `main`, `hotfix/2025.09.1`
- Tags: `v2025.09.0-rc.1`, `v2025.09.0`, `v2025.09.1`
- Envs: `system-test` (develop), `preprod` (release/2025.09), `production` (main)
If you adopt the versioned `release/*` branch + feature flags + protected merges, your current plan will work smoothly and remain auditable. Want me to write a short GitLab policy doc (branch protections, MR templates, CI `rules:` snippets) tailored to your repo?
Thanks
public_html/
│
└───nrjs/ (node.js app is in subdirectory)
│ app.js
│
├───public/
│ login.html (Publicly accessible login page)
│
└───protected/
index.html (Private home page, should not be directly accessible)
Was running into a similar issue when trying to deploy my service, and it turned out to be a memory issue: 512 MB was not enough to properly run Chromium. Scaling up to 2 GB fixed it.
Set new environment variable in Render.com portal "Manage > Environment > Edit"
PUPPETEER_CACHE_DIR=/opt/render/project/.cache/puppeteer
For future reference, this can be because the device being used for testing is not signed into Google Services, most likely because it is an emulator and was never signed into Google Services. If you go to Settings → Google and sign in this exception will likely go away.
The model expects input of shape (3, H, W), i.e., without the batch dimension, so this works:
summary(model, input_size=(3, 224, 224))
I've implemented a very simple package for this, designed to be simple, readable and use the Result type from functional programming.
My surmise is that the code without a break is getting vectorized by the compiler, while the code with a break cannot be vectorized and must remain as a scalar loop. Because the loop exit happens almost 90% of the way through the array, the inefficiency of iterating through the last 10% of the array is small compared to the gains from vectorization.
I am also working on something similar. Please check out my github and I'll take any advice. I am having issues with the overlay in Google earth. github.com/festeraeb/Garmin-Rsd-Sidescan
It worked out; I'm using Fedora 42. Thanks!!!
sudo dnf install sqlite3
This is happening to me, I’m also using an Avada theme. Is there any fix? Or do I have to find a new theme? I’m setting up hundreds of products in WooCommerce, and all the links are breaking because of it
The problem is that `StripeWrapper` is defined after it is used. To fix this, simply move the definition up:
import logo from './logo.svg';
import { Routes, Route, createBrowserRouter, RouterProvider, Navigate, Outlet } from 'react-router-dom';
import { Elements } from '@stripe/react-stripe-js';
import { loadStripe } from '@stripe/stripe-js';
import './App.css';
import 'bootstrap/dist/css/bootstrap.css';
import Header from "../src/Components/Header";
import Home from "../src/Components/Home";
import Search from "../src/Components/Search";
import Login from "../src/Components/Login";
import Sell from "../src/Components/Sell";
import Signup from "../src/Components/Signup";
import ErrorPage from "../src/Components/ErrorPage";
import StripeComplete from "../src/Components/StripeComplete";
import StripeError from "../src/Components/StripeError";
import BuyImageCheckout from "../src/Components/BuyImageCheckout";
import Selling from "../src/Components/Selling";
import {} from "./APIRequests/Api";
const stripePromise = loadStripe('xxxxx');
const StripeWrapper = ({ children }) => {
return (
<Elements stripe={stripePromise}>
{children}
</Elements>
);
};
const router = createBrowserRouter([
{
path: '/',
element: <LayoutComponent />,
children: [
{
index: true,
element: <Home />,
},
{
path: '/search',
element: <Search />,
},
{ path: '/sell',
element: (
<PrivateRoute>
<Sell />
</PrivateRoute>
),
},
{
path: '/login',
element: <Login />,
},
{
path: '/signup',
element: <Signup />,
},
{
path: '/stripecomplete',
element: <StripeComplete />,
},
{
path: '/stripeerror',
element: <StripeError />,
},
{
path: '/selling',
element: <Selling />,
},
{
path: '/sell/buyimagecheckout',
element: (
<PrivateRoute>
<StripeWrapper>
<BuyImageCheckout />
</StripeWrapper>
</PrivateRoute>
)
},{
path: '*',
element: <ErrorPage />,
},
],
},
]);
function PrivateRoute({ children }) {
return localStorage.getItem("userGuid") != null ? children : <Navigate to="/login" />;
}
function LayoutComponent() {
return (
<div>
<Header></Header>
<main>
<Outlet /> {/* Nested routes render here */}
</main>
</div>
);
}
function App() {
return <RouterProvider router={router} />;
}
export default App;
If you're using the Helm chart provided by Datadog, you can completely disable the Redis integration via this config:
datadog:
#List of integration(s) to ignore auto_conf.yaml.
ignoreAutoConfig:
- redisdb
- istio
See the link below:
https://docs.datadoghq.com/containers/guide/auto_conf/?tab=helm
Use Result<Person>
as the return type instead of Person
https://docs.langchain4j.dev/tutorials/ai-services#return-types
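A minimal sketch of what that looks like with an AI Service interface (the `Person` type and the extractor interface are assumed from the question; `Result` comes from `dev.langchain4j.service`):

```java
import dev.langchain4j.service.Result;
import dev.langchain4j.service.UserMessage;

interface PersonExtractor {
    // Result wraps the parsed Person plus metadata such as token usage and sources
    @UserMessage("Extract a person from: {{it}}")
    Result<Person> extractPerson(String text);
}
```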
Google's basically getting ready for Android 15 which will support 16KB page sizes (instead of the traditional 4KB), and they're warning developers ahead of time.
So here's the deal - this isn't really a Capacitor-specific issue, it's more about the native libraries that get bundled with your app. The good news is that Capacitor 7.4.2 should actually be fine on their end, but the warning is probably coming from one of your plugins or their dependencies.
First thing I'd check - do you have any other Capacitor plugins installed beyond the ones you listed? Stuff like camera, filesystem, push notifications, etc? Those are usually the culprits because they might include older native libraries.
Quick fixes to try:
1. Update all your Capacitor plugins to their latest versions (not just core).
2. In your android/app/build.gradle, bump your compileSdkVersion and targetSdkVersion to 34 if they're not already.
3. Clean and rebuild: npx cap sync android, then rebuild.
If you're still getting the warning after that, you might need to add this to your android/app/build.gradle:
android {
packagingOptions {
jniLibs {
useLegacyPackaging = false
}
}
}
The nuclear option if nothing else works - you can actually ignore this warning for now since 16KB page support isn't mandatory yet. But it's better to fix it since Google will eventually require it.
Also worth checking if you have any old .so files hanging around in your android folders from previous builds. Sometimes a clean build (rm -rf android/app/build before syncing) helps.
To order the compiler to inline, use a macro. This comes up exceedingly rarely; for example, the relocation engine itself can't make function calls.
If you're hitting a point where you think you need this, declare the function inline and crank up the optimization levels. If you are on 2025 hardware and reach a point where the compiler won't inline a function at aggressive speed optimizations, I want to know what on earth you are doing to get there.
react-native-svg is a native dependency, and after installing native dependencies, you need to run npx expo run:android again.
You can also try deleting the app and running npx expo prebuild --clean to regenerate the native code.
I'm still having this problem in Next.js where the seeding isn't being executed correctly. When I run npx prisma db seed, no errors are thrown, but the seed isn't executed.
When I try to run the seed.ts file manually via npx tsx prisma/seed.ts, I get this error:
error: Environment variable not found: DATABASE_URL.
  -->  schema.prisma:3
   |
 2 | provider = "postgresql"
 3 | url      = env("DATABASE_URL")
   |
This is what my prisma.config.ts file looks like:
import 'dotenv/config';
import path from 'node:path';
import { defineConfig } from 'prisma/config';
export default defineConfig({
schema: path.join('prisma', 'schema.prisma'),
migrations: {
path: path.join('prisma', 'migrations'),
seed: 'tsx prisma/seed.ts',
},
});
It also doesn't matter whether I use tsx prisma/seed.ts or npx tsx prisma/seed.ts as the seed command in the config file; neither works.
Any solutions? Do I need to adjust something else in the seed.ts file?
Nevermind, the solution was deleting the file and replacing it with another one with the same name
Also, all 3 beans must use the @Primary annotation in the main DB configuration:
@Configuration
@EnableJpaRepositories(
basePackages = ["mx.collia.api.maintenance.repository"],
entityManagerFactoryRef = "maintenanceEntityManagerFactory",
transactionManagerRef = "maintenanceTransactionManager")
class MaintenanceDBConfig {
@Bean
@Primary
@ConfigurationProperties(prefix = "maintenance.datasource")
fun maintenanceDataSource(): DataSource {
println("Maintenance DB Config Loaded")
return DataSourceBuilder.create().build()
}
@Bean
@Primary
fun maintenanceEntityManagerFactory(builder: EntityManagerFactoryBuilder, @Qualifier("maintenanceDataSource") dataSource: DataSource): LocalContainerEntityManagerFactoryBean {
return builder.dataSource(dataSource).packages("mx.collia.api.maintenance.model").persistenceUnit("maintenance").build()
}
@Bean
@Primary
fun maintenanceTransactionManager(@Qualifier("maintenanceEntityManagerFactory") emf: EntityManagerFactory): PlatformTransactionManager {
return JpaTransactionManager(emf)
}
}
Check that ConnectionStrings (not ConnectionString) is written correctly in appsettings.json:
{
"ConnectionStrings": {
"DefaultConnection": "Host=localhost;Port=5432;Database=BulletinBoard;Username=postgres;Password=ДофигаСложныйПароль"
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*"
}
In my case, DI passed null as the connection string when initializing the context, yet ordinary queries against it still went through (no idea why).
public static IServiceCollection RegistrarAppContexsts(this IServiceCollection services, IConfiguration configuration)
{
services.AddDbContext<BulletinContext>(options =>
{
options.UseNpgsql(
configuration.GetConnectionString("DefaultConnection"),
b => b.MigrationsAssembly("BulletinBoard.Infrastructure.DataAccess")
);
});
return services;
}
In short, be careful; don't be like me.
Have you tried creating a Python package of the script? I believe at minimum you would require an empty "__init__.py" file and a pyproject.toml file. After creating the package, you should be able to run your script from any directory.
Tailwind's responsive classes (sm:, lg:, and so forth) are based on viewport width, not the width of a parent container. So even if you set w-[370px] on a div, the browser viewport is still wide, which means sm:grid-cols-2 and lg:grid-cols-3 continue to apply. That's why you see grid-cols-3 on desktop: Tailwind is looking at the browser size, not your simulated container size. If you want to truly simulate devices inside a component, you'll need to either: use browser DevTools responsive mode (simplest), wrap the preview in an <iframe> and control its width, or enable Tailwind container queries (v3.2+) so styles respond to the container size rather than the viewport.
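For the container-query route, a minimal sketch (assuming the official @tailwindcss/container-queries plugin is installed and registered):

```html
<!-- Mark the wrapper as a container; children respond to ITS width -->
<div class="@container w-[370px]">
  <div class="grid grid-cols-1 @sm:grid-cols-2 @lg:grid-cols-3">
    <!-- cards... -->
  </div>
</div>
```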
They added an easy copy button to the channel's settings modal. Just go to the channel, then click on the channel's name at the top. Make sure you are on the About tab inside that modal; at the bottom it says Channel ID: xxx, and there is also a quick copy button.
Answer to your question:
The main issue isn’t Tailwind itself, it’s that responsive logic (`sm:/md:/lg:`) is hard-coded in every component. That couples breakpoints directly to JSX and makes global updates painful. The scalable fix is to introduce a thin abstraction layer:
Layout primitives (`Grid`, `Stack`, `Container`) → take props like cols/gap and generate the responsive classes.
Variants (`tailwind-variants`/`cva`) → define reusable styles and states (tone, density, padding) instead of repeating class strings.
Shared Tailwind preset → centralize screens, colors, spacing; change them once, all apps update.
Container queries → when layout depends on parent width instead of global viewport.
Design tokens in CSS variables → theming and density modes without JSX refactors.
This way you keep Tailwind’s power but gain central control, consistency, and scalability.
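A minimal sketch of the layout-primitive idea (the component name and the cols-to-class mapping are illustrative, not an existing library API):

```tsx
import React from "react";

// Classes are written out literally so Tailwind's scanner can see them.
const COLS = {
  "1": "grid-cols-1",
  "1-2": "grid-cols-1 sm:grid-cols-2",
  "1-2-3": "grid-cols-1 sm:grid-cols-2 lg:grid-cols-3",
} as const;

const GAPS = { sm: "gap-2", md: "gap-4", lg: "gap-8" } as const;

type GridProps = {
  cols?: keyof typeof COLS;
  gap?: keyof typeof GAPS;
  className?: string; // escape hatch
  children: React.ReactNode;
};

export function Grid({ cols = "1", gap = "md", className = "", children }: GridProps) {
  return (
    <div className={`grid ${COLS[cols]} ${GAPS[gap]} ${className}`}>{children}</div>
  );
}

// Usage: <Grid cols="1-2-3" gap="lg">...</Grid>; the breakpoints live here,
// not in every page component.
```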
Additional improvements worth adding:
- **DX & consistency:** eslint-plugin-tailwindcss, Prettier Tailwind plugin, `twMerge` + `clsx` for programmatic classes.
- **Theming:** tokens in `:root` and `[data-theme]`, dark mode via attributes, plan RTL support.
- **Primitives API:** keep them small and predictable (Grid: cols/gap; Stack: direction/gap).
- **Variants system:** use semantic prop names (`tone`, `density`) not raw class names.
- **Accessibility:** focus-visible styles, respect prefers-reduced-motion.
- **Performance:** safelist minimal, audit arbitrary values, tree-shake icons.
- **Testing:** Storybook + viewport addon, visual regression tests for breakpoints/themes.
- **Docs:** usage recipes + “do/don’t” examples; not encyclopedias.
- **Migration plan:** 1) add preset & primitives, 2) migrate worst offenders, 3) enforce lint rules to block new raw utility soups.
- **Monorepo ops:** publish `@org/tw-preset` + `@org/ui`; version them and document breaking changes.
- **Ergonomics:** ship default responsive presets (e.g. `cols={presets.twoUpToThree}`), global density modes.
- **Edge cases:** SSR determinism, iframe/microfrontend theming via CSS vars, always expose `className` escape hatch.
Bottom line:
Stop scattering `sm:/md:/lg:` everywhere. Move them into controlled wrappers, centralize tokens, and let Tailwind do the heavy lifting behind clean APIs. That’s how you keep responsive design flexible and scalable without drowning in utility chains.
This is a feature that should be added immediately. Rider is crazy to navigate. I use the solution view to scope to my project, but because I am using Unity, Rider also sees all of my plugins and extension code, even though I can't edit any of it.
Everyone knows that adding a feature to a feature request for a large company will mostly get it swept under the rug.
It would be incredibly helpful to just have a window on the left pane that displays only the methods of the class I have open. It should also be able to scope to parent/child classes, interfaces, and other linked functions. I know of the Structure window, but it is made useless by also including all the variables, positioned in the order they occur in the file: variables, functions, variables, functions, all in one huge list. What a mess.
I’ve used the TYPO3 extension "pictureino" to handle this. It automatically generates responsive images in the frontend while keeping the cropping set in the backend. The nice thing is that it works completely automatically, without any extra configuration.
This is absurd but I found that if I launch the app using Xcode (Build & Run), the menu items don't appear. But if I double-click it from the Finder, then it works. This is on macOS 14.
It's possible that when the method with the break statement was executing, its thread got interrupted and another one started executing, which increased the measured execution time of the method. In that case, it would be helpful to try executing each method multiple times and then finding the average runtime for each method. An example of this is available at 300x slower program with while.
The best solution I found:
router.dismissTo({
  pathname: xxx,
  params: { ... },
});
I'm using IntelliJ. Removing the <scope>provided</scope> line from this dependency fixed it:
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
If you want to know why this behavior occurs, maybe it is because you can't declare two variables with the same name in JavaScript? Is there a reason why both values need to have the name MY_FAV?
Is this what you need?
int lastPowNum(int pow) {
    if (pow == 0) return 1;
    if (pow % 4 == 1) return 2;
    if (pow % 4 == 2) return 4;
    if (pow % 4 == 3) return 8;
    return 6; // pow % 4 == 0 and pow > 0
}
(The last branch must be an unconditional return, otherwise the function has a path with no return value and won't compile.)
I have found a simpler way to disable power throttling from this link: https://forums.ni.com/t5/LabVIEW/CPU-load-when-application-is-minimized-is-reduce-50/td-p/4343470/page/2
Run this in an Administrator CMD for your exe:
powercfg /powerthrottling disable /path "C:\Program_Path\program.exe"
Let's assume that the B-tree has, on average, a entries in each node, and b child nodes per node.
In that case, the number of entries in the tree follows a finite geometric series with ratio b and first term a:
N = a + ab + ab^2 + ab^3 + ... + ab^n
  = a(1 - b^(n+1))/(1 - b)
Solving for the depth n:
N(1 - b)/a = 1 - b^(n+1)
b^(n+1) = 1 - N(1 - b)/a
n = log_b(1 - N(1 - b)/a) - 1
I ended up with four different formulas and a helper column created on the Data Points tab. Final formulas that work are highlighted in bright pink.
Returns unique courses for the selected student.
={"Last Progress Reported";arrayformula(VSTACK(iferror(sort(UNIQUE(FILTER('Data Points'!F2:F,'Data Points'!A2:A=B2,VSTACK('Data Points'!A2:A)<>"")),1,0)),""))}
Returns most recent data points for the selected student and the unique courses from the previous formula.
={"Overall Grade";arrayformula(ifNA(VLOOKUP($B$4&$G19:G&$B$2,sort('Data Points'!B2:L,11,false),9,false),""))}
={"% Complete (Count)";arrayformula(ifNA(VLOOKUP($B$4&$G19:G&$B$2,sort('Data Points'!B2:L,11,false),7,false),""))}
={"Date";arrayformula(ifNA(VLOOKUP($B$4&$G19:G&$B$2,sort('Data Points'!B2:L,11,false),11,false),""))}
I could not figure out how to combine all the formulas into one like I had originally started with.
Note that even though multi-objective tuning is not currently supported in mlr3, there are similar situations where these multi-objective problems appear, and Pareto-optimal solutions that represent the best trade-off between two or more objectives have been proposed (there are many algorithms for finding "knees" of the (multi-dimensional) Pareto front; see a nice review in this article).
In a recent feature selection example, I implemented a very simple 2d knee-point identification method to find the Pareto point with as few selected features as possible while retaining performance as high as possible; see this mlr3 gallery post.
I just created a venv and performed everything again. It worked. Must have been some library mismatch.
I'm having the same problem. Did you solve this?
The command you're attempting to run tells your system to delete the com.apple.quarantine attribute from the file, but the error message that you're getting says that it doesn't exist, which means it was already deleted or was never added. Either way, you can safely skip running that command, as the desired outcome has been reached!
Since Safari 10, the debugger console has been greatly improved and supports console logs and breakpoints in dedicated workers. Service workers can be debugged by going to Develop > Service Workers.
Thanks Psionman! Exactly what I needed to create a wx.Image from a PIL image (I didn't actually need a bitmap). The 1st line of the function might be simplified(?) to wx_image = wx.Image(*pil_image.size)
When choosing the best cloud solution for handling JSON requests, the right option depends on your project’s scale, performance needs, and integration requirements. JSON (JavaScript Object Notation) is lightweight, easy to parse, and widely supported, making it the standard for modern APIs.
Popular Cloud Options:
AWS Lambda + API Gateway: A serverless choice for quickly processing JSON requests without managing infrastructure. Great for scalability.
Google Cloud Functions: Ideal if you’re already in the Google ecosystem. It handles JSON efficiently and integrates with Firebase and BigQuery.
Microsoft Azure Functions: Offers robust JSON handling, especially for enterprise-level applications with strong security needs.
Key Considerations:
Scalability – Can the service handle sudden spikes in requests?
Latency – JSON parsing should be fast for real-time applications.
Ease of Integration – Look for services with SDKs and REST API support.
Cost-effectiveness – Pay-as-you-go serverless models are often budget-friendly.
Pro tip: If you’re just starting, try serverless platforms first—they’re low-cost, easy to manage, and scale automatically with your JSON requests.
The below link lists SQL exception and warning messages that you may encounter when using jConnect.
Right now you already have a good pipeline:
Search with keyphrases → get candidate sentences.
Use embeddings to find similar examples.
LLM checks patterns and makes the final call.
This works well because embeddings + LLM can capture meaning and handle fuzzy matches.
Where a KG helps:
- Structure: you can connect rules → groups → keywords → example sentences.
- Disambiguation: add explicit links like "6th day ≠ 7th day" or "evening ≠ night" so the system knows how to separate similar rules.
- Explainability: easier to show why a sentence matched a rule ("matched Rule X because of keyword Y and example Z").
Where it won't help:
- New rules: a KG can flag "unmatched sentences," but creating a new rule is still an expert job.
- Accuracy: if embeddings + LLM already work well, a KG won't suddenly make results much better.
- Maintenance: building and updating a KG for 300+ rules takes work.
Recommendation:
- Don't replace your current pipeline.
- Use a small KG as an extra layer for disambiguation and explanations.
- For new rules, cluster unmatched sentences and let experts decide.
A KG/Graph-RAG can help with clarity, disambiguation, and trust, but it won't replace what you already built. Think of it as a way to organize and explain results, not as a magic accuracy booster.
In Python 3, the standard division operator (/) always performs "true division" and returns a float result, even if both operands are integers and the division results in a whole number.
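For example:

```python
print(7 / 2)    # 3.5
print(6 / 3)    # 2.0  (a float, even though it divides evenly)
print(7 // 2)   # 3    (floor division, which keeps ints)
```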
It is a Firefox issue. Firefox remembers old data entered into the input fields and restores it after reloading. Currently looking for another solution...
Edit: Seems like adding autocomplete="off" to the form does the trick.
When you submit a form, the FormResponse object contains all your responses. To get the answer for a specific form item, like a CHECKBOX_GRID, you use the getItemResponses() method on the FormResponse object.
The key to a CHECKBOX_GRID is that the getResponse() method returns an array of arrays, not a simple string or a flat array of strings. Each inner array corresponds to a row in the grid and contains the column titles of the selected options for that specific row.
For example, a grid with rows "X-Small," "Small," and "Medium," and columns "White" and "Navy."
If you select "Navy" for the "X-Small" row, select nothing for the "Small" row, and select both "White" and "Navy" for the "Medium" row, the getResponse() method would return [['Navy'], [], ['White', 'Navy']], or as a string: Navy,,White,Navy.
The first inner array ['Navy'] represents the selections for the "X-Small" row.
The second inner array [] is an empty array and represents the "Small" row, where no checkbox was selected. This is the correct way to handle unselected rows, not by returning null or an empty string ''.
The third inner array ['White', 'Navy'] represents the selections for the "Medium" row, showing multiple choices within a single row.
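A minimal Apps Script sketch of reading such a response in a form-submit trigger (the logging is illustrative):

```javascript
function onFormSubmit(e) {
  const itemResponses = e.response.getItemResponses();
  itemResponses.forEach(function (ir) {
    if (ir.getItem().getType() === FormApp.ItemType.CHECKBOX_GRID) {
      const rows = ir.getResponse(); // array of arrays, one inner array per grid row
      rows.forEach(function (cols, i) {
        // cols is e.g. ['White', 'Navy'], or [] for an unselected row
        Logger.log('Row ' + i + ': ' + (cols && cols.length ? cols.join(', ') : '(none)'));
      });
    }
  });
}
```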
As a Java developer, I strongly recommend using ID instead of <tableName>_id for primary keys.
When working with @OneToOne, @ManyToOne, or @ManyToMany properties, you need to specify the @JoinColumn annotation with values for the "name" and "referencedColumnName" properties. When they have the same values, it can be very confusing. I understand that by default referencedColumnName is not needed for a primary key, but sometimes it is inconvenient and takes a little more time.
You can do that in several ways:
wine tasklist
You can also run what others have answered:
winedbg --command "info proc"
The error means Document AI can’t find the processor version you’re asking for.
In your code you are mixing project identifiers and using a processor version that doesn’t exist.
A few things to check:
1. Make sure you use the same project everywhere. Document AI accepts either the project number (466668368501) or the project ID (inspired-ether-458806-u7), but you must use the same one consistently.
2. If you don’t have a custom processor version, don’t pass processor_version_id. Just build the resource name like this:
In Python:
name = client.processor_path(project_id, location, processor_id)
For security, the Auth schema is not exposed in the auto-generated API. If you want to access users' data via the API, you can create your own user tables in the public schema.
html <body style=margin:0;background:#000;overflow:hidden><div style=position:absolute;top:10%;left:10%;width:80vmin;height:80vmin;border-radius:50%;border:1px solid #fff><div style=position:absolute;top:50%;left:50%;transform:translate(-50%,-50%);width:20vmin;height:1px;background:#fff></div><div style=position:absolute;top:50%;left:50%;transform:translate(-50%,-50%);width:1px;height:20vmin;background:#fff></div></div></body>
What you're seeing is Anaconda automatically activating the base environment in your shell. That's why every time you open PowerShell or the VS Code terminal, it starts with (base) in the prompt.
You don’t need to uninstall Anaconda — you can simply tell it not to auto-activate:
conda config --set auto_activate_base false
After running this command once, restart PowerShell/VS Code and it will open in the normal terminal without (base) showing up.
When you do want to use conda, you can still activate it manually with:
conda activate base
or switch to any other environment you’ve created.
In VS Code specifically, also make sure you've selected the Python interpreter you want (Ctrl + Shift + P → Python: Select Interpreter). That way it won't keep defaulting to conda if you don't want it to.
This way you can keep Anaconda installed for data science projects, but still have a clean, fast PowerShell/VS Code terminal for your everyday Python work.
I also found myself needing a hook that runs after new refs are fetched from the remote, no matter if I merged them into a branch or not.
Given the lack of a post-fetch hook, I made a Python script to simulate it. It works by wrapping the ssh command and calling the post-fetch hook after a fetch happens.
Here's the gist: https://gist.github.com/ssimono/f074f40c9ab9efee722e69d1ac255411
Maybe it helps someone.
Making sure that the views and view models in your subregion implement INavigationAware or IConfirmNavigationRequest will allow Prism to automatically call their OnNavigatedFrom/OnNavigatedTo methods during navigation; this is a more elegant way to take advantage of Prism's built-in navigation and region lifecycle management. To handle the subregion lifecycle consistently with the parent view and make your code cleaner and easier to maintain, consider using scoped regions or navigation-aware actions rather than manually removing views.
`npm update` helped me in my case of the same error.
So, the solution ended up being quite trivial. The only thing I missed was adding the appState.target property to the parameters of the loginWithRedirect method. So in case you're facing the same issue, do this:
auth0.loginWithRedirect({
appState: {
target: "/auth-callback"
}
});
Currently, you're using a Hardcoded Attribute Mapper in Keycloak. This mapper does not extract dynamic values (such as the user ID) from the identity provider token. Instead, it assigns predefined static values to user attributes after a successful login.
For example, if you configure a Hardcoded Attribute Mapper for the email attribute with the value [email protected], then after a user logs in via an identity provider like Twitter, the user's email attribute will be set to [email protected].
If you want to map the user ID or other token claims dynamically, you should use a "User Attribute Importer", "Claim to User Attribute", or "Attribute Importer" mapper, not the Hardcoded one. I am not sure whether those mappers are available in Keycloak 26.x.x; Keycloak also provides an option to create your own custom SPI.
30 minutes seems like a riskier length of time to allow for one token to be active. There is some drift with tokens, so you don't need to enter it within 30 seconds, but half an hour is not among any recommended lengths of token expiration time.
<!DOCTYPE html> <html> <body> <canvas id="c" style="width:100%;height:90vh;background:#000"></canvas> <script> let c = document.getElementById('c'), a = c.getContext('2d'); c.width = window.innerWidth; c.height = window.innerHeight; a.lineWidth = 1; a.strokeStyle = 'rgba(200,0,255,0.7)'; function r() { a.clearRect(0, 0, c.width, c.height); a.beginPath(); for (let x = 0; x < c.width; x++) { let y = c.height / 2 + Math.sin(x / 100) * x / 3; a.lineTo(x, y); } a.stroke(); requestAnimationFrame(r); } r(); </script> </body> </
I agree with @mice here. He isn't talking about storing the passed hash, but a sha256 hash of it. Using heavy hash functions, versus lighter hashes, reduces the random password length required for the same security against brute force cracking.
Yes, if you could reverse hash the sha256 hash stored on the server to the 'random' 32 byte binary output of the heavy hash, you could use that as the password, but that isn't feasible with current technology. The alternative would be to start with the actual password candidates and calculate both the heavy hash and the sha256 hash a gazillion times until you find the one that produces the stored hash. This could be feasible if the password is weak, but that wouldn't be the fault of the system.
In short, I see nothing wrong with this idea if implemented correctly, and sufficiently strong passwords are chosen. In practice though, it only allows you to shorten password by a couple of characters for equivalent security, so is it worth it?
Just to add to @user456814's answer: if you are using PowerShell, you need to escape the @ with a backtick:
# For Git on Powershell
git push origin -u `@
height: 100% indeed solves the problem but creates even worse issues...
I see that when I do it, it restricts the size of the content incorrectly, particularly if there's an iframe inside the page...
Please refer to the comparison between different primary key options before deciding on the primary key: https://newuuid.com/database-primary-keys-int-uuid-cuid-performance-analysis
I think you should use Python 3.13 or newer and set up a virtual environment.
sudo apt-get update
sudo apt install python3.13
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install flask-humanify
Best regards!
Stick with @Enumerated(EnumType.STRING). The values (PENDING_REVIEW, APPROVED, REJECTED) are stable and not business-configurable. Renames can be handled via a one-time DB migration if that ever happens. The extra complexity of lookup tables or converters isn't worth it unless requirements change.
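For reference, a minimal sketch (the entity and column details are illustrative):

```java
@Entity
public class Submission {

    @Id
    @GeneratedValue
    private Long id;

    // Stored as the literal strings 'PENDING_REVIEW', 'APPROVED', 'REJECTED'
    @Enumerated(EnumType.STRING)
    @Column(nullable = false, length = 20)
    private Status status;

    public enum Status { PENDING_REVIEW, APPROVED, REJECTED }
}
```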
For anyone wondering how to temporarily disable GitHub Copilot: if you hover over Copilot's icon in the bottom-right, a popup comes up, and you can Snooze the auto-completion for 5 minutes.
I edited the dev.to page, clicked Preview, then Save. Then the images appeared again. This shows that the pages needed to be regenerated this way for some reason.
Including model.safetensors.index.json will solve the problem locally, but if you are using a Hugging Face repository (a private Space provides you that much space to save the model, 100 GB as of now), you will still face the error when loading the model: somehow the Hugging Face repository treats it as a safetensors file (it has a stack icon alongside), and the transformers library by default looks for the same file-name convention. Even after multiple attempts, I am still facing the issue.
I had this problem in Laravel 9. Solution: check your Bootstrap version. After installing the correct Bootstrap version, the problem was solved. I hope this helps you.
It's the same data structure, so you can just cast the pointer:
vector<complex<float>> a(100);
float s;
// reinterpret_cast is needed here: the types are unrelated, so static_cast won't compile
ippsMaxAbs_32fc(reinterpret_cast<const Ipp32fc*>(a.data()), a.size(), &s);
I think this is good, thanks for sharing
I tried many solutions but nothing worked for me. I added one more uses-feature, and now the app installs on all devices, whether the device has NFC or not:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<uses-feature
android:name="android.hardware.nfc"
android:required="false" />
<uses-feature
android:name="android.hardware.nfc.hce"
android:required="false"
tools:node="replace" />
The correct command to install pre-commit globally is:
$ uv tool install pre-commit
$ which pre-commit
/home/username/.local/share/../bin/pre-commit
See the official uv docs: The uv tool interface.
Modify your run configuration to "runClient --rerun-tasks"
This error for me was due to upgrading a package dependency. Once downgraded, the error went away. Added back, the app crashed again via the dreaded "Lost connection to device. Exited." message. It took a while though because the dependency package was upgraded along with some other packages and SDK-related stuff.
I'd suggest one of two things: 1) start disabling packages, or 2) create a new app and add your app into it, piecemeal.
The dependency in my case (every case would likely be different) was the http package (v1.5.0). Once downgraded back to v1.4.0, the crashes stopped.
I've had this happen with other packages, though, also, like routing packages, and they can take hours to days to debug because the error can happen in some asynchronous call that happened dozens of debugging steps before the actual crash, and the logs give no help whatsoever.
Lost connection to device. Exited. | GL figuring it out. :P
You can try using browser's sessionStorage to store auth token or other auth info.
I need help. I'm unable to create an application on my.telegram.org; whenever I try to create it, a popup shows an error that says: my.telegram.org says ERROR.
Please help me to get the API ID and API HASH.
CMake does not naturally support different C/CXX compilers in a single CMake project. However, it does support "subdirs", which means CMake understands that different projects can have completely different toolchains.
Just make an independent CMake project for the MCU firmware and an independent one for the SBC.
A third CMake will be your root CMake; it should add the others as subdirectories and tie them together, as in the sketch below.
https://cmake.org/cmake/help/latest/command/add_subdirectory.html
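A minimal sketch of one way to set up that root project (directory and file names are illustrative). Note that add_subdirectory shares the parent's toolchain, so the firmware, which needs a genuinely different cross compiler, is pulled in as an external project with its own toolchain file:

```cmake
cmake_minimum_required(VERSION 3.20)
project(system-root NONE)   # the root itself compiles nothing

include(ExternalProject)

# MCU firmware: an independent CMake project with its own cross toolchain
ExternalProject_Add(firmware
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/firmware/arm-gcc.cmake
  INSTALL_COMMAND ""        # just build; no install step
)

# SBC application: a plain subproject using the host toolchain
add_subdirectory(sbc-app)
```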
Good article here, with an example: https://medium.com/@karimelsayed0x1/01-path-traversal-0c52daffd26e
The md5ext value should be a hash, not a filename:
"md5ext": "a1b2c3d4e5f67890abcdef1234567890.ttf"
For TurboWarp/Electron apps, fonts are typically expected in:
- resources/fonts/
- resources/assets/
- the same directory as game.json
Switching my internet service provider fixed this
You can do it with the following solution; I used everyone's answers to put this together, thanks:
src="data:image/jpg;base64,@Convert.ToBase64String(curso.Imagen)"
1. Open Control Panel: press Win + S, type Control Panel, and hit Enter.
2. Go to Programs > Programs and Features.
3. Look for Scala in the list.
4. If it's there, right-click > Uninstall.
Currently, we are developing an ACS-to-ACS call solution for one of our customers, and while the call placement is working perfectly fine, we have encountered an issue when trying to add a PSTN number to an existing ACS-to-ACS call. Specifically, we are faced with error code 400. Our solution utilizes the Frontend SDK (@azure/communication-calling) to create a peer-to-peer call. Could you kindly provide your insights on how to resolve this issue? Your assistance would be greatly appreciated.
Provider store={store}>
\<App /\>
</Provider>,
document.getElementById('root'
)
)
Stumbled upon the same issue today.
In Arduino IDE, go to Tools menu and enable the "USB CDC On Boot" option. This will fix the issue.
There's another implementation of Zenity: https://github.com/ncruces/zenity/releases
You can do e.g. zenity.exe -info -text "my message" to get a dialog box.