I misplaced the closing parenthesis in the homeController file:
router.post('/put-crud'), homeController.putCRUD;
// it should be
router.post('/put-crud', homeController.putCRUD);
Instead of using IClassFixture<T>, you should use ICollectionFixture<T>, which allows you to create a single test context and share it among tests in several test classes. You can find an example of its usage under Collection Fixtures. Hope it helps🤞🥲.
I have always used it this way:
ifndef MAKECMDGOALS
@echo "$$(MAKECMDGOALS) is not defined"
else
@echo "$(MAKECMDGOALS) is defined"
endif
No parentheses and no "$" sign. A lesson hard learned, or was it? What is the difference between these two syntaxes?
ifndef $(MAKECMDGOALS)
@echo "$$(MAKECMDGOALS) is not defined"
else
@echo "$(MAKECMDGOALS) is defined"
endif
I found a way to resolve the CSRF Token Mismatch error in Insomnia. By default, Insomnia has functionality that takes a header value and stores it in a tag/environment variable.
But the XSRF token comes back badly formatted, and just referencing it in the headers isn't enough to correct it.
Below I'll share a function that extracts the correct value from the XSRF token in the header and stores it correctly in an environment variable:
**1. Create an empty environment variable**
```
{
"XSRF_TOKEN": ""
}
```
**2. Create a new HTTP request in Insomnia and put your URL with the GET method.**
`you-api/sanctum/csrf-cookie`
**3. Below the URL there are different configuration tabs such as Params, Body, and Auth; go to Scripts and put the code below**
```
const cookieHeaders = insomnia.response.headers
.filter(h => h.key.toLowerCase() === 'set-cookie');
const xsrfHeader = cookieHeaders
.find(h => h.value.startsWith('XSRF-'));
console.log(xsrfHeader);
if (xsrfHeader) {
// 3. Extract the cookie value
let xsrfValue = xsrfHeader.value
.split(';')[0] // "XSRF-TOKEN=…"
.split('=')[1]; // keep only the value
xsrfValue = xsrfValue.slice(0, -3);
// 4. Store it in the base environment
insomnia.environment.set("XSRF_TOKEN", xsrfValue);
console.log('⭐ XSRF-TOKEN saved:', xsrfValue);
} else {
console.warn('⚠️ XSRF-TOKEN not found in header');
}
```
In the response console you can see if any errors occur.
**4. Finally, put the variable in the headers of any HTTP request you want, like this:**
```
header value
X-XSRF-TOKEN {{XSRF_TOKEN}}
```
After that you will be able to make your login request and have access to your application and Auth::user!
Note: I was already receiving the token in the frontend, so my backend was okay. If you aren't, just follow the usual setup steps; in my case I struggled for a while at this step because my backend and frontend domains were not of the same origin.
I created an environment domain for the main application to replace localhost for both frontend and backend.
My backend is a Vagrant server that points to **http://api-dev.local** and my frontend is
**http://frontend.api-dev.local**
Below is my **vite.config.js** where I changed the domain (you have to point the domain in your system's hosts file; I'm using Windows 11):
```
import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import vueDevTools from 'vite-plugin-vue-devtools'
export default defineConfig({
plugins: [
vue(),
vueDevTools(),
],
resolve: {
alias: {
'@': fileURLToPath(new URL('./src', import.meta.url))
},
},
// server: {
// host: 'test-dev.local',
// port: 5173,
// https: false, // or true if you generate a local certificate
// }
server: {
host: 'frontend.api-dev.local',
port: 5173,
https: false,
cors: true
}
})
```
and my important variables in Laravel's **.env**:
```
APP_URL=http://api-dev.local
SESSION_DOMAIN=.api-dev.local
SANCTUM_STATEFUL_DOMAINS=http://frontend.api-dev.local:5173
FRONTEND_URL=http://frontend.api-dev.local:5173
```
Final note:
The routes are in web, not in API; below is my **web.php** file
```
<?php
use App\Http\Controllers\AuthController;
use Illuminate\Support\Facades\Route;
use App\Modulos\Usuario\Http\ApiRoute as UsuarioRoute;
Route::get('/', function () {
return view('welcome');
});
Route::middleware('web')->group(function () {
Route::post('/login', [AuthController::class, 'login'])->name('login');
Route::post('/logout', [AuthController::class, 'logout'])->name('logout');
Route::get('/user', [AuthController::class, 'user'])->name('user')->middleware('auth');
UsuarioRoute::routes();
});
```
I'm not using the user route in this case; I just return the user data in the login response.
My English is not great, but glad to help!
Finally, you will be able to resolve the problem.
Downloading Chromium 136.0.7103.25 (playwright build v1169) from https://playwright.download.prss.microsoft.com/dbazure/download/playwright/builds/chromium/1169/chromium-win64.zip
Error: getaddrinfo ENOTFOUND playwright.download.prss.microsoft.com
at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:internal/dns/promises:99:17) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'playwright.download.prss.microsoft.com'
}
Failed to install browsers
Error: Failed to download Chromium 136.0.7103.25 (playwright build v1169), caused by
Error: Download failure, code=1
at ChildProcess.<anonymous> (G:\automatiza\Lib\site-packages\playwright\driver\package\lib\server\registry\browserFetcher.js:94:32)
at ChildProcess.emit (node:events:518:28)
at ChildProcess._handle.onexit (node:internal/child_process:293:12)
I am getting this error for all 3 retries with 3 different domain names. In my Dockerfile I have the command `&& playwright install --with-deps chromium`; initially it was working, but now it fails even after 3 retries.
No negative impact—your approach is correct and commonly used.
Applying custom weights to the TF-IDF matrix (with norm=None), then normalizing each row using sklearn.preprocessing.normalize, produces unit vectors just like norm='l2' in TfidfVectorizer. This preserves cosine similarity and ensures each row has L2 norm = 1.
Key point: The order matters. Weighting first, then normalizing, gives you control over the influence of features before projecting vectors onto the unit sphere.
If you normalized before weighting, the vectors would not be unit length anymore.
There is no difference (other than order of operations) between the manual normalization and letting the vectorizer do it, as long as normalization is the last step.
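For illustration, here is a minimal sketch of the weight-then-normalize order described above, assuming scikit-learn; the documents and the per-feature weight vector are made up for the example:
```
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

docs = ["the cat sat", "the dog sat", "the cat and the dog"]

vec = TfidfVectorizer(norm=None)        # skip the built-in L2 normalization
X = vec.fit_transform(docs)             # raw TF-IDF matrix

weights = np.ones(X.shape[1])           # hypothetical per-feature weights
weights[vec.vocabulary_["cat"]] = 2.0   # e.g. boost the "cat" feature

X_weighted = X.multiply(weights.reshape(1, -1))   # 1. apply custom weights
X_unit = normalize(X_weighted, norm="l2")         # 2. then make each row a unit vector

# every row now has L2 norm 1, just like norm='l2' in TfidfVectorizer
print(np.asarray(X_unit.multiply(X_unit).sum(axis=1)).ravel())
```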
ctor changed from an attribute macro to a proc macro at some point after version 0.2.0, which makes re-exporting not possible. Therefore, the solution is to use ctor = "0.2" within Cargo.toml for mylib.
You did not mention anything about adding permissions to the Android Manifest.
<uses-permission android:name="android.permission.CAMERA" />
You may also consider this if it is required:
<uses-feature android:name="android.hardware.camera" android:required="true" />
I think this policy export endpoint is the closest you can get.
https://help.zscaler.com/zia/policy-export#/exportPolicies-post
There are still a few other things to export (URL categories, groups, ATP URL list, etc.) if you want a complete picture.
I got the same problem. I tried installing the package multcompView, then used the cld() function with my emmeans object, without using pairs(), and I finally got the expected grouping.
For example:
library(tidyverse)
library(lme4)
library(lmerTest)
library(emmeans)
library(multcomp)
formula_str <- "log(OC) ~ sev_dpt * fraction + 1 + (1|Sites)" # where OC is numerical, and the other ones are factors
mod.lme <- lmer(as.formula(formula_str), data = db)
emmeans(mod.lme, list(pairwise~sev_dpt*fraction), adjust="tukey") %>%
# pairs() %>% # I needed to avoid the use of pairs to make it work
multcomp::cld()
Versions matter. After ensuring both my host and micro frontend had the same Angular version (19.0.0) and the same version of @angular-architects/native-federation (also 19.0.0), the NG0203 error disappeared and federation started working for me.
You're casting a char (from std::string) into an unsigned char. That's where the compiler warns about the possible loss of data.
This is innocuous as long as your input actually contains printable characters. If there's binary data in your string, that's when you could lose data. But you probably don't want to call Converter::tolower() passing a string with binary data, right?
If anyone is looking for DiscUtils since CodePlex has shut down, it can be downloaded from GitHub: https://github.com/DiscUtils/DiscUtils
Why can’t I delete large nodes in Firebase Realtime Database? (TRIGGER_PAYLOAD_TOO_LARGE)
Firebase Realtime Database has a hard limit: if a write/delete operation would trigger a payload larger than 1MB (because triggers/events send the full data), it fails with TRIGGER_PAYLOAD_TOO_LARGE. This limit affects deletes as well.
Changing settings like .settings/strictTriggerValidation or using service accounts does not bypass this.
What can you do? Delete the data in smaller batches, for example:
async function deleteInBatches(path) {
const ref = admin.database().ref(path);
let snap = await ref.limitToFirst(100).once('value');
while (snap.exists()) {
const updates = {};
snap.forEach(child => updates[child.key] = null);
await ref.update(updates);
snap = await ref.limitToFirst(100).once('value');
}
await ref.remove(); // Optionally remove empty parent
}
If any individual child exceeds 1MB, this method will also fail.
**No reliable way to force-delete huge single nodes.** If even a single child (or the node itself) is >1MB, there is no public method to delete it. No setting or permission will bypass this. This is a platform limitation by Firebase to protect against massive accidental deletes.
**Contact Firebase support.** If you absolutely must delete such data (especially in production), open a support ticket with your project and path. In some cases, Firebase support can perform backend deletes not possible via public APIs.
**Prevention tips.** Avoid storing large blobs in Realtime Database; use Cloud Storage. Keep your data shallow and well-partitioned.
Summary:
Chunked/batched deletes work if children are small enough. No workaround exists for single nodes/children >1MB—contact support.
References:
Firebase Functions limits (https://firebase.google.com/docs/functions/limits)
I followed this and the word wrap is working. The only problem is that the word-wrapped text size has shrunk so much that I can hardly read it. Is there a way to remedy this?
You probably need to add the jasypt spring boot library:
implementation 'com.github.ulisesbocchio:jasypt-spring-boot-starter:3.0.5'
<dependency>
<groupId>com.github.ulisesbocchio</groupId>
<artifactId>jasypt-spring-boot-starter</artifactId>
<version>3.0.5</version>
</dependency>
To work in Pipenv from a Jupyter Notebook, do:
pipenv install ipykernel
pipenv shell
ipython kernel install --name=`basename $VIRTUAL_ENV` --user
jupyter notebook
# select .venv kernel
Note that ipython gets installed as part of this.
During SymfonyOnline June 2024, Fabien Potencier himself gave a talk about this topic.
The replay of the talk is free to view (requires a registration): https://live.symfony.com/account/replay/video/968
The slides are available here:
Looks like you aren't importing PassKit in your Swift code. Add import PassKit to the top and it should compile.
There's a TypeScript utility type for this very use case since version 5.4: NoInfer<T>.
async function get<U>(url: string): Promise<NoInfer<U>> {
return getUrl<U>(url);
}
I want to share a subtle but critical pitfall that can break VS Code sidebar webviews and waste hours of debugging time. If your resolveWebviewView() method never gets called—no errors, no logs, just a stubborn placeholder in the sidebar—this might be your culprit.
- You register your WebviewViewProvider in activate()
- Your extension activates as expected
- The sidebar icon and view show up in VS Code
- But resolveWebviewView() is never fired. There are no log messages, and nothing appears in the sidebar but a placeholder and the error:
"There is no data provider registered that can provide view data."
Despite what most documentation suggests, just defining a view with an id in your package.json and registering a provider with the matching ID is not enough.
VS Code will happily display your sidebar view, but will never hook up your provider unless you explicitly set the view’s type to "webview" in package.json.
You need this in your package.json view contribution:
"views": {
"mySidebarContainer": [
{
"type": "webview", // <-- This is required!
"id": "mySidebarView",
"name": "Dashboard"
}
]
}
If you leave off "type": "webview", VS Code treats your view as a static placeholder.
Your provider will never be called—no matter how perfect your code is.
- Your extension activates and compiles without any errors or warnings
- The sidebar icon and panel appear, giving the illusion that everything is wired up
- There is zero feedback from VS Code that the view provider isn’t actually being used
This makes it really hard to diagnose, especially in projects with build steps or lots of code.
package.json (relevant bits):
"viewsContainers": {
"activitybar": [
{
"id": "mySidebarContainer",
"title": "My Sidebar",
"icon": "media/icon.svg"
}
]
},
"views": {
"mySidebarContainer": [
{
"type": "webview",
"id": "mySidebarView",
"name": "Sidebar Webview"
}
]
}
Activation:
vscode.window.registerWebviewViewProvider(
"mySidebarView",
new MySidebarViewProvider(context)
);
Provider:
export class MySidebarViewProvider implements vscode.WebviewViewProvider {
resolveWebviewView(view: vscode.WebviewView) {
view.webview.options = { enableScripts: true };
view.webview.html = '<h1>It works!</h1>';
}
}
If your sidebar webview isn’t working and resolveWebviewView is never called, double check that you included "type": "webview" in your view’s package.json entry.
This tiny detail makes all the difference and is easy to overlook.
I added "?ignore_skipped=true" on badges configuration URLs to hide skipped pipelines :
gitlab.tech.orange/%{project_path}/-/commits/%{default_branch}?ignore_skipped=true
gitlab.tech.orange/%{project_path}/badges/%{default_branch}/pipeline.svg?ignore_skipped=true
Follow this solution to hide skipped pipelines :
https://forum.gitlab.com/t/force-pipeline-on-tag-push-when-commit-message-contains-skip-ci/60169/2
.gitlab-ci.yml
workflow:
rules:
- if: $CI_COMMIT_MESSAGE =~ /^chore\(release\):/
when: never
- when: always
.releaserc
[
"@semantic-release/git",
{
"assets": ["CHANGELOG.md", "package.json", "package-lock.json", "npm-shrinkwrap.json", "constants/version.ts", "sonar-project.properties"],
"message": "chore(release): ${nextRelease.version}\n\n${nextRelease.notes}"
}
],
C++ is getting to know the hardware, as it is used to denote a system like a television screen you see, whether it is an input or output.
The software follows as when it is programmed by syntax to denote such as . for part thereof.
I’ve worked on similar hybrid setups during migrations — especially between Symfony versions — and this kind of session management conflict is pretty common.
Is there a proper way to tell Symfony 6 to not start a session but use the existing one?
// In a Symfony 6 controller or service
$session = $request->getSession();
if ($session->isStarted()) {
// Access session safely without triggering session_start()
$userId = $session->get('user_id');
}
But to avoid starting the session, you must:
We ran into similar problems with Azure functions. Our team tried multiple things. One of them was to move the functions folder out of the src/ directory and have it at the root of the project. Azure is weird. Also, there are multiple flags associated with remote build, look into that as well.
Try ensuring the .launch() call is distant from the registration call (at least not in the same (re)compose).
I suspect the Photo Picker behaves like other Activities, and thus this note from the docs on getting results from activities would apply (emphasis mine):
Note: You must call registerForActivityResult() before the fragment or activity is created, but you can't launch the ActivityResultLauncher until the fragment or activity's Lifecycle has reached CREATED.
To me, this implies registerForActivityResult must be called early enough for the Lifecycle of the underlying Photo Picker activity to reach CREATED, though I'm unsure how to test that theory.
I only started Android dev today so 0 experience and YMMV, but this resolved the same issue for me.
I assume that this error is related to paths. In order to get to the fonts folder, you first need to exit the css folder. Try putting "../" in front of the path, for example: "../fonts/Recoleta-SemiBold.woff"
Since VS2022, OnAfterBackgroundSolutionLoadComplete is deprecated. I can't find any reliable answer on the new designated way of listening for solution loading. Any ideas?
You probably want to set render_template_as_native_obj=True for your DAG:
render_template_as_native_obj – If True, uses a Jinja NativeEnvironment to render templates as native Python types. If False, a Jinja Environment is used to render templates as string values.
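As a rough sketch (assuming Airflow 2.x; the DAG id and tasks below are hypothetical), setting the flag on the DAG makes a templated XCom pull come back as a real Python list instead of its string representation:
```
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="native_templates_example",       # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule=None,
    render_template_as_native_obj=True,      # use Jinja's NativeEnvironment
) as dag:
    make_list = PythonOperator(
        task_id="make_list",
        python_callable=lambda: [1, 2, 3],   # pushed to XCom as a list
    )

    consume = PythonOperator(
        task_id="consume",
        # with the flag set, `values` arrives as a list, not the string "[1, 2, 3]"
        python_callable=lambda values: print(type(values), values),
        op_kwargs={"values": "{{ ti.xcom_pull(task_ids='make_list') }}"},
    )

    make_list >> consume
```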
I have found a workaround to display the image sharp.
It requires manually changing a value in the code each time you change scaling in Windows.
Step 1:
%%javascript
const ratio = window.devicePixelRatio;
alert("devicePixelRatio: " + ratio);
Step 2:
devicePixelRatio = 1.875 # manually enter the value that was shown
Step 3:
from PIL import Image
from IPython.display import HTML
import numpy as np
import io
import base64
# 32x32 data
bw_data = np.zeros((32,32),dtype=np.uint8)
# (odd_rows, even_columns)
bw_data[1::2,::2] = 1
# (even_rows, odd_columns)
bw_data[::2,1::2] = 1
# Build pixel-exact HTML
def display_pixel_image(np_array):
# Convert binary image to black & white PIL image
img = Image.fromarray(np_array * 255).convert('1')
# Convert to base64-encoded PNG
buf = io.BytesIO()
img.save(buf, format='PNG')
b64 = base64.b64encode(buf.getvalue()).decode('utf-8')
# HTML + CSS to counteract scaling
html = f"""
<style>
.pixel-art {{
width: calc({img.width}px / {devicePixelRatio});
image-rendering: pixelated;
display: block;
margin: 0;
padding: 0;
}}
</style>
<img class="pixel-art" src="data:image/png;base64,{b64}">
"""
display(HTML(html))
display_pixel_image(bw_data)
output:
Visual Studio Code cannot access ipython kernel so I don't know how to retrieve devicePixelRatio from Javascript. I tried to make an ipython widget, but was not able to refresh it automatically. If this can be done automatically then it won't require user input.
Did you get any solution for this? How can one capture the winlogon screen with the Desktop Duplication API?
django_stubs_ext.monkeypatch() should not be placed inside an if TYPE_CHECKING: block, because it needs to be executed at runtime, not just during type checking.
The purpose of this function is to patch certain Django internals so that mypy and type hints work correctly. If you wrap it in TYPE_CHECKING, it will never actually run during program execution — defeating its purpose.
As stated in the official documentation:
“This only needs to be called once, so the call to monkeypatch should be placed in your top-level settings.”
Therefore, make sure you call it at the very top of your settings file (or another central entry point) before any django imports.
Here is the link for the reference that I used to answer your question:
https://pypi.org/project/django-stubs-ext/
The example code that is provided in the documentation:
from os import environ
import django_stubs_ext
from split_settings.tools import include, optional
# Monkeypatching Django, so stubs will work for all generics,
# see: https://github.com/typeddjango/django-stubs
django_stubs_ext.monkeypatch()
# Managing environment via `DJANGO_ENV` variable:
environ.setdefault('DJANGO_ENV', 'development')
_ENV = environ['DJANGO_ENV']
_base_settings = (
'components/common.py',
'components/logging.py',
'components/csp.py',
'components/caches.py',
# Select the right env:
'environments/{0}.py'.format(_ENV),
# Optionally override some settings:
optional('environments/local.py'),
)
# Include settings:
include(*_base_settings)
sudo apt install openmpi-bin
solves the issue since it downgrades mpirun (it was 4.2, now it is 4.1.6).
The usual hello_world test works: https://matthew.malensek.net/cs220/schedule/code/week07/mpi_hello.c.html
Hope it helps
The simplest way to handle this would be to use a SetupIntent to create a Payment Method, then create all 5 subscriptions using the resulting Payment Method.
Stripe has a pretty thorough guide that covers the basics of using SetupIntents here:
https://docs.stripe.com/payments/save-and-reuse
For your specific flow it would look something like this:
const customer = await stripe.customers.create();
const setupIntent = await stripe.setupIntents.create({
customer: customer.id,
usage: "off_session"
});
Collect and confirm the payment details on the frontend using Stripe Elements.
Verify that the payment method has been created and attached to the customer (one way to do that would be listening for the payment_method.attached webhook event).
Create your subscriptions like so:
const subscription = await stripe.subscriptions.create({
customer: customer.id,
items: [
{
price: priceId,
},
],
default_payment_method: yourPMId
});
Alternatively, rather than providing the Payment Method ID when creating the subscription you can update the customer's default Payment Method: https://docs.stripe.com/api/customers/update#update_customer-invoice_settings-default_payment_method
It's also worth noting that if the billing period is the same for all of the subscriptions, you can just add them all as different items on the subscription per this guide: https://docs.stripe.com/billing/subscriptions/multiple-products
I have noted that in the first one I am using
$billData = UploadInvoiceBillData::from($request->only(['invoice_number']));
instead of
$billData = UploadInvoiceBillData::validateAndCreate($request->only(['invoice_number']));
In the second one I am using
$billData = BillData::from($request);
I believe the from method only validates if what you pass is an instance of Illuminate\Http\Request.
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20250327"
AAAAB3NzaC1yc2EAAAADAQABAAACAQC+WXWp fXhmT0OCNZNRh6xvhZHOF9bR/8c
u4O55pUUVnmJpwtXam1TWevtIC4CgyfMWa9jPGazYBFsat8FczdFUZ/fLb94UKe
IHH2G5Azclzy0tLUQMgAfbphimNL1CSeAgEctchnF2Ck89xcsSRs4M6TIFgr1o
ojplv4p0bUodJyVPfQc5tpbEmFbnHWY/wRCSUyM5sHv1iosf44Sy5vM2mXVtbES
pk5caQJ1ax/tP3hKAFepBb2wKcRkHWV/67cjnS/RzVgrU9XdtRVNv7jsdq7ZYKxc
PLN9Cyped4ZOfPRfenC/9eHWXRak29NYykN7RD92ZOjP7/dD3H6Y0kDcQ1oszX2c
H+JP17NG3p1qZVsnlzJBP8xvTRaXNYup5zkQcdQpfjMSdn45fddCo1cjnvIF/AG2
UkMKXfnroYdxIwGVUvDf76RQZyWs5rRNPQcPNmoGW+yJa0+LCdv7jsdGQDMShJ
puHMq0OZt9sLiQPJSK66zx9Q6W//cIoWAi9d8OYQMFqG07Px1TcLkyvJJJY8YBftk
ovkF8Je1E1BwCRcbt8mHuygMj1lxJutTkq4UJSsg0MmYQ0DxSP+ZoDmUHfnBd5H
zbaM8QWJ25OwNPIEGPosrxKFsxeEm8e2WJjWcMWTvvvKtiFVBcJvwkM4mFJsq4Re
WcuxXDP5yQ==
---- END SSH2 PUBLIC KEY ----
OK, so here is my answer to the question. I did not succeed in making the diagram work with the beamer output (the code does work with revealjs output, which was the original output). So in the end, this is what I did:
---
title: Test
author:
- John Due
format:
beamer:
pdf-engine: lualatex
keep-tex: true
header-includes:
- \usepackage{tikz}
- \usetikzlibrary{arrows.meta}
- \tikzset{arrow/.style = {> = {Latex[length = 1.2mm]}}}
---
## Test
::: {.cell}
```{=latex}
\begin{tikzpicture}
\node (i) at (0, 0) {i};
\node (j) at (2, 0) {j};
\draw[->, arrow] (i) -- (j);
\end{tikzpicture}
```
:::
Then, thanks to the comments of @SamR, I could also fix the problems with nbformat and nbclient. Note that this was unrelated to today's compilation problem (but I was also having problems with other documents...).
SELECT Acct.Id, TransactionCode, COUNT(*) AS Cnt
FROM Account Acct
WHERE Acct.Id NOT IN (SELECT Acct.Id FROM Account Acct WHERE TransactionCode NOT IN ('Code3','Code4') GROUP BY Acct.Id)
GROUP BY Acct.Id, TransactionCode
Have you tried setting worker logs to INFO and checking there how to remove package versions to allow pip to attempt to solve the dependency conflict?
You need to add the exact URL route (absolute path) you used in your code, the one that handles the authorization callback in the backend.
Add this URL http://localhost:3000/auth/linkedin/callback as an endpoint in the LinkedIn developer portal under the Auth section:
https://developer.linkedin.com/
Ref the MS docs: https://learn.microsoft.com/en-us/linkedin/shared/authentication/authorization-code-flow?tabs=HTTPS1#step-1-configure-your-application
You are getting this error because when you activate persistence, the baseline doesn't adjust by itself anymore.
So the data you stored in your caches is lost when the only node in the baseline shuts down.
You need to enable baselineAutoAdjustEnabled so that a node leaving or joining the cluster will leave/enter the baseline and share all the data of your cache.
You should also configure backups for your cache and set the mode to REPLICATED to be sure you won't lose data.
Hope it helps.
There is a new feature in Marklogic 11 that relates to overflowing to disk in order to protect memory. It is only listed as an Optic feature. However, since optic is SPARQL under the hood, maybe the feature is kicking in and using disk.
The link below describes the feature and also various ways to see if it is being used.
A third way, in addition to the answer of @kevin, is:
to receive the value as a String and add a validation annotation. The validation annotation could look like:
@Constraint(validatedBy = {ValidEnum.EnumValidator.class})
@Target({TYPE, FIELD, TYPE_USE, PARAMETER})
@Retention(RUNTIME)
@Documented
public @interface ValidEnum {
String message() default "your custom error-message";
Class<?>[] groups() default {};
Class<? extends Payload>[] payload() default {};
Class<? extends Enum<?>> enumClass();
class EnumValidator implements ConstraintValidator<ValidEnum, String> {
protected List<String> values;
protected String errorMessage;
@Override
public void initialize(ValidEnum annotation) {
errorMessage = annotation.message();
values = Stream.of(annotation.enumClass().getEnumConstants())
.map(Enum::name)
.toList();
}
@Override
public boolean isValid(String value, ConstraintValidatorContext context) {
if (!values.contains(value)) {
context.disableDefaultConstraintViolation();
context
.buildConstraintViolationWithTemplate(errorMessage)
.addConstraintViolation();
return false;
} else {
return true;
}
}
}
}
And use this newly created annotation to set on your class:
@Getter
@Setter
@NoArgsConstructor
public class UpdateUserByAdminDTO {
private Boolean isBanned;
private @ValidEnum(enumClass=RoleEnum.class) String role;
private @ValidEnum(enumClass=RoleEnum.class, message="an override of the default error message") String anotherRole;
}
This way you get to reuse this annotation on whatever enum variable and even have a custom error for each of the different enums you want to check.
One remark: the annotation does not take into account that the enum may be null, so adapt the code to your needs
You can find all the info to set a private container registry in the official documentation at https://camel.apache.org/camel-k/next/installation/registry/registry.html#kubernetes-secret
I know I'm late, but there are probably still people out there facing the same issue. Loading cookies or your own user profile isn't working at all since Chrome updated to version 137.
The best you can do is downgrade your Chrome and hold the package to avoid auto-updating it.
Below is everything you need in order to fix it (Linux):
# Delete current version of chrome
sudo apt remove -y google-chrome-stable --allow-change-held-packages
# Download and install old version of chrome / Hold chrome version
cd tmp
wget -c https://mirror.cs.uchicago.edu/google-chrome/pool/main/g/google-chrome-stable/google-chrome-stable_134.0.6998.165-1_amd64.deb
sudo dpkg -i google-chrome-stable_134.0.6998.165-1_amd64.deb
sudo apt -f install -y
sudo apt-mark hold google-chrome-stable
# Also download the correct chromedriver and install it
sudo rm -f /usr/local/bin/chromedriver
wget -c https://storage.googleapis.com/chrome-for-testing-public/134.0.6998.165/linux64/chromedriver-linux64.zip
unzip chromedriver-linux64.zip
sudo mv chromedriver-linux64/chromedriver /usr/local/bin/
sudo chmod +x /usr/local/bin/chromedriver
For undetected_chromedriver:
driver = uc.Chrome(driver_executable_path="/usr/local/bin/chromedriver", version_main=134, use_subprocess=True)
I was able to solve my issue by including the libraries from this repository:
If you're working with Qt on Android for USB serial communication, this library provides the necessary JNI bindings and Java classes to make it work. After integrating it properly, everything started working as expected.
Using the function app's managed identity (instead of creating a secret) is now available in preview, as documented in a section added recently to the article I mentioned in my question.
It works by adding the managed identity as a federated identity credential in the app registration. I implemented it in my azd template and it works like a charm (despite being advertised as a preview at the date of this posting).
To force an update on Dockerfile image builds:
docker build --no-cache <all your other build options>
To force an update with docker compose
docker compose -f <compose file> up --build --force-recreate
To achieve this, I suggest using the tickPositioner
function on the axis. You can make it to always return just one tick positioned at the center of the axis range.
API reference: https://api.highcharts.com/highcharts/xAxis.tickPositioner
Demo: https://jsfiddle.net/BlackLabel/63g80emu/
tickPositioner: function () {
const axis = this;
const range = axis.max - axis.min;
const center = axis.min + range / 2;
return [center];
},
You can also check this:
certificateVerifier.setAlertOnMissingRevocationData(new LogOnStatusAlert(Level.WARN));
where you do:
certificateVerifier.setCheckRevocationForUntrustedChains(false);
Found the answer: there's an API call called list_ingestions that gives me the field (IngestionTimeInSeconds) I was searching for.
Thanks!
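For anyone else looking for it, here is a minimal boto3 sketch (the account ID and dataset ID below are placeholders):
```
import boto3

client = boto3.client("quicksight", region_name="us-east-1")

resp = client.list_ingestions(
    AwsAccountId="123456789012",    # placeholder account ID
    DataSetId="my-dataset-id",      # placeholder dataset ID
)

for ingestion in resp["Ingestions"]:
    print(
        ingestion["IngestionId"],
        ingestion["IngestionStatus"],
        ingestion.get("IngestionTimeInSeconds"),  # the field mentioned above
    )
```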
Your local dev Next.js version might be different from production.
Do an upgrade on production to match the version you have in dev mode or on localhost.
OK, I finally figured it out. Here is a stripped-down version of the code that ended up working for me:
public async Task<byte[]> GetImage(string symbolpath)
{
var getBitmapSizePath = symbolpath + "#<<ITcVnBitmapExportRpcUnlocked>>GetBitmapSize";
var getBitmapPath = symbolpath + "#<<ITcVnBitmapExportRpcUnlocked>>GetBitmapImageRpcUnlocked";
//see https://infosys.beckhoff.com/index.php?content=../content/1031/tf7xxx_tc3_vision/16954359435.html&id=
var getBitmapSizeHandle = (await adsClient.CreateVariableHandleAsync(getBitmapSizePath, cancelToken)).Handle;
var getBitmapHandle = (await adsClient.CreateVariableHandleAsync(getBitmapPath, cancelToken)).Handle;
int status;
ulong imageSize;
uint width;
uint height;
byte[] readBytes = new byte[20];
byte[] sizeInput = new byte[8];
var resultGetBitmapSize = await adsClient.ReadWriteAsync((uint)IndexGroupSymbolAccess.ValueByHandle, getBitmapSizeHandle, readBytes, sizeInput, cancelToken);
//parse the result:
using (var ms = new MemoryStream(readBytes))
using (var reader = new BinaryReader(ms))
{
status = reader.ReadInt32();
imageSize = reader.ReadUInt64();
width = reader.ReadUInt32();
height = reader.ReadUInt32();
}
//todo check resultGetBitmapSize and status on if it succeeded before continuing
//now lets get the image
//prep input
byte[] input = new byte[16];
BitConverter.GetBytes(imageSize).CopyTo(input, 0);
BitConverter.GetBytes(width).CopyTo(input, 8);
BitConverter.GetBytes(height).CopyTo(input, 12);
int imageBufferSize = 20 + (int)imageSize;
byte[] buffer = new byte[imageBufferSize]; //todo use a shared array pool to limit memory use
byte[] imageData = new byte[imageBufferSize];
int imageStatus;
var resultGetImage = await adsClient.ReadWriteAsync((uint)IndexGroupSymbolAccess.ValueByHandle, getBitmapHandle, buffer, input, cancelToken);
//parse the result:
using (var imageStream = new MemoryStream(buffer))
using (var imageReader = new BinaryReader(imageStream))
{
imageStatus = imageReader.ReadInt32();
ulong byteCount = imageReader.ReadUInt64();
imageReader.Read(imageData, 0, (int)byteCount);
}
//todo check resultGetImage and imageStatus to see if it was successful
//clean up the handles
await adsClient.DeleteVariableHandleAsync(getBitmapSizeHandle, cancelToken);
await adsClient.DeleteVariableHandleAsync(getBitmapHandle, cancelToken);
return imageData; //todo convert byte array to bitmap.
}
the main magic is that i needed to use ITcVnBitmapExportRpcUnlocked instead. This is documented here: https://infosys.beckhoff.com/index.php?content=../content/1031/tf7xxx_tc3_vision/16954359435.html&id=
This is a more detailed version of the answer given by kungfooman.
If you are using Lutris:
Go to Configure -> turn on the Advanced toggle -> System options -> Game execution -> Environment variables, click Add and add
| MESA_EXTENSION_MAX_YEAR | 2002 |
and hit Save.
Now your game will hopefully run.
I have used the official builds of ffmpeg 7.1 from here and it worked inside AWS Lambda running node22.
I use Laravel-ZipStream; it resolved all my problems!
Thanks!
<soap:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soap:Header>
<Headers xmlns="urn:Ariba:Buyer:vrealm_2">
<variant>vrealm_2</variant>
<partition>prealm_2</partition>
</Headers>
</soap:Header>
<soap:Body>
<ContractRequestWSPullReply partition="prealm_2" variant="vrealm_2" xmlns="urn:Ariba:Buyer:vrealm_2"/>
</soap:Body>
</soap:Envelope>
I am not getting the correct response. Any answer?
dart:html is deprecated.
Import the "web" package (https://pub.dev/packages/web):
import "package:web/web.dart";
void navigate(String url, {String target = "_self"}) {
if (target == "_blank") {
window.open(url, "_blank")?.focus();
} else {
window.location.href = url;
}
}
// In the current tab:
navigate("https://stackoverflow.com/questions/ask");
// In another tab:
navigate("https://stackoverflow.com/questions/ask", target: "_blank");
Even after splitting them, the error is still occurring. The weird behaviour is that it gives the error but at the same time creates it successfully.
If the price is coming as an integer but you wish to make it a double, why not parse it?
There could be several ways, but I suggest:
{
'mid': int mid,
'icode': String icode,
'name': String name,
'description': String description,
'price': num? price, // could be anything (int, double or null)
'gid': int gid,
'gname': String gname,
'pic': String pic,
'quantity': int quantity,
} =>
Product(
mid: mid,
icode: icode,
name: name,
description: description,
price: price?.toDouble() ?? 0.0, // if null add default value else make it double
gid: gid,
gname: gname,
pic: pic,
quantity: quantity,
),
This way, if price is null you get a default value, while if it is an int or a double it will be converted to a double either way.
If you want to develop an Android MDM app similar to MaaS360, the best place to start is with the Android Enterprise APIs and Device Policy Controller (DPC) framework. Google provides official documentation on Android Management APIs that cover device and app restrictions. Also, checking out open-source DPC samples can help understand how to implement app control features.
For a concise overview of Android device management concepts, there are some useful write-ups available online that explain how device management works on Android.
After removing Move.lock, I was able to deploy the whole package (all modules) again.
Not sure why individual module deployment did not work, as below:
sui client publish ./sources/my_module_name.move --verify-deps
Make it *icon:upload[] Upload files* to get the icon rendered inside the strong element.
cy.get('[id^="features-"]') will capture all elements whose id begins with "features-".
The "twitch" in Safari likey happens because currentAudio.duration or currentAudio.currentTime can be unreliable right after play() starts. You can try adding a short delay before calling updateProgress(). I think that this gives Safari a bit of time to stabilize
Try asking your colleague for the file and add it manually again, then set its Build Action = GoogleServicesJson
and clean everything and rebuild. It might be an IDE issue, which should get resolved if the file is added manually.
OpenCL helps you achieve this. You can start with OpenCL guides and documentation.
In my case, changing:
val option = GetGoogleIdOption.Builder()
.setServerClientId(AppKey.Google.WebClientId)
.build()
to:
val option = GetSignInWithGoogleOption.Builder(AppKey.Google.WebClientId)
.build()
made it work.
In my case the NLog.config file was causing trouble; deleting it from the project and rebuilding helped.
I have been clicking all over the screen in PlatformIO and cannot find Debug Settings anywhere. Are there any clearer instructions, please, or a screenshot of precisely where to start looking?
Thanks
For fun, here's another attractor from the specialized literature, “Elegant Chaos: Algebraically Simple Chaotic Flows”, chapter 4.1: the Nosé-Hoover oscillator.
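For reference, a minimal Python sketch of that system (using the commonly cited form dx/dt = y, dy/dt = -x + y·z, dz/dt = a - y² with a = 1; the initial condition and integration settings are arbitrary choices):
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

A = 1.0  # parameter of the Nosé-Hoover oscillator

def nose_hoover(t, state):
    x, y, z = state
    return [y, -x + y * z, A - y * y]

# integrate long enough for the attractor to fill in
sol = solve_ivp(nose_hoover, (0, 500), [0.0, 5.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-9)

t = np.linspace(50, 500, 100_000)  # drop the initial transient
x, y, z = sol.sol(t)

plt.plot(x, y, lw=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Nosé-Hoover oscillator")
plt.show()
```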
Right-click JRE System Library.
Go to Properties (at the bottom).
Choose Workspace default JRE (jre) or any configuration you want in the Execution Environment dropdown.
Click Apply and Close.
This was the code that fixed the issue for me. You can try this
.mdc-list-item.mdc-list-item--with-one-line{
height: 1%;
}
::ng-deep .mdc-list-item__content{
padding-top: 1%;
}
Fixed this by updating the JDK version.
Delete the publishing profile and recreate it.
I also want to connect 2 apps through the Twilio Voice feature in my Android application with Kotlin, but I don't know whether it will work or not. If anyone has code to place a call and receive it in the other application, and vice versa, through VoIP from Twilio, kindly share.
Any luck in solving this issue?
I have encountered very strange behavior in an iframe - the app redirects in an infinite loop.
Did you check Configuration Manager?
You said you have the same build of VS and the code is all the same. But if your platform settings are not the same, VS would link different references, which could result in your issue.
You have to add the mocking like this
// imports...
jest.mock('next/headers', () => ({
cookies: jest.fn(() => ({
get: jest.fn(() => ({ value: 'mocked-theme' })),
set: jest.fn(),
})),
}));
describe('My component', () => {
// your Unit tests...
})
My Apple Developer Program had expired
The solution was to just not call beforeAll during setupAfterEnv, and instead do the check as part of the actual tests. The OS dialogs are a bit unreliable in the Azure DevOps pipeline macOS environment, though.
Maybe you can refer to the new features of PyTorch, torch.package
https://docs.pytorch.org/docs/stable/package.html
import torch.package
# save
model = YourModel()
pkg = torch.package.PackageExporter("model_package.pt")
pkg.save_pickle("model", "model.pkl", model)
import torch.package
import sys
import importlib.util
# load
imp = torch.package.PackageImporter("model_package.pt")
model = imp.load_pickle("model", "model.pkl")
Initially, while writing this, I didn't know what was going on. I was sure I was not modifying the same lock in parallel, so it made no sense to me that the error was about concurrent modification, and I wanted to ask for help. I accidentally found out that there was another lock that was supposed to be issued a grant at the same time, so I tried to reproduce the issue.
So the conclusion is: you can't create multiple grants at the same time, even if different resources are involved; I guess what was common is the owner id.
Question for the Tapkey team: is there any particular reason for this limitation? I wasn't able to find anything in the docs, and it caused real problems in my production environment.
I know this is an old thread, but I am experiencing the same problem:
In my web root, I created 3 folders:
css
fonts
livres (where some of my html files are hosted)
main.css contains:
@font-face {
font-family: "Recoleta-SemiBold";
src: url('/fonts/Recoleta-SemiBold.woff') format('woff'),
url('/fonts/Recoleta-SemiBold.eot?#iefix') format('embedded-opentype'),
url('/fonts/Recoleta-SemiBold.ttf') format('truetype');
font-weight: 600; /* 500 for medium, 600 for semi-bold */
font-style: normal;
font-display: swap;
}
.header .title {
font-family: "Recoleta-SemiBold", "Georgia", serif;
font-size: 40px;
font-weight: normal;
margin: 0px;
padding-left: 10px;
color:#3f0ec6;
}
index.html contains:
In the <head>:
<base href = "https://www.yoga-kids.net/">
In the <body>:
<header>
<div class = "header">
<div class = "title">Livre de yoga</div>
</div> <!-- end header -->
</header>
The font is not shown when I open the index.html file (located in the "livres" directory).
However, if I place the index.html file in the web root folder, the font is shown!
Same behavior locally and on the server...
Any idea?
Thank you.
You can also use an online tool like Evernox.
It has tools to directly generate code in multiple languages from your database.
It's really easy:
Create a new diagram
Click on "Connect Database" and sync Evernox with your Database
Click on "Generate code" and select Entity Framework from the list
I've worked with the gemma model and its quantization in the past, as per my investigation/ experimentation regarding this error, the following is my observation/suggestion:
Probably, the following could be some of the causes for this error:
Memory Need:
a) The overhead from CUDA, NCCL, PyTorch, and TGI runtime, plus model sharding inefficiencies, would have caused out-of-memory errors.
Multi-GPU Sharding:
a) Proper multi-GPU distributed setup requires NCCL to work flawlessly and enough memory on each GPU to hold its shard plus overhead.
NCCL Errors in Docker on Windows/WSL2:
a) NCCL out-of-memory error can arise from driver or environment mismatches, more specifically in Windows Server with WSL2 backend.
b) We must check the compatibility of NCCL and CUDA versions. Ensure that Docker is configured correctly to expose the GPUs and shared memory.
My Suggestions or possible solutions you can try:
Test on a Single GPU First:
a) Try to load the model on a single GPU to confirm whether the model loads correctly without sharding. This will help to understand whether the issue is with model files or sharding.
b) If this works fine, then proceed to the other points below.
Increase Docker Shared Memory:
a) Allocate more shared memory, for example add `--shm-size=2g` or higher to the “docker run” command (docker run --gpus all --shm-size=2g).
Please do not set `CUDA_VISIBLE_DEVICES` Explicitly in Docker:
a) When you set <CUDA_VISIBLE_DEVICES> inside the container, it can sometimes interfere with NCCL's device discovery and cause errors.
Verify NCCL Debug Logs:
a) Please run the container with `NCCL_DEBUG=INFO` environment variable to get detailed NCCL logs and identify the exact failure point.
Please let me know if this approach works for you.
In my keycloak instance the problem was that "Add to userinfo" was not selected in client scope "client roles". Ticking this checkbox solved the issue for me.
A somewhat late answer, in addition to @Ruikai Feng's answer, if your UI (Swagger, Scalar, or other) doesn't display the correct Content-Type, you can specify it like this in your controller at your endpoint:
[Consumes("multipart/form-data")] // 👈 Add it like this
[HttpPost("register"), DisableRequestSizeLimit]
public IActionResult RegisterUser([FromForm] RegisterModel registermodel)
{
return StatusCode(200);
}
Stable Diffusion is nearly impossible to train if you only have 5 images. Also, the features of your images are not obvious enough, so neither a GAN nor Stable Diffusion can generate the images you want. My suggestion is to enhance your data: get more images and make them clearer. You can try to generate data by using a CLIP-guided StyleGAN.
Just a guess: Maybe there is no data in your tblHistoricRFID ("r") that corresponds to your tblHistoricPallets ("h")? It's hard to tell since you're not selecting any of the "r" data, but all "p" (tblPalletTypes) data in your screenshot is null which would be the case if there is no corresponding data in "r" for "p" to join on.
The error seemed to be related to the URL's after all. Now Cypress correctly detects both requests. They were copy pasted to the tests, but after copypasting them from the network tab in Chrome devTools, it started working!
Using Security Mode = None is not a correct parameter; use allowedSecurityPolicies instead.
from("milo-client:opc.tcp://LeeviDing:53530/OPCUA/SimulationServer?" +
"node=RAW(ns=3;i=1011)" +
"&allowedSecurityPolicies=None")
.log("Received OPC UA data: ${body}");
Could you modify the code to call FlaskUI like this?
def run_flask():
app.run(port=60066)
FlaskUI(
app=app,
server=run_flask,
width=1100,
height=680
).run()
By default, the proxy only forwards /api (and some others like /swagger and /connect for authentication, etc.). But if you add app.MapHub<MyHub>('/hub') to Program.cs, that's not going to be redirected to the backend. To redirect it, you need to make a change to proxy.conf.js. See below:
const { env } = require('process');
const target = env.ASPNETCORE_HTTPS_PORT ? `https://localhost:${env.ASPNETCORE_HTTPS_PORT}` :
env.ASPNETCORE_URLS ? env.ASPNETCORE_URLS.split(';')[0] : 'https://localhost:7085';
const PROXY_CONFIG = [
{
context: [
"/api",
"/swagger",
"/connect",
"/oauth",
"/.well-known"
],
target,
secure: false
},
{ // ADD THIS
context: ["/hub"],
target,
secure: false,
ws: true, // Because SignalR uses WebSocket NOT HTTPS, you need to specify this.
changeOrigin: true, // To match your 'target', one assumes... That's what AI told me.
logLevel: "debug" // If you need debugging.
}
]
module.exports = PROXY_CONFIG;
That'll solve the 400 Not Found issue.
But after that, why does one get 405 Method Not Allowed? At first, one thought it really needed POST, but however one tried, one couldn't get it to work. In the end, one realized the problem was in one's use-signalr.service.ts where one calls SignalR: before knowing about changing the proxy, to make it run, one had changed the URL from /hub to /api/hub so it would pass through; and that's the problem. Changing it back solved the problem. Though, one didn't dig deeper into researching whether it's because
/api is using https and not ws that causes the problem (as per defined in proxy.conf.js), or
The URL simply doesn't exist in the backend, since one already changed it everywhere except for the service.ts, so it returns that error. This sounds kinda weird -- shouldn't it have returned 400 instead? But no, it returned 405, which is kinda confusing.
And it not only magically solved the problem, but it also solved the ALLOW GET, HEAD issue. Even though it doesn't allow POST, when one set skipNegotiation: true instead of false in the frontend, it worked like a charm! One'll let you investigate the 'why' if you'd like to know. One'll stay with the 'how' here.
There is no official public API from GSTN for checking GSTIN status due to security and captcha restrictions.
However, some third-party services provide GST-related APIs and compliance support.
One such platform is TheGSTCo.com – they offer VPOB/APOB solutions and help eCommerce sellers manage GST registrations across India.
After updating the SSH.NET library version from 2016.0.0 to 2023.0.1.0, I was able to connect to the SFTP server.
If you want to update the value (or you've created an empty secret + want to add a value):
gcloud secrets versions add mySecretKey --data-file config/keys/0010_key.pem
Did you use this endpoint as it is, or do we have to change it to our own? Please answer.
These are not restricted scopes and so should be available to all apps.
As this seems to be an error specific to your app, please could you raise a case with Xero Support using this link https://developer.xero.com/contact-xero-developer-platform-support and include details of the client id for your app so that this can be investigated for you.