After digging around relentlessly, I've come up with a solution, many thanks to a chap on GitHub who published the npm package electron-forge-plugin-dependencies.
Instead of installing the npm package, I decided to add the code to my project manually. Pasting the solution here would make the answer far too long, so please refer to the changes I've made in the repository linked in my original question. I will keep that repository up indefinitely for anyone who wants to use it as a starting point for an Electron application built with Forge, SQLite3, and Sequelize.
DECLARE @json NVARCHAR(MAX) = '
[
  { "id": 1, "name": "Alice" },
  { "id": 2, "name": "Bob" }
]';

SELECT *
FROM OPENJSON(@json)
WITH (
    id INT,
    name NVARCHAR(100)
);
You need to allow suggestions while debugging.
Go to Tools -> Options -> IntelliCode -> Advanced -> uncheck "Disable suggestions while debugging".
Sorry for the late response.
Maybe you can try the Java-Python Linker Library (JPYL), which lets you invoke CPython scripts from your Java app and capture the Python output. The library also lets you pass parameters to your Python scripts.
https://github.com/livegrios/jpyll
Greetings!
topic_id = None
# If the replied-to message belongs to a forum topic, reply into that topic
if getattr(event.reply_to, 'forum_topic', None):
    topic_id = top if (top := event.reply_to.reply_to_top_id) \
        else event.reply_to_msg_id
await client.send_message(event.chat_id, 'response', reply_to=topic_id)
You can do this by simply using [class]
<div [class]="loading ? 'loading-state my-class' : ''"></div>
Please consider using a source generator. Write async code, and the source generator will generate a sync version behind the scenes:
Why is my Instagram link not valid? Is the issue that my new Instagram link is invalid, or that it doesn't show up on my TikTok?
I had to update my backend hostname to be api.example.com instead of example.com/api. Then I updated the DNS config to include the new hostname so it is all good now. I think the requests were being caught by the frontend public hostname instead of the backend one which made it so the requests were never received by express.
You need to use tags:
<h1> hello + world </h1>
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
py_list = arr.tolist()
print(py_list)
This works (assuming hello and world are defined string variables):
print(hello + " " + world)
This is not only useful and clear, it is true. It seems VS does not respect additional include directories correctly any more in 2022. Sad.
@tomerpacific I am getting imagePath from const saveFileToStorageOmOrder in app.tsx. I move the temporary files to a permanent location, then call the awaited addTimestampWithLocation function.
const saveFileToStorageOmOrder = async (sTempFilePath, iP_ID, iOO_ID, columnName) => {
try {
const sFolderPath = `${RNFS.ExternalDirectoryPath}/OmOrder/${iP_ID}/${iOO_ID}`;
if (!await RNFS.exists(sFolderPath)) await RNFS.mkdir(sFolderPath);
const ext = sTempFilePath.split('.').pop() || 'jpg';
const now = new Date();
const fileName = `${now.getFullYear()}${(now.getMonth()+1).toString().padStart(2,'0')}${now.getDate().toString().padStart(2,'0')}_${
now.getHours().toString().padStart(2,'0')}${now.getMinutes().toString().padStart(2,'0')}${now.getSeconds().toString().padStart(2,'0')}.${ext}`;
const sPermanentPath = `${sFolderPath}/${fileName}`;
// 1. move temporary files to a permanent location
await RNFS.moveFile(sTempFilePath, sPermanentPath);
// 2. add timestamp + location
await addTimestampWithLocation(sPermanentPath);
// 3. Save to JSON
insertUpdateFileJsonOmOrder(sPermanentPath, iP_ID, iOO_ID, columnName);
} catch (error) {
_WriteLog(`❌ Error in saveFileToStorageOmOrder: ${error.message}`);
}
};
I had a similar issue. I could not resolve it.
I started again with a new clean project and moved all components, services, etc. to finally get rid of the problem.
It's a very old question, but since I found it, maybe someone else will end up here too, so here's what I found:
(I tested it with a PIC12F615, Windows XP, and PICDisasm 1.5; it worked. I still don't know whether it's compatible with the current version of MPLAB X, but it has already helped me a lot.)
PICDisasm converts a hex file to an ASM file.
The ASM file is compatible with the Microchip assembler (MPLAB IDE). It works with PIC10, PIC12, and PIC16 types. The Windows program PICDisasm is freeware.
PICDisasm 1.6 (194 KByte) May 02, 2008
more details for commands: ADDLW, ANDLW, IORLW, MOVLW, RETLW, SUBLW, XORLW
new PIC types added:
12F519
16F526, 16F722, 16F723, 16F724, 16F726, 16F727
some bugs fixed
PICDisasm 1.5 (192 KByte) March 04, 2007
new PIC types added:
12F609, 12HV609, 12F615, 12HV615
16F610, 16HV610, 16F616, 16HV616, 16F631, 16F677, 16HV785, 16F882, 16F883, 16F884, 16F886, 16F887
PICDisasm 1.4 (192 KByte) March 23, 2006
new PIC types added:
10F220, 10F222, 12F510, 16F506, 16F946
PICDisasm 1.3c (195 KByte) June 16, 2005
new PIC types added:
10F200, 10F202, 10F204, 10F206
12F508, 12F509, 12F635
16F59, 16F505, 16F636, 16F639, 16F685, 16F687, 16F689, 16F690, 16F785, 16F913, 16F914, 16F916, 16F917
some bugs fixed
PICDisasm 1.2 (243 KByte) May 07, 2004
new PIC types added:
12F683, 16F54, 16F57, 16F684, 16F688, 16F716
After hours and hours of effort, it finally turned out to be this silly line in one of my files (I don't even remember importing it):
import {createLogger} from "vite"
When that error appeared while I was testing on my computer, it turned out that my .env contained:
NODE_ENV=production
I simply removed it, since it doesn't really apply in a local environment, and the error disappeared.
I want to see all the content of those images or pictures.
I don't think trying to sync three databases is a good solution. You would have much more success keeping local Keycloak users local, importing LDAP users as they log in, and simply re-syncing an imported LDAP user on every login.
Using BASIC as a crude example: let DateMY$ be your shorthand date, just a string of characters such as "June 2020", "06.20", or "06/2020". For any calls or functions requiring date-typed data for calculations, have the code do a quick string build by padding the existing date string with "01": DateDMY$ = "01" & DateMY$. Printing DateDMY$ should then read "01 June 2020", "01.06.20", or "01/06/2020". After declaring your format (e.g. dd/mm/yyyy), you should be able to get the code to parse DateDMY$ as a date() type and do what you wish with it. I've used several old languages that were just as stubborn. Since you can do all this invisibly, the end user would be none the wiser, and treating the value as a string probably makes it compatible with a wider range of code, platforms, and operating systems. On the variable names: given your initial date value 03.2020, DateMY$ is the month-year date string (the $ is the old string suffix), and DateDMY$ is the same but with a D added for the day.
If you have a table with many INSERTs and also frequent SELECTs, it's common to see INSERT operations waiting for cache, which can cause delays. This happens because SELECT queries can lock or slow down INSERTs. A good practice is to use SQL_NO_CACHE in SELECT statements on tables with heavy insert activity. InnoDB also recommends using uncached queries in such cases. Of course, if your workload is mostly SELECTs and rarely updated, caching is beneficial — especially when the query cache hit ratio is over 90%. But for large tables with frequent inserts, it's better to disable caching for those SELECTs using SQL_NO_CACHE. Disabling the whole query cache isn't ideal unless you're ready to redesign the software (e.g. using Redis), so using SQL_NO_CACHE is a simple and effective optimization for improving INSERT performance.
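For example, a minimal sketch (the table and column names are hypothetical):
SQL
SELECT SQL_NO_CACHE id, total
FROM orders
WHERE customer_id = 42;
Note that SQL_NO_CACHE only matters on MySQL versions that still have the query cache; the query cache was removed entirely in MySQL 8.0.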
I created a VM in Azure with Windows and downloaded SSMS. Once I ran the "Generate Scripts" command by right-clicking the database name and selecting one script per file, I uploaded the scripts to OneDrive and then downloaded them to my Mac. It took a while to upload and download, so factor that into your delivery timeline. Best of luck.
Although you cannot prevent the system message, you can manage the feedback sequence for the user in your app:
Show a clear message immediately: as soon as you detect the "Revoked Certificate" error (inside the case .verified(let transaction) where transaction.revocationDate != nil), you should immediately update your app's UI to inform the user of the failure and the reason. For example, you can show an alert or an on-screen error message, or disable purchase-related functionality.
Swift
case .verified(let transaction):
    if transaction.productType == product.type, transaction.productID == productID {
        if transaction.revocationDate == nil, transaction.revocationReason == nil {
            purchaseState = true
            await transaction.finish()
        } else {
            // Revoked certificate or revoked transaction.
            // This is where you show your own error message to the user.
            self.error = .error(NSError(domain: "PurchaseError", code: 1, userInfo: [NSLocalizedDescriptionKey: "The purchase could not be completed because of a revoked certificate. Please try again later or contact support."]))
            // Don't call transaction.finish() for revoked transactions,
            // unless you have specific logic for them.
            // StoreKit may clean up these transactions automatically in some cases.
        }
    } else {
        // Invalid product
        self.error = .error(NSError(domain: "PurchaseError", code: 2, userInfo: [NSLocalizedDescriptionKey: "The transaction's product does not match."]))
    }
Manage the purchaseState: make sure purchaseState is only set to true if the purchase is actually valid and verified. In the revoked-certificate case, purchaseState should remain false (or be reset) and you should present the error.
By doing this, although the user momentarily sees "You're all set", it will quickly be followed by your app's error message, which communicates the failure more effectively and reduces confusion.
In short, StoreKit's architecture, especially with configuration files, separates the initial purchase confirmation from the later validation. Focus on giving the user clear feedback in your app as soon as you detect the verification failure.
The "invalid device ordinal" error you are facing is peculiar, especially since nvidia-smi and GPU.get_available_devices() detect your GPU correctly.
Let's go through your questions and possible solutions:
Yes, HOOMD-blue (and CUDA in general) is designed to be GPU-compatible on WSL2. NVIDIA has made significant efforts to ensure its GPUs and CUDA work smoothly inside WSL2, including support for the virtual GPU (vGPU) subsystem that WSL2 uses. Many users successfully run CUDA applications, including other simulation and machine-learning libraries, inside WSL2.
The fact that you can see the GPU with nvidia-smi and that HOOMD lists the GPU as available indicates that the communication layer between WSL2, the NVIDIA drivers, and the CUDA runtime works at a basic level. The problem seems more specific to how HOOMD is trying to initialize the device.
The "invalid device ordinal" error usually occurs when the program tries to access a GPU using an index that does not correspond to a valid device, or when there is an underlying problem initializing the CUDA context for that device. Given that you have already tried device_id=0 and CUDA_VISIBLE_DEVICES=0, and your GPU is the only one listed, the problem is not the selection of a wrong index.
Here are some approaches and considerations for trying to resolve the problem:
Check the NVIDIA driver version:
Although you mention "the latest drivers", make sure they are the latest drivers for WSL2. NVIDIA releases WSL2-specific drivers containing optimizations and fixes. You can download them directly from NVIDIA's site (usually in the developer drivers or notebook drivers section).
It may be worth trying a slightly older driver that is still WSL2-compatible, to rule out a regression in a very recent version.
Update the WSL2 kernel:
Make sure the WSL2 kernel is up to date. Compatibility problems between the Linux kernel and the GPU drivers can cause strange errors.
Open PowerShell as administrator and run:
PowerShell
wsl --update
wsl --shutdown
Restart Ubuntu in WSL2.
Check the CUDA Toolkit version:
You mentioned that you tested CUDA Toolkit 12.2 and 11.8. HOOMD-blue 5.2.0 is compiled against a specific CUDA version. Although CUDA is generally backward compatible, there can be small incompatibilities.
Suggestion: check the official HOOMD-blue 5.2.0 documentation or release notes to see which CUDA Toolkit version is recommended or used for the official Conda/Micromamba builds. Matching that version may resolve the problem.
Sometimes, having multiple CUDA Toolkit versions installed can cause conflicts in environment variables. Make sure PATH and the other variables point correctly to the version you want to use.
Test with a simpler CUDA example:
Write a minimal CUDA test (e.g. with pycuda or numba) that just initializes a device and performs a trivial operation; a sketch follows the next item. If such examples work, the problem is more likely specific to HOOMD's interaction; if they fail, the problem is more fundamental to your CUDA/WSL2 setup.
Additional CUDA environment variables:
Although CUDA_VISIBLE_DEVICES is the most common one, other CUDA environment variables can affect device initialization.
For example, CUDA_LAUNCH_BLOCKING=1 can help with debugging by forcing CUDA calls to be synchronous, which may reveal the exact point of failure. It is not a fix, but it can help with diagnosis.
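As a minimal smoke test of the CUDA stack, something like this (a sketch assuming numba with CUDA support is installed in the environment):
Python
import numpy as np
from numba import cuda

@cuda.jit
def add_one(x):
    i = cuda.grid(1)
    if i < x.size:
        x[i] += 1.0

arr = cuda.to_device(np.zeros(16, dtype=np.float32))
add_one[1, 32](arr)        # launch 1 block of 32 threads
print(arr.copy_to_host())  # all ones if the device initialized correctly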
Clean reinstall of HOOMD (and dependencies):
Sometimes packages become corrupted. Try creating a new Micromamba/Conda environment from scratch and installing only HOOMD:
Bash
micromamba create -n hoomd_env python=3.10
micromamba activate hoomd_env
micromamba install hoomd=5.2.0
This ensures there are no package conflicts or residual files from previous installations.
Check the system error logs:
Look at the system logs (dmesg, /var/log/syslog). If the options above don't work and you need GPU acceleration, the main alternatives (besides hoomd.device.CPU()) would be:
Native Linux: the most robust way to guarantee compatibility and maximum performance for GPU workloads is to run Linux natively (dual boot or a dedicated machine). This removes WSL2's virtualization layer and its potential overheads or incompatibilities.
Native Windows (if HOOMD offers native GPU support): check whether HOOMD-blue ships GPU-enabled builds for Windows. If so, that would be a simpler alternative than native Linux, although performance may not be identical to Linux.
Although I don't have access to real-time user feedback about HOOMD-blue specifically on WSL2, the CUDA community in general has had significant success. The "invalid device ordinal" error is usually a sign that something is wrong in the interaction between the CUDA runtime, the drivers, and the environment (in this case, WSL2).
The key to solving problems like this on WSL2 is to ensure that:
The NVIDIA drivers on Windows are the WSL2-compatible ones and are up to date.
The WSL2 kernel is up to date.
The CUDA Toolkit version you are using in the Micromamba/Conda environment is compatible with the HOOMD-blue version you installed.
I hope these suggestions help you solve the problem! If you can provide more details about the exact version of your NVIDIA driver for WSL2 and the full error output, that may help refine the diagnosis.
In your SQL query the input string type is NVARCHAR, which is encoded as UTF-16.
In C# you used UTF-8 encoding.
The byte representations differ in these two cases, hence the different SHA-256 hashes.
Both are valid; it depends on which encoding you prefer.
If you prefer to use UTF-8 in SQL, then use VARCHAR for the input type.
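Conversely, to make the C# side match the NVARCHAR bytes, use Encoding.Unicode (UTF-16LE), which mirrors how SQL Server stores NVARCHAR. A minimal sketch (SHA256.HashData and Convert.ToHexString require .NET 5+):
C#
using System;
using System.Security.Cryptography;
using System.Text;

class Program
{
    static void Main()
    {
        // Encoding.Unicode is UTF-16LE, the same byte layout SQL Server uses for NVARCHAR
        byte[] bytes = Encoding.Unicode.GetBytes("hello");
        byte[] hash = SHA256.HashData(bytes);
        Console.WriteLine(Convert.ToHexString(hash));
    }
}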
I am not seeing any of these approaches work in multithreaded environments in production. It does not find an exact match for the class name, or a match with the other flags. How can this code be used in a multithreaded environment where different threads open MessageBox windows?
The redirect URI (where the response is returned to) must be registered in the APIs console, and the redirect_uri_mismatch error indicates that it's not registered or that it's been set up incorrectly. Ensure that the redirect URI you provide in your OAuth authorization request matches exactly the URI you have added in the Google Developer Console. See the related post, which might be helpful to you.
Yes, you can set terminationGracePeriodSeconds to more than 15 seconds for non-system Pods on GKE Spot VMs, but it is honored only on a best-effort basis. When a Spot VM is preempted, it gets a 30-second termination notice. The kubelet prioritizes gracefully shutting down non-system Pods, usually allowing up to 15 seconds. If you set a longer period, like 25 seconds, the kubelet may attempt to honor it, but system Pods also require time to shut down. If the total exceeds the 30-second window, your Pod may be forcefully killed.
Therefore, while values above 15 seconds are allowed, they may reduce the time available for critical system Pods, making 15 seconds or less the safest option for reliable shutdown behavior on Spot VMs. If your workloads require longer shutdown periods such as for draining connections or flushing logs, consider using standard VMs instead.
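For reference, the field sits directly on the Pod spec (a minimal sketch; the image name is hypothetical):
YAML
apiVersion: v1
kind: Pod
metadata:
  name: spot-workload
spec:
  terminationGracePeriodSeconds: 15   # keep within the Spot VM's 30-second preemption window
  containers:
  - name: app
    image: my-app:latest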
For best practice you can refer to this documentation for additional information.
Use Terminal to Launch the Executable
Instead of using open, you can directly invoke the executable file inside the app bundle:
/Applications/MyApp.app/Contents/MacOS/MyApp
This ensures the application is attached to the terminal and can read user input. I hope this helps.
It looks like Heroku just updated their Python classic buildpack. The error message you are seeing is addressed in the following change log:
Instagram’s API does not support programmatic login using a saved username and password due to strict security and privacy policies. OAuth 2.0 is the only official and supported method, which requires explicit user interaction for authentication. Attempting to bypass this flow by storing credentials and automating login violates Instagram's terms of service and may lead to API access being revoked.
Instead, you should authenticate the account once using the standard OAuth flow, obtain a long-lived access token, and securely store it (e.g., in a database). You can then use this token to programmatically fetch images without needing further logins. This method adheres to Instagram's API guidelines while allowing automated access to the account's media.
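Once you have the long-lived token, fetching media is a plain HTTP call. A sketch in Python (the token placeholder and field list are illustrative; check the current Instagram API docs for the exact endpoint your app type uses):
Python
import requests

ACCESS_TOKEN = "..."  # long-lived token obtained once via OAuth, loaded from secure storage

resp = requests.get(
    "https://graph.instagram.com/me/media",
    params={"fields": "id,caption,media_url,timestamp", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()
for item in resp.json().get("data", []):
    print(item["id"], item.get("media_url"))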
When creating an app, there is a new concept of "use cases". When asked to select a use case, select "Other" (at the bottom). It will then use the older layout and you are able to add Products.
Genius!
PS: if you're working on a burger menu, change the "top" CSS attribute by -1 pixel.
This works for me:
Try putting the logic, such as AuthOptions, in a different file. For example:
/src/auth.ts
export const { handlers, signIn, signOut, auth } = NextAuth({})
/src/app/api/auth/[...nextauth]/route.ts
import { handlers } from "@/auth"
export const { GET, POST } = handlers
You could try using JavaScript instead of VBScript (note that ActiveXObject only works in Internet Explorer or an HTA):
<HTML>
<HEAD>
<TITLE>Fix</TITLE>
</HEAD>
<BODY>
<FORM>
<INPUT TYPE="BUTTON" NAME="Button" VALUE="Click">
<SCRIPT FOR="Button" EVENT="onClick" LANGUAGE="JavaScript">
var WshShell = new ActiveXObject("WScript.Shell");
WshShell.Run("cmd.exe /c .\\example.bat");
</SCRIPT>
</FORM>
</BODY>
</HTML>
const cors = require('cors');
app.use(cors());
This may not be the cleanest solution, and I'll still leave this question unsolved, as it isn't fully understood in my opinion. Still, I want to share my experience and how it worked out for me.
I modified it a little, but I used the code from this post as a minimal working example. I thought map_id had to be the same as in activeMapType: map.supportedMapTypes[map_id], but this seems not to be the case. Using map_id = 1 and map.supportedMapTypes[0] resulted in the desired Street Map map type and used the offline tiles.
I wrote an article on searching encrypted birthdays. As long as the data you are encrypting/searching isn't identifiable by its self, this method works well.
https://www.sqlservercentral.com/articles/searching-an-encrypted-column
Today, Google shows the test it runs, so you can confirm whether Google is using up-to-date validation per the Schema.org CSV or JSON file of allowed @type elements. Simply hover over the 'failed' test element and then compare to: https://github.com/schemaorg/schemaorg/blob/main/data/releases/14.0/schemaorg-all-https.jsonld
Line 19554 confirms this.
Serilog sub-loggers would be an adequate start; almost your exact example.
To ensure the public/ directory is correctly exposed when deploying a PHP project on Vercel, you need to configure the routing so that all incoming requests are internally directed to that folder. This allows users to access the app without explicitly including public/ in the URL. Typically, this is done by setting up URL rewrites in the deployment configuration so that the public/ folder acts as the root for your application. Additionally, it’s important to place all your public-facing files inside that directory and avoid referencing the public/ folder in your paths or links directly. This setup helps maintain a clean and secure URL structure while aligning with Vercel's deployment model.
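As a rough sketch, the rewrite can look like this in vercel.json (assuming a community PHP runtime is configured separately; the pattern is illustrative):
JSON
{
  "routes": [
    { "src": "/(.*)", "dest": "/public/$1" }
  ]
}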
Thanks for the help. I changed the inputs a little, reinstalled Python and the libraries so the path is set automatically, and it's finally giving me hand landmarks :)
First hand landmarks: [NormalizedLandmark(x=0.35780423879623413, y=0.005128130316734314, z=7.445843266395968e-07, visibility=0.0, presence=0.0),
What I finally have is:
import cv2
import mediapipe as mp
from pathlib import Path
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from mediapipe.tasks.python.core.base_options import BaseOptions

# Use an absolute path
task_path = Path(__file__).parent / "hand_landmarker.task"
print("Resolved model path:", task_path)
if not task_path.exists():
    raise FileNotFoundError(f"Model file not found at: {task_path}")

options = vision.HandLandmarkerOptions(
    base_options=BaseOptions(model_asset_path=str(task_path.resolve())),
    running_mode=vision.RunningMode.IMAGE,
    num_hands=2
)
detector = vision.HandLandmarker.create_from_options(options)

# Hands
cv2_image = cv2.imread("hand.jpg")  # any BGR image loaded with OpenCV (hypothetical path)
mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cv2.cvtColor(cv2_image, cv2.COLOR_BGR2RGB))
hands_results = detector.detect(mp_image)

results = {}
results['hands'] = hands_results.hand_landmarks
hand_landmarks = results.get('hands')
first_hand = hand_landmarks[0] if hand_landmarks and len(hand_landmarks) > 0 else None
print("First hand landmarks:", first_hand)
You fail to call your mainLoop(). You initialize your app, but all that does is call the constructor. Add myApp.mainLoop() at the end.
You can do this using Laravel Eloquent. First create these models:
Country, PaymentMethod, CountryPaymentMethod, PaymentMethodConfiguration
Then define these relationships inside the models:
// Country.php
public function countryPaymentMethods()
{
return $this->hasMany(CountryPaymentMethod::class);
}
// PaymentMethod.php
public function countryPaymentMethods()
{
return $this->hasMany(CountryPaymentMethod::class);
}
// CountryPaymentMethod.php
public function configurations()
{
return $this->hasMany(PaymentMethodConfiguration::class);
}
public function country()
{
return $this->belongsTo(Country::class);
}
public function paymentMethod()
{
return $this->belongsTo(PaymentMethod::class);
}
// PaymentMethodConfiguration.php
public function countryPaymentMethod()
{
return $this->belongsTo(CountryPaymentMethod::class);
}
And you can query them like this
$country = Country::where('name', 'Ireland')->first();
$paymentMethod = PaymentMethod::where('name', 'Mobile Money')->first();
$configurations = PaymentMethodConfiguration::whereHas('countryPaymentMethod', function ($query) use ($country, $paymentMethod) {
$query->where('country_id', $country->id)
->where('payment_method_id', $paymentMethod->id)
->where('is_active', true);
})->get();
Read more about this here:
Laravel - Eloquent "Has", "With", "WhereHas" - What do they mean?
Thank you for the feedback. I am the author of grapa. It started out as more of a personal project where I could test out ideas, but evolved over time. There are a lot of capabilities in the language that I am finding with ChatGPT and Cursor are unique. With the help of Cursor, I've revamped the docs considerably, and also revised the CLI interface. The CLI piping is now there.
https://grapa-dev.github.io/grapa/about/
https://grapa-dev.github.io/grapa/cli_quickstart/
See the above for the new CLI options.
Just recently added is what appears to be a very feature-rich and performant grep.
https://grapa-dev.github.io/grapa/grep/
I will be focusing on completing the language over the next few months. It is already fairly complete, production-ready, well tested, and stable (with the exception of a ROW table type in the DB; the COL column store works very well and is quite a bit faster anyway).
I implemented the file system and DB using the same structure so you traverse the DB with similar commands you would a file system. And the DB supports multiple levels with GROUP.
https://grapa-dev.github.io/grapa/sys/file/
The docs were fully created by GenAI tools - and GenAI does not have the pattern of the grapa language, so it keeps generating grapa code with javascript or python syntax. I had it create the basic syntax doc in the docs to help constrain this, and that helps a great deal, but it doesn't always reference it. I still need to scrub all the docs to verify/fix things.
Your understanding is correct: volumeBindingMode belongs to StorageClass. The Helm chart's templating is likely using this variable to configure an associated StorageClass, and it's not being directly applied to the PersistentVolumeClaim in the final manifest.
If you find volumeBindingMode actually appearing in the rendered PVC YAML, then that would indicate an incorrect Helm chart definition, which should be reported to the chart maintainers.
Try this out: https://www.parseviewerx.com/json/json_viewer
If I understand correctly, you can easily achieve this using the scipy.ndimage module, like this:
import numpy as np
from scipy.ndimage import label, find_objects
DashBoard = np.zeros((10,10), dtype=int)
DashBoard[5,5] = 1
DashBoard[5,4] = 1
DashBoard[5,6] = 1
DashBoard[6,6] = 1
DashBoard[7,7] = 1
print(DashBoard)
Initial DashBoard (note the "alien" at (7,7)):
# define which neighbours count as adjacent (in this case, exclude diagonals)
s = [[0,1,0],
[1,1,1],
[0,1,0]]
# label adjacent "islands"
labeled, nums = label(DashBoard, structure=s)
print(nums)
print(labeled)
labeled visualized:
loc = find_objects(labeled)[0]
res = labeled[loc]
res visualized:
print(res.T.shape)
will give you
(3, 2)
Only the data manipulation language (DML) changes are updated automatically during continuous migrations, whereas the data definition language (DDL) changes are the user's responsibility to ensure compatibility between the source and destination databases, and can be done in two ways—refer to this documentation.
It is also recommended to review the known limitations of your migration scenario before proceeding with the database migration. See here for the limitations specific to using a PostgreSQL database as the source.
But how will I run it in debug mode?
@mass-dot-net:
Resolving the Cookies problem just requires creating your own implementation of HttpCookies that you can instantiate:
public class FakeHttpCookies : HttpCookies
{
public override void Append(string name, string value)
{
throw new NotImplementedException();
}
public override void Append(IHttpCookie cookie)
{
throw new NotImplementedException();
}
public override IHttpCookie CreateNew()
{
throw new NotImplementedException();
}
}
Now you can set the value in the MockHttpResponseData constructor:
public MockHttpResponseData(FunctionContext functionContext, HttpCookies? cookies = null) : base(functionContext)
{
Cookies = cookies ?? new FakeHttpCookies();
}
In particular, you may want to create an indexed view (materialized view) with a UNION inside it, which is not possible: "Cannot create index on view 'sql.dbo.vw_INDX' because it contains one or more UNION, INTERSECT, or EXCEPT operators. Consider creating a separate indexed view for each query that is an input to the UNION, INTERSECT, or EXCEPT operators of the original view."
But I need that view in another query to join with.
PS: MS SQL Server 2014 Enterprise
I need help in Adobe Acrobat creating a conditional font color for a text field: any value greater than 0 should be green, anything less than 0 red, and anything neutral black. I am not a coder, so any help with the custom JavaScript for my PDF form would be appreciated.
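As a starting point, a sketch you could put in the field's custom Format script (Text Field Properties > Format > Custom > Custom Format Script) in Acrobat:
JavaScript
// Color the field text by sign: positive green, negative red, zero black
var v = Number(event.value);
if (v > 0) {
    event.target.textColor = color.green;
} else if (v < 0) {
    event.target.textColor = color.red;
} else {
    event.target.textColor = color.black;
}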
To limit Tailwind CSS styles to a specific React component, you can scope Tailwind classes using custom prefixes, context wrappers, or CSS Modules. Tailwind is global by default, so it doesn't naturally scope to one component; a prefix-based sketch is shown below.
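A sketch of the prefix approach in tailwind.config.js (the content path is hypothetical); the component's utilities are then written as tw-flex, tw-p-4, etc., so they can't collide with other styles:
JavaScript
// tailwind.config.js: scope scanning to the component and prefix all utilities
module.exports = {
  prefix: 'tw-',
  content: ['./src/components/MyWidget/**/*.{js,jsx,ts,tsx}'],
};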
My observation is this: Schema.org content can be deployed in both the HEADER and the BODY. The question is not cut and dried; the nuance is the size of the script and what tools you use to deploy.
Consider page load, time to paint implications too.
In a world of AI engines we need to tackle this problem further upstream and look to create AI endpoints with full JSON-LD Schema.org content files for real-time ingestion. Not depend on page by page loading.
There is an initiative, see git Schema.txt, to direct this traffic via robots.txt to a dedicated schema.txt file that contains links to json-ld files and catalog schema.org @IDs. AI crawlers can then access what they need to run semantic queries, that replace today's regular search. In turn this makes your Schema metadata even more precious to your future website.
When it comes down to numbers, everyone goes silent. It feels like a secret in data mining and a huge conspiracy against the end user! Well, I test a lot, and I can only say someone is lying big time! None of these tools work the way they are supposed to; the results obtained in practice are clear evidence of that. And all I hear as an excuse is "your data quality isn't good enough". What does that sentence really mean? That the model can't learn from the data (even though it should, given the weights adjusted in the model)? The tools are too poorly developed. Optuna should find proper hyperparameters in the wastes of hyperparameter space, and the model obtained should converge to the target. If you use the same file setup as in Optuna, with the same classifier and target column, you should get pretty much the same results when you test on the rest of the file for that particular target column. Why those results differ so much is a clue I am still working out. It's math; it should reproduce the same results. If it doesn't, something is wrong: the weights aren't the same, so the results are always worse. How to solve this I don't know. In the end I always tune models manually, and sometimes find proper hyperparameters that Optuna never finds, even though it should. AI should do the model hyperparameter search, with all the needed metrics, evaluation plots, and explanations; we don't have time to fool around with so much manual work. I thought Optuna would solve my problems too, but in my experience Optuna never delivers good models, and I have tried all the optional metrics in Optuna, even a custom balanced-AUC formula of my own. Optuna simply isn't up to the job. The reasons for that can be very varied, and all you hear is philosophizing about it.
If you want to print "hello world" (or just "hello"), here is how to do it in Java:
class Main {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
If you want only "hello", then write only "hello" inside the print call.
You're asking a few different things here. First, you already partly answered your own question on ESM compatibility, and partly the const-style enums etc. The remaining question is how to share this between Angular and Nest.
Nothing in a DTO should be Angular or Nest specific. How you set it up is kinda up to you, and out of scope of at least the titular question. So I'll take the second part to try to answer.
---
What you want to do here is essentially share a TypeScript package between two different TypeScript projects. How, depends on how your projects are set up. I can only suggest vague ideas and concepts; you'll have to either clarify the question further or figure out the rest of the way on your own.
Do you have a monorepo set up? Something like nx.dev or turborepo? npm workspaces? They each have their own ways of setting up and sharing projects.
Otherwise, if you have separate repos, you could again do a few things differently. For one - you could install the DTO in, say, your NestJS package. You would have to export that library as an actual export. You could then add that entire package as a dependency for your Angular project. That would, of course, suck for various reasons.
The last option I'll propose is to make a third package, call it "validations" or something similar, and share that as a dependency for both other packages.
That's how you get clean isolation, no duplication, even versioning support, and proper tree-shaking support (so that you actually import TypeScript files into your sources before you transpile the resulting app code into JavaScript).
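For the monorepo route, a minimal npm-workspaces sketch of the root package.json (all names are hypothetical); both apps then depend on the shared package by name:
JSON
{
  "name": "my-workspace",
  "private": true,
  "workspaces": [
    "packages/validations",
    "apps/angular-app",
    "apps/nest-api"
  ]
}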
Seems to be working now in beta4.
Something like this?
>>> import re
>>> p = r"\([^()]*\) ?"
>>> re.sub(p, "", "Hello (this is extra) world (more text)")
'Hello world '
Maybe a little late for you, but the best way to do this is to use useFetchClient:
const { get, post } = useFetchClient();
const test = await get('/users')
This uses the internal admin routes with authentication.
Not shared by default, but you can configure it
https://learn.microsoft.com/en-us/entra/msal/dotnet/how-to/token-cache-serialization?tabs=msal
That little "highlight" (called a tab attention indicator) is controlled by the browser itself, and you cannot manually trigger it via JavaScript. The browser only shows it for specific built-in events, such as:
• alert(), confirm(), or prompt() dialogs
• some types of audio playing
• push notifications
It is a fundamental part of the browser's UX, hence it can't be controlled at all.
auto_lang_field package
I was working on a feature in a project and I wanted to detect the locale of the text inside a TextField, in order to change the direction of the widget and the TextStyle as well. I have built a Flutter package to handle this.
pub.dev: auto_lang_field
flutter pub add auto_lang_field
For customization, I also created a repo containing data for 81+ languages to customize the language detection; also check the Language Detection Considerations section in the package README for more details.
I really hope this package is helpful for Flutter developers.
Have you considered edge cases where the timezone shifts due to DST or other regional rules? Using Python’s zoneinfo (Python 3.9+) or pytz can help ensure the billing cycle consistently aligns with the 26th–25th range across different timezones.
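For instance, a minimal sketch (the timezone and the 26th–25th boundaries are illustrative):
Python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

def billing_cycle_start(now: datetime) -> datetime:
    # The cycle runs from the 26th of one month through the 25th of the next
    if now.day >= 26:
        return now.replace(day=26, hour=0, minute=0, second=0, microsecond=0)
    first = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    return (first - timedelta(days=1)).replace(day=26)

print(billing_cycle_start(datetime.now(ZoneInfo("Europe/Berlin"))))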
Oh, I already found the solution: my problem wasn't AJV validation at all. My mistake was using a select option with value="", which makes the select element required by HTML.
<select>
<option value=""> option 0 </option> <!-- this makes the select tag required -->
</select>
Can I apply the required attribute to <select> fields in HTML?
This solution may not be automated, but you can manually specify the folder in which to clone your repository. Here's how to do it:
git clone https://github.com/xyz/project1.git xyz/project1
This clones your repository into the project1 subdirectory inside the xyz directory.
Have you found a solution to this problem? Thank you in advance for your response.
The router address you're using (0xE592427A0AEce92De3Edee1F18E0157C05861564) is incorrect for Sepolia— that's the mainnet address for Uniswap V3's SwapRouter. On the Sepolia testnet, you should use the SwapRouter02 address at 0x3bFA4769FB09eefC5a80d6E87c3B9C650f7Ae48E instead.
In Remix, when the transaction simulates and fails, expand the error message—it often includes a specific revert reason (e.g., "INSUFFICIENT_LIQUIDITY" or "INVALID_POOL"). That can pinpoint if it's something beyond the router address.
Reverting to 1.17.5 seems to have fixed the problem for me.
I opened a gh issue so maybe it will be fixed in 1.18.2
The problem with the PIVOT function in my case is that I get "insufficient privileges" error messages. I'm not sure which calls within the pivot function(s) cause this error.
MetPy's divergence computation uses the three-point method outlined in "Derivative formulae and errors for non-uniformly spaced points", Proceedings of the Royal Society A, May 2005, DOI: 10.1098/rspa.2004.1430.
Free download at https://www.researchgate.net/publication/228577212
SELECT *
FROM your_table
WHERE xmin = pg_current_xact_id()::xid;
I asked this question on the Ansible forums as well and received a response.
Quick link: Error: 'ProxmoxNodeInfoAnsible' object has no attribute 'module' · Issue #114 · ansible-collections/community.proxmox · GitHub - i.e. check your proxmoxer version.
TL;DR: Proxmox 8.4 only ships with proxmoxer 1.2.0; it needs to be >= 2.2. The best suggested option is to create a Python 3 venv and run a dedicated Ansible user on the Proxmox host.
It works for me; ibm.biz is a safe place, don't worry.
To prevent potential disruptions in the main automation flow, especially in cases where the MySQL database is unavailable or encounters an error, all logging operations are executed asynchronously. This is implemented using the "Execute Workflow" node with the "Wait for Completion" option set to false. By offloading the logging process to an independent sub-workflow, the system ensures that logging tasks are executed in the background without impacting the execution or speed of the main workflow. The sub-workflow contains the logic to insert logs into the MySQL table and includes proper error handling (e.g., try-catch) to silently handle any database issues. This design pattern offers a reliable and fault-tolerant approach to logging, maintaining data traceability without compromising workflow continuity.
This code accesses the "ColumnTitle" (e.g., "Previous Value") from a SharePoint HTTP request output, extracting a list of user details with LookupId, LookupValue (name), and Email from a nested JSON structure.
Trying different things, it looks like it could be the names of the arguments in wnominate(). Try adding the "rcObject" argument:
res <- wnominate(
rcObject = rc_samp,
dims = 1,
polarity = 4,
minvotes = 2
)
After 10 days of debugging, I found the root cause: I forgot to define the key and value in my Kubernetes ConfigMap in the Terraform config. This had several consequences:
The application received an empty list for `allowedOrigins` in the CORS configuration
The CORS filter couldn't match the request's origin against the empty allowed origins list
For preflight requests, the filter simply passed the request down the chain
Since I hadn't implemented any handler for OPTIONS requests in my application, it resulted in a 405 Method Not Allowed error
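For reference, the missing piece looked roughly like this (a sketch; resource names and values are hypothetical):
Terraform
# The key/value pair I had forgotten to define
resource "kubernetes_config_map" "app_config" {
  metadata {
    name = "app-config"
  }
  data = {
    ALLOWED_ORIGINS = "https://app.example.com"
  }
}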
Hi, a bit late, but I made a workaround.
Here is the JavaScript code; the HTML it needs is explained in the comments at the top.
// This Code needs a InputField with ID "Sequenz"
// - a div with ID "Feedback" to show keydown char
// - a div with ID "FeedbackU" to show input char
// - a div with ID "key" to show entered key
// - a div with ID "Java" to show that the script initialized
function init(){
document.addEventListener('keydown', function(event){keydown(event);})
document.getElementById('Java').innerHTML="Keypress";
//Use this Line to add Events to every Input Field, just call with the ID - Number
AddEventtoInput("Sequenz");
}
var handled = false;
function keydown(event){
if(event.which == 229){
return;
}else{
handled = true;
document.getElementById('Feedback').innerHTML = event.key;
if (keypress(event.key) == true){
event.preventDefault();
}
}
// ----- Demonstration of values. Please Delete.
document.getElementById('Feedback').innerHTML = document.getElementById('Feedback').innerHTML + " - " + event.which;
// ------------------------
}
function AddEventtoInput(div){
document.getElementById(div).addEventListener("input", keyinput);
}
function keyinput(event){
var key = event.target.value;
key = key.slice(-1);
if(key.indexOf('\t') != -1){key = "Tab";}
if(handled == false){
// ----- Demonstration of values. Please Delete.
document.getElementById('FeedbackU').innerHTML = key;
// ------------------------
if (keypress(key) == true){
event.target.value = event.target.value.slice(0,-1);
}
}else{
handled = false;
}
}
function keypress(key){
prevent = false;
//------------------------ Here Code to Check entered Char, Change prevent to true to prevent Char
document.getElementById('key').innerHTML = key;
//-----------------------
return prevent;
}
I have written this code and it is free to use (public domain). Please help yourself and add it to your project.
M. Glaser
Munich Germany
Looks like this issue has been fixed in the last few versions of Angular.
Here is an example of Angular Material components working inside a component with encapsulation set to ShadowDom:
And if you need the contents of cdkOverlay, modals, etc to be inside the shadow dom, here is a proof of concept doing that:
It seems like the n=n+1 is inside the for loop, which may be causing it to increment multiple times unexpectedly. You might be skipping some indexes or going out of range. Try moving the increment outside the for loop or consider using a nested loop instead of combining while and for.
I am writing Python-compatible syntax, but the concept should work for C++ too.
comboBox.setEditable(True)
comboBox.lineEdit().setAlignment(Qt.AlignCenter)
comboBox.lineEdit().setReadOnly(True)
The idea is to make it editable, but then set the QLineEdit inside the QComboBox to read-only.
This code defines a function that finds matches of a regex pattern occurring only outside parentheses in a string, by manually tracking nesting depth, ensuring precise, context-aware pattern matching.
One alternative is the Boxtin project, but as of now, it's not yet released. "Boxtin is a customizable Java security manager agent, intended to replace the original security manager"
You should add missing foreign keys to ensure data integrity and generate accurate database diagrams. However, don’t add all of them at once — do it in phases. First, check for invalid data that may violate the constraints. Adding too many FKs at once can cause locks, performance hits, or app errors if the data isn’t clean. Test changes in a dev environment, fix any orphaned rows, and roll out gradually. This improves design, avoids bad data, and helps tools like SSMS show correct relationships.
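For the pre-check step, a generic sketch (table and column names are hypothetical):
SQL
-- Rows in the child table whose parent is missing would violate the new FK
SELECT c.*
FROM child_table c
LEFT JOIN parent_table p ON p.id = c.parent_id
WHERE c.parent_id IS NOT NULL
  AND p.id IS NULL;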
This happened to me, using the JUCE library with Projucer. Uninstalled the app from my phone, then ran it on Android Studio again.
I think this AI-assisted answer is the clearest:
=EOMONTH(A1,0)-MOD(WEEKDAY(EOMONTH(A1,0),2)-5,7)
A1 is a date value with any date in the subject month
-5 is for Friday. For other days:
-4 is for Thursday
etc for the sequence -1 (Monday) through -7 (Sunday)
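Worked example: if A1 is 2024-07-10, EOMONTH(A1,0) is 2024-07-31, a Wednesday, so WEEKDAY(...,2) returns 3; MOD(3-5,7) = 5, and 31-5 = 26, giving Friday 2024-07-26, the last Friday of that month.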
Can someone create a zig zag indicator with ATR filter?
This is such a good game. This code implements a basic Tic-Tac-Toe game using Jetpack Compose in Kotlin, handling game logic, state management, UI rendering, and player turns with reactive updates and modal for end-game results like win or draw.
I was able to make this work by changing the 'print_info' argument to 'print_stats'. The documentation for the tg_louvain graph data science library function (https://docs.tigergraph.com/graph-ml/3.10/community-algorithms/louvain) incorrectly gives the argument name as 'print_info'. I simply referred to the actual implementation at https://github.com/tigergraph/gsql-graph-algorithms and figured out the correct argument name.
Tried using the server’s own cert for verify, but it failed with an issuer error. Makes sense now: it’s just the leaf cert, not the whole trust chain.
How's this? I had to rebuild the sub-picker each time the category changed (redundant code), but this worked:
struct ContentView2: View {
    enum Category: String, CaseIterable, Identifiable {
        case typeA, typeB, typeC
        var id: Self { self }
        var availableOptions: [Option] {
            switch self {
            case .typeA: return [.a1, .a2]
            case .typeB: return [.b1, .b2]
            case .typeC: return [.c1, .c2]
            }
        }
    }

    enum Option: String, CaseIterable, Identifiable {
        case a1, a2, b1, b2, c1, c2
        var id: Self { self }
    }

    @State private var selectedCategory: Category = .typeA
    @State private var selectedOption: Option = .a1

    var body: some View {
        Form {
            Section(header: Text("Selection")) {
                HStack {
                    Picker("", selection: $selectedCategory) {
                        ForEach(Category.allCases) { category in
                            Text(category.rawValue.capitalized).tag(category)
                        }
                    }
                    Spacer()
                    // The same sub-picker is rebuilt for each category (redundant, but it works)
                    switch selectedCategory {
                    case .typeA:
                        Picker("", selection: $selectedOption) {
                            ForEach(self.selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    case .typeB:
                        Picker("", selection: $selectedOption) {
                            ForEach(self.selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    case .typeC:
                        Picker("", selection: $selectedOption) {
                            ForEach(self.selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    }
                }
            }
        }
        .labelsHidden()
        .pickerStyle(.menu)
    }
}

#Preview {
    ContentView2()
}
When I need to inspect multiple popup or dropdown elements that disappear on blur, I use this handy snippet:
Open DevTools → Console
Paste the code below and press Enter
window.addEventListener('keydown', function (event) {
if (event.key === 'F8') {
debugger;
}
});
Press F8 (or change the key) anytime to trigger debugger; and pause execution
This makes it easy to freeze the page and inspect tricky UI elements before they vanish.
👋
You're asking a great question — and it's a very common challenge when bridging string normalization between Python and SQL-based systems like MariaDB.
Unfortunately, MariaDB collations such as utf8mb4_general_ci are not exactly equivalent to Python's str.lower() or str.casefold(). While utf8mb4_general_ci provides case-insensitive comparison, it does not handle Unicode normalization (like removing accents or special casing from some scripts), and it’s less aggressive than str.casefold() which is meant for caseless matching across different languages and scripts.
str.lower() only lowercases characters, and it's limited (e.g. it doesn't handle German ß correctly).
str.casefold() is a more aggressive, Unicode-aware version of lower(), intended for caseless string comparisons.
utf8mb4_general_ci is a case-insensitive collation but doesn't apply Unicode normalization like NFKC or NFKD.
Use utf8mb4_unicode_ci or utf8mb4_0900_ai_ci (if available): these give better Unicode-aware matching than general_ci, but don't expect them to match str.casefold() completely.
Example:
CREATE TABLE example (
name VARCHAR(255)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Normalize in Python before insert:
If exact normalization (like casefold() or unicodedata.normalize()) is critical, consider pre-processing strings before storing them:
import unicodedata
def normalize(s):
return unicodedata.normalize('NFKC', s.casefold())
Store a normalized column: Add a second column that stores the normalized value and index it for fast equality comparison.
ALTER TABLE users ADD COLUMN name_normalized VARCHAR(255);
CREATE INDEX idx_normalized_name ON users(name_normalized);
Use generated columns (MariaDB 10.2+): With a bit of trickery (though limited to SQL functions), you might offload normalization to the DB via generated columns — but it won't replicate Python's casefold/Unicode normalization fully.
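As a sketch of that approach (LOWER() is only a rough stand-in; it is not equivalent to casefold()):
SQL
ALTER TABLE users
  ADD COLUMN name_ci VARCHAR(255) AS (LOWER(name)) STORED;
CREATE INDEX idx_users_name_ci ON users (name_ci);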
There is no MariaDB collation that is fully equivalent to str.casefold(). Your best bet is to:
Use utf8mb4_unicode_ci for better Unicode-aware comparisons than general_ci.
Normalize in Python (casefold() plus unicodedata.normalize()) and store the normalized value in its own column.
Hope that helps, and if anyone has found a closer match for casefold() in SQL, I'd love to hear it too!