I have created a VB.NET application that generates a Carousel Image Slider HTML webpage on which clicking an image executes a standard desktop shortcut. It is available at http://www.mv-w.net/BallyOak/SliderPlus/index.html If anyone is interested in how I did it, they can contact me.
No PHP or JS is needed except for the interface. Just VB.
The access token expires after 1 hour; the refresh token is what expires after 14 days.
.NET 9
internal static string Sha1(string path)
{
    // Stream the file through SHA-1 and return the digest as a lowercase hex string.
    using var stream = File.OpenRead(path);
    using var sha1 = System.Security.Cryptography.SHA1.Create();
    return Convert.ToHexStringLower(sha1.ComputeHash(stream)); // Convert.ToHexStringLower is new in .NET 9
}
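A quick usage sketch (the path is just an example):
string hash = Sha1(@"C:\temp\build.zip");
Console.WriteLine(hash); // 40 lowercase hex characters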
Because at @106.0.2, executablePath was not a function; it was a getter that returns a promise.
You can either use one of several pre-existing AI agents or easily build one on platforms like n8n that can do this job. You can find a nice tutorial here to help you auto-publish YouTube videos to Facebook & Instagram.
I'm facing the same problem, but unable to solve it. Using spring boot 3.4.5, r2dbc-postgresql 1.0.7. My query looks like:
select test.id,
test.name,
test.description,
test.active,
count(q.id) as questions_count
from test_entity test
left join test_question q on q.test_entity_id = test.id
group by test.id, test.name, test.description, test.active
I tried many variants of spelling questions_count, but I always get null.
I even tried to wrap this query in
select mm.* from (
...
) mm
but that doesn't help.
I'm using an R2dbcRepository with a @Query annotation, and an interface for retrieving the result set.
Thanks for the hints. The task seems not to be fully automatable, so I created a guide for the IfU.
As it may be of use for someone reading this post, I post it here:
To improve system security and minimize potential vulnerabilities, it is strongly recommended that the Firebird database service does not run under the Local System account or any user account with elevated privileges.
Instead, use the provided Firebird tool instsvc.exe to install the service under a dedicated low-privilege user account:
1. Create a Dedicated Local User Account
Press Win + R, type compmgmt.msc, and press Enter to open Computer Management.
Navigate to System Tools → Local Users and Groups → Users.
Right-click on Users and select New User….
Create a new account (e.g., firebird_svc) with the following settings:
Set a strong password (in this example "YourSecurePassword")
Disable "User has to change password ..."
Enable “User cannot change password” and “Password never expires”.
Do not add the user to the Administrators group.
Click Create, then Close.
2. Deny Local Logon for the Service Account
Open secpol.msc
Go to Local Policies → User Rights Assignment
Find Deny log on locally
Add the firebird_svc user
3. Reinstall the Service Under the Dedicated User
Open a Command Prompt with Administrator rights.
Navigate to the Firebird installation directory (e.g., C:\Program Files\Firebird\Firebird_4_0).
Run the following commands to install the service under the dedicated user:
instsvc stop
instsvc remove
instsvc install -l firebird_svc YourSecurePassword
instsvc start
4. Right-click the Firebird installation directory (e.g., C:\Program Files\Firebird\Firebird_4_0), select Properties, then navigate to the Security tab. Ensure that the firebird_svc account is listed and has Full Control permissions assigned. If the account is not listed, add it and assign the appropriate rights.
The Firebird server now runs under a dedicated user account with limited system permissions, significantly enhancing the overall security of the system by reducing the risk of privilege escalation.
Additionally, access to the database file (YourApplicationsDatabaseFile.fdb) can be restricted to the Firebird service account and system administrators only. This prevents unauthorized users from reading or modifying the file and supports secure system operation.
1. Open Command Prompt as Administrator
2. Navigate to PathWhereYourDbFileIsLocated
cd \ProgramData\MyDbProgram
3. Remove Inherited Permissions
icacls "YourApplicationsDatabaseFile.fdb" /inheritance:r
4. Grant Access to Firebird Service User
icacls "YourApplicationsDatabaseFile.fdb" /grant firebird_svc:(M)
5. Grant Full Control to Administrators
icacls "YourApplicationsDatabaseFile.fdb" /grant *S-1-5-32-544:(OI)(CI)(F)
Are you using flutter_native_splash? As far as I can see, you can't disable this first splash (at least not on the Flutter side), because the native app loads Flutter here. But you can adjust the color of the native splash so that the transition to your own splash isn't quite as harsh.
If you use this code, does that error still occur?
export type EnvironmentTypes = {
  Development: string,
  Production: string,
  Test: string,
}
export const Environment: EnvironmentTypes = {
  Development: 'development',
  Production: 'production',
  Test: 'test',
}
// MainActivity.java
package com.ashish.bdgfakehacksim;
import android.app.AlertDialog;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.text.InputType;
import android.widget.EditText;
import android.widget.TextView;
import android.widget.Button;
import androidx.appcompat.app.AppCompatActivity;
public class MainActivity extends AppCompatActivity {
private static final String CORRECT_PASSWORD = "Ashish440";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
EditText passwordInput = new EditText(this);
passwordInput.setInputType(InputType.TYPE_CLASS_TEXT | InputType.TYPE_TEXT_VARIATION_PASSWORD);
new AlertDialog.Builder(this)
.setTitle("Enter Password")
.setView(passwordInput)
.setCancelable(false)
.setPositiveButton("Enter", (dialog, which) -> {
String input = passwordInput.getText().toString();
if (input.equals(CORRECT_PASSWORD)) {
startActivity(new Intent(this, LoadingActivity.class));
finish();
} else {
finish();
}
})
.show();
}
}
// LoadingActivity.java
package com.ashish.bdgfakehacksim;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.widget.TextView;
import androidx.appcompat.app.AppCompatActivity;
public class LoadingActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
TextView textView = new TextView(this);
textView.setText("Loading BDG Hack Engine...");
textView.setTextSize(24);
setContentView(textView);
new Handler().postDelayed(() -> {
startActivity(new Intent(this, ResultActivity.class));
finish();
}, 3000);
}
}
// ResultActivity.java
package com.ashish.bdgfakehacksim;
import android.os.Bundle;
import android.widget.TextView;
import androidx.appcompat.app.AppCompatActivity;
import java.util.Random;
public class ResultActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
TextView resultText = new TextView(this);
resultText.setTextSize(24);
boolean success = new Random().nextBoolean();
if (success) {
resultText.setText("✅ BDG Hack Successful! Points Added: +9999");
} else {
resultText.setText("❌ BDG Hack Failed. Try Again Later.");
}
setContentView(resultText);
}
}
I got the same problem of the same IP address assigned to both nodes. My cluster was set up through KinD (Kubernetes in Docker).
If this is the same case as yours, you only need to stop the container of each node and start those containers again. You might see distinct IP addresses assigned to both the nodes.
PS: One of my peers did this on my KinD cluster.
I'm having the same problem in 2025, but I need a solution that works without an external library, as my problem is related to <input type="date" /> (see my update of https://stackoverflow.com/a/79654183/15910996), and because people use my webpage in different countries, I also need a solution that works automatically with the current user's locale.
My idea is to take advantage of new Date().toLocaleDateString() always being able to do the right thing, just in the wrong direction. If I take a static ISO date (e.g. "2021-02-01"), I can easily ask JavaScript how this date is formatted locally, right now. To construct the right ISO date from any local date, I only need to understand in which order month, year and day are used. I find the positions by looking at the formatted string from the static date.
Luckily, we don't have to care about leading zeros or the kind of separators used in the locale date strings.
With my solution, on an Australian computer, you can do the following:
alert(new Date(parseLocaleDateString("21/11/1968")));
In the US it will look and work the same way, depending on the user's locale:
alert(new Date(parseLocaleDateString("11/21/1968")));
Please note: My sandbox-example starts with an ISO-date, because I don't know which locale the current user has... 😉
// easy:
const localeDate = new Date("1968-11-21").toLocaleDateString();
// hard:
const isoDate = parseLocaleDateString(localeDate);
console.log("locale:", localeDate);
console.log("ISO: ", isoDate);
function parseLocaleDateString(value) {
// e.g. value = "21/11/1968"
if (!value) {
return "";
}
const valueParts = value.split(/\D/).map(s => parseInt(s)); // e.g. [21, 11, 1968]
if (valueParts.length !== 3) {
return "";
}
const staticDate = new Date(2021, 1, 1).toLocaleDateString(); // e.g. "01/02/2021"
const staticParts = staticDate.split(/\D/).map(s => parseInt(s)); // e.g. [1, 2, 2021]
const year = String(valueParts[staticParts.indexOf(2021)]); // e.g. "1968"
const month = String(valueParts[staticParts.indexOf(2)]); // e.g. "11"
const day = String(valueParts[staticParts.indexOf(1)]); // e.g. "21"
return [year.padStart(4, "0"), month.padStart(2, "0"), day.padStart(2, "0")].join("-");
}
Update / clarification: I realized my original post listed the wrong versions.
I’m actually on Spring Boot 3.5.3 with Java 21.
For reference, here’s the relevant part of my build.gradle
plugins {
id 'java'
id 'org.springframework.boot' version '3.5.3'
id 'io.spring.dependency-management' version '1.1.7'
}
java {
toolchain {
languageVersion = JavaLanguageVersion.of(21)
}
}
The rest of the question remains the same—just wanted to correct the environment details.
To increase playback speed without changing pitch, use:
await sound.setRateAsync(2.0, true, Audio.PitchCorrectionQuality.High);
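A slightly fuller sketch, assuming expo-av and a local asset (the file name is just an example):
import { Audio } from 'expo-av';
// Load a clip, then play it at double speed with pitch correction.
const { sound } = await Audio.Sound.createAsync(require('./clip.mp3'));
await sound.setRateAsync(2.0, true, Audio.PitchCorrectionQuality.High);
await sound.playAsync();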
Workaround: Delete the XIB/Storyboard files that caused compile error and build the project again without cleaning the build folder. If another XIB/Storyboard file fails, delete it as well and repeat the process until the compilation is successful. Afterward, you can restore the deleted XIB/Storyboard files (using Git to discard the changes) and build the project again.
If a function or variable name contains SSN, Fortify treats it as a privacy violation, because Fortify treats it as related to a Social Security Number (SSN). If you change SSN to some other text, the error will vanish.
If you really want to prevent drift entirely, you should start using deployment stacks. With stacks you can prevent any changes from happening outside of the deployment stack. Currently what-if is not very reliable, as it produces what-if noise on many of the resources. From the Bicep community calls we have learned that an improvement to what-if is planned, but that improvement will only apply when using deployment stacks. So even if you do not use the deny option of deployment stacks, I suggest starting to use them now, so that when the what-if improvements are introduced you will be ready to take advantage of them.
You can still do what-if validation now, but overall you will have to review the changes somewhat manually due to the amount of noise. For example, you can have pipelines with two stages: one stage runs only what-if, you validate the results, and based on the validation you decide to run the second stage where the actual deployment is done.
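A rough sketch of such a two-stage pipeline in Azure Pipelines YAML (resource group, template, and environment names are examples, not from the original post):
stages:
- stage: whatif
  jobs:
  - job: preview
    steps:
    - script: az deployment group what-if --resource-group my-rg --template-file main.bicep
      displayName: Run what-if for review
- stage: deploy
  dependsOn: whatif
  jobs:
  - deployment: run
    environment: prod   # manual approval gate configured on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: az deployment group create --resource-group my-rg --template-file main.bicep
            displayName: Actual deployment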
expo-av is declared as deprecated. Please use the corresponding expo-audio or expo-video:
"Deprecated: The Video and Audio APIs from expo-av have now been deprecated and replaced by improved versions in expo-video and expo-audio."
My specific want has been resolved by this PR https://github.com/apache/airflow/pull/46535!
Thank you for your response. Before I saw that someone had answered my question, I tried using the options and it worked.
In my controller
And in my form:
Grunge-style analog photos around 2025. I was taking pictures together in front of a Nissan GT 86 car in London, England, sitting with a pose in model style, turned toward the camera, wearing a black T-shirt outfit, jeans, and Nike Air Jordan Low shoes, using flash.
The parsing error was caused because the code presents each 2D tensor as a single byte string using tf.io.serialize_tensor, but the parsing schema was set to expect a fixed-length array of strings. To fix this, change the FixedLenFeature to expect a single scalar string, which will then be correctly decoded by tf.io.parse_tensor. Kindly refer to the gist for working code.
The TypeError in BroadcastTo.call() was caused by the Masking layer (applied as jet_masked = keras.layers.Masking(mask_value=0.0)(jet_input)). This layer created a boolean mask (shape (None, 3, 2)) that interfered with downstream layers like LSTM. To fix this, remove the Masking layer and instead pass a custom jet_mask tensor directly to your LSTM's mask argument. This approach prevents automatic masking while still allowing jet_input to be concatenated with other inputs. Please refer to this gist.
That's typical Unity; they've moved that Layout option to a tiny icon in the toolbar of the Scene view.
As I accidentally hit it, my project's UI became invisible; it took me an hour to find out why.
Initialize your JSON Object as given.
Parse JSON: convert the JSON Object into a structured data object that is easy to manipulate programmatically.
Add a Compose action to get the required output:
{
"email": "[email protected]",
"first name": "Donald",
"last name": "Duck"
}
Schema for reference:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"contentVersion": "1.0.0.0",
"triggers": {
"When_a_HTTP_request_is_received": {
"type": "Request",
"kind": "Http"
}
},
"actions": {
"Parse_JSON": {
"runAfter": {
"Initialize_JSON_Object": [
"Succeeded"
]
},
"type": "ParseJson",
"inputs": {
"content": "@variables('JSON Object')",
"schema": {
"type": "object",
"properties": {
"email": {
"type": "string"
},
"phone number": {
"type": "string"
},
"fields": {
"type": "array",
"items": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"value": {
"type": "string"
},
"id": {
"type": "integer"
}
},
"required": [
"description",
"value",
"id"
]
}
}
}
}
}
},
"Initialize_JSON_Object": {
"runAfter": {},
"type": "InitializeVariable",
"inputs": {
"variables": [
{
"name": "JSON Object",
"type": "object",
"value": {
"email": "[email protected]",
"phone number": "+123 321 111 333",
"fields": [
{
"description": "name",
"value": "Mickey",
"id": 1
},
{
"description": "first name",
"value": "Donald",
"id": 1
},
{
"description": "last name",
"value": "Duck",
"id": 3
},
{
"description": "age",
"value": "1",
"id": 4
}
]
}
}
]
}
},
"Compose": {
"runAfter": {
"Parse_JSON": [
"Succeeded"
]
},
"type": "Compose",
"inputs": {
"email": "@{body('Parse_JSON')?['email']}",
"@{body('Parse_JSON')?['fields'][1]['description']}": "@{body('Parse_JSON')?['fields'][1]['value']}",
"@{body('Parse_JSON')?['fields'][2]['description']}": "@{body('Parse_JSON')?['fields'][2]['value']}"
}
}
},
"outputs": {},
"parameters": {
"$connections": {
"type": "Object",
"defaultValue": {}
}
}
},
"parameters": {
"$connections": {
"type": "Object",
"value": {}
}
}
}
Signed up to Stack Overflow now to say thank you! I had the same issue and now I know why :)
God bless
In the end, deduplication using a materialized view seemed to be the most performant approach, because ingestion latency was getting really high with a custom deduplication mechanism that didn't use materialized views. The only other option was to upscale the SKU, but that in itself also has a great impact on cost.
However, deduplication using the materialized-view approach also puts a certain load on the ingestion process when working with billions of rows.
A classic approach for your case would be using one database (e.g. PostgreSQL) and two tables: one for match data and another for match summary. Such a database is supported by practically any programming language, and you'll see a lot of examples of how to insert the data, so you won't even need to write a CSV file and JSON file but can write the data directly into the database. But if you just have files, reading and inserting into the database is also simple; e.g. if you want to insert CSV, look at this answer: How to import CSV file data into a PostgreSQL table
Inserting JSON is a little less trivial, but still not very hard: How can I import a JSON file into PostgreSQL?
But definitely just one database and two tables, no need to run two database servers just to contain two tables.
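For the CSV side, a minimal sketch of the import (the table and file names are hypothetical):
COPY match_data FROM '/path/to/matches.csv' WITH (FORMAT csv, HEADER true);
-- or, from psql without superuser rights:
-- \copy match_data FROM 'matches.csv' WITH (FORMAT csv, HEADER true)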
| header 1 | header 2 |
| --- | --- |
| cell 1 | cell 2 |
| cell 3 | cell 4 |
<select name="cars" id="cars">
<option value="volvo">Volvo</option>
<option value="saab">Saab</option>
<option value="mercedes">Mercedes</option>
<option value="audi">Audi</option>
</select>
const $select = document.querySelector('#cars');
for (let i = 0; i < $select.options.length; i++) {
const option = $select.options[i];
console.log(`index: ${i}, value: ${option.value}, text: ${option.text}`);
}
Is this what you want?
If you want to visualize the server-side rendered version versus regular version - https://www.crawlably.com/ssr-checker/
Disclaimer - I created this tool for the non-devs on our team to check SSR issues.
In reference to: the code stops on the line "qdf.Parameters(Parm1) = intVdrProfileID" with "Item not found in this collection"...
Parm1 needs to be a string variable holding the name of the parameter. Dim Parm1 As String and set it to the name of the parameter exactly as it appears in the query.
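A minimal sketch, assuming a hypothetical parameter name prmVdrProfileID (use the exact name from your query's PARAMETERS clause):
Dim Parm1 As String
Parm1 = "prmVdrProfileID" ' must match the parameter name defined in the saved query
qdf.Parameters(Parm1) = intVdrProfileID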
When upgrading to Spring Boot 3, Tomcat 10, or anything that requires Jakarta EE 9, it's always safer to replace all javax dependencies with jakarta ones. It's not completely straightforward.
This worked earlier without any security settings with the Hibernate jar on Spring Boot versions below 3, but after Spring Boot 3 we were compelled to add security-related settings to the JDBC connection URL in the application configuration file and also remove the Hibernate jar dependency from the pom:
.url=jdbc:sqlserver://<connection-ip:port>;databaseName=<Dbname>;encrypt=true;trustServerCertificate=true;
Did you ever implement this? I'm after the same thing and I'm about to resort to just using a FileSystemWatcher.
Spring Boot 3 comes with its own jakarta jar dependencies. Hibernate 5 is not compatible with them, as it brings the javax jars. So please upgrade your Spring Boot version and remove the Hibernate dependency. Your application will work perfectly, and the querydsl dependency also gets a workaround.
This is very common due to VS Code Copilot incorrectly predicting the new control-flow syntax.
TL;DR: make sure @ is added before the flow keyword; in my case, the else keyword.
Hi, I don't know if this solution is still useful to you, but I was having this same error on my Hostinger server; it's a very small but key change.
When uploading a Laravel/Filament application to cloud hosting, images uploaded through the admin section are not displayed on the frontend. Instead, a broken-image icon appears. Checking the Nginx logs, the specific error is: failed (40: Too many levels of symbolic links).
This indicates that the web server (Nginx) cannot access the images because the public/storage symbolic link pointing to the real location of the files (usually storage/app/public) is configured incorrectly or suffers from a permission problem that the system interprets as a loop or an excessive chain of links.
1.- Symbolic link (public/storage) with the wrong owner (root:root): even if the link target (storage/app/public) had the correct permissions, the symbolic link file itself was owned by root, while Nginx runs as a different user (www-data). This can cause Nginx not to "trust" the link or to interpret it incorrectly.
2.- Possible incorrect creation of, or loop in, the symbolic link: although less likely once the target path is verified, a symbolic link that points to itself or to a nested link can produce this error.
The solution focuses on removing any existing public/storage symbolic link and then recreating it, making sure that the owner is the web server user (www-data in most Nginx setups on Ubuntu/Debian).
1. Remove the Problematic Symbolic Link
First, remove the existing public/storage symbolic link. This will not delete your images, since the link is just a "shortcut".
# Navigate to the 'public' directory of your Laravel project
cd /var/www/nombre_proyecto/public
# Remove the 'storage' symbolic link
rm storage
2. Recreate the Symbolic Link with the Correct Owner
The most effective way is to try to create the symbolic link directly as the web server user.
# Navigate to the root of your Laravel project
cd /var/www/nombre_tu_proyecto/
# Run the storage:link command as the web server user
# Replace 'www-data' if your Nginx user is different (e.g. 'nginx')
sudo -u www-data php artisan storage:link
If the command sudo -u www-data php artisan storage:link fails or gives you an error, you can run php artisan storage:link (which will create it as root) and then use the following command to change its ownership:
# Navigate to the 'public' directory of your project
cd /var/www/nombre_tu_proyecto/public
# Change the ownership of the symbolic link *directly* (with -h or --no-dereference)
# Replace 'www-data' if your Nginx user is different
sudo chown -h www-data:www-data storage
3. Verify the Ownership of the Symbolic Link
It is crucial to verify that the previous step worked and that the storage symbolic link is now owned by your web server user.
# From /var/www/nombre_de_tu_proyecto/public
ls -l storage
The output should look similar to this (note www-data www-data as the owner):
lrwxrwxrwx 1 www-data www-data 35 Jul 3 03:27 storage -> /var/www/nombre_de_tu_proyecto/storage/app/public
4. Clear Laravel Caches
To make sure Laravel isn't serving outdated or incorrect image URLs due to caching, clear them.
# From the root of your Laravel project
php artisan config:clear
php artisan cache:clear
php artisan view:clear
5. Restart Nginx
Finally, reload Nginx so that it serves the files through the corrected link.
sudo systemctl reload nginx
This is how I managed to solve my problem. In general, the important points to keep in mind when bringing up a website on Hostinger or any server are user permissions and which users are creating the files and granting access; in this case it is important that www-data has access to these files and folders, because it is the user Nginx uses to manage and serve the project files. I hope this helps you or others with this problem 🙌.
The tokio-run-until-stalled crate (https://crates.io/crates/tokio-run-until-stalled) is specifically designed to address this need: it provides a way to run a Tokio runtime until all pending tasks have completed (i.e., a "stalled" state, where no more progress can be made).
The problem I had was this line.
return array(true, $idp_sso_url . '?SAMLRequest=' . base64_encode(gzdeflate($authnRequest)));
The $idp_sso_url from Google already had a parameter in the URL, so my use of "?SAMLRequest=..." needed to be "&SAMLRequest=..." instead.
About the first problem, my guess is that the LLM uses DOM structure and visual hints to infer which element matches your instruction. So when visually adjacent elements (like icons or spans inside buttons) are rendered, the LLM picks the wrong node, especially if accessibility labels or semantic tags are missing.
The link is broken for me as well. I would suggest reaching out to Stripe support (https://support.stripe.com/) about this.
Thank you Sam! Good news! Here is the result of headerData.
totalRowsCount 95
headerData {...} jsonTableCopy JSON
author: "Bob Hoskins"
_id: "9e570df9-6ea9-4760-98f1-0df76084e857"
_owner: "76001129-23f0-41da-9f3c-15b9bd2fe0e9"
_createdDate: "Wed Jun 25 2025 13:40:16 GMT+0530 (India Standard Time)"
_updatedDate: "Wed Jun 25 2025 13:40:16 GMT+0530 (India Standard Time)"
bookCopies: 1
available: true
title: "All They Want Is The Truth"
bookOwner: "BICF"
numberOfColumns 9
bookTableHeaders Array(9) jsonTableCopy JSON
0: "author"
1: "_id"
2: "_owner"
3: "_createdDate"
4: "_updatedDate"
5: "bookCopies"
6: "available"
7: "title"
8: "bookOwner"
Also...Some columns are empty like you pointed out. Here's a screenshot of my wix data table:
So I entered "NA" in some columns. That helped. Here's the result after that:
totalRowsCount 95
headerData {...} jsonTableCopy JSON
author: "Bob Hoskins"
borrowedBy: "NA"
_id: "9e570df9-6ea9-4760-98f1-0df76084e857"
_owner: "76001129-23f0-41da-9f3c-15b9bd2fe0e9"
_createdDate: "Wed Jun 25 2025 13:40:16 GMT+0530 (India Standard Time)"
_updatedDate: "Thu Jul 03 2025 08:44:49 GMT+0530 (India Standard Time)"
requestedBy: "NA"
bookCopies: 1
available: true
title: "All They Want Is The Truth"
bookOwner: "BICF"
numberOfColumns 11
bookTableHeaders Array(11) jsonTableCopy JSON
0: "author"
1: "borrowedBy"
2: "_id"
3: "_owner"
4: "_createdDate"
5: "_updatedDate"
6: "requestedBy"
7: "bookCopies"
8: "available"
9: "title"
10: "bookOwner"
So now the non-empty columns are showing up. Thank you for your help! However, 3 columns, "available", "requestBook", and "approve", were set as boolean. I was assuming that leaving them empty would be taken as false. If these are not showing up because they are empty, what should I do? Should I change these booleans to number columns and then write some code to make them look like Yes, No & NA in the page's table?
I looked at javascript way back in the year 2000! After that I became a sculptor! I may make some dumb mistakes here and there! Once again, thanks a lot Sam! May God bless you!
I had a similar error in Ionic; I noticed that the HttpEventType import was not correct.
The correct one is:
import { HttpEventType } from '@angular/common/http';
Thanks for the guide. How do I deploy to https://dockerhosting.ru/?
How about using the FakeLogger?
https://learn.microsoft.com/en-nz/dotnet/api/microsoft.extensions.logging.testing.fakelogger
https://devblogs.microsoft.com/dotnet/fake-it-til-you-make-it-to-production/
using System.Linq;
using FluentAssertions; // assuming FluentAssertions for Should()
using Microsoft.Extensions.Logging.Testing;
using Xunit;

public class Tests
{
    private readonly FakeLogger<GetImageByPropertyCode> _fakeLogger = new();

    [Fact]
    public void Test()
    {
        // Take a snapshot of everything logged so far and assert on it.
        _fakeLogger.Collector
            .GetSnapshot()
            .Count(l => l.Message.StartsWith("whatevs"))
            .Should().Be(1);
    }
}
Since July 2025, GitHub Actions stopped supporting the windows-2019 runner, so I encountered the same problem. I found a solution from Open .net framework 4.5 project in VS 2022. Is there any workaround?
The key step is to download the .NET Framework 4.5 reference-assemblies package and extract it into:
C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework
GitHub action example:
name: test
on:
workflow_dispatch:
push:
branches: ['main']
jobs:
build:
env:
projName: net45action
buildCfg: Release
net45SdkUrl: 'https://www.nuget.org/api/v2/package/Microsoft.NETFramework.ReferenceAssemblies.net45/1.0.3'
sdkSystemPath: 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework'
runs-on: windows-2025
steps:
- name: Install .net framework 4.5 SDK
shell: pwsh
run: |
echo "download ${env:net45SdkUrl}"
Invoke-WebRequest -Uri "${env:net45SdkUrl}" -OutFile "net45sdk.zip"
echo "unzip net45sdk.zip"
Expand-Archive -Force -LiteralPath "net45sdk.zip" -DestinationPath "net45sdk"
echo "move to ${env:sdkSystemPath}"
Move-Item -Force -LiteralPath "net45sdk\build\.NETFramework\v4.5" -Destination "${env:sdkSystemPath}"
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Add msbuild to PATH
uses: microsoft/setup-msbuild@v2
- name: Setup VSTest Path
uses: darenm/[email protected]
- name: Restore packages
run: nuget restore ${env:projName}.sln
- name: Build
run: msbuild ${env:projName}.sln -p:Configuration=${env:buildCfg}
- name: Run unit tests
run: vstest.console.exe "${{ env.projName}}.test\bin\${{ env.buildCfg }}\${{ env.projName}}.test.dll"
Here is an example project:
https://github.com/vrnobody/net45action
I don't see an error here other than a statement reversal about the training dataset while training the model.
In the statement below, trainTgt is passed to build the padding mask applied while training. It doesn't really matter, since you are only considering the output predictions for your reference. Do you have an error message you can share to help understand the issue better?
tgt_padding_mask = generate_padding_mask(trainTgt, tokenizer.vocab['[PAD]']).cuda()
model.train()
trainPred: torch.Tensor = model(trainSrc, trainTgt, tgt_mask, tgt_padding_mask)
Thanks,
Ramakoti Reddy.
What gave me the desired effect was porting my basic operations to RTK Query.
From a thunk, when RTK Query actions are dispatched, they can be awaited and their result is returned. For example:
export const createAndNameThing = createAsyncThunk(
'things/createAndName',
async (name: string, { dispatch }) => {
// Step 1: Create the thing
const createResult = await dispatch(
thingApi.endpoints.createThing.initiate(undefined)
).unwrap();
// Step 2: Update the thing with the name
const updateResult = await dispatch(
thingApi.endpoints.updateThing.initiate({
id: createResult.id,
data: name
})
).unwrap();
return updateResult;
}
);
import React, { useEffect, useState } from "react";
const Materias_List = () => {
const [originalData, setOriginalData] = useState([]);
const [dataApi, setDataApi] = useState([]);
const [estadoChecked, setEstadoChecked] = useState({
materia_promocionada: false,
materia_pendiente: false,
materia_cursando: false,
materia_tiene_apuntes: false,
});
const fetchData = async () => {
try {
// Simulated fetch (replace with real fetch if needed)
const response = await fetch("/path/to/your/degree_in_software_development.json");
const json = await response.json();
setOriginalData(json.subjects);
setDataApi(json.subjects);
} catch (e) {
console.error("Error al consumir API", e);
}
};
const handleOnChange = (e) => {
const { name, checked } = e.target;
setEstadoChecked((prev) => ({
...prev,
[name]: checked,
}));
};
useEffect(() => {
fetchData();
}, []);
// Apply filters every time estadoChecked changes
useEffect(() => {
let filtered = [...originalData];
const filters = [];
if (estadoChecked.materia_promocionada) filters.push("Promocionada");
if (estadoChecked.materia_pendiente) filters.push("Pendiente");
if (estadoChecked.materia_cursando) filters.push("Cursando");
// Filter by estado (Promocionada, Pendiente, Cursando)
if (filters.length > 0) {
filtered = filtered.filter((s) => filters.includes(s.estado));
}
// Filter by tiene_apuntes
if (estadoChecked.materia_tiene_apuntes) {
filtered = filtered.filter((s) => s.tiene_apuntes);
}
setDataApi(filtered);
}, [estadoChecked, originalData]);
return (
<>
<div>
<p>Filtrar por: </p>
<label>
<input
type="checkbox"
name="materia_promocionada"
onChange={handleOnChange}
/>
Promocionada
</label>
<label>
<input
type="checkbox"
name="materia_pendiente"
onChange={handleOnChange}
/>
Pendiente
</label>
<label>
<input
type="checkbox"
name="materia_cursando"
onChange={handleOnChange}
/>
Cursando
</label>
<label>
<input
type="checkbox"
name="materia_tiene_apuntes"
onChange={handleOnChange}
/>
Tiene apuntes
</label>
</div>
<div id="materias_container">
<ul id="materias_lista">
{dataApi.map((subject) => (
<li key={subject.codigo}>
<div className="materias__item">
<span className={`estado_${subject.estado.toLowerCase()}`}>
{subject.estado}
</span>
<h4>{subject.nombre}</h4>
<p className="materias__item-detalle">
<span>Código: {subject.codigo}</span>
{subject.tiene_apuntes && subject.link_apuntes && (
<span>
<a href={`/${subject.link_apuntes}`} target="_blank">
📚 Apuntes
</a>
</span>
)}
</p>
</div>
</li>
))}
</ul>
</div>
</>
);
};
export default Materias_List;
Is there any solution to this problem? I'm also having the same problem. Dependency conflicts arise only when the Supabase imports are included; otherwise everything is fine. What should I do?
In PowerShell, where python did not produce a result because my installation did not add its path to Path (the environment variable), (Get-Command python).Path worked.
Great! This bot is exactly what you need. It can check whether a number is registered on Telegram. You can try it out here: https://t.me/nihaoiybot. My Telegram contact is @xm88918 if you need further assistance.
Rebuild the cache again
yarn cache clean
yarn install
The workaround described in this comment in an Avalonia Github Issue worked for me.
You still get the extra seven columns but you can remove them from the DataGrid at the end of the handler method as a last step.
Not a perfect solution, but might work for some.
Found a solution that worked for me.
You need to copy your .ipa files to your mac
Unzip the ipa
Re-sign the extension app with freshly written entitlements.plist based on your needs
Re-sign the main app with freshly written entitlements.plist based on your needs
Re-zip the ipa and upload it via Transporter
Good luck !
You can try setting the contenteditable attribute to false; it will keep the text visible while making it read-only.
<div contenteditable="false">
content goes here
</div>
It looked like a temporary glitch in the Autodesk hubs api. I am able to successfully query the data now.
Change
:paths ["src"]
to the location of your data files, e.g.
:paths ["C:\\folder\\clojure\\data"]
in deps.edn
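Note that replacing "src" entirely would drop your sources from the classpath; a fuller deps.edn sketch (the data path is an example) keeps both:
{:paths ["src" "C:\\folder\\clojure\\data"]}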
For me, the echo %JAVA_HOME% command was returning back %JAVA_HOME% itself on my new Windows system. After setting JAVA_HOME to "C:\Program Files\Eclipse Adoptium\jdk-21.0.4.7-hotspot", I removed the following from pom.xml, which fixed the build issue:
<properties>
<java.version>21</java.version>
</properties>
io/resource looks for the file on the classpath, not in the current directory. It may even be looking for something/file.txt on the classpath, since it's a relative path in the something namespace.
You can enable xp_cmdshell and have a procedure that executes a PowerShell script using that command. The contents of the script can include anything, in your case a web request. This doesn't require importing any assemblies, and I find CLR to be overkill for this use case.
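A minimal sketch of that approach (the URL is a placeholder; note that enabling xp_cmdshell is a security trade-off):
-- one-time setup: enable xp_cmdshell
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;
-- fire a web request from T-SQL via PowerShell
EXEC xp_cmdshell 'powershell -NoProfile -Command "Invoke-WebRequest -Uri https://example.com/api -UseBasicParsing"';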
func setupView() {
let eventMessenger = viewModel.getEventMessenger()
let model = viewModel.getEnvironmentModel()
let swiftUIView = CreateHeroSubraceView()
.environmentObject(eventMessenger)
.environmentObject(model)
let hostingController = UIHostingController(rootView: swiftUIView)
hostingController.view.translatesAutoresizingMaskIntoConstraints = false
hostingController.view.backgroundColor = .clear
hostingController.additionalSafeAreaInsets = .zero
addChild(hostingController)
view.addSubview(hostingController.view)
hostingController.view.snp.makeConstraints { make in
make.edges.equalToSuperview()
}
hostingController.didMove(toParent: self)
}
This looks like a bug, but I made it work properly by adding an empty slot.
The fix does not make much sense, but it looks like it forces the correct default slot.
<template #thead />
Flutter
Just delete the cache folder inside the bin folder in the Flutter root folder, then run flutter doctor -v, and all should be well.
I'm not that experienced, and maybe it's not the safest solution, but have you tried running the query so that it returns an Object[] instead? It could help avoid the N+1 issue, since e.subEntity would be loaded in the same query.
If you look at the description of strconv.Itoa, it tells you:
Itoa is equivalent to FormatInt(int64(i), 10).
Therefore, to avoid any issues with truncation, simply use:
strconv.FormatInt(lastId, 10)
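A minimal sketch of why this matters (the value is an example that overflows a 32-bit int):
package main

import (
	"fmt"
	"strconv"
)

func main() {
	var lastId int64 = 5000000000
	// FormatInt takes int64 directly, so no narrowing conversion is needed.
	fmt.Println(strconv.FormatInt(lastId, 10)) // prints 5000000000 even where int is 32-bit
}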
It looks like this was asked in the GitHub issues for Kysely already:
https://github.com/kysely-org/kysely/issues/838
The author essentially recommends the solution I proposed in the question itself which is to wrap it in an object:
private async makeQuery(db: Conn) {
const filter = await getFilterArg(db);
return {
query: db.selectFrom("item").where("item.fkId", "=", filter)
}
}
Here is a pretty simple regex pattern generator. My approach is really simple: parse an end-user-friendly input string like yyyy-MM-dd,HH:mm:ss or 2025-06-05,08:37:38 and build a new regex pattern by exchanging all letters and digits with \d and escaping some chars like ., / or \.
The main issue was to correctly handle the specific [A|P]M pattern, but I think it should be OK. Honestly, it is not super perfect, but fine for getting a clue how it could be done.
Please let me know if you need further explanations about my code and I will add it here tomorrow.
function new-regex-pattern {
param (
[string]$s
)
$ampm = '[A|P]M'
if (($s -match [Regex]::Escape($ampm)) -or ($s -match $ampm)) {
$regexOptions = [Text.RegularExpressions.RegexOptions]'IgnoreCase, CultureInvariant'
if ($s -match [Regex]::Escape($ampm)) {
$pattern = -join ('(?<start>.*)(?<AM_PM>',
[Regex]::Escape($ampm), ')(?<end>.*)')
}
else {
$pattern = -join ('(?<start>.*)(?<AM_PM>', $ampm, ')(?<end>.*)')
}
$regexPattern = [Regex]::new($pattern, $regexOptions)
$match = $regexPattern.Matches($s)
return (convert-pattern $match[0].Groups['start'].Value) +
$match[0].Groups['AM_PM'].Value +
(convert-pattern $match[0].Groups['end'].Value)
}
return convert-pattern $s
}
function convert-pattern {
param (
[string]$s
)
if ($s.Length -gt 0) {
foreach ($c in [char[]]$s) {
switch ($c) {
{ $_ -match '[A-Z0-9]' } { $result += '\d' }
{ $_ -match '\s' } { $result += '\s' }
{ $_ -eq '.' } { $result += '\.' }
{ $_ -eq '/' } { $result += '\/' }
{ $_ -eq '\' } { $result += '\\' }
default { $result += $_ }
}
}
}
return $result
}
$formatinput1 = 'M/d/yyyy,HH:mm:ss.fff'
$formatinput2 = 'yyyy-MM-dd,HH:mm:ss'
$formatinput3 = 'yyyy-M-d h:mm:ss [A|P]M'
$sampleinput1 = '6/5/2025,08:37:38.058'
$sampleinput2 = '2025-06-05,08:37:38'
$sampleinput3 = '2025-6-5 8:37:38 AM'
$example1 = '6/5/2025,08:37:38.058,1.0527,-39.5013,38.072,1.0527,-39.5013'
$example2 = '2025-06-05,08:37:38,1.0527,-39.5013,38.072,1.0527,-39.5013'
$example3 = '2025-6-5 8:37:38 AM,1.0527,-39.5013,38.072,1.0527,-39.5013'
$regexPattern = [Regex]::new((new-regex-pattern $formatinput1))
Write-Host $regexPattern.Matches($example1)
$regexPattern = [Regex]::new((new-regex-pattern $formatinput2))
Write-Host $regexPattern.Matches($example2)
$regexPattern = [Regex]::new((new-regex-pattern $formatinput3))
Write-Host $regexPattern.Matches($example3)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput1))
Write-Host $regexPattern.Matches($example1)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput2))
Write-Host $regexPattern.Matches($example2)
$regexPattern = [Regex]::new((new-regex-pattern $sampleinput3))
Write-Host $regexPattern.Matches($example3)
https://drive.google.com/file/d/1TGQUtIpuH0FPuXT640OMuJ9jG8YpUbq0/view?usp=drivesdk
Both files are under the license ownership of Chandler Ayotte; this is a portion of a work in progress. Anyone who loves physics will love this. The volumetric addition of qubits is lacking knowable information that, when applied, will provide a different perspective. There is an upper boundary completely controlled by surface area.
Do you have a custom process? Also, under Processing, click on your process and look at the right pane. Check your editable region and also your server-side condition; make sure you select the right option.
If you are using the Universal Render Pipeline, a setting that can produce this issue is the Layer your GameObject is set to could be filtered out in the Filtering property of the default Universal Renderer Data.
The Scene View uses the default Universal Renderer Data set in the URP Asset's Renderer List for its Renderer settings.
In your URP Asset, double click the first Universal Renderer Data asset in the Renderer List to open it in the Inspector.
Under Filtering, check the Opaque Layer Mask and the Transparent Layer Mask to ensure the Layer your GameObject that is not rendering is checked on, or set the filter to Everything.
See the Unity Manual - Universal Renderer asset reference for URP page for more details on the Filtering property.
Better than disabling the checker completely, if you don't want to add "U" to your supposedly unsigned literals, is to disable just that case of the checker with - key: hicpp-signed-bitwise.IgnorePositiveIntegerLiterals in your configuration. (Copied from a comment at the request of julaine.)
pyfixest author here: you can access the R2 values via the `Feols._R2` attribute. You can find all the attributes of the Feols object here: link. Do you have a suggestion for how we could improve the documentation and make these things easier to find?
Interesting topic. How do you modify the add button at point 2?
You can use:
pd.options.display.html.use_mathjax = False
See this for more information:
https://jonathansoma.com/everything/python/dollar-signs-italics-jupyter/
You can disable MSVC compatible mode by passing -fno-ms-compatibility
option:
clang-cl.exe -fno-ms-compatibility main.cpp
If you installed Java after installing IntelliJ, Java should be separately installed and not affected.
Even if IntelliJ uninstalls Java, it should be very easy to reinstall it.
So it turns out this is a bug with Poetry 1.8.0 in relation to Artifactory that was patched in version 1.8.2.
Bug details: https://github.com/python-poetry/poetry/issues/9056
Changelog details for 1.8.2: https://python-poetry.org/history/#182---2024-03-02
I upgraded my Poetry version, and now the pyproject.toml configuration I described above works as expected with no issues.
For Qiskit, a detailed explanation is here:
https://blog.shivalahare.live/quantum-computing-explained-simply-for-developers/
https://blog.shivalahare.live/getting-started-with-qiskit-a-beginners-guide-to-quantum-programming/
Method: Implicit Chain of Thought via Knowledge Distillation (ICoT-KD)
🎯 Goal:
Train a model to answer complex questions without generating reasoning steps, by learning only the final answer from a teacher model's CoT output.
🧠 Core Approach:
Teacher Model (e.g., GPT-3.5):
Generates full reasoning (CoT) + final answer
5 × 8 = 40 → 40 − 12 = 28 → Answer: 28
Student Model (e.g., T5, GPT-J):
Sees only the question → learns to predict “28”
✦ CoT is never shown during training or inference
🛠️ Training Steps:
Teacher generates (Question → CoT + Answer)
Extract (Question → Answer)
Train student on final answers only
✨ Enhancements (Optional):
Self-Consistency Voting (across multiple CoT outputs)
Filtering incorrect teacher answers
✅ Key Advantages:
Fast, CoT-free inference
No model changes required
Effective on math/symbolic tasks
Works with medium-sized models
Methodology: Implicit Chain of Thought via Knowledge Distillation (ICoT-KD)
Goal: Train a model to answer complex reasoning questions correctly without generating explicit reasoning steps — by using CoT-labeled answers from a teacher model.
🧠 Core Framework
1. Teacher Model (e.g., GPT-3.5):
Prompted with CoT-style questions to produce step-by-step rationales followed by final answers.
Example output:
“There are 7 days in a week. 7 squared is 49. Answer: 49”
2. Student Model (e.g., T5, GPT-J):
Trained to map the original input question → only the final answer, using the teacher’s output.
CoT steps are not shown to the student at any point.
Training supervised via standard cross-entropy loss on the final answer.
🧪 Optional Enhancements
Self-Consistency Decoding (SCD):
Use majority voting across multiple CoT generations to select the most consistent answer.
Model Filtering:
Student only distills from teacher generations where the answer matches the gold label.
📌 Training Pipeline
Generate (Q, CoT + A) pairs via teacher
Extract (Q, A) pairs
Train student on (Q → A)
No CoT reasoning at inference
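A minimal sketch of the student-training step in this pipeline, assuming Hugging Face transformers (the model, question, and answer strings are illustrative, not the paper's code):
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# The student sees only the question and the teacher's final answer, never the CoT.
batch = tok(["Alex had 5 packs of 8 markers and gave 12 away. How many are left?"],
            text_target=["28"], return_tensors="pt", padding=True)
loss = model(**batch).loss  # standard cross-entropy on the answer tokens only
loss.backward()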
✅ Advantages
General-purpose, model-agnostic
Works with medium models (T5-Base, GPT-J)
Requires no architectural changes
Effective on math and symbolic reasoning tasks
Methodology: Stepwise Internalization for Implicit CoT Reasoning
🎯 Goal:
Train language models to internalize reasoning steps — achieving accurate answers without outputting intermediate steps.
⚙️ Key Approach: Stepwise Internalization
Start with Explicit CoT Training:
Train the model on questions with full step-by-step reasoning and final answer.
Gradual Token Removal (Curriculum Learning):
Iteratively remove CoT tokens from inputs.
Fine-tune the model at each stage.
Forces the model to internalize reasoning within hidden states.
Final Stage – Fully Implicit CoT:
The model predicts the answer directly from the question with no visible reasoning steps.
🔁 Training Optimization Techniques:
Removal Smoothing: Adds random offset to CoT token removal to avoid abrupt changes.
Optimizer Reset: Reset training optimizer at each stage to stabilize learning.
📈 Benefits:
Simpler than knowledge distillation-based methods.
No teacher model required.
Model-agnostic and scalable (effective from GPT-2 to Mistral-7B).
Significant speed gains with minimal loss in accuracy.
Methodology: Reasoning in a Continuous Latent Space (Latent CoT)
🎯 Goal:
Train models to reason internally — without generating reasoning steps — by using a latent vector to carry the thought process.
⚙️ Core Architecture
Reasoning Encoder
Takes a question and maps it to a latent vector (a hidden representation of the reasoning process).
Learns to encode “how to think” into a compact form.
Answer Decoder
Uses the latent vector to generate the final answer only.
No reasoning steps are ever output.
🧪 How it’s Trained
Use existing Chain-of-Thought (CoT) traces to guide the encoder.
CoT helps shape the latent space, even though the model never generates the steps.
The training is fully differentiable (end-to-end), allowing the entire system to be optimized smoothly.
✅ Why It’s Powerful
No CoT at inference: reasoning is done silently inside the vector space.
Faster and more compact than explicit CoT methods.
Generalizes well across reasoning tasks.
What is this paper trying to do?
Normally, when a language model solves a hard question (like a math problem), we make it write out the steps, like:
"7 × 4 = 28. 28 + 12 = 40. Answer: 40."
This is called Chain of Thought (CoT) — it helps the model think clearly and get better answers.
But writing out all those steps:
Takes more time
Makes the model slower
Isn’t always needed if the model can “think” silently
🎯 So what’s the new idea?
Instead of making the model write its thinking, this paper teaches it to do the reasoning silently — inside its “mind”.
Like how humans often do math in their head without saying each step out loud.
They call this Latent CoT — the thinking happens in a hidden, internal form.
🧱 How does the model do it?
It’s like building a machine with two parts:
1. 🧠 Reasoning Encoder
It reads the question
It creates a special vector (a bunch of numbers) that secretly represents how to solve the problem
Think of this like your brain quietly planning an answer
2. 🗣️ Answer Decoder
It takes that hidden “thought vector” and turns it into the final answer
It doesn’t show any reasoning steps — just the answer
🧪 How do they train it?
At first, they let the model see examples with full CoT steps (like 7×4 = 28 → 28 + 12 = 40). But the model is trained to:
Not repeat those steps
Just use them to shape its internal thinking space
In simple terms:
The model learns from the explanations, but doesn’t copy them — it learns to reason silently.
And because the whole system is trained all together, it learns smoothly and efficiently.
✅ Why is this helpful?
🔕 Faster: No reasoning steps to write out
🧠 Smarter: Reasoning is hidden, but still accurate
📦 Compact: Takes less space and time
🔁 Trainable end-to-end: Easy to improve all parts together
🔬 Good at reasoning tasks like math and logic
🎓 Final Analogy:
Imagine teaching a student to solve problems in their head after showing them many worked-out examples — and they still get the right answers, just silently. That’s exactly what this model is doing.
🔁 Updated Example for Slide
Teacher Model (e.g., GPT-3.5) — Prompted with CoT:
“Alex had 5 packs of markers. Each pack had 8 markers. He gave 12 markers to a friend. How many does he have left?
→ 5 × 8 = 40
→ 40 − 12 = 28
*Answer: 28”
Student Model (e.g., T5, GPT-J) — Trained to see only:
“Alex had 5 packs of markers. Each pack had 8 markers. He gave 12 markers to a friend. How many does he have left?”
→ "28"
✅ This example comes from GSM8K, one of the key datasets used in the paper’s experiments .
You can take a look at this article (use Google to translate): https://www.drhead.org/articles/quelques-techniques-seo-blackhat-1751392180396 It explains how to optimize SEO. But to answer you, go take a look at the SSG/ISR tech (by Next.js); you'll have what you need.
The issue was that the production environment variables were not set up in my Expo account. I set them here and the crash resolved.
https://docs.expo.dev/eas/environment-variables/#create-environment-variables
Team member here. VS Code Insiders now has MCP out of preview, with a new policy for enterprises. Docs are being updated for next week's release.
Please file vscode issues for any issues you find for how policies are applied, so we can triage and debug.
Firstly: I was able to fix the Chrome problem on iOS by making sure all the event handlers were NOT async functions. I blame AI for making these async in the first place, as the expo-audio APIs do not need to be awaited.
Secondly: to get iOS to work I did the following little trick: when I first call play in the event handler, I do a play, pause, play. I get a warning in there, but for some reason this manages to do some of the "priming" I needed. I do not know why this works; a lot of trial and error led me here.
Perhaps at some point I will try to reproduce this issue and fix it in a small project to give back to expo-audio.
You can keep both apps with their own IDP and avoid coupling by using a third IDP as a broker (like another Keycloak instance).
This broker handles login via both app1's IDP and app2’s Keycloak using OIDC.
Basically:
app1 stays as-is
app2 uses Keycloak
the broker gives you SSO between both
This way, each app manages its own users/sessions, and the broker keeps a global session across them.
I wrote a guide on how to set up multiple identity providers in Keycloak if you want to go that route:
https://medium.com/@raf.lucca/one-login-many-sources-oidc-sso-with-multiple-identity-providers-keycloak-08cf3cd13c78
I changed this and it worked. Source: https://samcogan.com/assign-azure-privileged-identity-management-roles-using-bicep/
param requestType string = 'AdminAssign'
Nothing has worked for me.
This is my insert: REPLACE(('The 6MP Dome IP Camera's clarity is solid, setup easy. Wide lens captures more area.'), '''', '''''')
It breaks because of the single quote in Camera's; these are dynamic variables.
Any suggestions?!
When you encounter an error while pushing to a remote repository because it contains changes you don't have locally, you need to integrate those changes first. Start by fetching the latest updates from the remote and then merge or rebase them into your local branch. If conflicts arise, resolve them manually in your text editor, stage the resolved files, and complete the merge or rebase process. Once your local branch is up-to-date and conflicts are resolved, you can safely push your changes to the remote repository.
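A typical command sequence for this (assuming the remote is origin and the branch is main):
git fetch origin
git merge origin/main        # or: git rebase origin/main
# fix any conflicts, then: git add <files> && git commit   (or: git rebase --continue)
git push origin main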
The cart behavior in Hydrogen 2 + Remix often relates to how Shopify handles cart sessions and optimistic UI updates. Below is a summary of potential reasons and troubleshooting techniques:
Optimistic UI kicks in immediately after CartForm.ACTIONS.LinesAdd, so your cart state temporarily shows the added line and updated quantity.
After the server processes the cart update, your app fetches the real cart state from Shopify.
If your server-side action or Shopify's API returns a cart with totalQuantity: 0, the optimistic state gets replaced by that empty cart, causing prices to show $0 and quantity to flip back.
The cart session is not properly persisted between client and server (Hydrogen relies on cart cookies or session).
Your server-side cart.addLines() call might be using an empty or expired cart ID, causing Shopify to create a new empty cart silently.
The cart context on the server side might be missing or not properly wired, so your cart object in the action is invalid.
(Ensure that the action receives a valid cart from your server-side context.)
export async function loader({ request, context }: LoaderFunctionArgs) {
const cart = await getCartFromRequest(request, context);
return json({ cart });
}
Confirm Cart Session is Present
In browser DevTools:
-- Check cookies for cart or similar session identifier
-- Ensure the same cart ID is used between optimistic state and server response
Log the Cart ID and Server Cart Object (make changes to your action to record what occurs):
export async function action({ request, context }: ActionFunctionArgs) {
const { cart } = context;
console.log('Cart context on server:', cart);
const formData = await request.formData();
const { action, inputs } = CartForm.getFormInput(formData);
if (action === CartForm.ACTIONS.LinesAdd) {
const result = await cart.addLines(inputs.lines);
console.log('Cart result from server:', result.cart);
return result;
}
}
In the logs, check:
-- Is cart.id valid?
-- Is totalQuantity correct on the server response?
-- Are prices correct?
useEffect(() => {
refetchCart();
}, [optimisticCart?.totalQuantity]);
(Replace refetchCart with your method for forcing a fresh cart query from server after changes)
Sometimes old cookies or dev server cache cause weird cart behavior. Try:
Clear cookies
Restart dev server
Test in private/incognito window
I hope this is useful! If you need assistance debugging logs or reviewing your server-side context configuration, please let me know.
You can use great tools like Deepgram, gpt-4-mini (this is the best for speed), or ElevenLabs.
In that case, you can reduce the delay by 1.2~1.5 s.
Hope you are doing well.
This is a Windows/.NET error and usually means something is wrong with how Python is being executed. It is likely a conflict with your system configuration or an integration like OneDrive or PowerShell, or even a corrupted Python installation.
It looks like you’ve correctly identified that you can programmatically set the Webhook Endpoint API version.
As to why Stripe is indicating that your default version is 2020-03-02, there is an account-wide default API version that is separate from the fixed API version being used by your SDK and can be upgraded from the Workbench. This account level default is used to determine the default API version used with raw calls and was previously used to set the default API version used by loosely typed language SDKs. Newer SDKs now default to the most recent version at the time of release but the account level default is still required for backward compatibility.
There is a simple answer:
req.remote_ip_address
See https://crowcpp.org/master/reference/structcrow_1_1request.html
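A minimal sketch of reading it in a handler (the route name is just an example):
CROW_ROUTE(app, "/whoami")([](const crow::request& req) {
    return req.remote_ip_address; // the client's IP address as a std::string
});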
In my case <span style="font-size:20pt"></span> worked, so you can set margin space before or after your element.
You can create a bean of the JwtDecoder, passing in a load-balanced RestTemplate:
@Bean
public JwtDecoder jwtDecoder(RestTemplate restTemplate) {
return NimbusJwtDecoder.withIssuerLocation("http://authorization-server").restOperations(restTemplate).build();
}
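This assumes a load-balanced RestTemplate bean is defined elsewhere in your configuration; a minimal sketch of that bean:
@Bean
@LoadBalanced
public RestTemplate restTemplate() {
    // resolves logical service names like "authorization-server" via the load balancer
    return new RestTemplate();
}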
I just switched to another video adapter (NVIDIA instead of Intel) and added swapchain recreation when it is out of date or suboptimal, and it now works.
UPDATE:
But it still warns that I should use one semaphore per swapchain image; I don't know why, as I am waiting for them anyway.
If you go just past the boundaries on your grid it works:
x = linspace(0.9, 2.1, n);
y = linspace(-0.1, 1.1, n);
z = linspace(-0.1, 1.1, n);
The issue here was in fact the endpoint. Instead of https://api.businesscentral.dynamics.com/v2.0/<user-domain-name>/api/Contoso/Punchout/v2.0/companies(<company-id>)/<EntityName>, it is https://api.businesscentral.dynamics.com/v2.0/<tenant id>/<environment name>/api/<Api publisher>/<Api group>/v2.0/companies(<company ID>)/<EntitySetName>. If after this you receive a 403 Forbidden error, simply generate a permission set within VS Code that gives your add-on application full control over its tables/pages/data etc., then go into the BC UI and search entra -> Microsoft Entra application -> select the application you are working on -> under permission sets, add the one you just created in VS Code.
Hi, thanks for the help. I asked GPT about the problem and got these answers; I implemented them and now it works. Here they are for reference.
Thank you all anyway.
"The default look-controls implementation handles touch input with the onTouchMove function. In version 1.7.0 of A‑Frame, the source shows that onTouchMove only adjusts yaw (horizontal rotation):
onTouchMove: function (evt) {
var direction;
var canvas = this.el.sceneEl.canvas;
var deltaY;
var yawObject = this.yawObject;
if (!this.touchStarted || !this.data.touchEnabled) { return; }
deltaY = 2 * Math.PI * (evt.touches[0].pageX - this.touchStart.x) /
canvas.clientWidth;
direction = this.data.reverseTouchDrag ? 1 : -1;
// Limit touch orientation to yaw (y axis).
yawObject.rotation.y -= deltaY * 0.5 * direction;
this.touchStart = { x: evt.touches[0].pageX, y: evt.touches[0].pageY };
},
Because the onTouchMove handler only updates yawObject.rotation.y, vertical pitch is not affected by dragging. The device’s gyroscope still changes orientation (magicWindowTrackingEnabled defaults to true when the attribute isn’t parsed), so the view moves when you physically tilt the device, but dragging up or down doesn’t modify pitch. To allow pitch rotation from dragging, you would need to customize or extend the look-controls component to apply movement to pitchObject.rotation.x as well.
AFRAME.components["look-controls"].Component.prototype.onTouchMove = function (evt) {
var canvas = this.el.sceneEl.canvas;
if (!this.touchStarted || !this.data.touchEnabled) { return; }
var touch = evt.touches[0];
var deltaX = 2 * Math.PI * (touch.pageX - this.touchStart.x) / canvas.clientWidth;
var deltaY = 2 * Math.PI * (touch.pageY - this.touchStart.y) / canvas.clientHeight;
var direction = this.data.reverseTouchDrag ? 1 : -1;
this.yawObject.rotation.y -= deltaX * 0.5 * direction;
this.pitchObject.rotation.x -= deltaY * 0.5 * direction;
var PI_2 = Math.PI / 2;
this.pitchObject.rotation.x = Math.max(-PI_2, Math.min(PI_2, this.pitchObject.rotation.x));
this.touchStart = { x: touch.pageX, y: touch.pageY };
};
"