That yellow triangle isn’t the Problems counter. It’s a warning that you turned Problems off. VS Code added this in 1.85—when Problems: Visibility is off, it shows a status-bar warning by design.
Hide just that icon (and keep Problems hidden):
Right-click the status bar → Manage Status Bar Items (or run “Preferences: Configure Status Bar Items”).
Uncheck the entry for Problems (visibility off) to hide that warning item. This per-item visibility is persisted.
If you use SSH/WSL/Dev Containers: open the remote window and do the same there—remote windows keep their own settings/profile.
If you actually want Problems decorations back (and thus no warning), just re-enable Problems: Visibility in Settings.
Based on https://lit.dev/docs/components/decorators/
You can also add the accessor keyword before your property name to use standard decorator syntax instead of the experimental decorator syntax.
Use .localhost
*.localhost is reserved for local development (RFC 6761)
Natively recognized by browsers
No conflicts with real domains
Recommended Laragon Configuration
Modify domains in Laragon:
Menu → Preferences → General
Change {name}.dev to {name}.localhost
Generate wildcard certificate:
# Install mkcert if not already done
mkcert -install
# Create wildcard certificate for localhost
mkcert "*.localhost" localhost 127.0.0.1 ::1
Regards
In TYPO3v12 or later, use the PageRepository: https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Deprecation-97027-ContentObjectRenderer-getTreeList.html
Perfect! The code you sent already generates a simple electronic-funk style track using pydub. It creates a 4-beat loop with kick, snare and hi-hat and exports it to a 2min30s MP3 file.
If you want to listen to the generated track, just run the code in a Python environment that supports pydub and ffmpeg (required for MP3 export).
Here is a ready-to-run version with small improvements:
from pydub.generators import Sine
from pydub import AudioSegment
# Beat settings
bpm = 150
beat_duration_ms = int((60 / bpm) * 1000)  # duration of one beat in ms (400 ms)
total_duration_ms = 2 * 60 * 1000 + 30 * 1000  # 2min30s

# Basic sounds
kick = Sine(60).to_audio_segment(duration=beat_duration_ms).apply_gain(+6)
snare = Sine(200).to_audio_segment(duration=100).apply_gain(-3)
hihat = Sine(8000).to_audio_segment(duration=50).apply_gain(-15)

# Function that builds one simple electronic-funk bar
def make_bar():
    bar = AudioSegment.silent(duration=beat_duration_ms * 4)
    # Kick on beats 1 and 3
    bar = bar.overlay(kick, position=0)
    bar = bar.overlay(kick, position=beat_duration_ms * 2)
    # Snare on beats 2 and 4
    bar = bar.overlay(snare, position=beat_duration_ms)
    bar = bar.overlay(snare, position=beat_duration_ms * 3)
    # Hi-hat on every beat
    for i in range(4):
        bar = bar.overlay(hihat, position=beat_duration_ms * i)
    return bar

# Build the main loop
bar = make_bar()
song = AudioSegment.silent(duration=0)
while len(song) < total_duration_ms:
    song += bar

# Export as MP3
output_path = "funk_moderno.mp3"
song.export(output_path, format="mp3")
print(f"Track generated at: {output_path}")
After running it, you'll have a funk_moderno.mp3 file in the same folder, ready to listen to.
If you want, I can improve this track by adding variations, effects, or a bass line to make it more "professional" and closer to modern electronic funk. Want me to do that?
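For instance, a minimal, self-contained bass-line sketch (the notes, gains and fades are arbitrary choices; the resulting bass_bar can be overlaid onto the drum bar from the code above):

from pydub.generators import Sine
from pydub import AudioSegment

bpm = 150
beat_ms = int((60 / bpm) * 1000)

# Simple two-note bass line: two beats on A1 (55 Hz), two on C2 (~65.4 Hz)
bass_a = Sine(55).to_audio_segment(duration=beat_ms).apply_gain(+3).fade_out(50)
bass_c = Sine(65.4).to_audio_segment(duration=beat_ms).apply_gain(+3).fade_out(50)
bass_bar = bass_a + bass_a + bass_c + bass_c  # one 4-beat bar

# Then overlay it onto an existing 4-beat drum bar, e.g.:
# bar = bar.overlay(bass_bar)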
I had the same problem; here is my solution:
You must define
DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@db:5432/${DB_DATABASE}
inside the docker-compose file for the backend service to connect to the postgres db (the host must be the service name, db). Here is my docker-compose file:
version: '3.8'
services:
  db:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_DATABASE}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
  backend:
    build: .
    container_name: backend
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@db:5432/${DB_DATABASE}
    depends_on:
      - db
    volumes:
      - .:/app
      - /app/node_modules
volumes:
  db_data:
Then change the host (DB_HOST) in the .env file to "db" (because you named the postgres service "db" in the docker-compose file):
PORT=3000
DB_HOST=db
DB_PORT=5432
DB_USERNAME=postgres
DB_PASSWORD=123456
DB_DATABASE=auth
The TypeORM config:
TypeOrmModule.forRootAsync({
  imports: [ConfigModule],
  useFactory: (configService: ConfigService) => ({
    type: 'postgres',
    host: configService.get('DB_HOST'),
    port: +configService.get('DB_PORT'),
    username: configService.get('DB_USERNAME'),
    password: configService.get('DB_PASSWORD'),
    database: configService.get('DB_DATABASE'),
    entities: [__dirname + '/**/*.entity{.ts,.js}'],
    synchronize: true,
    logging: true
  }),
  inject: [ConfigService],
}),
Here is an update: I have written an updated version of the code using dynamic allocation for all the matrices. It works quite well in parallel too (I have tested it up to 4096x4096); the only minor issue is that, with the largest size tested, I had to turn off the call to the "print" function because it stalled the program.
Inside the block-multiplication function there is now a condition on all 3 inner loops to handle the case where the row and column counts cannot be divided by the block dimension, using the fmin() function with this syntax:
for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
{
    for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
    {
        for(int k=kk; k<fmin(kk+blockSize, rowsA); ++k)
        {
            matC[i][j] += matA[i][k]*matB[k][j];
I tried this approach in the early version of the serial code too, but for some reason it didn't work, probably because I made some logical mistakes.
Anyway, this code does not work on rectangular matrices; if you try to run it with 2 rectangular matrices you will get an error because the pointers write outside the memory areas they are supposed to work in.
I tried to think about how to convert all the checks and mathematical conditions required for rectangular matrices into working code, but I had no success; I admit it's beyond my skills. If anyone has code (maybe from past examples or from some source on the net) that could be used, it would be an extra addition to the algorithm; I searched a lot both here and on the internet but found nothing.
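For what it's worth, the rectangular case seems to come down to the loop bounds: the kk and k loops have to run over the inner dimension colsA (which equals rowsB), not rowsA. A sketch of that bound structure in plain Python (just my reading of the fix, not the OpenMP code itself):

# Blocked multiply of a rowsA x colsA matrix by a colsA x colsB matrix.
def block_matmul(A, B, rowsA, colsA, colsB, bs):
    C = [[0] * colsB for _ in range(rowsA)]
    for ii in range(0, rowsA, bs):
        for jj in range(0, colsB, bs):
            for kk in range(0, colsA, bs):  # inner dimension: colsA, not rowsA
                for i in range(ii, min(ii + bs, rowsA)):
                    for j in range(jj, min(jj + bs, colsB)):
                        for k in range(kk, min(kk + bs, colsA)):  # colsA again
                            C[i][j] += A[i][k] * B[k][j]
    return C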
Here is the updated full code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>

/* run this program using the console pauser or add your own getch, system("pause") or input loop */

// function for block product calculation between matrices A and B
void matMultDyn(int rowsA, int colsA, int rowsB, int colsB, int blockSize, int **matA, int **matB, int **matC)
{
    double total_time_prod = omp_get_wtime();
    #pragma omp parallel
    {
        #pragma omp single
        {
            //int num_threads=omp_get_num_threads();
            //printf("%d ", num_threads);
            for(int ii=0; ii<rowsA; ii+=blockSize)
            {
                for(int jj=0; jj<colsB; jj+=blockSize)
                {
                    for(int kk=0; kk<rowsA; kk+=blockSize)
                    {
                        #pragma omp task depend(in: matA[ii:blockSize][kk:blockSize], matB[kk:blockSize][jj:blockSize]) depend(inout: matC[ii:blockSize][jj:blockSize])
                        {
                            for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
                            {
                                for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
                                {
                                    for(int k=kk; k<fmin(kk+blockSize, rowsA); ++k)
                                    {
                                        matC[i][j] += matA[i][k]*matB[k][j];
                                        //printf("Hello from iteration n: %d\n",k);
                                        //printf("Test valore matrice: %d\n",matC[i][j]);
                                        //printf("Thread Id: %d\n",omp_get_thread_num());
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    total_time_prod = omp_get_wtime() - total_time_prod;
    printf("Total product execution time by parallel threads (in seconds): %f\n", total_time_prod);
}

// Function for printing the Product Matrix
void printMatrix(int **product, int rows, int cols)
{
    printf("Resultant Product Matrix:\n");
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            printf("%d ", product[i][j]);
        }
        printf("\n");
    }
}

int main(int argc, char *argv[]) {
    //variable to calculate total program runtime
    double program_runtime = omp_get_wtime();
    //matrices and blocksize dimensions
    int rowsA = 256, colsA = 256;
    int rowsB = 256, colsB = 256;
    int blockSize = 24;
    if (colsA != rowsB)
    {
        printf("No. of columns of first matrix must match no. of rows of the second matrix, program terminated\n");
        exit(EXIT_SUCCESS);
    }
    else if(rowsA != rowsB || rowsB != colsB)
    {
        blockSize = 1;
        //printf("Blocksize value: %d\n", blockSize);
    }
    //variable to calculate total time for initialization procedures
    double init_runtime = omp_get_wtime();
    //Dynamic matrices pointers allocation
    int** matA = (int**)malloc(rowsA * sizeof(int*));
    int** matB = (int**)malloc(rowsB * sizeof(int*));
    int** matC = (int**)malloc(rowsA * sizeof(int*));
    //check for failed allocation
    if (matA == NULL || matB == NULL || matC == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(0);
    }
    //------------------------------------ Matrices initialization ------------------------------------------
    // MatA initialization
    //#pragma omp parallel for
    for (int i = 0; i < rowsA; i++)
    {
        matA[i] = (int*)malloc(colsA * sizeof(int));
    }
    for (int i = 0; i < rowsA; i++)
        for (int j = 0; j < colsA; j++)
            matA[i][j] = 3;
    // MatB initialization
    //#pragma omp parallel for
    for (int i = 0; i < rowsB; i++)
    {
        matB[i] = (int*)malloc(colsB * sizeof(int));
    }
    for (int i = 0; i < rowsB; i++)
        for (int j = 0; j < colsB; j++)
            matB[i][j] = 1;
    // matC initialization (Product Matrix)
    //#pragma omp parallel for
    for (int i = 0; i < rowsA; i++)
    {
        matC[i] = (int*)malloc(colsB * sizeof(int));
    }
    for (int i = 0; i < rowsA; i++)
        for (int j = 0; j < colsB; j++)
            matC[i][j] = 0;
    init_runtime = omp_get_wtime() - init_runtime;
    printf("Total time for matrix initialization (in seconds): %f\n", init_runtime);
    //omp_set_num_threads(8);
    // function call for block matrix product between A and B
    matMultDyn(rowsA, rowsA, rowsB, colsB, blockSize, matA, matB, matC);
    // function call to print the resultant Product matrix C
    printMatrix(matC, rowsA, colsB);
    // --------------------------------------- Dynamic matrices pointers' cleanup -------------------------------------------
    for (int i = 0; i < rowsA; i++) {
        free(matA[i]);
        free(matC[i]);
    }
    for (int i = 0; i < rowsB; i++) {  // rowsB rows were allocated for matB
        free(matB[i]);
    }
    free(matA);
    free(matB);
    free(matC);
    //Program total runtime calculation
    program_runtime = omp_get_wtime() - program_runtime;
    printf("Program total runtime (in seconds): %f\n", program_runtime);
    return 0;
}
To complete the testing and comparison of the code, I will create a machine on Google Cloud equipped with 32 cores, so I can see how the code runs on an actual 16-core machine and then with 32 cores.
For reference, I'm running this code on my MSI notebook, which is equipped with an Intel i7 11800, 8 cores at 3.2 GHz, and can manage up to 16 threads concurrently; the reason for testing on Google Cloud is that I want the software to run on a "real" 16-core machine, where one thread runs on one core, and then to scale further up to 32 cores.
With the collected data I will then draw some graphs for comparison.
In newer PhpStorm versions: File > Settings > PHP
I would split optimization into two parts: TTFB (time to first byte) optimization and the frontend optimization.
To optimize TTFB:
Connect your Magento store to a PHP profiler. There are several options; you can google for them.
Inspect the diagram and see if you can find a function call that takes too much time.
Optimize that function call. In 90% of the cases I dealt with, the slowness came from a 3rd-party extension.
To optimize the frontend:
Minify and compress JS and CSS. You can turn it on at Stores > Configuration > Advanced > Developer > CSS and JS settings
Serve images in WebP or AVIF formats to cut page weight
Use GZIP compression
Inline critical CSS and JS (critical CSS/JS is what is needed to render above-the-fold content) and lazy load all the rest
Use as few 3rd-party JS libraries/scripts as possible
Remove redundant CSS and JS
Good luck!
I found the issue. It wasn't with the dataset format; it was with the LLM I used, which wasn't returning the correct output (a value of 0 or 1). That's why it was giving me RagasOutputParserException. To fix it I tried different models and decreased the number of returned documents from 10 to 5.
This is what ultimately got me going:
<div style="position: relative; width: 560px; height: 315px;">
<div id="cover" style="position:absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); opacity:1; cursor:pointer; font-size:100px; color:white; text-shadow: 2px 2px 4px #000000;">
<i class="fas fa-play"></i>
</div>
<iframe id="player" width="560" height="315" src="https://www.youtube.com/embed/2qhCjgMKoN4?enablejsapi=1&controls=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen style="position: absolute; top:0; left:0; opacity:0;"></iframe>
</div>
<script src="https://www.youtube.com/iframe_api"></script>
<script>
var player;
var playButton = document.getElementById('cover');
var icon = playButton.querySelector('i');

function onYouTubeIframeAPIReady() {
  player = new YT.Player('player', {
    events: {
      'onReady': onPlayerReady,
      'onStateChange': onPlayerStateChange
    }
  });
}

function onPlayerReady(event) {
  playButton.addEventListener('click', function() {
    if (player.getPlayerState() == YT.PlayerState.PLAYING) {
      player.pauseVideo();
    } else {
      player.playVideo();
    }
  });
}

function onPlayerStateChange(event) {
  if (event.data == YT.PlayerState.PLAYING) {
    icon.classList.remove('fa-play');
    icon.classList.add('fa-pause');
  } else {
    icon.classList.remove('fa-pause');
    icon.classList.add('fa-play');
  }
}
</script>
Thanks,
Josh
Enabling "Beta: Use Unicode UTF-8 for worldwide language support" as suggested here solved the issue for me.
You did not format your setting value properly.
See this answer for full explanation.
The problem is the URL string — you used a Cyrillic р instead of a normal ASCII p in http.
Change this:
fetch('httр://localhost:3000/api/test')
to this:
fetch('http://localhost:3000/api/test')
(or just fetch('/api/test') inside Next.js).
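If you want to double-check a string for lookalike characters, here is a quick sketch in Python (the URL below intentionally contains the Cyrillic letter):

# Flag any non-ASCII characters hiding in a URL string.
url = "htt\u0440://localhost:3000/api/test"  # contains a Cyrillic "р" (U+0440)
for i, ch in enumerate(url):
    if ord(ch) > 127:
        print(f"position {i}: {ch!r} (U+{ord(ch):04X})")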
OK, so this was the answer:
In TOML, the root table ends as soon as the first header (e.g. [params]) appears. Any bare keys that come after [params] are part of that table, not the root.
In my file, a [params] section started before the theme config, so in short I just had a bug in hugo.toml.
I overlooked it at first because the tabs before the keys under [params] made it look like the indentation "scoped" the values, but I forgot that whitespace has no scoping semantics in TOML.
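A quick way to see this behaviour is Python's tomllib (3.11+) on a made-up file:

import tomllib

doc = tomllib.loads("""
title = "site"        # root table

[params]
color = "blue"
    theme = "my-theme"  # indented, but still inside [params]
""")
print(doc)  # {'title': 'site', 'params': {'color': 'blue', 'theme': 'my-theme'}}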
In my case the Jupyter server was running outside of the env I created with conda, so it was always running from the base environment. This worked:
conda activate dlcourse
pip install jupyterlab ipykernel
If it's just the URL, then add "?wsdl" at the end and browse.
If you need to download it as a file, right-click on the webpage which shows all the services, save as XML, then rename it to filename.wsdl.
In some cases you can just turn off TLS verification with --disable-tls:
php ./composer-setup.php --install-dir=/usr/bin --filename=composer --disable-tls
#ifndef __clang_analyzer__
base->temp2 = (tempStruct2*)(ptr2 + 1);
#endif
Seems to work for me, basically making the code dead to the analyzer.
Thanks.
I managed to do this by putting export KEY=VALUE in ~/.zshenv
If I understand you correctly, you are asking about the "has-pending-model-changes" command, which "checks if any changes have been made to the model since the last migration." The complete command looks like: "dotnet ef migrations has-pending-model-changes"
Author of the library here. In your examples you appear to be using the v4 API; v5 has a completely new API where config is passed in via the options prop. I recommend reading the docs: https://react-chessboard.vercel.app/?path=/docs/how-to-use-options-api--docs#optionsonpiececlick
// handle piece click
const onPieceClick = ({
  square,
  piece,
  isSparePiece
}: PieceHandlerArgs) => {
  console.log(piece.pieceType);
};

// chessboard options
const chessboardOptions = {
  allowDragging: false,
  onPieceClick,
  id: 'on-piece-click'
};

// render
return <Chessboard options={chessboardOptions} />;
The issue was some kind of hardware error with Firefox. After restarting Firefox (close the app and open again) it works. See also the bug report https://github.com/fabricjs/fabric.js/issues/10710
I have exactly the same problem. Have you found any answer?
Thank you very much!
Yes the postPersistAnimal method will be invoked. All the callbacks defined by the superclass entities or mapped superclasses will be executed when updating the subclass entity. This behaviour is specified in the JPA documentation.
If a lifecycle callback method for the same lifecycle event is also specified on the entity class and/or one or more of its entity or mapped superclasses, the callback methods on the entity class and/or superclasses are invoked after the other lifecycle callback methods, most general superclass first. A class is permitted to override an inherited callback method of the same callback type, and in this case, the overridden method is not invoked.
You can find more info regarding the execution order and other details here.
I now have a comprehensive example of the combination of gridstack.js and Angular.
https://gitlab.com/FabianSturm/gridstack-dashboard
Feel free to comment on possible improvements!
Maybe you have "AltGR"?
// lib/main.dart
import 'package:flame/flame.dart';
import 'package:flame/game.dart';
import 'package:flame/components.dart';
import 'package:flame/effects.dart'; // needed for MoveToEffect/EffectController
import 'package:flame/input.dart'; // needed for TapDetector/TapDownInfo
import 'package:flutter/widgets.dart';

class RunnerGame extends FlameGame with TapDetector {
  late SpriteAnimationComponent hero;

  @override
  Future<void> onLoad() async {
    final image = await images.load('hero_run.png'); // spritesheet
    final animation = SpriteAnimation.fromFrameData(
      image,
      SpriteAnimationData.sequenced(
        amount: 8, stepTime: 0.08, textureSize: Vector2(64, 64),
      ),
    );
    hero = SpriteAnimationComponent(animation: animation, size: Vector2(128, 128))
      ..position = size / 2;
    add(hero);
  }

  @override
  void onTapDown(TapDownInfo info) {
    hero.add(MoveToEffect(info.eventPosition.game, EffectController(duration: 0.3)));
  }
}

void main() {
  final game = RunnerGame();
  runApp(GameWidget(game: game));
}
Here's a batch script that captures RTSP stream screenshots every hour while skipping the period from 11 AM to midnight (12 AM):
@echo off
setlocal enabledelayedexpansion
:: Configuration
set RTSP_URL=rtsp://your_camera_rtsp_stream
set OUTPUT_FOLDER=C:\CCTV_Screenshots
set FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
:: Create output folder if it doesn't exist
if not exist "%OUTPUT_FOLDER%" mkdir "%OUTPUT_FOLDER%"
:: Get current time components (plain string assignment avoids the octal
:: errors that set /a throws on values like "08" or "09")
for /f "tokens=1-3 delims=:. " %%a in ('echo %time%') do (
    set "hour=%%a"
    set "minute=%%b"
    set "second=%%c"
)
:: Skip if between 11 AM (11) and Midnight (0)
if %hour% geq 11 if %hour% leq 23 (
echo Skipping capture between 11 AM and Midnight
exit /b
)
if %hour% equ 0 (
echo Skipping Midnight capture
exit /b
)
:: Generate timestamp for filename
for /f "tokens=1-3 delims=/ " %%d in ('echo %date%') do (
set year=%%d
set month=%%e
set day=%%f
)
set timestamp=%year%%month%%day%_%hour%%minute%%second%
:: Capture frame with ffmpeg
"%FFMPEG_PATH%" -y -i "%RTSP_URL%" -frames:v 1 -q:v 2 "%OUTPUT_FOLDER%\%timestamp%.jpg" 2>nul
if errorlevel 1 (
echo Failed to capture frame at %time%
) else (
echo Captured frame: %OUTPUT_FOLDER%\%timestamp%.jpg
)
Important Notes:
Replace RTSP_URL with your camera's actual RTSP stream URL
Adjust FFMPEG_PATH to match your ffmpeg installation location
Modify OUTPUT_FOLDER to your desired save location
Test the time format on your system by running echo %time% and echo %date% in cmd
The script uses 24-hour format (0-23 where 0=Midnight)
The script will skip captures between 11:00:00 and 23:59:59, plus Midnight (00:00:00)
To Schedule:
Save as cctv_capture.bat
Open Task Scheduler (taskschd.msc)
Create a new task:
Trigger: Hourly (repeat every hour)
Action: Start a program → select your batch file
Run whether user is logged in or not
Troubleshooting Tips:
Test the RTSP URL directly with ffmpeg first
Verify your time format matches the script's parsing
Check folder permissions for the output location
Consider adding error logging if needed
Test during active hours (1-10 AM) to verify captures work
The script will now capture images every hour except between 11 AM and Midnight (12 AM), which matches your requirement for the timelapse project.
Payload splitBy "\n" loads all the content into memory and throws a heap memory issue.
It's solved by passing the stream to a Java class which processes the stream and writes it to the /tmp dir without blowing up the heap.
Inspiration taken from Mule's file repeatable streaming strategy.
Adobe Creative Cloud lets you "install" fonts to use in non-Adobe applications, and when you do (on Windows) they show up in C:\Users\<USER>\AppData\Roaming\Adobe\User Owned Fonts\. Note that User Owned Fonts is a hidden folder, but the files inside it are all unhidden and have meaningful filenames.
Really insightful post. I ran into a similar issue recently and was also surprised that adding a new enum value triggered a compatibility error. Totally agree that this makes managing evolving schemas in Pub/Sub pretty tricky. Curious to hear how others are handling this; switching to strings might be the safer route, but it feels like a compromise.
The problem is actually not in the filter, but in the size of the propagation step. In this case, it is too small, which means that the fft is computed too many times and thus accumulates error. By increasing the step size to 0.001, you get way better results:
You can prove that these results are better by introducing a function that measures pixel distance between arrays:
import numpy as np

def distance(a: np.ndarray, b: np.ndarray):
    return np.dot((a - b).flatten(), (a - b).flatten()) / len(a.flatten())
Using this function to compare the propagated profile to the analytical one shows a distance of 1.40-0.33j when dz=0.001, whereas the distance is -2.53+22.25j when dz=0.00005. You can play around with dz to see if you can get better results.
Try normalizing your data (I mean your X) before running the linear regression, for example with MinMaxScaler (sklearn.preprocessing.MinMaxScaler); that may have an impact on the coefficients.
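For example, a minimal sketch with scikit-learn's pipeline (made-up data, just to show where the scaler goes):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # made-up features
y = np.array([1.0, 2.0, 3.0])

model = make_pipeline(MinMaxScaler(), LinearRegression())
model.fit(X, y)
print(model.named_steps["linearregression"].coef_)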
If you only want to subscribe to pull requests and commits on main, you can do:
/github subscribe owner/repo pulls commits:main
I think the reason for this error in my case is not the Python version, but rather that the Mac architecture is different from those available as built distributions. I have an M1, which is ARM64, and for macOS only x86-64 is available, so I cannot install ruptures this way.
Solved by removing "expose source roots to PYTHONPATH". But the reason is
I built this software with Python to convert images to videos.
Image2Video - Turn Images into Videos Effortlessly
A practical tool to convert image collections into high-quality videos with customizable settings. Powered by FFmpeg, perfect for creating timelapses, creative slideshows, or processing CCTV footage.
📸 Supports multiple image formats (JPG, PNG, GIF, etc.)
⏱️ Adjustable frame duration
🎵 Add default audio with customizable bitrate
📂 Automatic folder/subfolder scanning
🖥️ Simple and intuitive GUI
⏳ Real-time progress tracking
MANIFEST.MF
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-name: Hello!
MIDlet-version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
It should be like this:
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-Vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-Name: Hello!
MIDlet-Version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
I left this problem alone and was working on the other parts of my project.
Last night I ran into a problem, googled the error message, and came up with this Stack Overflow thread.
I looked at the accepted answer (the first answer); he wrote:
The problem was: I was using the slim build of jQuery.
Even though I was trying to figure out the solution for another problem, I decided to give it a shot. I replaced the jQuery CDN link with the non-slim version, and bam, it worked!
To fix this issue, increase the heap memory by updating the following line in android/gradle.properties:
org.gradle.jvmargs=-Xmx512M
to:
org.gradle.jvmargs=-Xmx8G
Then run:
flutter clean
flutter run
If 8 GB isn’t enough, you can increase it further (e.g., -Xmx16G).
For me, the answer was I don't want the button to send the form at all, and this helped me:
https://stackoverflow.com/a/3315016/5057078
Text of the answer:
The default value for the type attribute of button elements is "submit". Set it to type="button" to produce a button that doesn't submit the form.
<button type="button">Submit</button>
In the words of the HTML Standard: "Does nothing."
To be honest, I don't know either!
I don't know if it will solve your problem, but you are creating a ChatOpenAI model, which may not be optimized for Mistral responses.
There is a class for Mistral models that looks like this:
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-nemo", mistral_api_key=_api_key)
Regards
Firstly, you can define a ghost sequence which clones the array. You can then write two-state lemmas about the array. I happened to write a blog post about a very similar situation here.
I faced this issue in my Flutter app and resolved it by increasing the Gradle JVM memory.
In android/gradle.properties, update the line:
org.gradle.jvmargs=-Xmx512M
to:
org.gradle.jvmargs=-Xmx8G
Then run:
flutter clean
flutter run
If 8 GB isn't enough, you can increase it further (e.g., -Xmx16G).
I don't know if someone still needs this, but the alternative I found is using a Panel, setting its BorderStyle to FixedSingle and its height to 2px. You can do the same for vertical separator lines, except you set the width to 2px instead of the height. I was used to designing layouts in NetBeans with Java using javax.swing.JSeparator, and I was looking for it when I switched languages to C# with Visual Studio 2022.
You mustn't close the turtle window until it's done. You can add this line so the window stays open until you close it yourself:
turtle.done()  # put at the end
Sir!
Thank you so much! I had almost lost hope :)
Fixed it. The problem was that the XSRF token header was NOT set; I had to do it manually.
Use this tutorial, which uses TouchableOpacity from react-native: https://blog.logrocket.com/create-style-custom-buttons-react-native/
Using <TouchableOpacity /> to create custom button components
Now that you’ve set up the main screen, it’s time to turn your attention to the custom button component.
const AppButton = props => (
// ...
)
Name the custom button component AppButton.
Import the <TouchableOpacity /> and <Text /> components from react-native.
import { View, Button, StyleSheet, TouchableOpacity, Text } from "react-native";
To create custom buttons, you need to customize the <TouchableOpacity /> component and include the <Text /> component inside of it to display the button text.
const AppButton = ({ onPress, title }) => (
<TouchableOpacity onPress={onPress} style={styles.appButtonContainer}>
<Text style={styles.appButtonText}>{title}</Text>
</TouchableOpacity>
);
Two months later, it still needs the user to add the prompt manually.
Looking at your sample, \S+?\.c should work.
\S matches the first non-whitespace character
+? quantifies this match for as few characters as possible
\.c matches the dot and the c
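A quick check in Python with a made-up sample string:

import re

sample = "gcc main.c utils.c -o app"
print(re.findall(r"\S+?\.c", sample))  # ['main.c', 'utils.c']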
The fuse operation and the script itself will work correctly if you change the negative dx to a positive one (as well as the x origin, accordingly) in addRectangle().
Here are the key Google documents that explain the correct process:
Gmail IMAP Extensions - Access to Gmail labels (X-GM-LABELS)
The X-GM-LABELS IMAP attribute: you can use the STORE command with this attribute to modify the labels on a message. The documentation explicitly lists \Trash and \Spam as valid labels you can add. This is the correct, Google-supported IMAP command for applying the "Trash" label to a message, which is the necessary first step for deletion.

How Gmail works with IMAP clients - "How messages are organized": adding the \Trash or \Spam label is the specific action that removes a message from the general "All Mail" archive, putting it into a state where it can be permanently deleted.

Based on the documentation, the reliable way to move a message to the Trash and permanently delete it is: first, use the UID STORE ... +X-GM-LABELS (\Trash) command on the message in its original folder (e.g., INBOX); this effectively moves it to the [Gmail]/Trash folder. Then SELECT the "[Gmail]/Trash" folder, mark that same message with the \Deleted flag, and run the EXPUNGE command.

In Gmail, the traditional concept of folders is replaced by a more flexible system of labels. The only true "folder" that holds all of your email is the "All Mail" archive. Everything else that appears to be a folder, including your Inbox, is simply a label applied to a message.
How Gmail Works with IMAP Clients ("How messages are organized")
Creating Labels to Organize Gmail (User Guide)
Access to Gmail Labels via IMAP (X-GM-LABELS) (Developer Guide)
The X-GM-LABELS attribute treats even system-critical locations like the Inbox, Spam, and Trash as special labels (\Inbox, \Spam, \Trash). This confirms that, from a technical standpoint, there is no "move" operation between folders, only the adding and removing of labels on the single message copy that always resides in "All Mail" until it is moved to Trash or Spam.

Thank you, everyone. I had the same issue, and it was resolved after using the proper winutils version for Spark 4.0.0/Hadoop 3.4.x. You can download it from https://github.com/kontext-tech/winutils/tree/master
Copy the entire bin from hadoop-3.4.0-win10-x64
Paste in C:\Hadoop (it will look like C:\Hadoop\bin)
Add variable HADOOP_HOME = C:\Hadoop
Add Path C:\Hadoop\bin
Optional (if the above doesn't work): add hadoop.dll to C:\Windows\System32 (suggested by one of the commenters on this post)
I have got this working using the ShadowDOM example.
The critical piece was to set the options.shadowRoot item (as @ghiscoding indicates).
I believe errors like TypeError: Cannot read properties of null (reading '0') were from extra controls I had taken from a different example; I'm commenting these out for now.
Just enable the 32-bit application option in IIS and your issue will be resolved.
For the setting, follow this path:
Application Pools -> right-click on the Application Pool -> Advanced Settings -> Enable 32-Bit Applications -> True
I have found that the (new?) plugin is now called NppTextFx2 in the plugins admin.
I recently came across this package that applies a shimmer effect to any widget, but it has some limitations.
You cannot use it on an image: even if you apply Colors.transparent as the baseColor, the image still won't appear.
The point is, no matter which color or child your widget has, after applying Shimmer.fromColors only the baseColor acts as the background color and the highlightColor as the effect.
Oh wow, I remember struggling with unpacking an XAPK too 😅 ended up finding tools on sites like https://apkjaka.com/ that made it way easier. Have you tried that route before?
The answer that I gave myself is the following:
a has a length of 32, so b can be used as an index where, for the first half, I index from 0 to 15, and for the second half from 16 to 31, just by using the low hexadecimal digit (mask of 0x0F).
Then, I can use the extra space to carry out an extra decision: whether I need to perform the operation at all. In this case, 0 is treated as the no-value, just by using the high bit of the remaining part of the hexadecimal digit, 0x80.
I also agree with @Homer512's comment, which stated that this makes it useful for OR operations as well.
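A toy sketch of my reading of this packing in Python (the 0x0F and 0x80 masks are from the description above; the half-select bit is my assumption):

def lookup(a, b):
    if b & 0x80:                   # high bit set: no operation
        return None
    half = 16 if b & 0x10 else 0   # assumed: bit 4 selects the half of the table
    return a[half + (b & 0x0F)]    # low nibble indexes within that half

a = list(range(32))
print(lookup(a, 0x1F))  # 31
print(lookup(a, 0x80))  # None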
It looks like it's not possible by default, but there is a workaround. I haven't tried it myself, so this relies solely on the accepted answer from an AWS support engineer.
I also have a problem with cordova-plugin-admobpro after updating to API 35: rewarded video is not shown. It seems that the plugin relies on an outdated SDK version (20.4.0):
https://developers.google.com/admob/android/deprecation
I tried specifying more recent versions, such as 23.2.0, with the following command:
cordova plugin add cordova-plugin-admobpro --save --variable PLAY_SERVICES_VERSION=23.2.0 --variable ADMOB_ANDROID_APP_ID="ca-app-pub-***~***"
Unfortunately, it doesn’t appear to work with these newer versions, if I understand correctly.
I sent a letter to the author, Raymond Xie (floatinghotpot), and am waiting for feedback. If he doesn't answer, I'll try other projects, but cordova-plugin-admobpro was the most convenient IMHO.
Use the new version of quill:
npm i react-quill
I am also facing this issue even after updating dependencies. Any solution?
I am trying to compile 3.6.9 since it is needed for dependencies (Pulsar), and it gets stuck there as well on an RPi4 with gcc 12.
make altinstall looks OK, but I need to install it system-wide; any tips?
Best regards
With a bit of testing and the comments of people helping, I have come to a conclusion.
#define T1ms 16000 // assumes using 16 MHz PIOSC (default setting for clock source)
That preprocessor directive is wrong because I am not using an L293D; I am using another type of motor driver specific to stepper motors, a DM332T driver from stepperonline-omc.
Now, if I define T1ms to 400, unlike in the example previously listed for the internal clock frequency, I can move my stepper in one direction at a faster RPM. So something like this:
#define T1ms 400
Or, if I were feeling risky, I could test with:
#define T1ms 1 // this is if I would like 400 RPM with the current driver config
See, the driver has a couple of internal dipswitch settings that can be altered from the outside of the driver. These dipswitch settings allow for faster RPM or more steps per RPM.
I have been reading theory recently and learning about how to control the STEP and direction of stepper motors. I have been reading from here: https://www.orientalmotor.com/stepper-motors/technology/stepper-motor-basics.html
With the preprocessor directive T1ms set to 1, I would need to fasten my motor to something heavy so as not to throw safety to the wind. This way, the motor will not become disconnected from its source or location. I think, with my questioning, this is the answer I was looking to attain.
Q: How can I make the motor move faster than what the internal clock allows?
and...
A: Use a driver with dipswitches and let the driver account for the driving.
I created a new simple Flutter project and I'm trying to load a Tiled map on Firebase.
My folder structure:
assets/
├── image.png
└── map.tmx
In my pubspec.yaml I declared:
flutter:
  assets:
    - assets/
This is my map.tmx
<?xml version="1.0" encoding="UTF-8"?>
<map version="1.10" tiledversion="1.11.2" orientation="orthogonal" renderorder="right-down" width="30" height="20" tilewidth="32" tileheight="32" infinite="0" nextlayerid="2" nextobjectid="1">
<tileset firstgid="1" name="image" tilewidth="32" tileheight="32" tilecount="60" columns="6">
<image source="image.png" width="192" height="336"/>
</tileset>
<layer id="1" name="Tile Layer 1" width="30" height="20">
<data encoding="csv">
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,
1,0,0,0,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,0,1,
1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,1,1,1,1,1,1,1,1,
1,0,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,0,0,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,0,
1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,
1,0,1,1,1,1,1,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,
0,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
</data>
</layer>
</map>
But I still get this error:
Unable to load asset: "assets/tiles/map.tmx".
The asset does not exist or has empty data.
See also: https://docs.flutter.dev/testing/errors
Where does the assets/tiles/ path come from?
Thanks, everyone.
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
| SL.NO | CROP NAME | RAINFALL | WEATHER CONDITIONS | NATURE OF CROP | SOIL TYPE | GLOBAL RANKING IN EXPORT* | PLACE OF AVAILABILITY (India) |
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
| 1 | Rice | 100–200 cm | Warm & humid; 22–32°C | Kharif | Clayey/alluvial loam | India ≈ #1 exporter | WB, UP, Punjab, Bihar, Odisha, TN, Assam etc. |
| 2 | Wheat | 50–75 cm | Cool, dry; 10–15°C grow, | Rabi | Well-drained loam | Mainly domestic use | UP, Punjab, Haryana, MP, Rajasthan, Bihar |
| | | | 21–26°C ripening | | | | |
| 3 | Jowar (Sorghum) | 45–75 cm | Warm, drought tolerant | Kharif (some Rabi) | Sandy loam/black soils | Small exporter | Maharashtra, Karnataka, Telangana, AP, MP |
| 4 | Bajra (Pearl millet) | 25–50 cm | Hot & arid; 25–35°C | Kharif | Sandy/loamy, light | Small exporter | Rajasthan, Gujarat, Haryana, UP, Maharashtra |
| 5 | Ragi (Finger millet) | 70–100 cm | Cool–warm; 18–28°C | Kharif (some Rabi) | Red loam/lateritic | Small exporter | Karnataka, TN, Uttarakhand, Sikkim, Himachal |
| 6 | Maize | 50–100 cm | Warm; 21–27°C | Kharif (also Rabi) | Fertile loam/alluvial | Minor exporter | Karnataka, MP, Bihar, UP, Telangana, AP, MH |
| 7 | Pulses (Chana, Arhar etc.) | 25–50 cm | Warm; dry at ripening | Rabi & some Kharif | Loam/black soils | Net importer | MP, Maharashtra, Rajasthan, UP, Karnataka |
| 8 | Sugarcane | 75–150+ cm | Warm; 21–27°C; frost-free | Plantation/Annual | Deep loam/alluvial | Brazil #1, India also exp. | UP, Maharashtra, Karnataka, TN, AP, Punjab |
| 9 | Oilseeds (Groundnut etc.) | 25–75 cm | Warm; 20–30°C | Mostly Kharif | Loam/black cotton | Limited exports | Gujarat, Rajasthan, MP, Maharashtra, AP, KA |
| 10 | Tea | 150–300 cm | Cool, humid; 15–25°C | Plantation | Acidic lateritic | Top 4–5 exporter | Assam, WB (Darjeeling), Kerala, TN, Karnataka |
| 11 | Coffee | 150–250 cm | Cool, shaded; 15–28°C | Plantation | Loam/laterite | Top 8–10 exporter | Karnataka (Kodagu), Kerala (Wayanad), TN |
| 12 | Horticulture (F&V) | Crop-spec. | Crop-specific | Varies | Fertile, well-drained | India #2 producer | Maharashtra (grapes), AP (mango), UP (potato) |
| 13 | Rubber | 200+ cm | Hot, humid; >25°C | Plantation | Lateritic/red loam | Not major exporter | Kerala, Karnataka, TN, NE states |
| 14 | Cotton | 50–100 cm | Warm; 21–30°C; frost-free | Kharif | Black cotton (regur) | Top 2–3 exporter | Maharashtra, Gujarat, Telangana, AP, MP etc. |
| 15 | Jute | 150–200 cm | Hot, humid; 24–35°C | Kharif | Alluvial delta soils | Top 2 (with Bangladesh) | WB, Bihar, Assam, Odisha, Meghalaya |
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
According to the feedback from the GCC team, the issue that causes an Internal Compiler Error is that GCC also does not reject the structured binding as an invalid template argument.
Here, I want to know where the OHLC data comes from in TradingView, how the candlesticks get rearranged according to the timeframe (like a 1-year chart with 1-day candles), and where the OHLC data is passed to display the candlesticks.
Thanks, I am able to access the REST API with a JWT token.
Putting my steps here for reference.
Get access token sample:
curl -X POST 'http://localhost:8080/auth/token' \
-H "Content-Type: application/json" \
-d '{
"username": "user",
"password": "pass"
}'
Response JWT token:
{"access_token":"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxIiwiaXNzIjpbXSwiYXVkIjoiYXBhY2hlLWFpcmZsb3ciLCJuYmYiOjE3NTYxMTM3MDMsImV4cCI6MTc1NjIwMDEwMywiaWF0IjoxNzU2MTEzNzAzfQ.SBi_s0yYrHFiEyiwzU6a78nmwYTe91FDnU1mC5aoLqnHQ2JGMBqv0njOrxXDTi9YpSQ_iesvTfbjsmqqYSC54w"}
Request the API using the token generated above:
curl -X GET 'http://localhost:8080/api/v2/dags' \
-H "Authorization: Bearer eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxIiwiaXNzIjpbXSwiYXVkIjoiYXBhY2hlLWFpcmZsb3ciLCJuYmYiOjE3NTYxMTM3MDMsImV4cCI6MTc1NjIwMDEwMywiaWF0IjoxNzU2MTEzNzAzfQ.SBi_s0yYrHFiEyiwzU6a78nmwYTe91FDnU1mC5aoLqnHQ2JGMBqv0njOrxXDTi9YpSQ_iesvTfbjsmqqYSC54w"
API response: {"dags":[],"total_entries":0}
Thanks.
I've encountered the same problem.
I found my problem related to the build variant: when I switch the build variant to debug, Compose preview works fine, but when I switch it to the custom build type I defined in build.gradle, Compose preview stops working.
Is there any way to let Compose preview work in custom build types?
Got the same error, even using your last example.
My command is:
New-Object System.DirectoryServices.ActiveDirectoryAccessRule $AdminSID, "GenericWrite", "Allow"
New-Object : Cannot find an overload for "ActiveDirectoryAccessRule" and the argument count: "3".
Or using the method syntax:
[System.DirectoryServices.ActiveDirectoryAccessRule]::new(
$AdminSID,
[System.DirectoryServices.ActiveDirectoryRights]::GenericWrite,
[System.Security.AccessControl.AccessControlType]::Allow)
Cannot find an overload for "new" and the argument count: "3".
Any suggestions?
Thanks :-)
Run npm i dotenv.
import { env } from 'process'
import 'dotenv/config'
const { NODE_ENV } = env
console.log(NODE_ENV)
This one will catch from and including START until and including END:
(?m)START[^$]{0,80}END
The option (?m) makes ^ and $ match at the start and end of lines instead of the start and end of the whole string.
This one will catch from (but not including) START until (but not including) END:
(?m)(?<=START)[^$]{0,80}(?=END)
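A quick demonstration in Python with a made-up sample:

import re

sample = "log: START first line\nsecond line END trailer"
print(re.findall(r"(?m)START[^$]{0,80}END", sample))
# ['START first line\nsecond line END']
print(re.findall(r"(?m)(?<=START)[^$]{0,80}(?=END)", sample))
# [' first line\nsecond line ']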
A little update: since 1.7.1, upload_records is deprecated, and upload_points should be used instead.
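For example, a minimal sketch with qdrant-client (in-memory instance and a made-up collection, assuming a client version >= 1.7.1):

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # local in-memory instance, for illustration
client.create_collection(
    collection_name="demo",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
# upload_points replaces the deprecated upload_records
client.upload_points(
    collection_name="demo",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"tag": "a"})],
)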
The MS documentation doesn't easily show the collation of local character-type variables, and only poorly explains that changing the collation has limited support on some platforms. This extensive writeup may be of use to those who find this question.
// app/Exceptions/Handler.php
use Illuminate\Auth\AuthenticationException;

protected function unauthenticated($request, AuthenticationException $exception)
{
    return response()->json(['message' => 'Unauthenticated.'], 401);
}
from PIL import Image

# Open the PNG image
png_path = "/mnt/data/مصانع_العامرية_وبرج_العرب.png"
jpg_path = "/mnt/data/مصانع_العامرية_وبرج_العرب.jpg"

# Convert PNG to JPG
img = Image.open(png_path).convert("RGB")
img.save(jpg_path, "JPEG")
jpg_path
Try not to import it from src. Instead, use something like:
../core/transformers/Decimal.transformer
The conversion from double to a list is caused by duplicates/multiple items for the same id-pair.
I had exactly the same problem, and it was solved as follows:
//presignedRequest.Parameters.Add("uploadId", uploadId);
//presignedRequest.Parameters.Add("partNumber", partNumber.ToString());
presignedRequest.UploadId = uploadId;
presignedRequest.PartNumber = partNumber;
I found out the way to do it: create a separate action which registers itself in the IDE menu with the associated accelerator.
@ActionID(
        category = "Window",
        id = "ste.netbeans.nblogmanager.logviewer.LogViewerShortcutAction"
)
@ActionRegistration(
        displayName = "#CTL_LogViewerShortcutAction",
        key = "DS-L" // Ctrl+Shift+L
)
@ActionReference(path = "Menu/Window", position = 333)
@Messages({
    "CTL_LogViewerShortcutAction=Show Log Viewer"
})
public final class LogViewerShortcutAction extends AbstractAction {

    public LogViewerShortcutAction() {
        putValue(NAME, Bundle.CTL_LogViewerShortcutAction());
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        TopComponent tc = WindowManager.getDefault().findTopComponent("LogViewerTopComponent");
        if (tc == null) {
            tc = new LogViewerTopComponent();
        }
        tc.open();
        tc.requestActive();
    }
}
Apparently, according to AWS support it was the AWS engineers updating the Bedrock models that caused the issue.
Right now it is possible to deploy via the descriptive bot builder in us-east-1 only. Then it is possible to export and import in another region.
To read elements on a new window/tab using selenium you need to get the webdriver to switch to that new window/tab
To simply switch to the most recently opened tab you can do:
driver.switch_to.window(driver.window_handles[-1])
However you may want to do something more robust; The docs have a good section on Working with windows and tabs
Working version: I am using Groq API keys and mock API specs.
import asyncio
import json
import logging
import os
from dotenv import load_dotenv
from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools import FunctionTool
from llama_index.llms.groq import Groq
load_dotenv()
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('multi-tool-reproduction')
# Mock data to simulate the OpenAPI + Requests workflow
MOCK_API_SPECS = {
    "companies": {
        "endpoint": "/v1/companies/list",
        "method": "POST",
        "base_url": "https://api.my-company.com",
        "description": "List all companies for authenticated user"
    },
    "users": {
        "endpoint": "/v1/users/profile",
        "method": "GET",
        "base_url": "https://api.my-company.com",
        "description": "Get user profile information"
    }
}

MOCK_API_RESPONSES = {
    "https://api.my-company.com/v1/companies/list": {
        "success": True,
        "companies": [
            {"id": 1, "name": "Acme Corp", "status": "active"},
            {"id": 2, "name": "Tech Solutions Inc", "status": "active"},
            {"id": 3, "name": "Global Enterprises", "status": "inactive"}
        ]
    },
    "https://api.example.com/companies": {
        "success": False,
        "error": "Invalid domain - this is the wrong endpoint!"
    }
}

def mock_load_openapi_spec(query: str) -> str:
    """
    Mock version of OpenAPIToolSpec functionality
    This simulates finding API endpoints based on user queries
    """
    logger.info(f"🔍 OPENAPI TOOL CALLED with query: '{query}'")
    query_lower = query.lower()
    # Simple matching logic
    if "companies" in query_lower or "list" in query_lower:
        spec = MOCK_API_SPECS["companies"]
        result = {
            "found": True,
            "endpoint": spec["endpoint"],
            "method": spec["method"],
            "full_url": f"{spec['base_url']}{spec['endpoint']}",
            "description": spec["description"],
            "base_url": spec["base_url"]
        }
        logger.info(f"📋 OPENAPI FOUND: {spec['base_url']}{spec['endpoint']}")
    elif "users" in query_lower or "profile" in query_lower:
        spec = MOCK_API_SPECS["users"]
        result = {
            "found": True,
            "endpoint": spec["endpoint"],
            "method": spec["method"],
            "full_url": f"{spec['base_url']}{spec['endpoint']}",
            "description": spec["description"],
            "base_url": spec["base_url"]
        }
        logger.info(f"📋 OPENAPI FOUND: {spec['base_url']}{spec['endpoint']}")
    else:
        result = {
            "found": False,
            "error": f"No API endpoint found for query: {query}",
            "suggestion": "Try queries like 'list companies' or 'get user profile'"
        }
        logger.info("📋 OPENAPI: No matching endpoint found")
    return json.dumps(result, indent=2)

def mock_post_request(url: str, body: str = "{}", headers: str = "{}") -> str:
    """
    Mock version of RequestsToolSpec post_request functionality
    This simulates making HTTP POST requests
    """
    logger.info(f"🌐 HTTP POST TOOL CALLED with URL: '{url}'")
    try:
        request_body = json.loads(body) if body else {}
        request_headers = json.loads(headers) if headers else {}
        # Mock response based on URL
        if url in MOCK_API_RESPONSES:
            response_data = MOCK_API_RESPONSES[url]
            logger.info(f"📡 HTTP SUCCESS: Found mock response for {url}")
        else:
            # This simulates the BUG - when wrong URL is used
            response_data = {
                "success": False,
                "error": f"No mock response for URL: {url}",
                "message": "This represents the bug - agent used wrong URL!",
                "expected_urls": list(MOCK_API_RESPONSES.keys())
            }
            logger.warning(f"📡 HTTP FAILURE: No mock response for {url}")
        result = {
            "status_code": 200 if response_data.get("success") else 400,
            "url": url,
            "request_body": request_body,
            "request_headers": request_headers,
            "response": response_data
        }
        return json.dumps(result, indent=2)
    except Exception as e:
        logger.error(f"❌ HTTP tool error: {e}")
        return json.dumps({
            "success": False,
            "error": str(e),
            "url": url
        })

def mock_get_request(url: str, headers: str = "{}") -> str:
    """
    Mock version of RequestsToolSpec get_request functionality
    """
    logger.info(f"🌐 HTTP GET TOOL CALLED with URL: '{url}'")
    try:
        request_headers = json.loads(headers) if headers else {}
        if url in MOCK_API_RESPONSES:
            response_data = MOCK_API_RESPONSES[url]
            logger.info(f"📡 HTTP SUCCESS: Found mock response for {url}")
        else:
            response_data = {
                "success": False,
                "error": f"No mock response for URL: {url}",
                "message": "This represents the bug - agent used wrong URL!"
            }
            logger.warning(f"📡 HTTP FAILURE: No mock response for {url}")
        result = {
            "status_code": 200 if response_data.get("success") else 400,
            "url": url,
            "request_headers": request_headers,
            "response": response_data
        }
        return json.dumps(result, indent=2)
    except Exception as e:
        logger.error(f"❌ HTTP GET tool error: {e}")
        return json.dumps({
            "success": False,
            "error": str(e),
            "url": url
        })

async def test_multi_tool_agent():
    """Test agent with both tools - THE MAIN ISSUE"""
    print("\n" + "=" * 80)
    print("🧪 TEST 3: MULTI-TOOL AGENT (THE BUG)")
    print("=" * 80)
    try:
        # Create both tools
        openapi_tool = FunctionTool.from_defaults(
            fn=mock_load_openapi_spec,
            name="load_openapi_spec",
            description="Find API endpoints and specifications based on user query. Returns JSON with endpoint details including the full URL to use."
        )
        http_tool = FunctionTool.from_defaults(
            fn=mock_post_request,
            name="post_request",
            description="Make HTTP POST requests to API endpoints. Requires the full URL including domain name."
        )
        # System prompt similar to Stack Overflow issue
        system_prompt = """
        You are an API assistant with two tools: load_openapi_spec and post_request.
        IMPORTANT WORKFLOW:
        - FIRST: Use load_openapi_spec to find the correct API endpoint for the user's request
        - SECOND: Use post_request with the EXACT full URL from the first tool's response
        Always follow this two-step process. Never guess URLs or use default endpoints.
        """
        memory = ChatMemoryBuffer.from_defaults()
        llm = Groq(model="llama3-70b-8192", api_key=os.getenv("GROQ_API_KEY"))
        agent = ReActAgent.from_tools(
            tools=[openapi_tool, http_tool],
            llm=llm,
            memory=memory,
            verbose=True,
            system_prompt=system_prompt
        )
        print("\n🚀 Running test...")
        response = await agent.achat("List my companies")
        print(f"\n📋 Final response: {response}")
    except Exception as e:
        print(f"❌ Multi-tool agent error: {e}")
        logger.exception("Multi-tool agent detailed error:")

async def main():
    """Run all tests to reproduce the multi-tool chaining issue"""
    print("LlamaIndex Multi-Tool Chaining")
    print("=" * 60)
    # Test multi-tool agent (the main issue)
    await test_multi_tool_agent()
    print("\n" + "=" * 80)

if __name__ == "__main__":
    asyncio.run(main())
Answer:
🚀 Running test...
> Running step 1a1b21a8-5b87-4379-94b2-941769dfeedc. Step input: List my companies
INFO:httpx:HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 200 OK"
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: load_openapi_spec
Action Input: {'query': 'list companies'}
INFO:multi-tool-reproduction:🔍 OPENAPI TOOL CALLED with query: 'list companies'
INFO:multi-tool-reproduction:📋 OPENAPI FOUND: https://api.my-company.com/v1/companies/list
Observation: {
  "found": true,
  "endpoint": "/v1/companies/list",
  "method": "POST",
  "full_url": "https://api.my-company.com/v1/companies/list",
  "description": "List all companies for authenticated user",
  "base_url": "https://api.my-company.com"
}
> Running step 853cc07c-b6f7-4d1b-9ae6-21006cc7b731. Step input: None
INFO:httpx:HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 200 OK"
Thought: I have the API endpoint to list companies. Now I need to make a POST request to this endpoint.
Action: post_request
Action Input: {'url': 'https://api.my-company.com/v1/companies/list', 'body': '{}', 'headers': '{}'}
INFO:multi-tool-reproduction:🌐 HTTP POST TOOL CALLED with URL: 'https://api.my-company.com/v1/companies/list'
INFO:multi-tool-reproduction:📡 HTTP SUCCESS: Found mock response for https://api.my-company.com/v1/companies/list
Observation: {
  "status_code": 200,
  "url": "https://api.my-company.com/v1/companies/list",
  "request_body": {},
  "request_headers": {},
  "response": {
    "success": true,
    "companies": [
      {
        "id": 1,
        "name": "Acme Corp",
        "status": "active"
      },
      {
        "id": 2,
        "name": "Tech Solutions Inc",
        "status": "active"
      },
      {
        "id": 3,
        "name": "Global Enterprises",
        "status": "inactive"
      }
    ]
  }
}
> Running step 35b8297b-e908-47f9-9356-08de6505cad7. Step input: None
INFO:httpx:HTTP Request: POST https://api.groq.com/openai/v1/chat/completions "HTTP/1.1 200 OK"
Thought: I have the list of companies. I can answer the user's question now.
Answer: Here is the list of your companies: Acme Corp, Tech Solutions Inc, Global Enterprises
📋 Final response: Here is the list of your companies: Acme Corp, Tech Solutions Inc, Global Enterprises
You should first understand how IP addresses are allocated, and how DHCP and NAT work.
When you research these protocols, you come across a concept called a private network.
In a private network, each device has a private IP address (whatever it is) that is unique only within that private network.
Whenever your device sends a request or receives a response, it uses the router's IP address, because that one is unique on the internet.
The router maintains a table and records the following things:
Sender IP address
Sender port
Receiver IP address
Receiver port
The router then rewrites the sender fields with its own data.
When data comes back from the receiver side, the router checks which device the data is for: it looks up the table, finds the match, and forwards the response to that device and that service.
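A toy sketch of that translation table in Python (all addresses and ports are made up; real NAT happens in the router, this is just the bookkeeping idea):

ROUTER_PUBLIC_IP = "203.0.113.7"  # hypothetical public address
nat_table = {}                    # public port -> (private ip, private port)
next_port = 40000

def outbound(private_ip, private_port, dst_ip, dst_port):
    """Rewrite the sender fields with the router's data and remember the mapping."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return (ROUTER_PUBLIC_IP, public_port, dst_ip, dst_port)

def inbound(dst_port):
    """Look up which private device a reply belongs to."""
    return nat_table.get(dst_port)

pkt = outbound("192.168.1.23", 51512, "93.184.216.34", 443)
print(pkt, inbound(pkt[1]))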
Not sure which version you are on, but the contentConnector is a Magnolia 5 configuration that is usable in Magnolia 6 but deprecated.
I do not see any “class” or “appClass” defined at the root of the app, which could be the reason for the NPE https://docs.magnolia-cms.com/product-docs/6.2/apps/app-configuration/app-descriptor/
I hope this helps.
Best regards,
Roman
The simplest way, I believe, is to do something like this:
from itertools import takewhile
from typing import List

def find_lcp(s: List[str]) -> str:
    # count the leading columns where all strings share the same character
    t = len(list(takewhile(lambda x: len(x) == 1, map(set, zip(*s)))))
    return s[0][:t]  # e.g. find_lcp(["flower", "flow", "flight"]) == "fl"
The RTSP server developed by these guys supports local media files. You can directly use rtsp://ip:port/filename, such as rtsp://ip:port/test.mp4
https://www.happytimesoft.com/products/rtsp-server/index.html
Model: Handles the data and business logic of the application. It manages CRUD operations, enforces business rules, and interacts with the database. For example, in a bookstore app, the Model would manage data like book titles, authors, and stock levels.
View: Manages the user interface and presentation. It displays data to users and updates the UI when the Model changes. For instance, the View in a bookstore app would show the list of books and provide input fields for searching or filtering.
Controller: Acts as the intermediary between the Model and View. It processes user input, updates the Model, and selects the appropriate View to display. For example, when a user searches for a book, the Controller handles the request, retrieves data from the Model, and updates the View.
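A minimal sketch of these three roles in Python, using the bookstore example (illustrative names only):

class BookModel:  # Model: data and business rules
    def __init__(self):
        self._books = [{"title": "Dune", "author": "Herbert", "stock": 3}]

    def search(self, term):
        return [b for b in self._books if term.lower() in b["title"].lower()]

class BookView:  # View: presentation only
    def render(self, books):
        for b in books:
            print(f'{b["title"]} by {b["author"]} ({b["stock"]} in stock)')

class BookController:  # Controller: routes input from the View to the Model
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_search(self, term):
        self.view.render(self.model.search(term))

BookController(BookModel(), BookView()).on_search("dune")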
Right now in your AppModule you are configuring the router twice.
This double registration leads to odd behaviour. Pick one style of router configuration, not both.
Make sure your database in phpMyAdmin is empty (no tables, views, routines, or triggers).
If it’s empty and you still get the error, then it’s a filesystem issue.
If you are using XAMPP:
Stop the MySQL server from the XAMPP control panel.
Go to: C:\xampp\mysql\data\
Find the folder with your database name.
Delete that folder manually.
Start MySQL again.
Your database will now be removed.
Since moving the section outside of the container is not an option, you can approach this in two ways while still keeping the section inside the parent container.
Override the padding with negative margins. This will overlap the parent padding:
.usp-section {
  background-color: #c2b280;
  padding: 1rem 0;
  margin-left: -6rem;
  margin-right: -6rem;
}
And here's the output; you'd need to adjust the padding or inner width of the section to align with the others.
Use width: 100vw
.usp-section {
  background-color: #c2b280;
  padding: 1rem 0;
  width: 100vw;
  position: relative;
  left: 50%;
  transform: translateX(-50%);
}
This will force it to span the viewport width regardless of container padding.
Have you turned Developer Options on? If not, go to Settings > About Phone and tap the build number seven times. That may fix your problem. EDIT: Also, try posting this question on the Android Enthusiasts Stack Exchange.
I had the same problem; just remove the double quotes (") to fix this, like:
json = json.Replace(@"""", "");
Currently, you are applying styles to the class .my-wrench, which is the parent of mat-icon. To style the icon itself, you need to target the mat-icon element. Here is an example of how to apply styles to the icon.
.my-wrench mat-icon {
color: blue;
}
It should produce the desired style you want.