This isn't working for me. Please fix this as soon as possible.
Maybe you should take a look at their code example and search for the part that uses Markers. If that is not enough, ask the maintainers of the library directly through the issues section of the repository.
As of now, this seems to be impossible, short of patching Java yourself. There is an upstream bug report: https://bugs.openjdk.org/browse/JDK-8290140 and Fedora might patch it: https://bugzilla.redhat.com/show_bug.cgi?id=1154277
# .htaccess
RewriteEngine On
RewriteBase /AbhihekDeveloper
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [PT,L]
The calendar app is just .toolbar, nothing too complicated. Using the new toolbar APIs, it's built in a couple of minutes.
Calendar App:
private let days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] // Just an example; don't hard-code the days like this
private let columns = [GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible())] // 7 days
var body: some View {
NavigationView {
VStack {
ScrollView {
Text("May")
.font(.largeTitle.bold())
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
LazyVGrid(columns: columns) {
ForEach(days, id: \.self) { day in
Text("\(day)")
.font(.title3)
.padding(5)
.padding(.vertical, 10)
}
}
.padding()
Text("June")
.font(.largeTitle.bold())
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
LazyVGrid(columns: columns) {
ForEach(days, id: \.self) { day in
Text("\(day)")
.font(.title3)
.padding(5)
.padding(.vertical, 10)
}
}
.padding()
}
}
.toolbar {
ToolbarItem(placement: .topBarLeading) {
Label("2025", systemImage: "chevron.left")
.labelStyle(.titleAndIcon)
.frame(width: 75) // Have to set it for the ToolbarItem or only icon is visible
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "server.rack") //or whatever
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "magnifyingglass")
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "plus")
}
ToolbarItem(placement: .bottomBar) {
Image(systemName: "pencil")
}
ToolbarSpacer(placement: .bottomBar)
ToolbarItem(placement: .bottomBar) {
Image(systemName: "exclamationmark.circle")
}
ToolbarItem(placement: .bottomBar) {
Image(systemName: "tray")
}
}
}
}
Now the Fitness app is a little bit more challenging. I didn't come up with a perfect solution, but the basics work. I chose .navigationTitle() and just a plain VStack with the chips, as you can see. It doesn't have a blur, but the basics are there. The TabView uses just the basic Tab. It could probably be refactored into the .toolbar too, with a custom title.
Fitness App:
struct FitnessAppView: View {
var body: some View {
TabView {
//Different views
Tab("Fitness+", systemImage: "ring") {
FitnessRunningView()
}
Tab("Summary", systemImage: "figure.run.circle") {
FitnessRunningView()
}
Tab("Sharing", systemImage: "person.2") {
FitnessRunningView()
}
}
}
}
struct FitnessRunningView: View {
var body: some View {
NavigationView {
ZStack {
VStack {
// Horizontal chips
ScrollView(.horizontal) {
HStack {
ChipView(text: "For you")
ChipView(text: "Explore")
ChipView(text: "Plans")
ChipView(text: "Library")
}
}
.scrollIndicators(.hidden)
// Main content
ScrollView {
VStack(spacing: 20) {
Text("Hello world!")
ForEach(0..<20) { i in
Text("Item \(i)")
.frame(maxWidth: .infinity)
.padding()
.background(.thinMaterial)
.cornerRadius(10)
}
}
.padding()
}
}
}
.navigationTitle("Fitness+")
}
}
}
struct ChipView: View {
var text: String
var body: some View {
Text(text)
.font(.title3)
.padding()
.glassEffect(.regular.interactive())
.padding(10)
}
}
Rejecting duplicate peerIDs did not work for me. Instead, I kept an array of the sessions I had started for all peerIDs, and when the advertiser triggered a call to session:peer:didChangeState: I disconnected and set session = nil for every session in the array except the one that finally connected.
I solved the problem by making the function that draws the messages also draw the line in the background, and by adding to the y coordinate of the line the distance from the top of the message box to its center (which is always fixed) plus the total height of the box.
Check the generator output path in the schema.prisma file and the location you are importing the Prisma client from. In my case I located the edge.d.ts file, which was in src/generated/prisma.
import { PrismaClient } from '../src/generated/prisma/edge'
generator client {
provider = "prisma-client-js"
output = "../src/generated/prisma"
}
I also encountered this just now and tried something: I set the polygon's pivot point to the bone's pivot and voilà, it works fine now. (Godot 4.2.1)
What I had to do to solve this error was go into my files, open %appdata% > Roaming, and find the Jupyter folder there. Windows then prompted me to allow admin permissions before entering. This fixed Anaconda when I checked afterwards.
My setup: Nx + Angular 19 with an internal library.
For me, this bug occurs when all three conditions are met:
I am using a component without exporting it from the library
I am using that component inside a @defer {} block
I am NOT using HMR
What is really tricky: if you are using HMR, this just works fine.
Seems like a nasty Angular bug.
Try using a different ssh-agent, e.g.:
ssh-agent bash
ssh-add ~/.ssh/id_ed25519
The thing that worked for me was to either connect to a mobile hotspot or, if you are already connected, change the network type to a private network.
You can set the number of concurrent processes used by your build with CMAKE_BUILD_PARALLEL_LEVEL in your CMake file. For example:
set(CMAKE_BUILD_PARALLEL_LEVEL 10)
is equivalent to specifying -j 10 on the cmake command line. (CMake also reads CMAKE_BUILD_PARALLEL_LEVEL as an environment variable.)
You may also want to consider another approach of making the Djoser emails async by default.
The way I did this was to subclass Djoser's email classes and override the send()
method so it uses a Celery task. The accepted solution works for one-off tasks, but this method makes sure there is consistency across all email types.
users/tasks.py
from django.core.mail import EmailMultiAlternatives
from celery import shared_task
@shared_task(bind=True, max_retries=3)
def send_email_task(self, subject, body, from_email, to, bcc=None, cc=None, reply_to=None, alternatives=None):
try:
email = EmailMultiAlternatives(
subject=subject,
body=body,
from_email=from_email,
to=to,
bcc=bcc or [],
cc=cc or [],
reply_to=reply_to or []
)
if alternatives:
for alt in alternatives:
email.attach_alternative(*alt)
email.send()
except Exception as exc:
raise self.retry(exc=exc, countdown=60)
This is a generic task that sends any Django email. Nothing here is Djoser-specific.
users/email.py
from django.conf import settings
from djoser import email
from .tasks import send_email_task
class AsyncDjoserEmailMessage(email.BaseDjoserEmail):
"""
Override synchronous send to use Celery.
"""
def send(self, to, fail_silently=False, **kwargs):
self.render()
self.to = to
self.cc = kwargs.pop("cc", [])
self.bcc = kwargs.pop("bcc", [])
self.reply_to = kwargs.pop("reply_to", [])
self.from_email = kwargs.pop("from_email", settings.DEFAULT_FROM_EMAIL)
self.request = None # don't pass request to Celery
send_email_task.delay(
subject=self.subject,
body=self.body,
from_email=self.from_email,
to=self.to,
bcc=self.bcc,
cc=self.cc,
reply_to=self.reply_to,
alternatives=self.alternatives,
)
Any email that inherits from this class will be sent asynchronously.
Now you can combine Djoser's built-in emails with your async base:
class PasswordResetEmail(email.PasswordResetEmail, AsyncDjoserEmailMessage):
template_name = 'email/password_reset.html'
def get_context_data(self):
context = super().get_context_data()
user = context.get('user')
context['username'] = user.username
context['reset_url'] = (
f"{settings.FRONTEND_BASE_URL}/reset-password"
f"?uid={context['uid']}&token={context['token']}"
)
return context
class ActivationEmail(email.ActivationEmail, AsyncDjoserEmailMessage):
template_name = 'email/activation.html'
def get_context_data(self):
context = super().get_context_data()
user = context.get('user')
context['username'] = user.username
context['verify_url'] = (
f"{settings.FRONTEND_BASE_URL}/verify-email"
f"?uid={context['uid']}&token={context['token']}"
)
return context
class ConfirmationEmail(email.ConfirmationEmail, AsyncDjoserEmailMessage):
template_name = 'email/confirmation.html'
You can do the same for:
PasswordChangedConfirmationEmail
UsernameChangedConfirmationEmail
UsernameResetEmail
Each one gets async sending for free, and you can add extra context if you need it.
If you want to override Djoser's email, you need to make sure you add yours to the global templates dir so your templates get used instead. Examples (templates/email/...):
password_reset.html
{% block subject %}Reset your password on {{ site_name }}{% endblock %}
{% block text_body %}
Hello {{ username }}!
You requested a password reset for your account. Click the link below:
{{ reset_url }}
{% endblock %}
{% block html_body %}
<h2>Hello {{ username }}!</h2>
<p>Click the link to reset:</p>
<a href="{{ reset_url }}">Reset Password</a>
{% endblock %}
activation.html
{% block subject %}Verify your email for {{ site_name }}{% endblock %}
{% block text_body %}
Hello {{ username }}, please verify your email:
{{ verify_url }}
{% endblock %}
{% block html_body %}
<h2>Hello {{ username }}!</h2>
<p><a href="{{ verify_url }}">Verify Email</a></p>
{% endblock %}
...and similarly for confirmation.html.
Make sure your settings.py
points at the template folder:
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [BASE_DIR / "templates"],
...
}
]
Add Djoser URLs:
urlpatterns = [
path("users/", include("djoser.urls")),
...
]
Start Celery:
celery -A config worker -l info
(replace config
with your project name)
Trigger a Djoser action (e.g. reset_password
or activation
) and you'll see Celery run send_email_task
.
This way, every Djoser email that inherits AsyncDjoserEmailMessage becomes async, not just the password reset.
There is a fix for grid.setOptions so that it doesn't drop your toolbar customizations.
Just detach the toolbar before setOptions and then re-apply it afterwards.
toolBar = $("#" + GridName + " .k-grid-toolbar").detach();
grid.setOptions(options);
$("#" + GridName + " .k-grid-toolbar").replaceWith(toolBar);
This is a fairly widespread compatibility issue between the JavaFX D3D hardware pipeline and recent Intel Iris Xe graphics drivers on Windows, as confirmed by your tests with multiple driver and Java versions. The D3DERR_DEVICEHUNG error and resulting freezes or flickers are typical of JavaFX running into problems with the GPU driver—these issues go away when using software rendering or a discrete NVIDIA GPU, but those solutions either severely hurt performance or aren't generally available to all users. Currently, aside from forcing software rendering (which impacts speed) or shifting to an external GPU (not possible on all systems), there is no reliable JVM flag or workaround that fully addresses this; the root cause is a low-level bug or incompatibility which requires a fix from Intel or the JavaFX/OpenJFX developers. For now, the best course is to alert both Intel and OpenJFX via a detailed bug report and, in the interim, provide users with guidance to use software mode or reduce heavy GPU effects until an official update becomes available.
Powershell:
Remove-Item Env:\<VARNAME>
Example:
Remove-Item Env:\SSH_AUTH_SOCK
Hello, I've had the same issue. Have you found a solution? Could you please give me a hint if you solved this problem? Thanks in advance.
Simple! I should have mentioned the .exe was previously signed. The solution is to do:
signtool remove /s %outputfile%
before running rcedit. Then sign again with signtool afterwards; it works fine.
Use this patch; it works for me:
https://github.com/software-mansion/react-native-reanimated/issues/7493#issuecomment-3056943474
Had the same issue. Try updating the CLI or using a new one.
I fixed it by installing the latest version of IntelliJ IDEA, which has full support for newer Java language levels
+1 For the Loki
recommendation. It is nice being able to query the Loki data in the Grafana UI. You can tail live logs from your pod using the label selector or pick a specific time range that you are interested in.
I figured out how to get the output that I needed. I'll post it here for others to see and comment on.
The way I did it was to also require jq as a provider, which then allowed me to run a jq_query data block. This is the full end to end conversion of the data sources:
locals {
instances_json = jsonencode([ for value in data.terraform_remote_state.instances : value.outputs ])
}
data "jq_query" "all_ids" {
data = local.instances_json
query = ".[] | .. | select(.id? != null) | .id"
}
locals {
instances = split(",", replace(replace(data.jq_query.all_ids.result, "\n", "," ), "\"", "") )
}
The last locals block is needed because the jq_query block returns multiple values, but the string is not in standard JSON format. So we can't decode the string from JSON; we simply have to work around it. I replaced the "\n" characters with commas, and then replaced the \" with nothing, so that the end result was something I could pass to the split function to split the values into a list.
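For illustration only, here is the same cleanup in plain Python (not Terraform; the sample ids are made up):
raw = '"i-0aaa"\n"i-0bbb"\n"i-0ccc"'               # roughly what jq_query returns: quoted ids separated by newlines
cleaned = raw.replace("\n", ",").replace('"', "")  # -> i-0aaa,i-0bbb,i-0ccc
ids = cleaned.split(",")                           # -> ['i-0aaa', 'i-0bbb', 'i-0ccc']
print(ids)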
Make sure to specify the UID when creating the user so that it is guaranteed to match the UID specified for the cache. I was having permission problems with the cache dir until I saw that the user that was created had UID 999.
useradd -u 1000 myuser
header 1 | header 2 |
---|---|
cell 1 | cell 2 |
cell 3 | cell 4 |
I had a case similar to the question above; to solve it I did this:
columns = ["a", "b", "c"]
df[[*columns]]
This unpacks the column names and uses them to create a new DataFrame containing only the columns named in the columns list.
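A quick demonstration with a toy DataFrame (here df[[*columns]] is equivalent to df[columns]):
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8]})
columns = ["a", "b", "c"]
print(df[[*columns]])  # new DataFrame with only columns a, b and c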
I found the error. The de-serialization code should use boost::archive::binary_iarchive ar(filter);
instead of boost::archive::binary_iarchive ar(f);
That yellow triangle isn’t the Problems counter. It’s a warning that you turned Problems off. VS Code added this in 1.85—when Problems: Visibility is off, it shows a status-bar warning by design.
Hide just that icon (and keep Problems hidden):
Right-click the status bar → Manage Status Bar Items (or run “Preferences: Configure Status Bar Items”).
Uncheck the entry for Problems (visibility off) to hide that warning item. This per-item visibility is persisted.
If you use SSH/WSL/Dev Containers: open the remote window and do the same there—remote windows keep their own settings/profile.
If you actually want Problems decorations back (and thus no warning), just re-enable Problems: Visibility in Settings.
Based on https://lit.dev/docs/components/decorators/
You can also add accessor
before your property name to use standard decorator syntax instead of experimental decorator syntax.
Use .localhost
*.localhost is reserved for local development (RFC 6761), is natively recognized by browsers, and has no conflicts with real domains.
Recommended Laragon configuration
Modify domains in Laragon: Menu → Preferences → General, then change {name}.dev to {name}.localhost
Generate wildcard certificate:
# Install mkcert if not already done
mkcert -install
# Create wildcard certificate for localhost
mkcert "*.localhost" localhost 127.0.0.1 ::1
Regards
In TYPO3 v12 or later, use the PageRepository: https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Deprecation-97027-ContentObjectRenderer-getTreeList.html
Perfect! The code you sent already generates a simple electronic-funk-style track using pydub. It creates a 4-beat loop with kick, snare, and hi-hat and exports it to a 2min30s MP3 file.
If you want to listen to the generated track, just run the code in a Python environment with pydub and ffmpeg available (ffmpeg is required to export MP3).
Here is a ready-to-run version with small improvements:
from pydub.generators import Sine
from pydub import AudioSegment
# Beat settings
bpm = 150
beat_duration_ms = int((60 / bpm) * 1000)  # duration of one beat in ms (400 ms)
total_duration_ms = 2 * 60 * 1000 + 30 * 1000  # 2min30s
# Basic sounds
kick = Sine(60).to_audio_segment(duration=beat_duration_ms).apply_gain(+6)
snare = Sine(200).to_audio_segment(duration=100).apply_gain(-3)
hihat = Sine(8000).to_audio_segment(duration=50).apply_gain(-15)
# Function to build one simple electronic funk bar
def make_bar():
    bar = AudioSegment.silent(duration=beat_duration_ms * 4)
    # Kick on beats 1 and 3
    bar = bar.overlay(kick, position=0)
    bar = bar.overlay(kick, position=beat_duration_ms * 2)
    # Snare on beats 2 and 4
    bar = bar.overlay(snare, position=beat_duration_ms)
    bar = bar.overlay(snare, position=beat_duration_ms * 3)
    # Hi-hat on every beat
    for i in range(4):
        bar = bar.overlay(hihat, position=beat_duration_ms * i)
    return bar
# Build the main loop
bar = make_bar()
song = AudioSegment.silent(duration=0)
while len(song) < total_duration_ms:
    song += bar
# Export as MP3
output_path = "funk_moderno.mp3"
song.export(output_path, format="mp3")
print(f"Track generated at: {output_path}")
After running it, you will have a funk_moderno.mp3 file in the same folder, ready to listen to.
If you want, I can improve this track by adding variations, effects, or a bass line so it sounds more "professional" and closer to modern electronic funk. Would you like me to do that?
I had the same problem as you, and here is my solution. You must define
DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@postgres-db:5432/${DB_DATABASE}
inside docker-compose so that the backend service can connect to the Postgres DB. Here is my docker-compose file:
version: '4.0'
services:
db:
image: postgres
container_name: postgres
environment:
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_DATABASE}
ports:
- "5432:5432"
volumes:
- db_data:/var/lib/postgresql/data
backend:
build: .
container_name: backend
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@postgres-db:5432/${DB_DATABASE}
depends_on:
- db
volumes:
- .:/app
- /app/node_modules
volumes:
db_data:
Then change the host (DB_HOST) in the .env file to "db" (because you named the Postgres service "db" in the docker-compose file):
PORT=3000
DB_HOST=db
DB_PORT=5432
DB_USERNAME=postgres
DB_PASSWORD=123456
DB_DATABASE=auth
The TypeORM config:
TypeOrmModule.forRootAsync({
imports: [ConfigModule],
useFactory: (configService: ConfigService) => ({
type: 'postgres',
host: configService.get('DB_HOST'),
port: +configService.get('DB_PORT'),
username: configService.get('DB_USERNAME'),
password: configService.get('DB_PASSWORD'),
database: configService.get('DB_DATABASE'),
entities: [__dirname + '/**/*.entity{.ts,.js}'],
synchronize: true,
logging: true
}),
inject: [ConfigService],
}),
Here is an update: I have written an updated version of the code using dynamic allocation for all the matrices. It works quite well in parallel too (I have tested it up to 4096x4096); the only minor issue is that, at the largest size tested, I had to turn off the call to the "print" function because it stalled the program.
Inside the block-multiplication function there is now a bound on all 3 inner loops to handle the case where the row and column counts are not divisible by the block size, using the fmin() function with this syntax:
for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
{
for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
{
for(int k=kk;k<fmin(kk+blockSize, rowsA); ++k)
{
matC[i][j] += matA[i][k]*matB[k][j];
I also tried this approach in the early version of the serial code, but for some reason it didn't work, probably because I made some logical mistakes.
Anyway, this code does not work on rectangular matrices; if you try to run it with 2 rectangular matrices you will get an error because the pointers write outside the memory areas they are supposed to work in.
I tried to work out how to turn all the checks and mathematical conditions required for rectangular matrices into working code, but had no success; I admit it is beyond my skills. If anyone has code (maybe from past examples or from some source on the net) that could be used, it would be a nice addition to the algorithm; I searched a lot both here and on the internet but found nothing.
Here is the updated full code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>
/* run this program using the console pauser or add your own getch, system("pause") or input loop */
// function for product block calculation between matri A and B
void matMultDyn(int rowsA, int colsA, int rowsB, int colsB, int blockSize, int **matA, int **matB, int **matC)
{
double total_time_prod = omp_get_wtime();
#pragma omp parallel
{
#pragma omp single
{
//int num_threads=omp_get_num_threads();
//printf("%d ", num_threads);
for(int ii=0; ii<rowsA; ii+=blockSize)
{
for(int jj=0; jj<colsB; jj+=blockSize)
{
for(int kk=0; kk<rowsA; kk+=blockSize)
{
#pragma omp task depend(in: matA[ii:blockSize][kk:blockSize], matB[kk:blockSize][jj:blockSize]) depend(inout: matC[ii:blockSize][jj:blockSize])
{
for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
{
for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
{
for(int k=kk;k<fmin(kk+blockSize, rowsA); ++k)
{
matC[i][j] += matA[i][k]*matB[k][j];
//printf("Hello from iteration n: %d\n",k);
//printf("Test valore matrice: %d\n",matC[i][j]);
//printf("Thread Id: %d\n",omp_get_thread_num());
}
}
}
}
}
}
}
}
}
total_time_prod = omp_get_wtime() - total_time_prod;
printf("Total product execution time by parallel threads (in seconds): %f\n", total_time_prod);
}
//Function for printing of the Product Matrix
void printMatrix(int **product, int rows, int cols)
{
printf("Resultant Product Matrix:\n");
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
printf("%d ", product[i][j]);
}
printf("\n");
}
}
int main(int argc, char *argv[]) {
//variable to calculate total program runtime
double program_runtime = omp_get_wtime();
//matrices and blocksize dimensions
int rowsA = 256, colsA = 256;
int rowsB = 256, colsB = 256;
int blockSize = 24;
if (colsA != rowsB)
{
printf("No. of columns of first matrix must match no. of rows of the second matrix, program terminated");
exit(EXIT_SUCCESS);
}
else if(rowsA != rowsB || rowsB != colsB)
{
blockSize= 1;
//printf("Blocksize value: %f\n", blockSize);
}
//variable to calculate total time for inizialization procedures
double init_runtime = omp_get_wtime();
//Dynamic matrices pointers allocation
int** matA = (int**)malloc(rowsA * sizeof(int*));
int** matB = (int**)malloc(rowsB * sizeof(int*));
int** matC = (int**)malloc(rowsA * sizeof(int*));
//check for segmentation fault
if (matA == NULL || matB == NULL || matC == NULL)
{
fprintf(stderr, "out of memory\n");
exit(0);
}
//------------------------------------ Matrices initializazion ------------------------------------------
// MatA initialization
//#pragma omp parallel for
for (int i = 0; i < rowsA; i++)
{
matA[i] = (int*)malloc(colsA * sizeof(int));
}
for (int i = 0; i < rowsA; i++)
for (int j = 0; j < colsA; j++)
matA[i][j] = 3;
// MatB initialization
//#pragma omp parallel for
for (int i = 0; i < rowsB; i++)
{
matB[i] = (int*)malloc(colsB * sizeof(int));
}
for (int i = 0; i < rowsB; i++)
for (int j = 0; j < colsB; j++)
matB[i][j] = 1;
// matC initialization (Product Matrix)
//#pragma omp parallel for
for (int i = 0; i < rowsA; i++)
{
matC[i] = (int*)malloc(colsB * sizeof(int));
}
for (int i = 0; i < rowsA; i++)
for (int j = 0; j < colsB; j++)
matC[i][j] = 0;
init_runtime = omp_get_wtime() - init_runtime;
printf("Total time for matrix initialization (in seconds): %f\n", init_runtime);
//omp_set_num_threads(8);
// function call for block matrix product between A and B
matMultDyn(rowsA, colsA, rowsB, colsB, blockSize, matA, matB, matC);
// function call to print the resultant Product matrix C
printMatrix(matC, rowsA, colsB);
// --------------------------------------- Dynamic matrices pointers' cleanup -------------------------------------------
for (int i = 0; i < rowsA; i++) {
free(matA[i]);
free(matC[i]);
}
for (int i = 0; i < rowsB; i++) {
free(matB[i]);
}
free(matA);
free(matB);
free(matC);
//Program total runtime calculation
program_runtime = omp_get_wtime() - program_runtime;
printf("Program total runtime (in seconds): %f\n", program_runtime);
return 0;
}
To complete the testing and comparison of the code, I will create a machine on Google Cloud equipped with 32 cores, so I can see how the code runs on an actual 16-core machine and then on 32 cores.
For reference, I'm running this code on my MSI notebook, which has an Intel Core i7-11800H, 8 cores at 3.2 GHz, and can run up to 16 threads concurrently; the reason for testing on Google Cloud is that I want the software to run on a "real" 16-core machine, where one thread runs on one core, and then to scale further up to 32 cores.
With the collected data I will then draw some graphs for comparison.
In newer PhpStorm versions: File > Settings > PHP
I would split optimization into two parts: TTFB (time to first byte) optimization and the frontend optimization.
To optimize TTFB:
Connect your Magento store to a PHP profiler. There are several options, you can google for them.
Inspect the diagram and see if you can find a function call that takes too much time.
Optimize that function call. In 90% of the cases I dealt with, the slowness came from a 3rd-party extension.
To optimize the frontend:
Minify and compress JS and CSS. You can turn it on at Stores > Configuration > Advanced > Developer > CSS and JS settings
Serve images in WebP or AVIF formats to cut page weight
Use GZIP compression
Inline critical CSS and JS (critical CSS/JS is what is needed to render above-the-fold content) and lazy load all the rest
Use as few 3rd-party JS libraries/scripts as possible
Remove redundant CSS and JS
Good luck!
I found the issue. It wasn't with the dataset format; it was with the LLM I used, which wasn't returning the correct output (a value of 0 or 1). That's why it was giving me RagasOutputParserException. To fix it, I tried different models and decreased the number of returned documents from 10 to 5.
This is what ultimately got me going:
<div style="position: relative; width: 560px; height: 315px;">
<div id="cover" style="position:absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); opacity:1; cursor:pointer; font-size:100px; color:white; text-shadow: 2px 2px 4px #000000;">
<i class="fas fa-play"></i>
</div>
<iframe id="player" width="560" height="315" src="https://www.youtube.com/embed/2qhCjgMKoN4?enablejsapi=1&controls=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen style="position: absolute; top:0; left:0; opacity:0;"></iframe>
</div>
<script src="https://www.youtube.com/iframe_api"></script>
<script>
var player;
var playButton = document.getElementById('cover');
var icon = playButton.querySelector('i');
function onYouTubeIframeAPIReady() {
player = new YT.Player('player', {
events: {
'onReady': onPlayerReady,
'onStateChange': onPlayerStateChange
}
});
}
function onPlayerReady(event) {
playButton.addEventListener('click', function() {
if (player.getPlayerState() == YT.PlayerState.PLAYING) {
player.pauseVideo();
} else {
player.playVideo();
}
});
}
function onPlayerStateChange(event) {
if (event.data == YT.PlayerState.PLAYING) {
icon.classList.remove('fa-play');
icon.classList.add('fa-pause');
} else {
icon.classList.remove('fa-pause');
icon.classList.add('fa-play');
}
}
</script>
Thanks,
Josh
Enabling "Beta: Use Unicode UTF-8 for worldwide language support" as suggested here solved the issue for me.
You did not format your setting value properly.
See this answer for full explanation.
The problem is the URL string — you used a Cyrillic р instead of a normal ASCII p
in http
.
Change this:
fetch('httр://localhost:3000/api/test')
to this:
fetch('http://localhost:3000/api/test')
(or just fetch('/api/test')
inside Next.js).
OK, so this was the answer:
In TOML, the root table ends as soon as the first header (e.g. [params]) appears. Any bare keys that come after [params] are part of that table, not the root.
In my file, I had a [params] section starting before the theme config. So, in short, I just had a bug in hugo.toml.
I overlooked it at first because the tab after the keys under [params] made it look like the indentation "scoped" the values. But I forgot that whitespace has no scoping semantics in TOML.
In my case Jupyter server was running outside of the env I created with conda, so it was always running from base environment. This worked:
conda activate dlcourse
pip install jupyterlab ipykernel
If it's just the URL, then add "?wsdl" at the end and browse.
If you need to download as a file, right click on the webpage which shows all the services, save as xml, then rename to filename.wsdl
In some cases you can just turn off TLS verification with --disable-tls:
php ./composer-setup.php --install-dir=/usr/bin --filename=composer --disable-tls
#ifndef __clang_analyzer__
base->temp2 = (tempStruct2*)(ptr2 + 1);
#endif
Seems to work for me, basically making the code dead to the analyzer.
Thanks.
I managed to do this by putting export KEY=VALUE
in ~/.zshenv
If I understand you correctly, you are asking about the "has-pending-model-changes" command, which "Checks if any changes have been made to the model since the last migration." The complete command looks like: "dotnet ef migrations has-pending-model-changes"
Author of the library here. In your examples you look to be using the v4 API; v5 has a completely new API where config is passed in via the options prop. I recommend reading the docs: https://react-chessboard.vercel.app/?path=/docs/how-to-use-options-api--docs#optionsonpiececlick
// handle piece click
const onPieceClick = ({
square,
piece,
isSparePiece
}: PieceHandlerArgs) => {
console.log(piece.pieceType);
};
// chessboard options
const chessboardOptions = {
allowDragging: false,
onPieceClick,
id: 'on-piece-click'
};
// render
return <Chessboard options={chessboardOptions} />;
The issue was some kind of hardware error with Firefox. After restarting Firefox (close the app and open again) it works. See also the bug report https://github.com/fabricjs/fabric.js/issues/10710
I have exactly the same problem. Have you found any answer? Thank you very much!
Yes, the postPersistAnimal method will be invoked. All the callbacks defined by the superclass entities or mapped superclasses will be executed when updating the subclass entity. This behaviour is specified in the JPA documentation:
If a lifecycle callback method for the same lifecycle event is also specified on the entity class and/or one or more of its entity or mapped superclasses, the callback methods on the entity class and/or superclasses are invoked after the other lifecycle callback methods, most general superclass first. A class is permitted to override an inherited callback method of the same callback type, and in this case, the overridden method is not invoked.
You can find more info regarding the execution order and other details here.
I now have a comprehensive example of the combination of gridstack.js and Angular.
https://gitlab.com/FabianSturm/gridstack-dashboard
Feel free to comment on possible improvements!
Maybe you have "AltGR"?
// lib/main.dart
import 'package:flame/flame.dart';
import 'package:flame/game.dart';
import 'package:flame/components.dart';
import 'package:flame/effects.dart'; // for MoveToEffect and EffectController
import 'package:flame/input.dart'; // for TapDetector
import 'package:flutter/widgets.dart';
class RunnerGame extends FlameGame with TapDetector {
late SpriteAnimationComponent hero;
@override
Future<void> onLoad() async {
final image = await images.load('hero_run.png'); // spritesheet
final animation = SpriteAnimation.fromFrameData(
image,
SpriteAnimationData.sequenced(
amount: 8, stepTime: 0.08, textureSize: Vector2(64, 64),
),
);
hero = SpriteAnimationComponent(animation: animation, size: Vector2(128, 128))
..position = size / 2;
add(hero);
}
@override
void onTapDown(TapDownInfo info) {
hero.add(MoveToEffect(info.eventPosition.game, EffectController(duration: 0.3)));
}
}
void main() {
final game = RunnerGame();
runApp(GameWidget(game: game));
}
Here's a batch script that captures RTSP stream screenshots every hour while skipping the period from 11 AM to midnight (12 AM):
@echo off
setlocal enabledelayedexpansion
:: Configuration
set RTSP_URL=rtsp://your_camera_rtsp_stream
set OUTPUT_FOLDER=C:\CCTV_Screenshots
set FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
:: Create output folder if it doesn't exist
if not exist "%OUTPUT_FOLDER%" mkdir "%OUTPUT_FOLDER%"
:: Get current time components
for /f "tokens=1-3 delims=: " %%a in ('echo %time%') do (
set /a "hour=%%a"
set /a "minute=%%b"
set /a "second=%%c"
)
:: Skip if between 11 AM (11) and Midnight (0)
if %hour% geq 11 if %hour% leq 23 (
echo Skipping capture between 11 AM and Midnight
exit /b
)
if %hour% equ 0 (
echo Skipping Midnight capture
exit /b
)
:: Generate timestamp for filename
for /f "tokens=1-3 delims=/ " %%d in ('echo %date%') do (
set year=%%d
set month=%%e
set day=%%f
)
set timestamp=%year%%month%%day%_%hour%%minute%%second%
:: Capture frame with ffmpeg
"%FFMPEG_PATH%" -y -i "%RTSP_URL%" -frames:v 1 -q:v 2 "%OUTPUT_FOLDER%\%timestamp%.jpg" 2>nul
if errorlevel 1 (
echo Failed to capture frame at %time%
) else (
echo Captured frame: %OUTPUT_FOLDER%\%timestamp%.jpg
)
Important Notes:
Replace RTSP_URL
with your camera's actual RTSP stream URL
Adjust FFMPEG_PATH
to match your ffmpeg installation location
Modify OUTPUT_FOLDER
to your desired save location
Test the time format on your system by running echo %time%
and echo %date%
in cmd
The script uses 24-hour format (0-23 where 0=Midnight)
The script will skip captures between 11:00:00 and 23:59:59, plus Midnight (00:00:00)
To Schedule:
Save as cctv_capture.bat
Open Task Scheduler (taskschd.msc)
Create a new task:
Trigger: Hourly (repeat every hour)
Action: Start a program → select your batch file
Run whether user is logged in or not
Troubleshooting Tips:
Test the RTSP URL directly with ffmpeg first
Verify your time format matches the script's parsing
Check folder permissions for the output location
Consider adding error logging if needed
Test during active hours (1-10 AM) to verify captures work
The script will now capture images every hour except between 11 AM and Midnight (12 AM), which matches your requirement for the timelapse project.
Payload splitBy "\n" loads all the content in memory and throws a heap memory error.
It's solved by passing the stream to a Java class which processes the stream and writes it to the /tmp dir without blowing up the heap.
Inspiration was taken from the Mule File repeatable streaming strategy.
Adobe Creative Cloud lets you "install" fonts to use in non-Adobe applications, and when you do (on Windows) they show up in C:\Users\<USER>\AppData\Roaming\Adobe\User Owned Fonts\
. Note that User Owned Fonts
is a hidden folder, but the files inside it are all unhidden and have meaningful filenames.
Really insightful post. I ran into a similar issue recently and was also surprised that adding a new enum value triggered a compatibility error. Totally agree that this makes managing evolving schemas in Pub/Sub pretty tricky. Curious to hear how others are handling this; switching to strings might be the safer route, but it feels like a compromise.
The problem is actually not in the filter, but in the size of the propagation step. In this case it is too small, which means that the FFT is computed too many times and thus accumulates error. By increasing the step size to 0.001, you get much better results.
You can prove that these results are better by introducing a function that measures pixel distance between arrays:
import numpy as np

def distance(a: np.ndarray, b: np.ndarray):
    return np.dot((a - b).flatten(), (a - b).flatten()) / len(a.flatten())
Using this function to compare the propagated profile to the analytical one shows a distance of 1.40-0.33j when dz=0.001, whereas the distance is -2.53+22.25j when dz=0.00005. You can play around with dz to see if you can get better results.
Try normalizing your data (I mean your X) before running linear regression, for example by using MinMaxScaler (sklearn.preprocessing.MinMaxScaler); that may have an impact on the coefficients.
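A minimal sketch of what I mean, with made-up toy data (X and y are placeholders for your own arrays):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, 200.0], [2.0, 350.0], [3.0, 600.0]])  # features on very different scales
y = np.array([1.0, 2.0, 3.0])

X_scaled = MinMaxScaler().fit_transform(X)  # rescales each column to the [0, 1] range
model = LinearRegression().fit(X_scaled, y)
print(model.coef_)                          # coefficients now refer to the scaled features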
If you only want to subscribe to pull request and commits on main, you can do like:
/github subscribe owner/repo pulls commits:main
I think the reason for this error in my case is not the Python version, but rather that my Mac's architecture differs from those available as built distributions: I have an M1, which is ARM64, and for macOS only x86-64 builds are available. So I cannot install ruptures this way.
Solved by removing "expose source roots to PYTHONPATH", but I'm not sure of the underlying reason.
I generated this software with Python to convert images to video.
Image2Video - Turn Images into Videos Effortlessly
A practical tool to convert image collections into high-quality videos with customizable settings. Powered by FFmpeg, perfect for creating timelapses, creative slideshows, or processing CCTV footage.
📸 Supports multiple image formats (JPG, PNG, GIF, etc.)
⏱️ Adjustable frame duration
🎵 Add default audio with customizable bitrate
📂 Automatic folder/subfolder scanning
🖥️ Simple and intuitive GUI
⏳ Real-time progress tracking
MANIFEST.MF
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-name: Hello!
MIDlet-version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
It should be like this:
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-Vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-Name: Hello!
MIDlet-Version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
I had left this problem alone and was working on the other parts of my project.
Last night I ran into a problem, googled the error message, and came across this Stack Overflow thread.
I looked at the accepted answer (the first answer), which said the problem was using the slim build of jQuery.
Although I was trying to figure out the solution for another problem, I decided to give it a shot: I replaced the jQuery CDN link with the non-slim version, and bam, it worked!
To fix this issue, increase the heap memory by updating the following line in android/gradle.properties
:
org.gradle.jvmargs=-Xmx512M
to:
org.gradle.jvmargs=-Xmx8G
Then run:
flutter clean
flutter run
If 8 GB isn’t enough, you can increase it further (e.g., -Xmx16G
).
For me, the answer was I don't want the button to send the form at all, and this helped me:
https://stackoverflow.com/a/3315016/5057078
Text of the answer:
The default value for the type
attribute of button
elements is "submit". Set it to type="button"
to produce a button that doesn't submit the form.
<button type="button">Submit</button>
In the words of the HTML Standard: "Does nothing."
To be honest, I don't know either!
I don't know if it will solve your problem, but you are creating a ChatOpenAI model, which may not be suited to Mistral responses.
There is a class for Mistral models that looks like this:
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-nemo", mistral_api_key=_api_key)
Regards
Firstly, you can define a ghost sequence which clones the array. You can then write two-state lemmas about the array. I happened to write a blog post about a very similar situation here.
I faced this issue in my Flutter app and resolved it by increasing the Gradle JVM memory.
In android/gradle.properties
, update the line:
org.gradle.jvmargs=-Xmx512M
to:
org.gradle.jvmargs=-Xmx8G
Then run:
flutter clean
flutter run
If 8 GB isn’t enough, you can increase it further 16GB(e.g., -Xmx16G
).
I don't know if someone still needs this, but the alternative I found is using a Panel, setting its BorderStyle to FixedSingle and its height to 2px. You can also make vertical separator lines with the same method, except you set the width to 2px instead of the height. I was used to designing layouts in NetBeans with Java using javax.swing.JSeparator, and I was looking for it when I switched to C# with Visual Studio 2022.
You mustn't close the turtle window before it is done. You can add this line to keep the window open until you close it yourself:
turtle.done()  # put at the end
Sir!
Thank you so much! I had almost lost hope :)
Fixed it. The problem was that the XSRF token header was NOT set; I had to set it manually.
Use this tutorial, which uses TouchableOpacity from react-native: https://blog.logrocket.com/create-style-custom-buttons-react-native/
Using <TouchableOpacity />
to create custom button components
Now that you’ve set up the main screen, it’s time to turn your attention to the custom button component.
const AppButton = props => (
// ...
)
Name the custom button component AppButton
.
Import the <TouchableOpacity />
and <Text />
components from react-native
.
import { View, Button, StyleSheet, TouchableOpacity, Text } from "react-native";
To create custom buttons, you need to customize the <TouchableOpacity />
component and include the <Text />
component inside of it to display the button text.
const AppButton = ({ onPress, title }) => (
<TouchableOpacity onPress={onPress} style={styles.appButtonContainer}>
<Text style={styles.appButtonText}>{title}</Text>
</TouchableOpacity>
);
Two months later, it still requires the user to add the prompt manually.
Looking at your sample \S+?\.c
should work.
\S
checks for the 1st non whitespace character and matches
+?
quantifies this match for as few as possible characters
\.c
matches the dot and c
The fuse operation and the script itself will work correctly if you change the negative dx to a positive one (and adjust the x origin accordingly) in addRectangle().
Here are the key Google documents that explain the correct process:
Gmail IMAP Extensions - Access to Gmail labels (X-GM-LABELS): this documents the X-GM-LABELS IMAP attribute. You can use the STORE command with this attribute to modify the labels on a message. The documentation explicitly lists \Trash and \Spam as valid labels you can add. This is the correct, Google-supported IMAP command for applying the "Trash" label to a message, which is the necessary first step for deletion.
How Gmail works with IMAP clients - "How messages are organized": applying the \Trash or \Spam label is the specific action that removes a message from the general "All Mail" archive, putting it into a state where it can be permanently deleted.
Based on the documentation, the reliable way to move a message to the Trash and permanently delete it is:
Step 1: Apply the \Trash label. Use the UID STORE ... +X-GM-LABELS (\Trash) command on the message in its original folder (e.g., INBOX). This effectively moves it to the [Gmail]/Trash folder.
Step 2: SELECT the "[Gmail]/Trash" folder, mark that same message with the \Deleted flag, and then run the EXPUNGE command.
In Gmail, the traditional concept of folders is replaced by a more flexible system of labels. The only true "folder" that holds all of your email is the "All Mail" archive. Everything else that appears to be a folder, including your Inbox, is simply a label applied to a message. See:
How Gmail Works with IMAP Clients ("How messages are organized")
Creating Labels to Organize Gmail (User Guide)
Access to Gmail Labels via IMAP (X-GM-LABELS) (Developer Guide)
The last document covers the X-GM-LABELS attribute. Even system-critical locations like the Inbox, Spam, and Trash are treated as special labels (\Inbox, \Spam, \Trash). This confirms that from a technical standpoint, there is no "move" operation between folders, only the adding and removing of labels on the single message copy that always resides in "All Mail" until it is moved to Trash or Spam.
). This confirms that from a technical standpoint, there is no "move" operation between folders, only the adding and removing of labels on the single message copy that always resides in "All Mail" until it is moved to Trash or Spam.Thank you, everyone. I had the same issue, and it was resolved after using the proper winutils version for Spark 4.0.0/Hadoop 3.4.x. You can download it from https://github.com/kontext-tech/winutils/tree/master
Copy the entire bin from hadoop-3.4.0-win10-x64
Paste in C:\Hadoop (it will look like C:\Hadoop\bin)
Add variable HADOOP_HOME = C:\Hadoop
Add Path C:\Hadoop\bin
Optional (If above doesn't work) - Add hadoop.dll to C:\Windows\System32 (Suggested by 1 of the commenters on this post)
I have got this working using the ShadowDOM example.
The critical piece (as @ghiscoding indicates) was to set the options.shadowRoot item.
I believe errors like TypeError: Cannot read properties of null (reading '0') were from extra controls I had taken from a different example; I'm commenting these out for now.
Just enable the 32-bit application option in IIS and your issue will be resolved.
For the setting, follow this path:
Application Pools -> right-click the application pool -> Advanced Settings -> Enable 32-Bit Applications -> True
I have found that the (new?) plugin is now called NppTextFx2 in the plugins admin.
I recently came across this package, which applies a shimmer effect to any widget, but it has some limitations.
You cannot use it on an image: even if you apply Colors.transparent as the baseColor, it still won't make the image appear.
The point is, no matter which colour or child your widget has, after applying Shimmer.fromColors only the baseColor is used as the background colour and the highlightColor as the effect.
Oh wow, I remember struggling with unpacking an XAPK too 😅 I ended up finding tools on sites like https://apkjaka.com/ that made it way easier. Have you tried that route before?
The answer that I gave myself is the following:
a has a length of 32, so b can be used as an index where, for the first half, I index from 0 to 15 and, for the second half, from 16 to 31, just by using the low hexadecimal digit (mask 0x0F).
Then I can use the extra space to encode one more piece of information: whether I need to perform the operation at all. In this case, 0 is treated as the "no value" by just using the high bit of the remaining part of the hexadecimal digit, 0x80.
I also agree with @Homer512's comment, which stated that this makes it useful for OR operations as well.
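A hedged Python sketch of that encoding (the names and the explicit half-selection flag are mine, not from the original): the low nibble picks the position inside one 16-entry half of a, and the high bit says whether to do anything at all.
a = list(range(32))  # the 32-entry array

def lookup(b, second_half=False):
    if not (b & 0x80):   # high bit clear: the "no value", do nothing
        return None
    idx = b & 0x0F       # low hexadecimal digit: 0..15
    if second_half:
        idx += 16        # 16..31 for the second half of a
    return a[idx]

print(lookup(0x83))                     # -> a[3]
print(lookup(0x83, second_half=True))   # -> a[19]
print(lookup(0x03))                     # high bit not set -> None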
It looks like it's not possible by default, but there is a workaround. I haven't tried it myself, so this relies solely on the accepted answer from an AWS support engineer.
I also have a problem with cordova-plugin-admobpro after updating to API 35: rewarded video is not shown. It seems that the plugin relies on an outdated SDK version (20.4.0):
https://developers.google.com/admob/android/deprecation
I tried specifying more recent versions, such as 23.2.0, with the following command:
cordova plugin add cordova-plugin-admobpro --save --variable PLAY_SERVICES_VERSION=23.2.0 --variable ADMOB_ANDROID_APP_ID="ca-app-pub-***~***"
Unfortunately, it doesn’t appear to work with these newer versions, if I understand correctly.
I sent a letter to the author, Raymond Xie (floatinghotpot), and am waiting for feedback. If he doesn't answer, I'll try other projects, but cordova-plugin-admobpro was the most convenient IMHO.
Use the new version of Quill:
npm i react-quill
I am also facing this issue even after updating dependencies. Any solution ?
I am trying to compile 3.6.9 since it is needed for dependencies (Pulsar), and it gets stuck there as well on an RPi 4 with GCC 12.
make altinstall looks OK, but I need to install it system-wide; any tips?
Best regards
With a bit of testing and from the comments of people helping, I have come to a conclusion.
#define T1ms 16000 // assumes using 16 MHz PIOSC (default setting for clock source)
That preprocessor directive is wrong in my case because I am not using an L293D but a different kind of motor driver specific to stepper motors, a DM332T driver from StepperOnline (OMC).
Now, if I define T1ms as 400, unlike in the example previously listed for the internal clock frequency, I can move my stepper in one direction at a higher RPM. So something like this:
#define T1ms 400
Or...if I was risky, I could test with:
#define T1ms 1 // this is if I would like 400 RPM with the current driver config
See, the driver has a couple of internal DIP switch settings that can be altered on the outside of the driver. These DIP switch settings allow for a faster RPM or more steps per revolution.
I have been reading theory recently and learning about how to control the STEP of the stepper motors and direction of the stepper motors too. I have been reading from here: https://www.orientalmotor.com/stepper-motors/technology/stepper-motor-basics.html
With the preprocessor directive T1ms set to 1, I would need to fasten my motor to something heavy so as not to throw safety to the wind; this way, the motor will not come loose from its mount. I think, given my question, this is the answer I was looking for.
Q: How can I make the motor move faster than what the internal clock allows?
A: Use a driver with DIP switches and let the driver handle the drive timing.
I created a new simple Flutter project and am trying to load a Tiled map on Firebase.
My folder structure:
assets/
├── image.png
└── map.tmx
In my pubspec.yaml
I declared:
flutter:
  assets:
    - assets/
This is my map.tmx
<?xml version="1.0" encoding="UTF-8"?>
<map version="1.10" tiledversion="1.11.2" orientation="orthogonal" renderorder="right-down" width="30" height="20" tilewidth="32" tileheight="32" infinite="0" nextlayerid="2" nextobjectid="1">
<tileset firstgid="1" name="image" tilewidth="32" tileheight="32" tilecount="60" columns="6">
<image source="image.png" width="192" height="336"/>
</tileset>
<layer id="1" name="Tile Layer 1" width="30" height="20">
<data encoding="csv">
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,
1,0,0,0,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,0,1,
1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,1,1,1,1,1,1,1,1,
1,0,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,0,0,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,0,
1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,
1,0,1,1,1,1,1,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,
0,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
</data>
</layer>
</map>
But I still get this error:
Unable to load asset: "assets/tiles/map.tmx".
The asset does not exist or has empty data.
See also: https://docs.flutter.dev/testing/errors
Where does the assets/tiles/ path come from?
Thanks, everyone.
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
| SL.NO | CROP NAME | RAINFALL | WEATHER CONDITIONS | NATURE OF CROP | SOIL TYPE | GLOBAL RANKING IN EXPORT* | PLACE OF AVAILABILITY (India) |
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
| 1 | Rice | 100–200 cm | Warm & humid; 22–32°C | Kharif | Clayey/alluvial loam | India ≈ #1 exporter | WB, UP, Punjab, Bihar, Odisha, TN, Assam etc. |
| 2 | Wheat | 50–75 cm | Cool, dry; 10–15°C grow, | Rabi | Well-drained loam | Mainly domestic use | UP, Punjab, Haryana, MP, Rajasthan, Bihar |
| | | | 21–26°C ripening | | | | |
| 3 | Jowar (Sorghum) | 45–75 cm | Warm, drought tolerant | Kharif (some Rabi) | Sandy loam/black soils | Small exporter | Maharashtra, Karnataka, Telangana, AP, MP |
| 4 | Bajra (Pearl millet) | 25–50 cm | Hot & arid; 25–35°C | Kharif | Sandy/loamy, light | Small exporter | Rajasthan, Gujarat, Haryana, UP, Maharashtra |
| 5 | Ragi (Finger millet) | 70–100 cm | Cool–warm; 18–28°C | Kharif (some Rabi) | Red loam/lateritic | Small exporter | Karnataka, TN, Uttarakhand, Sikkim, Himachal |
| 6 | Maize | 50–100 cm | Warm; 21–27°C | Kharif (also Rabi) | Fertile loam/alluvial | Minor exporter | Karnataka, MP, Bihar, UP, Telangana, AP, MH |
| 7 | Pulses (Chana, Arhar etc.) | 25–50 cm | Warm; dry at ripening | Rabi & some Kharif | Loam/black soils | Net importer | MP, Maharashtra, Rajasthan, UP, Karnataka |
| 8 | Sugarcane | 75–150+ cm | Warm; 21–27°C; frost-free | Plantation/Annual | Deep loam/alluvial | Brazil #1, India also exp. | UP, Maharashtra, Karnataka, TN, AP, Punjab |
| 9 | Oilseeds (Groundnut etc.) | 25–75 cm | Warm; 20–30°C | Mostly Kharif | Loam/black cotton | Limited exports | Gujarat, Rajasthan, MP, Maharashtra, AP, KA |
| 10 | Tea | 150–300 cm | Cool, humid; 15–25°C | Plantation | Acidic lateritic | Top 4–5 exporter | Assam, WB (Darjeeling), Kerala, TN, Karnataka |
| 11 | Coffee | 150–250 cm | Cool, shaded; 15–28°C | Plantation | Loam/laterite | Top 8–10 exporter | Karnataka (Kodagu), Kerala (Wayanad), TN |
| 12 | Horticulture (F&V) | Crop-spec. | Crop-specific | Varies | Fertile, well-drained | India #2 producer | Maharashtra (grapes), AP (mango), UP (potato) |
| 13 | Rubber | 200+ cm | Hot, humid; >25°C | Plantation | Lateritic/red loam | Not major exporter | Kerala, Karnataka, TN, NE states |
| 14 | Cotton | 50–100 cm | Warm; 21–30°C; frost-free | Kharif | Black cotton (regur) | Top 2–3 exporter | Maharashtra, Gujarat, Telangana, AP, MP etc. |
| 15 | Jute | 150–200 cm | Hot, humid; 24–35°C | Kharif | Alluvial delta soils | Top 2 (with Bangladesh) | WB, Bihar, Assam, Odisha, Meghalaya |
+-------+----------------------------+-------------+-----------------------------+----------------------+-------------------------+-----------------------------+-----------------------------------------------+
According to the feedback from the GCC team, the issue that causes an internal compiler error in GCC is that GCC also does not reject the structured binding as an invalid template argument.