<script setup>
import { inject } from 'vue';
const route = inject('route');
console.log(route('klant.index'));
</script>
Look for <script setup> on this page for the Vue 3 Composition API setup: https://github.com/tighten/ziggy?tab=readme-ov-file#installation
We found this in one of our CSS files. The error/fix is simple: the message is gibberish, but it indicates that your CSS braces are not balanced. We had it with this:
body {
/* pages of CSS here */
.xxx {
}
Adding a brace and balancing everything made it go away.
Instead of auth.last_sign_in_at I think we can use auth.email_confirmed_at as well.
The Power BI dataset is tabular and, as such, Deneb processes and generates the supplied Power BI data as a tabular dataset in the Vega view. There isn't the ability to type data from the semantic model in a way that allows us to detect if it's JSON (and infer that it's spatial). We also can't have multiple queries per visual due to constraints in Power BI, so the tabular dataset was prioritized for greater flexibility for developers. If you are using the certified visual, you can currently only use the hard-coded approach that @davidebacci has identified.
I have some ideas for "post v2", where we could supply a valid scalar TopoJSON object from a semantic model via a conditional formatting property in the properties pane, which circumvents the query limit but allows us to potentially treat such a property and inject it as a spatial dataset (provided it parses). We'll also need to consider what this means for reuse via templates, as it creates an additional kind of dependency beyond the traditional tabular dataset that we currently assume.
Note that v2 is still under active development. As we're discussing potentially after this timeframe, it is not a valid short-term (or possibly medium-term) solution. However, I wanted to let you know that I'm aware of it as a limitation and am considering how to solve it in a future iteration.
I have tackled this same issue and did a lot of searching. Could not find the answer, so with some trial and error I found a little more information. I'll add it to this question, in case this keeps coming up.
The code in the answer by stackunderflow can be modified to display more information about every reference in the project. In particular, we care about printing VBRef.GUID. If we then search the registry for the GUID, there should be a hit in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office area. There were a bunch of other hits I had to ignore - for my plugin the key chain went down the ClickToRun path. Eventually there are some default values of type 0x40001, and one of them will contain the full path as binary data in UTF-16 format. Double-click to edit the binary data, which will also show the characters on the right side.
I verified this by modifying the binary data in RegEdit - Excel then showed the modified path in the reference list.
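For reference, a minimal sketch of the listing code (this assumes Excel's "Trust access to the VBA project object model" is enabled; the macro name is just illustrative):
Sub ListReferences()
    Dim VBRef As Object
    ' Print the name, GUID and resolved path of every reference in this project
    For Each VBRef In ThisWorkbook.VBProject.References
        Debug.Print VBRef.Name, VBRef.GUID, VBRef.FullPath
    Next VBRef
End Sub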
Check Virus & threat protection > Real-time protection and allow the folder.
I have a similar problem; I could not find where to change the title for https://www.amphasisdesign.com/products . Please help.
After some digging, I found that objects wrapped in a Svelte 5 state rune don't behave just like a normal Object (unlike in Svelte 4), as $state(...) wraps plain objects/arrays in a Svelte Proxy. This is what led to the error: IndexedDB (and Node's structuredClone) cannot serialize these Proxies, so Dexie throws DataCloneError: #<Object> could not be cloned.
The fix is to simply replace the plain object spread with $state.snapshot(), which takes a static serializable snapshot of a deeply reactive $state proxy:
- const dirty = { meta: { ...meta } }
+ const dirty = { meta: $state.snapshot(meta) }
It looks like you are relying on the persistence of the value of count; however, in Google Apps Script, the global context is executed on each execution.
One option is to use the Properties Service to store the value, but in this case, you might find it more convenient to get the column hidden state by using isColumnHiddenByUser(columnPosition).
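For illustration, a minimal sketch (the column index is just an example):
function checkColumnHidden() {
  // Returns true if the user has hidden the given column (here: column C)
  const sheet = SpreadsheetApp.getActiveSheet();
  return sheet.isColumnHiddenByUser(3);
}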
You cannot get a Google profile photo from Cloud IAP alone. IAP gives you identity for access control, not an OAuth access token for Google APIs. The headers and the IAP JWT are only meant for your app to verify who the caller is. They do not include a profile picture and they are not valid to call Google APIs like People.
import moviepy.editor as mp
# Open the GIF file
gif_path = "/mnt/data/earthquake_shake.gif"
video_path = "/mnt/data/earthquake_shake.mp4"
# Convert to video
clip = mp.VideoFileClip(gif_path)
clip.write_videofile(video_path, codec="libx264", fps=10)
Are you resetting the EPC pointer after every write? If not, you are seeing 23 instead of 24 (and 21 instead of 23) because the printer did write the value you asked for, but the reader is returning the next sequential EPC value.
This is a known quirk on Gen-2 UHF tags when the EPC pointer is left at an offset ≠ 0 after a previous operation.
The issue lies in the library, to be more specific, the regex that grabs the Android version.
As noted by @loremus, this library is no longer maintained and should not be used.
But to fix the issue, look for /android ([0-9]\.[0-9])/i as in the snippet below, and change it to /android ([0-9]+(?:\.[0-9]+)?)/i
function _getAndroid() {
var android = false;
var sAgent = navigator.userAgent;
if (/android/i.test(sAgent)) { // android
android = true;
var aMat = sAgent.toString().match(/android ([0-9]\.[0-9])/i);
if (aMat && aMat[1]) {
android = parseFloat(aMat[1]);
}
}
return android;
}
When I interrupt model training and the GPU remains at full memory, this helps me. But be careful: it also kills the Python env kernel.
pkill -f python
The ESLint warning you're seeing in VSCode—"Unexpected nullable string value in conditional. Please handle the nullish/empty cases explicitly"—comes from the @typescript-eslint/strict-boolean-expressions rule. This rule enforces that conditionals must be explicit when dealing with potentially null or undefined values.
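A minimal sketch of the kind of rewrite the rule wants (the variable name and values are made up):
const userName: string | null | undefined = Math.random() > 0.5 ? 'Ada' : null;

// if (userName) { ... }   // flagged: truthiness check on a nullable string

// Handling the nullish and empty cases explicitly satisfies the rule
if (userName !== null && userName !== undefined && userName !== '') {
  console.log(userName.toUpperCase());
}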
I'm not an expert, but I'm pretty sure you shouldn't use an initializer on a view. That code should be in the onAppear method. You can't count on the init method. You have the transaction in the binding.
I eventually found the issue that I had a typo in my code.
However, within the tab panel, one can just access the props of the component.
// bobsProps is passed in via the component's props, so it is accessible.
<TabPanel bob={bobsProps} value={value} index={2}>
<div>{bobsProps.somevalue}</div>
</TabPanel>
Never mind, I tried a different way: I added this, and at least it prints all of them in the terminal:
df = pd.DataFrame(data)
dflst = []
dflst.append(df)
print(dflst)
all_data()
print(all_data())
I'd accept any insight to make my understanding of the whole thing better. Thanks for reading, and sorry to bother.
You can’t use switch for value ranges — it only works with fixed cases. If you need to set an image source based on ranges of ratio, you’ll need to use if/else statements (or a lookup function) instead. That way you can handle conditions like < 20, >= 20 && < 50, etc.
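A minimal sketch of that approach (the thresholds and file names are just placeholders):
// Map a numeric ratio to an image source using range checks
function imageForRatio(ratio) {
  if (ratio < 20) return 'low.png';
  if (ratio >= 20 && ratio < 50) return 'medium.png';
  return 'high.png';
}

document.getElementById('chart').src = imageForRatio(42); // -> 'medium.png'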
On Windows 11, the correct syntax is:
curl -X POST "http://localhost:1000/Schedule/0" -H "accept: */*" -H "Content-Type: application/json" -d "{ \"title\": \"string\", \"timeStart\": \"21:00:00\", \"timeEnd\": \"22:00:00\" }"
only 1 backslash before each double quote inside --data
I've found the cause. PHPStorm adds these parameters when you open the index page via ALT+F2
:
?_ijt=pdprfcc6u90jpqpfgc0hfk2mk3&_ij_reload=RELOAD_ON_SAVE
My code automatically preserves URL parameters, so this was causing the devenv to return the extra payload.
Just one DAG is enough!
Five tasks:
IsAlive - Check if the streaming app is alive. If no, jump to task 4.
IsHealthy - Check if the app is performing as expected. If yes, jump to task 5.
Shutdown - Finishes the app.
Start - Starts the app.
Log - Tracks the status and acts.
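A rough sketch of how that could be wired with branching tasks (assuming Airflow 2.x; the stub callables, schedule, and trigger rules are placeholders, not the only way to do it):
from datetime import datetime
from airflow import DAG
from airflow.operators.python import BranchPythonOperator, PythonOperator

# Stub probes/actions - replace with real checks against the streaming app
def check_alive(): return True
def check_healthy(): return True
def stop_app(): print("stopping app")
def start_app(): print("starting app")
def log_status(): print("logging status")

def branch_on_alive():
    # Task 1: if the app is not alive, jump straight to Start (task 4)
    return "is_healthy" if check_alive() else "start"

def branch_on_healthy():
    # Task 2: if the app is healthy, jump straight to Log (task 5)
    return "log" if check_healthy() else "shutdown"

with DAG("streaming_watchdog", start_date=datetime(2024, 1, 1),
         schedule="*/10 * * * *", catchup=False) as dag:
    is_alive = BranchPythonOperator(task_id="is_alive", python_callable=branch_on_alive)
    is_healthy = BranchPythonOperator(task_id="is_healthy", python_callable=branch_on_healthy)
    shutdown = PythonOperator(task_id="shutdown", python_callable=stop_app)
    start = PythonOperator(task_id="start", python_callable=start_app,
                           trigger_rule="none_failed_min_one_success")
    log = PythonOperator(task_id="log", python_callable=log_status,
                         trigger_rule="none_failed_min_one_success")

    is_alive >> [is_healthy, start]
    is_healthy >> [log, shutdown]
    shutdown >> start
    start >> log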
I am also following along in the book "Creating Apps with kivy", and ran into the same issue in Example 1-5. ListView is deprecated and replaced with RecycleView. I took the response from Nourless, and stripped it down to the bare essentials to match the example in the book. I found the following code worked in place of ListView. As the author goes through the book, I am guessing he will add the layout information one step at a time.
RecycleView:
data: [{'text':'Palo Alto, MX'}, {'text':'Palo Alto, US'}]
The answer here was to click Build -> Load All (Ctrl-Shift-L), or equivalently run devtools::load_all("path/to/my/project"), which loads the correct things into scope.
Your idea is good. Look into PVM clusters, not MPI. Nowadays clusters are all MPI; the AWS clusters and the latest Apache ones are trying to redo what openMosix's PVM did. I think that is what you imagined: you make your program run with many threads and, magically, your thread shows up ready to go. That is a PVM.
Not working, please fix this ASAP.
Maybe you should take a look at their code example and search for the part using Markers. If this is not enough, you should ask the maintainers of the lib directly through the issues section of the repository.
As of now, this seems to be impossible, short of patching Java yourself. There is an upstream bug report: https://bugs.openjdk.org/browse/JDK-8290140 and Fedora might patch it: https://bugzilla.redhat.com/show_bug.cgi?id=1154277
//.htaccess
RewriteEngine On
RewriteBase /AbhihekDeveloper
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [PT,L]
The calendar app is just .toolbar, nothing too complicated. Using the new toolbar stuff, it's built in a couple of minutes.
Calendar App:
private let days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ,16, 17, 18, 19, 20, 22, 23, 24, 25] //Just example dont implement like this
private let columns = [GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible()), GridItem(.flexible())] // 7 days
var body: some View {
NavigationView {
VStack {
ScrollView {
Text("May")
.font(.largeTitle.bold())
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
LazyVGrid(columns: columns) {
ForEach(days, id: \.self) { day in
Text("\(day)")
.font(.title3)
.padding(5)
.padding(.vertical, 10)
}
}
.padding()
Text("June")
.font(.largeTitle.bold())
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
LazyVGrid(columns: columns) {
ForEach(days, id: \.self) { day in
Text("\(day)")
.font(.title3)
.padding(5)
.padding(.vertical, 10)
}
}
.padding()
}
}
.toolbar {
ToolbarItem(placement: .topBarLeading) {
Label("2025", systemImage: "chevron.left")
.labelStyle(.titleAndIcon)
.frame(width: 75) // Have to set it for the ToolbarItem or only icon is visible
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "server.rack") //or whatever
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "magnifyingglass")
}
ToolbarItem(placement: .topBarTrailing) {
Image(systemName: "plus")
}
ToolbarItem(placement: .bottomBar) {
Image(systemName: "pencil")
}
ToolbarSpacer(placement: .bottomBar)
ToolbarItem(placement: .bottomBar) {
Image(systemName: "exclamationmark.circle")
}
ToolbarItem(placement: .bottomBar) {
Image(systemName: "tray")
}
}
}
}
Now the Fitness app is a little bit more challenging. I didn't come up with a perfect solution, but the basics work. I chose .navigationTitle() and just a plain VStack with the chips, as you can see. It doesn't have a blur, but the basics are there. The TabView uses just the basic Tab. It could be refactored into the .toolbar too, with a custom title?
Fitness App:
struct FitnessAppView: View {
var body: some View {
TabView {
//Different views
Tab("Fitness+", systemImage: "ring") {
FitnessRunningView()
}
Tab("Summary", systemImage: "figure.run.circle") {
FitnessRunningView()
}
Tab("Sharing", systemImage: "person.2") {
FitnessRunningView()
}
}
}
}
struct FitnessRunningView: View {
var body: some View {
NavigationView {
ZStack {
VStack {
// Horizontal chips
ScrollView(.horizontal) {
HStack {
ChipView(text: "For you")
ChipView(text: "Explore")
ChipView(text: "Plans")
ChipView(text: "Library")
}
}
.scrollIndicators(.hidden)
// Main content
ScrollView {
VStack(spacing: 20) {
Text("Hello world!")
ForEach(0..<20) { i in
Text("Item \(i)")
.frame(maxWidth: .infinity)
.padding()
.background(.thinMaterial)
.cornerRadius(10)
}
}
.padding()
}
}
}
.navigationTitle("Fitness+")
}
}
}
struct ChipView: View {
var text: String
var body: some View {
Text(text)
.font(.title3)
.padding()
.glassEffect(.regular.interactive())
.padding(10)
}
}
Rejecting duplicate peerIds did not work for me. I kept an array of the sessions that I had started for all peerIds and when the advertiser triggered a call to session:peer:didChangeState: I did a disconnect and session=nil to all sessions in the array except the session that was finally connected.
I solved the problem by making the function that draws the messages also draw the line in the background, and by adding to the line's y coordinate the distance from the top of the message box to its center (which is always fixed) plus the total height of the box.
Check the generated output variable in the schema.prisma file and the location you are importing the Prisma client from. In my case, I located where the edge.d.ts file was, and it was in src/generated/prisma.
import { PrismaClient } from '../src/generated/prisma/edge'
generator client {
provider = "prisma-client-js"
output = "../src/generated/prisma"
}
I also encountered this just now and tried something. I set the polygon's pivot point to the bones pivot and voila... it works fine now. (Godot 4.2.1)
What I had to do to solve this error: go into my files, go to %APPDATA% (Roaming) and find the Jupyter folder in there. I was then prompted by Windows to allow admin permissions before entering. This fixed Anaconda when I went to check afterwards.
My set up: NX + Angular 19 with internal library.
For me, this bug occurs when all three conditions are met:
I am using a component without exporting it from the library
I am using that component inside @defer{} block
I am NOT using hmr.
What's really tricky is: if you are using HMR, this just works fine.
Seems like a nasty Angular bug.
Try using a different ssh-agent, e.g.:
ssh-agent bash
ssh-add ~/.ssh/id_ed25519
The thing that worked for me is to either connect to your mobile hotspot or, if you are already connected, change the network type to Private network.
You can set the number of concurrent processes used in your build using CMAKE_BUILD_PARALLEL_LEVEL in your CMake file. For example:
set(CMAKE_BUILD_PARALLEL_LEVEL 10)
is equivalent to specifying -j 10 on your cmake command line.
You may also want to consider another approach of making the Djoser emails async by default.
The way I did this was to subclass Djoser's email classes and override the send() method so it uses a Celery task. The accepted solution works for one-off tasks, but this method makes sure there is consistency across all email types.
users/tasks.py
from django.core.mail import EmailMultiAlternatives
from celery import shared_task
@shared_task(bind=True, max_retries=3)
def send_email_task(self, subject, body, from_email, to, bcc=None, cc=None, reply_to=None, alternatives=None):
    try:
        email = EmailMultiAlternatives(
            subject=subject,
            body=body,
            from_email=from_email,
            to=to,
            bcc=bcc or [],
            cc=cc or [],
            reply_to=reply_to or []
        )
        if alternatives:
            for alt in alternatives:
                email.attach_alternative(*alt)
        email.send()
    except Exception as exc:
        raise self.retry(exc=exc, countdown=60)
This is a generic task that sends any Django email. Nothing here is Djoser-specific.
users/email.py
from django.conf import settings
from djoser import email
from .tasks import send_email_task
class AsyncDjoserEmailMessage(email.BaseDjoserEmail):
    """
    Override synchronous send to use Celery.
    """
    def send(self, to, fail_silently=False, **kwargs):
        self.render()
        self.to = to
        self.cc = kwargs.pop("cc", [])
        self.bcc = kwargs.pop("bcc", [])
        self.reply_to = kwargs.pop("reply_to", [])
        self.from_email = kwargs.pop("from_email", settings.DEFAULT_FROM_EMAIL)
        self.request = None  # don't pass request to Celery
        send_email_task.delay(
            subject=self.subject,
            body=self.body,
            from_email=self.from_email,
            to=self.to,
            bcc=self.bcc,
            cc=self.cc,
            reply_to=self.reply_to,
            alternatives=self.alternatives,
        )
Any email that inherits from this class will be sent asynchronously.
Now you can combine Djoser's built-in emails with your async base:
class PasswordResetEmail(email.PasswordResetEmail, AsyncDjoserEmailMessage):
    template_name = 'email/password_reset.html'

    def get_context_data(self):
        context = super().get_context_data()
        user = context.get('user')
        context['username'] = user.username
        context['reset_url'] = (
            f"{settings.FRONTEND_BASE_URL}/reset-password"
            f"?uid={context['uid']}&token={context['token']}"
        )
        return context

class ActivationEmail(email.ActivationEmail, AsyncDjoserEmailMessage):
    template_name = 'email/activation.html'

    def get_context_data(self):
        context = super().get_context_data()
        user = context.get('user')
        context['username'] = user.username
        context['verify_url'] = (
            f"{settings.FRONTEND_BASE_URL}/verify-email"
            f"?uid={context['uid']}&token={context['token']}"
        )
        return context

class ConfirmationEmail(email.ConfirmationEmail, AsyncDjoserEmailMessage):
    template_name = 'email/confirmation.html'
You can do the same for:
PasswordChangedConfirmationEmail
UsernameChangedConfirmationEmail
UsernameResetEmail
Each one gets async sending for free, and you can add extra context if you need it.
If you want to override Djoser's email, you need to make sure you add yours to the global templates dir so your templates get used instead. Examples (templates/email/...):
password_reset.html
{% block subject %}Reset your password on {{ site_name }}{% endblock %}
{% block text_body %}
Hello {{ username }}!
You requested a password reset for your account. Click the link below:
{{ reset_url }}
{% endblock %}
{% block html_body %}
<h2>Hello {{ username }}!</h2>
<p>Click the link to reset:</p>
<a href="{{ reset_url }}">Reset Password</a>
{% endblock %}
activation.html
{% block subject %}Verify your email for {{ site_name }}{% endblock %}
{% block text_body %}
Hello {{ username }}, please verify your email:
{{ verify_url }}
{% endblock %}
{% block html_body %}
<h2>Hello {{ username }}!</h2>
<p><a href="{{ verify_url }}">Verify Email</a></p>
{% endblock %}
...and similarly for confirmation.html.
Make sure your settings.py
points at the template folder:
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        ...
    }
]
Add Djoser URLs:
urlpatterns = [
    path("users/", include("djoser.urls")),
    ...
]
Start Celery:
celery -A config worker -l info
(replace config
with your project name)
Trigger a Djoser action (e.g. reset_password
or activation
) and you'll see Celery run send_email_task
.
This way, all Djoser emails that inherit from AsyncDjoserEmailMessage become async, not just the password reset.
There is a fix for grid.setOptions so that it doesn't drop your toolbar customizations.
Just detach the toolbar before setOptions and then re-apply it afterwards.
toolBar = $("#" + GridName + " .k-grid-toolbar").detach();
grid.setOptions(options);
$("#" + GridName + " .k-grid-toolbar").replaceWith(toolBar);
This is a fairly widespread compatibility issue between the JavaFX D3D hardware pipeline and recent Intel Iris Xe graphics drivers on Windows, as confirmed by your tests with multiple driver and Java versions. The D3DERR_DEVICEHUNG error and resulting freezes or flickers are typical of JavaFX running into problems with the GPU driver—these issues go away when using software rendering or a discrete NVIDIA GPU, but those solutions either severely hurt performance or aren't generally available to all users. Currently, aside from forcing software rendering (which impacts speed) or shifting to an external GPU (not possible on all systems), there is no reliable JVM flag or workaround that fully addresses this; the root cause is a low-level bug or incompatibility which requires a fix from Intel or the JavaFX/OpenJFX developers. For now, the best course is to alert both Intel and OpenJFX via a detailed bug report and, in the interim, provide users with guidance to use software mode or reduce heavy GPU effects until an official update becomes available.
Powershell:
Remove-Item Env:\<VARNAME>
Example:
Remove-Item Env:\SSH_AUTH_SOCK
Hello, I've had the same issue. Have you found the solution? Could you please give me a hint if you solved this problem? Thanks in advance.
Simple! I should have mentioned the .exe was previously signed. The solution is to do:
signtool remove /s %outputfile%
before the rcedit. Then after that, signtool to sign - works fine.
Use this patch; it works for me:
https://github.com/software-mansion/react-native-reanimated/issues/7493#issuecomment-3056943474
Had the same issue. Try updating or using a new CLI
I fixed it by installing the latest version of IntelliJ IDEA, which has full support for newer Java language levels
+1 for the Loki recommendation. It is nice being able to query the Loki data in the Grafana UI. You can tail live logs from your pod using the label selector or pick a specific time range that you are interested in.
I figured out how to get the output that I needed. I'll post it here for others to see and comment on.
The way I did it was to also require jq as a provider, which then allowed me to run a jq_query data block. This is the full end to end conversion of the data sources:
locals {
instances_json = jsonencode([ for value in data.terraform_remote_state.instances : value.outputs ])
}
data "jq_query" "all_ids" {
data = local.instances_json
query = ".[] | .. | select(.id? != null) | .id"
}
locals {
instances = split(",", replace(replace(data.jq_query.all_ids.result, "\n", "," ), "\"", "") )
}
The last locals block is needed because the jq_query block returns multiple values, but the string is not in a standard JSON format, so we can't decode the string from JSON; we simply have to work around it. I replaced the "\n" characters with commas, and then replaced the \" with nothing, so that the end result would give me something I could use the split function on to split the values into a list.
Make sure to specify the uid when creating the user so that it will for sure match up with the uid specified for the cache. I was having permissions problems with the cache dir until I saw that the user that was created had uid 999.
useradd -u 1000 myuser
header 1 | header 2 |
---|---|
cell 1 | cell 2 |
cell 3 | cell 4 |
I had a case similar to the question above, and to solve it I did this:
columns = ["a", "b", "c"]
df[[*columns]]
This unpacks the column names and uses them to generate a new dataframe with only the column names in the columns list.
I found the error. The de-serialization code should use boost::archive::binary_iarchive ar(filter);
instead of boost::archive::binary_iarchive ar(f);
That yellow triangle isn’t the Problems counter. It’s a warning that you turned Problems off. VS Code added this in 1.85—when Problems: Visibility is off, it shows a status-bar warning by design.
Hide just that icon (and keep Problems hidden):
Right-click the status bar → Manage Status Bar Items (or run “Preferences: Configure Status Bar Items”).
Uncheck the entry for Problems (visibility off) to hide that warning item. This per-item visibility is persisted.
If you use SSH/WSL/Dev Containers: open the remote window and do the same there—remote windows keep their own settings/profile.
If you actually want Problems decorations back (and thus no warning), just re-enable Problems: Visibility in Settings.
Based on https://lit.dev/docs/components/decorators/ you can also add accessor before your property name to use standard decorator syntax instead of experimental decorator syntax.
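A minimal sketch (the element and property names are made up):
import { LitElement, html } from 'lit';
import { customElement, property } from 'lit/decorators.js';

@customElement('my-greeting')
export class MyGreeting extends LitElement {
  // `accessor` lets @property work with standard (non-experimental) decorators
  @property() accessor name = 'world';

  render() {
    return html`<p>Hello, ${this.name}!</p>`;
  }
}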
Use .localhost
*.localhost is reserved for local development (RFC 6761)
Natively recognized by browsers
No conflicts with real domains
Recommended Laragon Configuration
Modify domains in Laragon:
Menu → Preferences → General
Change {name}.dev to {name}.localhost
Generate wildcard certificate:
# Install mkcert if not already done
mkcert -install
# Create wildcard certificate for localhost
mkcert "*.localhost" localhost 127.0.0.1 ::1
In TYPO3 v12 or later, use the PageRepository: https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Deprecation-97027-ContentObjectRenderer-getTreeList.html
Perfect! The code you sent already generates a simple electronic-funk style track using pydub. It creates a 4-beat loop with kick, snare and hi-hat and exports it to a 2 min 30 s MP3 file.
If you want to listen to the generated track, just run the code in a Python environment with pydub and ffmpeg available (ffmpeg is required to export MP3).
Here is a ready-to-run version with small improvements:
from pydub.generators import Sine
from pydub import AudioSegment

# Beat settings
bpm = 150
beat_duration_ms = int((60 / bpm) * 1000)  # duration of one beat in ms (~400 ms)
total_duration_ms = 2 * 60 * 1000 + 30 * 1000  # 2 min 30 s

# Basic sounds
kick = Sine(60).to_audio_segment(duration=beat_duration_ms).apply_gain(+6)
snare = Sine(200).to_audio_segment(duration=100).apply_gain(-3)
hihat = Sine(8000).to_audio_segment(duration=50).apply_gain(-15)

# Build one simple electronic-funk bar
def make_bar():
    bar = AudioSegment.silent(duration=beat_duration_ms * 4)
    # Kick on beats 1 and 3
    bar = bar.overlay(kick, position=0)
    bar = bar.overlay(kick, position=beat_duration_ms * 2)
    # Snare on beats 2 and 4
    bar = bar.overlay(snare, position=beat_duration_ms)
    bar = bar.overlay(snare, position=beat_duration_ms * 3)
    # Hi-hat on every beat
    for i in range(4):
        bar = bar.overlay(hihat, position=beat_duration_ms * i)
    return bar

# Build the main loop
bar = make_bar()
song = AudioSegment.silent(duration=0)
while len(song) < total_duration_ms:
    song += bar

# Export as MP3
output_path = "funk_moderno.mp3"
song.export(output_path, format="mp3")
print(f"Song generated at: {output_path}")
After running it, you will have a funk_moderno.mp3 file in the same folder, ready to listen to.
If you want, I can improve this track by adding variations, effects, or a bass line so it sounds more "professional" and closer to modern electronic funk. Want me to do that?
I had the same problem as you, and here is my solution:
You must define
DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@db:5432/${DB_DATABASE}
inside docker-compose so the backend service can connect to the Postgres DB (note the host must match the service name, db). Here is my docker-compose file:
version: '4.0'
services:
  db:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_DATABASE}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
  backend:
    build: .
    container_name: backend
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@db:5432/${DB_DATABASE}
    depends_on:
      - db
    volumes:
      - .:/app
      - /app/node_modules
volumes:
  db_data:
Then change the host (DB_HOST) in the .env file to "db" (because you named the Postgres service "db" in the docker-compose file):
PORT=3000
DB_HOST=db
DB_PORT=5432
DB_USERNAME=postgres
DB_PASSWORD=123456
DB_DATABASE=auth
The TypeORM config:
TypeOrmModule.forRootAsync({
  imports: [ConfigModule],
  useFactory: (configService: ConfigService) => ({
    type: 'postgres',
    host: configService.get('DB_HOST'),
    port: +configService.get('DB_PORT'),
    username: configService.get('DB_USERNAME'),
    password: configService.get('DB_PASSWORD'),
    database: configService.get('DB_DATABASE'),
    entities: [__dirname + '/**/*.entity{.ts,.js}'],
    synchronize: true,
    logging: true
  }),
  inject: [ConfigService],
}),
Here is an update: I have written an updated version of the code using dynamic allocation for all the matrices. It works quite well in parallel too (I have tested it up to 4096x4096); the only minor issue is that, with the largest size tested, I had to turn off the call to the "print" function because it stalled the program.
Inside the block-multiplication function there is now a condition on all three inner loops to handle the case where the row and column counts cannot be divided evenly by the block dimension, using the fmin() function with this syntax:
for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
{
for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
{
for(int k=kk;k<fmin(kk+blockSize, rowsA); ++k)
{
matC[i][j] += matA[i][k]*matB[k][j];
I tried this approach also in the early version of the serial code, but for some reason it didn't work, probably because I made some logical mistakes.
Anyway, this code does not work on rectangular matrices; if you try to run it with two rectangular matrices you will get an error because the pointers write outside the memory areas they are supposed to work in.
I tried to think about how to convert all the checks and mathematical conditions required for rectangular matrices into working code, but I had no success; I admit it's beyond my skills. If anyone has code (maybe from past examples or from some source on the net) that could be used, it would be an extra addition to the algorithm. I searched a lot both here and on the internet but found nothing.
Here is the updated full code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>
/* run this program using the console pauser or add your own getch, system("pause") or input loop */
// function for product block calculation between matri A and B
void matMultDyn(int rowsA, int colsA, int rowsB, int colsB, int blockSize, int **matA, int **matB, int **matC)
{
double total_time_prod = omp_get_wtime();
#pragma omp parallel
{
#pragma omp single
{
//int num_threads=omp_get_num_threads();
//printf("%d ", num_threads);
for(int ii=0; ii<rowsA; ii+=blockSize)
{
for(int jj=0; jj<colsB; jj+=blockSize)
{
for(int kk=0; kk<rowsA; kk+=blockSize)
{
#pragma omp task depend(in: matA[ii:blockSize][kk:blockSize], matB[kk:blockSize][jj:blockSize]) depend(inout: matC[ii:blockSize][jj:blockSize])
{
for(int i=ii; i<fmin(ii+blockSize, rowsA); ++i)
{
for(int j=jj; j<fmin(jj+blockSize, colsB); ++j)
{
for(int k=kk;k<fmin(kk+blockSize, rowsA); ++k)
{
matC[i][j] += matA[i][k]*matB[k][j];
//printf("Hello from iteration n: %d\n",k);
//printf("Test valore matrice: %d\n",matC[i][j]);
//printf("Thread Id: %d\n",omp_get_thread_num());
}
}
}
}
}
}
}
}
}
total_time_prod = omp_get_wtime() - total_time_prod;
printf("Total product execution time by parallel threads (in seconds): %f\n", total_time_prod);
}
//Function for printing of the Product Matrix
void printMatrix(int **product, int rows, int cols)
{
printf("Resultant Product Matrix:\n");
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
printf("%d ", product[i][j]);
}
printf("\n");
}
}
int main(int argc, char *argv[]) {
//variable to calculate total program runtime
double program_runtime = omp_get_wtime();
//matrices and blocksize dimensions
int rowsA = 256, colsA = 256;
int rowsB = 256, colsB = 256;
int blockSize = 24;
if (colsA != rowsB)
{
printf("No. of columns of first matrix must match no. of rows of the second matrix, program terminated");
exit(EXIT_SUCCESS);
}
else if(rowsA != rowsB || rowsB != colsB)
{
blockSize= 1;
//printf("Blocksize value: %f\n", blockSize);
}
//variable to calculate total time for inizialization procedures
double init_runtime = omp_get_wtime();
//Dynamic matrices pointers allocation
int** matA = (int**)malloc(rowsA * sizeof(int*));
int** matB = (int**)malloc(rowsB * sizeof(int*));
int** matC = (int**)malloc(rowsA * sizeof(int*));
//check for segmentation fault
if (matA == NULL || matB == NULL || matC == NULL)
{
fprintf(stderr, "out of memory\n");
exit(0);
}
//------------------------------------ Matrices initializazion ------------------------------------------
// MatA initialization
//#pragma omp parallel for
for (int i = 0; i < rowsA; i++)
{
matA[i] = (int*)malloc(colsA * sizeof(int));
}
for (int i = 0; i < rowsA; i++)
for (int j = 0; j < colsA; j++)
matA[i][j] = 3;
// MatB initialization
//#pragma omp parallel for
for (int i = 0; i < rowsB; i++)
{
matB[i] = (int*)malloc(colsB * sizeof(int));
}
for (int i = 0; i < rowsB; i++)
for (int j = 0; j < colsB; j++)
matB[i][j] = 1;
// matC initialization (Product Matrix)
//#pragma omp parallel for
for (int i = 0; i < rowsA; i++)
{
matC[i] = (int*)malloc(colsB * sizeof(int));
}
for (int i = 0; i < rowsA; i++)
for (int j = 0; j < colsB; j++)
matC[i][j] = 0;
init_runtime = omp_get_wtime() - init_runtime;
printf("Total time for matrix initialization (in seconds): %f\n", init_runtime);
//omp_set_num_threads(8);
// function call for block matrix product between A and B
matMultDyn(rowsA, rowsA, rowsB, colsB, blockSize, matA, matB, matC);
// function call to print the resultant Product matrix C
printMatrix(matC, rowsA, colsB);
// --------------------------------------- Dynamic matrices pointers' cleanup -------------------------------------------
for (int i = 0; i < rowsA; i++) {
free(matA[i]);
free(matC[i]);
}
for (int i = 0; i < colsB; i++) {
free(matB[i]);
}
free(matA);
free(matB);
free(matC);
//Program total runtime calculation
program_runtime = omp_get_wtime() - program_runtime;
printf("Program total runtime (in seconds): %f\n", program_runtime);
return 0;
}
To complete the testing and comparison of the code, I will create a machine on Google Cloud equipped with 32 cores, so I can see how the code runs on an actual 16-core machine and then with 32 cores.
For reference, I'm running this code on my MSI notebook, which is equipped with an Intel i7 11800, 8 cores at 3.2 GHz, and can manage up to 16 threads concurrently; the reason to test on Google Cloud is that I want to have the software run on a "real" 16-core machine, where one thread runs on one core, and then scale further up to 32 cores.
With the collected data I will then draw some graphs for comparison.
In newer PhpStorm versions: File > Settings > PHP
I would split optimization into two parts: TTFB (time to first byte) optimization and the frontend optimization.
To optimize TTFB:
Connect your Magento store to a PHP profiler. There are several options, you can google for them.
Inspect the diagram and see if you can find a function call that takes too much time.
Optimize that function call. In 90% of the cases I dealt with, the slowness came from a 3rd-party extension.
To optimize the frontend:
Minify and compress JS and CSS. You can turn it on at Stores > Configuration > Advanced > Developer > CSS and JS settings
Serve images in WebP or AVIF formats to cut page weight
Use GZIP compression
Inline critical CSS and JS (critical CSS/JS is what needs to be render above-the-fold content) and lazy load all the rest
Use as few 3rd-party JS libraries/scripts as possible
Remove redundant CSS and JS
Good luck!
I found the issue. It wasn't with the dataset format; it was with the LLM I used. It wasn't returning the correct output (a value of 0 or 1), which is why it was giving me RagasOutputParserException. To fix it, I tried different models and decreased the number of returned documents from 10 to 5.
This is what ultimately got me going:
<div style="position: relative; width: 560px; height: 315px;">
<div id="cover" style="position:absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); opacity:1; cursor:pointer; font-size:100px; color:white; text-shadow: 2px 2px 4px #000000;">
<i class="fas fa-play"></i>
</div>
<iframe id="player" width="560" height="315" src="https://www.youtube.com/embed/2qhCjgMKoN4?enablejsapi=1&controls=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen style="position: absolute; top:0; left:0; opacity:0;"></iframe>
</div>
<script src="https://www.youtube.com/iframe_api"></script>
<script>
var player;
var playButton = document.getElementById('cover');
var icon = playButton.querySelector('i');
function onYouTubeIframeAPIReady() {
player = new YT.Player('player', {
events: {
'onReady': onPlayerReady,
'onStateChange': onPlayerStateChange
}
});
}
function onPlayerReady(event) {
playButton.addEventListener('click', function() {
if (player.getPlayerState() == YT.PlayerState.PLAYING) {
player.pauseVideo();
} else {
player.playVideo();
}
});
}
function onPlayerStateChange(event) {
if (event.data == YT.PlayerState.PLAYING) {
icon.classList.remove('fa-play');
icon.classList.add('fa-pause');
} else {
icon.classList.remove('fa-pause');
icon.classList.add('fa-play');
}
}
</script>
Enabling "Beta: Use Unicode UTF-8 for worldwide language support" as suggested here solved the issue for me.
You did not format your setting value properly.
See this answer for full explanation.
The problem is the URL string — you used a Cyrillic р instead of a normal ASCII p
in http
.
Change this:
fetch('httр://localhost:3000/api/test')
to this:
fetch('http://localhost:3000/api/test')
(or just fetch('/api/test')
inside Next.js).
OK, so this was the answer:
In TOML, the root table ends as soon as the first header (e.g. [params]) appears. Any bare keys that come after [params] are part of that table, not the root.
In my file, I had a [params] section starting before the theme config, so in short I just had a bug in hugo.toml.
I overlooked it at first because the tab after the keys under [params] made it look like the indentation "scoped" the values, but I forgot that whitespace has no scoping semantics in TOML.
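A tiny illustration of the gotcha (the keys other than theme are made up):
title = "My site"   # root table

[params]
color = "blue"

theme = "ananke"    # still belongs to [params], not the root, despite the blank line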
In my case Jupyter server was running outside of the env I created with conda, so it was always running from base environment. This worked:
conda activate dlcourse
pip install jupyterlab ipykernel
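Alternatively, if Jupyter keeps launching from base, registering the environment as a named kernel is a common option (not what I did above, just another route):
conda activate dlcourse
python -m ipykernel install --user --name dlcourse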
If it's just the URL, then add "?wsdl" at the end and browse.
If you need to download as a file, right click on the webpage which shows all the services, save as xml, then rename to filename.wsdl
In some cases you can just turn off TLS verification with --disable-tls:
php ./composer-setup.php --install-dir=/usr/bin --filename=composer --disable-tls
#ifndef __clang_analyzer__
base->temp2 = (tempStruct2*)(ptr2 + 1);
#endif
Seems to work for me, basically making the code dead to the analyzer.
Thanks.
I managed to do with by putting export KEY=VALUE
in ~/.zshenv
If I understand you correctly, you are asking about the "has-pending-model-changes" command, which "checks if any changes have been made to the model since the last migration". The complete command looks like: "dotnet ef migrations has-pending-model-changes"
Author of the library here. In your examples you look to be using the v4 API; v5 has a completely new API where config is passed in via the options prop. I recommend reading the docs: https://react-chessboard.vercel.app/?path=/docs/how-to-use-options-api--docs#optionsonpiececlick
// handle piece click
const onPieceClick = ({
square,
piece,
isSparePiece
}: PieceHandlerArgs) => {
console.log(piece.pieceType);
};
// chessboard options
const chessboardOptions = {
allowDragging: false,
onPieceClick,
id: 'on-piece-click'
};
// render
return <Chessboard options={chessboardOptions} />;
The issue was some kind of hardware error with Firefox. After restarting Firefox (close the app and open again) it works. See also the bug report https://github.com/fabricjs/fabric.js/issues/10710
I have exactly the same problem. Have you found an answer? Thank you very much!
Yes the postPersistAnimal method will be invoked. All the callbacks defined by the superclass entities or mapped superclasses will be executed when updating the subclass entity. This behaviour is specified in the JPA documentation.
If a lifecycle callback method for the same lifecycle event is also specified on the entity class and/or one or more of its entity or mapped superclasses, the callback methods on the entity class and/or superclasses are invoked after the other lifecycle callback methods, most general superclass first. A class is permitted to override an inherited callback method of the same callback type, and in this case, the overridden method is not invoked.
You can find more info regarding the execution order and other details here.
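A minimal sketch of that ordering (assuming Jakarta Persistence imports; the entity and method names are made up, mirroring the Animal example):
// Animal.java
import jakarta.persistence.MappedSuperclass;
import jakarta.persistence.PostPersist;

@MappedSuperclass
public abstract class Animal {
    @PostPersist
    void postPersistAnimal() {
        // Runs first: the most general superclass callback is invoked before the subclass one
        System.out.println("Animal persisted");
    }
}

// Dog.java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.PostPersist;

@Entity
public class Dog extends Animal {
    @Id
    @GeneratedValue
    private Long id;

    @PostPersist
    void postPersistDog() {
        // Runs after postPersistAnimal for the same PostPersist event
        System.out.println("Dog persisted");
    }
}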
I now have a comprehensive example of the combination of gridstack.js and Angular.
https://gitlab.com/FabianSturm/gridstack-dashboard
Feel free to comment on possible improvements!
Maybe you have "AltGR"?
// lib/main.dart
import 'package:flame/flame.dart';
import 'package:flame/game.dart';
import 'package:flame/components.dart';
import 'package:flame/effects.dart'; // needed for MoveToEffect and EffectController
import 'package:flame/input.dart'; // needed for TapDetector
import 'package:flutter/widgets.dart';
class RunnerGame extends FlameGame with TapDetector {
late SpriteAnimationComponent hero;
@override
Future<void> onLoad() async {
final image = await images.load('hero_run.png'); // spritesheet
final animation = SpriteAnimation.fromFrameData(
image,
SpriteAnimationData.sequenced(
amount: 8, stepTime: 0.08, textureSize: Vector2(64, 64),
),
);
hero = SpriteAnimationComponent(animation: animation, size: Vector2(128, 128))
..position = size / 2;
add(hero);
}
@override
void onTapDown(TapDownInfo info) {
hero.add(MoveToEffect(info.eventPosition.game, EffectController(duration: 0.3)));
}
}
void main() {
final game = RunnerGame();
runApp(GameWidget(game: game));
}
Here's a batch script that captures RTSP stream screenshots every hour while skipping the period from 11 AM to midnight (12 AM):
@echo off
setlocal enabledelayedexpansion
:: Configuration
set RTSP_URL=rtsp://your_camera_rtsp_stream
set OUTPUT_FOLDER=C:\CCTV_Screenshots
set FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
:: Create output folder if it doesn't exist
if not exist "%OUTPUT_FOLDER%" mkdir "%OUTPUT_FOLDER%"
:: Get current time components
for /f "tokens=1-3 delims=: " %%a in ('echo %time%') do (
set /a "hour=%%a"
set /a "minute=%%b"
set /a "second=%%c"
)
:: Skip if between 11 AM (11) and Midnight (0)
if %hour% geq 11 if %hour% leq 23 (
echo Skipping capture between 11 AM and Midnight
exit /b
)
if %hour% equ 0 (
echo Skipping Midnight capture
exit /b
)
:: Generate timestamp for filename
for /f "tokens=1-3 delims=/ " %%d in ('echo %date%') do (
set year=%%d
set month=%%e
set day=%%f
)
set timestamp=%year%%month%%day%_%hour%%minute%%second%
:: Capture frame with ffmpeg
"%FFMPEG_PATH%" -y -i "%RTSP_URL%" -frames:v 1 -q:v 2 "%OUTPUT_FOLDER%\%timestamp%.jpg" 2>nul
if errorlevel 1 (
echo Failed to capture frame at %time%
) else (
echo Captured frame: %OUTPUT_FOLDER%\%timestamp%.jpg
)
Important Notes:
Replace RTSP_URL with your camera's actual RTSP stream URL
Adjust FFMPEG_PATH to match your ffmpeg installation location
Modify OUTPUT_FOLDER to your desired save location
Test the time format on your system by running echo %time% and echo %date% in cmd
The script uses 24-hour format (0-23 where 0=Midnight)
The script will skip captures between 11:00:00 and 23:59:59, plus Midnight (00:00:00)
To Schedule:
Save as cctv_capture.bat
Open Task Scheduler (taskschd.msc)
Create a new task:
Trigger: Hourly (repeat every hour)
Action: Start a program → select your batch file
Run whether user is logged in or not
Troubleshooting Tips:
Test the RTSP URL directly with ffmpeg first
Verify your time format matches the script's parsing
Check folder permissions for the output location
Consider adding error logging if needed
Test during active hours (1-10 AM) to verify captures work
The script will now capture images every hour except between 11 AM and Midnight (12 AM), which matches your requirement for the timelapse project.
Payload splitBy "\n" loads all the content into memory and causes a heap memory issue.
It's solved by passing the stream to a Java class which processes the stream and writes it to the /tmp dir without blowing up the heap.
Inspiration was taken from Mule's File repeatable streaming strategy.
Adobe Creative Cloud lets you "install" fonts to use in non-Adobe applications, and when you do (on Windows) they show up in C:\Users\<USER>\AppData\Roaming\Adobe\User Owned Fonts\. Note that User Owned Fonts is a hidden folder, but the files inside it are all unhidden and have meaningful filenames.
Really insightful post. I ran into a similar issue recently and was also surprised that adding a new enum value triggered a compatibility error. Totally agree that this makes managing evolving schemas in Pub/Sub pretty tricky. Curious to hear how others are handling this; switching to strings might be the safer route, but it feels like a compromise.
The problem is actually not in the filter, but in the size of the propagation step. In this case, it is too small, which means that the fft is being computed too many times and thus generating error. By increasing the step size to 0.001, you get way better results:
You can prove that these results are better by introducing a function that measures pixel distance between arrays:
def distance(a: np.ndarray, b: np.ndarray):
    return np.dot((a - b).flatten(), (a - b).flatten()) / len(a.flatten())
Using this function to compare the propagated profile to the analytical one shows a distance of 1.40-0.33j when dz=0.001, whereas the distance is -2.53+22.25j when dz=0.00005. You can play around with dz to see if you can get better results.
Try to normalize your data (I mean your X) before running linear regression, for example by using MinMaxScaler (sklearn.preprocessing.MinMaxScaler); that may have an impact on the coefficients.
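A minimal sketch of what I mean (the data here is random placeholder data):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 3) * 100          # example feature matrix
y = X @ np.array([1.5, -2.0, 0.5])        # example target

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)        # each feature scaled to [0, 1]

model = LinearRegression().fit(X_scaled, y)
print(model.coef_)                        # coefficients on the scaled features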
If you only want to subscribe to pull request and commits on main, you can do like:
/github subscribe owner/repo pulls commits:main
I think the reason for this error in my case is not the Python version, but rather that my Mac's architecture is different from those available as built distributions: I have an M1, which is ARM64, and for macOS only x86-64 wheels are available. So I cannot install ruptures this way.
Solved by removing "Add source roots to PYTHONPATH". But the reason is...
I generated this software with Python to convert images to video.
Image2Video - Turn Images into Videos Effortlessly
A practical tool to convert image collections into high-quality videos with customizable settings. Powered by FFmpeg, perfect for creating timelapses, creative slideshows, or processing CCTV footage.
📸 Supports multiple image formats (JPG, PNG, GIF, etc.)
⏱️ Adjustable frame duration
🎵 Add default audio with customizable bitrate
📂 Automatic folder/subfolder scanning
🖥️ Simple and intuitive GUI
⏳ Real-time progress tracking
MANIFEST.MF
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-name: Hello!
MIDlet-version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
It should be like this:
Manifest-Version: 1.0
MIDlet-1: Hello!, icon.png, Hello
MIDlet-Vendor: Mehrzad
MicroEdition-Configuration: CLDC-1.1
MIDlet-Name: Hello!
MIDlet-Version: 1.1
Created-By: 1.8.0_381 (Oracle Corporation)
Nokia-MIDlet-Category: Application
MicroEdition-Profile: MIDP-2.0
I left this problem alone and was working on the other parts of my project.
Last night I ran into a problem, googled the error message, and came across this Stack Overflow thread.
I looked at the accepted answer (the first answer); he wrote that the problem was using the slim build of jQuery.
Since I was trying to figure out the solution to another problem, I decided to give it a shot: I replaced the jQuery CDN link with the non-slim version, and bam, it worked!
To fix this issue, increase the heap memory by updating the following line in android/gradle.properties
:
org.gradle.jvmargs=-Xmx512M
to:
org.gradle.jvmargs=-Xmx8G
Then run:
flutter clean
flutter run
If 8 GB isn’t enough, you can increase it further (e.g., -Xmx16G
).
For me, the answer was I don't want the button to send the form at all, and this helped me:
https://stackoverflow.com/a/3315016/5057078
Text of the answer:
The default value for the type
attribute of button
elements is "submit". Set it to type="button"
to produce a button that doesn't submit the form.
<button type="button">Submit</button>
In the words of the HTML Standard: "Does nothing."
To be honest, I don't know either!
I don't know if it will solve your problem, but you are creating a ChatOpenAI model, which may not be suited to Mistral responses.
There is a class for Mistral models that looks like this:
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-nemo", mistral_api_key=_api_key)
Firstly, you can define a ghost sequence which clones the array. You can then write two-state lemmas about the array. I happened to write a blog post about a very similar situation here.