Not sure if it's too late, but I had a similar issue. It happens because the installed aws executable is not on your executable search path. You can do either of the following:
Option #1: Create a symlink
sudo ln -s /Users/fr/.local/lib/aws/bin/aws /usr/local/bin/aws
Option #2: Add the path to your ~/.zshrc file
echo 'export PATH=/Users/fr/.local/lib/aws/bin/:$PATH' >> ~/.zshrc
source ~/.zshrc
Did you solve it? I have the same issue. I've looked everywhere and can't find a clue. AI keeps suggesting the same things: delete it, clone it again, give it extra permissions, etc.
I had this error too; adding a try before the call where the error appeared helped. Does it only happen in async code, or what?
Hello, I’m working on localizing my custom DNN module (C#, ASP.NET).
👉 I’m following the standard approach:
I created App_LocalResources/View.ascx.resx
and View.ascx.fr-FR.resx
The files contain a key:
<data name="msg" xml:space="preserve">
<value>Congrats !!</value>
</data>
My code:
string resourceFile = this.TemplateSourceDirectory + "/App_LocalResources/View.ascx";
string message = Localization.GetString("msg", resourceFile);
lblMessage.Text = message ?? "Key not found";
or
lblMessage = Localization.GetString("msg", this.LocalResourceFile);
The current culture is fr-FR (I forced it in code for testing).
✅ The resource files are in the right folder.
✅ The key exists and matches exactly.
✅ The file name matches (View.ascx.fr-FR.resx).
❌ But Localization.GetString always returns null.
What I checked:
The LocalResourceFile is correct: /DesktopModules/MyModule/App_LocalResources/View.ascx
I cleared the DNN cache and restarted the app pool
File encoding is UTF-8
Permissions on the .resx file are OK
My question:
➡ Does anyone have a working example where Localization.GetString
reads App_LocalResources successfully without modifying the web.config (i.e. without re-enabling buildProviders for .resx)?
➡ Could there be something else blocking DNN from loading the .resx files (for example, a hidden configuration or DNN version issue)?
Thanks for your help!
Looks like you're getting that annoying “Deadline expired before operation could complete” error in BigQuery.
That usually means one of two things - either BigQuery’s having a moment, or something’s up on your end.
First thing to do: check the Google Cloud Status Dashboard. If there’s a blip in your region, it’ll show up there.
Next, go to your Cloud Console → IAM & Admin → Quotas.
Look up things like “Create dataset requests” or “API requests per 100 seconds.” If you’re over the limit, that could be your problem.
Also, double-check your permissions. You’ll need bigquery.datasets.create on your account or service account.
Still no luck? Try using the bq command-line tool or even the REST API. They’re way better at showing detailed errors than the UI.
And if it’s still not working, try switching to a different region. Sometimes that helps if the current one’s overloaded.
Need a quick API command to test it? Just let me know - happy to share!
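In the meantime, here's a minimal sketch using the Python client (google-cloud-bigquery) rather than the bq CLI; the project and dataset ids below are placeholders you'd swap for your own:
# pip install google-cloud-bigquery
from google.cloud import bigquery

# Placeholder ids; authentication uses your application default credentials.
client = bigquery.Client(project="my-project")
dataset = bigquery.Dataset("my-project.test_dataset")
dataset.location = "US"

# Surfaces quota/permission problems (e.g. a 403 if bigquery.datasets.create
# is missing) with more detail than the UI does.
client.create_dataset(dataset, timeout=30)
print("dataset created")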
How to fix this? I have the same problem. No icon displayed. Here is my code:
!include "MUI2.nsh"
OutFile "out\myapp.exe"
Icon "original\app.ico"
RequestExecutionLevel user
SilentInstall silent
SetCompressor LZMA
!define MUI_ICON "original\app.ico"
;!insertmacro MUI_UNPAGE_CONFIRM
;!insertmacro MUI_UNPAGE_INSTFILES
;!insertmacro MUI_LANGUAGE "English"
Section
InitPluginsDir
SetOutPath $PLUGINSDIR
File /r original\*.*
ExecWait '"$PLUGINSDIR\commrun.exe" --pluginNames webserver'
Delete "$PLUGINSDIR\*.*"
SectionEnd
Unfortunately, this is not working in my case in application.properties. ${PID} works, for instance, but HOSTNAME does not.
Also according to Baeldung it should work like this: https://www.baeldung.com/spring-boot-properties-env-variables#bd-use-environment-variables-in-the-applicationproperties-file
Do you know why this is the case?
My website's score is 90; I want to make it 100. How can I do that?
Here is my website: actiontimeusa
Navigate to the terminal window within VS Code.
Right-click on the word 'Terminal' at the top of the window to access the drop-down menu.
Choose the 'Panel Position' option, followed by the position of choice, i.e. Top/Right/Left/Bottom.
14 years later, grafts are deprecated. Is there a way to do this without grafts?
I'm facing the same problem, but unable to solve it. Using spring boot 3.4.5, r2dbc-postgresql 1.0.7. My query looks like:
select test.id,
test.name,
test.description,
test.active,
count(q.id) as questions_count
from test_entity test
left join test_question q on q.test_entity_id = test.id
group by test.id, test.name, test.description, test.active
I tried many variants of spelling questions_count, but always get null.
I even tried to wrap this query into
select mm.* from (
...
) mm
But that doesn't help.
I'm using R2dbcRepository with @Query annotation, and interface for retrieving result set.
I'm having the same problem in 2025, but I need a solution that works without an external library. As my problem is related to <input type="date" />
(see my update of https://stackoverflow.com/a/79654183/15910996) and people use my webpage in different countries, I also need a solution that works automatically with the current user's locale.
My idea is to take advantage of new Date().toLocaleDateString()
always being able to do the right thing but in the wrong direction. If I take a static ISO-date (e.g. "2021-02-01") I can easily ask JavaScript how this date is formatted locally, right now. To construct the right ISO-date from any local date, I only need to understand in which order month, year and date are used. I will find the positions by looking at the formatted string from the static date.
Luckily, we don't have to care about leading zeros or the kind of separators that are used in the locale date strings.
With my solution, on an Australian computer, you can do the following:
alert(new Date(parseLocaleDateString("21/11/1968")));
In the US it will look and work the same, like this, depending on the user's locale:
alert(new Date(parseLocaleDateString("11/21/1968")));
Please note: My sandbox-example starts with an ISO-date, because I don't know which locale the current user has... 😉
// easy:
const localeDate = new Date("1968-11-21").toLocaleDateString();
// hard:
const isoDate = parseLocaleDateString(localeDate);
console.log("locale:", localeDate);
console.log("ISO: ", isoDate);
function parseLocaleDateString(value) {
// e.g. value = "21/11/1968"
if (!value) {
return "";
}
const valueParts = value.split(/\D/).map(s => parseInt(s)); // e.g. [21, 11, 1968]
if (valueParts.length !== 3) {
return "";
}
const staticDate = new Date(2021, 1, 1).toLocaleDateString(); // e.g. "01/02/2021"
const staticParts = staticDate.split(/\D/).map(s => parseInt(s)); // e.g. [1, 2, 2021]
const year = String(valueParts[staticParts.indexOf(2021)]); // e.g. "1968"
const month = String(valueParts[staticParts.indexOf(2)]); // e.g. "11"
const day = String(valueParts[staticParts.indexOf(1)]); // e.g. "21"
return [year.padStart(4, "0"), month.padStart(2, "0"), day.padStart(2, "0")].join("-");
}
Did you ever implement this? I'm after the same thing and I'm about to resort to just using a FileSystemWatcher.
Hi, I don't know if this solution is still useful to you, but I was having this same error on my Hostinger server as well; it's a very small but key change.
When deploying a Laravel/Filament application to cloud hosting, images uploaded through the admin section are not displayed on the frontend. Instead, a broken-image icon appears. Checking the Nginx logs, the specific error is: failed (40: Too many levels of symbolic links).
This indicates that the web server (Nginx) cannot access the images because the public/storage symbolic link that points to the real location of the files (usually storage/app/public) is configured incorrectly, or has a permissions problem that the system interprets as a loop or an excessive chain of links.
1.- Symbolic link (public/storage) with the wrong owner (root:root): Even if the link target (storage/app/public) had the correct permissions, the symlink file itself was owned by root, while Nginx runs as a different user (www-data). This can cause Nginx not to "trust" the link or to interpret it incorrectly.
2.- Possible incorrect creation or loop in the symlink: Although less likely once the target path has been verified, a symlink that points to itself or to a nested link can produce this error.
The fix consists of removing any existing public/storage symlink and then recreating it, making sure the owner is the web server user (www-data in most Nginx setups on Ubuntu/Debian).
1. Remove the problematic symlink
First, delete the existing public/storage symlink. This will not delete your images, since the link is only a "shortcut".
# Navigate to the 'public' directory of your Laravel project
cd /var/www/nombre_proyecto/public
# Delete the 'storage' symlink
rm storage
2. Recreate the symlink with the correct owner
The most effective approach is to create the symlink directly as the web server user.
# Navigate to the root of your Laravel project
cd /var/www/nombre_tu_proyecto/
# Run the storage:link command as the web server user
# Replace 'www-data' if your Nginx user is different (e.g. 'nginx')
sudo -u www-data php artisan storage:link
If the command sudo -u www-data php artisan storage:link fails or gives you an error, you can run php artisan storage:link (which will create it as root) and then use the following command to change its ownership:
# Navigate to the 'public' directory of your project
cd /var/www/nombre_tu_proyecto/public
# Change the ownership of the symlink *directly* (with -h or --no-dereference)
# Replace 'www-data' if your Nginx user is different
sudo chown -h www-data:www-data storage
3. Verify the symlink ownership
It is crucial to verify that the previous step worked and that the storage symlink is now owned by your web server user.
# From /var/www/nombre_de_tu_proyecto/public
ls -l storage
The output should look similar to this (note www-data www-data as the owner):
lrwxrwxrwx 1 www-data www-data 35 Jul 3 03:27 storage -> /var/www/nombre_de_tu_proyecto/storage/app/public
4. Clear Laravel caches
To make sure Laravel is not serving outdated or incorrect image URLs because of the cache, clear them.
# From the root of your Laravel project
php artisan config:clear
php artisan cache:clear
php artisan view:clear
5. Reload Nginx
Finally, reload Nginx so the changes take effect.
sudo systemctl reload nginx
This is how I solved my problem. The important things to keep in mind when bringing up a website on Hostinger or any other server are user permissions and which users are creating the files and granting access; in this case it is important that www-data has access to these files and folders, because it is the user Nginx uses to manage and serve the project's files. I hope this helps you or other people with this problem 🙌.
I had a similar error in Ionic. I noticed that HttpEventType was not imported correctly.
The correct import is:
import { HttpEventType } from '@angular/common/http';
Thanks for the guide. How do I deploy to https://dockerhosting.ru/?
I don't see an error here, other than a reversed statement about the training dataset while predicting with the training model.
In the statement below, trainTgt is passed to build the padding mask for training. Ideally it doesn't matter, since you are only considering the output predictions for your reference. Do you have an error message you can share, to understand more about the issue?
tgt_padding_mask = generate_padding_mask(trainTgt, tokenizer.vocab['[PAD]']).cuda()
model.train()
trainPred: torch.Tensor = model(trainSrc, trainTgt, tgt_mask, tgt_padding_mask)
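For reference, such a padding-mask helper is usually just a boolean comparison against the [PAD] token id. A minimal sketch, assuming a PyTorch nn.Transformer-style model where True marks the positions to ignore:
import torch

def generate_padding_mask(tokens: torch.Tensor, pad_id: int) -> torch.Tensor:
    # Shape (batch, seq_len); True wherever the token equals [PAD],
    # so attention ignores those positions.
    return tokens == pad_id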
Thanks,
Ramakoti Reddy.
Is there any solution to this problem? I'm also having the same issue. The dependency conflicts arise only when the Supabase imports are included; otherwise everything is fine. What should I do?
Do you have a custom process? Under Processing, click on your process and look at the right pane. Check your Editable Region and also your server-side condition, and make sure you select the right option.
pyfixest author here - you can access the R2 values via the `Feols._R2` attribute. You can find all the attributes of the Feols object here: link . Do you have a suggestion on how we could improve the documentation and make these things easier to find?
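For example, a minimal sketch (assuming pyfixest is installed; get_data() is the example dataset shipped with the package):
import pyfixest as pf

df = pf.get_data()                       # built-in example dataset
fit = pf.feols("Y ~ X1 | f1", data=df)   # fixed-effects regression
print(fit._R2)                           # R2 value stored on the Feols object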
Interesting topic. How do you modify the add button in point 2?
Nothing has worked for me,
This is my insert: REPLACE(('The 6MP Dome IP Camera's clarity is solid, setup easy. Wide lens captures more area.'), '''', '''''')
It breaks because of the single quote in Camera's; these are dynamic variables.
Any suggestions?!
I have the same issue: PhonePe integration error - Package Signature Mismatched.
sending response back: {"result":false,"failureReason":"PACKAGE_SIGNATURE_MISMATCHED"}
I've got a reply from MS. MIP SDK does not support the usage of managed identities.
I do not want the main menu/menu bar; I already have that. I want the icon bar or toolbar that goes across the top. What you said just gives me the Project Explorer.
did you find a fix for the above as I am getting the same error?
I am running into the same issue. I do not see the role to assign from the portal. I added a custom role with an action defined to allow the container creation via Java code.
It just blows up with the following exception but there is no clue what is required to get it corrected.
The given request [POST /dbs/<DB_NAME>/colls] cannot be authorized by AAD token in data plane.
Status Code: Forbidden
I tried adding the Cosmos DB Operator role but it did not work either. Any ideas?
Any solution??? I'm trying to do the same here: I have one product page with different sizes, and when I click on a size the page changes, because each size is a different product, and Slick always stays on the first slide instead of on the one with the .selected class.
thank you so much for your helpful comments and for pointing me in the right direction.
I'm currently working with the same mobile signature provider and services described in this StackOverflow post, and the endpoints have not changed.
Here's what I'm doing:
I calculate the PDF hash based on the byte range (excluding the /Contents
field as expected).
I then Base64 encode this hash and send it to the remote signing service.
The service returns an XML Signature structure, containing only the signature value and the certificate. It does not re-hash the input — it signs the hash directly.
Based on that signature and certificate, I construct a PKCS#7 (CAdES) container and embed it into the original PDF using signDeferred
.
However, when I open the resulting PDF in Adobe Reader, I still get a “signature is invalid” error.
Additionally, Turkcell also offers a PKCS#7 signing service, but in that case, the returned messageDigest
is only 20 bytes, which doesn’t match the 32-byte SHA-256 digest I computed from my PDF. Because of this inconsistency, I cannot proceed using their PKCS#7 endpoint either.
I’m really stuck at this point and unsure how to proceed. Do you have any advice on:
how to correctly construct the PKCS#7 from a detached XML signature (raw signature + certificate)?
whether I must include signed attributes, or if there's a way to proceed without them?
or any clues why Adobe might mark the signature as invalid even when the structure seems correct?
Any help would be greatly appreciated!
Any success? Because I'm facing the same issue...
Environment: onpremise, istio, and kyverno policies.
Istio version:
client version: 1.23.2
control plane version: 1.23.2
data plane version: 1.23.2 (46 proxies)
Could anybody help?
I have the same problem; did you find a solution?
I have the same question, thanks for sharing.
To really help you out, could you share the relevant parts of your App.razor
file and the _Host.cshtml
file? I need to take a look at those to see if your Microsoft Identity settings are set up correctly. That way, I can better understand why the sign-in redirect isn’t working, and whether there’s a missing middleware or something else going on. Right now, I can only give general advice — but if I can see those files, I’ll be able to give you a more accurate solution.
I'm facing the same problem, did you find any solution ?
If you've tried everything and nothing works, like in my case: I spent about 2 days reading about the problem in different places (Stack Overflow, Reddit, GitHub, ...).
Check this link here, I posted an answer: https://stackoverflow.com/a/79687509/15721679
Thanks a ton, everyone! All responses have been great at highlighting various facts hidden in the problem. Thanks for the advice that the name should be changed to Vector, since we cannot compare a Point of one dimension to a Point of a different dimension.
And also for the analysis showing that p1 and p2 are two different types generated by the declaration of the variadic class template.
Many thanks for the solution. Does the following method iteratively call the default version to resolve the problem part by part?
template <numeric ... Ts>
auto operator<=>(const Point<Ts...> &) const
{
return sizeof...(Args) <=> sizeof...(Ts);
}
Are you sure you're providing the right IDs? Can you send the package id and the Hunter and Image ID you're trying to pass?
You can contact me at maheshgadhari84@gmail.com; I think we are facing the same issue, so we can discuss and solve it together.
I'm trying to adapt the excellent answer above, but simplify it in 2 ways:
I have my data across a row, so swapping row() to column() and removing transpose()...
and
I'm after a simple cumulative sum of ALL the numbers in the row (rather than needing the SUMIF condition on column B:B's "item A" that the original poster had)...
----
I have dates to index in $K$7:$7, and expenditure data across $K14:14, and I need to find the date in row 7 at which the cumulative expenditure in row 14 reaches 10% of row 14's total in $G14.
I'm trying this but it's not working for me...
=INDEX($K$7:$7, MATCH(TRUE, SUMIF(OFFSET(B2,0,0,column($K14:14)-column($K14)),"A", OFFSET(C2,0,0,column($K14:14)-column($K14)))>=0.1*$G14, 0))
Thanks in advance
I hope that you are fine and in good health.
The problem of injecting the add-on into FDM is solved, but when I try to add a URL to the playlist it does nothing; it won't trigger the function that I put in main.py. I use elephant as the source for handling the playlist, so I'm trying to make it work with elephant, but it doesn't work.
Please help me fix this issue. Thanks to you all for your replies.
I need a live Bahrain gold price API. Could you provide one?
This request is sent by the savefrom extension installed in your browser.
Do you have any news about this issue? I am trying to do the same thing and Get-AdminFlow returns nothing.
Best,
Did you solve this issue? I'm facing the same problem; do you have any recommendations for me?
Is there a line like
this.model.on('change', this.render, this);
in the code, or is listenTo()
being used to listen for changes?
I am also working on this and referencing the nfclib source code.
Here is my project: https://github.com/JamesQian1999/macOS-NFC-Tool
What solved it for me was to increase 'max_input_vars' in php.ini.
Has anyone found a solution to this yet? Is there any extension available to achieve this?
Hi, I am facing the same error, please guide me. I am using Windows.
I am also facing the same issue. Did you find a solution to this problem? I also tried different parameters in the API, but it doesn't work.
I had to add prisma generate to the build command 😒😂
Guys above, thank you!!!!!!
Can you create a commit in a new file and create a GitHub workflow to append that file's text to the original file each time a commit is made?
import os
import re
import asyncio
import logging
import time
import gc
from pathlib import Path
from telethon import TelegramClient, events
from telethon.tl.types import MessageMediaDocument, InputDocumentFileLocation
from telethon.tl.functions.upload import GetFileRequest
from telethon.crypto import AES
from telethon.errors import FloodWaitError
import aiofiles
from concurrent.futures import ThreadPoolExecutor
# Optimize garbage collection for large file operations
gc.set_threshold(700, 10, 10)
# Set environment variables for better performance
os.environ['PYTHONUNBUFFERED'] = '1'
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S'
)
TELEGRAM_API_ID = int(os.getenv("TELEGRAM_API_ID"))
TELEGRAM_API_HASH = os.getenv("TELEGRAM_API_HASH")
TELEGRAM_SESSION_NAME = os.path.join('session', os.getenv('TELEGRAM_SESSION_NAME', 'bot_session'))
TELEGRAM_GROUP_ID = int(os.getenv("GROUP_CHAT_ID"))
TOPIC_IDS = {
'Doc 1': 137,
}
TOPIC_ID_TO_CATEGORY = {
137: 'doc 1',
}
CATEGORY_TO_DIRECTORY = {
'doc 1': '/mnt/disco1/test',
}
class FastTelegramDownloader:
def __init__(self, client, max_concurrent_downloads=4):
self.client = client
self.max_concurrent_downloads = max_concurrent_downloads
self.semaphore = asyncio.Semaphore(max_concurrent_downloads)
async def download_file_fast(self, message, dest_path, chunk_size=1024*1024, progress_callback=None):
"""
Fast download using multiple concurrent connections for large files
"""
document = message.media.document
file_size = document.size
# For smaller files, use standard download
if file_size < 10 * 1024 * 1024: # Less than 10MB
return await self._standard_download(message, dest_path, progress_callback)
# Create input location for the file
input_location = InputDocumentFileLocation(
id=document.id,
access_hash=document.access_hash,
file_reference=document.file_reference,
thumb_size=""
)
# Calculate number of chunks and their sizes
chunks = []
offset = 0
chunk_id = 0
while offset < file_size:
chunk_end = min(offset + chunk_size, file_size)
chunks.append({
'id': chunk_id,
'offset': offset,
'limit': chunk_end - offset
})
offset = chunk_end
chunk_id += 1
logging.info(f"📦 Dividiendo archivo en {len(chunks)} chunks de ~{chunk_size//1024}KB")
# Download chunks concurrently
chunk_data = {}
downloaded_bytes = 0
start_time = time.time()
async def download_chunk(chunk):
async with self.semaphore:
try:
result = await self.client(GetFileRequest(
location=input_location,
offset=chunk['offset'],
limit=chunk['limit']
))
# Update progress
nonlocal downloaded_bytes
downloaded_bytes += len(result.bytes)
if progress_callback:
progress_callback(downloaded_bytes, file_size)
return chunk['id'], result.bytes
except Exception as e:
logging.error(f"Error downloading chunk {chunk['id']}: {e}")
return chunk['id'], None
try:
# Execute downloads concurrently
tasks = [download_chunk(chunk) for chunk in chunks]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Collect successful chunks
for result in results:
if isinstance(result, tuple) and result[1] is not None:
chunk_id, data = result
chunk_data[chunk_id] = data
# Verify all chunks downloaded successfully
if len(chunk_data) != len(chunks):
logging.warning(f"Some chunks failed, falling back to standard download")
return await self._standard_download(message, dest_path, progress_callback)
# Write file in correct order
async with aiofiles.open(dest_path, 'wb') as f:
for i in range(len(chunks)):
if i in chunk_data:
await f.write(chunk_data[i])
else:
raise Exception(f"Missing chunk {i}")
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"✅ Fast download completed: {dest_path} - Speed: {speed:.2f} MB/s")
return dest_path
except Exception as e:
logging.error(f"Fast download failed: {e}")
return await self._standard_download(message, dest_path, progress_callback)
async def _standard_download(self, message, dest_path, progress_callback=None):
"""Fallback to standard download method"""
document = message.media.document
file_size = document.size
# Optimize chunk size based on file size
if file_size > 100 * 1024 * 1024: # >100MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 50 * 1024 * 1024: # >50MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 10 * 1024 * 1024: # >10MB
part_size_kb = 512 # 512KB chunks
else:
part_size_kb = 256 # 256KB chunks
start_time = time.time()
await self.client.download_file(
document,
file=dest_path,
part_size_kb=part_size_kb,
file_size=file_size,
progress_callback=progress_callback
)
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"📊 Standard download speed: {speed:.2f} MB/s")
return dest_path
class MultiClientDownloader:
def __init__(self, api_id, api_hash, session_base_name, num_clients=3):
self.api_id = api_id
self.api_hash = api_hash
self.session_base_name = session_base_name
self.num_clients = num_clients
self.clients = []
self.client_index = 0
self.fast_downloaders = []
async def initialize_clients(self):
"""Initialize multiple client instances"""
for i in range(self.num_clients):
session_name = f"{self.session_base_name}_{i}"
client = TelegramClient(
session_name,
self.api_id,
self.api_hash,
connection_retries=3,
auto_reconnect=True,
timeout=300,
request_retries=3,
flood_sleep_threshold=60,
system_version="4.16.30-vxCUSTOM",
device_model="HighSpeedDownloader",
lang_code="es",
system_lang_code="es",
use_ipv6=False
)
await client.start()
self.clients.append(client)
self.fast_downloaders.append(FastTelegramDownloader(client, max_concurrent_downloads=2))
logging.info(f"✅ Cliente {i+1}/{self.num_clients} inicializado")
def get_next_client(self):
"""Get next client using round-robin"""
client = self.clients[self.client_index]
downloader = self.fast_downloaders[self.client_index]
self.client_index = (self.client_index + 1) % self.num_clients
return client, downloader
async def close_all_clients(self):
"""Clean shutdown of all clients"""
for client in self.clients:
await client.disconnect()
class TelegramDownloader:
def __init__(self, multi_client_downloader):
self.multi_client = multi_client_downloader
self.downloaded_files = set()
self.load_downloaded_files()
self.current_download = None
self.download_stats = {
'total_files': 0,
'total_bytes': 0,
'total_time': 0
}
def _create_download_progress_logger(self, filename):
"""Progress logger with reduced frequency"""
start_time = time.time()
last_logged_time = start_time
last_percent_reported = -5
MIN_STEP = 10 # Report every 10%
MIN_INTERVAL = 5 # Or every 5 seconds
def progress_bar_function(done_bytes, total_bytes):
nonlocal last_logged_time, last_percent_reported
current_time = time.time()
percent_now = int((done_bytes / total_bytes) * 100)
if (percent_now - last_percent_reported < MIN_STEP and
current_time - last_logged_time < MIN_INTERVAL):
return
last_percent_reported = percent_now
last_logged_time = current_time
speed = done_bytes / 1024 / 1024 / (current_time - start_time or 1)
msg = (f"⏬ {filename} | "
f"{percent_now}% | "
f"{speed:.1f} MB/s | "
f"{done_bytes/1024/1024:.1f}/{total_bytes/1024/1024:.1f} MB")
logging.info(msg)
return progress_bar_function
async def _process_download(self, message, metadata, filename, dest_path):
try:
self.current_download = filename
logging.info(f"🚀 Iniciando descarga de: {filename}")
progress_logger = self._create_download_progress_logger(filename)
temp_path = dest_path.with_name(f"temp_{metadata['file_name_telegram']}")
# Get next available client and downloader
client, fast_downloader = self.multi_client.get_next_client()
file_size = message.media.document.size
start_time = time.time()
try:
# Try fast download first for large files
if file_size > 20 * 1024 * 1024: # Files larger than 20MB
logging.info(f"📦 Usando descarga rápida para archivo de {file_size/1024/1024:.1f}MB")
await fast_downloader.download_file_fast(
message, temp_path, progress_callback=progress_logger
)
else:
# Use standard optimized download for smaller files
await fast_downloader._standard_download(
message, temp_path, progress_callback=progress_logger
)
except Exception as download_error:
logging.warning(f"Descarga optimizada falló, usando método estándar: {download_error}")
# Final fallback to basic download
await client.download_file(
message.media.document,
file=temp_path,
part_size_kb=512,
file_size=file_size,
progress_callback=progress_logger
)
if not temp_path.exists():
raise FileNotFoundError("No se encontró el archivo descargado")
# Atomic rename
temp_path.rename(dest_path)
# Update statistics
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
self.download_stats['total_files'] += 1
self.download_stats['total_bytes'] += file_size
self.download_stats['total_time'] += duration
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time'] if self.download_stats['total_time'] > 0 else 0
logging.info(f"✅ Descarga completada: {dest_path}")
logging.info(f"📊 Velocidad: {speed:.2f} MB/s | Promedio sesión: {avg_speed:.2f} MB/s")
self.save_downloaded_file(str(message.id))
except Exception as e:
logging.error(f"❌ Error en descarga: {str(e)}", exc_info=True)
# Cleanup on error
for path_var in ['temp_path', 'dest_path']:
if path_var in locals():
path = locals()[path_var]
if hasattr(path, 'exists') and path.exists():
try:
path.unlink()
except:
pass
raise
finally:
self.current_download = None
def load_downloaded_files(self):
try:
if os.path.exists('/app/data/downloaded.log'):
with open('/app/data/downloaded.log', 'r', encoding='utf-8') as f:
self.downloaded_files = set(line.strip() for line in f if line.strip())
logging.info(f"📋 Cargados {len(self.downloaded_files)} archivos ya descargados")
except Exception as e:
logging.error(f"Error cargando archivos descargados: {str(e)}")
def save_downloaded_file(self, file_id):
try:
with open('/app/data/downloaded.log', 'a', encoding='utf-8') as f:
f.write(f"{file_id}\n")
self.downloaded_files.add(file_id)
except Exception as e:
logging.error(f"Error guardando archivo descargado: {str(e)}")
def parse_metadata(self, caption):
metadata = {}
try:
if not caption:
logging.debug(f"📂 No hay caption")
return None
pattern = r'^(\w[\w\s]*):\s*(.*?)(?=\n\w|\Z)'
matches = re.findall(pattern, caption, re.MULTILINE)
for key, value in matches:
key = key.strip().lower().replace(' ', '_')
metadata[key] = value.strip()
required_fields = [
'type', 'tmdb_id', 'file_name_telegram',
'file_name', 'folder_name', 'season_folder'
]
if not all(field in metadata for field in required_fields):
return None
if 'season' in metadata:
metadata['season'] = int(metadata['season'])
if 'episode' in metadata:
metadata['episode'] = int(metadata['episode'])
return metadata
except Exception as e:
logging.error(f"Error parseando metadata: {str(e)}")
return None
def get_destination_path(self, message, metadata):
try:
topic_id = message.reply_to.reply_to_msg_id if message.reply_to else None
if not topic_id:
logging.warning("No se pudo determinar el topic ID del mensaje")
return None
category = TOPIC_ID_TO_CATEGORY.get(topic_id)
if not category:
logging.warning(f"No se encontró categoría para el topic ID: {topic_id}")
return None
base_dir = CATEGORY_TO_DIRECTORY.get(category)
if not base_dir:
logging.warning(f"No hay directorio configurado para la categoría: {category}")
return None
filename = metadata.get('file_name')
if not filename:
logging.warning("Campo 'file_name' no encontrado en metadatos")
return None
if metadata['type'] == 'movie':
folder_name = f"{metadata['folder_name']}"
dest_dir = Path(base_dir) / folder_name
return dest_dir / filename
elif metadata['type'] == 'tv':
folder_name = f"{metadata['folder_name']}"
season_folder = metadata.get('season_folder', 'Season 01')
dest_dir = Path(base_dir) / folder_name / season_folder
return dest_dir / filename
else:
logging.warning(f"Tipo de contenido no soportado: {metadata['type']}")
return None
except Exception as e:
logging.error(f"Error determinando ruta de destino: {str(e)}")
return None
async def download_file(self, message):
try:
await asyncio.sleep(1) # Reduced delay
if not isinstance(message.media, MessageMediaDocument):
return
if str(message.id) in self.downloaded_files:
logging.debug(f"Archivo ya descargado (msg_id: {message.id})")
return
metadata = self.parse_metadata(message.message)
if not metadata:
logging.warning("No se pudieron extraer metadatos válidos")
return
if 'file_name' not in metadata or not metadata['file_name']:
logging.warning("El campo 'file_name' es obligatorio en los metadatos")
return
dest_path = self.get_destination_path(message, metadata)
if not dest_path:
return
dest_path.parent.mkdir(parents=True, exist_ok=True)
if dest_path.exists():
logging.info(f"Archivo ya existe: {dest_path}")
self.save_downloaded_file(str(message.id))
return
await self._process_download(message, metadata, metadata['file_name'], dest_path)
except Exception as e:
logging.error(f"Error descargando archivo: {str(e)}", exc_info=True)
async def process_topic(self, topic_id, limit=None):
try:
logging.info(f"📂 Procesando topic ID: {topic_id}")
# Use first client for message iteration
client = self.multi_client.clients[0]
async for message in client.iter_messages(
TELEGRAM_GROUP_ID,
limit=limit,
reply_to=topic_id,
wait_time=10 # Reduced wait time
):
try:
if message.media and isinstance(message.media, MessageMediaDocument):
await self.download_file(message)
# Small delay between downloads to prevent rate limiting
await asyncio.sleep(0.5)
except FloodWaitError as e:
wait_time = e.seconds + 5
logging.warning(f"⚠️ Flood wait detectado. Esperando {wait_time} segundos...")
await asyncio.sleep(wait_time)
continue
except Exception as e:
logging.error(f"Error procesando mensaje: {str(e)}", exc_info=True)
continue
except Exception as e:
logging.error(f"Error procesando topic {topic_id}: {str(e)}", exc_info=True)
async def process_all_topics(self):
for topic_name, topic_id in TOPIC_IDS.items():
logging.info(f"🎯 Iniciando procesamiento de: {topic_name}")
await self.process_topic(topic_id)
# Print session statistics
if self.download_stats['total_files'] > 0:
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time']
logging.info(f"📊 Estadísticas del topic {topic_name}:")
logging.info(f" 📁 Archivos: {self.download_stats['total_files']}")
logging.info(f" 💾 Total: {self.download_stats['total_bytes']/1024/1024/1024:.2f} GB")
logging.info(f" ⚡ Velocidad promedio: {avg_speed:.2f} MB/s")
async def main():
try:
# Test cryptg availability
test_data = os.urandom(1024)
key = os.urandom(32)
iv = os.urandom(32)
encrypted = AES.encrypt_ige(test_data, key, iv)
decrypted = AES.decrypt_ige(encrypted, key, iv)
if decrypted != test_data:
raise RuntimeError("❌ Cryptg does not work properly")
logging.info("✅ cryptg available and working")
except Exception as e:
logging.critical(f"❌ ERROR ON CRYPTG: {str(e)}")
raise SystemExit(1)
# Ensure session directory exists
os.makedirs('session', exist_ok=True)
os.makedirs('/app/data', exist_ok=True)
# Initialize multi-client downloader
multi_client = MultiClientDownloader(
TELEGRAM_API_ID,
TELEGRAM_API_HASH,
TELEGRAM_SESSION_NAME,
num_clients=3 # Use 3 clients for better speed
)
try:
logging.info("🚀 Inicializando clientes múltiples...")
await multi_client.initialize_clients()
downloader = TelegramDownloader(multi_client)
logging.info("📥 Iniciando descarga de todos los topics...")
await downloader.process_all_topics()
logging.info("✅ Proceso completado exitosamente")
except Exception as e:
logging.error(f"Error en main: {str(e)}", exc_info=True)
finally:
logging.info("🔌 Cerrando conexiones...")
await multi_client.close_all_clients()
if __name__ == "__main__":
asyncio.run(main())
There are a couple of things you should check (btw, please share your Cloud Function code snippet):
Make sure that you are calling/invoking supported GCP Vertex AI Gemini models (Gemini 2.0, Gemini 2.5 Flash/Pro, etc.). Models like PaLM, text-bison, and even earlier Gemini models (like Gemini 1.0) have been deprecated; that's most likely the reason you are getting a 404, due to model deprecation. Please check the supported model doc here to use a proper Gemini model.
Verify that you followed this Vertex AI getting-started guide to set up your access to the Gemini models. Based on what you described:
You have a GCP project
You enabled the Vertex AI API
IAM: try granting your GCP account the Vertex AI User role permission. For details, check the Vertex AI IAM permissions here.
I recommend using the Google Gen AI SDK for Python to call Gemini models. It handles the endpoint and authentication; you just need to specify the model to use, for example gemini-2.5-flash.
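For reference, a minimal sketch with the Gen AI SDK; the project id and region below are placeholders you would swap for your own:
# pip install google-genai
from google import genai

# vertexai=True routes the request through Vertex AI; project and location are placeholders.
client = genai.Client(vertexai=True, project="my-gcp-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Say hello from Vertex AI",
)
print(response.text)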
These steps should get you going. Please share a code snippet so that I can share an edited snippet back.
I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write‑up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf
You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main
I welcome feedback and discussion on this claim.
I'm having the same problem. I tried various nodemailer examples from the internet but still failed. After tracking down the problem, it turned out that in my case the API was not working properly because of "output: 'export'" in the next.config.js file. If you don't use "output: 'export'", Next.js uses its full-stack capability, which means it supports API routes (serverless functions on Vercel). So if anyone has the same problem and hasn't resolved it yet, my suggestion is to remove "output: 'export'" from the next.config.js file. BTW, I use nodemailer with Gmail SMTP and deploy to Vercel.
Did you manage to resolve this? I am hitting the same issue.
Thanks!
The moneyRemoved variable wasn't being set to true. I should have debugged better. Thank you to @Rufus L, though, for showing me how to properly get the result from an async method without using .GetAwaiter().GetResult()!
I am getting the same error.
According to the documentation, expo-maps is not available in Expo Go.
But I am not sure whether I have installed Expo Go (I think I did).
no, I don't have such a file
I just had this same error pop up when trying to do the same thing with my data. Does cID by any chance have values that are not just a simple numbering 1:n(clusters)? My cluster identifiers were about 5 digits long and when I changed the ID values to numbers 1:n(clusters), the error magically disappeared.
Thank you all, I had forgotten to replace myproject.pdb.json on the server.
The solution is to change the AspNetCoreHostingModel of all the applications to OutOfProcess in the web.config:
<AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
I have this problem too. There may be alternative ways but why is this not working?
It's better to have a server, but if you don't have one you can use HOST.
Did you manage to solve this?
Is it resolved? I am facing the same issue.
Did you manage to solve this? I had to do the following, which forced the connection to be a Microsoft.Data.SqlClient one by creating it myself. I also pass true to the base constructor to let the context dispose of the connection afterwards.
using System;
using System.Data.Common;
using System.Data.Entity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Data.SqlClient;
using Mintsoft.LaunchDarkly;
using Mintsoft.UserManagement.Membership;
namespace Mintsoft.UserManagement
{
public class UserDbContext : IdentityDbContext<ApplicationUser>
{
private static DbConnection GetBaseDbConnection()
{
var connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["BaseContext"].ConnectionString;
return new SqlConnection(connectionString);
}
public UserDbContext()
: base(GetBaseDbConnection(), true)
{
}
}
}
I still can't figure it out. Can anyone help me?
Same issue. I found my PPT is generated according to my requirements, but I'm getting repair issues. I have tried many ways to resolve it but I'm unable to remove it.
Please, anyone who can help me... I have used OpenXML and HtmlAgilityPack.
You can press Ctrl+[ to remove the indentation.
ok, i understand the information
Just turn off the 'session isolation' toggle in the website configuration.
Can you tell me which versions of RevenueCat and Superwall you used in both projects?
The StoreProduct error is because purchases_flutter and superwall both contain a class named StoreProduct. You will need to hide one of them.
Are you using a non-unique ID user store?
Also, could you let us know the version of the IWA Kerberos authenticator you are currently using?
There is a known issue (https://github.com/wso2/product-is/issues/21053) related to non-unique ID user stores when used with the IWA Kerberos authenticator. However, this issue has been resolved in the latest version.
Providing the above information will help us identify the exact cause of the issue.
How do I get the developer disk image for OS 18? It is not downloading automatically from Xcode for me either. Can you help me with how you got the DDI?
My apologies, I had a moment, my original post works. Coffee time :)
This is cool and very useful, thanks for the info.
I know I'm late, but in v6 there is a built-in function for this.
https://www.tradingview.com/pine-script-reference/v5/#fun_ticker.standard
@Leyth resolved this. There was a line that truncated the files when they were being transformed. Things appeared fine until the file grew past a certain limit. Then it removed the lines that extended beyond the threshold. I removed that line (which wasn't needed and I do not recall adding in the first place) and the data appears correctly.
So in the case of not using TypeScript, just React + Vite, do you still use protobuf-ts, or what?
@Douglas B
I am trying to compile XNNPACK for the Zynq UltraScale+ and am running into the same issues you described two years ago. Can you share your BitBake recipe or makefile?
If anyone has solved a 3D bin packing algorithm, can you share the code, or the flow in which this needs to be done?
Hello, my situation is very similar to yours. Can you please tell me how to log into Data Studio with Windows credentials? I'm having the same problem with DB2admin, and I don't see any way to switch to a different user ID.
Thanks
Bigtable now supports Global Secondary Indexes.
Thank you, that looks amazing!
Can you explain why it is marking the whole street and not only the selected part?
I need to reduce it to the highlighted part because I want to do routing on that map.
So I probably need the "use" feature to split the dataset for the routing function...
I got the same error; then I added this to my tsconfig.json and it was fixed:
"paths": {
"@firebase/auth": ["./node_modules/@firebase/auth/dist/index.rn.d.ts"],
}
Can someone help me identify the UI style this form uses in VB.NET: the style of the buttons, the shapes, the color gradients, the ETCHED borders of the group boxes, and the modern, simple, and elegant design of the DataGridView? Is there a plugin used, or is there code for this design of the components? Thanks!
Good morning, I hope you're doing well.
Did you manage to resolve this error? I'm having the same problem.
I have the same problem, and I think it is because the retry rule does not trigger when all pods are down. But I'm not completely sure about this.
I realize this is an old question, but I recently started experiencing a strange issue related to transparent backgrounds. I often use the -t flag when using manim, but just recently I am no longer getting a transparent background and can't figure out why. Manim is still producing a .mov file (instead of .mp4), but the file has a black background rather than a transparent background. I'm working on a mac and recently upgraded the operating system, so I suspect that might have something to do with it. Has anyone else experienced this issue and does anyone know a workaround?
It is really helpful. It helped me change my root password, which I had forgotten. Now I am able to use the MySQL root database with the new password.