Do you have any news about this issue? I am trying to do the same thing, and Get-AdminFlow returns nothing.
Best,
Did you solve this issue? I'm facing the same problem; do you have any recommendations for me?
Is there a line like
this.model.on('change', this.render, this);
in the code, or is listenTo()
being used to listen for changes?
I am also working on this and referencing the nfclib source code.
Here is my project: https://github.com/JamesQian1999/macOS-NFC-Tool
What solved it for me was to increase 'max_input_vars' in php.ini.
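For reference, the change is a single directive in php.ini (the value 5000 below is an arbitrary example; pick one that covers your largest form submission):

```ini
; php.ini - the default is 1000; raise it to accept larger forms
max_input_vars = 5000
```

Remember to restart PHP-FPM or the web server afterwards for the change to take effect.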
Has anyone found a solution to this yet? Is there any extension available to achieve this?
Hi, I am facing the same error; please guide me. I am using Windows.
I am also facing the same issue. Could you find a solution to this problem? I also tried different parameters in the API, but it doesn't work.
I had to add prisma generate to the build command 😒😂
Guys above, thank you!!!
Can you create a commit in a new file, and create a GitHub workflow that appends that file's text to the original file whenever a commit is made?
import os
import re
import asyncio
import logging
import time
import gc
from pathlib import Path

import aiofiles
from telethon import TelegramClient
from telethon.tl.types import MessageMediaDocument, InputDocumentFileLocation
from telethon.tl.functions.upload import GetFileRequest
from telethon.crypto import AES
from telethon.errors import FloodWaitError

# Optimize garbage collection for large file operations
gc.set_threshold(700, 10, 10)

# Set environment variables for better performance
os.environ['PYTHONUNBUFFERED'] = '1'
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

TELEGRAM_API_ID = int(os.getenv("TELEGRAM_API_ID"))
TELEGRAM_API_HASH = os.getenv("TELEGRAM_API_HASH")
TELEGRAM_SESSION_NAME = os.path.join('session', os.getenv('TELEGRAM_SESSION_NAME', 'bot_session'))
TELEGRAM_GROUP_ID = int(os.getenv("GROUP_CHAT_ID"))

TOPIC_IDS = {
    'Doc 1': 137,
}
TOPIC_ID_TO_CATEGORY = {
    137: 'doc 1',
}
CATEGORY_TO_DIRECTORY = {
    'doc 1': '/mnt/disco1/test',
}


class FastTelegramDownloader:
    def __init__(self, client, max_concurrent_downloads=4):
        self.client = client
        self.max_concurrent_downloads = max_concurrent_downloads
        self.semaphore = asyncio.Semaphore(max_concurrent_downloads)

    async def download_file_fast(self, message, dest_path, chunk_size=1024 * 1024,
                                 progress_callback=None):
        """Fast download using multiple concurrent requests for large files."""
        document = message.media.document
        file_size = document.size

        # For smaller files, use the standard download
        if file_size < 10 * 1024 * 1024:  # Less than 10MB
            return await self._standard_download(message, dest_path, progress_callback)

        # Create the input location for the file
        input_location = InputDocumentFileLocation(
            id=document.id,
            access_hash=document.access_hash,
            file_reference=document.file_reference,
            thumb_size=""
        )

        # Calculate the number of chunks and their sizes
        chunks = []
        offset = 0
        chunk_id = 0
        while offset < file_size:
            chunk_end = min(offset + chunk_size, file_size)
            chunks.append({
                'id': chunk_id,
                'offset': offset,
                'limit': chunk_end - offset
            })
            offset = chunk_end
            chunk_id += 1

        logging.info(f"📦 Splitting file into {len(chunks)} chunks of ~{chunk_size // 1024}KB")

        # Download chunks concurrently
        chunk_data = {}
        downloaded_bytes = 0
        start_time = time.time()

        async def download_chunk(chunk):
            nonlocal downloaded_bytes
            async with self.semaphore:
                try:
                    result = await self.client(GetFileRequest(
                        location=input_location,
                        offset=chunk['offset'],
                        limit=chunk['limit']
                    ))
                    # Update progress
                    downloaded_bytes += len(result.bytes)
                    if progress_callback:
                        progress_callback(downloaded_bytes, file_size)
                    return chunk['id'], result.bytes
                except Exception as e:
                    logging.error(f"Error downloading chunk {chunk['id']}: {e}")
                    return chunk['id'], None

        try:
            # Execute the downloads concurrently
            tasks = [download_chunk(chunk) for chunk in chunks]
            results = await asyncio.gather(*tasks, return_exceptions=True)

            # Collect the successful chunks
            for result in results:
                if isinstance(result, tuple) and result[1] is not None:
                    chunk_id, data = result
                    chunk_data[chunk_id] = data

            # Verify that all chunks downloaded successfully
            if len(chunk_data) != len(chunks):
                logging.warning("Some chunks failed, falling back to the standard download")
                return await self._standard_download(message, dest_path, progress_callback)

            # Write the file in the correct order
            async with aiofiles.open(dest_path, 'wb') as f:
                for i in range(len(chunks)):
                    if i in chunk_data:
                        await f.write(chunk_data[i])
                    else:
                        raise Exception(f"Missing chunk {i}")

            duration = time.time() - start_time
            speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
            logging.info(f"✅ Fast download completed: {dest_path} - Speed: {speed:.2f} MB/s")
            return dest_path
        except Exception as e:
            logging.error(f"Fast download failed: {e}")
            return await self._standard_download(message, dest_path, progress_callback)

    async def _standard_download(self, message, dest_path, progress_callback=None):
        """Fall back to the standard download method."""
        document = message.media.document
        file_size = document.size

        # Choose the part size based on the file size
        if file_size > 50 * 1024 * 1024:    # >50MB
            part_size_kb = 1024  # 1MB parts
        elif file_size > 10 * 1024 * 1024:  # >10MB
            part_size_kb = 512   # 512KB parts
        else:
            part_size_kb = 256   # 256KB parts

        start_time = time.time()
        await self.client.download_file(
            document,
            file=dest_path,
            part_size_kb=part_size_kb,
            file_size=file_size,
            progress_callback=progress_callback
        )
        duration = time.time() - start_time
        speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
        logging.info(f"📊 Standard download speed: {speed:.2f} MB/s")
        return dest_path


class MultiClientDownloader:
    def __init__(self, api_id, api_hash, session_base_name, num_clients=3):
        self.api_id = api_id
        self.api_hash = api_hash
        self.session_base_name = session_base_name
        self.num_clients = num_clients
        self.clients = []
        self.client_index = 0
        self.fast_downloaders = []

    async def initialize_clients(self):
        """Initialize multiple client instances."""
        for i in range(self.num_clients):
            session_name = f"{self.session_base_name}_{i}"
            client = TelegramClient(
                session_name,
                self.api_id,
                self.api_hash,
                connection_retries=3,
                auto_reconnect=True,
                timeout=300,
                request_retries=3,
                flood_sleep_threshold=60,
                system_version="4.16.30-vxCUSTOM",
                device_model="HighSpeedDownloader",
                lang_code="es",
                system_lang_code="es",
                use_ipv6=False
            )
            await client.start()
            self.clients.append(client)
            self.fast_downloaders.append(
                FastTelegramDownloader(client, max_concurrent_downloads=2))
            logging.info(f"✅ Client {i + 1}/{self.num_clients} initialized")

    def get_next_client(self):
        """Get the next client using round-robin."""
        client = self.clients[self.client_index]
        downloader = self.fast_downloaders[self.client_index]
        self.client_index = (self.client_index + 1) % self.num_clients
        return client, downloader

    async def close_all_clients(self):
        """Cleanly shut down all clients."""
        for client in self.clients:
            await client.disconnect()


class TelegramDownloader:
    def __init__(self, multi_client_downloader):
        self.multi_client = multi_client_downloader
        self.downloaded_files = set()
        self.load_downloaded_files()
        self.current_download = None
        self.download_stats = {
            'total_files': 0,
            'total_bytes': 0,
            'total_time': 0
        }

    def _create_download_progress_logger(self, filename):
        """Progress logger with reduced frequency."""
        start_time = time.time()
        last_logged_time = start_time
        last_percent_reported = -5
        MIN_STEP = 10     # Report every 10%
        MIN_INTERVAL = 5  # ...or every 5 seconds

        def progress_bar_function(done_bytes, total_bytes):
            nonlocal last_logged_time, last_percent_reported
            current_time = time.time()
            percent_now = int((done_bytes / total_bytes) * 100)
            if (percent_now - last_percent_reported < MIN_STEP and
                    current_time - last_logged_time < MIN_INTERVAL):
                return
            last_percent_reported = percent_now
            last_logged_time = current_time
            speed = done_bytes / 1024 / 1024 / (current_time - start_time or 1)
            msg = (f"⏬ {filename} | "
                   f"{percent_now}% | "
                   f"{speed:.1f} MB/s | "
                   f"{done_bytes / 1024 / 1024:.1f}/{total_bytes / 1024 / 1024:.1f} MB")
            logging.info(msg)

        return progress_bar_function

    async def _process_download(self, message, metadata, filename, dest_path):
        try:
            self.current_download = filename
            logging.info(f"🚀 Starting download: {filename}")
            progress_logger = self._create_download_progress_logger(filename)
            temp_path = dest_path.with_name(f"temp_{metadata['file_name_telegram']}")

            # Get the next available client and downloader
            client, fast_downloader = self.multi_client.get_next_client()
            file_size = message.media.document.size
            start_time = time.time()

            try:
                # Try the fast download first for large files
                if file_size > 20 * 1024 * 1024:  # Files larger than 20MB
                    logging.info(f"📦 Using fast download for a {file_size / 1024 / 1024:.1f}MB file")
                    await fast_downloader.download_file_fast(
                        message, temp_path, progress_callback=progress_logger
                    )
                else:
                    # Use the standard optimized download for smaller files
                    await fast_downloader._standard_download(
                        message, temp_path, progress_callback=progress_logger
                    )
            except Exception as download_error:
                logging.warning(f"Optimized download failed, using the standard method: {download_error}")
                # Final fallback to the basic download
                await client.download_file(
                    message.media.document,
                    file=temp_path,
                    part_size_kb=512,
                    file_size=file_size,
                    progress_callback=progress_logger
                )

            if not temp_path.exists():
                raise FileNotFoundError("The downloaded file was not found")

            # Atomic rename
            temp_path.rename(dest_path)

            # Update statistics
            duration = time.time() - start_time
            speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
            self.download_stats['total_files'] += 1
            self.download_stats['total_bytes'] += file_size
            self.download_stats['total_time'] += duration
            avg_speed = ((self.download_stats['total_bytes'] / 1024 / 1024) /
                         self.download_stats['total_time']
                         if self.download_stats['total_time'] > 0 else 0)
            logging.info(f"✅ Download completed: {dest_path}")
            logging.info(f"📊 Speed: {speed:.2f} MB/s | Session average: {avg_speed:.2f} MB/s")
            self.save_downloaded_file(str(message.id))
        except Exception as e:
            logging.error(f"❌ Download error: {str(e)}", exc_info=True)
            # Clean up partial files on error (temp_path may not be bound yet)
            for path in (locals().get('temp_path'), dest_path):
                if path is not None and hasattr(path, 'exists') and path.exists():
                    try:
                        path.unlink()
                    except OSError:
                        pass
            raise
        finally:
            self.current_download = None

    def load_downloaded_files(self):
        try:
            if os.path.exists('/app/data/downloaded.log'):
                with open('/app/data/downloaded.log', 'r', encoding='utf-8') as f:
                    self.downloaded_files = set(line.strip() for line in f if line.strip())
                logging.info(f"📋 Loaded {len(self.downloaded_files)} already-downloaded files")
        except Exception as e:
            logging.error(f"Error loading the downloaded-files log: {str(e)}")

    def save_downloaded_file(self, file_id):
        try:
            with open('/app/data/downloaded.log', 'a', encoding='utf-8') as f:
                f.write(f"{file_id}\n")
            self.downloaded_files.add(file_id)
        except Exception as e:
            logging.error(f"Error saving the downloaded-file entry: {str(e)}")

    def parse_metadata(self, caption):
        metadata = {}
        try:
            if not caption:
                logging.debug("📂 No caption")
                return None
            pattern = r'^(\w[\w\s]*):\s*(.*?)(?=\n\w|\Z)'
            matches = re.findall(pattern, caption, re.MULTILINE)
            for key, value in matches:
                key = key.strip().lower().replace(' ', '_')
                metadata[key] = value.strip()
            required_fields = [
                'type', 'tmdb_id', 'file_name_telegram',
                'file_name', 'folder_name', 'season_folder'
            ]
            if not all(field in metadata for field in required_fields):
                return None
            if 'season' in metadata:
                metadata['season'] = int(metadata['season'])
            if 'episode' in metadata:
                metadata['episode'] = int(metadata['episode'])
            return metadata
        except Exception as e:
            logging.error(f"Error parsing metadata: {str(e)}")
            return None

    def get_destination_path(self, message, metadata):
        try:
            topic_id = message.reply_to.reply_to_msg_id if message.reply_to else None
            if not topic_id:
                logging.warning("Could not determine the message's topic ID")
                return None
            category = TOPIC_ID_TO_CATEGORY.get(topic_id)
            if not category:
                logging.warning(f"No category found for topic ID: {topic_id}")
                return None
            base_dir = CATEGORY_TO_DIRECTORY.get(category)
            if not base_dir:
                logging.warning(f"No directory configured for category: {category}")
                return None
            filename = metadata.get('file_name')
            if not filename:
                logging.warning("'file_name' field not found in metadata")
                return None
            if metadata['type'] == 'movie':
                dest_dir = Path(base_dir) / metadata['folder_name']
                return dest_dir / filename
            elif metadata['type'] == 'tv':
                season_folder = metadata.get('season_folder', 'Season 01')
                dest_dir = Path(base_dir) / metadata['folder_name'] / season_folder
                return dest_dir / filename
            else:
                logging.warning(f"Unsupported content type: {metadata['type']}")
                return None
        except Exception as e:
            logging.error(f"Error determining the destination path: {str(e)}")
            return None

    async def download_file(self, message):
        try:
            await asyncio.sleep(1)  # Reduced delay
            if not isinstance(message.media, MessageMediaDocument):
                return
            if str(message.id) in self.downloaded_files:
                logging.debug(f"File already downloaded (msg_id: {message.id})")
                return
            metadata = self.parse_metadata(message.message)
            if not metadata:
                logging.warning("Could not extract valid metadata")
                return
            if not metadata.get('file_name'):
                logging.warning("The 'file_name' field is required in the metadata")
                return
            dest_path = self.get_destination_path(message, metadata)
            if not dest_path:
                return
            dest_path.parent.mkdir(parents=True, exist_ok=True)
            if dest_path.exists():
                logging.info(f"File already exists: {dest_path}")
                self.save_downloaded_file(str(message.id))
                return
            await self._process_download(message, metadata, metadata['file_name'], dest_path)
        except Exception as e:
            logging.error(f"Error downloading file: {str(e)}", exc_info=True)

    async def process_topic(self, topic_id, limit=None):
        try:
            logging.info(f"📂 Processing topic ID: {topic_id}")
            # Use the first client for message iteration
            client = self.multi_client.clients[0]
            async for message in client.iter_messages(
                TELEGRAM_GROUP_ID,
                limit=limit,
                reply_to=topic_id,
                wait_time=10  # Reduced wait time
            ):
                try:
                    if message.media and isinstance(message.media, MessageMediaDocument):
                        await self.download_file(message)
                        # Small delay between downloads to prevent rate limiting
                        await asyncio.sleep(0.5)
                except FloodWaitError as e:
                    wait_time = e.seconds + 5
                    logging.warning(f"⚠️ Flood wait detected. Waiting {wait_time} seconds...")
                    await asyncio.sleep(wait_time)
                    continue
                except Exception as e:
                    logging.error(f"Error processing message: {str(e)}", exc_info=True)
                    continue
        except Exception as e:
            logging.error(f"Error processing topic {topic_id}: {str(e)}", exc_info=True)

    async def process_all_topics(self):
        for topic_name, topic_id in TOPIC_IDS.items():
            logging.info(f"🎯 Starting to process: {topic_name}")
            await self.process_topic(topic_id)
            # Print session statistics
            if self.download_stats['total_files'] > 0:
                avg_speed = ((self.download_stats['total_bytes'] / 1024 / 1024) /
                             self.download_stats['total_time'])
                logging.info(f"📊 Statistics for topic {topic_name}:")
                logging.info(f"  📁 Files: {self.download_stats['total_files']}")
                logging.info(f"  💾 Total: {self.download_stats['total_bytes'] / 1024 / 1024 / 1024:.2f} GB")
                logging.info(f"  ⚡ Average speed: {avg_speed:.2f} MB/s")


async def main():
    try:
        # Test cryptg availability
        test_data = os.urandom(1024)
        key = os.urandom(32)
        iv = os.urandom(32)
        encrypted = AES.encrypt_ige(test_data, key, iv)
        decrypted = AES.decrypt_ige(encrypted, key, iv)
        if decrypted != test_data:
            raise RuntimeError("❌ cryptg does not work properly")
        logging.info("✅ cryptg available and working")
    except Exception as e:
        logging.critical(f"❌ ERROR ON CRYPTG: {str(e)}")
        raise SystemExit(1)

    # Ensure the session and data directories exist
    os.makedirs('session', exist_ok=True)
    os.makedirs('/app/data', exist_ok=True)

    # Initialize the multi-client downloader
    multi_client = MultiClientDownloader(
        TELEGRAM_API_ID,
        TELEGRAM_API_HASH,
        TELEGRAM_SESSION_NAME,
        num_clients=3  # Use 3 clients for better speed
    )

    try:
        logging.info("🚀 Initializing multiple clients...")
        await multi_client.initialize_clients()
        downloader = TelegramDownloader(multi_client)
        logging.info("📥 Starting download of all topics...")
        await downloader.process_all_topics()
        logging.info("✅ Process completed successfully")
    except Exception as e:
        logging.error(f"Error in main: {str(e)}", exc_info=True)
    finally:
        logging.info("🔌 Closing connections...")
        await multi_client.close_all_clients()


if __name__ == "__main__":
    asyncio.run(main())
There are a couple of things you should check (by the way, please share your cloud function code snippet).
Make sure that you are calling/invoking supported GCP Vertex AI Gemini models (Gemini 2.0, Gemini 2.5 Flash/Pro, etc.). Models like PaLM, text-bison, and even earlier Gemini models (like Gemini 1.0) have been deprecated; that is most likely why you are getting a 404. Please check the supported models doc to pick a proper Gemini model.
Verify that you followed the Vertex AI getting started guide to set up your access to the Gemini model. Based on what you described:
You have a GCP project
You enabled the Vertex AI API
IAM: try granting your GCP account the Vertex AI User role. For details, check the Vertex AI IAM permissions doc.
I recommend using the Google Gen AI SDK for Python to call Gemini models. It handles the endpoint and authentication; you just need to specify the model to use, for example: gemini-2.5-flash
These steps should get you going. Please share a code snippet so that I can share an edited snippet back.
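As a rough sketch of that recommendation (the project ID, region, and the deprecated-prefix list below are illustrative assumptions, not from the original post; requires the google-genai package and Application Default Credentials):

```python
# Sketch: calling a current Gemini model on Vertex AI via the Google Gen AI SDK.
DEPRECATED_PREFIXES = ("text-bison", "chat-bison", "gemini-1.0")  # assumed list


def is_supported_model(model_id: str) -> bool:
    # Deprecated model families are the usual cause of 404s from Vertex AI.
    return not model_id.startswith(DEPRECATED_PREFIXES)


def generate(prompt: str, model_id: str = "gemini-2.5-flash") -> str:
    from google import genai  # imported lazily; needs the google-genai package

    if not is_supported_model(model_id):
        raise ValueError(f"{model_id} is deprecated on Vertex AI")
    # vertexai=True routes the call through Vertex AI instead of the public API.
    client = genai.Client(vertexai=True, project="PROJECT_ID",  # placeholder
                          location="us-central1")
    response = client.models.generate_content(model=model_id, contents=prompt)
    return response.text


if __name__ == "__main__":
    print(generate("Say hello in one short sentence."))
```

The lazy import keeps the deprecation check usable even where the SDK is not installed; swap in your real project ID and region before running.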
I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write‑up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf
You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main
I welcome feedback and discussion on this claim.
I'm having the same problem. I tried various nodemailer examples from the internet but still failed. After tracking down the problem, it turned out that in my case the API was not working properly because of "output: 'export'" in the next.config.js file. If you don't use "output: 'export'", Next.js uses its full-stack capability, which means it supports API routes (serverless functions on Vercel). So if anyone has the same problem and it has not been resolved, my suggestion is to remove "output: 'export'" from next.config.js. By the way, I use nodemailer, Gmail SMTP, and deploy to Vercel.
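For reference, a minimal next.config.js along those lines (a hypothetical sketch, not the poster's actual config) simply leaves the `output` option out:

```javascript
// next.config.js - hypothetical minimal config.
// With `output: 'export'` set, Next.js produces a static export and API
// routes are not deployed; omitting it keeps serverless API routes working.
/** @type {import('next').NextConfig} */
const nextConfig = {
  // output: 'export',  // remove or comment out to re-enable API routes
};

module.exports = nextConfig;
```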
Did you manage to resolve this? I am hitting the same issue.
Thanks!
The moneyRemoved variable wasn't being set true. I should have debugged better. Thank you to @Rufus L though for showing me how to properly get the result from an async method without using .GetAwaiter().GetResult()!
I am getting the same error.
According to the documentation, expo-maps is not available in Expo Go.
But I am not sure whether I have installed Expo Go (I think I did).
no, I don't have such a file
I just had this same error pop up when trying to do the same thing with my data. Does cID by any chance have values that are not just a simple numbering 1:n(clusters)? My cluster identifiers were about 5 digits long and when I changed the ID values to numbers 1:n(clusters), the error magically disappeared.
Thank you all; I had forgotten to replace myproject.pdb.json on the server.
The solution is to change the AspNetCoreHostingModel of all the applications to OutOfProcess in the web.config:
<AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
I have this problem too. There may be alternative ways, but why is this not working?
It's better to have a server, but if you don't have any server you can use HOST.
Did you manage to solve this?
Is it resolved? I am facing the same issue.
Did you manage to solve this? I had to do the following, which forced the connection to be Microsoft.Data.SqlClient by creating it myself. I also pass true to the base constructor to let the context dispose the connection afterwards.
using System;
using System.Data.Common;
using System.Data.Entity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Data.SqlClient;
using Mintsoft.LaunchDarkly;
using Mintsoft.UserManagement.Membership;

namespace Mintsoft.UserManagement
{
    public class UserDbContext : IdentityDbContext<ApplicationUser>
    {
        private static DbConnection GetBaseDbConnection()
        {
            var connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["BaseContext"].ConnectionString;
            return new SqlConnection(connectionString);
        }

        public UserDbContext()
            : base(GetBaseDbConnection(), true)
        {
        }
    }
}
I still can't figure it out. Can anyone help me?
Same issue: my PPT is generated according to my requirements, but it opens with repair issues. I have tried many ways to resolve this but am unable to remove it.
Can anyone help me? I have used OpenXML and HtmlAgilityPack.
You can press ctrl+[ to remove tabulation
ok, i understand the information
Just turn off the session isolation toggle in the website config.
Can you tell me which versions of RevenueCat and Superwall you used in both projects?
The StoreProduct error occurs because purchases_flutter and Superwall both contain a class named StoreProduct; you will need to hide one of them.
Are you using a non-unique ID user store?
Also, could you let us know the version of the IWA Kerberos authenticator you are currently using?
There is a known issue (https://github.com/wso2/product-is/issues/21053) related to non-unique ID user stores when used with the IWA Kerberos authenticator. However, this issue has been resolved in the latest version.
Providing the above information will help us identify the exact cause of the issue.
How do you get the developer disk image for OS 18? It is not downloading automatically for me from Xcode either. Can you help me with how you got the DDI?
My apologies, I had a moment, my original post works. Coffee time :)
This is cool and very useful thanks for the info
I know I'm late, but in v6 there is a built-in function for this.
https://www.tradingview.com/pine-script-reference/v5/#fun_ticker.standard
@Leyth resolved this. There was a line that truncated the files when they were being transformed. Things appeared fine until the file grew past a certain limit. Then it removed the lines that extended beyond the threshold. I removed that line (which wasn't needed and I do not recall adding in the first place) and the data appears correctly.
So in the case of not using TypeScript, just React + Vite, do you still use protobuf-ts, or something else?
@Douglas B
I am trying to compile XNNPACK for the Zynq UltraScale+ and am running into the same issues you described two years ago. Can you share your BitBake recipe or makefile?
If anyone has solved a 3D bin packing algorithm, can you share the code, or the flow in which it needs to be done?
Hello, my situation is very similar to yours. Can you please tell me how to log into Data Studio with Windows credentials? I'm having the same problem with DB2admin, and I don't see any way to switch to a different user ID.
thanks
Bigtable now supports Global Secondary Indexes.
Thank you, that looks amazing!
Can you explain why it is marking the whole street and not only the selected part?
I need to reduce it to the highlighted part because I want to do routing on that map.
So I probably need the "use" feature to split the dataset for the routing function...
I got the same error; then I added this to tsconfig.json and it was fixed:
"paths": {
    "@firebase/auth": ["./node_modules/@firebase/auth/dist/index.rn.d.ts"]
}
Can someone tell me what UI style this form uses in VB.NET: the style of the buttons, the shapes, the color gradients? The borders of the group boxes are ETCHED, and the DataGridView design is modern, simple, and elegant. Is there a plugin for this, or is there code for this design of the components? Thanks!
Good morning, I hope you're well.
Did you manage to resolve this error? I'm having the same problem.
I have the same problem, and I think it is because the retry rule does not trigger when all pods are down. But I'm not completely sure about this.
I realize this is an old question, but I recently started experiencing a strange issue related to transparent backgrounds. I often use the -t flag when using manim, but just recently I am no longer getting a transparent background and can't figure out why. Manim is still producing a .mov file (instead of .mp4), but the file has a black background rather than a transparent background. I'm working on a mac and recently upgraded the operating system, so I suspect that might have something to do with it. Has anyone else experienced this issue and does anyone know a workaround?
It is really helpful. It helped me change my root password, as I had forgotten it. Now I am able to use the MySQL root database with the help of the new password.
I have literally the same case.
However, I did these steps and it still did not work.
I am setting the default value of an item on the screen.
If I do not submit the page, the default value doesn't take effect.
Any help?
Authentication v2
Authentication
GET
/v2/auth/metadata
Gets authorization metadata
Parameters
No parameters
Responses
Code Description
200
OK
Example Value
Model
{
"cookieLawNoticeTimeout": 0
}
POST
/v2/login
Authenticates a user.
Parameters
Name Description
request *
object
(body)
Roblox.Authentication.Api.Models.LoginRequest.
Example Value
Model
{
"accountBlob": "string",
"captchaId": "string",
"captchaProvider": "string",
"captchaToken": "string",
"challengeId": "string",
"ctype": 0,
"cvalue": "string",
"password": "string",
"secureAuthenticationIntent": {
"clientEpochTimestamp": 0,
"clientPublicKey": "string",
"saiSignature": "string",
"serverNonce": "string"
},
"securityQuestionRedemptionToken": "string",
"securityQuestionSessionId": "string",
"userId": 0
}
Responses
Code Description
200
OK
Example Value
Model
{
"accountBlob": "string",
"identityVerificationLoginTicket": "string",
"isBanned": true,
"recoveryEmail": "string",
"shouldUpdateEmail": true,
"twoStepVerificationData": {
"mediaType": 0,
"ticket": "string"
},
"user": {
"displayName": "string",
"id": 0,
"name": "string"
}
}
400
0: An unexpected error occurred. 3: Username and password are required. Please try again. 8: Login with the received credential type is not supported.
403
0: Token validation failed. 1: Incorrect username or password. Please try again. 2: You must pass the robot test before logging in. 4: The account has been locked. Please request a password reset. 5: Unable to log in. Please use social network sign-in. 6: Account issue. Please contact Support. 9: Unable to log in with the provided credentials. Default login is required. 10: Received credentials are not verified. 12: Existing login session found. Please log out first. 14: The account cannot log in. Please log in with the LuoBu app. 15: Too many attempts. Please wait a bit. 27: The account cannot log in. Please log in with the VNG app.
429
7: Too many attempts. Please wait a bit.
503
11: Service unavailable. Please try again.
POST
/v2/logout
Destroys the current authentication session.
POST
/v2/logoutfromallsessionsandreauthenticate
Logs the user out of all other sessions.
IdentityVerification
POST
/v2/identity-verification/login
Endpoint for login with identity verification
Metadata
GET
/v2/metadata
Gets the metadata
PasswordsV2
GET
/v2/passwords/current-status
Returns the password status for the current user, asynchronously.
GET
/v2/passwords/reset
Gets metadata needed for the password reset view.
POST
/v2/passwords/reset
Resets a password for the user that owns the password reset ticket.
This logs the user out of all sessions and re-authenticates.
Parameters
Name Description
request *
object
(body)
The request model, including the target type, the ticket, the user ID, and the new password, Roblox.Authentication.Api.Models.PasswordResetModel
Example Value
Model
Roblox.Authentication.Api.Models.PasswordResetModel{
accountBlob string
newEmail string
password string
passwordRepeated string
secureAuthenticationIntent Roblox.Authentication.Api.Models.Request.SecureAuthenticationIntentModel{
clientEpochTimestamp integer($int64)
clientPublicKey string
saiSignature string
serverNonce string
}
targetType integer($int32)
['Email' = 0, 'Phone Number' = 1, 'RecoverySessionID' = 2]
Enum:
Array [ 3 ]
ticket string
twoStepVerificationChallengeId string
twoStepVerificationToken string
userId integer($int64)
}
Responses
Code Description
200
OK
Example Value
Model
{
"accountBlob": "string",
"identityVerificationLoginTicket": "string",
"isBanned": true,
"recoveryEmail": "string",
"shouldUpdateEmail": true,
"twoStepVerificationData": {
"mediaType": 0,
"ticket": "string"
},
"user": {
"displayName": "string",
"id": 0,
"name": "string"
}
}
400
3: The request was empty. 11: The password reset ticket is invalid. 12: The user is invalid. 20: The password is not valid. 21: The passwords do not match.
403
0: Token validation failed. 16: The ticket has expired. 17: The nonce has expired.
500
0: An unknown error occurred.
503
1: Feature temporarily disabled. Please try again later.
POST
/v2/passwords/reset/send
Sends a password reset email or challenge to the specified target.
POST
/v2/passwords/reset/verify
Verifies a password reset challenge solution.
GET
/v2/passwords/validate
Endpoint for checking whether a password is valid.
POST
/v2/passwords/validate
Endpoint for checking whether a password is valid.
Recovery
GET
/v2/recovery/metadata
Gets metadata for the forgot-credentials endpoints
Revert
GET
/v2/revert/account
Gets Account Revert ticket information
POST
/v2/revert/account
Submits an Account Revert request
Signup
POST
/v2/signup
Endpoint for signing up a new user
Passwords
POST
/v2/user/passwords/change
Changes the password for the authenticated user.
The current password is required to verify that the password can be changed.
Parameters
Name Description
request *
object
(body)
The request model, including the current password and the new password.
Example Value
Model
{
"currentPassword": "string",
"newPassword": "string",
"secureAuthenticationIntent": {
"clientEpochTimestamp": 0,
"clientPublicKey": "string",
"saiSignature": "string",
"serverNonce": "string"
}
}
Any solutions? I'm facing the same error.
@Marcin Kapusta Thanks for posting this. Any update? We have also been facing the same issue for the past week.
Yeah, I am facing the same thing; even though you do this:
canConfigureBusOff(3, 0x153, 1);
canConfigureBusOff(3, 0x153, 0);
still in the bus stat it'll switch bw passive and active error state , it'll not come to Bus Off
This solution does not work in practice. I tried it with ConfuserEx2 obfuscation, including multiple variants of
[Obfuscation(Exclude = true, ApplyToMembers = true, StripAfterObfuscation = false)]
but it is still not practically useful. Please suggest something else.
Please, someone help. I'm getting "Database error: Transaction not connected" when I try to connect to SQL Anywhere 17 from my PowerBuilder app.
This was answered here in the end :)
Did you manage to solve this in the meantime? I'm not sure what support meant by "ignore it", but it sounds to me like your account is not fully set up yet, or you have not registered the sender / made the necessary checks for the country you are sending to. Check here - https://www.infobip.com/docs/essentials/getting-started/sms-coverage-and-connectivity
Should the marginal effects be plotted as additive components (i.e., centered at zero mean), or absolute trends over time?
Is there a solution? I have had the same problem recently.
How did you solve it, then?
Can you help me with this?
I also got the same error, but I am using Directus as a CMS. I don't know what is causing the error; I have tried many solutions.
To fix this behavior, set the CSS property transform: perspective(0); on the scroll container.
I was also having issues connecting; I followed this blog.
Please see the blog below:
From my understanding, there are a few problems in your setup.
I faced a similar issue before, so please try this based on my experience.
config.py:
SESSION_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True
app.py:
CORS(app, supports_credentials=True, resources={
r"/*": {
"origins": [
"http://localhost:3000",
"https://453c-162-199-93-254.ngrok-free.app"
]
}
})
If it is still unauthorized, then please let me know.
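For what it's worth, those two session settings only change the attributes Flask emits on its Set-Cookie header. A quick standard-library sketch (independent of Flask; the cookie name and value here are made up) of what the resulting cookie looks like on the wire:

```python
from http.cookies import SimpleCookie

# Illustrative only: these attributes correspond to
# SESSION_COOKIE_SAMESITE = "None" and SESSION_COOKIE_SECURE = True.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "None"  # let the browser send it cross-site
cookie["session"]["secure"] = True      # required whenever SameSite=None

header = cookie["session"].OutputString()
print(header)  # e.g. session=abc123; Secure; SameSite=None
```

Browsers silently drop SameSite=None cookies that are not also Secure, which is why both settings are needed for cross-origin credentialed requests.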
Something to try: maybe version 1.0.0-M6 of Spring AI can work.
You shouldn't be using WPF in 2025.
For some reason, if I have two emails with similar subjects (e.g., email 1 with subject "#general" and email 2 with subject "#enterprise-general"), the Apps Script takes both subjects and inserts them into the "#general" sheet. Is there a way to insert only on an exact subject match in the code? Thanks.
I can confirm @Kevinoid's suggestion works. What also works is using "%h" instead of (or to replace) $HOME in the path of the file/directory in systemd unit files.
See URL: https://bbs.archlinux.org/viewtopic.php?id=297777
"%h" is equivalent to "$HOME", so it can work without the /bin/bash -c 'exec ...' part.
Both work for me.
Cheers.
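A minimal sketch of what the comment above describes, using hypothetical paths in a user unit file (for user services, the %h specifier expands to the user's home directory at load time, so no shell wrapper is needed):

```ini
# ~/.config/systemd/user/my-script.service (hypothetical example)
[Service]
# %h expands to the user's home directory, replacing $HOME
ExecStart=%h/bin/my-script
WorkingDirectory=%h/projects
```

Note that specifiers like %h are expanded by systemd itself, unlike $HOME, which would require a shell.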
I was facing this problem with Laravel 12, using Swagger. The solution above worked for me.
Thanks for sharing this StackOverflow discussion on PDF clipping logic; it's a useful breakdown of how graphics contexts handle shape boundaries.
Solved: you need to close the COM port before changing the baud rate.
Is there a solution? I have the same problem.
I'm leveraging the Decorator pattern in Quarkus to implement functionality equivalent to Spring JPA Specification: https://github.com/VithouJavaMaestro/quarkus-query-sculptor/tree/master
I’m encountering the same problem when trying to pass data from an MCP client to an MCP server using Spring AI. I haven’t found a solution yet — any help or direction would be greatly appreciated!
MCP Client Code:
@Service
public class ChatbotService {

    protected Logger logger = LoggerFactory.getLogger(getClass());

    private ChatClient chatClient;

    public ChatbotService(ChatClient chatClient) {
        this.chatClient = chatClient;
        logger.info("ChatbotService is ready");
    }

    String chat(String question, String apiKey) {
        return chatClient
                .prompt()
                .user(question)
                .toolContext(Map.of("apiKey", apiKey))
                .call()
                .content();
    }
}
MCP Server Code:
@Tool(name = "getUserPermissionsByUsername", description = "Get user permissions by username. This tool is used to retrieve the permissions of a user by their username. Must supply the username and an apiKey for authentication.")
private List<String> getUserPermissionsByUsername(@ToolParam(description = "User Name") String username, String apiKey, ToolContext toolContext) {
    try {
        // Problem: apiKey is not passed, either as a String parameter or
        // inside toolContext. Note that values supplied on the client via
        // .toolContext(Map.of("apiKey", ...)) are meant to be read here with
        // toolContext.getContext().get("apiKey"), not as a method parameter.
        return userProxy.getUserPermissionsByUsername(username);
    } catch (Exception e) {
        return new ArrayList<>();
    }
}
Man, I am facing the same issue. Copying from Chrome to another place just shows the window with the terminate-or-wait option, on my Arch Linux WM (Hyprland).
Thank you to Jon Spring for his linked answer:
set position / alignment of ggplot in separate pdf-images after setting absolute panel size
https://patchwork.data-imaginist.com/articles/guides/multipage.html
Using patchwork, I was able to come pretty close.
With patchwork, you can align two plots completely, making their plotting areas the same size regardless of their surroundings (legends, axis text, etc.).
Continuing the example from above:
Just found out about this alternative: https://marketplace.visualstudio.com/items?itemName=danilocolombi.repository-insights
It suits my needs. Maybe it will fit yours... Cheers!
@Roee Shenberg The command works, but how do you export those two paths in `~/.zprofile`?
Why don't you try using the findContours method from the OpenCV library?
Thank you.
This helped!
Thank you for the help. My name is Oualid Berini.
I tried, but when creating a new app, my app runs on v23 and I am unable to retrieve the data. Is there any other way to do this? Best regards.
Ask ChatGPT for a faster answer, dude.
Thanks brother, you're a genius, this works!
After entering the formula, it returns a "#N/A" error. Could you please assist me with how to fix this issue?
=CONCATENATE(VLOOKUP(B6,Purchase!$B$2:$L$32,10,FALSE)," / ",VLOOKUP('Beverage - Master'!B6,Purchase!$B$2:$L$32,11,FALSE))
Could this also be related to how the frontend handles the redirect after a successful login? I'm wondering if the server action finishes before the browser has a chance to store the cookies, especially when working in development mode without HTTPS. Has anyone seen this behavior when testing locally with Next.js server functions?
Can anyone help me set up the webhook endpoints and configuration in Node.js?