Guys above, thank you!!!
Initialize your JSON object (from a SharePoint list) as an Array
Parse the JSON object
Use Select (Data Operation) to get the values Company, Date From, Date To, and Title
Append to a string variable to get:
[
{
"Company": "Line2",
"Date From": "2022-03-21",
"Date To": "2022-03-29",
"Title": "Title 2"
},
{
"Company": "Test1",
"Date From": "2022-03-30",
"Date To": "2022-03-31",
"Title": "Title 1"
}
]
The evaluation_strategy keyword argument in TrainingArguments has been replaced with eval_strategy. Using the old argument causes:
TrainingArguments.__init__() got an unexpected keyword argument 'evaluation_strategy'
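For example, with a recent transformers version (output_dir and the strategy value here are placeholders):
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="out",        # placeholder path
    eval_strategy="epoch",   # formerly: evaluation_strategy="epoch"
)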
from django import forms
from .models import Invoice_customer_list,Invoice_product_list
# Form for the Invoice model
class Invoice_customer_Form(forms.ModelForm):
class Meta:
model = Invoice_customer_list
fields = ['customer']
labels = {
'customer': 'Select Customer',
}
class Invoice_product_form(forms.ModelForm):
class Meta:
model = Invoice_product_list
fields = ['product']
labels = {
'product':'Select Product'
}
I want to create a form like this; here is the updated code.
We adopted a Gitflow-like process where we have a branch for each stage we deploy to:
development (dev),
test (test),
acceptance (acc)
master (production).
Feature and bugfix branches are started from and merged back to development. These merges are squashed to simplify the Git history (optional). Then, PRs are made from development to test and from test to acceptance to promote these new releases. Each branch triggers its own build and release. This setup allows us to still hotfix specific environments in case of an urgent problem.
When merging from development to another environment, we use a normal merge commit to preserve the Git history. This way, new commits just get added to different branches. To make sure teammates don't make mistakes when merging, the specific type of merge we want for the branches is specified in the branch protection policies.
We do not use release branches or tag specific releases. Instead, we use conventional commit messages and a tool called GitVersion to automatically calculate semantic version numbers that we then use as build and release numbers.
The specifics of the ECS game domain are that you have many of each of these things.
In C++, a component is a POD struct, and the key point is that you have a container of PODs. Access to and management of the PODs goes through the container interface, for example std::array or std::vector, preferring access by container index rather than by pointer.
This is because ECS relies on cache locality, so the cores and caches are used optimally.
Components are updated in a system class that uses the container to process them in bulk.
You often won't see any of the downsides of inheritance in very small-scoped games like Pong. But in a game with the scope of ARMA or Homeworld, with a huge diversity of entities, you will.
ECS is plural (entity IDs, components), so there is a point where a mix of OOP comes in, but it sits above the container level: entity and component container interfaces and managers.
MPMoviePlayerController supports some legacy formats better while AVPlayer requires proper HLS formatting. Check your .m3u8 stream for correct audio codec, MIME type, and HLS compliance for AVPlayer.
Can you create a commit in a new file and create a GitHub workflow to append that file's text to the original file each time a commit is made?
On terminal
#curl -s http://169.254.169.254/latest/meta-data/instance-id
gives the instance ID; 169.254.169.254 is a special AWS internal IP address used to access the Instance Metadata Service (IMDS) from within an EC2 instance.
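The same lookup from Python, as a minimal sketch (assumes the requests package; on instances that enforce IMDSv2 you must fetch a session token first):
import requests
# IMDSv2: fetch a short-lived session token
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text
# Pass the token when reading metadata
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).text
print(instance_id)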
Thanks for the explanation, Yong Shun. That clears things up. I was also under the impression that in ngOnInit the @Input() values would be ready, but it makes sense now why it's still null at that point. I'll try using ngOnChanges or ngAfterViewInit depending on the use case. The mention of Angular Signals is interesting too; I hadn't explored that yet. Appreciate the insights!
1. Register the New Menu Location in functions.php
2. Assign the Menu in WordPress Admin
3. Add the Menu to the firstpage Template
Unlike Express, there is no req.body object; the body has to be handled by the request handlers. You can follow this article:
https://nodejs.org/en/learn/modules/anatomy-of-an-http-transaction#request-body
let incomingData = [];
req.on('data', chunk => {
  incomingData.push(chunk);
})
.on('end', () => {
  // Join the buffered chunks and parse the JSON body
  const body = Buffer.concat(incomingData);
  const name = JSON.parse(body.toString());
  console.log("converted to string", name.name);
});
If your input file has the extension .webm, use this:
ffmpeg -re -i "your_file.webm" -map 0 -c:a copy -c:v copy -single_file "true" -f dash "your_output.mpd"
Fixed a File Share access issue by creating a Private Endpoint with a Private DNS Zone; the self-hosted Azure DevOps agent now has access.
We need at least 3 items to build the essential triangles that make up an AVL tree, and some data to store and search on.
How to politely say the basic data structure is wrong for any AVL tree? Each node of an AVL tree needs at least 5 items:
1. the data, or a pointer to the data; the data MUST include a key
2. a pointer to the left child node
3. a pointer to the right child node
4. a pointer to the PARENT node, which is missing here
5. one or more items to enforce the balance of the tree
One could store these extra item(s) separately. As a side note, one could use an index into an array rather than a C-style pointer. In any case, the code is a cascade of misdesign with errors in each function; see the sketch below.
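A minimal Python sketch of that node layout (field names are illustrative, not taken from the posted code):
from dataclasses import dataclass
from typing import Any, Optional
@dataclass
class AVLNode:
    key: Any                            # 1. the data (or a pointer to it); must include a key
    left: Optional["AVLNode"] = None    # 2. left child
    right: Optional["AVLNode"] = None   # 3. right child
    parent: Optional["AVLNode"] = None  # 4. parent link, the item missing from the posted code
    balance: int = 0                    # 5. balance factor, height(right) - height(left)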
No doubt it compiles (under which OS and compiler?). To debug and test, you will want to write one or a series of tree display subprograms. I'm currently building a general-purpose 32/64-bit Intel AVL tree. I'm at about 2000 lines and not done [verbose is my nickname]. It is intended for symbol tables for a compiler. I did some web searches for code and found a lot of broken examples. Search in an AVL tree should be about m*O(lg N), insert about m*O(2*lg N) because of the retrace. Delete and other operations such as bulk load are not needed for my intended use.
Some rules changed with Tailwind CSS v4.
V4 Angular setups are initialized with a CSS file and use the @tailwindcss/postcss plugin.
Differences vs V3:
V4 does not include/create a tailwind.config.js file by default; on newer Angular versions v4 needs a .postcssrc.json instead.
The V4 tutorial uses styles.css in the Angular project root, not styles.scss, so some usage changed.
styles.css needs @import "tailwindcss"; rather than the v3-style @tailwind directives.
With a CSS file in an Angular component, V4 needs an extra statement, @import "tailwindcss";, if you want to use @apply.
For manually controlled "dark" mode, Tailwind CSS v4 needs the statement @custom-variant dark (&:where(.dark, .dark *)); added in styles.css.
But if you want to use dark: in another component (lazy routing or a lazy component), you must add @import "tailwindcss"; in that component's CSS file and add @custom-variant dark (&:where(.dark, .dark *)); again.
Steps to add Tailwind CSS v4 in a newer Angular version:
Add TailwindCSS to project
npm install tailwindcss @tailwindcss/postcss postcss --force
Create the PostCSS config (.postcssrc.json)
{
"plugins": {
"@tailwindcss/postcss": {}
}
}
Add TailwindCSS in root styles.css
@import "tailwindcss";
Set dark mode support (optional) in root styles.css
@import "tailwindcss";
@custom-variant dark (&:where(.dark, .dark *)); /* <html class="dark"> */
In each lazy component/module (required), e.g. src/dashboard/slidebar/slidebar.css:
@import "tailwindcss"; /* for @apply usage */
@custom-variant dark (&:where(.dark, .dark *)); /* for dark:text-white usage */
.menu-item {
@apply flex flex-row w-full h-32 text-slate-100 dark:text-white;
}
.menu-item.active { @apply text-slate-800 dark:text-slate-100; }
You may want to try adding the resizable window flag on window creation.
SDL_WINDOW_RESIZABLE
Like this:
window = SDL_CreateWindow("Test Window", 800, 800, SDL_WINDOW_BORDERLESS | SDL_WINDOW_RESIZABLE);
If you want to set it globally for the whole app, use
<style>
<item name="android:includeFontPadding">false</item>
</style>
inside your theme.xml
dat<-as.data.frame(rexp(1000,0.2))
g <- ggplot(dat, aes(x = dat[,1]))
g + geom_histogram(alpha = 0.2, binwidth = 5, colour = "black") +
geom_line(stat = "bin", binwidth = 5, linewidth = 1)
I met the same problem. You defined stat = "bin" for geom_line, which makes geom_line compute its values the same way geom_histogram or geom_freqpoly do.
According to the Declaration Merging section in the TypeScript official documentation, it mentions:
Non-function members of the interfaces should be unique. If they are not unique, they must be of the same type. The compiler will issue an error if the interfaces both declare a non-function member of the same name, but of different types.
Absolutely none of these work for me. I downloaded the latest Android Studio today, 7/1/2025. I have no idea what version it is because there are too many numbers to figure it out. This should not be that difficult.
I just want the toolbar with all of the icons to display at the top. It has icons for running in debug mode, adding/removing comment block, etc. I am not talking about the Menu bar that has File, Edit, View, etc. I want the icon bar or tool bar. Whatever you want to call it.
If we track all possibilities, then
first if condition gives us
T(n)=O(n/2)+T(n/2) equivalent to T(n)=O(n)+T(n/2)
second gives us
T(n)=2*O(n/2)+T(n/2) equivalent to T(n)=O(n)+T(n/2)
for the third one
You can easily see that all possibilities will be equivalent to T(n)=O(n)+T(n/4).
From these recursions you can deduce that T(n)=O(n) i.e. the time complexity is linear.
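A quick unrolling makes that deduction explicit (a standard geometric-series bound):
T(n) \le c\,n + T(n/2) \le c\,n + c\,\frac{n}{2} + c\,\frac{n}{4} + \cdots \le c\,n \sum_{i=0}^{\infty} 2^{-i} = 2c\,n = O(n)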
On your merge sort analogy: the array is broken up in a similar way, but if you observe carefully we don't operate on every chunk, unlike merge sort. At each of the log n levels of merge sort we deal with all n elements, while here we deal with n/(2^i), i.e. the work decays exponentially.
import os
import re
import asyncio
import logging
import time
import gc
from pathlib import Path
from telethon import TelegramClient, events
from telethon.tl.types import MessageMediaDocument, InputDocumentFileLocation
from telethon.tl.functions.upload import GetFileRequest
from telethon.crypto import AES
from telethon.errors import FloodWaitError
import aiofiles
from concurrent.futures import ThreadPoolExecutor
# Optimize garbage collection for large file operations
gc.set_threshold(700, 10, 10)
# Set environment variables for better performance
os.environ['PYTHONUNBUFFERED'] = '1'
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S'
)
TELEGRAM_API_ID = int(os.getenv("TELEGRAM_API_ID"))
TELEGRAM_API_HASH = os.getenv("TELEGRAM_API_HASH")
TELEGRAM_SESSION_NAME = os.path.join('session', os.getenv('TELEGRAM_SESSION_NAME', 'bot_session'))
TELEGRAM_GROUP_ID = int(os.getenv("GROUP_CHAT_ID"))
TOPIC_IDS = {
'Doc 1': 137,
}
TOPIC_ID_TO_CATEGORY = {
137: 'doc 1',
}
CATEGORY_TO_DIRECTORY = {
'doc 1': '/mnt/disco1/test',
}
class FastTelegramDownloader:
def __init__(self, client, max_concurrent_downloads=4):
self.client = client
self.max_concurrent_downloads = max_concurrent_downloads
self.semaphore = asyncio.Semaphore(max_concurrent_downloads)
async def download_file_fast(self, message, dest_path, chunk_size=1024*1024, progress_callback=None):
"""
Fast download using multiple concurrent connections for large files
"""
document = message.media.document
file_size = document.size
# For smaller files, use standard download
if file_size < 10 * 1024 * 1024: # Less than 10MB
return await self._standard_download(message, dest_path, progress_callback)
# Create input location for the file
input_location = InputDocumentFileLocation(
id=document.id,
access_hash=document.access_hash,
file_reference=document.file_reference,
thumb_size=""
)
# Calculate number of chunks and their sizes
chunks = []
offset = 0
chunk_id = 0
while offset < file_size:
chunk_end = min(offset + chunk_size, file_size)
chunks.append({
'id': chunk_id,
'offset': offset,
'limit': chunk_end - offset
})
offset = chunk_end
chunk_id += 1
logging.info(f"📦 Splitting file into {len(chunks)} chunks of ~{chunk_size//1024}KB")
# Download chunks concurrently
chunk_data = {}
downloaded_bytes = 0
start_time = time.time()
async def download_chunk(chunk):
async with self.semaphore:
try:
result = await self.client(GetFileRequest(
location=input_location,
offset=chunk['offset'],
limit=chunk['limit']
))
# Update progress
nonlocal downloaded_bytes
downloaded_bytes += len(result.bytes)
if progress_callback:
progress_callback(downloaded_bytes, file_size)
return chunk['id'], result.bytes
except Exception as e:
logging.error(f"Error downloading chunk {chunk['id']}: {e}")
return chunk['id'], None
try:
# Execute downloads concurrently
tasks = [download_chunk(chunk) for chunk in chunks]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Collect successful chunks
for result in results:
if isinstance(result, tuple) and result[1] is not None:
chunk_id, data = result
chunk_data[chunk_id] = data
# Verify all chunks downloaded successfully
if len(chunk_data) != len(chunks):
logging.warning(f"Some chunks failed, falling back to standard download")
return await self._standard_download(message, dest_path, progress_callback)
# Write file in correct order
async with aiofiles.open(dest_path, 'wb') as f:
for i in range(len(chunks)):
if i in chunk_data:
await f.write(chunk_data[i])
else:
raise Exception(f"Missing chunk {i}")
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"✅ Fast download completed: {dest_path} - Speed: {speed:.2f} MB/s")
return dest_path
except Exception as e:
logging.error(f"Fast download failed: {e}")
return await self._standard_download(message, dest_path, progress_callback)
async def _standard_download(self, message, dest_path, progress_callback=None):
"""Fallback to standard download method"""
document = message.media.document
file_size = document.size
# Optimize chunk size based on file size
if file_size > 100 * 1024 * 1024: # >100MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 50 * 1024 * 1024: # >50MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 10 * 1024 * 1024: # >10MB
part_size_kb = 512 # 512KB chunks
else:
part_size_kb = 256 # 256KB chunks
start_time = time.time()
await self.client.download_file(
document,
file=dest_path,
part_size_kb=part_size_kb,
file_size=file_size,
progress_callback=progress_callback
)
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"📊 Standard download speed: {speed:.2f} MB/s")
return dest_path
class MultiClientDownloader:
def __init__(self, api_id, api_hash, session_base_name, num_clients=3):
self.api_id = api_id
self.api_hash = api_hash
self.session_base_name = session_base_name
self.num_clients = num_clients
self.clients = []
self.client_index = 0
self.fast_downloaders = []
async def initialize_clients(self):
"""Initialize multiple client instances"""
for i in range(self.num_clients):
session_name = f"{self.session_base_name}_{i}"
client = TelegramClient(
session_name,
self.api_id,
self.api_hash,
connection_retries=3,
auto_reconnect=True,
timeout=300,
request_retries=3,
flood_sleep_threshold=60,
system_version="4.16.30-vxCUSTOM",
device_model="HighSpeedDownloader",
lang_code="es",
system_lang_code="es",
use_ipv6=False
)
await client.start()
self.clients.append(client)
self.fast_downloaders.append(FastTelegramDownloader(client, max_concurrent_downloads=2))
logging.info(f"✅ Client {i+1}/{self.num_clients} initialized")
def get_next_client(self):
"""Get next client using round-robin"""
client = self.clients[self.client_index]
downloader = self.fast_downloaders[self.client_index]
self.client_index = (self.client_index + 1) % self.num_clients
return client, downloader
async def close_all_clients(self):
"""Clean shutdown of all clients"""
for client in self.clients:
await client.disconnect()
class TelegramDownloader:
def __init__(self, multi_client_downloader):
self.multi_client = multi_client_downloader
self.downloaded_files = set()
self.load_downloaded_files()
self.current_download = None
self.download_stats = {
'total_files': 0,
'total_bytes': 0,
'total_time': 0
}
def _create_download_progress_logger(self, filename):
"""Progress logger with reduced frequency"""
start_time = time.time()
last_logged_time = start_time
last_percent_reported = -5
MIN_STEP = 10 # Report every 10%
MIN_INTERVAL = 5 # Or every 5 seconds
def progress_bar_function(done_bytes, total_bytes):
nonlocal last_logged_time, last_percent_reported
current_time = time.time()
percent_now = int((done_bytes / total_bytes) * 100)
if (percent_now - last_percent_reported < MIN_STEP and
current_time - last_logged_time < MIN_INTERVAL):
return
last_percent_reported = percent_now
last_logged_time = current_time
speed = done_bytes / 1024 / 1024 / (current_time - start_time or 1)
msg = (f"⏬ {filename} | "
f"{percent_now}% | "
f"{speed:.1f} MB/s | "
f"{done_bytes/1024/1024:.1f}/{total_bytes/1024/1024:.1f} MB")
logging.info(msg)
return progress_bar_function
async def _process_download(self, message, metadata, filename, dest_path):
try:
self.current_download = filename
logging.info(f"🚀 Starting download: {filename}")
progress_logger = self._create_download_progress_logger(filename)
temp_path = dest_path.with_name(f"temp_{metadata['file_name_telegram']}")
# Get next available client and downloader
client, fast_downloader = self.multi_client.get_next_client()
file_size = message.media.document.size
start_time = time.time()
try:
# Try fast download first for large files
if file_size > 20 * 1024 * 1024: # Files larger than 20MB
logging.info(f"📦 Using fast download for a {file_size/1024/1024:.1f}MB file")
await fast_downloader.download_file_fast(
message, temp_path, progress_callback=progress_logger
)
else:
# Use standard optimized download for smaller files
await fast_downloader._standard_download(
message, temp_path, progress_callback=progress_logger
)
except Exception as download_error:
logging.warning(f"Optimized download failed, using standard method: {download_error}")
# Final fallback to basic download
await client.download_file(
message.media.document,
file=temp_path,
part_size_kb=512,
file_size=file_size,
progress_callback=progress_logger
)
if not temp_path.exists():
raise FileNotFoundError("Downloaded file not found")
# Atomic rename
temp_path.rename(dest_path)
# Update statistics
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
self.download_stats['total_files'] += 1
self.download_stats['total_bytes'] += file_size
self.download_stats['total_time'] += duration
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time'] if self.download_stats['total_time'] > 0 else 0
logging.info(f"✅ Download completed: {dest_path}")
logging.info(f"📊 Speed: {speed:.2f} MB/s | Session average: {avg_speed:.2f} MB/s")
self.save_downloaded_file(str(message.id))
except Exception as e:
logging.error(f"❌ Download error: {str(e)}", exc_info=True)
# Cleanup on error
for path_var in ['temp_path', 'dest_path']:
if path_var in locals():
path = locals()[path_var]
if hasattr(path, 'exists') and path.exists():
try:
path.unlink()
except:
pass
raise
finally:
self.current_download = None
def load_downloaded_files(self):
try:
if os.path.exists('/app/data/downloaded.log'):
with open('/app/data/downloaded.log', 'r', encoding='utf-8') as f:
self.downloaded_files = set(line.strip() for line in f if line.strip())
logging.info(f"📋 Loaded {len(self.downloaded_files)} already-downloaded files")
except Exception as e:
logging.error(f"Error loading downloaded files: {str(e)}")
def save_downloaded_file(self, file_id):
try:
with open('/app/data/downloaded.log', 'a', encoding='utf-8') as f:
f.write(f"{file_id}\n")
self.downloaded_files.add(file_id)
except Exception as e:
logging.error(f"Error saving downloaded file: {str(e)}")
def parse_metadata(self, caption):
metadata = {}
try:
if not caption:
logging.debug("📂 No caption")
return None
pattern = r'^(\w[\w\s]*):\s*(.*?)(?=\n\w|\Z)'
matches = re.findall(pattern, caption, re.MULTILINE)
for key, value in matches:
key = key.strip().lower().replace(' ', '_')
metadata[key] = value.strip()
required_fields = [
'type', 'tmdb_id', 'file_name_telegram',
'file_name', 'folder_name', 'season_folder'
]
if not all(field in metadata for field in required_fields):
return None
if 'season' in metadata:
metadata['season'] = int(metadata['season'])
if 'episode' in metadata:
metadata['episode'] = int(metadata['episode'])
return metadata
except Exception as e:
logging.error(f"Error parsing metadata: {str(e)}")
return None
def get_destination_path(self, message, metadata):
try:
topic_id = message.reply_to.reply_to_msg_id if message.reply_to else None
if not topic_id:
logging.warning("Could not determine the message's topic ID")
return None
category = TOPIC_ID_TO_CATEGORY.get(topic_id)
if not category:
logging.warning(f"No category found for topic ID: {topic_id}")
return None
base_dir = CATEGORY_TO_DIRECTORY.get(category)
if not base_dir:
logging.warning(f"No directory configured for category: {category}")
return None
filename = metadata.get('file_name')
if not filename:
logging.warning("'file_name' field not found in metadata")
return None
if metadata['type'] == 'movie':
folder_name = f"{metadata['folder_name']}"
dest_dir = Path(base_dir) / folder_name
return dest_dir / filename
elif metadata['type'] == 'tv':
folder_name = f"{metadata['folder_name']}"
season_folder = metadata.get('season_folder', 'Season 01')
dest_dir = Path(base_dir) / folder_name / season_folder
return dest_dir / filename
else:
logging.warning(f"Unsupported content type: {metadata['type']}")
return None
except Exception as e:
logging.error(f"Error determining destination path: {str(e)}")
return None
async def download_file(self, message):
try:
await asyncio.sleep(1) # Reduced delay
if not isinstance(message.media, MessageMediaDocument):
return
if str(message.id) in self.downloaded_files:
logging.debug(f"File already downloaded (msg_id: {message.id})")
return
metadata = self.parse_metadata(message.message)
if not metadata:
logging.warning("Could not extract valid metadata")
return
if 'file_name' not in metadata or not metadata['file_name']:
logging.warning("The 'file_name' field is required in the metadata")
return
dest_path = self.get_destination_path(message, metadata)
if not dest_path:
return
dest_path.parent.mkdir(parents=True, exist_ok=True)
if dest_path.exists():
logging.info(f"File already exists: {dest_path}")
self.save_downloaded_file(str(message.id))
return
await self._process_download(message, metadata, metadata['file_name'], dest_path)
except Exception as e:
logging.error(f"Error downloading file: {str(e)}", exc_info=True)
async def process_topic(self, topic_id, limit=None):
try:
logging.info(f"📂 Processing topic ID: {topic_id}")
# Use first client for message iteration
client = self.multi_client.clients[0]
async for message in client.iter_messages(
TELEGRAM_GROUP_ID,
limit=limit,
reply_to=topic_id,
wait_time=10 # Reduced wait time
):
try:
if message.media and isinstance(message.media, MessageMediaDocument):
await self.download_file(message)
# Small delay between downloads to prevent rate limiting
await asyncio.sleep(0.5)
except FloodWaitError as e:
wait_time = e.seconds + 5
logging.warning(f"⚠️ Flood wait detected. Waiting {wait_time} seconds...")
await asyncio.sleep(wait_time)
continue
except Exception as e:
logging.error(f"Error processing message: {str(e)}", exc_info=True)
continue
except Exception as e:
logging.error(f"Error processing topic {topic_id}: {str(e)}", exc_info=True)
async def process_all_topics(self):
for topic_name, topic_id in TOPIC_IDS.items():
logging.info(f"🎯 Starting to process: {topic_name}")
await self.process_topic(topic_id)
# Print session statistics
if self.download_stats['total_files'] > 0:
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time']
logging.info(f"📊 Statistics for topic {topic_name}:")
logging.info(f" 📁 Files: {self.download_stats['total_files']}")
logging.info(f" 💾 Total: {self.download_stats['total_bytes']/1024/1024/1024:.2f} GB")
logging.info(f" ⚡ Average speed: {avg_speed:.2f} MB/s")
async def main():
try:
# Test cryptg availability
test_data = os.urandom(1024)
key = os.urandom(32)
iv = os.urandom(32)
encrypted = AES.encrypt_ige(test_data, key, iv)
decrypted = AES.decrypt_ige(encrypted, key, iv)
if decrypted != test_data:
raise RuntimeError("❌ Cryptg does not work properly")
logging.info("✅ cryptg available and working")
except Exception as e:
logging.critical(f"❌ ERROR ON CRYPTG: {str(e)}")
raise SystemExit(1)
# Ensure session directory exists
os.makedirs('session', exist_ok=True)
os.makedirs('/app/data', exist_ok=True)
# Initialize multi-client downloader
multi_client = MultiClientDownloader(
TELEGRAM_API_ID,
TELEGRAM_API_HASH,
TELEGRAM_SESSION_NAME,
num_clients=3 # Use 3 clients for better speed
)
try:
logging.info("🚀 Initializing multiple clients...")
await multi_client.initialize_clients()
downloader = TelegramDownloader(multi_client)
logging.info("📥 Starting download of all topics...")
await downloader.process_all_topics()
logging.info("✅ Process completed successfully")
except Exception as e:
logging.error(f"Error in main: {str(e)}", exc_info=True)
finally:
logging.info("🔌 Closing connections...")
await multi_client.close_all_clients()
if __name__ == "__main__":
asyncio.run(main())
Plotly.js creates a global stylesheet that is used to show the tooltip (called in plotly.js the "hover..." elements: hoverbox, hoverlayer, hoverlabel), as well as for other features. For instance, you can see the "modebar" (the icon menu that's by default at the top-right of the plot div) is misplaced in your shadow-DOM version.
The issue is thus the fact that the global stylesheets are not applied
to the shadow DOM. Based on the information from this Medium article by EisenbergEffect
I applied the global stylesheets to the shadow root of your sankey-sd,
using the function:
function addGlobalStylesToShadowRoot(shadowRoot) {
const globalSheets = Array.from(document.styleSheets)
.map(x => {
const sheet = new CSSStyleSheet();
const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
sheet.replaceSync(css);
return sheet;
});
shadowRoot.adoptedStyleSheets.push(
...globalSheets
);
}
applied in the constructor of class SankeySD:
class SankeySD extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
addGlobalStylesToShadowRoot(this.shadowRoot);
}
// ............... other methods
}
and it did enable the tooltip and corrected the position of the modebar.
Here's a stack snippet demo, based on your original code:
//from https://eisenbergeffect.medium.com/using-global-styles-in-shadow-dom-5b80e802e89d
function addGlobalStylesToShadowRoot(shadowRoot) {
const globalSheets = Array.from(document.styleSheets)
.map(x => {
const sheet = new CSSStyleSheet();
const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
sheet.replaceSync(css);
return sheet;
});
shadowRoot.adoptedStyleSheets.push(
...globalSheets
);
}
window.addEventListener('DOMContentLoaded', () => {
class SankeySD extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
addGlobalStylesToShadowRoot(this.shadowRoot);
}
connectedCallback() {
const chartDiv = document.createElement('div');
chartDiv.id = 'chart';
chartDiv.style.width = '100%';
chartDiv.style.height = '100%';
chartDiv.style.minWidth = '500px';
chartDiv.style.minHeight = '400px';
this.shadowRoot.appendChild(chartDiv);
const labels = ["Start", "Middle", "Begin", "End", "Final"];
const labelIndex = new Map(labels.map((label, i) => [label, i]));
const links = [
{ source: "Start", target: "Middle", value: 5, label: "Test" },
{ source: "Start", target: "Middle", value: 3, label: "Test2" },
{ source: "Middle", target: "Start", value: 1, label: "" },
{ source: "Start", target: "End", value: 2, label: "" },
{ source: "Begin", target: "Middle", value: 5, label: "Test" },
{ source: "Middle", target: "End", value: 3, label: "" },
{ source: "Final", target: "Final", value: 0.0001, label: "" }
];
const sources = links.map(link => labelIndex.get(link.source));
const targets = links.map(link => labelIndex.get(link.target));
const values = links.map(link => link.value);
const customData = links.map(link => [link.source, link.target, link.value]);
const trace = {
type: "sankey",
orientation: "h",
arrangement: "fixed",
node: {
label: labels,
pad: 15,
thickness: 20,
line: { color: "black", width: 0.5 },
hoverlabel: {
bgcolor: "white",
bordercolor: "darkgrey",
font: {
color: "black",
family: "Open Sans, Arial",
size: 14
}
},
hovertemplate: '%{label}<extra></extra>',
color: ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9" ]
},
link: {
source: sources,
target: targets,
value: values,
arrowlen: 20,
pad: 20,
thickness: 20,
line: { color: "black", width: 0.2 },
color: sources.map(i => ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9"][i]),
customdata: customData,
hoverlabel: {
bgcolor: "white",
bordercolor: "darkgrey",
font: {
color: "black",
family: "Open Sans, Arial",
size: 14
}
},
hovertemplate:
'<b>%{customdata[0]}</b> → <b>%{customdata[1]}</b><br>' +
'Flow Value: <b>%{customdata[2]}</b><extra></extra>'
}
};
const layout = {
font: { size: 14 },
//margin: { t: 20, l: 10, r: 10, b: 10 },
//hovermode: 'closest'
};
Plotly.newPlot(chartDiv, [trace], layout, { responsive: true, displayModeBar: true })
.then((plot) => {
chartDiv.on('plotly_click', function(eventData) {
console.log(eventData);
if (!eventData || !eventData.points || !eventData.points.length) return;
const point = eventData.points[0];
if (typeof point.pointIndex === "number") {
const nodeLabel = point.label;
alert("Node clicked: " + nodeLabel + "\nNode index: " + point.pointIndex);
console.log("Node clicked:", point);
} else if (typeof point.pointNumber === "number") {
const linkIdx = point.pointNumber;
const linkData = customData[linkIdx];
alert(
"Link clicked: " +
linkData[0] + " → " + linkData[1] +
"\nValue: " + linkData[2] +
"\nLink index: " + linkIdx
);
console.log("Link clicked:", point);
} else {
console.log("Clicked background", point);
}
});
});
}
}
customElements.define('sankey-sd', SankeySD);
});
html, body {
height: 100%;
margin: 0;
}
sankey-sd {
display: block;
width: 100vw;
height: 100vh;
}
<sankey-sd></sankey-sd>
<script src="https://cdn.plot.ly/plotly-3.0.1.min.js" charset="utf-8"></script>
<!-- also works with v 2.30.1-->
The click feature is not caused by the shadow DOM; in this fiddle that uses the same plot configuration, but without the shadow DOM, the behaviour is the same - there's always a point.pointNumber and never point.pointIndex.
I can't find the code you have used; can you please show the version that works? In any case, this might be another question, as there should not be multiple issues per post if their solutions are unrelated.
Font-weight rendering varies across browsers due to different font smoothing and anti-aliasing techniques.
Testing and using web-safe fonts or variable fonts can help ensure consistent appearance.
For anyone who is still getting the error after granting access: I deleted the key vault secret reference from my app service's environment variables, saved, re-added it, and it works now.
I use the workaround of adding '\*.py/\*[!p][!y]' to 'files to exclude', but I'm not confident it's really the answer.
The control uses the client system date/time settings to display the date. The only way to fix this without replacing the entire control with something different is to have the client system changed to the "correct" settings.
This is very frustrating. The control offers a format, but doesn't really care what you set it to.
You can set up a Power Automate flow that connects Power BI with Jira:
Create a data alert or trigger in Power BI or directly in Power Automate based on your dataset (e.g., when the number of occurrences for a specific error exceeds a certain threshold within a given date range).
Use Power Automate to monitor this data (either via a scheduled refresh or a Power BI data alert).
Once the condition is met, the flow can automatically create a Jira ticket using the Jira connector.
You can populate the Jira ticket with details from the dataset or spreadsheet (like error type, frequency, affected module, etc.).
There are a couple of things you should check (btw, please share your Cloud Function code snippet):
Make sure that you are calling/invoking supported GCP Vertex AI Gemini models (Gemini 2.0, Gemini 2.5 Flash/Pro, etc.). Models like PaLM, text-bison, and even earlier Gemini models (like Gemini 1.0) have been deprecated; that's most likely why you are getting a 404. Please check the supported model doc here to pick a current Gemini model.
Verify that you followed this Vertex AI getting started guide to set up your access to the Gemini models. Based on what you described:
You have a GCP project
You enabled the Vertex AI API
IAM: try granting your GCP account the Vertex AI User role. For details, check the Vertex AI IAM permissions here.
I recommend using the Google Gen AI SDK for Python to call Gemini models. It handles the endpoint and authentication; you just need to specify the model to use, for example gemini-2.5-flash, as sketched below.
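A minimal sketch with that SDK (assuming the google-genai package; the project and location values are placeholders):
from google import genai
# Route requests through Vertex AI rather than the Gemini Developer API
client = genai.Client(vertexai=True, project="your-gcp-project", location="us-central1")
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Say hello",
)
print(response.text)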
These steps should get you going. Please share a code snippet so that I can share an edited snippet back.
<script>
window.setInterval(function() {
var elem = document.getElementById('fixed');
elem.scrollTop = elem.scrollHeight; }, 3000);
</script>
This worked well, based on @johnscoops' answer.
I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write‑up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf
You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main
I welcome feedback and discussion on this claim.
Be aware these numbers are computed on the training data that arrived at that node from different possibilities, rather than on your prediction results.
I was able to completely avoid the limitation by eliminating instances of query, and instead putting my "iter()" methods directly on the type.
pub trait Query<S: Storage> {
type Result<'r>: 'r where S: 'r;
fn iter(storage: &S) -> Option<impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>;
fn iter_chunks(storage: &S, chunk_size: usize) -> Option<impl Iterator<Item = impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>>;
}
The coding text input requires quotes in order to treat your input as one command; otherwise, each space is treated as a separate command on its own. Also, your output will only return the last buffer as 'line', whereas it appears you were trying to set up an output variable 'output'.
I'm having the same problem. I've tried various nodemailer examples from the internet but still failed. After tracking the problem down, in my case the API was not working properly because of the use of "output: 'export'" in the next.config.js file. If you don't use "output: 'export'", Next.js uses its full-stack capability, which means it supports API routes (serverless functions on Vercel). So if anyone has the same problem and it has not been resolved, my suggestion is to remove "output: 'export'" from the next.config.js file. BTW, I use nodemailer, Gmail SMTP, and deploy to Vercel.
Did you manage to resolve this? I am hitting the same issue.
Thanks!
This is happening because at store build time window.innerWidth is undefined, and until the resize event listener is triggered, a new value will not be set.
The moneyRemoved variable wasn't being set to true. I should have debugged better. Thank you to @Rufus L, though, for showing me how to properly get the result from an async method without using .GetAwaiter().GetResult()!
There was a permission issue. The Groovy runtime could not get the resource because I had not opened my packages to org.apache.groovy. I just needed to add:
opens my_package to org.apache.groovy;
to module-info.java.
You probably need to handle Form.WndProc and capture the Windows messages about the shortcut events. This is a little more complicated but allows you to capture a lot of things in one place, and it has been answered here before for the usual events of forms closing and minimising:
Preventing a VB.Net form from closing
There are probably message codes for those shortcuts
it worked by using :
{
"mcpServers": {
"firebase": {
"command": "firebase",
"args": ["experimental:mcp"]
}
}
}
What you can do is convert your file to a different but supported format like .wav or .flac and submit it to Google STT; it should work.
It would be interesting for this to be available natively. On the Google side, there is a feature request that you can file, but there is no timeline on when it can be done.
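For the conversion itself, a small sketch with pydub (assumes ffmpeg is installed; the filenames are placeholders):
from pydub import AudioSegment
# Decode the unsupported file and re-encode it as 16 kHz mono WAV,
# a format Google STT accepts
audio = AudioSegment.from_file("input.amr")
audio = audio.set_frame_rate(16000).set_channels(1)
audio.export("output.wav", format="wav")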
I believe this is only an issue on Windows. I am experiencing the same thing, but running Tensorboard on a Debian server or even through WSL works without an issue. See the associated github issue:
Here are two SO questions with good answers:
- Inline-block element height issue - found through Google
- Why does inline-block cause this div to have height? - silviagreen's comment to the question
As per the H264 specification, the H264 raw byte stream does not contain any presentation timestamps. Here is the verbiage; I will update with more details as I find them.
One of the main properties of H.264 is the complete decoupling of the transmission time, the decoding time, and the sampling or presentation time of slices and pictures. The decoding process specified in H.264 is unaware of time, and the H.264 syntax does not carry information such as the number of skipped frames (as is common in the form of the Temporal Reference in earlier video compression standards). Also, there are NAL units that affect many pictures and that are, therefore, inherently timeless. For this reason, the handling of the RTP timestamp requires some special considerations for NAL units for which the sampling or presentation time is not defined or, at transmission time, unknown.
timegm() is a non-standard GNU extension. A portable version using mktime() is below. This sets the TZ environment variable to UTC, calls mktime() and restores the value of TZ. Since TZ is modified this might not be thread safe. I understand the GNU libc version of tzset() does use a mutex so should be thread safe.
See:
#include <time.h>
#include <stdlib.h>
time_t
my_timegm(struct tm *tm)
{
time_t ret;
char *tz;
tz = getenv("TZ");
setenv("TZ", "", 1);
tzset();
ret = mktime(tm);
if (tz)
setenv("TZ", tz, 1);
else
unsetenv("TZ");
tzset();
return ret;
}
I have fixed this problem. Go to the "data" directory of MySQL. Rename the file "binlog.index" to "binlog.index_bak", and that's it. Restart the MySQL server; it will be reset.
Yeah, ROPC is outdated and not recommended — no MFA, no SSO, and hard to switch IdPs later.
Use Authorization Code Flow with PKCE instead. It supports MFA/SSO and gives you refresh tokens if you request the offline_access scope.
In Keycloak, enable this by assigning the offline_access role to users (or include it in the realm’s default roles).
Then, in the /auth request, include offline_access in the scope.
When you exchange the auth code at /token, you'll get an offline_token instead of a standard refresh token.
This lets you use Keycloak’s login page, so you can enable MFA, SSO, or whatever else you need.
Much safer, future-proof, and fully standard.
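For completeness, generating the PKCE verifier/challenge pair takes only a few lines (per RFC 7636; a Python sketch):
import base64, hashlib, os
def pkce_pair():
    # code_verifier: high-entropy random string of unreserved characters
    verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(verifier)), method "S256"
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge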
This drove me crazy but I found a solution! The cache file is called "Browse.VC.db" and located in a hidden folder called ".vs" example:
c:\VS Projects\yourprogram\.vs\yourprogram\v17\Browse.VC.db
Delete and restart your project.
You could demultiplex the intended service's checkout success events by adding metadata to the checkout session, like webhook_target: 'website-a'; then, in website-a's webhook handler, ignore anything that arrives with a different webhook_target.
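A rough sketch of that filtering in a Python webhook handler (assumes the stripe package; webhook_target is the metadata key from above, and the signing secret is a placeholder):
import stripe
def handle_event(payload, sig_header):
    event = stripe.Webhook.construct_event(payload, sig_header, "whsec_...")  # placeholder secret
    if event["type"] == "checkout.session.completed":
        session = event["data"]["object"]
        # Ignore sessions intended for the other website
        if session.get("metadata", {}).get("webhook_target") != "website-a":
            return
        # ... fulfill the order for website-a here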
I couldn't find a nice way to do this, so I made a gem to make it easy once there are conflicts.
I removed all git dependencies from my toml/setup.py, since PyPI is very strict about that, and added a try/except around the import in question at my entry point. So there is an intentionally failed import when you run the code the first time; this triggers a subprocess that installs the package and then reruns the code. The user can still pip install your package, and your package installs the git dependency when they run it.
This depends on the user having git on their system. Not a perfect solution, but a decent workaround.
import subprocess
import sys
import os
try:
from svcca import something
except ImportError:
print("svcca not found. Installing from GitHub...")
subprocess.check_call([
sys.executable,
"-m", "pip", "install",
# pip needs the git+https scheme; GitHub no longer serves the git:// protocol
"svcca @ git+https://github.com/themightyoarfish/svcca-gpu.git"
])
print("Installation complete. Relaunching please wait..")
os.execv(sys.executable, [sys.executable] + sys.argv)
I'm not sure this resolves the issue; the vite dev react-router is not for releases. I have also tried to configure the react-router package, without any luck.
export default defineConfig({
optimizeDeps: {
include: ['react-router-dom'],
},
build: {
commonjsOptions: {
include: [/react-router/, /node_modules/],
},
}
});
I am getting the same error.
According to the documentation, expo-maps is not available in Expo Go.
But I am not sure whether I have installed Expo Go (I think I did).
import speech_recognition as sr
import pyttsx3
import datetime
import webbrowser
engine = pyttsx3.init()
def speak(text):
engine.say(text)
engine.runAndWait()
def take_command():
recognizer = sr.Recognizer()
with sr.Microphone() as source:
print("Listening...")
audio = recognizer.listen(source)
try:
query = recognizer.recognize_google(audio, language='en-in')
print("User said:", query)
return query
except Exception:  # avoid a bare except that would also swallow KeyboardInterrupt
speak("Sorry, I didn't catch that.")
return ""
def execute(query):
query = query.lower()
if "time" in query:
time = datetime.datetime.now().strftime("%H:%M")
speak(f"The time is {time}")
elif "open youtube" in query:
webbrowser.open("https://youtube.com")
speak("Opening YouTube")
else:
speak("I can't do that yet.")
speak("Hello, I am Jarvis")
while True:
command = take_command()
if "stop" in command or "bye" in command:
speak("Goodbye!")
break
execute(command)
Ok, I get it now. Sorry, the parameters are somewhat confusing. This works:
analysisValueControl = new FormControl({value: '', disabled: true}, {validators: [
Validators.required, Validators.pattern(/^([+-]?[\d\.,]+)(\([+-]?[\d\.,]+\))?([eE][+-]?\d+)?$/i) ],
updateOn: 'blur'});
I am guessing that your issue is because you aren't giving your servo an initial value, so the pin is likely floating. Try adding
helloServo.write(360);
to the end of your void setup(); this should make the servo start at the 360 position.
There are two namespaces that bitbake concerns itself with - recipe names (a.k.a. build time targets) and package names (a.k.a. runtime targets). You can specify a build time target on the bitbake command line, but not a runtime target; you need to find the recipe that provides the package you are trying to build and build that instead (or simply add that package to your image and build the image). In current versions bitbake will at least tell you which recipes have matching or similar-sounding runtime provides (RPROVIDES) so that you'll usually get a hint on which recipe you need to build.
The condition you're using seems to have an issue because the output of currentRange and currentRange.getValues() doesn't match what the condition expects, which is why the else branch triggers instead.
If you check the value by using console you will get the output of:
console.log(currentRange) = object
console.log(currentRange.getValues()) = undefined
Agreeing with @Martin about using strings to retrieve the ranges.
To make it work here's a modified version of your code:
function SUM_UNCOLORED_CELLS(...ranges) {
var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var rl = ss.getRangeList(ranges);
var bg = rl.getRanges().map(rg => rg.getBackgrounds());
return bg.flat(2).filter(c => c == "#ffffff").length;
}
To learn more about how to pass a range into a custom function in Google Spreadsheets, you can read this post: How to pass a range into a custom function in Google Spreadsheets?
I'm also searching for it, but it looks like you have to make one subscription and then add different base plans to it. I don't get why there isn't more info about this. I'm also trying to just have 3 different subscriptions and upgrade/downgrade between them. When I created 3 subscriptions in the Google Play Console, I was able to subscribe to all 3 of them in my app; I think that's because it doesn't see them as a subscription group. I'm going to try to make one subscription with 3 base plans and see if it's able to detect it then. I don't know if this is the correct way, though...
The likely problem is the stream argument cannot be 0 (i.e. default stream). You will need to specify a named stream that was created with cudaStreamCreate*()
You also don't have to specify the location hints because "The cudaMemcpyAttributes::srcLocHint and cudaMemcpyAttributes::dstLocHint allows applications to specify hint locations for operands of a copy when the operand doesn't have a fixed location. That is, these hints are only applicable for managed memory pointers on devices where cudaDevAttrConcurrentManagedAccess is true or system-allocated pageable memory on devices where cudaDevAttrPageableMemoryAccess is true."
📱 SALES TEAM APP: Features Breakdown
1. Login Page
Username & password (must be created from Manager app)
Login only allowed if Sales ID exists
2. Food Items List
Show photo, name, price
Search or filter option (optional)
3. Create Invoice
Select food items
Quantity & total price calculation
Save invoice
4. Daily Invoice History
Show list of invoices created on the current day
Option to view details
5. Send Feedback
Text input
Sends directly to Manager (stored in database)
Totally fair — let me clarify a bit.
The root issue seems to stem from how Jekyll resolves file system deltas during its incremental rebuild cycle. When it detects changes, it re-evaluates its asset manifest, and sometimes if your style.css isn’t explicitly locked into the precompiled asset flow, Jekyll will fall back to its inferred default — in this case, normalize.css.
One common workaround is to abstract your custom styles into a partial (e.g., _custom.scss) and then import that into a master stylesheet that’s definitely tracked by Jekyll’s pipeline. Alternatively, some folks set a manual passthrough override in _config.yml to ensure asset pathing stays deterministic during rebuilds. You might also try placing your custom style.css outside the default watch scope and reference it via a canonical link to bypass the regeneration logic entirely.
Let me know if that helps at all — happy to fine-tune based on your setup.
The Wikipedia page on rotation matrices shows 1's on the diagonal.
I believe scipy replaces the 1's on the diagonal with
w^2+x^2+y^2+z^2
That makes them the same for a unit quaternion.
For non-unit quaternions, scipy's matrix acts as both a rotation and a scaling.
For example, if you take the quaternion 2+0i+0j+0k:
the rotation matrix will be the identity matrix (with only a w term there is no rotation),
while scipy's matrix will be 4*identity (the squared norm times the identity), because it also includes the scaling.
type NewsItemPageProps = Promise<{ id: string }>;
async function NewsItemPage(props: { params: NewsItemPageProps }) {
const params = await props.params;
const id = params.id;
This code works.
I suggest you try using Long Path Tool. It is very convenient.
The solution is to reset the ref value on blur.
There’s a required field called "What’s New in This Version", and it hasn’t been filled out with the updates included in your current build. Please add the relevant changes or improvements made in this version to that field, and this issue will be resolved.
Web servers usually buffer output until some conditions are met (the buffer is full, or there is no more data to send). There is no way to bypass this except using HTTP headers; one of them is sending the data as chunked using Transfer-Encoding: chunked.
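To see what that looks like on the wire, here is a minimal hand-rolled sketch using only Python's standard library (each chunk is its hex length, CRLF, the data, CRLF; a zero-length chunk terminates the stream):
from http.server import BaseHTTPRequestHandler, HTTPServer
import time
class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # chunked encoding requires HTTP/1.1
    def do_GET(self):
        self.send_response(200)
        self.send_header("Transfer-Encoding", "chunked")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        for piece in (b"hello ", b"world"):
            # hex size, CRLF, payload, CRLF
            self.wfile.write(b"%x\r\n%s\r\n" % (len(piece), piece))
            self.wfile.flush()
            time.sleep(1)  # the client sees each chunk as it arrives
        self.wfile.write(b"0\r\n\r\n")  # terminating zero-length chunk
HTTPServer(("", 8000), ChunkedHandler).serve_forever()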
Add trailingSlash: false, in your next.config.ts or js file
reference:
https://github.com/aws-amplify/amplify-hosting/issues/3819#issuecomment-1819740774
Jupyter Notebook is certainly a great option. You can also use the Anaconda platform; download it to work in Python (everything from opening .ipynb files to a host of other data science activities is facilitated there).
I have this problem but this is my error
❌ Error creando PriceSchedule: {
"errors": [
{
"id": "3e8b4645-20bc-492a-93c9-x",
"status": "409",
"code": "ENTITY_ERROR.NOT_FOUND",
"title": "There is a problem with the request entity",
"detail": "The resource 'inAppPurchasePricePoints' with id 'eyJzIjoiNjc0NzkwNTgyMSIsInQiOiJBRkciLCJwIjoiMTAwMDEifQ' was not found.",
"source": {
"pointer": "/included/relationships/inAppPurchasePricePoints/data/0/id"
}
}
]
}
from moviepy.editor import VideoFileClip, AudioClip
import numpy as np
# Re-define file path after environment reset
video_path = "/mnt/data/screen-٢٠٢٥٠٦٢٩-١٥٢٢٢٤.mp4"
# Reload the video and remove original audio
full_video = VideoFileClip(video_path).without_audio()
# Define chunk length in seconds
chunk_length =
no, I don't have such a file
Perhaps less efficient, but concise for the basic escapes without libraries using JDK 15+
public static String escape(String s) {
for (char c : "\\\"bfnrt".toCharArray())
s = s.replace(("\\" + c).translateEscapes(), "\\" + c);
return s;
}
Try passing the version directly: replace
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
with
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.9.20"
I think that approach works, but it has some drawbacks, like limiting the capabilities of an event-driven architecture (which is asynchronous by nature) or ending up with tons of messages to be sent because you are adding too much latency.
What I would do to stop some bots from flooding the Kafka pipelines is implement a debounce system, i.e. a bot would need to cool down for a period before being able to send another message. That way you are not sending one message at a time from all of the bots; by making sure the more active bots only send every so many milliseconds, you allow the less active bots to send their messages. A minimal sketch follows.
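A per-bot cooldown gate could look like this (the names and the 500 ms window are illustrative):
import time
COOLDOWN_SECONDS = 0.5  # illustrative window
last_sent = {}
def try_send(bot_id, producer, topic, message):
    """Send only if this bot has been quiet for the cooldown period."""
    now = time.monotonic()
    if now - last_sent.get(bot_id, 0.0) < COOLDOWN_SECONDS:
        return False  # still cooling down; drop or queue the message
    last_sent[bot_id] = now
    producer.send(topic, message)  # e.g. a kafka-python KafkaProducer
    return True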
What is the big O of the algorithm below?
It is not an algorithm to begin with because the operation (in the lack of a better word) you described does not fit the standard definition of what constitutes an algorithm. If it is not an algorithm, you probably should not describe it using big O notation.
As pointed out in the previous answer, a PRNG's output is probabilistically distributed, so the running time would converge to a finite number of steps eventually. The rest of my answer will now assume a truly random number generating function as part of your "algorithm".
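To make "converge eventually" concrete under that truly-random assumption: if k of the N = 90000 five-digit numbers are already in the DB, each draw succeeds with probability p = (N - k)/N, so the number of attempts is geometrically distributed with
E[\text{attempts}] = \frac{1}{p} = \frac{N}{N-k}
which is finite in expectation whenever k < N, but admits no finite worst-case bound.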
Knuth describes an algorithm in TAOCP [1, pp. 4-7] as a "finite set of rules that gives a sequence of operations for solving a specific type of problem", highlighting the characteristics of finiteness, definiteness, input, output, effectiveness.
For concision, your described operation does not satisfy all of these characteristics.
Moreover, the lack of finiteness prompting this operation to potentially run without ever finding a 5 digit number not in the DB perfectly classifies it as an undecidable problem.
Recall that decidability means whether or not a decision problem can be correctly solved if there exists an effective method (finite time deterministic procedure) for solving it [2].
For the same reason, and akin to the Halting problem [3], your operation is undecidable because it is impossible to construct an algorithm that always correctly determines [see 4] a new 5-digit random number effectively. The operation described is merely a problem statement, not an algorithm, because it still needs an algorithm to correctly and effectively solve it.
You might have to consider Kolmogorov complexity [5] in analyzing your problem because it allows you to describe (and prove) the impossibility of undecidable problems like the Halting problem and Cantor's diagonal argument [6].
An answer from this post suggests the use of Arithmetic Hierarchy [7] (as opposed to asymptotic analysis) as the appropriate measure of the complexity of undecidable problems, but even I am still struggling to comprehend this.
Here are two options that worked for me that may also work for you.
Use the other link to the DevOps instance
Try a different browser. In my case, chrome and edge stopped working yet firefox works fine.
Updating with the code below, after suggestions in the comment section, solves the issue:
sns.pointplot(x=group['Month'], y=group['Rate'], ax=ax1)
Try the windows_disk_utils package.
Although it's an old question, I think it's worth sharing the solution. After struggling for 4–5 hours myself, I finally resolved it by changing the network mode to 'Mirrored' in the WSL settings.
CORS (Cross-Origin Resource Sharing) is a security policy that prevents domains from being accessed or manipulated by other domains that are not allowed. I assume you are trying to perform an operation from a different domain.
There are three ways to avoid being blocked by this policy:
The domain you are trying to manipulate explicitly allows your domain to run its scripts.
You perform the operation using a local script or a browser with CORS disabled (e.g., a CORS-disabled Chrome).
You perform the operation within the same domain or its subdomains. You can test this easily in the browser console via the inspector.
Here is a useful link that addresses a similar problem:
Error: Permission denied to access property "document"
I realized how to solve this. The problem was that all the pages were showing a mix of Spanish and English labels. I thought it was something about this:
var cookie = HttpContext.Request.Cookies[CookieCultureName];
where it takes many config values such as language, and that somehow it chooses one of the two .resx files that hold all the label values, but that was not the case. I solved it by changing it manually in inspection -> cookies -> Localization.CurrentUICulture.
But I still don't know where this value comes from; kinda weird, but it is what it is.
I also had a git repo inside gdrive (for the convenience of keeping code along with other stuff). Somehow gdrive managed to corrupt itself to the point where some files inside the project became corrupted and inaccessible locally (the cloud version remained). The only solution was to delete the gdrive cache and db.
Plus, paths inside gdrive end up including "my drive", where the space is a problem with some tools (Flutter).
And you also end up syncing .venv (as an example for Python) and other temp files, as gdrive has no exclude patterns.
So, after sorting out the situation in my own repo caused by this combination, I moved the repo into its own folder.
Maybe it's just one data point, but in my case this combination got corrupted and wasted time.
For the OTel Collector `receiver/sqlserver`:
Please make sure to use MSSQL Server 2022; I was using MSSQL Server 2019.
Please use SQLSERVER instead of the SQLEXPRESS edition.
Please grant the MSSQL user the privilege below:
GRANT VIEW SERVER PERFORMANCE STATE TO <USERNAME>
For reference, check this GitHub issue: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/40937
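If it helps, here is a minimal collector config sketch for that receiver (field names as documented in the contrib receiver's README; verify them against your collector version, and note the credentials are placeholders):
receivers:
  sqlserver:
    collection_interval: 10s
    server: localhost
    port: 1433
    username: otel_monitor
    password: ${env:SQLSERVER_PASSWORD}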
File /usr/local/lib/python3.11/shutil.py:431, in copy(src, dst, follow_symlinks)
429 if os.path.isdir(dst):
430 dst = os.path.join(dst, os.path.basename(src))
--> 431 copyfile(src, dst, follow_symlinks=follow_symlinks)
432 copymode(src, dst, follow_symlinks=follow_symlinks)
433 return dst
I just had this same error pop up when trying to do the same thing with my data. Does cID by any chance have values that are not just a simple numbering 1:n(clusters)? My cluster identifiers were about 5 digits long and when I changed the ID values to numbers 1:n(clusters), the error magically disappeared.
In your .NET Core project (the one that references the .NET Framework lib), add the System.Security.Permissions NuGet package; this ensures Newtonsoft.Json won't fail with a FileNotFoundException.
Please check out the instructions at the bottom of this README. This solved it for me.
https://github.com/Weile-Zheng/word2vec-vector-embedding?tab=readme-ov-file
Look at this: https://github.com/modelcontextprotocol/python-sdk/issues/423
I believe this is a problem in MCP.
I pity the OP lumbered with such a silly and inflexible accuracy target for their implementation as shown above. They don't deserve the negative votes that this question got but their professor certainly does!
By way of introduction, I should point out that we would not normally approximate sin(x) over such a wide range as 0-pi, because doing so violates one of the important heuristics for the series expansions used to approximate functions in computing, namely:
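|t(n+1) / t(n)| = x^2 / ((2n+1)(2n+2)) << 1 for the cosine series (my plain-text rendering of the usual rule of thumb). Note that at x = pi the first ratio is pi^2/2 ~ 4.9, so the early terms actually grow, which is exactly the thrashing seen in the full-range test below.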
In other words, each successive term is a smaller correction to the sum than its predecessor. The terms are typically summed from the highest-order term first, usually by Horner's method, which lends itself to modern CPUs' FMA instructions: a combined multiply and add with only one rounding, each machine cycle (see the sketch below).
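As a minimal sketch of how Horner's method maps onto FMA (my illustration, not part of the benchmark code below; fmaf is C99's fused multiply-add from <math.h>):
#include <math.h>
// Evaluate c[0] + c[1]*y + ... + c[n-1]*y^(n-1), highest-order coefficient first.
// Each fmaf(sum, y, c[i]) computes sum*y + c[i] with a single rounding.
float horner_fma(const float *c, int n, float y)
{
    float sum = c[n - 1];
    for (int i = n - 2; i >= 0; i--)
        sum = fmaf(sum, y, c[i]);
    return sum;
}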
To illustrate how range reduction helps, the test code below runs the original range test with different numbers of terms N for argument limits of pi, pi/2 and pi/4. The first case, evaluating up to pi, thrashes about wildly at first before eventually settling down. The last, pi/4, requires just 4 terms to converge.
In fact we can get away with the wide range in this instance because sin, cos and exp are all highly convergent polynomial series with a factorial in the denominator - although the large alternating terms added to partial sums when x ~ pi do cause some loss of precision at the high end of the range.
We would normally approximate over a reduced range of pi/2, pi/4 or pi/6. However, taking this challenge head on, there are several ways to do it. The different simple methods of summing the Taylor series can give a few possible answers, depending on how you add them up and whether or not you accumulate the partial sum into a double precision variable. There is no compelling reason to prefer any one of them over another; the fastest method is as good as any.
There is really nothing good about the professor's recommended method. It is by far the most computationally expensive way to do it, and for good measure it violates the original specification of computing the Taylor series when N >= 14, because the factorial result for 14! cannot be accurately represented in a float32 - the value is truncated.
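To see why: 14! = 87,178,291,200, which has 26 significant bits once its eleven trailing zero bits are stripped, while a float32 significand holds only 24, so the stored value is rounded and ends up off by a few thousand.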
The OP's original method was perfectly valid and neatly sidesteps the risk of overflow of xN and N! by refining the next term to be added for each iteration inside the summation loop. The only refinement would be to step the loop in increments of 2 and so avoid computing n = 2*i.
@user85421's comment reminded me of a very old school way to obtain a nearly correct polynomial approximation for cos/sin by nailing the result obtained at a specific point to be exact. Called "shooting", and usually done for boundary value problems, it is the simplest and easiest to understand of the more advanced methods to improve polynomial accuracy.
In this instance we adjust the very last term in x^N so that it hits the target of cos(pi) = -1 exactly. It can be manually tweaked from there to get a crude, nearly equal ripple solution that is about 25x more accurate than the classical Taylor series.
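Concretely, in my notation (matching shootC6 in the code below): with T10(x) the Taylor sum up to x^10, choose c12 = (cos(pi) - T10(pi)) / pi^12 so that T10(pi) + c12*pi^12 = -1 exactly; the small fiddle factor applied afterwards trades a little of that end-point accuracy for lower ripple in the middle of the range.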
The fundamental problem with the Taylor series is that it is ridiculously over precise near zero and increasingly struggles as it approaches pi. This hints that we might be able to find a compromise set of coefficients that is just good enough everywhere in the chosen range.
The real magic comes from constructing a Chebyshev equal ripple approximation using the same number of terms. This is harder to do for 32-bit floating point, and since many modern CPUs now have double precision arithmetic that is as fast as single precision, you often find double precision implementations lurking inside nominally float32 wrappers.
It is possible to rewrite a Taylor series into a Chebyshev expansion by hand. My results were obtained using a Julia numerical code ARMremez.jl for rational approximations.
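The hand method (Chebyshev economization) relies on the recurrence T0(x) = 1, T1(x) = x, T(n+1)(x) = 2x*Tn(x) - T(n-1)(x): rewrite the powers of x in the Taylor polynomial in the Tn basis, drop the tiny highest-order Chebyshev coefficients, and convert back. Because |Tn(x)| <= 1 across the whole interval, the truncation error is spread evenly instead of piling up at the end of the range.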
To get the best possible coefficient set for fixed precision working in practice requires a huge optimisation effort and isn't always successful, but getting something that is good enough is relatively easy. The code below shows the various options I have discussed, with sample coefficients. The framework tests enough of the range of x values to put tight bounds on the worst case absolute error |cos(x) - poly_cos(x)|.
In real applications of approximation we would usually go for minimum relative error |1 - poly_cos(x)/cos(x)| (so that ideally all the bits in the mantissa are right). However, the zero at pi/2 would make life a bit too interesting for a simple quick demo, so I have used absolute error here instead.
The 6 term Chebyshev approximation is 80x more accurate, but the error is in the direction that takes cos(x) outside the valid range |cos(x)| <= 1 (highly undesirable). That could easily be fixed by rescaling. The Chebyshev versions have been written as hardcoded Horner fixed-length polynomial implementations, avoiding any loops (and are 20-30% faster as a result).
The worst case error in the 7 term Chebyshev approximation computed in double precision is 1000x better at <9e-8 without any fine tuning. The theoretical limit with high precision arithmetic is 1.1e-8 which is below the 3e-8 Unit in the Last Place (ULP) threshold on 0.5-1.0. There is a good chance that it could be made correctly rounded for float32 with sufficient effort. If not then 8 terms will nail it.
One advantage of asking students to optimise their polynomial function on a range like 0-pi is that you can exhaustively test it for every possible valid input value x fairly quickly. Something that is usually impossible for double precision functions. A proper framework for doing this much more thoroughly than my hack below was included in a post by @njuffa about approximating erfc.
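A minimal sketch of such an exhaustive scan (my own illustration, much cruder than that framework), stepping through every representable float32 in [0, pi] with nextafterf:
#include <math.h>
#include <stdio.h>
float HCheby6Cos(float x); // the candidate under test, defined in the listing further down
void exhaustive_scan(void)
{
    const float pi_f = 3.1415926535f;
    double worst = 0.0;
    float xworst = 0.0f;
    // nextafterf(x, INFINITY) yields the next representable float above x,
    // so every float32 in [0, pi] is visited exactly once (~1e9 values).
    for (float x = 0.0f; x <= pi_f; x = nextafterf(x, INFINITY)) {
        double err = fabs((double)HCheby6Cos(x) - cos((double)x));
        if (err > worst) { worst = err; xworst = x; }
    }
    printf("worst |error| %g at x = %10.8f\n", worst, xworst);
}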
The test reveals that the OP's solution and the book solution are not that different, but the official recommended method is 30x slower or just 10x slower if you cache N!. This is all down to using pow(x,N) including the slight rounding differences in the sum and repeatedly recomputing factorial N (which leads to inaccuracies for N>14).
Curiously, for a basic Taylor series expansion the worst case error is not always right at the end of the range - something particularly noticeable in the methods using pow().
Here is the results table:
| Description | cos(pi) | error | min_error | x_min | max_error | x_max | time (s) |
|---|---|---|---|---|---|---|---|
| prof Taylor | -0.99989957 | 0.000100434 | -1.436e-07 | 0.94130510 | 0.000100672 | 3.14159226 | 10.752 |
| pow Taylor | -0.99989957 | 0.000100434 | -1.436e-07 | 0.94130510 | 0.000100672 | 3.14159226 | 2.748 |
| your Cosinus | -0.99989957 | 0.000100434 | -1.570e-07 | 0.80652559 | 0.000100791 | 3.14157438 | 0.301 |
| my Taylor | -0.99989951 | 0.000100493 | -5.476e-08 | 1.00042307 | 0.000100493 | 3.14159274 | 0.237 |
| shoot Taylor | -0.99999595 | 4.0531e-06 | -4.155e-06 | 2.84360051 | 4.172e-06 | 3.14159012 | 0.26 |
| Horner Chebyshev 6 | -1.00000095 | -9.537e-07 | -1.330e-06 | 3.14106655 | 9.502e-07 | 2.21509051 | 0.177 |
| double Horner Cheby 7 | -1.00000000 | 0 | -7.393e-08 | 2.34867692 | 8.574e-08 | 2.10044718 | 0.188 |
Here is the code that can be used to experiment with the various options. The code is C rather than Java, but written in such a way that it should be easily ported to Java.
#include <stdio.h>
#include <math.h>
#include <time.h>
#define SLOW // to enable the official book answer
//#define DIVIDE // use explicit division vs multiply by precomputed reciprocal
double TaylorCn[10], dFac[20], drFac[20];
float shootC6;
float Fac[20];
float C6[7] = { 0.99999922f, -0.499994268f, 0.0416598222f, -0.001385891596f, 2.42044015e-05f, -2.19788836e-07f }; // original 240 bit rounded down to float32
// ref float C7[8] = { 0.99999999f, -0.499999892f, 0.0416664902f, -0.001388780783f, 2.47699662e-05f, -2.70797754e-07f, 1.724760709e-9f }; // original 240 bit rounded down to float32
float C7[8] = { 0.99999999f, -0.499999892f, 0.0416664902f, -0.001388780783f, 2.47699662e-05f, -2.707977e-07f, 1.72478e-9f }; // after simple fine tuning
double dC7[8] = { 0.9999999891722795, -0.4999998918375135482, 0.04166649019522770258731, -0.0013887807826936648, 2.47699662157542654e-05, -2.707977544202106e-07, 1.7247607089243954e-09 };
// Chebyshev equal ripple approximations obtained from modified ARMremez rational approximation code
// C7 +/- 1.08e-8 (computed using 240bit FP arithmetic - coefficients are not fully optimised for float arithmetic) actually obtain 9e-8 (could do better?)
// C6 +/- 7.78e-7 actually obtain 1.33e-6 (with fine tuning could do better)
const float pi = 3.1415926535f;
float TaylorCos(float x, int ordnung)
{
double sum, term, mx2;
sum = term = 1.0;
mx2 = -x * x;
for (int i = 2; i <= ordnung; i+=2) {
term *= mx2 ;
#ifdef DIVIDE
sum += term / Fac[i]; // slower when using divide
#else
sum += term * drFac[i]; // faster to multiply by reciprocal
#endif
}
return (float) sum;
}
float fTaylorCos(float x)
{
return TaylorCos(x, 12);
}
void InitTaylor()
{
float x2, x4, x8, x12;
TaylorCn[0] = 1.0;
for (int i = 1; i < 10; i++) TaylorCn[i] = TaylorCn[i - 1] / (2 * i * (2 * i - 1)); // precompute the coefficients
Fac[0] = 1;
drFac[0] = dFac[0] = 1;
for (int i = 1; i < 20; i++)
{
Fac[i] = i * Fac[i - 1];
dFac[i] = i * dFac[i - 1];
drFac[i] = 1.0 / dFac[i];
if ((double)Fac[i] != dFac[i]) printf("float factorial fails for %i! %18.0f should be %18.0f error %10.0f ( %6.5f ppm)\n", i, Fac[i], dFac[i], dFac[i]-Fac[i], 1e6*(1.0-Fac[i]/dFac[i]));
}
x2 = pi * pi;
x4 = x2 * x2;
x8 = x4 * x4;
x12 = x4 * x8;
shootC6 = (float)(cos((double)pi) - TaylorCos(pi, 10)) / x12 * 1.00221f; // fiddle factor for shootC6 with 7 terms *1.00128;
}
float shootTaylorCos(float x)
{
float x2, x4, x8, x12;
x2 = x * x;
x4 = x2 * x2;
x8 = x4 * x4;
x12 = x4 * x8;
return TaylorCos(x, 10) + shootC6 * x12;
}
float berechneCosinus(float x, int ordnung) {
float sum, term, mx2;
sum = term = 1.0f;
mx2 = -x * x;
for (int i = 1; i <= (ordnung + 1) / 2; i++) {
int n = 2 * i;
term *= mx2 / ((n-1) * n);
sum += term;
}
return sum;
}
float Cosinus(float x)
{
return berechneCosinus(x, 12);
}
float factorial(int n)
{
float result = 1.0f;
for (int i = 2; i <= n; i++)
result *= i;
return result;
}
float profTaylorCos_core(float x, int n)
{
float sum, term;
sum = term = 1.0f;
for (int i = 2; i <= n; i += 2) {
term *= -1;
sum += term*pow(x,i)/factorial(i);
}
return (float)sum;
}
float profTaylorCos(float x)
{
return profTaylorCos_core(x, 12);
}
float powTaylorCos_core(float x, int n)
{
float sum, term;
sum = term = 1.0f;
for (int i = 2; i <= n; i += 2) {
term *= -1;
sum += term * pow(x, i) / Fac[i];
}
return (float)sum;
}
float powTaylorCos(float x)
{
return powTaylorCos_core(x, 12);
}
float Cheby6Cos(float x)
{
float sum, term, x2;
sum = term = 1.0f;
x2 = x * x;
for (int i = 1; i < 6; i++) {
term *= x2;
sum += term * C6[i];
}
return sum;
}
float dHCheby7Cos(float x)
{
double x2 = x*x;
return (float)(dC7[0] + x2 * (dC7[1] + x2 * (dC7[2] + x2 * (dC7[3] + x2 * (dC7[4] + x2 * (dC7[5] + x2 * dC7[6])))))); // cos 7 terms
}
float HCheby6Cos(float x)
{
float x2 = x * x;
return C6[0] + x2 * (C6[1] + x2 * (C6[2] + x2 * (C6[3] + x2 * (C6[4] + x2 * C6[5])))); // cos 6 terms
}
void test(const char *name, float(*myfun)(float), double (*ref_fun)(double), double xstart, double xend)
{
float cospi, cpi_err, x, ox, dx, xmax, xmin;
double err, res, ref, maxerr, minerr;
time_t start, end;
x = xstart;
ox = -1.0;
// dx = 1.2e-7f;
dx = 2.9802322387695312e-8f; // chosen to test key ranges of the function exhaustively
maxerr = minerr = 0;
xmin = xmax = 0.0;
start = clock();
while (x <= xend) {
res = (*myfun)(x);
ref = (*ref_fun)(x);
err = res - ref;
if (err > maxerr) {
maxerr = err;
xmax = x;
}
if (err < minerr) {
minerr = err;
xmin = x;
}
x += dx;
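// if adding dx no longer changed x (below float granularity), double the step so the scan keeps advancing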
if (x == ox) dx += dx;
ox = x;
}
end = clock();
cospi = (*myfun)(pi);
cpi_err = cospi - cos(pi);
printf("%-22s %10.8f %12g %12g @ %10.8f %12g @ %10.8f %g\n", name, cospi, cpi_err, minerr, xmin, maxerr, xmax, (float)(end - start) / CLOCKS_PER_SEC);
}
void OriginalTest(const char* name, float(*myfun)(float, int), float target, float x)
{
printf("%s cos(%10.7f) using terms upto x^N\n N \t result error\n",name, x);
for (int i = 0; i < 19; i += 2) {
float cx, err;
cx = (*myfun)(x, i);
err = cx - target;
printf("%2i %-12.9f %12.5g\n", i, cx, err);
if (err == 0.0) break;
}
}
int main() {
InitTaylor(); // note that factorial 14 cannot be represented accurately as a 32 bit float and is truncated.
// easy sanity check on factorial numbers is to count the number of trailing zeroes.
float x = pi; // approx. PI
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x), x);
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x/2), x/2);
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x/4), x/4);
printf("\nHow it would actually be done using equal ripple polynomial on 0-pi\n\n");
printf("Chebyshev equal ripple cos(pi) 6 terms %12.8f (sum order x^0 to x^N)\n", Cheby6Cos(x));
printf("Horner optimum Chebyshev cos(pi) 6 terms %12.8f (sum order x^N to x^0)\n", HCheby6Cos(x));
printf("Horner optimum Chebyshev cos(pi) 7 terms %12.8f (sum order x^N to x^0)\n\n", dHCheby7Cos(x));
printf("Performance and functionality tests of versions - professor's solution is 10x slower ~2s on an i5-12600 (please wait)...\n");
printf(" Description \t\t cos(pi) error \t min_error \t x_min\tmax_error \t x_max \t time\n");
#ifdef SLOW
test("prof Taylor", profTaylorCos, cos, 0.0, pi);
test("pow Taylor", powTaylorCos, cos, 0.0, pi);
#endif
test("your Cosinus", Cosinus, cos, 0.0, pi);
test("my Taylor", fTaylorCos, cos, 0.0, pi);
test("shoot Taylor", shootTaylorCos, cos, 0.0, pi);
test("Horner Chebyshev 6", HCheby6Cos, cos, 0.0, pi);
test("double Horner Cheby 7", dHCheby7Cos, cos, 0.0, pi);
return 0;
}
It is interesting to make the sum and x2 variables double precision and observe the effect that has on the answers. If someone fancies running simulated annealing or another global optimiser to find the best possible optimised Chebyshev 6 & 7 float32 approximations please post the results.
I agree wholeheartedly with Steve Summit's final comments. You should think very carefully about the risk of overflow of intermediate results and the order of summation when doing numerical calculations. Numerical analysis using floating point numbers follows different rules to pure mathematics, and some rearrangements of an equation are very much better than others when you want to compute an accurate numerical value.
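As a tiny standalone illustration of why summation order matters (my example, not from the answer above): summing the float32 harmonic series largest-term-first stalls once 1/i drops below the rounding granularity of the partial sum, while summing smallest-first stays much closer to the true value (ln(N) + 0.5772 ~ 16.695 for N = 1e7).
#include <stdio.h>
int main(void)
{
    const int N = 10000000;
    float big_first = 0.0f, small_first = 0.0f;
    for (int i = 1; i <= N; i++) big_first += 1.0f / i;   // large terms first: later tiny terms are lost
    for (int i = N; i >= 1; i--) small_first += 1.0f / i; // tiny terms accumulate before the big ones
    printf("largest-first %.6f  smallest-first %.6f\n", big_first, small_first);
    return 0;
}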
It's an old post, but if you could do that, then Google, Microsoft or any large server in the world could be crashed by a single client, and that's not how the Internet works! By requesting a resource from the server, you, as the client, receive it chunk by chunk. And if you only send the request but refuse to receive the bytes, then as soon as the server realises it is sending data to nowhere, it stops. Think of it as an electric wire: it lets current flow. If you cut the wire, or connect the endpoint to nothing, the current has nowhere to go.
One thing you could do is write some software and distribute it to people all over the world, targeting the specific website or server you want. That is called a DDoS, and you would have just made malware! The people installing it turn their PCs into zombie machines sending requests to your target server. Fulfilling a huge number of requests from all over the world overloads the server until it shuts down.
After all, what you're asking for is disgusting. It shows no respect to the development world, which needs improving, not harming. And for that reason I'm going to flag this post. Sorry.
Refer to this document; this worked for me:
https://blog.devgenius.io/apache-superset-integration-with-keycloak-3571123e0acf