Hi, I am facing the same error, please guide me. I am using Windows.
It is an online tool specifically for comparing XML formats, which allows you to clearly see the differences between two XML files.
Use innerHTML (MDN docs).
Example:
let mapScript = document.createElement('script');
mapScript.innerHTML = 'the whole script body content';
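Note that a script element created this way only runs once it is attached to the document; a minimal follow-up sketch (assuming the mapScript from above):
document.body.appendChild(mapScript); // the script body executes at this point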
To view the data in the Core Data database, first locate the path of the SQLite database for your project. This can be done by following these steps:
Select your project target.
Click on Edit Scheme.
Under the Run action, select the Arguments tab, where you can see Environment Variables.
Add a new environment variable com.apple.CoreData.SQLDebug and set its value to 1.
Now close and run the app in the simulator.
In the console you can now see all the logs related to Core Data.
Search for CoreData: annotation: Connecting to sqlite database file at. This will give you the exact path for your database.
Great, you have done your first step, and now you have the path to the SQLite database of your project. To view the data in the database, you can use any of the free SQLite database viewers.
I am using DB Browser. You can download it using this link https://sqlitebrowser.org/.
Now go to the path that was printed in the previous step using Finder; there you will see the .sqlite file. Double-click it to open it in DB Browser. That's it, now you can inspect your database structure and the values stored in it.
I would say the first thing to check would be to go to this website:
https://marketplace.visualstudio.com/vscode
If it doesn't load or you have a connection issue, it could be due to a VPN that you have installed.
VS Code uses its own Node.js-based backend. Sometimes it can't connect even if your browser can. So you could try opening the VS Code terminal and run:
curl https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery
If that fails, then there is a network restriction or a DNS issue inside your environment.
An alternative way to get the Instant would be ZonedDateTime, which is also part of the java.time package:
import java.time.*;
Instant oneYearFromNow = ZonedDateTime.now().plusYears(1).toInstant();
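For context, Instant on its own cannot add calendar-based units, which is why the detour through ZonedDateTime is needed. A minimal sketch of the failing case:
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Throws UnsupportedTemporalTypeException: Instant does not support YEARS.
Instant oneYearLater = Instant.now().plus(1, ChronoUnit.YEARS);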
SQS uses envelope encryption, so the producer needs kms:GenerateDataKey to create a data key to encrypt the messages it sends, and it needs kms:Decrypt to verify the data key's integrity. It doesn't need kms:Encrypt, because it uses the data key to do the encryption.
The consumer just needs kms:Decrypt to decrypt the encrypted data key, and then it can decrypt the messages using that data key.
So the repost doc is correct.
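As a rough illustration (the key ARN is a placeholder, not from the original question), the producer's IAM policy statement would typically contain just these two KMS actions:
{
  "Effect": "Allow",
  "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
  "Resource": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"
}
The consumer's statement would list only kms:Decrypt.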
How is the application able to function correctly with the permissions 'reversed' like this?
My guess would be that either your queue isn't set up for SSE-KMS encryption, or your KMS key has the necessary permissions defined in its key policy.
Are there any pitfalls or potential problems with this arrangement I need to be aware of?
Assuming the queue is encrypted, then you've got duplicate permissions defined in different places, which isn't ideal, and you've got permissions defined that you don't need (e.g. neither producer nor consumer needs kms:Encrypt).
My first impression was that 100,000 objects can be a lot; try reducing it to something absurd like 1,000 and see if it still reproduces. Also, you are not overflowing to disk (overflowToDisk=false), which is a hard limit on swapping and would explain the OOM error. Don't use eternal="true" unless you know very well the amount and size of the objects you are storing, as you are preventing the cache from evicting less-used objects. Also remember to check -Xmx and -Xms in the JVM.
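For illustration only, a cache definition along those lines might look like the fragment below (the name and numbers are placeholders to tune against your heap, not values from the question):
<cache name="myCache"
       maxElementsInMemory="1000"
       eternal="false"
       overflowToDisk="true"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"/>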
With the information you provide, this seems more like a question an AI code assistant could resolve.
I had the same issue and was very frustrated - I'm using dates in Sheets a lot! Thank you for the solution, it works perfectly!
dpkg -l | grep nvidia-jetpack
Use the command above instead; it shows the JetPack version actually installed on the machine.
sudo apt-cache show nvidia-jetpack
The line above only shows the cached packages of the installed JetPack and may list several of them. Just run sudo apt clean if you want to clear them.
Open the settings of the Terminal app (`Command` + `,`)
Profiles
Go to "Keyboard" tab
Uncheck "Use Option as Meta key"
I am also facing the same issue. Could you find a solution to this problem? I also tried different parameters in the API, but it doesn't work.
To resolve this issue, I updated my app's build.gradle file to target the required API level:
android {
compileSdkVersion 35
defaultConfig {
targetSdkVersion 35
}
}
If you still get the warning, please remove the older bundles from the open and closed testing tracks.
Yeah, I figured it is possible, but I lack the expertise. I found that you need su privileges for the app, which is impossible to obtain on these devices: they are closed source and very little documentation is available from the manufacturer. I tried adb and all the other approaches (tried rooting too, but failed miserably). Somehow, accessing the serial port via Termux is possible; I even tried a background terminal process, but it was too slow for me.
Later I found a plugin called SPUP, a UART-bridge approach (my last resort), but it is smart and can adapt to all platforms without any code changes, and it worked.
So the problem is currently solved, but I'm still open to suggestions and alternatives.
Thanks
The issue lay in ModSecurity. It was set to "Detection only" with the default OWASP ruleset, but even so, it appeared to throw some kind of error. I have been able to resolve it by setting ModSecurity to Off, or to a different ruleset like Atomic Standard (and then it can be fully on, yielding no problems).
I am facing the same issue; I have attached the log for it below. The uuid module is not loadable by Azure Function Apps.
import uuid
2025-07-02T06:28:33.900 [Information] File "/home/site/wwwroot/.python_packages/lib/site-packages/uuid.py", line 138
2025-07-02T06:28:33.900 [Information] if not 0 <= time_low < 1<<32L:
2025-07-02T06:28:33.900 [Information] ^
2025-07-02T06:28:33.900 [Error] SyntaxError: invalid decimal literal
2025-07-02T06:28:33.900 [Information] Traceback (most recent call last):
2025-07-02T06:28:33.900 [Information] File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/main.py", line 61, in main
2025-07-02T06:28:33.900 [Information] return asyncio.run(start_async(
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
2025-07-02T06:28:33.900 [Information] return runner.run(main)
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
2025-07-02T06:28:33.900 [Information] return self._loop.run_until_complete(task)
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
2025-07-02T06:28:33.900 [Information] return future.result()
String.join(", ", List.of("one", "two", "three"));
I found the problem. I had this in index.css.
.MuiInputLabel-outlined {
transform: translate(12px, 14px) scale(1) !important;
}
How can I disable this only on DateTimePicker but leave it as it is on everything else?
For me, it was enough to stop the build and run it again. Maybe it was stuck because of a connection issue.
If your database connection is defined using environment variables, you must run the Rails console as the application user. I get this error when I try to use the console as root, but my app user is an unprivileged user.
I also encountered this problem. The reason was that the word "referrer" was misspelled as "referer".
I created a new Google account, and the result is OK now.
You can set the system property -Djavax.net.debug=ssl:handshake when starting your Java application.
For example, you can run this on the command line:
java -Djavax.net.debug=ssl:handshake -jar jarName.jar
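If you cannot change the launch command, the same property can also be set programmatically, as long as it happens before the first TLS connection is made. A minimal sketch:
public class Main {
    public static void main(String[] args) {
        // Must run before any SSL/TLS classes are used.
        System.setProperty("javax.net.debug", "ssl:handshake");
        // ... rest of the application ...
    }
}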
I think there is an error in answer https://stackoverflow.com/a/46590476/16460395.
The file used in gzip, gerr := gzip.NewReader(file), will never be closed, because Reader.Close does not close the underlying io.Reader (https://pkg.go.dev/compress/gzip#Reader.Close).
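A minimal sketch of closing both readers explicitly (the file name is just an example):
package main

import (
    "compress/gzip"
    "io"
    "os"
)

func main() {
    file, err := os.Open("data.gz") // example path
    if err != nil {
        panic(err)
    }
    defer file.Close() // close the underlying file yourself

    gz, gerr := gzip.NewReader(file)
    if gerr != nil {
        panic(gerr)
    }
    defer gz.Close() // releases gzip resources only, not the file

    io.Copy(io.Discard, gz)
}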
I had to add prisma generate to the build command.
As of July 2, 2025, the latest SDK version is 35 (Android 15).
targetSdkVersion = 35
You must own genuine Apple hardware even for virtualised macOS instances. Apple permits macOS VMs only on Apple-branded machines. Practically, even if you skirt licensing, GPU passthrough limits make the iOS 18 simulator crawl on most Hyper-V or KVM hosts.
If budget is the blocker, consider Apple's Xcode Cloud or third-party CI services that compile and notarise your build remotely; you can still write code on Windows/Linux and push via Git. For Android work, Google's latest system-requirements page says 8 GB RAM is minimum, 16 GB recommended, and any post-2017 Intel/AMD CPU with VT-x/AMD-V will handle the Emulator at 60 fps. Mixing these approaches keeps you license-clean and within NIST 800-163 guidance that discourages unvetted VMs for signing keys.
References
Stack Overflow VM licensing thread
Android Studio system-requirements page
NIST SP 800-163
Thanks for the tips above!
Initialize your (JSON object from a SharePoint list) as an Array
Parse the JSON Object
Using Select - Data Operation to get values - Company, Date From, Date To and Title
Append to string variable to get
[
{
"Company": "Line2",
"Date From": "2022-03-21",
"Date To": "2022-03-29",
"Title": "Title 2"
},
{
"Company": "Test1",
"Date From": "2022-03-30",
"Date To": "2022-03-31",
"Title": "Title 1"
}
]
The evaluation_strategy keyword argument in TrainingArguments has now been replaced with eval_strategy. Using the old argument causes:
TrainingArguments.__init__() got an unexpected keyword argument 'evaluation_strategy'
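A minimal sketch with the renamed argument (output_dir is a placeholder):
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",       # placeholder
    eval_strategy="epoch",  # formerly evaluation_strategy
)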
from django import forms
from .models import Invoice_customer_list,Invoice_product_list
# Form for the Invoice model
class Invoice_customer_Form(forms.ModelForm):
class Meta:
model = Invoice_customer_list
fields = ['customer']
labels = {
'customer': 'Select Customer',
}
class Invoice_product_form(forms.ModelForm):
class Meta:
model = Invoice_product_list
fields = ['product']
labels = {
'product':'Select Product'
}
I want to create a form like this; here is the updated code.
We adopted a Gitflow-like process where we have a branch for each stage we deploy to:
development (dev),
test (test),
acceptance (acc)
master (production).
Feature and bugfix branches are started from and merged back to development. These merges are squashed to simplify the Git history (optional). Then, PRs are made from development to test and from test to acceptance to promote these new releases. Each branch triggers its own build and release. This setup allows us to still hotfix specific environments in case of an urgent problem. When merging from development to another environment, we use a normal merge commit to preserve the Git history. This way, new commits just get added to different branches. In order to make sure teammates don't make mistakes when merging, the specific type of merge we want for the branches is specified in the branch protection policies. We do not use release branches or tag specific releases. Instead, we use conventional commit messages and a tool called GitVersion to automatically calculate semantic version numbers that we then use as build and release numbers.
The specific thing about the ECS game domain is that you have many of each of these things.
A component in C++ is a POD struct, and the key point is that you have a container of PODs.
Access to and management of the PODs goes through the container interface, for example std::array or std::vector, preferring access by container index rather than by pointer.
That is because ECS and the related cache locality use the cores and caches optimally.
The components are updated in a system class that uses the container to process them in bulk.
You often won't see any of the downsides of inheritance in very small-scoped games like Pong.
But in a game with the scope of ARMA or Homeworld, with a huge diversity of entities, you will.
ECS is plural (entity IDs, components), so there is a point where there is a mix of OOP, but it sits above the container level:
the entity and component container interfaces and managers.
MPMoviePlayerController supports some legacy formats better, while AVPlayer requires proper HLS formatting. Check your .m3u8 stream for correct audio codec, MIME type, and HLS compliance for AVPlayer.
Can you create a commit in a new file and create a GitHub workflow that appends that file's text to the original file each time a commit is made?
On terminal
#curl -s http://169.254.169.254/latest/meta-data/instance-id
gives the instance ID. 169.254.169.254 is a special AWS internal IP address used to access the Instance Metadata Service (IMDS) from within an EC2 instance.
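If the instance enforces IMDSv2, the plain GET is rejected and you first need a session token; a sketch:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id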
Thanks for the explanation, Yong Shun. That clears things up. I was also under the impression that in ngOnInit the @Input() values would already be ready, but it makes sense now why it's still null at that point. I'll try using ngOnChanges or AfterViewInit depending on the use case. The mention of Angular Signals is interesting too; I hadn't explored that yet. Appreciate the insights!
1. Register the new menu location in functions.php (see the sketch after this list)
2. Assign the Menu in WordPress Admin
3. Add the menu to the first-page template
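A minimal sketch for steps 1 and 3 (the location slug, text domain, and function name are placeholders, not WordPress requirements):
// functions.php - register an extra menu location
function mytheme_register_menus() {
    register_nav_menus( array(
        'firstpage_menu' => __( 'First Page Menu', 'mytheme' ),
    ) );
}
add_action( 'after_setup_theme', 'mytheme_register_menus' );

// In the first-page template, output the assigned menu:
wp_nav_menu( array( 'theme_location' => 'firstpage_menu' ) );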
Unlike Express, there is no req.body object, so the body has to be assembled in the request handler. You can follow this article:
https://nodejs.org/en/learn/modules/anatomy-of-an-http-transaction#request-body
let incomingData=[];
req.on('data',chunk=>{
incomingData.push(chunk)
})
.on('end',()=>{
incomingData = Buffer.concat(incomingData)
let name=JSON.parse(incomingData.toString())
console.log("converted to string",name.name)
})
If your input file has the .webm extension, use this:
ffmpeg -re -i "your_file.webm" -map 0 -c:a copy -c:v copy -single_file "true" -f dash "your_output.mpd"
Fixed the File Share access issue by creating a Private Endpoint with a Private DNS Zone; the self-hosted Azure DevOps agent now has access.
We need at least 3 items to build the essential triangles that make up an AVL tree, and some data to store and search on.
How to politely say the basic data structure is wrong for any AVL tree? Each node of an AVL tree needs at least 5 items: 1. the data, or a pointer to the data (the data MUST include a key); 2. a pointer to the left child node; 3. a pointer to the right child node; 4. a pointer to the PARENT node, which is missing; 5. one or more items to enforce the balance of the tree. One could store these extra item(s) separately. As a side note, one could use an index into an array rather than a C-style pointer. In any case, the code is a cascade of design mistakes, with errors in each function.
No doubt it compiles (under which OS and compiler?). To debug and test, you will want to write one or a series of tree-display subprograms. I'm currently building a general-purpose 32/64-bit Intel AVL tree. I'm at about 2000 lines and not done [verbose is my nickname]. It is intended for symbol tables for a compiler. I did some web searches for code and found a lot of broken examples. Search of an AVL tree should be about m*O(lg N), insert about m*O(2*lg N) because of the retrace. Delete and other operations such as bulk load are not needed for my intended use.
Some rules changed with Tailwind CSS v4.
The v4 Angular setup is initialized with a CSS file and uses the @tailwindcss/postcss plugin.
Differences vs v3:
V4 does not create a tailwind.config.js file by default; on newer Angular versions it needs a .postcssrc.json instead.
The v4 tutorial uses styles.css in the Angular project root, not styles.scss, so some usage changed.
styles.css needs @import "tailwindcss"; instead of the v3 @tailwind directives.
In a component's own CSS file, v4 also needs the extra statement @import "tailwindcss"; if you want to use @apply.
For manual "dark" mode, Tailwind CSS v4 needs the statement @custom-variant dark (&:where(.dark, .dark *)); added in styles.css.
But if you want to use dark: in another component (lazy routing or lazy component), you must add @import "tailwindcss"; in that component's .css file and add @custom-variant dark (&:where(.dark, .dark *)); again.
Steps to add Tailwind CSS v4 to a newer Angular version:
Add TailwindCSS to project
npm install tailwindcss @tailwindcss/postcss postcss --force
Create the PostCSS config (.postcssrc.json)
{
"plugins": {
"@tailwindcss/postcss": {}
}
}
Add TailwindCSS in root styles.css
@import "tailwindcss";
Set dark mode support (optional) in root styles.css
@import "tailwindcss";
@custom-variant dark (&:where(.dark, .dark *)); /* <html class="dark"> */
In other lazy component/module (must): src/dashboard/slidebar/slidebar.css
@import "tailwindcss"; /* for @apply usage */
@custom-variant dark (&:where(.dark, .dark *)); /* for dark:text-white usage */
.menu-item {
@apply flex flex-row w-full h-32 text-slate-100 dark:text-white;
}
.menu-item.active { @apply text-slate-800 dark:text-slate-100; }
You may want to try adding the resizable window flag on window creation.
SDL_WINDOW_RESIZABLE
Like this:
window = SDL_CreateWindow("Test Window", 800, 800, SDL_WINDOW_BORDERLESS | SDL_WINDOW_RESIZABLE);
If you want to set it globally for the whole app, use
<style>
<item name="android:includeFontPadding">false</item>
</style>
inside your theme.xml
dat<-as.data.frame(rexp(1000,0.2))
g <- ggplot(dat, aes(x = dat[,1]))
g + geom_histogram(alpha = 0.2, binwidth = 5, colour = "black") +
geom_line(stat = "bin", binwidth = 5, linewidth = 1)
I met the same problem. You define stat = "bin" for geom_line, which makes geom_line compute its values the same way geom_histogram or geom_freqpoly does.
Result:
According to the Declaration Merging section in the TypeScript official documentation, it mentions:
Non-function members of the interfaces should be unique. If they are not unique, they must be of the same type. The compiler will issue an error if the interfaces both declare a non-function member of the same name, but of different types.
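A small sketch of the error case described above:
interface Box {
    width: number;
}
interface Box {
    width: string; // Error: subsequent property declarations must have the same type
}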
Absolutely none of these work for me. I downloaded the latest Android Studio today, 7/1/2025. I have no idea what version it is because there are too many numbers to figure it out. This should not be that difficult.
I just want the toolbar with all of the icons to display at the top. It has icons for running in debug mode, adding/removing comment block, etc. I am not talking about the Menu bar that has File, Edit, View, etc. I want the icon bar or tool bar. Whatever you want to call it.
If we track all possibilities, then
first if condition gives us
T(n)=O(n/2)+T(n/2) equivalent to T(n)=O(n)+T(n/2)
second gives us
T(n)=2*O(n/2)+T(n/2) equivalent to T(n)=O(n)+T(n/2)
for the third one
You can easily see that all possibilities will be equivalent to T(n)=O(n)+T(n/4).
From these recursions you can deduce that T(n)=O(n) i.e. the time complexity is linear.
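Expanding the recurrence makes the linear bound explicit: with T(n) = cn + T(n/2),
T(n) = cn + cn/2 + cn/4 + ... <= 2cn = O(n)
since the geometric series sums to at most 2cn.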
On your merge sort analogy: the array is being broken up in a similar way, but if you observe carefully we don't operate on every chunk, unlike merge sort. Basically, at each of the log n levels in merge sort we deal with all n elements, while here we deal with only n/(2^i), i.e. the work decays exponentially.
import os
import re
import asyncio
import logging
import time
import gc
from pathlib import Path
from telethon import TelegramClient, events
from telethon.tl.types import MessageMediaDocument, InputDocumentFileLocation
from telethon.tl.functions.upload import GetFileRequest
from telethon.crypto import AES
from telethon.errors import FloodWaitError
import aiofiles
from concurrent.futures import ThreadPoolExecutor
# Optimize garbage collection for large file operations
gc.set_threshold(700, 10, 10)
# Set environment variables for better performance
os.environ['PYTHONUNBUFFERED'] = '1'
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S'
)
TELEGRAM_API_ID = int(os.getenv("TELEGRAM_API_ID"))
TELEGRAM_API_HASH = os.getenv("TELEGRAM_API_HASH")
TELEGRAM_SESSION_NAME = os.path.join('session', os.getenv('TELEGRAM_SESSION_NAME', 'bot_session'))
TELEGRAM_GROUP_ID = int(os.getenv("GROUP_CHAT_ID"))
TOPIC_IDS = {
'Doc 1': 137,
}
TOPIC_ID_TO_CATEGORY = {
137: 'doc 1',
}
CATEGORY_TO_DIRECTORY = {
'doc 1': '/mnt/disco1/test',
}
class FastTelegramDownloader:
def __init__(self, client, max_concurrent_downloads=4):
self.client = client
self.max_concurrent_downloads = max_concurrent_downloads
self.semaphore = asyncio.Semaphore(max_concurrent_downloads)
async def download_file_fast(self, message, dest_path, chunk_size=1024*1024, progress_callback=None):
"""
Fast download using multiple concurrent connections for large files
"""
document = message.media.document
file_size = document.size
# For smaller files, use standard download
if file_size < 10 * 1024 * 1024: # Less than 10MB
return await self._standard_download(message, dest_path, progress_callback)
# Create input location for the file
input_location = InputDocumentFileLocation(
id=document.id,
access_hash=document.access_hash,
file_reference=document.file_reference,
thumb_size=""
)
# Calculate number of chunks and their sizes
chunks = []
offset = 0
chunk_id = 0
while offset < file_size:
chunk_end = min(offset + chunk_size, file_size)
chunks.append({
'id': chunk_id,
'offset': offset,
'limit': chunk_end - offset
})
offset = chunk_end
chunk_id += 1
logging.info(f"š¦ Dividiendo archivo en {len(chunks)} chunks de ~{chunk_size//1024}KB")
# Download chunks concurrently
chunk_data = {}
downloaded_bytes = 0
start_time = time.time()
async def download_chunk(chunk):
async with self.semaphore:
try:
result = await self.client(GetFileRequest(
location=input_location,
offset=chunk['offset'],
limit=chunk['limit']
))
# Update progress
nonlocal downloaded_bytes
downloaded_bytes += len(result.bytes)
if progress_callback:
progress_callback(downloaded_bytes, file_size)
return chunk['id'], result.bytes
except Exception as e:
logging.error(f"Error downloading chunk {chunk['id']}: {e}")
return chunk['id'], None
try:
# Execute downloads concurrently
tasks = [download_chunk(chunk) for chunk in chunks]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Collect successful chunks
for result in results:
if isinstance(result, tuple) and result[1] is not None:
chunk_id, data = result
chunk_data[chunk_id] = data
# Verify all chunks downloaded successfully
if len(chunk_data) != len(chunks):
logging.warning(f"Some chunks failed, falling back to standard download")
return await self._standard_download(message, dest_path, progress_callback)
# Write file in correct order
async with aiofiles.open(dest_path, 'wb') as f:
for i in range(len(chunks)):
if i in chunk_data:
await f.write(chunk_data[i])
else:
raise Exception(f"Missing chunk {i}")
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"ā
Fast download completed: {dest_path} - Speed: {speed:.2f} MB/s")
return dest_path
except Exception as e:
logging.error(f"Fast download failed: {e}")
return await self._standard_download(message, dest_path, progress_callback)
async def _standard_download(self, message, dest_path, progress_callback=None):
"""Fallback to standard download method"""
document = message.media.document
file_size = document.size
# Optimize chunk size based on file size
if file_size > 100 * 1024 * 1024: # >100MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 50 * 1024 * 1024: # >50MB
part_size_kb = 1024 # 1MB chunks
elif file_size > 10 * 1024 * 1024: # >10MB
part_size_kb = 512 # 512KB chunks
else:
part_size_kb = 256 # 256KB chunks
start_time = time.time()
await self.client.download_file(
document,
file=dest_path,
part_size_kb=part_size_kb,
file_size=file_size,
progress_callback=progress_callback
)
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
logging.info(f"š Standard download speed: {speed:.2f} MB/s")
return dest_path
class MultiClientDownloader:
def __init__(self, api_id, api_hash, session_base_name, num_clients=3):
self.api_id = api_id
self.api_hash = api_hash
self.session_base_name = session_base_name
self.num_clients = num_clients
self.clients = []
self.client_index = 0
self.fast_downloaders = []
async def initialize_clients(self):
"""Initialize multiple client instances"""
for i in range(self.num_clients):
session_name = f"{self.session_base_name}_{i}"
client = TelegramClient(
session_name,
self.api_id,
self.api_hash,
connection_retries=3,
auto_reconnect=True,
timeout=300,
request_retries=3,
flood_sleep_threshold=60,
system_version="4.16.30-vxCUSTOM",
device_model="HighSpeedDownloader",
lang_code="es",
system_lang_code="es",
use_ipv6=False
)
await client.start()
self.clients.append(client)
self.fast_downloaders.append(FastTelegramDownloader(client, max_concurrent_downloads=2))
logging.info(f"ā
Cliente {i+1}/{self.num_clients} inicializado")
def get_next_client(self):
"""Get next client using round-robin"""
client = self.clients[self.client_index]
downloader = self.fast_downloaders[self.client_index]
self.client_index = (self.client_index + 1) % self.num_clients
return client, downloader
async def close_all_clients(self):
"""Clean shutdown of all clients"""
for client in self.clients:
await client.disconnect()
class TelegramDownloader:
def __init__(self, multi_client_downloader):
self.multi_client = multi_client_downloader
self.downloaded_files = set()
self.load_downloaded_files()
self.current_download = None
self.download_stats = {
'total_files': 0,
'total_bytes': 0,
'total_time': 0
}
def _create_download_progress_logger(self, filename):
"""Progress logger with reduced frequency"""
start_time = time.time()
last_logged_time = start_time
last_percent_reported = -5
MIN_STEP = 10 # Report every 10%
MIN_INTERVAL = 5 # Or every 5 seconds
def progress_bar_function(done_bytes, total_bytes):
nonlocal last_logged_time, last_percent_reported
current_time = time.time()
percent_now = int((done_bytes / total_bytes) * 100)
if (percent_now - last_percent_reported < MIN_STEP and
current_time - last_logged_time < MIN_INTERVAL):
return
last_percent_reported = percent_now
last_logged_time = current_time
speed = done_bytes / 1024 / 1024 / (current_time - start_time or 1)
msg = (f"⬠{filename} | "
f"{percent_now}% | "
f"{speed:.1f} MB/s | "
f"{done_bytes/1024/1024:.1f}/{total_bytes/1024/1024:.1f} MB")
logging.info(msg)
return progress_bar_function
async def _process_download(self, message, metadata, filename, dest_path):
try:
self.current_download = filename
logging.info(f"š Iniciando descarga de: {filename}")
progress_logger = self._create_download_progress_logger(filename)
temp_path = dest_path.with_name(f"temp_{metadata['file_name_telegram']}")
# Get next available client and downloader
client, fast_downloader = self.multi_client.get_next_client()
file_size = message.media.document.size
start_time = time.time()
try:
# Try fast download first for large files
if file_size > 20 * 1024 * 1024: # Files larger than 20MB
logging.info(f"š¦ Usando descarga rĆ”pida para archivo de {file_size/1024/1024:.1f}MB")
await fast_downloader.download_file_fast(
message, temp_path, progress_callback=progress_logger
)
else:
# Use standard optimized download for smaller files
await fast_downloader._standard_download(
message, temp_path, progress_callback=progress_logger
)
except Exception as download_error:
logging.warning(f"Descarga optimizada falló, usando método estÔndar: {download_error}")
# Final fallback to basic download
await client.download_file(
message.media.document,
file=temp_path,
part_size_kb=512,
file_size=file_size,
progress_callback=progress_logger
)
if not temp_path.exists():
raise FileNotFoundError("No se encontró el archivo descargado")
# Atomic rename
temp_path.rename(dest_path)
# Update statistics
end_time = time.time()
duration = end_time - start_time
speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
self.download_stats['total_files'] += 1
self.download_stats['total_bytes'] += file_size
self.download_stats['total_time'] += duration
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time'] if self.download_stats['total_time'] > 0 else 0
logging.info(f"ā
Descarga completada: {dest_path}")
logging.info(f"š Velocidad: {speed:.2f} MB/s | Promedio sesión: {avg_speed:.2f} MB/s")
self.save_downloaded_file(str(message.id))
except Exception as e:
logging.error(f"ā Error en descarga: {str(e)}", exc_info=True)
# Cleanup on error
for path_var in ['temp_path', 'dest_path']:
if path_var in locals():
path = locals()[path_var]
if hasattr(path, 'exists') and path.exists():
try:
path.unlink()
except:
pass
raise
finally:
self.current_download = None
def load_downloaded_files(self):
try:
if os.path.exists('/app/data/downloaded.log'):
with open('/app/data/downloaded.log', 'r', encoding='utf-8') as f:
self.downloaded_files = set(line.strip() for line in f if line.strip())
logging.info(f"š Cargados {len(self.downloaded_files)} archivos ya descargados")
except Exception as e:
logging.error(f"Error cargando archivos descargados: {str(e)}")
def save_downloaded_file(self, file_id):
try:
with open('/app/data/downloaded.log', 'a', encoding='utf-8') as f:
f.write(f"{file_id}\n")
self.downloaded_files.add(file_id)
except Exception as e:
logging.error(f"Error guardando archivo descargado: {str(e)}")
def parse_metadata(self, caption):
metadata = {}
try:
if not caption:
logging.debug(f"š No hay caption")
return None
pattern = r'^(\w[\w\s]*):\s*(.*?)(?=\n\w|\Z)'
matches = re.findall(pattern, caption, re.MULTILINE)
for key, value in matches:
key = key.strip().lower().replace(' ', '_')
metadata[key] = value.strip()
required_fields = [
'type', 'tmdb_id', 'file_name_telegram',
'file_name', 'folder_name', 'season_folder'
]
if not all(field in metadata for field in required_fields):
return None
if 'season' in metadata:
metadata['season'] = int(metadata['season'])
if 'episode' in metadata:
metadata['episode'] = int(metadata['episode'])
return metadata
except Exception as e:
logging.error(f"Error parseando metadata: {str(e)}")
return None
def get_destination_path(self, message, metadata):
try:
topic_id = message.reply_to.reply_to_msg_id if message.reply_to else None
if not topic_id:
logging.warning("No se pudo determinar el topic ID del mensaje")
return None
category = TOPIC_ID_TO_CATEGORY.get(topic_id)
if not category:
logging.warning(f"No se encontró categorĆa para el topic ID: {topic_id}")
return None
base_dir = CATEGORY_TO_DIRECTORY.get(category)
if not base_dir:
logging.warning(f"No hay directorio configurado para la categorĆa: {category}")
return None
filename = metadata.get('file_name')
if not filename:
logging.warning("Campo 'file_name' no encontrado en metadatos")
return None
if metadata['type'] == 'movie':
folder_name = f"{metadata['folder_name']}"
dest_dir = Path(base_dir) / folder_name
return dest_dir / filename
elif metadata['type'] == 'tv':
folder_name = f"{metadata['folder_name']}"
season_folder = metadata.get('season_folder', 'Season 01')
dest_dir = Path(base_dir) / folder_name / season_folder
return dest_dir / filename
else:
logging.warning(f"Tipo de contenido no soportado: {metadata['type']}")
return None
except Exception as e:
logging.error(f"Error determinando ruta de destino: {str(e)}")
return None
async def download_file(self, message):
try:
await asyncio.sleep(1) # Reduced delay
if not isinstance(message.media, MessageMediaDocument):
return
if str(message.id) in self.downloaded_files:
logging.debug(f"Archivo ya descargado (msg_id: {message.id})")
return
metadata = self.parse_metadata(message.message)
if not metadata:
logging.warning("No se pudieron extraer metadatos vƔlidos")
return
if 'file_name' not in metadata or not metadata['file_name']:
logging.warning("El campo 'file_name' es obligatorio en los metadatos")
return
dest_path = self.get_destination_path(message, metadata)
if not dest_path:
return
dest_path.parent.mkdir(parents=True, exist_ok=True)
if dest_path.exists():
logging.info(f"Archivo ya existe: {dest_path}")
self.save_downloaded_file(str(message.id))
return
await self._process_download(message, metadata, metadata['file_name'], dest_path)
except Exception as e:
logging.error(f"Error descargando archivo: {str(e)}", exc_info=True)
async def process_topic(self, topic_id, limit=None):
try:
logging.info(f"š Procesando topic ID: {topic_id}")
# Use first client for message iteration
client = self.multi_client.clients[0]
async for message in client.iter_messages(
TELEGRAM_GROUP_ID,
limit=limit,
reply_to=topic_id,
wait_time=10 # Reduced wait time
):
try:
if message.media and isinstance(message.media, MessageMediaDocument):
await self.download_file(message)
# Small delay between downloads to prevent rate limiting
await asyncio.sleep(0.5)
except FloodWaitError as e:
wait_time = e.seconds + 5
logging.warning(f"ā ļø Flood wait detectado. Esperando {wait_time} segundos...")
await asyncio.sleep(wait_time)
continue
except Exception as e:
logging.error(f"Error procesando mensaje: {str(e)}", exc_info=True)
continue
except Exception as e:
logging.error(f"Error procesando topic {topic_id}: {str(e)}", exc_info=True)
async def process_all_topics(self):
for topic_name, topic_id in TOPIC_IDS.items():
logging.info(f"šÆ Iniciando procesamiento de: {topic_name}")
await self.process_topic(topic_id)
# Print session statistics
if self.download_stats['total_files'] > 0:
avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time']
logging.info(f"š EstadĆsticas del topic {topic_name}:")
logging.info(f" š Archivos: {self.download_stats['total_files']}")
logging.info(f" š¾ Total: {self.download_stats['total_bytes']/1024/1024/1024:.2f} GB")
logging.info(f" ā” Velocidad promedio: {avg_speed:.2f} MB/s")
async def main():
try:
# Test cryptg availability
test_data = os.urandom(1024)
key = os.urandom(32)
iv = os.urandom(32)
encrypted = AES.encrypt_ige(test_data, key, iv)
decrypted = AES.decrypt_ige(encrypted, key, iv)
if decrypted != test_data:
raise RuntimeError("ā Cryptg does not work properly")
logging.info("ā
cryptg available and working")
except Exception as e:
logging.critical(f"ā ERROR ON CRYPTG: {str(e)}")
raise SystemExit(1)
# Ensure session directory exists
os.makedirs('session', exist_ok=True)
os.makedirs('/app/data', exist_ok=True)
# Initialize multi-client downloader
multi_client = MultiClientDownloader(
TELEGRAM_API_ID,
TELEGRAM_API_HASH,
TELEGRAM_SESSION_NAME,
num_clients=3 # Use 3 clients for better speed
)
try:
logging.info("š Inicializando clientes mĆŗltiples...")
await multi_client.initialize_clients()
downloader = TelegramDownloader(multi_client)
logging.info("š„ Iniciando descarga de todos los topics...")
await downloader.process_all_topics()
logging.info("ā
Proceso completado exitosamente")
except Exception as e:
logging.error(f"Error en main: {str(e)}", exc_info=True)
finally:
logging.info("š Cerrando conexiones...")
await multi_client.close_all_clients()
if __name__ == "__main__":
asyncio.run(main())
Plotly.js creates a global stylesheet that is used to show the tooltip (called in plotly.js "hover..." hoverbox, hoverlayer, hoverlabel), as well as for other features - for instance, you can see the "modebar" (the icon menu that's by default at the top-right of the plot div) is misplaced in your shadow-DOM version.
The issue is thus that the global stylesheets are not applied to the shadow DOM. Based on the information from this Medium article by EisenbergEffect, I applied the global stylesheets to the shadow root of your sankey-sd, using the function:
function addGlobalStylesToShadowRoot(shadowRoot) {
const globalSheets = Array.from(document.styleSheets)
.map(x => {
const sheet = new CSSStyleSheet();
const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
sheet.replaceSync(css);
return sheet;
});
shadowRoot.adoptedStyleSheets.push(
...globalSheets
);
}
applied in the constructor of class SankeySD:
class SankeySD extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
addGlobalStylesToShadowRoot(this.shadowRoot);
}
// ............... other methods
}
and it did enable the tooltip and corrected the position of the modebar.
Here's a stack snippet demo, based on your original code:
//from https://eisenbergeffect.medium.com/using-global-styles-in-shadow-dom-5b80e802e89d
function addGlobalStylesToShadowRoot(shadowRoot) {
const globalSheets = Array.from(document.styleSheets)
.map(x => {
const sheet = new CSSStyleSheet();
const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
sheet.replaceSync(css);
return sheet;
});
shadowRoot.adoptedStyleSheets.push(
...globalSheets
);
}
window.addEventListener('DOMContentLoaded', () => {
class SankeySD extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
addGlobalStylesToShadowRoot(this.shadowRoot);
}
connectedCallback() {
const chartDiv = document.createElement('div');
chartDiv.id = 'chart';
chartDiv.style.width = '100%';
chartDiv.style.height = '100%';
chartDiv.style.minWidth = '500px';
chartDiv.style.minHeight = '400px';
this.shadowRoot.appendChild(chartDiv);
const labels = ["Start", "Middle", "Begin", "End", "Final"];
const labelIndex = new Map(labels.map((label, i) => [label, i]));
const links = [
{ source: "Start", target: "Middle", value: 5, label: "Test" },
{ source: "Start", target: "Middle", value: 3, label: "Test2" },
{ source: "Middle", target: "Start", value: 1, label: "" },
{ source: "Start", target: "End", value: 2, label: "" },
{ source: "Begin", target: "Middle", value: 5, label: "Test" },
{ source: "Middle", target: "End", value: 3, label: "" },
{ source: "Final", target: "Final", value: 0.0001, label: "" }
];
const sources = links.map(link => labelIndex.get(link.source));
const targets = links.map(link => labelIndex.get(link.target));
const values = links.map(link => link.value);
const customData = links.map(link => [link.source, link.target, link.value]);
const trace = {
type: "sankey",
orientation: "h",
arrangement: "fixed",
node: {
label: labels,
pad: 15,
thickness: 20,
line: { color: "black", width: 0.5 },
hoverlabel: {
bgcolor: "white",
bordercolor: "darkgrey",
font: {
color: "black",
family: "Open Sans, Arial",
size: 14
}
},
hovertemplate: '%{label}<extra></extra>',
color: ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9" ]
},
link: {
source: sources,
target: targets,
value: values,
arrowlen: 20,
pad: 20,
thickness: 20,
line: { color: "black", width: 0.2 },
color: sources.map(i => ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9"][i]),
customdata: customData,
hoverlabel: {
bgcolor: "white",
bordercolor: "darkgrey",
font: {
color: "black",
family: "Open Sans, Arial",
size: 14
}
},
hovertemplate:
'<b>%{customdata[0]}</b> → <b>%{customdata[1]}</b><br>' +
'Flow Value: <b>%{customdata[2]}</b><extra></extra>'
}
};
const layout = {
font: { size: 14 },
//margin: { t: 20, l: 10, r: 10, b: 10 },
//hovermode: 'closest'
};
Plotly.newPlot(chartDiv, [trace], layout, { responsive: true, displayModeBar: true })
.then((plot) => {
chartDiv.on('plotly_click', function(eventData) {
console.log(eventData);
if (!eventData || !eventData.points || !eventData.points.length) return;
const point = eventData.points[0];
if (typeof point.pointIndex === "number") {
const nodeLabel = point.label;
alert("Node clicked: " + nodeLabel + "\nNode index: " + point.pointIndex);
console.log("Node clicked:", point);
} else if (typeof point.pointNumber === "number") {
const linkIdx = point.pointNumber;
const linkData = customData[linkIdx];
alert(
"Link clicked: " +
linkData[0] + " ā " + linkData[1] +
"\nValue: " + linkData[2] +
"\nLink index: " + linkIdx
);
console.log("Link clicked:", point);
} else {
console.log("Clicked background", point);
}
});
});
}
}
customElements.define('sankey-sd', SankeySD);
});
html, body {
height: 100%;
margin: 0;
}
sankey-sd {
display: block;
width: 100vw;
height: 100vh;
}
<sankey-sd></sankey-sd>
<script src="https://cdn.plot.ly/plotly-3.0.1.min.js" charset="utf-8"></script>
<!-- also works with v 2.30.1-->
The click feature is not caused by the shadow DOM; in this fiddle that uses the same plot configuration, but without the shadow DOM, the behaviour is the same - there's always a point.pointNumber and never a point.pointIndex.
I can't find the code you have used, can you please show the version that works? In any case, this might be another question, as there should not be multiple issues per post, if their solutions are unrelated.
Font-weight rendering varies across browsers due to different font smoothing and anti-aliasing techniques.
Testing and using web-safe fonts or variable fonts can help ensure consistent appearance.
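If you need to nudge the rendering, these widely supported (though non-standard) properties are a common starting point; a sketch, not a guaranteed fix:
body {
  -webkit-font-smoothing: antialiased;  /* WebKit/Blink on macOS */
  -moz-osx-font-smoothing: grayscale;   /* Firefox on macOS */
  text-rendering: optimizeLegibility;
}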
For anyone who is still getting the error after granting access: I deleted the Key Vault secret reference from my App Service's environment variables, saved, re-added it, and it works now.
As a workaround I put '*.py/*[!p][!y]' in 'files to exclude', but I'm not confident that this is really the answer.
The control uses the client system date/time settings to display the date. The only way to fix this without replacing the entire control with something different is to have the client system changed to the "correct" settings.
This is very frustrating. The control offers a format, but doesn't really care what you set it to.
You can set up a Power Automate flow that connects Power BI with Jira:
Create a data alert or trigger in Power BI or directly in Power Automate based on your dataset (e.g., when the number of occurrences for a specific error exceeds a certain threshold within a given date range).
Use Power Automate to monitor this data (either via a scheduled refresh or a Power BI data alert).
Once the condition is met, the flow can automatically create a Jira ticket using the Jira connector.
You can populate the Jira ticket with details from the dataset or spreadsheet (like error type, frequency, affected module, etc.).
There are a couple of things you should check (btw, please share your Cloud Function code snippet):
Make sure that you are calling/invoking a supported GCP Vertex AI Gemini model (Gemini 2.0, Gemini 2.5 Flash/Pro, etc.). Models like PaLM, text-bison, and even earlier Gemini models (like Gemini 1.0) have been deprecated; that is most likely why you are getting a 404. Please check the supported model doc here to use a proper Gemini model.
Verify that you followed this Vertex AI getting started guide to set up your access to the Gemini models. Based on what you described:
You have GCP project
You enabled the Vertex AI API
IAM: try granting your GCP account the Vertex AI User role. For details, check the Vertex AI IAM permissions here.
I recommend using the Google Gen AI SDK for Python to call Gemini models. It handles the endpoint and authentication; you just need to specify the model to use, for example gemini-2.5-flash.
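A minimal sketch with the Gen AI SDK (project ID and location are placeholders):
from google import genai

client = genai.Client(vertexai=True, project="your-project-id", location="us-central1")
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello from my Cloud Function",
)
print(response.text)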
These steps should get you going. Please share code snippet so that I can share the edited snippet back.
<script>
window.setInterval(function() {
var elem = document.getElementById('fixed');
elem.scrollTop = elem.scrollHeight; }, 3000);
</script>
This worked well based on @johnscoops answer
I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write-up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf
You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main
I welcome feedback and discussion on this claim.
Be aware that these numbers are computed on the training data that arrived at that node via different paths, rather than on your prediction results.
I was able to completely avoid the limitation by eliminating instances of query, and instead putting my "iter()" methods directly on the type.
pub trait Query<S: Storage> {
type Result<'r>: 'r where S: 'r;
fn iter(storage: &S) -> Option<impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>;
fn iter_chunks(storage: &S, chunk_size: usize) -> Option<impl Iterator<Item = impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>>;
}
The coded text input requires quotes in order to treat your input as one command; otherwise each space is treated as a separate command on its own. Also, your output will only return the last buffer as 'line', whereas it appears you were trying to set up an output variable 'output'.
I'm having the same problem. I tried various nodemailer examples from the internet but they still failed. After tracking it down, in my case the API was not working because of the use of "output: 'export'" in the next.config.js file. If you don't use "output: 'export'", Next.js keeps its full-stack capability, which means it supports API routes (serverless functions on Vercel). So if anyone has the same problem and hasn't resolved it yet, my suggestion is to remove "output: 'export'" from the next.config.js file. BTW, I use nodemailer, Gmail SMTP, and deploy to Vercel.
Did you manage to resolve this? I am hitting the same issue.
Thanks!
This is happening because at store build time window.innerWidth is undefined, and until the resize event listener is triggered, a new value will not be set.
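A common guard is to fall back to a default during the server/build pass and only read the real value in the browser. A sketch (the default of 1024 and the store setter are placeholders):
const getViewportWidth = () =>
  typeof window === 'undefined' ? 1024 : window.innerWidth;

if (typeof window !== 'undefined') {
  window.addEventListener('resize', () => {
    store.setWidth(window.innerWidth); // placeholder for your store's setter
  });
}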
The moneyRemoved variable wasn't being set true. I should have debugged better. Thank you to @Rufus L though for showing me how to properly get the result from an async method without using .GetAwaiter().GetResult()!
There was a permission issue. The Groovy runtime could not get the resource because I had not opened my packages to org.apache.groovy. I just needed to add:
opens my_package to org.apache.groovy;
to module-info.java.
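In context, the module declaration looks roughly like this (the module and package names are placeholders):
// module-info.java
module my.module {
    opens my_package to org.apache.groovy;
}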
You probably need to handle Form.WndProc and capture the Windows messages for the shortcut events. This is a little more complicated but allows you to capture a lot of things in one place, and it has been answered here before for the usual events of forms closing and minimising:
Preventing a VB.Net form from closing
There are probably message codes for those shortcuts
it worked by using :
{
"mcpServers": {
"firebase": {
"command": "firebase",
"args": ["experimental:mcp"]
}
}
}
What you can do is convert your file to a different but supported file like .wav or .flac and submit it to Google STT and it should work.
It would be interesting for this to be available natively. On Google's side, there is a feature request that you can file, but there is no timeline on when it might be done.
I believe this is only an issue on Windows. I am experiencing the same thing, but running Tensorboard on a Debian server or even through WSL works without an issue. See the associated github issue:
Here are two SO questions with good answers:
- Inline-block element height issue - found through Google
- Why does inline-block cause this div to have height? - silviagreen's comment to the question
As per the H.264 specification, the raw H.264 byte stream does not contain any presentation timestamps. Here is the relevant verbiage; I will add more details as I find them.
One of the main properties of H.264 is the complete decoupling of the transmission time, the decoding time, and the sampling or presentation time of slices and pictures. The decoding process specified in H.264 is unaware of time, and the H.264 syntax does not carry information such as the number of skipped frames (as is common in the form of the Temporal Reference in earlier video compression standards). Also, there are NAL units that affect many pictures and that are, therefore, inherently timeless. For this reason, the handling of the RTP timestamp requires some special considerations for NAL units for which the sampling or presentation time is not defined or, at transmission time, unknown.
timegm() is a non-standard GNU extension. A portable version using mktime() is below. This sets the TZ environment variable to UTC, calls mktime(), and restores the value of TZ. Since TZ is modified, this might not be thread safe. I understand the GNU libc version of tzset() does use a mutex, so it should be thread safe.
See:
#include <time.h>
#include <stdlib.h>
time_t
my_timegm(struct tm *tm)
{
time_t ret;
char *tz;
tz = getenv("TZ");
setenv("TZ", "", 1);
tzset();
ret = mktime(tm);
if (tz)
setenv("TZ", tz, 1);
else
unsetenv("TZ");
tzset();
return ret;
}
I have fixed this problem. Go to the "data" directory of MySQL. Rename the file "binlog.index" to "binlog.index_bak", and that's it. Restart the MySQL server and it will be reset.
Yeah, ROPC is outdated and not recommended: no MFA, no SSO, and hard to switch IdPs later.
Use Authorization Code Flow with PKCE instead. It supports MFA/SSO and gives you refresh tokens if you request the offline_access scope.
In Keycloak, enable this by assigning the offline_access role to users (or include it in the realm's default roles).
Then, in the /auth request, include offline_access in the scope.
When you exchange the auth code at /token, you'll get an offline_token instead of a standard refresh token.
This lets you use Keycloak's login page, so you can enable MFA, SSO, or whatever else you need.
Much safer, future-proof, and fully standard.
This drove me crazy but I found a solution! The cache file is called "Browse.VC.db" and located in a hidden folder called ".vs" example:
c:\VS Projects\yourprogram\.vs\yourprogram\v17\Browse.VC.db
Delete and restart your project.
You could demultiplex the intended service's checkout success events by adding metadata to the Checkout Session, like webhook_target: 'website-a'; then, in website A's webhook handler, ignore anything that arrives with a different webhook_target.
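A rough sketch with the Node library (price ID, URLs, and the metadata value are placeholders):
// When creating the session on website A
const session = await stripe.checkout.sessions.create({
  mode: 'payment',
  line_items: [{ price: 'price_123', quantity: 1 }],
  success_url: 'https://website-a.example/success',
  cancel_url: 'https://website-a.example/cancel',
  metadata: { webhook_target: 'website-a' },
});

// In website A's webhook handler
if (event.type === 'checkout.session.completed') {
  const session = event.data.object;
  if (session.metadata?.webhook_target !== 'website-a') {
    return res.sendStatus(200); // not ours, acknowledge and ignore
  }
  // ... fulfil the order ...
}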
I couldn't find a nice way to do this, so I made a gem to make it easy once there are conflicts.
I removed all git dependencies from my toml/setup.py since pypi is very strict about that and added a try except to my entry point before the import in question. So I have an intentionally failed import when you run the code the first time. This triggers the subprocess to install the package then reruns the code. So the user can still pip install your package and then your package installs the git dependency when they run it.
This is dependent on the user having git on their system. Not a perfect solution but a decent workaround.
import subprocess
import sys
import os
try:
from svcca import something
except ImportError:
print("svcca not found. Installing from GitHub...")
subprocess.check_call([
sys.executable,
"-m", "pip", "install",
"svcca @ git://github.com/themightyoarfish/svcca-gpu.git"
])
print("Installation complete. Relaunching please wait..")
os.execv(sys.executable, [sys.executable] + sys.argv)
I'm not sure this resolves the issue; the Vite dev react-router setup is not meant for releases. I have also tried to configure the react-router package without any luck.
export default defineConfig({
optimizeDeps: {
include: ['react-router-dom'],
},
build: {
commonjsOptions: {
include: [/react-router/, /node_modules/],
},
}
});
I am getting the same error.
According to the documentation, expo-maps is not available in Expo Go.
But I am not sure whether I have installed Expo Go (I think I did).
import speech_recognition as sr
import pyttsx3
import datetime
import webbrowser
engine = pyttsx3.init()
def speak(text):
engine.say(text)
engine.runAndWait()
def take_command():
recognizer = sr.Recognizer()
with sr.Microphone() as source:
print("Listening...")
audio = recognizer.listen(source)
try:
query = recognizer.recognize_google(audio, language='en-in')
print("User said:", query)
return query
except:
speak("Sorry, I didn't catch that.")
return ""
def execute(query):
query = query.lower()
if "time" in query:
time = datetime.datetime.now().strftime("%H:%M")
speak(f"The time is {time}")
elif "open youtube" in query:
webbrowser.open("https://youtube.com")
speak("Opening YouTube")
else:
speak("I can't do that yet.")
speak("Hello, I am Jarvis")
while True:
command = take_command()
if "stop" in command or "bye" in command:
speak("Goodbye!")
break
execute(command)
Ok, I get it now. Sorry, the parameters are somewhat confusing. This works:
analysisValueControl = new FormControl({value: '', disabled: true}, {validators: [
Validators.required, Validators.pattern(/^([+-]?[\d\.,]+)(\([+-]?[\d\.,]+\))?([eE][+-]?\d+)?$/i) ],
updateOn: 'blur'});
I am guessing that your issue is because you aren't giving your servo an initial value, so the pin is likely floating. Try adding
helloServo.write(180);
to the end of your void setup(); this should make the servo start at a known position (Servo.write() expects an angle from 0 to 180, so 360 would just be clamped).
There are two namespaces that bitbake concerns itself with - recipe names (a.k.a. build time targets) and package names (a.k.a. runtime targets). You can specify a build time target on the bitbake command line, but not a runtime target; you need to find the recipe that provides the package you are trying to build and build that instead (or simply add that package to your image and build the image). In current versions bitbake will at least tell you which recipes have matching or similar-sounding runtime provides (RPROVIDES) so that you'll usually get a hint on which recipe you need to build.
The condition you're using seems to have an issue because the output of currentRange and currentRange.getValues() doesn't match what the condition expects, and that's why the else branch triggers instead.
If you check the values using console you will get the output of:
console.log(currentRange) = object
console.log(currentRange.getValues()) = undefined
Agreeing with @Martin on using strings to retrieve the ranges.
To make it work here's a modified version of your code:
function SUM_UNCOLORED_CELLS(...ranges) {
var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var rl = ss.getRangeList(ranges);
var bg = rl.getRanges().map(rg => rg.getBackgrounds());
return bg.flat(2).filter(c => c == "#ffffff").length;
}
To learn more about how to pass a range into a custom function in Google Spreadsheets, you can read this post: How to pass a range into a custom function in Google Spreadsheets?
I'm also searching for this, but it looks like you have to make one subscription and then add different base plans to it. I don't get why there isn't more info about this. I'm also trying to just have 3 different subscriptions and upgrade/downgrade between them. When I created 3 subscriptions in the Google Play Console, I was able to subscribe to all 3 of them in my app; I think that's because it doesn't see them as a subscription group. I'm going to try to make one subscription with 3 base plans and see if it's able to detect it then. I don't know if this is the correct way though...
The likely problem is that the stream argument cannot be 0 (i.e. the default stream). You will need to specify a named stream that was created with cudaStreamCreate*().
You also don't have to specify the location hints because "The cudaMemcpyAttributes::srcLocHint and cudaMemcpyAttributes::dstLocHint allows applications to specify hint locations for operands of a copy when the operand doesn't have a fixed location. That is, these hints are only applicable for managed memory pointers on devices where cudaDevAttrConcurrentManagedAccess is true or system-allocated pageable memory on devices where cudaDevAttrPageableMemoryAccess is true."
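A minimal sketch of creating and passing a named stream (error checking omitted):
cudaStream_t stream;
cudaStreamCreate(&stream);

// ... pass `stream` (not 0) as the stream argument of the copy ...

cudaStreamSynchronize(stream);
cudaStreamDestroy(stream);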
SALES TEAM APP: Features Breakdown
1. Login Page
Username & password (must be created from Manager app)
Login only allowed if Sales ID exists
2. Food Items List
Show photo, name, price
Search or filter option (optional)
3. Create Invoice
Select food items
Quantity & total price calculation
Save invoice
4. Daily Invoice History
Show list of invoices created on the current day
Option to view details
5. Send Feedback
Text input
Sends directly to Manager (stored in the database)
Totally fair, let me clarify a bit.
The root issue seems to stem from how Jekyll resolves file system deltas during its incremental rebuild cycle. When it detects changes, it re-evaluates its asset manifest, and sometimes, if your style.css isn't explicitly locked into the precompiled asset flow, Jekyll will fall back to its inferred default, in this case normalize.css.
One common workaround is to abstract your custom styles into a partial (e.g., _custom.scss) and then import that into a master stylesheet that's definitely tracked by Jekyll's pipeline. Alternatively, some folks set a manual passthrough override in _config.yml to ensure asset pathing stays deterministic during rebuilds. You might also try placing your custom style.css outside the default watch scope and referencing it via a canonical link to bypass the regeneration logic entirely.
Let me know if that helps at all; happy to fine-tune based on your setup.
The wikipedia page on rotation matrices shows 1's on the diagonal.
I believe scipy replaces the 1's on the diagonal with
w^2+x^2+y^2+z^2
That makes them the same for a unit quaternion.
For non-unit quaternions, scipy's matrix acts as both a rotation and a scaling.
For example:
if you take the quaternion = 2 +0i+0j+0k.
The rotation matrix will be the identity matrix (with only a w term there is no rotation),
Scipy's matrix will be 2*identity, because it also includes the scaling.
type NewsItemPageProps = Promise<{ id: string }>;
async function NewsItemPage(props: { params: NewsItemPageProps }) {
const params = await props.params;
const id = params.id;
this code works
I suggest you try using Long Path Tool. It is very convenient.
The solution is to reset the ref value on blur.
There's a required field called "What's New in This Version", and it hasn't been filled out with the updates included in your current build. Please add the relevant changes or improvements made in this version to that field, and this issue will be resolved.
Web servers usually buffer output until some condition is met (the buffer is full, or there is no more data to send). There is no way to bypass this except via HTTP headers; one of them is sending the data chunked, using Transfer-Encoding: chunked.
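For example, Node's http server switches to chunked transfer automatically when you write without a Content-Length, so each res.write() can reach the client before the response ends. A sketch:
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' }); // no Content-Length => chunked
  res.write('first piece\n'); // sent as its own chunk
  setTimeout(() => {
    res.write('second piece\n');
    res.end('done\n');
  }, 1000);
}).listen(8080);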
Add trailingSlash: false in your next.config.ts or .js file.
reference:
https://github.com/aws-amplify/amplify-hosting/issues/3819#issuecomment-1819740774