There is a very simple procedure for this; kindly follow these steps:
STEPS:
Open the document or template where you want this shortcut key functionality.
Record the Macro:
Press Alt + F8 to open the Macros dialog.
If you have VBA code:
Click the Developer tab.
Click Record Macro, give it a name, and save it.
Click Stop Recording.
Go to Macros.
Select your desired macro.
Click Edit.
Write or paste your VBA code.
Press Ctrl + S.
Close everything and save.
Assign the Macro to a Shortcut Key:
https://artisthindustani.blogspot.com/2025/08/how-to-use-vba-code-and-assign-macro.html
string partial_information = "";
dynamic obj;

while (...)
{
    ... (out information);

    if (partial_information == "")
    {
        try
        {
            obj = JsonConvert.DeserializeObject(information);
        }
        catch (Newtonsoft.Json.JsonReaderException ex)
        // 'information' only contains the first part of the actual information
        {
            partial_information = information;
        }
    }
    else
    {
        obj = JsonConvert.DeserializeObject(partial_information + information);
        // In the previous loop, some 'information' was written to 'partial_information'.
        // Now the concatenation of both is used for deserialising the info.
        partial_information = ""; // don't forget to re-initialise afterwards
    }

    if (obj.Some_Property != null) // <-- Compiler error (!!!)
    {
However, this does not compile: the line if (obj.Some_Property != null) produces compiler error CS1065: "Use of unassigned local variable 'obj'".
To me this makes no sense, since obj is declared outside the entire while-loop.
How can I handle this?
I checked this documentation about it: https://blazorise.com/docs/extensions/datagrid/getting-started
RowSelectable
Handles the selection of the DataGrid row. If not set, it will default to always true.
Func<TItem,bool>
So there is a RowSelectable attribute; can you try implementing it? I don't know your code structure, so if you struggle to implement it, please share your code showing where you use it.
What I understand is that you need the user's products + global products.
So in this case we need an OR condition: where('isGlobal', 1)
$products = Product::where('user_id', $user->id)
    ->orWhere('isGlobal', 1)
    ->get();
Result: this will give you all of the specific user's products + global products.
If you want the container id without the container running, this solves it:
docker inspect --format="{{.Id}}" <container_name> | sed 's/sha256://g'
I am trying to create a google_eventarc_trigger in my Terraform module so that I am notified when files are uploaded to a specific folder in my GCS bucket. However, I could not find a way to define the path pattern in Terraform. How can I do this? This is my code.
resource "google_eventarc_trigger" "report_file_created_trigger" {
  name            = "report-file-created-trigger"
  location        = var.location
  service_account = var.eventarc_gcs_sa

  matching_criteria {
    attribute = "type"
    value     = "google.cloud.storage.object.v1.finalized"
  }

  matching_criteria {
    attribute = "bucket"
    value     = var.file_bucket
  }

  destination {
    cloud_run_service {
      service = google_cloud_run_v2_service.confirm_report.name
      region  = var.location
    }
  }
}
Your problem is probably due to using Python 3.11 with TensorFlow — they’re not fully compatible yet. I recommend using Python 3.10 instead.
An easy way to manage different Python versions is with Anaconda. You can read the installation guide here: https://www.anaconda.com/docs/getting-started/anaconda/install
Then just create a new environment like this:
conda create -n venv python=3.10
conda activate venv # Now you're using the Python version inside venv
conda install tensorflow # Or use pip: pip install tensorflow
More info here: Conda create and conda install
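As a quick sanity check (a minimal snippet, assuming the new venv environment is activated), importing TensorFlow should now load without the DLL error:
import tensorflow as tf

print(tf.__version__)  # if this prints a version number, the installation is working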
This should fix the DLL error. Good luck!
MonthX Unique Events =
CALCULATE(
    DISTINCTCOUNT('Incident Tracker'[Incident Name]),
    'Incident Tracker'[Incident Date] >= DATE(2024,1,1) && 'Incident Tracker'[Incident Date] <= DATE(2024,10,1)
)
If you have a separate date table and a slicer on it (to make it more dynamic), I would consider using VALUES and TREATAS in the filter argument of CALCULATE.
No problem, just add a separate file or section about nvim-tree/nvim-web-devicons and put your custom configuration there. Lazy will look for a plugin spec before loading it and if it finds one or more it will merge them and evaluate them.
Just make sure you're not calling setup somewhere else before that file/section is evaluated by Lazy. If you do, you should pass your configuration to that setup call.
You're building a voice assistant using AIML for dialog management. You use std-startup.xml to tell the bot to load another AIML file (output.aiml).
Your pattern in std-startup.xml must exactly match what you pass in Python.
The AIML loader line must be:
kernel.bootstrap(learnFiles="std-startup.xml", commands="LOAD AIML B")
Your std-startup.xml should look like:
<aiml version="1.0.1"><category><pattern>LOAD AIML B</pattern><template><learn>output.aiml</learn></template></category></aiml>
Your output.aiml must include valid categories like:
<category><pattern>I WANT A BURGER</pattern><template>Sure! What kind?</template></category>
Make sure all files are in the same directory.
The command LOAD AIML B is case-sensitive.
Add this to debug:
print(kernel.numCategories())  # Should be > 0
Now run it, and the bot will respond to I WANT A BURGER.
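Putting it together, a minimal end-to-end sketch using the python-aiml package (assuming both XML files sit next to the script) would be:
import aiml

# Load the startup file and trigger the "LOAD AIML B" category defined in it.
kernel = aiml.Kernel()
kernel.bootstrap(learnFiles="std-startup.xml", commands="LOAD AIML B")

print(kernel.numCategories())             # Should be > 0 if output.aiml was learned
print(kernel.respond("I WANT A BURGER"))  # Expected: "Sure! What kind?"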
No, you can't read metadata directly from an <img> tag.
Generic font families:
serif: Fonts with small decorative lines (serifs) at the ends of strokes. Examples include Times New Roman and Georgia.
sans-serif: Fonts without serifs. Examples include Arial and Verdana.
None of these generic families are designed to display icons, and attempting to use them as a fallback for an icon font would result in incorrect or unreadable characters being displayed instead of the intended icons.
Therefore, when using icon fonts, it is crucial to ensure that the specific icon font file is properly loaded and accessible, as there is no universal generic fallback that can replicate its functionality.
Adding this to gitlab-ci.yml helped me:
variables:
PATH: /home/gitlab-runner/.nvm/versions/node/v18.12.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Maybe it will be useful for someone.
Check the following settings, which commonly hide elements in a view:
View Templates applied to the view
Filters
Worksets that are turned off
Links that are unloaded
View range or far depth clip
Design Option
Detail level
Phase settings
Autodesk's Revit Visibility Troubleshooting Guide: https://www.autodesk.com/support/technical/article/caas/guidedtroubleshooting/csp/3KGII0aL3ToAA8ArgwyLs4.html
Do you have a codeql-pack.yml file with the codeql/cpp-all pack as a dependency?
If not, it might be easiest to set this up using the VS Code extension, see the documentation. This can also be done by using the command ">CodeQL: Create query" in VS Code.
Note that your query uses a dataflow API which has been deprecated and removed in newer versions, see the changelog. So you would either have to adjust the query code to use the new API or use codeql/cpp-all: ^2.1.1 (which seems to be the last version which still contains the old dataflow API).
In the directory which contains your codeql-pack.yml and your query file, you can then run the following commands (or use the corresponding VS Code extension commands):
codeql pack install
This installs the dependencies declared in codeql-pack.yml. Respectively, use codeql pack ci if you have a codeql-pack.lock.yml lock file and only want to install the versions specified in it.
codeql database analyze ... my-query.ql
If you came here looking for it while on rails 8, you can find it here: https://github.com/rails/propshaft/blob/e49a9de659ff27462015e54dd832e86e762a6ddc/lib/propshaft/railties/assets.rake#L4
You only need to clean the cache by running this:
npm cache clean --force
In my case this worked.
People make careless mistakes like letting clients fetch a refresh token using a prior access token. Don't do that: an access token is supposed to exist only for a short time interval, and otherwise a bad actor who steals one can keep refreshing it indefinitely. A second issue is people who use JWT alone for everything. Some user actions, like changing a password, should require both a JWT and TFA before they are allowed. Depending on the level of security and the severity of the damage a single action can do, implement more than one auth check.
The main benefit of JWT is that it reduces DB calls by being stateless: every action a user takes doesn't make the backend fetch a new user instance.
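To make the short-lifetime point concrete, here is a minimal sketch using the PyJWT library (the secret, the 15-minute lifetime, and the claim names are illustrative assumptions, not part of the original answer):
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"  # hypothetical secret, for illustration only

def issue_access_token(user_id: str) -> str:
    # Short-lived access token: expires after 15 minutes.
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_access_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once the token is past its window,
    # which forces the client back to the separately stored refresh token.
    return jwt.decode(token, SECRET, algorithms=["HS256"])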
Try luaToEXE, which includes a Python library with a graphical user interface and a command-line tool: Intro
// Your points
Point[] кнопки4 = {
    Point.get(624, 170),
    Point.get(527, 331),
    Point.get(628, 168),
    Point.get(525, 189)
};
Point[] кнопки3 = {
    Point.get(689, 310),
    Point.get(979, 1029),
    Point.get(1243, 662)
};

// Area from which to read the number
Point левыйВерх = Point.get(674, 363);
Point правыйНиз = Point.get(726, 401);

// BOT TOKEN and YOUR Telegram account ID
String tgToken = "bot";
String tgChatId = "id";

// BELOW THIS POINT, TOUCH ALMOST NOTHING!
pfc.setOCRLang("eng");
pfc.startScreenCapture(2);

while (!EXIT) {
    String текстЧисло = pfc.getText(левыйВерх, правыйНиз);
    pfc.log("OCR text: '" + текстЧисло + "'");

    // remove all commas
    текстЧисло = текстЧисло.replace(",", "");
    // keep only digits
    текстЧисло = текстЧисло.replaceAll("[^0-9]", "");

    if (текстЧисло.length() < 2) {
        pfc.log("Слишком короткое число, пропускаем");
        continue;
    }

    double число = 999999;
    try {
        число = Double.parseDouble(текстЧисло);
    } catch (Exception e) {
        pfc.log("Не удалось распарсить число: '" + текстЧисло + "'");
        continue;
    }
    pfc.log("Число: " + число);

    if (число <= 125) { // <= so that 1299 also triggers
        pfc.log("Число меньше или равно 1299, нажимаем 3 кнопки покупки");
        for (int i = 0; i < кнопки3.length; i++) {
            pfc.click(кнопки3[i]);
            pfc.sleep(550);
        }
        // Send the message to Telegram
        String msg = "За " + (int)число + " звезд улов NFT подарка 🎉";
        pfc.sendToTg(tgToken, tgChatId, msg);
        pfc.log("Отправлено сообщение в Telegram: " + msg);
    } else {
        pfc.log("Число больше 1299, нажимаем 4 кнопки");
        for (int i = 0; i < кнопки4.length; i++) {
            pfc.click(кнопки4[i]);
            pfc.sleep(850);
        }
    }
}
Through 15 years of exponential traffic growth from both Double 11 and Alibaba Cloud, we built LoongCollector, an observability agent that delivers 10x higher throughput than open-source alternatives with an 80% reduction in resource usage, proving that extreme performance and enterprise reliability can coexist under the most demanding production loads.
Back in the early 2010s, Alibaba’s infrastructure was facing a tidal wave: every Singles’ Day (11.11), traffic would surge to record-breaking levels, pushing our systems to their absolute limits. Our observability stack—tasked with collecting logs, metrics, and traces from millions of servers—was devouring CPU and memory just to keep up. At that time, there were no lightweight, high-performance agents on the market: Fluent Bit hadn’t been invented, Vector was still a distant idea, Logstash was a memory-hungry beast.
The math was brutal: Just a 1% efficiency gain in data collection would save us millions across our massive infrastructure. When you’re processing petabytes of observability data every day, performance isn’t optional—it’s mission-critical.
So, in 2013, we set out to build our own: a lightweight, high-performance, and rock-solid data collector. Over the next decade, iLogtail (now LoongCollector) was battle-tested by the world’s largest e-commerce events, the migration of Alibaba Group to the cloud, and the rise of containerized infrastructure. By 2022, we had open-sourced a collector that could run anywhere—on bare metal, virtual machines, or Kubernetes clusters—capable of handling everything from file logs and container output to metrics, all while using minimal resources.
Today, LoongCollector powers tens of millions of deployments, reliably collecting hundreds of petabytes of observability data every day for Alibaba, Ant Group, and thousands of enterprise customers. The result? Massive cost savings, a unified data collection layer, and a new standard for performance in the observability world.
When processing petabytes of observability data costs you millions, every performance improvement directly impacts your bottom line. A 1% efficiency improvement translates to millions in infrastructure savings across large-scale deployments. That's when we knew we had to share these numbers with the world.
We ran LoongCollector against every major open-source alternative in controlled, reproducible benchmarks. The results weren't just impressive—they were game-changing.
Rigorous Test Methodology
Maximum Throughput: LoongCollector Dominates
Log Type | LoongCollector | FluentBit | Vector | Filebeat |
---|---|---|---|---|
Single Line | 546 MB/s | 36 MB/s | 38 MB/s | 9 MB/s |
Multi-line | 238 MB/s | 24 MB/s | 22 MB/s | 6 MB/s |
Regex Parsing | 68 MB/s | 19 MB/s | 12 MB/s | Not Supported |
📈 Breaking Point Analysis: While competitors hit CPU saturation at ~40 MB/s, LoongCollector maintains linear scaling up to 546 MB/s on a single processing thread—the theoretical maximum of our test environment.
Resource Efficiency: Where the Magic Happens
The real story isn't just raw throughput—it's doing more with dramatically less. At identical 10 MB/s processing loads:
Scenario | LoongCollector | FluentBit | Vector | Filebeat |
---|---|---|---|---|
Simple Line (512B) | 3.40% CPU, 29.01 MB RAM | 12.29% CPU (+261%), 46.84 MB RAM (+61%) | 35.80% CPU (+952%), 83.24 MB RAM (+186%) | Performance Insufficient |
Multi-line (512B) | 5.82% CPU, 29.39 MB RAM | 28.35% CPU (+387%), 46.39 MB RAM (+57%) | 55.99% CPU (+862%), 85.17 MB RAM (+189%) | Performance Insufficient |
Regex (512B) | 14.20% CPU, 34.02 MB RAM | 37.32% CPU (+162%), 46.44 MB RAM (+36%) | 43.90% CPU (+209%), 90.51 MB RAM (+166%) | Not Supported |
The Performance Breakthrough: 5 Key Advantages
Traditional Approach: Traditional log agents create multiple string copies during parsing. Each extracted field requires a separate memory allocation, and the original log content is duplicated multiple times across different processing stages. This approach leads to excessive memory allocations and CPU overhead, especially when processing high-volume logs with complex parsing requirements.
LoongCollector's Memory Arena: LoongCollector introduces a shared memory pool (SourceBuffer) for each PipelineEventGroup, where all string data is stored once. Instead of copying extracted fields, LoongCollector uses string_view references that point to specific segments of the original data.
Architecture:
Pipeline Event Group
├── Shared Memory Pool (SourceBuffer)
│ └── "2025-01-01 10:00:00 [INFO] Processing user request from 192.168.1.100"
├── String Views (zero-copy references)
│ ├── timestamp: string_view(0, 19) // "2025-01-01 10:00:00"
│ ├── level: string_view(20, 4) // "INFO"
│ ├── message: string_view(26, 22) // "Processing user request"
│ └── ip: string_view(50, 13) // "192.168.1.100"
└── Events referencing original data
Performance Impact:
Component | Traditional | LoongCollector | Improvement |
---|---|---|---|
String Operations | 4 copies | 0 copies | 100% reduction |
Memory Allocations | Per field | Per group | 80% reduction |
Regex Extraction | 4 field copies | 4 string_view refs | 100% elimination |
CPU Overhead | High | Minimal | 15% improvement |
Traditional Approach: Traditional log agents create and destroy PipelineEvent objects for every log entry, leading to frequent memory allocations and deallocations. This approach causes significant CPU overhead (10% of total processing time) and creates memory fragmentation. Simple global object pools introduce lock contention in multi-threaded environments, while thread-local pools fail to handle cross-thread scenarios effectively.
LoongCollector's Event Pool Architecture: LoongCollector implements intelligent object pooling with thread-aware allocation strategies that eliminate lock contention while handling complex multi-threaded scenarios. The system uses different pooling strategies based on whether events are allocated and deallocated in the same thread or across different threads.
Thread Allocation Strategy:
1) Same-Thread Allocation/Deallocation
┌──────────────────┐
│ Processor Thread │──── [Lock-free Pool] ──── Direct Reuse
└──────────────────┘
When events are created and destroyed within the same Processor Runner thread, each thread maintains its own lock-free event pool. Since only one thread accesses each pool, no synchronization overhead is required.
2) Cross-Thread Allocation/Deallocation
┌────────────────┐ ┌─────────────────┐
│ Input Thread │────▶│ Processor Thread│
└────────────────┘ └─────────────────┘
│ │
└── [Double Buffer Pool] ──┘
For events created in Input Runner threads but consumed in Processor Runner threads, we implement a double-buffer strategy:
Performance Impact:
Aspect | Traditional | LoongCollector | Improvement |
---|---|---|---|
Object creation | Per event | Pool reuse | 90% reduction |
Memory fragmentation | High | Minimal | 80% reduction |
Traditional Approach: Standard serialization involves creating intermediate Protobuf objects before converting to network bytes. This two-step process requires additional memory allocations and CPU cycles for object construction and serialization, leading to unnecessary overhead in high-throughput scenarios.
LoongCollector's Zero-Copy Serialization: LoongCollector bypasses intermediate object creation by directly serializing PipelineEventGroup data according to Protobuf wire format. This eliminates the temporary object allocation and reduces memory pressure during serialization.
Architecture:
Traditional: PipelineEventGroup → ProtoBuf Object → Serialized Bytes → Network
LoongCollector: PipelineEventGroup → Serialized Bytes → Network
Performance Impact:
Metric | Traditional | LoongCollector | Improvement |
---|---|---|---|
Serialization CPU | 12.5% | 5.8% | 54% reduction |
Memory allocations | 3 copies | 1 copy | 67% reduction |
While LoongCollector demonstrates impressive performance advantages, its reliability architecture is equally noteworthy. The following sections detail how LoongCollector achieves enterprise-grade stability and fault tolerance while maintaining its performance edge.
LoongCollector's multi-tenant architecture ensures isolation between different pipelines while maintaining optimal resource utilization. The system implements a high-low watermark feedback queue mechanism that prevents any single pipeline from affecting others.
Multi-Pipeline Architecture with Independent Queues:
┌─ LoongCollector Multi-Tenant Pipeline Architecture ───────────────────┐
│ │
│ ┌─ Pipeline A ─┐ ┌─ Pipeline B ─┐ ┌─ Pipeline C ─┐ │
│ │ │ │ │ │ │ │
│ │ Input Plugin │ │ Input Plugin │ │ Input Plugin │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Process Queue│ │ Process Queue│ │ Process Queue│ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Sender Queue │ │ Sender Queue │ │ Sender Queue │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Flusher │ │ Flusher │ │ Flusher │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │ │ │
│ └───────────────────┼─────────────────┘ │
│ │ │
│ ┌─ Shared Runners ────────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ Input Runners ─┐ ┌─ Processor Runners ┐ ┌─ Flusher Runners ─┐ │ │
│ │ │ • Pipeline │ │ • Priority-based │ │ • Watermark-based │ │ │
│ │ │ isolation │ │ scheduling │ │ throttling │ │ │
│ │ │ • Independent │ │ • Fair resource │ │ • Back-pressure │ │ │
│ │ │ event pools │ │ allocation │ │ control │ │ │
│ │ └─────────────────┘ └────────────────────┘ └───────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
High-Low Watermark Feedback Queue Mechanism:
┌─ High-Low Watermark Feedback System ─────────────────────┐
│ │
│ ┌─ Queue State Management ─┐ ┌─ Feedback Mechanism ──┐ │
│ │ │ │ │ │
│ │ ┌─── Normal State ───┐ │ │ ┌──── Upstream ────┐ │ │
│ │ │ Size < Low │ │ │ │ Check │ │ │
│ │ │ Accept all data │ │ │ │ Before Write │ │ │
│ │ └────────────────────┘ │ │ └──────────────────┘ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌── High Watermark ──┐ │ │ │ │
│ │ │ Size >= High │ │ │ ┌──── Downstream ──┐ │ │
│ │ │ Stop accepting │ │ │ │ Feedback Enabled │ │ │
│ │ │ non-urgent data │ │ │ └──────────────────┘ │ │
│ │ └────────────────────┘ │ │ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌─ Recovery State ──┐ │ │ │ │
│ │ │ Size <= Low │ │ │ │ │
│ │ │ Resume accepting data │ │ │ │
│ │ └───────────────────┘ │ │ │ │
│ └──────────────────────────┘ └───────────────────────┘ │
└──────────────────────────────────────────────────────────┘
Isolation Benefits:
Enterprise environments run multiple pipelines with different criticality levels. Our priority-aware round-robin scheduler ensures fairness while respecting business priorities. The system implements a sophisticated multi-level scheduling algorithm that guarantees resource allocation fairness while maintaining strict priority enforcement.
Priority Scheduling Principles
The core scheduling algorithm ensures both fairness within priority levels and strict priority enforcement between levels. The system follows strict priority ordering while maintaining fair round-robin scheduling within each priority level.
┌─ High Priority ────────────────────────────────────────────────────┐
│ ┌───────────┐ │
│ │ Pipeline1 │ ◄─── Always processed first │
│ └───────────┘ │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Medium Priority (Round-robin cycle) ──────────────────────────────┐
│ ┌───────────┐ ┌─────────────────┐ ┌────────────┐ │
│ │ Pipeline2 │───▶│ Pipeline3(Last) │───▶│ Pipeline 4 │ │
│ └───────────┘ └─────────────────┘ └────────────┘ │
│ ▲ │ │
│ └────────────────────────────────────────┘ │
│ │
│ Note: Last processed was Pipeline3, so next starts from Pipeline4 │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Low Priority (Round-robin cycle) ─────────────────────────────────┐
│ ┌───────────┐ ┌───────────┐ │
│ │ Pipeline5 │───▶│ Pipeline6 │ │
│ └───────────┘ └───────────┘ │
│ ▲ │ │
│ └───────────────────┘ │
│ │
│ Note: Processed only when higher priority pipelines have no data │
└────────────────────────────────────────────────────────────────────┘
When one destination fails, traditional agents often affect all pipelines. LoongCollector implements adaptive concurrency limiting per destination.
AIMD Based Flow Control:
┌─ ConcurrencyLimiter Configuration ───────────────────────────────────────┐
│ │
│ ┌─ Failure Rate Thresholds ────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ No Fallback Zone ─┐ ┌─ Slow Fallback Zone ─┐ ┌─ Fast Fallback ──┐ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ 0% ─────────── 10% │ │ 10% ──────────── 40% │ │ 40% ─────── 100% │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ Maintain Current │ │ Multiply by 0.8 │ │ Multiply by 0.5 │ │ │
│ │ │ Concurrency │ │ (Slow Decrease) │ │ (Fast Decrease) │ │ │
│ │ └────────────────────┘ └──────────────────────┘ └──────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Recovery Mechanism ─┐ │
│ │ • Additive Increase │ ← +1 when success rate = 100% │
│ │ • Gradual Recovery │ ← Linear scaling back to max │
│ └──────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
Each concurrency limiter uses an adaptive rate limiting algorithm inspired by AIMD (Additive Increase, Multiplicative Decrease) network congestion control. When sending failures occur, the concurrency is quickly reduced. When sends succeed, concurrency gradually increases. To avoid fluctuations from network jitter, statistics are collected over a time window/batch of data to prevent rapid concurrency oscillation.
By using this strategy, when network anomalies occur at a sending destination, the allowed data packets for that destination can quickly decay, minimizing the impact on other sending destinations. In network interruption scenarios, the sleep period approach maximizes reduction of unnecessary sends while ensuring timely recovery of data transmission within a limited time once the network is restored.
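As a language-agnostic illustration of that strategy (a sketch in Python, not LoongCollector's actual implementation), the per-destination limiter boils down to something like this, using the thresholds from the diagram above:
class AIMDConcurrencyLimiter:
    """Toy additive-increase / multiplicative-decrease limiter, one instance per destination."""

    def __init__(self, max_concurrency=256, min_concurrency=1):
        self.max_concurrency = max_concurrency
        self.min_concurrency = min_concurrency
        self.limit = max_concurrency

    def on_window(self, successes, failures):
        # Called once per statistics window to avoid reacting to single-packet jitter.
        total = successes + failures
        if total == 0:
            return self.limit
        failure_rate = failures / total
        if failure_rate >= 0.4:        # fast fallback zone: multiply by 0.5
            self.limit = max(self.min_concurrency, int(self.limit * 0.5))
        elif failure_rate >= 0.1:      # slow fallback zone: multiply by 0.8
            self.limit = max(self.min_concurrency, int(self.limit * 0.8))
        elif failures == 0:            # additive increase: +1 on a fully successful window
            self.limit = min(self.max_concurrency, self.limit + 1)
        return self.limit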
LoongCollector has been validated in some of the world's most demanding production environments, processing real-world workloads that would break most observability systems. As the core data collection engine powering Alibaba Cloud SLS (Simple Log Service)—one of the world's largest cloud-native observability platforms—LoongCollector processes observability data for tens of millions of applications across Alibaba's global infrastructure.
Global Deployment Scale:
Enterprise Customer Validation:
Extreme Scenario Testing:
Scalability
Network Resilience
Chaos Engineering
LoongCollector represents more than just performance optimization—it's a fundamental rethinking of how observability data should be collected, processed, and delivered at scale. By open-sourcing this technology, we're democratizing access to enterprise-grade performance that was previously available only to the largest tech companies.
Ready to experience 10x performance improvements?
🚀 GitHub Repository: https://github.com/alibaba/loongcollector
📊 Benchmark Suite: Clone our complete benchmark tests and reproduce these results in your environment
📖 Documentation: Comprehensive guides for migration, optimization, and advanced configurations
💬 Community Discussion: Join our Discord for technical discussions and architecture deep-dives
Challenge us: If you're running Filebeat, FluentBit, or Vector in production, we're confident LoongCollector will deliver significant improvements in your environment. Run our benchmark suite and let the data speak.
Contribute: LoongCollector is built by engineers, for engineers. Whether it's performance optimizations, new data source integrations, or reliability improvements—every contribution shapes the future of observability infrastructure.
Open Questions for the Community:
Benchmark Challenge: We're confident in our numbers, but we want to see yours. Run our benchmark suite against your current setup and share the results. If you can beat our performance, we'll feature your optimizations in our next release.
The next time your log collection agent consumes more resources than your actual application, remember: there's a better way. LoongCollector proves that high performance and enterprise reliability aren't mutually exclusive—they're the foundation of modern observability infrastructure.
Built with ❤️ by the Alibaba Cloud Observability Team. Battle-tested across hundreds of petabytes of daily production data and tens of millions of instances.
For large ranges, if you don't want to apply the formula for the whole row/column, you can select the start of the range, then use the slider to go to the end of the range. Press SHIFT and select the end of the range. This selects the whole range. Then you can press CTRL + ENTER to apply the formula.
Use svn changelist. Best tool ever to solve this.
Check this:
https://github.com/tldr-pages/tldr/blob/main/pages/common/svn-changelist.md
And if you want to see the lists you added, use svn status.
The documents are not that clear, but I have just finished a script to split a large svn commit into small commits.
I think you must URL-encode the query parameters before sending the request to the server.
For example, the Arabic character "أ" (U+0623) should be percent-encoded as %D8%A3.
So instead of idNo=2/أ, send idNo=2/%D8%A3.
For resharper there is:
- ReSharper_UnitTestRunFromContext (run at cursor)
- ReSharper_UnitTestDebugContext (debug at cursor)
- ReSharper_UnitTestSessionRepeatPreviousRun
Here is the right way to ask Bloomberg:
holidays = blp.bds(
'USD Curncy',
'CALENDAR_NON_SETTLEMENT_DATES',
SETTLEMENT_CALENDAR_CODE='FD',
CALENDAR_START_DATE='20250101',
CALENDAR_END_DATE='20261231'
)
print(holidays)
N.B. 'FD' is the calendar for the US.
You will get a DataFrame with the column 'holiday_date' and the different dates written in the format yyyy-mm-dd.
https://stackoverflow.com/a/79704676/17078296
Have you checked if your vendor folder is excluded from language server features?
2025 Update
According to the Expo documentation, use:
npx expo install --fix
Tip: Run this command multiple times — it may continue updating dependencies on each pass until everything is aligned with the expected versions.
What helped me was to change the Java version.
I downgraded to Java 11 from Java 17, because Java 17 introduced stricter formatting rules and it started to fail at class initialization time (static block), causing a NoClassDefFoundError.
IMO the author tag helps in situations where the code is viewed outside an IDE, like on GitHub, in a Bash CLI, or in tools like Notepad++; there the tag gives direct information about the origins of the thing (interface/class/method). The VCS is helpful too, but it requires a learning curve and an IDE.
From another perspective, the author tag may help in enforcing the Open-Closed principle in SOLID. For example, when a developer comes to touch an interface marked `@author <senior_developer>`, they'd think twice about modifying it, since it was designed by a senior person and touching it may break important things; eventually, they will think about making an extension of it instead, which is great.
Thanks for your comments - they were very useful and pushed my brain in the right direction.
In short: the original C++ code creates an image in WMF format (from Windows 3.0, you remember it, right?). I changed the C++ and started generating EMF files (which came with Windows 95).
For example, this code
CMetaFileDC dcm;
dcm.Create();
has been replaced with this one:
CMetaFileDC dcm;
dcm.CreateEnhanced(NULL, NULL, NULL, _T("Enhanced Metafile"));
I walked through all related locations and now I have EMF as the output format.
This step has solved all my issues; I don't even convert the EMF file to BMP format, I can paste/use it directly in my C# code.
Thanks again for your thoughts and ideas, I really appreciate it.
I had the same issue and found out it was caused by a dependency conflict — in my case, it was the devtools dependency.
After removing it, everything went back to working normally.
SOLVED!
Thanks to Wayne from the comments, he guided me into CUDA Program files.
Some of the cuDNN files from downloaded 8.1 version weren't present in
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include
What worked:
Downloading a new cuDNN 8.1 .zip file from NVIDIA website
Extracting it into Downloads/
Copying files from bin/, include/ and lib/x64 into the corresponding directories in
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\
That's it.
It might be because I hit the chat limit, since I had the same problem today and nothing else explains it.
List of attempted actions:
"Tried ending the VS Code task in the Task Manager."
"Changing the Copilot version."
"Uninstalling and reinstalling Copilot."
"Reloading the VS Code window."
Update from 7th of August 2025:
I have a Firebase cloud function for handling RTDN (real-time developer notifications), but I got an error message that I don't have the required permissions. AI tools could not really help me all the way. With the info I could get from them and the answers from people in this post, I ended up getting it to work like this:
In the Google Cloud console under Navigation Menu -> IAM & Admin -> IAM, I searched for an entry with a Principal like "[email protected]" and a name like "App Engine default service account".
Then I went to the Google Play console app-list screen -> Users and permissions -> next to "Manage users" there are 3 vertical dots -> clicked on them and selected "Invite new users" -> for the email address I entered "[email protected]", under account permissions I only chose "View app information and download bulk reports (read only)", "View financial data, orders and cancellation survey responses" and "Manage orders and subscriptions", and pressed the button to invite the user.
Then in the Google Play console I went to the app in question -> to the subscription (in my case I only have one anyway) and deactivated and reactivated it, and after a few minutes it worked for me.
Hope this might help someone in the future.
I'm not sure when support for setToken ended, but in later versions of Python dbutils, it's definitely no longer supported. As a matter of fact, it's hard to find any references to it in the official documentation & GitHub.
Clean and rebuild solution did it for me.
For me it was that I needed to be connected to the same WiFi (or maybe just the Internet) on BOTH devices: laptop and iPhone. For Android it looks like it doesn't matter.
I'm using Expo; I don't know if bare-bones React Native behaves the same.
By default SQL Developer searches for a JDK within its directory. If it doesn't find one, it will prompt you to select one with a popup. If we need to set it explicitly, it can also be done in the jdk.conf file present in <installationfolder>/sqldeveloper/bin by setting SetJavaHome.
You can subscribe to blocks via websocket with the QuickNode free tier:
- Create an account on quicknode.com.
- Go to https://dashboard.quicknode.com/endpoints and get the websocket endpoint there.
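A minimal Python sketch of the subscription (the endpoint URL is a placeholder, and eth_subscribe/newHeads assumes an EVM-style chain; adjust for the chain you are actually using):
import asyncio
import json
import websockets  # pip install websockets

WS_URL = "wss://your-endpoint.quiknode.pro/your-token/"  # placeholder from the dashboard

async def watch_blocks():
    async with websockets.connect(WS_URL) as ws:
        # Standard EVM JSON-RPC subscription for new block headers.
        await ws.send(json.dumps({
            "jsonrpc": "2.0", "id": 1,
            "method": "eth_subscribe", "params": ["newHeads"],
        }))
        async for message in ws:
            print(json.loads(message))

asyncio.run(watch_blocks())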
I have the same problem, using CMake. Please share a solution if you find one.
It is not triggered when I enter the domain name directly in the browser.
That's by design.
When you type a URL in the browser on Android, it doesn't trigger any intent that can be opened in any app, because you just want to visit the URL you entered.
Now, if you click a URL somewhere, then Android tries to find an app that supports that URL and opens it.
Source: https://developer.android.com/training/app-links#android-app-links
MikroORM creates business_biz_reg_no instead of using your biz_reg_no column for the join. You defined both biz_reg_no (a string) and a @ManyToOne() relation without telling MikroORM to reuse the same column.
@ManyToOne(() => BusinessEntity, {
referenceColumnName: 'biz_reg_no',
fieldName: 'biz_reg_no', //<= use this to force MikroORM to use the right column
nullable: true,
})
business?: BusinessEntity;
Also fix this (array not single):
@OneToMany(() => InquiryEntity, (inquiry) => inquiry.business)
inquiries: InquiryEntity[]; // <= not `inquiry`
Now MikroORM will generate:
ON i.biz_reg_no = b.biz_reg_no
It was pyjanitor. I forgot that it was imported.
import tensorflow_datasets as tfds
import tensorflow as tf
# Use default configuration without subwords8k
dataset, info = tfds.load('imdb_reviews', with_info=True, as_supervised=True)
I guess this error occurs because the subwords8k configuration for the imdb_reviews dataset has been deprecated.
Sounds like your tests are sharing state across threads; productId is likely getting overwritten when run in parallel. Try isolating data per scenario to avoid conflicts.
Did you find a solution? I also can't find how to make it work.
When a container is un-selected, the outer Focus widget sets canRequestFocus = false and skipTraversal = true on every descendant focus node.
Because the TextField inside _SearchField owns its own persistent FocusNode, those flags stay that way even after the container becomes selected again, so the Tab key can never land on that field any more - only on the button (which creates a brand-new internal focus node on each rebuild).
So the properties need to be updated once the container is selected again, inside the didUpdateWidget method, and the isContainerSelected flag needs to be passed to the _SearchField widget from the parent.
class _SearchFieldState extends State<_SearchField> {
  final FocusNode _focusNode = FocusNode();

  @override
  void didUpdateWidget(final _SearchField oldWidget) {
    super.didUpdateWidget(oldWidget);
    if (oldWidget.isContainerSelected != widget.isContainerSelected) {
      // Re-enable focus and traversal once the container is selected again.
      _focusNode
        ..canRequestFocus = widget.isContainerSelected
        ..skipTraversal = !widget.isContainerSelected;
    }
  }

  @override
  void dispose() {
    _focusNode.dispose();
    super.dispose();
  }

  @override
  Widget build(final BuildContext context) {
    return TextField(
      focusNode: _focusNode,
      decoration: const InputDecoration(
        hintText: 'Search',
      ),
    );
  }
}
I'm using jOOQ v3.11.5 for a DATE column in Oracle and this helps me:
<forcedTypes>
<forcedType>
<name>TIMESTAMP</name>
<userType>java.time.LocalDateTime</userType>
<types>DATE((.*))?</types>
</forcedType>
</forcedTypes>
with
<javaTimeTypes>true</javaTimeTypes>
It is now possible, see this answer https://stackoverflow.com/a/62411309/17789881.
In short, you can do
Base.delete_method(@which your_function(your_args...))
To serve index.html, add pm2 serve /home/site/wwwroot --no-daemon in the Startup Command of the Configuration blade of the Azure Web App.
Lauren from Rasa here, glad to hear you are trying the Developer Edition of Rasa!
I haven't come across that problem myself yet, however, I can recommend going to the Rasa Docs (https://rasa.com/) and clicking on the "ask AI" button at the bottom. Whenever I have problems with installation, I usually try to troubleshoot with the Ask AI feature, drop in your error message and see if it can help get you a couple steps further.
EC2 instances don't know what the working directory is when running user data. Therefore you must specify the destination when using wget (and with many other commands):
wget -P /home/centos/testing https://validlink
P.S. Note that specifying the working directory with . doesn't work either.
If you're okay using synchronous streaming:
from transformers import TextStreamer
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
Then, redirect stdout to a custom generator function. But since you already want async and FastAPI streaming, let’s fix it properly.
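For the async route, a rough sketch (the model name, endpoint path, and token count are illustrative placeholders, not from the original setup) could pair TextIteratorStreamer with FastAPI's StreamingResponse:
import threading
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

app = FastAPI()
tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

@app.get("/generate")
def generate(prompt: str):
    inputs = tokenizer(prompt, return_tensors="pt")
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    # Generation runs in a background thread; the streamer yields text chunks as they arrive.
    threading.Thread(
        target=model.generate,
        kwargs=dict(**inputs, streamer=streamer, max_new_tokens=128),
    ).start()
    return StreamingResponse((chunk for chunk in streamer), media_type="text/plain")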
I found the setting I was looking for! *Facepalm*
The setting can be found here:
Add Page > Ellipsis (three dots) button next to Publish > Preferences > Interface > Show starter patterns
Use the NPM_CONFIG_REGISTRY env variable. Example:
NPM_CONFIG_REGISTRY=http://localhost:4873 bunx my-cli
After some experimentation I have developed a JSON-based language (serializable) that can be sent over the wire and parsed in a JS environment. It now supports arrays, objects and primitives, promises, and function composition, all in one payload. Yes, I am a bit proud of it; it took me a long time to get to this point.
It plays nicely with the JS and TS languages by having a utility to create stubs for functions, which allows writing a complete program that is then sent somewhere else for parsing.
I believe that the real value of RPC is not just to call a function, but to do bulk requests and computation on the fly before a response is sent back over the wire.
I also think the core of this concept should be medium/protocol agnostic. You should decide if you want WS, HTTP or any other method of transport.
My work might inspire or create some discussion and criticism.
If it "works" in some environment, it's likely due to a custom extension or monkey patching.
In Pandas 2.3.1, the correct way to shuffle a DataFrame is:
df = df.sample(frac=1).reset_index(drop=True)
df.shuffle() is not part of the standard Pandas API.
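As a side note, passing random_state makes the shuffle reproducible (assuming you want deterministic runs, e.g. in tests):
df = df.sample(frac=1, random_state=42).reset_index(drop=True)  # same order on every run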
I had the same problem. Following the advice from @staroselskii I played around with the tests and found that in my case the problem was caused by outputting too many lines to stderr. When I reduced the amount of logging to stderr, the pipeline completed correctly with a full set of tests.
Most probably it's an issue with the ADS routes. ADS routes have to be configured vice versa: the CX2020 has to point to the Debian system and the Debian system to the CX2020.
Check also if there are doubled entries which point to the same AMS Net ID but a different IP or something like that. You may get a connection in this case, but it will be very unstable.
Of course, also check the firewall: TCP 48898 has to be open on both systems in the incoming direction.
You can also check ADS traffic and commands with the ADS Monitor: https://www.beckhoff.com/de-de/produkte/automation/twincat/tfxxxx-twincat-3-functions/tf6xxx-connectivity/tf6010.html?
Make sure your versions all match.
"ag-grid-community" and "ag-grid-enterprise" should be exactly the same.
I had these and had the same error:
"ag-grid-community": "^34.1.1"
"ag-grid-enterprise": "^34.1.0"
No. If the compile info differs, then most probably the binaries are different too. The memory addresses in the core dump will point to invalid locations. That's also why the TC IDE is not loading it, because it would not make any sense.
I recommend using something like git for your project. If you did, you would probably be able to restore the "old" binaries.
Add connectTimeout: 10000
{
host: 'ip_or_hostname',
port: 6379,
connectTimeout: 10000,
maxRetriesPerRequest: null,
enableReadyCheck: false
}
Copy ic_stat_your_icon_name into the drawable-* folders of your Flutter Android app at:
android/app/src/main/res/drawable-hdpi/ic_stat_your_icon_name.png
android/app/src/main/res/drawable-mdpi/ic_stat_your_icon_name.png
Notification icons are white-only with a transparent background.
Update:
My baaaad!
It's because I am using UFW on my system; it blocks any request that isn't allowed by a rule.
I figured it out when trying netcat: it worked, so the only possible issue was the firewall blocking requests.
def player():
    health = 100
    return health

health = player()
print(health)
@zmbq Can you explain how you would do it with setup.cfg / setuptools? I need to include 2 more wheel packages in my wheel. How can I do that?
On my mac, I changed the DNS settings of my wifi connection to that of Google. 8.8.8.8 and 8.8.4.4. It worked for me.
Interesting that there is so little advice about such an obvious problem. Rate limiting a messaging source is a very common technique, employed in order not to overload downstream systems. Strangely, neither AWS SQS nor the Spring Boot consumer implementation has any support for it.
Not an answer, but sharing a problem related to the topic: there is no way to remove the legend frame when adding a legend through tm_add_legend(), and even within the tm_options settings with legend.frame=F it does not work either.
> packageVersion('tmap')
[1] ‘4.1’
> packageDate('tmap')
[1] "2025-05-12"
First I tried as specified here :
tm_shape(World)+
tm_polygons(fill = 'area',fill.legend=tm_legend(frame = F))+
tm_add_legend(frame=F,labels = c('lab1','lab2'),type = 'polygons',fill=c('red','green'),position=c('left','top'))
I even tried the option with:
tm_shape(World)+
tm_polygons(fill = 'area',fill.legend=tm_legend(frame = F))+
tm_add_legend(fill.legend=tm_legend(frame=F),labels = c('lab1','lab2'),type = 'polygons',fill=c('red','green'),position=c('left','top'))
It cannot be implemented directly in SharePoint Online. However, you can try third-party apps such as Kwizcom Lookup or BoostSolutions Cascaded Lookup.
You need to tell IntelliJ to generate those classes, otherwise it won't be aware of them, even if their creation is part of the default maven lifecycle.
Right-click on the project's folder or the pom.xml, then Maven > Generate Sources and Update Folders.
Just use { list-style-position: inside; }
Why not just make a stacked bar plot? Something like this:
res = df.T.drop_duplicates().T
# df here is your source dataframe
fig, ax = plt.subplots()
for name, group in res.groupby("PVR Group"):
ax.bar((group['min_x']+(group['max_x']-group['min_x'])/2),
group['max_y']-group['min_y'],
width=(group['max_x']-group['min_x']),
bottom=group['min_y'],
label=name,
edgecolor="black")
ax.legend()
plt.show()
I believe you could adjust the design to meet your taste.
Use flex to push the bottom image down inside the .pinkborder card.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Schoolbell&family=Waiting+for+the+Sunrise&display=swap">
<title>Image Bottom Alignment Test</title>
<!-- css start -->
<style>
.contain {
width: 100%;
display: flex;
justify-content: center;
flex-wrap: wrap;
gap: 50px;
margin: 10%;
}
.pinkborder {
background-color: red;
width: 300px;
height: 500px;
display: flex;
flex-direction: column;
justify-content: space-between; /* Push top content and bottom image apart */
align-items: center;
padding: 10px;
color: white;
font-family: 'Schoolbell', cursive;
}
.topcontent {
display: flex;
flex-direction: column;
align-items: center;
}
.downalign {
display: flex;
justify-content: center;
align-items: flex-end;
width: 100%;
}
</style>
<!--end -->
</head>
<body>
<div class="contain">
<div class="pinkborder">
<div class="topcontent">
<img src="https://picsum.photos/200/300">
<div>
╰⠀╮⠀e⠀╭⠀╯<br>
aerhaedhaedhaedh<br>
aethaethahartdg
</div>
</div>
<div class="downalign">
<img src="https://picsum.photos/200">
</div>
</div>
<div class="pinkborder">
<div class="topcontent">
<img src="https://picsum.photos/id/237/200/300">
<div>
╰⠀╮⠀e⠀╭⠀╯<br>
aerhaedhaedhaedh<br>
aethaethahartdg
</div>
</div>
<div class="downalign">
<img src="https://picsum.photos/200">
</div>
</div>
</div>
</body>
</html>
C:\Users\xxx\.gradle\caches\modules-2\files-2.1\io.flutter\flutter_embedding_release\1.0.0-1a65d409c7a1438a34d21b60bf30a6fd5db59314\88383da8418511a23e318ac08cd9846f983bbde0\flutter_embedding_release-1.0.0-1a65d409c7a1438a34d21b60bf30a6fd5db59314.jar!\io\flutter\embedding\engine\renderer\SurfaceTextureWrapper.class
Here is my file path; it doesn't have the method shouldUpdate. Maybe my version is wrong? How do I update it?
Using Yarn Workspaces:
Add references.path to tsconfig.json
for the package you are trying to import. Example from public repo https://github.com/graphile/starter/blob/main/%40app/server/tsconfig.json#L14
"references": [{ "path": "../config" }, { "path": "../graphql" }]
Problem solved: changing "custom deploy" to true solved the problem.
We decided to go the mTLS way. Because:
Xcode 26 Beta 5 fixes the issue.
You can add this snippet after your error, to ignore it. There is a piece of doc available here that explains it better than I do
# type: ignore
Yes, I use SoftwareSerial, but I usually get the same response.
I use P2P, so my question now is: is it possible that this LoRa module is LoRaWAN only, i.e. that I can't switch it to P2P?
//// TX
#define MY_ADDRESS 1    // Address of the TX module
#define DEST_ADDRESS 2  // Address of the RX module

unsigned long dernierEnvoi = 0;

void setup() {
  Serial.begin(9600);
  delay(3000);
  Serial.println("Initialisation du LA66 (TX)");
  // P2P configuration
  Serial.println("AT+STOP"); delay(200);
  Serial.println("AT+RESET"); delay(2000);
  Serial.println("AT+MODE=0"); delay(200);
  Serial.println("AT+ADDRESS=" + String(MY_ADDRESS)); delay(200);
  Serial.println("AT+NETWORKID=5"); delay(200);
  Serial.println("AT+PARAMETER=9,7,1,4"); delay(200);
  Serial.println("AT+FREQ=868500000"); delay(200);
  Serial.println("AT+SAVE"); delay(200);
  Serial.println("TX prêt");
}

void loop() {
  if (millis() - dernierEnvoi > 3000) {
    dernierEnvoi = millis();
    Serial.println("AT+SEND=" + String(DEST_ADDRESS) + ",HelloWorld");
  }
}

/// RX
#define MY_ADDRESS 2    // Address of the RX module
#define DEST_ADDRESS 1  // Address of the TX module

String recu;
int rssiCount = 0;
long rssiSum = 0;
int rssiMin = 999;
int rssiMax = -999;

void setup() {
  Serial.begin(9600);
  delay(3000);
  Serial.println("Initialisation du LA66 (RX)");
  // P2P configuration
  Serial.println("AT+STOP"); delay(200);
  Serial.println("AT+RESET"); delay(2000);
  Serial.println("AT+MODE=0"); delay(200);
  Serial.println("AT+ADDRESS=" + String(MY_ADDRESS)); delay(200);
  Serial.println("AT+NETWORKID=5"); delay(200);
  Serial.println("AT+PARAMETER=9,7,1,4"); delay(200);
  Serial.println("AT+FREQ=868500000"); delay(200);
  Serial.println("AT+SAVE"); delay(200);
  Serial.println("RX prêt");
}

void loop() {
  // Read the received data
  if (Serial.available()) {
    recu = Serial.readStringUntil('\n');
    recu.trim();
    if (recu.length() > 0) {
      Serial.println("Reçu : " + recu);
    }
    // Extract the RSSI if it is in P2P format
    if (recu.startsWith("+RCV")) {
      int lastComma = recu.lastIndexOf(',');
      int prevComma = recu.lastIndexOf(',', lastComma - 1);
      String rssiStr = recu.substring(prevComma + 1, lastComma);
      int rssiVal = rssiStr.toInt();
      // Statistics
      rssiSum += rssiVal;
      rssiCount++;
      if (rssiVal < rssiMin) rssiMin = rssiVal;
      if (rssiVal > rssiMax) rssiMax = rssiVal;
      Serial.println("📡 RSSI : " + String(rssiVal) + " dBm");
      Serial.println("   Moyenne : " + String((float)rssiSum / rssiCount, 2) + " dBm");
      Serial.println("   Min : " + String(rssiMin) + " dBm");
      Serial.println("   Max : " + String(rssiMax) + " dBm");
    }
  }
}
If you want it to always be in dark mode, just change all the :root variables in your global CSS to the dark-mode values.
from PIL import Image

# Open the image
img = Image.open("IMG-20250807-WA0018.jpg")

# Define the crop area (left, upper, right, lower)
# These are example values; adjust them depending on the original size and the position of the person
width, height = img.size
left = int(width * 0.3)
right = int(width * 0.7)
upper = 0
lower = height

# Crop the image
cropped_img = img.crop((left, upper, right, lower))

# Save the result
cropped_img.save("hasil_crop.jpg")
cropped_img.show()
Thank you very much, your answer has been useful.
Therence and Bud will be grateful;)
Yes, it is a bug; many people and I have the same problem.
https://community.openai.com/t/issue-uploading-files-to-vector-store-via-openai-api/1336045
I found the answer to my question. I had misunderstood the problem. The issue was not the scrollbar, but how I was computing the size of the inner WebContentsView.
I used getBounds() on the window, but this returns the outer bounds including the window borders, menu etc, whereas the bounds for the web view are expressed relative to the content view.
So the correct code should be
const currentWindow = BaseWindow.getAllWindows()[0];
const web = new WebContentsView({webPreferences: { partition: 'temp-login' }});
currentWindow.contentView.addChildView(web);
web.setBounds({
x: 0,
y: 0,
width: currentWindow.contentView.getBounds().width,
height: currentWindow.contentView.getBounds().height,
});
Use this command below:
git config --global user.email "YOUR_EMAIL"
It is mentioned in the official GitHub documents: Link
You may wrap top-level route components to display a “Something went wrong” message to the user, just like how server-side frameworks often handle crashes. You may also wrap individual widgets in an error boundary to protect them from crashing the rest of the application.
I know this is a pretty old question, but if someone still happens to come across it...
You need to manually override the canonical URL only on paginated versions of your custom "Blog" page using the wpseo_canonical filter.
Add this to your theme's functions.php:
add_filter('wpseo_canonical', 'fix_blog_page_canonical');
function fix_blog_page_canonical($canonical) {
if (is_page('blog') && get_query_var('paged') > 1) {
global $wp;
return home_url(add_query_arg([], $wp->request));
}
return $canonical;
}
Stupidly, the problem was that I was using linux-arm instead of linux-64 as the platform.
I had the same issue when having dots in the URL.
As a workaround I am replacing dots with $ before calling NavigateTo, then doing the reverse right at the beginning of OnParametersSetAsync.
I needed the name of a class as a string and ended up using
String(describing: MyClass.self)
Solution
I included the selected attribute to match the selected itemsPerPage() value, which is now an input. So the parent component is the one that updates the value rather than the child component.
<option [value]="size" [selected]="size === itemsPerPage()">{{size}}</option>
This is stupid stuff.
Your previous constructor should be rewritten like this:
public function __construct()
{
}