// Your points
Point[] кнопки4 = {
    Point.get(624, 170),
    Point.get(527, 331),
    Point.get(628, 168),
    Point.get(525, 189)
};
Point[] кнопки3 = {
    Point.get(689, 310),
    Point.get(979, 1029),
    Point.get(1243, 662)
};
// Region for reading the number
Point левыйВерх = Point.get(674, 363);
Point правыйНиз = Point.get(726, 401);
// BOT TOKEN and YOUR Telegram ACCOUNT ID
String tgToken = "bot";
String tgChatId = "id";
// DON'T TOUCH ANYTHING BELOW!
pfc.setOCRLang("eng");
pfc.startScreenCapture(2);
while (!EXIT) {
    String текстЧисло = pfc.getText(левыйВерх, правыйНиз);
    pfc.log("OCR text: '" + текстЧисло + "'");
    // strip all commas
    текстЧисло = текстЧисло.replace(",", "");
    // keep digits only
    текстЧисло = текстЧисло.replaceAll("[^0-9]", "");
    if (текстЧисло.length() < 2) {
        pfc.log("Number too short, skipping");
        continue;
    }
    double число = 999999;
    try {
        число = Double.parseDouble(текстЧисло);
    } catch (Exception e) {
        pfc.log("Failed to parse number: '" + текстЧисло + "'");
        continue;
    }
    pfc.log("Number: " + число);
    if (число <= 1299) { // <= so that exactly 1299 also triggers
        pfc.log("Number is 1299 or less, pressing the 3 buy buttons");
        for (int i = 0; i < кнопки3.length; i++) {
            pfc.click(кнопки3[i]);
            pfc.sleep(550);
        }
        // Send a Telegram message
        String msg = "NFT gift caught for " + (int)число + " stars 🎉";
        pfc.sendToTg(tgToken, tgChatId, msg);
        pfc.log("Sent Telegram message: " + msg);
    } else {
        pfc.log("Number is greater than 1299, pressing 4 buttons");
        for (int i = 0; i < кнопки4.length; i++) {
            pfc.click(кнопки4[i]);
            pfc.sleep(850);
        }
    }
}
Through 15 years of exponential traffic growth from both Double 11 and Alibaba Cloud, we built LoongCollector, an observability agent that delivers 10x higher throughput than open-source alternatives while using 80% fewer resources, proving that extreme performance and enterprise reliability can coexist under the most demanding production loads.
Back in the early 2010s, Alibaba’s infrastructure was facing a tidal wave: every Singles’ Day (11.11), traffic would surge to record-breaking levels, pushing our systems to their absolute limits. Our observability stack—tasked with collecting logs, metrics, and traces from millions of servers—was devouring CPU and memory just to keep up. At that time, there were no lightweight, high-performance agents on the market: Fluent Bit hadn’t been invented, Vector was still a distant idea, Logstash was a memory-hungry beast.
The math was brutal: Just a 1% efficiency gain in data collection would save us millions across our massive infrastructure. When you’re processing petabytes of observability data every day, performance isn’t optional—it’s mission-critical.
So, in 2013, we set out to build our own: a lightweight, high-performance, and rock-solid data collector. Over the next decade, iLogtail (now LoongCollector) was battle-tested by the world’s largest e-commerce events, the migration of Alibaba Group to the cloud, and the rise of containerized infrastructure. By 2022, we had open-sourced a collector that could run anywhere—on bare metal, virtual machines, or Kubernetes clusters—capable of handling everything from file logs and container output to metrics, all while using minimal resources.
Today, LoongCollector powers tens of millions of deployments, reliably collecting hundreds of petabytes of observability data every day for Alibaba, Ant Group, and thousands of enterprise customers. The result? Massive cost savings, a unified data collection layer, and a new standard for performance in the observability world.
When processing petabytes of observability data costs you millions, every performance improvement directly impacts your bottom line. A 1% efficiency improvement translates to millions in infrastructure savings across large-scale deployments. That's when we knew we had to share these numbers with the world.
We ran LoongCollector against every major open-source alternative in controlled, reproducible benchmarks. The results weren't just impressive—they were game-changing.
Rigorous Test Methodology
Maximum Throughput: LoongCollector Dominates
Log Type | LoongCollector | FluentBit | Vector | Filebeat |
---|---|---|---|---|
Single Line | 546 MB/s | 36 MB/s | 38 MB/s | 9 MB/s |
Multi-line | 238 MB/s | 24 MB/s | 22 MB/s | 6 MB/s |
Regex Parsing | 68 MB/s | 19 MB/s | 12 MB/s | Not Supported |
📈 Breaking Point Analysis: While competitors hit CPU saturation at ~40 MB/s, LoongCollector maintains linear scaling up to 546 MB/s on a single processing thread—the theoretical maximum of our test environment.
Resource Efficiency: Where the Magic Happens
The real story isn't just raw throughput—it's doing more with dramatically less. At identical 10 MB/s processing loads:
Scenario | LoongCollector | FluentBit | Vector | Filebeat |
---|---|---|---|---|
Simple Line (512B) | 3.40% CPU, 29.01 MB RAM | 12.29% CPU (+261%), 46.84 MB RAM (+61%) | 35.80% CPU (+952%), 83.24 MB RAM (+186%) | Performance Insufficient |
Multi-line (512B) | 5.82% CPU, 29.39 MB RAM | 28.35% CPU (+387%), 46.39 MB RAM (+57%) | 55.99% CPU (+862%), 85.17 MB RAM (+189%) | Performance Insufficient |
Regex (512B) | 14.20% CPU, 34.02 MB RAM | 37.32% CPU (+162%), 46.44 MB RAM (+36%) | 43.90% CPU (+209%), 90.51 MB RAM (+166%) | Not Supported |
The Performance Breakthrough: 5 Key Advantages
Traditional Approach: Traditional log agents create multiple string copies during parsing. Each extracted field requires a separate memory allocation, and the original log content is duplicated multiple times across different processing stages. This approach leads to excessive memory allocations and CPU overhead, especially when processing high-volume logs with complex parsing requirements.
LoongCollector's Memory Arena: LoongCollector introduces a shared memory pool (SourceBuffer) for each PipelineEventGroup, where all string data is stored once. Instead of copying extracted fields, LoongCollector uses string_view references that point to specific segments of the original data.
Architecture:
Pipeline Event Group
├── Shared Memory Pool (SourceBuffer)
│ └── "2025-01-01 10:00:00 [INFO] Processing user request from 192.168.1.100"
├── String Views (zero-copy references)
│ ├── timestamp: string_view(0, 19) // "2025-01-01 10:00:00"
│ ├── level: string_view(21, 4) // "INFO"
│ ├── message: string_view(27, 23) // "Processing user request"
│ └── ip: string_view(56, 13) // "192.168.1.100"
└── Events referencing original data
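To make this concrete, here is a minimal C++ sketch of the pattern (the types and parsing logic are ours for illustration, not LoongCollector's actual SourceBuffer API): the raw line is stored once in a shared buffer, and each extracted field is a string_view aliasing that buffer, so parsing allocates nothing per field.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <string_view>

// Illustrative stand-in for a shared per-group buffer: owns the raw bytes once.
struct SourceBuffer {
    std::string data;
};

// Parsed fields are views into the buffer, never copies.
struct LogEvent {
    std::string_view timestamp;
    std::string_view level;
    std::string_view message;
};

LogEvent ParseLine(const std::shared_ptr<SourceBuffer>& buf) {
    std::string_view line{buf->data};
    LogEvent ev;
    // Fixed-position split for the example format "ts [LEVEL] message".
    ev.timestamp = line.substr(0, 19);
    size_t close = line.find(']');
    ev.level = line.substr(21, close - 21);
    ev.message = line.substr(close + 2);
    return ev;  // zero copies: all three views alias buf->data
}

int main() {
    auto buf = std::make_shared<SourceBuffer>();
    buf->data = "2025-01-01 10:00:00 [INFO] Processing user request from 192.168.1.100";
    LogEvent ev = ParseLine(buf);
    std::cout << ev.level << ": " << ev.message << '\n';
}
```

The essential invariant is lifetime: the views are only valid while the SourceBuffer lives, which is why tying both to the same event group makes the scheme safe.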
Performance Impact:
Component | Traditional | LoongCollector | Improvement |
---|---|---|---|
String Operations | 4 copies | 0 copies | 100% reduction |
Memory Allocations | Per field | Per group | 80% reduction |
Regex Extraction | 4 field copies | 4 string_view refs | 100% elimination |
CPU Overhead | High | Minimal | 15% improvement |
Traditional Approach: Traditional log agents create and destroy PipelineEvent objects for every log entry, leading to frequent memory allocations and deallocations. This approach causes significant CPU overhead (10% of total processing time) and creates memory fragmentation. Simple global object pools introduce lock contention in multi-threaded environments, while thread-local pools fail to handle cross-thread scenarios effectively.
LoongCollector's Event Pool Architecture: LoongCollector implements intelligent object pooling with thread-aware allocation strategies that eliminate lock contention while handling complex multi-threaded scenarios. The system uses different pooling strategies based on whether events are allocated and deallocated in the same thread or across different threads.
Thread Allocation Strategy:
1) Same-Thread Allocation/Deallocation
┌──────────────────┐
│ Processor Thread │──── [Lock-free Pool] ──── Direct Reuse
└──────────────────┘
When events are created and destroyed within the same Processor Runner thread, each thread maintains its own lock-free event pool. Since only one thread accesses each pool, no synchronization overhead is required.
2) Cross-Thread Allocation/Deallocation
┌────────────────┐ ┌─────────────────┐
│ Input Thread │────▶│ Processor Thread│
└────────────────┘ └─────────────────┘
│ │
└── [Double Buffer Pool] ──┘
For events created in Input Runner threads but consumed in Processor Runner threads, we implement a double-buffer strategy:
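A minimal sketch of the two-tier idea (names and structure are ours, not LoongCollector's actual classes): the owning thread pops from a private freelist with no locking, while other threads return events to a secondary buffer that is swapped in wholesale under one short lock.

```cpp
#include <mutex>
#include <vector>

struct PipelineEvent { /* payload omitted */ };

// Illustrative two-tier pool: a lock-free local freelist plus a locked
// return buffer for events freed by other threads.
class EventPool {
    std::vector<PipelineEvent*> local_;     // touched only by the owning thread
    std::vector<PipelineEvent*> returned_;  // filled by other threads, under a lock
    std::mutex mtx_;

public:
    PipelineEvent* Acquire() {
        if (local_.empty()) {
            // Reclaim everything other threads returned, with one short lock.
            std::lock_guard<std::mutex> lk(mtx_);
            local_.swap(returned_);
        }
        if (local_.empty()) return new PipelineEvent();
        PipelineEvent* ev = local_.back();
        local_.pop_back();
        return ev;
    }

    // Same-thread release: no synchronization at all.
    void ReleaseLocal(PipelineEvent* ev) { local_.push_back(ev); }

    // Cross-thread release: the only contended path, and it never touches
    // the hot freelist directly.
    void ReleaseRemote(PipelineEvent* ev) {
        std::lock_guard<std::mutex> lk(mtx_);
        returned_.push_back(ev);
    }
};

int main() {
    EventPool pool;
    PipelineEvent* ev = pool.Acquire();  // pool empty: falls back to new
    pool.ReleaseLocal(ev);
    ev = pool.Acquire();                 // reused with no allocation or lock
    pool.ReleaseRemote(ev);              // as a consumer thread would return it
    delete pool.Acquire();               // drains the returned buffer
}
```

The double-buffer swap amortizes the lock: the producing side pays one mutex acquisition per batch of returned events rather than one per event.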
Performance Impact:
Aspect | Traditional | LoongCollector | Improvement |
---|---|---|---|
Object creation | Per event | Pool reuse | 90% reduction |
Memory fragmentation | High | Minimal | 80% reduction |
Traditional Approach: Standard serialization involves creating intermediate Protobuf objects before converting to network bytes. This two-step process requires additional memory allocations and CPU cycles for object construction and serialization, leading to unnecessary overhead in high-throughput scenarios.
LoongCollector's Zero-Copy Serialization: LoongCollector bypasses intermediate object creation by directly serializing PipelineEventGroup data according to Protobuf wire format. This eliminates the temporary object allocation and reduces memory pressure during serialization.
Architecture:
Traditional: PipelineEventGroup → ProtoBuf Object → Serialized Bytes → Network
LoongCollector: PipelineEventGroup → Serialized Bytes → Network
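For intuition, here is a hedged sketch of direct wire-format emission (field numbers and helper names are made up for the example): a length-delimited protobuf field is just a key varint, a length varint, and the raw bytes, so a string field can be appended straight from the event buffer without building a message object first.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <string_view>

// Append a base-128 varint, the primitive of the protobuf wire format.
void AppendVarint(std::string& out, uint64_t v) {
    while (v >= 0x80) {
        out.push_back(static_cast<char>((v & 0x7F) | 0x80));
        v >>= 7;
    }
    out.push_back(static_cast<char>(v));
}

// Append one length-delimited field: key (field number + wire type 2),
// payload length, then the bytes themselves, copied exactly once.
void AppendStringField(std::string& out, uint32_t field_no, std::string_view s) {
    AppendVarint(out, (static_cast<uint64_t>(field_no) << 3) | 2);
    AppendVarint(out, s.size());
    out.append(s.data(), s.size());
}

int main() {
    std::string wire;
    AppendStringField(wire, 1, "2025-01-01 10:00:00");      // e.g. a timestamp field
    AppendStringField(wire, 2, "Processing user request");  // e.g. a content field
    std::cout << wire.size() << " bytes serialized\n";
}
```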
Performance Impact:
Metric | Traditional | LoongCollector | Improvement |
---|---|---|---|
Serialization CPU | 12.5% | 5.8% | 54% reduction |
Memory allocations | 3 copies | 1 copy | 67% reduction |
While LoongCollector demonstrates impressive performance advantages, its reliability architecture is equally noteworthy. The following sections detail how LoongCollector achieves enterprise-grade stability and fault tolerance while maintaining its performance edge.
LoongCollector's multi-tenant architecture ensures isolation between different pipelines while maintaining optimal resource utilization. The system implements a high-low watermark feedback queue mechanism that prevents any single pipeline from affecting others.
Multi-Pipeline Architecture with Independent Queues:
┌─ LoongCollector Multi-Tenant Pipeline Architecture ───────────────────┐
│ │
│ ┌─ Pipeline A ─┐ ┌─ Pipeline B ─┐ ┌─ Pipeline C ─┐ │
│ │ │ │ │ │ │ │
│ │ Input Plugin │ │ Input Plugin │ │ Input Plugin │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Process Queue│ │ Process Queue│ │ Process Queue│ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Sender Queue │ │ Sender Queue │ │ Sender Queue │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Flusher │ │ Flusher │ │ Flusher │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │ │ │
│ └───────────────────┼─────────────────┘ │
│ │ │
│ ┌─ Shared Runners ────────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ Input Runners ─┐ ┌─ Processor Runners ┐ ┌─ Flusher Runners ─┐ │ │
│ │ │ • Pipeline │ │ • Priority-based │ │ • Watermark-based │ │ │
│ │ │ isolation │ │ scheduling │ │ throttling │ │ │
│ │ │ • Independent │ │ • Fair resource │ │ • Back-pressure │ │ │
│ │ │ event pools │ │ allocation │ │ control │ │ │
│ │ └─────────────────┘ └────────────────────┘ └───────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
High-Low Watermark Feedback Queue Mechanism:
┌─ High-Low Watermark Feedback System ─────────────────────┐
│ │
│ ┌─ Queue State Management ─┐ ┌─ Feedback Mechanism ──┐ │
│ │ │ │ │ │
│ │ ┌─── Normal State ───┐ │ │ ┌──── Upstream ────┐ │ │
│ │ │ Size < Low │ │ │ │ Check │ │ │
│ │ │ Accept all data │ │ │ │ Before Write │ │ │
│ │ └────────────────────┘ │ │ └──────────────────┘ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌── High Watermark ──┐ │ │ │ │
│ │ │ Size >= High │ │ │ ┌──── Downstream ──┐ │ │
│ │ │ Stop accepting │ │ │ │ Feedback Enabled │ │ │
│ │ │ non-urgent data │ │ │ └──────────────────┘ │ │
│ │ └────────────────────┘ │ │ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌─ Recovery State ──┐ │ │ │ │
│ │ │ Size <= Low │ │ │ │ │
│ │ │ Resume accepting data │ │ │ │
│ │ └───────────────────┘ │ │ │ │
│ └──────────────────────────┘ └───────────────────────┘ │
└──────────────────────────────────────────────────────────┘
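A condensed C++ sketch of the watermark mechanic (the container, sizes, and names are illustrative): once the queue reaches the high watermark it refuses new data, and it only starts accepting again after draining below the low watermark, giving upstream a clear back-pressure signal without thrashing at a single threshold.

```cpp
#include <cstddef>
#include <deque>

// Illustrative high-low watermark queue: hysteresis between `low` and `high`
// prevents the accept/reject state from flapping on every push and pop.
template <typename T>
class WatermarkQueue {
    std::deque<T> q_;
    size_t low_, high_;
    bool accepting_ = true;

public:
    WatermarkQueue(size_t low, size_t high) : low_(low), high_(high) {}

    bool IsValidToPush() const { return accepting_; }

    bool Push(T item) {
        if (!accepting_) return false;  // upstream must back off and retry
        q_.push_back(std::move(item));
        if (q_.size() >= high_) accepting_ = false;  // high watermark reached
        return true;
    }

    bool Pop(T& out) {
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop_front();
        if (!accepting_ && q_.size() <= low_) accepting_ = true;  // recovered
        return true;
    }
};

int main() {
    WatermarkQueue<int> q(2, 4);
    while (q.Push(1)) {}  // fills until the high watermark trips
    int v;
    while (q.Pop(v)) {}   // draining below the low watermark re-opens the queue
    return q.IsValidToPush() ? 0 : 1;
}
```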
Isolation Benefits:
Enterprise environments run multiple pipelines with different criticality levels. Our priority-aware round-robin scheduler ensures fairness while respecting business priorities. The system implements a sophisticated multi-level scheduling algorithm that guarantees resource allocation fairness while maintaining strict priority enforcement.
Priority Scheduling Principles
The core scheduling algorithm ensures both fairness within priority levels and strict priority enforcement between levels. The system follows strict priority ordering while maintaining fair round-robin scheduling within each priority level.
┌─ High Priority ────────────────────────────────────────────────────┐
│ ┌───────────┐ │
│ │ Pipeline1 │ ◄─── Always processed first │
│ └───────────┘ │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Medium Priority (Round-robin cycle) ──────────────────────────────┐
│ ┌───────────┐ ┌─────────────────┐ ┌────────────┐ │
│ │ Pipeline2 │───▶│ Pipeline3(Last) │───▶│ Pipeline 4 │ │
│ └───────────┘ └─────────────────┘ └────────────┘ │
│ ▲ │ │
│ └────────────────────────────────────────┘ │
│ │
│ Note: Last processed was Pipeline3, so next starts from Pipeline4 │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Low Priority (Round-robin cycle) ─────────────────────────────────┐
│ ┌───────────┐ ┌───────────┐ │
│ │ Pipeline5 │───▶│ Pipeline6 │ │
│ └───────────┘ └───────────┘ │
│ ▲ │ │
│ └───────────────────┘ │
│ │
│ Note: Processed only when higher priority pipelines have no data │
└────────────────────────────────────────────────────────────────────┘
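The following sketch captures that rule (types and names are ours, not LoongCollector's): strict ordering across levels, round-robin within a level, and each level resumes after the pipeline it served last.

```cpp
#include <array>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Stand-in for a pipeline; has_data models a non-empty process queue.
struct Pipeline {
    std::string name;
    bool has_data = false;
    bool HasData() const { return has_data; }
};

class Scheduler {
    static constexpr size_t kLevels = 3;  // high, medium, low
    std::array<std::vector<Pipeline*>, kLevels> levels_;
    std::array<size_t, kLevels> next_{};  // per-level round-robin cursor

public:
    void Add(size_t level, Pipeline* p) { levels_[level].push_back(p); }

    // Next pipeline to process, or nullopt when everything is idle.
    std::optional<Pipeline*> PickNext() {
        for (size_t lvl = 0; lvl < kLevels; ++lvl) {      // strict priority
            auto& ps = levels_[lvl];
            for (size_t i = 0; i < ps.size(); ++i) {      // fair within a level
                size_t idx = (next_[lvl] + i) % ps.size();
                if (ps[idx]->HasData()) {
                    next_[lvl] = (idx + 1) % ps.size();   // resume after it
                    return ps[idx];
                }
            }
        }
        return std::nullopt;
    }
};

int main() {
    Scheduler s;
    Pipeline a{"high", true}, b{"med1", true}, c{"med2", true};
    s.Add(0, &a);
    s.Add(1, &b);
    s.Add(1, &c);
    while (auto p = s.PickNext()) (*p)->has_data = false;  // drains in priority order
}
```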
When one destination fails, traditional agents often affect all pipelines. LoongCollector implements adaptive concurrency limiting per destination.
AIMD Based Flow Control:
┌─ ConcurrencyLimiter Configuration ───────────────────────────────────────┐
│ │
│ ┌─ Failure Rate Thresholds ────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ No Fallback Zone ─┐ ┌─ Slow Fallback Zone ─┐ ┌─ Fast Fallback ──┐ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ 0% ─────────── 10% │ │ 10% ──────────── 40% │ │ 40% ─────── 100% │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ Maintain Current │ │ Multiply by 0.8 │ │ Multiply by 0.5 │ │ │
│ │ │ Concurrency │ │ (Slow Decrease) │ │ (Fast Decrease) │ │ │
│ │ └────────────────────┘ └──────────────────────┘ └──────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Recovery Mechanism ─┐ │
│ │ • Additive Increase │ ← +1 when success rate = 100% │
│ │ • Gradual Recovery │ ← Linear scaling back to max │
│ └──────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
Each concurrency limiter uses an adaptive rate limiting algorithm inspired by AIMD (Additive Increase, Multiplicative Decrease) network congestion control. When sending failures occur, the concurrency is quickly reduced. When sends succeed, concurrency gradually increases. To avoid fluctuations from network jitter, statistics are collected over a time window/batch of data to prevent rapid concurrency oscillation.
By using this strategy, when network anomalies occur at a sending destination, the allowed data packets for that destination can quickly decay, minimizing the impact on other sending destinations. In network interruption scenarios, the sleep period approach maximizes reduction of unnecessary sends while ensuring timely recovery of data transmission within a limited time once the network is restored.
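As a rough sketch of this AIMD rule in C++, using the thresholds from the diagram above (the window bookkeeping is simplified to a single callback per statistics window):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative per-destination limiter: multiplicative decrease on failures,
// additive increase only after a fully clean window.
class ConcurrencyLimiter {
    double limit_;
    double max_limit_;

public:
    explicit ConcurrencyLimiter(double max_limit)
        : limit_(max_limit), max_limit_(max_limit) {}

    // Called once per window, so one jittery request cannot swing the limit.
    void OnWindowEnd(uint32_t failures, uint32_t total) {
        if (total == 0) return;
        double failure_rate = static_cast<double>(failures) / total;
        if (failure_rate >= 0.40) {
            limit_ *= 0.5;   // fast fallback zone
        } else if (failure_rate >= 0.10) {
            limit_ *= 0.8;   // slow fallback zone
        } else if (failures == 0) {
            limit_ += 1.0;   // additive increase on 100% success
        }                    // 0-10% failures: maintain current concurrency
        limit_ = std::clamp(limit_, 1.0, max_limit_);
    }

    uint32_t CurrentLimit() const { return static_cast<uint32_t>(limit_); }
};

int main() {
    ConcurrencyLimiter lim(100);
    lim.OnWindowEnd(50, 100);  // 50% failures: halve to 50
    lim.OnWindowEnd(20, 100);  // 20% failures: scale to 40
    lim.OnWindowEnd(0, 100);   // clean window: creep back to 41
    return lim.CurrentLimit() == 41 ? 0 : 1;
}
```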
LoongCollector has been validated in some of the world's most demanding production environments, processing real-world workloads that would break most observability systems. As the core data collection engine powering Alibaba Cloud SLS (Simple Log Service)—one of the world's largest cloud-native observability platforms—LoongCollector processes observability data for tens of millions of applications across Alibaba's global infrastructure.
Global Deployment Scale:
Enterprise Customer Validation:
Extreme Scenario Testing:
Scalability
Network Resilience
Chaos Engineering
LoongCollector represents more than just performance optimization—it's a fundamental rethinking of how observability data should be collected, processed, and delivered at scale. By open-sourcing this technology, we're democratizing access to enterprise-grade performance that was previously available only to the largest tech companies.
Ready to experience 10x performance improvements?
🚀 GitHub Repository: https://github.com/alibaba/loongcollector
📊 Benchmark Suite: Clone our complete benchmark tests and reproduce these results in your environment
📖 Documentation: Comprehensive guides for migration, optimization, and advanced configurations
💬 Community Discussion: Join our Discord for technical discussions and architecture deep-dives
Challenge us: If you're running Filebeat, FluentBit, or Vector in production, we're confident LoongCollector will deliver significant improvements in your environment. Run our benchmark suite and let the data speak.
Contribute: LoongCollector is built by engineers, for engineers. Whether it's performance optimizations, new data source integrations, or reliability improvements—every contribution shapes the future of observability infrastructure.
Open Questions for the Community:
Benchmark Challenge: We're confident in our numbers, but we want to see yours. Run our benchmark suite against your current setup and share the results. If you can beat our performance, we'll feature your optimizations in our next release.
The next time your log collection agent consumes more resources than your actual application, remember: there's a better way. LoongCollector proves that high performance and enterprise reliability aren't mutually exclusive—they're the foundation of modern observability infrastructure.
Built with ❤️ by the Alibaba Cloud Observability Team. Battle-tested across hundreds of petabytes of daily production data and tens of millions of instances.
For large ranges, if you don't want to apply the formula to the whole row/column, you can select the start of the range, use the scroll bar to move to the end of the range, then press SHIFT and select the end of the range. This selects the whole range. Then you can press CTRL + ENTER to apply the formula.
Use svn changelist. It's the best tool to solve this.
Check this:
https://github.com/tldr-pages/tldr/blob/main/pages/common/svn-changelist.md
And if you want to see the list added, use svn status
The documents are not that clear, but I have just finished a script to split a large svn commit into smaller commits.
I think you must URL-encode the query parameters before sending the request to the server.
For example, the Arabic character "أ" (U+0623) should be percent-encoded as %D8%A3.
So instead of idNo=2/أ, send idNo=2/%D8%A3.
For ReSharper there are:
- ReSharper_UnitTestRunFromContext (run at cursor)
- ReSharper_UnitTestDebugContext (debug at cursor)
- ReSharper_UnitTestSessionRepeatPreviousRun
Here is the right way to ask Bloomberg:
holidays = blp.bds(
'USD Curncy',
'CALENDAR_NON_SETTLEMENT_DATES',
SETTLEMENT_CALENDAR_CODE='FD',
CALENDAR_START_DATE='20250101',
CALENDAR_END_DATE='20261231'
)
print(holidays)
N.B. 'FD' is the calendar for the US.
You will get a DataFrame with a 'holiday_date' column containing the dates in yyyy-mm-dd format.
https://stackoverflow.com/a/79704676/17078296
Have you checked whether your vendor folder is excluded from language server features?
2025 Update
According to the Expo documentation, use:
npx expo install --fix
Tip: Run this command multiple times — it may continue updating dependencies on each pass until everything is aligned with the expected versions.
What helped me was changing the Java version.
I downgraded from Java 17 to Java 11, because Java 17 introduced stricter formatting rules and it started to fail at class initialization time (static block), causing a NoClassDefFoundError.
IMO the author tag helps in situations where the code is viewed outside an IDE, like on GitHub, in a Bash CLI, or in tools like Notepad++; there, the tag gives direct information about the origins of the thing (interface/class/method). The VCS is helpful, but it requires a learning curve and an IDE.
From another perspective, the author tag may help in enforcing the Open-Closed principle in SOLID: for example, when a developer comes to touch an interface marked `@author <senior_developer>`, they'd hesitate to touch it, as it was designed by a senior person and changing it may break important things; eventually, they will think about making an extension of it instead, which is great.
Thanks for your comments; they were very useful and pushed my brain in the right direction.
In short: the original C++ code creates an image in WMF format (from Windows 3.0, you remember it, right?). I changed the C++ code and started generating EMF files instead (which came with Windows 95).
For example, this code
CMetaFileDC dcm;
dcm.Create();
has been replaced with this one:
CMetaFileDC dcm;
dcm.CreateEnhanced(NULL, NULL, NULL, _T("Enhanced Metafile"));
I walked through all the related locations and now I have EMF as the output format.
This step solved all my issues; I don't even need to convert the EMF file to BMP format, I can paste/use it directly in my C# code.
Thanks again for your thoughts and ideas, I really appreciate it.
I had the same issue and found out it was caused by a dependency conflict — in my case, it was the devtools dependency.
After removing it, everything went back to working normally.
SOLVED!
Thanks to Wayne from the comments, he guided me into CUDA Program files.
Some of the cuDNN files from the downloaded 8.1 version weren't present in:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include
What worked:
Downloading a new cuDNN 8.1 .zip file from NVIDIA website
Extracting it into Downloads/
Copying files from bin/, include/, and lib/x64 into the corresponding directories in
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\
That's it.
It might be because I hit the chat limit, since I had the same problem today and nothing else explains it.
List of actions tried:
- Ending the VS Code task in the Task Manager.
- Changing the Copilot version.
- Uninstalling and reinstalling Copilot.
- Reloading the VS Code window.
Update from 7 August 2025:
I have a Firebase cloud function for handling RTDN (real-time developer notifications), but I got an error message saying I don't have the required permissions. AI tools could not really help me all the way; with the info I could get from them and the answers from people in this post, I ended up getting it to work like this:
In the Google Cloud console, under Navigation Menu -> IAM & Admin -> IAM, I searched for an entry with a Principal like "[email protected]" and a name like "App Engine default service account".
Then I went to the Google Play console app-list screen -> Users and permissions -> next to "manage users" there are 3 vertical dots -> clicked on them and selected "invite new users". For the email address I entered "[email protected]"; under account permissions I chose only "View app information and download bulk reports (read only)", "View financial data, orders and cancellation survey responses" and "Manage orders and subscriptions", and pressed the button to invite the user.
Then in the Google Play console I went to the app in question -> to the subscription (in my case I only have 1 anyway) and deactivated and reactivated it, and after a few minutes it worked for me.
Hope this might help someone in the future.
I'm not sure when support for setToken ended, but in later versions of Python dbutils it's definitely no longer supported. As a matter of fact, it's hard to find any references to it in the official documentation and GitHub.
Clean and rebuild solution did it for me.
For me, it was that I needed to be connected to the same WiFi (or maybe just the Internet) on BOTH devices: laptop and iPhone. For Android it looks like it doesn't matter.
I'm using Expo; I don't know if bare-bones React Native behaves the same.
By default, SQL Developer searches for a JDK within its own directory. If it doesn't find one, it will prompt you to select one with a popup. If you need to set it explicitly, that can also be done in the jdk.conf file present in <installationfolder>/sqldeveloper/bin by setting SetJavaHome, e.g. `SetJavaHome /path/to/jdk`.
You can subscribe to blocks via websocket with the QuickNode free tier:
- Create an account on quicknode.com.
- Go to https://dashboard.quicknode.com/endpoints and get the websocket endpoint there.
I have the same problem, using cmake. Has anyone solved it?
It is not triggered when I enter the domain name directly in the browser.
That's by design.
When you type a URL in the browser on Android, it doesn't trigger an intent that could be handled by another app, because you just want to visit the URL you entered.
If you click a URL somewhere else, however, Android tries to find an app that supports that URL and opens it.
Source: https://developer.android.com/training/app-links#android-app-links
MikroORM creates business_biz_reg_no instead of using your biz_reg_no column for the join, because you defined both biz_reg_no (a string) and a @ManyToOne() relation without telling MikroORM to reuse the same column.
@ManyToOne(() => BusinessEntity, {
referenceColumnName: 'biz_reg_no',
fieldName: 'biz_reg_no', //<= use this to force MikroORM to use the right column
nullable: true,
})
business?: BusinessEntity;
Also fix this (array not single):
@OneToMany(() => InquiryEntity, (inquiry) => inquiry.business)
inquiries: InquiryEntity[]; // <= not `inquiry`
Now MikroORM will generate:
ON i.biz_reg_no = b.biz_reg_no
It was pyjanitor. I forgot that it was imported.
import tensorflow_datasets as tfds
import tensorflow as tf
# Use default configuration without subwords8k
dataset, info = tfds.load('imdb_reviews', with_info=True, as_supervised=True)
This error may occur because the subwords8k configuration for the imdb_reviews dataset has been deprecated, I guess.
Sounds like your tests are sharing state across threads; productId is likely getting overwritten when run in parallel. Try isolating data per scenario to avoid conflicts.
Did you find a solution? I also can't figure out how to do it.
When a container is un-selected, the outer Focus widget sets canRequestFocus = false and skipTraversal = true on every descendant focus node.
Because the TextField inside _SearchField owns its own persistent FocusNode, the canRequestFocus flag stays false even after the container becomes selected again, so the Tab key can never land on that field any more, only on the button (which creates a brand-new internal focus node on each rebuild).
So the properties need to be updated once the container is selected again, inside the didUpdateWidget method, and the isContainerSelected flag has to be passed to the _SearchField widget from the parent.
class _SearchFieldState extends State<_SearchField> {
final FocusNode _focusNode = FocusNode();
@override
void didUpdateWidget(final _SearchField oldWidget) {
super.didUpdateWidget(oldWidget);
if (oldWidget.isContainerSelected != widget.isContainerSelected) {
_focusNode
..canRequestFocus = widget.isContainerSelected
..skipTraversal = !widget.isContainerSelected;
}
}
@override
void dispose() {
_focusNode.dispose();
super.dispose();
}
@override
Widget build(final BuildContext context) {
return TextField(
focusNode: _focusNode,
decoration: const InputDecoration(
hintText: 'Search',
),
);
}
}
I'm using jOOQ v3.11.5 for the DATE column in Oracle, and this works for me:
<forcedTypes>
<forcedType>
<name>TIMESTAMP</name>
<userType>java.time.LocalDateTime</userType>
<types>DATE((.*))?</types>
</forcedType>
</forcedTypes>
with
<javaTimeTypes>true</javaTimeTypes>
It is now possible, see this answer https://stackoverflow.com/a/62411309/17789881.
In short, you can do
Base.delete_method(@which your_function(your_args...))
To serve index.html, add `pm2 serve /home/site/wwwroot --no-daemon` in the Startup Command of the Configuration blade of the Azure Web App.
Lauren from Rasa here, glad to hear you are trying the Developer Edition of Rasa!
I haven't come across that problem myself yet, however, I can recommend going to the Rasa Docs (https://rasa.com/) and clicking on the "ask AI" button at the bottom. Whenever I have problems with installation, I usually try to troubleshoot with the Ask AI feature, drop in your error message and see if it can help get you a couple steps further.
EC2 instances don't have a meaningful working directory when running user data, so you must specify the destination when using wget
(and with many other commands):
wget -P /home/centos/testing https://validlink
P.S. Note that specifying the working directory with `.` doesn't work either.
If you're okay using synchronous streaming:
from transformers import TextStreamer
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
Then, redirect stdout to a custom generator function. But since you already want async and FastAPI streaming, let’s fix it properly.
I found the setting I was looking for! *Facepalm*
The setting can be found here:
Add Page > Ellipsis (three dots) button next to Publish > Preferences > Interface > Show starter patterns
Use the NPM_CONFIG_REGISTRY env variable. Example:
NPM_CONFIG_REGISTRY=http://localhost:4873 bunx my-cli
After some experimentation I have developed a JSON based language (serializable) that can be sent over the wire and parsed in a JS environment. Now it supports arrays, objects and primitives, promises and function composition all in one payload. Yes I am a bit proud of it and it took me a long time to get to this point.
It plays nice with the JS and TS language by having a utility to create stubs for functions which allow writing a complete program that is then sent somewhere else for parsing.
I believe that the real value of RPC is not just to call a function, but to do bulk requests and computation on the fly before a response is sent back over the wire.
I also think the core of this concept should be medium/protocol agnostic: you should decide whether you want WS, HTTP, or any other method of transport.
My work might inspire or create some discussion and criticism.
If it "works" in some environment, it's likely due to a custom extension or monkey patching.
In Pandas 2.3.1, the correct way to shuffle a DataFrame is:
df = df.sample(frac=1).reset_index(drop=True)
df.shuffle() is not part of the standard Pandas API.
I had the same problem. Following the advice from @staroselskii I played around with the tests and found that in my case the problem was caused by outputting too many lines to stderr. When I reduced the amount of logging to stderr, the pipeline completed correctly with a full set of tests.
Most probably an issue with the ADS routes. ADS routes have to be configured vice versa: the CX2020 has to point to the Debian system, and the Debian system to the CX2020.
Also check whether there are doubled entries that point to the same AMS Net ID but a different IP or something like that. You may get a connection in this case, but it will be very unstable.
Of course, also check the firewall: TCP port 48898 has to be open on both systems in the incoming direction.
You can also check ADS traffic and commands with the ADS Monitor: https://www.beckhoff.com/de-de/produkte/automation/twincat/tfxxxx-twincat-3-functions/tf6xxx-connectivity/tf6010.html?
Make sure your versions all match.
"ag-grid-community" and "ag-grid-enterprise" should be exactly the same.
I had these and had the same error:
"ag-grid-community": "^34.1.1"
"ag-grid-enterprise": "^34.1.0"
No. If the compile info differs, then most probably the binaries are different too. The memory addresses in the core dump will point to invalid locations; that's also why the TC IDE is not loading it, because it would not make any sense.
I recommend using something like git for your project. If you had, you would probably be able to restore the "old" binaries.
Add connectTimeout: 10000
{
host: 'ip_or_hostname',
port: 6379,
connectTimeout: 10000,
maxRetriesPerRequest: null,
enableReadyCheck: false
}
Copy your notification icon (e.g. ic_stat_your_icon_name) into the drawable-* folders of your Flutter Android app at:
android/app/src/main/res/drawable-hdpi/ic_stat_your_icon_name.png
android/app/src/main/res/drawable-mdpi/ic_stat_your_icon_name.png
Note: notification icons are white-only with a transparent background.
Update:
My bad! It's because I am using UFW on my system; it blocks any request that isn't allowed by a rule.
I figured it out when trying netcat: it worked, so the only possible issue was the firewall blocking requests.
def player():
health = 100
return health
health = player()
print(health)
@zmbq: Can you explain how you would do it with a setup.cfg (setuptools)? I need to include 2 more wheel packages into my wheel. How can I do that?
On my Mac, I changed the DNS settings of my WiFi connection to Google's: 8.8.8.8 and 8.8.4.4. It worked for me.
Interesting that there is so little advice about such an obvious problem. Rate limiting a messaging source is a very common technique employed in order not to overload downstream systems. Strangely, neither AWS SQS nor the Spring Boot consumer implementation has any support for it.
Not an answer, but sharing a problem related to the topic: there is no way to remove the legend frame when adding a legend through tm_add_legend(), and even within the tm_options settings with legend.frame=F it does not work either.
> packageVersion('tmap')
[1] ‘4.1’
> packageDate('tmap')
[1] "2025-05-12"
First I tried as specified here :
tm_shape(World)+
tm_polygons(fill = 'area',fill.legend=tm_legend(frame = F))+
tm_add_legend(frame=F,labels = c('lab1','lab2'),type = 'polygons',fill=c('red','green'),position=c('left','top'))
I even tried the option with:
tm_shape(World)+
tm_polygons(fill = 'area',fill.legend=tm_legend(frame = F))+
tm_add_legend(fill.legend=tm_legend(frame=F),labels = c('lab1','lab2'),type = 'polygons',fill=c('red','green'),position=c('left','top'))
It cannot be implemented directly in SharePoint Online. However, you can try third-party apps such as Kwizcom Lookup or BoostSolutions Cascaded Lookup.
You need to tell IntelliJ to generate those classes, otherwise it won't be aware of them, even if their creation is part of the default maven lifecycle.
Right-click on the project's folder or the pom.xml, then Maven > Generate Sources and Update Folders.
Just use { list-style-position: inside; }
Why not just make a stacked bar plot? Something like this:
import matplotlib.pyplot as plt
res = df.T.drop_duplicates().T
# df here is your source dataframe
fig, ax = plt.subplots()
for name, group in res.groupby("PVR Group"):
ax.bar((group['min_x']+(group['max_x']-group['min_x'])/2),
group['max_y']-group['min_y'],
width=(group['max_x']-group['min_x']),
bottom=group['min_y'],
label=name,
edgecolor="black")
ax.legend()
plt.show()
I believe you could adjust the design to meet your taste.
Use flex to push the bottom image down inside the .pinkborder card.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Schoolbell&family=Waiting+for+the+Sunrise&display=swap">
<title>Image Bottom Alignment Test</title>
<!-- css start -->
<style>
.contain {
width: 100%;
display: flex;
justify-content: center;
flex-wrap: wrap;
gap: 50px;
margin: 10%;
}
.pinkborder {
background-color: red;
width: 300px;
height: 500px;
display: flex;
flex-direction: column;
justify-content: space-between; /* Push top content and bottom image apart */
align-items: center;
padding: 10px;
color: white;
font-family: 'Schoolbell', cursive;
}
.topcontent {
display: flex;
flex-direction: column;
align-items: center;
}
.downalign {
display: flex;
justify-content: center;
align-items: flex-end;
width: 100%;
}
</style>
<!--end -->
</head>
<body>
<div class="contain">
<div class="pinkborder">
<div class="topcontent">
<img src="https://picsum.photos/200/300">
<div>
╰⠀╮⠀e⠀╭⠀╯<br>
aerhaedhaedhaedh<br>
aethaethahartdg
</div>
</div>
<div class="downalign">
<img src="https://picsum.photos/200">
</div>
</div>
<div class="pinkborder">
<div class="topcontent">
<img src="https://picsum.photos/id/237/200/300">
<div>
╰⠀╮⠀e⠀╭⠀╯<br>
aerhaedhaedhaedh<br>
aethaethahartdg
</div>
</div>
<div class="downalign">
<img src="https://picsum.photos/200">
</div>
</div>
</div>
</body>
</html>
C:\Users\xxx\.gradle\caches\modules-2\files-2.1\io.flutter\flutter_embedding_release\1.0.0-1a65d409c7a1438a34d21b60bf30a6fd5db59314\88383da8418511a23e318ac08cd9846f983bbde0\flutter_embedding_release-1.0.0-1a65d409c7a1438a34d21b60bf30a6fd5db59314.jar!\io\flutter\embedding\engine\renderer\SurfaceTextureWrapper.class
Here is my file path; it doesn't have the method shouldUpdate. Maybe my version is wrong? How do I update it?
Using Yarn Workspaces:
Add references.path to the tsconfig.json of the package you are trying to import. Example from a public repo: https://github.com/graphile/starter/blob/main/%40app/server/tsconfig.json#L14
"references": [{ "path": "../config" }, { "path": "../graphql" }]
Problem solved: changing the "custom deploy" setting to true fixed it.
We decided to go the mTLS way. Because:
Xcode 26 Beta 5 fixes the issue.
You can add this snippet after your error to ignore it. There is a piece of documentation available here that explains it better than I do:
# type: ignore
Yes, I use SoftwareSerial, but I usually get the same response.
I use P2P, so my question now: is it possible that the LoRa module is LoRaWAN-only, i.e. that I can't change it to P2P?
////TX
#define MY_ADDRESS 1 // Address of the TX module
#define DEST_ADDRESS 2 // Address of the RX module
unsigned long dernierEnvoi = 0;
void setup() {
Serial.begin(9600);
delay(3000);
Serial.println("Initialisation du LA66 (TX)");
// P2P configuration
Serial.println("AT+STOP"); delay(200);
Serial.println("AT+RESET"); delay(2000);
Serial.println("AT+MODE=0"); delay(200);
Serial.println("AT+ADDRESS=" + String(MY_ADDRESS)); delay(200);
Serial.println("AT+NETWORKID=5"); delay(200);
Serial.println("AT+PARAMETER=9,7,1,4"); delay(200);
Serial.println("AT+FREQ=868500000"); delay(200);
Serial.println("AT+SAVE"); delay(200);
Serial.println("TX prêt");
}
void loop() {
if (millis() - dernierEnvoi > 3000) {
dernierEnvoi = millis();
Serial.println("AT+SEND=" + String(DEST_ADDRESS) + ",HelloWorld");
}
}
///RX
#define MY_ADDRESS 2 // Address of the RX module
#define DEST_ADDRESS 1 // Address of the TX module
String recu;
int rssiCount = 0;
long rssiSum = 0;
int rssiMin = 999;
int rssiMax = -999;
void setup() {
Serial.begin(9600);
delay(3000);
Serial.println("Initialisation du LA66 (RX)");
// P2P configuration
Serial.println("AT+STOP"); delay(200);
Serial.println("AT+RESET"); delay(2000);
Serial.println("AT+MODE=0"); delay(200);
Serial.println("AT+ADDRESS=" + String(MY_ADDRESS)); delay(200);
Serial.println("AT+NETWORKID=5"); delay(200);
Serial.println("AT+PARAMETER=9,7,1,4"); delay(200);
Serial.println("AT+FREQ=868500000"); delay(200);
Serial.println("AT+SAVE"); delay(200);
Serial.println("RX prêt");
}
void loop() {
// Read the received data
if (Serial.available()) {
recu = Serial.readStringUntil('\n');
recu.trim();
if (recu.length() > 0) {
Serial.println("Reçu : " + recu);
}
// Extract the RSSI if the frame is in P2P format
if (recu.startsWith("+RCV")) {
int lastComma = recu.lastIndexOf(',');
int prevComma = recu.lastIndexOf(',', lastComma - 1);
String rssiStr = recu.substring(prevComma + 1, lastComma);
int rssiVal = rssiStr.toInt();
// Statistics
rssiSum += rssiVal;
rssiCount++;
if (rssiVal < rssiMin) rssiMin = rssiVal;
if (rssiVal > rssiMax) rssiMax = rssiVal;
Serial.println("📡 RSSI : " + String(rssiVal) + " dBm");
Serial.println(" Moyenne : " + String((float)rssiSum / rssiCount, 2) + " dBm");
Serial.println(" Min : " + String(rssiMin) + " dBm");
Serial.println(" Max : " + String(rssiMax) + " dBm");
}
}
}
If you want it to always be dark mode, just change all the :root values in your global CSS to the dark-mode ones.
from PIL import Image
# Open the image
img = Image.open("IMG-20250807-WA0018.jpg")
# Define the crop area (left, upper, right, lower)
# These are example values; adjust them depending on the original size and the position of the person
width, height = img.size
left = int(width * 0.3)
right = int(width * 0.7)
upper = 0
lower = height
# Crop the image
cropped_img = img.crop((left, upper, right, lower))
# Save the result
cropped_img.save("hasil_crop.jpg")
cropped_img.show()
Thank you very much, your answer has been useful.
Therence and Bud will be grateful;)
Yes, it is a bug; I and many other people have the same problem.
https://community.openai.com/t/issue-uploading-files-to-vector-store-via-openai-api/1336045
I found the answer to my question. I had misunderstood the problem. The issue was not the scrollbar, but how I was computing the size of the inner WebContentsView.
I used getBounds() on the window, but this returns the outer bounds including the window borders, menu etc, whereas the bounds for the web view are expressed relative to the content view.
So the correct code should be
const currentWindow = BaseWindow.getAllWindows()[0];
const web = new WebContentsView({webPreferences: { partition: 'temp-login' }});
currentWindow.contentView.addChildView(web);
web.setBounds({
x: 0,
y: 0,
width: currentWindow.contentView.getBounds().width,
height: currentWindow.contentView.getBounds().height,
});
Use this command below:
git config --global user.email "YOUR_EMAIL"
It is mentioned in the official GitHub documentation (Link).
You may wrap top-level route components to display a “Something went wrong” message to the user, just like how server-side frameworks often handle crashes. You may also wrap individual widgets in an error boundary to protect them from crashing the rest of the application.
I know this is a pretty old question, but if someone still happens to come across it...
You need to manually override the canonical URL only on paginated versions of your custom "Blog" page using the wpseo_canonical filter.
Add this to your theme's functions.php:
add_filter('wpseo_canonical', 'fix_blog_page_canonical');
function fix_blog_page_canonical($canonical) {
if (is_page('blog') && get_query_var('paged') > 1) {
global $wp;
return home_url(add_query_arg([], $wp->request));
}
return $canonical;
}
Stupidly, the problem was that I was using linux-arm instead of linux-64 as the platform.
I had the same issue when having dots in the URL.
As a workaround, I replace dots with $ before calling NavigateTo, then do the reverse right at the beginning of OnParametersSetAsync.
I needed the name of a class as a string and ended up using
String(describe: MyClass.self)
Solution
I included the select attribute to match the selected itemsPerPage() value, which is now an input, so the parent component is the one that updates the value rather than the child component:
<option [value]="size" [selected]="size === itemsPerPage()">{{size}}</option>
Your previous constructor should be rewritten as:
public function __construct()
{
}
onTap: () => Navigator.push(
context,
MaterialPageRoute(
builder: (_) => BlocProvider(
create: (_) => HomeCubit()..getProfileData(similarJobs?.sId ?? ''),
child: TopProfileScreen(id: similarJobs?.sId ?? ''),
),
),
),
If you are using bloc, simply use a BlocProvider on the related-product screen you want to navigate to; when you come back from this page, the bloc state manages those things for you.
Here's a good workaround.
As far as what is happening, I could only guess, and I don't feel like spending a lot of time trying to figure it out either. I don't believe this is the intended usage of the popover: a popover is normally used to pop over a different view, not itself. It's interesting that the View extension works (in my short testing) while the modifier doesn't. Best of luck.
extension View {
func popper ( _ isPresented: Binding < Bool > ) -> some View {
Color.clear
.popover ( isPresented: isPresented ) { self }
}
}
#if DEBUG
struct DisplayAsPopoverModifier_Previews: PreviewProvider {
static var previews: some View {
VStack(spacing: 16) {
Text("Title")
.font(.largeTitle)
Image(systemName: "star")
.font(.largeTitle)
}
.popper ( .constant ( true ) )
}
}
#endif
When you use the Download a file or image Dataverse action in Power Automate, make sure to expand the advanced options. This will show the Image size input. When not filled, it will by default download a thumbnail. When you set this to full, the file/image will be downloaded in the maximum resolution and not as a thumbnail.
The problem was caused by the Windows shortcuts (.lnk files) created by MontaVista in 2006, which are used as symbolic links, not being resolved correctly.
The first issue is that at some point in time the shortcut resolution mechanism of cygwin changed, and the current cygwin version 3.6.4-1 cannot correctly resolve the old-style shortcuts created in 2006 as links. Thus, I switched back to cygwin version 2.10.0-1, available on the cygwin time machine. This resolved the issue on my local system.
However, when working with Windows containers there were several issues with the file attributes missing due to the file system being used by docker. I initially tried to unzip the ZIP archive into the container file system during the build phase so that the container would start as fast as possible and be self-contained. However, the shortcut resolution did not work due to issues with the file attributes. Thus, as a fix, given that the container is an intermediate solution while modernising the build process, I decided to mount the unzipped directory containing the compiler using docker's -v option: docker run --rm --name mips-build -v C:\xyz\resources\MontaVista:C:\MontaVista mips-build:0.0.0.
However, the compiler needs to be located inside the "D: drive". Thus, I created a symbolic link as part of the CMD instruction within the Docker file:
CMD [ "C:\\cygwin64\\bin\\bash.exe", "--login", "-i", "-c", "ln -s /cygdrive/c/montavista/ /cygdrive/d/MontaVista", "&&", "..." ]
There is no way to control the modules on Azure Hybrid Workers via Azure Automation. The module control functionality in Azure Automation is only for the Azure workers (the built-in ones). This means you will have to do it manually, by running scripts on your machines or in some other automated way.
Note that you can write your runbooks in a way that checks whether a certain module and version is available on the machine. If it is not available, you can have code that installs the module and version. That of course adds runtime to your runbooks for doing that check, and more time if it has to download and install the module(s).
Stopping and starting the following services resolved the issue for me:
sudo systemctl stop docker.socket
sudo systemctl stop docker
sudo systemctl start docker.socket
sudo systemctl start docker
Note: Make sure to stop docker.socket first, otherwise it will start the docker service again when you stop docker; and make sure to do stop/start instead of restart.
It's pretty simple: just disable the default shortcuts in preferences, then "Ctrl+K+C" -> comment and "Ctrl+K+U" -> uncomment the selected area. Look at the attached images.
From what I have experienced, the easiest way is to apply every update you need in the Suggestions tab of the Project Structure dialog, then apply it and reload your Gradle project. In my case it fixed itself, but you might also want to check your libs.versions.toml file and fix the warnings if you have some.
Please tell me if this problem has been solved.
I managed to solve the "starting up" error. Some Android SDK was missing. When I installed it, the connection between Android Studio and the virtual device remained intact.
The answer from @hanrvuser (disabling Impeller) helped, though I have now found another solution: running the virtual device with software rendering instead of automatic/hardware.
I’ve used an open-source tool called Keploy recently, and it’s been pretty useful when I needed to do integration testing without heavy refactoring.
In one project, we were working with a third-party API (kind of a black-box scenario) buried deep in the codebase — similar to what you're describing. Writing isolated unit tests wasn’t practical at that point, so we needed a way to test how our app behaved when interacting with that external component.
Keploy worked by sitting between the app and the network — it recorded actual requests and responses while we used the app normally. That included calls to the third-party API. From that, it generated test cases automatically, and even mocked the API calls for later test runs. This meant we didn’t need to set up or maintain a separate staging version of the third-party service every time we wanted to validate something.
We were able to run these generated tests as part of our pipeline and catch integration issues early, especially when updating dependencies. It wasn’t perfect — there’s a bit of setup involved, and you need to run the app with Keploy to record the traffic — but it definitely saved time compared to writing all the test cases and mocks manually.
It doesn’t replace unit testing tools, but it complements them well. We used our regular testing framework for unit tests, and then Keploy for parts of the system where integration mattered more than isolation.
In addition to the other comments, Expo SDK 53 sets Android 15 (API level 35) by default, so you can just upgrade your project to use the latest updates:
https://expo.dev/changelog/sdk-53
You can easily convert it online using this site: https://formatjsononline.com/json-to-string.
The problem was that in the solution configuration the Build checkbox was not checked.
To solve it: right-click on the solution => select "Configuration Manager" => check the Build checkbox (see picture).
You can try the online tool https://www.splitbybookmark.com/; it can split a PDF by TOC and page limit.
It turns out that there has been a problem with the Hedera Testnet since July 31st, which results in the exact same issue I am having.
Investigating - We’re investigating an issue on the Hedera Testnet affecting smart contracts. The behaviour was first reported starting 31 July 2025.
Deployments may succeed, but you may experience:
- Contract bytecode not appearing on Hashscan
- Read/function calls failing with a BAD_DATA error
Large files uploaded via the browser are limited to 25 MB.
You can use git clone to clone the remote repository to your local computer, manually drop your file into the folder, and then use git to add, commit, and push.
c.b.a.a$b->onReceive points to an obfuscated part of your code. It appears with this cryptic name because you have enabled ProGuard in your Android project, which obfuscates the code. Obfuscation creates a mapping file as well that lists all the mappings that took place during the obfuscation. For instance, Class A -> x, Class B -> p, etc.
To find out to which line of code this error refers, you can do the following:
Go to Play Console and download the .aab file that Play Console mentions has a policy violation. (You can find it at Bundle Explorer)
After the file is downloaded, rename it and change its file type to .zip.
Open the zip file and navigate to a folder called BUNDLE-METADATA/com.android.tools.build.obfuscation/ and open the file proguard with VS Code, or just a text editor
Using the "Find" tool (Command+F), find to which class and method c.b.a.a$b->onReceive is mapped.
If you found that flutter clean or an iOS pod clean (as mentioned above) didn't help, it may be caused by your code.
In my scenario, it was a device-specific issue: by default I am choosing the 3rd camera on my test devices, but another device has no 3rd camera, which caused the crash and the splash-screen freeze.
I finally found the root cause after I got that device and performed the tests.