Best quick fix - keeps only your specified radial grid lines
scale_y_continuous(
breaks = c(0:5) + exp_lim,
limits = range(c(0:5) + exp_lim)
)
Hello, please use our official npm package at https://www.npmjs.com/package/@bugrecorder/sdk
I will answer my own question: I was using the wrong npm package. I should use https://www.npmjs.com/package/@bugrecorder/sdk
client.init({
apiKey: "1234567890",
domain: "test.bugrecorder.com"
});
This will send the metrics to the dashboard automatically.
client.monitor({
serverName: "server_dev_1",
onData: (data) => {
console.log("data from monitor", data);
},
onError: (error) => {
console.log("error from monitor", error);
}
});
client.sendError({
domain: "test.bugrecorder.com",
message: "test",
stack: "test",
context: "test"
});
LOCAL_SRC_FILES := $(call all-cpp-files-under,$(LOCAL_PATH))
You mention that to run it you used the command ./filename.c. That command is causing the issue: you should run the compiled executable, not the source code file. The proper command is ./filename
filename.c is your source code file, not the compiled program.
To run your program,
./filename
In case anyone is looking for an answer to this in 2025, go to the XML side of the designer. Find the image you are trying to increase the size of. Set the constraints according to the size you want and then include:
android:scaleType="fitCenter"
There are other values available, but this will increase the size of the image to the maximum possible without cropping while keeping it centered in the view.
If you want to make the mobile-side app, yes, you can make it. With this you can cast photos, videos, etc. to the Roku device.
The main problem here is that you are using custom migration in the wrong way.
0. If your old database was not already a VersionedSchema, migration will not work.
1. Custom migration still migrates data automatically, so you don't need to remove or add models yourself.
2. willMigrate gives you access to a context with the V1 model.
3. didMigrate gives you access to the new context with the V2 model.
4. Doing migrationFromV1ToV2Data = v1Data is nothing more than copying references. After removing them from the context with context.delete, you are left with empty references.
So you have 2 options:
A)
You should make migrationFromV1ToV2Data a [PersistentIdentifier: Bool] and, in willMigrate, copy the current property1 keyed by the model's persistentModelID.
private static var migrationFromV1ToV2Data: [PersistentIdentifier: Bool] = [:]
static let migrateFromV1ToV2 = MigrationStage.custom(
fromVersion: MyDataSchemaV1.self,
toVersion: MyDataSchemaV2.self,
willMigrate:
{
modelContext in
let descriptor : FetchDescriptor<MyDataV1> = FetchDescriptor<MyDataV1>()
let v1Data : [MyDataV1] = try modelContext.fetch(descriptor)
v1Data.forEach {
migrationFromV1ToV2Data[$0.persistentModelID] = $0.property1
}
},
didMigrate:
{
modelContext in
for (id, data) in migrationFromV1ToV2Data{
if let model: MyDataV2 = modelContext.registeredModel(for: id) {
model.property1 = [data]
}
}
try? modelContext.save()
}
)
}
B)
Create V2 model from V1 in willMigrate, and populate into new context in didMigrate.
private static var migrationFromV1ToV2Data: [MyDataV2] = []
static let migrateFromV1ToV2 = MigrationStage.custom(
fromVersion: MyDataSchemaV1.self,
toVersion: MyDataSchemaV2.self,
willMigrate:
{
modelContext in
let descriptor : FetchDescriptor<MyDataV1> = FetchDescriptor<MyDataV1>()
let v1Data : [MyDataV1] = try modelContext.fetch(descriptor)
migrationFromV1ToV2Data = v1Data.map{ MyDataV2(myDataV1: $0) }
try modelContext.delete(model: MyDataV1.self)
try modelContext.save()
},
didMigrate:
{
modelContext in
migrationFromV1ToV2Data.forEach
{
modelContext.insert($0)
}
try modelContext.save()
}
)
}
I had a problem with relationships in one of my migrations where I needed to use option B, but in most cases option A is enough.
def Q1(numerator, denominator):
# Check if both are numbers (int or float), but not complex
if not (isinstance(numerator, (int, float)) and isinstance(denominator, (int, float))):
return None
# Avoid division by zero
if denominator == 0:
return None
# Check divisibility using modulus (%)
return numerator % denominator == 0
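A few illustrative calls (my own examples, not from the original answer), showing the expected behaviour:
print(Q1(10, 2))    # True  -> 10 is evenly divisible by 2
print(Q1(7, 2))     # False -> remainder is 1
print(Q1(5, 0))     # None  -> division by zero is rejected
print(Q1("5", 2))   # None  -> non-numeric input is rejected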
The problem seems to be solved with
import io
pd.read_csv(io.BytesIO(file.read()), encoding="cp1257")
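For context, a minimal self-contained sketch of the same idea; the filename and the binary open mode here are illustrative, the point is that read_csv receives a BytesIO wrapper plus the cp1257 encoding:
import io
import pandas as pd

# 'file' stands in for whatever binary file-like object you receive (e.g. an upload handle)
with open("data.csv", "rb") as file:  # hypothetical local file
    df = pd.read_csv(io.BytesIO(file.read()), encoding="cp1257")
print(df.head())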
The default admin password for the MQ console when using the cr.io/ibm-messaging/mq image is 'passw0rd'.
I found that the MQ_ADMIN_PASSWORD env variable does work if specified.
If your OptionButtons are Form Controls, then ControlFormat is where you manipulate state, I do believe.
This would of course depend on whether you want the OptionButton selected; it can be set to either xlOn or xlOff.
Just ensure you have the deployment target at 15.1 for iOS.
2025 Vuetify 3.10+
"vite.config.js":
vuetify({
styles: {
configFile: 'src/styles/settings.scss',
},
})
"src/styles/settings.scss":
@use 'vuetify/settings' with (
$body-font-family: ('Arial', sans-serif),
);
Change the size of: Tools – Options – Environment - Fonts and Colors - Statement Completion/Editor Tooltip
Would OpenTelemetry collector aggregation help (https://last9.io/blog/opentelemetry-metrics-aggregation/)? There should be no app change, and it is handled in the collector pipeline.
Are you using ControlFormat?
Is your approach to work with shapes? Perhaps manipulate the OptionButton state, if the button was inserted as a Form Control.
New here. Curious: did you modify the .value property of the OptionButton object?
Here is the official documentation for the Remedy Rest APIs
The issue was caused by invalid dependencies in package.json that created conflicts during the Docker build process.
Fix:
Remove the invalid dependency from package.json
Clean all cache and node_modules
Rebuild Docker containers without cache
In my case, it was just an @ symbol I accidentally put in a localisation file 😆
The trick is to use an environment variable.
I used this image with WordArt and it converted perfectly.
For anyone stumbling upon this years later, you can now use:
word-break: keep-all;
I'm using VS 17.14.7 and Resharper version
You can share your cleanup profile, but you should first switch to editing the team-shared layer (by default, if you go to edit Code Cleanup, it creates/edits the profile on your personal level):
1. Go to Extensions => ReSharper => Manage Options, select the 'Solution ... team-shared' layer, and click Edit for this layer
2. Then go to Code Editing => Code Cleanup => Profiles and create your profile
3. Save
This writes the profile into a .DotSettings file in your solution. You can then commit and push this file like any other solution file.
You could use the STRING_SPLIT table function:
SELECT TRIM(value) AS pid
FROM STRING_SPLIT('U388279963, U388631403, U389925814', ',')
It's probably due to the exact text of the button.
Current:
//button[contains(@class, 'btn-primary') and text()='Confirm']
Change it to
//button[contains(@class, 'btn-primary') and text()='Confirm Booking']
I could fix the issue by deleting all .dcu files from my DCU folder; after this, the compiler worked without issues. I'm not sure why, but I'm posting it here to help anyone who faces the same issue.
You must change the getMaxCores function, roll it back to 0.25, then install the package from the tar.gz file. Details are in this article of mine: https://www.bilibili.com/opus/1126187884464832514
You can read from a serial port with PHP on Windows using this library:
https://github.com/m0x3/php_comport
This is the only really working serial-port read on Windows with PHP.
We have to use px at the end:
#list_label {
font-size:20px;}
In my case, where I have a cPanel shared host:
1- Go to MultiPHP INI Editor
2- Under 'Configure PHP INI basic settings', select the location where you want to change the limit
3- Find post_max_size and set your desired value, e.g. 100M
4- Click Apply
Resolved the issue. If anyone comes across the same issue, ensure that your mock exports the component under the correct (named) export:
Failing:
jest.mock('@mui/x-date-pickers/LocalizationProvider', () => ({
__esModule: true,
default: (p: { children?: any }) => p.children,
}));
Passing:
jest.mock('@mui/x-date-pickers/LocalizationProvider', () => ({
__esModule: true,
LocalizationProvider: (p: { children?: any }) => p.children,
}));
Snakemake version 9.0.0 and newer supports this via the --report-after-run parameter.
Make sure you used the correct function to register it.
function my_message_shortcode() {
return "Hello, this is my custom message!";
}
add_shortcode('my_message', 'my_message_shortcode');
If you miss the return statement and use echo instead, it may not render properly.
Necro-answer:
I believe you have to query the ics file directly:
http://localhost:51756/iCalendar/userUniqueId/calendar.ics
I'm answering because this is still a relevant issue.
I was able to resolve this by using the "Reload Project" option from VS 2022 menu (not sure how I missed that). Thanks for the responses
Fixed: it turns out you can't do that in Textual.
Add the list, then open the form in Design View, then add the example expression below to apply a filter from the cmb_ml combo box to the list:
IIf([Forms]![qc_moldsF9_partsListSelect]![cmb_ml] Is Not Null,[Forms]![qc_moldsF9_partsListSelect]![cmb_ml],[id_part])
You can't simply use an ELM327 to fully emulate an ECU. You need to decide which layer to emulate (the CAN bus vs. the ELM327 AT-command layer) and build the interface so your tool reads from the emulated bus instead of the real one.
The PHP command
php artisan migrate
succeeds provided I do two things:
rename the service to mysql instead of db (in the .env file and docker-compose.yml)
add the --network <id> flag when connecting to the backend container's shell
Assuming you got your [goal weight day] set up, would this work?
If(([weight]<160) and ([goal weight day] is not null), First([day]) over ([person],[weight]))
I was experiencing this issue as well. I found out that there were some required parameters I was not sending from the backend, which caused the error to be raised.
I had a similar problem. What helped was removing the custom domain from my username.github.io repository (the user/organization site)
Suppose your number is in cell A2; then use the formula IF(A2=0,0,MOD(A2-1,9)+1)
This returns the repeated digit sum (digital root) as a single digit between 0 and 9.
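For comparison, the same mod-9 trick written as a small Python function (my own illustration, assuming a non-negative integer input):
def digital_root(n):
    # Repeatedly summing the digits is equivalent to this mod-9 formula
    return 0 if n == 0 else (n - 1) % 9 + 1

print(digital_root(12345))  # 1+2+3+4+5 = 15 -> 1+5 = 6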
Right-click -> Format Document?
Check if the server is overriding it.
Try building the project by running these commands:
npm run build
serve -s build
Then check by opening the build URL in the Safari browser.
I know this is an old thread, but there is still no "create_post" capability (I wonder why?) and I needed this functionality as well.
What I want: I create a single post for specific users with a custom role and then only let them edit that post.
This is what works for me:
'edit_posts' => false: this will remove the ability to create posts, but also the ability to edit/update/delete.
'edit_published_posts' => true: this will give back the ability to edit and update, but not to create new posts (so there will be no "Add post" button).
The whole function & hook:
function user_custom_roles() {
remove_role('custom_role'); //needed to "reset" the role
add_role(
'custom_role',
'Custom Role',
array(
'read' => true,
'delete_posts' => false,
'delete_published_posts' => false,
'edit_posts' => false, //IMPORTANT
'edit_published_posts' => true, //IMPORTANT
'edit_others_pages' => false,
'edit_others_posts' => false,
'publish_pages' => false,
'publish_posts' => false,
'upload_files' => true,
'unfiltered_html' => false
)
);
}
add_action('admin_init', 'user_custom_roles');
I see this question is 12 (!!!) years old, but I’ll add an answer anyway. I ran into the same confusion while reading Evans and Vernon and thought this might help others.
Like you, I was puzzled by:
1️⃣ Subdomains vs. Bounded Contexts
Subdomains are business-oriented concepts, bounded contexts are software-oriented. A subdomain represents an area of the business, for example, Sales in an e-commerce company (the classic example). Within that subdomain, you might have several bounded contexts: Product Catalog, Order Management, Pricing, etc. Each bounded context has its own model and ubiquitous language, consistent only within that context. As a matter of fact, model and ubiquitous language are the concepts that, at the implementation level, define the boundary of a context (terms mean something different and/or are implemented in different ways depending on context)
2️⃣ How they relate
In short: you can have multiple bounded contexts within one subdomain. To use a different analogy than the existing ones: subdomains are like thematic areas in an amusement park, while bounded contexts are the attractions within each area, each with its own design and mechanisms, but all expressing the same general theme.
3️⃣ In practice
In implementation, you mostly work within bounded contexts, since that’s where your code and model live. For example, in Python you might structure your project with one package per bounded context, each encapsulating its domain logic and data model.
Another reason to keep the two concepts separate is that you may have a business rule that spans different bounded contexts and is implemented differently in each of them. For example (sales again; I hate this domain, but here we are): "A customer cannot receive more than a 20% discount" is a rule of the Sales subdomain that, language-wise and model-wise, will be implemented differently in different bounded contexts (pricing, order management, etc.), as sketched below.
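A hypothetical Python sketch (all names invented for illustration) of that one rule expressed in two bounded contexts:
# pricing context: the rule is phrased in terms of a discount percentage
def apply_discount(price: float, discount_pct: float) -> float:
    if discount_pct > 20:
        raise ValueError("A customer cannot receive more than a 20% discount")
    return price * (1 - discount_pct / 100)

# order management context: the same rule, phrased against order totals
def order_discount_is_allowed(original_total: float, discounted_total: float) -> bool:
    return discounted_total >= original_total * 0.80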
Also...
When planning development, discussions start at the subdomain level, aligning business capabilities and assigning teams. Those teams then define the bounded contexts and their corresponding ubiquitous languages.
The distinction between the two matters most at this strategic design stage, it helps large projects stay organised and prevents overlapping models and terminology from creating chaos.
If you mainly work on smaller or personal projects on your own (as I do), all this taxonomy may not seem that important at first, but (I guess) the advantage is clear to people who have witnessed projects collapse because of bad planning.
TLDR: subdomains are business areas; bounded contexts are the models (and ubiquitous languages) built inside them, and one subdomain can contain several bounded contexts.
thank you
sudo restorecon -rv /opt/tomcat
worked for me
The problem with missing options came from config/packages/doctrine.yaml
There, the standard config sets options that were removed in doctrine-orm v3, namely:
doctrine.dbal.use_savepoints
doctrine.orm.auto_generate_proxy_classes
doctrine.orm.enable_lazy_ghost_objects
doctrine.orm.report_fields_where_declared
Commenting out / removing those options resolved the issue.
Hopefully the package supplying those config options fixes this. In the meantime, manually editing the file seems to work.
You can read from serial port with php and windows with library for windows
https://github.com/m0x3/php_comport
This is only one real working read on windows with php.
Now it has changed, use following in your source project:
Window -> Layouts -> Save Current Layout As New ...
And this in your destination project:
Window -> Layouts -> {Name you've given} -> Apply
Without quotes and for a file in directory ./files, launch the following command from the root directory where .git is placed:
git diff :!./files/file.txt
Once the bitmap preview is open, you can copy it (via cmd or [right click -> copy]) and then paste it to Preview app [Preview -> File -> New from clipboard] (if you use a Mac) or any image viewer of your choice. Then save it.
This issue is solved by running the published executable as Admin. My Visual Studio always runs as admin; it turns out that makes a difference.
I am not sure why it matters. Maybe Windows Defender scanning the executable by default while it runs makes it slower, or, as Guru Stron said, it may have something to do with DOTNET_TC_QuickJitForLoops, but I haven't had time to test it further.
Maybe when I have enough time to test, I will update my answer.
For now, I will close this issue.
How do I create something like the first answer, but for something else? There's a website I want to scrape, but I want to scrape for a specific src="specific url".
It looks like the best auto-calibration is a manual one. I used AI to create a script to adjust all the values of camera_matrix and dist_coeffs manually until I got the desired picture in the live preview.
If your organisation permits, you might be able to use LDAP to populate those fields:
VBA Excel: Getting information from Active Directory with the username based on cells
It turned out I forgot to add
app.html
<router-outlet></router-outlet>
and
app.ts
@Component({
imports: [RouterModule],
selector: 'app-root',
templateUrl: './app.html',
styleUrl: './app.scss',
})
export class App {
protected title = 'test-app';
}
In my case I had an email notification configured in /etc/mysql/mariadb.conf.d/60-galera.cnf.
The process was hanging, and after I removed it the service restarted and the machine rebooted with no problem.
Hope it helps.
Let's add
@AutoConfigureMockMvc(addFilters = false)
to ImportControllerTest. By setting addFilters = false in @AutoConfigureMockMvc, you instruct Spring to disable the entire Security Filter Chain for the test. This allows the request to be routed directly to your ImportController, bypassing any potential misconfiguration in the auto-configured OAuth2 resource server setup that is preventing the dispatcher from finding the controller.
You can do this:
total = 0
while total <= 100:
total += float(input("Write number: "))
Maybe this helps; for me it works nicely and pins the ~11 postgres processes to the cores I want on the CPU I want (multi-CPU server). It's part of a startup script that runs when the server restarts.
:: Delayed expansion is required for the !errorlevel! check inside the loop below
SETLOCAL EnableDelayedExpansion
SET "POSTGRES_SERVICE_NAME=postgresql-x64-18"
:: --- CPU Affinity Masks ---
:: PostgreSQL: 7 physical cores on CPU 1 (logical processors 16-29)
SET "AFFINITY_POSTGRES=0x3FFF0000"
:: --- 1. Start PostgreSQL Service ---
echo [1/3] PostgreSQL Service
echo ---------------------------------------------------
echo Checking PostgreSQL service state...
sc query %POSTGRES_SERVICE_NAME% | find "STATE" | find "RUNNING" > nul
if %errorlevel% == 0 (
echo [OK] PostgreSQL is already RUNNING.
) else (
echo Starting PostgreSQL service...
net start %POSTGRES_SERVICE_NAME% >nul 2>&1
echo Waiting for PostgreSQL to initialize...
for /l %%i in (1,1,15) do (
timeout /t 1 /nobreak > nul
sc query %POSTGRES_SERVICE_NAME% | find "STATE" | find "RUNNING" > nul
if !errorlevel! == 0 (
goto :postgres_started
)
)
:: If we get here, timeout expired
echo [ERROR] PostgreSQL service failed to start within 15 seconds. Check logs.
pause & goto :eof
:postgres_started
echo [OK] PostgreSQL service started.
)
:: Wait a moment for all postgres.exe processes to spawn
echo Waiting for PostgreSQL processes to spawn...
timeout /t 3 /nobreak > nul
:: Apply affinity to ALL postgres.exe processes using PowerShell
echo Setting PostgreSQL affinity to %AFFINITY_POSTGRES%...
powershell -NoProfile -ExecutionPolicy Bypass -Command "$procs = Get-Process -Name postgres -ErrorAction SilentlyContinue; $count = 0; foreach($p in $procs) { try { $p.ProcessorAffinity = %AFFINITY_POSTGRES%; $count++ } catch {} }; Write-Host \" [OK] Affinity set for $count postgres.exe processes.\" -ForegroundColor Green"
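If you prefer not to shell out to PowerShell, here is a rough Python alternative using psutil (an assumption on my part that psutil is installed; the CPU list mirrors the 0x3FFF0000 mask, i.e. logical processors 16-29):
import psutil

# Logical processors 16..29, matching the AFFINITY_POSTGRES mask above
TARGET_CPUS = list(range(16, 30))

count = 0
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("postgres"):
        try:
            proc.cpu_affinity(TARGET_CPUS)  # pin this postgres process
            count += 1
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass  # skip processes we are not allowed to modify
print(f"[OK] Affinity set for {count} postgres processes.")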
I am guessing you mean CSS. Here is the correct code:
#list_label {
font-size:20px;
}
Here are the docs for font-size:
Sorry for the late reply, but a good place to ask questions related to DQL would be the Dynatrace community -> https://community.dynatrace.com
I think for this use case there is the KVP (Key Value Pairs) command, which automatically parses key-value pairs so you can then access all keys and values. Here, for instance, is a discussion on that topic => https://community.dynatrace.com/t5/DQL/Log-processing-rule-for-each-item-in-json-array-split-on-quot/m-p/220181
In 2025, the properties from the other posts didn't work or no longer exist.
A simple workaround for me was just disabling hovering on the whole canvas HTML element via CSS:
canvas {
/* disable all the hover effects */
pointer-events: none;
}
If you're familiar with Django template syntax, github.com/peedief/template-engine is a very good option.
output = string.replace(fragment, "*", 1).replace(fragment, "").replace("*", fragment)
If needed, replace "*" with some token string which would never occur on your original string.
"Batteries included" doesn't mean that you can do everything you want with single built-in function call.
Based on @domi's comment, I have added this line to the end of the command and it worked fine.
Ignore the Suggestions matching public code (duplication detection filter)
If you're looking for a reliable tool to pretty-print and format JSON content, one of the best options is the command-line utility jq, which is described in this Stack Overflow thread: "JSON command line formatter tool for Linux"
I had the same issue and found a convenient way to globally configure this and packaged it into a htmx extension, you can find it here: https://github.com/fchtngr/htmx-ext-alpine-interop
I've accidentally passed a const argument. Doesn't seem to be the issue in your case though.
Follow this inComplete guide repo to install and set up Jupyter Notebook on Termux (Android 13+).
ls -v *.txt | cat -n | while read i f; do mv "$f" "$(printf "%04d.txt" "$i")"; done
I tested this locally with Spring Boot 3.4.0 on Java 25 using Gradle 9.1.0 and the app failed to start with the same error you mentioned. This happens because the ASM library embedded in Spring Framework 6.2.0 (used by 3.4.0) doesn’t support Java 25 class files.
When I upgraded to Spring Boot 3.4.10 (the latest patch in the 3.4.x line), the same app ran fine on Java 25.
It looks like a patch-level issue: early 3.4.x releases didn't fully support Java 25, but the latest patch fixed the ASM support.
What you can do is either:
Upgrade to Spring Boot 3.4.10 (if you want to stay on 3.4.x).
Upgrade to Spring Boot 3.5.x, which fully supports Java 25.
Either option works fine on Java 25.
Pedro Piñera helped answer this here, thanks!
Basically Tuist sets a default version in the generated projects here https://github.com/tuist/tuist/blob/88b57c1ac77dac2a8df7e45a0a59ef4a7ca494e9/cli/Sources/TuistGenerator/Generator/ProjectDescriptorGenerator.swift#L188
which is not configurable as of now.
I have a similar kind of issue, where the page is splitting unnecessarily.
I have three components: a header, a title, and a chart using Chart.js. The issue is that the header and title end up on the first page while the chart goes to the second page, leaving the rest of the first page blank. It works fine when the chart data fits within the first page, so what else can I do here?
Can somebody please help me fix this issue?
Here is the code
<div className="chart-container">
<div className="d-flex justify-content-between">
<label className="chart-title m-2">{props.title}</label>
</div>
{data.length == 0
? <div className="no-data-placeholder">
<span>No Data Found!</span>
</div>
: <div id={props.elementID} style={props.style}></div>
}
</div>
Since ngx-image-cropper adjusts the image to fit the crop area, zooming out scales the image instead of keeping its original size. The maintainAspectRatio or transform settings should be used.
You could also set your conditions without AssertJ and then just verify the boolean value with AssertJ.
Like this:
boolean result = list.stream()
.anyMatch(element -> element.matches(regex) || element.equals(specificString));
assertThat(result).isTrue();
It's probably ...Edit Scheme...->Run->Diagnostics->API Validation. Uncheck this and give it a try.
I know this is an old post, but if you're here from a "Annex B vs AVCC" search, I thought it would be worth adding another opinion, because what I believe to be the most important reason to use Annex B has not been mentioned.
@VC.One has already provided some technical information about each of the formats, so I will try not to repeat that.
I wonder in which case we should use Annex-B
To answer your question directly, the Annex-B start codes allow a decoder to synchronise to a stream that is already being transmitted, like a UDP broadcast or a wireless terrestrial TV broadcast. The start codes also allow the decoder to re-synchronise after a corruption in the media transport.
AVCC does not have a recovery mechanism, so cannot be used for purposes like I describe above.
To be clear, each of the formats have practical advantages and disadvantages.
Neither is "better" - they have different goals.
The comparison of these formats is similar to MPEG-TS vs MPEG-PS.
Transport stream (-TS) can be recovered if the stream is corrupted by an unreliable transport.
Program stream (-PS) is more compact and easier to parse, but has no recovery mechanism, so only use it with reliable transports.
For those parsing NALUs out of a byte stream that is stored on disk, you might reasonably question why you are searching for start codes in a file on disk, when you could be using a format that tells you the atom sizes before you parse them. Disk storage is reliable. So is TCP transmission. Favour AVCC in these contexts, if it is convenient to do so.
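As a purely illustrative aside (not from the original post), a simplified Python sketch of what scanning a raw byte stream for Annex B start codes looks like:
def split_annex_b_nalus(stream: bytes) -> list[bytes]:
    # Simplified: treats 3- and 4-byte start codes the same and ignores
    # emulation-prevention bytes; enough to show the resynchronisation idea.
    units = []
    pos = stream.find(b"\x00\x00\x01")
    while pos != -1:
        start = pos + 3
        nxt = stream.find(b"\x00\x00\x01", start)
        end = len(stream) if nxt == -1 else nxt
        units.append(stream[start:end])
        pos = nxt
    return units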
However, keep in mind that constructing the box structures in AVCC is more complex than just dropping start codes between each NALU, so recording from a live source is much simpler with Annex B. Apart from the additional complexity, recording directly to AVCC is also more prone to corruption if it is interrupted, because that format requires that the location of each of the frame boxes is in an index (in moov boxes) that you can only write retrospectively when you're streaming live video to disk. If your recording process is interrupted (crash, power loss, etc.), you will need some repair process to fix the broken recording (parsing the box structures for frames and building the moov atom). An interrupted Annex B recording, however, will only suffer a single broken frame in the same scenario.
So my message is "horses for courses".
Choose the one that suits your acquisition/recording/reconstruction needs best.
You are trying to run the command in a generic notebook as a generic PySpark import.
The pipeline module can be accessed only within the context of a pipeline.
Please refer to this documentation for clarity:
https://docs.databricks.com/aws/en/ldp/developer/python-ref/#gsc.tab=0
Currently I'm not allowed to add or reply to comments, so I'll just post an individual answer.
For macOS, the solution is the same as Bhavin Panara's solution; the directory is
/Users/(YourUser)/Library/Unity/cache/packages/packages.unity.com
You can use
from datetime import datetime
datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
(%f is supported by datetime.strftime, not time.strftime, and slicing off the last three characters leaves millisecond precision.)
Can't you just look at the JS code of the router page and see what requests it sends?
I am stuck when I have to submit, where they asked if I'm an android.
(17f0.a4c): Break instruction exception - code 80000003 (first chance) ntdll!LdrpDoDebuggerBreak+0x30: 00007ffa`0a3006b0 cc int 3
That's just WinDbg's break instruction exception (a.k.a. the int 3 instruction, opcode 0xCC).
According to this article, the executable part is in .text and .rodata. Is it possible to grab the bytes in .text, convert them to shellcode, and then inject it into a process?
It greatly depends on the executable! As long as the data isn't being executed as code and vice versa, it's gonna be fine.
After testing the same app on the same Samsung device updated to Android 16 (recently released for Samsung), I can confirm that Audio Focus requests now behave correctly, they are granted when the app is running a foreground service, even if it’s not the top activity.
This indicates the issue was specific to Samsung’s Android 15 firmware, not to Android 15 itself. On Pixel devices, AudioFocus worked as expected on both Android 15 and 16, consistent with Google’s behavior change documentation.
In short:
Samsung Android 15 bug: AudioFocus requests were incorrectly rejected when the app wasn’t in the foreground, even if it had a foreground service.
Fixed in Android 16: Behavior now matches Pixel and AOSP devices.
Older Samsung devices: Those that don’t receive Android 16 will likely continue to exhibit this bug.
document.querySelectorAll('button[aria-pressed="true"][aria-label="Viewed"]').forEach(btn => btn.click());
Updated command for GitHub's new 2025 UI.
I just got this number a couple of hours ago and it's been banned. What can I do so that I may start using Telegram again?
From the Google Cloud console, select your project, then in the top bar, search for buckets. You will see that you have one created. Enter it and you will obtain the list of .zip files, one for each deployment.
Well, the official GitHub documentation says they use a third party for language detection and code highlighting.
"We use Linguist to perform language detection and to select third-party grammars for syntax highlighting. You can find out which keywords are valid in the languages YAML file."
You may try to do the same thing.
Actually, I wonder how this page, Stack Overflow, does it, since the code you paste here is well highlighted.
You may think about how to install the third-party libraries and use them in your own project. My recommendation would be:
The most common and effective way to render Markdown with syntax highlighting (including for JSX) in a React application is to combine the react-markdown library with react-syntax-highlighter.
You're correct that Markdown itself doesn't highlight code; it just identifies code blocks. You may need to use a separate library to parse and style that code. react-syntax-highlighter is a popular choice because it bundles highlighting libraries like Prism and Highlight.js for easy use in React.
A useful example might be:
First, you need to install the necessary packages:
npm install react-markdown react-syntax-highlighter
# Optional, but recommended for GitHub-style markdown (tables, etc.)
npm install remark-gfm
Now, create a component that renders the Markdown. The key is to use the components prop in react-markdown to override the default renderer for code blocks.
import React from 'react';
import ReactMarkdown from 'react-markdown';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
// You can choose any theme you like
import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';
import remarkGfm from 'remark-gfm';
// The markdown string you want to render
const markdownString = `
Here's some regular text.
And here is a JSX code block:
\`\`\`jsx
import React from 'react';
function MyComponent() {
return (
<div className="container">
<h1>Hello, React!</h1>
</div>
);
}
\`\`\`
We also support inline \`code\` elements.
And other languages like JavaScript:
\`\`\`javascript
console.log('Hello, world!');
\`\`\`
`;
function MarkdownRenderer() {
return (
<ReactMarkdown
remarkPlugins={[remarkGfm]} // Adds GFM support
children={markdownString}
components={{
code(props) {
const { children, className, node, ...rest } = props;
const match = /language-(\w+)/.exec(className || '');
return match ? (
<SyntaxHighlighter
{...rest}
PreTag="div"
children={String(children).replace(/\n$/, '')}
language={match[1]} // e.g., 'jsx', 'javascript'
style={vscDarkPlus} // The theme to use
/>
) : (
<code {...rest} className={className}>
{children}
</code>
);
},
}}
/>
);
}
export default MarkdownRenderer;
# compare_icon_fmt.py
import cv2
import numpy as np
from dataclasses import dataclass
from typing import Tuple, List
# ===================== PARAMETERS & CONFIGURATION =====================
@dataclass
class RedMaskParams:
# Dual red HSV range: [0..10] U [170..180]
lower1: Tuple[int, int, int] = (0, 80, 50)
upper1: Tuple[int, int, int] = (10, 255, 255)
lower2: Tuple[int, int, int] = (170, 80, 50)
upper2: Tuple[int, int, int] = (180, 255, 255)
open_ksize: int = 3
close_ksize: int = 5
@dataclass
class CCParams:
dilate_ksize: int = 3
min_area: int = 150
max_area: int = 200000
aspect_min: float = 0.5
aspect_max: float = 2.5
pad: int = 2
@dataclass
class FMTParams:
hann: bool = True
eps: float = 1e-3
min_scale: float = 0.5
max_scale: float = 2.0
@dataclass
class MatchParams:
ncc_threshold: float = 0.45
canny_low: int = 60
canny_high: int = 120
# ===================== 1) LOAD & BINARIZE =====================
def load_and_binarize(path: str):
img_bgr = cv2.imread(path, cv2.IMREAD_COLOR)
if img_bgr is None:
raise FileNotFoundError(f"Không thể đọc ảnh: {path}")
rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
_, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
return img_bgr, rgb, binarized
# ===================== 2) TEMPLATE BIN + INVERT =====================
def binarize_and_invert_template(tpl_bgr):
tpl_gray = cv2.cvtColor(tpl_bgr, cv2.COLOR_BGR2GRAY)
_, tpl_bin = cv2.threshold(tpl_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
tpl_inv = cv2.bitwise_not(tpl_bin)
return tpl_bin, tpl_inv
# ===================== 3) RED MASK =====================
def red_mask_on_dashboard(dash_bgr, red_params: RedMaskParams):
hsv = cv2.cvtColor(dash_bgr, cv2.COLOR_BGR2HSV)
m1 = cv2.inRange(hsv, red_params.lower1, red_params.upper1)
m2 = cv2.inRange(hsv, red_params.lower2, red_params.upper2)
mask = cv2.bitwise_or(m1, m2)
if red_params.open_ksize > 0:
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (red_params.open_ksize,)*2)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
if red_params.close_ksize > 0:
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (red_params.close_ksize,)*2)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)
return mask
def apply_mask_to_binarized(binarized, mask):
return cv2.bitwise_and(binarized, binarized, mask=mask)
# ===================== 4) DILATE + CONNECTED COMPONENTS =====================
def find_candidate_boxes(masked_bin, cc_params: CCParams) -> List[Tuple[int,int,int,int]]:
k = cv2.getStructuringElement(cv2.MORPH_RECT, (cc_params.dilate_ksize,)*2)
dil = cv2.dilate(masked_bin, k, iterations=1)
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats((dil>0).astype(np.uint8), connectivity=8)
boxes = []
H, W = masked_bin.shape[:2]
for i in range(1, num_labels):
x, y, w, h, area = stats[i]
if area < cc_params.min_area or area > cc_params.max_area:
continue
aspect = w / (h + 1e-6)
if not (cc_params.aspect_min <= aspect <= cc_params.aspect_max):
continue
x0 = max(0, x - cc_params.pad)
y0 = max(0, y - cc_params.pad)
x1 = min(W, x + w + cc_params.pad)
y1 = min(H, y + h + cc_params.pad)
boxes.append((x0, y0, x1-x0, y1-y0))
return boxes
# ===================== 5) TIGHT-CROP THE TEMPLATE =====================
def tight_crop_template(tpl_inv):
cnts, _ = cv2.findContours(tpl_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if not cnts:
return tpl_inv
x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
return tpl_inv[y:y+h, x:x+w]
# ===================== 6) FOURIER–MELLIN (scale, rotation) =====================
def _fft_magnitude(img: np.ndarray, use_hann=True, eps=1e-3) -> np.ndarray:
if use_hann:
hann_y = cv2.createHanningWindow((img.shape[1], 1), cv2.CV_32F)
hann_x = cv2.createHanningWindow((1, img.shape[0]), cv2.CV_32F)
window = hann_x @ hann_y
img = img * window
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft, axes=(0,1))
mag = cv2.magnitude(dft_shift[:,:,0], dft_shift[:,:,1])
mag = np.log(mag + eps)
mag = cv2.normalize(mag, None, 0, 1, cv2.NORM_MINMAX)
return mag
def _log_polar(mag: np.ndarray) -> Tuple[np.ndarray, float]:
center = (mag.shape[1]//2, mag.shape[0]//2)
max_radius = min(center[0], center[1])
M = mag.shape[1] / np.log(max_radius + 1e-6)
lp = cv2.logPolar(mag, center, M, cv2.WARP_FILL_OUTLIERS + cv2.INTER_LINEAR)
return lp, M
def fourier_mellin_register(img_ref: np.ndarray, img_mov: np.ndarray, fmt_params: FMTParams):
a = cv2.normalize(img_ref.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
b = cv2.normalize(img_mov.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
amag = _fft_magnitude(a, use_hann=fmt_params.hann, eps=fmt_params.eps)
bmag = _fft_magnitude(b, use_hann=fmt_params.hann, eps=fmt_params.eps)
alp, M = _log_polar(amag)
blp, _ = _log_polar(bmag)
shift, response = cv2.phaseCorrelate(alp, blp)
# phaseCorrelate returns (shiftX, shiftY)
shiftX, shiftY = shift
cols = alp.shape[1]
scale = np.exp(shiftY / (M + 1e-9))
rotation = -360.0 * (shiftX / (cols + 1e-9))
scale = float(np.clip(scale, fmt_params.min_scale, fmt_params.max_scale))
rotation = float(((rotation + 180) % 360) - 180)
return scale, rotation, float(response)
def warp_template_by(scale: float, rotation_deg: float, tpl_gray: np.ndarray, target_size: Tuple[int, int]):
h, w = tpl_gray.shape[:2]
center = (w/2, h/2)
M = cv2.getRotationMatrix2D(center, rotation_deg, scale)
warped = cv2.warpAffine(tpl_gray, M, (w, h), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=0)
warped = cv2.resize(warped, (target_size[0], target_size[1]), interpolation=cv2.INTER_LINEAR)
return warped
# ===================== 7) MATCH SCORE (robust) =====================
def edge_preprocess(img_gray: np.ndarray, mp: MatchParams):
# CLAHE to guard against flat, low-contrast images
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
g = clahe.apply(img_gray)
edges = cv2.Canny(g, mp.canny_low, mp.canny_high)
# If there are too few edges -> use the gradient magnitude instead
if np.count_nonzero(edges) < 0.001 * edges.size:
gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
return mag
# Lightly dilate the edges
k = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
edges = cv2.dilate(edges, k, iterations=1)
return edges
def _nan_to_val(x: float, val: float = -1.0) -> float:
return float(val) if (x is None or (isinstance(x, float) and (x != x))) else float(x)
def ncc_score(scene: np.ndarray, templ: np.ndarray) -> float:
Hs, Ws = scene.shape[:2]
Ht, Wt = templ.shape[:2]
if Hs < Ht or Ws < Wt:
pad = np.zeros((max(Hs,Ht), max(Ws,Wt)), dtype=scene.dtype)
pad[:Hs,:Ws] = scene
scene = pad
# 1) TM_CCOEFF_NORMED
res = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
s1 = _nan_to_val(res.max())
# 2) Fallback: TM_CCORR_NORMED
s2 = -1.0
if s1 <= -0.5:
res2 = cv2.matchTemplate(scene, templ, cv2.TM_CCORR_NORMED)
s2 = _nan_to_val(res2.max())
# 3) Final fallback: IoU between the two binary masks
if s1 <= -0.5 and s2 <= 0:
t = templ
sc = scene
if sc.shape != t.shape:
sc = cv2.resize(sc, (t.shape[1], t.shape[0]), interpolation=cv2.INTER_NEAREST)
_, tb = cv2.threshold(t, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, sb = cv2.threshold(sc, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
inter = np.count_nonzero(cv2.bitwise_and(tb, sb))
union = np.count_nonzero(cv2.bitwise_or(tb, sb))
iou = inter / union if union > 0 else 0.0
return float(iou)
return max(s1, s2)
def thicken_binary(img: np.ndarray, ksize: int = 3, iters: int = 1) -> np.ndarray:
k = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize,ksize))
return cv2.dilate(img, k, iterations=iters)
# ===================== MAIN PIPELINE =====================
def find_icon_with_fmt(
dashboard_path: str,
template_path: str,
red_params=RedMaskParams(),
cc_params=CCParams(),
fmt_params=FMTParams(),
match_params=MatchParams(),
):
# 1) Dashboard: RGB + bin
dash_bgr, dash_rgb, dash_bin = load_and_binarize(dashboard_path)
# 2) Template: bin + invert
tpl_bgr = cv2.imread(template_path, cv2.IMREAD_COLOR)
if tpl_bgr is None:
raise FileNotFoundError(f"Không thể đọc template: {template_path}")
tpl_bin, tpl_inv = binarize_and_invert_template(tpl_bgr)
# 3) Red mask & apply it to the binarized dashboard image
redmask = red_mask_on_dashboard(dash_bgr, red_params)
dash_masked = apply_mask_to_binarized(dash_bin, redmask)
# 4) Dilate + connected components to get candidate boxes
boxes = find_candidate_boxes(dash_masked, cc_params)
# 5) Tight-crop the template & prepare a grayscale version
tpl_tight = tight_crop_template(tpl_inv)
tpl_tight_gray = cv2.GaussianBlur(tpl_tight, (3,3), 0)
# Edge preprocessing for the template
tpl_edges = edge_preprocess(tpl_tight_gray, match_params)
best = {
"score": -1.0,
"box": None,
"scale": None,
"rotation": None
}
dash_gray = cv2.cvtColor(dash_bgr, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in boxes:
roi = dash_gray[y:y+h, x:x+w]
if roi.size == 0 or w < 8 or h < 8:
continue
# Temporary resize for FMT
tpl_norm = cv2.resize(tpl_tight_gray, (w, h), interpolation=cv2.INTER_LINEAR)
roi_norm = cv2.resize(roi, (w, h), interpolation=cv2.INTER_LINEAR)
# 6) FMT estimate of scale/rotation (with fallback)
try:
scale, rotation, resp = fourier_mellin_register(tpl_norm, roi_norm, fmt_params)
except Exception:
scale, rotation, resp = 1.0, 0.0, 0.0
warped = warp_template_by(scale, rotation, tpl_tight_gray, target_size=(w, h))
# (optional) thicken the template edges
warped = thicken_binary(warped, ksize=3, iters=1)
# 7) Compute the match score on robust features
roi_feat = edge_preprocess(roi, match_params)
warped_feat = edge_preprocess(warped, match_params)
score = ncc_score(roi_feat, warped_feat)
if score > best["score"]:
best.update({
"score": score,
"box": (x, y, w, h),
"scale": scale,
"rotation": rotation
})
return {
"best_score": best["score"],
"best_box": best["box"], # (x, y, w, h) trên dashboard
"best_scale": best["scale"],
"best_rotation_deg": best["rotation"],
"pass": (best["score"] is not None and best["score"] >= match_params.ncc_threshold),
"num_candidates": len(boxes),
}
# ===================== EXAMPLE RUN =====================
if __name__ == "__main__":
# CHANGE THESE TWO PATHS FOR YOUR MACHINE
DASHBOARD = r"\Icon\dashboard.jpg"
TEMPLATE = r"\Icon\ID01.jpg"
result = find_icon_with_fmt(
dashboard_path=DASHBOARD,
template_path=TEMPLATE,
red_params=RedMaskParams(), # widen the red range if needed
cc_params=CCParams(min_area=60, max_area=120000, pad=3),
fmt_params=FMTParams(min_scale=0.6, max_scale=1.8),
match_params=MatchParams(ncc_threshold=0.55, canny_low=50, canny_high=130)
)
print("=== KẾT QUẢ ===")
for k, v in result.items():
print(f"{k}: {v}")
# Draw the best-match box for a quick check
if result["best_box"] is not None:
img = cv2.imread(DASHBOARD)
x, y, w, h = result["best_box"]
cv2.rectangle(img, (x,y), (x+w, y+h), (0,255,0), 2)
cv2.putText(img, f"NCC={result['best_score']:.2f}", (x, max(0,y-8)),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0,255,0), 2, cv2.LINE_AA)
cv2.imshow("Best match", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Hi, I am using this but it doesn't find the correct image. Please help me check it.
As discussed in the comments, the problem seems to be exclusive to my system. Very sorry for everyone's time wasted.
Edit: I cannot delete the post because there are other answers on it.
This code does not run:
import { Directive, ElementRef, HostListener } from '@angular/core';
@Directive({
selector: '[formatDate]', // This is the selector you will use in the HTML
standalone: true // Makes the directive standalone (no need to declare it in a module)
})
export class FormataDateDirective {
constructor(private el: ElementRef) {}
/**
* The HostListener listens for events on the host element (the <input>).
* We use the 'input' event because it captures every change,
* including typing, pasting, and deleting text.
* @param event The input event that was fired.
*/
@HostListener('input', ['$event'])
onInputChange(event: Event): void {
const inputElement = event.target as HTMLInputElement;
let inputValue = inputElement.value.replace(/\D/g, ''); // Remove everything that is not a digit
// Limit the input to 8 characters (DDMMYYYY)
if (inputValue.length > 8) {
inputValue = inputValue.slice(0, 8);
}
let formattedValue = '';
// Apply the DD/MM/YYYY formatting as the user types
if (inputValue.length > 0) {
formattedValue = inputValue.slice(0, 2);
}
if (inputValue.length > 2) {
formattedValue = `${inputValue.slice(0, 2)}/${inputValue.slice(2, 4)}`;
}
if (inputValue.length > 4) {
formattedValue = `${inputValue.slice(0, 2)}/${inputValue.slice(2, 4)}/${inputValue.slice(4, 8)}`;
}
// Update the input field's value
inputElement.value = formattedValue;
}
/**
* This listener handles the Backspace key press.
* It ensures that the slash (/) is removed together with the previous digit,
* providing a smoother user experience.
*/
@HostListener('keydown.backspace', ['$event'])
onBackspace(event: KeyboardEvent): void {
const inputElement = event.target as HTMLInputElement;
const currentValue = inputElement.value;
if (currentValue.endsWith('/') && currentValue.length > 0) {
// Remove the slash and the previous digit at once
inputElement.value = currentValue.slice(0, currentValue.length - 2);
// Prevent the default backspace behavior so it doesn't delete twice
event.preventDefault();
}
}
}
<main class="center"> <router-outlet></router-outlet> <input type="text" placeholder="DD/MM/AAAA" [formControl]="dateControl" formatDate maxlength="10"
</main>
Qwen2_5_VLProcessor is a processor class specifically designed for the Qwen 2.5 VL model, handling its unique preprocessing needs.
AutoProcessor is a generic factory that automatically loads the appropriate processor class (like Qwen2_5_VLProcessor) based on the model name or configuration.
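For illustration, a minimal sketch showing that AutoProcessor resolves to the model-specific class (the model id and the printed class name are assumptions based on the transformers docs, not part of the original answer):
from transformers import AutoProcessor

# AutoProcessor inspects the checkpoint's config and returns the matching
# processor class; for a Qwen 2.5 VL checkpoint that is Qwen2_5_VLProcessor.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(type(processor).__name__)  # expected: Qwen2_5_VLProcessor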