Fixed: it turns out you can't do that in Textual.
Add a list, then open it in Design View, then add example code to apply a filter from the cmb_ml combo box to the list:
IIf([Forms]![qc_moldsF9_partsListSelect]![cmb_ml] Is Not Null,[Forms]![qc_moldsF9_partsListSelect]![cmb_ml],[id_part])
You can read from a serial port with PHP on Windows using this library:
https://github.com/m0x3/php_comport
This is the only really working serial-port read on Windows with PHP that I have found.
You can't simply use an ELM327 to fully emulate an ECU. You need to decide which layer to emulate (the CAN bus vs the ELM327 AT-command layer) and build the interface so your tool reads from the emulated bus instead of the real one.
The php command
php artisan migrate
succeeds provided I do 2 things:
rename the service to mysql instead of db (in the .env file and docker-compose.yml)
add the --network <id> flag when connecting to the backend container's shell
Assuming you got your [goal weight day] set up, would this work?
If(([weight]<160) and ([goal weight day] is not null), First([day]) over ([person],[weight]))
I was experiencing this issue as well. It turned out there were some required parameters I was not sending from the backend, which caused the error to be raised.
I had a similar problem. What helped was removing the custom domain from my username.github.io repository (the user/organization site).
Suppose your number is in cell A2; then use the formula =IF(A2=0,0,MOD(A2-1,9)+1)
This reduces the sum of all digits to a single digit between 0 and 9 (the digital root).
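For reference, here is a minimal Python sketch of the same digital-root logic (the function name is made up for illustration):

def digital_root(n: int) -> int:
    # 0 stays 0; otherwise (n - 1) % 9 + 1 is the repeated digit sum
    return 0 if n == 0 else (n - 1) % 9 + 1

print(digital_root(12345))  # 1+2+3+4+5 = 15 -> 1+5 = 6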
Right-click -> Format Document?
Check if the server is overriding it.
Try building the project by running these commands:
npm run build
serve -s build
Then check by opening the build URL in the Safari browser.
I know this is an old thread, but there is still no "create_post" capability (I wonder why?) and I needed this functionality as well.
What I want: I create a single post for specific users with a custom role and then only let them edit that post.
This is what works for me:
'edit_posts' => false : will remove the ability to create posts, but also the ability to edit/update/delete
'edit_published_posts' => true : this will give back the ability to edit and update, but not to create new posts (so there will be no "Add post" button)
The whole function & hook:
function user_custom_roles() {
    remove_role('custom_role'); // needed to "reset" the role
    add_role(
        'custom_role',
        'Custom Role',
        array(
            'read' => true,
            'delete_posts' => false,
            'delete_published_posts' => false,
            'edit_posts' => false, // IMPORTANT
            'edit_published_posts' => true, // IMPORTANT
            'edit_others_pages' => false,
            'edit_others_posts' => false,
            'publish_pages' => false,
            'publish_posts' => false,
            'upload_files' => true,
            'unfiltered_html' => false
        )
    );
}
add_action('admin_init', 'user_custom_roles');
I see this question is 12 (!!!) years old, but I’ll add an answer anyway. I ran into the same confusion while reading Evans and Vernon and thought this might help others.
Like you, I was puzzled by:
1️⃣ Subdomains vs. Bounded Contexts
Subdomains are business-oriented concepts, bounded contexts are software-oriented. A subdomain represents an area of the business, for example, Sales in an e-commerce company (the classic example). Within that subdomain, you might have several bounded contexts: Product Catalog, Order Management, Pricing, etc. Each bounded context has its own model and ubiquitous language, consistent only within that context. As a matter of fact, the model and ubiquitous language are the concepts that, at the implementation level, define the boundary of a context (terms mean something different and/or are implemented in different ways depending on the context).
2️⃣ How they relate
In short: you can have multiple bounded contexts within one subdomain. To use a different analogy than the existing ones: subdomains are like thematic areas in an amusement park, while bounded contexts are the attractions within each area, each with its own design and mechanisms, but all expressing the same general theme.
3️⃣ In practice
In implementation, you mostly work within bounded contexts, since that’s where your code and model live. For example, in Python you might structure your project with one package per bounded context, each encapsulating its domain logic and data model.
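For example, a sketch of such a layout (the package names are made up for illustration):

ecommerce/
├── product_catalog/      # bounded context: its own model and ubiquitous language
├── order_management/     # another bounded context within the Sales subdomain
└── pricing/              # each package keeps its own domain logic and data model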
Another reason to keep the two concepts separate is that you may have a business rule that spans different bounded contexts and is implemented differently in each of them. For example (Sales again; I hate this domain, but here we are): "A customer cannot receive more than a 20% discount" is a rule of the Sales subdomain that, language-wise and model-wise, will be implemented differently in different bounded contexts (pricing, order management, etc.).
Also...
When planning development, discussions start at the subdomain level, aligning business capabilities and assigning teams. Those teams then define the bounded contexts and their corresponding ubiquitous languages.
The distinction between the two matters most at this strategic design stage: it helps large projects stay organised and prevents overlapping models and terminology from creating chaos.
If you mainly work on smaller or personal projects on your own (as I do), all this taxonomy may not seem that important at first, but (I guess) the advantage is clear to people who have witnessed projects collapse because of bad planning.
TLDR: a subdomain is a business area; a bounded context is a software model with its own ubiquitous language, and one subdomain can contain several bounded contexts.
thank you
sudo restorecon -rv /opt/tomcat
worked for me
The problem with missing options came from config/packages/doctrine.yaml
There, the standard config sets options that were removed in doctrine-orm v3, namely:
doctrine.dbal.use_savepoints
doctrine.orm.auto_generate_proxy_classes
doctrine.orm.enable_lazy_ghost_objects
doctrine.orm.report_fields_where_declared
Commenting out / removing those options resolved the issue.
Hopefully the package supplying those config options fixes this. In the meantime, manually editing the file seems to work.
Now it has changed; use the following in your source project:
Window -> Layouts -> Save Current Layout As New ...
And this in your destination project:
Window -> Layouts -> {Name you've given} -> Apply
Without quotes and for a file in directory ./files, launch the following command from the root directory where .git is placed:
git diff :!./files/file.txt
Once the bitmap preview is open, you can copy it (via cmd or [right click -> copy]) and then paste it to Preview app [Preview -> File -> New from clipboard] (if you use a Mac) or any image viewer of your choice. Then save it.
This issue is solved by running the published executable as admin. My Visual Studio always runs as admin; it turns out that makes a difference.
I am not sure why it matters. Maybe Windows Defender scans the executable by default while it runs, which makes it slower, or, as Guru Stron said, it has something to do with DOTNET_TC_QuickJitForLoops, but I haven't had time to test it any further.
Maybe when I have enough time to test, I will update my answer.
For now, I will close this issue.
How do I create something like the first answer, but for something else? There's a website I want to scrape, but I want to scrape for a specific src="specific url".
It looks like the best auto-calibration is a manual one. I used AI to create a script to adjust all the values of camera_matrix and dist_coeffs manually until I got the desired picture in the live preview.
If your organisation permits, you might be able to use LDAP to populate those fields:
VBA excel Getting information from active directory with the username based in cells
Turned out I forgot to add
app.html
<router-outlet></router-outlet>
and
app.ts
@Component({
imports: [RouterModule],
selector: 'app-root',
templateUrl: './app.html',
styleUrl: './app.scss',
})
export class App {
protected title = 'test-app';
}
In my case I had an email notification configured in /etc/mysql/mariadb.conf.d/60-galera.cnf
The process was hanging; after I removed it, the service restarted and the machine rebooted with no problem.
Hope it helps,
Let's add
@AutoConfigureMockMvc(addFilters = false)
to ImportControllerTest. By setting addFilters = false in @AutoConfigureMockMvc, you instruct Spring to disable the entire Security Filter Chain for the test. This allows the request to be routed directly to your ImportController, bypassing any potential misconfiguration in the auto-configured OAuth2 resource server setup that is preventing the dispatcher from finding the controller.
You can do this:
total = 0
while total <= 100:
    total += float(input("Write number: "))
Maybe this helps; for me it works nicely and sets the roughly 11 postgres processes to the cores I want on the CPU I want (multi-CPU server). It's part of a startup script that runs when the server restarts.
SET "POSTGRES_SERVICE_NAME=postgresql-x64-18"
:: --- CPU Affinity Masks ---
:: PostgreSQL: 7 physical cores on CPU 1 (logical processors 16-29)
SET "AFFINITY_POSTGRES=0x3FFF0000"
:: --- 1. Start PostgreSQL Service ---
echo [1/3] PostgreSQL Service
echo ---------------------------------------------------
echo Checking PostgreSQL service state...
sc query %POSTGRES_SERVICE_NAME% | find "STATE" | find "RUNNING" > nul
if %errorlevel% == 0 (
echo [OK] PostgreSQL is already RUNNING.
) else (
echo Starting PostgreSQL service...
net start %POSTGRES_SERVICE_NAME% >nul 2>&1
echo Waiting for PostgreSQL to initialize...
for /l %%i in (1,1,15) do (
timeout /t 1 /nobreak > nul
sc query %POSTGRES_SERVICE_NAME% | find "STATE" | find "RUNNING" > nul
if !errorlevel! == 0 (
goto :postgres_started
)
)
:: If we get here, timeout expired
echo [ERROR] PostgreSQL service failed to start within 15 seconds. Check logs.
pause & goto :eof
:postgres_started
echo [OK] PostgreSQL service started.
)
:: Wait a moment for all postgres.exe processes to spawn
echo Waiting for PostgreSQL processes to spawn...
timeout /t 3 /nobreak > nul
:: Apply affinity to ALL postgres.exe processes using PowerShell
echo Setting PostgreSQL affinity to %AFFINITY_POSTGRES%...
powershell -NoProfile -ExecutionPolicy Bypass -Command "$procs = Get-Process -Name postgres -ErrorAction SilentlyContinue; $count = 0; foreach($p in $procs) { try { $p.ProcessorAffinity = %AFFINITY_POSTGRES%; $count++ } catch {} }; Write-Host \" [OK] Affinity set for $count postgres.exe processes.\" -ForegroundColor Green"
I am guessing you mean CSS. Here is the correct code:
#list_label {
font-size:20px;
}
Here are the docs for font-size:
Sorry for the late reply, but a good place to ask DQL-related questions would be the Dynatrace community: https://community.dynatrace.com
I think for this use case there is the KVP (Key Value Pairs) command, which automatically parses key-value pairs so you can then access all keys and values. Here, for instance, is a discussion on that topic: https://community.dynatrace.com/t5/DQL/Log-processing-rule-for-each-item-in-json-array-split-on-quot/m-p/220181
In 2025 the properties from the other posts didn't work or no longer exist.
A simple workaround for me was just disabling hovering on the whole canvas HTML element via CSS:
canvas {
    /* disable all the hover effects */
    pointer-events: none;
}
If you're familiar with Django template syntax, github.com/peedief/template-engine is a very good option.
output = string.replace(fragment, "*", 1).replace(fragment, "").replace("*", fragment)
If needed, replace "*" with some token string that would never occur in your original string.
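A quick usage example to make the behavior concrete (the sample string and fragment below are made up):

string = "ab-cd-ab-ef-ab"
fragment = "ab"
# Mark the first occurrence, drop the rest, then restore the mark.
output = string.replace(fragment, "*", 1).replace(fragment, "").replace("*", fragment)
print(output)  # ab-cd--ef-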
"Batteries included" doesn't mean that you can do everything you want with single built-in function call.
Based on @domi's comment, I have added this line to the end of the command and it worked fine.
Ignore the Suggestions matching public code (duplication detection filter)
If you're looking for a reliable tool to pretty-print and format JSON content, one of the best options is the command-line utility jq, which is described in this Stack Overflow thread: "JSON command line formatter tool for Linux"
I had the same issue and found a convenient way to globally configure this and packaged it into a htmx extension, you can find it here: https://github.com/fchtngr/htmx-ext-alpine-interop
I've accidentally passed a const argument. Doesn't seem to be the issue in your case though.
Follow this inComplete guide repo to install and set up Jupyter Notebook on Termux (Android 13+).
ls -v *.txt | cat -n | while read i f; do mv $f $(printf "%04d.txt" $i); done
I tested this locally with Spring Boot 3.4.0 on Java 25 using Gradle 9.1.0 and the app failed to start with the same error you mentioned. This happens because the ASM library embedded in Spring Framework 6.2.0 (used by 3.4.0) doesn’t support Java 25 class files.
When I upgraded to Spring Boot 3.4.10 (the latest patch in the 3.4.x line), the same app ran fine on Java 25.
It looks like a patch-level issue: early 3.4.x releases didn't fully support Java 25, but the latest patch fixed the ASM support.
What you can do is one of the following:
Upgrade to Spring Boot 3.4.10 (if you want to stay on 3.4.x).
Upgrade to Spring Boot 3.5.x, which fully supports Java 25.
Either option works fine on Java 25.
Pedro Piñera helped answer this here, thanks!
Basically Tuist sets a default version in the generated projects here https://github.com/tuist/tuist/blob/88b57c1ac77dac2a8df7e45a0a59ef4a7ca494e9/cli/Sources/TuistGenerator/Generator/ProjectDescriptorGenerator.swift#L188
which is not configurable as of now.
I have a similar kind of issue, where the page splits unnecessarily.
I have three components: a header, a title, and a chart using Chart.js. The issue is that the header and title appear on the first page and the chart goes to the second page, leaving the rest of the first page blank. What else can I do here? It works fine when the chart data fits within the first page.
Can somebody please help me fix this issue?
Here is the code
<div className="chart-container">
<div className="d-flex justify-content-between">
<label className="chart-title m-2">{props.title}</label>
</div>
{data.length == 0
? <div className="no-data-placeholder">
<span>No Data Found!</span>
</div>
: <div id={props.elementID} style={props.style}></div>
}
</div>
Since ngx-image-cropper adjusts the image to fit the crop area, zooming out scales the image instead of keeping its original size. The maintainAspectRatio or transform settings should be used.
You could also set your conditions without AssertJ and then just verify the boolean value with AssertJ.
Like this:
boolean result = list.stream()
.anyMatch(element -> element.matches(regex) || element.equals(specificString));
assertThat(result).isTrue();
It's probably ...Edit Scheme...->Run->Diagnostics->API Validation. Uncheck this and give it a try.
I know this is an old post, but if you're here from a "Annex B vs AVCC" search, I thought it would be worth adding another opinion, because what I believe to be the most important reason to use Annex B has not been mentioned.
@VC.One has already provided some technical information about each of the formats, so I will try not to repeat that.
I wonder in which case we should use Annex-B
To answer your question directly, the Annex-B start codes allow a decoder to synchronise to a stream that is already being transmitted, like a UDP broadcast or a wireless terrestrial TV broadcast. The start codes also allow the decoder to re-synchronise after a corruption in the media transport.
AVCC does not have a recovery mechanism, so cannot be used for purposes like I describe above.
To be clear, each of the formats have practical advantages and disadvantages.
Neither is "better" - they have different goals.
The comparison of these formats is similar to MPEG-TS vs MPEG-PS.
Transport stream (-TS) can be recovered if the stream is corrupted by an unreliable transport.
Program stream (-PS) is more compact and easier to parse, but has no recovery mechanism, so only use it with reliable transports.
For those parsing NALUs out of a byte stream that is stored on disk, you might reasonably question why you are searching for start codes in a file on disk, when you could be using a format that tells you the atom sizes before you parse them. Disk storage is reliable. So is TCP transmission. Favour AVCC in these contexts, if it is convenient to do so.
However, keep in mind that constructing the box structures in AVCC is more complex than just dropping start codes between each NALU, so recording from a live source is much simpler with Annex B. Apart from the additional complexity, recording directly to AVCC is also more prone to corruption if it is interrupted, because that format requires that the location of each of the frame boxes is in an index (in moov boxes) that you can only write retrospectively when you're streaming live video to disk. If your recording process is interrupted (crash, power loss, etc.), you will need some repair process to fix the broken recording (parsing the box structures for frames and building the moov atom). An interrupted Annex B recording, however, will only suffer a single broken frame in the same scenario.
So my message is "horses for courses".
Choose the one that suits your acquisition/recording/reconstruction needs best.
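As a footnote to the point about parsing NALUs out of a raw byte stream, here is a minimal Python sketch (my own illustration, not tied to any particular library) of locating Annex B start codes, handling both the 3-byte and 4-byte forms:

def iter_nalus(data: bytes):
    # Collect (start-code offset, payload offset) for every 00 00 01 / 00 00 00 01 marker.
    marks = []
    i = 0
    while (i := data.find(b"\x00\x00\x01", i)) != -1:
        start = i - 1 if i > 0 and data[i - 1] == 0 else i  # 4-byte form has a leading zero
        marks.append((start, i + 3))
        i += 3
    for n, (_, payload) in enumerate(marks):
        end = marks[n + 1][0] if n + 1 < len(marks) else len(data)
        yield data[payload:end]  # one NALU payload per start code

stream = b"\x00\x00\x00\x01\x67\x42\x00\x1f\x00\x00\x01\x68\xce\x06\xe2"
print([nalu.hex() for nalu in iter_nalus(stream)])  # ['6742001f', '68ce06e2']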
You are trying to run the command on a generic notebook as a generic pyspark import.
The pipeline module can be accessed only within a context of a pipeline.
Please refer to this documentation for clarity:
https://docs.databricks.com/aws/en/ldp/developer/python-ref/#gsc.tab=0
Currently I'm not allowed to add/reply to comments, so I'll just post an individual answer.
For macOS, the solution is the same as Bhavin Panara's solution; the directory is
/Users/(YourUser)/Library/Unity/cache/packages/packages.unity.com
You can use
from datetime import datetime
datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
(note that %f is supported by datetime.strftime rather than time.strftime)
Can't you just look at the JS code of the router page and see what requests it sends?
I am stuck when I have to submit, where they asked if I'm an android.
(17f0.a4c): Break instruction exception - code 80000003 (first chance) ntdll!LdrpDoDebuggerBreak+0x30: 00007ffa`0a3006b0 cc int 3
That's just WinDbg's break-instruction exception (a.k.a. the int 3 opcode, 0xCC).
According to this article, the executable part is in .text. According to this article, the executable part is in .text and .rodata. Is it possible to grab the bytes in .text and convert them to shellcode, then inject it into a process?
It greatly depends on the executable! As long as the data isn't being executed as code and vice versa, it's gonna be fine.
After testing the same app on the same Samsung device updated to Android 16 (recently released for Samsung), I can confirm that Audio Focus requests now behave correctly: they are granted when the app is running a foreground service, even if it's not the top activity.
This indicates the issue was specific to Samsung’s Android 15 firmware, not to Android 15 itself. On Pixel devices, AudioFocus worked as expected on both Android 15 and 16, consistent with Google’s behavior change documentation.
In short:
Samsung Android 15 bug: AudioFocus requests were incorrectly rejected when the app wasn’t in the foreground, even if it had a foreground service.
Fixed in Android 16: Behavior now matches Pixel and AOSP devices.
Older Samsung devices: Those that don’t receive Android 16 will likely continue to exhibit this bug.
document.querySelectorAll('button[aria-pressed="true"][aria-label="Viewed"]').forEach(btn => btn.click());
Updated command for GitHub's new 2025 UI.
I just got this number a couple of hours ago and it's been banned. What can I do so that I may start using Telegram again?
From the Google Cloud console, select your project, then in the top bar, search for buckets. You will see that you have one created. Enter it and you will obtain the list of .zip files, one for each deployment.
Well, the official GitHub documentation says they use a third party for language detection and code highlighting.
"We use Linguist to perform language detection and to select third-party grammars for syntax highlighting. You can find out which keywords are valid in the languages YAML file."
You may try to do the same thing.
Actually, I wonder how this page, Stack Overflow, does it, since the code you paste here is well highlighted.
You may think about how to install the third-party libraries and use them in your own project. My recommendation would be the following:
The most common and effective way to render Markdown with syntax highlighting (including for JSX) in a React application is to combine the react-markdown library with react-syntax-highlighter.
You're correct that Markdown itself doesn't highlight code; it just identifies code blocks. You may need to use a separate library to parse and style that code. react-syntax-highlighter is a popular choice because it bundles highlighting libraries like Prism and Highlight.js for easy use in React.
A useful example might be:
First, you need to install the necessary packages:
npm install react-markdown react-syntax-highlighter
# Optional, but recommended for GitHub-style markdown (tables, etc.)
npm install remark-gfm
Now, create a component that renders the Markdown. The key is to use the components prop in react-markdown to override the default renderer for code blocks.
import React from 'react';
import ReactMarkdown from 'react-markdown';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
// You can choose any theme you like
import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';
import remarkGfm from 'remark-gfm';
// The markdown string you want to render
const markdownString = `
Here's some regular text.
And here is a JSX code block:
\`\`\`jsx
import React from 'react';
function MyComponent() {
return (
<div className="container">
<h1>Hello, React!</h1>
</div>
);
}
\`\`\`
We also support inline \`code\` elements.
And other languages like JavaScript:
\`\`\`javascript
console.log('Hello, world!');
\`\`\`
`;
function MarkdownRenderer() {
return (
<ReactMarkdown
remarkPlugins={[remarkGfm]} // Adds GFM support
children={markdownString}
components={{
code(props) {
const { children, className, node, ...rest } = props;
const match = /language-(\w+)/.exec(className || '');
return match ? (
<SyntaxHighlighter
{...rest}
PreTag="div"
children={String(children).replace(/\n$/, '')}
language={match[1]} // e.g., 'jsx', 'javascript'
style={vscDarkPlus} // The theme to use
/>
) : (
<code {...rest} className={className}>
{children}
</code>
);
},
}}
/>
);
}
export default MarkdownRenderer;
# compare_icon_fmt.py
import cv2
import numpy as np
from dataclasses import dataclass
from typing import Tuple, List
# ===================== PARAMETERS & CONFIGURATION =====================
@dataclass
class RedMaskParams:
    # Dual red HSV range: [0..10] U [170..180]
lower1: Tuple[int, int, int] = (0, 80, 50)
upper1: Tuple[int, int, int] = (10, 255, 255)
lower2: Tuple[int, int, int] = (170, 80, 50)
upper2: Tuple[int, int, int] = (180, 255, 255)
open_ksize: int = 3
close_ksize: int = 5
@dataclass
class CCParams:
dilate_ksize: int = 3
min_area: int = 150
max_area: int = 200000
aspect_min: float = 0.5
aspect_max: float = 2.5
pad: int = 2
@dataclass
class FMTParams:
hann: bool = True
eps: float = 1e-3
min_scale: float = 0.5
max_scale: float = 2.0
@dataclass
class MatchParams:
ncc_threshold: float = 0.45
canny_low: int = 60
canny_high: int = 120
# ===================== 1) LOAD & BINARIZE =====================
def load_and_binarize(path: str):
img_bgr = cv2.imread(path, cv2.IMREAD_COLOR)
if img_bgr is None:
        raise FileNotFoundError(f"Cannot read image: {path}")
rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
_, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
return img_bgr, rgb, binarized
# ===================== 2) TEMPLATE BIN + INVERT =====================
def binarize_and_invert_template(tpl_bgr):
tpl_gray = cv2.cvtColor(tpl_bgr, cv2.COLOR_BGR2GRAY)
_, tpl_bin = cv2.threshold(tpl_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
tpl_inv = cv2.bitwise_not(tpl_bin)
return tpl_bin, tpl_inv
# ===================== 3) RED MASK =====================
def red_mask_on_dashboard(dash_bgr, red_params: RedMaskParams):
hsv = cv2.cvtColor(dash_bgr, cv2.COLOR_BGR2HSV)
m1 = cv2.inRange(hsv, red_params.lower1, red_params.upper1)
m2 = cv2.inRange(hsv, red_params.lower2, red_params.upper2)
mask = cv2.bitwise_or(m1, m2)
if red_params.open_ksize > 0:
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (red_params.open_ksize,)*2)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
if red_params.close_ksize > 0:
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (red_params.close_ksize,)*2)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)
return mask
def apply_mask_to_binarized(binarized, mask):
return cv2.bitwise_and(binarized, binarized, mask=mask)
# ===================== 4) DILATE + CONNECTED COMPONENTS =====================
def find_candidate_boxes(masked_bin, cc_params: CCParams) -> List[Tuple[int,int,int,int]]:
k = cv2.getStructuringElement(cv2.MORPH_RECT, (cc_params.dilate_ksize,)*2)
dil = cv2.dilate(masked_bin, k, iterations=1)
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats((dil>0).astype(np.uint8), connectivity=8)
boxes = []
H, W = masked_bin.shape[:2]
for i in range(1, num_labels):
x, y, w, h, area = stats[i]
if area < cc_params.min_area or area > cc_params.max_area:
continue
aspect = w / (h + 1e-6)
if not (cc_params.aspect_min <= aspect <= cc_params.aspect_max):
continue
x0 = max(0, x - cc_params.pad)
y0 = max(0, y - cc_params.pad)
x1 = min(W, x + w + cc_params.pad)
y1 = min(H, y + h + cc_params.pad)
boxes.append((x0, y0, x1-x0, y1-y0))
return boxes
# ===================== 5) TIGHT-CROP TEMPLATE =====================
def tight_crop_template(tpl_inv):
cnts, _ = cv2.findContours(tpl_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if not cnts:
return tpl_inv
x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
return tpl_inv[y:y+h, x:x+w]
# ===================== 6) FOURIER–MELLIN (scale, rotation) =====================
def _fft_magnitude(img: np.ndarray, use_hann=True, eps=1e-3) -> np.ndarray:
if use_hann:
hann_y = cv2.createHanningWindow((img.shape[1], 1), cv2.CV_32F)
hann_x = cv2.createHanningWindow((1, img.shape[0]), cv2.CV_32F)
window = hann_x @ hann_y
img = img * window
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft, axes=(0,1))
mag = cv2.magnitude(dft_shift[:,:,0], dft_shift[:,:,1])
mag = np.log(mag + eps)
mag = cv2.normalize(mag, None, 0, 1, cv2.NORM_MINMAX)
return mag
def _log_polar(mag: np.ndarray) -> Tuple[np.ndarray, float]:
center = (mag.shape[1]//2, mag.shape[0]//2)
max_radius = min(center[0], center[1])
M = mag.shape[1] / np.log(max_radius + 1e-6)
lp = cv2.logPolar(mag, center, M, cv2.WARP_FILL_OUTLIERS + cv2.INTER_LINEAR)
return lp, M
def fourier_mellin_register(img_ref: np.ndarray, img_mov: np.ndarray, fmt_params: FMTParams):
a = cv2.normalize(img_ref.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
b = cv2.normalize(img_mov.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
amag = _fft_magnitude(a, use_hann=fmt_params.hann, eps=fmt_params.eps)
bmag = _fft_magnitude(b, use_hann=fmt_params.hann, eps=fmt_params.eps)
alp, M = _log_polar(amag)
blp, _ = _log_polar(bmag)
shift, response = cv2.phaseCorrelate(alp, blp)
    # phaseCorrelate returns (shiftX, shiftY)
shiftX, shiftY = shift
cols = alp.shape[1]
scale = np.exp(shiftY / (M + 1e-9))
rotation = -360.0 * (shiftX / (cols + 1e-9))
scale = float(np.clip(scale, fmt_params.min_scale, fmt_params.max_scale))
rotation = float(((rotation + 180) % 360) - 180)
return scale, rotation, float(response)
def warp_template_by(scale: float, rotation_deg: float, tpl_gray: np.ndarray, target_size: Tuple[int, int]):
h, w = tpl_gray.shape[:2]
center = (w/2, h/2)
M = cv2.getRotationMatrix2D(center, rotation_deg, scale)
warped = cv2.warpAffine(tpl_gray, M, (w, h), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=0)
warped = cv2.resize(warped, (target_size[0], target_size[1]), interpolation=cv2.INTER_LINEAR)
return warped
# ===================== 7) MATCH SCORE (robust) =====================
def edge_preprocess(img_gray: np.ndarray, mp: MatchParams):
    # CLAHE to counteract flat (low-contrast) images
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
g = clahe.apply(img_gray)
edges = cv2.Canny(g, mp.canny_low, mp.canny_high)
    # If there are too few edges -> use the gradient magnitude
if np.count_nonzero(edges) < 0.001 * edges.size:
gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
return mag
    # Slightly dilate the edges
k = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
edges = cv2.dilate(edges, k, iterations=1)
return edges
def _nan_to_val(x: float, val: float = -1.0) -> float:
return float(val) if (x is None or (isinstance(x, float) and (x != x))) else float(x)
def ncc_score(scene: np.ndarray, templ: np.ndarray) -> float:
Hs, Ws = scene.shape[:2]
Ht, Wt = templ.shape[:2]
if Hs < Ht or Ws < Wt:
pad = np.zeros((max(Hs,Ht), max(Ws,Wt)), dtype=scene.dtype)
pad[:Hs,:Ws] = scene
scene = pad
# 1) TM_CCOEFF_NORMED
res = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
s1 = _nan_to_val(res.max())
# 2) Fallback: TM_CCORR_NORMED
s2 = -1.0
if s1 <= -0.5:
res2 = cv2.matchTemplate(scene, templ, cv2.TM_CCORR_NORMED)
s2 = _nan_to_val(res2.max())
    # 3) Final fallback: IoU between the two binary masks
if s1 <= -0.5 and s2 <= 0:
t = templ
sc = scene
if sc.shape != t.shape:
sc = cv2.resize(sc, (t.shape[1], t.shape[0]), interpolation=cv2.INTER_NEAREST)
_, tb = cv2.threshold(t, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, sb = cv2.threshold(sc, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
inter = np.count_nonzero(cv2.bitwise_and(tb, sb))
union = np.count_nonzero(cv2.bitwise_or(tb, sb))
iou = inter / union if union > 0 else 0.0
return float(iou)
return max(s1, s2)
def thicken_binary(img: np.ndarray, ksize: int = 3, iters: int = 1) -> np.ndarray:
k = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize,ksize))
return cv2.dilate(img, k, iterations=iters)
# ===================== MAIN PIPELINE =====================
def find_icon_with_fmt(
dashboard_path: str,
template_path: str,
red_params=RedMaskParams(),
cc_params=CCParams(),
fmt_params=FMTParams(),
match_params=MatchParams(),
):
# 1) Dashboard: RGB + bin
dash_bgr, dash_rgb, dash_bin = load_and_binarize(dashboard_path)
# 2) Template: bin + invert
tpl_bgr = cv2.imread(template_path, cv2.IMREAD_COLOR)
if tpl_bgr is None:
        raise FileNotFoundError(f"Cannot read template: {template_path}")
tpl_bin, tpl_inv = binarize_and_invert_template(tpl_bgr)
    # 3) Red filtering & apply the mask to the binarized dashboard
redmask = red_mask_on_dashboard(dash_bgr, red_params)
dash_masked = apply_mask_to_binarized(dash_bin, redmask)
    # 4) Dilate + find connected components to get candidate boxes
boxes = find_candidate_boxes(dash_masked, cc_params)
    # 5) Tight-crop the template & prepare a grayscale version
tpl_tight = tight_crop_template(tpl_inv)
tpl_tight_gray = cv2.GaussianBlur(tpl_tight, (3,3), 0)
    # Edge preprocessing for the template
tpl_edges = edge_preprocess(tpl_tight_gray, match_params)
best = {
"score": -1.0,
"box": None,
"scale": None,
"rotation": None
}
dash_gray = cv2.cvtColor(dash_bgr, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in boxes:
roi = dash_gray[y:y+h, x:x+w]
if roi.size == 0 or w < 8 or h < 8:
continue
        # Temporary resize for FMT
tpl_norm = cv2.resize(tpl_tight_gray, (w, h), interpolation=cv2.INTER_LINEAR)
roi_norm = cv2.resize(roi, (w, h), interpolation=cv2.INTER_LINEAR)
        # 6) FMT estimates scale/rotation (with fallback)
try:
scale, rotation, resp = fourier_mellin_register(tpl_norm, roi_norm, fmt_params)
except Exception:
scale, rotation, resp = 1.0, 0.0, 0.0
warped = warp_template_by(scale, rotation, tpl_tight_gray, target_size=(w, h))
        # (optional) thicken the template edges
warped = thicken_binary(warped, ksize=3, iters=1)
        # 7) Compute the match score on robust features
roi_feat = edge_preprocess(roi, match_params)
warped_feat = edge_preprocess(warped, match_params)
score = ncc_score(roi_feat, warped_feat)
if score > best["score"]:
best.update({
"score": score,
"box": (x, y, w, h),
"scale": scale,
"rotation": rotation
})
return {
"best_score": best["score"],
"best_box": best["box"], # (x, y, w, h) trên dashboard
"best_scale": best["scale"],
"best_rotation_deg": best["rotation"],
"pass": (best["score"] is not None and best["score"] >= match_params.ncc_threshold),
"num_candidates": len(boxes),
}
# ===================== EXAMPLE RUN =====================
if __name__ == "__main__":
    # CHANGE THESE TWO PATHS FOR YOUR MACHINE
DASHBOARD = r"\Icon\dashboard.jpg"
TEMPLATE = r"\Icon\ID01.jpg"
result = find_icon_with_fmt(
dashboard_path=DASHBOARD,
template_path=TEMPLATE,
        red_params=RedMaskParams(), # widen the red range if needed
cc_params=CCParams(min_area=60, max_area=120000, pad=3),
fmt_params=FMTParams(min_scale=0.6, max_scale=1.8),
match_params=MatchParams(ncc_threshold=0.55, canny_low=50, canny_high=130)
)
print("=== KẾT QUẢ ===")
for k, v in result.items():
print(f"{k}: {v}")
    # Draw the best-match box for a quick check
if result["best_box"] is not None:
img = cv2.imread(DASHBOARD)
x, y, w, h = result["best_box"]
cv2.rectangle(img, (x,y), (x+w, y+h), (0,255,0), 2)
cv2.putText(img, f"NCC={result['best_score']:.2f}", (x, max(0,y-8)),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0,255,0), 2, cv2.LINE_AA)
cv2.imshow("Best match", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Hi, I am using this but it doesn't find the correct image. Please help me check.
As discussed in the comments, the problem seems to be exclusive to my system. Very sorry for everyone's time wasted.
Edit: I cannot delete the post because there are other answers on it.
The code does not run:
import { Directive, ElementRef, HostListener } from '@angular/core';
@Directive({
  selector: '[formatDate]', // This is the selector you will use in the HTML
  standalone: true // Makes the directive standalone (it does not need to be declared in a module)
})
export class FormataDateDirective {
constructor(private el: ElementRef) {}
/**
   * The HostListener listens for events on the host element (the <input>).
   * We use the 'input' event because it captures every change,
   * including typing, pasting, and deleting text.
   * @param event The input event that was fired.
*/
@HostListener('input', ['$event'])
onInputChange(event: Event): void {
const inputElement = event.target as HTMLInputElement;
    let inputValue = inputElement.value.replace(/\D/g, ''); // Remove everything that is not a digit
    // Limit the input to 8 characters (DDMMYYYY)
if (inputValue.length > 8) {
inputValue = inputValue.slice(0, 8);
}
let formattedValue = '';
    // Apply DD/MM/YYYY formatting as the user types
if (inputValue.length > 0) {
formattedValue = inputValue.slice(0, 2);
}
if (inputValue.length > 2) {
formattedValue = `${inputValue.slice(0, 2)}/${inputValue.slice(2, 4)}`;
}
if (inputValue.length > 4) {
formattedValue = `${inputValue.slice(0, 2)}/${inputValue.slice(2, 4)}/${inputValue.slice(4, 8)}`;
}
    // Update the input field's value
inputElement.value = formattedValue;
}
/**
   * This listener handles the Backspace key press.
   * It ensures the slash (/) is removed together with the preceding digit,
   * providing a smoother user experience.
*/
@HostListener('keydown.backspace', ['$event'])
onBackspace(event: KeyboardEvent): void {
const inputElement = event.target as HTMLInputElement;
const currentValue = inputElement.value;
if (currentValue.endsWith('/') && currentValue.length > 0) {
      // Remove the slash and the preceding digit in one go
inputElement.value = currentValue.slice(0, currentValue.length - 2);
      // Prevent the default backspace behavior so it doesn't delete twice
event.preventDefault();
}
}
}
<main class="center"> <router-outlet></router-outlet> <input type="text" placeholder="DD/MM/AAAA" [formControl]="dateControl" formatDate maxlength="10"
</main>
Qwen2_5_VLProcessor is a processor class specifically designed for the Qwen 2.5 VL model, handling its unique preprocessing needs.
AutoProcessor is a generic factory that automatically loads the appropriate processor class (like Qwen2_5_VLProcessor) based on the model name or configuration.
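A minimal sketch of the difference, assuming a recent transformers release that ships the Qwen 2.5 VL classes and the public Qwen/Qwen2.5-VL-7B-Instruct checkpoint:

from transformers import AutoProcessor, Qwen2_5_VLProcessor

# AutoProcessor reads the checkpoint config and returns the matching processor class...
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(type(processor).__name__)  # resolves to Qwen2_5_VLProcessor

# ...which is equivalent to instantiating the model-specific class directly.
explicit = Qwen2_5_VLProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")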
For Private Bytes and Working Set, read this answer.
Heap size is the size of the Managed Heap. If your code constructs a new instance of a class, it will be allocated on the Managed Heap. For a better understanding, read about the stack-and-heap memory model, which is a fundamental basis of modern computers.
Difference between Private Bytes and Heap Size doesn't represent anything. If your program loads a lot of static/dynamic libraries, the memory needed to load them would not be in Heap and thus Heap Size would not grow. If your program loads lots of data and constructs truckloads of class instances to store it on memory, they will be in Heap and thus Heap Size would grow.
Check Maven Surefire Plugin Configuration
Even though you have JUnit 5 dependencies, the maven-surefire-plugin might need explicit configuration to recognize JUnit 5 tests. Add the following plugin configuration to your pom.xml (within the <build><plugins> section):
Your line 3 contains unbalanced square brackets:
if [[ "$my_error_flag"=="1" || "$my_error_flag_o"=="2" ]
should be
if [[ "$my_error_flag" == "1" || "$my_error_flag_o" == "2" ]]
Note I also added spaces around ==.
"Process finished with exit code 0" is notification that your script successfully finished.
your out put is this one.
<!DOCTYPE html><html><head>
<title>Example Domain</title>
<meta charset="utf-8">
<meta http-equiv="Content-type" content="text/html; charset=utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style type="text/css">
body {
background-color: #f0f0f2;
margin: 0;
padding: 0;
font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
}
div {
width: 600px;
margin: 5em auto;
padding: 2em;
background-color: #fdfdff;
border-radius: 0.5em;
box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
}
a:link, a:visited {
color: #38488f;
text-decoration: none;
}
@media (max-width: 700px) {
div {
margin: 0 auto;
width: auto;
}
}
</style>
</head>
<body>
<div>
<h1>Example Domain</h1>
<p>This domain is for use in illustrative examples in documents. You may use this
domain in literature without prior coordination or asking for permission.</p>
<p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>
</body></html>
Seems that it is a common issue, so please vote at https://github.com/angular/angular/issues/64553
To put forward a case to OpenText that SQLBase support for the Entity Framework would be a popular and worthwhile modernization, I put together a poll; if the results are favorable I will send the outcome to the OpenText Director of Product Management & Portfolio Products as an idea for their future SQLBase roadmap.
If you use SQLBase, or would consider using it if it supported the .NET Entity Framework, vote or comment here:
OpenText SQLBase Entity Framework support.
The Poll is active for 3 months.
Note that I am just a simple SQLBase end user, not affiliated with OpenText or Gupta in any way, trying to promote some ideas to get this great database management system more up to date.
There is another possible reason: the VS Code workspace has too many folders imported; highlighting works better when fewer folders are imported. I think the error happens because VS Code doesn't know where the lib really comes from, so it scans all the folders to find it, which can overload VS Code when there are too many folders.
Sorry, my English is not good.
Please send me the code for sending text_tabs in one template; this doesn't work:
'template_roles' => [
$this->client->templateRole([
'email' => '[email protected]',
'name' => 'ONELI',
'role_name' => 'Cliente',
'tabs' => [
'sign_here_tabs' => [
$this->client->signHere([
'x_position' => '277',
'y_position' => '431',
'document_id' => '1',
'page_number' => '1',
'recipient_id' => '57777797'
])
], 'text_tabs'=>[
[
'value' => 'mycallleeee6',
'tabLabel' => 'Calle',
]
]
]
])
]
Declare context: Context as a function parameter and call it in the activity with this passed in, or use LocalContext.current inside the composable function.
I made the Parall app, which creates a bundle shortcut to your selected app.
This way you can start two instances of an app and pin two shortcuts to your Dock.
You can find the app in Mac App Store, more info here https://parall.app
You could create the subfolder2 directly in one step like this:
@echo off
if not exist "subfolder1\subfolder2" md "subfolder1\subfolder2"
set /p UserData=Write some text here:
@echo %UserData% >> "subfolder1\subfolder2\TheDataIsHere.txt"
type subfolder1\subfolder2\TheDataIsHere.txt
When I dealt with a similar issue, my solution was to not run the SSH command with -N, but instead send the command bash -c 'echo "Connected" && while true ; do sleep 1; done'. Once connected, it would write "Connected" to STDOUT and then sleep forever; my outer script could just watch SSH's STDOUT for that string (or, indeed, any string).
Right-click and open it in a browser. Otherwise, convert the PDF to an HTML5 page; when the latter is accessed, it can be viewed immediately without a memory-hogging application.
After lots of digging, trying all kinds of AIs, asking friends, and blaming my image-loading code, I finally found out that I could just set the garbage collector to be as aggressive as possible:
GC.Collect(2, GCCollectionMode.Aggressive, true, true);
Why?
Since images are usually somewhat big, they are put into the Large Object Heap.
Garbage collection usually tries to leave some LOH memory allocated to use as a cache for future objects (as far as I know, feel free to correct me!), but in my case this was not necessary, and by setting the GC to aggressive I could get rid of all the wasted memory and drastically reduce RAM usage!
It took me a really long time to figure this out, and I am really glad that I found out.
If you are having similar problems with large piles of unused memory, try making the GC collect the different generations in aggressive mode, even if it's not images but byte arrays in general.
Use OneSignal if you want a simple setup and free tier.
Use Pushy if you want full independence from Google services.
Both handle fallback delivery methods and work reliably across Android devices — including those without Google Play Services.
That message is printed out by Micrometer because in GraalVM, JMX support is not complete yet so Micrometer cannot get the data to record, see:
Also see: JvmGcMetrics.java
One option is to use a Snowflake dynamic table, which helps track history, i.e. change data capture.
To identify what has changed, you can create a stream on top of the dynamic table, or you can create an SCD Type 1 dynamic table on top of the SCD Type 2 dynamic table, as explained in this article.
Here's the solution.
idx_x = xr.DataArray(myIndices[:,0], dims="points")
idx_y = xr.DataArray(myIndices[:,1], dims="points")
myValues = da.isel(x=idx_x, y=idx_y)
myValues = myValues.values # this converts it from an xarray into just an array
I went from ~35 second run times to ~0.0035 second run times. Four orders of magnitude of improvement! Woot woot!
I don't understand why putting the indices into an xarray.DataArray with dimension name "points" (rather than just as a list as I tried before) causes the .isel to work correctly, but it does.
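The behavior comes from xarray's indexing rules: plain lists/arrays are treated as orthogonal (outer) indexers, while DataArray indexers that share a dimension trigger pointwise, vectorized indexing. A minimal sketch with made-up data:

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(16).reshape(4, 4), dims=("x", "y"))
idx = np.array([[0, 0], [1, 2], [3, 3]])  # three (x, y) pairs

# Plain arrays: outer indexing, every x with every y -> a 3x3 block.
print(da.isel(x=idx[:, 0], y=idx[:, 1]).shape)  # (3, 3)

# DataArrays sharing the "points" dim: pointwise indexing, one value per pair.
pts_x = xr.DataArray(idx[:, 0], dims="points")
pts_y = xr.DataArray(idx[:, 1], dims="points")
print(da.isel(x=pts_x, y=pts_y).values)  # [ 0  6 15]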
I encountered the same problem when I followed that tutorial. Basically, it wants you to organize your include directory in the following structure:
└── include/
├── GLFW/
│ ├── glfw3.h
│ └── glfw3native.h
├── glad/
│ └── glad.h
└── KHR/
└── khrplatform.h
This was adequate for my needs:
MAKEALL := $(findstring B,$(MAKEFLAGS))
DRYRUN := $(findstring n,$(MAKEFLAGS))
Picks off two make option flags:
@echo "MAKEFLAGS: $(MAKEFLAGS) MAKEALL: $(MAKEALL) DRYRUN: $(DRYRUN)"
Result:
make -Bi
MAKEFLAGS: Bi MAKEALL: B DRYRUN:
Thus these flags can be tested for individually.
I assumed that starting a foreground service of type "special use" would automatically keep my app running in the background, but this was wrong.
As shown in "Choose the right technology", the path led me to "Manually set a wake lock", since I am not using an API that keeps the device awake.
I tested my app with a Pixel 8 Pro (Android 16) and a Samsung J3 (Android 9). Also on a Huawei, but with App Launch and no battery optimization (set manually).
class TimerService : Service() {
...
override fun onCreate() {
super.onCreate()
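        // wakeLock is assumed to be created beforehand, e.g.:
        // wakeLock = (getSystemService(POWER_SERVICE) as PowerManager)
        //     .newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:timer")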
wakeLock.acquire() <<<<-------
createNotificationChannel()
initTimer()
}
...
private fun stopTimer() {
// Remove callbacks from the background thread handler.
if (::serviceHandler.isInitialized) {
serviceHandler.removeCallbacks(timerRunnable)
}
_isTimerRunning.value = false
_timerStateFlow.value = 0
if (wakeLock.isHeld) { <<<<------
wakeLock.release()
}
}
...
}
Interesting links I found
Hope this helps other devs with the same issue.
Best regards!
I haven't looked at the implementation details, but at a systems level, CPU-memory to GPU-memory data transfer is a time-consuming operation. Most of the time it's more expensive than the actual matrix computation on the GPU itself.
It looks like the library somehow detects that we are iteratively making GPU inferences while a ton of GPU memory is still available, and thus prompts us to send more data to GPU memory.
The TIMESTAMPADD function is part of the ODBC/JDBC standard and is supported by H2 and MySQL, so try using this. Hope it solves your problem:
SELECT TIMESTAMPADD(DAY, -30, CURRENT_DATE());
Well, I finally settled on a set of Copro templates: template<typename T> class Copro {};
And so :
using VMUnprotected = VM<uint8_t *>;
using VMProtected = VM<MemoryProtected>;
using CoproUnprotected = Copro<uint8_t *>;
using CoproProtected = Copro<MemoryProtected>;
It may not be the most syntactically beautiful solution, but it is relatively high-performance and without hacks.
this was so helpful, I finally got my toaster to cook a pizza in the morning
The answer mentioned by Chris is the solution:
@ManyToOne
@Fetch(FetchMode.SELECT)
@JoinColumn(name = "FK_COUNTRY")
private CountryEntity country;
Because the FetchType is marked as EAGER, the default FetchMode is JOIN.
That's why, in my case, I had to force it to SELECT.
To solve this issue, remember to add an entry in your launch.json with a key of "target" and a value equal to your emulator ID which can be found in XCode.
import matplotlib.pyplot as plt
y = [3.97e-3, 5.30e-3, 6.95e-3, 8.61e-3, 9.60e-3]
x = [3235, 1480, 767, 312, 276]
e = [3.71e-4, 3.71e-4, 3.71e-4, 3.71e-4, 3.71e-4]
plt.errorbar(x, y, yerr=e, fmt='o')
plt.show()
And how would I go about this when I want this to work with CUDA.jl CuArrays as well?
I could fix the issue by deleting all .dcu files from my DCU folder; after this, the compiler worked without issues. I'm not sure why, but I'm posting here to help anyone who faces the same issue.
My misunderstanding. The return value of ExecuteNonQuery appears to be the number of rows inserted. The USE command does not insert any rows, so the return value is zero.
from imagedl import imagedl
image_client = imagedl.ImageClient(image_source='BaiduImageClient')
image_client.startcmdui()
try this repo: https://github.com/CharlesPikachu/imagedl
After hours of struggling, I found the solution by accident. All I had to do was add this configuration:
grpc:
shutdown-grace: 30
\renewbibmacro*{journal}{
\iffieldundef{journaltitle}
{}
{\printtext[journaltitle]{
\printfield[journaltitlecase]{journaltitle}
\iffieldundef{journalsubtitle}
{}
{\setunit{\subtitlepunct}
\printfield[journaltitlecase]{journalsubtitle}}}}}
Don't tweak offsets by hand, get the center coordinate(s) using patch.get_center() and align the text around it by giving ax.annotate the parameter va="center" for vertical or ha="center" for horizontal alignment, respectively.
https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.annotate.html
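A minimal sketch with made-up bar data, labelling each bar at its center:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
values = [3, 7, 5]
bars = ax.bar(["a", "b", "c"], values)

for patch, value in zip(bars, values):
    # get_center() gives the (x, y) midpoint of the rectangle; align the text on it.
    cx, cy = patch.get_center()
    ax.annotate(str(value), (cx, cy), ha="center", va="center")

plt.show()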
@Remy
I removed all the references to "__stdcall" and uncommented the extern "C" code in the C++ DLL, so the function should now be exported as "_add_code".
I changed the DLL declaration in:
<DllImport("MyDll.dll", EntryPoint:="_add_code")>
Private Shared Function _add_code(ByVal text As String) As <MarshalAs(UnmanagedType.BStr)> String
End Function
Private Sub MainForm_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Dim foo As String = _add_code("ppp")
End Sub
Nothing changes; I always get the error "System.EntryPointNotFoundException: Could not find entry point with name '_add_code' in the DLL." The entry point is not found, so I think any errors related to strings still have to be evaluated.
Following your suggestion, I also tried a dump of the export table via the VS command line, and I obtained the following output, which confirms that the function is exported as "_add_code". So the question: why can't I access it from VB?
Microsoft (R) COFF/PE Dumper Version 14.44.35217.0
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file MyDll.dll
File Type: DLL
Section contains the following exports for MyDll.dll
00000000 characteristics
FFFFFFFF time date stamp
0.00 version
1 ordinal base
1 number of functions
1 number of names
ordinal hint RVA name
1 0 00001210 add_code = _add_code
Summary
1000 .data
1000 .rdata
1000 .reloc
1000 .rsrc
2000 .text