Add a Run Script phase to your app target's build phases and make sure "Show environment variables in build log" is checked. That's all: the environment variables and their values will be dumped to the build report every time you build.
I tried most of the things mentioned in the previous answers, but they didn't work in my case. I restarted the whole system (Linux) after the update and the error disappeared.
You're close, but there's room to make this calibration pipeline a lot more robust, especially across varied lighting, contrast, and resolution conditions. OpenCV’s findCirclesGrid with SimpleBlobDetector is a solid base, but you need some adaptability in preprocessing and parameter tuning to make it reliable. Here's how I’d approach it.
Start by adapting the preprocessing step. Instead of hardcoding an inversion, let the pipeline decide based on image brightness. You can combine this with CLAHE (adaptive histogram equalization) and optional Gaussian blurring to boost contrast and suppress noise:
import cv2
import numpy as np

def preprocess_image(gray):
    # Auto-invert if mean brightness is high
    if np.mean(gray) > 127:
        gray = cv2.bitwise_not(gray)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    return gray
For the blob detector, don’t use fixed values. Instead, estimate parameters dynamically based on image size. This keeps the detector responsive to different resolutions or dot sizes. Something like this works well:
def create_blob_detector(gray):
    h, w = gray.shape
    estimated_dot_area = (h * w) * 0.0005  # heuristic estimate
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = estimated_dot_area * 0.5
    params.maxArea = estimated_dot_area * 3.0
    params.filterByCircularity = True
    params.minCircularity = 0.7
    params.filterByConvexity = True
    params.minConvexity = 0.85
    params.filterByInertia = False
    return cv2.SimpleBlobDetector_create(params)
This adaptive approach is inspired by guides like the one from Longer Vision Technology, which walks through calibration with circle grids using OpenCV: https://longervision.github.io/2017/03/18/ComputerVision/OpenCV/opencv-internal-calibration-circle-grid/
You can then wrap the entire detection and calibration process in a reusable function that works across a wide range of images:
def calibrate_from_image(image_path, pattern_size=(4, 4), spacing=1.0):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    preprocessed = preprocess_image(gray)
    detector = create_blob_detector(preprocessed)
    found, centers = cv2.findCirclesGrid(
        preprocessed, pattern_size,
        flags=cv2.CALIB_CB_SYMMETRIC_GRID | cv2.CALIB_CB_CLUSTERING,
        blobDetector=detector
    )
    if not found:
        print("❌ Grid not found.")
        return None
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * spacing
    image_points = [centers]
    object_points = [objp]
    image_size = (img.shape[1], img.shape[0])
    ret, cam_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    print("✅ Grid found and calibrated.")
    print("🔹 RMS Error:", ret)
    print("🔹 Camera Matrix:\n", cam_matrix)
    print("🔹 Distortion Coefficients:\n", dist_coeffs)
    return cam_matrix, dist_coeffs
For even more robustness, consider running detection with multiple preprocessing strategies in parallel (e.g., with and without inversion, different CLAHE tile sizes), or use entropy/edge density as cues to decide preprocessing strategies dynamically.
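The fallback idea can be sketched generically. This is a minimal sketch: `detect_fn` and the strategy list are placeholders, and with OpenCV the strategies might be identity, `cv2.bitwise_not`, or CLAHE variants with different `tileGridSize` values.

```python
def detect_with_fallbacks(gray, detect_fn, strategies):
    """Try each named preprocessing strategy until detection succeeds."""
    for name, preprocess in strategies:
        found, centers = detect_fn(preprocess(gray))
        if found:
            return name, centers
    return None, None

# Placeholder strategies; with OpenCV these could be identity,
# cv2.bitwise_not, or CLAHE with different tile sizes.
strategies = [
    ("as-is", lambda img: img),
    ("inverted", lambda img: [255 - p for p in img]),
]
```

In the real pipeline, `detect_fn` would wrap `cv2.findCirclesGrid` and return its `(found, centers)` pair unchanged.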
Also worth noting: adaptive thresholding techniques can help in poor lighting conditions. Take a look at this Stack Overflow discussion for examples using cv2.adaptiveThreshold: OpenCV Thresholding adaptive to different lighting conditions
This setup will get you much closer to a reliable, general-purpose camera calibration pipeline—especially when you're dealing with non-uniform images and mixed camera setups. Let me know if you want to expand this to batch processing or video input.
My issue was that an Nginx ingress had been added and it was returning HTTP 413
"Request Entity Too Large". To fix this, we increased the following configuration:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
For anyone struggling with this (I tried the first answer and got error after error), I believe I found a way that is WAY simpler. Credit to this website: https://www.windowscentral.com/how-rename-multiple-files-bulk-windows-10
Put all of the files whose names you want to trim into one folder. Open CMD in that folder (in Windows 11, you can right-click inside the folder and select Open in Terminal; for me it opened PowerShell, so I had to type cmd and hit Enter first), and read on.
'ren' is the rename command. '*' means 'anything' (from what I understand), so '*.*' means 'any filename and any extension in the folder.' And finally, the amount of question marks is the amount of characters to keep. So '???.*' would only keep the first 3 characters and would delete anything after that while keeping whatever filetype extension it was.
So if you had multiple files with filenames formatted YYYY-MM-DD_randomCharactersBlahBlah.jpg and .mp4 and .pdf, you'd want to keep only the first 10 characters. So you'd open CMD in the folder and type:
ren *.* ??????????.*
The new filename would be YYYY-MM-DD.jpg or .mp4 or .pdf.
Just be careful, because if you have multiple files with the same date in this scenario, they'd have the same filename after trimming which causes CMD to skip that file. Hope this helps someone.
I had the same problem. I could only solve it by removing the -ObjC flag from Other Linker Flags, which some CocoaPods packages insert because they depend on static .a libraries. This flag causes unused code to be linked in, which triggers this error. In my case the culprit was Google Mobile Ads, which inserts the flag. The curious thing is that the problem only happens on devices with iOS versions below 18.4: on an iPhone 15 with 18.4 the problem does not occur, but on devices with 18.3, 18.2, and even 15.2 the application does not open.
Now the million-dollar question: is the problem with Xcode, or with third-party libs that still depend on the -ObjC flag?
@Pigeo or anyone willing to help: I'm a newbie; can someone please explain the following Apache 2.4 syntax:
Header onsuccess edit Set-Cookie ^(.*(?i:HttpOnly).*)$ ;;;HttpOnly_ALREADY_SET;;;$1
Especially
^(.*(?i:HttpOnly).*)$ ;;;HttpOnly_ALREADY_SET;;;$1
I'm assuming that the * is a wildcard, but how is this syntax read? If someone could explain it, or point me to a page that does, thanks.
I'm trying to understand how to get the eye tracker working and recording data on the VIVE Focus 3. From what I've read around the web, I need Unity to do it. I tried it once but without results. Do you have any recommendations or tutorials to suggest?
If the passkey is available on the other devices (in most cases it will be), it will work regardless of whether that device has biometrics. Most passkey providers will fall back to the device PIN or passcode if there is no biometric hardware.
useMemo(() => "React", []): Creates a memoized value.
React runs the function () => "React" once and stores the result.
Did you find a solution to this, or have any insights? Thanks.
If someone is still looking for this: I found a solution and described it in my blog post here https://gelembjuk.hashnode.dev/building-mcp-sse-server-to-integrate-llm-with-external-tools
But as I understand it, there will soon be better ways to do this, because the developers of that Python SDK have some ideas for how to support it.
There is no --add.safe.directory option in git; the dots after add should be spaces:
git config --global --add safe.directory '[local path of files to upload]'
Highcharts doesn't redraw on zoom in/out. Try keeping a ref to the chart and calling chart.redraw(); this will redraw the chart to fit the new layout.
Please make sure the Spring Boot version is updated to v3.2.8 or later, and of course update all the other related dependencies in the project to versions compatible with the new Spring Boot version.
You can use the @validator decorator to achieve this. This answer can help you.
There is a post from Josiah Parry that shows how you can format R scripts as notebooks: save the file as .r instead of .R and use # COMMAND --------- to mark a notebook cell. This is useful if you want the notebook functionality but develop outside the Databricks UI.
I found the version of Power BI on the Microsoft website that allows enabling customized map creation.
Any pointers on where to locate WixToolset.Mba.Core.dll?
I have installed WiX Toolset 4.x but don't find WixToolset.Mba.Core.dll under %USERPROFILE%\.nuget\packages\wixtoolset.sdk\4.0.0\Sdk.
Please help.
You can also do it with filter:
new_dict = dict(filter(lambda item: item[1]['category']['name'] == 'Fruit', my_dict.items()))
I know this is an old topic, and I'm sure you want the input not to lose focus on a re-render. A good way around this is to keep the input field in a child component and the onChangeText logic in a parent component, which avoids the losing-focus problem that still exists as of today.
Do it in a 'pythonic' way:
filtered_dict = {k: v for k, v in my_dict.items() if v['category']['name'] == 'Fruit'}
I managed to find where the problem was:

importProvidersFrom(
    HttpClientInMemoryWebApiModule.forRoot(InMemoryDataService, { delay: 500 })
)

This was in app.config and was blocking my calls.
The only workaround I found was to use a mirror:
<mirror>
    <id>central-proxy</id>
    <name>Proxy of central repo</name>
    <url>https://repository.mulesoft.org/nexus/content/repositories/public/</url>
    <mirrorOf>central</mirrorOf>
</mirror>
This worked, but I'm sure there's probably a better way to go about it.
I'm having the same problem: when I query the whole table (*) it shows as in the preview, but when I query those specific columns I get "Unrecognized name: Small_Bags; Did you mean Small Bags? at [4:1]". Unfortunately this has been a constant during the course. Were you able to find a solution?
Seems like it is a Hugging Face issue; I found a similar issue here:
curl https://huggingface.co/BAAI/bge-large-en-v1.5/resolve/main/1_Pooling/config.json returns "Entry not found", although the config exists in the repo.
But for other models, config.json is available at the same path.
For example there is an available config for bge-large-en-v1.5: https://huggingface.co/BAAI/bge-large-en-v1.5/resolve/main/1_Pooling/config.json
We worked a ticket with Microsoft on this issue and here is their response:
I was able to get a meeting with some of the backend engineers from the Private Link side so we could get some clarification on why api.loganalytics.io is not properly resolving with the alias monitor.azure.com. They let me know that this is now a known issue on their end; they are working to remove api.loganalytics.io completely from the service and in the future will only use monitor.azure.com, but they were unable to provide an ETA other than "soon". In the interim, what they suggested was to add another forwarder for loganalytics.io pointing back to Azure DNS.
I was able to get confirmation that api.loganalytics.io is the only A record impacted by this known issue.
Lastly, this issue only applies when using forwarders. If name resolution goes directly to Azure DNS, it resolves properly, which is why you still see it showing as the alias with an nslookup.
We went ahead with the additional conditional forwarder on api.loganalytics.io to work around the issue.
If you are using POJOs to write data via key/value puts and you then want to read that same data via SQL, you need to reference your POJOs in the WITH clause of your SQL CREATE statement, listing the fully qualified class name for the key and the fully qualified class name for the value. You will also need to be sure that your class files are distributed to the cluster and available on all cluster nodes. Hope that helps.
Thanks so much @T.J.Crowder. Found this useful in helping debug a similar error message.
To save on time, you can also simply use any of the generative AI applications to "Make String JSON Compliant".
Trust this helps.
I found out that my entrypoint was not configured correctly. The real issue was not ACA but the conda command, which was buffering the logs and never flushing them to stdout/stderr.
I modified
ENTRYPOINT ["conda", "run", "-n", "py310", "/bin/bash", "-c", "/utils/entrypoint.sh"]
To This:
ENTRYPOINT ["conda", "run", "--live-stream", "-n", "py310", "/bin/bash", "-c", "/utils/entrypoint.sh"]
Just use TAB key instead. TAB in assembly view switches to the same place in pseudocode tab, and TAB in pseudocode switches to the same place in code in assembly tab.
If you're using CKEditor 4:
You can customize the editor UI using config options or CSS.
Option 1: Disable the title bar. If it's a dialog box, you can remove the title by overriding CKEDITOR.dialog.prototype._.title.
Option 2: CSS hack. If it's part of the editor frame and not a dialog, inspect the element (right-click → Inspect) and then use custom CSS to hide it.
NOTE: 1. .cke_top is the toolbar/header container in CKEditor 4.
2. Be careful: hiding it removes the entire toolbar, not just the border.
3. To hide only the border, override the border style on .cke_top instead of hiding the whole element.
The answer was ensuring that there is no other class called "Timer"!
Did you find a solution to this problem within GNU Radio? I encountered the same problem when trying to process METEOR-M2 in GNU Radio with the Viterbi algorithm.
I used the ccsds27 library decoder, but when checking with the medet program, there was nothing indicating correct operation of the Viterbi algorithm, although the sync words were decoded correctly. I also tried the FEC decoder with the Viterbi algorithm config, but nothing worked either.
The solution was to use Collectfast, which indexes what needs updating by comparing checksums, which prevents these timeouts.
I struggled to log out from the sandbox account, but I found it under Settings >> Developer >> Sandbox Apple Account.
iOS version 18.3.1
I'm using the GitLab package registry, and what worked for me was generating a new personal access token with read/write scopes for the registry.
I assume it might be the same for GitHub.
I recommend using Firebase for the database; Google also recommends it. It is very easy to use, and there are easy-to-integrate packages such as firebase_auth and cloud_firestore.
Visit the documentation here to add Firebase to your project.
I ran flutter clean and then flutter pub get and it worked.
I found an active issue about the request body variables problem. It seems it has not been working since 2018.
I had this same issue earlier today. I tried a lot of the solutions here, including asking ChatGPT, but still couldn't get it to work. I was following this tutorial, and the tutor wrote npm install nativewind tailwindcss react-native-reanimated react-native-safe-area-context, but the documentation here pins specific versions, e.g. npm install nativewind tailwindcss@^3.4.17 plus pinned react-native-reanimated and react-native-safe-area-context versions. Changing the tailwindcss and react-native-reanimated packages to their correct versions made it work.
This helped me: in your catalog properties file for the MySQL connection, add this:
case-insensitive-name-matching=true
There is an active Jenkins issue about this; it doesn't work as expected.
Got a satisfying answer:
I am trying to understand the advantages and disadvantages of named routes vs directly pushing the Widget to Navigator. In the cookbook, I only get information about how to, not why/why not.
In my opinion, pushing the Widget directly to the Navigator is a better, simpler, and safer option even for a large application.
- You don't need to create a centralized brain that knows everything about the app.
- Passing data is very easy, with no need for additional mapping, and thanks to required arguments, if something changes you can easily identify what needs to be updated.
- Parents know how to instantiate their children.
The only argument I see for named routes is if I consider Flutter for web, where I would like to see something like "/profile/description" in the address bar.
https://www.reddit.com/r/FlutterDev/comments/gei3fb/routes_named_vs_unnamed/?rdt=54144
If it is not working, go back to VirtualBox before starting the VM and open Settings: under System, increase the processor count to 2, and under Display, increase the video memory to 128.
It might be the os.makedirs() call. I was able to create a folder and save a file using os.mkdir().
I have not found the actual problem, but while doing some testing I switched to an old version of the Linux kernel from TI (linux_ti_stating_6.1) and that solved the issue with UART ttyS1. I kept the defconfig, device tree, and everything else exactly the same; I just added the older kernel and specified in Yocto that I want to use it.
Why the newer upstream kernel or newer TI kernel causes a problem with only ttyS1 and not ttyS4 still confuses me. I will keep adding newer kernels until I see the issue return.
Though it is impossible to create a full game (as mentioned by the previous answer), that does not mean you cannot create a "text-based game." Even though you cannot add images or music, you can use text (like the game Oregon Trail, without any pictures).
I've got Ubuntu 22.04 LTS and Python 3.10.12 with pip 22.0.2. I can't find a pip.conf file, not even a "pip" folder.
But according to a Rasa installation tutorial, I have to find it in order to add a line.
What can I do?
I have an ESP32-S3 WROOM-1 that could see available Wi-Fi networks, including my own, but was unable to connect to my Wi-Fi. I’ve found a possible solution to the problem. If it's mounted on a breadboard, it seems like the metal in the breadboard interferes with the connection. So either you can remove it from the breadboard, or — and this might sound strange — when it's trying to connect, touch the CPU cover with your finger. I assume your body acts as an antenna, which allows it to connect to your Wi-Fi. Once it's connected, you can remove your finger, and it will stay connected to the Wi-Fi.
The post you found already answers your question:
You have to offset/shift the graphics and collision shapes, instead of the rigid body: there is no center of mass offset or shift in a rigid body. The setCenterOfMassTransform sets the world transform for a rigid body, which is always equal to the center of mass of the rigid body in world space. So instead of shifting the rigid body, you need to shift the graphics and collision shapes.
A Shape itself cannot have an offset center of mass: it's center of mass is the center.
Even if you were to use a btCompoundShape comprised of multiple shapes, it would still have center of mass in the center.
What you will most likely need to do (to make a rigid body with off-center mass, such as a hammer) is to use two RigidBodies and then use a constraint (like btFixedConstraint, btHingeConstraint, etc) to join them.
insert x (b :: l) reduces to if x <=? b then x :: b :: l else b :: insert x l, which is not b :: insert x l. You can simplify IHIHl to make the latter term appear, get a proof that x <=? b = true from E' : b < x and rewrite this proof in IHIHl to make the term reduce further.
I think I found the answer myself.
For groupBy + aggregate: If you aggregate Entity E by some grouping key, the order of E-Records with a given E-Key are only interesting for the resulting aggregates within the partition of the grouping key, so the problem does not arise.
For the foreign key join, the answer can be found here:
https://www.confluent.io/blog/data-enrichment-with-kafka-streams-foreign-key-joins/#race-conditions
The short version is: when sending messages with a partitioning different from the original one, a hash of the original record is added to the message. In the end, this hash is used to determine whether a given message corresponds to the latest version of the original record.
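The mechanism can be sketched roughly like this. This is a toy illustration, not Kafka Streams' actual implementation; the hashing scheme and record representation here are assumptions for demonstration only:

```python
import hashlib
import json

def record_hash(record):
    # Stable digest of the original record's value
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def is_response_current(response_hash, latest_record):
    # A subscription response is applied only if the original
    # record has not changed since the request was sent.
    return response_hash == record_hash(latest_record)
```

If the original record is updated while a response is in flight, the attached hash no longer matches and the stale response is dropped.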
Same issue here.
I found this message on this Gitlab forum topic useful: https://forum.gitlab.com/t/placeholder-users-how-to-manage/120632/9
Does this work for you?
Replace this:

if linkage != nil
  Pod::UI.puts "Configuring Pod with #{linkage}ally linked Frameworks".green
  use_frameworks! :linkage => :static
end

with this:

Pod::UI.puts "Configuring Pod with #{linkage}ally linked Frameworks".green
use_frameworks! :linkage => :static
$RNFirebaseAsStaticFramework = true
I have a link that will help you: it shows how to convert the JSON file to its full form and be able to save and retrieve it.
I've tried to get it working by searching for the ProcessName:
async Task WaitForBG3()
{
    textBox1.Text = "Searching";
    await Task.Run(async () =>
    {
        bool found = false;
        while (!found)
        {
            Process[] ProcessList = Process.GetProcesses();
            foreach (Process p in ProcessList)
            {
                if (p.ProcessName.Contains("bg3"))
                {
                    found = true;
                }
            }
        }
    });
    textBox1.Text = "found and wait for Exit";
    await Task.Run(async () =>
    {
        bool running = false;
        while (!running)
        {
            Process[] ProcessList = Process.GetProcesses();
            foreach (Process p in ProcessList)
            {
                if (p.ProcessName.Contains("bg3"))
                {
                    running = false;
                }
                else
                {
                    running = true;
                }
            }
        }
    });
    textBox1.Text = "Exit";
}
When I test it with Notepad, it works as it should, but when I use the game name, it instantly triggers the else part of the second Task, even when the game is already running when calling:
private async void buttonStart_Click(object sender, EventArgs e)
{
    //LaunchViaSteam();
    await WaitForBG3();
    //do some more stuff
}
Any idea why?
Have you tried increasing the limits? I increased them and the problems went away:
resources:
  limits:
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 512Mi
The app is finance. Include the app name before the route name, e.g.:

reverse('finance:pagamento_sucesso')
How to use Frida over Wi-Fi debugging:
There is a simple way to get remote access via wireless debugging.
First, enable wireless debugging on the device and note the IP address and access port.
In a terminal or CMD on your PC, execute the command:

adb connect <your_device_IP>:45844

Please note: your PC and phone must be connected to the same Wi-Fi network.
After executing the command in the terminal or CMD on your PC, you will see a connection message.
Then start the frida-server as follows:

./frida-server -l <your_device_IP>
Archivematica is a system that stores digital files along with metadata (data about your files like file name, type, creation date, etc.). When you upload files to Archivematica, it bundles them into something called an AIP (Archival Information Package) — a folder that contains:
The original files you uploaded
A METS.xml file — an XML file where Archivematica stores the metadata about those files
How to get your files and metadata back:
From the Archivematica dashboard — you can download the whole AIP package.
From the storage location (if you have file system access) — you can find and unzip the AIP.
Using the REST API — you can connect programmatically to Archivematica to request the files and metadata.
Inside the AIP:
Your original files are in a folder like data/objects/
Metadata about those files is in metadata/METS.xml
You can open this METS.xml file and read the metadata — either manually or using a Python script.
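For example, here is a minimal sketch of pulling original file paths out of a METS file with Python's standard library. The sample XML below is illustrative only; a real Archivematica METS.xml carries much more structure, but the mets/xlink namespaces and the fileSec layout are the standard METS ones:

```python
import xml.etree.ElementTree as ET

METS_NS = {
    "mets": "http://www.loc.gov/METS/",
    "xlink": "http://www.w3.org/1999/xlink",
}

# Toy stand-in for the real metadata/METS.xml inside an AIP
SAMPLE_METS = """<mets:mets xmlns:mets="http://www.loc.gov/METS/"
                            xmlns:xlink="http://www.w3.org/1999/xlink">
  <mets:fileSec>
    <mets:fileGrp USE="original">
      <mets:file ID="file-1">
        <mets:FLocat xlink:href="objects/report.pdf"/>
      </mets:file>
    </mets:fileGrp>
  </mets:fileSec>
</mets:mets>"""

def original_file_paths(mets_xml):
    """Return the paths of the 'original' file group in a METS document."""
    root = ET.fromstring(mets_xml)
    paths = []
    for grp in root.findall(".//mets:fileGrp[@USE='original']", METS_NS):
        for flocat in grp.findall(".//mets:FLocat", METS_NS):
            paths.append(flocat.get("{http://www.w3.org/1999/xlink}href"))
    return paths
```

For a real AIP you would pass the contents of metadata/METS.xml instead of the sample string.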
Maybe there is a problem targeting the element with jQuery. You can try myBtn = document.getElementById('btn-submit') instead, then myBtn.disabled = true or myBtn.disabled = false. 💜
You can retrieve all files in a SIP using GET /api/v2/file?package_uuid=<sip_uuid>, and after that retrieve metadata for each file by requesting GET /api/v2/file/<file_uuid>.
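A sketch of building those two requests in Python. The endpoint paths are the ones above; the base URL is a placeholder, and the common "Authorization: ApiKey user:key" header mentioned in the comment is an assumption you would adjust for your install:

```python
from urllib.parse import urlencode

def files_in_sip_url(base_url, sip_uuid):
    # GET /api/v2/file?package_uuid=<sip_uuid>
    return f"{base_url}/api/v2/file?{urlencode({'package_uuid': sip_uuid})}"

def file_detail_url(base_url, file_uuid):
    # GET /api/v2/file/<file_uuid>
    return f"{base_url}/api/v2/file/{file_uuid}"

# These URLs can then be fetched with any HTTP client (e.g. requests),
# typically with an "Authorization: ApiKey <user>:<key>" header.
```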
For anyone looking for this answer like I was: what worked for me was to leave 'RunMsCodeCoverage' and 'IncludeReferencedProjects' set to TRUE, but set 'IncludeTestAssembly' to FALSE.
Did you manage to solve this, i.e. removing/changing the black background in the embedded iframe Google Slides? I'd be happy if you could share any tips, thanks.
Two things you can try:
a) When writing the file use :
merged_adata.write_h5ad(output_file, compression = "gzip")
b) Use a tool to fix chunking issues (e.g. ptrepack)
It seems there is no endpoint for this purpose. But you can get the order information, check whether the order was paid, and if so, send the message.
If we could get inside GoGuardian and remove one thing, we could stop it from blocking students from playing games. Any coders out there, see if you can do anything with this link: chrome-extension://haldlgldplgnggkjaafhelgiaglafanh/admin.js
Try adding this line in /etc/sudoers:
linadmin ALL=(oracle) NOPASSWD: ALL
Yes, I think you need to go a bit native. This is probably happening because you are running a library that hasn't been fully migrated to support React Native's New Architecture. The error indicates that RNMapsAirModule can't be found by TurboModuleRegistry, because react-native-maps likely hasn't implemented proper TurboModule registration for iOS yet.
If you are unable to run browseVignettes("ggplot2"), please install this from your RStudio console:
install.packages("ggplot2.utils")
Needless to say, called in the usual ways.
The single digit at the start of each numeric string is a code for use in diagnosing problems.
select '6((61677*89972)*79533)/3778)', [dbo].[mthstrv08_rtrn_var70]('6(((61677*89972)*79533)/3778)')
select top 10 *, [dbo].[mthstrv08_rtrn_var70]('0'+pict2) from npick4sv2
I am using IntelliJ and faced a similar problem. On my machine, you select the block you want to comment out and press ALT + SHIFT + A;
it will be wrapped in a multi-line comment.
Thanks to @ThomA!
The solution is using the add method with a Session, like:
with Session(autoflush=False, bind=engine_new) as db:
    new_part = Parts(
        part_kod=dataArt.get('kod'),
        part_description=dataArt.get('title'),
        part_kod_supplier=dataArt.get('supplier'),
        part_manufacturer=manId
    )
    db.add(new_part)
    db.commit()
While using
rowInsertStmt = insert(partsTable).values(...)
# or: rowInsertStmt = partsTable.insert().values(...)
session_new.execute(rowInsertStmt)
session_new.commit()
you will see the error from the question.
I use the same 'insert' approach for processing 'manufactures' and it works well, but that table has one SQL trigger.
Just adding to Floriaan's answer: I had noticed that the function did not work well for round_to inputs greater than 1, so I added a small check:
def calculate_ticks(ax, ticks, round_to=0.1, center=False):
    upperbound = np.ceil(ax.get_ybound()[1] / round_to)
    lowerbound = np.floor(ax.get_ybound()[0] / round_to)
    dy = upperbound - lowerbound
    # The added part:
    if round_to > 1:
        fit = np.floor(dy / (ticks - 1))
    else:
        fit = np.floor(dy / (ticks - 1)) + 1
    dy_new = (ticks - 1) * fit
    if center:
        offset = np.floor((dy_new - dy) / 2)
        lowerbound = lowerbound - offset
    values = np.linspace(lowerbound, lowerbound + dy_new, ticks)
    return values * round_to
It looks like the Oracle container is relying on DNS resolution for gen-ai-labs-hub, but it's not resolving properly despite being in /etc/hosts. With --network=host, the container uses the host's network, but sometimes containerized processes can't read /etc/hosts correctly. Try setting the hostname explicitly with --hostname gen-ai-labs-hub in your podman run command — that often resolves this Oracle Net Listener issue without changing network modes.
I tried everything and nothing worked, but one thing did: put a question mark at the end of the link, like this:
<a href="index2.html?">work!</a>
Try to reimplement your POST request like this:
url = "https://graph.instagram.com/your-instagram-endpoint"
data = {
    "ACCESS_TOKEN": "value1",
    "ANOTHER_PARAM": "value2"
}
response = requests.post(url, data=data)
In my case, Xdebug couldn't connect because the debug client was listening on IPv6 (::9003). Changing it to IPv4 (0.0.0.0:9003) fixed the issue.
To help you accurately, I reviewed the API documentation you're referring to — eMAG Marketplace API v4.4.4, specifically page 16, which is part of the Product Offer API (POST /api-3p/offer/save).
Here's a step-by-step guide to sending stock and handling_time using Postman and the correct content type.
https://marketplace.emag.ro/api-3p/offer/save
Set these headers:

Key           | Value
--------------|------
Content-Type  | application/x-www-form-urlencoded
Authorization | Bearer <your_token_here>
Replace <your_token_here> with your valid authentication token.
In the Body tab of Postman:
Select x-www-form-urlencoded.
Then, add the fields exactly as the API expects. This is crucial!
Key                     | Value
------------------------|------
offer[0][id]            | 12345
offer[0][price]         | 49.99
offer[0][stock]         | 10
offer[0][handling_time] | 24
✅ If you are sending multiple offers, increment the index:
offer[1][id]
offer[1][price]
etc.
Don’t use form-data, use x-www-form-urlencoded.
Make sure field names match exactly (like offer[0][stock] — no typos).
Make sure the offer ID actually exists and belongs to your seller account.
handling_time must be in hours, as per documentation.
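To illustrate the field naming, here is a sketch in Python that flattens offers into exactly those offer[i][field] keys before form-encoding them. The helper function and the sample values are illustrative (taken from the example above); adapt them to your real offer data:

```python
from urllib.parse import urlencode

def offers_to_form_fields(offers):
    """Flatten a list of offer dicts into offer[i][field] form keys."""
    fields = {}
    for i, offer in enumerate(offers):
        for key, value in offer.items():
            fields[f"offer[{i}][{key}]"] = str(value)
    return fields

payload = offers_to_form_fields([
    {"id": 12345, "price": 49.99, "stock": 10, "handling_time": 24},
])
body = urlencode(payload)  # ready for an x-www-form-urlencoded POST
```

Postman does the same flattening for you when you fill in the x-www-form-urlencoded body tab; the sketch just shows what goes over the wire.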
If you want to test and it's still not working, feel free to share a screenshot of your Postman setup (headers and body tab), and I’ll help debug it specifically.
I searched for a long time and found no nice code. Based on experience, I ended up with regular expressions.
The task can be stated as: keep only the necessary characters in the string.
Below is a template (not fully working code) from a working project (Delphi 10.4):
uses
  System.RegularExpressions;

function UpdateInFormOrder(IdOrder: string): Integer;
var
  Temp, StrRegExp: string;
begin
  // What is allowed in the string: English and Russian (UTF) letters, digits, and punctuation
  StrRegExp := '[^A-Za-zА-Яа-я0-9 _,.?!-@<>";:()+=/\|]';
  // Temp keeps the string stripped of extra characters
  Temp := TRegEx.Replace('testing 😀', StrRegExp, '');
end;
Your parent container needs to have a height of 100%.
This can be achieved by using height: "100%", but I'd recommend using flex: 1.
export default function App() {
  return (
    <View style={{flex: 1}}>
      <CameraView style={{flex: 1}}>
      </CameraView>
    </View>
  );
}
1- database tables
2- program.cs
3- claimtransformation class
4- controller example
Creating the application can be done with VS Code too. MVC is an architecture based on Model-View-Controller and is easy to work with; the business logic is placed in controllers.
I created the database in SQL Server and set up the connection with the "Entity Framework Core" package.
The tables relevant for this subject are Roles, UserRoles, and Employees.
The entity models that retrieve data from the database are stored in the Models folder:
Role.cs
[Table("Tbl_Roles")]
public class Role
{
    [Key]
    public int Id { get; set; }

    [StringLength(50)]
    public string Name { get; set; }

    public virtual List<UserRole> UserRoles { get; set; } // foreign key to UserRoles table
}
UserRole.cs
[Table("Tbl_UserRoles")]
public class UserRole
{
    [Key]
    public int Id { get; set; }
    public int RoleId { get; set; }
    public int EmployeeId { get; set; }
    public bool Active { get; set; }

    // fk to Employees table
    public virtual Employee Employee { get; set; }

    // fk to Roles table
    public virtual Role Role { get; set; }
}
Employee.cs
[Table("Tbl_Employees")]
public class Employee
{
    [Key]
    public int Id { get; set; }
    [StringLength(50)]
    [Required]
    public string UserName { get; set; } = string.Empty;
    [StringLength(50)]
    [Required]
    public string FirstName { get; set; } = string.Empty;
    [StringLength(50)]
    [Required]
    public string LastName { get; set; } = string.Empty;
    [StringLength(50)]
    [Required]
    public string EmployeeNumber { get; set; } = string.Empty;
    public int CompanyId { get; set; }
    [StringLength(150)]
    public string? Email { get; set; }
    [StringLength(50)]
    public string? PhoneNumber { get; set; }
    public string? Department { get; set; }
    public string? SubDepartment { get; set; }
    public bool Active { get; set; }
    // foreign key to UserRoles table
    public virtual List<UserRole> UserRoles { get; set; } = new();
}
Program.cs:
// Add DbContext
builder.Services.AddDbContext<VisitorDataContext>(options =>
{
    var context = builder.Configuration.GetConnectionString("Your Connection");
    options.UseSqlServer(eAESCrypt.AesMethods.DecryptString(key, context));
});
// Add Windows Authentication
builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme)
    .AddNegotiate();
Build a policy for each role (as preconfigured in the DB), or a policy for a combined role.
Since we store the roles as claims, use context.User.HasClaim() instead of User.IsInRole().
// Add Authorization
builder.Services.AddAuthorizationBuilder()
    .AddPolicy("AdminOnly", policy => policy.RequireRole("Admin"))
    .AddPolicy("SecurityAdminOnly", policy => policy.RequireRole("SecurityAdmin"))
    .AddPolicy("SecurityOnly", policy => policy.RequireRole("Security"))
    .AddPolicy("AdminAndSecurityAdminOnly", policy =>
        policy.RequireAssertion(context =>
            context.User.HasClaim(ClaimTypes.Role, "Admin") || context.User.HasClaim(ClaimTypes.Role, "SecurityAdmin")));
// it permits access to a controller method for "Admin" or "SecurityAdmin"
A ClaimsTransformation class retrieves the role claims for each user authenticated by Windows:
public class WVMClaimsTransformation : IClaimsTransformation
{
    private readonly VisitorDataContext _context;

    public WVMClaimsTransformation(VisitorDataContext context)
    {
        _context = context;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        if (principal.Identity.IsAuthenticated)
        {
            var identity = (ClaimsIdentity)principal.Identity;
            // get the username without the domain name
            var username = Helper.GetUserName(identity.Name);
            // get all roles of the specific user from the database
            var roles = await _context.UserRoles
                .Include(ur => ur.Role)
                .Where(ur => ur.Employee.UserName == username)
                .Select(ur => ur.Role.Name)
                .ToListAsync();
            // insert them into the claims identity
            foreach (var role in roles)
            {
                if (!identity.HasClaim(c => c.Type == ClaimTypes.Role && c.Value == role))
                {
                    identity.AddClaim(new Claim(ClaimTypes.Role, role));
                }
            }
        }
        return principal;
    }
}
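One step the snippets above don't show is wiring the transformation into dependency injection in Program.cs. It presumably looks something like this (a sketch following the standard IClaimsTransformation registration pattern, not verified against the author's project):

```csharp
// Program.cs — register the claims transformation so it runs on every request
builder.Services.AddScoped<IClaimsTransformation, WVMClaimsTransformation>();
```

Without this registration, TransformAsync is never invoked and the role claims are never added.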
Home controller for the Administration view. Add the authorization for the combined role, **Admin** or **SecurityAdmin**:
[Authorize(Policy = "AdminAndSecurityAdminOnly")]
public IActionResult Administration()
{
    return View();
}
When troubleshooting, you can inspect the role claims on the user in the debugger.
I hope this helps people like it helped me. If I missed something, please feel free to send me feedback.
You must register the cookie-parser middleware in your server.js so that your program can parse cookies:
app.use(cookieParser(process.env.COOKIE_PARSER_SECRET_KEY)); or app.use(cookieParser(secret_key));
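For context, a minimal server.js wiring sketch (assuming Express and the cookie-parser package are installed; the secret name is illustrative):

```javascript
// server.js — minimal sketch (assumes `npm install express cookie-parser`)
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();

// Register cookie-parser before any routes that read cookies
app.use(cookieParser(process.env.COOKIE_PARSER_SECRET_KEY));

app.get('/', (req, res) => {
  // With a secret set, signed cookies appear on req.signedCookies
  res.json({ cookies: req.cookies, signed: req.signedCookies });
});

app.listen(3000);
```

The important detail is ordering: the middleware must be registered before the route handlers that use req.cookies.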
Even if regular expressions would be optimal, to avoid spending time on configuration you can list the patterns manually:
*.blend1
*.blend2
and so on, up to the last backup version you want to exclude.
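Alternatively, since .gitignore patterns support glob character ranges, a single pattern can cover all of Blender's numbered backup files:

```
# Blender numbered backup files (.blend1, .blend2, …)
*.blend[0-9]
```

This avoids having to extend the list every time Blender creates a higher-numbered backup.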
I ran into the same issue in my project, and after debugging, I traced it back to a regression in compose-multiplatform-core.
The root cause is documented in this pull request:
- https://github.com/JetBrains/compose-multiplatform-core/pull/1818
Upgrading the Compose Multiplatform plugin to version 1.8.0 resolves the issue, as the fix has been included in that release.
This just happened to me after Eclipse updated itself (2025). It seems the default editor is overwritten when Eclipse updates, which is a pain because I mainly used it for Python, not Java. The solution? Uninstall Eclipse, reinstall it, and disable automatic updates. Whenever you want to update Eclipse, do it yourself on a fresh install; no need to waste your time because another dev decided it was time to update their software.
Due to Microsoft's bizarre limitation that you can't use EXEC or SP_EXECUTESQL in a stored function, I had to resort to the following kind of code, which even now isn't completely finished. Maybe you'll find something useful in it.
DROP FUNCTION [dbo].[mthstrv08_rtrn_var70]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[mthstrv08_rtrn_var70](@strn varchar(60))
RETURNS varchar(80)
AS
BEGIN
declare @ret2 varchar(80)
declare @jp1 varchar(1)
set @jp1 = substring(@strn ,1,1)
DECLARE @opo1 varchar(1)
set @opo1='x'
DECLARE @opo2 varchar(1)
set @opo2='x'
DECLARE @opo3 varchar(1)
set @opo3='x'
DECLARE @opostrn varchar(3)
declare @num1 int -- numeric(30,20)
declare @num1char varchar(10)
declare @num1cwnt smallint
set @num1char = '|'
set @num1cwnt = -1
set @num1 = 0
declare @num2 int -- numeric(30,20)
declare @num2char varchar(10)
declare @num2cwnt smallint
set @num2char = '|'
set @num2cwnt = -1
declare @num3 int -- decimal(30,20)
declare @num3char varchar(10)
declare @num3cwnt smallint
set @num3char = '|'
set @num3cwnt = -1
declare @num4 int -- decimal(30,20)
declare @num4char varchar(10)
declare @num4cwnt smallint
set @num4char = '|'
set @num4cwnt = -1
declare @sum1 numeric(30,5)
declare @sum2 numeric(30,5)
declare @sum3 numeric(30,20)
declare @sum4 numeric(30,25)
declare @sum5 numeric(30,20)
declare @sum6 numeric(30,10)
declare @curval varchar(1)
declare @prevval varchar(1)
declare @mathstrn varchar(40)
declare @loopval smallint
declare @strlen smallint
declare @patern varchar(40)
declare @numflag smallint
set @numflag=0
set @loopval=3
set @patern='('
set @prevval='('
set @mathstrn =@strn
set @strlen = len(@mathstrn)
--
-- parse pattern of string
--
-- @loopval begins at 3 because we know first char is a indicator/flag
-- and 2nd char is always a '('
--
while @loopval <= @strlen
begin
set @curval = substring(@mathstrn,@loopval,1)
if @curval = '(' or @curval = ')'
begin
set @patern=@patern+@curval
end
if @curval in ( '/','+','-','*')
begin
set @patern=@patern+'o'
if @opo1='x'
begin
set @opo1 = @curval
end
else if @opo2='x'
begin
set @opo2 = @curval
end
else if @opo3='x'
begin
set @opo3 = @curval
end
end
if @curval between '0' and '9'
begin
set @patern=@patern+'n'
if @num1cwnt = -1
begin
set @numflag=1
set @num1cwnt=@loopval
end
else if @num2cwnt = -1 and (@prevval not between '0' and '9')
begin
set @numflag=2
set @num2cwnt=@loopval
end
else if @num3cwnt = -1 and (@prevval not between '0' and '9')
begin
set @numflag=3
set @num3cwnt=@loopval
end
else if @num4cwnt = -1 and (@prevval not between '0' and '9')
begin
set @numflag=4
set @num4cwnt=@loopval
end
if @numflag=1
begin
set @num1char=@num1char+@curval
end
else if @numflag=2
begin
set @num2char=@num2char+@curval
end
else if @numflag=3
begin
set @num3char=@num3char+@curval
end
else if @numflag=4
begin
set @num4char=@num4char+@curval
end
end
set @loopval=@loopval+1
set @prevval = @curval
end
set @patern = replace(@patern,'nnnnnnnnn','n')
set @patern = replace(@patern,'nnnnnnnn','n')
set @patern = replace(@patern,'nnnnnnn','n')
set @patern = replace(@patern,'nnnnnn','n')
set @patern = replace(@patern,'nnnnn','n')
set @patern = replace(@patern,'nnnn','n')
set @patern = replace(@patern,'nnn','n')
set @patern = replace(@patern,'nn','n')
set @num1 = convert (int,substring(@num1char,2,9))
set @num2 = convert (int,substring(@num2char,2,9))
set @num3 = convert (int,substring(@num3char,2,9))
set @num4 = convert (int,substring(@num4char,2,9))
set @opostrn=@opo1+@opo2+@opo3
--set @patern = '((non)o(non))'
if @patern = '(((non)on)on)'
begin
set @sum1 =0
set @sum1 = @num1
set @sum1 =
case
when @opo1 = '*' then @sum1*@num2
when @opo1 = '+' then @sum1+@num2
when @opo1 = '-' then @sum1-@num2
when @opo1 = '/' then @sum1*1.00000000000000000000/@num2
end
set @sum2 =
case
when @opo2 = '*' then @sum1*@num3
when @opo2 = '+' then @sum1+@num3
when @opo2 = '-' then @sum1-@num3
when @opo2 = '/' then @sum1*1.0000000000000000000/@num3
end
if @opo3 = '/' begin
set @sum5 = @sum2 % @num4
set @sum6 = (@sum2 - @sum5 )*1.00000000000000000000/@num4
set @sum4 = @sum5*1.00000000000000000000/@num4
set @ret2 =
right(' '+convert( varchar(14), cast(@sum6 as bigint) ) ,14) +
--cast (cast (@sum6 as int ) as varchar(15) ) +
substring( cast (@sum4 as varchar(43) ),2, 42)
end
else begin
set @sum1 =
case
when @opo3 = '+' then @sum2+@num4
when @opo3 = '*' then @sum2*@num4
when @opo3 = '-' then @sum2-@num4
end
set @ret2 = cast (@sum1 as bigint )
end
end
else if @patern = '((non)o(non))'
begin
set @sum1 =
case
when @opo1 = '*' then @num1*@num2
when @opo1 = '+' then @num1+@num2
when @opo1 = '-' then @num1-@num2
when @opo1 = '/' then @num1*1.0000000000000000000/@num2
end
set @sum2 =
case
when @opo3 = '*' then @num3*@num4
when @opo3 = '+' then @num3+@num4
when @opo3 = '-' then @num3-@num4
when @opo3 = '/' then @num3*1.0000000000000000000/@num4
end
set @sum3 =
case
when @opo2 = '*' then @sum1*@sum2
when @opo2 = '+' then @sum1+@sum2
when @opo2 = '-' then @sum1-@sum2
when @opo2 = '/' then @sum1*1.0000000000000000000/@sum2
end
end
else if substring(@patern , 1 ,len(@patern) ) = '(no((non)on))'
begin
set @sum1 =
case
when @opo2 = '*' then @num2*@num3
when @opo2 = '+' then @num2+@num3
when @opo2 = '-' then @num2-@num3
when @opo2 = '/' then @num2*1.00000000000000000000/@num3
end
set @sum2 =
case
when @opo3 = '*' then @sum1*@num4
when @opo3 = '+' then @sum1+@num4
when @opo3 = '-' then @sum1-@num4
when @opo3 = '/' then @sum1*1.00000000000000000000/@num4
end
set @sum3 =
case
when @opo1 = '*' then @num1*@sum2
when @opo1 = '+' then @num1+@sum2
when @opo1 = '-' then @num1-@sum2
when @opo1 = '/' then @num1*1.00000000000000000000/@sum2
end
end
else if substring(@patern , 1 ,len(@patern) ) = '((no(non))on)'
begin
set @sum1 =
case
when @opo2 = '*' then @num2*@num3
when @opo2 = '+' then @num2+@num3
when @opo2 = '-' then @num2-@num3
when @opo2 = '/' then @num2*1.00000000000000000000/@num3
end
set @sum2 =
case
when @opo1 = '*' then @num1*@sum1
when @opo1 = '+' then @num1+@sum1
when @opo1 = '-' then @num1-@sum1
when @opo1 = '/' then @num1*1.00000000000000000000/@sum1
end
set @sum3 =
case
when @opo3 = '*' then @sum2*@num4
when @opo3 = '+' then @sum2+@num4
when @opo3 = '-' then @sum2-@num4
when @opo3 = '/' then @sum2*1.00000000000000000000/@num4
end
end
else if substring(@patern , 1 ,len(@patern) ) = '(no(no(non)))'
begin
set @sum1 =
case
when @opo3 = '*' then @num3*@num4
when @opo3 = '+' then @num3+@num4
when @opo3 = '-' then @num3-@num4
when @opo3 = '/' then @num3*1.00000000000000000000/@num4
end
set @sum2 =
case
when @opo2 = '*' then @num2*@sum1
when @opo2 = '+' then @num2+@sum1
when @opo2 = '-' then @num2-@sum1
when @opo2 = '/' then @num2*1.00000000000000000000/@sum1
end
set @sum3 =
case
when @opo1 = '*' then @num1*@sum2
when @opo1 = '+' then @num1+@sum2
when @opo1 = '-' then @num1-@sum2
when @opo1 = '/' then @num1*1.00000000000000000000/@sum2
end
end
RETURN (@ret2)
END
GO
To stop it changing size when you resize the window, you need to make the CSS responsive.
Check your ISR numbers / addresses.
avr-gcc uses vector number 9 for TIMER0_OVF, which is word address 0x9, i.e. byte address 0x12. So the next address is invalid, since there is no code for ISR 9.
Why are you using magic numbers to begin with? Isn't there a better, symbolic way in the IDE you are using?
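For what it's worth, avr-gcc itself does provide a symbolic way: the ISR() macro from <avr/interrupt.h> places the handler at the correct vector address for the selected MCU, so no magic numbers are needed (a sketch for an AVR target, not compilable on a host machine):

```c
#include <avr/interrupt.h>

/* The vector name resolves to the correct vector address for the MCU
   selected with -mmcu, so the address bookkeeping is done for you. */
ISR(TIMER0_OVF_vect)
{
    /* handle the Timer0 overflow here */
}
```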
Be sure the profile is allowed to get the license/key (see picture below).
P.S. I think Carlos Toledo's answer means the same thing.
If you want to have the configuration in another file you would use:
importProvidersFrom(
  LoggerModule.forRoot(undefined, {
    configProvider: NGX_FACTORY_PROVIDER,
    ruleProvider: NGX_RULE_PROVIDER,
  }),
),
The first argument refers to the default config; since I'm using a configuration provider, it is left undefined.
In my config file I have:
export const NGX_RULE_PROVIDER: ClassProvider = {
  provide: TOKEN_LOGGER_RULES_SERVICE,
  useClass: LoggerRules,
}

export const NGX_FACTORY_PROVIDER: FactoryProvider = {
  provide: TOKEN_LOGGER_CONFIG,
  useFactory: loggerConfigFactory,
  deps: [ConfigService],
}
The answer proposed by @Shawn works; the trick is to use [read stdin] rather than [gets stdin].
Correct content for lint.tcl:
#! /usr/bin/tclsh
set a [split [read stdin] \n]
puts [llength $a]
The pipe command remains the same:
grep -ri --color -n WRN warnings.log | lint.tcl
base64body worked for me, but I wonder if it is possible to send more data, like:
"data_name": "Info.pdf", "ReceiveTime":"{{date '2020-11-17' '2020-11-17' 'yyyy-MM-dd HH:mm:ss'}}"
There is a great sample posted by Simon Mourier in the comment section that contains the answer https://github.com/smourier/XpsPrintSamples
SPMT is a decent starting point for migrating from SharePoint On-premise to SharePoint Online, especially for small to medium-sized migrations. It’s free, Microsoft-supported, and works well for basic content like document libraries, lists, and simple site structures. However, it has limitations—especially when it comes to migrating complex sites, permissions, metadata, workflows, and large volumes of data.
If your environment has customized features or a lot of content, third-party tools like ShareGate, Metalogix, or AvePoint can provide more flexibility, reporting, and smoother handling of advanced scenarios.
Another great alternative is Kernel Migration for SharePoint. It’s a powerful tool designed for SharePoint to SharePoint migrations—both On-prem to Online and tenant-to-tenant. It supports granular migration, preserves permissions and metadata, and handles large-scale moves with ease. Worth checking out if you're looking for a reliable, efficient migration experience.
To resolve the device mismatch error, you should let RLlib and PyTorch manage device placement automatically.
Layers are no longer explicitly moved with .to(self.device) during initialization.
Dynamic device detection is used on the input: self.device = input_dict["obs"].device.
Only the inputs in the forward method and values_out in value_function are moved to the model's device manually.
It's also important to override the forward and value_function methods, as suggested by @Marzi Heifari.
Here is the modified version:
import torch
import torch.nn as nn

from ray.rllib.models.torch.torch_modelv2 import TorchModelV2
from ray.rllib.utils.annotations import override, DeveloperAPI
from ray.rllib.models.modelv2 import ModelV2


@DeveloperAPI
class SimpleTransformer(TorchModelV2, nn.Module):
    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name)
        nn.Module.__init__(self)

        # Configuration
        custom_config = model_config["custom_model_config"]
        self.input_dim = 76
        self.seq_len = custom_config["seq_len"]
        self.embed_size = custom_config["embed_size"]
        self.nheads = custom_config["nhead"]
        self.nlayers = custom_config["nlayers"]
        self.dropout = custom_config["dropout"]
        self.values_out = None
        self.device = None

        # Input layer
        self.input_embed = nn.Linear(self.input_dim, self.embed_size)

        # Positional encoding
        self.pos_encoding = nn.Embedding(self.seq_len, self.embed_size)

        # Transformer
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model=self.embed_size,
                nhead=self.nheads,
                dropout=self.dropout,
                activation='gelu'),
            num_layers=self.nlayers
        )

        # Policy and value heads
        self.policy_head = nn.Sequential(
            nn.Linear(self.embed_size + 2, 64),  # add dynamic features (wallet balance, unrealized PnL)
            nn.ReLU(),
            nn.Linear(64, num_outputs)  # action space size
        )
        self.value_head = nn.Sequential(
            nn.Linear(self.embed_size + 2, 64),
            nn.ReLU(),
            nn.Linear(64, 1)
        )

    @override(ModelV2)
    def forward(self, input_dict, state, seq_lens):
        self.device = input_dict["obs"].device
        x = input_dict["obs"].view(-1, self.seq_len, self.input_dim).to(self.device)
        dynamic_features = x[:, -1, 2:4].clone()
        x = self.input_embed(x)
        position = torch.arange(0, self.seq_len, device=self.device).unsqueeze(0).expand(x.size(0), -1)
        x = x + self.pos_encoding(position)
        transformer_out = self.transformer(x)
        last_out = transformer_out[:, -1, :]
        combined = torch.cat((last_out, dynamic_features), dim=1)
        logits = self.policy_head(combined)
        self.values_out = self.value_head(combined).squeeze(1)
        return logits, state

    @override(ModelV2)
    def value_function(self):
        return self.values_out.to(self.device)
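The core idea is independent of RLlib and can be sketched in plain PyTorch. The DeviceAgnostic module below is hypothetical (not part of the answer's code); it only illustrates the pattern of following the input tensor's device instead of pinning layers at init time:

```python
import torch
import torch.nn as nn

class DeviceAgnostic(nn.Module):
    """Minimal sketch of the pattern above: never pin layers to a device
    at construction time; follow the device of whatever input arrives."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        # Lazily move parameters to wherever the input tensor lives
        self.linear.to(x.device)
        return self.linear(x)

model = DeviceAgnostic()
out = model(torch.randn(3, 4))  # CPU input -> CPU output
```

The same call works unchanged for a CUDA input, which is why the RLlib model above never needs to know up front which device the rollout workers or the learner will use.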