In the file C:\Program Files\Microchip\xc8\v2.31\avr\avr\include\avr\sfr_defs.h you can find the definitions used:
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
Posting this for Docker Desktop for Mac, which runs Docker inside a Linux virtual machine (VM). The accepted answer does not work there, since those storage-driver files are inside that VM and not on the Mac itself.
Here, the approach is to delete that VM and have Docker Desktop for Mac recreate it when it restarts.
The steps for this are...
Quit Docker Desktop For Mac
Find the proper <vm-number> (e.g. 0 )...
ls -al ~/Library/Containers/com.docker.docker/Data/vms/
Move/Backup the VM...
mv ~/Library/Containers/com.docker.docker/Data/vms/<vm-number>/data/Docker.raw ~/Desktop/Docker.raw.backup
Restart Docker Desktop For Mac
I created a Gist on this citing this post
AhmadHamze's solution worked for me. (I couldn't comment, so I'm posting this as an answer, sorry.)
FetchOptions has a RecordCountMode property to control the way RecordCount is calculated.
How are you accessing the app?
Via php artisan serve → http://127.0.0.1:8000?
Or directly through Apache → http://localhost/myproject/?
Set the workspace file name to a substitution string reference in the Header Template. Also, since I have text next to the img, it was easier to add the text to the SVG file that contained the image.
After the above error messages I decided my python installation was cursed.
Following this answer, more or less: https://apple.stackexchange.com/questions/284824/remove-and-reinstall-python-on-mac-can-i-trust-these-old-references
I did
brew uninstall --ignore-dependencies --force python
and
brew uninstall --ignore-dependencies --force [email protected]
and
brew uninstall --ignore-dependencies --force python@3
and then just
brew install python
Now I can make virtual environments just fine.
How do I use that? Is that C++? Julia's println function already inserts a new line.
Hopefully this helps someone in the future, but if you use setUrgent() on the PutDataMapRequest object, it fixes the issue.
It's because the chunk is named setup.
Try {r} without setup or {r another_chunk_name}.
This is now resolved: The solution was to set up a SQL alert to send an email notification if x < 0. The content of a SQL alert can be customised.
If you are still looking for it, you should use the resource "databricks_access_control_rule_set".
Something like this:
resource "databricks_service_principal" "sp" {
  display_name = "ndp-sp-${var.project}-${lower(var.env)}"
}

resource "databricks_access_control_rule_set" "automation_sp_rule_set" {
  name = "accounts/{account-id}/servicePrincipals/${databricks_service_principal.sp.application_id}/ruleSets/default"

  grant_rules {
    principals = [data.databricks_group.admin_group.id]
    role       = "roles/servicePrincipal.manager"
  }
}
I had a similar issue; you can delete the .ninja_deps file from your build folder and this should fix it. In my case I am using CMake to generate my build system, with the generator set to Ninja.
Can you post any of your attempts, and what specific issue(s) there were with each?
Honestly this usually happens when Docker suddenly updates itself in the background or the daemon gets bumped to a newer version while your local client or whatever library you use (TestContainers in this case) is still trying to talk with an older API version.
Docker doesn’t always keep the client-server API versions in perfect sync, so if the daemon jumps to something like 1.44 and your TestContainers setup is still locked on 1.32, it just refuses the call and you get that “client version too old” thing.
Most of the time it’s either:
Docker Desktop auto-updated
You pulled a new Docker engine version (especially on Linux)
Or TestContainers is pinning an old API version and hasn’t refreshed its mapping yet
Fix is usually boring: update your Docker client or update TestContainers to the latest version so both sides speak the same API level.
Nothing magical: just a version mismatch after an auto-update.
Nope, a precise number of colors is the main requirement for me. I have a graphics editor where I need to change the colors of an image, and it is very important to have fewer colors than my maximum.
If you don't know the map's key, you can simply iterate over the map and extract the required value.
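A minimal sketch of that idea in Python (the dict contents and the helper name are made up for illustration):

```python
# Find the key whose value matches a target by iterating over the
# map's entries, since we cannot look the value up directly by key.
prices = {"apple": 3, "banana": 5, "cherry": 7}

def key_for_value(mapping, wanted):
    for key, value in mapping.items():
        if value == wanted:
            return key
    return None  # no entry held the wanted value

print(key_for_value(prices, 5))
```

The same pattern applies in most languages with map/dict types; it is a linear scan, so for large maps a reverse index may be worth building once instead.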
@C3roe the part where the programmable microprocessor (microcontroller) is programmed?
Your original 10-colour PNG source will have additional colours created by antialiasing when the SVG image is rendered for display. Without that concession the edges of lines would be visibly jagged. Surely you only really care about the file-size/quality tradeoff rather than the exact number of colours in the image.
The only solution that worked for me (Windows 11, VS Code 1.106.3, SQL) was manually creating the foldable section (highlight the desired section, Ctrl+K Ctrl+,) and removing it (Ctrl+K Ctrl+.).
As this is a university project, I would like to focus on making correct architectural decisions and structuring my ASP.NET Core backend in a clean and maintainable way.
Would ASP.NET Core MVC with Razor Views be sufficient for this type of portal, or would a Web API approach be more appropriate?
Additionally, what architectural style (e.g. layered architecture, Clean Architecture) would you recommend for such a system in order to remain maintainable and scalable?
What would be a recommended way to organize the solution and its layers (e.g. Controllers → Application Services → Domain → Infrastructure)?
Also, where should business rules such as application state transitions (e.g. approve/reject) and payment processing logic ideally reside, and how should these be separated from the MVC layer?
So how does one create bit-fielded data structures in Python? Or do I literally need an int for every bit and then combine these into another int with shifting? I need to manage CAN bus communication where individual bits have individual meanings.
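One common approach is ctypes bit-fields overlaid with the raw byte via a union. This is a sketch with invented field names and widths, not a specific CAN library:

```python
import ctypes

class CanFlags(ctypes.LittleEndianStructure):
    # Bit layout (first field = least significant bits on typical
    # little-endian platforms); names/widths are made up for illustration.
    _fields_ = [
        ("engine_on", ctypes.c_uint8, 1),
        ("door_open", ctypes.c_uint8, 1),
        ("mode",      ctypes.c_uint8, 3),
        ("reserved",  ctypes.c_uint8, 3),
    ]

class CanByte(ctypes.Union):
    # Overlay the named bit-fields with the raw byte that goes on the bus.
    _fields_ = [("bits", CanFlags), ("raw", ctypes.c_uint8)]

frame = CanByte()
frame.bits.engine_on = 1
frame.bits.mode = 5
print(f"raw byte: {frame.raw:#04x}")
```

Plain integers with shifts and masks (e.g. `(raw >> 2) & 0b111`) also work and avoid relying on compiler-style bit-field layout, which is the safer choice when the exact wire layout matters.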
It's not an assumption, it's a fact. After creating the SVG image I use the function getOriginalColors to retrieve all colors used in the SVG image. And I see that when I send maxColors = 10, it is possible to get 16. Based on the sharp docs, the colours parameter sets the number of palette entries (which, to my mind, means the number of colors). I checked the image visually after the sharp conversion and it looks fine. The vectorization also looks nice visually, but when I counted the colors used in the SVG, the result was bigger than my value (10). Mostly, for max = 10 it gives me 16 colors.
Write a service unit for systemd and put it into the system unit directory (/etc/systemd/system).
As long as you execute it under a user (even root), you will be restricted.
If you click on it a window (Host Details) comes up.
You can see the whole output in "Data" tab.
After adding the IronSource SDK (whether installed via the package manager's Ads Mediation or as a downloaded SDK), the generated iOS build will create a Unity-iPhone.xcworkspace file.
You should open the Unity-iPhone.xcworkspace file instead of Unity-iPhone.xcodeproj. This Unity-iPhone.xcworkspace contains Unity-iPhone.xcodeproj and Pods.xcodeproj.
I would not be so sure "Sched + Pickups Week 1" counts for 1 workbook only
@Atmo, I'm sorry, but I cannot understand this count.
I thought you wanted to avoid having to rewrite it as 8 to 15 in the strgen function.
I found it: somewhere, several "allow from" directives were hidden behind includes of includes and bypassed the user-specific settings ... I hope whoever did all that to the system has retired by now ... grmpf ...
The admin page behind this server was completely open to any domain user (several thousand).
Just a note: you describe the data architecture of a particular application. This is not what fully defines the architecture used for the implementation. There are many other factors. These two things are not directly related.
The data structure you described is nothing special. It is very simple and can be expressed in many different ways. You can expect some advice in general, but you cannot expect that someone infers the implementation architecture pattern from your data structure.
What you’re seeing is normal behavior for phone GPS, especially:
In a small area (like a sculpture garden),
With offline positioning (no Wi-Fi / network assistance),
And using high-frequency updates with BestForNavigation.
A few key points about GPS on mobile devices:
Typical real-world horizontal accuracy is 3–10 meters under good outdoor conditions. Near buildings, trees, or indoors, it easily gets worse.
Even if the user is completely still, the reported latitude/longitude will wander inside an error circle defined by coords.accuracy.
That means in a small map (say 30–50 m across), the natural noise of GPS can be a large percentage of the whole area. So the marker looks like it’s “jumping all over” even though it’s just wandering within the uncertainty bubble.
So there is nothing “wrong” with Expo Location itself. You’re just seeing the raw limitations of consumer GPS very clearly because your map is small and your update settings are aggressive.
You’re doing some reasonable things already (UTM transform, simple smoothing), but a few details make the jitter very visible instead of hiding it.
subscription = await Location.watchPositionAsync(
  {
    accuracy: Location.Accuracy.BestForNavigation,
    timeInterval: 500,
    distanceInterval: 0.5,
  },
  (newLocation) => {
    setLocation(newLocation);
  }
);
Problems here:
BestForNavigation is designed to be responsive, not perfectly stable.
timeInterval: 500 ms and distanceInterval: 0.5 m means you’re asking for very frequent, very fine-grained updates.
At that resolution, you see every tiny fluctuation in the GPS solution as a “movement”.
You’re essentially subscribing to noise in high definition.
const acc = location.coords.accuracy;
if (acc && acc > 8) return;
You’re discarding any reading where the accuracy is worse than 8 m. In many real environments, values below 8 m are rare. So what happens?
You ignore a lot of readings.
You occasionally accept “good” ones that might still be off by several meters.
That makes your smoothed position jump from one “good” estimate to another instead of drifting slowly.
setSmoothedUtm(prev => {
  if (!prev) return { x: rawUtm.x, y: rawUtm.y };
  const dx = rawUtm.x - prev.x;
  const dy = rawUtm.y - prev.y;
  const movement = Math.sqrt(dx * dx + dy * dy);
  if (movement < 0.3) return prev;
  const alpha = 0.8;
  return {
    x: prev.x * (1 - alpha) + rawUtm.x * alpha,
    y: prev.y * (1 - alpha) + rawUtm.y * alpha
  };
});
Two main issues:
A dead-zone of 0.3 m is meaningless when GPS noise is 3–10 m. Almost every jitter is bigger than 0.3 m, so almost everything is treated as “real movement”.
alpha = 0.8 heavily favors the new (noisy) value:
new = 20% old + 80% new → that’s closer to “forward the noise” than “smooth it”.
So the filter doesn’t do much to hide the GPS wandering.
You cannot turn GPS into a centimeter-accurate indoor tracker with code. But you can:
Reduce how much jitter the user sees.
Make your routing logic robust to noise.
Align UX with the actual accuracy you have (meters, not centimeters).
Below are practical changes that work within the limitations.
Relax watchPositionAsync so you don’t hammer the GPS and you don’t get spammed with tiny fluctuations.
For example:
subscription = await Location.watchPositionAsync(
  {
    accuracy: Location.Accuracy.High, // or Location.Accuracy.Balanced
    timeInterval: 2000,               // at most once every 2 seconds
    distanceInterval: 2,              // only if moved ~2 meters
  },
  (newLocation) => {
    setLocation(newLocation);
  }
);
This does two things:
Reduces CPU/battery.
Prevents your UI from reacting to every sub-meter wobble.
In many cases, High or Balanced gives more stable behavior than BestForNavigation for walking around at low speed.
Next, make the smoothing logic match real GPS behavior:
Accept readings up to ~15–20 m accuracy (beyond that, ignore).
Ignore position changes smaller than 2–3 m as noise.
Use a small alpha (0.1–0.3) so the filtered position moves slowly.
Example:
useEffect(() => {
  if (!rawUtm || !location) return;
  const acc = location.coords.accuracy ?? 999;
  // Ignore very noisy readings
  if (acc > 20) return;
  setSmoothedUtm(prev => {
    if (!prev) {
      return { x: rawUtm.x, y: rawUtm.y };
    }
    const dx = rawUtm.x - prev.x;
    const dy = rawUtm.y - prev.y;
    const movement = Math.sqrt(dx * dx + dy * dy);
    // Ignore small movements that are likely just jitter
    if (movement < 2) {
      return prev;
    }
    // Strong smoothing: mostly keep the old position
    const alpha = 0.2; // 20% new, 80% old
    return {
      x: prev.x * (1 - alpha) + rawUtm.x * alpha,
      y: prev.y * (1 - alpha) + rawUtm.y * alpha,
    };
  });
}, [rawUtm, location]);
Now:
The marker will not “twitch” for tiny fluctuations.
Larger moves (user actually walking) will slowly pull the marker toward the new position.
You still respect accuracy constraints, but you’re not unrealistically strict.
You currently trigger arrival when:
if (distanceToDestination !== null && distanceToDestination < 3) {
  Alert.alert('Arrived!', ...)
}
If your GPS accuracy is ±5–10 m, checking for < 3 m is optimistic. The position might easily jump between 2 m and 8 m from the sculpture while the user stands in the same place.
Use a larger radius (e.g. 7–10 m) and maybe require the distance to stay under that threshold for a couple of consecutive readings:
const ARRIVAL_RADIUS = 8; // meters
useEffect(() => {
  if (distanceToDestination == null || selectedDestination == null) return;
  if (distanceToDestination < ARRIVAL_RADIUS) {
    Alert.alert('Arrived!', `You reached ${TEST_POSITIONS[selectedDestination].label}`);
    setSelectedDestination(null);
  }
}, [distanceToDestination, selectedDestination]);
It’s better UX to say “you’re here” a bit early than never say it because of GPS jitter.
You’re mapping UTM → pixels based on sculpture coordinates that are very close together. When your entire site is, say, 30 m wide and your screen is 300 px wide:
1 meter ≈ 10 pixels.
Normal GPS noise of 5–10 m → 50–100 pixels of jump.
That’s why the dot looks like it’s flying around.
There is no way around this from GPS alone. What you can do:
Visually de-emphasize the exact dot and focus on a “you’re roughly here” indicator (e.g. circle radius = accuracy).
Optionally snap the user’s position to the nearest path or nearest sculpture when they’re within a certain distance. That’s a UX trick: you’re acknowledging that the exact GPS coordinate is fuzzy and instead projecting them to a discrete point that makes sense for your map.
For a small offline map, if you need sub-meter or 2–3 m reliable precision, you won’t get it on regular phones with only GPS, especially offline and near structures.
Real alternatives (if this were a production problem):
BLE beacons / UWB anchors indoors.
QR / NFC markers near sculptures that the user scans to “locate” themselves.
Letting the user manually mark their approximate location on the map as a reset.
That’s beyond Expo Location and outside pure code solutions.
Your device isn’t “broken”; you’re just seeing normal GPS error at a very small scale.
The combo of BestForNavigation, high-frequency updates, very strict accuracy filter, and weak smoothing makes the jitter extremely visible.
Use:
Less aggressive watchPositionAsync settings,
Realistic accuracy + movement thresholds,
Stronger smoothing,
A larger arrival radius, and
UI that communicates “approximate location”, not exact centimeters.
You can’t make GPS behave like a laser pointer, but you can make the experience look stable and usable for an offline sculpture map.
In my case I just changed
_userManager.Setup(r => r.GetRolesAsync(newlyCreatedUser)).ReturnsAsync(x => oldRolesThatUserHad);
into
_userManager.Setup(r => r.GetRolesAsync(newlyCreatedUser)).ReturnsAsync(oldRolesThatUserHad); //omit the lambda
The 403 errors are caused by ModSecurity Rule ID 942440 incorrectly flagging encrypted cookie content as an SQL injection attempt. To fix this, whitelist Rule ID 942440 in your Plesk ModSecurity settings.
Rule ID 942440 is a ModSecurity rule titled "SQL Comment Sequence Detected" ([msg "SQL Comment Sequence Detected"]).
It is designed to detect patterns within requests that resemble SQL comments (like -- or ---), which are often used in SQL injection attacks.
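Assuming Apache with ModSecurity, the whitelist can be a one-line directive in your custom ModSecurity configuration; in Plesk the same thing is typically done through the web application firewall settings, where you can list rule IDs to switch off:

```apache
# Disable CRS rule 942440 ("SQL Comment Sequence Detected") globally.
# Scoping it inside a <LocationMatch> block for the affected paths is
# safer than disabling it site-wide.
SecRuleRemoveById 942440
```

Prefer disabling the single rule rather than lowering the overall paranoia level, so the rest of the rule set stays active.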
If your text written to stdout doesn't end with \n, use std::flush to force a screen update.
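The same principle applies beyond C++; as a sketch for comparison, Python's stdout is also buffered and needs an explicit flush when the output does not end in a newline (the function and message here are made up):

```python
import sys

def show_progress(pct, out=sys.stdout):
    # Without a trailing "\n", line-buffered terminals would not display
    # this text until the buffer fills; flush() forces it out immediately.
    out.write(f"\rprogress: {pct}%")
    out.flush()

show_progress(50)
```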
Hi heaxyh,
By increasing the CommandTimeout you will prevent the timeout and, consequently, the lock on the __EFMigrationsHistory table. This timeout applies to the migration transaction itself:
services.AddDbContext<XContext>((srv, options) =>
{
    options.UseSqlServer("ConnectionString", opts =>
    {
        opts.MigrationsAssembly("MigrationProject");
        opts.CommandTimeout(appSettings.CommandTimeoutInSeconds);
    });
});
kind regards,
I have a scenario, where I need to delete Cloud tasks.
I create the tasks with a custom name/id.
Under some circumstances I then check if that task is there; if it is, I delete it.
client.getTask(fullTaskName)
client.deleteTask(fullTaskName)
What part of this is supposed to be a programming question?
Three random digits may not be sufficient to prevent collisions, and detecting duplicates only at insertion time (using a primary key) could be too late. What happens if the system clock drifts? What if you have multiple application servers without perfectly synchronized clocks? Something like a Snowflake ID might be better: less precision on the time, but with a node/thread ID and a sequence number per node/thread rather than a small random suffix.
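A sketch of that suggestion (the bit layout and epoch are arbitrary choices for illustration, not a standard): 41 bits of milliseconds since a custom epoch, 10 bits of node ID, 12 bits of per-node sequence.

```python
import threading
import time

class SnowflakeGen:
    """Snowflake-style ID: [41-bit ms timestamp | 10-bit node | 12-bit seq]."""

    def __init__(self, node_id, epoch_ms=1609459200000):  # epoch: 2021-01-01
        if not 0 <= node_id < 1024:
            raise ValueError("node_id must fit in 10 bits")
        self.node_id = node_id
        self.epoch_ms = epoch_ms
        self.last_ms = -1
        self.seq = 0
        self.lock = threading.Lock()

    def next_id(self):
        with self.lock:
            now = int(time.time() * 1000) - self.epoch_ms
            if now == self.last_ms:
                self.seq = (self.seq + 1) & 0xFFF
                if self.seq == 0:
                    # Sequence exhausted in this millisecond: wait for the next one.
                    while now <= self.last_ms:
                        now = int(time.time() * 1000) - self.epoch_ms
            else:
                self.seq = 0
            self.last_ms = now
            # Note: a backwards clock jump is not handled in this sketch.
            return (now << 22) | (self.node_id << 12) | self.seq
```

IDs from one node are unique and monotonically increasing, and the embedded node ID keeps multiple unsynchronized servers from colliding.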
Get the parent branch name of your current branch with this:
git rev-parse --abbrev-ref @{-1}
In Windows PowerShell I had to escape the expression.
git rev-parse --abbrev-ref "@{-1}"
The accepted answer by @andreas-wolf is more thorough. I use this where every branch is created with git checkout -b feature, then I go back into main and merge if required.
From the Git Reference: gitrevisions
The construct @{-<n>} means the <n>th branch/commit checked out before the current one.
For future readers
OP probably faced type error inside the exact version of @types/node.
If possible, try to upgrade @types/node package instead of type casting or creating additional objects.
The Angular team hasn’t published what exact design tool was used for those specific SVGs and they don’t provide the original source files like .ai or .fig for assets in the repo. In practice, teams at Google typically use a mix of tools such as Figma, Adobe Illustrator, or Sketch to design vector assets, and then export them to SVG for use in the codebase. Since the SVGs are already in the repository, the intended workflow is to treat them as the source of truth rather than request the original .ai files.
If you want to work with or modify them, you can safely import the existing .svg files directly into Figma, Adobe Illustrator, or Inkscape and continue editing them there. All of these tools support round-trip SVG editing very well.
So the practical answer is: you don’t need the original .ai files. Just open the existing SVGs in Illustrator or Figma and you will be able to inspect and modify the layers, paths, groups, colors, and exported structure easily.
Use position: fixed for the burger menu to make it sticky; you may need to adjust the positioning.
The cleanest way I found, using Python 2.7:
import imp
import os

module = imp.load_source("myfile", os.path.join(latest, "myfile.py"))
print(module.myvariable)
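For readers on Python 3, where the imp module is deprecated and removed in 3.12, a sketch of the equivalent using importlib.util:

```python
import importlib.util

def load_source(name, path):
    """Python 3 replacement for imp.load_source(): load a module from a file path."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```

Usage mirrors the Python 2 snippet: `module = load_source("myfile", os.path.join(latest, "myfile.py"))`.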
Nowadays this is in the Configuration Center. The setting is called Screen Blanking and it can be found under Display.
For me, the following helped:
1. flutter upgrade
2. run the app through the command flutter run
As @jonrsharpe explained, the problem is that FileBackend does not return proper Task objects. Rather, it returns raw objects with the properties of a Task, which do not have any of the methods. I wrongly assumed the Task objects were automatically constructed thanks to the type hints, but this was not the case.
The solution would be to construct a Task object from the retrieved items. For example, defining a factory method like this (the dates are parsed using Date.parse(), as per the MDN documentation):
public static tryFromRawObject(obj: {id?: number, name: string, status: string, creationTime: string, lastModificationTime: string}): Task {
  let parsedCreationTime = Date.parse(obj.creationTime);
  if (isNaN(parsedCreationTime)) {
    throw new TypeError(`Failed to construct Date object from value: creationTime=${obj.creationTime}`);
  }
  let parsedLastModificationTime = Date.parse(obj.lastModificationTime);
  if (isNaN(parsedLastModificationTime)) {
    throw new TypeError(`Failed to construct Date object from value: lastModificationTime=${obj.lastModificationTime}`);
  }
  if (!Object.values(TaskStatus).includes(obj.status as TaskStatus)) {
    throw new TypeError(`Failed to construct TaskStatus object from value: status=${obj.status}`);
  }
  return new Task({
    id: obj.id,
    name: obj.name,
    status: obj.status as TaskStatus,
    creationTime: new Date(parsedCreationTime),
    lastModificationTime: new Date(parsedLastModificationTime),
  });
}
And then invoke it from the TaskRepository:
private async readStream(): Promise<Array<Task>> {
  const tasks: Array<{id?: number, name: string, status: string, creationTime: string, lastModificationTime: string}> = [];
  tasks.push(...await this.backend.json());
  return tasks.map(Task.tryFromRawObject);
}
In case anyone else finds this, I ran into this same error, and solved it by:
(1) removing ninja, ninja.bat, ninja.py from depot_tools (I just put them into a temporary folder, so I could move them back if it didn't work)
(2) copying ninja.exe into depot_tools
When I ran the cmake Ninja command, I got all kinds of errors about certain files it wasn't able to find, but then running ninja aseprite afterward worked.
I don't fully understand why this works. The best I can tell, ninja / ninja.bat are wrapper scripts that call ninja.py, and ninja.py is a wrapper script that searches PATH for ninja.exe and calls it, so by copying ninja.exe there in the first place, you're kind of cutting out the middleman?
A full list of constexpr containers:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p3372r0.html
Cool - thank you for this helpful and working input, Giovanni Augusto (2024-09-21)!
For some reason I cannot comment on your comment, but only on OP...
ChatGPT made it into a toggle-PowerShell-Script for me:
# Toggle Light / Dark Mode Script
# Define registry paths
$personalizePath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"
$accentPath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Accent"
# Read current system theme (SystemUsesLightTheme)
try {
    $currentSystemTheme = Get-ItemPropertyValue -Path $personalizePath -Name SystemUsesLightTheme
} catch {
    # If the registry entry is missing, default to Light
    $currentSystemTheme = 1
    New-ItemProperty -Path $personalizePath -Name SystemUsesLightTheme -Value $currentSystemTheme -Type Dword -Force
    New-ItemProperty -Path $personalizePath -Name AppsUseLightTheme -Value $currentSystemTheme -Type Dword -Force
}
# Toggle logic
if ($currentSystemTheme -eq 1) {
    # Currently Light → switch to Dark
    $newValue = 0
    Write-Host "Switching to Dark Mode..."
} else {
    # Currently Dark → switch to Light
    $newValue = 1
    Write-Host "Switching to Light Mode..."
}
# Set System & Apps theme
New-ItemProperty -Path $personalizePath -Name SystemUsesLightTheme -Value $newValue -Type Dword -Force
New-ItemProperty -Path $personalizePath -Name AppsUseLightTheme -Value $newValue -Type Dword -Force
# Set Accent color (kept constant)
New-ItemProperty -Path $accentPath -Name AccentColorMenu -Value 0 -Type Dword -Force
New-ItemProperty -Path $acce
First check if nvcc is installed:
nvcc --version
If it is not installed, you can install it (the NVIDIA CUDA Toolkit) using this guide: https://developer.nvidia.com/cuda-downloads
1. Compile the CUDA (.cu) program:
nvcc my_cuda_program.cu -o my_cuda_program
2. Run the executable:
./my_cuda_program
Download Visual Color Theme Designer 2022. Create a new project from its template. Select the base theme "Dark".
Go to All elements and find the element named "Shell – AccentFillDefault".
Change the purple color to FF454545.
Press Apply. Restart VS and select your theme from the theme list.
You are done!
Actually GNU Emacs will have more of the ISPF/Edit functions.
In short, Uvicorn is a lightning-fast ASGI (Asynchronous Server Gateway Interface) server implementation for Python.
While frameworks like FastAPI (which I used in my project) define how your API handles requests (routing, validation, logic), they do not include a web server to actually listen for network requests and send responses. Uvicorn fills this gap by acting as the bridge between the outside world (HTTP) and your Python application.
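To make that bridge concrete, here is a minimal hand-written ASGI app (no FastAPI) that Uvicorn could serve; the module name in the run command is an assumption:

```python
# app.py: a minimal ASGI application. Uvicorn's job is to accept HTTP
# connections and translate each request into a call to this callable.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello"})

# Serve it with:  uvicorn app:app --port 8000
```

FastAPI apps are themselves ASGI callables, which is why `uvicorn main:app` works the same way for them.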
Can you drop the repo link or the full scripts here?
While the existing answers clearly explain how to update the intrinsic matrix for standard scaling/cropping operations, I want to add a related perspective: how to construct the intrinsics for a rendering projection matrix when using crop, padding, and scaling, so that standard rendering pipelines correctly project 3D objects onto the edited image.
When performing crop, padding, or scaling on images, the camera projection matrix needs to be adjusted to ensure that 3D objects are correctly rendered onto the modified image.
Pixel space vs. Homogeneous space
Pixel space (CV): Uses actual pixel coordinates, e.g., a 1920×1080 image has x ∈ [0,1920] and y ∈ [0,1080].
Homogeneous space (Graphics): Normalized coordinates where x ∈ [-1,1] and y ∈ [-1,1], regardless of image size.
This distinction affects how image augmentations influence projections. For example, adding padding on the right:
In pixel space, left-side pixels do not move.
In homogeneous space, the entire x-axis is compressed because the total width increased.
A camera intrinsic matrix contains four main parameters:
fx, fy: focal lengths along x and y axes
cx, cy: principal point offsets along x and y axes
Translation (cx, cy)
Only cropping/padding on the left/top affects the principal point.
Right/bottom operations have no effect.
Scaling (fx, fy)
In CV pixel space: only scaling changes fx/fy.
In homogeneous space: crop and padding also affect fx/fy, because padding changes the image aspect ratio, which changes the mapping to normalized [-1,1] coordinates.
Pixel space rules:
Cropping/padding:
cx, cy decrease by the number of pixels cropped from left/top
Right/bottom cropping has no effect
fx, fy remain unchanged
Scaling:
fx, fy multiplied by scale s
cx, cy multiplied by scale s
Homogeneous space rules:
Cropping/padding changes image aspect ratio → requires extra scaling compensation
Compute compensation factors:
sx = s * (original_width / padded_width)
sy = s * (original_height / padded_height)
fx_new = fx * sx
fy_new = fy * sy
cx_new = cx * sx
cy_new = cy * sy
Note: This compensation only adjusts for the normalized coordinate system and does not change physical camera parameters.
To compute FOV consistent with the original image:
fov_x = 2 * arctan((original_width * s) / (2 * fx_new))
fov_y = 2 * arctan((original_height * s) / (2 * fy_new))
This ensures that rendering with crop, padding, and scaling produces objects at the correct location and scale, without relying on viewport adjustments.
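The pixel-space and homogeneous-space rules above can be sketched as small helper functions (function and parameter names are mine, not from any particular library):

```python
import math

def adjust_intrinsics_pixel(fx, fy, cx, cy, crop_left=0, crop_top=0, s=1.0):
    """Pixel-space rules: left/top crops shift the principal point;
    uniform scaling by s multiplies all four parameters."""
    return fx * s, fy * s, (cx - crop_left) * s, (cy - crop_top) * s

def homogeneous_compensation(fx, fy, cx, cy, s, orig_w, orig_h, padded_w, padded_h):
    """Extra scaling for normalized [-1, 1] coordinates: padding changes
    the aspect ratio, so the intrinsics are compensated by the
    original/padded size ratios. This does not change the physical camera."""
    sx = s * (orig_w / padded_w)
    sy = s * (orig_h / padded_h)
    return fx * sx, fy * sy, cx * sx, cy * sy

def fov_from_intrinsics(fx_new, fy_new, orig_w, orig_h, s):
    """FOV consistent with the original image, per the formulas above."""
    fov_x = 2 * math.atan((orig_w * s) / (2 * fx_new))
    fov_y = 2 * math.atan((orig_h * s) / (2 * fy_new))
    return fov_x, fov_y
```

For example, cropping 100 px off the left and halving the image leaves the focal lengths halved and shifts the principal point before scaling it.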
"Sched + Pickups Week 1" was interpreted as the file name in the code mentioned in the question, for example "SCHED 11.30.25.xlsm".
In my case the problem was the version of the extension bundles
What I had to do is:
Locate the `host.json` of your function app
Update the
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0]"
  }
}
To
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
Try using onnxruntime 1.19.2; this solved the issue for me. I am using Python 3.11.4.
A program can check if a file descriptor is a TTY or not with isatty call.
It very likely checks if stdout is a tty.
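A quick way to see this from Python, where os.isatty wraps the same underlying check (the helper name is made up):

```python
import os
import sys

def describe_stream(fd):
    # os.isatty() reports whether the file descriptor refers to a
    # terminal device; programs use this to decide between interactive
    # output (colors, progress bars) and plain piped output.
    return "terminal" if os.isatty(fd) else "pipe/redirect"

print(describe_stream(sys.stdout.fileno()))
```

Running this directly in a terminal prints "terminal", while `python script.py | cat` prints "pipe/redirect", which is exactly the difference the program in question is reacting to.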
For those using Quarkus 3.x, it's possible with this option:
quarkus.swagger-ui.try-it-out-enabled=true
There are two ways to use the Data Mapper Runtime.
If you generate a data mapping dll in Liquid Studio, it will copy LiquidTechnologies.DataMapper.Runtime.dll from the Install folder. This is a .Net Framework 4.6.2 dll.
If you generate source code and compile your own data mapping dll, this will pull down the LiquidTechnologies.DataMapper.Runtime NuGet and use the appropriate LiquidTechnologies.DataMapper.Runtime.dll for the .Net version you are using (currently supported: .Net Framework 4.6.2, .Net 8.0, .Net 9.0, or .Net Standard 2.0). These targets are in line with the common Microsoft NuGets.
You can easily see which you are using by looking in the Bin folder. If LiquidTechnologies.DataMapper.Runtime.dll is 5MB then it is the .Net Framework 4.6.2 version from the install folder. The Nuget version is only 500KB as all of the dependencies are pulled down in separate Nugets.
The following video shows you how to generate your own data mapping dll using c# in Microsoft Visual Studio:
Generate a C# Data Mapping Project in Liquid Data Mapper
You’re focusing on the wrong layer.
What you see in Wireshark (headers in one TCP packet, body in another) is just TCP segmentation. From the HTTP point of view this is still a single request, and HTTP/1.1 explicitly allows the body to arrive in multiple segments. A server that only works when headers and body are in the same TCP packet is simply broken.
Because of that, there’s no supported way in .NET Framework to “force” HttpClient to put headers and body into one packet; the OS and network stack are free to split the data however they want.
What is different in your failing trace is this part of the headers:
Expect: 100-continue
Connection: Keep-Alive
With Expect: 100-continue, .NET sends the headers first, waits for a 100 Continue from the server, and only then sends the body. Your embedded device is sending a 400 instead of 100, so it’s not handling this handshake correctly.
You’ve already tried ServicePointManager.Expect100Continue = false, but with HttpClient on .NET Framework you should make sure this is set before creating the client, and also disable it on the request headers:
ServicePointManager.Expect100Continue = false;
var handler = new HttpClientHandler();
var client = new HttpClient(handler);
client.DefaultRequestHeaders.ExpectContinue = false;
using var payload = new MultipartFormDataContent();
payload.Add(new StreamContent(File.OpenRead(filename)), "file", Path.GetFileName(filename));
var response = await client.PostAsync(requestUrl, payload);
If the device still rejects the request after removing Expect: 100-continue, then the issue isn't packet splitting; it's simply that the device's HTTP implementation doesn't fully conform to HTTP/1.1. In that case your only real options are:
Fix/update the firmware / server on the device, or
Bypass HttpClient and use a raw Socket to manually send bytes that reproduce the exact request that you know works (like the curl request), but that’s a fragile workaround.
So, in short:
Is there any way to make .NET Framework 4.8 HttpClient send the file data directly in the initial request?
No, not reliably. You can (and should) disable Expect: 100-continue, but you can’t control how the HTTP request is split into TCP packets; the embedded server needs to handle segmented requests correctly.
With the given code, it will work fine if you use import lombok.Data; in ProductRequest.
The 400 Bad Request is most likely caused by an input type mismatch.
A sample JSON request body:
{
"name": "Laptop",
"price": 999.99
}
@Data is from Lombok. If you're still getting the error, please share your log details.
This turns out to be a compiler bug in gcc 14, fixed with gcc 14.3. The corresponding Bugzilla ticket is: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118849.
In its current state, the Keep API can only be used through domain-wide delegation, with service accounts or an OAuth client ID, and only for enterprise apps.
You can reach the setting following this path: Admin Console > Security > API Controls > Domain-wide Delegation
If you have more doubts on how to use or configure Domain-Wide delegation, you can follow this guide.
Cheers!
Reference: https://issuetracker.google.com/issues/210500028#comment4
main workbook : Including this macro
WB1 and WB1A-actual schedule : ?
WB2=Sched + Pickups Week 1 : this week
WB3=Sched + Pickups : the next weeks
I'm not quite sure what you want to do, but do you want to open WB2 or WB3 in the main workbook's macro?
| Execution date | WB2 | WB3 |
|---|---|---|
| 2025/11/29(Sat) | 11.30.25.xlsm | 12.07.25.xlsm |
| 2025/11/30(Sun) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/1 (Mon) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/2 (Tue) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/3 (Wed) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/4 (Thu) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/5 (Fri) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/6 (Sat) | 12.07.25.xlsm | 12.14.25.xlsm |
| 2025/12/7 (Sun) | 12.14.25.xlsm | 12.21.25.xlsm |
| 2025/12/8 (Mon) | 12.14.25.xlsm | 12.21.25.xlsm |
What if strgen is passed the number of weeks as an argument?
Function strgen(weeks As Long)
' weeks = 1 : the coming Sunday (this week's schedule)
' weeks = 2 : the Sunday after that (next week's schedule)
strgen = Format(Date - Weekday(Now(), 1) + 1 + (weeks * 7), "mm.dd.yy") & ".xlsm"
End Function
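If it helps to sanity-check the date arithmetic, here is the same computation as a Python sketch (the function name mirrors the VBA one; note that VBA's Weekday(..., 1) returns 1 for Sunday through 7 for Saturday):

```python
from datetime import date, timedelta

def strgen(weeks: int, today: date) -> str:
    # VBA: Format(Date - Weekday(Now, 1) + 1 + weeks * 7, "mm.dd.yy") & ".xlsm"
    # Map Python's weekday (Mon=0..Sun=6) to VBA's (Sun=1..Sat=7).
    vb_weekday = (today.weekday() + 1) % 7 + 1
    target = today - timedelta(days=vb_weekday - 1) + timedelta(weeks=weeks)
    return target.strftime("%m.%d.%y") + ".xlsm"

print(strgen(1, date(2025, 11, 29)))  # 11.30.25.xlsm, matching the table above
print(strgen(2, date(2025, 11, 29)))  # 12.07.25.xlsm
```

Running it against a few dates from the table above reproduces the expected filenames.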
How about doing it as follows in the macro you run?
Sub test()
Dim strDirPath As String
Dim strFilename1 As String
Dim strPath1 As String
Dim strFilename2 As String
Dim strPath2 As String
Dim strFilename As String
Dim strPath As String
Dim result As VbMsgBoxResult
strDirPath = "C:\Temp\Orginal Schedules\"
strFilename1 = "SCHED " & strgen(1)
strPath1 = strDirPath & strFilename1
strFilename2 = "SCHED " & strgen(2)
strPath2 = strDirPath & strFilename2
If Dir(strPath1) <> "" Then
If Dir(strPath2) <> "" Then
Select Case Weekday(Now(), 1)
Case 1 To 3
strFilename = strFilename1
Case 4
result = MsgBox("It is Wednesday. Do you want to open """ & strFilename1 & """", vbYesNo + vbQuestion, "Confirmation")
If result = vbYes Then
strFilename = strFilename1
Else
strFilename = strFilename2
End If
Case Else
strFilename = strFilename2
End Select
Else
strFilename = strFilename1
End If
ElseIf Dir(strPath2) <> "" Then
strFilename = strFilename2
Else
MsgBox "file does not exist.", vbOKOnly + vbCritical
Exit Sub
End If
strPath = strDirPath & strFilename
Workbooks.Open Filename:=strPath
Windows(strFilename).Activate
End Sub
CSS now has the "field-sizing" property.
input {
field-sizing: content;
}
https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/Properties/field-sizing
Still experimental as of November 2025, but it seems to work very well.
private void OntriggerEnter(Collider other)
{
    ItemWorld itemWorld = other.GetComponent<ItemWorld>();
    if (itemWorld != null)
    {
        // add to the player's inventory
        inventory.AddItem(itemWorld.GetItem());
        // destroy from world
        itemWorld.DestroySelf();
    }
}
OntriggerEnter should be OnTriggerEnter
There seems to be no official support for changing the border-radius on the <gmp-map> web component as of now.
Please file a feature request on the public issue tracker here and include a use-case scenario that would be reasonable for Engineers to check on this: https://issuetracker.google.com/issues/new?component=188853&template=787814
You can use "planDB" for this case with SQLite or SQLCipher (encrypted SQLite). It works on Windows, Linux, and macOS. I think it is a good option for comparing, and it even supports bidirectional patch generation (source-to-target and target-to-source). You can get it from "www.planplabs.com" or "github.com/planp1125-pixel/plandb_mvp/releases"
Since the error happens when Node tries to parse [...]/node_modules/mongodb/lib/collection.js:24:18, there is a good chance that there is a misconfiguration somewhere. Check whether the project's setup is compatible with the mongodb version. In my experience, JS parsing errors inside well-established modules usually happen because of misconfigurations. - Iguatemi CG
As per the author's comment, they fixed the issue by downgrading to the following versions:
node - from v22.20.0 to v20.19.5
mongoose - from 8.19.1 to 8.6.0
In my case the problem was that my application.properties encoding was ANSI. After changing the encoding to UTF-8, Eclipse removed the alert automatically.
Thanks for posting your solution - it really helped! For my site, I had to go up from the default 1.0 Minimum TLS on Cloudflare to 1.1 and it appears to have resolved my issue.
So, Google's AI mode suggested using the Table.Buffer command for the source queries; I applied it to the source queries in my flow and it seems to have worked.
I remain unimpressed - my queries were not (IMHO) complex and there was not a huge amount of data involved either (<3,000 rows and ~30 fields).
Buffer the Table (Advanced Workaround): In some advanced scenarios where sorting at the end doesn't work, using Table.Buffer in the M language can force Power Query to load the entire table into memory and maintain order, but this has performance implications. For most cases, a final explicit Sort step is sufficient.
I just had the same problem on an old project.
Here is what worked for me:
$response = new \Symfony\Component\HttpFoundation\JsonResponse();
$response->setEncodingOptions(JSON_UNESCAPED_UNICODE);
$response->setData($ret);
return $response;
You can edit VSCode built-in markdown.css stylesheet directly:
Windows:
C:\Program Files\Microsoft VS Code\resources\app\extensions\markdown-language-features\media\markdown.css
macOS:
/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/markdown-language-features/media/markdown.css
Linux:
/usr/share/code/resources/app/extensions/markdown-language-features/media/markdown.css
The official doc answering the question (for later searches) can be found here: https://docs.aws.amazon.com/sns/latest/dg/subscription-filter-policy-constraints.html#subscription-filter-policy-payload-constraints
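For context, a minimal subscription filter policy of the kind those constraints apply to looks like this (the attribute names here are purely illustrative):

```json
{
  "store": ["example_corp"],
  "price_usd": [{ "numeric": [">=", 100] }]
}
```

SNS delivers a message to the subscription only if every attribute listed in the policy matches; the linked page covers the size and nesting limits such policies must satisfy.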
setup do
@user = users(:one)
sign_in_as(@user)
end
You have not enabled billing on your project which is causing this error. You must enable Billing on the Google Cloud Project through this link.
Follow the instructions in the documentation on how to create a billing account: https://developers.google.com/maps/get-started#create-billing-account
and please open a support case for further inquiries.
I did manage to find a solution based on the formattable function from the question and the concept from this StackOverflow answer:
template<typename T>
consteval bool formattable()
{
auto fmt_parse_ctx = std::format_parse_context(".2");
return std::formatter<T>().parse(fmt_parse_ctx) == fmt_parse_ctx.end();
}
template<typename T> concept Formattable = requires
{
typename std::type_identity_t<int[(formattable<T>(),1)]>;
};
static_assert(Formattable<double>);
static_assert(!Formattable<int>);
This worked in Ubuntu Gnome: -Dglass.gtk.uiScale=4.0
It made GUI 4x bigger.
More detailed answer: https://stackoverflow.com/a/79832355/20340543
You can achieve this by overriding the row expansion logic manually within your row or cell click handler.
Add this:
onClick: () =>
table.setExpanded({
[row.id]: !row.getIsExpanded(),
})
This is a solution I found in the GitHub discussion linked here.
I faced the same problem.
The project uses Hibernate 7.1.2.Final, so you don't need to specify the dialect manually, but the error persisted.
The problem was that I forgot to put a colon in front of the two slashes.
Thus, make sure that the JDBC_DATABASE_URL is specified correctly.
If you are working with PostgreSQL, then the path will be as follows:
jdbc:postgresql://${HOSTNAME}:${DB_PORT}/${DATABASE}?password=${PASSWORD}&user=${USERNAME}
The example would look like this:
jdbc:postgresql://localhost:5432/taskdb?password=qwerty123&user=sa
https://github.com/DevilForDevs/YtMuxerKt/blob/master/src/main/kotlin/webm/WebMParser.kt
This is a tiny WebM parser that can give you any sample count (6, 2, or whatever), automatically switching clusters. Each sample has an absolute offset that you can use, together with the sample size, to copy it.
@John, I also thought about this, but the OP asked for the "simplest way". Your approach works but needs many lines of code and comes with some problems: if you change the size of the axis by adjusting your figure window, the text won't adjust its font size as it does with heatmap. And I guess that with surf plus many text objects, this solution is not very performant for large matrices.
I started using PancakeView back in Xamarin.Forms, when even the Frame class didn't exist. Back then it was the fastest way to achieve a cross-platform view with corner radius.
Right now there is no need to use third-party libraries, since MAUI provides everything you need to achieve what PancakeView provided back in the day. Even more so if you are looking for stability and long-term support.
Friend, please tell me the solution you used.
Whenever I hear someone mention variadic templates and virtual functions in one sentence, I think: we probably have a design issue here. Most designs either use static polymorphism (i.e., you really know the types you are going to support) or virtual functions plus inheritance. So which category are you in? If you know all the types you are going to support at compile time, look for a template-only solution; otherwise, look for an inheritance-based one.
They might use inkscape, a free and open source vector graphics software as well:
https://inkscape.org/
All pyproject.toml requirements (the dependency version constraints published on PyPI) shall be honored prior to uv transferring control to your app.
That wasn't clear to me. Thanks for pointing that out. I've tested it by replacing uvx streamlit with streamlit and it seems to work just as well.
You haven't cited any streamlit documentation that talks about a "run" verb which will locate and download a foobarbaz pypi package, and I certainly don't recall having come across such a thing.
streamlit run app.py is the default idiom AFAIK (source).
Please publish the URL of the GitHub repo where you are exploring these issues.
This seems to work now. Thanks a lot!
I am facing the same issue. Did anyone manage to resolve it?
Just add this inside your activity:
HttpsTrustManager.allowAllSSL();
HEIC is based on the HEIF/HEVC codec, and many browsers and OS environments still do not ship a native decoder.
That's why browser APIs like FileReader and canvas, and libraries like Pillow or Sharp, may fail to load HEIC images directly.
If you want to convert HEIC → PNG programmatically:
- Python: install pillow-heif and convert through Pillow
- Node.js: use Sharp with libvips HEIF support enabled
- iOS: use CGImageSource / CIImage to re-encode as PNG
- Android: HEIC decoding requires API 28+ or ImageDecoder
If you only need a quick conversion without installing libraries, a browser-based tool works well:
(Conversion happens on the client side; no uploads required.)
If you're getting the same error on iOS, open your iOS project in Xcode using this command:
open ios/Runner.xcworkspace
In Xcode, go to:
Runner → Signing & Capabilities
Check whether Push Notifications capability is added.
Make sure the capability is enabled for both Debug and Release modes.
In my case, it was only enabled for Debug, and that caused the error.
After enabling it for Release as well, everything started working correctly.
Hope this helps you too!
Since the comment button doesn't work on this site, I have no choice but to respond to my own question. Let this be a comment on John Bollinger's answer. At the same time, I'll expand a bit on my concerns regarding the original question.
By default, global variables without initialization are assigned the value 0.
In my example, the value of all elements of the 'buffer' array is 0 as soon as we enter the main() function.
As for the startDMARead() function, here is what it looks like, simplified so that all accesses to 'buffer' go through a pointer to it:
static unsigned char buffer[32];
int main(void) {
startDMARead(buffer, sizeof(buffer));
return 0;
}
void startDMARead(unsigned char *ptr, int size) {
*(volatile unsigned int *)DMA_HW_ADDR = (unsigned int)ptr;
*(volatile unsigned int *)DMA_HW_SIZE = size;
... // wait for end transfer
}
As is clear (including to the compiler), the value of the ptr pointer is written to some MMIO address.
During optimization, the compiler sees no reason to think anything is happening to the contents pointed to by ptr.
In C code, there are no modifications to 'buffer' due to any dereference of ptr.
Now, when returning from startDMARead(), the compiler knows that 'buffer' was not modified by the call to startDMARead().
This means that reading 'buffer', for example, buffer[0], can be omitted. Instead, the compiler can simply replace such access with the value 0 (the value the buffer was initialized with).
That's what I'm afraid of. And I want to tell the compiler explicitly: "Hey, you don't see the changes in C, but I have to tell you that this buffer was modified beyond your comprehension!"
Mr. karan.singh, but why?
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
It's very likely that the HTTP response code is not 200. Did you check that? Either way, the response text doesn't represent valid JSON.
Please put the expression _pool = Pool(2) within the function script_runner, as follows:
from multiprocessing import Pool

def script_runner():
    _pool = Pool(2)
    _pool.apply_async(script_wrapper)
    _pool.close()
    _pool.join()
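A self-contained version of that pattern, with a trivial stand-in for script_wrapper (the real one comes from your own code), looks like this:

```python
from multiprocessing import Pool

def script_wrapper():
    # Stand-in for the real worker function.
    return 42

def script_runner():
    # The pool is created inside the function, not at import time,
    # so child processes do not re-create it when the module is imported.
    _pool = Pool(2)
    result = _pool.apply_async(script_wrapper)
    _pool.close()
    _pool.join()
    return result.get()

if __name__ == "__main__":
    print(script_runner())  # prints 42
```

The __main__ guard matters on platforms that use the spawn start method (Windows, macOS), where the module is re-imported in each worker process.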
Hang on a moment. What are you writing here? Are you writing an MFT, or are you trying to call an existing MFT? What video type do you actually have?
You can customize css variables:
:root {
--r-main-font: 'Your Custom Font', sans-serif;
--r-heading-font: var(--r-main-font);
}