The MDriven title suggests that you are not using OCL at all.
In OCL I would expect to write something like
myThingCollection.other->select(title <> null)->collect(...whatever...)
Sorry you've waited 5 years. I assume you never found an answer, because neither have I, nor has anyone. MPC knows and displays frame numbers until you pause or scrub, at which point it forgets the frame number. VLC and MPC both had an addon that did this, but none of them work anymore. VirtualDub can do this, but doesn't open any modern codec or container.
I've found literally hundreds of requests for this info, feature requests, and complaints about previous methods no longer working, and nobody from any video player team seems to care.
I had this issue on Ubuntu 25, and running this was able to fix the issue for me:
sudo update-alternatives --install /usr/bin/stty stty /usr/bin/gnustty 100
Remove your Dockerfile from the project root.
Also remove these two lines from the .csproj file:
<DockerLaunchAction>LaunchBrowser</DockerLaunchAction>
<DockerLaunchUrl>http://{ServiceIPAddress}</DockerLaunchUrl>
@motosann, IMO it's much easier not to calculate any date at all. Instead, find the files (since they are saved somewhere, I suppose) with DIR("filename*.xlsx") and extract the date from whatever the function finds to compare it with the current day.
Did you get to know anything about it? I am facing the same issue here.
But what do you actually mean by "number of colours"? Try drawing a black SVG circle on a pure white background and count the number of "colours" in the rendered raster image!
For debug and release modes, use this approach until Flutter provides an official way:
if (g_file_test("assets", G_FILE_TEST_IS_DIR))
{
    // Debug mode: assets sit next to the executable
    gtk_window_set_icon_from_file(window, "assets/images/icon.png", NULL);
}
else
{
    // Release mode: assets live under data/flutter_assets
    gtk_window_set_icon_from_file(window, "data/flutter_assets/assets/images/icon.png", NULL);
}
You can also try with hover https://github.com/flutter/flutter/issues/53229#issuecomment-660040567
A regular pipe will do the trick:
cat part-* | tar -xvz
I had similar concerns using the new Material Design 3 ColorScheme.fromSeed(seedColor: Color) function.
Wanting to know the exact number of Primary colors generated, I wrote a 'Brute Force' Flutter app to walk through all 16,777,216 RGB color codes, yielding only 626 total/available Primary Colors.
Material Design 3 is not a "show me every color the display can do" system; it's a "give me a small, stable, perceptually sane, accessible, reproducible set of colors from any seed" system.
To achieve that, it fixes tone, caps chroma, uses a perceptual space, and snaps results to displayable values.
https://github.com/AndyW58/Flutter-App-Material-Design-3-Unique-Primary-Colors
Great this was helpful, specifically pointing to documentation, open source and medium articles. I'm looking for sources of information and it seems like those are going to have what I need. Thanks!
I deployed my Laravel application (built using the Livewire starter kit) on XAMPP. The application is accessible via http://localhost/myproject/, but Livewire and Flux are not working.
That worked for me too! Thanks a ton!
Per current MS documentation (late 2025), this is available in VS2022 as a "Preview" feature.
Turn on: Tools → Options → Preview Features → Pull Request Comments.
It is also listed as a feature of the VS2026 "Insiders" edition (basically also 'preview').
$ magick -size 1000x1000 xc: +noise Random noise.png
Possible types of noise:
$ magick -list noise
Gaussian
Impulse
Laplacian
Multiplicative
Poisson
Random
Uniform
Tip: "Making Noise with ImageMagick"
AI does improve learning by at least being a tool that can explain what code does. So unless you want to avoid it altogether, it is a good way to explain what a technology is good for and what the usual applications are. (LLMs are probability machines after all.)
If you want to avoid AI altogether, or learn in a more structured way, there still are tutorials and courses that teach a technology. For Go, you could check for example A Tour of Go. There isn't necessarily a need to read the documentation from start to finish, but the documentation pages usually contain an explanation of what it is used for and tutorials of common use cases.
If you want to know what kind of solutions a technology is good for, I would just google it. There are many blog posts and forums discussing the experiences developers have had with a piece of technology where you can learn more about it. These days AI is good for exactly this, though.
You have to enable transform in ValidationPipe
async function bootstrap() {
  const app = await NestFactory.create(ServerModule);
  app.useGlobalPipes(new ValidationPipe({ transform: true })); // enable transform
  await app.listen(config.PORT);
}
bootstrap();
I made those changes, but it still doesn't produce any results. It's as if the foreign key isn't being linked. But if I search directly in the Flight model, it does filter.
so if I want to work in Go, I should just read through the documentation for Go to figure out what projects it would be best for?
Smartphone GPS always drifts. Even good phones move a few meters while standing still. This is normal and not a bug in React Native or Expo. Raw GPS is simply too noisy for small areas like a sculpture park.
First, lower the accuracy mode. Balanced often gives steadier readings than BestForNavigation. The latter exposes more raw noise. Next, apply strong filtering. Your current smoothing lets too much noise pass. Use a low-pass filter with alpha around 0.1–0.2. Add a deadzone of 1–2 meters so tiny jumps get ignored.
A Kalman filter helps even more. It combines new data with past data and reduces jitter. Many GPS hardware projects use this exact approach. For example, the GPS tracker shown here uses filtering to stabilize noisy readings:
https://www.pcbway.com/project/shareproject/GPS_tracker_recorder_for_cars_and_other_vehicles.html
You can do the same. Run your GPS reading through a Kalman filter, then through your low-pass filter. The result will feel smooth and believable.
Also, remember that you only need rough GPS. Your routing works on your own map, not on real geography. You can snap the user to the nearest logical area or path and ignore tiny moves. This is how many indoor and park apps work.
So your recipe is simple: lower accuracy, filter hard, apply a deadzone, and use a Kalman filter. After that, your dot will stay calm, and your offline routes will make sense.
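The recipe above (low-pass filter plus deadzone) is language-agnostic; here is a minimal Python sketch of just those two steps, with made-up coordinates and untuned thresholds:

```python
import math

def smooth_position(prev, new, alpha=0.15, deadzone_m=1.5):
    """Low-pass filter with a deadzone over (x, y) positions in meters.

    A small alpha trusts the old estimate (heavy smoothing); moves
    shorter than deadzone_m are treated as GPS jitter and dropped.
    The threshold values here are illustrative, not tuned."""
    if prev is None:
        return new
    if math.hypot(new[0] - prev[0], new[1] - prev[1]) < deadzone_m:
        return prev                      # jitter: keep the old position
    return (prev[0] * (1 - alpha) + new[0] * alpha,
            prev[1] * (1 - alpha) + new[1] * alpha)

pos = (100.0, 200.0)
pos = smooth_position(pos, (100.4, 200.3))  # a 0.5 m wobble is ignored
pos = smooth_position(pos, (110.0, 200.0))  # a real 10 m move eases the estimate over
```

A Kalman filter would sit in front of this with a velocity model, but even this alone calms the dot considerably.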
What you suspected is correct.
Since the minimum pool size is set to 2, those two idle connections are never closed, which is why the maxConnectionReuseTime setting does not affect them even after the timeout.
If you want this behavior to change, you can migrate to Oracle JDBC/UCP 23ai, which provides an option to allow timers to affect all connections. You can enable this by setting the following system property:
-Doracle.ucp.timersAffectAllConnections=true
This ensures that even idle connections below the minimum pool size are eligible for closure based on timer settings.
You read the documentation of whatever you use
I had to do instead:
type StyleSheetProperties = {
  background: string,
};

type StyleSheetPropertyKeys = keyof StyleSheetProperties;

type FrameworkAttributes = {
  [K in `s:${StyleSheetPropertyKeys}`]?: string | number;
};

declare global {
  namespace JSX {
    interface IntrinsicAttributes extends FrameworkAttributes {}
  }
}
There are two windows...
1. Threads
2. Tasks
There are a couple of options to consider.
1. Map the 'Task.CurrentId' to a 'ConcurrentDictionary' (similar to what was already mentioned).
2. Try to leverage the thread name (as shown below).
You can set the thread name so that it will display in the debugger window.
The Tasks window should already contain the stack frame showing where it is being called from.
<ul><%= users.forEach(function(user){ %>
<li><%= user.name %></li>
<%= }); %>
</ul>
Using the above, <%= attempts to output the value of the expression instead of executing the code. Using instead:
<ul><% users.forEach(function(user){ %>
<li><%= user.name %></li>
<% }); %>
</ul>
appears to have resolved the issue.
FYI: If you get this in 2025, it's probably a bot. Block it.
I have this same issue. I have the API-enabled check activated, but I'm getting the same error when testing the connection. Has anyone resolved it? Thanks
Very useful also in the latest version of Power BI: go to Format visual -> Visual -> Values (turn off text wrap), then you can reduce the dimensions of the column to 0 and put it in the last place so the user can't see it, and you can still use it for the drill-through.
"At every 90th minute."
*/90 * * * *
Note, however, that the minute field only goes up to 59, so */90 matches only minute 0 and this effectively runs once per hour. To genuinely run every 90 minutes you need two entries, for example:
0 0-21/3 * * *
30 1-22/3 * * *
If you're using Alfred, here is a very nice solution combining everything you mentioned here:
https://github.com/Avivbens/alfredo/blob/master/projects/packages/noise-cancellation/README.md#toggle-noise-cancellation
I agree with @SilverWarior that using the "FetchOptions" is how to control the overall dataset operation. I also agree that in the default mode using the fmOnDemand for the "mode" should force the database to return the records from a database in batches based on demand.
@Uwe Raabe Thank you for pointing that option out, it will be very useful to set it to cmTotal to get the total number of records in the database. This will hopefully prevent the need to use the Last function to get the count.
It is now possible to implement ip filtering on GCS buckets as of https://cloud.google.com/storage/docs/release-notes#July_02_2025 using https://cloud.google.com/storage/docs/ip-filtering-overview.
This is how to calculate the date in a worksheet with the same value as strgen. You can check the date on the worksheet.
Cell H1 8 or 15
Cell H2
=TEXT(TODAY()-WEEKDAY(NOW(),1)+H1,"mm.dd.yy") & ".xlsm"
Function strgen()
strgen = ThisWorkbook.Sheets("Sheet1").Range("H2").Value
End Function
Ugh.. looks like a wrong "type" of question.. But it won't let me edit it.
As the comments suggested, removing the previous build, updating west, and doing a new build worked. My toolchain and SDK are v3.1.3.
A middleware needs to be added for the token exchange mechanism. That way, a user logged into the parent application will get a token from the token endpoint, using the OAuth 2.0 connection configured in Bot Service on the Azure portal. Verify the essential steps below.
It's not that important, but to match the picture: (ARRAY[1,2,3,4,4,7,8], ARRAY[3,3,4,5,6,8,9]). And is a recursive subquery inside a recursive query possible?
I had a similar issue and solved it by installing autoconf-archive.
This shows that the auto* things are pieces of archaic sh!t that must DIE as soon as possible.
You have 1000 tools: autoconf, autoreconf, autoconf-archive, automake, configure, bootstrap, m4, libtool, libtoolize. Every project is built using a different combination of these commands. Every project has a different style of passing options to ./configure, which, for cross builds or debug/release builds, is a pain in the @ss.
You get cryptic errors that the AC_SUBST macro is missing, and this is magically resolved either by installing a package or by running a command.
If we used a modern build system like CMake or Meson, the world would be a better place.
In the file C:\Program Files\Microchip\xc8\v2.31\avr\avr\include\avr\sfr_defs.h you can find the definitions used:
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
Posting this for Docker Desktop for Mac, which runs Docker inside a Linux virtual machine (VM), so the accepted answer does not work since those storage_driver files are inside that VM and not on the Mac itself.
Here, the approach is to delete that VM and have Docker Desktop for Mac recreate it when it restarts.
The steps for this are...
Quit Docker Desktop For Mac
Find the proper <vm-number> (e.g. 0 )...
ls -al ~/Library/Containers/com.docker.docker/Data/vms/
Move/Backup the VM...
mv ~/Library/Containers/com.docker.docker/Data/vms/<vm-number>/data/Docker.raw ~/Desktop/Docker.raw.backup
Restart Docker Desktop For Mac
I created a Gist on this citing this post
AhmadHamze's solution worked for me. (I couldn't comment, so I'm posting an answer; sorry.)
FetchOptions has a RecordCountMode property to control the way RecordCount is calculated.
How are you accessing the app?
Via php artisan serve → http://127.0.0.1:8000?
Or directly through Apache → http://localhost/myproject/?
Set the workspace file name to a substitution string reference in the Header Template. Also, since I have text next to the img, it was easier to add the text to the svg file that contained the image.
After the above error messages I decided my python installation was cursed.
Following this answer, more or less: https://apple.stackexchange.com/questions/284824/remove-and-reinstall-python-on-mac-can-i-trust-these-old-references
I did
brew uninstall --ignore-dependencies --force python
and
brew uninstall --ignore-dependencies --force [email protected]
and
brew uninstall --ignore-dependencies --force python@3
and then just
brew install python
Now I can make virtual environments just fine.
How do I use that? Is that C++? Julia's println function already inserts a new line.
Hopefully this helps someone in the future, but if you use setUrgent() on the PutDataMapRequest object, it fixes the issue.
It's because the chunk is named setup.
Try {r} without setup or {r another_chunk_name}.
This is now resolved: The solution was to set up a SQL alert to send an email notification if x < 0. The content of a SQL alert can be customised.
If you're still looking for it, you should use the resource "databricks_access_control_rule_set".
Something like this:
resource "databricks_service_principal" "sp" {
  display_name = "ndp-sp-${var.project}-${lower(var.env)}"
}

resource "databricks_access_control_rule_set" "automation_sp_rule_set" {
  name = "accounts/{account-id}/servicePrincipals/${databricks_service_principal.sp.application_id}/ruleSets/default"

  grant_rules {
    principals = [data.databricks_group.admin_group.id]
    role       = "roles/servicePrincipal.manager"
  }
}
I had a similar issue; you can delete .ninja_deps from your build folder and this should fix it. In my case I am using CMake to generate my build system, with the generator set to Ninja.
Can you post any of your attempts, and what specific issue(s) there were with each?
Honestly, this usually happens when Docker suddenly updates itself in the background, or the daemon gets bumped to a newer version while your local client, or whatever library you use (TestContainers in this case), is still trying to talk with an older API version.
Docker doesn't always keep the client-server API versions in perfect sync, so if the daemon jumps to something like 1.44 and your TestContainers setup is still locked on 1.32, it just refuses the call and you get that "client version too old" error.
Most of the time it's either:
Docker Desktop auto-updated,
You pulled a new Docker Engine version (especially on Linux), or
TestContainers is pinning an old API version and hasn't refreshed its mapping yet.
The fix is usually boring: update your Docker client or update TestContainers to the latest version so both sides speak the same API level.
Nothing magical, just a version mismatch after an auto-update.
Nope, a precise number of colors is the main task for me. I have a graphic editor where I need to change the colors of an image, and it is very important to have fewer colors than my max.
If you don't know the map's key, you can simply iterate over the map and extract the required value.
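In Python terms, the iterate-and-extract idea looks like this (the settings dict and key names here are hypothetical, just for illustration):

```python
# Hypothetical settings map; when the exact key is not known in advance,
# scan all entries instead of indexing by key.
config = {"db_host": "localhost", "db_port": 5432, "cache_port": 6379}

# Collect every value whose key matches a condition:
ports = {key: value for key, value in config.items() if key.endswith("_port")}

# Or grab the first value satisfying a predicate (None if absent):
first_port = next((v for k, v in config.items() if k.endswith("_port")), None)
```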
@C3roe the part where the programmable microprocessor/microcontroller is programmed?
Your original 10-colour PNG source will have additional colours created by antialiasing when the SVG image is rendered for display. Without that concession, the edges of lines would be visibly jagged. Surely you only really care about the filesize/quality tradeoff rather than the exact number of colours in the image.
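To make the antialiasing point concrete, here is a small self-contained Python sketch (not using sharp or any real renderer; it supersamples a black circle on white by hand) showing that a two-colour scene rasterizes to many more distinct levels:

```python
# Render a black circle on a white background at 64x64 with 4x4
# supersampling (a basic antialiasing technique), then count how many
# distinct gray levels the "two-colour" scene actually produces.
SIZE, SS = 64, 4
cx = cy = SIZE * SS / 2          # circle centre, in subsample units
r = SIZE * SS * 0.4              # circle radius

def pixel(x, y):
    # Coverage of the circle over this pixel's SS*SS subsamples
    inside = sum(
        1
        for sy in range(SS)
        for sx in range(SS)
        if (x * SS + sx + 0.5 - cx) ** 2 + (y * SS + sy + 0.5 - cy) ** 2 <= r * r
    )
    return round(255 * (1 - inside / (SS * SS)))   # 0 = black, 255 = white

levels = {pixel(x, y) for y in range(SIZE) for x in range(SIZE)}
print(len(levels))   # more than the 2 colours the scene "contains"
```

Edge pixels get intermediate grays, which is exactly where the extra palette entries come from.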
The only solution that worked for me (Windows 11, VS Code 1.106.3, SQL) was manually creating the foldable section (highlight the desired section, Ctrl+K Ctrl+,) and then removing it (Ctrl+K Ctrl+.).
As this is a university project, I would like to focus on making correct architectural decisions and structuring my ASP.NET Core backend in a clean and maintainable way.
Would ASP.NET Core MVC with Razor Views be sufficient for this type of portal, or would a Web API approach be more appropriate?
Additionally, what architectural style (e.g. layered architecture, Clean Architecture) would you recommend for such a system in order to remain maintainable and scalable?
What would be a recommended way to organize the solution and its layers (e.g. Controllers → Application Services → Domain → Infrastructure)?
Also, where should business rules such as application state transitions (e.g. approve/reject) and payment processing logic ideally reside, and how should these be separated from the MVC layer?
So how does one create bit-fielded data structures in Python? Or do I literally need an int for every bit and then combine these into another int with shifting? I need to manage CAN bus communication where individual bits have individual meanings.
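For what it's worth, one common approach (a hedged sketch, not from this thread) is ctypes bit fields; the field names and widths below are invented for illustration, not a real CAN frame layout:

```python
import ctypes

class CanStatus(ctypes.LittleEndianStructure):
    """One CAN status byte with named bit fields.

    Each tuple is (name, type, width in bits); the layout follows the
    platform C compiler's bit-field rules. Names/widths are illustrative."""
    _fields_ = [
        ("error", ctypes.c_uint8, 1),
        ("rtr",   ctypes.c_uint8, 1),
        ("dlc",   ctypes.c_uint8, 4),
        ("spare", ctypes.c_uint8, 2),
    ]

flags = CanStatus()
flags.dlc = 8
flags.rtr = 1

raw = bytes(flags)                         # pack to the on-the-wire byte
restored = CanStatus.from_buffer_copy(raw) # and unpack it again
```

This avoids hand-written shifting/masking while keeping a fixed binary layout.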
It's not an assumption, it's a fact. After creating the SVG image I use the function getOriginalColors to retrieve all colors used in the SVG image, and I see that when I send maxColors = 10, it is possible to get 16. So, based on the sharp docs, the colours parameter sets the number of palette entries (to my mind, that means the number of colors). I checked the image visually after the sharp conversion and it looks cool. The vectorization also looks nice visually, but in the result I counted the colors used in the SVG and it is bigger than my value (10). Mostly, for max = 10 it gives me 16 colors.
Write a service unit for systemd and put it into the system slice (/etc/systemd/system).
As long as you execute it under a user (even root), you will be restricted.
If you click on it a window (Host Details) comes up.
You can see the whole output in "Data" tab.
After adding the IronSource SDK (whether it's installed via the package manager - Ads Mediation - or by downloading the SDK), the generated iOS build will create a Unity-iPhone.xcworkspace file.
You should open the Unity-iPhone.xcworkspace file instead of Unity-iPhone.xcodeproj. This Unity-iPhone.xcworkspace contains Unity-iPhone.xcodeproj and Pods.xcodeproj.
I would not be so sure "Sched + Pickups Week 1" counts for 1 workbook only
@Atmo, I'm sorry, but I cannot understand this count.
I thought you wanted to avoid having to rewrite it as 8 to 15 in the strgen function.
I found it: somewhere, several "allow from"s were hidden behind includes of includes and bypassed the user-specific things... I hope whoever did all that to the system has retired meanwhile... grmpf...
The admin page behind this server was completely open to any domain user (several thousands).
Just a note: you describe the data architecture of a particular application. This is not what fully defines the architecture used for the implementation. There are many other factors. These two things are not directly related.
The data structure you described is nothing special. It is very simple and can be expressed in many different ways. You can expect some advice in general, but you cannot expect someone to infer the implementation architecture pattern from your data structure.
What you're seeing is normal behavior for phone GPS, especially:
In a small area (like a sculpture garden),
With offline positioning (no Wi-Fi / network assistance),
And using high-frequency updates with BestForNavigation.
A few key points about GPS on mobile devices:
Typical real-world horizontal accuracy is 3–10 meters under good outdoor conditions. Near buildings, trees, or indoors, it easily gets worse.
Even if the user is completely still, the reported latitude/longitude will wander inside an error circle defined by coords.accuracy.
That means in a small map (say 30–50 m across), the natural noise of GPS can be a large percentage of the whole area. So the marker looks like it's "jumping all over" even though it's just wandering within the uncertainty bubble.
So there is nothing "wrong" with Expo Location itself. You're just seeing the raw limitations of consumer GPS very clearly because your map is small and your update settings are aggressive.
You're doing some reasonable things already (UTM transform, simple smoothing), but a few details make the jitter very visible instead of hiding it.
subscription = await Location.watchPositionAsync(
  {
    accuracy: Location.Accuracy.BestForNavigation,
    timeInterval: 500,
    distanceInterval: 0.5,
  },
  (newLocation) => {
    setLocation(newLocation);
  }
);
Problems here:
BestForNavigation is designed to be responsive, not perfectly stable.
timeInterval: 500 ms and distanceInterval: 0.5 m means you're asking for very frequent, very fine-grained updates.
At that resolution, you see every tiny fluctuation in the GPS solution as a "movement".
You're essentially subscribing to noise in high definition.
const acc = location.coords.accuracy;
if (acc && acc > 8) return;
You're discarding any reading where the accuracy is worse than 8 m. In many real environments, values below 8 m are rare. So what happens?
You ignore a lot of readings.
You occasionally accept "good" ones that might still be off by several meters.
That makes your smoothed position jump from one "good" estimate to another instead of drifting slowly.
setSmoothedUtm(prev => {
  if (!prev) return { x: rawUtm.x, y: rawUtm.y };
  const dx = rawUtm.x - prev.x;
  const dy = rawUtm.y - prev.y;
  const movement = Math.sqrt(dx * dx + dy * dy);
  if (movement < 0.3) return prev;
  const alpha = 0.8;
  return {
    x: prev.x * (1 - alpha) + rawUtm.x * alpha,
    y: prev.y * (1 - alpha) + rawUtm.y * alpha
  };
});
Two main issues:
A dead-zone of 0.3 m is meaningless when GPS noise is 3–10 m. Almost every jitter is bigger than 0.3 m, so almost everything is treated as "real movement".
alpha = 0.8 heavily favors the new (noisy) value:
new = 20% old + 80% new, which is closer to "forward the noise" than "smooth it".
So the filter doesn't do much to hide the GPS wandering.
You cannot turn GPS into a centimeter-accurate indoor tracker with code. But you can:
Reduce how much jitter the user sees.
Make your routing logic robust to noise.
Align UX with the actual accuracy you have (meters, not centimeters).
Below are practical changes that work within the limitations.
Relax watchPositionAsync so you don't hammer the GPS and you don't get spammed with tiny fluctuations.
For example:
subscription = await Location.watchPositionAsync(
  {
    accuracy: Location.Accuracy.High, // or Location.Accuracy.Balanced
    timeInterval: 2000,               // at most once every 2 seconds
    distanceInterval: 2,              // only if moved ~2 meters
  },
  (newLocation) => {
    setLocation(newLocation);
  }
);
This does two things:
Reduces CPU/battery.
Prevents your UI from reacting to every sub-meter wobble.
In many cases, High or Balanced gives more stable behavior than BestForNavigation for walking around at low speed.
Next, make the smoothing logic match real GPS behavior:
Accept readings up to ~15–20 m accuracy (beyond that, ignore them).
Ignore position changes smaller than 2–3 m as noise.
Use a small alpha (0.1–0.3) so the filtered position moves slowly.
Example:
useEffect(() => {
  if (!rawUtm || !location) return;

  const acc = location.coords.accuracy ?? 999;

  // Ignore very noisy readings
  if (acc > 20) return;

  setSmoothedUtm(prev => {
    if (!prev) {
      return { x: rawUtm.x, y: rawUtm.y };
    }

    const dx = rawUtm.x - prev.x;
    const dy = rawUtm.y - prev.y;
    const movement = Math.sqrt(dx * dx + dy * dy);

    // Ignore small movements that are likely just jitter
    if (movement < 2) {
      return prev;
    }

    // Strong smoothing: mostly keep old position
    const alpha = 0.2; // 20% new, 80% old
    return {
      x: prev.x * (1 - alpha) + rawUtm.x * alpha,
      y: prev.y * (1 - alpha) + rawUtm.y * alpha,
    };
  });
}, [rawUtm, location]);
Now:
The marker will not "twitch" at tiny fluctuations.
Larger moves (the user actually walking) will slowly pull the marker toward the new position.
You still respect accuracy constraints, but you're not unrealistically strict.
You currently trigger arrival when:
if (distanceToDestination !== null && distanceToDestination < 3) {
  Alert.alert('Arrived!', ...)
}
If your GPS accuracy is ±5–10 m, checking for < 3 m is optimistic. The position might easily jump between 2 m and 8 m from the sculpture while the user stands in the same place.
Use a larger radius (e.g. 7–10 m) and maybe require the distance to stay under that threshold for a couple of consecutive readings:
const ARRIVAL_RADIUS = 8; // meters

useEffect(() => {
  if (distanceToDestination == null || selectedDestination == null) return;

  if (distanceToDestination < ARRIVAL_RADIUS) {
    Alert.alert('Arrived!', `You reached ${TEST_POSITIONS[selectedDestination].label}`);
    setSelectedDestination(null);
  }
}, [distanceToDestination, selectedDestination]);
It's better UX to say "you're here" a bit early than to never say it because of GPS jitter.
You're mapping UTM → pixels based on sculpture coordinates that are very close together. When your entire site is, say, 30 m wide and your screen is 300 px wide:
1 meter ≈ 10 pixels.
Normal GPS noise of 5–10 m ≈ 50–100 pixels of jump.
That's why the dot looks like it's flying around.
There is no way around this from GPS alone. What you can do:
Visually de-emphasize the exact dot and focus on a "you're roughly here" indicator (e.g. circle radius = accuracy).
Optionally snap the user's position to the nearest path or nearest sculpture when they're within a certain distance. That's a UX trick: you're acknowledging that the exact GPS coordinate is fuzzy and instead projecting them onto a discrete point that makes sense for your map.
For a small offline map, if you need sub-meter or 2–3 m reliable precision, you won't get it on regular phones with only GPS, especially offline and near structures.
Real alternatives (if this were a production problem):
BLE beacons / UWB anchors indoors.
QR / NFC markers near sculptures that the user scans to âlocateâ themselves.
Letting the user manually mark their approximate location on the map as a reset.
That's beyond Expo Location and outside pure code solutions.
Your device isn't "broken"; you're just seeing normal GPS error at a very small scale.
The combo of BestForNavigation, high-frequency updates, a very strict accuracy filter, and weak smoothing makes the jitter extremely visible.
Use:
Less aggressive watchPositionAsync settings,
Realistic accuracy + movement thresholds,
Stronger smoothing,
A larger arrival radius, and
UI that communicates "approximate location", not exact centimeters.
You can't make GPS behave like a laser pointer, but you can make the experience look stable and usable for an offline sculpture map.
In my case I just changed
_userManager.Setup(r => r.GetRolesAsync(newlyCreatedUser)).ReturnsAsync(x => oldRolesThatUserHad);
into
_userManager.Setup(r => r.GetRolesAsync(newlyCreatedUser)).ReturnsAsync(oldRolesThatUserHad); //omit the lambda
The 403 errors are caused by ModSecurity Rule ID 942440 incorrectly flagging encrypted cookie content as an SQL injection attempt. To fix this, whitelist Rule ID 942440 in your Plesk ModSecurity settings.
Rule ID 942440 is a ModSecurity rule titled "SQL Comment Sequence Detected" ([msg "SQL Comment Sequence Detected"]).
It is designed to detect patterns within requests that resemble SQL comments (like -- or ---), which are often used in SQL injection attacks.
If your text written to stdout doesn't end with \n, use std::flush to force a screen update.
Hi heaxyh,
by increasing the CommandTimeout you will prevent the timeout and, consequently, the lock on the __EFMigrations table. This timeout increases the time allowed for the transaction itself.
services.AddDbContext<XContext>((srv, options) =>
{
    options.UseSqlServer("ConnectionString", opts =>
    {
        opts.MigrationsAssembly("MigrationProject");
        opts.CommandTimeout(appSettings.CommandTimeoutInSeconds);
    });
});
kind regards,
I have a scenario, where I need to delete Cloud tasks.
I create the tasks with a custom name/id.
Under some circumstances I then check if that task is there; if it is, I delete it.
client.getTask(fullTaskName)
client.deleteTask(fullTaskName)
What part of this is supposed to be a programming question?
Three random digits may not be sufficient to prevent collisions, and detecting duplicates only at insertion time (using a primary key) could be too late. What happens if the system clock drifts? What if you have multiple application servers without perfectly synchronized clocks? Maybe something like a Snowflake ID would be better: less precision on the time, but with a node/thread ID and a sequence number per node/thread rather than a small random suffix.
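A minimal sketch of the Snowflake-style idea suggested above; the bit widths (41/10/12) follow the commonly used layout but are illustrative, the epoch is arbitrary, and clock-rollback handling is omitted:

```python
import threading
import time

class SnowflakeLike:
    """Time-ordered 64-bit IDs: 41 bits of milliseconds since a custom epoch,
    10 bits of node id, 12 bits of per-node sequence. Illustrative only."""

    def __init__(self, node_id, epoch_ms=1_600_000_000_000):
        assert 0 <= node_id < 1024          # must fit in 10 bits
        self.node_id = node_id
        self.epoch_ms = epoch_ms
        self.last_ms = -1
        self.seq = 0
        self.lock = threading.Lock()        # one generator per node

    def _now_ms(self):
        return int(time.time() * 1000) - self.epoch_ms

    def next_id(self):
        with self.lock:
            now = self._now_ms()
            if now == self.last_ms:
                self.seq = (self.seq + 1) & 0xFFF   # 12-bit sequence
                if self.seq == 0:
                    # sequence exhausted in this millisecond: wait for the next
                    while now <= self.last_ms:
                        now = self._now_ms()
            else:
                self.seq = 0
            self.last_ms = now
            return (now << 22) | (self.node_id << 12) | self.seq

gen = SnowflakeLike(node_id=7)
ids = [gen.next_id() for _ in range(5)]
# Unique and strictly increasing, with no reliance on randomness.
```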
Get the parent branch name of your current branch with this:
git rev-parse --abbrev-ref @{-1}
In Windows PowerShell I had to escape the expression.
git rev-parse --abbrev-ref "@{-1}"
The accepted answer by @andreas-wolf is more thorough. I use this where every branch is created with git checkout -b feature, then back into main and merge if required.
From the Git Reference: gitrevisions
The construct @{-<n>} means the <n>th branch/commit checked out before the current one.
For future readers
OP probably faced type error inside the exact version of @types/node.
If possible, try to upgrade @types/node package instead of type casting or creating additional objects.
The Angular team hasn't published what exact design tool was used for those specific SVGs, and they don't provide the original source files like .ai or .fig for assets in the repo. In practice, teams at Google typically use a mix of tools such as Figma, Adobe Illustrator, or Sketch to design vector assets, and then export them to SVG for use in the codebase. Since the SVGs are already in the repository, the intended workflow is to treat them as the source of truth rather than request the original .ai files.
If you want to work with or modify them, you can safely import the existing .svg files directly into Figma, Adobe Illustrator, or Inkscape and continue editing them there. All of these tools support round-trip SVG editing very well.
So the practical answer is: you don't need the original .ai files. Just open the existing SVGs in Illustrator or Figma and you will be able to inspect and modify the layers, paths, groups, colors, and exported structure easily.
Use position: fixed for the burger menu to make it sticky; you may need to adjust the positioning.
The cleanest way, using Python 2.7:
import imp
module = imp.load_source("myfile", os.path.join(latest, "myfile.py"))
print(module.myvariable)
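For readers on Python 3, where the imp module was deprecated and finally removed in 3.12, the importlib equivalent is roughly the following. The temp-directory setup exists only to make the sketch self-contained; in real use, `latest` would be your existing directory as in the snippet above:

```python
import importlib.util
import os
import tempfile

# Demo setup only: write a tiny module to load, mirroring the
# "latest/myfile.py" layout from the Python 2.7 snippet.
latest = tempfile.mkdtemp()
with open(os.path.join(latest, "myfile.py"), "w") as f:
    f.write("myvariable = 42\n")

# Python 3 replacement for imp.load_source:
spec = importlib.util.spec_from_file_location("myfile", os.path.join(latest, "myfile.py"))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
print(module.myvariable)  # -> 42
```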
Nowadays this is in the Configuration Center. The setting is called Screen Blanking and it can be found under Display.
What helped for me:
1. flutter upgrade
2. running the app with flutter run
As @jonrsharpe explained, the problem is that FileBackend does not return proper Task objects. Rather, it returns raw objects with the properties of a Task, which do not have any of the methods. I wrongly assumed the Task objects were automatically constructed thanks to the type hints, but this was not the case.
The solution would be to construct a Task object from the retrieved items. For example, defining a factory method like this (the dates are parsed using Date.parse(), as per the MDN documentation):
public static tryFromRawObject(obj: {id?: number, name: string, status: string, creationTime: string, lastModificationTime: string}): Task {
  let parsedCreationTime = Date.parse(obj.creationTime);
  if (isNaN(parsedCreationTime)) {
    throw new TypeError(`Failed to construct Date object from value: creationTime=${obj.creationTime}`);
  }
  let parsedLastModificationTime = Date.parse(obj.lastModificationTime);
  if (isNaN(parsedLastModificationTime)) {
    throw new TypeError(`Failed to construct Date object from value: lastModificationTime=${obj.lastModificationTime}`);
  }
  if (!Object.values(TaskStatus).includes(obj.status as TaskStatus)) {
    throw new TypeError(`Failed to construct TaskStatus object from value: status=${obj.status}`);
  }
  return new Task({
    id: obj.id,
    name: obj.name,
    status: obj.status as TaskStatus,
    creationTime: new Date(parsedCreationTime),
    lastModificationTime: new Date(parsedLastModificationTime),
  });
}
And then invoke it from the TaskRepository:
private async readStream(): Promise<Array<Task>> {
  const tasks: Array<{id?: number, name: string, status: string, creationTime: string, lastModificationTime: string}> = [];
  tasks.push(...await this.backend.json());
  return tasks.map(Task.tryFromRawObject);
}
In case anyone else finds this, I ran into this same error, and solved it by:
(1) removing ninja, ninja.bat, ninja.py from depot_tools (I just put them into a temporary folder, so I could move them back if it didn't work)
(2) copying ninja.exe into depot_tools
When I ran the CMake Ninja command, I got all kinds of errors about files it couldn't find, but running ninja aseprite afterward worked.
I don't fully understand why this works. As best I can tell, ninja and ninja.bat are wrapper scripts that call ninja.py, and ninja.py is itself a wrapper that searches PATH for ninja.exe and calls it, so by copying ninja.exe into depot_tools directly, you're cutting out the middleman.
A full list of constexpr containers:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p3372r0.html
Cool, thank you for this helpful and working input, Giovanni Augusto (2024-09-21)!
For some reason I cannot comment on your answer, only on the OP's question...
ChatGPT turned it into a toggle PowerShell script for me:
# Toggle Light / Dark Mode Script
# Define registry paths
$personalizePath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"
$accentPath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Accent"
# Read current system theme (SystemUsesLightTheme)
try {
$currentSystemTheme = Get-ItemPropertyValue -Path $personalizePath -Name SystemUsesLightTheme
} catch {
# If the registry entry is missing, default to Light
$currentSystemTheme = 1
New-ItemProperty -Path $personalizePath -Name SystemUsesLightTheme -Value $currentSystemTheme -Type Dword -Force
New-ItemProperty -Path $personalizePath -Name AppsUseLightTheme -Value $currentSystemTheme -Type Dword -Force
}
# Toggle logic
if ($currentSystemTheme -eq 1) {
# Currently Light -> switch to Dark
$newValue = 0
Write-Host "Switching to Dark Mode..."
} else {
# Currently Dark -> switch to Light
$newValue = 1
Write-Host "Switching to Light Mode..."
}
# Set System & Apps theme
New-ItemProperty -Path $personalizePath -Name SystemUsesLightTheme -Value $newValue -Type Dword -Force
New-ItemProperty -Path $personalizePath -Name AppsUseLightTheme -Value $newValue -Type Dword -Force
# Set Accent color (kept constant)
New-ItemProperty -Path $accentPath -Name AccentColorMenu -Value 0 -Type Dword -Force
First check if nvcc is installed :
nvcc --version
If it's not installed, you can install the NVIDIA CUDA Toolkit using this guide: https://developer.nvidia.com/cuda-downloads
1. Compile the CUDA (.cu) program:
nvcc my_cuda_program.cu -o my_cuda_program
2. Run the executable:
./my_cuda_program
Download Visual Color Theme Designer 2022. Create a new project from its template. Select Base Theme Dark.
Go to All elements and find the element named "Shell - AccentFillDefault".
Change the purple color to FF454545.
Press Apply. Restart VS and select your theme from the list in themes.
You are done!
Actually GNU Emacs will have more of the ISPF/Edit functions.
In short, Uvicorn is a lightning-fast ASGI (Asynchronous Server Gateway Interface) server implementation for Python.
While frameworks like FastAPI (which I used in my project) define how your API handles requests (routing, validation, logic), they do not include a web server to actually listen for network requests and send responses. Uvicorn fills this gap by acting as the bridge between the outside world (HTTP) and your Python application.
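To make the ASGI contract concrete, here is a minimal hand-written ASGI application (no FastAPI) of the kind Uvicorn expects, plus a driver that simulates what the server does over real sockets; the scope/receive/send message shapes follow the ASGI spec, while the helper name `simulate_request` is mine:

```python
import asyncio

async def app(scope, receive, send):
    """A minimal ASGI application: the callable Uvicorn invokes for each request."""
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})

def simulate_request(path="/"):
    """Drive the app the way an ASGI server would, without any network I/O."""
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(message):
        sent.append(message)
    asyncio.run(app({"type": "http", "path": path}, receive, send))
    return sent
```

In production you would instead run `uvicorn yourmodule:app` and let the server perform this receive/send loop over real HTTP connections.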
Can you drop the repo link or the full scripts here?
While the existing answers clearly explain how to update the intrinsic matrix for standard scaling/cropping operations, I want to provide a related perspective: how to construct an intrinsic matrix for the rendering projection when crop, padding, and scaling are applied, so that standard rendering pipelines correctly project 3D objects onto the edited image.
When performing crop, padding, or scaling on images, the camera projection matrix needs to be adjusted to ensure that 3D objects are correctly rendered onto the modified image.
Pixel space vs. Homogeneous space
Pixel space (CV): uses actual pixel coordinates, e.g., a 1920×1080 image has x ∈ [0, 1920] and y ∈ [0, 1080].
Homogeneous space (graphics): normalized coordinates where x ∈ [-1, 1] and y ∈ [-1, 1], regardless of image size.
This distinction affects how image augmentations influence projections. For example, adding padding on the right:
In pixel space, left-side pixels do not move.
In homogeneous space, the entire x-axis is compressed because the total width increased.
A camera intrinsic matrix contains four main parameters:
fx, fy: focal lengths along x and y axes
cx, cy: principal point offsets along x and y axes
Translation (cx, cy)
Only cropping/padding on the left/top affects the principal point.
Right/bottom operations have no effect.
Scaling (fx, fy)
In CV pixel space: only scaling changes fx/fy.
In homogeneous space: crop and padding also affect fx/fy, because padding changes the image aspect ratio, which changes the mapping to normalized [-1,1] coordinates.
Pixel space rules:
Cropping/padding:
cx, cy decrease by the number of pixels cropped from left/top
Right/bottom cropping has no effect
fx, fy remain unchanged
Scaling:
fx, fy multiplied by scale s
cx, cy multiplied by scale s
Homogeneous space rules:
Cropping/padding changes the image aspect ratio, which requires extra scaling compensation
Compute compensation factors:
sx = s * (original_width / padded_width)
sy = s * (original_height / padded_height)
fx_new = fx * sx
fy_new = fy * sy
cx_new = cx * sx
cy_new = cy * sy
Note: This compensation only adjusts for the normalized coordinate system and does not change physical camera parameters.
To compute FOV consistent with the original image:
fov_x = 2 * arctan((original_width * s) / (2 * fx_new))
fov_y = 2 * arctan((original_height * s) / (2 * fy_new))
This ensures that rendering with crop, padding, and scaling produces objects at the correct location and scale, without relying on viewport adjustments.
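The pixel-space rules above can be sketched as a small helper (the function name and the crop-then-scale ordering are my assumptions for illustration):

```python
def update_intrinsics_pixel_space(fx, fy, cx, cy,
                                  crop_left=0, crop_top=0,
                                  pad_left=0, pad_top=0,
                                  scale=1.0):
    """Apply the pixel-space rules: a left/top crop shifts the principal point
    toward the origin, left/top padding shifts it away, and a uniform scale
    multiplies all four parameters. Right/bottom crops or pads do not appear
    because they leave fx, fy, cx, cy unchanged."""
    cx = cx - crop_left + pad_left
    cy = cy - crop_top + pad_top
    return fx * scale, fy * scale, cx * scale, cy * scale
```

For example, cropping 100 pixels from the left and then scaling by 2 moves cx from 960 to (960 - 100) * 2 = 1720 while fx simply doubles.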
"Sched + Pickups Week 1" was interpreted as the file name in the code mentioned in the question. For example, "SCHED 11.30.25.xlsm".
In my case the problem was the version of the extension bundles
What I had to do is:
Locate the `host.json` of your function app
Update

"extensionBundle": {
  "id": "Microsoft.Azure.Functions.ExtensionBundle",
  "version": "[3.*, 4.0.0]"
}
to

"extensionBundle": {
  "id": "Microsoft.Azure.Functions.ExtensionBundle",
  "version": "[4.*, 5.0.0)"
}
Try using onnxruntime 1.19.2; this solved the issue for me.
I am using Python 3.11.4.
A program can check whether a file descriptor is a TTY with the isatty() call.
It very likely checks whether stdout is a TTY.
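In Python the same check is a single call; whether it returns True depends on how the program is run (terminal vs. pipe or redirect), so this sketch only shows the mechanism:

```python
import sys

def stdout_is_tty():
    """Return True when stdout is attached to a terminal, False for pipes and files."""
    try:
        return sys.stdout.isatty()
    except (AttributeError, ValueError):
        # stdout may have been replaced by an object without isatty(), or closed
        return False
```

Running a script that prints `stdout_is_tty()` directly in a terminal should show True, while piping it (e.g. `python check.py | cat`) should show False.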