Thanks to @wildpeaks' answer, I have implemented this idea, and it works for my project (iOS 13+ API, RealityKit ARView). Here's the code snippet without the business logic:
// Check whether the entity is on screen every 0.5 s
func startTrackingAnchorEntities() {
    anchorCheckTimer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { [weak self] timer in
        self?.checkForNewAnchors(timer: timer)
    }
}

private func checkForNewAnchors(timer: Timer) {
    // Check that there is an anchor; you may do additional checks here for the right entity (names, children, etc.)
    guard let entity = self.scene.anchors.first else {
        print("❌ Missing required objects - entity: \(self.scene.anchors.first != nil)")
        return
    }

    // Get the current active camera
    let cameraTransform = self.cameraTransform
    print("📱 Camera Transform - Position: \(cameraTransform.translation), Rotation: \(cameraTransform.rotation)")

    // Skip the initial frames, whose camera position at the origin often passes the check spuriously
    if cameraTransform.translation == SIMD3<Float>(0, 0, 0) {
        print("⚠️ Camera at origin (0,0,0), skipping...")
        return
    }

    // Convert world position to camera space
    let cameraAnchorEntity = AnchorEntity(world: Transform(scale: .one, rotation: cameraTransform.rotation, translation: cameraTransform.translation).matrix)

    // Get the entity's position relative to the camera
    let entityPosition = entity.position(relativeTo: cameraAnchorEntity)
    print("🔍 Entity relative position: \(entityPosition)")

    // IMPORTANT! Use the world position for projection, otherwise the projected point becomes super big
    let worldPosition = entity.convert(position: .zero, to: nil)

    // Project the entity's position to screen space
    guard let projectedPoint = self.project(worldPosition) else {
        print("❌ Failed to project entity position to screen space")
        return
    }
    print("📍 Projected point on screen: \(projectedPoint)")
    print("📱 Screen bounds: \(self.bounds)")
    print("🌍 World position used for projection: \(worldPosition)")

    // Check whether the projected point is within the screen bounds
    guard self.bounds.contains(projectedPoint) else {
        print("⚠️ Entity outside screen bounds")
        return
    }

    print("✅ Entity visible! Scene position: \(entity.scenePosition), Camera position: \(cameraTransform.translation)")

    // Stop the timer after detecting the visible entity
    timer.invalidate()
    anchorCheckTimer = nil

    // Do whatever you need to do afterwards
}
What's new and what I noticed are:
The idea is to call this function in your ARView's setup function. For my use case, I first load an ARWorldMap and then call this function; for those who may be concerned, it runs fine in parallel without interfering with the relocalization process.
The .z thing @JoeySlomowitz mentioned still persisted while I was working on this issue, so I removed it and everything still works like a charm.
I used ARView.cameraTransform, which is a newer way to get the active camera's transform in addition to session.currentFrame.camera. You may find documentation about it here.
The tricky part is the coordinate space: make sure everything is relative to the same space.
You can’t get both in a single syscall, but on Linux you can avoid a second full path resolution by:
Call open() (or openat2() if you want to block symlinks) to get a file descriptor.
Call readlink("/proc/self/fd/<fd>") to retrieve the resolved path from the kernel.
This way you walk the path only once, and /proc/self/fd just returns the path the kernel already resolved.
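A hedged sketch of the same trick in Python on Linux (the C calls map one-to-one to os.open/os.readlink; /tmp is just an example path):

```python
import os

def open_and_resolve(path):
    """open() once, then recover the kernel-resolved path from /proc/self/fd."""
    fd = os.open(path, os.O_RDONLY)
    # /proc/self/fd/<fd> is a "magic" symlink to the path the kernel
    # resolved when the file was opened -- no second path walk needed
    resolved = os.readlink(f"/proc/self/fd/{fd}")
    return fd, resolved

fd, resolved = open_and_resolve("/tmp")
os.close(fd)
```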
Remove --progress or use --no-tty: parallel can't write its progress display when there is no interactive terminal, which is the case when you background it with nohup.
I know it's been six years, but I wanted to give the right answer here. Unity does support WAV loop points, and it's a very reliable way to achieve something vital to your game: having beautiful music with nice intros that loops on a specific part of the song through loop points. Not all WAV editing software is compatible with Unity, though. I've used Wavosaur, and it's compatible with Unity 6. I've also heard that Tidalwave and other in-editor WAV editors are also a viable option for setting loop points. Once you set your loop points, all you have to do is tell the audio source to loop, and it will work seamlessly with your loop points out of the box. I hope this is useful to you, or anyone else visiting in the future!
I managed to find it via this link (short url, but it redirects to a proper apple download url)
John, I wonder, did you ever figure this out?
The container will not have a kthreadd, which is always pid 2:
ps -p 2 > /dev/null && echo host || echo container
This will not work if, for example, /proc is mounted with hidepid=2 (these would be somewhat obscure conditions).
I've run into a strange issue. I have a generic extension on the interface IDbGroups, which defines a Groups property, but when it's invoked I get an InvalidOperationException.
Is there something I should know about generic extensions and their implementation?
Code:
public interface IDbGroups
{
[Required]
string Groups { get; set; }
}
public interface ISetting : IAssetsDbBase, IDbGroups
{
...
}
public sealed class Setting : AssetsDbBase, ISetting
{
...
public string Groups { get; set; }
...
public const string DefaultGroupName = "Assets";
}
[Expandable(nameof(Predicates.InGroupsImpl))]
public static MatchTypes DbInGroups<TEntity>(this TEntity item, string values) where TEntity : class, IDbGroups
=> DbUtility.ListMatches(item.Groups, values);
public static Expression<Func<TEntity, MatchTypes>> InGroupsImpl<TEntity>(string values) where TEntity : class, IDbGroups
=> e => DbUtility.ListMatches(e.Groups, values);
This line:
cache.Settings = await db.GetWorker<Setting>().WhereAsync(s => s.AppId.Equals(cache.App.Id) || s.AppId == null && s.DbInGroups(Setting.DefaultGroupName).Equals(MatchTypes.All));
Causes follow exception:
Inner Exception 1:
InvalidOperationException: No generic method 'InGroupsImpl' on type 'SLT.Assets.Extensions.LINQExtensions' is compatible with the supplied type arguments and arguments. No type arguments should be provided if the method is non-generic.
Any suggestions would be great, I can't figure out why this happens. The error remains even if I remove
s.DbInGroups(Setting.DefaultGroupName).Equals(MatchTypes.All))
It occurs when I query DbSet<Setting>.
Weird
Removing WindowsSdkPackageVersion property works for me
I can still remember the moment. A quiet afternoon in my first year of college, walking past a poster stuck slightly askew on the department wall: “Seeking students for an IoT project.”
It just caught my eye. I had no background in hardware, and, truthfully, I didn’t even know what “embedded systems” really meant. But something—a quiet spark of curiosity, perhaps—made me pause, take a photo of the poster, and later that night, send a hesitant email. That moment, which seemed so inconsequential at the time, became the flap of a butterfly’s wing.
What followed was a crash course in self-teaching, late-night debugging, and learning to build with both my hands and mind. By the end of the year, I had co-developed two working prototypes: a smart medicine dispenser and a stove timer control system. Both were later published as patents. That led to the Budding Engineer Award—an honor given annually to just two second-year students.
My achievements checked every box of conventional success. But I still felt disconnected, lacking that quiet sense of alignment that tells you you're building something that truly fits. I was proud, yes, but still searching.
The next domino fell during my internship at the National Informatics Centre. I was expecting a standard backend assignment. I hadn’t expected to be placed on the AI & Data Science team, and at the time, the field felt abstract and a bit intimidating.
But as I began building a file-based question-answering system using Retrieval-Augmented Generation (RAG) and LLaMA 3, something clicked. That sense of alignment I had been missing? I felt it then. I found myself staying up late, not out of pressure, but out of genuine curiosity. That internship did more than just expand my technical toolkit; it gave me direction. It turned curiosity into conviction.
The next ripple came during my six-month internship at Hyundai Motor India, where I joined the Internal Audit Division. It was unfamiliar territory, a room full of Chartered Accountants, with me as the lone data analyst. But that difference became my value. I worked through vast financial records, detected anomalies, and provided insights that directly supported audit processes and internal reviews.
What struck me was this: even in a traditional, compliance-focused environment, data had the power to challenge assumptions, improve systems, and guide meaningful decisions. It was more than just code or numbers; it was clarity, context, and consequence. This insight stayed with me and resurfaced when a friend shared their frustration with long, manual site assessments.
That conversation led to ArchiScout, a chatbot that automates site analysis for architectural planning. What began as a side project to help a friend soon evolved into something far more meaningful. A process that typically took weeks could now be done in minutes. Today, architecture students in my circle use it informally to fast-track their assignments. I now see ArchiScout as a blueprint for the kind of systems I want to build: tools that think with you, not just for you.
Knowing that my work was not just technically sound but genuinely useful was deeply fulfilling. I'm now working on publishing the project in hopes of making it accessible on a larger scale. But I know its true potential is still untapped. I want to refine it, scale it, and integrate more advanced models—steps I cannot take without a stronger academic foundation. That’s why I’m applying for a Master’s in Data Science.
[University-specific paragraph]
This Master’s program isn’t the destination—it’s the next ripple in a journey that began with a quiet choice I barely understood at the time. I want to take ArchiScout further and transform it into a robust, adaptable tool that genuinely supports architectural workflows. Over time, I hope to immerse myself in research that doesn’t just demonstrate technical capability but addresses problems that actually matter.
I entered college with questions, not a map. I followed sparks: a poster on a wall, an unexpected internship placement, and a conversation with a friend. And slowly, those sparks began to form a pattern. With every late-night debug session, every misstep turned into insight, and every unlikely opportunity followed by purpose, I found direction.
That accidental placement on the AI & Data Science team at NIC wasn’t just a lucky mismatch; it was the flap of a butterfly’s wing. What felt like a detour became a defining moment. It turned uncertainty into curiosity, and curiosity into conviction.
Now, I seek to deepen that conviction in an environment that challenges me intellectually, supports my growth, and equips me to design systems that think with you, not just for you, because I believe your program is where that impact can truly begin to scale.
As org.apache.commons.lang3.StringUtils.replaceIgnoreCase(String text, String searchString, String replacement) is now deprecated, I use org.apache.commons.lang3.Strings.CI.replace(String text, String searchString, String replacement) which was previously used by StringUtils.
This is unfortunately not a supported function yet.
https://support.google.com/docs/answer/3093377
Tip: You can't use table references in =INDIRECT yet. Try IMPORTRANGE instead.
IMPORTRANGE shouldn't require an INDIRECT
Syntax
IMPORTRANGE(spreadsheet_url, range_string)
The value for spreadsheet_url must either be enclosed in quotation marks or be a reference to a cell containing the URL of a spreadsheet.
The value for range_string must either be enclosed in quotation marks or be a reference to a cell containing the appropriate text.
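For example (the spreadsheet URL and range here are placeholders):

```
=IMPORTRANGE("https://docs.google.com/spreadsheets/d/KEY", "Sheet1!A1:C10")
```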
I also tried this, but without success.
[PXDBDateAndTime(UseTimeZone = true, PreserveTime = true, DisplayNameDate = "Scheduled Start Date", DisplayNameTime = "Scheduled Start Time")]
[PXDateTimeSelector(Interval = 15)]
[PXDefault]
[PXUIField(DisplayName = "Scheduled Start Date", Visibility = PXUIVisibility.SelectorVisible)]
public virtual DateTime? ScheduledDateTimeBegin
{
    get
    {
        return this._ScheduledDateTimeBegin;
    }
    set
    {
        this.ScheduledDateTimeBeginUTC = value;
        this._ScheduledDateTimeBegin = value;
    }
}
Did you solve the issue? My solution partially works for my case, hence I am still looking for a definitive solution.
So, define the delegate in the HorizontalHeaderView as suggested in the question you've mentioned.
In the TableView, implement a columnWidthProvider function such as the one below:
columnWidthProvider: function(column) {
    if (!isColumnLoaded(column))
        return -1;
    let headerWidth = horizontalHeader.implicitColumnWidth(column);
    let dataWidth = tableView.implicitColumnWidth(column);
    // limit the minimum width to the header width
    let columnWidth = Math.max(headerWidth, dataWidth);
    // and optionally limit the maximum width to 200
    return Math.min(columnWidth, 200);
}
The problem with this solution is that it prevents manual resizing of the columns.
I had the same issue today and used this tutorial to resolve it: https://next-intl.dev/docs/getting-started/app-router/with-i18n-routing. next-intl is a library that wraps your dictionaries in a server-side module, then makes its properties visible to your client-side components.
If you insist on a custom implementation, I would recommend doing the same thing. Also, if you find a way to make the generated pages static, let me know.
This is my implementation of the archive pre-commit hook mentioned by @ignoring_gravity, since ruff and isort still don't support converting to absolute imports (they only support converting to relative imports).
I described it in more detail in this answer
It goes through all the files passed to it and checks them for relative imports that go beyond the package boundaries.
The tool also has useful arguments: -v / --verbose, -d / --dry-run, -i / --ignore and -R / --root-dir. You can read more about them in the tool help:
usage: main.py [-h] [-R ROOT_DIR] [-i IGNORED_PATHS] [-v] [-d] file_paths [file_paths ...]
positional arguments:
file_paths files to be processed
options:
-h, --help show this help message and exit
-R ROOT_DIR, --root-dir ROOT_DIR
path to root directory (e.g., ./src)
-i IGNORED_PATHS, --ignore IGNORED_PATHS
regex pattern to ignore file paths
-v, --verbose output extra information
-d, --dry-run run without changes
If you're using PNPM and install using pnpm add puppeteer, you may need to run the postinstall script to finish the setup:
pnpm approve-builds
And select > puppeteer
My friends, it was entirely my own mistake. Everything works perfectly.
Check that you have only one .env file. I had a .env.profuction file alongside it; when I deleted it, the error disappeared.
Yes, you can export entity data from AutoCAD by using AutoLISP, .NET API, or scripts to read properties and XData, then write them to CSV. For DXF files, you can parse them using Python or other languages with libraries like ezdxf, extract the entity data, and save it as CSV easily.
This works in a pyproject.toml file, for overrides per Python module:
[[tool.mypy.overrides]]
module = "mymodule.*"
ignore_errors = true
If it's hosted on Render as a static site, add this rewrite rule: Source: /*, Destination: /index.html (see picture).
Here is a slightly different approach: for each zoom-out operation, it calculates how many further operations would be required to restore the default scale (1x) and, accordingly, shifts the canvas view proportionally towards its initial position.
Credit to Christian C. Salvadó for the augmented Math.Log function to allow the specification of a base!
<!DOCTYPE html>
<html>
  <head>
    <script src="https://unpkg.com/[email protected]/konva.min.js"></script>
    <meta charset="utf-8" />
    <title>Konva Zoom Relative to Stage Demo</title>
    <style>
      body {
        margin: 0;
        padding: 0;
        overflow: hidden;
        background-color: #f0f0f0;
      }
    </style>
  </head>
  <body>
    <div id="container"></div>
    <script>
      var stage = new Konva.Stage({
        container: 'container',
        width: window.innerWidth,
        height: window.innerHeight,
      });
      var layer = new Konva.Layer();
      stage.add(layer);

      var rect = new Konva.Rect({
        x: stage.width() / 2,
        y: stage.height() / 2,
        width: 50,
        height: 50,
        fill: 'green',
      });
      layer.add(rect);
      layer.draw();

      const scaleFactor = 1.03;
      stage.addEventListener('wheel', (e) => zoomStage(e));

      function zoomStage(event) {
        event.preventDefault();
        var oldScale = stage.scaleX();
        var oldPos = stage.getPointerPosition();
        var zoomIn = event.deltaY < 0;
        var scaleMult = zoomIn ? scaleFactor : 1 / scaleFactor;
        var newScale = Math.max(oldScale * scaleMult, 1);
        var scaleDelta = newScale / oldScale;
        stage.scale({ x: newScale, y: newScale });
        if (zoomIn) {
          stage.position({
            x: oldPos.x * (1 - scaleDelta) + stage.x() * scaleDelta,
            y: oldPos.y * (1 - scaleDelta) + stage.y() * scaleDelta,
          });
        } else {
          var timesScaled = Math.round(Math.log(newScale, scaleFactor));
          var positionScaleFactor = timesScaled / (timesScaled + 1);
          stage.position({
            x: stage.x() * positionScaleFactor,
            y: stage.y() * positionScaleFactor,
          });
        }
        stage.batchDraw();
      }

      // Augment Math.log to accept an optional base argument
      // https://stackoverflow.com/a/3019319
      Math.log = (function () {
        var log = Math.log;
        return function (n, base) {
          return log(n) / (base ? log(base) : 1);
        };
      })();
    </script>
  </body>
</html>
@trincot Thanks for the patience, sir. I'll try to explain my point as best I can for anyone who feels unclear about my description of the question.
First, I have a custom component CollapsibleFilter which acts as a wrapper around Collapsible from shadcn/ui. By passing children to this component, it will display the children's content when the collapsible is opened. E.g., when I write:
<CollapsibleFilter title="foo">
hello world
</CollapsibleFilter>
it should show something like this in the browser (when you click the collapsible to open it):
Now I created a React state variable called mapObj:
const [mapObj, setMap] = useState<Map<string, string>>();
useEffect(() => {
setMap(new Map([["John", "abc"]]));
}, []);
which is an object of type Map and will be initialized with the key-value pair ["John", "abc"].
Now when I write:
<h2>outside of collapsible filter</h2>
{mapObj?.entries().map((kv) => {
return (
<div className="text-orange-400">
<p>name:{kv[0]}</p>
<p>tag:{kv[1]}</p>
</div>
);
})}
As I expected, the content of mapObj is displayed in the browser:
This contradicts @DipitSharma's answer, which states that React can't render the result returned from an iterator.
And if I write (let's call this "test1"):
<h2 className="text-red-800">
using <code>Array.from(map.entries())</code>
</h2>
<CollapsibleFilter title="test1">
{Array.from(mapObj?.entries() ?? []).map((kv) => {
return (
<div className="text-red-400">
<p>name:{kv[0]}</p>
<p>tag:{kv[1]}</p>
</div>
);
})}
</CollapsibleFilter>
This also works as I expected: the content of mapObj is inside my CollapsibleFilter component, and you can click to open the collapsible to see it.
However, if I write (let's call this "test2"):
<h2 className="text-blue-800">
using <code>map.entries()</code>
</h2>
<CollapsibleFilter title="test2">
{mapObj?.entries().map((kv) => {
return (
<div className="text-blue-300">
<p>name:{kv[0]}</p>
<p>tag:{kv[1]}</p>
</div>
);
})}
</CollapsibleFilter>
Against my expectation, when I click to open the collapsible there is nothing:
So why does "test1" work but "test2" fail?
In Helm v3.13.2 at least, the default behaviour is what you expected, i.e. the entire list is overwritten rather than merged.
They are both writing to the same object - when you are doing something simple like adding group membership, it doesn’t matter which cmdlets you use.
If you are setting an Exchange-specific attribute, to remain in support, you should use the Exchange cmdlets.
There is a version for Windows by the original author at www.taroz.net. It appears to be the first release on which all these versions are based.
Okay, I found that when there are no pods running through ArgoCD, we don't get these metrics. So throw in a simple pod and then check.
If you run dbshell inside a container, the .dbshell file will be created inside that container’s filesystem, in the home directory of the user running the command.
So:
docker exec -it <your_container> bash -c 'ls -la ~ | grep .dbshell'
For me, isSuccess is local to the current component instance, so if you call navigate() in onSuccess, the component unmounts immediately and you never observe isSuccess: true on that instance.
Don't rely on isSuccess if you navigate or unmount right after the mutation.
When using container images on App Platform you can patch the spec and just update the changes, e.g. like this:
doctl apps update <app-id> --update-sources --spec - <<EOF
services:
- name: <service-name>
image:
tag: "<new-version>"
EOF
You will need the --update-sources argument, otherwise it won't update its source.
CentOS 9, Apache server. I had the same issue: uploads always went to the temp directory, which is something like /tmp/systemd*php*/tmp.
Turned out that systemd's .service unit for the php-fpm service (/usr/lib/systemd/system/php-fpm.service) has the following line:
PrivateTmp=true
Comment it out and reload the unit:
systemctl daemon-reload
Only then did it stop targeting the default tmp directory and use the upload_tmp_dir I explicitly set in php.ini.
DBeaver does not auto-refresh schemas in use cases like yours.
After altering the schema, refresh manually using Shift+Ctrl+R (ensure it's configured in Preferences > Keys > Navigate) before pressing F4.
It's important to choose "When" correctly; you may want to set "Database Navigator [view] context" instead of "SQL Script Editor Context" (see below). Beware of replacing other important bindings you use.
I found that if I use Ctrl+V, it defaults to pasting unformatted text. If I right-click and choose "Paste, keeping source formatting", it does what I want: it maintains the color coding of the SQL editor window.
I had the same problem. Just had to install nuget pkg: NAPS2.Tesseract.Binaries
https://github.com/cyanfish/naps2/blob/master/NAPS2.Sdk.Samples/OcrSample.cs
To limit the number of requests per minute on a gRPC Java client, you can create a client-side interceptor that wraps the io.grpc.Channel. This interceptor keeps track of how many requests have been sent in the current time window (like one minute). If the request limit is reached, it waits until the next minute before allowing more requests.
This way, you avoid exceeding the server’s rate limits and prevent receiving errors like RESOURCE_EXHAUSTED. There isn’t a built-in decorator for this in io.grpc.Channel, so implementing a custom ClientInterceptor is the recommended approach. You can also use algorithms like token bucket or leaky bucket to make the rate limiting smoother.
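As a language-agnostic sketch of the token-bucket idea (the class name and the 0.05 s polling interval are my own; a Java ClientInterceptor would call the equivalent of acquire() before forwarding each outgoing call):

```python
import threading
import time

class TokenBucket:
    """Allow at most `rate` calls per `period` seconds; callers block for a token."""

    def __init__(self, rate: int, period: float = 60.0):
        self.capacity = rate
        self.tokens = rate
        self.period = period
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> None:
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill proportionally to the time elapsed since the last refill
                refill = (now - self.last_refill) * (self.capacity / self.period)
                if refill >= 1:
                    self.tokens = min(self.capacity, self.tokens + int(refill))
                    self.last_refill = now
                if self.tokens > 0:
                    self.tokens -= 1
                    return
            time.sleep(0.05)  # wait for the next token instead of failing
```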
sorry but I'm having the very same problem...
Could somebody help us ?
It's not possible to directly get the Wayland event serial for a pointer-enter event in GTK 3. The toolkit's design intentionally abstracts away such backend-specific details.
namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Livewire\Livewire;
use Illuminate\Support\Facades\Route;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        Livewire::setUpdateRoute(function ($handle) {
            return Route::post('/custom/livewire/update', $handle);
        });
    }
}
When @RequestBody is added, Spring requires the content type to be application/json and a JSON body.
When @RequestPart is added, Spring requires the content type to be multipart/form-data and the input as form data.
=> If both of them are included in one controller method, a 403 error is thrown because the content type cannot be correct for both.
In this case, I think we can take username and email as @RequestPart:
@RequestPart("username") String username
I finally managed to make it work by
- storing myprogram.exe in the root directory,
- keeping the original package.json file, and
- adding in forge.config.js:
module.exports = {
  packagerConfig: {
    asar: true,
    extraResource: ['myprogram.exe']
  },
Note that there is no final "s" in extraResource.
Browsers and Google look for a 32×32 icon, so if your favicon.ico only contains a 16×16 image, they'll pad it and it looks tiny. To fix it, generate a multi-size favicon (16×16, 32×32, 48×48) or PNGs, put them in /public, and declare them in metadata.icons in app/layout.tsx. That way Google will pull the right-sized image and your favicon will show at full size. Hope it helps :)
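For reference, a sketch of what that metadata.icons declaration in app/layout.tsx might look like (file names are examples, assuming the PNGs exist in /public):

```tsx
export const metadata = {
  icons: {
    icon: [
      { url: '/favicon-16x16.png', sizes: '16x16', type: 'image/png' },
      { url: '/favicon-32x32.png', sizes: '32x32', type: 'image/png' },
    ],
    shortcut: '/favicon.ico',
  },
};
```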
Try the inline template expression function @mergeObjects(..., ...).
It takes two JSON objects as input. It was added recently and doesn't have public documentation yet.
Using npmAuthenticate@0 requires the working file to be either in the working directory of the agent or within a subfolder of it. It's not noted in the docs, but for me even then it didn't authenticate the .npmrc file.
I have no idea but I also tried to do the same thing, it never worked
You can use HttpRequest.BodyPublishers.ofByteArray(...) instead of ofByteArrays(...).
The Content-Length header will then be set.
The only drawback is that you first have to join the list of byte arrays into one byte array.
You can reserve a large, contiguous address space by using mmap with PROT_NONE (no permissions). Then, you can later "grow" into this allocation by using mprotect to update a segment with the proper permissions. This way, you can mmap the maximum amount of memory needed, which will ensure the pages are virtually continuous but not yet mapped to a physical page until the protections are incrementally updated.
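A sketch of that pattern in Python via ctypes (Linux-only; the 1 GiB reservation size is arbitrary for illustration):

```python
import ctypes
import ctypes.util
import mmap  # used only for the PROT_* / MAP_* constants and the page size

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.restype = ctypes.c_int
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PROT_NONE = 0  # not exposed by the mmap module
MAP_FAILED = ctypes.c_void_p(-1).value
RESERVE = 1 << 30          # reserve 1 GiB of address space
PAGE = mmap.PAGESIZE

# Reserve the range with no permissions: contiguous address space only,
# no physical pages are committed yet
base = libc.mmap(None, RESERVE, PROT_NONE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
assert base is not None and base != MAP_FAILED

# "Grow" into the reservation: make the first page readable and writable
assert libc.mprotect(base, PAGE, mmap.PROT_READ | mmap.PROT_WRITE) == 0

# The page is now backed on first touch; write to it
ctypes.memset(base, 0x41, PAGE)
```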
Solution 1: Use extraResources instead of extraFiles
In your forge.config.js, try using extraResources instead of extraFiles:
module.exports = {
  packagerConfig: {
    asar: true,
    extraResources: [
      {
        from: 'resources/myprogram.exe',
        to: 'resources/myprogram.exe',
      },
    ],
  },
  makers: [
    // your makers config
  ],
  plugins: [
    [
      '@electron-forge/plugin-auto-unpack-natives',
      {
        // options if needed
      },
    ],
  ],
};
Solution 2: Update your package.json configuration
Try this in your package.json:
"build": {
  "appId": "myapp",
  "extraResources": [
    {
      "from": "resources/myprogram.exe",
      "to": "resources"
    }
  ]
}
Solution 3: Check file paths and structure
Ensure that:
If none of these work, please provide:
your forge.config.js
your package.json build configuration

Import a whole module into one variable, equivalent to import myModule from './my-module':
node -e "globalThis.myModule = await import('./my-module.mjs')" -i
The plugin pretty-shell is causing this. The easiest solution is to disable the plugin.
Good news: with g++ 15.2, released today (Aug 8, 2025), gtkmm 4.0 header units compile without errors!
Thanks M.Adel. For me, uninstalling the driver under "Mice and other pointing devices" > "HID-compliant mouse" and re-pairing the mouse worked.
To fix this, modify `persistence.xml`:
<property name="hibernate.cdi.extensions" value="true" />
Can someone explain to me why it works?
Hope this helps someone.
If you're deploying from VS Code, your function has to be running AND you have to have sent a request through it before it will register on the portal during deployment. Proper annotations and all that aside, if this is not done it will not find your method: "No HTTP triggers found".
A minor safety option which is what I did in similar circumstances:
Move the offending and dangerously named item ~ to a new name, foo:
mv \~ foo
Then check it's what it should be: ls foo should not list your home folder.
Then remove:
rm -r foo
@bot.command(pass_context=True)
@commands.has_role("Moderator")
async def unban2(ctx):
    mesg = ctx.message.content[8:]
    banned = await bot.get_user_info(mesg)
    await bot.unban(ctx.message.server, banned)
    await bot.say("Unbanned!")
GridView.builder(
  controller: _scrollController,
  cacheExtent: 200,
  // ...
)
Use cacheExtent so that Flutter keeps the loaded images in memory and does not discard them while they are out of view.
#include "numpy/arrayobject.h"
#include "numpy/ndarraytypes.h"
// obj is your PyArrayObject*
PyArray_Descr* descr = PyArray_DESCR((PyArrayObject*)obj);
// Make sure it's a datetime or timedelta type
if (descr->type_num == NPY_DATETIME || descr->type_num == NPY_TIMEDELTA) {
    // c_metadata actually points to a PyArray_DatetimeDTypeMetaData,
    // whose `meta` member holds the unit information
    PyArray_DatetimeDTypeMetaData* dt_meta =
        (PyArray_DatetimeDTypeMetaData*)descr->c_metadata;
    if (dt_meta != NULL) {
        NPY_DATETIMEUNIT unit = dt_meta->meta.base;
        // Now unit is an enum value like NPY_FR_D, NPY_FR_M, etc.
        // You can switch on `unit` or print it
    }
}
After trying every suggestion I could find (Safari, clearing cache, different networks, different Macs), the only thing that worked was this method:
Steps:
1. On your iPhone:
• Turn off Wi-Fi.
• Use your cellular data only.
2. Install a free VPN that offers a USA (New York) server and turn it on.
3. Open Safari:
• Close all tabs.
• Go to Apple Developer.
4. Log out of your Apple Developer account.
• When it asks “Trust this browser?”, click Trust before logging out.
5. Log back in.
6. Navigate to Certificates, Identifiers & Profiles → Keys.
7. Revoke old APNs keys
(Note: any apps using these keys will need to be updated with the new one).
8. Create a new key:
• Name it something like 2025ApnYourName.
• Select Apple Push Notifications service (APNs).
• Configure for Sandbox & Production.
9. Register and immediately click Download.
• This time the .p8 file downloaded successfully. 🎉
Why this works:
It seems to be related to Apple’s server location checks and caching. Forcing a different IP (VPN to USA) over cellular, plus a fresh login, triggers the download to work.
You can split the file contents on semicolons safely (for most typical SQL dumps) and execute each statement separately.
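A minimal sketch of that approach in Python with sqlite3 (the dump text is illustrative; note that naive splitting breaks on semicolons inside string literals, triggers, or comments):

```python
import sqlite3

def run_sql_dump(conn, dump_text):
    """Split a SQL dump on semicolons and execute each non-empty statement."""
    for statement in dump_text.split(";"):
        statement = statement.strip()
        if statement:  # skip blanks produced by trailing semicolons
            conn.execute(statement)

conn = sqlite3.connect(":memory:")
run_sql_dump(conn, """
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users (name) VALUES ('alice');
    INSERT INTO users (name) VALUES ('bob');
""")
```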
# Convert the black & white image to a 2D-style PNG by applying a slight threshold to enhance the sketch look
from PIL import Image
import numpy as np

# Load the source image and convert it to grayscale
# ("input.png" is a placeholder for your source file)
bw_image = Image.open("input.png").convert("L")

# Convert to numpy array for processing
bw_array = np.array(bw_image)

# Apply a binary threshold to simulate a 2D sketch effect
threshold = 100
bw_2d_array = (bw_array > threshold) * 255  # Convert to binary: 0 or 255

# Convert back to a PIL image
bw_2d_image = Image.fromarray(bw_2d_array.astype(np.uint8))

# Save as PNG
bw_2d_image.save("black_white_2d_sketch.png")
Span creation for metrics/monitoring and thread-locals are common cases of wanting an unnamed resource, where the Java compiler could instead create a name behind the scenes on which it would call close() on the AutoCloseable.
try(createSpan()) {
// actions in the monitored span
} // span is complete
The _ usage is a little better, but still unnecessarily verbose.
Did you solve it, bro? I have the same issue.
Depending on your system, connections can be a significant issue with MARS: SQL Server's connection limit can get hit and the pooler restarts, which causes some failed logins. It's very frustrating to troubleshoot from the DBA end. You watch low traffic from sys.dm_exec_requests/sys.dm_exec_sql_text but get high counts from sys.dm_exec_connections; joined to sys.dm_exec_sessions, it looks like your web service account has thousands of connections for minimal SQL, because MARS creates a connection for each statement in the batch while those DMVs only show the specific query running. sp_who2 won't show all the logins mapped either. It's REALLY annoying.
The one that worked for me was excluding the subsystem, and I have JBoss 7.0.9.
Is there any official bug reported?
Right-click on your project > click Properties at the bottom > click Libraries on the left side > click Add JAR or Add External JAR > locate the JAR file and load it > click OK. Now you have added the JAR file to your classpath and the ClassNotFoundException should disappear. If the issue is still unresolved, open the JAR file with JD-GUI and check whether the class you are looking for is present. If it is, this should resolve the error; if not, you should upgrade or downgrade the JAR depending on the situation.
For PowerShell:
$resourceGroupName = "my-rg"   # placeholder values, set these for your environment
$location = "eastus"
$tags = @{"tag1"="xxx";"tag2"="2025-08-08";"tag3"="[email protected]";"tag4"="xxx";"tag5"="xxx";"tag6"="xxx"; "tag7"="xx"}
New-AzResourceGroup -Name $resourceGroupName -Location $location -Tag $tags
Can you give the two options below a try?
spark.read.schema(schema).option("mode", "DROPMALFORMED").json("my_path")
spark.read.schema(schema).option("mode", "FAILFAST").json("my_path")
DROPMALFORMED : will ignore the corrupted records.
FAILFAST : throws an exception when it meets corrupted records.
Using the top syntax, I get a different error:
error: Failed to parse: `pyproject.toml`
Caused by: TOML parse error at line 9, column 15
|
9 | docs = {sphinx}
| ^
trailing commas are not supported in inline tables, expected nothing
and getting this error for the bottom syntax:
error: Project `testproject` has malformed dependency groups
Caused by: Failed to find group `docs` specified in `[tool.uv.dependency-groups]`
Nice question; this kind of task comes up a lot when handling image uploads or API payloads.
I've used a similar approach before: save the Bitmap to a MemoryStream, then convert the bytes to Base64. Using `ImageFormat.Png` usually preserves quality better than JPEG for transparency.
Hope someone posts a clean snippet here!
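I don't have a C# snippet handy, but the shape of the approach (write the image into an in-memory stream, then Base64-encode the stream's bytes) can be sketched in Python; the PNG header bytes below are just a stand-in for real image data:

```python
import base64
import io

# Stand-in for the bytes a bitmap.Save(memoryStream, ImageFormat.Png) call would produce
fake_png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

stream = io.BytesIO()
stream.write(fake_png_bytes)          # analogous to saving the Bitmap into a MemoryStream
encoded = base64.b64encode(stream.getvalue()).decode("ascii")
decoded = base64.b64decode(encoded)   # round-trips back to the original bytes
```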
I made a program that checks how high you can make a pyramid from a given number of blocks. Hope it helps:
blocks = int(input("Enter the number of blocks: "))
height = 0
layer = 0
# Each new layer needs one more block than the previous one;
# stop when there aren't enough blocks left for the next layer.
while blocks > layer:
    layer += 1
    blocks -= layer
    height += 1
print(height)
You can set up a Local Group Policy that applies only to a specific user account (or to Administrators vs. non-Administrators), so there is no need for PowerShell here :)
To your main question:
I try to utilize Azure Resource Graph to get all records from Public DNS zones...Does anybody have an idea which table to query to get the records?
This is not possible through Resource Graph. Public DNS records aren't stored there.
I have a bash script that does this by looping through subscriptions:
az account list --query "[].id" -o tsv | while read sub; do
az network dns zone list --subscription "$sub" --query "[].{rg:resourceGroup, zone:name}" -o tsv | \
while read rg zone; do
for type in A AAAA CNAME MX NS PTR SRV TXT; do
case "$type" in
CNAME)
az network dns record-set cname list \
--subscription "$sub" -g "$rg" -z "$zone" \
--query "[].{sub: '$sub', rg: '$rg', zone: '$zone', type: 'CNAME', name: name, records: cnameRecord.cname}" \
-o tsv
;;
A)
az network dns record-set a list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .aRecords[]?.ipv4Address as $ip
| [$sub, $rg, $zone, "A", .name, $ip] | @tsv
'
;;
TXT)
az network dns record-set txt list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .txtRecords[]?.value[] as $txt
| [$sub, $rg, $zone, "TXT", .name, $txt] | @tsv
'
;;
NS)
az network dns record-set ns list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .nsRecords[]?.nsdname as $nsd
| [$sub, $rg, $zone, "NS", .name, $nsd] | @tsv
'
;;
AAAA)
az network dns record-set aaaa list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .aaaaRecords[]?.ipv6Address as $ip6
| [$sub, $rg, $zone, "AAAA", .name, $ip6] | @tsv
'
;;
MX)
az network dns record-set mx list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .mxRecords[]? as $mx
| [$sub, $rg, $zone, "MX", .name, "\($mx.preference) \($mx.exchange)"] | @tsv
'
;;
PTR)
az network dns record-set ptr list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .ptrRecords[]?.ptrdname as $ptr
| [$sub, $rg, $zone, "PTR", .name, $ptr] | @tsv
'
;;
SRV)
az network dns record-set srv list \
--subscription "$sub" -g "$rg" -z "$zone" -o json | \
jq -r --arg sub "$sub" --arg zone "$zone" --arg rg "$rg" '
.[] | .srvRecords[]? as $srv
| [$sub, $rg, $zone, "SRV", .name, "\($srv.priority) \($srv.weight) \($srv.port) \($srv.target)"] | @tsv
'
;;
*)
echo "Skipping unknown record type: $type" >&2
;;
esac
done
done
done
Your idea of a "magnetic scan" on an LLM is a powerful way to describe a field of study called interpretability. This field aims to open up the "black box" of LLMs and understand their internal workings. While we don't use MRI machines, researchers use various techniques to see which "areas" of the model are most active in response to different inputs.
If you use the free LAB Fit Curve Fitting Software ( www.labfit.net ) to fit a function to your dataset, the "Results" dialog box gives you: 1) the average values of the parameters, 2) the corresponding uncertainties, and 3) the covariance matrix. LAB Fit has an error propagation option (first-order approximation). Using this option, you supply: 1) the expression for the propagated error, 2) the average values, 3) the uncertainties, and 4) the covariance matrix. See a complete example at https://www.labfit.net/fitting.htm . If you install LAB Fit, you can watch several videos by clicking "Help" and choosing "Show Features (ppsx)". A general idea is available at https://www.youtube.com/@WiltonPereira-d9z
Wilton
Fun fact: the month field is zero-based in the Date object. So to get the first of January, you need to do this:
var date = new Date(2000, 0, 1)
The page contains tabularized parameters that will help you decide, comparing the in-memory behaviour of XSSFWorkbook and SXSSFWorkbook.
This issue only occurs with a specific few releases of version 17.10. It ended up being a known bug that was fixed in later versions and is no longer an issue in recent builds. The solution, if you encounter it, is to upgrade VS.
Instead of adding "input_shape" to your first layer ...
add Input(shape) as your first layer
classifer.add(keras.Input(shape=(11,)))
Then your layers
classifer.add(Dense(6, activation = 'relu'))
More about the sequential model here: https://keras.io/guides/sequential_model/
Just commenting to let you know I just ran into this issue and I am extremely annoyed about it. WTF is this, I just want an API key. To create a Power-Up I also need to host some HTML page somewhere for an iframe?! I need to host a webhook to get an API key?!
Iframe connector URL (Required for Power-Up)
who thought this was a good idea?!
Your "ProxyPass /static/ !" exclusion must come before the "ProxyPass /" rule, so Apache serves static files itself instead of forwarding them to Gunicorn. Also, make sure your "Alias /static/" points to the correct static files directory and that Apache has permission to read it. The MIME error happens because Gunicorn returns an HTML 404 page instead of the CSS file when static requests get proxied to it.
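A minimal sketch of the ordering (the paths and port below are assumptions — adjust them to your layout):

```apache
# Exclusion first: Apache serves /static/ itself
ProxyPass /static/ !
Alias /static/ /var/www/myproject/static/
<Directory /var/www/myproject/static/>
    Require all granted
</Directory>

# Everything else goes to Gunicorn
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
```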
If the same error happens in every directory, it means Yarn is picking up
"packageManager": "[email protected]"
from a package.json in your home directory.
@RequestBody with Multipart (file + JSON) returns 403
I'm making a small Spring Boot test project where I want to update a user's profile data (name, email, and image).
If I send the request in Postman using form-data with only a file, everything works fine.
But as soon as I uncomment the @RequestBody line and try to send both JSON and file in the same request, I get a 403 Forbidden error.
My controller method:
@PatchMapping("/{id}")
@Operation(summary = "Update user")
public ResponseEntity<User> updateUser(
@PathVariable Long id,
@RequestPart("file") MultipartFile file
// @RequestBody UserDtoRequest userDtoRequest
) {
System.out.println(id);
System.out.println(file);
// System.out.println(userDtoRequest);
return null;
}
My DTO:
@Data
@AllArgsConstructor
public class UserDtoRequest {
@Nullable
@Length(min = 3, max = 20)
private String username;
@Nullable
@Email(message = "Email is not valid")
private String email;
}
I can only accept data from UserDtoRequest if I use raw JSON in Postman, but then I cannot attach the image.
Question:
How can I send both a file and JSON object in the same request without getting a 403 error?
@RequestBody expects the entire request body to be JSON, which conflicts with multipart/form-data used for file uploads.
The correct way is to use @RequestPart for both the JSON object and the file.
@PatchMapping(value = "/{id}", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public ResponseEntity<User> updateUser(
@PathVariable Long id,
@RequestPart(value = "file", required = false) MultipartFile file,
@RequestPart(value = "user", required = false) UserDtoRequest userDtoRequest
) {
System.out.println("ID: " + id);
System.out.println("File: " + file);
System.out.println("User DTO: " + userDtoRequest);
// TODO: Save file, update user, etc.
return ResponseEntity.ok().build();
}
Method: PATCH
URL: http://localhost:8080/users/{id}
Go to Body → form-data and add:
Key: file → Type: File → choose an image from your computer.
Key: user → Type: Text → paste JSON string:
{"username":"John","email":"[email protected]"}
@RequestPart tells Spring to bind individual parts of a multipart/form-data request to method parameters.
This allows binary data (file) and structured data (JSON) in the same request.
Using @RequestBody with multipart is not supported because it expects a single non-multipart payload.
✅ Tip: If Spring can’t parse the JSON in user automatically, add:
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
or ensure that the user field in Postman is exactly valid JSON.
2025:
Same problem here.
My findings:
With pyarrow.set_memory_pool(pyarrow.jemalloc_memory_pool()) and pyarrow.jemalloc_set_decay_ms(0), the memory is eventually released, provided you have enough memory to avoid an OOM kill before the full GC triggers.
I'm facing exactly the same issue with Drools 8 and Drools 10.1.0.
Everything works fine in IntelliJ, but when I deploy it to Linux (RHEL 7.0) I get an NPE:
Caused by: java.lang.NullPointerException: Cannot invoke "org.kie.api.KieServices.newKieFileSystem()" because "this.ks" is null
at org.kie.internal.utils.KieHelper.<init>(KieHelper.java:52)
I added META-INF/kie.conf with this content:
# KIE configuration file for Drools
# Example: Specify the KieServices implementation
org.kie.api.KieServices = org.drools.compiler.kie.builder.impl.KieServicesImpl
org.kie.internal.builder.KnowledgeBuilderFactoryService = org.drools.compiler.builder.impl.KnowledgeBuilderFactoryServiceImpl
Simple Code:
public static StatelessKieSession buildStatelessKieSession(List<String> drlFiles) {
KieHelper kieHelper = new KieHelper();
for(String drlFile : drlFiles){
kieHelper.addContent(drlFile, ResourceType.DRL);
}
return kieHelper.build().newStatelessKieSession();
}
Go to node_modules/react-native-gesture-handler/android/src/main/java/com/swmansion/gesturehandler/core/GestureHandlerOrchestrator.kt
at line 193 change awaitingHandlers.reversed() to awaitingHandlers.asReversed()
https://github.com/software-mansion/react-native-gesture-handler/issues/3621
For some websites I tried, eyllanesc's solution did not work. You can try adding these parameters at the very start of your PyQt app, before the QApplication is created:
import os

os.environ['QTWEBENGINE_CHROMIUM_FLAGS'] = '--ignore-ssl-errors --ignore-certificate-errors --allow-running-insecure-content --disable-web-security --no-sandbox'
os.environ['QTWEBENGINE_DISABLE_SANDBOX'] = '1'
The points of a square can only be separated by one of two lengths: a side or a diagonal. Given an arbitrary set of 4 points, we don't know which pairs are diagonally opposite and which are adjacent.
Without loss of generality you only need to check 4 distances:
║A,B║;
║A,C║;
║A,D║;
And one other: ║B,C║; ║B,D║; or ║C,D║.
As @Floris said, using squared distances is easiest.
Of the first three distances, two will be equal (that value is the side length, squared); the third must be a diagonal and thus 2 * side^2.
For the last check, take the point that is diagonally opposite A together with one of the other two points (B or C); this distance must equal the side length (squared).
This does not, as @Kos pointed out, handle the bow-tie shape. If that is a concern, the ordering of the vertices matters: treat the input as a list of ordered points, and set up the function arguments so that A and C are diagonally opposite, and likewise B and D.
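A small sketch of the squared-distance idea (this version checks all six pairwise distances, which is order-independent, rather than the minimal four):

```python
from itertools import combinations

def sq_dist(p, q):
    """Squared Euclidean distance between two 2D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_square(pts):
    """True if the 4 points form a square, regardless of vertex order."""
    d = sorted(sq_dist(p, q) for p, q in combinations(pts, 2))
    # A square has 4 equal sides and 2 equal diagonals of twice the squared side.
    return d[0] > 0 and d[0] == d[3] and d[4] == d[5] == 2 * d[0]
```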
You will need EXT:crawler 12.0.9 or later ( unreleased as of the time of writing ) due to this bug:
https://github.com/tomasnorre/crawler/issues/1140
The problem was that I had installed Visual Studio Code from the Flatpak store. Installing the .deb file from the VS Code website instead, and manually downloading the zip file of the Flutter SDK, fixed the problem.
This is a bug in Flutter/Chrome. It seems to be caused by Google no longer automatically falling back to the cross-platform software renderer SwiftShader when needed. A hoped-for fix has not been able to mitigate this yet.
Possible workaround:
--disable-features=DisallowRasterInterfaceWithoutSkiaBackend

The size of the parameter set in the two representations of the ellipse, either 5 (2 center coordinates, 2 radii, 1 alignment angle) or 6 (quadratic form $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$), differs because the algebraic form leaves the solution set (x,y) unchanged when divided through by any of its 6 (nonzero) coefficients: $x^2+(B/A)xy+(C/A)y^2+(D/A)x+(E/A)y+F/A=0$.
So the quadratic form has effectively only 5 independent parameters.
I faced this same issue because I had entered incorrect credentials.
So check your database URL and environment variables.
No — you can’t directly create a foreign key constraint on an expression like DATE(created_date) in standard SQL (including MySQL, PostgreSQL, SQL Server, Oracle, etc.).
Foreign key constraints must be defined between actual columns, not computed expressions. Both columns must also match in data type and precision.
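A common workaround is to store the date part in its own plain column (kept in sync by the application or a trigger, or as a stored generated column where the database allows constraints on it) and point the foreign key at that. A sketch using sqlite3 for illustration (the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE calendar (day TEXT PRIMARY KEY)")
# created_day duplicates DATE(created_date) as a real column,
# so the foreign key can reference calendar(day).
con.execute("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        created_date TEXT NOT NULL,
        created_day  TEXT NOT NULL REFERENCES calendar(day)
    )
""")
con.execute("INSERT INTO calendar VALUES ('2024-01-01')")
con.execute("INSERT INTO events VALUES (1, '2024-01-01 10:30:00', '2024-01-01')")
```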
IIS will restart the site if it detects the web.config file has changed (by default).
So assuming you can programmatically read and then save that file (you don't even need to make any change), IIS will handle the rest.
This works not just in Blazor (.net core) but also .NET framework (MVC and webforms) and even classic ASP, if you really, really need to.
Probably not the "correct" way, but it is simple and works; given that this approach has worked for 20+ years, all the way back to classic ASP, it seems pretty robust.
I have put this in old web forms apps for years, used it many times (normally to clear caches) and never had any issue, such as mangling the web.config.
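The read-then-save trick boils down to rewriting web.config with its own contents so the modification time changes and IIS's file watcher restarts the site. A minimal sketch in Python (in an ASP.NET app you would do the same in C# with System.IO.File; the path is a placeholder):

```python
def touch_web_config(path="web.config"):
    """Rewrite the file with its own, unchanged contents.
    The write bumps the mtime, which triggers IIS's change detection."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data)
    return data
```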