Add this dependency to your pom.xml file:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>5.3.7.Final</version>
</dependency>
Open Git Bash and enter: git update-git-for-windows
@Kaz, Thank you for introducing the #: notation and the circle notation.
As you said, gensym can be omitted in many macros, without creating many different uninterned symbols that share the same names.
In contrast, in the sortf macro, different #:NEW1 symbols should be used, and get-setf-expansion already makes a new uninterned #:NEW1 whenever it's used.
In sortf, the number of #:NEW1 symbols is proportional to the number of arguments that are passed to sortf. Therefore, sortf cannot use the fixed number of symbols that can be generated by the #: notation.
Flutter developers using Android Studio may encounter a common warning message related to the Flutter Device Daemon Crash, specifically advising to "Increase the maximum number of file handles available." This issue can disrupt the development workflow, but fear not: there is a straightforward solution. In this article, we'll guide you through the steps to fix this warning and get your Flutter project back on track.
In Windows, open the Registry Editor.
Navigate to the following path:
HKEY_LOCAL_MACHINE
└── SYSTEM
└── CurrentControlSet
└── Services
└── WebClient
└── Parameters
└── FileSizeLimitInBytes
Change the value from hexadecimal [ffffffff] to decimal [4294967295].
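For reference, the same change can be applied from an elevated Command Prompt (a sketch; the decimal value below equals hex ffffffff):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f
A reboot (or restarting the WebClient service) may be needed for the change to take effect.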
Similar to https://stackoverflow.com/a/24811490/2251785, but I couldn't find those URLs in the official docs. Here are the source code archive URLs from GitHub's documentation, using github/codeql as an example.
| Type of archive | Example | URL |
|---|---|---|
| Branch | main | https://github.com/github/codeql/archive/refs/heads/main.zip |
| Tag | codeql-cli/v2.12.0 | https://github.com/github/codeql/archive/refs/tags/codeql-cli/v2.12.0.zip |
| Commit | aef66c4 | https://github.com/github/codeql/archive/aef66c462abe817e33aad91d97aa782a1e2ad2c7.zip |
You can install any zip file directly with this:
pip install https://github.com/github/codeql/archive/refs/heads/main.zip
I had this issue recently. I rolled back to 3.8.0, and it seems there is better handling in that version for my specific PDFs.
I prefer the option of using Autodesk Design Collaboration: https://help.autodesk.com/view/COLLAB/ENU/?guid=Design_Collab_Schedule_Regular_Publishing
Try adding
ocsp_fail_open=True,
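For context, a minimal sketch of where that flag goes, assuming the Snowflake Python connector (the account/user/password values are placeholders):
import snowflake.connector

# ocsp_fail_open=True tells the connector to proceed ("fail open")
# when the OCSP certificate-revocation check cannot complete.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    ocsp_fail_open=True,
)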
Since you downloaded the latest Eclipse (2025-09), it's possible that Eclipse hasn't fully caught up with Java 25 yet, even if it officially supports it. Eclipse's support for a new Java version often lags behind the JDK's release. One thing you could try is ensuring you have the latest version of Eclipse installed (including any updates for the Java tools).
Also check the Java 25 JDK path in the eclipse.ini file.
Try setting the -vm parameter directly to the Java 25 JDK instead of a specific library within it.
In your eclipse.ini file, it should look something like this:
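-vm
C:\Program Files\Java\jdk-25\bin
-vmargs
(The JDK path above is an assumption for illustration; point -vm at wherever your JDK 25 actually lives.)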
Make sure the -vm value is on a different line than -vmargs, and that there are no extra spaces.
Check Java Version Using Eclipse Console:
Once you have made these changes, you can check what Eclipse is using by opening the Eclipse console and running the command below in the IDE terminal, or by checking the version in the console:
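java -version
(Assuming the terminal session picks up the same JDK; the exact output format varies by vendor.)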
This will confirm if the environment is pointing to Java 25 or still using the older version.
I'm no expert and can't even connect to anything running in Kubernetes, but it runs a huge pile of extra components, proxy servers and other things that are pointless if you just want to run a container. Save yourself the headache and drop it.
I just uploaded nextcrumbs to github for this purpose.
Next.js isn't necessary, but there is a feature that allows the breadcrumbs to be created automatically using usePathname from next/navigation.
You can find the package here.
https://github.com/ameshkin/nextcrumbs
https://www.npmjs.com/package/@ameshkin/supercrumbs
npm i @ameshkin/supercrumbs
To dump a DataRow's contents to a string, I use:
string.Join(", ", row.ItemArray)
I tried the patch above, but that did not quite fix the same issue, so I downloaded the newer contents of MagicWord.php file from the right side of
https://gerrit.wikimedia.org/r/c/mediawiki/core/+/122449/1/includes/MagicWord.php#686
as given by @blacksunshineCoding above, uploaded it to overwrite my MagicWord.php (after first saving a copy) and that fixed the issue.
Unfortunately, the subsequent purge broke my session, and even after various attempts (clearing cookies, adding cache settings to LocalSettings.php) I can't log in with this browser due to "There seems to be a problem with your login session; this action has been canceled as a precaution against session hijacking. Go back to the previous page, reload that page and then try again." However, I can see my MediaWiki site again with all browsers, even though my MediaWiki is very, very old, and I can log in and change it with other browsers.
If you have this issue, I recommend doing the purge with a browser you don't use.
Must the side menu be fixed in position?
If not, you can simply wrap the menu and the main content in a container and use CSS Grid to handle the layout. This ensures the main content area correctly fills the remaining space and can scroll independently.
<div class"wrapper">
<div class="left-menu"></div>
<div class="body-wrapper"></div>
</div>
.wrapper {
  display: grid;
  width: 100%;
  /* Use height: 100vh or 100dvh to make it full screen, or 100%
     if its parent already has a defined height. */
  height: 100vh;
  /* The grid columns: 'auto' sizes the menu to its content,
     '1fr' gives the remaining space to the body. */
  grid-template-columns: auto 1fr;
  /* Prevents the entire grid from adding a scrollbar if inner content is too wide. */
  overflow: hidden;
}

/* Restore scrolling behavior in the main body content. */
.body-wrapper { overflow: auto; }
If the menu must be fixed, please clarify your requirements so I can provide a solution tailored to that constraint.
from PIL import Image, ImageDraw, ImageFont
import random
# Scale and dimensions
scale = 10 # 1 cm = 10 px
width_px = 183 * scale
height_px = 35 * scale
# Create a white background image
visual = Image.new("RGB", (width_px, height_px), (255, 255, 255))
draw = ImageDraw.Draw(visual)
# Font for labels
try:
    font = ImageFont.truetype("arial.ttf", 15)
except OSError:
    font = ImageFont.load_default()
# Photo data (code, width cm, height cm, x cm, y cm)
photos = [
("G1",16,22,2,2),("G2",14,20,20,2),("G3",18,18,38,2),("G4",16,22,56,2),
("G5",14,20,74,2),("G6",16,22,92,2),("G7",18,18,110,2),("G8",16,22,128,2),
("G9",14,20,146,2),("G10",16,22,164,2),
("G11",14,20,2,25),("G12",16,22,20,25),("G13",18,18,38,25),("G14",16,22,56,25),
("G15",14,20,74,25),("G16",16,22,92,25),("G17",18,18,110,25),("G18",16,22,128,25),
("G19",14,20,146,25),("G20",16,22,164,25),
("M1",10,13,5,15),("M2",10,13,20,15),("M3",10,13,35,12),("M4",10,13,50,15),
("M5",10,13,65,12),("M6",10,13,80,15),("M7",10,13,95,12),("M8",10,13,110,15),
("M9",10,13,125,12),("M10",10,13,140,15),("M11",10,13,155,12),("M12",10,13,170,15),
("M13",10,13,5,28),("M14",10,13,20,28),("M15",10,13,35,28),("M16",10,13,50,28),
("M17",10,13,65,28),("M18",10,13,80,28),("M19",10,13,95,28),("M20",10,13,110,28),
("M21",10,13,125,28),("M22",10,13,140,28),("M23",10,13,155,28),
("P1",7,10,8,7),("P2",7,10,23,7),("P3",7,10,38,7),("P4",7,10,53,7),("P5",7,10,68,7),
("P6",7,10,83,7),("P7",7,10,98,7),("P8",7,10,113,7),("P9",7,10,128,7),("P10",7,10,143,7),
("P11",7,10,158,7),("P12",7,10,173,7),("P13",7,10,8,22),("P14",7,10,23,22),("P15",7,10,38,22),
("P16",7,10,53,22),("P17",7,10,68,22),("P18",7,10,83,22),("P19",7,10,98,22),("P20",7,10,113,22),
("P21",7,10,128,22),("P22",7,10,143,22),("P23",7,10,158,22),("P24",7,10,173,22),
("P25",7,10,8,33),("P26",7,10,23,33),("P27",7,10,38,33),("P28",7,10,53,33),("P29",7,10,68,33),
("P30",7,10,83,33),("P31",7,10,98,33),
("Mi1",5,5,12,12),("Mi2",5,5,27,12),("Mi3",5,5,42,12),("Mi4",5,5,57,12),("Mi5",5,5,72,12),
("Mi6",5,5,87,12),("Mi7",5,5,102,12),("Mi8",5,5,117,12),("Mi9",5,5,132,12),("Mi10",5,5,147,12)
]
# Function for a random (light) fill color
def random_color():
    return tuple(random.randint(100, 255) for _ in range(3))
# Draw each photo as a colored, labeled rectangle
for code, w_cm, h_cm, x_cm, y_cm in photos:
    x0 = x_cm * scale
    y0 = y_cm * scale
    x1 = x0 + w_cm * scale
    y1 = y0 + h_cm * scale
    fill_color = random_color()
    draw.rectangle([x0, y0, x1, y1], outline=(0, 0, 0), width=2, fill=fill_color)
    # draw.textsize() was removed in Pillow 10; measure with textbbox() instead
    left, top, right, bottom = draw.textbbox((0, 0), code, font=font)
    text_w, text_h = right - left, bottom - top
    draw.text((x0 + (x1 - x0 - text_w) / 2, y0 + (y1 - y0 - text_h) / 2), code, fill=(0, 0, 0), font=font)
# Save the final image
visual.save("collage_visual.png")
print("Visual collage generated: collage_visual.png")
Use imports_passed_through when importing activities into workflow code:
with workflow.unsafe.imports_passed_through():
    import test_activity
See https://docs.temporal.io/develop/python/python-sdk-sandbox#passthrough-modules for more info.
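For context, a minimal sketch of where this goes in a workflow file (the activity test_activity.my_activity and the workflow class are illustrative assumptions, not from the original):
from datetime import timedelta
from temporalio import workflow

# Pass the activity module through the workflow sandbox
with workflow.unsafe.imports_passed_through():
    import test_activity

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        # my_activity is a hypothetical activity defined in test_activity
        return await workflow.execute_activity(
            test_activity.my_activity,
            name,
            start_to_close_timeout=timedelta(seconds=10),
        )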
Thanks for using Django MongoDB Backend. Would you mind creating an issue here: https://jira.mongodb.org/projects/INTPYTHON/issues/INTPYTHON-809?filter=allopenissues ?
Procedure bindings of parameterized derived types with KIND type parameters need to be implemented for the various distinct sets of values with which the KIND type parameters will be instantiated, and they really need to be collected into a generic type-bound procedure to be useful in practice. You could also implement a TBP for a KIND PDT using an unlimited polymorphic PASS dummy argument, but that just moves the problem from compilation time to run time without adding any more flexibility.
I have been trying to do that, and this is what I have been able to do:
You can find the code in my repo: https://github.com/omrastogi/dsa_questions/blob/master/cs5800-Algorithms/binary_search_trees/bst_visualization.py
by listening to fetch requests
@user31405354, how would you slim down the image with a multi-stage build if you need to remove dependencies that are only used in one workspace member and not required for the image you're trying to build?
I had been trying to render a similar binary search tree. Here is what I have:
The code for this can be found in my github repo: https://github.com/omrastogi/dsa_questions/blob/master/cs5800-Algorithms/binary_search_trees/bst_visualization.py
For now, you will have to build on Python 3.12 or lower. If that doesn't work, please let me know.
The problem is with Blazor Server. It uses SignalR internally; adding another SignalR client conflicts with it, and the button click stops working. For me, Blazor WebAssembly worked with the SignalR client.
@Barmar, Thank you for the explanation about compiler optimization.
As you said, calling the car function sounds more expensive than referencing a literal or a variable.
What seems to currently work is:
address = self.base_address + line_number.virtualAddress
Here I am using virtualAddress instead of addressOffset.
Order both sets by the Morton number of their coordinates.
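A minimal sketch of the idea in Python (the sample point lists and the 16-bit coordinate range are assumptions for illustration):
def morton(x, y, bits=16):
    # Interleave the bits of x and y into a single Z-order (Morton) number
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return code

a = [(3, 5), (10, 2), (7, 7)]
b = [(1, 9), (4, 4), (8, 0)]
a.sort(key=lambda p: morton(*p))
b.sort(key=lambda p: morton(*p))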
Shopify recently increased the limits on metafield and metaobject definitions.
Basically, for metaobject entries:
The 1,000,000-entry limit per definition removes the previous plan-based restrictions of 64,000 (non-Plus) and 128,000 (Plus).
A merchant gets 128 definitions on the Basic, Shopify, and Advanced plans.
It looks like Shopify is encouraging merchants and app developers to fully leverage its metaobjects instead of using external storage.
Thanks to everyone. I completely understand what you are saying and you have basically confirmed what I already thought. There are some instances where I will have to continue to use #defines, and that is fine.
In my case it was a mistake in the .sln file. There was a mapping from x64 to Win32 like this:
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|Win32
Which needs to be
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|x64
This was caused by me manually editing those files and making mistakes ... I probably shouldn't do that.
I created a SQL Server trigger that blocks or allows remote connections based on IP address — without blocking port 1433 or stopping the SQL Server service. This trigger helps control remote access while keeping the benefits of TCP 1433 connections.
Just run this trigger, and you can edit @ip for the machines that are allowed to connect to SQL Server:
https://github.com/ozstriker712/BLock-Allow-IP-adresse-for-Remote-Connection-SQL-SERVER
The MonoGame template package is still based on .NET Standard 2.0, and it’s not fully updated for .NET 8 yet. Because of that, the install command can fail. Trying it with .NET 6 or 7 SDK might work.
I understand that. If everything I read says to use constexpr instead of #define, then I'm assuming there must be a way of replicating #ifdef, etc.? If not, then why not just use #define?
https://stackoverflow.com/questions/21837819/base64-encoding-and-decoding
Macros are nothing like proper variables. You shouldn't even be comparing them.
You’re very close — the error you’re seeing (CredentialsProviderError: Could not load credentials from any providers) isn’t really about your endpoint, but rather about AWS SDK v3 trying to sign the request even though it’s hitting your local serverless-offline WebSocket server.
Let’s walk through what’s happening and how to fix it.
When you do:
const apiGatewayClient = new ApiGatewayManagementApiClient({ endpoint });
await apiGatewayClient.send(new PostToConnectionCommand(payload));
The AWS SDK v3 automatically assumes it’s talking to real AWS API Gateway, so it:
Attempts to sign the request with AWS credentials.
Fails because serverless-offline doesn’t need or support signed requests.
Hence: Could not load credentials from any providers.
So even though your endpoint (http://localhost:3001) is correct, the client is still trying to sign requests as if it were AWS.
When using serverless-offline for WebSocket testing, you need to give the ApiGatewayManagementApiClient dummy credentials and a local region.
Here’s a working local setup:
const {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} = require("@aws-sdk/client-apigatewaymanagementapi");

exports.message = async (event, context) => {
  // Use the same port that serverless-offline shows for websocket
  const endpoint = "http://localhost:3001";
  const connectionId = event.requestContext.connectionId;

  const payload = {
    ConnectionId: connectionId,
    Data: "pong",
  };

  const apiGatewayClient = new ApiGatewayManagementApiClient({
    endpoint,
    region: "us-east-1",
    // Dummy credentials to satisfy SDK signer
    credentials: {
      accessKeyId: "dummy",
      secretAccessKey: "dummy",
    },
  });

  try {
    await apiGatewayClient.send(new PostToConnectionCommand(payload));
  } catch (err) {
    console.error("PostToConnection error:", err);
  }

  return { statusCode: 200, body: "pong sent" };
};
The AWS SDK v3 doesn’t let you completely disable signing, but it’s happy if you provide any credentials.
Since serverless-offline ignores them, “dummy” values are perfectly fine locally.
Start serverless offline:
npx serverless offline
Connect via WebSocket client (e.g., wscat):
npx wscat -c ws://localhost:3001
Type a message — you should see "pong" echoed back.
| Problem | Fix |
|---|---|
| SDK tries to sign local requests | Provide dummy credentials |
| Wrong endpoint | Use http://localhost:3001 (as printed by serverless-offline) |
| Missing region | Add region: "us-east-1" |
You can also make this conditional (so it automatically switches between local and AWS endpoints depending on IS_OFFLINE), which makes deployments smoother.
Starts with any number of 'abc', or contains any number of 'aab' or any number of 'bba' as a substring, or ends with 'abba' or any number of 'ccc'.
You need to re-architect the structure, as the info you have given so far is not enough right now.
One way to start to remedy the issue would be to remove the coordinator altogether, only include the headers where you need them, and implement the logic accordingly. The issue you are facing is very common when trying to spread too much of the implementation across too many files.
Once you provide more info (the exact error string at least) I'm sure we could figure it out quite quickly and help you solve this.
pytidycensus is no longer being maintained and its dependency chain is not compatible with Python 3.13 and recent NumPy builds.
A similar package is tidycensus: https://pypi.org/project/tidycensus/
You can install it using pip:
pip install tidycensus
Great, Ahmed 💪
To finish the job and get the ready version out to you, I need to confirm one last small detail:
On the Formulaire page, do you want the blue button to be:
1️⃣ At the top of the page (above cells C2:C5)
or
2️⃣ At the bottom (below cell C5, so the user finds it right after filling in the information)?
Tell me which you choose, so I can integrate it exactly that way 🎯
Try the command:
git count-objects -vH
This command gives you the size of the data being uploaded; Git might be uploading library files that you thought were ignored by .gitignore.
It's just a guess.
You could check and reply in the comments.
If you use "SQLTools" by Matheus Teixeira, you can disable the feature in the extensions "Settings" dialog:
Just uncheck "Highlight Query".
So, I just found this buried in F5 documentation:
These variables have no prefix - for example, a variable named foo. Local variables are scoped to the connection, and are deleted when the connection is cleared. In most cases, this is the most appropriate variable type to use.
Apparently these variables are scoped to the connection, which in theory sounds like they can be shared by iRules for the same connection. So it looks like I can add two iRules to the same VIP, one with the variables in the irule_init, and have that one higher in priority than the iRule that has all of the event logic. Can anyone confirm this will work? I may need to do some experimentation.
No.
Apple does not provide any system process that refreshes your app’s APNs token automatically after a restore or migration. The token is only refreshed once your app explicitly registers again.
From Apple’s documentation:
“APNs issues a new device token to your app when certain events happen. The app must register with APNs each time it launches.”
— Apple Developer Documentation: Registering Your App with APNs
And:
“When the device token has changed, the user must launch your app once before APNs can once again deliver remote notifications to the device.”
— Configuring Remote Notification Support (Archived Apple Doc)
That means the OS will not wake your app automatically to renew the token post-restore. The user must open the app at least once.
2. Can the app be awakened silently (e.g., background app refresh or silent push) to refresh its token before the user opens it?
Not reliably.
While background modes like silent push (content-available: 1) or background app refresh can wake your app occasionally, they don’t work until the app has been launched at least once after installation or restore.
Also, if the APNs token changed due to restore, your backend will still be sending notifications to the old, invalid token — meaning the silent push will never arrive in the first place.
“The system does not guarantee delivery of background notifications in all cases. Notifications are throttled based on current conditions, such as the device’s battery level or thermal state.”
— Apple Docs: Pushing Background Updates to Your App
So while background updates might sometimes trigger, you can’t rely on them for refreshing tokens after a restore.
3. What’s the best practice to ensure push delivery reliability after a device restore?
Here’s what works in production:
Always call registerForRemoteNotifications() on every cold launch.
Send the token to your backend inside
application(_:didRegisterForRemoteNotificationsWithDeviceToken:).
Compare the new token to the last saved one and update your backend if it changed.
Do not cache or assume the token is permanent.
“Never cache device tokens in your app; instead, get them from the system when you need them.”
— Apple Docs: Registering Your App with APNs
Treat device tokens as ephemeral — they can change anytime (reinstall, restore, OS update, etc.).
Handle APNs error responses such as:
410 Unregistered → token is invalid; stop sending.
400 BadDeviceToken → token doesn’t match app environment.
When receiving these, mark tokens as invalid and remove them from your database.
Keep a “last registration date” per device and flag stale ones.
For critical alerts (e.g., security, transactions), have fallback channels (email, SMS, etc.).
“If a provider attempts to deliver a push notification to an application, but the application no longer exists on the device, APNs returns an error code indicating that the device token is no longer valid.”
— Apple Docs: Communicating with APNs
For those still having issues with this:
Enable Databricks Apps - On-Behalf-Of User Authorization (Click on your user and then 'Preview'). For this to take effect, you need to shut down your app and start it again.
Add scopes to your app by editing the app. To edit scopes, your app must be stopped.
After configuring scopes and restarting the app, you may need to end the login session and log in to Databricks again for the scope changes to take effect. My Databricks instance is configured with Google Workspace SSO, so I had to end my Google session and log in again for it to work.
Flagged, as this should be an objective question, not part of the massively downvoted "Opinion-based questions" alpha experiment on Stack Overflow.
Please include a clearer reproduction and the complete message.
Have you tried notExists() instead of id.eq(JPAExpressions.select(...).limit(1))?
jpaQueryFactory
    .selectFrom(qVehicleLocation)
    .innerJoin(qVehicleLocation.vehicle).fetchJoin()
    .where(
        JPAExpressions.selectOne()
            .from(subLocation)
            .where(
                subLocation.vehicle.eq(qVehicleLocation.vehicle),
                subLocation.createdAt.gt(qVehicleLocation.createdAt)
                    .or(
                        subLocation.createdAt.eq(qVehicleLocation.createdAt)
                            .and(subLocation.id.gt(qVehicleLocation.id))
                    )
            )
            .notExists()
    )
    .fetch();
This is what you need:

function load() {
  // Your function here
}

$(function() {
  load(); // run on load
});

var loaded = setInterval(_run, 600000); // repeat every 10 mins

function _run() {
  load();
  // clearInterval(loaded); // uncomment to stop the 10-minute cycle (not necessary)
}
To have a cleaner approach, I want something like this:
field: 'purchaseOrder.poCode', headerName: 'PO Number', flex: 1, minWidth: 120,
instead of the below:
field: 'purchaseOrder', headerName: 'PO Number', flex: 1, minWidth: 120, valueGetter: (params) => {
return params?.poCode
}
What to do if viewB is transparent/translucent (basically it's a carousel) and you want to avoid overlap?
In practice, you want the LLM to have the entire body of text prior to responding. What you should do is begin streaming the response from the LLM and send that to your text-to-speech processor if you want to improve voice speed.
What we actually did is just add a sleep job before the job you want delayed.
Like this on Windows; simple, but it works:
Start-Sleep -Seconds 3600
Or Unix:
sleep 1h
Cheers,
Dave
Use a file reference with #r:
#r @"C:\Users\<your-user>\.nuget\packages\newtonsoft.json\<package-version>\lib\<.net-version>\Newtonsoft.Json.dll"
So, letting a friend try my code on his Mac without the XSL stylesheet parameter, he got this error:
Run-time error '2004'
Method 'OpenXML' of object 'Workbooks' failed.
Which answers my #1 question.
Thanks for the contributions @timwilliams and @Haluk.
I will start exploring options like Power Query.
Still having this problem in 2025, and it took some work, but I got a solution working. I have networkingMode=mirrored, no JAVA_HOME conflicts, and most other connections work fine, but I had to set up forwarding using usbipd-win to get it working.
Install usbipd-win on Windows: using admin PowerShell, run
winget install --interactive --exact dorssel.usbipd-win
or download the .msi from https://github.com/dorssel/usbipd-win/releases
connect your phone via USB
open PowerShell as admin and list devices with usbipd list, noting the phone's BUSID (e.g. 1-4) and its VID:PID, then:
bind the device usbipd bind --busid <BUSID>
attach to WSL usbipd attach --wsl --busid <BUSID>
accept the "allow USB debugging?" prompt on the phone
restart adb in WSL: adb kill-server; adb start-server; adb devices and you should see the device showing up
After I did this the first time and selected "always allow this connection" on my phone, it has worked pretty much every time. Occasionally I have to do it again after a restart, but it's pretty stable. I wrote a script to automate the whole thing and aliased it so it's easier to run if I have to reset the binding:
# AttachAndroidToWSL.ps1
$deviceVidPid = "<VID:PID>"
Write-Host "Searching for device with VID:PID $deviceVidPid..."
$devices = usbipd list
$targetDevice = $devices | Where-Object {
    $_ -match $deviceVidPid -and
    $_ -notmatch "Attached"
}

if ($targetDevice) {
    $busId = ($targetDevice -split " ")[0]
    Write-Host "Found device: $targetDevice"
    Write-Host "Attaching device with BUSID $busId to WSL..."
    try {
        usbipd bind --busid $busId | Out-Null
        usbipd attach --wsl --busid $busId
        Write-Host "Device attached successfully. Check adb devices in WSL."
    } catch {
        Write-Error "Failed to attach device: $($_.Exception.Message)"
    }
} else {
    Write-Host "Device with VID:PID $deviceVidPid not found or already attached."
    Write-Host "Current USB devices:"
    usbipd list
}
# Restart adb server in WSL (optional)
# Change WSL distribution name if it's not 'Ubuntu'
# wsl -d Ubuntu -e bash -c "adb kill-server; adb start-server"
and a connect-android powershell alias is helpful to quickly bind
function Connect-Android {
C:\path\to\script\AttachAndroidToWSL.ps1
}
Set-Alias -Name connect-android -Value Connect-Android
LLM is the model itself, a direct interface to the language model (e.g., OpenAI, Anthropic). You can call it directly with a prompt and get a response.
LLMChain is a LangChain wrapper that combines the model (llm) with a PromptTemplate and optional output logic. It doesn’t replace the LLM; it uses it internally to build a reusable, parameterized pipeline.
So it’s not one over the other, you typically use them together:
the LLM provides the intelligence, and the LLMChain structures how prompts are created and managed when interacting with it.
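For illustration, a minimal sketch using the legacy langchain API (the model choice and prompt text are assumptions; newer LangChain versions favor composing prompt | llm instead of LLMChain):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI()  # the raw model: can be called directly with a prompt string
prompt = PromptTemplate.from_template("Give one fun fact about {topic}.")

# LLMChain binds the model to the template as a reusable pipeline
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="honeybees"))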
I'd use Power Query for this. With Office 365 this formula could be an alternative. It doesn't require LAMBDA.
=LET(_data,A1:F13,
_header,DROP(TAKE(_data,1),,1),
_body,DROP(_data,1),
VSTACK(HSTACK("",_header),
CHOOSEROWS(_body,
XMATCH(
MAXIFS(
INDEX(_body,,2),INDEX(_body,,1),UNIQUE(INDEX(_body,,1)))&UNIQUE(INDEX(_body,,1)),
INDEX(_body,,2)&INDEX(_body,,1)))))
Try the LockedList plug-in, it also has a nice UI.
Had the same issue; I had to install torchcodec==0.7 so that it was compatible with my PyTorch version, then reset my runtime in Colab and it worked. A diagram of PyTorch/torchcodec compatibility can be found here: https://github.com/meta-pytorch/torchcodec
I ran into the same thing before. The designer just shows that black screen instead of rendering the control, which is kind of annoying. What fixed it for me was rebuilding my Nebroo project and reopening Visual Studio. Once it loads properly, the control shows fine when added to a form. It's just how the designer handles custom controls sometimes.
Excellent, it worked for me. Perhaps the autocompletion casts it as (///) when the correct form would be (//).
Integrating NLP with Solr improves search quality by normalizing language, identifying entities, and expanding related terms. Instead of treating words as isolated tokens, NLP lets Solr recognize that “run,” “running,” and “ran” refer to the same concept, or that “Paris” may refer to a location entity. This results in higher recall, better matching, and more contextually relevant results.
For reference, a detailed study on this approach is available below, analyzing the impact of NLP techniques on Solr’s search relevancy.
https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1577525&dswid=-8621
Try async-trace; it should provide proper stack traces for async/await calls.
I have the same problem where the action is 20 and the affected folder ends up empty, when in reality it should have files in it. In my case, when I looked at the history in the VSS client, the action was showing as "Archived version of ..." (where the ... is the file name).
The VssPhysicalLib>RevisionRecord.cs>Action enumeration does have an entry for ArchiveProject = 23, but not for 20.
A lazy solution to the problem, but it managed to work around the issue for me:
Add a new entry to the VssPhysicalLib>RevisionRecord.cs>Action enumeration: ArchiveUnknown = 20,
Add a new VSS action class to VssLogicalLib>VssAction.cs file:
public class VssNoAction : VssAction
{
    public override VssActionType Type { get { return VssActionType.Label; } }

    public VssNoAction()
    {
    }

    public override string ToString()
    {
        return "No Action";
    }
}
Add a new case to the switch statement in VssLogicalLib>VssRivision.cs>CreateAction() method:
case Hpdi.VssPhysicalLib.Action.ArchiveUnknown:
{
return new VssNoAction();
}
For more details, you can check this issue on trevorr/vss2git github repo: https://github.com/trevorr/vss2git/issues/39
You can do this efficiently and vectorized in NumPy using broadcasting.
import numpy as np
a = np.array([1, 3, 4, 6])
b = np.array([2, 7, 8, 10, 15])
# Broadcasting: result[i, j] == b[i] + a[j], giving shape (len(b), len(a)) == (5, 4)
result = b[:, None] + a
print(result)
You need to either add this header:
'Referrer-Policy': 'strict-origin-when-cross-origin'
Or you can add the following to your embed element:
referrerpolicy='strict-origin-when-cross-origin'
Either should work for you to get them back up and running.
See YouTube documentation here:
https://developers.google.com/youtube/terms/required-minimum-functionality#embedded-player-api-client-identity
@Swifty, wouldn't such use of zip create a list -- rather than an iterator? The input sequence of rows here can be very long -- I don't want to store it all in memory...
You can use @sln's JSON Perl/PCRE regex functions to validate and error-check.
See this link for several practical usage examples to query and validate JSON, and for a full explanation of the several functions available:
https://stackoverflow.com/a/79785886/15577665
json_regexp = paste0(
"(?x) \n",
" \n",
" # JSON recursion functions by @sln \n",
" \n",
" (?: \n",
" (?: # Valid JSON Object or Array \n",
" (?&V_Obj) \n",
" | (?&V_Ary) \n",
" ) \n",
" | # or, \n",
" (?<Invalid> # (1), Invalid JSON - Find the error \n",
" (?&Er_Obj) \n",
" | (?&Er_Ary) \n",
" ) \n",
" ) \n",
" \n",
" \n",
" (?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*})) \n"
)
It seems that these steps did the work: I installed another emulator with Android 14 and Google Play services just in case, plus generated my own CA certificate (not a self-signed one) and installed it on the emulator, while it was also configured in assets/certs and res/raw.
I have the same problem, but I haven't been able to solve it.
I have all the data as integer, but I still get this error: [inputs.mqtt_consumer] Error in plugin: metric parse error: expected tag at 1:3: "84".
mqtt.conf for Telegraf:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [ "test_topic" ]
qos = 0
client_id = "qwe12"
#name_override = "entropy_available"
#topic_tag = "test_topic"
data_format = "influx"
data_type = "integer"
Influxdb:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [
"test_topic"
]
[[inputs.mqtt_consumer.fieldpass]]
field="value"
converter="integer"
Where am I going wrong?
https://pypi.org/project/keyring/ uses platform services to securely store secrets, credentials, etc.
You cannot do it using "forge test", but you can deploy to the testnet and run tests off of it with the methods you showed above.
I just had this come up because some program had stuck a P4CHARSET=utf8 into my p4config.txt in a depot that was not configured for utf8. So that may be one of many reasons for this error.
I think I’ve found a solution, and I’d appreciate it if someone could take a look and comment, so I know if I’m on the right track.
After numerous changes, I realized that one of the bigger problems was that I wasn’t performing a Clean + Rebuild, so Visual Studio kept caching my modifications.
In the end, the solution came down to the following part of the web.config file:
<system.web>
  <authentication mode="Windows" />
  <compilation debug="true" targetFramework="4.5.2" />
  <httpRuntime targetFramework="4.5.2" />
  <httpModules>
    <add name="ApplicationInsightsWebTracking" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" />
  </httpModules>
</system.web>

<!-- Set Windows Auth for api/auth/token endpoint -->
<location path="api/auth/token">
  <system.webServer>
    <security>
      <authentication>
        <anonymousAuthentication enabled="false" />
        <windowsAuthentication enabled="true" />
      </authentication>
    </security>
  </system.webServer>
</location>

<!-- For the rest of the app, allow anonymous auth -->
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" />
      <windowsAuthentication enabled="false" />
    </authentication>
  </security>
</system.webServer>
Now, the first endpoint passes through Windows Authentication (receives the Authorization: Negotiate ... header), while the rest of the application is authorized through CustomAuthorization using JWT tokens.
Additionally, I had to configure the following in the applicationhost.config file:
<section name="anonymousAuthentication" overrideModeDefault="Allow" />
<section name="windowsAuthentication" overrideModeDefault="Allow" />
I would appreciate it if someone could review this and provide advice or recommendations on whether this setup is acceptable.
Thank you!
If you're creating a website with Divi Builder, make sure your Divi is active.
Divi > Theme Options > Updates > Username and API Key needs to be active. Usually when I do this, and go back to Dashboard, you should automatically have a Divi update.
After that, if you're still seeing a template as your home page even with created pages, it's because you don't have a specific static page set up.
Proceed to Settings > Reading > Your Homepage Displays > and make sure it's set to A Static Page (Select Below) and it'll give you the option to set your homepage to a specific page as well as your Blog page.
This can also be obtained by going in Appearance > Customize > Homepage Settings > Homepage Displays is there as well.
I used PostHTML as suggested by Parcel docs. It allowed me to insert partials using the <include> element, instead of inserting dynamically using js.
If you are passing a logging config at the command prompt, try removing it, and the Request line will become expandable in the report:
Remove this: -Dlogback.configurationFile=src/test/resources/logback.xml
As of October 30, 2025 this is the message that I get in Firefox console when running a Nuxt 2 project:
[_Vue DevTools v7 log_]
Vue DevTools v7 detected in your Vue2 project. v7 only supports Vue3 and will not work.
The legacy version of chrome extension that supports both Vue 2 and Vue 3 has been moved to https://chromewebstore.google.com/detail/vuejs-devtools/iaajmlceplecbljialhhkmedjlpdblhp
The legacy version of firefox extension that supports both Vue 2 and Vue 3 has been moved to https://addons.mozilla.org/firefox/addon/vue-js-devtools-v6-legacy
Please install and enable only the legacy version for your Vue2 app.
[_Vue DevTools v7 log_]
You will need to disable the v7 DevTools while running Nuxt 2 projects.
You should delete Docker and reinstall everything from the original Docker website.
Before reinstalling, remove the older version and the packages that cause conflicts.
Run the following command to uninstall all conflicting packages:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
Also add your user to the docker group:
sudo usermod -aG docker ${USER}
Just delete .angular/cache and be happy. I advise not using this cache for development at all, since it just hinders things like npm link. You can disable it with:
{
  ...
  "cli": {
    ...
    "cache": {
      "enabled": false
    }
  }
}
I know it's a dead thread, but I wanted to share the fix I found.
msiexec.exe is not meant to be called directly, so you can't just run "msiexec.exe /arguments"; you need something else to call it. The simple fix I have is to use Start-Process. So, where 00000000-0000-0000-0000-000000000000 is the application ID you can get from WMI objects, the following would uninstall that app and can be run right from an administrative PowerShell:
Start-Process msiexec.exe -ArgumentList '/X{00000000-0000-0000-0000-000000000000} /q' -Wait
Is this the best general-purpose solution?
def batch(iteration, batchSize):
    items = []
    for item in iteration:
        items.append(item)
        if len(items) == batchSize:
            yield items
            items = []
    if items:
        yield items

...

for rows in batch(query.results(), N):
    cluster.submit(rows)
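For what it's worth, Python 3.12 added itertools.batched, which does essentially the same thing (a sketch reusing the names from the question; note that batched yields tuples rather than lists):
from itertools import batched  # Python 3.12+

for rows in batched(query.results(), N):
    cluster.submit(list(rows))  # convert each tuple if a list is required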
As of today, with Aspire 9.5.2, the only implemented clustering integrations are:
Redis
Azure Storage Tables
The settings dialog always appeared when I used STS for Eclipse + GitHub Copilot + the Copilot Metrics plugin, saying "please insert a valid URL", even though my URL and token worked fine with curl.
The Copilot Metrics plugin requires a specific backend server, not just any URL. It verifies this by calling a /metrics/api/health (or similar) endpoint, which returns JSON data.
After this, restart Eclipse. If the error is still there, go to Window -> Show View -> Error Log.
@Fildor Microfrontends are usually an evolutionary step aren't they? I think my question centers more on what other people have experienced and what their situations were when they implemented Microfrontends to gather knowledge from the trenches. I'd like to know the challenges other people have come across to get some ideas of different areas of risk and mitigation.
@JonasH there will be around 50 people if I recall the last numbers. Though I'm mainly curious about your experiences in implementing microfrontends. Please share what you think is relevant to situations you've come across.
@AndrewS this is exactly the kind of experience I'm looking for although I'd love to hear more about your personal experiences. Thank you so much!
If you still get this error even when the Metal toolchain is installed, it is because you are trying to run the metal tool directly, i.e. /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/metal, which will not work anymore.
Try using xcrun metal instead.
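For example, compiling a shader through xcrun (the file names here are placeholders):
xcrun -sdk macosx metal -c MyShader.metal -o MyShader.air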
If you are already getting all the data from the logs for each job, you can still filter on job = error and, based on your table, sendevent to force-start the job. I would also suggest adding a comment to the sendevent that includes the error; that way you cover audit as well. Let me know if that helps or if you have more questions. Sounds like a great job on data collection!
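For reference, a force-start with an audit comment might look like this (the job name and message are placeholders; check your AutoSys version's sendevent syntax):
sendevent -E FORCE_STARTJOB -J my_job_name -C "force-started after error: <error text>"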
@Peter Mortensen
The link is now https://www.can-cia.org/cia-groups/technical-documents
Look for CiA301. The document is available for download only if you are registered (registration is free and can be done within a few minutes).
If you're looking for more nice presentation features for PDF files, you may check out MagicPresenter (https://apps.apple.com/us/app/magicpresenter/id6569250589). It allows you to add presenter notes to PDFs and view them in presenter mode during your presentation. You can even scribble "magic" annotations directly on your slides; these are only visible to you. I found this super handy for remembering what I want to say.
Thank you all. The problem was that "global b" should have been at the beginning of all the functions.
import tkinter, time

canvas = tkinter.Canvas(width=500, height=500, bg="white")
canvas.pack()
canvas.create_text(250, 250, text="00:00:00", font="Arial 70 bold")

def cl_e():
    global b
    b = False
    clock()

def cl_s():
    global b
    b = True
    clock()

def clock():
    global b
    h = 0
    m = 0
    s = 0
    while b:
        canvas.delete("all")
        if s < 60:
            s += 1
            time.sleep(1)
        elif m < 60:
            s = 0
            m += 1
        else:
            s = 0
            m = 0
            h += 1
        # Zero-pad each field instead of branching on digit counts
        canvas.create_text(250, 250, text=f"{h:02d}:{m:02d}:{s:02d}", font="Arial 70 bold")
        canvas.update()

start = tkinter.Button(text="Start", command=cl_s)
end = tkinter.Button(text="End", command=cl_e)
start.pack()
end.pack()
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY is a view, and even ACCOUNTADMIN has no access to its source. You could try challenging the security parameters of that view, but I am not sure it will do the work.
Instead, I would consider executing a predefined task that extracts this data into a table the app can access. It adds a few seconds of delay, but you will get the data.
Make sure the CREATE TASK statement uses RUN AS OWNER and has no schedule.
Then EXECUTE IMMEDIATE <TASK_NAME>;
I hope it will work for you.
Thanks for your answer. This helped me in finding the solution, which was actually fairly obvious ;)
if ((PhotoMetric == PHOTOMETRIC_MINISWHITE) || (PhotoMetric == PHOTOMETRIC_MINISBLACK) || (SamplesPerPixel == 1)) {
    if (BitsPerSample == 1)
        Type = PRESCRENED_TIFF_IMAGE;
    else
        Type = MONOCHROME_TIFF_IMAGE;
} else if (SamplesPerPixel == 4) {
    if (PhotoMetric == PHOTOMETRIC_SEPARATED)
        Type = CMYK_TIFF_IMAGE;
    else
        Type = OTHER_TIFF_IMAGE;
} else
    Type = OTHER_TIFF_IMAGE;
Found the issue... The EditContext being set to new(Search) was triggering a new edit context upon field entry:
EditContext="new(Search)"
Finally found what is wrong:
...
"lint": {
"builder": "@angular-eslint/builder:lint",
"options": {
"lintFilePatterns": [
"src/**/*.ts",
"src/**/*.html"
]
}
}
...
Notice the absence of forward slashes before the 2 paths in the "lintFilePatterns" section.
If someone today needs to create a document in Cosmos DB via the REST API, POST the partition key both in the header and the body, like this:
const headers = {
  'x-ms-documentdb-partitionkey': '["yourPartitionKey"]'
};
const body = JSON.stringify({
  id: 'someid',
  partitionKey: 'yourPartitionKey'
});
What topics can I ask about here? -> Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
<html>
  <head>
    <title>my first web page</title>
  </head>
  <body>
    This is my first web page
  </body>
</html>