What seems to currently work is:
address = self.base_address + line_number.virtualAddress
Here I am using virtualAddress instead of addressOffset.
Order both sets by Morton number of coordinates.
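For illustration, a minimal Python sketch of that ordering (assuming non-negative 2D integer coordinates that fit in 16 bits; the point sets are made up):
def morton(x: int, y: int) -> int:
    # Interleave the bits of x and y into a single Morton (Z-order) number.
    code = 0
    for i in range(16):
        code |= ((x >> i) & 1) << (2 * i)      # even bit positions from x
        code |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions from y
    return code

set_a = [(3, 5), (1, 2), (7, 7)]
set_b = [(2, 2), (6, 1), (4, 4)]
set_a.sort(key=lambda p: morton(*p))
set_b.sort(key=lambda p: morton(*p))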
Shopify recently increased limits on metafield and metaobject definitions.
Basically, for metaobject entries:
The 1,000,000 entry limit per definition removes previous plan-based restrictions of 64,000 (non-Plus) and 128,000 (Plus).
128 definitions per merchant on the Basic, Shopify, and Advanced plans.
It looks like Shopify is encouraging merchants and app developers to fully leverage its metaobjects instead of using external storage.
Thanks to everyone. I completely understand what you are saying and you have basically confirmed what I already thought. There are some instances where I will have to continue to use #defines, and that is fine.
In my case it was a mistake in the .sln file. There was a mapping from x64 to Win32 like this:
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|Win32
Which needs to be
{29B3DBB1-C22B-4366-B257-AFA436F24871}.Release|x64.ActiveCfg = Release|x64
This was caused by me manually editing those files and making mistakes ... I probably shouldn't do that.
I created a SQL Server trigger that blocks or allows remote connections based on IP address — without blocking port 1433 or stopping the SQL Server service. This trigger helps control remote access while keeping the benefits of TCP 1433 connections.
Just run this trigger, and you can edit @ip to control which machines can connect to SQL Server.
https://github.com/ozstriker712/BLock-Allow-IP-adresse-for-Remote-Connection-SQL-SERVER
The MonoGame template package is still based on .NET Standard 2.0, and it’s not fully updated for .NET 8 yet. Because of that, the install command can fail. Trying it with .NET 6 or 7 SDK might work.
I understand that. If everything I read says to use constexpr instead of #define, then I'm assuming there must be a way of replicating #ifdef, etc ? If not, then why not just use #define?
https://stackoverflow.com/questions/21837819/base64-encoding-and-decoding
Macros are nothing like proper variables. You shouldn't even be comparing them.
You’re very close — the error you’re seeing (CredentialsProviderError: Could not load credentials from any providers) isn’t really about your endpoint, but rather about AWS SDK v3 trying to sign the request even though it’s hitting your local serverless-offline WebSocket server.
Let’s walk through what’s happening and how to fix it.
When you do:
const apiGatewayClient = new ApiGatewayManagementApiClient({ endpoint });
await apiGatewayClient.send(new PostToConnectionCommand(payload));
The AWS SDK v3 automatically assumes it’s talking to real AWS API Gateway, so it:
Attempts to sign the request with AWS credentials.
Fails because serverless-offline doesn’t need or support signed requests.
Hence: Could not load credentials from any providers.
So even though your endpoint (http://localhost:3001) is correct, the client is still trying to sign requests as if it were AWS.
When using serverless-offline for WebSocket testing, you need to give the ApiGatewayManagementApiClient dummy credentials and a local region.
Here’s a working local setup:
const {
ApiGatewayManagementApiClient,
PostToConnectionCommand,
} = require("@aws-sdk/client-apigatewaymanagementapi");
exports.message = async (event, context) => {
// Use the same port that serverless-offline shows for websocket
const endpoint = "http://localhost:3001";
const connectionId = event.requestContext.connectionId;
const payload = {
ConnectionId: connectionId,
Data: "pong",
};
const apiGatewayClient = new ApiGatewayManagementApiClient({
endpoint,
region: "us-east-1",
// Dummy credentials to satisfy SDK signer
credentials: {
accessKeyId: "dummy",
secretAccessKey: "dummy",
},
});
try {
await apiGatewayClient.send(new PostToConnectionCommand(payload));
} catch (err) {
console.error("PostToConnection error:", err);
}
return { statusCode: 200, body: "pong sent" };
};
The AWS SDK v3 doesn’t let you completely disable signing, but it’s happy if you provide any credentials.
Since serverless-offline ignores them, “dummy” values are perfectly fine locally.
Start serverless offline:
npx serverless offline
Connect via WebSocket client (e.g., wscat):
npx wscat -c ws://localhost:3001
Type a message — you should see "pong" echoed back.
Problem | Fix
SDK tries to sign local requests | Provide dummy credentials
Wrong endpoint | Use http://localhost:3001 (as printed by serverless-offline)
Missing region | Add region: "us-east-1"
You can also make this conditional (so it automatically switches between local and AWS endpoints depending on IS_OFFLINE), which makes deployments smoother.
Starts with any number of ‘abc’ or Contains any number of ‘aab’ or any number of ‘bba’ as substring or Ends with ‘abba’ or any number of ‘ccc’
You need to re-architect the structure, as the info you have given so far is not enough right now.
One way to start to remedy the issue would be to remove the coordinator all together, and only include the headers where you need them, and implement the logic accordingly. The issue you are facing is very common when trying to spread too much of the implementation out across too many files.
Once you provide more info (the exact error string at least) I'm sure we could figure it out quite quickly and help you solve this.
pytidycensus is no longer being maintained and its dependency chain is not compatible with Python 3.13 and recent NumPy builds.
A similar package is tidycensus: https://pypi.org/project/tidycensus/
you can install it using pip:
pip install tidycensus
Great, Ahmed 💪
To finish the work and send you the ready version, I just need to confirm one last small detail:
On the Formulaire page, do you want the blue button to be:
1️⃣ At the top of the page (above cells C2:C5)
or
2️⃣ At the bottom (below cell C5, so that after the user fills in the information they find it right underneath)?
Tell me which you choose so I can integrate it exactly that way 🎯
Try the command:
git count-objects -vH
This command gives you the size of the data being uploaded; Git might upload library files that you thought were ignored by .gitignore.
It's just a guess.
you could check and reply on the comments.
If you use "SQLTools" by Matheus Teixeira, you can disable the feature in the extensions "Settings" dialog:
Just uncheck "Highlight Query".
So, I just found this buried in F5 documentation:
These variables have no prefix - for example, a variable named foo. Local variables are scoped to the connection, and are deleted when the connection is cleared. In most cases, this is the most appropriate variable type to use.
Apparently these variables are scoped to the connection, which in theory sounds like they can be shared by iRules for the same connection. So, it looks like I can add 2 iRules to the same VIP, one with the variables in the irule_init, and have that one higher in priority than the iRule that has all of the event logic. Can anyone confirm this will work? I may need to do some experimentation.
No.
Apple does not provide any system process that refreshes your app’s APNs token automatically after a restore or migration. The token is only refreshed once your app explicitly registers again.
From Apple’s documentation:
“APNs issues a new device token to your app when certain events happen. The app must register with APNs each time it launches.”
— Apple Developer Documentation: Registering Your App with APNs
And:
“When the device token has changed, the user must launch your app once before APNs can once again deliver remote notifications to the device.”
— Configuring Remote Notification Support (Archived Apple Doc)
That means the OS will not wake your app automatically to renew the token post-restore. The user must open the app at least once.
2. Can the app be awakened silently (e.g., background app refresh or silent push) to refresh its token before the user opens it?
Not reliably.
While background modes like silent push (content-available: 1) or background app refresh can wake your app occasionally, they don’t work until the app has been launched at least once after installation or restore.
Also, if the APNs token changed due to restore, your backend will still be sending notifications to the old, invalid token — meaning the silent push will never arrive in the first place.
“The system does not guarantee delivery of background notifications in all cases. Notifications are throttled based on current conditions, such as the device’s battery level or thermal state.”
— Apple Docs: Pushing Background Updates to Your App
So while background updates might sometimes trigger, you can’t rely on them for refreshing tokens after a restore.
3. What’s the best practice to ensure push delivery reliability after a device restore?
Here’s what works in production:
Always call registerForRemoteNotifications() on every cold launch.
Send the token to your backend inside
application(_:didRegisterForRemoteNotificationsWithDeviceToken:).
Compare the new token to the last saved one and update your backend if it changed.
Do not cache or assume the token is permanent.
“Never cache device tokens in your app; instead, get them from the system when you need them.”
— Apple Docs: Registering Your App with APNs
Treat device tokens as ephemeral — they can change anytime (reinstall, restore, OS update, etc.).
Handle APNs error responses such as:
410 Unregistered → token is invalid; stop sending.
400 BadDeviceToken → token doesn’t match app environment.
When receiving these, mark tokens as invalid and remove them from your database.
Keep a “last registration date” per device and flag stale ones.
For critical alerts (e.g., security, transactions), have fallback channels (email, SMS, etc.).
“If a provider attempts to deliver a push notification to an application, but the application no longer exists on the device, APNs returns an error code indicating that the device token is no longer valid.”
— Apple Docs: Communicating with APNs
For those still having issues with this:
Enable Databricks Apps - On-Behalf-Of User Authorization (Click on your user and then 'Preview'). For this to take effect, you need to shut down your app and start it again.
Add scopes to your app by editing the app. To edit scopes, your app must be stopped.
After configuring scopes and restarting the app, you may need to end the login session and log in to Databricks again for scope changes to take effect. My Databricks instance is configured with Google Workspace SSO, so I had to end my Google session and log in again for it to work.
Flagged, as this should be an objective question, not part of the massively downvoted "Opinion-based questions" alpha experiment on Stack Overflow.
Please include a clearer reproduction and the complete message.
Have you tried notExists() instead of id.eq(JPAExpressions.select(...).limit(1)) ?
jpaQueryFactory
.selectFrom(qVehicleLocation)
.innerJoin(qVehicleLocation.vehicle).fetchJoin()
.where(
JPAExpressions.selectOne()
.from(subLocation)
.where(
subLocation.vehicle.eq(qVehicleLocation.vehicle),
subLocation.createdAt.gt(qVehicleLocation.createdAt)
.or(
subLocation.createdAt.eq(qVehicleLocation.createdAt)
.and(subLocation.id.gt(qVehicleLocation.id))
)
)
.notExists()
)
.fetch();
This is what you need
function load() {
//Your function here
}
$(function() {
load(); //run on load
});
var loaded = setInterval(_run, 600000); //repeat every 10 mins
function _run() {
load();
clearInterval(loaded); //clear interval to recycle in the next 10 mins (not necessary)
}
To have a cleaner approach, I want something like this:
field: 'purchaseOrder.poCode', headerName: 'PO Number', flex: 1, minWidth: 120,
instead of the below:
field: 'purchaseOrder', headerName: 'PO Number', flex: 1, minWidth: 120, valueGetter: (params) => {
return params?.poCode
}
What to do if viewB is transparent/translucent (basically it's a carousel) and you want to avoid overlap?
In practice, you want the LLM to have the entire body of text prior to responding. What you should do is begin streaming the response from the LLM and send that to your text-to-speech processor if you want to improve voice speed.
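A minimal sketch of that pipeline shape (llm_stream and speak_chunk are hypothetical placeholders for your LLM client and TTS engine):
def stream_llm_to_tts(llm_stream, speak_chunk):
    # Forward LLM tokens to a TTS engine sentence-by-sentence.
    buffer = ""
    for token in llm_stream:  # tokens arrive as the LLM generates them
        buffer += token
        if buffer.rstrip().endswith((".", "!", "?")):
            speak_chunk(buffer)  # speak each completed sentence immediately
            buffer = ""
    if buffer:
        speak_chunk(buffer)  # flush any trailing partial sentence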
What we actually did is just add a sleep job before the job you want delayed.
Like this on Windows; simple, but it works:
Start-Sleep -Seconds 3600
Or Unix:
sleep 1h
Cheers,
Dave
Use file reference with #r:
#r @"C:\Users\<your-user>\.nuget\packages\newtonsoft.json\<package-version>\lib\<.net-version>\Newtonsoft.Json.dll"
So, letting a friend try my code on his Mac, without the xsl stylesheet parameter, he got this error
Run-time error '2004'
Method 'OpenXML' of object 'Workbooks' failed.
Which answers my #1 question.
Thanks for the contributions @timwilliams and @Haluk.
I will start exploring options like Power Query.
Still having this problem in 2025; it took some work, but I got a solution working. I have networkingMode=mirrored, no JAVA_HOME conflicts, and most other connections work fine, but I had to set up forwarding using usbipd-win to get it working.
install usbipd-win on Windows, using admin PowerShell, run
winget install --interactive --exact dorssel.usbipd-win
or download the .msi from https://github.com/dorssel/usbipd-win/releases
connect your phone via USB
open PowerShell as admin and list devices with usbipd list, noting the phone's BUSID (e.g. 1-4) and its VID:PID, then:
bind the device usbipd bind --busid <BUSID>
attach to WSL usbipd attach --wsl --busid <BUSID>
accept the "allow USB debugging?" prompt on the phone
restart adb in WSL: adb kill-server; adb start-server; adb devices and you should see the device showing up
after I did this the first time and selected "always allow this connection" from my phone it's worked pretty much every time. occasionally I have to do it again after a restart but it's pretty stable. I did write a script to automate the whole thing and alias it so it's easier to run if I have to reset the binding
# AttachAndroidToWSL.ps1
$deviceVidPid = "<VID:PID>"
Write-Host "Searching for device with VID:PID $deviceVidPid..."
$devices = usbipd list
$targetDevice = $devices | Where-Object {
$_ -match $deviceVidPid -and
$_ -notmatch "Attached"
}
if ($targetDevice) {
$busId = ($targetDevice -split " ")[0]
Write-Host "Found device: $targetDevice"
Write-Host "Attaching device with BUSID $busId to WSL..."
try {
usbipd bind --busid $busId | Out-Null
usbipd attach --wsl --busid $busId
Write-Host "Device attached successfully. Check adb devices in WSL."
} catch {
Write-Error "Failed to attach device: $($_.Exception.Message)"
}
} else {
Write-Host "Device with VID:PID $deviceVidPid not found or already attached."
Write-Host "Current USB devices:"
usbipd list
}
# Restart adb server in WSL (optional)
# Change WSL distribution name if it's not 'Ubuntu'
# wsl -d Ubuntu -e bash -c "adb kill-server; adb start-server"
and a connect-android powershell alias is helpful to quickly bind
function Connect-Android {
C:\path\to\script\AttachAndroidToWSL.ps1
}
Set-Alias -Name connect-android -Value Connect-Android
LLM is the model itself, a direct interface to the language model (e.g., OpenAI, Anthropic). You can call it directly with a prompt and get a response.
LLMChain is a LangChain wrapper that combines the model (llm) with a PromptTemplate and optional output logic. It doesn’t replace the LLM; it uses it internally to build a reusable, parameterized pipeline.
So it’s not one over the other, you typically use them together:
the LLM provides the intelligence, and the LLMChain structures how prompts are created and managed when interacting with it.
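A minimal sketch of the two used together (this assumes the classic LangChain 0.0.x-style imports; the names have moved around in newer releases):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)               # the model itself: call it directly
print(llm("Name one city in France."))

prompt = PromptTemplate.from_template("Name one city in {country}.")
chain = LLMChain(llm=llm, prompt=prompt)  # wraps the llm with a reusable, parameterized prompt
print(chain.run(country="France"))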
I'd use Power Query for this. With Office 365 this formula could be an alternative. It doesn't require LAMBDA.
=LET(_data,A1:F13,
_header,DROP(TAKE(_data,1),,1),
_body,DROP(_data,1),
VSTACK(HSTACK("",_header),
CHOOSEROWS(_body,
XMATCH(
MAXIFS(
INDEX(_body,,2),INDEX(_body,,1),UNIQUE(INDEX(_body,,1)))&UNIQUE(INDEX(_body,,1)),
INDEX(_body,,2)&INDEX(_body,,1)))))
Try the LockedList plug-in, it also has a nice UI.
Had the same issue; I had to install torchcodec=0.7 so that it was compatible with my PyTorch version, then reset my runtime in Colab and it worked. A diagram of PyTorch/torchcodec compatibility can be found here: https://github.com/meta-pytorch/torchcodec
I ran into the same thing before. The designer just shows that black screen instead of rendering the control, which is kind of annoying. What fixed it for me was rebuilding my Nebroo project and reopening Visual Studio. Once it loads properly, the control shows fine when added to a form. It's just how the designer handles custom controls sometimes.
Excellent, it worked for me. Perhaps the autocomplete casts it as (///) when the correct form is (//).
Integrating NLP with Solr improves search quality by normalizing language, identifying entities, and expanding related terms. Instead of treating words as isolated tokens, NLP lets Solr recognize that “run,” “running,” and “ran” refer to the same concept, or that “Paris” may refer to a location entity. This results in higher recall, better matching, and more contextually relevant results.
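As a toy illustration of that normalization step (a sketch using NLTK's WordNet lemmatizer; it assumes the wordnet corpus can be downloaded, and Solr itself would do this with analysis-chain filters rather than Python):
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # one-time corpus download

lemmatize = WordNetLemmatizer().lemmatize
# All three verb forms normalize to the same token for indexing/querying.
print({w: lemmatize(w, pos="v") for w in ["run", "running", "ran"]})
# -> {'run': 'run', 'running': 'run', 'ran': 'run'}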
For reference, a detailed study on this approach is available below, analyzing the impact of NLP techniques on Solr’s search relevancy.
https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1577525&dswid=-8621
Try async-trace; it should provide proper stack traces for async/await calls.
I have the same problem where the action is 20 and the effected folder being empty, when in reality it should have files in it. In my case, when I looked at the history in the VSS client, the action was showing as "Archived version of ..." (where the ... is the file name).
The VssPhysicalLib>RevisionRecord.cs>Action enumeration does have an entry for ArchiveProject = 23, but not for 20.
A lazy solution to the problem but managed to work around the issue for me:
Add a new entry to the VssPhysicalLib>RevisionRecord.cs>Action enumeration: ArchiveUnknown = 20,
Add a new VSS action class to VssLogicalLib>VssAction.cs file:
public class VssNoAction : VssAction
{
public override VssActionType Type { get { return VssActionType.Label;} }
public VssNoAction()
{
//
}
public override string ToString()
{
return "No Action";
}
}
Add a new case to the switch statement in the VssLogicalLib>VssRevision.cs>CreateAction() method:
case Hpdi.VssPhysicalLib.Action.ArchiveUnknown:
{
return new VssNoAction();
}
For more details, you can check this issue on trevorr/vss2git github repo: https://github.com/trevorr/vss2git/issues/39
You can do this efficiently and vectorized in NumPy using broadcasting.
import numpy as np

a = np.array([1, 3, 4, 6])
b = np.array([2, 7, 8, 10, 15])

# b[:, None] has shape (5, 1); broadcast against a's shape (4,)
# to get a (5, 4) array of all pairwise sums b[i] + a[j].
result = b[:, None] + a
print(result)
You need to either add this header:
'Referrer-Policy': 'strict-origin-when-cross-origin'
Or you can add the following to your embed element:
referrerpolicy='strict-origin-when-cross-origin'
Either should work for you to get them back up and running.
See YouTube documentation here:
https://developers.google.com/youtube/terms/required-minimum-functionality#embedded-player-api-client-identity
@Swifty, wouldn't such use of zip create a list -- rather than an iterator? The input sequence of rows here can be very long -- I don't want to store it all in memory...
You can use @sln's JSON Perl/PCRE regex functions to validate and error-check.
See this link for several practical usage examples to query and validate JSON,
and for a full explanation of the several functions available:
https://stackoverflow.com/a/79785886/15577665
json_regexp = paste0(
"(?x) \n",
" \n",
" # JSON recursion functions by @sln \n",
" \n",
" (?: \n",
" (?: # Valid JSON Object or Array \n",
" (?&V_Obj) \n",
" | (?&V_Ary) \n",
" ) \n",
" | # or, \n",
" (?<Invalid> # (1), Invalid JSON - Find the error \n",
" (?&Er_Obj) \n",
" | (?&Er_Ary) \n",
" ) \n",
" ) \n",
" \n",
" \n",
" (?(DEFINE)(?<Sep_Ary>\s*(?:,(?!\s*[}\]])|(?=\])))(?<Sep_Obj>\s*(?:,(?!\s*[}\]])|(?=})))(?<Er_Obj>(?>{(?:\s*(?&Str)(?:\s*:(?:\s*(?:(?&Er_Value)|(?<Er_Ary>\[(?:\s*(?:(?&Er_Value)|(?&Er_Ary)|(?&Er_Obj))(?:(?&Sep_Ary)|(*ACCEPT)))*(?:\s*\]|(*ACCEPT)))|(?&Er_Obj))(?:(?&Sep_Obj)|(*ACCEPT))|(*ACCEPT))|(*ACCEPT)))*(?:\s*}|(*ACCEPT))))(?<Er_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)))(?<Str>(?>"[^\\"]*(?:\\[\s\S][^\\"]*)*"))(?<Numb>(?>[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?|(?:[eE][+-]?\d+)))(?<V_KeyVal>(?>\s*(?&Str)\s*:\s*(?&V_Value)\s*))(?<V_Value>(?>(?&Numb)|(?>true|false|null)|(?&Str)|(?&V_Obj)|(?&V_Ary)))(?<V_Ary>\[(?>\s*(?&V_Value)(?&Sep_Ary))*\s*\])(?<V_Obj>{(?>(?&V_KeyVal)(?&Sep_Obj))*\s*})) \n"
)
It seems these steps did the work: I installed another emulator with Android 14 and Google Play services just in case, generated my own CA certificate (not a self-signed one), and installed it on the emulator, while it was also configured in assets/certs and res/raw.
I have the same problem, but I haven't been able to solve it(
I have all the data as integer, but I still get this error: [inputs.mqtt_consumer] Error in plugin: metric parse error: expected tag at 1:3: "84".
mqtt.conf for Telegraf:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [ "test_topic" ]
qos = 0
client_id = "qwe12"
#name_override = "entropy_available"
#topic_tag = "test_topic"
data_format = "influx"
data_type = "integer"
Influxdb:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [
"test_topic"
]
[[inputs.mqtt_consumer.fieldpass]]
field="value"
converter="integer"
Where am I going wrong?
https://pypi.org/project/keyring/ uses platform services to securely store secrets, credentials, etc.
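A quick sketch of the basic API (the service and user names here are made up):
import keyring

# Store a secret in the platform keychain (macOS Keychain, Windows
# Credential Locker, Secret Service on Linux), then read it back.
keyring.set_password("my-app", "alice", "s3cret")
print(keyring.get_password("my-app", "alice"))
keyring.delete_password("my-app", "alice")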
You cannot do it using "forge test" but you can deploy to the testnet and run tests off of it with the methods you show above.
I just had this come up because some program had stuck a P4CHARSET=utf8 into my p4config.txt in a depot that was not configured for utf8. So that may be one of many reasons for this error.
I think I’ve found a solution, and I’d appreciate it if someone could take a look and comment, so I know if I’m on the right track.
After numerous changes, I realized that one of the bigger problems was that I wasn’t performing a Clean + Rebuild, so Visual Studio kept caching my modifications.
In the end, the solution came down to the following part of the web.config file:
<system.web>
<authentication mode="Windows" />
<compilation debug="true" targetFramework="4.5.2" />
<httpRuntime targetFramework="4.5.2" />
<httpModules>
<add name="ApplicationInsightsWebTracking" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" />
</httpModules>
</system.web>
<!-- Set Windows Auth for api/auth/token endpoint -->
<location path="api/auth/token">
<system.webServer>
<security>
<authentication>
<anonymousAuthentication enabled="false" />
<windowsAuthentication enabled="true" />
</authentication>
</security>
</system.webServer>
</location>
<!-- For the rest of the app, allow anonymous auth -->
<system.webServer>
<security>
<authentication>
<anonymousAuthentication enabled="true" />
<windowsAuthentication enabled="false" />
</authentication>
</security>
</system.webServer>
Now, the first endpoint passes through Windows Authentication (receives the Authorization: Negotiate ... header), while the rest of the application is authorized through CustomAuthorization using JWT tokens.
Additionally, I had to configure the following in the applicationhost.config file:
<section name="anonymousAuthentication" overrideModeDefault="Allow" />
<section name="windowsAuthentication" overrideModeDefault="Allow" />
I would appreciate it if someone could review this and provide advice or recommendations on whether this setup is acceptable.
Thank you!
If you're creating a website with Divi Builder, make sure your Divi is active.
Divi > Theme Options > Updates > Username and API Key need to be active. Usually when I do this and go back to the Dashboard, a Divi update appears automatically.
After that, if you're still seeing a template as your home page even with created pages, it's because you don't have specific static pages setup.
Proceed to Settings > Reading > Your Homepage Displays > and make sure it's set to A Static Page (Select Below) and it'll give you the option to set your homepage to a specific page as well as your Blog page.
This can also be obtained by going in Appearance > Customize > Homepage Settings > Homepage Displays is there as well.
I used PostHTML as suggested by Parcel docs. It allowed me to insert partials using the <include> element, instead of inserting dynamically using js.
If you're passing a logging config at the command prompt, try removing it and the Request line will become expandable in the report. Remove this:
-Dlogback.configurationFile=src/test/resources/logback.xml
As of October 30, 2025 this is the message that I get in Firefox console when running a Nuxt 2 project:
[_Vue DevTools v7 log_]
Vue DevTools v7 detected in your Vue2 project. v7 only supports Vue3 and will not work.
The legacy version of chrome extension that supports both Vue 2 and Vue 3 has been moved to https://chromewebstore.google.com/detail/vuejs-devtools/iaajmlceplecbljialhhkmedjlpdblhp
The legacy version of firefox extension that supports both Vue 2 and Vue 3 has been moved to https://addons.mozilla.org/firefox/addon/vue-js-devtools-v6-legacy
Please install and enable only the legacy version for your Vue2 app.
[_Vue DevTools v7 log_]
You will need to disable the v7 DevTools while running Nuxt 2 projects.
You should remove Docker and reinstall everything from the original Docker website. Also run the command that removes the older version and the files that cause conflicts.
Run the following command to uninstall all conflicting packages:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
Also add your user to the docker group:
sudo usermod -aG docker ${USER}
Just delete .angular/cache and be happy. I advise just not using this cache for development, since it hinders things like npm link. You can disable it with:
{
...
"cli": {
...
"cache": {
"enabled": false
}
}
}
I know it's a dead thread but I wanted to share the fix I found.
msiexec.exe is not meant to be called directly, so you can't just run "msiexec.exe /arguments"; you need something else to call it. The simple fix I have is to use Start-Process. In the following, where 00000000-0000-0000-0000-000000000000 is the application ID you can get from WMI objects, the command uninstalls that app and can be run right from an administrative PowerShell:
Start-Process msiexec.exe -ArgumentList '/X{00000000-0000-0000-0000-000000000000} /q' -Wait
Is this the best general-purpose solution?
def batch(iteration, batchSize):
items = []
for item in iteration:
items.append(item)
if len(items) == batchSize:
yield items
items = []
if items:
yield items
...
for rows in batch(query.results(), N):
cluster.submit(rows)
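It works; for comparison, here is an equivalent sketch built on itertools.islice that avoids the manual list bookkeeping:
from itertools import islice

def batch(iterable, batch_size):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, batch_size))  # take up to batch_size items
        if not chunk:
            return  # iterator exhausted; the final short batch was already yielded
        yield chunk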
As of today with Aspire 9.5.2 the only implemented clustering integrations are
Redis
Azure Storage Tables
The settings dialog always appeared when I used STS for Eclipse + GitHub Copilot + the Copilot Metrics plugin, saying "please insert a valid URL", even though my URL and token worked fine with curl.
The Copilot Metrics plugin requires a specific backend server, not just any URL. It verifies this by calling a /metrics/api/health (or similar) endpoint, which returns JSON data.
After this, restart Eclipse. If the error is still there, go to Window > Show View > Error Log.
@Fildor Microfrontends are usually an evolutionary step aren't they? I think my question centers more on what other people have experienced and what their situations were when they implemented Microfrontends to gather knowledge from the trenches. I'd like to know the challenges other people have come across to get some ideas of different areas of risk and mitigation.
@JonasH there will be around 50 people if I recall the last numbers. Though I'm mainly curious about your experiences in implementing microfrontends. Please share what you think is relevant to situations you've come across.
@AndrewS this is exactly the kind of experience I'm looking for although I'd love to hear more about your personal experiences. Thank you so much!
If you still get this error even when the Metal toolchain is installed, it's because you're trying to run the metal tool directly, i.e. /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/metal, which does not work anymore.
Use xcrun metal instead.
If you are already getting all the data from logs for each job, you can still sort on job = error and, based on your table, sendevent to force-start the job. I would also suggest adding a comment to the sendevent that includes the error; that way you cover audit as well. Let me know if that helps or if you have more questions. Sounds like a great job on data collection!
@Peter Mortensen
the link is now https://www.can-cia.org/cia-groups/technical-documents
Look for CiA301. The document is available for download only if you are registered (registration is free and can be done within a few minutes).
If you're looking for more nice presentation features for PDF files, you may check out MagicPresenter (https://apps.apple.com/us/app/magicpresenter/id6569250589). It allows you to add presenter notes to PDFs and view them in presenter mode during your presentation. You can even scribble "magic" annotations directly on your slides; these are only visible to you. I found this super handy for remembering what I want to say.
Thank you all. The problem was that the statement global b should have been at the beginning of all the functions.
import tkinter,time

canvas=tkinter.Canvas(width=500,height=500,bg="white")
canvas.pack()
canvas.create_text(250,250,text="00:00:00",font="Arial 70 bold")

def cl_e():
    global b
    b=False
    clock()

def cl_s():
    global b
    b=True
    clock()

def clock():
    global b
    h=0
    m=0
    s=0
    while b:
        canvas.delete("all")
        if s<60:
            s+=1
            time.sleep(1)
        elif m<60:
            s=0
            m+=1
        else:
            s=0
            m=0
            h+=1
        # zero-pad each field instead of enumerating every h/m/s combination
        canvas.create_text(250,250,text=f"{h:02d}:{m:02d}:{s:02d}",font="Arial 70 bold")
        canvas.update()

start=tkinter.Button(text="Start",command=cl_s)
end=tkinter.Button(text="End",command=cl_e)
start.pack()
end.pack()
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY is a view, and even ACCOUNTADMIN has no access to its source. I guess you could challenge the security parameters of that view, but I am not sure it will do the work.
Instead, I would consider executing a predefined task that extracts this data into a table the app can access. It adds a few seconds of delay, but you will get the data.
Make sure the CREATE TASK uses RUN AS OWNER and has no schedule.
Then: EXECUTE TASK <TASK_NAME>;
I hope it will work for you.
Thanks for your answer. This helped me in finding the solution, which was actually fairly obvious ;)
if ((PhotoMetric == PHOTOMETRIC_MINISWHITE) || (PhotoMetric == PHOTOMETRIC_MINISBLACK) || (SamplesPerPixel == 1)) {
if (BitsPerSample == 1)
Type = PRESCRENED_TIFF_IMAGE;
else
Type = MONOCHROME_TIFF_IMAGE;
} else if (SamplesPerPixel == 4) {
if (PhotoMetric == PHOTOMETRIC_SEPARATED)
Type = CMYK_TIFF_IMAGE;
else
Type = OTHER_TIFF_IMAGE;
} else
Type = OTHER_TIFF_IMAGE;
Found the issue... The EditContext being set to new(Search) was triggering a new edit context upon field entry:
EditContext="new(Search)"
Finally found what is wrong:
...
"lint": {
"builder": "@angular-eslint/builder:lint",
"options": {
"lintFilePatterns": [
"src/**/*.ts",
"src/**/*.html"
]
}
}
...
Notice the absence of forward slashes before the 2 paths in the "lintFilePatterns" section.
If someone today needs to Create a Document in Cosmos DB, POST a key both in the header and body like this:
const headers = {
'x-ms-documentdb-partitionkey': '["yourPartitionKey"]'
};
const body = JSON.stringify({
id: 'someid',
partitionKey: 'yourPartitionKey'
})
What topics can I ask about here? -> Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
<html>
<head>
<title>my first web page</title>
</head>
<body>
This is my first web page
</body>
</html>
In my case it was because one of my properties on my DTO was of type ReadOnlyCollection<T> which caught me out. Changing to the type of List<T> fixed it for me as this CAN be assigned to. ReadOnlyCollection cannot!
It is possible with :nth-last-child():
:nth-last-child(-n+2)
This selector matches the last two children.
I'm closing it.
I can import a component with React, but not one with MUI dependencies. The issue is from MUI.
Thanks @kgoodrick, your answer works perfectly. Just one addition: I had to change axis(False) to axis(None) for it to work. If you post it as an answer (not a comment), then I can click the "accept" button :-)
The correct approach for handling large, non-code assets (like databases, media files, or large configuration sets) in a Tauri application is to use Resources combined with the Asset Protocol.
This method is critical because it keeps your application binary (.exe, .app, etc.) small and manageable, bundling the large files separately but automatically alongside the binary in the final installer (.msi, .dmg, etc.).
For a full, step-by-step implementation guide covering the setup and the usage of the files, you can refer to this guide, where I summarized all the necessary configurations:
GitHub Issue Comment Detailing Tauri Resource Setup
This ensures your users get a single, professional installer package with all the necessary large files correctly located and accessible at runtime.
We encountered this problem too. We found that a duplicate symbol caused a wrong link (from _text to data), and as a result it was out of range. Finally, we used llvm-objcopy on the .o files that use and define this symbol to rename the conflicting symbol, and the build succeeded.
I recommend first using Zy Zhao's answer to build your project successfully and get the link map, then using linkmap.txt and the error description to find out which symbol is duplicated. Good luck.
Ended up using ScriptUtils
ScriptUtils.executeSqlScript does the job.
I think this code is running in a local browser context. During development, camera access is silently denied; it needs to be served from a proper domain host (a secure origin).
In my case, I was passing the HttpServletRequest to an @Async method and then calling getRequestURI() inside that method (on a different thread). This gave me unreliable results, and sometimes I got null. I changed the approach: I used a static utility method to extract the request URI (extraction on the same thread) and then passed the extracted value to the @Async method. This gave me reliable, expected results and solved my problem.
Thanks, that's what I was missing. I thought @Value was exclusive to env vars and props; apparently Spring does inject command-line args.
except, it looks like it works in this format
-Dspring-boot.run.arguments="--p=foo"
with
@Value("${p}")
String p;
Thanks again!
This is much more straightforward
After googling for a bit, I found a solution (only for Chrome) in this article: you need to install a Chrome extension called "Let Me Out", and it will disable that pop-up forever.
After migrating to PgDog, everything is working correctly.
Do you have an implementation of the recipe you describe that you can share? Maybe your current implementation can be optimized? Or would it be an option to run it in parallel to speed up your processing?
But Zabbix monitors the server not the website.
To interact with your conversational agent you have to use Dialogflow CX libraries. Actually, Dialogflow CX is being deprecated tomorrow October 31 in favor of these conversational agents, and if you see the conversational agent UI is the same as Dialogflow CX but expanded with more generative AI functionalities such as playbooks.
The method to interact with an agent by text is called "detect intent". You have a node.js sample here: https://cloud.google.com/dialogflow/cx/docs/quick/api#detect-intent-nodejs
But you need to make sure that you are authenticating properly when you do the call to the API. That means that if you are executing the call from your local machine, you have to be logged in gcloud as a user with the corresponding permissions or use a service account with those permissions and add the google credentials in the request.
I have added the following block of code in the Info.plist under the
CFBundleURLTypes
key. It gives permission for my domain, so every API call using the domain I mention will work and not fail. I have tried this solution and it works.
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleURLName</key>
<string>app.appname.in</string>
<key>CFBundleURLSchemes</key>
<array>
<string>YourAppName</string>
</array>
</dict>
</array>
Hercules
https://www.hw-group.com/product-version/hercules
This one can help you
According to the official documentation, PyCharm 2025.2 supports Python 2.7.
You can try selecting an existing interpreter (not venv) and see if it works.
Sounds like an encoding issue. Those can be tricky. More info would help, like the code and parameters used to call the API, the API version, etc.
Have you verified the response from Google Translate actually contains the content you think it does? Depending on the response format, it might contain HTML entities like ign&oacute;relo instead of actual characters like ignórelo as you're expecting. Try using cfdump to view the full HTTP response, including HTML entities (not visible using a simple cfoutput).
If I set debug to 3, the output is (values 10 (10 1 2)).
So, high optimization made the inconsistent output (values 0 (10 1 2)).
In addition, I should have checked the warning message about a literal from SBCL more carefully.
The ocean of Common Lisp is so deep :o
trackAll is not an available event function anymore; the second import method works fine, and using an event like ahoy.trackView() proved that my setup was ready to work.
After trying countless things, including reinstalling PowerShell, I found the problem and the solution.
The folder c:\windows\system32\powershell was missing from my computer. Reinstalling PowerShell didn't restore it. To fix it, simply run: Install-PowerShellRemoting.
From that moment on, everything worked as it should.
Just for the sake of completeness of the thread, since 1.33, and due to this enhancement proposal, Kubernetes supports native sidecars, which is a pattern for deploying sidecar containers in pods. Native sidecars are init containers with Always being used as their restart policy. This blog post from istio shows how you could enable istio to inject sidecar containers as native sidecars.
Just add a ZStack at the top inside body and give it your desired color.
That's it; it will work fine!
Add 'https://www.youtube-nocookie.com' in origin
_controller = YoutubePlayerController(
params: const YoutubePlayerParams(
showControls: true,
mute: false,
showFullscreenButton: true,
loop: false,
origin: 'https://www.youtube-nocookie.com', // add this line
),
);
_controller.loadVideoById(videoId: YoutubePlayerController.convertUrlToId("https://www.youtube.com/watch?v=_57oC8Sfp-I")??'');
Update based on @h3llr4iser answer:
<?php
namespace App\Services;
use App\Entity\MyEntity;
use Vich\UploaderBundle\FileAbstraction\ReplacingFile;
use Vich\UploaderBundle\Handler\UploadHandler;
final class ExternalImageUploader
{
public function __construct(private UploadHandler $uploadHandler)
{
}
public function copyExternalFile(string $url, MyEntity $entity)
{
$newfile = tempnam(sys_get_temp_dir(), 'prefix_');
copy($url, $newfile);
$uploadedFile = new ReplacingFile($newfile);
$entity->setImageFile($uploadedFile);
$this->uploadHandler->upload($entity,'imageFile');
}
}