NVDA sometimes fails to change to focus mode consistently, possibly due to nested elements. But you can always disable 'Enable focus mode on run' from Settings - Browse mode.
There are quite a few issues here:
Your code can exit before it hits any of the #expect tests. You set up the sink, emit the fakeMessages, and then immediately exit. You have no assurance that it will even reach the #expect tests within the sink at all. You need to do something to make sure the test doesn't finish before it has consumed the published values.
Fortunately, async-await offers a simple solution. E.g., you might take the sut.$messages publisher and then await its values. So either use a for await-in loop:
for await value in sut.$messages.values {
…
}
Or use an iterator:
var iterator = sut.$messages.values.makeAsyncIterator()
let value1 = await iterator.next()
…
let value2 = await iterator.next()
…
// etc
Either way, this is how you can await a value emitted from the values asynchronous sequence associated with the sut.$messages publisher, thereby assuring that the test will not finish before you process the values.
Having modified this to make sure your test does not finish prematurely, the next question is how do you have it timeout if your stubbed service fails to emit the values. You can do this a number of ways, but I tend to use a task group, with one task for the tests and another for a timeout operation. E.g.:
try await withThrowingTaskGroup(of: Void.self) { group in
    group.addTask {
        let value1 = await iterator.next()
        #expect(value1 == [])
        …
    }

    group.addTask {
        try await Task.sleep(for: .seconds(1))
        throw ChatScreenViewModelTestsError.timedOut
    }

    try await group.next()
    group.cancelAll()
}
Or a more complete example:
@Test
func onAppearShouldReturnInitialMessagesAndStartPolling() async throws {
    let mockMessageProvider = MockMessageProvider()
    let sut = createSUT(messageProvider: mockMessageProvider)

    sut.onAppear()

    var iterator = sut.$messages
        .buffer(size: 10, prefetch: .keepFull, whenFull: .dropOldest)
        .values
        .makeAsyncIterator()

    Task {
        await mockMessageProvider.emit(.success(fakeMessages))     // Emit initial messages
        await mockMessageProvider.emit(.success(moreFakeMessages)) // Emit more messages
    }

    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask {
            let value1 = await iterator.next()
            #expect(value1 == [])

            let value2 = await iterator.next()
            #expect(value2 == fakeMessages)

            let value3 = await iterator.next()
            #expect(value3 == moreFakeMessages)
        }

        group.addTask {
            try await Task.sleep(for: .seconds(1))
            throw ChatScreenViewModelTestsError.timedOut
        }

        try await group.next()
        group.cancelAll()
    }
}
Your code assumes that you will see two published values:
#expect(sut.messages[0].count == 0)
#expect(sut.messages[1].count > 0)
This is not a valid assumption. A Published.Publisher does not handle back pressure. If the async sequence publishes values faster than they can be consumed, your property will drop values (unless you buffer your publisher, like I have in my example in point 2). This might not be a problem in an app that polls infrequently, but especially in tests, where you mock the publishing of values without delay, you can easily end up dropping values.
Your sut.onAppear starts an asynchronous Task {…}. But you don't wait for this and immediately emit on the mocked service, MockMessageProvider. This is a race. You have no assurances that poll has been called before you emit values. If not, because emit uses nil-chaining of continuation?.yield(value), emit might end up doing nothing, as there might not be any continuation to which values can be yielded yet.
Personally, I would decouple the asynchronous sequence from the polling logic. E.g., I would retire AsyncStream and reach for an AsyncChannel from the Swift Async Algorithms package, which can be instantiated when the message provider is instantiated. And then poll would not be an asynchronous sequence itself, but rather a routine that starts polling your remote service:
protocol MessageProviderUseCase: Sendable {
    var channel: AsyncChannel<MessagePollResult> { get }
    func startPolling(interval: TimeInterval)
}

private final class MockMessageProvider: MessageProviderUseCase {
    let channel = AsyncChannel<MessagePollResult>()

    func startPolling(interval: TimeInterval) {
        // This is intentionally blank …
        //
        // In the actual message provider, the `startPolling` would periodically
        // fetch data and then `emit` its results.
    }

    func emit(_ value: MessagePollResult) async {
        await channel.send(value)
    }
}
Because the channel is created when the message provider is created, it doesn't matter in which order startPolling and emit are called in our mock implementation.
Some other observations:
Your protocol declares poll (which returns an asynchronous sequence) as an async function. But it is not an async function. Sure, it returns an AsyncStream, but poll, itself, is a synchronous function. I would not declare it as an async function unless you have some compelling reason to do so.
You declared the MessageProviderUseCase protocol to be Sendable, but MockMessageProvider is not Sendable. Your code does not even compile for me. In my mock (where I have no mutable properties), this is moot, but if you have mutable state, you need to synchronize it (e.g., make it an actor, isolate the class to a global actor, etc.).
It may be beyond the scope of this question, but I would be a little wary about using a @Published property for publishing values from an AsyncSequence. In a messaging app, you might not want to drop values under back-pressure situations. It depends upon your use case, but note that in the absence of buffer on your publisher, you can drop values.
You will need to download Git Credential Manager:
https://github.com/microsoft/Git-Credential-Manager-for-Windows/releases
It fixed my issue.
You need to save the CSV data set file as "CSV UTF-8" rather than regular CSV. Along with that, set the "File encoding" field of "CSV Data Set Config" to UTF-8.
That's what worked perfectly for me.
In your Navigation Bar item's onClick:
navController.navigate(destination.route) {
    popUpTo(0) {
        saveState = true
    }
    launchSingleTop = true
    restoreState = true
}
df = pd.read_csv("csv file path")  # this reads the CSV file
dataFrame = pd.DataFrame(df)  # pandas DataFrame method
column_labels = dataFrame.columns  # returns only the column headers
for i in range(4, 10):
    print(column_labels[i])
array_values(array_column($array,'email','email'));
This returns unique values from a PHP array.
The following code works for me.
from google.colab import runtime
runtime.unassign()
display: flex; flex-direction: column in #r1 and #r2 stacks the text and image vertically.
margin-top: auto in .downalign pushes that image to the bottom of the container.
It seems as if there is not currently support for what I am trying to do using either azure cli or Azure Powershell, but the necessary functionality is exposed via REST api. This will approve a private endpoint on a SQL MI. Props to Cory for the solution.
# Set variables
$subscriptionId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$resourceGroupName = "myresourcegroupname"
$managedInstanceName = "mysqlmi"
$privateEndpointConnectionName = "mysqlmy.endpointId"
# Build URL properly for PowerShell
$resourcePath = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/managedInstances/$managedInstanceName/privateEndpointConnections/$privateEndpointConnectionName"
# Execute the approval with proper JSON escaping for PowerShell
az rest --method PUT --url $resourcePath --url-parameters 'api-version=2024-05-01-preview' --body '{\"properties\":{\"privateLinkServiceConnectionState\":{\"status\":\"Approved\",\"description\":\"Approved by pipeline\"}}}'
Stack Overflow has been an indispensable resource for developers since its launch in 2008 by Jeff Atwood and Joel Spolsky. As the flagship Q&A site of the Stack Exchange Network, it has grown to host over 29 million registered users, with more than 24 million questions and 36 million answers as of 2025. Its system of reputation points and badges, along with community moderation, has set a high standard for collaborative knowledge sharing.
In short, the AI is incorrect. Let's pretend that ChatGPT doesn't exist for a second and do some old-fashioned research, starting with the page for...in on MDN. It reads:
The traversal order, as of modern ECMAScript specification, is well-defined and consistent across implementations. Within each component of the prototype chain, all non-negative integer keys (those that can be array indices) will be traversed first in ascending order by value, then other string keys in ascending chronological order of property creation.
So, the concern that the order of the keys is inconsistent between JS environments seems to be invalid, at least in the current day. Other reliable resources such as this page seem to suggest that keys have been well-ordered as part of the specification since ES2015; and caniuse reports that 96.57% of browsers implement that version of the spec.
I've always done it this way, and it has always worked. All the test cases I tried passed.
That's not surprising. You are almost certainly using an ES2015 compliant environment and so the traversal order is the same as the insertion order. In this case, the keys are inserted in the order that they appear in the string.
Was the AI possibly mistaken?
Yes.
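A quick demo makes the spec-defined order concrete (the object and key names here are hypothetical, chosen to show the ordering):

```javascript
// Integer-like keys ("0", "2") are visited first in ascending numeric
// order, then the remaining string keys in insertion order ("b", "a").
const obj = { b: 1, 2: "two", a: 3, 0: "zero" };

const keys = [];
for (const key in obj) {
  keys.push(key);
}

console.log(keys); // → ["0", "2", "b", "a"]
```

Note that even though `b` was written first in the literal, the integer-like keys jump ahead of it, exactly as the MDN quote describes.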
Any news on this, and how to prevent it in Blazor?
It looks like https://jj-vcs.github.io/jj/latest/FAQ/#i-accidentally-changed-files-in-the-wrong-commit-how-do-i-move-the-recent-changes-into-another-commit has the answer for this situation.
You’ll need to use the SIM card reader’s SDK or API (usually provided by the vendor). There’s no standard .NET API for SIM access—functionality like reading IMSI, serial, or carrier is typically vendor-specific and accessed via AT commands or through a COM port using serial communication in C#.
I fixed this issue by using Tools > Android > Restart Adb Server.
After that, VS started recognizing my device.
add --profile:
aws iam get-user --profile default
aws iam list-users --profile default
file: C:\Users\DESKTOP\.aws\credentials

[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

file: C:\Users\DESKTOP\.aws\config

[default]
region = us-east-1
output = json
Are you able to share the qca-networking-2022-spf-12-1_qca_oem source? I would very much appreciate it, thanks!
I'm trying to get an IPQ807x booted.
I am trying to do the same using server actions, not a route handler, and I'm getting this issue again and again:
⨯ Error: Cookies can only be modified in a Server Action or Route Handler. Read more: https://nextjs.org/docs/app/api-reference/functions/cookies#options
at async secureApiCall (src\lib\actions.ts:34:6)
at async getAllUsers (src\app\admin\users\actions.ts:24:9)
at async UsersPage (src\app\admin\users\page.tsx:12:16)
32 | console.log("Updated session token: ", session);
33 |
> 34 | await session.save();
| ^
35 |
36 | // Retry original request
37 | return await cb(session.token); {
digest: '533765442'
}
With what level of confidence? Sub-100%? If you're OK with probabilistic primes, I think you can likely increase your efficiency considerably.
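As a sketch of the "probabilistic primes" idea (the function names and round count below are my own choices, not from the question): in Miller-Rabin, each round lets a composite slip through with probability at most 1/4, so k rounds leave a false-positive chance of at most 4^-k.

```javascript
// Modular exponentiation on BigInts: base^exp mod mod.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Miller-Rabin probabilistic primality test -- a sketch, not a
// definitive implementation.
function isProbablyPrime(n, k = 20) {
  if (n < 2n) return false;
  for (const p of [2n, 3n, 5n, 7n]) {
    if (n === p) return true;
    if (n % p === 0n) return false;
  }
  // Write n - 1 = d * 2^r with d odd.
  let d = n - 1n, r = 0n;
  while (d % 2n === 0n) { d /= 2n; r += 1n; }
  for (let i = 0; i < k; i++) {
    // Random base a in [2, n - 2].
    const a = 2n + BigInt(Math.floor(Math.random() * 1e9)) % (n - 3n);
    let x = modPow(a, d, n);
    if (x === 1n || x === n - 1n) continue;
    let composite = true;
    for (let j = 0n; j < r - 1n; j++) {
      x = (x * x) % n;
      if (x === n - 1n) { composite = false; break; }
    }
    if (composite) return false; // definitely composite
  }
  return true; // probably prime, confidence >= 1 - 4^-k
}

console.log(isProbablyPrime(2n ** 61n - 1n)); // → true (a Mersenne prime)
```

With k = 20 the error bound is about 10^-12, and the cost is polynomial in the bit length rather than the trial-division cost of checking every divisor up to sqrt(n).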
This is the most stupid thing: I had a file named local-ca.crt and it was greyed out. Renaming the file to ca.crt made it available and not greyed out anymore!
After some experimentation, my heuristic answer is: import it through a script tag in the body of the base HTML.
As @Pratik Pathak mentioned, one way is to use the actual Azure storage URL, which has worked for me in the past, but you could also use blobServiceClient instead of blobClient.
string connectionString = _configuration.GetConnectionString("AzureStorage");
string containerName = "container-name";
var blobServiceClient = new BlobServiceClient(connectionString);
var containerClient = blobServiceClient.GetBlobContainerClient(containerName);
var blobClient = containerClient.GetBlobClient(filePath);
var blobDownloadInfo = await blobClient.DownloadAsync();
var contentType = blobDownloadInfo.Value.Details.ContentType ?? "application/octet-stream";
return File(blobDownloadInfo.Value.Content, contentType, Path.GetFileName(filePath));
This looks like a good use case for Apache Spark. This would typically be done with Python or Scala, but there is no reason you couldn't also do this in Java (Apache Spark has java libraries). Not sure this answers your question but I think this approach is worth looking into.
You may try to put the ldap servers in two lines instead of one:
auth_ldap_servers ldap1;
auth_ldap_servers ldap2;
None of the current answers seem to talk about how to change the colour used in highlighting.
In my config, I have tabs and trailing whitespace highlighted using whitespace-mode
:
(require 'whitespace)
(whitespace-mode 1) ;; or (global-whitespace-mode 1)
(setq whitespace-style '(face tabs trailing))
(modify-face whitespace-tab nil "#ff0000")
(modify-face whitespace-trailing nil "#ff0000")
Code for highlighting tabs obtained from a comment under this StackOverflow answer.
Hey, can you tell me my four-digit code for my time limit? I'm trying to figure out what my time-limit code was. I knew it, but I forgot it, so can you please help me?
There are two senses for "can" here:
1 - In the sense that it's possible: yes, you can freely take this approach.
2 - In the sense that it's good practice: no; if you are creating an abstract class, you should require subclasses to create the behavior and state specific to them.
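To illustrate sense 2, here is a hypothetical sketch (the class names are made up): the "abstract" base defines the contract and shared behavior, while each subclass supplies the state and behavior specific to it.

```javascript
// "Abstract" base: shared behavior lives here, but the specifics are
// demanded from subclasses (JavaScript enforces this only at runtime).
class Shape {
  area() {
    throw new Error("area() must be implemented by a subclass");
  }
  describe() {
    return `area = ${this.area()}`; // shared logic built on the contract
  }
}

class Circle extends Shape {
  constructor(r) {
    super();
    this.r = r; // state specific to this subclass
  }
  area() {
    return Math.PI * this.r ** 2; // behavior specific to this subclass
  }
}

console.log(new Circle(1).describe()); // → "area = 3.141592653589793"
```

The base class stays free of subclass-specific state; calling area() on the base directly throws, which is the runtime analogue of "require subclasses to implement it".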
A new update of Laravel supports TypeScript: https://laravel-news.com/laravel-breeze-typescript
As usual, terrible documentation: it should be OrientationEvent, not OrientationData, in the readme.
ANSWER:
bpftrace fishes the register contents out of the struct pt_regs, which it gets from the ptrace interface. It gets the offsets into the struct using this snippet of code (bpftrace GitHub):
static const std::unordered_map<std::string, size_t> register_offsets = {
{ "r15", 0 }, { "r14", 8 }, { "r13", 16 }, { "r12", 24 },
{ "bp", 32 }, { "bx", 40 }, { "r11", 48 }, { "r10", 56 },
{ "r9", 64 }, { "r8", 72 }, { "ax", 80 }, { "cx", 88 },
{ "dx", 96 }, { "si", 104 }, { "di", 112 }, { "orig_rax", 120 },
{ "ip", 128 }, { "cs", 136 }, { "flags", 144 }, { "sp", 152 },
{ "ss", 160 },
};
You can also use [maxLines]="2" on your Label in Angular.
How do I make step 3 dependent on step 2?
Wait.on is another option that can be used to wait on the processing of another step.
How can I skip (2) when there is no data returned in (1)?
Please refer to the comment from XQ Hu. By passing readyOrInProgressTriggers as a side input, then, depending on the count of that side input, the logic in step 2 can be skipped.
As of August 2025, safe has wide browser support and can now be used.
This works now
align-items: safe center;
I use ExamplePaymentQueueDelegate
to restore purchases on iOS, and it works fine in my case, in promo codes as well.
Future<void> init() async {
  await _fetchSubscriptions();

  final purchaseUpdated = _inAppPurchase.purchaseStream;
  _subscription = purchaseUpdated.listen(
    _listenToPurchaseUpdated,
    onDone: () {
      _subscription.cancel();
    },
    onError: (Object error) {
      log(error.toString());
    },
  );

  if (Platform.isIOS) {
    final iosPlatformAddition = _inAppPurchase
        .getPlatformAddition<InAppPurchaseStoreKitPlatformAddition>();
    await iosPlatformAddition.setDelegate(ExamplePaymentQueueDelegate());
  } else {
    await InAppPurchase.instance.restorePurchases();
  }
}
class ExamplePaymentQueueDelegate implements SKPaymentQueueDelegateWrapper {
  @override
  bool shouldContinueTransaction(
    SKPaymentTransactionWrapper transaction,
    SKStorefrontWrapper storefront,
  ) {
    return true;
  }

  @override
  bool shouldShowPriceConsent() {
    return false;
  }
}
Also, please show price consent on hitting the subscribe button to avoid future bugs:
if (Platform.isIOS) {
  await confirmPriceChange();
}

Future<void> confirmPriceChange() async {
  // Price changes for Android are not handled by the application, but are
  // instead handled by the Play Store. See
  // https://developer.android.com/google/play/billing/price-changes for more
  // information on price changes on Android.
  if (Platform.isIOS) {
    final iapStoreKitPlatformAddition = _inAppPurchase
        .getPlatformAddition<InAppPurchaseStoreKitPlatformAddition>();
    await iapStoreKitPlatformAddition.showPriceConsentIfNeeded();
  }
}
In my case, the keyboard function key was locked. Pressing fn+F12 worked as expected. To remove the function lock, press the Fn + Esc keys together.
In Vite just remove ~
:
@import "react-image-gallery/styles/css/image-gallery.css";
The problem was resolved by enabling the WINHTTP_OPTION_IPV6_FAST_FALLBACK
option in WinHTTP, which allowed the client to quickly fall back to IPv4 when IPv6 was slow or unresponsive.
First:
import org.springframework.web.bind.annotation.RestController;
Second:
You need to change @Controller to @RestController.
You need to change following files for a different vendor directory location:
artisan
public/index.php
composer.json
For example, wherever you find following (in case of artisan file):
require __DIR__.'/vendor/autoload.php';
replace with:
require __DIR__.'/php_modules/vendor/autoload.php';
Wherever you find following (in case of public/index.php):
require __DIR__.'/../vendor/autoload.php';
replace with:
require __DIR__.'/../php_modules/vendor/autoload.php';
In composer.json file, look for following fields:
autoload -> exclude-from-classmap
scripts -> test
config -> vendor-dir
Check these 3 fields in the composer.json file and correct the values appropriately.
Thanks (and waiting for more replies/suggestions)
Uninstall the "vim" extension from your VS Code. That should fix the problem.
I ran your original script and it is still running; patience is not my virtue. So, is this what you are looking for?
import pandas as pd
import numpy as np
#generate sample DataFrame
dflength = 7_303_787
#create a "PLU" column of random integers (100000–199999)
PLUs = pd.DataFrame(
np.random.randint(100_000, 200_000, size=(dflength, 1)),
columns=['PLU']
)
#create an "ArticleName" column of random integers (1–99)
ArticleNames = pd.DataFrame(
np.random.randint(1, 100, size=(dflength, 1)),
columns=['ArticleName']
)
#concatenate into one DataFrame of shape (dflength, 2)
df = pd.concat([PLUs, ArticleNames], axis=1)
# loop per-PLU in one pass via groupby
for plu_value, group in df.groupby('PLU')['ArticleName']:
    # group is a Series of ArticleName values for this PLU
    print(f"PLU={plu_value}: group shape = {group.shape}")
    print(f"  last ArticleName = {group.iloc[-1]}")
# optional: if you only need the last ArticleName per PLU,
last_per_plu = df.groupby('PLU')['ArticleName'].last()
# last_per_plu is a Series indexed by PLU, with the last ArticleName as value
print("\nVectorized result (last ArticleName per PLU):")
print(last_per_plu.head()) # show first few entries
Local ipv6 route is always in the global prefix, assuming that it's standard /64 prefix. So:
ip -6 route show | grep -v fe80 | awk '{print $1}'
2001:8a0:e4b3:e800::/64
This removes link-local (fe80::/10) addresses using grep -v fe80. (I changed my prefix to something random.)
If your prefix is not /64 and you are using SLAAC, you'll still get a /64 local route.
I just want to drop here that Angular supports JavaScript string interpolation since 19.2.0 (untagged template literals).
Define car and array. You should be able to get a callable and return at the end, as long as you don't want to return an undefined array. KiSs <3
Yet another workaround is to set the WA_TransparentForMouseEvents attribute on the child widget of QVideoWidget:
QWidget *c = findChild<QWidget*>();
if (c)
    c->setAttribute(Qt::WA_TransparentForMouseEvents);
We had this exact same behavior when we put in an intro file with different bitrate and stereo/mono settings than the stream. The file would play and then the stream would not play. Interesting though, it played on our mobile app, but not in the browser.
We fixed the bitrate and stereo problem and now it plays fine.
Just install django-oscar version 3.2.4 as follows:
pip install django-oscar[sorl-thumbnail]==3.2.4
and the problem was solved.
I made a unit for you to do this:
unit DeviceLister;

interface

uses
  System.Classes
  ,System.SysUtils
  {$IFDEF MSWINDOWS}
  ,Winapi.Windows
  {$ENDIF};

function GetPluggedInDevices: TStringList;

implementation

{$IFDEF MSWINDOWS}
const
  DIGCF_PRESENT = $00000002;
  DIGCF_ALLCLASSES = $00000004;
  SPDRP_DEVICEDESC = $00000000;

type
  HDEVINFO = Pointer;
  ULONG_PTR = NativeUInt;

  TSPDevInfoData = packed record
    cbSize: DWORD;
    ClassGuid: TGUID;
    DevInst: DWORD;
    Reserved: ULONG_PTR;
  end;

function SetupDiGetClassDevsW(ClassGuid: PGUID; Enumerator: PWideChar; hwndParent: HWND;
  Flags: DWORD): HDEVINFO; stdcall; external 'setupapi.dll' name 'SetupDiGetClassDevsW';
function SetupDiEnumDeviceInfo(DeviceInfoSet: HDEVINFO; MemberIndex: DWORD;
  var DeviceInfoData: TSPDevInfoData): BOOL; stdcall; external 'setupapi.dll';
function SetupDiGetDeviceRegistryPropertyW(DeviceInfoSet: HDEVINFO;
  const DeviceInfoData: TSPDevInfoData; Property_: DWORD; var PropertyRegDataType: DWORD;
  PropertyBuffer: PBYTE; PropertyBufferSize: DWORD; RequiredSize: PDWORD): BOOL; stdcall;
  external 'setupapi.dll' name 'SetupDiGetDeviceRegistryPropertyW';
function SetupDiDestroyDeviceInfoList(DeviceInfoSet: HDEVINFO): BOOL; stdcall; external 'setupapi.dll';
{$ENDIF}

function GetPluggedInDevices: TStringList;
{$IFDEF MSWINDOWS}
var
  DeviceInfoSet: HDEVINFO;
  DeviceInfoData: TSPDevInfoData;
  i: Integer;
  DeviceName: array[0..1023] of Byte;
  RegType: DWORD;
{$ENDIF}
begin
  Result := TStringList.Create;
  {$IFDEF MSWINDOWS}
  DeviceInfoSet := SetupDiGetClassDevsW(nil, nil, 0, DIGCF_ALLCLASSES or DIGCF_PRESENT);
  if NativeUInt(DeviceInfoSet) = NativeUInt(INVALID_HANDLE_VALUE) then
  begin
    Result.Add('Failed to get device list.');
    Exit;
  end;

  i := 0;
  DeviceInfoData.cbSize := SizeOf(TSPDevInfoData);
  while SetupDiEnumDeviceInfo(DeviceInfoSet, i, DeviceInfoData) do
  begin
    if SetupDiGetDeviceRegistryPropertyW(DeviceInfoSet, DeviceInfoData, SPDRP_DEVICEDESC,
      RegType, @DeviceName, SizeOf(DeviceName), nil) then
    begin
      Result.Add(Format('%d: %s', [i + 1, PWideChar(@DeviceName)]));
    end;
    Inc(i);
  end;

  SetupDiDestroyDeviceInfoList(DeviceInfoSet);
  {$ELSE}
  Result.Add('Device listing is only supported on Windows.');
  {$ENDIF}
end;

end.
And then in your app, you can simply add DeviceLister to your uses list and call the GetPluggedInDevices function. Here's an example where I'm calling it from a button click to display the devices in a memo:
procedure TForm1.Button1Click(Sender: TObject);
begin
  var Devices := GetPluggedInDevices;
  Memo1.Lines.Assign(Devices);
  Devices.Free;
end;
And the result:
Is this kind of what you wanted?
Turns out it is a bug in PyCharm: https://youtrack.jetbrains.com/issue/PY-60819/FLASKDEBUG1-breaks-debugger-when-Python-PyCharm-installation-path-has-spaces#focus=Comments-27-8071749.0-0
Looks like it was fixed in the 2025.2.0 release.
I upgraded to 2025.2 and can confirm that the issue has been resolved.
Using a Google Business Profile for your business on Google Maps can be beneficial, but for businesses functioning within multiple cities, relying solely on Google's geocomplete-based listings is counterproductive.
Google Maps tends to restrict visibility to a particular local area radius, meaning your business would not show up in searches outside of your immediate vicinity.
This is where business directories focused on a particular state, like EZ Local, shine.
EZ Local, unlike Google geocomplete, empowers businesses to list in multiple cities, even across state lines, thus broadening their reach. Contractors, service providers, or companies with a statewide footprint needing untethered, local exposure can greatly benefit from EZ Local.
If your business would like to reach beyond neighborhood customers, such targeting with EZ Local becomes scalable and more SEO-friendly.
From Igor Tandetnik:
cppreference has this to say: (1) a consteval specifier implies inline; and (2) the definition of an inline function must be reachable in the translation unit where it is accessed. I'm 99% sure you won't be able to hide the definition of a consteval function in the library; you'd have to put it into the header.
To ensure if that setup is valid or possible, it would be best to consult a Google Cloud sales specialist. They can offer personalized advice and technical recommendations tailored to your application’s needs. From identifying suitable use cases to helping you manage future workload costs effectively, their insights can be invaluable.
You most likely have a framework or CSS file overriding the table row element. It's changing color, but not for the entire row, because another CSS rule is already governing it, or two rules are conflicting. This is conflict resolution.
Try to move the element into a new file/folder and see if it runs separately. If it does, you know you have a conflict in CSS rules.
Happy hunting!
In my case, my password was reset. When connecting, I changed the Authentication to another option then back to "SQL Server Authentication". After it, when I hit "Connect" it asked me to update the password.
I had the same problem a while ago; I solved it by updating the library. If it doesn't work after updating, try pymysql instead of mysql.
I tried to use the merchant ID under "Business Information" and it was wrong; mine was in the URL bar and was 13 characters. The wrong one for me was all numbers and only 12 digits.
moveTo(x, y) {
  this.nodes[0].updateRelative(true, true);
  let dist = ((x - this.end.x) ** 2 + (y - this.end.y) ** 2) ** 0.5;
  let len = Math.max(0, dist - this.speed);
  for (let i = this.nodes.length - 1; i >= 0; i--) {
    let node = this.nodes[i];
    let ang = Math.atan2(node.y - y, node.x - x);
    node.x = x + len * Math.cos(ang);
    node.y = y + len * Math.sin(ang);
    x = node.x; y = node.y; len = node.size;
  }
}

update() { this.moveTo(Input.mouse.x, Input.mouse.y) }
document.getElementById("Button").disabled = true;
I used a prebuilt AAR.
If it is not available, you can follow this:
https://medium.com/@213maheta/ffmpeg-create-aar-file-add-it-into-android-project-7e069b0fe23f
i) Run below command on terminal
git clone https://github.com/arthenica/ffmpeg-kit.git
or Download source code from below link
https://github.com/arthenica/ffmpeg-kit
ii) Open the terminal & set the paths for the Android SDK & NDK
export ANDROID_SDK_ROOT=/..your_path../Android/Sdk
export ANDROID_NDK_ROOT=/..your_path../Android/Sdk/ndk/25.1.8937393
iii) Run below command
./android.sh
iv) Go to dir
…./ffmpeg-kit/prebuilt/bundle-android-aar/
v) Copy ffmpeg-kit.aar & put it in to below path
project_name/app/libs/
vi) Add below line in your app gradle
dependencies {
implementation(files("libs/ffmpeg-kit.aar"))
}
I was able to make a patch at GenerateNode
in PropertyCodeGenerator:
result.Type = type;
//Add this code
if (element.GenericReturnTypeIsNullable())
{
var simpleType = type as RtSimpleTypeName;
var genericArguments = simpleType.GenericArguments.Cast<RtSimpleTypeName>().ToArray();
for (int i = 0; i < genericArguments.Length; i++)
{
var genericArgument = genericArguments[i];
genericArguments[i] = new RtSimpleTypeName(genericArgument.TypeName + " | undefined");
}
result.Type = new RtSimpleTypeName(simpleType.TypeName, genericArguments);
}
This proves what I want to achieve is possible, but unfortunately it means I will have to make my own version of the library to accommodate this change.
Keep in mind the solution above is not 100% complete, as it doesn't check the index of each generic argument; it only assumes that if one is nullable then all are :) I leave this as an exercise for the readers...
If there is a better way, please let me know so I am not reinventing the wheel.
Thank you!
Same error. I found this issue https://github.com/vercel/next.js/issues/81751 and decided to update Next to the newest 15.4.5, and it seems to work now.
Depending on the reason, you will likely need to access local business directories, map APIs, or utilize data scraping tools to acquire business listings with geocodes (latitude and longitude) for a given area.
1. Use Google Maps API or Bing Places API
With the Google Maps and Bing Places APIs, you can search for and retrieve businesses within a particular area; the results include business names, locations, addresses, and geocodes. Of course, a developer key is a prerequisite.
2. Third-party Data Providers
Other providers, for instance Data Axle, Yelp, or the Foursquare API, do provide business datasets with geocodes, but usually for a price.
3. Scraping Local Directories with Permission
Some public directories, for example EZ Local, show businesses with their city and state but do not provide geocodes. However, if the business address is available, you can apply geocoding APIs, such as Google's, to translate the address to latitude and longitude.
Note:
Always check the conditions of service of sites such as EZ Local concerning their data policies before scraping or programmatically extracting data.
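As a sketch of option 3 (this uses Google's documented Geocoding API endpoint; the address and API key below are placeholders you must supply yourself):

```javascript
// Build a request URL for the Google Geocoding API. The caller must
// supply a real API key and respect the provider's terms of service.
function buildGeocodeUrl(address, apiKey) {
  return "https://maps.googleapis.com/maps/api/geocode/json" +
    `?address=${encodeURIComponent(address)}&key=${apiKey}`;
}

// Fetch and unwrap the first result's coordinates ({ lat, lng }).
async function geocode(address, apiKey) {
  const response = await fetch(buildGeocodeUrl(address, apiKey));
  const data = await response.json();
  if (data.status !== "OK") throw new Error(`Geocoding failed: ${data.status}`);
  return data.results[0].geometry.location;
}
```

The same two-step shape (build a URL, unwrap `results[0].geometry.location`) applies to most geocoding providers, only the endpoint and response layout differ.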
This is Niusha Alipour. Give me her number.
It looks like the source originates from an example I wrote for a STM32World tutorial video. If you have not watched the video, I'd recommend it, as it goes through the setup in STM32CubeMX. I don't see anything obviously wrong in your code, so most likely the problem is in the CubeMX setup.
https://www.youtube.com/watch?v=0N4ECamZw2k
The working example for STM32F405 is here: https://github.com/STM32World/stm32fun/tree/master/stm32world_dac1
Instead of commenting out lines or manually editing expressions, you can filter elements directly in a vector:
x = [1,2,3,4,5];
total = sum(x(~ismember(x, [2 4]))); % Exclude 2 and 4 from the sum
I have finally found a solution for the given issue. I recommend using PowerShell with the Az module to process the commands. Make sure you have installed the Az module in your PowerShell in order to run the commands below.
First, identify the app ID for your registered application in Azure.
Once you have the app ID: in my case I faced an error where I was not able to read or access the certificate from Azure Key Vault because of the error "Caller is not authorized to perform action on resource. If role assignments, deny assignments or role definitions were changed recently, please observe propagation time."
To properly access the Key Vault, I recommend providing role-based access for the app ID for Key Vault Administrator and Key Vault Certificates Officer.
Here, as your application is trying to access the Key Vault from your custom program, you will have to provide role-based access on the Service Principal. For more information, please refer to:
https://learn.microsoft.com/en-us/azure/databricks/admin/users-groups/service-principals
So your app needs an active Service Principal, and you must grant the required role to that service principal.
The commands to see and apply the role for your service principal are as follows:
az ad sp show --id [app-id]
If it fails with "Service Principal not found", then create it with:
az ad sp create --id [app-id]
Once you have an active SP in your tenant, the next step is to assign the roles:
az role assignment create --assignee app-id/client-id --role "Key Vault Certificates Officer" --scope /subscriptions/[subscription-id]/resourcegroups/[resourcegroupname]/providers/Microsoft.KeyVault/vaults/[vault-name]
az role assignment create --assignee app-id/client-id --role "Key Vault Administrator" --scope /subscriptions/[subscription-id]/resourcegroups/[resourcegroupname]/providers/Microsoft.KeyVault/vaults/[vault-name]
If your Azure Virtual Machine has a system-assigned managed identity enabled, also add that identity's app ID in the same way with the command above.
Once you do this, please wait approximately 15-20 minutes for the role assignments to propagate, then test (for example, by sending emails). I did this while setting up certificate-based authentication for our OAuth2 setup.
Have you found a solution for this?
Click the View Menu button and then select Show->Errors/Warnings on Project.
running with:
docker compose up -d
instead of:
docker-compose up -d
If you can give each combination an item ID (outside of Power BI Desktop), and then sort the column based on that specific ID, this will work.
For example,
| ID  | Item - Classification | Sizing - Classification | Desired Order |
|-----|-----------------------|-------------------------|---------------|
| BDK | Bed                   | King                    | 1             |
| BDQ | Bed                   | Queen                   | 2             |
| BSK | Box Spring            | King                    | 3             |
| BSQ | Box Spring            | Queen                   | 4             |
You would then create the order column based on ID and use "Sort by Column" with this order.
If you only sorted on the "sizing" classification, all Kings will be grouped together, all Queens will be grouped together, etc. (which I'm sure you've already seen).
Another way to accomplish this (depending how you want to do it) would be a custom column using DAX that would look something like this:
(For just item order)
OrderColumn = IF(table[Item] = "Bed", 1,
IF(table[Item] = "Box Spring", 2,
...................)
OR
(For item AND size order)
OrderColumn = IF(AND(table[Item] = "Bed", table[Sizing] = "King"), 1,
IF(AND(table[Item] = "Bed", table[Sizing] = "Queen"), 2,
...................)
Generate a realistic, high-resolution image that visually represents the following scene or concept in a natural, photorealistic style. The image should look like it belongs in a premium blog post—minimal, clear, and emotionally resonant—without any text, labels, or graphics.
Scene to Visualize: [paste your text here]
Visual Style & Composition:
- Use natural or ambient lighting that fits the tone of the scene (e.g., warm for cozy/home scenes, cool for modern/tech topics, bright for energetic content).
- Prioritize realism and believability—include depth, shadows, reflections, textures, and natural imperfections where appropriate.
- Background should either enhance the scene (if contextual) or be minimal/blurred to keep focus on the main elements.
- Use camera-like perspectives (eye-level, overhead, or close-up) depending on what best suits the scene.
- Avoid clutter. The image should feel clean and visually balanced, with a clear subject or focal point.
Color & Tone:
- Stick to modern color palettes trending in blogs and digital publications: soft neutrals, warm tones, elegant muted hues, natural greens/blues, or high-contrast blacks and whites.
- Optional: add a subtle filter or lighting grade that gives the image a cinematic or editorial blog-style finish.
Do NOT Include:
- No text or overlays of any kind
- No branding, logos, watermarks
- No cartoonish or unrealistic renderings
The final result should feel like a high-end editorial photo or a lifestyle stock image used by top-tier blogs (like Medium, Substack, Notion templates, or branded blogs by Apple, Airbnb, etc.).
Python 3.13.2 (main, Feb 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import tempfile
>>> tempfile.gettempdir()
'/tmp'
>>> os.environ["TMP"] = "/tmp/xis"
>>> tempfile.gettempdir()
'/tmp'
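The reason for this behavior: gettempdir() computes the directory once and caches it in tempfile.tempdir, so later environment changes are ignored (and a candidate directory must actually exist and be writable, which /tmp/xis may not). A small sketch, assuming a POSIX system where TMPDIR is consulted first:

```python
import os
import tempfile

tempfile.gettempdir()          # first call: computes and caches the result
custom = tempfile.mkdtemp()    # a real, writable directory to point at
os.environ["TMPDIR"] = custom  # on POSIX, TMPDIR is consulted before TMP
tempfile.tempdir = None        # drop the cache so the next call recomputes
print(tempfile.gettempdir())   # now reflects the updated environment
```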
There are several perspectives to consider here.
Among the classes that implement the interface there is greater coupling, but not as strong as it may seem. This coupling was even stronger in versions prior to Java 8, in which implementing classes HAD to be recompiled if any new functionality was added to the interface. Although classes must implement the functionality themselves, there is no dependency on any particular implementation, just on the set of operations to be implemented.
For callers, reusing the interface's functionality through polymorphism is a very good form of coupling and requires no modification.
And from the interface's perspective, there is a larger set of possible classes that can implement it.
Did you end up having to combine your certs into one .pfx file and then using that in your .csdef file? e.g.
<Certificates>
<Certificate name="cert-fullchain" thumbprint="B50C067CEE2B0C3DF855AB2D92F4FE39D4E70F1E" thumbprintAlgorithm="sha1" />
</Certificates>
Set:
logging: console.log
This way you will have all the queries logged to the console, irrespective of what kind of query it is. (Pass the function itself, console.log, not the result of calling console.log().)
Place your properties in a location that loads before auto-configuration:
@SpringBootApplication
@PropertySource("classpath:/WEB-INF/my-web.properties")
public class MyApplication {
public static void main(String[] args) {
SpringApplication.run(MyApplication.class, args);
}
}
ol {
overflow-x: auto;
padding: 0;
margin: 0;
white-space: nowrap;
}
li {
display: block;
white-space: pre;
min-width: 100%;
}
Jimi's post helped a lot. I used the post to create a class that derives from Form like below and now scrolling works fine. Thank you Jimi!
[DesignerCategory("code")]
public class Myform : Form, IMessageFilter
{
public Myform()
{
// SetStyle(ControlStyles.UserMouse | ControlStyles.Selectable, true);
this.AutoScroll = true;
}
protected override void OnHandleCreated(EventArgs e)
{
base.OnHandleCreated(e);
Application.AddMessageFilter(this);
VerticalScroll.LargeChange = 60;
VerticalScroll.SmallChange = 20;
HorizontalScroll.LargeChange = 60;
HorizontalScroll.SmallChange = 20;
}
protected override void OnHandleDestroyed(EventArgs e)
{
Application.RemoveMessageFilter(this);
base.OnHandleDestroyed(e);
}
protected override void WndProc(ref Message m)
{
base.WndProc(ref m);
switch (m.Msg)
{
case WM_PAINT:
case WM_ERASEBKGND:
case WM_NCCALCSIZE:
if (DesignMode || !AutoScroll) break;
ShowScrollBar(this.Handle, SB_SHOW_BOTH, true); //was false
break;
case WM_MOUSEWHEEL:
// Handle Mouse Wheel for other specific cases
int delta = (int)(m.WParam.ToInt64() >> 16);
int direction = Math.Sign(delta);
ShowScrollBar(this.Handle, SB_SHOW_BOTH, true); //was false
break;
}
}
public bool PreFilterMessage(ref Message m)
{
switch (m.Msg)
{
case WM_MOUSEWHEEL:
case WM_MOUSEHWHEEL:
if (DesignMode || !AutoScroll) return false;
if (VerticalScroll.Maximum <= ClientSize.Height) return false;
// Should also check whether the ForegroundWindow matches the parent Form.
if (RectangleToScreen(ClientRectangle).Contains(MousePosition))
{
SendMessage(this.Handle, WM_MOUSEWHEEL, m.WParam, m.LParam);
return true;
}
break;
case WM_LBUTTONDOWN:
// Pre-handle Left Mouse clicks for all child Controls
if (RectangleToScreen(ClientRectangle).Contains(MousePosition))
{
var mousePos = MousePosition;
// Inside our bounds but it's not our window
if (GetForegroundWindow() != TopLevelControl.Handle) return false;
// The hosted Control that contains the mouse pointer
var ctrl = FromHandle(ChildWindowFromPoint(this.Handle, PointToClient(mousePos)));
// A child Control of the hosted Control that will be clicked
// If no child Controls at that position the Parent's handle
var child = FromHandle(WindowFromPoint(mousePos));
}
return true;
// Eventually, if you don't want the message to reach the child Control
// return true;
}
return false;
}
private const int WM_PAINT = 0x000F;
private const int WM_ERASEBKGND = 0x0014;
private const int WM_NCCALCSIZE = 0x0083;
private const int WM_LBUTTONDOWN = 0x0201;
private const int WM_MOUSEWHEEL = 0x020A;
private const int WM_MOUSEHWHEEL = 0x020E;
private const int SB_SHOW_VERT = 0x1;
private const int SB_SHOW_BOTH = 0x3;
[DllImport("user32.dll", SetLastError = true)]
private static extern bool ShowScrollBar(IntPtr hWnd, int wBar, bool bShow);
[DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
private static extern int SendMessage(IntPtr hWnd, uint uMsg, IntPtr wParam, IntPtr lParam);
[DllImport("user32.dll")]
internal static extern IntPtr GetForegroundWindow();
[DllImport("user32.dll")]
internal static extern IntPtr WindowFromPoint(Point point);
[DllImport("user32.dll")]
internal static extern IntPtr ChildWindowFromPoint(IntPtr hWndParent, Point point);
}
As long as DtoService.GetDtos() uses "using var context = new DtoContext(...)", the context gets properly disposed even though you're creating DtoService without DI. It's short-lived and doesn't hold resources, so there's no memory-leak risk and no need to manually dispose of anything in MyService, since you're not holding the EF context there. Your provider pattern with DataService is a good way to avoid cluttering DI with multiple DB-context services; just make sure you don't accidentally hold onto instances of DtoService or the context longer than needed.
C is like Python in this respect: if an array has 9 slots, the valid indices run from 0 to 8. C is very strict about memory because it is a low-level language. To print the last element you would write printf("%d", vettore[8]); accessing vettore[9] reads past the end of the array, which is undefined behavior.
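The same off-by-one rule can be demonstrated in Python, which at least raises an error instead of silently reading past the end of the array as C may do:

```python
vettore = [0] * 9    # 9 slots: valid indices are 0 through 8
print(vettore[8])    # last element

try:
    vettore[9]       # one past the end
except IndexError as e:
    # Python raises; in C the same access is undefined behavior
    print("out of range:", e)
```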
For me it turned out to be necessary to manually copy the precompiled libraries from CefGlue/packages/cef.redist.linux64/120.1.8/CEF/ (from sources) to the bin folder.
os.system("helpfile.pdf") moves on to the next line as soon as the file is opened; it doesn't wait until the user closes it. So helpfile_btn is disabled only for a moment, because the next line re-enables it. I don't think this can be done with whatever reader the system selects: on Windows you can't get access to the reader, and you usually don't even know which one it is (Acrobat? Chrome? Firefox?). Maybe don't do it at all, or make the reader part of your program?
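If you do make the reader part of your program, you can wait for it, because you launch a known process instead of delegating to the system's default handler. A sketch (evince is just a hypothetical bundled viewer; the demo line uses a short-lived stand-in process instead of a PDF viewer):

```python
import subprocess
import sys

def open_and_wait(path, viewer_cmd):
    # Launch a known viewer so we get a process handle to wait on;
    # os.system with the default handler gives us no such handle.
    proc = subprocess.Popen([*viewer_cmd, path])
    proc.wait()  # blocks until that process exits
    return proc.returncode

# Real usage might look like: open_and_wait("helpfile.pdf", ["evince"])
# Demo with a stand-in process:
rc = open_and_wait("ignored-arg", [sys.executable, "-c", "pass"])
```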
I would suggest using QLoRA for fine-tuning, and a well-defined format for the fine-tuning data, like:
{"messages": [{"role": "system", "content": "......"}, {"role": "user", "content": "...."}, {"role": "assistant", "content": "......"}]}
Also try using a suitable optimizer during fine-tuning, like AdamW.
I could provide a more detailed solution if you share your fine-tuning approach.
Successfully opened terminal window and executed commands using this code
# Open an xterminal in colab
!pip install colab-xterm
%load_ext colabxterm
%xterm
#Then ran following commands in window
curl -fsSL https://ollama.com/install.sh | sh
ollama serve & ollama pull llama3 & oll
You can check any conditions you want in the Exit block, like:
if (TargetVessel==2) {
PrepareLoading.take(agent);
}
However, what happens to the agents that cannot be taken? You'd need some sort of control logic: most likely, you should only take from storage the agents that CAN be sent to the Exit block. That way you ensure every agent you take has finished storing.
Based on both answers above, this is the minimum code I could get to work on .NET 8.
//1. Add SwaggerUI
app.UseSwaggerUI(c =>
{
c.RoutePrefix = "api/something/else/swagger";
});
//2. Set BasePath
app.UsePathBase("/api/something/else");
//3. Add Swagger
app.UseSwagger();
Start your Spring Boot project from here:
get project.zip
Unzip the project.zip and you can find this pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.5.4</version>
<relativePath/>
</parent>
<groupId>com.emea</groupId>
<artifactId>project</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>project</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>21</java.version>
</properties>
<dependencies>
<dependency> <groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessorPaths>
<path>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
(1) Use the spring-boot-maven-plugin to build the Spring Boot application JAR. Do not use the maven-jar-plugin.
(2) In the &lt;dependencies&gt; section of your pom.xml, do not manually specify the versions of dependencies that are already managed by Spring Boot (e.g., 3.5.3, 1.18.38, 6.2.9).
(3) You can use the mvn dependency:tree command to identify the libraries and versions that are already included by Spring Boot.
package and run:
package
mvn clean package
run
java -jar target/project-0.0.1-SNAPSHOT.jar
then everything is ok!
2025-08-06T23:52:06.643+08:00 INFO 17710 --- [project] [ main] com.emea.project.ProjectApplication : Started ProjectApplication in 1.387 seconds (process running for 1.986)
To reproduce your issue, simply modify the pom.xml as follows:
add
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.3.0</version>
<configuration>
<archive>
<manifest>
<mainClass>${exec.mainClass}</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
remove
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
add
<exec.mainClass>com.emea.project.ProjectApplication</exec.mainClass>
into <properties>
modify ProjectApplication.java
like yours.
package and run:
package
mvn clean package
run
java -jar target/project-0.0.1-SNAPSHOT.jar
then get the same error:
$ java -jar target/project-0.0.1-SNAPSHOT.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
at com.emea.project.ProjectApplication.<clinit>(ProjectApplication.java:11)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
... 1 more
How to fix it? Replace the maven-jar-plugin with the spring-boot-maven-plugin, as described in (1) above.
On my end, I don't find the conversion to datetime mentioned by @piRSquared to be necessary. You can just do:
df[<column_name>] = df[<column_name>].astype(str)
df.to_dict('records')
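A minimal round-trip sketch (the column name "when" is made up for illustration):

```python
import json
import pandas as pd

df = pd.DataFrame({"when": pd.to_datetime(["2024-01-01 12:30:00",
                                           "2024-06-15 08:00:00"])})
df["when"] = df["when"].astype(str)  # Timestamps become plain strings
records = df.to_dict("records")
json.dumps(records)                  # serializes fine: no Timestamp objects left
```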
The solution is to check your Python version.
MediaPipe only supports:
Python 3.7 to 3.11
I was trying with Python 3.11.9, and it installed successfully.
Unsafe.AreSame is the less-unsafe equivalent of pointer equality.
I have no IT background, but from what I understand so far, a daemon is a background process that runs continuously to perform certain tasks for a client, whereas APIs are interfaces through which programs or applications communicate.
If you're using Unity Catalog, you can now query columns easily with:
SELECT table_schema, table_name, ordinal_position, column_name, data_type, full_data_type
FROM main.information_schema.columns
ORDER BY 1,2,3;
Where main is the name of your catalog. You can read more about it here.
The reason is this class, which changes the behaviour of Android classes by manipulating the bytecode.
For me, adding "read_private_products" capability in WooCommerce v10.0.4 allowed a customer user to be able to read the products endpoint in v3 (/wp-json/wc/v3/products)
Just pass the --js=true flag in your command.
Adding the following line to my config fixes it!
"editor.suggest.showReferences": false
with function row_count(tab_name in varchar2) return number as
rc number;
begin
execute immediate 'select count(*) from ' || tab_name into rc;
return rc;
end;
select table_name, row_count(table_name) as row_count from all_tables
where owner = 'owner';
/
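For comparison, the same pattern (a dynamic COUNT(*) per table name taken from the catalog) can be sketched in Python with SQLite; as with the EXECUTE IMMEDIATE above, interpolated table names must come from a trusted source:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a(x); INSERT INTO a VALUES (1),(2);
    CREATE TABLE b(x); INSERT INTO b VALUES (3);
""")

def row_count(conn, tab_name):
    # Dynamic SQL, like EXECUTE IMMEDIATE: only pass trusted table names
    return conn.execute(f'SELECT COUNT(*) FROM "{tab_name}"').fetchone()[0]

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
counts = {t: row_count(conn, t) for t in tables}
print(counts)  # {'a': 2, 'b': 1}
```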
I see annotations on all panels in Grafana v11.3.1 (64b556c137), with Grafana as the datasource for annotation queries, with these steps:
1. Create a manual annotation (point or range):
1.1 Click on a point in the dashboard - not on the time axis, but there has to be a tooltip open.
1.2 Select a range -> press CMD/Option (Mac) before releasing -> create a range annotation.
2. Go to Settings -> Annotations -> create a new annotation in Grafana -> leave Grafana as the source -> don't change anything.
3. Return to your dashboard: you see your initial manual annotation copied to all panels.