Answered in detail in the Rust repo.
I need to provide the WATCHOS_DEPLOYMENT_TARGET=9 environment variable when running cargo swift package. It fixes the warnings in Xcode.
Full command that solves the problem:
WATCHOS_DEPLOYMENT_TARGET=9 cargo run --manifest-path ../../cargo-swift/Cargo.toml swift package -p watchos -n WalletKitV3 -r
I found it, finally! The documentation was listed under the stdlib-types instead of variables.
toFloat() -> Float
Or you may need to unblock
PortableGit\mingw64\libexec\git-core\git-remote-https.exe in order
to fix the Git error "The revocation function was unable to check revocation for the certificate".
I had the same error on the secret variable definition. I made the mistake of indenting it incorrectly, as if the secrets depended on the services, when they don't.

services:
  # ...
  secrets:
    db_secret:
      file: .env.local

To fix it, move secrets to the top level:

services:
  # ...

secrets:
  db_secret:
    file: .env.local

An error as trivial as a missing semicolon...
Have you found a solution? ... I'm interested in something similar to freeze a page (JavaScript / React) for a desktop app. ...... Sorry if I didn't understand your question, but it exists; my research also went through the browser's -Kiosk command. Problem: it's the whole page that is frozen :), and I'm just looking for the display at 80%.
A service connection input must be used, even if you know all its details in advance.
put the , after the } in line 23
Had the same issue. The problem was spaces instead of tabs as indents (I copied and pasted Makefile to PyCharm, that's why it probably switched the symbols).
Switching indent symbols back solved the problem.
Your approach can be very effective for centralized, consistent, and systematic control of design values, especially for simple design systems with fewer breakpoints. However, it can become harder to manage as complexity grows or if the design system evolves. Consider clamp() for more fluid responsiveness, or component-level media queries for granular control over individual components. Both alternatives offer better flexibility and reduce some of the redundancy and potential confusion inherent in overriding tokens globally.
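For instance, a single fluid token can stand in for several breakpoint overrides; here is a minimal sketch of the clamp() approach (the token name and values are purely illustrative):

:root {
  /* Scales smoothly from 1rem up to 1.5rem as the viewport grows,
     so no per-breakpoint override of the token is needed. */
  --font-size-body: clamp(1rem, 0.75rem + 1vw, 1.5rem);
}

p {
  font-size: var(--font-size-body);
}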
Converting a comment to an answer: @adam-arold said it works.
@EventListener
public void onContextClosed(ContextClosedEvent event) {
closeAllEmitters();
}
private void closeAllEmitters() {
List<SseEmitter> allEmitters = emitters.values().stream()
.flatMap(List::stream)
.collect(Collectors.toList());
for (SseEmitter emitter : allEmitters) {
try {
emitter.complete();
} catch (Exception e) {
log.error("Error during emitter completing");
}
}
emitters.clear();
}
A few years back I had this problem.
We chose to forward the message from A to a new topic.
Now I am thinking about implementing a "smart" consumer:
With the help of a KafkaAdminClient (https://kafka-python.readthedocs.io/en/master/apidoc/KafkaAdminClient.html) you can get the current offset of the first group and get the messages up to that point.
Knowing your current and the other group's offset, it's possible to calculate a `max_records` for the manual poll method (https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html#kafka.KafkaConsumer.poll).
Still thinking about possible drawbacks, but I think it should work.
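A rough sketch of that idea with kafka-python (bootstrap servers, group names and the commit strategy are placeholders, not a tested implementation):

from kafka import KafkaAdminClient, KafkaConsumer

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="group-b",
    enable_auto_commit=False,
)

# Offsets already committed by the other ("leading") consumer group.
leader_offsets = admin.list_consumer_group_offsets("group-a")

for tp, meta in leader_offsets.items():
    consumer.assign([tp])
    lag = meta.offset - consumer.position(tp)  # how far we may still read
    if lag > 0:
        # Never poll past the point the other group has already reached.
        for batch in consumer.poll(timeout_ms=1000, max_records=lag).values():
            for record in batch:
                print(record.value)
        consumer.commit()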
I want to clear up a few things here where I think are people talking at cross purposes.
As stated above, the I register holds the most significant 8 bits of a jump vector, while the lowest 8 bits are supplied on the data bus when IM2 is enabled. However, on a standard ZX Spectrum, this is unused, and hence you will get an undefined value. However, IM2 is useful as it fires every screen refresh at a consistent interval (1/50 second), so it's ideal for logging time or some other background task, such as music.
The workaround for this is to supply a 257 byte table where every byte is the same, so when an IM2 interrupt is triggered, going to any random place in the table will give a consistent result. A full explanation is at http://www.breakintoprogram.co.uk/hardware/computers/zx-spectrum/interrupts, and one of many, many, many implementations of this is at https://ritchie333.github.io/monty/asm/50944.html
The R register is only really used internally for the DRAM refresh, but it can be programmed. One of its two main uses on the ZX Spectrum was to generate a random number (although there are other ways of doing this, such as multiplying by a largish number and taking the modulus of a large prime, which is what the Spectrum ROM does - https://skoolkid.github.io/rom/asm/25F8.html#2625). The other use was to produce a time based encryption vector, that was hard to crack as stopping for any debug would change the expected value of R and produce the wrong encryption key for the next byte. Quite common on old tape protection systems such as Speedlock.
How does the provided JavaScript and CSS code work together to create a responsive sliding navigation menu with an overlay effect that appears when the toggle button is clicked, and what are the key roles of the nav-open and active classes in achieving this behavior?
libxslt only works up to Node 18. You'll have to replace it with another library that does a similar job. I tried libxslt-wasm, which does a pretty similar job and runs with Node 22. If you're using TypeScript, that library doesn't compile with module commonjs. There is also xslt-processor, which covers the basics but is far more limited than libxslt.
The accepted answer is not clear enough. Here is what the official documentation states:
[...] data should always be passed separately and not as part of the SQL string itself. This is integral both to having adequate security against SQL injections as well as allowing the driver to have the best performance.
https://docs.sqlalchemy.org/en/20/glossary.html#term-bind-parameters
Meaning: SQLAlchemy queries are safe if you use ORM Mapped Classes instead of plain strings (raw SQL). You can find official documentation here.
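To make the quoted guidance concrete, here is a minimal sketch of bound parameters with text(); queries against ORM mapped classes bind their values the same way (table and values are illustrative):

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER, name TEXT)"))

    user_input = "alice'; DROP TABLE users; --"

    # Unsafe would be: f"SELECT * FROM users WHERE name = '{user_input}'"
    # Safe: the value travels as a bound parameter, separate from the SQL text.
    result = conn.execute(
        text("SELECT * FROM users WHERE name = :name"),
        {"name": user_input},
    )
    print(result.all())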
Multiple timeout layers (load balancer, ingress, Istio sidecars, HTTP client) can each cut off the call; it's not that the socket "reopens." To fix this, extend or disable the timeout at each layer, or break the long-running operation into an async or polling pattern.
Please follow these steps:
1- Alter your profile idle time with the below command:
ALTER PROFILE "profilename" LIMIT IDLE_TIME UNLIMITED
2- Make your user a member of that profile.
Python in Excel runs in Microsoft's cloud.
As stated in the documentation provided by Microsoft, the Python code you write with Python in Excel doesn't have access to your network or to your device and its files.
This is the correct code:
test_image_generator = test_data_gen.flow_from_directory(batch_size=batch_size, target_size=(IMG_HEIGHT, IMG_HEIGHT), directory=PATH, classes=['test'], shuffle=False)
After trying many different things, I still do not know the exact reason this is happening. However, I made some changes to my code and the problem disappeared.
There was a class in my code which was importing many other classes, which in turn used many 3rd-party service packages. It was implementing a factory pattern to create clients for each service. Moving the import statements from the top level into the code solved the problem.
For example:
I had:
import {LocalFileSystemManager} from "~~/server/lib/LocalFileSystemManager"
I replaced it with a function:
async filestore(path: string): Promise<FileSystemManager>
{
const runtime_config = useRuntimeConfig();
const {LocalFileSystemManager}: typeof import("~~/server/lib/LocalFileSystemManager") = await import("~~/server/lib/LocalFileSystemManager");
return new LocalFileSystemManager(path, this);
}
I think the problem is with the account connected to Meta Developer. This account is not verified, so you need to go to Meta Business Suite → Security Center and verify the business. I haven’t tested it yet, so I’m not completely sure.
From what I have seen, you must import next, and if it's not a Next project you won't have stats anyway.
In your .env file you can simply add MANAGE_PY_PATH=manage.py. This will solve the issue.
I learned about this from https://fizzylogic.nl/2024/09/28/running-django-tests-in-vscode
This error usually means the API URL is incorrect or returning an HTML error page instead of XML/SOAP. In Magento 1.9.3.3, make sure you're using the correct SOAP API v1/v2 endpoint (e.g., http://yourdomain.com/index.php/api/v2_soap/?wsdl), and that API access is enabled in the admin panel. Also, check for server issues like redirects, firewalls, or missing PHP SOAP extensions that might cause incomplete responses.
npm error Missing script: "dev"
npm error
npm error To see a list of scripts, run:
npm error npm run
npm error A complete log of this run can be found in: C:\Users\user\AppData\Local\npm-cache\_logs\2025-10-08T06_39_16_267Z-debug-0.log
This might help: https://github.com/GMPrakhar/MAUI-Designer. It seems to be an up-to-date project and it is free. I have not tried it.
Show the dialog only when
if (mounted)
or
if (context.mounted)
is true; that will fix the issue you are facing.
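A minimal sketch of the pattern (the async work and dialog content are just placeholders):

import 'package:flutter/material.dart';

Future<void> confirmDeletion(BuildContext context) async {
  await Future<void>.delayed(const Duration(seconds: 1)); // some async work

  if (!context.mounted) return; // the widget may have been disposed meanwhile

  await showDialog<void>(
    context: context,
    builder: (dialogContext) => const AlertDialog(title: Text('Done')),
  );
}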
The nonce primarily protects the integrity of the ID Token against replay, while the state parameter protects the client's callback endpoint from CSRF attacks.
See the comparison in table below:
| Feature | Nonce | State |
|---|---|---|
| Purpose | Primarily to prevent replay attacks by associating an ID Token with a specific authentication request. | Primarily to prevent Cross-Site Request Forgery (CSRF) attacks by maintaining state between the authentication request and the callback. |
| Who Validates and When? | Validated by the Client to ensure the ID Token belongs to the current session. The Authorization Server includes it in the ID Token but does not typically validate it against a stored value. | Validated by the Client to ensure the callback response corresponds to a legitimate, client-initiated request. The Authorization Server passes it through unmodified. |
| Inclusion | Included in the authentication request and returned within the ID Token. | Included in the authentication request and returned in the authorization response i.e. the redirection response. |
In most cases, this error is caused by an incorrect Fabric or Mixin setup in your IDE. Ensure that your Fabric API, mappings, and run configurations are compatible with your version of Minecraft.
I hadn't imported the Windows and UWP folders.
There is your mistake. Import everything; you need it all, even if you don't target those platforms.
You should use salesTotals = SalesTotals::construct(salesTable); inside your while loop. As it stands, you never update the salesTotals object as salesTable changes, so it always gives the same result.
Click 'View' and 'Open JupyterLab'; running the code in JupyterLab will fix this.
After one day, everything is now OK again and Galera Manager is displaying the nodes correctly once more. I haven't changed anything.
In my case, I used "fideloper/proxy": "^4.0" in Laravel 8.
In Laravel 9+, it is not necessary as it is built in; you are safe to remove the "fideloper/proxy" line from composer.json.
Just make sure you go to app/Http/Middleware/TrustProxies.php and modify the $headers:
protected $headers = Request::HEADER_X_FORWARDED_FOR | Request::HEADER_X_FORWARDED_HOST | Request::HEADER_X_FORWARDED_PORT | Request::HEADER_X_FORWARDED_PROTO | Request::HEADER_X_FORWARDED_AWS_ELB;
Cybrosys offers two primary methods for customizing the Odoo dashboard: technical development and using their "Odoo Dynamic Dashboard" module. The technical approach involves creating a custom module with Python, XML, and JavaScript (often using the Owl framework in newer Odoo versions) to define a client action, template the view (displaying tiles, charts, and tables), and use Odoo's ORM to fetch real-time data from any model, offering complete control over the layout and content. Alternatively, for non-developers, the commercial Odoo Dynamic Dashboard module provides a user-friendly interface to configure dashboard elements like dynamic charts and tiles, set filters, customize colors, and arrange the layout without needing to write code.
I also had the same issue. I deleted the pubspec.lock file and updated the image_picker package to version 1.1.2. It's working fine now.
After almost 5 hours of searching I realized I had just installed Dart SDK 3.9.4, and it might have been a bug during installation with the open files. So I deleted the file and created another one in File Explorer. I hate myself for losing the time :)
Browser-native validation messages are not part of the DOM and cannot be captured or dismissed using Selenium WebDriver.
validationMessage is a read-only JavaScript property that reflects the validation state of the element.
To fully validate the behaviour:
Use element.validity.valid to confirm the field's state.
Use element.validationMessage to get the human-readable error message.
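A minimal sketch with the Python bindings (the URL and selector are assumptions); both properties are read through execute_script because the native bubble never appears in the DOM:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # assumed page with a required <input>

field = driver.find_element(By.CSS_SELECTOR, "input[required]")
field.clear()  # leave the field invalid on purpose

is_valid = driver.execute_script("return arguments[0].validity.valid;", field)
message = driver.execute_script("return arguments[0].validationMessage;", field)

print(is_valid)  # False for an empty required field
print(message)   # e.g. "Please fill out this field."
driver.quit()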
In my case, I forgot to include the "#" prefix in the "data-bs-target" attribute.
Not working:
<button data-bs-toggle="modal" data-bs-target='modal-redeem'>Redeem</button>
Working:
<button data-bs-toggle="modal" data-bs-target='#modal-redeem'>Redeem</button>
What does the OG poster mean, "I have tested using breakpoints?" If you set breakpoints on the thread handling the request, your IDE will prevent the thread from progressing. So yes it will appear to hold the API call open indefinitely.
In case people still struggle with this, using a Mac the commands for the Cursor IDE are as follows:
Collapse all: CMD + R + 0 (zero)
Expand all: CMD + R + J
To collapse/expand only a class or method, click with your cursor on the class/method's name and then use these commands:
Collapse class/method etc.: CMD + R + [
Expand class/method etc.: CMD + R + ]
Short-lived JWT tokens are used for authenticating API requests and should not be stored persistently. The reason is that JWT tokens typically have short expiration times (e.g., 15 minutes to 1 hour), and storing them long-term poses security risks. If a JWT token is compromised (e.g., through a security vulnerability or device compromise), it can be misused until it expires.
Best Practice: Instead of storing JWT tokens, store Refresh Tokens, which are longer-lived and can be used to obtain new JWT tokens when they expire.
In a Kotlin Multiplatform (KMP) project, you should abstract the storage of Refresh Tokens in a way that is secure on both Android and iOS.
Android: Store the refresh token securely using Keystore or EncryptedSharedPreferences.
iOS: Use the Keychain to securely store the refresh token.
The JWT token is kept in memory and used temporarily for API requests, while the refresh token is stored securely on the device, ensuring that it can be used to obtain new JWT tokens when needed.
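A minimal common-code sketch of that split (names are illustrative); the Android implementation would sit on EncryptedSharedPreferences/Keystore and the iOS one on the Keychain:

// Platform-specific secure storage hides behind a small interface.
interface TokenStorage {
    fun saveRefreshToken(token: String)
    fun loadRefreshToken(): String?
    fun clear()
}

// The short-lived JWT stays in memory; only the refresh token is persisted.
class SessionManager(private val storage: TokenStorage) {
    private var accessToken: String? = null // never written to disk

    fun onLogin(access: String, refresh: String) {
        accessToken = access
        storage.saveRefreshToken(refresh)
    }

    fun onTokenRefreshed(newAccess: String) {
        accessToken = newAccess
    }

    fun onLogout() {
        accessToken = null
        storage.clear()
    }
}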
LOL. At this time there is no `@mui/material@"7.3.4"`. Back it up to 7.3.3 and it installs. I did not install x-date-pickers until everything else had installed.
This thread is 4 1/2 years old, but fuck it, I didn't see anyone else mention it so I will.
In this example the group in question has WriteOwner and WriteDACL rights. This means they can seize ownership of the AD object in question, and once they do the DACL does not matter anymore.
Additionally the group in question is the Administrators group, which means they can seize ownership of any AD object regardless of the DACL on it, much as local admin can seize ownership of any NTFS object. Once they seize ownership they can do whatever they want to.
Hence their "effective permissions" are GenericAll.
/end thread
Now they have started supporting groups
https://developers.facebook.com/docs/whatsapp/cloud-api/groups/
If you are here in 2025, it seems both backgroundColor and background are deprecated. Use surface instead.
final colorScheme = ColorScheme.fromSeed(
  seedColor: const Color.fromARGB(255, 56, 49, 66), // fromSeed requires a seedColor
  surface: const Color.fromARGB(255, 56, 49, 66),
);
final theme = ThemeData().copyWith(
  scaffoldBackgroundColor: colorScheme.surface,
);
Turns out queue_free() does not immediately delete the object. The logic I made did not account for objects continuing past the queue_free() call.
I had the same issue, until found Mapbox public styles on this page: https://docs.mapbox.com/api/maps/styles/
where you can click "Add to your studio" to start from there.
Styles page
All the layers within selected style are listed in the left pane of studio, where you can edit or add more layers, save and publish the style, and follow the official tutorial to add the style to QGIS or ArcMap. Then you should be able to see the loaded basemap.
Studio page
You may consider what was said in another question: mulesoft - mUnits and Error Handling - How to mock error error.muleMessage - Stack Overflow
Here is a practical example.
Consider this subflow to be tested with 100% coverage, where I need to evaluate the error from the HTTP Request like:
#[ ( error.errorMessage.attributes.statusCode == 400 ) and ( error.errorMessage.payload.message contains 'Account already exists!' ) ]
I will need a structure of HTTP Listener and HTTP Request during the MUnit test, with configurations specific to the MUnit Test Suite. ℹ️ It's important to keep them in the same file, as MUnit executes each file separately and can't see flows defined in other files inside src/test/munit.
<!-- 1. A dynamic port is reserved for the test listener to avoid conflicts. -->
<munit:dynamic-port
propertyName="munit.dynamic.port"
min="6000"
max="7000" />
<!-- 2. The listener runs on the dynamic port defined above. -->
<http:listener-config
name="MUnit_HTTP_Listener_config"
doc:name="HTTP Listener config">
<http:listener-connection
host="0.0.0.0"
port="${munit.dynamic.port}" />
</http:listener-config>
<!-- This request config targets the local listener. -->
<http:request-config name="MUnit_HTTP_Request_configuration">
<http:request-connection
host="localhost"
port="${munit.dynamic.port}" />
</http:request-config>
<!-- 3. This flow acts as the mock server. It receives requests from the utility flow and generates the desired HTTP response. -->
<flow name="munit-util-mock-http-error.listener">
<http:listener
doc:name="Listener"
config-ref="MUnit_HTTP_Listener_config"
path="/*">
<http:response
statusCode="#[(attributes.queryParams.statusCode default attributes.queryParams.httpStatus) default 200]"
reasonPhrase="#[attributes.queryParams.reasonPhrase]">
<http:headers>
<![CDATA[#[attributes.headers]]]>
</http:headers>
</http:response>
<http:error-response
statusCode="#[(attributes.queryParams.statusCode default attributes.queryParams.httpStatus) default 500]"
reasonPhrase="#[attributes.queryParams.reasonPhrase]">
<http:body>
<![CDATA[#[payload]]]>
</http:body>
<http:headers>
<![CDATA[#[attributes.headers]]]>
</http:headers>
</http:error-response>
</http:listener>
<logger
level="TRACE"
doc:name="doc: Listener Response will Return the payload/http status for the respective request that was made to mock" />
<!-- The listener simply returns whatever payload it received, but within an error response structure. -->
</flow>
<!-- 4. This is the reusable flow called by 'then-call'. Its job is to trigger the listener. -->
<flow name="munit-util-mock-http-error.req-based-on-vars.munitHttp">
<try doc:name="Try">
<http:request
config-ref="MUnit_HTTP_Request_configuration"
method="#[vars.munitHttp.method default 'GET']"
path="#[vars.munitHttp.path default '/']"
sendBodyMode="ALWAYS">
<!-- It passes body, headers and query params from a variable, allowing dynamic control over the mock's response. -->
<http:body>
<![CDATA[#[vars.munitBody]]]>
</http:body>
<http:headers>
<![CDATA[#[vars.munitHttp.headers default {}]]]>
</http:headers>
<http:query-params>
<![CDATA[#[vars.munitHttp.queryParams default {}]]]>
</http:query-params>
</http:request>
<!-- The error generated by the listener is naturally propagated back to the caller of this flow. -->
<error-handler>
<on-error-propagate doc:name="On Error Propagate">
<!-- Both error or success will remove the variables for mock, so it doesn't mess with the next operation in the flow/subflow that are being tested. -->
<remove-variable
doc:name="munitHttp"
variableName="munitHttp" />
<remove-variable
doc:name="munitBody"
variableName="munitBody" />
</on-error-propagate>
</error-handler>
</try>
<remove-variable
doc:name="munitHttp"
variableName="munitHttp" />
<remove-variable
doc:name="munitBody"
variableName="munitBody" />
</flow>
Then create the test and add both flows in the Enabled Flow Sources
For each mock, you will need to define a respective flow that makes the request using the suggested variables and creates the error response. Remember to define the then-call property to call it.
Here is an example flow:
<!-- 3. This flow acts as a test-specific setup, preparing the data for the mock. -->
<flow name="impl-test-suite.mock-http-req-external-400.flow">
<ee:transform
doc:name="munitHttp {queryParams: statusCode: 400 } } ; munitBody ;"
doc:id="904f4a7e-b23d-4aed-a4e1-f049c97434ef">
<ee:message></ee:message>
<ee:variables>
<!-- This variable will become the body of the error response. -->
<ee:set-variable variableName="munitBody">
<![CDATA[%dw 2.0 output application/json --- { message: "Account already exists!" }]]>
</ee:set-variable>
<!-- This variable passes the desired status code to the listener via query parameters. -->
<ee:set-variable variableName="munitHttp">
<![CDATA[%dw 2.0 output application/java ---
{
path : "/",
method: "GET",
queryParams: {
statusCode: 400,
},
}]]>
</ee:set-variable>
</ee:variables>
</ee:transform>
<!-- 4. Finally, call the reusable utility flow to trigger the mock listener. -->
<flow-ref
doc:name="FlowRef req-based-on-vars.munitHttp-flow"
name="munit-util-mock-http-error.req-based-on-vars.munitHttp" />
</flow>
Repository with this example: AndyDaSilva52/mule-example-munit-http-error: MuleSoft Example for MUnit test case that returns proper Mule error (i.e., HTTP:NOT_FOUND) with HTTP status code (i.e., 404 not found) and proper HTTP message body.
You could also try the new version of a library I wrote, which extracts the text of a PDF mixed with the tables at the target pages of the document.
It comes with a command line app example for extracting the tables of a Pdf into csv files.
You can try the library at this link:
If you have any problem with a table extraction, you can contact me at: [email protected]
Go to chrome extension store and install `YouTube Save-to-List Enhancer` to search and sort on playlists
I ended up creating an extension method which accesses the base CoreBuilder to invoke AddFileSystemOperationDocumentStorage.
public static class FusionGatewayBuilderExtensions
{
public static FusionGatewayBuilder AddFileSystemOperationDocumentStorage(
this FusionGatewayBuilder builder, string path)
{
ArgumentNullException.ThrowIfNull(builder);
builder.CoreBuilder.AddFileSystemOperationDocumentStorage(path);
return builder;
}
}
This works in bash on some Linux distros; not verified on all.
#### sed: note that the "!" negation does not always behave as expected in sed, so it is recommended to follow "!" with a { group of commands }
#### 1. sed: comment out lines that contain a specific text (search_string) and are not empty
sed '/^$/! {/search_string/{ s/^#*/#/g; }}'
# /^$/! : negates empty lines -> This is an address that matches all lines that are not empty.
# ^$ : matches an empty line.
# ! : inverts the match, so it applies to non-empty lines.
# {/search_string/ { s/^#*/#/g; }}
# {...} : groups a set of commands to be executed on the lines selected by the preceding address.
# /search_string/ : replace only in the lines that contain "search_string"
# { s/^#*/#/g; } : { new set of commands }
# s/^#*/#/g; : replace any leading run of "#" (possibly empty) with a single "#", i.e. add "#" at the front of the line if it isn't already commented
#### 2 . sed comment out lines that do not contain a specific text (search_string) and are not empty
sed '/^$/! {/search_string/! { s/^#*/#/g; }}'
# /^$/! : negates empty lines -> This is an address that matches all lines that are not empty.
# ^$ : matches an empty line.
# ! : inverts the match, so it applies to non-empty lines.
# {/search_string/! { s/^#*/#/g; }}
# {...} : groups a set of commands to be executed on the lines selected by the preceding address.
# /search_string/! : negates the lines containing search_string - so replace only in the lines that do not contain "search_string"
# { s/^#*/#/g; } : { new set of commands }
# s/^#*/#/g; : replace any leading run of "#" (possibly empty) with a single "#", i.e. add "#" at the front of the line if it isn't already commented
Where's the problem?
Put it in a picturebox that is ONLY as wide as the listbox minus the width of the scrollbar... then the scrollbar won't show because it's beyond the viewable area of the picturebox.
If you run 100 tests at 5% significance, even with perfectly normal data about 5 will fail by chance. With n = 100,000, the normality test is hypersensitive and will flag even tiny deviations. If you just want to stop seeing spurious failures, lower your sample size (e.g. n = 1,000 instead of 100,000).
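A small illustration of both points (the 0.3 contamination factor is just an arbitrary amount of skew for the demo):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100 columns of perfectly normal data: roughly 5 still "fail" at alpha = 0.05.
pvals = [stats.normaltest(rng.normal(size=1_000)).pvalue for _ in range(100)]
print(sum(p < 0.05 for p in pvals))

# A mildly skewed sample is flagged at n = 100_000 but usually not at n = 1_000.
skewed = rng.normal(size=100_000) + 0.3 * rng.exponential(size=100_000)
print(stats.normaltest(skewed).pvalue)          # effectively zero
print(stats.normaltest(skewed[:1_000]).pvalue)  # often above 0.05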
1.3.20 is the last version of the OpenSearch REST library that's compatible with OpenSearch, and that compatibility only works with OpenSearch 1.x. Compatibility with the Elasticsearch clients is broken with OpenSearch 2.x.
Try it out; it worked for me.
html body[data-scroll-locked] { overflow: visible !important; margin-right: 0 !important; }
The API you referenced only handles Banno institutions and is not intended to provide information about all institutions valid with the Fed. The Fed has a download (for a fee) of their entire database, or they offer this site to the public for free. The routing number can vary by ACH and Wire for the same institution.
I also struggled with updating arrays, especially nested ones. The root cause? It requires imperative code or query refetches. But what if you could have declarative array updates, almost as simple as plain objects?
For this, you can use normy, an automatic normalization library, which brings Apollo-like automatic normalization and data updates, but for anything, including REST. As a bonus, it supports array operations, even custom ones, so you can enjoy 100% automatic data updates for your whole app!
If you are interested, you can check it out here - https://github.com/klis87/normy
It is worth mentioning that it does not really affect how you write code; it has almost no API surface. And you can use it with any data fetching library, like `react-query`.
Thanks, and really awaiting any feedback!
Like Randy Fay said, $settings['file_private_path'] = '/var/www/html/privatefiles'; , but I just do $settings['file_private_path'] = '../privatefiles'; and it works too.
I am also facing the same issue.
Additionally, in my CMake file I am unable to find the .cmake file; I have tried everything from my side.
Please, can anyone help me set up the ArcGIS SDK for my Qt QML project?
I have already installed the SDK and run the config command.
The MSVC compiler is also installed and set up properly.
I am mainly facing problems with the imports and the CMake configuration.
So the limit of 6 tabs is enforced by the UITabBarController(), I believe. I could not find a way to amend this limit. A lone instance of a UITabBar(), however, will not place any tabs in a More tab, and will allow the developer to break the UI if so desired. My plan is to just implement the UITabBar() and trust the developer to ensure that each tab has the recommended minimum frame of 44x44 according to the HIG.
My code is based around enums because I find them convenient.
First I created a struct, TabIcon, to collect the icon data:
public struct TabIcon {
let title : String?
let icon : UIImage?
public init ( title : String , systemName: String ) { self.title = title ; self.icon = UIImage ( systemName: systemName ) }
public init ( systemName : String ) { self.title = nil ; self.icon = UIImage ( systemName: systemName ) }
public init ( title : String ) { self.title = title ; self.icon = nil }
}
Then I implemented the protocol, TabOption. Designed to be placed on enums:
public protocol TabOption: RawRepresentable , CaseIterable , Hashable , View where Self.RawValue == Int {
static var home: Self { get }
var tab: TabIcon { get }
}
( Notice it conforms to View. )
Each case of the enum is a potential Tab that can be navigated to.
I wrote an extension on the protocol to extract a UITabBarItem from each case of the enum.
fileprivate extension TabOption {
var tabItem: UITabBarItem {
UITabBarItem ( title: self.tab.title , image: self.tab.icon , tag: self.rawValue )
}
}
And finally, I created the UIViewRepresentable() responsible for implementing UITabBar() :
public struct CustomTabBar < Case: TabOption >: UIViewRepresentable {
@Binding var selection: Case
let items: [ UITabBarItem ]
public init ( selection: Binding < Case > ) {
self._selection = selection
self.items = Case.allCases.map { $0.tabItem }
}
public func makeUIView ( context: Context ) -> UITabBar {
let tabBar = UITabBar()
tabBar.items = items
tabBar.selectedItem = items [ selection.rawValue ]
tabBar.delegate = context.coordinator
return tabBar
}
public func updateUIView ( _ uiView: UITabBar , context: Context ) { }
public func makeCoordinator() -> Coordinator { Coordinator ( $selection ) }
public class Coordinator: NSObject , UITabBarDelegate {
@Binding var selection: Case
init ( _ selection: Binding < Case > ) { self._selection = selection }
public func tabBar ( _ tabBar: UITabBar , didSelect item: UITabBarItem ) {
selection = Case ( rawValue: item.tag ) ?? .home
}
}
}
It binds to a single instance of the protocol, and creates the TabBar() ( which has no limit on tabs. )
For Testing, I created an enum:
public enum Tab: Int , TabOption {
case home , two , three , four , five , six
public var tab: TabIcon {
switch self {
case .home: TabIcon ( title: "One" , systemName: "1.circle" )
case .two: TabIcon ( title: "Two" , systemName: "2.circle" )
case .three: TabIcon ( title: "three" , systemName: "3.circle" )
case .four: TabIcon ( title: "four" , systemName: "4.circle" )
case .five: TabIcon ( title: "settings" , systemName: "5.circle" )
case .six: TabIcon ( title: "more" , systemName: "6.circle" )
}
}
public var body: some View {
switch self {
case .home : Text ( "one" )
case .two : Image ( systemName: "star.fill" ).resizable().frame ( width: 70 , height: 70 )
case .three : Circle().fill ( .red )
case .four : Circle()
case .five : RoundedRectangle ( cornerRadius: 30 ).fill ( .blue ).padding ( 30 )
case .six : Rectangle()
}
}
}
It conforms to the TabOption protocol, is a view, and has a TabIcon value for each case.
I created a convenience struct that implements the view for the CustomTabView.
fileprivate struct CustomTabView < Case: TabOption > : View {
@State var selection: Case = .home
var body: some View {
VStack ( spacing: 0 ) {
self.selection .frame ( maxHeight: .infinity , alignment: .center )
CustomTabBar ( selection: $selection )
}
.ignoresSafeArea ( edges: .bottom )
}
}
And then for ultimate convenience, I implement an extension on the protocol calling the CustomTabView.
public extension TabOption {
static var tabView: some View { CustomTabView < Self > () }
}
Best Regards:
struct ContentView: View {
var body: some View {
Tab.tabView
}
}
A bit late to the party. But you can simply put this into your public/index.html
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
q)select `$"." sv' flip string (name;id) from tab
id
----
aa.1
bb.2
cc.3
The solution was to add tools:remove="android:maxSdkVersion" to the FINE location permission in the Manifest.
Like so:
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"
tools:remove="android:maxSdkVersion"/>
Solution by this answer
Yes, AppTransaction.shared is the right StoreKit 2 way to prove the app was obtained from the App Store. A .verified result means the JWS was cryptographically validated for your app and the device. That's why you keep seeing .verified on legitimate installs. It's not a "who is currently signed into the App Store" check.
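A minimal sketch of that check (error handling kept deliberately simple):

import StoreKit

func verifyAppStoreInstall() async {
    do {
        let result = try await AppTransaction.shared
        switch result {
        case .verified(let transaction):
            // The signed transaction passed StoreKit's verification for this app/device.
            print("Verified App Store install, original version:",
                  transaction.originalAppVersion)
        case .unverified(_, let error):
            print("JWS failed verification:", error)
        }
    } catch {
        print("Could not obtain the app transaction:", error)
    }
}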
Bounds checking isn't done by default in Vulkan. Enabling "Robust Buffer Access" can catch out-of-bounds accesses.
The "index became 0" effect you saw was likely a driver debug feature. DirectX and OpenGL behave similarly and don't guarantee automatic checks.
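For reference, a minimal sketch of opting in to robust buffer access at logical-device creation (device selection and queue setup are assumed to exist elsewhere):

#include <vulkan/vulkan.h>

/* Enable robustBufferAccess if the physical device supports it, so
   out-of-bounds buffer accesses become well-defined instead of undefined. */
VkDevice create_device_with_robust_access(VkPhysicalDevice physicalDevice,
                                          const VkDeviceQueueCreateInfo *queueInfo) {
    VkPhysicalDeviceFeatures supported;
    vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

    VkPhysicalDeviceFeatures enabled = {0};
    enabled.robustBufferAccess = supported.robustBufferAccess;

    VkDeviceCreateInfo info = {0};
    info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount = 1;
    info.pQueueCreateInfos = queueInfo;
    info.pEnabledFeatures = &enabled;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &info, NULL, &device);
    return device;
}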
Making the account identifier all lowercase worked for me... or so I think.
Found the solution. The sameSite value had to be set to "none" and secure had to be true in the cookie.
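A minimal sketch assuming an Express-style API (cookie name and value are placeholders):

import express from "express";

const app = express();

app.get("/login", (req, res) => {
  // Cross-site cookies need SameSite=None, which in turn requires Secure.
  res.cookie("session", "opaque-token-value", {
    httpOnly: true,
    secure: true,
    sameSite: "none",
  });
  res.send("cookie set");
});

app.listen(3000);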
Try the SQL DENSE_RANK() window function instead:
with a1 as (
select d.name as department, e.name as employee, e.salary as salary,
dense_rank() over (partition by d.name order by e.salary desc) as dense_ranked
from employee e join department d on e.departmentId=d.id
)
select department, employee, salary
from a1
where dense_ranked <= 3;
In Python 3.14, they added a new function to pdb:
awaitable pdb.set_trace_async(*, header=None, commands=None)
Now, you can call await pdb.set_trace_async() and you can await values with it.
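A minimal sketch of how it is used (the coroutine is just an example):

import asyncio
import pdb

async def fetch_value() -> int:
    await asyncio.sleep(0.1)
    return 42

async def main():
    # Pauses here; at the (Pdb) prompt you can now do: await fetch_value()
    await pdb.set_trace_async()
    print(await fetch_value())

asyncio.run(main())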
No, Delta implements ACID operations. OPTIMIZE is a type of operation, so it will either completely succeed or completely fail.
Depending on the type of OPTIMIZE statement you are running, the process can be idempotent (e.g. bin-packing) or not (e.g. Z-ORDER).
For the first question: a NuGet package has different builds for different target frameworks, such as .NET Framework 4.8, .NET 6, .NET 7, etc., so when we reinstall the library, even though the version is the same, the reinstall tells NuGet to pick the new target framework, e.g. lib/netstandardlibrary/mylibrary.dll.
For the second part: some libraries still point to the older folder location instead of the newer one, which may be due to a compatibility fallback; that is the version which is most compatible with the newer framework.
Bumping as I also have this issue, haven't seen it discussed anywhere, and haven't found a solution myself outside of manually checking for the static route name, i.e id === "list" inside the dynamic route
Yes, it's here:
https://www.npmjs.com/package/undetected-chromedriver-js
But I haven't tested it yet
As a result, I tried to set the version tag to 17.6 instead of latest. Everything worked. It will be necessary to read what has changed in the new major version...
name = input("GUDDI: ")
message = f"Happy Birthday, dear {GUDDI}! May all your wishes come true."
print(message)
According to the current documentation, it's not possible to directly use Azure AD (Entra ID) as an IdP in Entra External ID for corporate users. However, I found a workaround that can achieve a similar result.
You can leverage Azure AD B2C as an OIDC provider within Entra External ID. The flow would look like this:
Entra External ID → Azure AD B2C → Corporate Active Directory → Entra External ID
In this setup, corporate users authenticate through their usual Azure AD credentials, while External ID handles the authorization and user management on your side. This allows you to maintain a familiar login experience for corporate users even though direct IDP support isn’t available yet.
Looks tricky...
The error is explained in this Support page of IBM:
https://www.ibm.com/support/pages/unable-execute-commands-remotely-vio-server-padmin-user-ssh
Quote:
Question
Remote command execution by padmin user via ssh fails with not found error.
Answer
1) Example of remote command execution failing from a SSH client to the padmin user on a VIO server.
SSH Client:
# ssh padmin@<VIO server> ioscli ioslevel
rksh: ioscli: not found
# ssh padmin@<VIO server> ioscli lslparinfo
rksh: ioscli: not found
To allow remote command execution by padmin on VIOS do the following:
2) Get to the root prompt on the VIO server.
$ whoami
padmin
$ oem_setup_env
#
3) Link /usr/ios/cli/environment to /home/padmin/.ssh/environment.
# cat /usr/ios/cli/environment
PATH=/usr/ios/cli:/usr/ios/utils:/usr/ios/lpm/bin:/usr/ios/oem:/usr/ios/ldw/bin:$HOME
# ls -l /home/padmin/.ssh/environment (Link is not there).
/home/padmin/.ssh/environment not found
# cd /home/padmin/.ssh
# ln -s /usr/ios/cli/environment environment
lrwxrwxrwx 1 root system 24 Dec 19 08:28 /home/padmin/.ssh/environment -> /usr/ios/cli/environment
# ls -l /home/padmin/.ssh/environment
lrwxrwxrwx 1 root system 24 Dec 19 08:28 /home/padmin/.ssh/environment -> /usr/ios/cli/environment
4) Edit /etc/ssh/sshd_config. Uncomment the PermitUserEnvironment directive and change it from its default of no to yes.
# vi /etc/ssh/sshd_config
Change from:
#PermitUserEnvironment no
Change to:
PermitUserEnvironment yes
5) Stop and restart sshd
# stopsrc -s sshd
# startsrc -s sshd
6) Test ssh remote command execution from SSH client to VIO server as the padmin user.
# ssh padmin@<VIO server> ioscli ioslevel
2.2.2.1
# ssh padmin@<VIO server> ioscli lslparinfo
1 VIO-Server-1
Successfully executed remote command as padmin user via ssh.
NOTE-1: You can also configure SSH public/private keys between a SSH client and the VIO server for the padmin user to avoid having to supply the padmin password for each command execution.
NOTE-2: From sshd man page:
PermitUserEnvironment
Specifies whether ~/.ssh/environment and environment= options in ~/.ssh/authorized_keys are processed by sshd(8). The default is ''no''. Enabling environment processing may enable users to bypass access restrictions in some configurations using mechanisms such as LD_PRELOAD.
I often encounter this error on a work project. The fastest way I've found is to delete the simulator that the project was previously built on and create a new one.
This issue is tracked on Shadow side, and it's fixed by IDEA side. See
You are using the wrong token, most probably one that is intended for App only and not one for User Context, as stated in the result description. App-only tokens have access only to public data on X and are not bound to a specific user, which is why you can't post a tweet.
Take a look at this link, it has all you need to know.
https://docs.x.com/fundamentals/authentication/overview
Here's the most direct way of doing it:
ul:not(ul ul)
For Samsung users: I had the same issue of my device (Samsung A55, Android 15) not being recognized by my computer (Windows 11), so I had to install the Samsung USB Driver, and now the device is detected.
To implement address autofill in your WhatsApp Flows after the ZIP code is entered, the correct approach is to use the data_exchange action, triggered by form submission or by screen navigation, rather than on_select_action (which is not available for TextEntry/textInput components).
How to Achieve Address Autofill:
Once the ZIP code (zipCode) field is entered, submit the form or navigate to the next screen.
Configure the screen or form to use the WhatsApp Flows Data Endpoint (data_channel_uri). The form's data (including zipCode) is sent to your server via the data_exchange action.
Your server responds with the corresponding address information (street, city, state, etc.) in the data payload.
On returning to the next screen (or updating the same screen via dynamic properties), populate the remaining address fields using init-values set to dynamic data references, such as ${data.street}, ${data.city}, etc.
The typical flow:
User enters ZIP code.
User taps "Next" or "Lookup Address".
Form data is sent to your endpoint (data_exchange).
Server responds with address data.
Next screen (or same screen updated) loads with pre-filled address fields.
All very interesting above. Thank you.
But would it work with a scrolling background? I see lots of references to loading background images. I am a total noob but looking for a similar solution: a frosted logo, locked to the center of the page, that blurs the content scrolling below. This is all a little above my pay grade, so before I go deep down the rabbit hole I just wanted to check if it's even possible.
Thank you!
If you are able to connect to it using ODBC or SSMS but not through code and you continue to get <token-identified principal>, then you need to specify the database, aka the Initial Catalog.
You might have access to connect to a specific database but not the server, so specifying the database will allow you to connect successfully.
In Python or other languages, add it in the correct format:
f"Initial Catalog = mydatabase;"
There is no state, as the comments point out.
This is not possible. I know that is not the answer you would like, but that is the reality. Workbooks are meant to be shared only within the tenant. Any external user that needs access will have to be added to the tenant as a guest user.

Additionally, any user that views the workbook not only needs access to the workbook itself but to any data that the workbook uses. For example, if the workbook uses Log Analytics to query data, the user needs to have access to the data that is queried. If the user does not have that access, the workbook will either fail to visualize or not visualize anything. The same situation applies if you export the workbook and it is imported into another tenant. If the workbook is made dynamically, i.e. it does not tie to any specific resource in your tenant, it will also work when imported into other tenants as long as they have similar data.

Your best option is to use some other platform that has this kind of feature, or to build your own custom web application that pulls the same data and visualizes it. Of course, always be careful with visualizing any sensitive data publicly.
You’ll need the following Logic-App workflow to group the hourly records by borderID and format them into a single text block.
If you use this code/logic, the problem will be solved and you can easily use the final string to send an email (for example through the “Send an email” action).
The full working Logic-App JSON is available here:
For Bootstrap 5: a fix for Select2 in modals
$(document).ready(function(){
// Disable focus trap via data attribute
$('.modal').attr('data-bs-focus', 'false');
});
This does not look like a go-redis problem.
Since redis-cli returns the same error, it looks like your database does not have timeseries support. Which version of redis are you using?
One of Delta's features is ACID transactions when you commit your files, so what you are asking goes against this.
If you really want to do this, I would recommend having your data partitioned by customer_id, so that when you need to erase a specific client from history you just have to dump a specific partition (see the sketch after this list).
This involves two trade-offs:
you will experience slower requests if you have very few rows per customer_id and yet a large number of customer IDs
your requests have to always filter on customer_id (because you've just broken the mechanics of Delta by erasing a file that still exists from its point of view)
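A minimal PySpark sketch of that layout (paths and column names are illustrative, and it assumes the Delta Lake Spark extensions are configured):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.json("/data/incoming/orders")

# One folder per customer_id, so a whole customer can later be dropped
# by removing (and vacuuming) a single partition.
(df.write
   .format("delta")
   .partitionBy("customer_id")
   .mode("append")
   .save("/delta/orders"))

# Regular deletion still goes through Delta so the transaction log stays consistent:
spark.sql("DELETE FROM delta.`/delta/orders` WHERE customer_id = '42'")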
Ideally, the logout URL from login.microsoftonline.com will not destroy any existing access token; it will only refrain from providing new access tokens via refresh tokens.
The simple solution from the application-logout perspective is to destroy the access token and refresh token in the client cache/cookie.
You can also hit the Azure logout endpoint. This ends the sign-in session, ensuring that new access tokens will not be granted using a refresh token.
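For reference, a minimal browser-side sketch of hitting the v2.0 logout endpoint (tenant and redirect URI are placeholders):

const tenant = "your-tenant-id";
const postLogoutRedirectUri = encodeURIComponent("https://myapp.example.com/signed-out");

const logoutUrl =
  `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/logout` +
  `?post_logout_redirect_uri=${postLogoutRedirectUri}`;

// Clear local copies of the tokens first, then end the Entra ID browser session.
sessionStorage.clear();
window.location.href = logoutUrl;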