Since you mentioned in your update that it works when you navigate to the endpoint via the address bar, why not have the button act as a link that navigates there, instead of trying to force it through Ajax?
You can set the link target to "_blank" so that the opened link does not interrupt the current page.
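For example, the button can simply be an anchor (the endpoint path here is a placeholder):

```html
<!-- Hypothetical endpoint path; target="_blank" keeps the current page intact -->
<a href="/reports/export" target="_blank" rel="noopener">Export report</a>
```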
Use the serving-lifecycle hooks:
@app.before_serving: runs once per worker, right before the first request is accepted.
@app.after_serving: runs once on a clean shutdown.
Create the requests.Session in the first hook, stash it on the application object, and close it in the second.
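As a rough sketch of that pattern (this uses a stand-in app class so the example is self-contained; in real code the two decorators would be Quart's @app.before_serving / @app.after_serving and the dummy session would be a requests.Session):

```python
class DummySession:
    """Stand-in for requests.Session, so the sketch runs anywhere."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class App:
    """Minimal app object exposing before/after-serving hooks."""
    def __init__(self):
        self._before, self._after = [], []
    def before_serving(self, fn):
        self._before.append(fn)
        return fn
    def after_serving(self, fn):
        self._after.append(fn)
        return fn
    def run_lifecycle(self):
        for fn in self._before:   # once per worker, before the first request
            fn()
        # ... handle requests here, reusing app.session ...
        for fn in self._after:    # once on clean shutdown
            fn()

app = App()

@app.before_serving
def create_session():
    app.session = DummySession()  # real code: requests.Session()

@app.after_serving
def close_session():
    app.session.close()

app.run_lifecycle()
print(app.session.closed)  # True: the session was created and then closed
```

The point is that handlers reuse one long-lived session stored on the app instead of creating one per request.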
"react": [ "./node_modules/@types/react" ]
add this to the compiler options as stated by https://stackoverflow.com/users/2074763/amey-shirke
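In context, that entry lives under compilerOptions.paths in tsconfig.json (path entries are resolved relative to baseUrl), roughly:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "react": ["./node_modules/@types/react"]
    }
  }
}
```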
In your sync.py, try wrapping all ndb operations inside ndb.Client().context(), like:
from google.cloud import ndb

client = ndb.Client()
with client.context():
    collection_dbs, collection_cursor = model.Collection.get_dbs(order='name')
You can refer to this documentation on Python 3 version of ndb client library.
Also, make sure that your service account has the proper permissions to query Datastore.
Here are the key differences between them:
USE_CONCAT:
This hint instructs the optimizer to transform OR conditions into a series of UNION ALL operations. This can improve performance by allowing the optimizer to handle each part of the OR condition separately.
It is particularly useful when the OR conditions involve different columns or when the selectivity of the conditions varies significantly.
OR_EXPAND:
Similar to USE_CONCAT, this hint also transforms OR conditions into UNION ALL operations. However, it is more aggressive in its approach.
OR_EXPAND is typically used when the optimizer might not automatically choose to expand the OR conditions, but you want to force it to do so for performance reasons.
In summary, both hints aim to optimize queries with OR conditions by converting them into UNION ALL operations, but they differ in their aggressiveness and specific use cases.
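As a sketch (the table and columns are hypothetical), a query that benefits from these hints might look like:

```sql
-- Hypothetical table/columns, for illustration only.
SELECT /*+ USE_CONCAT */ *
FROM   orders
WHERE  customer_id = :cust
   OR  order_date  > :cutoff;

-- The optimizer treats this roughly as the following (in the real
-- transformation, duplicate rows are filtered with LNNVL):
SELECT * FROM orders WHERE customer_id = :cust
UNION ALL
SELECT * FROM orders WHERE order_date > :cutoff
  AND LNNVL(customer_id = :cust);

-- OR_EXPAND is requested the same way: SELECT /*+ OR_EXPAND */ ...
```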
I found out that I need to sudo apt install python3-bpfcc
Once installed, I can run the hello.py eBPF program both with my OS's default Python and with the venv Python (don't forget the sudo, though):
sudo python3 hello.py
To select random items from a list:
import random
numbers = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
random_elements = random.sample(numbers, 4)
print(random_elements)
Apparently "shinyjs::hidden()" updates css to "display:none;" which makes the button unclickable. After a small modification, it started working. Thanks @Mikko and @smartse for your input. Below is the working example with single click download using ExtendedTask feature:
library(shiny)
library(bslib)
library(future)
library(promises)
future::plan(multisession)
ui <- fluidPage(
shinyjs::useShinyjs(),
titlePanel("Cars Data"),
textOutput("time"),
bslib::input_task_button("export", "Export", icon = icon("file-export")),
downloadButton("download", "Download", style = "position: absolute; left: -9999px; top: -9999px;")
)
server <- function(input, output, session) {
# Just to prove UI is not blocked.
output$time <- renderText({
invalidateLater(1000)
format(Sys.time())
})
# Task that prepares a file with the data for download.
export_task <- ExtendedTask$new(function(file) {
promises::future_promise({
data <- mtcars
Sys.sleep(2)
write.csv(data, file)
file
})
}) |> bslib::bind_task_button("export")
# Set up a reusable file for this session's download data.
download_content_path <- tempfile("download_content")
observeEvent(input$export, export_task$invoke(download_content_path))
# Show download button only when file is ready.
observe({
if (export_task$status() == "success") {
showNotification("Your download is ready.")
shinyjs::click("download")
}
})
# Handle download with file prepared by task.
output$download <- downloadHandler(
filename = function() {
paste0("Data-", Sys.time(), ".csv")
},
content = function(file) {
file.rename(export_task$result(), file)
}
)
}
shinyApp(ui = ui, server = server)
If you need to execute something after DOM has loaded do this:
await page.evaluateOnNewDocument(() => {
window.addEventListener('DOMContentLoaded', function() {
// do DOM stuff here
});
});
Yes, inversion of control violates the encapsulation principle. Sigh, people are committing the appeal-to-authority fallacy so much here. Just because some authority says SOLID is good doesn't mean it definitely is good. And even if you use SOLID in a good way, that doesn't mean it can't throw a wrecking ball into something else, like encapsulation.
I've used IOC many times. I think it is very useful for unit testing but if unit testing didn't exist, I would get rid of IOC in a heartbeat. It makes code messy and hard to trace precisely because it violates encapsulation. Knowledge about how to operate that was once encapsulated is now extracted and put outside of that encapsulation boundary.
It looks like you're using inconsistent capitalization in your JavaScript, so change Lightbox to lightbox throughout. Also, update the line img.src = img.src to img.src = e.target.src to correctly display the clicked image in the lightbox.
Honestly, I would like to help, but I don't understand the language the question was asked in.
header 1 | header 2 |
---|---|
cell 1 | cell 2 |
cell 3 | cell 4 |
It is not possible to use Key Vault references like that in a local environment. These kinds of references only work inside Azure.
coverage.py tries hard not to measure code in site-packages, because generally that is third-party code that you don't want to measure. In addition, you are using --cov=src, which means to measure the code in the src directory. Manually copying code into site-packages seems very unusual.
Perhaps you want to use pip install -e . to install your working directory as the library's runnable code?
.net 8, visual studio 17.10.5, Blazor server
<button @onclick=@(args => m_ClickMe(xrow.ID))>@xrow.ShowText</button>
Thanks to engineersmnky, I was able to solve my problem.
From this issue on GitHub, https://github.com/brianmario/mysql2/issues/1379, I found that new releases of mariadb-connector-c (from late 2024, after the latest version of mysql2 as of now, which is 0.5.6 from 28/02/2024) now force, by default, a TLS connection to the remote server, or at least verification of the remote server's certificate. However, a variable named MARIADB_TLS_DISABLE_PEER_VERIFICATION was added, which is not initialized by mysql2 and serves to deactivate the TLS certificate verification. (https://mariadb.com/kb/en/mariadb-connector-c-3-4-3-release-notes/)
So, a good way to solve this issue and skip the TLS verification phase is to set this environment variable at the start of your program:
ENV['MARIADB_TLS_DISABLE_PEER_VERIFICATION'] = '1'
This makes mariadb-connector-c skip TLS peer verification. Now TLS verification is not activated, and I can connect to my remote MySQL server without needing an SSL certificate or a TLS connection.
@snnsnn Thanks for the answer. It really cleared my concepts of how things are done by the compiler. Here is a brief for helping others as per my understanding and an extra example to try on Solid Playground:
import { render } from "solid-js/web";
import { createSignal, For } from "solid-js"
function Paragraph(props:{
info:string
}){
const {info} = props // reactivity breaks: we unwrap the getter's return value into a static variable
return <div style={{margin:"5px"}}>
<p style={{"background-color": info}}>{info}</p> {/* the component function runs only once, so info caches the getter's value from the initial render */}
<p style={{"background-color": props.info}}>{props.info}</p> {/* reactivity works because we access a getter that calls the signal for us */}
</div>
}
function App() {
const [info,setInfo] = createSignal("red")
return (
<>
<select
value={info()}
onInput={(e) => setInfo(e.currentTarget.value)}
>
<For each={["red","green","orange","pink","blue","yellow"]}>
{(color) => <option value={color}>{color}</option>}
</For>
</select>
{/* You can directly pass the signal call too; Solid will handle it by creating getters on the props object! */}
<Paragraph info={info()}/> {/* even though we pass a string, Solid still wraps it in a getter */}
{/*
Proof: _$createComponent(Paragraph, {
get info() {
return info();
}
}
)
*/}
</>
)
}
render(() => <App />, document.getElementById("app")!);
Did you ever figure out the solution? In a similar boat as you.
I realized that I was doing:
import { InteractionResponseType } from 'discord-interactions';
and not:
import { InteractionResponseType } from 'discord-api-types/v10';
So when I did DAPI.InteractionResponseType, it used the enum in discord-api-types, but without the DAPI. prefix it used the discord-interactions one, which apparently is not compatible.
I decided that given the number of images I have to deal with and the effect I want to create, I would be better off rotating the images in Photoshop.
I am having the exact same problem, where for a 3 seconds clip, the "normal clip" write takes 1.6 second but the "reversed clip" write takes 16.2 seconds. This is very annoying.
You’re supposed to use git-format-patch(1) for that.
Can you provide your code sample? There is no way to find a solution without looking into code.
A 403 Forbidden error when accessing the Shiprocket token API typically means your request was authenticated but the user does not have permission to access the resource. Check with the Shiprocket team.
One suggestion: the Shiprocket API authentication token is valid for 10 days from the time of generation. After obtaining the token, store it securely in a database or an in-memory cache and reuse it for subsequent API requests. When the token expires, generate and store a new token for continued access.
Happy coding !!
Thanks for the tip! I am so glad I can finally create a desktop shortcut with electron forge...
However, the only thing that didn't work was the icon for the shortcut app. But oh well we'll see. I'm glad I am already this far.
The problem may be an incorrect route in the routes file (a route error, not Ziggy).
In my case it was wrong syntax for an invokable controller.
Laravel 11.9 with Jetstream 5.2 & Inertia 1.0.
No, XPath 2.0 is not fully backwards compatible. Some expressions will behave differently when processed with an XPath 2.0 processor. There does exist an XPath 1.0 compatibility mode, but even with this mode enabled, there are some incompatibilities. See this documentation for a complete list of breaking changes.
I recently had a need to integrate Stripe, and my concern brought me here. I think he wants to build a page where he doesn't have to collect card information from the customer: just send a request to Stripe and let them collect the card details.
pre-commit is meant for validation, not for changing the commit.
Using pre-commit to delete or add files is effectively undefined behavior: it may or may not work.
Will any of the client side commit hooks ...
commit-msg: Yes. [1]
pre-commit: No.
Can someone point me to some documentation?
githooks(5) says that
When the PostgreSQL server doesn't shut down cleanly, it often leaves a stale postmaster.pid file behind; this file prevents PostgreSQL from starting again.
Navigate to the PostgreSQL data directory and delete the postmaster.pid file:
rm /opt/homebrew/var/postgresql@15/postmaster.pid
⚠️ Replace 15 with your actual PostgreSQL version.
If you're not sure which version you're using, you can find the correct path using tab completion:
cd /opt/homebrew/var/
Then press Tab twice; it should show a directory like postgresql@14, postgresql@15, etc. Go into that directory and delete the postmaster.pid:
rm /opt/homebrew/var/postgresql@<your_version>/postmaster.pid
Restart the PostgreSQL service:
brew services restart postgresql@<your_version>
As of May 2025, using GCC 12.2.0, this issue still persists. I also discovered this issue by accident and discovered, unwittingly, a useless workaround that might shed more light on the problem for anyone hunting down the bug in GCC. Before finding this thread, and before knowing std::vector was the cause, I discovered that by commenting out all uses of std::vector in a module being imported into main.cpp, not only could I compile successfully, but re-introducing the lines that used std::vector and re-attempting to compile would also succeed, and the output program would work just fine. I have no idea why, and it makes zero sense.
Yes, it's possible to see a performance difference between VS Code and CLion. CLion is optimized for C++ development, especially debugging. In contrast, VS Code relies on extensions that can slow things down, particularly with GDB/LLDB. The slower performance in VS Code could be due to debugger setup, IntelliSense indexing, or configuration issues.
You can also use a debug_template.hpp file for debugging, like I do in competitive programming, to speed up the debugging process. Simply download this file, add the header to the file where you want to debug, and use debug(var)
to debug any variable (any STL data structure also).
For more help with C++ debugging in VS Code, check out this link on setting up debugging in VS Code here.
One-line URI link (Android and iOS):
snapchat://add/<username>
Or, as you asked, an intent deep-linking URI (Android only):
intent://add/<username>#Intent;package=com.android.snapchat;scheme=snapchat;end
TableColumn's setGraphic method. You can pass any Node as the parameter.
In your question, you assumed that the "flags" were hexadecimal strings, but I didn’t see anything stating that. They are likely just 30 ASCII characters each, which means they represent 30 raw bytes, not hex. This changes how you calculate the key and IV.
The instructions suggest using the first 8 bytes of each flag to build the key, and the last 8 bytes of each flag to build the IV. Following this logic, I got:
Key: 02c1ef508796f06789510d845b7c1a98
IV: 4d2d1014e8d2dbc92c43d04083d12b9c
Using that key and IV, I was able to decrypt the message like this:
echo "6dU2tgevONWUv6ZWu+84g7E4r4dKOfBxRiY3jnMf2m1aE4r1AZcOztzEKtwve2z211vOnoiXWJTGWTG6wQxibFDw+tVI8hAGwQMqYqeG963g+wz2ppMP+byEcvAgfwvmLrsgm/+nLFxCeKLWYy/e625RmmNEU06s1Dz6izYXX1PNiYn+JAcZQnS1N5KiuvjX1u2qWAIkAPY2H5/BO25vEg==" | base64 -d > cipher.bin
openssl enc -d -aes-256-cbc -in cipher.bin -out decryp.txt -K 02c1ef508796f06789510d845b7c1a98 -iv 4d2d1014e8d2dbc92c43d04083d12b9c
Result: Congratulations! You have completed our first challenge. The final code is 976f01ec317fd664e34ab18a360a43f7888e9065. Please, send it back to us.
Is there a master GPO recommendation for Windows servers to allow unattended flows? We keep adding new GPO policies for RDP timeout, remote server access, application timeout, service account timeout... I haven't been able to locate a document detailing GPO recommendations.
I am able to send the email with the options but when I select the option I want (Approve/Reject) there is no reply.
Office365Outlook.SendMailWithOptions("https://www.outlook.com",
{
To: "[email protected]",
Subject: "This Is My Options Email Title",
Options: "Approve,Reject",
HeaderText:"Approval Selection",
SelectionText: "Please select 'Approve' or 'Reject' for the new tool",
Body: "See attached the new tool approval request",
Importance: "Low",
Attachments:Blank(),
UseOnlyHTMLMessage: true,
HideHTMLMessage: true,
HideMicrosoftFooter: true,
ShowHTMLConfirmationDialog: true
}
);
Instead of opening VSCode from WSL, you should open VSCode in windows and then have it connect to your WSL via the button in the bottom left:
You may also need to install the WSL extension in VSCode.
I'd imagine that this should solve both of your problems.
Give the table a uniqueness guarantee so duplicates physically can’t happen.
Use an UPSERT (INSERT … ON CONFLICT) with RETURNING so you know whether the row was really inserted.
Map that to HTTP status codes.
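Sketched in Postgres syntax (the table and column names are hypothetical):

```sql
-- 1. Uniqueness guarantee: duplicates physically can't happen.
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);

-- 2. UPSERT with RETURNING: a row comes back only if it was really inserted.
INSERT INTO users (email, name)
VALUES ('a@example.com', 'Ann')
ON CONFLICT (email) DO NOTHING
RETURNING id;

-- 3. Map to HTTP:   row returned -> 201 Created
--               no row returned -> 409 Conflict (or 200 with the existing row)
```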
The web page at intent://www_link?url=https%3A%2F%2Fm.facebook.com%2F&wtsid=liup_01MOqbnyfDhRx55Ax&referrer=app_growth_upsell_id%3Dbloks_fb4a_sticky_banner_upsell%26app_growth_impression_id%3D01MOqbnyfDhRx55Ax%26utm_campaign%3Dmweb_upsells%26utm_source%3Dbloks_fb4a_sticky_banner_upsell%26salvb%3Dfalse#Intent;scheme=fb;package=com.facebook.katana;S.app_growth_impression_id=01MOqbnyfDhRx55Ax;S.custom_data=salvb%3Dfalse;S.impression_id=01MOqbnyfDhRx55Ax;S.app_growth_upsell_id=bloks_fb4a_sticky_banner_upsell;S.utm_id=bloks_fb4a_sticky_banner_upsell;S.utm_source=bloks_fb4a_sticky_banner_upsell;S.utm_campaign=mweb_upsells;S.market_referrer=app_growth_upsell_id%3Dbloks_fb4a_sticky_banner_upsell%26app_growth_impression_id%3D01MOqbnyfDhRx55Ax%26utm_campaign%3Dmweb_upsells%26utm_source%3Dbloks_fb4a_sticky_banner_upsell%26salvb%3Dfalse;end could not be loaded because:
net::ERR_UNKNOWN_URL_SCHEME
Did you find a solution? I'm having the same issue.
One can do this as below:
thread No: (${__threadNum}) - loop - ${__jm__login__idx} ${__machineIP}
NOTE: Replace login with the name of your thread group.
__threadNum will print the thread number
__jm__login__idx will print the loop number
__machineIP will print the IP of the computer
I faced the same issue while trying to write data into an Excel sheet.
I found that the cell size was too small to fit the data, so I dragged the cell to increase its size (or you can use wrap text for the cell), saved it, and processed the file, and then it worked fine for me.
Run:
psql -U apple -d postgres
It works for me.
I just resolved a similar issue by reading the kAXRoleAttribute of the AXUIElement. I discuss this at length in my answer here. The tl;dr is that I suspect it's a matter of lazy initialization. Reading the role attribute triggers initialization, but adding the observer does not.
Assuming you're facing the same issue, I would try updating your code to the following:
extension pid_t {
var focussedField: AnyObject? {
let axUIElementApplication = AXUIElementCreateApplication(self)
// Read and print the application's role field; this seems to trigger initialization of the Accessibility tree.
if let role = axUIElementApplication.role() { print("Application Role \(role)") }
var focussedField: AnyObject?
let result = AXUIElementCopyAttributeValue(axUIElementApplication, kAXFocusedUIElementAttribute as CFString, &focussedField)
guard result == .success else {
logger("Failed to get focussedField \(result.rawValue)", source: .pid)
Events.PIDExtensionFocusedFieldError(code: result.rawValue).sendEvent()
return nil
}
return focussedField
}
}
// Add these Accessor helpers too, if you don't already have them!
extension AXUIElement {
func role() -> String? {
return self.attribute(forAttribute: kAXRoleAttribute as CFString) as? String
}
func attribute(forAttribute attribute: CFString) -> Any? {
var value: CFTypeRef?
let result = AXUIElementCopyAttributeValue(self, attribute, &value)
if result == .success {
return value
} else {
return nil
}
}
}
You should look at the Message Control Register of the C167 CAN controller. There is a flag to prevent sending a message while updating.
Bit | Function |
---|---|
CPUUPD | CPU Update (this bit applies to transmit objects only!). Indicates that the corresponding message object may not be transmitted now. The CPU sets this bit in order to inhibit the transmission of a message that is currently being updated, or to control the automatic response to remote requests. |
Set this flag before you change your data and reset it afterwards.
Well, it's a true Heisenbug: the bug disappeared when I added logging to the relevant functions! Through a painful ablation process, I determined that the 'fixing' log was:
if let role = appElement.role() { print("Role: \(role)")}
While it's impossible to know what's going on under the hood in Accessibility APIs, this strongly implies that it's a matter of lazy initialization. Adding an observer or reading child elements does not trigger the initialization, but somehow reading the kAXRoleAttribute does. Strangely, reading the kAXTitleAttribute didn't work: there's something special about role. Opening the Accessibility Inspector must also have the same effect.
After reading and printing the role, the kAXSelectedTextChangedNotifications start coming through correctly. Moreover, reading the kAXSelectedTextAttribute on the Application's AXUIElement returns the proper value (instead of nil, before). A whole host of other Accessibility-related logic that was previously broken also started working.
So the fix is simple: just read out the Role attribute. You can store the role in an unused variable if you don't want the print statement. The compiler will complain about the unused variable, but hey, you can please some of the people, some of the time.
let role = appElement.role()
For completeness, the 'role()' function in my sample code is a helper function that reads the kAXRoleAttribute, per the popular AXUIElement+Accessors extension pattern:
func role() -> String? {
return self.attribute(forAttribute: kAXRoleAttribute as CFString) as? String
}
func attribute(forAttribute attribute: CFString) -> Any? {
var value: CFTypeRef?
let result = AXUIElementCopyAttributeValue(self, attribute, &value)
if result == .success {
return value
} else {
return nil
}
}
This can happen because of corrupted files in the venv, or because the packages are not installed in the venv.
To fix this, try recreating the venv, activating it in the terminal, and then installing the packages.
You can use x-vercel-protection-bypass. This can be set up via Protection Bypass for Automation. Then pass this variable as a query parameter in the Stripe settings.
 | Azure SQL | Azure Cosmos |
---|---|---|
Data Model | Relational tables, T-SQL, strong schema, ACID transactions. | Schemaless JSON documents (or MongoDB/Cassandra/Gremlin/Table models); multi-model, vector support. |
Scale | “Scale up” (vCores/DTUs) with optional read-scale-out or geo-replicas. | “Scale out” automatically via physical partitions; virtually unlimited throughput & storage. |
Consistency | Strict (snapshot, serializable, etc.). | Five tunable levels (Strong → Eventual). |
Pricing unit | vCore / DTU / serverless per-second; long-running transactions encouraged. | Request Units (RU/s) for reads, writes & queries; optimize for small atomic operations. |
When to pick | OLTP/OLAP apps that need joins, stored procs, mature relational tooling. | Globally distributed, high-throughput, low-latency micro-services, IoT, gaming, personalisation, etc. |
Latency & SLA | Single-region HA SLA 99.99 %; write latency measured in ms – tens ms. | Multi-region (reads & writes) SLA 99.999 %; P99 <10 ms reads/writes in region. |
Sources: https://learn.microsoft.com/en-us/azure/azure-sql/database/?view=azuresql
https://github.com/minio/minio/issues/8007#issuecomment-2044634015 suggests that using MINIO_SERVER_URL and MINIO_BROWSER_REDIRECT_URL is the mechanism for this scenario.
It returns a bean that was instantiated with a constructor (new MyServiceImpl()), so it's not an "anonymous object" but a named bean in Spring.
That bean is stored in the Spring context for later use, under the same name or class.
An anonymous object is a different entity. I think here you mean something like mymethod(new InterfaceName() { ...implementation... }).
In other words, you are comparing apples and oranges.
To make it anonymous, your notation would have to look like:
@Bean public MyService myService() { return new MyServiceInterface() { /* ... do something ... */ }; }
Azure SQL is a fully managed relational database service provided by Microsoft Azure. It allows for the creation, management, and scaling of SQL databases in the cloud. Azure SQL supports engines like SQL Server, MySQL, and PostgreSQL, making it a flexible solution for traditional relational workloads. It uses a predefined schema and provides strong consistency, ideal for applications requiring complex queries and ACID transactions. Cosmos DB is a globally distributed, multi-model NoSQL database service. It supports various data models like key-value, document, graph, and column-family, providing a flexible schema design. Cosmos DB offers low latency and high throughput, and can scale horizontally with automatic partitioning, making it suitable for globally distributed applications, real-time analytics, and use cases like IoT, gaming, and microservices.
I know this is old, but for me I noticed VSC was stuck on Android: Analyzing environment. Opening Activity Monitor and killing the adb process fixed it.
adb often gets stuck. I had the same issue with Android Studio where it couldn't find attached devices.
I believe option 3 should work fine; this applies a background color to every child of the div:
<div class="[&>*]:bg-red-400">content</div>
I know this thread is old, but does anybody know of a way of getting the theme information on an Azure DevOps Server 2020.1 (on-premises) and not the Service?
It was determined that this is the error message that is returned when a user's email address is not set in the UserInfo object. The way I was creating users did not set this field, so the user also could not be retrieved with GetUserByEmail. If a valid email is used, it does not return an ADMIN_ONLY_OPTION error.
If you're trying to compare a file that is not in the Solution Explorer (for example, you have extracted a file from some other git branch to a temporary folder), you can open the external file in some editor, select all, and copy. Then go into your project and find the same file in Solution Explorer.
Paste in the external file's contents, then have Git compare current to unmodified. Press Ctrl-Z to undo the paste when done looking at the diff.
Created a NodeJS Shell that can be used as default shell.
Install (requires Node.js and npm):
npm install -g biensure-nsh
Usage (to try):
nsh
If you like it, you can edit /etc/passwd to change your /usr/bin/bash to (in my case) /home/administrator/.npm-global/bin/nsh
For more information: https://github.com/biensurerodezee/nsh
Have you checked whether your system has the Microsoft ODBC drivers installed?
This could be one of the reasons you are getting the issue.
import ru from "../../../../node_modules/flowbite-datepicker/js/i18n/locales/ru.js";
const $datePickersEl = document.querySelector('#datepicker-actions');
const DatepickerOptions = {
language: "ru",
};
Datepicker.locales.ru = ru.ru;
const myDate = new Datepicker($datePickersEl, DatepickerOptions);
Medusa offers various customization solutions natively. You'll be able to add widgets to "native"/"core" pages as well as new pages that will be injected into the sidebar.
For more information, check this link from the official documentation: https://docs.medusajs.com/learn/fundamentals/admin/ui-routes#content
However, if you really want to customize the sidebar as you wish, you'll need to fork the package from the Medusa repo, which is simply a Vite + React application that you can run as a standalone app.
You can find the package here : https://github.com/medusajs/medusa/tree/v2.8.0/packages/admin/dashboard
If you need more help, you can find more information here: https://docs.perseides.org/guides/v2/customize-admin-ui/standalone
I also got this issue in my Ionic 6.5.6 / Angular 16 project. Up to Angular 15, I had been using the following, which worked fine:
"angular2-signaturepad": "2.8.0"
After upgrading to Angular 16, that version gives an error, so I upgraded to 3.0.4:
"angular2-signaturepad": "^3.0.4"
For me this syntax worked:
import('../some-json-file.json', { with: { type: 'json' } })
reference: https://github.com/eslint/eslint/discussions/15305#discussioncomment-10389214
I was helped by @marCi002's answer, but it has become much more straightforward now, so I think this should be more than a comment:
Edit your ##YOUR_REPOSITIORY##\.dart_tool\chrome-device\Default\Preferences file. (It should exist; if it does not exist yet, try to run the target using Chrome at least once.)
Change the value of the key you want (currentDockState to undocked, for me).
Enjoy on the next launch of Chrome through Android Studio.
From my understanding, this ##YOUR_REPOSITIORY##\.dart_tool\chrome-device\Default directory is the template that is used each time you launch a debug session from Android Studio. So it's a trick, but it allows you to change some settings.
My main DBA returned and we confirmed it's a permission issue as I suspected after granting and revoking db_owner role.
Yes, you can check this link; it is helpful for solving your question. This is the official Shopify document:
https://help.shopify.com/en/manual/products/details/cart-permalink#customize-a-checkout-link
A new problem arose: after changing the DisplayMode setting to DisplayMode.View, the border of the data card disappears and won't come back, even after changing the border settings of that specific data card. Any suggestions?
It's because of the cStandard parameter, which is probably set to c11. Go to c_cpp_properties.json and change that parameter to gnu17 or gnu23. You can look for the file using the Command Palette.
I'm looking for the answer of this question as well. Did you figure out how to do this?
Alright, this took me longer to figure out than I'm willing to admit.
The code works fine. The problem is that I have to tap and hold the UIPasteControl for a very long time. I assumed it behaved like any Button and I would just have to tap it. Even when I did some longer taps, nothing happened.
In my opinion pressing a button for 2 seconds is very unintuitive, but maybe I'm in the wrong.
If the code contains np.void(), you have to tell Python what "np" is, so you have to import numpy as np. If you only do import numpy, you have to use numpy.void(), or if you do from numpy import void, you can just do void().
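All three spellings bind the same object; a quick check:

```python
import numpy as np      # binds the module to the alias "np"
import numpy            # binds the full module name
from numpy import void  # binds just the one attribute

# All three names refer to the very same type object:
print(np.void is numpy.void is void)  # True
```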
I am sorry to be violating the stackoverflow norms supposedly. This is not an answer. I can't comment due to low reputation as I have never posted/answered.
I know this answer will get so many downvotes, but I am fine with that. I want to ask you about how you fixed the spring boot, swagger, date serialization issue, asked in one of your previous questions.
What did you actually do to ensure that the date serialization and the swagger UI both worked properly without conflicting each other?
My date is also coming back in epoch units instead of an ISO string. The thing that was causing the issue was a webconverterconfig thing; on commenting it out, Swagger stopped working.
Please let me know how to tackle this issue.
Yes, there is a formula for calculating the number of parameters in a Conv2DTranspose (a.k.a. transposed convolution or deconvolution) layer, and it follows the same logic as a standard Conv2D layer: params = kernel_h × kernel_w × in_channels × filters, plus filters more if the layer uses a bias.
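Concretely, the count can be checked by hand; to my knowledge this matches how Keras reports parameters for a Conv2DTranspose layer with use_bias=True:

```python
# Parameter count for a Conv2DTranspose layer, same formula as Conv2D:
#   params = kernel_h * kernel_w * in_channels * filters (+ filters if bias)
def conv2d_transpose_params(kernel_h, kernel_w, in_channels, filters, bias=True):
    return kernel_h * kernel_w * in_channels * filters + (filters if bias else 0)

# e.g. Conv2DTranspose(filters=16, kernel_size=(3, 3)) on a 3-channel input:
print(conv2d_transpose_params(3, 3, 3, 16))  # 448  (3*3*3*16 + 16)
```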
Thanks. I thought it should be something easy, but not that easy :-). Thanks a lot for your investigation of this case. I was going crazy because this didn't work properly. Good hint to check the documentation more carefully.
Kind regards
Already in progress but taking forever
First of all, for Form1_Load to be executed, you have to set the startup object to be Form1.
It is intended to be set in the GUI of the IDE (not by manually editing the module code), in: Project | Properties | Application.
Moreover, your Form1_Load code Handles MyBase.Load, but I cannot see where MyBase is defined. So just to be sure: when executing the program, Form1 will be created, and the system will trigger Form1_Load.
BTW, it's normal for functions called by system events to show 0 references (references are counted when explicitly called by other pieces of code you write).
I think you should check whether you have captured enough faces with respect to the number of neighbors.
import cv2

haar_file = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(haar_file)

# Reducing resolution
webcam = cv2.VideoCapture(0)
webcam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # width
webcam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)  # height

while True:
    retorno, frame = webcam.read()
    if not retorno:
        print("No frame captured")  # check whether the frame was captured
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print("Grayscaled")
    faces = face_cascade.detectMultiScale(gray, 1.3, 3, flags=cv2.CASCADE_SCALE_IMAGE)
    print("Faces detected")
    if len(faces) == 0:
        print("No faces detected")  # check whether any faces are detected for the number of neighbors
        continue
    for (x1, y1, x2, y2) in faces:  # x2 and y2 are width and height
        moldura_face = gray[y1:y1 + y2, x1:x1 + x2]
        # Draw on frame (the displayed image), with corner = origin + size
        cv2.rectangle(frame, (x1, y1), (x1 + x2, y1 + y2), (0, 255, 255), 2)
        moldura_face = cv2.resize(moldura_face, (48, 48))
        # cv2.putText(im, prediction_label)
        cv2.putText(frame, '% s' % ('prediction_label'), (x1 // 2, y1 // 2),
                    cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (0, 0, 255))
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

webcam.release()
cv2.destroyAllWindows()
These examples are about SELECT. But what about the question?
As I understand it, the goal is to add such a column to the table. Okay, we can decide how to UPDATE the current table.
But what about future insertions? This field would have to be calculated programmatically, or with insert/update triggers.
What is the purpose of this field? If it is only for uniqueness, that can be achieved with an index on the two fields.
If the type and content of this field do not matter, it is much easier to just concatenate the fields with some separator, or to multiply according to the maximum number of planned records. If there will be 1,000,000 records, it can be aValue*1000000+bValue.
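The multiplication trick can be sketched in plain Python (the multiplier is my own choice here; it only stays unique while bValue is below that multiplier):

```python
MAX_ROWS = 1_000_000  # must be larger than any possible bValue

def composite_key(a_value, b_value):
    # Pack the two values into one unique integer
    assert 0 <= b_value < MAX_ROWS
    return a_value * MAX_ROWS + b_value

def split_key(key):
    # Recover the original pair when needed
    return divmod(key, MAX_ROWS)

key = composite_key(42, 7)
print(key)             # 42000007
print(split_key(key))  # (42, 7)
```

The same arithmetic works in SQL as a computed column or in a trigger.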
We have our domain hosted with Azure. I just went to DNS Zones -> my domain -> Settings -> DNS Management -> Recordsets, and added a TXT record with the value from Google, with the name @ (i.e. @.mydomain.com).
This sounds like an example of the Branch Predictor making a mistake.
Without the `if` statement, the code has no branches. Add in some branches and the branch predictor has to start guessing which branch will run. And sometimes it gets it wrong, causing a performance penalty.
I am experiencing the same issue; however, so far it only seems to happen with Microsoft-hosted emails. With Zoho-hosted emails it works fine. Nothing changes in the code aside from the recipient address.
Does anyone have any hints?
In my case, the problem was that I tried to mock a class from another repo, and within the code of that class another class was used from yet another repo that wasn't imported in the repo I ran the test in. Adding the missing import solved it.
Hope it helps someone.
Adding java.nio export to pom.xml did not fix it.
Adding this export to vm options did fix it.
I just discovered the following:
If you write many large chunks sequentially to a file residing on an SMB server by issuing WriteAsync() calls, and then call Dispose() on the FileStream that was used for writing, this can take many seconds.
What's worse: DisposeAsync() does not behave any better!
$url = $_GET['url'];
echo preg_replace("#\\\#ui", "/", $url); // regex version: use three backslashes in the pattern
echo '<br>or<br>';
echo str_replace("\\", "/", $url); // plain replace: use two backslashes
Maybe you want to use --revision-as-HEAD, e.g. repo manifest --revision-as-HEAD --output-file=manifest-with-commitids.xml
Update react-native-contacts: run npm install react-native-contacts@latest.
Use JDK 17: install JDK 17 and set JAVA_HOME in android/gradle.properties with org.gradle.java.home=/path/to/jdk17.
Update build.gradle: set compileSdkVersion = 34, targetSdkVersion = 34, and buildToolsVersion = "34.0.0" in android/build.gradle.
Upgrade Gradle: in gradle-wrapper.properties use gradle-8.3-bin.zip, and use classpath("com.android.tools.build:gradle:8.3.0") in android/build.gradle.
Remove manual linking: delete any react-native-contacts entries in MainApplication.java and settings.gradle.
Clean & rebuild: run npx react-native clean && cd android && ./gradlew clean && cd .. && npx react-native run-android.
If the issue still exists, check for duplicate dependencies in android/app/build.gradle, or run ./gradlew assembleDebug --stacktrace for details.
Adding the line below to the affected activity in the manifest file fixed it for me:
android:launchMode="singleInstance"
To view the total installs of your app on the Google Play Console (as of May 13, 2025), follow these steps:
Visit https://play.google.com/console and sign in.
Select your app from the list to open its dashboard.
Scroll to the bottom of the dashboard page.
Click on the "Select KPI" button.
In the list of available KPIs, find "Total Installs" and click the "Add" button.
After adding it, the total installs will appear directly on your app’s dashboard for quick reference.
If you want to assign ROW_NUMBER() based on [rowNum], [aValue], and [bValue] (all three as grouping keys):
SELECT
    *,
    ROW_NUMBER() OVER (
        PARTITION BY rowNum, aValue, bValue
        ORDER BY Id
    ) AS rn
FROM #temptable;
Source is a dependency property on your small image, so you can just bind the tooltip image's Source to that. As @Clemens suggested, you might consider binding both to a view model property.
<Image Name="lastImage" Height="400" Width="400" Source="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType=Image}, Path=Source}"/>
I encountered the same issue, and the solution (in my case) was stupidly simple, of course only after becoming more aware of what I was doing: I simply switched to the certificate with the private key. At first I used one of the generated certificates without being aware of whether it was the public or the private one; I just chose the first certificate I could download on the phone. But after I downloaded the private one, everything worked fine. Good luck.
I've been struggling with this issue all morning.
Fix: replace all ^ with ~ in the package.json file for all Expo-related packages.
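A minimal sketch of what that change looks like in package.json (the package names and version numbers here are made up for illustration, not taken from the question):

```json
{
  "dependencies": {
    "expo": "~52.0.0",
    "expo-font": "~13.0.2",
    "expo-status-bar": "~2.0.0"
  }
}
```

The ~ range only allows patch updates, while ^ also allows minor version bumps, which is often what pulls in mismatched Expo packages.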
I had the same issue; I solved it by updating Prisma:
npm update @prisma/client @auth/prisma-adapter
I totally get where you’re coming from — I was in the same spot a while back when I first started looking into how streaming works beyond just using a video tag with a file URL.
What you’ve built so far is actually a basic form of progressive download, where the browser downloads the video file and starts playing it once there's enough buffered — but it's not true streaming in the sense platforms like YouTube use.
If you're dealing with on-demand videos and want better control and performance, setting up a basic video streaming server that supports HLS can be a great next step. You don't need something like RED5 unless you're going into live streaming; for local video-on-demand, a simple setup using a web server (like NGINX or Apache) and pre-segmented HLS files can do the trick. Tools like FFmpeg can help you convert your videos into the right HLS format.
It’s a bit of a learning curve at first, but once you get the basics of HLS and how a player like Video.js or hls.js integrates with it, things start to click. Keep going — you’re actually on the right track!
What fixed this problem when I hit it was adding the following to my AndroidManifest.xml file:
<application
    android:name=".VariantApp"
where "VariantApp" is the name of the class that extends android.app.Application in my project.
In my case, at least, I had added a dependency on Koin for dependency injection and that caused the issue to appear.
It looks like this has changed significantly since the original post 15 years ago, especially with the "'zero-cost' exceptions" in SuperNova's answer. For my current project, I care more about lookup speed and errors than `1 / 0` errors, so I'm looking into that. I found this blog post doing exactly what I wanted, but in Python 2.7. I updated the test to 3.13 (Windows 10, i9-9900k), with results below.
This compares checking key existence with `if key in d` to using a `try: except:` block.
'''
The case where the key does not exist:
100 iterations:
with_try (0.016 ms)
with_try_exc (0.016 ms)
without_try (0.003 ms)
without_try_not (0.002 ms)
1,000,000 iterations:
with_try (152.643 ms)
with_try_exc (179.345 ms)
without_try (29.765 ms)
without_try_not (32.795 ms)
The case where the key does exist:
100 iterations:
exists_unsafe (0.005 ms)
exists_with_try (0.003 ms)
exists_with_try_exc (0.003 ms)
exists_without_try (0.005 ms)
exists_without_try_not (0.004 ms)
1,000,000 iterations:
exists_unsafe (29.763 ms)
exists_with_try (30.970 ms)
exists_with_try_exc (30.733 ms)
exists_without_try (46.288 ms)
exists_without_try_not (46.221 ms)
'''
It looks like the `try` block has a very small overhead: when the key exists, an unsafe access and a `try` access cost the same. Using `in` has to hash the key once for the check and again for the access, so that redundant operation slows real usage by ~30%. When the key does not exist, the `try` costs about 5x the `in` check, which costs the same in either case.
So it does come back to what you expect: if misses are rare, use `try`; if they are common, use `in`.
And here's the code
import time

def time_me(function):
    def wrap(*arg):
        start = time.time()
        r = function(*arg)
        end = time.time()
        print("%s (%0.3f ms)" % (function.__name__, (end - start) * 1000))
        return r
    return wrap

# Not existing
@time_me
def with_try(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        try:
            get = d['notexist']
        except:
            pass

@time_me
def with_try_exc(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        try:
            get = d['notexist']
        except Exception as e:
            pass

@time_me
def without_try(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        if 'notexist' in d:
            pass
        else:
            pass

@time_me
def without_try_not(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        if not 'notexist' in d:
            pass
        else:
            pass

# Existing
@time_me
def exists_with_try(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        try:
            get = d['somekey']
        except:
            pass

@time_me
def exists_unsafe(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        get = d['somekey']

@time_me
def exists_with_try_exc(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        try:
            get = d['somekey']
        except Exception as e:
            pass

@time_me
def exists_without_try(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        if 'somekey' in d:
            get = d['somekey']
        else:
            pass

@time_me
def exists_without_try_not(iterations):
    d = {'somekey': 123}
    for i in range(0, iterations):
        if not 'somekey' in d:
            pass
        else:
            get = d['somekey']

print("The case where the key does not exist:")
print("100 iterations:")
with_try(100)
with_try_exc(100)
without_try(100)
without_try_not(100)

print("\n1,000,000 iterations:")
with_try(1000000)
with_try_exc(1000000)
without_try(1000000)
without_try_not(1000000)

print("\n\nThe case where the key does exist:")
print("100 iterations:")
exists_unsafe(100)
exists_with_try(100)
exists_with_try_exc(100)
exists_without_try(100)
exists_without_try_not(100)

print("\n1,000,000 iterations:")
exists_unsafe(1000000)
exists_with_try(1000000)
exists_with_try_exc(1000000)
exists_without_try(1000000)
exists_without_try_not(1000000)