I am also experiencing this issue on Windows. Following this question for a solution.
Is the following code written inside the tag?
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
Recently I found this web page: https://medium.com/@python-javascript-php-html-css/outlook-add-ins-retrieving-the-original-email-address-fb117202791c
It contains an interesting code snippet:
Office.onReady(() => {
  // Ensure the environment is Outlook before proceeding
  if (Office.context.mailbox.item) {
    Office.context.mailbox.item.onMessageCompose.addAsync((eventArgs) => {
      const item = eventArgs.item;
      // Get the itemId of the original message
      item.getInitializationContextAsync((result) => {
        if (result.status === Office.AsyncResultStatus.Succeeded) {
          console.log('Original Item ID:', result.value.itemId);
        } else {
          console.error('Error fetching original item ID:', result.error);
        }
      });
    });
  }
});
You said it was a single Spartacus project, but let me share our own experience in case it gives you an idea. We did two different Spartacus projects and one backend project.
When you use multiple sites, the baseSite value is appended to the user IDs on the hybris side, so user sessions are not mixed.
For example:
[email protected]|site1
[email protected]|site2
For build/deploy operations, you will define each site in manifest.json: https://help.sap.com/docs/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/b2f400d4c0414461a4bb7e115dccd779/1c26045800fa4f85a9d49e5a614e5c22.html
That is because you're not importing the repository module in the domain pom, so the domain doesn't know about any implementation.
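For illustration, this is the kind of dependency that would have to appear in the domain pom for the domain to see the repository implementation (the coordinates below are hypothetical, not from the asker's project):

```xml
<!-- Hypothetical coordinates: add to the domain pom only if the domain
     is actually meant to depend on the repository implementation. -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>repository</artifactId>
    <version>${project.version}</version>
</dependency>
```

Keeping this dependency out is often intentional, so the domain stays free of implementation details.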
Sorry, I can't add a comment now. This problem doesn't reproduce in my environment. So I would like to know more details (library versions, device, and so on).
You need to add the following line to your code before the line containing fill:
await expect(pageFixture.page.locator("//input[@formcontrolname='min_quantum']")).toBeVisible();
It makes sure the element is visible before the fill action is used.
If you are using QUEUE_CONNECTION=database, please remove it and use QUEUE_CONNECTION=sync; the event-triggering part will then work.
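For illustration, the relevant .env change would look like this (assuming a standard Laravel setup; remember to clear the cached config afterwards, e.g. with php artisan config:clear):

```
# .env
QUEUE_CONNECTION=sync
```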
Make sure these components are installed for Visual Studio 2022
MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest)
MSVC v143 - VS 2022 C++ ARM64/ARM64EC build tools (Latest)
Windows 11 SDK (10.0.22000.0)
Visual Studio Installer -> Visual Studio 2022 (Modify button) -> Tab Individual Components
Filter by "MSVC v143" or "Windows 11 SDK"
This error sometimes indicates that EPPlus is unable to properly read the Excel document, and the issue could be due to a corrupt Excel file, rather than a permission or disk issue as the error might misleadingly suggest.
After encountering this issue myself, here’s what I did to resolve it:
Open the Excel file in Microsoft Excel.
Copy all the contents of the sheet
Create a new blank Excel workbook.
Paste values only into the new workbook (Right-click > Paste Special > Values).
Save the new workbook and upload (the new one) again.
I was only able to detect this because I tried another library, SlapKit.Excel, which displayed a more user-friendly error message.
I have found a solution. I register the converter within my Module.cs file, which implements IModule (from the Prism framework). I guess you could also register it within the App.xaml.cs file, but that file has no access to my converter file.
public class InfoModule : IModule
{
    public void OnInitialized(IContainerProvider containerProvider)
    {
        ConnectionStatusToColorConverter converter = containerProvider.Resolve<ConnectionStatusToColorConverter>();
        Application.Current.Resources.Add(nameof(ConnectionStatusToColorConverter), converter);
    }

    public void RegisterTypes(IContainerRegistry containerRegistry)
    {
        containerRegistry.RegisterSingleton<ConnectionStatusToColorConverter>();
    }
}
In the end, I had to remove the creation of the converter within my XAML, since creating it in XAML constructs a new converter via the parameterless constructor. And when I want to use the converter, I still reference it with {StaticResource ConnectionStatusToColorConverter} - so like before.
<UserControl.Resources>
<!-- this has to be removed -->
<localConverters:ConnectionStatusToColorConverter x:Key="ConnectionStatusToColorConverter"/>
</UserControl.Resources>
<!-- Example Usage -->
<StackPanel Grid.Column="2" Orientation="Horizontal" VerticalAlignment="Center">
<md:PackIcon Kind="Web" VerticalAlignment="Center"
Foreground="{Binding ConnectionStatus, Converter={StaticResource ConnectionStatusToColorConverter}}"/>
</StackPanel>
I am not quite sure if this is something like the anti-patterns mentioned above, but it now works exactly as I wanted.
Check the job category. In one case, the job was assigned a category ID that was not listed in the categories table (that customized category had been deleted during a cleanup).
When the category ID in the sysjobs table was updated to an existing category ID, the job appeared in SSMS again.
According to Android's official documentation, you can use the AndroidX JavaScriptEngine to evaluate WASM, but there is no WebView demo: https://developer.android.com/develop/ui/views/layout/webapps/jsengine
I have a similar issue; it seems it does not work :/
The Buildpacks latest and 22 images do support Node 22.x.x. For some reason, the automatic Cloud Build setup uses the old v1 image. Link to the builders: https://cloud.google.com/docs/buildpacks/builders
You can change this by going to Cloud Build and finding the trigger that builds your application. Click the inline editor for the cloudbuild YAML defined for you. In one of the steps you should see a reference to a v1 image; change that to gcr.io/buildpacks/builder:google-22 and you should be good.
This seems to work fine within a test_that expression. Can this cause any problems?
random_function <- function(in_dir) {
  return(file.access(in_dir, 0))
}

testthat::test_that("Base function is mocked correctly", {
  testthat::expect_true(random_function("random/directory/that/does/not/exist") == -1)
  testthat::local_mocked_bindings(
    file.access = function(in_dir, mode) {
      return(0)
    }, .package = "base")
  testthat::expect_true(random_function("random/directory/that/does/not/exist") == 0)
})

testthat::test_that("Base function is not mocked anymore", {
  testthat::expect_true(random_function("random/directory/that/does/not/exist") == -1)
})
In a similar fashion
> have[endsWith(names(have), '1')]
ID...1 Month...1
1 1 1
2 2 2
3 3 3
However, as I stated in a comment below the question, please provide more of your steps so we can tackle the code that generates the (odd) names. I think we should fight the cause, not cure the symptoms.
Ensure the dependency class has the @Injectable() decorator.
You can passively make them smaller by moving them to the top:
File -> Preferences -> Settings -> Workbench -> Appearance -> Activity Bar: Location
and set this to top.
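If you prefer editing settings.json directly, the same option can be set there (the setting ID below is from recent VS Code releases):

```json
{
    "workbench.activityBar.location": "top"
}
```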
I found the answer. It turns out I just needed to make the contents I don't want the same color as the background to hide them:
@page {
@top-center {
content: '';
color: transparent;
}
@bottom-right {
content: '' counter(page) ' / ' counter(pages);
margin-bottom: 15px;
margin-right: 10px;
}
}
eval() can be dangerous and unreliable in certain situations, especially if user input isn't tightly controlled. It also doesn't gracefully handle symbolic algebra like solving for a variable in an equation.
If you're aiming for more algebraic manipulation (e.g., solving x + 5 = 10 for x), you should use sympy, a powerful symbolic mathematics library in Python.
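As a quick illustration of the sympy approach (a minimal sketch, assuming sympy is installed):

```python
from sympy import Eq, solve, symbols

x = symbols("x")

# Solve x + 5 = 10 symbolically instead of eval()-ing a string.
solutions = solve(Eq(x + 5, 10), x)
print(solutions)  # [5]
```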
Ideally, complex objects should be sent through a @PostMapping, as shown below:
@PostMapping("/quotes")
void getQuotes(@RequestBody QuoteRequest request) {}
But if you really want to send the data in a GET request as mentioned in the other comment, the URL has to be http://localhost:8080/api/v1/quotes?topics[]= ... Since Spring cannot parse the data for you, you need to parse it yourself, for example using ObjectMapper as shown below:
@GetMapping("/quotes")
void getQuotes(@RequestParam("topics") String topics) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    List<Topic> topicList = mapper.readValue(topics, new TypeReference<List<Topic>>() {});
    // your logic...
}
Then, if you send the request in Swagger like this: { "topics": [{ "name": "test", "amount": 0 }] }, it will work.
In the 8.7.1 Zeebe API, the Zeebe properties are prefixed slightly differently:
camunda.client.zeebe.grpc-address (8.7.1)
zeebe.client.broker.grpc-address (older)
The issue was resolved after using the Zeebe connection property from the latest library.
I had a problem just like this. My solution was to delete/wipe data in Android Studio to give the emulator more space, and it worked!
Why not just Thread.Sleep(displayWindowsTickInterval)?
Hi Marija
Azure Lighthouse is recommended here, it can be used to manage multiple tenants and delegate access. This allows you to see Sentinel workspace telemetry and data from one tenant to another. Ensure that you have set up Azure Lighthouse correctly to manage the cross-tenant log ingestion.
Here is a guide: https://learn.microsoft.com/en-us/azure/sentinel/extend-sentinel-across-workspaces-tenants#manage-workspaces-across-tenants
One way to achieve this is to create a build configuration template with the failure conditions, and attach that to all the build configurations (possible since 2017.2). But a quite large downside to this is the fact that you can't attach a template to a template, so that would require having to do that to all the different configurations.
If your data source is a pandas DataFrame, then you can use the pandas API:
document = QTextDocument()
document.setHtml(dataframe.to_html())
document.print(printer)
Open package.json and set the version of "react-native-pager-view" to "6.5.1".
Then run: npm install react-native-pager-view@6.5.1
This is a bug in the "npx create-expo-app" command: it uses the latest version of react-native-pager-view, which does not match "react-native-tab-view": "^4.0.10".
Hey guys, has anyone hit this on macOS? I am getting the same error in a macOS app for a share extension.
[April 2025]
I recently faced a similar issue.
Note: My app status was unreviewed, and I only wanted to test with Internal Testing.
The Play Console setup was complete, with all items struck through. But I was still seeing this error.
In my case, it was just a warning, not an error.
After sharing the build link with the tester, they were unable to view the build on the Google Play Store.
All I had to do was:
Go to the Internal Testing tab > Testers tab > and get the URL under the title [How testers join your test]. The tester was prompted to accept the invite to test the app, and builds started working with the same URL, which was not available earlier.
In order to get the build URL, go to:
Internal Testing > Releases tab > Show summary > Click on the version code > In the new screen, select the Downloads tab > Copy the shareable link.
In the Builder Pattern, the key concept is separating the construction process of a complex object from its final representation, so that the same construction process can create different representations. Applying this idea to Imperial Builders, here’s how the separation works in their construction projects:
In the Builder Pattern, the Director controls the construction process using a predefined sequence.
At Imperial Builders, this role is represented by the project manager or architect, who works closely with the client to define:
Project goals (e.g., residential villa, commercial plaza)
Style and theme (classic, modern, etc.)
Budget and timeline
Required features (basement, smart systems, etc.)
They orchestrate the entire build by coordinating with different teams while sticking to the vision.
The Builder defines how each part of the object (in this case, the building) is created step by step.
For Imperial Builders, this is the engineering and construction team, who work on:
Laying the foundation
Building the grey structure
Installing electrical/plumbing systems
Applying finishes like marble, woodwork, and paint
Each step is modular and can be reused or modified for other buildings.
The final Product is the completed house or building.
At Imperial Builders, this could be:
A custom-designed 1 Kanal home in DHA
A commercial plaza in Lahore
A turnkey villa with interior and exterior finishing
Different homes may share a similar process but result in unique representations, just like the Builder Pattern's principle.
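The three roles above can be sketched in code. Here is a minimal Python sketch; class names like HouseBuilder and ProjectManager are illustrative, not from any real Imperial Builders system:

```python
class House:
    """The Product: the completed building."""
    def __init__(self):
        self.parts = []

    def describe(self) -> str:
        return " -> ".join(self.parts)


class HouseBuilder:
    """The Builder: performs each modular construction step."""
    def __init__(self):
        self.house = House()

    def lay_foundation(self):
        self.house.parts.append("foundation")
        return self

    def build_grey_structure(self):
        self.house.parts.append("grey structure")
        return self

    def install_systems(self):
        self.house.parts.append("electrical/plumbing")
        return self

    def apply_finishes(self, style: str):
        self.house.parts.append(f"{style} finishes")
        return self


class ProjectManager:
    """The Director: runs the steps in a fixed sequence per the client brief."""
    def construct(self, builder: HouseBuilder, style: str) -> House:
        return (builder.lay_foundation()
                       .build_grey_structure()
                       .install_systems()
                       .apply_finishes(style)
                       .house)


# Same process, different representation per client brief.
villa = ProjectManager().construct(HouseBuilder(), "classic")
print(villa.describe())  # foundation -> grey structure -> electrical/plumbing -> classic finishes
```

Swapping the style argument (or the builder subclass) yields a different representation from the same construction sequence, which is the pattern's core idea.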
There is a reason why the server crashed. It should not be restarted without knowing why it happened. A broken database? Do not restart the MC server if the database is broken. Also, what if you really want to stop the server? Kill the batch process and the MC server with it? That might be a good way to break the database.
Did you find a solution? I'm struggling to find the root cause.
Maybe it will be useful for someone! As far as I could understand, the problem is:
cloud:
  gateway:
    routes:
      - id: resume-analyzer-route
        uri: http://resume-analyzer:8083
But you can implement a bean in each service and in the API Gateway for manual server configuration, which will override the default server; it helped me!
@OpenAPIDefinition
@Configuration
public class OpenApiConfig {

    @Bean
    public OpenAPI customOpenAPI() {
        return new OpenAPI()
                .addServersItem(new Server()
                        .url("http://localhost:8765/analyzer")
                        .description("Analyzer API"));
    }
}
MetaMask and most other web3 wallets have a hard time interacting with mobile browsers because the wallet is a browser extension. My recommendation is to use WalletConnect in your app so mobile users have an easier time. Usually, if you want to interact with a dApp on mobile, you use the wallet's native browser; most wallets now have their own.
It could also be that the collection being bound contains items that do not have a parameterless constructor.
Full guide on how to do this here: https://kshitijbanerjee.com/2025/02/01/syncing-historical-data-from-ibkr/
If parentheses () are used in the project name, it gives this error.
After creating a few projects, I noticed that the ones I named with parentheses give this error.
It's a very broad question, but of course you can. Even with pure CSS, without packages or other languages, you can create things like hover animations (moving elements on your webpage based on where your mouse goes and what you click), and thanks to modern HTML/CSS you can also make it responsive for other devices.
The first steps in creating one is doing a bit more research on the setup of a text editor (where you will code in) and then trying to either learn the code from a tutorial or let a language AI like ChatGPT write you one.
This is a great place to start for a beginner: W3Schools - HTML Editors
P.S. W3Schools is a very complete online documentation hub for web development and is great for beginners, so check out their tutorials and content.
I was also getting the same error:
ERROR TypeError: HighchartsMore is not a function
at _AppComponent.ngAfterViewInit
Updating the imports as follows resolved the error:
import * as Highcharts from 'highcharts';
import 'highcharts/highcharts-more';
Your request needs to look like this:
http://localhost:8080/api/v1/quotes?topics[]= ...
What's the difference? If you pay attention to the error message, it says that it can't parse a string into a list.
Every new item in the list needs to be appended using this very same format.
When setting up an AWS SNS topic and subscription you can set the content-type of the notification requests (which includes the confirmation request) in the Delivery policies (HTTP/S) settings. Set it to application/json and your middleware will run and your req.body will have the JSON object you are expecting.
In iOS 17, you could use .opacity(0.011), .contentShape(Rectangle()), and .allowsHitTesting(true) on a hidden DatePicker to capture taps and trigger the default DatePicker presentation (the “popup”).
In iOS 18, a SwiftUI DatePicker no longer responds to taps if it’s fully transparent or minimally visible (especially when opacity is too low).
iOS 18 requires the DatePicker to be "interactable" in a normal, non-hacky way. Otherwise, the system won’t "lift" the control and present the date selection UI.
In your code:
DatePicker(selection: $viewModel.selectedDate, displayedComponents: .date) {}
    .labelsHidden()
    .allowsHitTesting(true)
    .contentShape(Rectangle())
    .opacity(0.011) // <--- making it almost invisible
That .opacity(0.011) is now a blocker in iOS 18. Commenting it out, you said, brings back the date picker UI, because the system now requires a visibly interactive surface.
Rather than keeping .opacity(0.01) and wrapping it inside a button, stop hacking opacity directly and control the date picker manually: present a .sheet or .popover when the user taps the text.
Example:
struct ContentViewA: View {
    @StateObject var viewModel: SampleViewModel = .init()
    @State private var isShowingDatePicker = false

    var body: some View {
        VStack {
            HStack {
                Spacer()
                Button(action: {
                    isShowingDatePicker.toggle()
                }) {
                    Text(viewModel.displayDate)
                        .font(.body)
                        .foregroundStyle(Color.blue)
                }
                .sheet(isPresented: $isShowingDatePicker) {
                    VStack {
                        DatePicker("Select a date", selection: $viewModel.selectedDate, displayedComponents: .date)
                            .datePickerStyle(.graphical)
                            .labelsHidden()
                            .padding()
                        Button("Done") {
                            isShowingDatePicker = false
                        }
                        .padding()
                    }
                    .presentationDetents([.medium])
                }
                Spacer()
            }
            .padding(.horizontal, 20)
            .padding(.top, 25)
        }
        .onAppear {
            updateDisplayDate()
        }
        .onChange(of: viewModel.selectedDate) { _ in
            updateDisplayDate()
        }
    }

    private func updateDisplayDate() {
        let formatter = DateFormatter()
        formatter.dateFormat = "HH:mm E, d MMM y"
        viewModel.displayDate = formatter.string(from: viewModel.selectedDate)
    }
}
Why is this better?
You don't rely on fragile SwiftUI internals (opacity hacks) anymore.
It is safer for iOS 18 and forward compatibility.
You fully control when and how the DatePicker appears.
You can even customize its style: graphical, wheel, or compact.
You could try forcing .datePickerStyle(.compact) with no opacity tricks, but still wrap it in an invisible Button to manage focus. However, using .sheet is much closer to the intended modern UX.
Here are a few things that you need to check.
Maybe it was necessary to explicitly specify the package.
I completely see what you’re running into.
Even though Plugin: AnyObject, when you refer to any Plugin, it's treated as an existential — and existentials in Swift are not class types, even if the protocol they come from is AnyObject.
That's why ObjectHashable<T: AnyObject> refuses to accept any Plugin as T — because any Plugin isn't itself a class, even if implementations of Plugin must be.
Why any Plugin is a Problem
any Plugin is a value type representing "any instance conforming to Plugin."
It doesn't guarantee that it's a class instance at the type level in a way generic constraints can check.
Swift treats any Plugin differently than concrete types that conform to Plugin.
So... how to solve your situation?
Here's the trick: instead of writing ObjectHashable<any Plugin>, you need ObjectHashable<some Plugin> at usage sites, or redesign ObjectHashable slightly to accept existential values.
A Clean Solution for Your Case
Change ObjectHashable to accept AnyObject (even any Plugin existentials).
Here's a slightly updated version of ObjectHashable:
public struct ObjectHashable: Hashable {
    public let object: AnyObject

    public init(_ object: AnyObject) {
        self.object = object
    }

    public static func ==(lhs: Self, rhs: Self) -> Bool {
        return ObjectIdentifier(lhs.object) == ObjectIdentifier(rhs.object)
    }

    public func hash(into hasher: inout Hasher) {
        hasher.combine(ObjectIdentifier(object))
    }
}
Now you don't need to worry about generics. You can store any class object — including any Plugin — wrapped properly.
Usage:
var dict: [ObjectHashable: String] = [:]
let pluginInstance: any Plugin = SomePlugin()
dict[ObjectHashable(pluginInstance)] = "some value"
But you asked for strictness (not as open as AnyObject)...
If you still want it to be restricted only to Plugin (not any class), here's how you can do it more strictly:
public struct PluginHashable: Hashable {
    public let plugin: any Plugin

    public init(_ plugin: any Plugin) {
        self.plugin = plugin
    }

    public static func ==(lhs: Self, rhs: Self) -> Bool {
        return ObjectIdentifier(lhs.plugin as AnyObject) == ObjectIdentifier(rhs.plugin as AnyObject)
    }

    public func hash(into hasher: inout Hasher) {
        hasher.combine(ObjectIdentifier(plugin as AnyObject))
    }
}
- This ensures you can only wrap any Plugin, not any AnyObject.
- And you still hash based on the object identity.
Usage:
var pluginDict: [PluginHashable: String] = [:]
let p1: any Plugin = MyPlugin()
let p2: any Plugin = MyOtherPlugin()
pluginDict[PluginHashable(p1)] = "First Plugin"
pluginDict[PluginHashable(p2)] = "Second Plugin"
Why can't you use ObjectHashable<Plugin>?
Because ObjectHashable<T: AnyObject> expects a class type T, but any Plugin is not a class type — it’s an existential value.
You have to either:
make ObjectHashable non-generic and store AnyObject, OR
specialize it for your protocol (like PluginHashable above).
I hope this helps. Let me know if your issue is solved.
Remember to add this to your HTML form: enctype="multipart/form-data"
You can definitely create a web page with just html and css, but to add some kind of functionality you'd definitely need to use other languages and frameworks. JavaScript is good to learn.
well, you could create a website with HTML and CSS, but it wouldn't be interactive because you'd need JavaScript for that.
I was still facing this issue using PostgreSQL with LocalExecutor. Migrating the DB worked for me, using this command: airflow db migrate
Starting from 1.1.4, and enforced in 1.1.5, the default storage backend for PreferencesDataStore was changed from OkioStorage to FileStorage via a new PreferencesFileSerializer. This aims to reduce CorruptionException, but it introduces an incompatibility if your app still has preferences saved with the old serialization format (i.e., from before 1.1.4).
The HTML standards are a complete mess and create too much confusion. In fact, the only three characters that should have been escaped in an HTML query string were &, = and #, and they should have been escaped simply as \&, \= and \# (and \\ should be used for \, of course). The people who work on standards committees should have their IQs tested first.
Change
<div id = "textbox">
margin-left:100px;
to
<div id="textbox" style="margin-left:100px;">
or put it in the stylesheet:
<style>
#textbox {margin-left:100px}
</style>
ApeMaster middleware supports playing RTSP video streams from cameras directly in an internal browser without the need for a server, but it requires you to install ApeMaster middleware on the client computer.
how to validate appwrite user sessions from backend without using frontend sdks?
you’re almost there. using the server sdk and calling account.getSession(sessionId) is technically correct, but appwrite expects that the session ID is tied to a real cookie in the request (the session cookie like a_session_<projectid>).
just calling getSession(sessionId) server-side without that cookie doesn’t feel "natural" to appwrite sometimes, and then it cries "unauthorized"
the super clean way would be:
from your frontend, send the session cookie along with your API requests (just like browser behavior)
in your fastapi gateway, you pick that cookie out from the request
then, instead of doing getSession(sessionId), you do account.get() while sending the cookie in the request headers (meaning you gotta simulate a real user request)
problem is — appwrite server sdk doesn’t easily allow sending raw cookies manually because it's made for "server-side trusted" calls
so the workaround is:
make direct REST calls to appwrite’s HTTP api inside your gateway instead of using the sdk
send the session cookie along with the request headers manually
example: call GET https://[appwrite-endpoint]/v1/account with the X-Fallback-Cookies header or just forward the cookie properly
is it possible to securely auth frontend -> custom backend -> appwrite using only sessions?
yeah it's possible. it's actually the ideal old-school "session based auth" way.
flow would be:
frontend logs in normally using appwrite sdk → gets session
frontend stores the session cookie (automatic if you’re using browser / for react native you gotta manually do it)
frontend sends requests to your api gateway, carrying the session cookie
your gateway extracts the session cookie, validates the session (like i said above, a direct rest call to appwrite’s /account endpoint)
if appwrite says user is good, allow request to microservices
you never need jwt this way unless you want scalability across multi-clusters or mobile + web SSO type stuff later
best scalable approach if session is messy?
if one day this session way feels annoying (like cookie management in mobile gets painful)
then you gotta move to a hybrid model:
short-lived access tokens (like 15 mins)
refresh tokens (long lived, like 1 month)
backend issues both
frontend auto-refreshes access token without user even knowing
but bro seriously unless you’re scaling like crazy (like millions of concurrent users)
session auth is clean, simpler, and you can scale it horizontally by using sticky sessions or redis session storage if needed.
quick quick version of the right flow you can try right now
frontend saves and sends session cookie manually
fastapi gets cookie, calls https://cloud.appwrite.io/v1/account (with cookie header) inside a simple requests.get() (python’s requests lib)
if 200 ok → user is valid
attach user info to request context and forward it to microservices
no need jwt, no need to refresh session, no drama
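a minimal sketch of that gateway check, using only python's stdlib (urllib instead of the requests lib). the cookie name follows the a_session_<projectid> convention mentioned above; the project ID and endpoint are placeholders you'd swap for your own:

```python
import urllib.error
import urllib.request

# Assumption: Appwrite Cloud endpoint; replace with your self-hosted URL if needed.
APPWRITE_ENDPOINT = "https://cloud.appwrite.io/v1"


def session_headers(project_id: str, session_secret: str) -> dict:
    """Build the headers that forward the session cookie like a browser would."""
    return {
        # Cookie name follows the a_session_<projectid> convention.
        "Cookie": f"a_session_{project_id}={session_secret}",
        "X-Appwrite-Project": project_id,
    }


def validate_session(project_id: str, session_secret: str) -> bool:
    """Return True if Appwrite's GET /account accepts the forwarded cookie."""
    req = urllib.request.Request(
        f"{APPWRITE_ENDPOINT}/account",
        headers=session_headers(project_id, session_secret),
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200  # 200 ok -> user is valid
    except urllib.error.HTTPError:
        return False  # 401 etc. -> session invalid
```

in your fastapi gateway you'd call validate_session() before forwarding the request to the microservices.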
The <br> tag doesn't require a closing tag. While <br /> or <br/> is considered good syntax, both <br> and <br /> are considered equivalent and valid in HTML5.
If your XML requires the tag to be closed, use a replace function change <br> to <br /> as shown in another answer. </br> is not proper HTML and will not be parsed.
Here’s a quick checklist to debug:
Check proxy_pass configuration:
Ensure that proxy_pass points to http://ip:port/ without including the subpath (e.g., /book/). If you include the subpath, FastAPI won't match routes properly and will return 404.
Verify trailing slash handling:
FastAPI distinguishes between /book and /book/. Missing or inconsistent slashes can cause 404 errors. Make sure your URLs are consistent, or add a rewrite rule in Nginx to enforce a trailing slash.
Check CORS settings: If accessing from browsers, CORS misconfiguration can prevent requests from reaching the backend, appearing like 404s. Make sure you add proper CORS middleware to your FastAPI applications.
Ensure backend service availability:
Confirm that your FastAPI apps are running, listening on 0.0.0.0 (not 127.0.0.1), and reachable at the specified IP and port. You can test direct access using curl or a browser to verify backend health.
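To illustrate the proxy_pass and trailing-slash points above, a minimal Nginx location block might look like this (the upstream address is a placeholder for your FastAPI app):

```nginx
location /book/ {
    # The trailing slash on proxy_pass strips the /book/ prefix,
    # so the FastAPI app sees requests rooted at /.
    proxy_pass http://127.0.0.1:8000/;
    proxy_set_header Host $host;
}
```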
If the issue persists, can you show your current Nginx config? It will help pinpoint the problem faster.
It's 2025 and creating Test users is still not possible?! How are people launching Meta apps if they can't test them? I am absolutely speechless, going to have to rethink my strategy now.
I was able to solve it by replacing <br> with the escaped entity &lt;br&gt; (the escape characters lost their ; in the original post because the site rendered the entity back to <br>):
var doc = sample.replace("<br>", "&lt;br&gt;");
Then
var expectedDoc = xmlMapper.readTree(doc);
This command works on some devices:
`adb shell am set-standby-bucket pkgname 45`
e.g., it works on a Samsung S24 with Android 14.
You should ask your question on Stack Overflow en español.
Agree with @KlausD. But here is what I can say best based on my experience.
The issue might be because of:
1 - Queue synchronization and task completion handling
2 - Thread coordination and termination
3 - Queue data flow between threads
Your database driver is 64-bit, but your application is probably 32-bit.
import random
import pandas as pd
from faker import Faker

fake = Faker('es_PE')

# Functions to generate realistic data
def generar_dni():
    return str(random.randint(10000000, 99999999))

def generar_genero():
    return random.choice(['M', 'F'])

def generar_persona():
    genero = generar_genero()
    if genero == 'M':
        nombre = fake.first_name_male()
    else:
        nombre = fake.first_name_female()
    apellidos = f"{fake.last_name()} {fake.last_name()}"
    edad = random.randint(18, 65)
    dni = generar_dni()
    return {
        "Apellidos": apellidos,
        "Nombres": nombre,
        "Edad": edad,
        "Género": genero,
        "DNI": dni
    }

# Generate 100 fictitious people
personas = [generar_persona() for _ in range(100)]

# Create a DataFrame
df = pd.DataFrame(personas)

# Save as an Excel file
file_path = "/mnt/data/Lista_Ficticia_Cerro_de_Pasco.xlsx"
df.to_excel(file_path, index=False)
file_path
I'm building a Rust project using Diesel and PostgreSQL. I run PostgreSQL in Docker (I don't have it installed locally). I successfully ran diesel migration run
, and my tables were created. However, when I run main.rs
, I get a database connection error. Some sources suggest installing PostgreSQL on my computer, but I prefer to connect to the Docker container instead. My PostgreSQL container exposes port 5432 to localhost, and my DATABASE_URL
is set correctly. How can I make my Rust application connect to the Dockerized PostgreSQL without needing to install PostgreSQL locally?
error: linking with `link.exe` failed: exit code: 1181
|
= note: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.43.34808\\bin\\HostX64\\x64\\link.exe" "/NOLOGO" "C:\\Users\\vucon\\AppData\\Local\\Temp\\rustcpQYWIB\\symbols.o" "<257 object files omitted>" "D:\\Desktop\\Rust_learning\\axum_learning\\rest-api\\target\\debug\\deps/{libtower_http-6462bec82fa67a71.rlib,libthiserror-ada364c7e90c1375.rlib,libaxum-df7ba45e28c6e1a6.rlib,libserde_path_to_error-4971e1e27d0231ef.rlib,libserde_json-cb1da7e646488faf.rlib,libmemchr-105646b5756cb14a.rlib,libserde_urlencoded-2bbff12e817fd7ac.rlib,libryu-6ac78a988ab9ad86.rlib,libform_urlencoded-da3d6eb3563d5673.rlib,libpercent_encoding-36355d0f5ec6ec70.rlib,libhyper_util-f9ae3f61d33e24cf.rlib,libhyper-e85c493d5d598cbc.rlib,libhttparse-7a9cf2ec73ddb78e.rlib,libhttpdate-9257db76ad17ff29.rlib,libfutures_channel-93866827f40b93b5.rlib,libmatchit-b1ee7651a4c4350d.rlib,libaxum_core-d9bd6a15c824975a.rlib,libmime-64a84b31f0e207ba.rlib,libhttp_body_util-81e2f8873bbddae5.rlib,libhttp_body-8168f001b9181e42.rlib,libhttp-67027ffbacbd833d.rlib,libfnv-98890e3ff67f2430.rlib,libtracing-4cdc890f2430aa43.rlib,libtracing_core-0bc12b3517250cf2.rlib,libonce_cell-ecc86b59685e8d56.rlib,libtower-90472a8e3c7cb4d8.rlib,libsync_wrapper-0d8ee7e588523c9c.rlib,libtower_layer-e54a78936e51308c.rlib,libfutures_util-009079feb21fbd9c.rlib,libfutures_task-fbfcf4d036a68d48.rlib,libpin_utils-8ff768994ab1a462.rlib,libfutures_core-b17e2c2a87306059.rlib,libtower_service-85c893f18d7f8b9f.rlib,libtokio-4bce4dd606e2c9cd.rlib,libsocket2-9197175e17c66b74.rlib,libbytes-6e1b71233944c901.rlib,libmio-7175e77e6bc0752a.rlib,libwindows_sys-2e3ee3fb155c4693.rlib,libpin_project_lite-efc1287e85ca2a83.rlib,libdotenv-a9bb7153f3c10c70.rlib,libdiesel-e6ad1868f441784d.rlib,libitoa-04f76355b1fc7079.rlib,libuuid-0e64a0b23aa5d3e0.rlib,libgetrandom-6dd0dcc646dd2d5b.rlib,libbitflags-a3b4274c99baa24d.rlib,libbyteorder-d8a0d051f37c2508.rlib,libr2d2-a823a597fe23f285.rlib,libscheduled_thread_pool-6c0cc37e34e78192.rlib,l
ibparking_lot-a40c2ccb06799a8e.rlib,libparking_lot_core-58d6319ea9288dbf.rlib,libwindows_targets-dd8cca008988cf75.rlib,libcfg_if-120b7212e7bc72ec.rlib,libsmallvec-56b34d4bd689d4b5.rlib,liblock_api-6f116e7c0c710338.rlib,libscopeguard-4a1404f096e49870.rlib,liblog-c603d8f956e435cc.rlib,libpq_sys-fe47be3258112a90.rlib,liblibc-c6c2bba6b066e08d.rlib,libchrono-7347488291da9dab.rlib,libnum_traits-e5ee96478b9d9bfe.rlib,libwindows_link-7d8b2e9c4c517ee5.rlib,libserde-f15b906c267b102d.rlib}.rlib" "<sysroot>\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib/{libstd-*,libpanic_unwind-*,libwindows_targets-*,librustc_demangle-*,libstd_detect-*,libhashbrown-*,librustc_std_workspace_alloc-*,libunwind-*,libcfg_if-*,liballoc-*,librustc_std_workspace_core-*,libcore-*,libcompiler_builtins-*}.rlib" "C:\\Users\\vucon\\.cargo\\registry\\src\\index.crates.io-1949cf8c6b5b557f\\windows_x86_64_msvc-0.52.6\\lib\\windows.0.52.0.lib" "C:\\Users\\vucon\\.cargo\\registry\\src\\index.crates.io-1949cf8c6b5b557f\\windows_x86_64_msvc-0.52.6\\lib\\windows.0.52.0.lib" "libpq.lib" "legacy_stdio_definitions.lib" "kernel32.lib" "kernel32.lib" "ntdll.lib" "userenv.lib" "ws2_32.lib" "dbghelp.lib" "/defaultlib:msvcrt" "/NXCOMPAT" "/LIBPATH:C:\\Users\\vucon\\.cargo\\registry\\src\\index.crates.io-1949cf8c6b5b557f\\windows_x86_64_msvc-0.52.6\\lib" "/OUT:D:\\Desktop\\Rust_learning\\axum_learning\\rest-api\\target\\debug\\deps\\rest_api.exe" "/OPT:REF,NOICF" "/DEBUG" "/PDBALTPATH:%_PDB%" "/NATVIS:<sysroot>\\lib\\rustlib\\etc\\intrinsic.natvis" "/NATVIS:<sysroot>\\lib\\rustlib\\etc\\liballoc.natvis" "/NATVIS:<sysroot>\\lib\\rustlib\\etc\\libcore.natvis" "/NATVIS:<sysroot>\\lib\\rustlib\\etc\\libstd.natvis"
= note: some arguments are omitted. use `--verbose` to show all linker arguments
= note: LINK : fatal error LNK1181: cannot open input file 'libpq.lib'
I had the exact same issue when I created a new project with pnpm create rspack. When I switched to pnpm create rsbuild, it all worked fine. It's probably just a temporary issue with rspack's template.
Same error for me, but it occurs when I use the client_credentials grant type to request the token; with the password grant type it works.
Quite old, but I'll try my luck. Same issue for me, but the rendering is still static. Could you share the details of what you have tried to get it working in VS Code?
Thanks in advance
This is how you can get your own location:
import geocoder
g = geocoder.ip('me')
location = g.address
How about solving it with code like below?
public class CustomPreference extends Preference {
    ..............

    @Override
    public void onBindViewHolder(@NonNull PreferenceViewHolder holder) {
        super.onBindViewHolder(holder);
        TextView tv = (TextView) holder.findViewById(android.R.id.summary);
        if (tv != null) {
            if (mSummaryTextSize > 0) tv.setTextSize(mSummaryTextSize);
        }
    }

    private int mSummaryTextSize = 0;

    public void setSummaryTextSize(int size) {
        if (mSummaryTextSize != size) {
            mSummaryTextSize = size;
            notifyChanged();
        }
    }
}
double subtotal = (double)nudBuffaloChickenSalad.Value * 13.25
+ (double)nudReuben.Value * 9.20
+ (double)nudWater.Value * 1.99
+ (double)nudWings.Value * 18.99
+ (double)nudChzPizza.Value * 10.99;
double discount = chkRewards.Checked ? subtotal * 0.05 : 0;
double tax = (subtotal - discount) * 0.06;
double total = (subtotal - discount) + tax;
lblSubtotal.Text = subtotal.ToString("C");
lblDiscount.Text = discount.ToString("C");
lblTax.Text = tax.ToString("C");
lblTotal.Text = total.ToString("C");
lblStatus.Text = "Calculated";
You could try forcing an empty array?
import numpy as np
import gc
# After clearing your data:
data.clear()
np.empty((0,)) # forces numpy malloc/free touch
gc.collect()
Double check that you're not calling useAppDispatch or useAppSelector outside of the store provider context.
This was the case for me. NextJS was throwing this esoteric error while collecting page data.
Extending the solution of @Tangentially Perpendicular, you could put the div's rule inside the <style> tag:
<style>
body {
background-image: url("starbackground.gif");
}
div[id="textbox"]{
margin-left:100px;
}
</style>
Well, what you can do is create a "header" file. If your file is app.py, create a functions_app.py (put all your functions there), and in app.py do from functions_app import *. Then you can use all the functions in any order without worrying about where they are declared. This way app.py can start with your setup variables, which I guess is what you want. I do this too.
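A minimal sketch of that layout (file names follow the answer; the helper functions themselves are hypothetical examples). Both files are shown in one snippet here; in a real project they are two separate files:

```python
# --- functions_app.py: all helper functions live here ---
def greet(name):
    return "Hello, " + name + "!"

def add(a, b):
    return a + b

# --- app.py: setup variables first, then pull in every helper ---
# from functions_app import *   # (uncomment when the files are split)
MESSAGE_TARGET = "world"

print(greet(MESSAGE_TARGET))   # helpers are usable in any order
print(add(2, 3))
```

The star import keeps app.py readable, at the cost of an unclear namespace; `from functions_app import greet, add` is the stricter variant if that bothers you.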
I found that when trying this method I kept receiving the error that the tensor cannot be converted into a sequence. No matter what I did, the tensor wouldn't let me pull the token values out and convert them back into their pre-tokenized English words:
"'Tensor' object cannot be converted to 'Sequence'"
After some experimentation, it seems that binary_closing
is the best helper. And it is important to vary the structure, to avoid artifacts with "square corners"
import numpy as np
import scipy.ndimage as nd
bones = img_data > args.threshold
n = 5
for t in range(2):
nd.binary_closing(bones, iterations=1, output=bones, structure=np.ones((n,n,n)))
nd.binary_closing(bones, iterations=1, output=bones)
nd.binary_opening(bones, iterations=1, output=bones)
bone_labels, num_feat = nd.label(bones)
vals, counts = np.unique(bone_labels, return_counts=True)
vals = vals[1:]
counts = counts[1:]
print('begin erase small labels')
for i,value in enumerate(vals):
if counts[i] < args.min_size:
bones[bone_labels == value] = False
The bones array is a binary mask. The last part labels all components and then aggressively deletes the small ones. The min_size is 1500 in my case.
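The "erase small labels" step can be seen in isolation with a tiny hand-labeled 1-D example (labels written by hand instead of coming from nd.label, so no scipy is needed):

```python
import numpy as np

# Hand-labeled mask: label 0 is background; labels 1 and 3 are big
# components, label 2 is a single-voxel speck we want to erase.
bone_labels = np.array([0, 1, 1, 1, 2, 0, 3, 3])
bones = bone_labels > 0
min_size = 2

vals, counts = np.unique(bone_labels, return_counts=True)
vals, counts = vals[1:], counts[1:]      # drop background label 0
for i, value in enumerate(vals):
    if counts[i] < min_size:             # component too small -> erase
        bones[bone_labels == value] = False

print(bones.astype(int))  # -> [0 1 1 1 0 0 1 1]
```

The lone label-2 voxel is removed from the mask while the two larger components survive, which is exactly what the loop above does on the 3-D data.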
If you have admin rights, you could create a Mail Flow Rule that tells Exchange:
"Never mark incoming email as spam" (for all messages, or for a specific mailbox).
This would send all emails to the inbox folder regardless of whether they would usually be classed as spam.
Steps:
Go to Exchange Admin Center (https://admin.exchange.microsoft.com/)
Mail Flow → Rules → + Add a rule
Name: "[Whatever you want]"
Condition: Apply to all messages (or specific users)
Action: Set the spam confidence level (SCL) to -1 (means “not spam”)
Save
This may not be helpful, as it does not change your code, but it is functional. I hope it does help though. ^_^
This isn't an amazing option but is the only one that comes to mind. You can create a separate form for each step and when clicking "next", you store the data in a context of some kind. That way you write a form with validation onBlur. Then only allow progression to the next page if the current page is valid. On the final submit, pull the data from the previous pages into the query.
This should be resolved in 5.12.2.
After you create the PM, you set a frequency, and the PMWoGenCronTask looks at the PM module for the next due date (you have to set this initially when creating the PM). Afterwards, if you want to notify the asset managers, you can create an escalation with a communication template that looks at a certain window (like all the PMs created on the first of every month). Unfortunately, it would likely be individual emails per eligible work order.
If you involved automation scripts to send a CSV to each asset manager with grouped work orders, or Object Structure and Publish Channels via Maximo Integration Framework (MIF), you could also send out monthly files to asset managers.
All of the above is a bit overkill, if the asset managers can log into Work Order Tracking and view the latest work orders sorted by Target Start Date.
I got it working, here's how I did it:
You can now edit in the main editor window and see changes live. CTRL-S to save.
To turn off local override, go back to Overrides sub tab and uncheck [ ] Enable Local Overrides
Seems like a lot of code when, generally speaking, the best place to deal with the NULL values is a WHERE clause. What all of these solutions lack is a WHERE clause: remove the NULLs there and the count works naturally.
SELECT COUNT(DISTINCT field_value) FROM your_table WHERE field_value IS NOT NULL
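A quick sqlite3 sketch to see the effect (the table name and values are hypothetical). Note that COUNT(DISTINCT col) already skips NULLs on its own; the WHERE clause makes the intent explicit and also fixes row counts such as COUNT(*):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field_value TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("a",), ("b",), ("a",), (None,), (None,)])

# COUNT(*) counts NULL rows too; the WHERE clause removes them.
rows_all = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
rows_filtered = conn.execute(
    "SELECT COUNT(*) FROM t WHERE field_value IS NOT NULL").fetchone()[0]
distinct = conn.execute(
    "SELECT COUNT(DISTINCT field_value) FROM t "
    "WHERE field_value IS NOT NULL").fetchone()[0]

print(rows_all, rows_filtered, distinct)  # 5 3 2
```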
Can you please let us know whether we can install Airflow on Windows 10 directly, using a Jupyter notebook, without Docker or Linux?
@Rhys, sorry, but I don't understand the 'stacking'. So here's a slightly modified version of your script, demonstrating that the writes to the files happen in thread order, despite how the order of the print(....In/Out) messages may appear on the terminal. If this doesn't clarify things for you, then I can't help any more without a clearer description of the issue.
cat test.py
import concurrent.futures
import random
import datetime

# Analysis of text packet
def Threads1(curr_section, index1):
    words = open('test.txt', 'r', encoding='utf-8', errors='ignore').read().replace('"', '').split()
    longest_recorded = []
    for ii1 in words:
        test1 = random.randint(1, 1000)
        if test1 > 900: break
        else: longest_recorded.append(ii1)
    perc = (index1 / max1) * 100
    print(str(datetime.datetime.now().time()) + ' In: ' + str([index1, str(int(perc))+'%']), flush=True)
    return [index1, longest_recorded]

# Split text into packets
max1 = 20; count_Done = 0; ranger = [None for ii in range(0,max1)]
print(str(int((count_Done / max1) * 100)) + '%')
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    working_threads = {executor.submit(Threads1, curr_section, index1): curr_section for index1, curr_section in enumerate(ranger)}
    for future in concurrent.futures.as_completed(working_threads):
        count_Done += 1
        current_result = future.result()
        # Write to disk (random)
        text1 = 'a: thread:' + str(current_result[0]) + ' : ' + str(datetime.datetime.now().time()) # + ':' + 'a' * (random.randint(1000, 1500) - 500)
        with open('temp_Publish.txt', 'a', encoding='utf-8') as file: # append
            file.write(text1 + '\n')
        # Write to disk (random)
        text2 = 'b :thread:' + str(current_result[0]) + ' : ' + str(datetime.datetime.now().time()) # + ':' + 'a' * (random.randint(1000, 1500) - 500)
        with open('threads.txt', 'a', encoding='utf-8') as file: # append
            file.write(text2 + '\n')
        print(str(datetime.datetime.now().time()) + ' Out: ' + str([current_result[0], str(int((count_Done / max1) * 100)) + '%']), flush=True)
#
# clear down any existing outputs
#
rm -f threads.txt temp_Publish.txt
#
#
python test.py
0%
22:55:08.067920 In: [0, '0%']
22:55:08.069660 In: [1, '5%']
22:55:08.069706 Out: [0, '5%']
22:55:08.071114 In: [2, '10%']
22:55:08.072442 In: [3, '15%']
22:55:08.072884 Out: [1, '10%']
22:55:08.073888 In: [4, '20%']
22:55:08.075159 In: [5, '25%']
22:55:08.076440 In: [6, '30%']
22:55:08.076826 Out: [2, '15%']
22:55:08.077735 In: [7, '35%']
22:55:08.079142 In: [8, '40%']
22:55:08.079579 Out: [3, '20%']
22:55:08.080298 In: [9, '45%']
22:55:08.081270 In: [10, '50%']
22:55:08.081626 Out: [4, '25%']
22:55:08.082362 In: [11, '55%']
22:55:08.083296 In: [12, '60%']
22:55:08.084155 In: [13, '65%']
22:55:08.084434 Out: [5, '30%']
22:55:08.085052 In: [14, '70%']
22:55:08.086036 In: [15, '75%']
22:55:08.086921 In: [16, '80%']
22:55:08.088107 In: [17, '85%']
22:55:08.089462 In: [18, '90%']
22:55:08.089851 Out: [6, '35%']
22:55:08.090772 In: [19, '95%']
22:55:08.091278 Out: [7, '40%']
22:55:08.091479 Out: [8, '45%']
22:55:08.091696 Out: [9, '50%']
22:55:08.091905 Out: [10, '55%']
22:55:08.092107 Out: [11, '60%']
22:55:08.092311 Out: [12, '65%']
22:55:08.092508 Out: [13, '70%']
22:55:08.092703 Out: [14, '75%']
22:55:08.092857 Out: [15, '80%']
22:55:08.093000 Out: [16, '85%']
22:55:08.093144 Out: [17, '90%']
22:55:08.093291 Out: [18, '95%']
22:55:08.093461 Out: [19, '100%']
#
# show file contents - side by side for convenience
#
paste threads.txt temp_Publish.txt
b :thread:0 : 22:55:08.068613 a: thread:0 : 22:55:08.068403
b :thread:1 : 22:55:08.071506 a: thread:1 : 22:55:08.070138
b :thread:2 : 22:55:08.074275 a: thread:2 : 22:55:08.072966
b :thread:3 : 22:55:08.078270 a: thread:3 : 22:55:08.077831
b :thread:4 : 22:55:08.080626 a: thread:4 : 22:55:08.079636
b :thread:5 : 22:55:08.081744 a: thread:5 : 22:55:08.081663
b :thread:6 : 22:55:08.087198 a: thread:6 : 22:55:08.085128
b :thread:7 : 22:55:08.091212 a: thread:7 : 22:55:08.089909
b :thread:8 : 22:55:08.091389 a: thread:8 : 22:55:08.091321
b :thread:9 : 22:55:08.091606 a: thread:9 : 22:55:08.091536
b :thread:10 : 22:55:08.091819 a: thread:10 : 22:55:08.091743
b :thread:11 : 22:55:08.092018 a: thread:11 : 22:55:08.091949
b :thread:12 : 22:55:08.092238 a: thread:12 : 22:55:08.092154
b :thread:13 : 22:55:08.092434 a: thread:13 : 22:55:08.092353
b :thread:14 : 22:55:08.092616 a: thread:14 : 22:55:08.092552
b :thread:15 : 22:55:08.092811 a: thread:15 : 22:55:08.092747
b :thread:16 : 22:55:08.092954 a: thread:16 : 22:55:08.092890
b :thread:17 : 22:55:08.093098 a: thread:17 : 22:55:08.093034
b :thread:18 : 22:55:08.093244 a: thread:18 : 22:55:08.093178
b :thread:19 : 22:55:08.093413 a: thread:19 : 22:55:08.093326
Messages come in through the gateway. Use any websocket viewer on this URL: gateway.discord.gg/?v=9&encoding=json. Just connect using your bot token (Discord user accounts also work identically) and you can see the messages and other packet types coming in.
Not enough reputation to comment... but Fabricio's updated link appears to be: https://nielsberglund.com/post/2017-02-11-rabbitmq---sql-server/
backdrop-filter also does not seem to work if one of the parent elements already has a (any!) backdrop-filter applied.
You could add pipe.enable_attention_slicing() before sending it to cuda; however, this will reduce the speed of the image generation.
Resolution can be changed by adding height and width arguments to the image definition.
image = pipe(prompt, height=512, width=512).images[0]
Just figured it out.
print(packet.frame.packet_flags)
I was trying .flags, but I looked up all the field names with packet.frame.field_names
For everyone from these years and the future: in my Node-RED version 3.1.14 there is a "Setup" tab where you can add custom modules. I did not need to modify any setup.js ✌🏼
If you want a fully featured terminal interface, curses is the way to go: https://docs.python.org/3/library/curses.html
For this, though, readline is pretty good: https://docs.python.org/3/library/readline.html
It should be noted that, despite being in the standard library, it might not be available on some windows python installs. (python's readline module not available for windows?)
import readline
def setup(text):
readline.insert_text(text)
readline.redisplay()
readline.set_pre_input_hook(lambda: setup("DEFAULT"))
a = input("Type your input here: ")
print(f"\n\"{a}\"\n")
readline.set_pre_input_hook(lambda: setup("OTHER DEFAULT"))
a = input("Type your input here: ")
print(f"\n\"{a}\"")
The docs state that you have to "disable HTTP long-polling on the client-side", but you are still trying to use polling.
This is actually a PHP syntax typo not a WooCommerce function misunderstanding. The line
$_product = wc_get_product('$courseID');
asks for a product with an ID that matches a string that starts with a dollar sign (and is followed by eight other letters). Why? PHP does not evaluate variables inside single-quoted strings the way it does for double-quoted strings.
The $courseID variable holds a number, and the string "7172" can also be "cast" to a number, so neither of them fails. The following two corrected versions of the line are equivalent (unless strict typing is turned on):
$_product = wc_get_product("$courseID");
$_product = wc_get_product( $courseID );
Something to consider here that I don't see in any of the posts, in a company context: has your repo been migrated elsewhere and locked in Azure? I was getting the same error, and it turned out that a team I hadn't worked with in a while had migrated the repo to another service.