You can definitely integrate multi-participant video chat with Banuba face AR features. Banuba’s Face AR SDK is designed to work alongside popular video conferencing platforms.
I found a relevant article that should give you a good understanding of how it is done: https://www.banuba.com/blog/video-conferencing-api-integration
In the article, Banuba provides an integration example with the Agora SDK for Unity, which shows how to enable AR filters in group video calls. Their SDK supports multi-face tracking, so each participant can enjoy AR effects at the same time.
To set this up, you’ll typically use Banuba’s Face AR SDK for applying filters and effects locally on each participant’s video feed, then combine it with a video conferencing SDK (like Agora, Video SDK, or similar) to handle the multi-user video call itself.
If I were you, I would also explore the Face AR SDK library on https://github.com/banuba
hCaptcha on Discord is very hard to bypass automatically. In addition, 2Captcha has discontinued support for a special method for this type of captcha. Try using another method, for example, the coordinate method.
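If you go the coordinate route, the official 2captcha-python client exposes a coordinates() method. A minimal sketch, assuming you have an API key and a screenshot of the captcha (both are placeholders here):
# pip install 2captcha-python
from twocaptcha import TwoCaptcha

solver = TwoCaptcha('YOUR_API_KEY')  # placeholder API key
# Workers look at the screenshot and return the coordinates to click.
result = solver.coordinates('captcha_screenshot.png')
print(result)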
While this does not answer your question of how to build an application that backs up AWS Timestream, as pointed out by Peter Csala in the comments, AWS Backup now supports backing up Amazon Timestream.
Through AWS Backup you may choose to back up a Timestream table on demand or on a set schedule through a backup plan. These backups are incremental.
These recovery points can then be restored, and table configurations can be set manually.
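For reference, an on-demand backup job can also be started programmatically. A minimal boto3 sketch; the vault name, table ARN, and role ARN are placeholders you would replace with your own:
import boto3

backup = boto3.client("backup")

# Start an on-demand backup of a Timestream table; returns a BackupJobId you can poll.
response = backup.start_backup_job(
    BackupVaultName="my-backup-vault",
    ResourceArn="arn:aws:timestream:us-east-1:123456789012:database/mydb/table/mytable",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
)
print(response["BackupJobId"])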
Thanks for your answer and comments pointing me in the right direction.
Finally, I could find a solution by using this code:
renderer.getWriter().getAcroForm().addSignature("Signature1",50, 100, 400, 200 );
instead of:
var field = PdfFormField.createSignature(renderer.getWriter());
field.setWidget(new Rectangle(50, 100, 400, 200), PdfAnnotation.HIGHLIGHT_OUTLINE);
field.setFieldName("Signature1");
field.setFieldFlags(PdfAnnotation.FLAGS_PRINT);
renderer.getWriter().addAnnotation(field);
renderer.getWriter().addToBody(field);
Now the signature panel is filled correctly.
I guess the proper way would be to use the Windows API.
Take a look at GetCursorPos from the Win32 API.
To call the Win32 API you can use something like the pywin32 lib.
Of course, that API will give you the mouse position in the entire screen. If you want the mouse position relative to your terminal window, maybe you can find your window using the Windows API too, get its position and size, and do the maths.
It sounds a bit overkill, but for me it is the "Windows way".
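A minimal pywin32 sketch of that idea; it assumes your terminal is the foreground window, which may not hold in every setup:
import win32api
import win32gui

x, y = win32api.GetCursorPos()         # cursor position on the whole screen
hwnd = win32gui.GetForegroundWindow()  # assumes the terminal is the focused window
left, top, right, bottom = win32gui.GetWindowRect(hwnd)
rel_x, rel_y = x - left, y - top       # position relative to the window's top-left corner
print(rel_x, rel_y)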
Use curses to track the mouse without selection.
On Windows, install it with: pip install windows-curses
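A minimal sketch of mouse tracking with curses; note that mouse reporting depends on the terminal, so REPORT_MOUSE_POSITION may not fire on every console:
import curses

def main(stdscr):
    curses.mousemask(curses.ALL_MOUSE_EVENTS | curses.REPORT_MOUSE_POSITION)
    stdscr.addstr(0, 0, "Click anywhere (q to quit)")
    while True:
        key = stdscr.getch()
        if key == curses.KEY_MOUSE:
            _, mx, my, _, bstate = curses.getmouse()
            stdscr.addstr(1, 0, f"mouse at x={mx}, y={my}   ")
        elif key == ord("q"):
            break

curses.wrapper(main)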
I think I've had the same issue that you're maybe describing - and I'm yet to find a satisfactory solution.
In my use case I want to be able to run and debug the Blazor WASM app as normal from Visual Studio, which means the app would be at something like: https://localhost:7276/. I also like to publish to my local IIS so I can then hit my app at something like: https://localhost:7276/test/. Running on IIS more closely matches my other deployment stages, and it also means the app is more easily testable on other devices.
My horrible workaround I'm very keen to replace goes like this... In my index.html I have <base href="/" />
I then copy the same index.html to published_index.html and change the base to: <base href="/test/" />. Finally I add a step in the web app's .csproj file like this:
<Target Name="PostPublishTask" AfterTargets="AfterPublish">
<Message Text="Copying published_index.html to index.html" Importance="high" />
<Exec Command="@ECHO F|XCOPY wwwroot\published_index.html bin\Release\net9.0\browser-wasm\publish\wwwroot\index.html /f /y" />
</Target>
All this does is copy the published_index.html file over the index.html in the published location when I press publish. Once I've mounted the app in IIS as 'test' I can hit it as explained above. It's the least awful of the solutions I've found as basically I can forget about it. The only issue is a few extra minutes setting up the apps in IIS to begin with. My CI/CD deploy pipelines will do the same copy of published_index.html if they find it so they just work too.
I should mention that my apps locally consist of the Blazor WASM app and the API. All of the time my Blazor app is pointing to the API as hosted in IIS (in my case https://MYMACHINENAME:5000/test/api/...) . If I want to debug the API I run that only from Visual Studio and use Swagger or Postman to pass the correct data to trigger breakpoints and suchlike.
The SMTP method in ACS only confirms that the email was accepted it doesn’t guarantee delivery. If the recipient’s server silently blocks or delays the email (due to missing SPF/DKIM, spam filters, or IP reputation), ACS keeps showing OutForDelivery
without further updates. Also, Microsoft might have changed backend behavior recently, which can impact SMTP reliability even if your code hasn’t changed.
using Azure;
using Azure.Communication.Email;

var client = new EmailClient(new Uri("<your-acs-resource-endpoint>"), new AzureKeyCredential("<your-access-key>"));

var message = new EmailMessage(
    "<from-email-address>",
    new EmailRecipients(new List<EmailAddress> { new EmailAddress("<to-email-address>") }),
    new EmailContent("Subject goes here") { PlainText = "Body of the email" });

await client.SendAsync(WaitUntil.Completed, message);
This call will give immediate and more meaningful feedback, so you can see exactly why something fails.
NOTE: It’s not that ACS is broken, the issue is that SMTP provides very limited feedback and is prone to silent failures. In contrast, the API-based method using the EmailClient
SDK is more reliable, offers better diagnostics, and is the recommended approach moving forward.
-----BEGIN PGP MESSAGE-----
hQEMA7kLtcfhv/kzAQf/cHI131AWnnNSpUdXKpr1yrq5WBaYVCVJHJVLcFa3gt8I
Bd7GeLvxTLeUb/ufHFJQmuyEQuYUo2i7XyP6DzEwA8nJU2v3HPGQdnxCfHlbZfX6
x45pkKdwHOqtzs56voh0FOPek+r8w78ODm5jx/0XTkl1/86uhTDrXObrCBYlTA0J
ksaGj+OPsmYQn/dwVZJREQXnOPK/KnQp/F6iD6oOxvD2b0fyIGaiav09Int5GPMd
TOb1PRUe/4NWwXbIq5zIQqc80CGmP9DqYH2ipX/wCtLAVQF39vafmwGzjY9nUENR
skUkOKFUXTpy3H3jhMgHwBhO98bBv9HvjfBXMRHgOwFiSNQCIY+Bk/rTGGf6cGNE
88QTP4bADPAkwyZtz53DSVptQF7Mz7FYnQkAHe8HJlC0rm18+VOfiq4e2eFiJdO7
wL3JFzWneZzO9MqCIdNhzcwB4k2Z8MQwXXlpBHj0E1KReq1SwUZZShHeMlokuUcQ
7mwhPYcBHUe3LLGzBgqlABm2GxLfvBaZU1XbkJnRZKj/nSP7OBv/MY8pjPxBa6A9
1ayrtRYyZt5uKBt6xSAdjdACxxITd9J9ZmYMAIqpLEqvQomNKB3mCdSsl73yhPzu
CSdjBKt/DgC7J6vYG/0xSGqdN1YqSP0=
=698Q
-----END PGP MESSAGE-----
You can use the 2Captcha Extension to get valid values for the captcha parameters https://2captcha.com/blog/detector
I have been trading options and futures on Deribit for over 5 years.
They now offer low fees compared to other exchanges.
The answer is in the exception message. The key must have length 32. Yours has length 4. Try
key = b'totototototototototototototototo'
and it should work.
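For anything beyond a quick test, a random 32-byte key is safer than a fixed string, assuming the library expects 32 raw bytes:
import os

key = os.urandom(32)  # 32 random bytes instead of a hard-coded value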
In the onSelect prop you get one of the properties in return as cca2. Just save that in onSelect and pass it in countryCode.
This should solve your problem.
Okay, I found examples of working autocompletes with the forward geocoder which fills out a form with the city, postal code and address.
import placekitAutocomplete from '@placekit/autocomplete-js';
import 'shared/global.css'; // load tailwindcss
import '@placekit/autocomplete-js/dist/placekit-autocomplete.css';
// instantiate PlaceKit Autocomplete JS
const pka = placekitAutocomplete(import.meta.env.VITE_PLACEKIT_API_KEY, {
  target: '#placekit-input',
});
// inject values when user picks an address
const form = document.querySelector('#form');
pka.on('pick', (value, item) => {
  for (const name of ['city', 'zipcode', 'country']) {
    form.querySelector(`input[name="${name}"]`).value = [].concat(item[name]).join(',');
  }
});
https://github.com/placekit/examples/tree/main/examples/autocomplete-js-address-form
pac4j is meant for the web. Given all the abstractions used in the source code, it may be feasible to achieve something for a desktop environment, but this would require a certain amount of work.
In my React Native Expo TVOS project, the error was solved by wiping out the emulator's data using Android Studio. This post shows how to do it.
After that, I ran
EXPO_TV=1 yarn expo run:android
and the app opened in my Android TV simulator easily.
I believe that batch querying helps in this case:
https://developers.facebook.com/docs/graph-api/batch-requests
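A minimal sketch of a batch call with the requests library; the access token is a placeholder and the two relative URLs are just illustrative:
import json
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

# Several Graph API calls in a single HTTP round-trip.
batch = [
    {"method": "GET", "relative_url": "me"},
    {"method": "GET", "relative_url": "me/accounts"},
]
resp = requests.post(
    "https://graph.facebook.com",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
print(resp.json())  # one result object per batched request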
Just needed to add:
@rendermode InteractiveServer
This was answered on the Jenkins forum. Configuration is in the Organisation section
https://community.jenkins.io/t/checkout-configuration-in-declarative-multi-branch-pipeline/30466/2
req.cookies returns undefined but cookies are set.
Tried the fetch API with:
1st: credentials: 'include'
2nd: withCredentials: true
and added app.use(express.urlencoded({ extended: true })); in app.js,
but nothing worked.
You can either use ping services such as uptime-robot, or https://uptimebot-alpha.vercel.app/
I’ve run into similar issues before where things behave differently in headless mode. One trick that sometimes helps is setting a fixed window size options.add_argument("--window-size=1920,1080")
before launching the driver. Some elements don’t load properly in headless mode unless the viewport is large enough.
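For context, a minimal Selenium setup along those lines (Chrome assumed) might look like:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")           # use "--headless" on older Chrome versions
options.add_argument("--window-size=1920,1080")  # fixed viewport so elements render as on desktop
driver = webdriver.Chrome(options=options)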
There are a number of things you need to be aware of:
Your XPath is very brittle and not reliable - you need to refactor it and make it shorter using a relative path expression (//).
Any browser in headless mode gets executed in a different system profile than the logged-in user, so the resolution is usually smaller, which can affect the responsive display of the page or website. Take a screenshot during the test and you will see the full screen is not the same as the full screen when not in headless mode.
When you have refactored your XPath, you then need to make sure the same window resolution is used in both headless and headful mode. Any difference will cause the page to be rendered differently and hide or change some elements.
As described here, it will work with UV when the Python version has a higher minor version.
https://github.com/astral-sh/uv/issues/11707
In my case I was running 3.12.4, which was not working.
When I pinned the Python version to 3.12.9, it worked fine.
I hope that helps somebody ;).
You should try act. It lets you run a workflow without pushes and works exactly how GitHub Actions does. Just follow the steps from the GitHub repository: https://github.com/nektos/act
For anyone who is having the same issue, I was able to find the solution posted here by Leon Lu. Using the following code in your CreateMauiApp() method will globally change the color of your button ripple effect:
Microsoft.Maui.Handlers.ButtonHandler.Mapper.AppendToMapping("MyCustomization", (handler, view) =>
{
#if ANDROID
    if (handler.PlatformView.Background is Android.Graphics.Drawables.RippleDrawable ripple)
    {
        // Sets the ripple color to green
        ripple.SetColor(Android.Content.Res.ColorStateList.ValueOf(Android.Graphics.Color.Green));

        // Or, to hide the ripple effect instead:
        // ripple.SetColor(Android.Content.Res.ColorStateList.ValueOf(Android.Graphics.Color.Transparent));
    }
#endif
});
I found the solution: it's the throwIf function.
Example from the docs:
SELECT throwIf(number = 3, 'Too many') FROM numbers(10);
Link to the documentation: https://clickhouse.com/docs/sql-reference/functions/other-functions#throwif
Creating a cross-account copy of a recovery point from AWS Backup requires the correct access policies on both the source and destination vaults, a correct IAM role with policies that allow creating a copy job, and cross-account backup to be enabled within the Organization's management account.
From the details provided in the comments, you seem to have everything except enabling cross-account backup within the AWS Organization. This can be done from the Management Account, within the AWS Backup console under My account and Settings.
A final point to check: the destination vault cannot be the default vault of the account.
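For reference, once those prerequisites are in place, the copy job itself can be started with boto3. A minimal sketch; every ARN and vault name below is a placeholder:
import boto3

backup = boto3.client("backup")

# Copy an existing recovery point to a vault in another account.
response = backup.start_copy_job(
    RecoveryPointArn="arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef0",
    SourceBackupVaultName="source-vault",
    DestinationBackupVaultArn="arn:aws:backup:us-east-1:210987654321:backup-vault:dest-vault",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
)
print(response["CopyJobId"])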
I've found two ways of determining this thus far:
PauseVersion in the kubeadm code for your k8s release.

SELECT * FROM users
WHERE users.id NOT IN ( SELECT users_id FROM user_type );
Explanation: this query fetches all user records from the users table but keeps only those whose ID is not found in the user_type table. (Note: if user_type.users_id can contain NULL values, NOT IN will match no rows; use NOT EXISTS in that case.)
Having exactly the same problem here and mystified. It's a problem with only one textbox on the form; the other textboxes update fine. The problem textbox has its Multiline property set to True; wondering if that's the issue?
There were two User Script Sandboxing settings, one on the target and one on the app. I had to set both to No.
Cool concept! I’ve been experimenting with language tools myself, and I think using something like forced alignment could give better results than just waveform comparison. It’s kind of like doing an analisi logica—you break down the structure of what's being said and see how it matches the reference. I found https://analisilogicatool.it/ useful for understanding how sentence structure works, even though it’s more for Italian grammar.
I suppose you should adjust the global settings during DI registration:
builder.Services
.AddGraphQLServer()
.ModifyPagingOptions(po => po.DefaultPageSize = 10000);
I have a similar problem. A BIRDRF device was discontinued, and to update the firmware you need a key that the company no longer provides. When disassembling the BIN, I came across sections in .srec which contain only the code and constant data, at most an array of hexadecimal values, perhaps with the memory map. I could disassemble to ASM, but without information on symbols and variables, only absolute addresses.
A solution often has multiple 'projects' within it. If you cloned the code from Github or elsewhere and try to run it you may get that error. You just need to set one of those projects to the startup project to get it to run. This will often be the one that has API or UI in the name of the project. In the solution explorer, right click that one and select 'Set as Startup Project'. The rest of the projects may only be class libraries which can't be run.
It turned out that the SHEET function was not registered as part of the AnalysisToolPak. It has now been added in #22192ce
I added a PR to add full SHEET support in https://github.com/apache/poi/pull/803
I found the thing that worked best for me was to use the sysroot I retrieved off of my target. I needed to add some extra flags due to the structure of the raspberry pi's sysroot. Namely
-B/<path to r pi system.o files>/
and
CMAKE_INSTALL_RPATH
as both of
The solution I have found to this is:
1. Invent a file extension for the file, so traffic-advice is renamed to traffic-advice.ta-json (an invented file extension ... traffic advice JSON).
2. Use .htaccess to rewrite requests for traffic-advice to traffic-advice.ta-json.
3. Again in .htaccess, use AddType to set the required MIME type for ta-json files.
.htaccess therefore includes:
<IfModule mod_mime.c>
AddType application/trafficadvice+json ta-json
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^\.well-known/traffic-advice$ .well-known/traffic-advice.ta-json [PT]
# continues with other RewriteRules
</IfModule>
The traffic-advice file lives in .well-known, only it's renamed with my 'invented' file extension to act as the trigger for setting the MIME type.
In Chrome and Firefox I can now see in the response header:
content-type: application/trafficadvice+json
You may have to uncheck Write Defaults in Unity 6 version.
When you take an EBS snapshot in AWS, you’re backing up individual volumes, not the entire EC2 instance. Each snapshot captures the state of that specific disk—so if your instance has multiple EBS volumes (root, data, logs), you’ll need to snapshot each one or use an AMI creation process, which under the hood snapshots the root volume and registers a new instance image. Early in my career I was surprised by this too: one missed volume means missing data. At AceCloud, we simplify this by offering snapshot orchestration across all attached volumes and optional image-level backups, ensuring you never lose part of your stack.
Can you please send me a copy of a big (fake) database for Odoo?
Try checking the installed styles in the resource file (.rc). There you will see descriptions of dialog boxes and controls. By default, a group box should have no effects for displaying flat lines.
There must be something similar to this entry:
BEGIN
GROUPBOX "Static",IDC_STATIC606,37,85,188,40
END
Most of the code is fine; the only thing missing is the quotation marks:
html`<img
// ...
src='${selectedImagePath}'
>`
Why? Because your image path contains spaces:
{
"paths": "img/Acer platanoides - spisslønn - Ak, Follo, Ås - mai 2014.JPG"
}
When the full image path is not enclosed in quotation marks, it looks like this:
html`<img
// ...
src=img/Acer platanoides - spisslønn - Ak, Follo, Ås - mai 2014.JPG
>`
When this HTML string is rendered as a DOM, you will find that all characters after the first space in the src
are not included in the src
attribute, but are instead parsed as custom attributes:
<img
// ...
src="img/Acer" platanoides="" -="" spisslønn="" ak,="" follo,="" Ås="" mai="" 2014.jpg="" alt="Selected image">
So you just need to add the quotation marks.
To get furniture models from 3d.io into ARKit, export the model in GLTF or OBJ, then convert it to USDZ using Apple’s Reality Converter or command line tools. Make sure the model is optimized—low poly with compressed textures—for best AR performance. Import the USDZ file into your Xcode project and load it with ARKit using ARQuickLookPreviewItem
or RealityKit
. Test on-device to ensure correct scale and lighting.
Remove the auto-generated path mappings and configure it as simply as follows:
| File/Directory | Absolute path on the server |
|---|---|
| /Users/me/github/my_project | /Users/me/github/my_project |
Commenter @Denis is correct, comma separated labels is the way to specify multiple label key/value pairs as described in the Kubernetes documentation [0]. Multiple values are ANDed.
- job_name: 'foo'
  kubernetes_sd_configs:
    - role: pod
      selectors:
        - role: pod
          label: "app=MyApp,type=client"
[0] = https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
Try to run this:
python manage.py runserver 127.0.0.1:8000 --insecure
I faced the same error on Mac when I tried to install the "C/C++ Extension Pack" in VS Code.
Installing the "C/C++" extension by Microsoft instead of the "C/C++ Extension Pack" fixed the issue.
Good morning everyone from 2025.
Something went wrong when I launched TortoiseGit and I had a BSOD on my Windows 10 PC.
After reboot I got:
Error: libgit2 returned: the index is locked; this might be due to a concurrent or crashed process
Deleting the index.lock file in the .git folder helped!
It seems that installing Neovim 0.11.1 solved the problem.
Here's more info about the deprecation rules:
https://developers.google.com/digital-asset-links/v1/revocation
In Xcode, under Product > Scheme > Edit Scheme > Arguments tab, add an environment variable:
OS_ACTIVITY_MODE = disable
After debugging with the Node.js crypto library, I found out that the actual error was "Unsupported key usage for an RSA-OAEP key", because I was passing the encrypt usage for an RSA private key, which is not possible.
If your concerns are running a container 24/7 that only listens for incoming requests and then triggers a job, plus securing services for exposure, then it is possible. I suggest an event-driven approach such as the CloudEvents Player, which is perfect for triggering a JobSink with a curl, by following these steps:
Create the broker
Create the Knative Service
Bind the service with the broker
Create the Knative trigger
I encountered the same problem and am quite confused. Here is a Python version of CRC-8 that produces identical results to the above code (from https://github.com/crsf-wg/crsf/wiki/Python-Parser). However, both methods fail the packet checksum, while the packet with LINK_STATISTICS is OK. Have you found out the reason?
packet=bytearray([0xc8,0x18,0x16,0xc0,0x3,0x9f,0x2b,0x80,0xf7,0x8b,0x5f,0x94,0xf,0xc0,0x7f,0x48,0x4a,0xf9,0xca,0x7,0x0, 0x0, 0x4c, 0x7c ,0xe2, 0x9]) # here is my packet.
def crc8_dvb_s2(crc, a) -> int:
    crc = crc ^ a
    for ii in range(8):
        if crc & 0x80:
            crc = (crc << 1) ^ 0xD5
        else:
            crc = crc << 1
    return crc & 0xFF

def crc8_data(data) -> int:
    crc = 0
    for a in data:
        crc = crc8_dvb_s2(crc, a)
    return crc

crc8_data(packet[2:-1])
You can use document.addEventListener('selectstart', e => e.preventDefault()) to disable selection during drag and drop.
https://developer.mozilla.org/en-US/docs/Web/API/Node/selectstart_event
But it's not supported on iOS yet.
You can start from the beginning and get the info about deploying PHP Laravel this way.
VPS or cPanel: there are differences between them. If you want to start from the beginning:
The app runs from public/index.php.
Run composer install.
You need to rename example.env to .env and fill in the correct information.
If the app uses npm, you need to install Node.js and run npm install, npm run build, ...
At first you can run it on port 8000 and forward the URL to this port with nginx.
You need to set this information in the nginx config.
If you have errors or the deploy is not working, you can read this:
https://learn.microsoft.com/en-us/azure/mysql/flexible-server/tutorial-php-database-app
If you still have errors, you need to start with:
Push to GitHub.
Connect it with a GitHub Action.
Set your Azure information and run the deploy.
If it's not working for you, you can create a VM, start to use SSH, and deploy with aaPanel.
If none of that works for you, use this link and go through it step by step:
https://coderonfleek.medium.com/hosting-a-laravel-application-on-azure-web-app-b55e12514c46
Very informative post. I learned a lot of information from it.
System Settings --> General --> Sharing --> Enable Remote Login.
After it is enabled, port 22 will be opened automatically. If not, click the info button next to the Remote Login toggle, check that Allow full disk access is enabled, and under "Allow access for" try using an administrator or set it to All Users.
Also, a new user can be added by clicking "+" symbol at bottom.
Using the terminal, test the SSH connection from another system with these commands:
"ping destinationIPAddress"
"ssh macusername@destinationIPaddress "- shows connection status of port 22.
If still unable to access, try to delete HostKeys and packages from the folder by following the steps:
Press Win + R in Windows.
"%LOCALAPPDATA%\Xamarin\Monotouch"
Delete the files inside the folder and try to connect again.
But this means, if someone wants to increment a property in an atomic way, then an eTag must be provided to receive ETag mismatches (in case of multiple simultaneous write operations)?
Or is a CosmosDb increment (patch) operation always atomic?
The best way to list multiple services is to present them in a clear, organized, and user-friendly format that highlights value and makes navigation easy. Here are some effective strategies:
A bullet list: simple and scannable. Great for landing pages and brochures.
Example:
💻 Web Design & Development
📈 SEO & AEO Optimization
📱 Social Media Marketing
🛒 E-commerce Strategy
✍️ Content Writing & Copywriting
Group related services into categories for better structure.
Example:
Digital Marketing
SEO
PPC
Social Media Ads
Web Services
Website Design
UX/UI Optimization
Hosting & Maintenance
Service cards: visually attractive and mobile-friendly. Each card can include a title, icon/image, short description, and a "Learn More" button.
Briefly explain each service to add clarity and increase interest.
Avoid jargon—use benefit-driven language.
You need to change the color format of the image after resizing, because OpenCV works in BGR format and YOLO expects RGB:
img = cv2.resize(image, (640, 640))  # Resize to model input size
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Convert BGR to RGB
img = img.astype(np.float32)
I guess this will solve your problem
When relying on indices for event binding, you must use trackBy with *ngFor.
In the component's .ts file, add:
trackByIndex(index: number, item: any): number {
  return index;
}
Change this in the HTML:
<input [timeInput]="time" (changeTime)="onTimeChanging(arrayIndex, $event)" *ngFor="let time of timeArray; index as arrayIndex; trackBy: trackByIndex">
Simple solution for PyQt6:
from PyQt6.QtWidgets import QMessageBox

result = QMessageBox.question(self, 'Title', 'My question?')
if result == QMessageBox.StandardButton.Yes:
    ...  # do something
elif result == QMessageBox.StandardButton.No:
    ...  # do something else
You are just creating a signature field. Check this example from OpenPDF: https://github.com/LibrePDF/OpenPDF/blob/master/pdf-toolbox/src/test/java/com/lowagie/examples/objects/Signing.java
I still have the same problem and I'm trying to find the solution. Thanks for replying to us.
Did you install the Firebase package using SPM? I think FirebaseCore needs a runtime dependency, which, as far as I know, is not possible in preview at the moment.
I found a working solution to the problem mentioned above.
It only works if you own the page and have access to Facebook Business Manager. In that case you can create a system user and generate a system user access token. This token can be granted the necessary permissions for the page and, crucially, does not expire (unless the permissions will be changed or revoked).
Steps:
Set Up Facebook Business Manager: Go to Facebook Business Manager and create a Business Manager account if you don't already have one.
Add your Facebook page to your Business Manager.
Create a Facebook App: In the Facebook developer portal, create a new app (or use an existing one).
Add a System User: In Business Manager, navigate to Business Settings > Users > System Users. Click Add to create a new system user (give it a name and assign it a role, usually "Admin" for full access).
Assign Assets and Permissions: Assign your Facebook page as an asset to the system user.
Grant the system user the necessary permissions for the page.
Generate a System User Access Token: In the system user’s settings, click Generate New Token. (Select your app and the required permissions.)
Use the System User Access Token to generate a System Page Access Token
Sorry, I can't reply because of my rep (annoying).
The question is related to template APPROVAL submission. It uses positional params.
The most upvoted request is for sending templates using an already approved template, and that uses named parameters in the body.
This article helped me a lot: https://www.docker.com/blog/docker-best-practices-choosing-between-run-cmd-and-entrypoint/
Regarding your question, you should make the "core" (unchangeable) part of your container's start command in the Dockerfile using the ENTRYPOINT
instruction in exec form:
ENTRYPOINT ["executable", "arg1", "arg2"]
Then you can provide additional command line arguments in your compose.yaml using the command: field, and they will be appended to your ENTRYPOINT command. To make it more flexible, you can use variables:
command:
- $ARG3
- $ARG4
So, starting your container using docker compose may look like:
ARG3="arg3" ARG4="arg4" docker compose up
which eventually will call executable arg1 arg2 $ARG3 $ARG4
"editor.inlineSuggest.enabled": false,
"editor.parameterHints.enabled": false,
"[<language>]": {
"editor.quickSuggestions": false,
"editor.suggest.showWords": false,
"editor.suggest.showSnippets": false
},
"typescript.suggest.enabled": false,
"javascript.suggest.enabled": false
I added this to the settings.json file. Worked like a charm! :)
Did you try upgrading to the latest 2.5.11?
I want to have a grid, which I can modify before the start of the simulation, but I can't make buttons which use a function again after clicking.
Snippet:
def click_update(index):
    status[index] = abs(status[index] - 1)
    btn[index].config(bg=btn_color(status[index]))

for i in range(2500):
    files.append("button" + str(i + 1))  # <-- add plus 1
for i in range(len(files)):
    status.append(0)  # <-- remove int
    btn.append(Button())  # create the button before configuring it
    btn[i].grid(row=i // 50, column=i % 50, sticky="we")  # <-- remove int from the column keyword
for i in range(len(files)):
    btn[i].config(bg=btn_color(status[i]), command=lambda index=i: click_update(index))
Screenshot:
On the screenshot, you will see either black or white when clicking.
I was getting the same error with a .NET 8 project. Luckily I read the warning message: the installed AutoMapper version was 14, but the expected one was 12.0.1. I updated the version using the NuGet package manager and the issue got resolved.
For PDFs generated from Sweave projects with multiple files, the following quality-checking and proofreading tips are recommended:
Check the Compilation Process: Make sure all the .Rnw files knit properly without any compilation error. Errors in a single file tend to impact the document as a whole.
Automate Syntax Checks: Utilize tools like lintr on the R code to identify syntax errors early and save time while reviewing.
Proofread R Code and Outputs: Read through every block of R code and the outputs they generate (figures, tables, etc.) thoroughly to ensure they are accurate and properly referenced.
Cross-Referencing Figures and Tables: Make sure cross-references to tables and figures are accurate and point to the corresponding labels and numbers in the document.
Consistent Formatting: Establish a plain LaTeX template for captions, table formats, and headings so that the document is uniform.
Check for Missing Files: Verify all links to external files (e.g., images, data files) are correctly referenced and not missing from the project.
Spell Check and Grammar: Utilize LaTeX-compatible spell checkers or applications such as LanguageTool to check for spelling and grammar mistakes in the text.
Version Control: For team work, utilize Git or another version control system to track changes and avoid overwriting each other's files.
Pre-Export Review: It is good practice to review the document in an editor like Overleaf or TeXShop to find layout or formatting issues prior to exporting the final PDF, as they might not show up inside RStudio.
Final Read-Through: Whether or not automated tools have been used, always do a final read-through to detect any last-minute mistakes or inconsistencies.
Cross-Device Testing: Make sure the final PDF looks fine on different devices and PDF readers to ensure compatibility.
I'm running into the same problem, but the post by Dzmity is not available anymore. Where can I find the steps, please?
instead of:
#include <opencv2/opencv.hpp>
use:
#include <opencv4/opencv2/opencv.hpp>
I'm having the same issue. Were you able to resolve it?
The second answer is correct, except there is a typo: it should be pull, not push.
or
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade Pillow
or
python -m pip install --upgrade pip
python -m pip install --upgrade Pillow
See https://pillow.readthedocs.io/en/stable/installation/basic-installation.html
I am of the opposite opinion. 3 bytes vs. 8 bytes can make a huge difference, especially considering the big picture, or huge amounts of records across multiple tables, databases, and servers. It is not only storage, which is, of course, relatively cheap. If you consider the savings in memory, in index columns, and in comparison operations (mostly range scans (BETWEEN)), I would say this can really make a relevant difference.
For a better experience with lightweight APKs, I recommend using apps like Ultra Panda. Check out their website for more details: Ultra Panda APK.
I visited this website and found information on how to download any APK file for Android.
I fixed the issue by deleting
%userprofile%\AppData\Local\Microsoft\IdentityCache
and signing in again
@Shailesh gavathe, you can achieve the same as @vestland showed with plotly.express if you specify it in fig.update_traces.
import plotly.express as px

data = dict(
character=["Eve", "Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"],
parent=["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve" ],
value=[10, 14, 12, 10, 2, 6, 6, 4, 4])
fig = px.sunburst(
data,
names='character',
parents='parent',
values='value',
)
fig.update_traces(leaf=dict(opacity=1))
What you need is to create a custom menu, then use a modal dialog to create a pop-up window, then use the Google Docs built-in method replaceText to replace the values in the placeholder. Check the sample code I am sharing so you can start somewhere.
Sample Code
code.gs
function onOpen() {
DocumentApp.getUi()
.createMenu("Sample Menu")
.addItem("Summon Firm", "openForm")
.addToUi();
}
function openForm() {
const html = HtmlService.createHtmlOutputFromFile('index')
.setWidth(300)
.setHeight(150);
DocumentApp.getUi().showModalDialog(html, 'Enter Your Name');
}
function insertInput(input) {
const doc = DocumentApp.getActiveDocument();
const body = doc.getBody();
body.replaceText('{{NAME}}', input);
}
index.html
<!DOCTYPE html>
<html>
<head>
<base target="_top">
</head>
<body>
<p>What is your name?</p>
<input type="text" id="nameInput" placeholder="Enter name">
<br><br>
<button onclick="submitInput()">Submit</button>
<script>
function submitInput() {
const name = document.getElementById('nameInput').value;
google.script.run.withSuccessHandler(() => {
google.script.host.close();
}).insertInput(name);
}
</script>
</body>
</html>
Speaking of SEO, if you're managing something complex like a multilingual site, getting expert help makes a big difference.
I’ve worked with Fast Ranking, and they’re honestly the best SEO agency in Luton.
They really understand technical SEO and tailor strategies to your site’s needs.
Highly recommend them if you want to get everything optimised properly!
Another possibility using transparent color functions,
gaussian[r_, R_, w_ : 1] := Exp[-(r - R) . (r - R)/w]
dat1 = Flatten[#, 1] &@
Table[{x, y, gaussian[{x, y}, {0, 0}]}, {x, -3, 3, 0.25}, {y, -3,
3, 0.1}];
dat2 = Flatten[#, 1] &@
Table[{x, y, gaussian[{x, y}, {.5, 1.5}, 5]}, {x, -3, 3,
0.25}, {y, -3, 3, 0.1}];
mycf1[n_] := Opacity[#, If[n == 1,
ColorData["TemperatureMap"][.5 + #/2],
ColorData["TemperatureMap"][.5 - #/2]
]] &
Show[
ListDensityPlot[dat2, PlotRange -> All, InterpolationOrder -> 0,
ColorFunction -> mycf1[2]] ,
ListDensityPlot[dat1, PlotRange -> All, InterpolationOrder -> 0,
ColorFunction -> mycf1[1]]
]
Here I have just scaled the opacity of the color function so that it goes to zero when the function value is minimum, and chose a color function which is white at the bottom. You still have to be careful to fine-tune things if they are strongly overlapping. Here are some examples.
I'm using conda as env management tool. So, CTRL + Shift + P and then selecting the desired Python Interpreter worked for me.
To avoid Sphinx parsing included files as standalone reStructuredText documents, use a non-.rst
extension for include files. Using .inc
(e.g. planet.inc
) is fine.
planet.inc
Hello |planet|.
earth.rst
.. |planet| replace:: Earth
.. include:: planet.inc
Place your coupons/search route after the resource routes. Otherwise, 'search' is treated as the {coupon} dynamic parameter.
echo "<table>";
replace
echo " <table width=\"599\" border=\"1\">";
I managed to finally put together a piece of code that does what I want. The problem was that the filtering of the data removes the 2D structure of the coordinates and returns a vector form of the x/y coordinates of the remaining points.
I then transformed the filtered data back into 2D form and replaced the filtered-out coordinates with NaN values. This was also done for the interpolated pressure data, which was then used for plotting with .contourf without issues.
Below is the complete code; if someone wants to test it, they can use the input data posted below. I left all the comments from my code; if moderators think that's too much, I will remove them.
import pandas as pd
import numpy as np
import scipy
import matplotlib.pyplot as plt
import matplotlib
#-------------------------- INPUT ------------------------------
# Read data
input_filename = 'period_rot_shadow_vel-mag_p-stat.txt'
df_results = pd.read_csv(input_filename,
sep =',', skiprows = 1,
names = ['nodenumber', 'X', 'Y', 'Z','p_stat', 'vel_mag'])
#---------------------- CREATE RECTANGULAR GRID ----------------
# Rectangular grid is created with a numpy.meshgrid function, which takes 2 1D arrays to define the
#size of the rectangular grid and the spacing between the points
#get the size of the data to generate the same size array
x_data_min = df_results['X'].min()
x_data_max = df_results['X'].max()
y_data_min = df_results['Y'].min()
y_data_max = df_results['Y'].max()
#from max and min values calculate the range (size) for both axis
x_size = np.abs(x_data_max) + np.abs(x_data_min)
y_size = np.abs(y_data_max) + np.abs(y_data_min)
#create 2 numpy arrays to define the size and the spacing of the rectangular grid, arrays
#are defined as lists, so we can use the list comprehension to use the data size parameters and the
#desired spacing
#all values are in meters
rect_mesh_spacing = 0.001
#number of points on each edge is calculated from the x_size and spacing, +1 is added so that we always
#cover the whole area covered in exported data
N_x_steps = int(x_size/rect_mesh_spacing) + 1
N_y_steps = int(y_size/rect_mesh_spacing) + 1
#create an empty list to fill with the values
rect_mesh_x_axis = np.array([])
rect_mesh_y_axis = np.array([])
#for loop to fill the list of evenly spaced value in the x axis
for i in range(0, N_x_steps):
    x = x_data_min + i * rect_mesh_spacing
    rect_mesh_x_axis = np.append(rect_mesh_x_axis, x)
for i in range(0, N_y_steps):
    y = y_data_min + i * rect_mesh_spacing
    rect_mesh_y_axis = np.append(rect_mesh_y_axis, y)
#use meshgrid function to create a rectangular grid, meshgrid returns a tuple of ndarrays
xx, yy = np.meshgrid(rect_mesh_x_axis, rect_mesh_y_axis)
#create an array from meshgrid points
xy_mesh = np.array((xx,yy)).T
#---------------- FILTER RECTANGULAR GRID BASED ON DISTANCE TO ORIGINAL POINTS ----------------
# If we would not filter out only the points that are in some vicinity to the original data points
#exported from Fluent, the interpolation function would interpolate also to the points of rectangular grid
#that are far away from the actual exported data points
# Define input coordinates (imported data point coordinates) as numpy arrays
input_coords = np.array(list(zip(df_results['X'].to_numpy(), df_results['Y'].to_numpy())))
# scipy.spatial.KDTree function is used to lookup nearest-neighbour for defined points
tree = scipy.spatial.KDTree(input_coords)
dist, idx = tree.query(xy_mesh)
#define the radius of lookup of vicinity of points
radius = (rect_mesh_spacing + rect_mesh_spacing) / 2
distance_filter = dist <= radius
# Filter out the points in rectangular grid that are close to the exported data points
#this type of filter produces vectors, 1D arrays for each coordinate, filled with x/y coordinates for each
#point that was not filtered out. The filtered points are not included
xx_filt, yy_filt = xy_mesh[distance_filter].T
#-------------------- INTERPOLATE DATA -------------------------------------------------------
# scipy.interpolate.griddata(points, values, xi)
#points = data point coordinates of the values we want to interpolate
#values = the values at the points
#xi = points at which to interpolate data
#we will interpolate using the NEAREST method
#because the input data is in vector form, p_stat_interpol will also be in vector form
p_stat_interpol= scipy.interpolate.griddata((df_results['X'], df_results['Y']), df_results['p_stat'],
(xx_filt, yy_filt), method = 'nearest')
#----------------------- STRUCTURING DATA TO GRID -----------------------------------------------
# As mentioned, the filtered and interpolated data is in vector form and has no connection to the grid and
#this is the problem for contourf plot function, which needs pressure data in 2D form. One possible way is to
#transform the vector form to the 2D form and replace the missing values (of the points that we filtered out
#before) with NaN
# Numpy unique function is used to get the unique values for coordinates, this way we get the grid axes
x_unique = np.unique(xx_filt)
y_unique = np.unique(yy_filt)
# We construct a new grid of NaN values
p_stat_grid = np.full((len(y_unique), len(x_unique)), np.nan)
# Fill the grid data with the p_stat_interpol values where possible, other values will stay NaN
for xi, yi, zi in zip(xx_filt, yy_filt, p_stat_interpol):
    ix = np.where(x_unique == xi)[0][0]
    iy = np.where(y_unique == yi)[0][0]
    p_stat_grid[iy, ix] = zi
# Now we create a new mesh for plotting, this mesh will be in structured 2D form (we could probably just use the
#first xx, yy mesh from the beginning of the code
X_grid, Y_grid = np.meshgrid(x_unique, y_unique)
#------------------------- PLOT PRESSURE CONTOURS ---------------------------------
fig, axs = plt.subplots(1,4)
fig.set_size_inches(18, 6)
# Define number of contour levels
contour_levels = 50
# Plot non-interpolated data
axs[0].set_title('xx,yy')
axs[0].scatter(xx, yy, color = 'red')
axs[1].set_title('xx_f,yy_f')
axs[1].scatter(xx_filt, yy_filt, color = 'red')
axs[2].set_title('scat xx_f,yy_f,p_int')
axs[2].scatter(xx_filt, yy_filt, c = p_stat_interpol,
norm = matplotlib.colors.Normalize(vmin = np.min(p_stat_interpol),
vmax = np.max(p_stat_interpol)),
cmap = 'bwr')
axs[3].set_title('cont xx,yy, p_int')
axs[3].contourf(X_grid, Y_grid, p_stat_grid, contour_levels, cmap='bwr')
plt.show()
Data Sample:
nodenumber, x-coordinate, y-coordinate, z-coordinate, pressure, velocity-magnitude
1,-2.745435307E-02, 2.104994697E-03,-2.185633883E-02,-8.094133146E+03, 4.686428765E+01
2,-2.738908254E-02, 3.158549336E-03,-2.185629997E-02,-8.104864467E+03, 4.606212337E+01
3,-2.757460451E-02, 5.167262860E-03,-2.185623337E-02,-8.093051020E+03, 4.495193692E+01
4,-2.733931890E-02, 5.656880572E-03,-2.185622490E-02,-8.116433300E+03, 4.405860916E+01
5,-2.759126430E-02, 6.070293390E-03,-2.185618260E-02,-8.084872243E+03, 4.428866265E+01
6,-2.737704207E-02, 6.507508463E-03,-2.185627796E-02,-8.106663765E+03, 4.361527022E+01
7,-2.762655064E-02, 0.000000000E+00,-2.185611682E-02,-8.057108514E+03, 7.720657592E+01
8,-2.762655053E-02, 1.920716437E-05,-2.185609461E-02,-8.057029042E+03, 7.591235710E+01
9,-2.762654491E-02, 4.228439350E-05,-2.185606795E-02,-8.057013286E+03, 7.368296305E+01
10,-2.762653069E-02, 6.997596792E-05,-2.185603598E-02,-8.057068875E+03, 7.113144845E+01
11,-2.762650289E-02, 1.032044066E-04,-2.185599766E-02,-8.057152704E+03, 6.841504839E+01
12,-2.762645410E-02, 1.430766767E-04,-2.185595174E-02,-8.057232638E+03, 6.565721945E+01
13,-2.740227795E-02, 1.197447086E-03,-2.185604704E-02,-8.086002417E+03, 4.769658095E+01
14,-2.729995772E-02, 4.555508762E-03,-2.185613769E-02,-8.113811400E+03, 4.473595941E+01
15,-2.762637329E-02, 1.909744910E-04,-2.185589666E-02,-8.057315245E+03, 6.283751408E+01
16,-2.762624634E-02, 2.484489809E-04,-2.185583068E-02,-8.057450247E+03, 5.993119264E+01
17,-2.762465252E-02, 6.184882281E-04,-2.185591455E-02,-8.059810059E+03, 5.089516358E+01
18,-2.762605091E-02, 3.174151315E-04,-2.185575167E-02,-8.057686244E+03, 5.707278224E+01
19,-2.762531127E-02, 4.994699682E-04,-2.185577109E-02,-8.058658000E+03, 5.248349769E+01
20,-2.762366891E-02, 7.612957389E-04,-2.185583175E-02,-8.062030940E+03, 4.956215663E+01
The problem was that my JDBC driver version did not support JDK 21.
Link: https://www.oracle.com/database/technologies/appdev/jdbc-drivers-archive.html
I tried these versions and all worked.
23.8.0.25.04
23.7.0.25.01
21.17.0.0
For schema comparison only (not data at the moment) Redgate PostgreSQL Compare and MySQL Compare allow you to compare two databases.
They both have a Community edition which is free to use for students, educators, small businesses, and non-commercial open-source projects.
Found a fix. import Step from 'shepherd.js/src/types/step'; should be removed, and import type { StepOptions } from 'shepherd.js'; should be used instead. Step.StepOptions is no longer valid in the latest version of Shepherd.
A simple restart of the simulator should fix this issue.
Uninstall the app -> restart simulator -> rebuild the application.
The main issue was that we forgot to add the IgnoreAntiforgeryToken annotation.
It's all bad. YFinance is not the only game in town. Besides, people can scrape their pages if they don't play ball and give us a working API. What is the point of this? They are clearly not supporting developers.