I have one half of an answer, and two full answers if you allow a frame challenge.
Half an answer
Load has a constructor that allows injecting your own factory into the deserialization process, and the functions there have access to Node objects that do carry the anchor.
So the approach would be something like this:
Map<Object, Object> metadata = new HashMap<>();
Load loader = new Load(
    settings,
    // Inject a StandardConstructor; BaseConstructor does not know about Map etc.
    new StandardConstructor(settings) {
        @Override
        protected Object constructObjectNoCheck(Node node) {
            // Let it construct the Pojo from the Node normally.
            final Object result = super.constructObjectNoCheck(node);
            // Now that you have both Pojo and internal Node,
            // you can exfiltrate whatever Node info that you want
            // and do metadata.put(result, someInfoGleanedFromNode)
            return result;
        }
    });
The snag is: the Node created for the anchor does not generate a Pojo. I.e., you have an anchor, but you don't really know to which object in your deserialized nested Map/List that anchor corresponds; you'll likely have to walk the Node tree and find the correct node.
So, maybe somebody else wants to add instructions on how to walk the Node tree; I do not happen to know that.
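Since nobody has yet: an untested sketch of such a walk, assuming snakeyaml-engine's Node API (MappingNode.getValue() yields NodeTuples, SequenceNode.getValue() yields child Nodes; collectAnchors is a made-up name, and Anchor.getValue() is assumed to return the anchor name):

```java
// Untested sketch: recursively map anchor names to their Nodes.
static void collectAnchors(Node node, Map<String, Node> anchors) {
    node.getAnchor().ifPresent(a -> anchors.put(a.getValue(), node));
    if (node instanceof MappingNode mapping) {
        for (NodeTuple tuple : mapping.getValue()) {
            collectAnchors(tuple.getKeyNode(), anchors);
            collectAnchors(tuple.getValueNode(), anchors);
        }
    } else if (node instanceof SequenceNode sequence) {
        for (Node child : sequence.getValue()) {
            collectAnchors(child, anchors);
        }
    }
}
```

You could then combine this with the constructor override: when constructObjectNoCheck runs for a node that appears in the anchors map, record the resulting object under that anchor name.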
Frame challenge: Do you really want the anchor name?
If this is just about error messages, each Node has a startMark attribute that's designed specifically for error messages that relate to the Node, so you can do this:
Map<Object, Mark> startMarks = new HashMap<>();
Load loader = new Load(
    settings,
    new StandardConstructor(settings) {
        @Override
        protected Object constructObjectNoCheck(Node node) {
            final Object result = super.constructObjectNoCheck(node);
            node.getStartMark().ifPresent(mark -> startMarks.put(result, mark));
            return result;
        }
    });
e.g. for this YAML snippet:
servers:
  - &hetzner
    host: <REDACTED>
    username: <REDACTED>
    private_ssh_key: /home/<REDACTED>/.ssh/id_rsa
the servers label has this start mark:
in config.yaml, line 1, column 1:
    servers:
    ^
To get this output, I initialized the settings like this:
var settings = LoadSettings.builder().setUseMarks(true).setLabel(path.toString()).build();
setUseMarks makes it generate start and end marks so you have these texts. setLabel is needed to get the "in config.yaml" output; otherwise, you'll see something like "in reader" (if you pass in a stream reader), which is pretty unhelpful.
Frame challenge: Maybe give the anchored subobject a name?
Something like this:
unit: &kg
  name: Kilogram
  shorthand: kg
I couldn't reproduce the images-not-loading issue, but if you are having trouble with the view resizing, have you considered giving the same frame size (particularly the height) to the text as well?
As in,
if isImageVisible {
    Image(imageName)
        .resizable()
        .scaledToFit()
        .frame(width: 100, height: 100)
        .background(Color.gray.opacity(0.2))
} else {
    Text("Image is hidden")
        .frame(width: 200, height: 100)
}
The only thing you have to do is toggle the drop-down.
Can somebody give a new answer matching the new Vaadin docs?
I was facing a similar issue and then found out that there's a search-bar below the section where we add the environment variables. It's essentially a section which links your variables to a project. By selecting the relevant project I was able to solve this issue.
This answer is for anyone with the same issue. Try changing views to see the result. For me, when I opened the 3D view, I found the impact of changing the height and width of an element.
This is the right way to pre-grant permissions: https://source.android.com/docs/core/permissions/runtime_perms#creating-exceptions.
But an accessibility service isn't controlled by a permission. A service is enabled if it's in this list in settings: https://cs.android.com/android/platform/superproject/main/+/main:frameworks/base/core/java/android/provider/Settings.java;drc=ad46de2aa9707021970cb929d016b639f98a1ac7;l=8615.
Modify the code maintaining that setting to pre-enable your service. Or using the existing defaultAccessibilityService configuration (set it with a product overlay file) might work.
The simplest way to stop a user from turning it off is probably to modify the accessibility settings UI.
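For quick testing on a device you control, the same setting can also be written with adb (the component name below is a placeholder; the keys correspond to Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES and ACCESSIBILITY_ENABLED):

```shell
# Placeholder component name; use your own package/service class.
adb shell settings put secure enabled_accessibility_services \
  com.example.app/com.example.app.MyAccessibilityService
adb shell settings put secure accessibility_enabled 1
```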
Is there any advantage or disadvantage to using a prime number as the length of a password? For example, a password that is 11, 13, 17, 19, or 23 characters long. The AI-driven Google search says maybe. I interpret that to mean absolutely not.
The solution is in this formula:
=map(B2:B, C2:C, lambda(time, weight, if(isbetween(time, value("20:55"), value("21:05")), weight - offset(C2,counta(C3:C99999),0,1,1), iferror(ø) ) ))
thank you so much!!!
I have the same problem, but s3api command as above doesn't work, could you help me please?
I think there might be a small misunderstanding. With Standalone Components in Angular, you don't actually need to import them into App.component.ts. The key benefit is that Standalone Components are self-contained, meaning you can directly use them in templates or reference them in other components without needing an intermediary NgModule.
The idea of "importing into App.component.ts" makes more sense when you're dealing with components inside a traditional NgModule, where you would register components in the module. However, Standalone Components work independently, so there's no need for that extra step.
Regarding the benefits of NgModules, they offer fine-grained control over things like dependency injection, routing, and lazy loading, which is especially useful in larger, more modular applications.
However, I recommend using Standalone Components for the following reasons:
• Better performance (due to less overhead)
• Less boilerplate (no need to manage NgModules for simple components)
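For illustration, a minimal sketch (component names are made up): a standalone component declares its own dependencies via the imports metadata, so no NgModule is involved anywhere:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-hello',
  standalone: true,
  template: `<p>Hello from a standalone component</p>`,
})
export class HelloComponent {}

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [HelloComponent], // declared here, not in an NgModule
  template: `<app-hello />`,
})
export class AppComponent {}
```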
I don't know why, but on a Raspberry Pi, adding
from picamera2 import Preview
solved it.
The operator "/" [1] can be used for the concatenation by row of matrices. In this case
my $sp=$dense_unit_matrix/$ar; $sp=$sp/$extended_matrix; $sp=$sp/$arr3;
works as well.
[1]: https://polymake.org/doku.php/user_guide/howto/matrix_classes#:~:text=GenericVector%26%2C%20const%20GenericVector%26)%3B-,Create%20a%20block%20matrix,-%2C%20virtually%20appending%20the
Use a mutex to read the current value of a variable. The mutex approach is simpler than communicating over a channel.
type sensor struct {
    mu    sync.Mutex
    value int
}

func (s *sensor) run() {
    for {
        s.mu.Lock()
        s.value += 1
        s.mu.Unlock()
        time.Sleep(100 * time.Millisecond)
    }
}

func (s *sensor) get() int {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.value
}
Call like this:
temperature := &sensor{value: 42}
go temperature.run()
log.Println("EARLY TEMP READING:", temperature.get())
time.Sleep(3 * time.Second) //LET SOME ARBITRARY TIME PASS
log.Println("LATER TEMP READING:", temperature.get())
It looks like there’s a small issue with the syntax in your command. You have curly braces {} in the file paths, which are causing the error. Try removing them and make sure your paths are correctly formatted. Here’s an updated version of your command:
@echo off
cd C:\Users\misha\Desktop\performance_monitor
C:\Users\misha\AppData\Local\Programs\Python\Python313\Lib\site-packages\pip\app.py
pause
Make sure to double-check that the file paths are correct and that Python is installed properly on your system.
After doing a lot of digging, I found a solution.
First, right-click on your main project directory folder and click Properties https://i.sstatic.net/jymk7xmF.png
Then go into C/C++ https://i.sstatic.net/nuxGOd4P.png
After that, go into Additional Include Directories and edit it https://i.sstatic.net/E45T38EZ.png
Last, enter the directory of the folder containing your header file, for example $(SolutionDir)Dependencies\GLEW\include, where $(SolutionDir) is the path to the main folder (in this case engine) and then we path to the folder containing the header https://i.sstatic.net/O9crjAl1.png
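For reference, the same setting lives directly in the .vcxproj file; an untested sketch mirroring the GLEW example above (element placement may vary per build configuration):

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>$(SolutionDir)Dependencies\GLEW\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
</ItemDefinitionGroup>
```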
Hope this helps!
I also have this problem, did you fix it?
Here are the steps to follow:
Ensure Rtools is installed on your system to compile source packages.
Ensure that the path provided points to the correct location of rcompanion_2.4.36.tar.gz.
For example (note the forward slashes; backslashes would need to be doubled in R strings):
install.packages("E:/download/rcompanion_2.4.36.tar.gz", repos = NULL, type = "source")
I forced the Java version in build.gradle.kts like this:
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}
which fixed the problem. I could remove the snippet later and the build would still work without it. I guess IDEA or Gradle caches everything forever.
All this is demotivating. I'm getting this (for Python 3.6): I type "pip3 install pillow" and get:

Collecting pillow
  Could not fetch URL https://pypi.python.org/simple/pillow/: There was a problem confirming the ssl certificate: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749) - skipping
Could not find a version that satisfies the requirement pillow (from versions: )
No matching distribution found for pillow
For me the issue was solved after I packed the jar including all dependencies by customizing the "jar" task (pay attention also to the guidance in the comments); see the guidance in this answer.
For me, using Windows 11 + WSL, I had to do the following steps:
First I visited NVIDIA's website to download cuDNN for Ubuntu ( https://developer.nvidia.com/rdp/cudnn-archive ). After logging in, my browser automatically started downloading it, but I had the option to copy the full download link from the download that was in progress. It was quite long.
Then, on my Ubuntu terminal (WSL) I typed the following to download the deb package in there (please replace the long link with whatever you copied on the step above):
wget -O cudnn-local-repo.deb "https://developer.download.nvidia.com/compute/cudnn/secure/8.9.7/local_installers/12.x/cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb?Wr2dTCzXY1M3FuHmuQIxUK9phLLYKkG8BCndRJ4TPwJPO3R_E9SAiojXcPKK7ivtaPbHXj49L1MqhjqfQKyuZF7B33dx5y8XDUz96_EPovRBytbRIwyNgSsNzQNxHoTeUQXrMcCGkogKQ8yADLABUQb4eIoO0HcuSDrKwbdKJvDHVJ-NboNM3kr9DGkQkUlGJ82oyQEM2vO_b51L7LN91DboWEo=&t=eyJscyI6IndlYnNpdGUiLCJsc2QiOiJkZXZlbG9wZXIubnZpZGlhLmNvbS9yZHAvY3Vkbm4tYXJjaGl2ZSJ9"
After the download was finished, I installed cuDNN like this (using the file name from the wget step):
sudo dpkg -i cudnn-local-repo.deb
The command failed telling me to copy the certificates to a certain path before proceeding: sudo cp /var/cudnn-local-repo-/cudnn-local--keyring.gpg /usr/share/keyrings/
Then i retried:
sudo dpkg -i cudnn-local-repo.deb
sudo apt-get update
sudo apt-get install libcudnn8
sudo apt-get install libcudnn8-dev
Then I needed to copy one of the installed files inside the specific Python that's being used with pyenv. I didn't know where it was, so I used this command to find it:
sudo find / -name "libnvrtc*"
I learned that the file I needed was: ~/.pyenv/versions/3.10.15/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.11.2
I needed a file called libnvrtc.so only, not libnvrtc.so.11.2, so I created a symbolic link:
ln -s ~/.pyenv/versions/3.10.15/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.11.2 libnvrtc.so
After that, when I tried the program I wanted again, the warning "Applied workaround for CuDNN issue, install nvrtc.so" was gone.
I think you can just add a big margin to the bottom div, and it will do what you want. In my case, I added mt-64.
<div class="mt-64 h-24 w-full bg-red-400">BOTTOM DIV</div>
I found a very easy solution to handle this. Docs: https://clerk.com/docs/references/nextjs/auth
import React from "react";
import { auth } from "@clerk/nextjs/server";
import { getDictionary } from "@/locales/dictionary";
import TenantTable from "@/components/TenantTable/TenantTable";

export default async function Page() {
  const dict = await getDictionary();
  const { userId, redirectToSignIn } = await auth();
  if (!userId) return redirectToSignIn();
  return (
    <div>
      Hello, {userId}
    </div>
  );
}
In Vue 3 this works for me without a warning:
const emit = defineEmits({
  IDialogConfirmComponentEvents: "IDialogConfirmComponentEvents",
});
Very interesting approach; I have a similar issue. My problem is that I need to use sticky sessions for two upstreams, each having the same number of upstream targets, but I need to have them pairwise. From your example above, that would mean that a user is forwarded to "server1:8080" and "server1:9080", and not to "server2:9080". So, some kind of affinity between the upstream hosts. I could not find a way to make this work.
Dude! In Postman, your JSON looks correct, but are you sure that this.puppy.race is really not undefined? You don't need to convert your object, because the Angular HTTP client will handle that for you. Let Angular handle it.
const puppyJson = {
id: null,
puppyId: this.puppy.puppyId,
name: this.puppy.name,
color: this.puppy.color,
weight: this.puppy.weight,
height: this.puppy.height,
image: this.puppy.image,
characteristic: this.puppy.characteristic,
race: { id: null, race: this.puppy.race?.race },
price: this.puppy.price
};
You can also debug your object to make sure everything looks correct before sending it.
console.log(puppyJson)
Patchwork is good at aligning plots:
library(patchwork)
g1 + g2
Listen to webhooks on the backend, then connect your server to your frontend through WebSockets.
As per the comments from paleonix, the solution is the compile option:
-arch=sm_75
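For example, a typical invocation might look like this (file names are placeholders; sm_75 targets Turing-class GPUs, compute capability 7.5):

```shell
nvcc -arch=sm_75 kernel.cu -o kernel
```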
You can use an anti-spam bot (such as https://kolas.ai/kolasaibot/) which recognizes spam and blocks spammers.
Of course a repository can be deleted; the whole JCenter repository was "deleted" and is gone for good now.
This also (mostly) fixed my issue with jumping content while using the KeyboardAvoidingView FYI.
The outline isn’t showing on the first <a> tag because <a> tags are inline by default, meaning they only take up as much space as their content. When you put a larger block element, like a <div>, inside an inline <a>, the outline doesn’t wrap around it correctly. Setting display: flex on the <a> tag fixes this by making it behave like a block element, allowing the outline to cover the entire content.
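A minimal sketch of the fix (sizes are made up):

```html
<style>
  a { display: flex; } /* lets the outline wrap the block child */
</style>
<a href="#">
  <div style="width: 200px; height: 80px;">block content inside the link</div>
</a>
```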
Use compressed layout - https://matplotlib.org/stable/users/explain/axes/constrainedlayout_guide.html#compressed-layout
fig, axs = plt.subplots(2, 2, layout='compressed')
It resizes the entire figure to remove redundant white space.
In case you land here because you wanted to put a UserControl into something and it doesn't stretch, change the following in your XAML:
Height="600" Width="800"
to
d:DesignHeight="600" d:DesignWidth="800"
Change your indentation:
graph LR
    A-->B
Perhaps you are after IAR's C-RUN runtime heap analysis. It automatically instruments the code so that preventable leaks can be detected on the fly. It is available as an add-on, though there is a trial version as well.
https://github.com/iarsystems/crun-evaluation-guide?tab=readme-ov-file#heap-checking-capabilities
$params = @{
    DnsName = 'www.fabrikam.com', 'www.contoso.com'
    CertStoreLocation = 'Cert:\LocalMachine\My'
}
New-SelfSignedCertificate @params

OR

New-SelfSignedCertificate -DnsName 'www.fabrikam.com','www.contoso.com' -CertStoreLocation Cert:\LocalMachine\My

These two examples create a self-signed SSL server certificate in the computer MY store with the subject alternative names www.fabrikam.com and www.contoso.com and the Subject and Issuer name set to www.fabrikam.com. (The first name will be used for the Subject/Issuer unless otherwise indicated.)
I spent a week trying to solve a similar problem. It turned out to be the bundler gem. The system bundler on my server was a different version than the bundler version on my development machine. Check your Gemfile.lock file. If the last line says BUNDLED WITH a different version than "bundle -v" reports from the command line, you should work to get them on the same version.
BTW, I found the answer in this long ticket: https://github.com/phusion/passenger-docker/issues/409
I found a solution, although not the most elegant. Just set the background color with a style sheet with 1/255 opacity:
window.setStyleSheet('background-color: #01000000;')
In my case the problem was $attributes: I had protected $attributes = ['my_attribute']; but I didn't have a method for that attribute.
Something that worked for me was to do the following: For Linux:
So .test works fine without redirection to http when using Selenium
I'm not entirely sure how I would go about this, but I feel like the main issue you're running into is that the highlights in the input image are too blown out. I would start by evening out the lighting in the original image before extracting the fingerprints. You could either bring the whites down in brightness or do a more complicated approach with highlights.
You could probably find an open-source image editor with that functionality, copy over what you need into a function in your script, then run the rest of your script on the modified image.
There is a known bug where a local value might not take effect.
You normally want to change the group list, rather than drop it, to match your new identity after the suid/sgid-assisted switch. You need the group list to match your new uid (and actually gid too, as the group list usually includes the gid itself).
Unfortunately, as was already mentioned, currently you need CAP_SETGID for calling initgroups(). However, in an attempt to solve that, I posted a few proposals to LKML.
This one allows you to "restrict" a group list, which is somewhat similar to dropping it, but doesn't give you any extra access rights if they were blocked by one of the groups in the list.
This one actually allows you to get the correct group list, but you need a privileged helper process to assist you with that task.
I personally prefer the second solution as it gives you a correct group list, but the first one is at least very simple and doesn't require any helper process or extra privs.
Unfortunately, both patch sets only yielded one review comment each, which suggests a lack of interest in this problem among LKML people. Maybe those who are interested in this problem here can evaluate my patches and offer some more discussion of them on LKML.
Add the API key to the environment variables. The environment variable key should be "NVIDIA_API_KEY", with your API key as the value. Update the langchain-nvidia-ai-endpoints package to version 0.3.5. This worked for me with your code; please check if it works for you.
Stupid mistake... I had a stray createSupabaseClient() in the form.tsx component, when it only belonged in the API route. Deleted it from form.tsx, and everything is working.
You should check the namespace in the file and ensure it is pointing to the right directory where the UserAccessMiddleware is located.
I also want to know the same thing. If you find anything regarding this, could you please share it with me as well? Thanks!
I ended up needing to navigate further down the DOM to a parent element that didn't have so much stuff in it. Then, I waited for a button to populate within that specific parent div:
await myPageSection.waitForSelector('button', { timeout: 15000 }).catch(() => {
  console.log('No buttons found within pageSections[3] after waiting.');
});
And finally, I ran through all the buttons to find the one I needed. I think classes and innerText were dynamically changing, which is part of the reason why I couldn't target it:
const allButtons = await myPageSection.$$('button')
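The final loop looked roughly like this (untested sketch; 'Load more' stands in for whatever text you need to match on):

```javascript
// Inside an async function, with `allButtons` from the line above.
for (const button of allButtons) {
  const text = await button.evaluate(el => el.innerText.trim());
  if (text === 'Load more') { // placeholder target text
    await button.click();
    break;
  }
}
```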
If you use the same way to implement i18n in a Next.js app with the App Router (on Next.js v13+), it's good to know:
The locale, locales, defaultLocales, domainLocales values have been removed because built-in i18n Next.js features are no longer necessary in the app directory.
Taken from the Migrating Routing Hooks documentation.
Useful links for implementing i18n with the App Router:
From the requirements you've put forth, have you looked at/considered WCF + PowerShell? This would be far easier to control access and limit what can be run on the remote end.
I have an example of how to do this, both in the PowerShell commandlet and the WCF Service Activity side.
I spent almost half a day fixing a similar issue. It turns out I didn't run the commit on the Oracle database. This link helped me.
@wimpix - this is the solution I was looking for - do you know if there is a way to save my macro to the personal macro workbook if I previously created it in a regular workbook? Or do I have to record it again?
Thanks!
Disable certificate verification and run the nltk.download() command. It worked for me: it opens a pop-up window where you click the Download button.
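If you'd rather do it in code, the usual workaround is to swap in an unverified SSL context before calling nltk.download(). Note that this disables certificate verification process-wide, so treat it as a one-off for the download:

```python
import ssl

# Fall back to an unverified HTTPS context so nltk.download() can fetch
# its index despite CERTIFICATE_VERIFY_FAILED.
try:
    _unverified = ssl._create_unverified_context
except AttributeError:
    pass  # very old Pythons don't verify certificates anyway
else:
    ssl._create_default_https_context = _unverified

# import nltk
# nltk.download()  # should now open the downloader without the SSL error
```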
In the end I found the solution: admin should be initialized with admin.initializeApp, preferably as a global.
Yes, it is possible to use inline conditional statements in concatenation in JavaScript, but you need to ensure the correct use of parentheses to avoid syntax errors. The issue in your example is due to the precedence of the + operator and the ternary operator ? :.
Here is the corrected version of your code:
console.log("<b>Test :</b> " + ("null" == "null" ? "0" : "1") + "<br>");
By wrapping the conditional statement in parentheses, you ensure that it is evaluated correctly before concatenation.
My issue is with date-fns 2.9.0 not installing. I went to Heroku and tried to run npm install manually, and it sits there, sometimes timing out after a long period of time.
Then sometimes it says:

extracted to /app/server/node_modules/.staging/date-fns-74841dec (4779ms)
The authenticity of host 'github.com (140.82.113.3)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names
I searched on GitHub here and the SHA256 data does match OK. So I wonder if this is a Heroku issue where known_hosts needs an additional entry? I can't find the known_hosts on my dyno and am not confident it would stick around even if I found it.
This could be a situation where npm install is going to GitHub instead of a regular npm registry. I'm new and not sure. But I am loading a specific tag of a GitHub repository right after date-fns, which may be the real issue.
These are good answers, but I would also suggest checking the NUnit documentation, since the setup might change over time: https://docs.nunit.org/articles/nunit/technical-notes/usage/Trace-and-Debug-Output.html
Today, when I was looking at the same problem in VS 2022, I got it working by:
a) Adding the code in the 2nd example from the page referenced:
[OneTimeSetUp]
public void StartTest()
{
    if (!Trace.Listeners.OfType<ProgressTraceListener>().Any())
        Trace.Listeners.Add(new ProgressTraceListener());
}
b) Changing the output from "Debug" to "Tests" in Visual Studio:
c) Then I was able to see the Console.WriteLine("Whatever!") output that's inside my test.
The problem was in the syscall function: as @JhonBollinger noticed, mp_parent is the index of the parent in mproc. So it should be:
if ((child->mp_flags & IN_USE) && mproc[child->mp_parent].mp_pid == proc->mp_pid) {
    child_count++;
}
import numpy as np

arr = np.arange(16).reshape(4, 4)  # example input

mask = np.full(arr.shape, True)
mask[1:-1, 1:-1] = False  # keep only the border elements
print(arr[mask])
The problem you're running into is a common pitfall, stemming from the fact that your AppSheet app is using a cache service to try and make things efficient and run faster.
When you open a file in your AppSheet app for the first time, the file is downloaded from wherever and then stored on your device for 6 hours.
When you make changes to the file, unfortunately the file is not updated in the cache. There is no mechanism that triggers to all devices that have ever opened the app that they then need to discard their current version of the file and download the new one. (You can see how that might be a bit of a heavy thing, with a whole bunch of pitfalls and problems to make that a smooth running thing 100% of the time. That's why they haven't made it, and they just stick with the fact that it's stored in cache for 6 hours.)
After the 6 hours elapses, you will see the changes inside the file if you try and open it again.
You can rest assured though, whenever you use that file for an email or something, whenever you send it out, the system will use the file from the data source not what you have cached on your device. It can be a little disconcerting when you open the file after you just made changes and you don't see any changes, I totally feel you on that. 🤓
The expression looks good except kubernetes="*". Are you sure it should be like this, and not kubernetes=~".*"?
The expression will trigger if any new job with the tmp word in its value is written to VictoriaMetrics. It will continue returning a > 0 value for up to 15min.
Don't flush the serial port buffers. I just spent about a week trying to determine where data was getting lost. I think behavior differs between various serial ports; take the ESP32-S2, which has native CDC and also a serial converter chip. I think the OS driver may implement port methods differently. On the ESP32-S3 CDC port, if flushIOBuffers() is called immediately after a write(), the data may never be transmitted.
I haven't researched all of the issues, and there are some things that could be monitored, like buffer sizes, setting to blocking, etc.
I managed to find it by getting a link from the 3 dots -> "Copy link to task", which got me ... the bold part being the taskId, e.g. as seen in the export-to-Excel format. The URL query parameters can apparently be discarded and it still works.
But VM doesn't appear to support fan-out federated query or using object storage, so I can't just drop it in to replace Thanos too.
In the VM ecosystem, fan-out queries aren't needed. Usually, Prometheus (or stateless scrape agents) is used for scraping and delivering metrics to a central VM cluster. Data usually has about 30-60s freshness and can be queried right away from the central cluster, providing a global query view.
Yes, VictoriaMetrics doesn't support object storage for historical data. But it is very efficient in terms of data compression, so storing everything on disks would probably cost the same money and provide better query performance.
I have a big problem with the 30 seconds also. It's very difficult to read the numbers and type them into the app asking for them within that limited time. It's completely unreasonable especially for people with physical disabilities. Would love to change it to (at least) one minute. Is there any way to do that?
In fact, due to this issue I would not use Google Authenticator at all if it weren't mandated by the government agency that now requires it for logins.
Very confusing code... and it's not entirely clear what exactly is needed. If you want to constantly add new values, then try removing
this.other_dynamic_filters = [];
I hope this helps; otherwise, please describe the problem in more detail.
Nov 2024. Monitor and Improve > Policy & Programmes > App Content.
A suggestion for Apache HTTPD and mod_jk: if you prefer "anonymous" as REMOTE_USER for Tomcat:
<Location unprotectedURL>
    RewriteEngine On
    RewriteRule .* - [E=JK_REMOTE_USER:anonymous]
</Location>
https://tomcat.apache.org/connectors-doc/common_howto/proxy.html
To disable the Shibboleth session requirement:
<Location unprotectedURL>
    ShibRequestSetting requireSession 0
</Location>
The combination should give you a publicly accessible URL with a user set behind the scenes.
Running the same command in cmd (run as admin) did the job.
This can be achieved using Render Hooks: https://filamentphp.com/docs/3.x/support/render-hooks
For this you would use TablesRenderHook::TOOLBAR_SEARCH_AFTER (after the search container).
Here's a guide on how to implement this; see #6: https://laraveldaily.com/lesson/filament-visual-customize/render-hooks-custom-code-in-forms-header-footer-sidebar
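A sketch of what that could look like, e.g. in a service provider's boot() method (the rendered Blade string is a placeholder; FilamentView and TablesRenderHook come from Filament v3):

```php
use Filament\Support\Facades\FilamentView;
use Filament\Tables\View\TablesRenderHook;
use Illuminate\Support\Facades\Blade;

FilamentView::registerRenderHook(
    TablesRenderHook::TOOLBAR_SEARCH_AFTER,
    fn (): string => Blade::render('<span>Custom content after the search box</span>'),
);
```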
SCIM is a REST API-based protocol. Requests for SCIM are performed via HTTP requests (GET, POST, PATCH..) and need an HTTP URL. Even if the application is hosted "on-prem", it needs to have an HTTP server running to handle the HTTP request/response processing. The URL doesn't need to be externally resolvable, but does need to be accessible to the provisioning agent and resolvable via the internal DNS available to the server the agent is running on.
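For illustration, a provisioning agent's request could look like this (host and token are placeholders; the URL only needs to resolve on the internal network):

```http
GET /scim/v2/Users?filter=userName%20eq%20%22jdoe%22 HTTP/1.1
Host: scim.internal.example.net
Authorization: Bearer <token>
Accept: application/scim+json
```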
Just a little bit of math:
img = ImageGrab.grab()
# crop with correct scale
screen_width = self.master.winfo_screenwidth()
screen_height = self.master.winfo_screenheight()
x1 = x1 / screen_width * img.width
x2 = x2 / screen_width * img.width
y1 = y1 / screen_height * img.height
y2 = y2 / screen_height * img.height
img = img.crop(box=(x1, y1, x2, y2))
In addition, don't use bbox because it will reduce the quality. Use img.crop() instead.
Btw, your code is not usable on Mac. But I have a cross-platform version of a snipping tool, DragScreenshot.py:
"""
Ver 1.0
StackOverflow answer: https://stackoverflow.com/a/79166810/18598080
Bruh. Finally made it.
It mainly supports Mac (tested) and Windows (not tested but supposed to work). On Linux (not tested), the dragging view will not be totally transparent.
Example:
import tkinter as tk
import TkDragScreenshot as dshot

root = tk.Tk()
root.withdraw()

def callback(img):
    img.save("a.png")
    quit()

def cancel_callback():
    print("User clicked / dragged 0 pixels.")
    quit()

dshot.drag_screen_shot(root, callback, cancel_callback)
root.mainloop()
"""
import platform
import tkinter as tk
from PIL import ImageGrab
using_debug_mode = None

class DragScreenshotPanel:
    def __init__(self, root: tk.Tk, master: tk.Toplevel | tk.Tk, callback=None, cancel_callback=None):
        self.root = root
        self.master = master
        self.callback = callback
        self.cancel_callback = cancel_callback
        self.start_x = None
        self.start_y = None
        self.rect = None
        self.canvas = tk.Canvas(master, cursor="cross", background="black")
        self.canvas.pack(fill=tk.BOTH, expand=True)
        self.canvas.config(bg=master["bg"])
        self.canvas.bind("<Button-1>", self.on_button_press)
        self.canvas.bind("<B1-Motion>", self.on_mouse_drag)
        self.canvas.bind("<ButtonRelease-1>", self.on_button_release)

    def on_button_press(self, event):
        self.start_x = event.x
        self.start_y = event.y
        self.rect = self.canvas.create_rectangle(self.start_x, self.start_y, self.start_x, self.start_y, outline='white', width=2)

    def on_mouse_drag(self, event):
        self.canvas.coords(self.rect, self.start_x, self.start_y, event.x, event.y)

    def on_button_release(self, event):
        x1 = min(self.start_x, event.x)
        y1 = min(self.start_y, event.y)
        x2 = max(self.start_x, event.x)
        y2 = max(self.start_y, event.y)
        self.canvas.delete(self.rect)
        dy = abs(y2 - y1)
        dx = abs(x2 - x1)
        if dy * dx != 0:
            self.master.withdraw()
            img = ImageGrab.grab()
            # crop with correct scale
            screen_width = self.master.winfo_screenwidth()
            screen_height = self.master.winfo_screenheight()
            x1 = x1 / screen_width * img.width
            x2 = x2 / screen_width * img.width
            y1 = y1 / screen_height * img.height
            y2 = y2 / screen_height * img.height
            img = img.crop(box=(x1, y1, x2, y2))
            if using_debug_mode: print("Screenshot taken!")
            # pass a callable to after(); calling it inline would run it immediately
            self.root.after(1, lambda: self.callback(img))
            self.master.deiconify()
            self.master.focus_force()
        else:
            if using_debug_mode: print("Screenshot canceled!")
            self.root.after(1, self.cancel_callback)
        self.master.destroy()

def set_bg_transparent(toplevel: tk.Toplevel, invisible_color_Windows_OS_Only='#100101'):
    if platform.system() == "Windows":
        toplevel.attributes("-transparentcolor", invisible_color_Windows_OS_Only)
        toplevel.config(bg=invisible_color_Windows_OS_Only)
    elif platform.system() == "Darwin":
        toplevel.attributes("-transparent", True)
        toplevel.config(bg="systemTransparent")
    else:
        if using_debug_mode: print(f"Total transparency is not supported on this OS. platform.system() -> '{platform.system()}'")
        window_alpha_channel = 0.3
        toplevel.attributes('-alpha', window_alpha_channel)
        toplevel.lift()
        toplevel.attributes("-topmost", True)
        toplevel.attributes("-transparent", True)

def drag_screen_shot(root: tk.Tk, callback=None, cancel_callback=None, debug_logging=False):
    global using_debug_mode
    using_debug_mode = debug_logging
    top = tk.Toplevel(root)
    top.geometry(f"{root.winfo_screenwidth()}x{root.winfo_screenheight()}+0+0")
    top.overrideredirect(True)
    top.lift()
    top.attributes("-topmost", True)
    set_bg_transparent(top)
    DragScreenshotPanel(root, top, callback, cancel_callback)
Just create a root with tk and then call drag_screen_shot(root, on_capture, on_cancel).
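A minimal wiring sketch, assuming the functions above live in the same module; on_capture and on_cancel are illustrative names I made up:

```python
import tkinter as tk

def on_capture(img):
    # Called with the cropped PIL image when the user finishes dragging.
    img.save("selection.png")

def on_cancel():
    print("Selection canceled")

def main():
    root = tk.Tk()
    root.withdraw()  # hide the bare root window; only the overlay is shown
    # drag_screen_shot is the function defined above (assumed in scope here).
    drag_screen_shot(root, on_capture, on_cancel, debug_logging=True)
    root.mainloop()
```

Call main() from a desktop session; the overlay needs a real display to capture from.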
For me it was the <table role="presentation">. Changing it to <table role="doc-pagebreak"> fixed the problem.
Using this command, you can select the template during React Native (Expo) setup:
npx create-expo-app --template
Differentiate between Just in Time (JIT) and Just in Case (JIC).
Just-in-Time: JIT is a reactive inventory management strategy focused on efficiency; it reduces waste and cost by bringing in inventory only when it is needed for production. JIT works best where demand is stable and supply chains run without disruptions.
Just-in-Case: JIC is a proactive inventory management strategy focused on responsiveness and customer satisfaction; it aims to meet potential demand quickly and avoid shortages by stocking inventory in advance. In short, JIC prioritizes risk management over cost reduction by keeping extra stock on hand. It is common in industries with unpredictable demand and frequent supply chain disruptions.
Key differences between JIT and JIC include:
• Inventory management: with JIT, inventory is ordered and received only as it is needed for production, while JIC stocks up ahead of time.
• Suppliers: JIT requires reliable, well-developed suppliers, while JIC can work with less reliable or local suppliers.
• Mitigating supply chain disruptions: JIC can fall back on excess inventory, while JIT depends on supplier reliability and close collaboration to keep serving customers.
• Pull vs. push strategy: JIT supports a pull strategy of supply chain management, where goods are produced when an order is received, whereas JIC supports a push strategy, where goods are stocked or produced before orders arrive, based on demand forecasting.
• Types of products: JIT suits products that are specific, valuable, or not commonly consumed. JIC, on the other hand, suits necessary, commonly used consumer goods that are needed urgently and on time.
Hybrid models:
Both strategies have advantages and disadvantages. Particularly after disruptions like COVID-19, there has been a strategic pivot toward hybrid models that blend elements of JIT and JIC to build resilience against future crises such as pandemics or natural disasters. This way, companies can strike a balance between cost reduction goals and risk mitigation objectives while keeping operations running amid unexpected challenges.
To face these adversities, businesses have re-evaluated their inventory management strategies and adapted them for a more uncertain world.
I couldn't connect to my instance. The VCN is OK and the instance is OK, but I don't know why SSH on port 22 doesn't work :/
I had this issue that started after updating Xcode to v16. I updated everything under the sun to the latest versions: Xcode, the iPhone itself, Appium, Appium Inspector, and the XCUITest Driver. Still got this error. Finally, I went into the Developer Settings on the iPhone, tapped "Clear Trusted Computers", and then re-trusted the computer when it prompted. Ta-da, Appium Inspector suddenly worked again!
You need an extra \n at the end to tell the system the header is done. Otherwise it can't know if there will be more header fields.
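A quick Python sketch (a hypothetical request, not tied to the question's code) shows the terminating blank line:

```python
# A minimal raw HTTP/1.1 request: every header line ends with \r\n,
# and one extra \r\n (an empty line) marks the end of the header section.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"  # the blank line: "no more header fields follow"
)

print(request.endswith("\r\n\r\n"))  # True
```

Without that final empty line, the server keeps waiting for more header fields and never starts processing the request.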
For anyone running into similar problems, I want to document what I found out about a similar challenge and the error messages I saw. In my app, I have several short MIDI tracks that I need to play back based on user interaction. Like the OP, I used a separate AVAudioSequencer for each track, all using a single AVAudioEngine. In one version of the code, each sequencer was started when it was time to play its track, but it was never actively stopped, so it continued to "run" in parallel with the more recently started tracks without actually playing any notes (since there weren't any left on the track). This worked correctly the first time the entire setup was executed, but the second time I got a series of errors of the type
from AU (0x102907d00): auou/rioc/appl, render err: -1
CAMutex.cpp:224 CAMutex::Try: call to pthread_mutex_trylock failed, Error: 22
and in this case I often hear no sound.
Further issues arise when some of the sequencers are restarted from the beginning of their track after already having played, just like what the OP describes. When other sequencers are running in parallel (again, not actually playing any notes in parallel), I observe two problems:
Tested on iPadOS 17.6.1
Relative specifiers in import statements have to use a file extension: https://nodejs.org/api/esm.html#esm_import_specifiers
Just write import { AppService } from './app.service.js';
TypeScript is clever enough to figure out that what you want is app.service.ts during compilation.
A workaround could be https://www.npmjs.com/package/tsc-alias, as mentioned in https://stackoverflow.com/a/76678279/517319
Scenario for those using AWS CodePipeline:
My problem was solved by setting the deploy stage's input artifact to the output artifact of the build stage.
Did anybody ever figure this out?
You can use:
zypper in gcc11-c++
Use the SecureString type for the name and value parameters, with an encrypted type:
"URI": "/?token=${name}&type=fargate",
@trincot, assuming that q5 is the final (accepting) state of your Turing machine above (given in the JavaScript transitions), does it mistakenly accept aabcbc?
info.model is not null as of November 2024:
Future<bool> isIpad() async {
  DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();
  IosDeviceInfo info = await deviceInfo.iosInfo;
  return info.model.toLowerCase().contains("ipad");
}
Python evaluates the expression from left to right, so here 3 is not greater than 2, which is why it shows False. I hope that helps.
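Without the original expression it is hard to be certain what was compared, but Python's left-to-right chained comparison behaviour can be sketched like this:

```python
# A chained comparison  a < b > c  is evaluated left to right as
# (a < b) and (b > c); the middle operand is shared, not re-grouped.
print(5 > 3 > 4)            # False: 5 > 3 holds, but 3 > 4 does not
print((5 > 3) and (3 > 4))  # False: the explicit equivalent
print(1 < 3 > 2)            # True: both 1 < 3 and 3 > 2 hold
```

As soon as one link in the chain is False, the whole expression is False.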
INCHES_TO_CM = 2.54
CM_TO_METERS = 0.01
FEET_TO_INCHES = 12

def convert_height_to_meters(feet, inches):
    # Multiplication operators were missing in the original.
    feet_in_meter = feet * FEET_TO_INCHES * INCHES_TO_CM * CM_TO_METERS
    inches_in_meter = inches * INCHES_TO_CM * CM_TO_METERS
    meter = feet_in_meter + inches_in_meter
    print(str(feet) + " feet, " + str(inches) + " inches = " + str(meter) + " meters")

convert_height_to_meters(6, 4)
convert_height_to_meters(5, 8)
convert_height_to_meters(5, 2)
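As a sanity check, here is a variation that returns the value instead of printing it (height_to_meters is a name I made up, not from the original code), so it can be asserted against a known height:

```python
INCHES_TO_CM = 2.54
CM_TO_METERS = 0.01
FEET_TO_INCHES = 12

def height_to_meters(feet, inches):
    # 1 ft = 12 in, 1 in = 2.54 cm, 1 cm = 0.01 m
    total_inches = feet * FEET_TO_INCHES + inches
    return total_inches * INCHES_TO_CM * CM_TO_METERS

print(height_to_meters(6, 4))  # ~1.9304 (76 in * 2.54 = 193.04 cm)
```

Returning a number also keeps formatting concerns out of the conversion logic.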
I encountered the same error. In my case, I was pointing to the incorrect path for the private key. It was resolved after correcting it.
When I run 'rake assets:precompile' I get output like this: 'Warning: You are precompiling assets in development. Rails will not serve any changed assets until you delete public/assets/.manifest.json'.
So I ran 'rm public/assets/.manifest.json' from the root of my project, and that fixed it.
I found the problem.
As @EstusFlask mentioned, I should not use the index as the key.
After further investigation, I replaced :key="index" with :key="route".
Now it works fine.
It's not completely clear from the documentation, but the gap utility is just for use with the CSS grid layout module, not columns and rows as you've tried to use it here. For that, you'll need to use the margin and padding utilities for each row.
https://bugreports.qt.io/browse/QTBUG-131008 (but was rejected as it's a non public class)
I fixed a similar issue by adding the property below to the Kafka consumer configuration:
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class