I went ahead and downloaded the repo and munched the pdf files... Here is the file for your reference documentation
silly naming. Or idiotic. Or trumpic. Or muskic.
You can use https://docs.jboss.org/hibernate/orm/current/javadocs/org/hibernate/dialect/MySQLDialect.html. It is recommended for MySQL 5.7 and above.
Found the problem! I don't know how it is possible, but the segue's destination was set to 'detailed split' instead of Current. I set it to Current and both segues work well. I wasted a lot of time solving this silly problem!
Yes, but not without additional work.
A raw TCP socket (or an asyncio stream) is designed to carry one continuous stream of data—not several independent connections at once. When your software uses multiple TCP ports, it expects separate channels for each connection. To forward all that traffic through a single socket, you must build a multiplexing protocol on top of your existing channel.
Consider using or adapting existing multiplexing libraries or protocols (for example, SSH uses a similar concept with channels over a single connection). This can save you from reinventing the wheel and reduce potential bugs in your custom implementation.
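To make the idea concrete, here is a minimal sketch of such a framing layer with asyncio; the header layout and helper names are my own invention for illustration, not a standard protocol:

```python
import asyncio
import struct

# Each frame carries (channel_id, payload_length, payload); channel_id stands in
# for one of the forwarded TCP ports.
HEADER = struct.Struct("!HI")  # 2-byte channel id, 4-byte payload length

async def send_frame(writer: asyncio.StreamWriter, channel_id: int, payload: bytes) -> None:
    writer.write(HEADER.pack(channel_id, len(payload)) + payload)
    await writer.drain()

async def read_frame(reader: asyncio.StreamReader) -> tuple[int, bytes]:
    header = await reader.readexactly(HEADER.size)
    channel_id, length = HEADER.unpack(header)
    payload = await reader.readexactly(length)
    return channel_id, payload
```

The demultiplexer on the other end reads frames in a loop and writes each payload to the local connection associated with that channel id.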
There are ways to work around this limitation and achieve case-insensitive text search for elements.
1. Use XPath with translate()

element = driver.find_element(By.XPATH, '//label[contains(translate(text(), "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "abcdefghijklmnopqrstuvwxyz"), "username")]')

2. Find all, then filter (see the sketch after this list)

3. Use JavaScript execution (execute_script)

script = """
return Array.from(document.querySelectorAll("*"))
    .filter(el => el.innerText.toLowerCase().includes("username"));
"""
elements = driver.execute_script(script)
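For the "find all, then filter" approach (option 2 above), a minimal sketch, assuming driver is your existing WebDriver instance and you are matching label elements:

```python
from selenium.webdriver.common.by import By

# Grab candidate elements, then filter case-insensitively in Python.
labels = driver.find_elements(By.TAG_NAME, "label")
matches = [el for el in labels if "username" in el.text.lower()]
element = matches[0] if matches else None
```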
I have no idea what changed, but after a few hours of rebuild attempts and recreating git history it just started working. There were no changes to code, dependencies or solution/project files.
The only explanation I can think of is that dotnet has some per-project-GUID cache somewhere that was cleared at some point.
I have the same problem, and my code is similar to the one above. The customized button in the right-click menu is not visible in Excel 365, but I checked the code in Excel 2021 and it works there. So maybe some option needs to be turned on in Excel 365, but I don't know which one, or maybe the latest update of Excel 365 broke something.
Did you get the form to redirect to your own confirmation page? If so, can you share how?
Did you find an answer for this?
Apollo: how would I adapt that formula to get the percent change? From what I can tell, that is in the child div (@class='JwB6zf') of a parent div (@class='enJeMd')? Unfortunately, I am such a newbie that the syntax eludes me. Thanks.
This is not a problem; monit tries to find some data used for the disk statistics. Depending on the filesystem and the Linux distribution, the data can be available in different places.
As long as no data is missing, everything should work as expected. If some data is missing, create a ticket; see the mmonit.com homepage. With regards, Lutz
In this case, the error disappeared mysteriously while adding and removing test code. The snippet posted here and the one on my computer diff as identical. My best guess is that there was some junk Unicode character that either displayed as a space or not at all, which made the script choke, and in the doing and undoing I may have at some point added a space or a newline manually that overrode the junk character. Or perhaps the editor got rid of it upon undoing. In case it helps someone in a similar situation...
I get this error when I try to use a ` within a multiline ` block. Using a double or triple ` did not help. You have to escape them with a backslash: \`
It turns out that what was needed was to generate a new Refresh Token using the OAuth Playground, per these instructions: developers.google.com/google-ads/api/docs/oauth/playground
Livewire V3 now uses wire:model.live for live updates
Run pip under Python. For example:
python -m pip install --upgrade pandas
Is there a reason you can't use the WCAG contrast ratio definition? Assuming your colours are within sRGB, you can calculate the relative luminance of the background colour from RGB values. You could check which foreground colour has a greater contrast with the background colour, and use that.
(It's the approach I use for the RGB text at the bottom of my colour picker)
Note that HSB's Brightness and HSL's Lightness don't represent what the eye perceives; they are just a reshaping of RGB.
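For reference, a small sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas, assuming 8-bit sRGB inputs:

```python
def relative_luminance(r: int, g: int, b: int) -> float:
    """WCAG 2.x relative luminance of an 8-bit sRGB colour."""
    def channel(c: int) -> float:
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

# Pick whichever of black/white contrasts more with a given background.
background = (52, 120, 246)  # example colour
foreground = max([(0, 0, 0), (255, 255, 255)], key=lambda c: contrast_ratio(c, background))
```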
Go to the SDK location, find the NDK folder, and check its subfolders. If one of them is empty or corrupted, delete it and let Android Studio use the latest version you have.
In Safari 17.6 this needs a right-mouse click on an element and then Log Element.
If you have installed the Code Runner extension, enable the Run In Terminal option in the extension settings.
I have found the issue. The lack of the Extensions and Run Command options on the instances is caused by the orchestration mode, which was Uniform. Instances in Uniform orchestration mode do not have Extensions or Run Command; Flexible orchestration mode does.
How do you know whether the font is a variable font or a static font?
Dim i As Integer
For i = 1 To 20
    ' note: the original condition "i%5==5" looks like a typo for i Mod 5 = 0
    If (i Mod 3 = 0) And (i Mod 5 = 0) Then
        ' ...
    ElseIf i Mod 3 = 0 Then
        ' ...
    ElseIf i Mod 5 = 0 Then
        ' ...
    End If
Next i
End Sub
Sorry, but I have some problems with my (newly created) account on Stack Overflow... I cannot operate on this post; it seems the system doesn't recognize me as the author. I asked the Help Center to check my account so I can mark this post as "solved". @grawity_u1686 Thank you a lot!
The endpoint you were using might be expired or blocked. Try this one:
const request = new XMLHttpRequest();
request.open('GET', 'https://restcountries.com/v3.1/name/deutschland');
request.send();
According to the error message, you have already run rs.initiate(); your replica set has already been initialized. You can check your current configuration using rs.config() in the mongo shell (i.e. mongosh) or using db.getSiblingDB('local').system.replset.findOne() (as mentioned by @Stennie in the comments).
If you did not get { "ok" : 1 }, it's because you did not pass members in the initiate arguments. That is no problem; mongo took the default configuration and made the current database into a replica-set primary.
If you were having a problem pushing to the database with Prisma locally, something like "Prisma needs to perform transactions, which requires your MongoDB server to be run as a replica set." on a local mongosh instance, it should be fixed now.
You can add members to your replica set; refer to the documentation.
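For completeness, a rough Python equivalent of those checks with pymongo (the connection string is an assumption; adjust it for your setup):

```python
from pymongo import MongoClient

# directConnection avoids replica-set discovery when talking to a single local node.
client = MongoClient("mongodb://localhost:27017/?directConnection=true")

print(client.admin.command("replSetGetStatus"))  # current replica-set status
print(client.local.system.replset.find_one())    # stored replica-set configuration
```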
Some resources I referred to:
PS: I am also new to mongo. This is what I understood after my research.
I'm a beginner, so please don't judge me.
The solution is to serve your static files from the proper folder. In Nuxt, any file you want to be served as-is should be placed in the static folder (in Nuxt 3 you’d use the public folder). Files in these folders are copied directly to the root of your built site. For example, if you place hello_world.txt in the static folder, it will be accessible at http://yourdomain.com/hello_world.txt.
In your code, you’re trying to fetch from /server/static/hello_world.txt, which isn’t recognized by Nuxt as a static asset directory. Instead, simply:
Move your file to the static folder, and then change your fetch URL to:
const fileName = 'hello_world.txt'
const filePath = '/' + fileName // or simply '/hello_world.txt'
Then I think your fetch call will correctly load the file.
Reference docs:
void main() {
  var x = -23;
  var y = -123.11;
  var a = x.abs();
  var b = y.abs();
  var sum = a + b;
  print('a : $a + b: $b = sum: ${sum}');
}
**Output:** a : 23 + b: 123.11 = sum: 146.11
If you want to make form.submitted = true, you need to use an event:
submitClicked(form: NgForm) {
  form.onSubmit(new Event('submit'));
}
Hi, checking for any updates regarding the error; I am also facing the same issue.
After a lot of tinkering, I found a solution. I downloaded versions of the NVIDIA drivers and CUDA toolkit compatible with my installed TensorFlow version and used pip install tensorflow[and-cuda], which finally activated and used the GPU in training.
My issue too.
Changing the YAML part into
author: "Jimi Hendrix"
will show the name.
Then the question is how to show the affiliation and contact details.
Try this command, it works. https://coffeebytes.dev/en/python-tortoise-orm-integration-with-fastapi/#installation-of-the-python-tortoise-orm
pip install tortoise-orm
#No such host is known in asp.net web api
Problem: My Laptop was connected to the internet via "Mobile Hotspot"
Solution: Connected to WIFI.
I ran into the same issue with GitHub Actions, and it was simply due to my npm token being expired.
I was looking for this myself and found the answer. You are right to use the 'slab' argument to specify your labels, however you also need to add the argument 'label=TRUE' to ensure these show on the funnel plot produced. I hope that this helps.
For example:
funnel(res, slab=my_data$my_labels, label=TRUE)
Indeed, it seems there is nothing to check if a market is open/closed.
I'm currently relying on getting the book (bid/ask).
A closed market will have:
By validating the book, you can get a grasp of what is being traded out there.
Obviously, it is not bulletproof, as a market can have a circuit breaker on.
Also note that requesting historical data is not enough, as a market can be open for trading, but without any trades traded so far (zero volume).
I got the same error, but when I set my %JAVA_HOME% environment variable to C:\Program Files\Java\jdk-23, this fixed the problem (after restarting my command line).
None of the above helped me. So I started investigating and came back with M-x top-level. From the interactive help:
(top-level)
Exit all recursive editing levels.
This also exits all active minibuffers.
You can select dates that are two years apart in SQL using a self-join and date difference functions. The exact syntax depends on your specific database system (e.g., MySQL, PostgreSQL, SQL Server), as the date functions vary. Here is an example for MySQL:

SELECT t1.date_start1, t2.date_start1
FROM your_table t1
JOIN your_table t2
  ON ABS(YEAR(t2.date_start1) - YEAR(t1.date_start1)) = 2;

-- To also get the ID
SELECT t1.id, t1.date_start1, t2.id, t2.date_start1
FROM your_table t1
JOIN your_table t2
  ON ABS(YEAR(t2.date_start1) - YEAR(t1.date_start1)) = 2;
I followed this response in this thread. Google actually prompted me to download the later configurations. Pretty cool!
Please have a look at https://github.com/HtmlUnit/htmlunit/issues/927 for details on how to make this work.
Leaflet itself doesn’t offer built-in functionality to export data as a Shapefile (SHP). However, if you have access to the underlying vector data (typically stored in GeoJSON), you can convert it to a Shapefile using third-party tools.
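For example, a conversion done off the map with geopandas might look like this (the file names are placeholders; geopandas and a Shapefile-capable backend must be installed):

```python
import geopandas as gpd

# Read the GeoJSON exported from the Leaflet layer and write it out as a Shapefile.
gdf = gpd.read_file("exported_layer.geojson")
gdf.to_file("exported_layer.shp", driver="ESRI Shapefile")
```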
There might be a problem with b.j.a.t.a; it may be incorrectly configured or corrupt.
It needs to be yourlink.github.io/image.png to add the image.
What works for me is increasing the memory size of the Lambda function to a higher value (from 128 MB to 3008 MB).
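If you prefer to script that change instead of using the console, a sketch with boto3 (the function name is a placeholder):

```python
import boto3

# Raise the function's memory allocation; on Lambda, CPU scales with memory.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder name
    MemorySize=3008,             # in MB
)
```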
From @Skrol29's answer I understand the current situation, and I wrote a simple function for my case, since I need to work with an Excel file with dozens of merged cells. What my function does is push down all the merged cells located entirely under the placeholder row.
function pushMergedCellsDown($TBS, $placeholderRow, $dataCount) {
    if ($dataCount <= 1) {
        return; // No need to move anything if there's no additional data
    }
    $pushDistance = $dataCount - 1;
    $pattern = '/<mergeCell ref="([A-Z]+)(\d+):([A-Z]+)(\d+)"\/>/';
    // Find all merged cells in the XML
    if (preg_match_all($pattern, $TBS->Source, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $colStart = $match[1];
            $rowStart = intval($match[2]);
            $colEnd = $match[3];
            $rowEnd = intval($match[4]);
            // Check if any mergeCell crosses or is on the placeholder row
            if ($rowStart <= $placeholderRow && $rowEnd >= $placeholderRow) {
                throw new Exception("Merge cell crossing placeholder row detected: {$match[0]}");
            }
            // Only process mergeCells entirely below the placeholder row
            if ($rowStart > $placeholderRow) {
                $newRowStart = $rowStart + $pushDistance;
                $newRowEnd = $rowEnd + $pushDistance;
                $newTag = "<mergeCell ref=\"{$colStart}{$newRowStart}:{$colEnd}{$newRowEnd}\"/>";
                $TBS->Source = str_replace($match[0], $newTag, $TBS->Source);
            }
        }
    }
}
The function takes 3 parameters: $TBS, the OpenTBS object; $placeholderRow, the row where our data placeholder is located; and $dataCount, which is the size of our data.
For my example case, the usage is like this:
// Merge data in the first sheet
$TBS->MergeBlock('a,b', $data);
pushMergedCellsDown($TBS, 20, count($data));
I appreciate your work on the OpenTBS library, @Skrol29 ^^
In Docker Desktop 4.38.0 (Mac) it's no longer possible to connect from the host into containers; I tried every possible port/network setup, but nothing helped. I reverted to 4.34.4 and the problem was solved. Just download the old version and install it over 4.38.0, and everything is running again.
The behavior you’re seeing is not a bug but rather a known limitation of the series‐solution implementation in Sympy’s dsolve. In the current implementation, when you use the series hint (for example, '2nd_power_series_ordinary'), dsolve returns a truncated power series in terms of arbitrary constants (like C1 and C2) without automatically solving for them using the provided initial conditions.
There isn’t a built-in workaround in the current version of Sympy’s dsolve to automatically eliminate the constants when using the series hint. You’ll need to either post-process the solution or use a different method if you require the IC to be applied directly.
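To illustrate the post-processing route, here is a sketch with a made-up example ODE (y'' + y = 0 with y(0)=1, y'(0)=0); the idea is to drop the order term and solve for C1 and C2 yourself:

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")
C1, C2 = sp.symbols("C1 C2")

# Example ODE; replace with your own equation and initial conditions.
ode = sp.Eq(f(x).diff(x, 2) + f(x), 0)
sol = sp.dsolve(ode, f(x), hint="2nd_power_series_ordinary", n=6)

series = sol.rhs.removeO()  # truncated polynomial with C1, C2 still present
constants = sp.solve(
    [series.subs(x, 0) - 1,               # y(0) = 1
     sp.diff(series, x).subs(x, 0) - 0],  # y'(0) = 0
    [C1, C2],
)
print(sp.expand(series.subs(constants)))
```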
Yes, I tried changing the token limit and making it higher, and it works, thanks. As of 2025: go to Azure AI Foundry, in the menu bar scroll to Deployments, click on the model, edit it, and increase the token limit.
Consider using isqrt or checked_isqrt, which compute the integer square root, rounded down.
assert_eq!(10i64.isqrt(), 3);
TLDR:
Python code:
from pathlib import Path
from subprocess import run
from tkinter import Label, Tk
from PIL import Image, ImageTk
def get_powershell_output(command: str) -> str:
    process = run(command, capture_output=True, text=True, shell=True)
    return process.stdout.strip()

def get_icon_name(app_name: str) -> Path:
    command = f"""powershell "(Get-AppxPackage -Name {app_name} | Get-AppxPackageManifest).package.properties.logo" """
    return Path(get_powershell_output(command))

def get_install_path(app_name: str) -> Path:
    command = f"""powershell "(Get-AppxPackage -Name {app_name}).InstallLocation" """
    return Path(get_powershell_output(command))

def locate_icon(icon: Path, install_path: Path) -> Path:
    matches = install_path.glob(f"**/{icon.stem}*.png")
    # usually 3 matches (default, black, white), let's use default
    return list(matches)[0]

def show_icon(icon_path: Path) -> None:
    root = Tk()
    root.title("Display Icon")
    pil_image = Image.open(icon_path)
    tk_image = ImageTk.PhotoImage(pil_image)
    label = Label(root, image=tk_image)
    label.pack()
    root.mainloop()

def main(current_name: str) -> None:
    icon_path = get_icon_name(current_name)
    print(icon_path)
    # Assets\CalculatorStoreLogo.png
    install_path = get_install_path(current_name)
    print(install_path)
    # C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_11.2411.1.0_x64__8wekyb3d8bbwe
    selected_icon = locate_icon(icon_path, install_path)
    print(selected_icon)
    # C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_11.2411.1.0_x64__8wekyb3d8bbwe\Assets\CalculatorStoreLogo.scale-200.png
    show_icon(selected_icon)
    # see the proof

if __name__ == "__main__":
    # Let's use "Microsoft.WindowsCalculator" as example.
    # Names can be listed by `Get-AppxPackage | Select-Object -ExpandProperty Name`
    main("Microsoft.WindowsCalculator")
Use namespace = "" after s:form <s:form namespace = "" action="Login">
Thank you for the link to http://support.microsoft.com/KB/158773 explaining that Update cursors need a primary key, and the solution to Msg 16929 - The cursor is READ ONLY, which ranks high on the search for this problem.
This is also the answer to your question: the trigger's inserted table does not have a primary key, so you either need to copy the data into a table with a primary key or use the primary key from the underlying table, as per the comment:
Change WHERE CURRENT OF cur to WHERE incidentid = @incidentid (suggested by GarethD).
The design your team has come up with has multiple layers of horror, and has sadly probably been implemented long ago.
It was likely a "higher management" decision to come up with a more "user-friendly" identifier: a date string of the form YYYYDDDNNNN, where YYYY is the year, DDD the day of the year, and NNNN the sequence within the day, starting at 1.
It is also likely to be changed in future, since DDD is equally unintuitive, and they are likely to move to YYYY-MM-DD-NNNN. The NNNN is guaranteed not to scale: if you have an automated input system it could well cascade over 10,000 events in a day and break your system.
The simplest solution would be to use a calculated field for this key, derived from the createddt (created date and time) and incidentid (the primary key).
You could have made the "user-friendly" identifier Date - incidentid, e.g. YYYY-MM-DD-NNNN..N where NNNN..N is the incidentid. This would have no risk of exploding. Or just the last 4 digits of the incidentid.
And if implemented initially as YYYYDDDNNNN, the calculation could be changed according to the whims of "higher management" without affecting the system.
I used this code and it worked for me as well. The only thing I did to improve it was set Panel1.ClipRect = True to stop the zoomed image from drawing over the rest of the form. Many thanks @XylemFlow.
Try this way:
from ultralytics import YOLO
from torchinfo import summary
model = YOLO("yolov5n.pt")  # Ensure you are using the correct model version
summary(model.model, input_size=(1, 3, 640, 640))
https://github.com/ultralytics/yolov5/issues/11035#issuecomment-2249759900
After I switched to java 8 I've been getting this error:
Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
    at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:325)
    at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:267)
    at java.util.jar.JarVerifier.processEntry(JarVerifier.java:285)
    at java.util.jar.JarVerifier.update(JarVerifier.java:239)
    at java.util.jar.JarFile.initializeVerifier(JarFile.java:402)
    at java.util.jar.JarFile.ensureInitialization(JarFile.java:644)
    at java.util.jar.JavaUtilJarAccessImpl.ensureInitialization(JavaUtilJarAccessImpl.java:69)
    at sun.misc.URLClassPath$JarLoader$2.getManifest(URLClassPath.java:965)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:456)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
    at java.lang.Class.getMethod0(Class.java:3018)
    at java.lang.Class.getMethod(Class.java:1784)
    at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:690)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:672)
Any idea how to fix this? Thanks in advance.
Make sure the container is started. To start the container, use: docker start <container-id>
If the container is already started, then you need to forward the port; use: docker run --name <container-name> -p 8080:8080 <image>
Now check with port 8080.
14:37:45 alter table orders drop constraint orders_ibfk_1 Error Code: 3940. Constraint 'orders_ibfk_1' does not exist. 0.000 sec
If you tried to push to a repo in an organization from your personal GitHub account in Android Studio, you might want to look at https://github.com/settings/connections/applications/ and grant 'JetBrains IDE Integration' access to your organization.
There are multiple ways to achieve this. Since I assume you basically want only a hull when you are done, and not the intersecting vertices, you could use Geometry Scripting to iterate over all the meshes you deemed intersecting and merge them one by one with "Apply Mesh Boolean" to get the union between them. It is quite slow and it will likely be hard to keep your UVs, but I assume you do not need those anyway.
There are also functions for simplification and cleanup.
A quick search shows there is even an Epic Games tutorial for this; even though it is Blueprint, it translates directly to C++.
A bit late, but I was looking for the same, and at the end I had the following working for me:
# Importing library
import qrcode
from qrcode.image.styledpil import StyledPilImage
from qrcode.image.styles.colormasks import SolidFillColorMask
# Data to encode
data = "Data to encode"
# Creating an instance of QRCode class
qr = qrcode.QRCode(version = 2,
error_correction=qrcode.constants.ERROR_CORRECT_H,
box_size = 7,
border = 5)
# Adding data to the instance 'qr'
qr.add_data(data)
qr.make(fit = True)
img = qr.make_image(image_factory=StyledPilImage,
color_mask=SolidFillColorMask(front_color=(R,G,B)), #Put your color here
embeded_image_path="/content/Image.png")
img.save('MyQRCode2.png')
The dndConnector.js is not loaded, and that's why the JS console is showing the errors. This can happen if the production build of the Vaadin application does not include that JS resource as part of the packaging. Vaadin adds such resources if your @Route-annotated classes use them. However, it cannot determine the required resources if something is constructed indirectly (for example, via Java reflection). In such cases, there are other means of informing Vaadin about it, using the @Uses and @JsModule annotations.
We can use:
ng serve --port 8080 --open
Window >> Preferences >> Validation, then remove JavaScript validation.
Depending on the specifics of your application, you might want to think about different angles.
For example: if more than one distinct page needs to only take one user at a time, I would think about creating a new table with a record for each of these pages. This way, you can mark a page as logged-in/in-use with the user's unique ID when someone logs in or accesses the page. When the user logs out or leaves the page (or their ASP session expires; users do not always log out cleanly!), you can "unlock" the page again. Not only that, you might reduce database load by searching specifically for the page record rather than for any user with a logged-in flag.
If you use delete, the file size will also be reduced; Clear does not do this. I tried it on a large page: the 8 MB file size decreased to 3 MB when I used delete.
I've installed the package with your method and ran into the same ModuleNotFoundError you mentioned. Do you know how to import MetaTrader?
I downgraded bcryptjs to version ^2.4.3, and the issue was resolved:
npm install bcryptjs@2.4.3
Now, password hashing works without errors.
It seems bcryptjs v3.0.0 requires the Web Crypto API or an external crypto module, while v2.4.3 works fine. Hope this helps others facing the same issue!
Some people already mentioned it, but for Gradle you need to import the test artifact to access those "Harness" class files, which are located in the test directory.
Add the "tests" classifier after the library coordinates in the Gradle build file:
testImplementation "org.apache.flink:flink-streaming-java:${flinkVersion}:tests"
We faced this error because our Google Cloud account was suspended for verification; after we completed the verification, everything worked normally again.
I'm not sure what prevents me from naming the cookie auth_token. But if I add a 2 at the end or use a different name, it works.
I was running into the same issue in gradle and found that I had to specifically import the test classes to use the "Harness" related classes.
testImplementation "org.apache.flink:flink-streaming-java:${flinkVersion}:tests" // <-- "tests" from this library
testImplementation "org.apache.flink:flink-test-utils:${flinkVersion}"
The OP states that masquerading is enabled on eth1 but does not say so about eth0. Perhaps it pays to also enable that on eth0:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
After migrating the bundles from Spring DI to Blueprint, the FTP component works fine with Red Hat Fuse 7.13.
If you are using androidx.navigation.NavController, you can check from Activity or Fragment like this:
if (navController.currentDestination?.id != R.id.yourDialogFragmentId) {
// show DialogFragment
}
The problem here is about internal paths and files that iconipy cannot reach.
This is the error:
C:\Users\moham\Desktop\mine\stack\Final_exe_file\exe>test.exe
Traceback (most recent call last):
File "test.py", line 48, in <module>
File "test.py", line 42, in Get_Button_Icon
File "test.py", line 25, in Get_CTk_Icon
File "test.py", line 11, in Create_Icon
File "iconipy\iconipy.py", line 259, in __init__
File "iconipy\iconipy.py", line 419, in _get_icon_set_version
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\moham\\AppData\\Local\\Temp\\_MEI73962\\iconipy\\assets\\lucide\\version.txt'
[PYI-5844:ERROR] Failed to execute script 'test' due to unhandled exception!
It says that iconipy cannot reach the file version.txt:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\moham\AppData\Local\Temp\_MEI73962\iconipy\assets\lucide\version.txt'
This is because PyInstaller did not put everything inside the library into your exe application.
So PyInstaller did not add the internal file version.txt to your exe's internal files, and because of that this error appears.
To solve this problem you have to add the entire library yourself, as an additional data folder for your exe application.
This will force PyInstaller to add the entire library to your exe.
After that, the library will find the intended file.
Run this command in your terminal:
pyinstaller --noconfirm --onefile --console --add-data "C:\Python3-12-8\Lib\site-packages\iconipy;iconipy/" "C:\Users\moham\Desktop\mine\stack\test.py"
Replace the paths with your own paths, and this will make it work.
cerealexx suggested a solution there, and it works: https://github.com/flutter/flutter/issues/84833
import 'package:flutter/foundation.dart' show kIsWeb;
import 'package:universal_html/html.dart' as html;
// Check if it's an installed PWA
final isPwa = kIsWeb &&
html.window.matchMedia('(display-mode: standalone)').matches;
// Check if it's web iOS
final isWebiOS = kIsWeb &&
html.window.navigator.userAgent
.contains(RegExp(r'iPad|iPod|iPhone'));
// Use Container with color instead of Padding if you need to
return Padding(
padding: EdgeInsets.only(bottom: isPwa && isWebiOS ? 25 : 0),
child: YourApp(),
);
If I'm understanding this correctly: when the user is asked and enters a nickname, it fails to be caught by the handler below (i.e. @dp.message(Form.set_nickname_requester)). So, before asking "✅ Friend added! Now, please provide their nickname.", set the state there for the next message that will come:
@dp.message(Form.add_friend)
async def process_friend_request(message: Message, state: FSMContext):
    # Bot asks for a nickname after receiving target_id
    await state.update_data(target_id=message.text)
    await state.set_state(Form.set_nickname_requester)  # you should add this
    await message.answer("✅ Friend added! Now, please provide their nickname.")
Now, when a nickname is added, it will be caught by the handler below.
So after more debugging: the problem was in my backend delete-request endpoint; I never sent a status 200. So in the network tab the pending requests piled up and timed out at the browser's limit, which is apparently 7. Sorry for the vague snippet.
app.delete("/DeleteAudioUrl", async (req, res) => {
  const url = req.body.filePath;
  const filePath = path.join(__dirname, "/audio", url);
  if (fs.existsSync(filePath)) {
    fs.unlinkSync(filePath);
  }
});
It was fixed by adding:
res.status(200).send("deleted succesfully");
Don't forget to install the Postgres driver:
go get gorm.io/driver/postgres
Failed to get the secret from key vault, secretName: secret-Azure-Sql-Server-db, secretVersion: 00d7e6bf8ab24c37a9aa679b93eeb774, vaultBaseUrl: https://J2dtech-tech-dev.vault.azure.net/. The error message is: Operation returned an invalid status code 'Forbidden'.
Check the HTML meta viewport tag and the CSS related to responsive design, as well as any JavaScript that could be manipulating the layout on zooming.
Also see this site: https://www.web-development-institute.com/tag/web-development/
Given standard library logging's complexity, integration is not a simple feat.
https://www.structlog.org/en/stable/standard-library.html outlines various strategies, but either way you'll have to configure the standard library's logging for its output to show up.
Then you have to decide how to make sure that their log formats are as similar as possible to each other; the "Don't integrate" strategy is the simplest one.
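As one concrete (hedged) sketch of the "render everything through structlog" route described on that page, assuming a recent structlog version:

```python
import logging
import structlog

# Route standard-library records through structlog's ProcessorFormatter so both
# structlog loggers and plain logging loggers share one output format.
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
)

handler = logging.StreamHandler()
handler.setFormatter(
    structlog.stdlib.ProcessorFormatter(processor=structlog.dev.ConsoleRenderer())
)
logging.basicConfig(handlers=[handler], level=logging.INFO)

logging.getLogger("stdlib").info("hello from logging")
structlog.get_logger("structlog").info("hello from structlog")
```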
See also the recent discussion in: https://github.com/hynek/structlog/issues/395
Turns out I was missing a key step.
Once you've switched your project to using a PostgreSQL DB (essential for Vercel), you will need to run npx prisma migrate deploy from the root of your project in your code editor. This uses your defined .env URLs (POSTGRES_URL_NON_POOLING and POSTGRES_PRISMA_URL) to migrate your tables from your project to the DB.
Then you're good to go.
In my case will-change: transform helped. From the answer here: Fixing Transparent Line Between Div with Clip-Path and Parent Div
I had the same problem, and I don't know exactly what the problem was, but deleting the API from the API Manager and creating it again solved it.
So, for build.gradle.kts you need to go to libs.versions.toml and there, under [versions]:
okhttp = "4.12.0"
and under [libraries]:
okhttp3 = {group = "com.squareup.okhttp3", name = "okhttp", version.ref = "okhttp"}
and in build.gradle.kts:
dependencies {
    implementation(libs.okhttp3)
}
Have a nice day :)
Can you use this in reverse? And if so, do I put +1 for scale? I'm also lost on the -qscale 3. I googled how to use ffmpeg to convert a 480p MP4 to a 720p MP4 and ended up here. Sorry in advance.
Same issue here.
After clearing the Messenger app's cache on iPhone (Safari), it shows the correct images when sending website links to others with the Messenger app, but sending links from the computer (Chrome) still fails.
It is hard to identify the root cause with just the method you posted; can you reveal more information, such as the sleep method and the component? I suspect either the component doesn't get rendered the 7th time, or something is off with the sleep method.
Here's a quick list of tips for that:
Research Links:
I hope this provides a good overview and helps you with your media player project.
Explicitly Store Your Function in a Different Variable
const myStop = (s, t, o, p = window) => {
console.log("Custom stop called");
};
window.stop = myStop;
Then, always call myStop() instead of relying on stop.
Just use axios-cache-interceptor. Everything else is done for you.
import axios, {AxiosInstance} from 'axios';
import { setupCache } from 'axios-cache-interceptor';
const httpClient: AxiosInstance = axios.create({
withCredentials: true,
})
setupCache(httpClient, {
ttl: 3000 // cache for 3 seconds
})
The java command runs your program in source-file mode if you are not running a .class file (Java bytecode).
Using source-file mode, you can do some weird stuff, like running a Java program with two public classes, running a .c, .png, .mp4, or any file with any extension, and much more.
I've discussed this in detail in a Medium article. In the article, I run a valid .pdf as a Java program, and also discuss source-file mode and what you can do with it.
Article link: https://medium.com/p/b3cc0bfa2527
I just spent ages trying to get this working. The main thing is that when serving static files in production (DEBUG = False), Django will not serve them without the --insecure flag; hence why we need WhiteNoise.
Note this solution works by using a function (instead of a build), which allows you to increase the max timeout option.
This is how I got it running on Vercel. The main thing was outputting the static files (after collectstatic) to the correct directory set in vercel.json, and changing STATIC_URL accordingly depending on whether DEBUG was True or False (handled using a .env file).
Full working example: https://github.com/leele2/timesheet-xlsx-to-ics
settings.py
from pathlib import Path
from os import getenv
from dotenv import load_dotenv
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Load environment variables from .env
load_dotenv(BASE_DIR / '.env')
DEBUG = getenv('DEBUG', 'False') == 'True'
INSTALLED_APPS = [
    'django.contrib.staticfiles',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    "whitenoise.middleware.WhiteNoiseMiddleware",
]
# Static files (CSS, JavaScript, images)
if DEBUG:
    STATIC_URL = '/dev_static/'  # Development URL for static files
else:
    STATIC_URL = '/static/'  # Production URL for static files
# Add your local static file directories here
STATICFILES_DIRS = [
    BASE_DIR / "dev_static",  # This allows Django to look for static files in the 'dev_static' directory
]
# Directory where static files will be stored after running collectstatic
STATIC_ROOT = BASE_DIR / 'static'
# Optional: Use manifest storage for cache busting (adding hash to filenames)
STATICFILES_STORAGE = "whitenoise.storage.CompressedStaticFilesStorage"
urls.py
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
    path('', include('some_views.urls'))
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
vercel.json
{
  "outputDirectory": "static/",
  "buildCommand": "python3 manage.py collectstatic --noinput",
  "functions": {
    "api/wsgi.py": {
      "maxDuration": 15
    }
  },
  "routes": [
    {
      "src": "/(.*)",
      "dest": "api/wsgi.py"
    }
  ]
}
Nowadays, Azure deployments can be done by pushing a zip of our Node project, so we can build and install locally and push everything to the server; remove the build part from the start command and just run it. I succeeded in deploying with this command in package.json:
"start": "node ./lib/index.js"