from PIL import Image
# Load the uploaded image
image_path = "/mnt/data/IMG_20230823_005640.jpg"
image = Image.open(image_path)
# Display basic info about the image
image.size, image.mode
I built a free API for this old issue.
Please update to 3.29.1, which is a hotfix for this and some other issues as well, detailed here.
Does this solution work for your case?
q)t:([]date:20?2011.03.01+til 10;uid:20?17000+til 10;sym:20?50000+til 15)
q)7#t
date uid sym
----------------------
2011.03.09 17008 50001
2011.03.02 17003 50000
2011.03.01 17004 50000
2011.03.08 17001 50010
2011.03.04 17006 50001
2011.03.04 17004 50010
2011.03.07 17004 50007
q)updCols:{`$string[x],\:string[y]}
q)prevCols:{if[x=0;:`date`uid`sym]; updCols[;x] `prevdate`prevuid`prevsym}
q)f:{![x;();0b;prevCols[y]!(prev;) each prevCols y-1]}
q)7#f/[t;1+til 6]
date uid sym prevdate1 prevuid1 prevsym1 prevdate2 prevuid2 prevsym2 prevdate3 prevuid3 prevsym3 prevdate4 prevuid4 prevsym4 prevdate5 prevuid5 prevsym5 prevdate6 prevuid6 prevsym6
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2011.03.09 17008 50001
2011.03.02 17003 50000 2011.03.09 17008 50001
2011.03.01 17004 50000 2011.03.02 17003 50000 2011.03.09 17008 50001
2011.03.08 17001 50010 2011.03.01 17004 50000 2011.03.02 17003 50000 2011.03.09 17008 50001
2011.03.04 17006 50001 2011.03.08 17001 50010 2011.03.01 17004 50000 2011.03.02 17003 50000 2011.03.09 17008 50001
2011.03.04 17004 50010 2011.03.04 17006 50001 2011.03.08 17001 50010 2011.03.01 17004 50000 2011.03.02 17003 50000 2011.03.09 17008 50001
2011.03.07 17004 50007 2011.03.04 17004 50010 2011.03.04 17006 50001 2011.03.08 17001 50010 2011.03.01 17004 50000 2011.03.02 17003 50000 2011.03.09 17008 50001
The javax.print.PrintServiceLookup class itself relies on the underlying operating system and network configuration to discover and interact with printers. So you can either try to inject the settings from your host computer (if it's a Unix-based system) or install the printer inside your container. The latter would even decouple the container's execution from the host system.
Maybe this article helps you out: https://www.alecburton.co.uk/2017/printing-from-a-docker-container/
I think your examples help answer your question. As a general rule of thumb:
If the tree is binary, your recursive call usually advances the index, passing (j+1).
If elements can be used multiple times, j is passed unchanged, to allow repetition and reuse of elements.
If each element is used only once and your tree is not binary, pass (j+1) to avoid reusing elements.
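These cases can be sketched with a small backtracking helper (the names and structure here are my own illustration, not from the original question):

```python
def combos(nums, k, reuse=False):
    """Collect all length-k combinations of nums.

    If reuse is True, the recursive call passes the index unchanged,
    so an element may appear multiple times; otherwise it passes
    index + 1, so each element is used at most once.
    """
    result = []

    def backtrack(j, path):
        if len(path) == k:
            result.append(path[:])
            return
        for i in range(j, len(nums)):
            path.append(nums[i])
            # reuse=True  -> pass i     (element may repeat)
            # reuse=False -> pass i + 1 (each element used once)
            backtrack(i if reuse else i + 1, path)
            path.pop()

    backtrack(0, [])
    return result

print(combos([1, 2, 3], 2))           # [[1, 2], [1, 3], [2, 3]]
print(combos([1, 2], 2, reuse=True))  # [[1, 1], [1, 2], [2, 2]]
```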
Since iOS 15, the up/down chevron icon of a Picker can be hidden using the .menuIndicator modifier:
.menuIndicator(.hidden)
Can you try findFirstByUsernameOrderByUsername? I think JPA expects something after By, but then you continue with OrderBy.
Phone accuracy errors, while funny, do not mask the intended phrase, simply because the speech decoding of the human brain is awesome!
But the only reason to use phonemes over voice-to-text is the limited processing power on low-power LoRa transmissions, if you can deal with the very mechanical phoneme-based voices, which obviously anonymize the speaker.
From the image shared, it seems that you have issues with your Flutter SDK installation. Try reinstalling it.
My god, the problem was so banal it's stupid.
Apparently in my Excel version (seemingly 2021), the criteria are separated by semicolons in the syntax.
It is working now. Thanks for your effort, though.
Providing an update regarding the Add Test Users step, because the UI has changed a bit. You can find the Test Users section by navigating to APIs & Services -> OAuth consent screen -> Audience.
Here is a possible solution for those on Windows: read carefully which folders or header files are not found, look for them and add them. That is how I solved it:
"name": "Win32",
"includePath": [
"${default}",
"C:/msys64/ucrt64/include/gtk-4.0",
"C:/msys64/ucrt64/include/pango-1.0",
"C:/msys64/ucrt64/include/fribidi",
"C:/msys64/ucrt64/include/harfbuzz",
"C:/msys64/ucrt64/include/gdk-pixbuf-2.0",
"C:/msys64/ucrt64/include/cairo",
"C:/msys64/ucrt64/include/freetype2",
"C:/msys64/ucrt64/include/libpng16",
"C:/msys64/ucrt64/include/pixman-1",
"C:/msys64/ucrt64/include/graphene-1.0",
"C:/msys64/ucrt64/include/glib-2.0",
"C:/msys64/ucrt64/lib/glib-2.0/include",
"C:/msys64/ucrt64/lib/graphene-1.0/include"
],
A simplified version of Alex's answer:
import threading
import time
lock=threading.Lock()
def thread1cb():
    lock.acquire()  # thread1 acquires lock first
    time.sleep(1)
    print("hello")
    lock.release()
def thread2cb():
    time.sleep(0.1)
    lock.acquire()  # thread2 will wait on this line until thread1 has released the lock it acquired
    print("there")
    lock.release()
thread1=threading.Thread(target=thread1cb)
thread2=threading.Thread(target=thread2cb)
thread1.start()
thread2.start()
thread1.join() # As long as thread1 acquires & releases the lock first, you could safely remove this line. threading.Thread(...).join() waits until the target function of the thread has returned.
thread2.join()
Output will be:
hello
there
If you comment out the lock.acquire() & lock.release() lines, it will instead print:
there
hello
Docs: https://docs.python.org/3/library/threading.html#threading.Lock.acquire
https://docs.python.org/3/library/threading.html#using-locks-conditions-and-semaphores-in-the-with-statement
Stack Overflow is like a coding discussion site:
you can ask about errors, ask questions and just share code.
Think of it as a developer Reddit/Twitter.
Using Markdown is recommended too.
//this is a codeblock
According to the release notes, Numpy 1.26.4 doesn't support Python 3.13, so yes, if you need version 1.x.y (which at least one of your other libraries seems to be requiring), you should downgrade to Python 3.12.
I discovered I had the date: wrong. I had it as results.startDate, and when I flipped it to results.endDate, the data produced was correct.
Thanks.
Blessings, --Mark
I use psp to create a Django project, and also a Docker image with the Django project that you want to develop, step by step.
This reference explains the Docker image: https://psp.readthedocs.io/en/latest/simple/#dockerpodman
Run this in your shell:
$ psp
info: welcome to psp, version 0.2.0
> Name of Python project: django-scaffold
> Do you want to create a virtual environment? Yes
> Do you want to start git repository? Yes
> Select git remote provider: None
> Do you want unit test files? Yes
> Install dependencies: django django-admin startproject hello_world_django
> Select documentation generator: None
> Do you want to configure tox? No
> Do you want create common files? Yes
> Select license: MIT
> Do you want to install dependencies to publish on pypi? Yes
> Do you want to create a Dockerfile and Containerfile? Yes
info: python project `django-scaffold` created at `/tmp/django-scaffold`
$ cd django-scaffold && docker build .
import FaceDetection from '@react-native-ml-kit/face-detection'
const result = await FaceDetection.detectFromFile(imageUri, {
performanceMode: 'fast',
landmarkMode: 'none',
classificationMode: 'none',
});
I finally got it to work by eliminating some of the whitespace, like this:
subprocess.run(["ssh", IP, "/usr/bin/gpio", "write 0 1"])
I still do not quite understand why ssh was complaining, rather than bash.
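A likely explanation (my guess, not from the original thread): subprocess.run with a list does no word-splitting at all, ssh simply joins every argument after the hostname with spaces, and the remote shell re-splits the resulting string. A minimal sketch of that pipeline (the IP is a placeholder):

```python
import shlex

# subprocess.run with a list does no word-splitting: each element
# becomes exactly one argv entry for the local ssh process.
cmd = ["ssh", "192.0.2.10", "/usr/bin/gpio", "write 0 1"]  # placeholder IP

# ssh joins everything after the host with spaces and hands that
# single string to the remote shell:
remote_command = " ".join(cmd[2:])
print(remote_command)                # /usr/bin/gpio write 0 1

# The remote shell then re-splits on whitespace, so gpio sees:
print(shlex.split(remote_command))   # ['/usr/bin/gpio', 'write', '0', '1']
```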
<Tooltip
active={true}
wrapperStyle={{ pointerEvents: 'auto' }}
content={content}
/>
Set the active prop to true. That's what worked for me.
You could write something like this: (if you're using mockk)
mockk<HttpResponse>(relaxed = true) {
every { status } returns 200
every { rawContent } returns ByteReadChannel("body")
}
As the error mentions, your database ID is probably incorrect. Try logging the value to verify.
Often, these values are invalid because an environment variable is not set properly.
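If a missing environment variable is the culprit, failing fast with a clear message makes the error obvious. A minimal sketch (the helper and variable names are my own, not from any particular framework):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set")
    return value

# Hypothetical usage: validate once at startup instead of passing
# None/undefined into the client constructor.
# database_id = require_env("DATABASE_ID")
```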
You can also use the <object> tag:
<object data="html/stuff_to_include.html">
Your browser doesn’t support the object tag.
</object>
Learn more at MDN.
<object> is currently supported in most browsers.
Your problem is partitioning the storage drive using an operating system answer file (autounattend.xml).
The solution you proposed is the diskpart tool, called from PowerShell, in one of the ISO image installation phases.
I started with some answer file generators, such as "schneegans.de" (https://schneegans.de/windows/unattend-generator/), suggested by Vern_Anderson. Although it is an excellent tool, the problem is that you are tied to it: if you need a custom solution, there is no support. The ideal is to understand how the tool solves the problem.
In the case of the "schneegans.de" tool, the partitioning happens in the "windowsPE" phase. It doesn't make sense to install and then resize; that's why the "windowsPE" phase is used for partitioning. The operating system installation comes after the "windowsPE" phase. Avoid using the "specialize" or "oobeSystem" phases for partitioning.
Now comes the first question: after a basic installation of the operating system, does your routine work? If the answer is yes, we can "try" to put the routine in the answer file. But is it possible? The "schneegans.de" tool reports that it is, but the "windowsPE" phase is quite restricted. The tool configures it as follows:
<settings pass="windowsPE">
<component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<UseConfigurationSet>false</UseConfigurationSet>
<RunSynchronous>
<RunSynchronousCommand wcm:action="add">
<Order>1</Order>
<Path>cmd.exe /c ">>"X:\diskpart.txt" (echo SELECT DISK=0&echo CLEAN&echo CONVERT GPT&echo CREATE PARTITION EFI SIZE=499&echo FORMAT QUICK FS=FAT32 LABEL="System"&echo CREATE PARTITION MSR SIZE=16)"</Path>
</RunSynchronousCommand>
<RunSynchronousCommand wcm:action="add">
<Order>2</Order>
<Path>cmd.exe /c ">>"X:\diskpart.txt" (echo CREATE PARTITION PRIMARY&echo SHRINK MINIMUM=699&echo FORMAT QUICK FS=NTFS LABEL="Windows"&echo CREATE PARTITION PRIMARY&echo FORMAT QUICK FS=NTFS LABEL="Recovery")"</Path>
</RunSynchronousCommand>
<RunSynchronousCommand wcm:action="add">
<Order>3</Order>
<Path>cmd.exe /c ">>"X:\diskpart.txt" (echo SET ID="de94bba4-06d1-4d40-a16a-bfd50179d6ac"&echo GPT ATTRIBUTES=0x8000000000000001)"</Path>
</RunSynchronousCommand>
<RunSynchronousCommand wcm:action="add">
<Order>4</Order>
<Path>cmd.exe /c "diskpart.exe /s "X:\diskpart.txt" >>"X:\diskpart.log" || ( type "X:\diskpart.log" & echo diskpart encountered an error. & pause & exit /b 1 )"</Path>
</RunSynchronousCommand>
</RunSynchronous>
</component>
</settings>
Note the complexity. The XML answer file needs to be HTML-encoded: the ">" symbol becomes "&gt;", and the "&" symbol becomes "&amp;". The tool creates the file "X:\diskpart.txt", executing the redirection line by line with the "cmd.exe" command. The last command calls the "diskpart.exe" tool, passing the created file "X:\diskpart.txt" as a parameter.
There is no need to edit the ISO image, but I do not recommend this solution. In this case, the tool placed the "code" in the answer file, with the limitation of the HTML encoding.
But if I want to inject files, how can I do it? In this case, the best tool is "Ventoy" (https://www.ventoy.net/en/plugin_injection.html). With this tool, you can pass the response file (https://www.ventoy.net/en/plugin_autoinstall.html), in addition to being able to inject files.
If you still want to use the powershell code, which calls diskpart.exe, in the "specialize" or "oobeSystem" phases, the "schneegans.de" tool has this feature. But I suggest you explore it to understand how it is done. I can tell you in advance that the procedure is quite complex.
Here is the solution that I understand to be the most appropriate for your problem. You do not need external commands to partition. The answer file itself does it for you. For example, I create drives C: and D:. For Windows 11, I will need 5 partitions: ESP, WinRE, MSR, Applications and Users. Exactly what you read. I place the user profiles in the "D:\Usuários" folder, in the native language (Brazilian Portuguese).
2.1) Setting up the partitions
<settings pass="windowsPE">
<component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<DiskConfiguration>
<Disk wcm:action="add">
<DiskID>0</DiskID>
<WillWipeDisk>true</WillWipeDisk>
<CreatePartitions>
<!-- Sistema (ESP) -->
<CreatePartition wcm:action="add">
<Order>1</Order>
<Type>EFI</Type>
<Size>499</Size>
</CreatePartition>
<!-- Recuperação -->
<CreatePartition wcm:action="add">
<Order>2</Order>
<Type>Primary</Type>
<Size>699</Size>
</CreatePartition>
<!-- Reservada -->
<CreatePartition wcm:action="add">
<Order>3</Order>
<Type>MSR</Type>
<Size>99</Size>
</CreatePartition>
<!-- Sistema operacional -->
<CreatePartition wcm:action="add">
<Order>4</Order>
<Type>Primary</Type>
<Size>102400</Size>
</CreatePartition>
<!-- Dados -->
<CreatePartition wcm:action="add">
<Order>5</Order>
<Type>Primary</Type>
<Extend>true</Extend>
<!-- <Size>102400</Size> -->
</CreatePartition>
</CreatePartitions>
<ModifyPartitions>
<!-- Sistema (ESP) -->
<ModifyPartition wcm:action="add">
<Order>1</Order>
<PartitionID>1</PartitionID>
<Label>ESP</Label>
<Format>FAT32</Format>
</ModifyPartition>
<!-- Recuperação -->
<ModifyPartition wcm:action="add">
<Order>2</Order>
<PartitionID>2</PartitionID>
<Label>WINRE</Label>
<Format>NTFS</Format>
<TypeID>DE94BBA4-06D1-4D40-A16A-BFD50179D6AC</TypeID>
</ModifyPartition>
<!-- Reservada -->
<ModifyPartition wcm:action="add">
<Order>3</Order>
<PartitionID>3</PartitionID>
</ModifyPartition>
<!-- Sistema operacional -->
<ModifyPartition wcm:action="add">
<Order>4</Order>
<PartitionID>4</PartitionID>
<Label>SO</Label>
<Letter>C</Letter>
<Format>NTFS</Format>
</ModifyPartition>
<!-- Dados -->
<ModifyPartition wcm:action="add">
<Order>5</Order>
<PartitionID>5</PartitionID>
<Label>Aplic</Label>
<Letter>D</Letter>
<Format>NTFS</Format>
</ModifyPartition>
</ModifyPartitions>
</Disk>
</DiskConfiguration>
<ImageInstall>
<OSImage>
<Compact>false</Compact>
<InstallFrom>
<MetaData wcm:action="add">
<Key>/image/name</Key>
<Value>Windows 11 Pro</Value>
</MetaData>
</InstallFrom>
<InstallTo>
<DiskID>0</DiskID>
<PartitionID>4</PartitionID>
</InstallTo>
</OSImage>
</ImageInstall>
<UserData>
<ProductKey>
<Key>VK7JG-NPHTM-C97JM-9MPGT-3V66T</Key>
<WillShowUI>OnError</WillShowUI>
</ProductKey>
<AcceptEula>true</AcceptEula>
</UserData>
</component>
</settings>
2.2) Modifying the user profile folder
<settings pass="oobeSystem">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<FolderLocations>
<ProfilesDirectory>D:\Usuários</ProfilesDirectory>
</FolderLocations>
<OOBE>
<ProtectYourPC>3</ProtectYourPC>
<HideEULAPage>true</HideEULAPage>
<HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
<HideOnlineAccountScreens>true</HideOnlineAccountScreens>
</OOBE>
</component>
</settings>
Please note that the profiles folder has an accented character. Therefore, I needed to encode the symbol "á" as the HTML entity "&#225;". I put this item in just to emphasize that if you embed code in the XML, you will need to HTML-encode it.
Remember to use Ventoy. It is the best tool to boot an image with an answer file and injected files, without having to touch the original image!
I used an online translator. I apologize for not being fluent in the language.
I've been watching other Firebase-related repositories, and I think I have an answer (solution) for this issue.
To update the phone number that is used as a second factor, proceed as follows:
1. Log the user in again with MFA (e.g. use signInWithEmailAndPassword for email as first factor, followed by verifyPhoneNumber for the second factor).
2. When the MFA login has successfully completed, enroll the new phone number (again using verifyPhoneNumber, but with the new phone number as input parameter).
3. When the new phone number is successfully added, unenroll the original phone number (using MultiFactor.unenroll -> https://pub.dev/documentation/firebase_auth/latest/firebase_auth/MultiFactor/unenroll.html).
More details on steps 1 and 2 can be found at https://firebase.google.com/docs/auth/flutter/multi-factor
"ORA-29285 - File write error"
This error occurs when the hard drive you are exporting data to is full.
A custom application to convert the standard Gregorian date and calendar of the comprehensive ErpNext version 15 software to the solar (Jalali) calendar. This application works in all date and date-time fields, in all sections of the software, and also displays the Gregorian equivalent of the selected date below the field.
Yes, in Node.js it's possible to import CommonJS modules from ECMAScript modules. The documentation has more information.
This has been the case for as long as Node.js has supported ECMAScript module syntax (I just tested to verify that).
wget -qO - https://hub.unity3d.com/linux/keys/public | gpg --dearmor | sudo tee /usr/share/keyrings/Unity_Technologies_ApS.gpg > /dev/null
sudo sh -c 'echo "deb [signed-by=/usr/share/keyrings/Unity_Technologies_ApS.gpg] https://hub.unity3d.com/linux/repos/deb stable main" > /etc/apt/sources.list.d/unityhub.list'
=BYROW(J162:J164,LAMBDA(a,REGEXEXTRACT(a,"\(\d{3}\)\s\d{3}-\d{4}")))
Meanwhile some users can apply REGEXEXTRACT in Excel for the web and Office 365.
Recently I wrote a utility for serializing objects while preserving external and internal links: https://github.com/nerd220/JSONext
It is easy to use (in this case):
var serialization=toLinkedJSON({meta:meta});
var newMeta=fromLinkedJSON(serialization);
console.log(newMeta.meta[0].link==newMeta.meta[1]); //true
Apparently the following is what's needed. Just setting context.res is not enough, and an empty return is not enough.
return {
status: 200, body: {}
}
I think your issue here is the path of that cookie.
The protected route you are calling is /dashboard, but your cookie has the path /authentication/*, so the middleware doesn't get it (since it's an RSC, maybe?).
I had the same issue, thinking it was the http-only part, but I have another cookie that isn't http-only, with the path /api/*, and it doesn't appear on my frontend either. The only cookie that works in the middleware is my session_id, with the path / and http-only set to true. That being the case, you might need your cookie to have the root path / for it to work properly when the routes don't match.
A temporary `Car(1)` is being pushed into the vector, so the copy constructor is called. The vector stores the item via the copy constructor because your class lacks a move constructor. Even though it appears that you are not copying, the vector must create a copy in order to store the element.
Could you share a bit more of what you have "X"-ed out in your image? Is it just the filename? Also, are there additional details after all of the module names and paths?
After several attempts, I have concluded the following points:
Using "path: '404-page-not-found', renderMode: RenderMode.Server, status: 404" in the app.routes.server.ts file is part of the solution.
I trigger a redirection to the 404-page-not-found page from the server.ts file rather than from the component.ts file; if handled at the component.ts level, the page content changes, but the response status does not.
server.ts update:
const entityRegex = /\/entity\//;
const knownEntities = ['123456789-entity', '987654321-entity'];
app.use('/**', (req, res, next) => {
if (entityRegex.test(req.baseUrl)) {
let entity = req.baseUrl.substring(8);
console.log('entity:', entity);
if (knownEntities.includes(entity)) {
next();
} else {
res.redirect('/404-page-not-found');
}
} else {
next();
}
});
This checks whether the entity exists and calls next() when it does; the status: 404 entry in app.routes.server.ts ensures that a 404 status code is returned.
Pros & Cons
✅ Advantage: A proper 404 status is returned.
❌ Drawback: The server.ts file requires a knownEntities list, or an API call to verify existence, which can introduce additional latency. If the entity exists, this approach results in two API calls instead of one (one server-side check and another browser-side request).
An example is available on StackBlitz.
Note that with this method, using curl, you get:
a 404 status for /entity/000000000-entity
a 200 status for /entity/123456789-entity
What's up with all the people in this thread that get philosophical about what is garbage and what is collecting? This is computer science; terms have specific meanings given to them by whoever coined them. Garbage collection is not "any system where you don't have to say "free" to free the memory". No, garbage collection is a very specific strategy that has the following characteristics:
Memory has to be requested from the GC for it to be tracked. The GC gets memory from the OS, keeps track of its existence, and gives you a handle to that memory.
The GC periodically checks which GC handles are still reachable.
Usually, using non-GC managed values that can access GC managed values requires notifying the GC about it, so that an exception is made.
Put simply, the GC acts like a kind of pseudo-OS between you and the OS, one that is much smarter and can recognize at runtime when a value in memory is no longer needed and free it. This is not the same as strategies such as reference counting, where you create individual handles that point to a location in memory and have a destructor that frees the memory when a certain condition is met. That is just automated manual memory management, and you can still f* up (e.g. if you create two shared pointers that point to each other, neither of them will ever delete its value, even though both are unreachable).
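Python itself is a handy illustration of the difference: its primary mechanism is reference counting, and it ships a separate tracing cycle collector precisely because mutually-referencing objects never reach refcount zero. A small sketch (CPython behavior):

```python
import gc
import weakref

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # reference cycle: each keeps the other's refcount above zero

probe = weakref.ref(a)        # weak reference lets us observe whether `a` was freed
del a, b                      # pure reference counting can no longer free these

alive_before = probe() is not None
print(alive_before)           # True: under refcounting alone, the cycle leaks

gc.collect()                  # CPython's tracing cycle collector finds the unreachable pair
print(probe() is None)        # True: only the cycle detector could reclaim them
```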
Sorry for not answering the question. Many people have done that already. I'm just writing this down because there's an impressive amount of answers saying "akchually Rust is garbage collected if we count turning on the computer as "garbage collection"", which is just gonna confuse people.
The solution was page.window.width, page.window.height and page.window.resizable.
Discord was able to help with that
Can you access this URL in your browser?
And when you run your code, what does the profiler say?
When a user logs out and hits the back button, browsers restore the previous page state, including localStorage values. That's why your token "reappears" despite clearing it.
During logout, record a timestamp of when the user logged out.
In your auth check, verify that any token was created after the most recent logout.
Modify browser history using replaceState to prevent returning to authenticated states.
Consider using sessionStorage instead of localStorage.
This timestamp approach ensures that even if old tokens reappear due to browser navigation, they'll fail validation because they were created before the last logout.
You can try using spaces and the newline character.
Like:
print('Line_1 \nLine_2 \nLine_3 \n')
Sorry, I am not aware of any direct API reference that I could suggest here and I have never worked with Lucene.
However, I am aware that Google Desktop uses a Query API to rank and suggest the relevant search results. More information on the API can be found here.
Perhaps others could chime in and guide you.
This is outdated now.
Install version 34.2.13 stable (30 April 2024) from here: https://developer.android.com/studio/emulator_archive
This version of the emulator behaves correctly on Linux.
I saw your new car on Instagram. Seriously? 😒 Yeah, it’s amazing, right? Amazing? Jake, it’s so unnecessary. You already have a perfectly good car! I’ve been saving up for it. It’s something I really wanted. I get that, but why didn’t you talk to me about it first? It’s a huge purchase! I wanted to surprise you with it. A surprise? You’ve been making decisions like this without me. It feels like you don’t care about our plans. I care about us. It’s not like I’m ignoring you. I thought you’d be happy for me. It’s not about the car, Jake. It’s about you not including me. We’ve talked about saving for things we both want, and now it feels like this wasn’t considered at all. I’m sorry. I should’ve talked to you first. I didn’t mean to upset you. I just don’t want to feel like I’m left out of important decisions. I understand. I’ll make sure to include you next time. Can we talk later? Yeah, we need to.
I want to suggest another solution: https://github.com/nerd220/JSONext
It allows you to easily serialize objects, preserving internal and external links, methods and prototypes.
For this case:
var node1={data: 'some data'};
var node2={data: 'else data'};
node1.link=node2;
node2.link=node1;
var tree={node1: node1, node2: node2};
var serialize=toLinkedJSON(tree);
var newTree=fromLinkedJSON(serialize);
console.log(newTree.node1.link==newTree.node2);//true, because links are saved
You can use this tool I created
Okay, the most refined solution so far is to do the following:
.refreshable { [weak viewModel] in
viewModel?.action()
}
Most hosting providers like A2 Hosting (and others) configure their email services to allow connections only from their own servers or from trusted IP addresses (such as those within their network). When you're working remotely (on your local machine), the connection to the SMTP server might be blocked, which is likely why you're experiencing delays or failure when testing locally.
I am looking to do the same thing, but my API JSON response doesn't have that [data] attribute. How should it be handled? Thank you in advance.
Create a file at `~/.vim/bundle/YouCompleteMe/.ycm_extra_conf.py`:
import os
import ycm_core
flags = [ '-std=c++20' ]
def Settings ( **kwargs ):
return { 'flags': flags }
I strongly suggest using this library to achieve bidirectional binding, both for reading and writing query params: @geckosoft/ngx-query-params
It's very useful when you are trying to read/write e.g. pagination params from URL query params, in just one line of code.
Might be useful to someone! 🙂
Open SSMS -> Connect
Server name: (LocalDb)\MSSQLLocalDB
@deype0 What kind of call do we have to make using the Graph API Explorer? I just added all permissions and sent a GET request to id name...
Just remove the following line from the schema.prisma file:
output = "../generated/prisma"
In case anyone else winds up here - they haven't updated their API documentation to reflect the actual package from NuGet. The correct code is as follows:
MailjetRequest req = new MailjetRequest();
var client = new MailjetClient(sendGridAPI, sendGridSecret);
TransactionalEmail email = new TransactionalEmail();
email.TextPart = "Text email goes in here";
email.HTMLPart = "<h1>Hello</h1>Your html email goes in here";
email.From = new SendContact("[email protected]", "That fellow");
List<SendContact> singleSend = new List<SendContact>();
singleSend.Add(new SendContact("[email protected]"));
email.To = singleSend;
var resp = await client.SendTransactionalEmailAsync(email);
Not sure why people think it's acceptable to have sample code that literally doesn't work in their up-to-date API documentation. But hey, I guess that's what the internet is for?
Yeah, I'm experiencing the same thing with HAPI-FHIR with observations. No delay when creating patients.
As said here, its settings are controlled by the Jitsi team, so only by changing the server to an available one, as the comment said, or by hosting your own, can you use it without a moderator:
You cannot control authentication on meet.jit.si because that's a deployment we maintain. You should have your own deployment and then you can choose what type of auth you want.
I want to suggest another solution: https://github.com/nerd220/JSONext
It allows you to easily serialize objects, preserving internal and external links, methods and prototypes.
For this case:
var p1 = new Person(77);
var serialize=toLinkedJSON(p1,[],['Person']);
var p2 = fromLinkedJSON(serialize);
p2.isOld(); // true, now this method works
Detaching the egg package worked for me, in case someone still wants to know.
detach("package:egg", unload = TRUE)
@timbre's answer makes many good points and is worth an up-vote. But it didn't answer exactly what I needed. So here's what I came up with.
See this article for how I came up with 6.0 corresponding to target SDK macOS 15
#if swift(>=5.0)
...swift code...
#else
...old approach...
#endif
#if canImport(SwiftUI, _version: "6.0") //hack to test for macOS 15 target sdk
...swift code...
...perhaps using #available if deployment sdk is older...
#else
...old approach...
#endif
When I try to do Item 1 (Under 3 above) where it says "In Excel, go to Data ribbon → Get Data → From Other Sources → From Microsoft Query", I don't find "From Microsoft Query". All I get is From Table/Range, From Web, From OData Feed, From ODBC, From OLEDB, From Picture, and Blank Query. The stated option → From Microsoft Query isn't there. So I still can't get the query. What am I missing here?
When I use this CSS:
ul {
list-style: none;
padding-left: 0;
}
ul li:before {
content: '✓';
}
in Elementor it drops the line; I do not see it in alignment with the content of the <li>. Is there a way to fix this?
It will be easier to create similar shapes in Figma or Adobe Illustrator, then import the SVGs into the project. Making icons with code is irrational; in any case, it will be more convenient in a visual editor.
where a strict MVC separation is not that much of a concern.
To emphasise that: when building an MVC app, the QListWidget cannot have the model set (the setModel() method is private). You have to use QListView to get all the items to sync up.
I have developed a free VS Code (and Cursor) extension that supports both Windows and macOS systems. See https://github.com/hanlulong/stata-mcp. This extension provides Stata integration for Visual Studio Code and Cursor IDE using the Model Context Protocol (MCP). It allows you to:
Run Stata commands directly from VS Code or Cursor
Execute selections or entire .do files
View Stata output in the editor in real-time
Get AI assistant integration through the MCP protocol
Experience enhanced AI coding in Cursor IDE with Stata context
Feedback is welcome.
I know this is a very old question, but maybe this will help someone... I had to insert columns into PostgreSQL tables at specific positions many times, so I wrote a Python script for it. It follows the column rotation approach in this blog post, which was also referred to earlier.
Limitations include:
Foreign keys, constraints, indexes etc. will need to be recreated if they apply to columns behind the new one
It is tested with psycopg 3.2.5 and Python 3.13. Other versions may well work, but you would have to try it out
DB connection and script parameters are hardcoded (but then, this is not something to be widely disseminated)
It does handle all data types, including arrays and default values. The parameters are hopefully self-explanatory. The new column will be at position new_column_position after the script has run.
from psycopg import sql, connect


def insert_pg_column(conn, table_name, new_column_name, new_column_type, new_column_position):
    cur = conn.cursor()
    # Get column names and types from the table
    cur.execute(
        sql.SQL(
            "SELECT column_name, data_type, udt_name, character_maximum_length, column_default "
            "FROM information_schema.columns WHERE table_name = %s ORDER BY ordinal_position"
        ),
        [table_name],
    )
    columns = cur.fetchall()
    print(f"Retrieved definitions for {len(columns)} columns")
    # Drop from the list all columns which remain unchanged (those before the insertion point)
    columns = columns[new_column_position - 1:]
    column_names = [col[0] for col in columns]
    # Add the new column to the table (at the end)
    cur.execute(
        sql.SQL("ALTER TABLE {} ADD COLUMN {} {}").format(
            sql.Identifier(table_name), sql.Identifier(new_column_name), sql.SQL(new_column_type)
        )
    )
    print(f"Added new column '{new_column_name}' to table '{table_name}'")
    # Create temporary columns to hold the data temporarily
    temp_columns = {}
    for col_name, col_type, udt_name, length, default in columns:
        temp_col_name = f"{col_name}_temp"
        temp_columns[col_name] = temp_col_name
        # Handle array types
        if col_type == "ARRAY":
            if udt_name.startswith("_"):
                data_type = f"{udt_name[1:]}[]"  # Remove the leading underscore
            else:
                data_type = f"{udt_name}[]"  # Not sure this ever happens?
        else:
            data_type = col_type
            if length is not None:  # For character types
                data_type += f"({length})"
        cur.execute(
            sql.SQL("ALTER TABLE {} ADD COLUMN {} {} {}").format(
                sql.Identifier(table_name),
                sql.Identifier(temp_col_name),
                sql.SQL(data_type),
                sql.SQL("DEFAULT {}").format(sql.SQL(default)) if default is not None else sql.SQL(""),
            )
        )
        default_note = f" with default '{default}'" if default else ""
        print(f"Added temporary column '{temp_col_name}'{default_note}")
    # Update the temporary columns to hold the data in the desired order
    for col_name in column_names:
        cur.execute(
            sql.SQL("UPDATE {} SET {} = {}").format(
                sql.Identifier(table_name), sql.Identifier(temp_columns[col_name]), sql.Identifier(col_name)
            )
        )
        print(f"Copied data from column '{col_name}' to '{temp_columns[col_name]}'")
    # Drop the original columns
    for col_name in column_names:
        cur.execute(
            sql.SQL("ALTER TABLE {} DROP COLUMN {}").format(sql.Identifier(table_name), sql.Identifier(col_name))
        )
        print(f"Dropped original column '{col_name}'")
    # Rename the temporary columns to the original column names
    for col_name in column_names:
        cur.execute(
            sql.SQL("ALTER TABLE {} RENAME COLUMN {} TO {}").format(
                sql.Identifier(table_name), sql.Identifier(temp_columns[col_name]), sql.Identifier(col_name)
            )
        )
        print(f"Renamed '{temp_columns[col_name]}' to '{col_name}'")
    conn.commit()
    cur.close()


if __name__ == "__main__":
    # Database connection parameters
    HOST = "your_host"
    DATABASE = "your_dbname"
    USER = "your_user"  # Needs to have sufficient privileges to alter the table!
    PASSWORD = "your_password"
    # Parameters for adding a new column (EXAMPLE; REPLACE WITH YOUR OWN VALUES!)
    table_name = "users"
    new_column_name = "user_uuid"
    new_column_type = "uuid"
    new_column_pos = 3  # Position is a 1-based index
    connection = connect(f"dbname={DATABASE} user={USER} password={PASSWORD} host={HOST}")
    try:
        insert_pg_column(connection, table_name, new_column_name, new_column_type, new_column_pos)
        print(f"Successfully added column '{new_column_name}' to table '{table_name}' at position {new_column_pos}.")
    except Exception as e:
        print(f"Error: {e}")
        connection.rollback()
    finally:
        connection.close()
Construct a less-than cumulative frequency table for this data:
Marks (x)  Frequency (f)  More-than cumulative (f)
3-7        21             29+21=50
8-12       22             7+22=29
13-17      4              3+4=7
18-22      2              1+2=3
23-27      1              1
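Assuming the frequencies read off the data above (21, 22, 4, 2, 1 for the five class intervals), both cumulative series are just running totals, one from the top and one from the bottom. A minimal Python sketch:

```python
from itertools import accumulate

# Frequencies for the class intervals 3-7, 8-12, 13-17, 18-22, 23-27
freqs = [21, 22, 4, 2, 1]

# Less-than cumulative: running total from the top
less_than = list(accumulate(freqs))              # [21, 43, 47, 49, 50]

# More-than cumulative: running total from the bottom
more_than = list(accumulate(freqs[::-1]))[::-1]  # [50, 29, 7, 3, 1]

print(less_than, more_than)
```

The more-than column matches the one given in the data, which is a quick sanity check on the parsed frequencies.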
spring.batch.initialize-schema=ALWAYS
The letter case of "ALWAYS", surprisingly, matters.
This is indeed not possible to do currently in Rust; there is an open RFC to allow it.
The issue of std::exception not being caught in C++ can arise for several reasons:
1. Incorrect Exception Type – std::exception does not have a constructor that takes a string argument. Use std::runtime_error or std::logic_error instead to pass an error message.
2. Heap Allocation of Exception – If an exception is thrown using throw new some_exception and caught with catch (some_exception &exc), it won't be caught: new produces a pointer, and a handler that catches by reference does not match a thrown pointer.
3. Buffering Issues in Output – If the exception is caught but nothing is printed, use std::cerr instead of std::cout, or add \n at the end of the error message so the output is flushed promptly.
4. Compiler/Debugger Settings – Some compilers require enabling C++ exceptions explicitly. Also, breakpoints set before the throw statement can sometimes mislead debugging.
Same problem here, but none of these suggestions work! I need help.
The code below should solve your problem.
TabLayout tabLayout = ...;
if (tabLayout.getMeasuredHeight() == 0) tabLayout.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED);
int tabLayoutHeight = tabLayout.getMeasuredHeight();
Imagine you have a box of toys (a "collection"). The For Each
loop is like saying:
"For every toy in this box, I want to do something with it (like inspect it, put it on a shelf, etc.). Once I've done that something with every single toy in the box, I'm done."
You don't tell it how many times to run. It runs once for each item in a collection (like a box of toys, a range of cells in Excel, etc.). The number of times it runs depends on how many items are in the collection.
Let's say you have some numbers in cells A1 to A5 of your Excel sheet, and you want to double the value of each of these cells using VBA. Here's how you could do it with a For Each
loop:
Sub DoubleCellValues()
    Dim cell As Range    'Declare a variable to hold each cell
    Dim myRange As Range
    'Define the range you want to loop through (A1:A5)
    Set myRange = Range("A1:A5")
    'For Each cell in the range...
    For Each cell In myRange
        'Double the value of the cell
        cell.Value = cell.Value * 2
    Next cell    'Move to the next cell in the range
End Sub
The For Each
loop is designed to easily process every item in a collection (like a range of cells), without you having to worry about keeping track of indexes or counters. It makes your code cleaner and easier to read when you want to do the same thing to every item in a group.
Try
dotnet nuget locals all --clear
worked for me in 2025
Adding the NF suffix solved my issue. From the VS Code docs at https://code.visualstudio.com/docs/terminal/appearance
Nerd Fonts work the same and typically have a " NF"
suffix, the following is an example of how to configure Hack's nerd fonts variant:
"terminal.integrated.fontFamily": "'Hack NF'"
You can also toggle "Tx" to "Manual" in your DataGrip session for temporary change in transaction control.
Wow. That's the best, shortest, and most straightforward printing code.
Meysam - Your solution worked for me. Thanks so much
I've created a command-line tool called subscan
that does exactly what you're looking for. It combines all the steps you mentioned into a single pipeline:
Here's how to use it:
# Install via Homebrew
brew tap vangie/formula
brew install subscan
# Basic usage (read from file)
subscan -i video.mp4 -a 600x50+210+498 -o subtitles.txt
# Using pipe with custom frame rate (2 fps)
cat video.mp4 | subscan -a 600x50+210+498 -r 2 > subs.txt
# Use fast mode with specific languages
subscan -i video.mp4 -a 600x50+210+498 -f -l "en-US,zh-CN" -o subs.txt
Key features:
The tool is open source and available at: https://github.com/vangie/subscan
Requirements:
The option "Current Query" in the Loop Grid widget takes the value set in the WordPress Settings -> Reading. This is not a bug but an intended functionality.
Here they explain this is for compatibility and they are not going to change it: https://github.com/elementor/elementor/issues/20976
Your issue is weird, maybe you have a very high value in your WordPress settings.
You can use check_c_compiler_flag
https://cmake.org/cmake/help/latest/module/CheckCCompilerFlag.html
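A minimal sketch of how this might look in a CMakeLists.txt; the flag, result variable, and target name here are placeholders, not from the original question:

```cmake
include(CheckCCompilerFlag)

# Sets HAVE_MARCH_NATIVE to a true value if the C compiler accepts the flag.
check_c_compiler_flag("-march=native" HAVE_MARCH_NATIVE)

if(HAVE_MARCH_NATIVE)
  target_compile_options(mytarget PRIVATE -march=native)
endif()
```

The result variable is cached, so the check only runs on the first configure.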
My issue was I added Microsoft OFFICE 16.0 Object Library when it should have been Microsoft WORD 16.0 Object Library in the VBA > Tools > References tab. The code created should work fine as long as you add the correct references!
Actually, if you "just" want to keep the current path, modern versions of pkexec
do have an option for that:
pkexec --keep-cwd
Thanks to @Wang, who answered this in another question.
The problem lay within Bun's v1.2.6 update. Reverting to 1.2.5 fixed the issue.
Thanks Chux.
When setting up MobSF dynamic analysis with an Android Studio AVD, using an API level above 29 consistently produces "/system not writeable, this AVD can't be used for dynamic analysis".
To fix this, use API 28; I configured a Pixel 3 XL. Hope this helps other people.
Instead of replacing the entire contact object, update only the email field using MongoDB's dot notation ("contact.email"):
await Client.findByIdAndUpdate(
id,
{ $set: { "contact.email": <New_Email> } },
{ new: true, runValidators: true } // return the updated doc & enforce schema validation
);
This is kind of old, so you probably solved it already.
@first: OpenAPI Specification and Swagger tools are for HTTP-based APIs only and do not support other protocols like FTP.
· Create a class with a byte array
[OpenApiExample(typeof(ModuleUploadExample))]
public class ModuleUpload
{
public byte[] zip { get; set; }
}
· Create an example
internal class ModuleUploadExample : OpenApiExample<ModuleUpload>
{
    public override IOpenApiExample<ModuleUpload> Build(NamingStrategy namingStrategy = null)
    {
        this.Examples.Add(
            OpenApiExampleResolver.Resolve(
                "ParametersExample",
                new ModuleUpload()
                {
                    zip = new byte[] { 1, 2, 3, 4 }
                },
                namingStrategy
            ));
        return this;
    }
}
· Should get you something like this
Sorting by index in the desired order:
def sort_string(s):
return ''.join(sorted(s, key='23456789TJQKA'.index))
print(sort_string('Q4JTK') == '4TJQK')
print(sort_string('9T43A') == '349TA')
print(sort_string('T523Q') == '235TQ')
public static bool HasUniqueChars(string input)
{
    HashSet<char> chars = new HashSet<char>();
    foreach (char ch in input)
    {
        if (chars.Contains(ch))
        {
            return false;
        }
        else
        {
            chars.Add(ch);
        }
    }
    return true;
}
I contacted my hosting site. They cleared some caches somewhere. They are sending those details to me.
They did mention that it could be Cloudflare but that was not the case.
Will share more details when they become available regarding which cache(s) was cleared to fix the issue.
You may try to use the option in application.properties
quarkus.hibernate-orm.active=false
or environment variable
QUARKUS_HIBERNATE_ORM_ACTIVE=false
Read more: https://quarkus.io/guides/hibernate-orm#quarkus-hibernate-orm_quarkus.hibernate-orm.enabled
I have the same query as in the question above.
Regarding the answer by @oliver
As a MWE, I wish to create a simple test package with 2 files.
$ cat hello.sh
#!/usr/bin/sh
cat mydatafolder/data.txt
$ cat mydatafolder/data.txt
Hello World!
hello-world-1.0 $ tree -L 1
.
├── debian
├── hello.sh
└── mydatafolder
3 directories, 1 file
Now, how do I refer to data.txt so that the same incantation works both while creating the package and after installing it?
Here is what I tried:
hello-world-1.0$ cat debian/install
hello.sh usr/bin
mydatafolder/* usr/share/mydatafolder
hello-world-1.0$ echo $XDG_DATA_DIRS
/usr/share/gnome:/usr/local/share/:/usr/share/
When I create and install the package it says :
$ hello.sh
cat: mydatafolder/data.txt: No such file or directory
Shouldn't mydatafolder, which is in /usr/share/, be in the list of folders in $XDG_DATA_DIRS? Is that not how it works? What am I missing?
Use the vip package for that:
vip::vip(model_rf)
Are you doing Namaste React? I'm also stuck on the same problem.
Try using better models for the embeddings; a poor embedding model can cause a decrease in accuracy.
Are you still doing this?
I have a solution that I've been using on "timebucks.com" for almost a year now.
I can't make it a public repository but you can reach me so that we can share ideas. You can text on wa.me/254114058155
Hi, I think you need to install an LSP server; you can do it with Mason.
https://www.youtube.com/watch?v=h4g0m0Iwmys&ab_channel=typecraft
I experienced a similar problem. I use IntersectionObserver for infinite scroll. I tested it on many different devices and browsers, including macOS, Linux, and Windows machines with Chrome, Safari, Opera, and Edge. I never experienced an error, and neither did my customers.
However, a week ago a customer reported that scrolling was not working on any of their devices. We had online sessions to find the issue, tried different browsers, and so on. The interesting thing is that it worked from time to time. Then I noticed that I had set threshold to 1 without even thinking about what it means (it was actually a snippet of AI-generated code). Setting it to a lower value solved the problem.
In short, I believe the threshold value may not be precisely deterministic across devices and browsers; it may even work sometimes and fail sometimes with exactly the same configuration. So assigning it a lower value may save you.