After trying countless things, including reinstalling PowerShell, I found the problem and the solution.
The folder C:\Windows\System32\PowerShell was missing from my computer, and reinstalling PowerShell didn't restore it. To fix it, simply run: Install-PowerShellRemoting.
From that moment on, everything worked as it should.
Just for the sake of completeness of the thread: since 1.33, thanks to this enhancement proposal, Kubernetes supports native sidecars, a pattern for deploying sidecar containers in pods. Native sidecars are init containers with restartPolicy: Always. This blog post from Istio shows how you can configure Istio to inject its sidecar containers as native sidecars.
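A minimal sketch of the pattern (image names are placeholders): the sidecar goes under initContainers with restartPolicy: Always, which is what makes it start before the main container and keep running alongside it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-native-sidecar
spec:
  initContainers:
    - name: log-shipper            # runs for the pod's whole lifetime
      image: example/log-shipper   # placeholder image
      restartPolicy: Always        # this field turns the init container into a native sidecar
  containers:
    - name: app
      image: example/app           # placeholder image
```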
Just add a ZStack at the top level inside body, give it your desired color, and that's it; it will work fine.
Add 'https://www.youtube-nocookie.com' in origin
_controller = YoutubePlayerController(
  params: const YoutubePlayerParams(
    showControls: true,
    mute: false,
    showFullscreenButton: true,
    loop: false,
    origin: 'https://www.youtube-nocookie.com', // add this line
  ),
);
_controller.loadVideoById(
  videoId: YoutubePlayerController.convertUrlToId(
          "https://www.youtube.com/watch?v=_57oC8Sfp-I") ??
      '',
);
Update based on @h3llr4iser's answer:
<?php

namespace App\Services;

use App\Entity\MyEntity;
use Vich\UploaderBundle\FileAbstraction\ReplacingFile;
use Vich\UploaderBundle\Handler\UploadHandler;

final class ExternalImageUploader
{
    public function __construct(private UploadHandler $uploadHandler)
    {
    }

    public function copyExternalFile(string $url, MyEntity $entity): void
    {
        // Download the remote file into a temporary location first.
        $newFile = tempnam(sys_get_temp_dir(), 'prefix_');
        copy($url, $newFile);

        $uploadedFile = new ReplacingFile($newFile);
        $entity->setImageFile($uploadedFile);
        $this->uploadHandler->upload($entity, 'imageFile');
    }
}
Another source of my misunderstanding was the use of a literal list. In the updated part of the question, the literal list '(25 30 35) is used; modifying literal data has undefined consequences in Common Lisp.
Similarly, when I execute the following code in SBCL
(let ((lst '(0 1 2)))
(setf (car lst) 10)
(values
(car lst)
lst))
The output is
0
(10 1 2)
because SBCL is free to constant-fold (car lst) for a literal list, even though the underlying cons was mutated.
A simple solution is to put the TabControl inside a Panel and make the TabControl slightly bigger than the Panel (4 pixels per border).
So, given that you say they are only 'practically' identical, the Category, Product and Review descriptions might differ in their fields? You could try to create an inheritance hierarchy where they all inherit from some base, abstract Description class and then use various mapping strategies to get that into the DB, but that feels wrong to me too.
Have you considered a third option, a Table Translation that just contains a single piece of text to be translated? In other words, instead of having the unique key be (category/product/review_id, language_id), which loads an object with multiple fields for your various translations, have it be (category/product/review_id, language_id, text_key). Something like:
| (categoryId/productId/reviewId) | language_id | text_key | text |
|---|---|---|---|
| someId | en | title | BEST PRODUCT EVER |
| someId | en | description | I really like this product |
| someOtherId | de | title | Acme Staubsauger (Rot) |
When a User with locale 'en' loads the Product X, you just select every translation where id = productXId and language_id = 'en' and load it in a Map which your application can use. This way, a Review can have different fields that need to be translated without modifying the DB-Definition. Likewise, if at some point you decide a ProductDescription needs a new translatable text, you simply insert it into this table and you're good to go.
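A runnable sketch of this lookup using SQLite (the table and column names here are my own placeholders, following the example rows above; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE translation (
        object_id   TEXT,   -- categoryId/productId/reviewId
        language_id TEXT,
        text_key    TEXT,
        text        TEXT,
        PRIMARY KEY (object_id, language_id, text_key)
    )
""")
conn.executemany(
    "INSERT INTO translation VALUES (?, ?, ?, ?)",
    [
        ("productX", "en", "title", "BEST PRODUCT EVER"),
        ("productX", "en", "description", "I really like this product"),
        ("productX", "de", "title", "Acme Staubsauger (Rot)"),
    ],
)

# Load every 'en' translation for product X into a map the app can use.
rows = conn.execute(
    "SELECT text_key, text FROM translation "
    "WHERE object_id = ? AND language_id = ?",
    ("productX", "en"),
)
translations = dict(rows)
print(translations["title"])  # → BEST PRODUCT EVER
```

Adding a new translatable text for a product is then a plain INSERT, with no schema change.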
Optionally, the single Id column can be three columns with nullable foreign key constraints, if you want the enhanced correctness that gives. See this question for various ways you could do that, or alternatives.
Upsides:
Reviews, Categories and Products can diverge in terms of required translations with no impact on your DB. They might start out 'practically' identical, but that's not guaranteed, after all.
Similarly, if any future objects need translated texts, you can integrate them into the same solution too.
You save yourself the headache of doing class inheritance mapping/persisting
Downside:
Unlike a typed Description object, you might try to access a translation that doesn't exist, or misspell its key, and only find out at runtime when a text can't be translated.

I was facing this error too. I simply deleted my node_modules folder and installed it again, and the error was gone for me. Hopefully this quick fix works for you as well!
I decided to switch to Quartz because Hangfire simply does not support this. Quartz is definitely more powerful and offers more control. It's also easier to implement.
@Shawn Thank you for explaining why the code works with uninterned symbols, which I was not familiar with.
VPNs should fix the region issue but Google Play isn't reliable in terms of auto updates
I found the analogy between Magic and Coding very apt. Coding is very much like casting spells in a strange language making a machine do anything really, to a layperson that is wizardry.
The issue is caused by flex-wrap: wrap. When the content is too long to fit on one line, flexbox wraps the second <span> to a new line.
To keep both spans on the same line regardless of the length, you can remove flex-wrap: wrap or add flex-wrap: nowrap (this prevents wrapping).
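For example (the class name is a placeholder):

```css
.row {
  display: flex;
  flex-wrap: nowrap; /* keep both spans on one line */
  /* or simply delete the flex-wrap: wrap declaration */
}
```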
I wrongly chose the type of question as "General Advice/Other", as @Shawn commented. I tried to change the type, but it cannot be changed.
"What are the kinds of practical difficulties you've run into when implementing Microfrontends?" Is it really that or are you rather looking for problems and solutions that arise when refactoring from monolith to a more "microservice"-ish architecture?
Alternatively, you can use Scalar: https://springdoc.org/#scalar-support
You have already applied borderTopWidth: 2, and the remaining borders are set to 0, so you can use borderColor: 'rgb(225, 225, 225)' directly instead of borderTopColor: 'rgb(225, 225, 225)'. That worked for me.
Answering this old thread because I stumbled over it when I was too lazy to browse my project doc:
Be sure to run git config core.hooksPath <path to your git hooks> in the project root to tell IntelliJ where to find your git hooks.
Turns out that upgrading the version of cbindgen from 0.24 to the latest on the project's GitHub page (0.29) made the problem go away. It's still not clear to me what the issue actually was, but if anyone else follows the same tutorial and encounters the same error, hopefully this will show up in a web search as a solution.
just like the other answer: use the npm package: material-symbols
inside globals.css file:
@import "tailwindcss";
@import "material-symbols"; /* <-- add this line */
somewhere in your project:
<span className="material-symbols-outlined" style={{ fontSize: 32 }}>thumb_up</span>
.sql is the common extension for all database queries, except for non-relational databases.
For anyone coming across this problem: Batch just published an update that fixes this issue in 11.1.0.
https://doc.batch.com/developer/sdk/react-native/sdk-changelog
In the answers above, "any character" includes nonprintable characters and blanks; you can filter a string using tr -dc.
Example:
echo -en "\x09H\x09ello\x0A\x0AHello\x20\x0A" | tr -dc '[:alpha:]'   # prints HelloHello
When you listen to the ExpressCheckoutElement's click event, you need to call the resolve() or reject() function that you got from the handler's event within 1 second at most. Remove any complex code that may delay the call of one of these functions beyond the 1-second window.
I was having the same problem: no native arm64 simulators were showing up at all for iOS 26. As a desperate move to try and get any simulator to work, I downloaded an older iOS runtime, but even those only appeared in my list labeled as (Rosetta).
That's when I found this. Removing arm64 from Build Settings > Excluded Architectures fixed it immediately.
It's likely that newer versions of Xcode (like 26) only download the native arm64 simulator runtime by default, and not the older Intel x86_64 one.
My project still had that old build setting; removing it stopped forcing the Intel build. I'm assuming @Fourth's suggestion to run xcodebuild -downloadPlatform iOS -architectureVariant universal would also work, by forcing Xcode to download all the runtimes.
I set a different subnet for the ACA environment under the same VNet as the ACR's private link, and it worked.
ref: https://learn.microsoft.com/en-us/azure/container-apps/custom-virtual-networks?tabs=workload-profiles-env
Smartstore.NET is a solid choice if you're comfortable with .NET and want modularity. Always use plugins instead of editing the core; it'll make upgrades much easier. For performance, caching and async loading help a lot with medium-load stores.
event.reject() is not a valid function on the event object; the documentation showing it is outdated. Remove the onClick handler and instead use the confirmPayment() method with the elements object, after fetching the payment intent's client_secret from your server.
If you want a language lawyer-style answer:
As said in [intro.abstract]/p1:
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below. [Footnote 6]
Footnote 6 says that the above is the so-called "as-if" rule:
This provision is sometimes called the "as-if" rule, because an implementation is free to disregard any requirement of this document as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced.
That said, the implementation only needs to make sure the observable behavior is correct. Look at [intro.abstract]/p6:
The least requirements on a conforming implementation are:
- Accesses through volatile glvalues are evaluated strictly according to the rules of the abstract machine.
- At program termination, all data written into files shall be identical to one of the possible results that execution of the program according to the abstract semantics would have produced.
- The input and output dynamics of interactive devices shall take place in such a fashion that prompting output is actually delivered before a program waits for input. What constitutes an interactive device is implementation-defined.
These collectively are referred to as the observable behavior of the program.
[Note 2 : More stringent correspondences between abstract and actual semantics can be defined by each implementation. — end note]
That's the definition of "observable behavior". The difference produced by the StoreStore optimization you described does not fall into any of these categories.
Yes, your approach looks good and safe. You’re correctly using a Ktor client plugin to attach the UUID header, handling synchronization with a Mutex, and caching it properly to avoid repeated reads. Just make sure your SecureStorage uses something secure like EncryptedSharedPreferences or the Android Keystore so the ID stays protected across restarts.
Here’s the official Ktor documentation for reference:
Title:
Unable to send email using Microsoft Graph API with personal account — 401 APIError
Body:
Hi, I'm trying to send an email from my personal Microsoft account using the Graph API. Here's what I've done so far:
Added my personal email to the Azure AD tenant via my app registration
Granted the required permissions (Mail.Send, User.Read, etc.)
Successfully signed in using AuthorizationCodeCredential
However, when I try to send the email, I get the following error: kiota_abstractions.api_error.APIError: APIError Code: 401
Message: The server returned an unexpected status code and the error registered for this code failed to deserialize: <class 'NoneType'>
Has anyone faced this issue before? Is there something specific I need to configure for personal accounts to send mail via Graph API?
Any help would be appreciated. Thanks!
Briefly:
cat 1.txt | sed -E "s/(.{1})/\1\n/g" | sort | uniq
Example:
cd /dev/shm/
echo "Hello_world" > /dev/shm/1.txt
cat 1.txt
cat 1.txt | sed -E "s/(.{1})/\1\n/g" | sort | uniq -c
s means substitute: s/pattern/replacement/g, where the trailing g means globally.
() is a group.
. (dot) means any character.
{1} means exactly one occurrence.
All together:
s/(.{1})/\1\n/g means: replace each single character with itself followed by \n (char 10), which represents a newline.
Your code is almost correct and you are going in the right direction, but you can follow the steps below to fully understand mapping from Entity to DTO using MapStruct.
Modify the line below to match your actual code (change only this line in your interface):
@Mapper(componentModel = "spring")
Do you have default values set for completed_at and updated_at?
In your migration, both of these fields are marked as nullable, so you should make them nullable in your Data class as well, like this:
public function __construct(
    public User $creator,
    public TaskCategory $taskCategory,
    public TaskPriorityEnum $priority,
    public string $title,
    public string $slug,
    public string $content,
    public bool $completed,
    public ?\DateTime $completed_at,
    public ?\Date $deadline_at,
    public \DateTime $created_at,
    public ?\DateTime $updated_at,
)
You can also try ?\Carbon\Carbon.
I figured out that I have to wait for the response, usually SEND OK, but sometimes, due to the serial communication, only a _ prompt appears, which is equivalent to OK and means it is waiting for the next chunk of data.
While @Jarod42's answer is the right one for the given problem, I found a new way to approach it using C++26's expansion statements:
The unnamed structs are tied together in a tuple. We use a static_assert to check that every constant in the enum is associated with a type. Then we can use C++26's expansion statements, i.e. template for, to loop through the tuple of unnamed structs:
static constexpr auto objects = std::tie(foo, bar, baz);

// Validate that all object types have corresponding parameter structs
consteval bool validate_all_objects_defined()
{
    for (uint8_t type_val = 0; type_val < MyType::TYPE_COUNT; ++type_val)
    {
        bool found = false;
        template for (constexpr auto& obj : objects)
        {
            if constexpr (myrequirement<decltype(obj)>)
            {
                if (obj.type() == type_val)
                {
                    found = true;
                }
            }
        }
        if (!found) return false;
    }
    return true;
}

static_assert(validate_all_objects_defined(),
              "Not all object types have corresponding parameter definitions!");

template<MyType servo_type>
constexpr auto& get_instance()
{
    template for (constexpr auto& obj : objects)
    {
        if constexpr (myrequirement<decltype(obj)> && obj.type() == servo_type)
        {
            return obj;
        }
    }
    // This should never be reached due to the static_assert above
    std::unreachable();
}
This way, users only have to add a constant to the enum, define a structure, and add it to the tuple; adding a constant without defining a new type gives an explicit compile error:
<source>:63:43: error: static assertion failed: Not all object types have corresponding parameter definitions!
63 | static_assert(validate_all_objects_defined(),
[Ctrl + Shift + S] will open the [Save As] window with the current file name selected.
[Ctrl + C] to copy it.
[Esc] to close the window.
The solution is very simple: match each class attribute's required type step by step. You are facing a JSON parse error because you need to use a long value instead of a String value: in your ProductMaterial class the material id is a long, but your JSON supplies a String. Use the correct data type for the id and try the same request again; you will not face this issue anymore.
Your request :
Your JSON :
{
    "sku": "PRD-001",
    "name": "Test Produkt",
    "materials": [
        {
            "id": -1,
            "material": {
                "id": "52a92362-3c7b-40e6-adfe-85a1007c121f",
                "description": "Material 1",
                "costPerUnit": 1,
                "inStock": 1
            },
            "unitsPerProduct": 1
        },
        {
            "id": -1,
            "material": {
                "id": "8a0d57cc-d3bd-4653-b50f-d4a14d5183b3",
                "description": "Material 4",
                "costPerUnit": 0.25,
                "inStock": 4
            },
            "unitsPerProduct": 1
        }
    ],
    "sellPrice": "1.2"
}
New Sample JSON (Try and Check) :
{
    "sku": "PRD-001",
    "name": "Test Produkt",
    "materials": [
        {
            "id": -1,
            "material": {
                "id": 1,
                "description": "Material 1",
                "costPerUnit": 1,
                "inStock": 1
            },
            "unitsPerProduct": 1
        },
        {
            "id": -1,
            "material": {
                "id": 2,
                "description": "Material 4",
                "costPerUnit": 0.25,
                "inStock": 4
            },
            "unitsPerProduct": 1
        }
    ],
    "sellPrice": "1.2"
}
I'm also learning about how stages are divided, especially with .count().
Unlike the accepted answer, I think the three stages correspond to the three operations:
reading data,
repartition(4),
and .count().
Because .repartition(4) will trigger a shuffle.
# --------------------------------------------
# Program: Count Passed and Failed Students
# --------------------------------------------

# Step 1: Create a list of exam scores for 25 students
scores = [45, 78, 32, 56, 89, 40, 39, 67, 72, 58,
          91, 33, 84, 47, 59, 38, 99, 61, 42, 36,
          55, 63, 75, 28, 80]

# Step 2: Set the minimum passing score
passing_score = 40

# Step 3: Initialize counters for passed and failed students
passed = 0
failed = 0

# Step 4: Go through each score in the list
for score in scores:
    # Step 5: Check if the student passed or failed
    if score >= passing_score:
        passed += 1  # Add 1 to 'passed' count
    else:
        failed += 1  # Add 1 to 'failed' count

# Step 6: Display the results
print("--------------------------------------------")
print("Total number of students:", len(scores))
print("Number of students passed:", passed)
print("Number of students failed:", failed)
print("--------------------------------------------")
Humbly: I wrote a thing and the kernel (6.12) will compile on macOS.
brew install archie2x/cross-make-linux
brew install llvm lld
PATH="$(brew --prefix llvm)/bin:$PATH"
cd linux
cross-make-linux [normal make arguments]
See https://github.com/archie2x/cross-make-linux
This is basically the same approach as Yuping Luo's answer above and
https://seiya.me/blog/building-linux-on-macos-natively
There are at least three patches to linux to do this directly:
https://www.phoronix.com/news/Linux-Compile-On-macOS
https://www.phoronix.com/news/Building-Linux-Kernel-macOS-Hos
And Seiya (blog above) patch: https://github.com/starina-os/starina/blob/d095696115307e72eba9fe9682c2d837d3484bb0/linux/building-linux-on-macos.patch
I think that patch alone will do it without cross-make-linux (I haven't tested), though the tool will also ensure you have gmake, gsed, GNU find, etc.
Are there any Microsoft or industry best practices that discourage storing serial or license information in the Uninstall key?
You should not store anything in a location not owned by you, unless specifically instructed by the owning app to do so.
Create your own key, i.e. [HKEY_LOCAL_MACHINE] or [HKEY_CURRENT_USER]/[Your-Company]/[Your-App] and store any and all information about the app there. In your WiX configuration, be sure to clean it up on uninstallation.
Be careful to use [HKEY_LOCAL_MACHINE] or [HKEY_CURRENT_USER] correctly, depending on whether the installation is per-machine or per-user.
Use of the registry is however not as widespread as it once was, since cross-platform requirements often make it unsuitable as a mechanism. We're mostly back to storing such things in the file system, or at times in the app database. I recommend storing it in a file in %LOCALAPPDATA%\[Your-Company]\[Your-App] on Windows and the corresponding places on other platforms, if any.
Since this key is accessible to users (especially those with local admin privileges), could this expose license or customer-sensitive data
It certainly exposes the license number, but it only exposes what you write there. Whether it's customer-sensitive depends on what's in the number, but that is unlikely unless you use the customer's social security number or credit card number as a license number ;-) .
If this is not advisable, what would be a more secure method of storing such data? (For example, encrypting serial key)
It depends on how you view the license key, but typically you should not view it as a secret in the sense that it requires encryption. You can't really encrypt it securely without things getting complicated, and in the end, if the user has access to the decryption key, so does anyone else running as the user.
Normally you would view a serial number or license key to be something of value, under the custody of the user, and have the license agreement state how the license key may be used.
A common practice for offline license verification is to create a license key that includes something tying it to the user or the user's system (perhaps an email address, a name, or a hardware identifier), and then digitally sign it before delivering it. The app then verifies that it has a valid license using an embedded public key (which is not a secret). The license key can still be stolen or misused, but if that is discovered, the source can be determined.
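A toy sketch of the shape of such a scheme in Python. To stay dependency-free it uses a symmetric HMAC, which does NOT give you the "embedded key is not a secret" property described above; a real implementation should use an asymmetric signature (e.g. Ed25519) so only the vendor can sign while anyone can verify. All names here are made up.

```python
import hashlib
import hmac
import json

# NOTE: with an asymmetric scheme this would be the vendor's PRIVATE key,
# and the app would only embed the public verification key.
VENDOR_KEY = b"replace-with-vendor-secret"

def issue_license(email: str) -> str:
    """Vendor side: tie the license to the user and sign it."""
    payload = json.dumps({"email": email, "edition": "pro"}, sort_keys=True)
    sig = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_license(license_key: str):
    """App side: return the payload if the signature checks out, else None."""
    payload, _, sig = license_key.rpartition(".")
    expected = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(payload)

key = issue_license("user@example.com")
print(verify_license(key)["email"])      # → user@example.com
print(verify_license(key + "tampered"))  # → None
```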
A signed license key is my personal preference. If online access is OK, a revocation server can be contacted to ensure the license key has not been disabled, but personally I think that is overkill unless the license is perpetual and misuse causes you direct costs, not just lost revenue. One of the beautiful things about software is that the marginal cost of producing a copy is essentially zero, so a stolen license key typically only costs you potential revenue, and in most cases not even that, since a user running a stolen license key is unlikely to purchase it under any circumstances unless you really created a must-have killer app with no alternatives.
If you are interested, I have made a nuget package for .NET, that handles signing and verification of a software license.
Alternatively, perform online verification if suitable, where the user signs in to the app in a cloud service, and receives some form of indication or token back if the user is licensed to use the app.
There's more that can be said and many nuances, but this should be a start.
In addition to Zzz0_o's answer, you should also prepend
flush ruleset
as the first line of backup.nft (for example with sed -i '1i flush ruleset' backup.nft) to delete the old rule set first; otherwise you will end up with duplicate rules.
You could use df[:, [index]] if you know the column position.
-locale is still supported, but it’s currently ignored in newer JDKs (known bug JDK-8222793).
To force English output, use JVM options instead:
javadoc -J-Duser.language=en -J-Duser.country=US ...
I have created a project using this concept of talking to other processes via a mechanism like COM, for example: https://github.com/GraphBIM/RevitMCPGraphQL. The limitation is that you need to install a plugin in Revit to listen for events and execute code sent from outside Revit.
I made a lot of different changes until I was finally able to build for Android again. I will list them below; following these, I hope you can solve your problem as well.
0- Make sure to create a backup of your entire project, we'll be deleting a ton of stuff.
1- Inside your Project folder, delete the Library and Temp folders. Temp folder is only visible when the editor is running.
2- There seems to be something wrong with package resolution: the default resolver tries to run a .bat file that does not exist. One option is to create or download it yourself, but I tried that with no luck; the progress was still stuck at 0%, just without errors. Instead, install the Unity Jar Resolver (GitHub: https://github.com/googlesamples/unity-jar-resolver). I installed it via Git from the Package Manager (https://github.com/googlesamples/unity-jar-resolver?tab=readme-ov-file#install-via-git-url).
3- I got hundreds of errors in my console after the previous step. What you want to do now is delete the folder called "MobileDependencyResolver" from your Assets folder. You'll be shown a message box about re-importing it; click yes and let it process everything.
4- Now you'll have a new option in the Assets menu (top bar in unity editor) called External Dependency Manager. Assets > External Dependency Manager > Android Resolver > Force Resolve
Click force resolve once, then try to build for android again.
Notes:
In the process I also deleted the .gradle and .android folders from my user folder (C:\Users\<username>), but I don't think it affected the outcome.
Other useful links:
https://discussions.unity.com/t/win32exception-at-resolving-android-dependencies-cannot-find-file-gradlew-bat/904380
check out https://pub.dev/documentation/flutter_native_splash/latest/flutter_native_splash/FlutterNativeSplash/preserve.html and the corresponding .remove
Thank you Charlieface: the query you gave me was what I wanted. I altered it a bit to get the results I needed. I still have to test this further, and this is just part of the problem: in reality there are 5 types of code in table t1, each with its own include and exclude range conditions, so I will have to check this within the entire query. I believe this is what I wanted:
SELECT t1.*
FROM table1 t1
Left Join table2 t2_x on t1.Code between t2_x.FromCode and t2_x.ToCode and t2_x.IsExcluded = 1
WHERE t2_x.ID is NULL
AND NOT EXISTS (SELECT 1
FROM table2 t2
WHERE t2.IsExcluded = 0)
UNION ALL
SELECT *
FROM table1 t1
WHERE EXISTS (SELECT 1
FROM table2 t2
WHERE t1.Code BETWEEN t2.FromCode AND t2.ToCode
AND t2.IsExcluded = 0)
AND NOT EXISTS (SELECT 1
FROM table2 t2_x
WHERE t1.Code BETWEEN t2_x.FromCode AND t2_x.ToCode
AND t2_x.IsExcluded = 1);
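The query can be smoke-tested in SQLite with made-up data (selecting t1.* in the first branch so both UNION ALL branches return the same columns). With one include range 10-20 and one exclude range 15-16, only code 12 should survive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (Code INTEGER);
    CREATE TABLE table2 (ID INTEGER, FromCode INTEGER, ToCode INTEGER, IsExcluded INTEGER);
    INSERT INTO table1 VALUES (5), (12), (15), (25);
    INSERT INTO table2 VALUES (1, 10, 20, 0);   -- include range
    INSERT INTO table2 VALUES (2, 15, 16, 1);   -- exclude range
""")
rows = conn.execute("""
    SELECT t1.*
    FROM table1 t1
    LEFT JOIN table2 t2_x
           ON t1.Code BETWEEN t2_x.FromCode AND t2_x.ToCode
          AND t2_x.IsExcluded = 1
    WHERE t2_x.ID IS NULL
      AND NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.IsExcluded = 0)
    UNION ALL
    SELECT t1.*
    FROM table1 t1
    WHERE EXISTS (SELECT 1 FROM table2 t2
                  WHERE t1.Code BETWEEN t2.FromCode AND t2.ToCode
                    AND t2.IsExcluded = 0)
      AND NOT EXISTS (SELECT 1 FROM table2 t2_x
                      WHERE t1.Code BETWEEN t2_x.FromCode AND t2_x.ToCode
                        AND t2_x.IsExcluded = 1)
""").fetchall()
print(rows)  # → [(12,)]
```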
If you want to find functions that are used only in tests or not used at all, IntelliJ alone can’t fully do it.
The IDE doesn’t semantically distinguish between production code and tests when building its usage graph.
ts-prune is a CLI tool that scans your TypeScript project and lists all exported symbols that are never imported or used and it has ability to ignore test files.
npm install -D ts-prune
npx ts-prune "src" --ignore "src/**/*.test.ts"
Give it a try.
You can do this with:
-webkit-text-security: disc
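For example, to mask the characters of a plain text input (note this is a non-standard, WebKit/Blink-specific property; the class name is a placeholder):

```css
input.masked {
  -webkit-text-security: disc; /* renders each character as a dot */
}
```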
Sorry for replying only now; I did not realize the question was posted 14 years ago, when hardware technology was not as advanced as it is now (2025).
But for helping others improve their fast sorts (quicksort, shellsort, mergesort and others), I would post the results that I achieved yesterday ( 29Oct2025 ). Sort done using Shellsort compiled using Microsoft Visual Studio's C++ that was executed on an Intel i7-1355U CPU ( mobile computer CPU that draws only 15w of electricity ) with 16GB of memory. Base CPU speed is 1.7 Ghz.
20.733 secs for 200 million records
9.007 secs for 100 million records
0.87 secs for 10 million records
0.078 secs for 1 million records
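For reference, here is a minimal Shellsort in Python (my own sketch using the simple halving gap sequence; nowhere near the optimized C++ timings above, just to illustrate the algorithm):

```python
def shellsort(a):
    """In-place Shellsort with the simple n/2, n/4, ... gap sequence."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: elements gap apart are kept in order.
        for i in range(gap, n):
            tmp = a[i]
            j = i
            while j >= gap and a[j - gap] > tmp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = tmp
        gap //= 2
    return a

print(shellsort([45, 12, 99, 3, 27, 3]))  # → [3, 3, 12, 27, 45, 99]
```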
A WordPress theme installation error usually occurs for one of the following reasons:
- Incorrect file format: upload a .zip theme file, not an extracted folder. WordPress only accepts zipped theme files.
- File size limit: some hosting providers limit the maximum upload size. If the theme is large, increase the upload size limit or install via FTP.
- Missing style.css file: every WordPress theme must contain a style.css file in its root folder. If it's missing, the theme will fail to install.
- Theme conflicts or duplicates: if the same theme or version already exists, WordPress may block installation. Try deleting the old version first.
- Server configuration issues: sometimes the PHP version, memory limits, or permissions cause the error. Updating PHP or adjusting permissions can help.
- Uploading to the wrong section: make sure you upload themes under Appearance → Themes → Add New, not under Plugins.
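If the problem is the upload size limit, the PHP limits can usually be raised in php.ini (the values below are examples, not requirements; your host may override them):

```ini
; php.ini – example values, adjust to your host's policy
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
max_execution_time = 300
```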
I don't know about Java (I have not coded in Java since 1998).
However, using an Intel i7-1355U (mobile i7 CPU, 1.7 GHz base speed) with 16 GB of memory, I was able to sort 200 million DWORDs (4-byte unsigned integers) in 20.563 secs using Shellsort (normally slower than Quicksort). The code was compiled with Visual Studio's C++.
So your implementation is definitely flawed (unoptimised somewhere).
Microsoft would not allow me to sort 300 million records, so I cannot tell you anything beyond that limitation. Sometimes I could sort 250 million records.
Yeah, just ran into this:
matplotlib 3.10.7
mplleaflet 0.0.5
I encountered the same problem even with Xcode 26, Xcode 26.0.1, and Xcode 26.1 (https://developer.apple.com/forums/thread/805547).
After two days of trying, I fixed it by following the methods in this article (https://medium.com/@sergey-pekar/how-to-fix-annoying-xcode-26-not-building-swiftui-previews-error-cannot-execute-tool-metal-due-49564e20357c).
I think there is still a bug when installing the Metal Toolchain, at least for those who migrated from the Xcode 26 beta.
Here is a concept you can use without attaching and debugging:
Revit AddinManager lets developers update .NET assemblies without restarting Revit.
Attaching to a process is fine in cases where you open Revit and need to test for some time, but it's not highly recommended for this case.
When a connection name contains non-ASCII characters, the XML file can end up corrupted; in this case you used a Chinese name for the database connection. Rename the connections with simple names and avoid non-ASCII characters and accented names. On Windows, you can edit Spoon.bat and, before Java is launched, add set JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8, then save and restart Spoon. This forces all XML files to be read and written as UTF-8. I hope it works for you, good luck.
Are you using the "Activity" component in your code? When I rendered two Activity components, that error came up in the console.
It just records the cell or sheet context where the named item was originally created, and it does not affect what the name actually refers to. You can safely ignore it when editing the name or value.
if anyone is having the same problem in VS2022, go to the Docker Desktop app → Settings → Resources → WSL Integration. I noticed that I didn’t have any distro installed, so I installed Ubuntu using `wsl --install Ubuntu` and it worked!
As ShaolinShane said, I set a different subnet for the ACA environment under the same VNet as the ACR's private link, and it worked.
I did it like that. I think it's better and simpler too; the solution proposed here didn't work in Godot 4.3.
foreach (var child in GetChildren())
{
    // Pattern matching avoids the separate 'is' check plus 'as' cast.
    if (child is IState state)
    {
        _states.Add(state);
    }
}
A better solution:
add_action('woocommerce_single_product_summary', function() {
    global $product;

    if ( ! $product->is_in_stock() ) {
        echo '<a href="' . home_url( 'contact' ) . '">Contact us</a>';
    }
}, 31);
The greyed-out JAR files issue in STS/Eclipse Data Source Explorer is a common problem.
Try installing the proper database tools:
Go to Help → Eclipse Marketplace
Search for "DTP" (Data Tools Platform) or "Database Development"
Install the Eclipse Data Tools Platform plugins
Restart STS
Try adding drivers again via Data Source Explorer
Newer versions of STS can have issues with the legacy Database Development perspective, so try the steps above and let us know whether it works.
Use Process.Start with the name of the UWP app, ending in a colon, as the parameter:
Process.Start("appname:");
Process.Start("ms-calculator:");
`ST_Contains`/`contains` is slow when used row-by-row without a spatial index. Use GeoPandas or PostGIS, and make sure both layers share the same CRS/SRID:
import geopandas as gpd
# Reproject the points so both layers share the same CRS before the join.
points = points.to_crs(polygons.crs)
# sjoin uses the spatial index under the hood, so it scales far better
# than a row-by-row contains() check.
result = gpd.sjoin(points, polygons, how="left", predicate="within")
The problem is that you are using the google.generativeai library, which is the SDK for Google AI Studio and is designed to use a simple API key. If you're using ADC, then you need to use the Vertex AI SDK, google-genai. That library is designed to automatically and securely use your environment's built-in service account.
I was able to get the following code to work:
pip install google-genai python-dotenv
import os
from dotenv import load_dotenv
from google.genai import Client
load_dotenv()
project_id = os.environ.get("PROJECT_ID")
location = os.environ.get("LOCATION")
print(f"[info] project id = {project_id}, location = {location}")
client = Client(
vertexai=True, project=project_id, location=location
)
model_name = "gemini-2.5-flash"
response = client.models.generate_content(
model=model_name,
contents="Hello there.",
)
print(response.text)
client.close()
Closing all Visual Studio instances and deleting the %LocalAppData%\.IdentityService\SessionTokens.json file worked for me.
https://developercommunity.visualstudio.com/t/Nuget-Package-Manager-for-Project---Con/10962291
What worked for me is:
$env:REACT_NATIVE_PACKAGER_HOSTNAME='192.168.1.132'
then
npm start
@and answered this best in 2017, and I think we've all given this person enough time to post what I think is the best answer as an answer. So now I'm doing it, after posting upvote #42 (such a fitting number) on the comment that saved my bacon. But we digress. Combining the well-celebrated answer with @and's golden comment...
You can set the environment variable REQUESTS_CA_BUNDLE so you don't have to modify your code:
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
The improvement, if it's not clear, is this: /etc/ssl/certs/ca-certificates.crt contains not merely any self-signed cert you added to your trust store, but also all of the other standard certs. That's a big deal: for example, I ran into a situation where, with REQUESTS_CA_BUNDLE set to just my self-signed cert, the AWS CLI could no longer authenticate. (Don't ask me why AWS cares about REQUESTS_CA_BUNDLE. I don't know. I do know, however, that using ca-certificates.crt solved the problem.)
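If exporting from the shell isn't convenient, the same variable can be set from Python before `requests` is used (a minimal sketch; the path shown is the Debian/Ubuntu bundle location and may differ on your system):

```python
import os

# Equivalent to the shell export: point requests (and anything else that
# honors this variable) at the full system CA bundle, not just one self-cert.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"
```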
The error, however vague, was due to me not importing Text from react-native. The Modal call was the culprit. Thanks @sparkJ @Estus Flask. Also great username
Thank you everyone for your comments and answers. I found out that it was VS Code all along, refreshing the same log file to which I was adding data. I think VS Code started doing that with the latest update, as I had not seen it auto-refresh an open file before.
I added
$wgMainCacheType = CACHE_ACCEL;
$wgSessionCacheType = CACHE_DB;
which did not help then I realised I already had
## Shared memory settings
$wgMainCacheType = CACHE_NONE;
$wgMemCachedServers = array();
CACHE_NONE is recommended above. I tried both ACCEL and NONE.
I deleted cookies. I don't know how to access the database. I still can't log in on one browser (my main browser) but I can see the wiki content.
I used a regular expression to solve this issue. In Polars you signal that a column selector is a regular expression by anchoring it with the start and end symbols ^ and $, and the * needs to be escaped, so the full solution looks like:
import polars as pl
df = pl.DataFrame({"A": [1, 2], "B": [3, None], "*": [4, 5]})
print(df.select(pl.col(r"^\*$")))  # --> prints only the "*" column
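If the column name isn't known ahead of time, the standard library's `re.escape` can build the anchored pattern for you (a sketch; the commented `df.select` line assumes Polars is available):

```python
import re

# Polars only treats a selector as a regex when it is anchored with ^ and $;
# re.escape takes care of metacharacters like "*".
name = "*"
pattern = f"^{re.escape(name)}$"
print(pattern)  # ^\*$
# Then: df.select(pl.col(pattern))
```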
There are two locations for this information. In some cases, you might need to look in both places (try the primary first; if it's missing, try the alternate).
Primary location:
host.hardware.systemInfo.serialNumber
Alternate location:
host.summary.hardware.otherIdentifyingInfo
In some of my systems, I cannot find the tags in the primary location, and traversing the alternate location helps find them. But between those two locations, I have always been able to get the tags. It can be a bit tricky to fish the info out; the following code should help.
serial_number = None
if host.summary.hardware and host.summary.hardware.otherIdentifyingInfo:
    for info in host.summary.hardware.otherIdentifyingInfo:
        if info.identifierType and info.identifierType.key:
            key = info.identifierType.key.lower()
            if "serial" in key or "service" in key or "asset" in key:
                if info.identifierValue and info.identifierValue.strip():
                    serial_number = info.identifierValue
I love you, I have spent a day debugging the framework... for this simple thing. T.T
Have you solved this problem? I ran into the same question.
If you're looking for a C implementation (usable from C++) that handles UTF-8 quite well and is also very small, you could also have a look here:
How to uppercase/lowercase UTF-8 characters in C++?
These C-functions can be easily wrapped for use with std::string.
I'm not saying this is the most robust way; after all, all the problems with std::string remain, but it could be helpful in some use cases.
Is there any API can I use with C++ to do this?
No, there is no API to perform this task.
Microsoft's policy is that such tasks must be performed by the user using the provided Windows GUI.
Explaining the use case: if you are doing data augmentation, the following sequence will usually work:
import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])
If it's the imperative approach you're after, the original loop should do just fine.
Probably just my style preference -- I prefer having the logic in the accumulator -- it seems more like the imperative solution.
What would git(1) do? Or, What does git(1) do?
! tells it to run your alias in a shell. What shell? It can’t use a
specific one like Ksh or Zsh. It just says “shell”. So let’s try /bin/sh:
#!/bin/sh
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get the same error but a useful line number:
5: Bad substitution
Namely:
allArgsButLast="${@:1:$#-1}";
Okay. But this is a Bash construct. So let’s change it to that:
#!/usr/bin/env bash
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get a different error:
line 5: $#-1: substring expression < 0
Okay. So git(1) must be running a shell which does not even know what
${@:1:$#-1} means. But Bash at least recognizes that you are trying to use a construct that it knows, even if it is being misused in some way.
Now the script is still faulty (here the function is called with no arguments, so $#-1 is -1, hence the "substring expression < 0"). But at least it can be fixed, since it is running in the shell intended for it.
I would either ditch the alias in favor of a Bash script or make a Bash script and make a wrapper alias to run it.
If you don't want to map(), you could replace the accumulator with
(map, foo) -> {
map.put(foo.id(), foo);
map.put(foo.symbol(), foo);
}
But at this point it's hard to see how streaming is an improvement over a simple loop. What do you have against map() anyway?
Your order of parentheses is wrong:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size)), y)
Should be:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
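The slip is easy to reproduce without TensorFlow: in the broken version the `, y` falls outside the lambda, so it becomes a second argument to `map` rather than part of the returned tuple (a plain-Python sketch, with `resize` as a hypothetical stand-in for `tf.image.resize`):

```python
resize = lambda x: x * 2  # hypothetical stand-in for tf.image.resize

# Correct: the closing parenthesis wraps the whole (image, label) tuple.
fixed = lambda x, y: (resize(x), y)
print(fixed(3, "label"))  # (6, 'label')

# Broken: lambda x, y: (resize(x)), y
# Here ", y" sits outside the lambda, so Python parses a tuple of
# (lambda, y) and evaluates `y` immediately -- a NameError here, or a
# bogus extra positional argument inside train_ds.map(...).
```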
This should be possible, depending on what your data looks like. You may need to use transformations to split the data into multiple series. Do you have an example of your data?
I threw this State Timeline together with the following CSV data, not sure how close it is to what you have:
Start, End, SetID, ModID, ModType
2025-01-01 01:00:00, 2025-01-01 01:15:00, 100, 101, Set
2025-01-01 01:00:00, 2025-01-01 01:01:00, 100, 102, Alpha
2025-01-01 01:02:00, 2025-01-01 01:04:00, 100, 103, Beta
2025-01-01 01:05:00, 2025-01-01 01:25:00, 110, 111, Set
2025-01-01 01:05:00, 2025-01-01 01:08:30, 110, 113, Alpha
2025-01-01 01:07:00, 2025-01-01 01:12:00, 110, 115, Gamma
Transformations:
Format the times correctly
The main one is Partition by values to split the data into separate series based on SetID & ModID
That should get you the chart you want, but you'll want to add some settings so the names don't look weird and the bars are colored how you like:
Display Name: Module ${__field.labels.ModID} to convert from ModType {ModID="101", SetId="100"}
Value Mappings: Set -> color Yellow , Alpha -> Red, Beta -> Green, etc.
You can use the trim() modifier to cut a Shape in half:
Circle()
.trim(from: 0, to: 0.5)
.frame(width: 200, height: 200)

Learn more from this link.
https://github.com/phoenix-tui/phoenix - a high-performance TUI framework for Go with DDD architecture, solid Unicode handling, and an Elm-inspired design. A modern alternative to Bubbletea/Lipgloss, with all function keys supported out of the box!
Maybe your layers config is not set up the way you think. If you check, you may be in for a surprise (Project Settings -> Physics, or Physics 2D).
For c++ it's this:
-node->get_global_transform().basis.get_column(2); // forward
node->get_global_transform().basis.get_column(0); // right
node->get_global_transform().basis.get_column(1); // up
https://pub.dev/packages/web_cache_clear
I made this package because I needed it too. It assumes you have a backend where you can update your version number; every time the page loads, it checks the session version against the backend version. If they differ, it clears the cache storage and reloads the page.
It does not matter whether you use the VS Code integrated terminal or the macOS Terminal: run su, enter the password, and after becoming root install nodemon with "npm install -g nodemon". It worked for me this way.