I checked the code you shared — it works fine on my Chrome. But your issue might be due to one of these:
Your Chrome version might be older — try updating it.
Could be related to zoom level or screen scaling (like 125%). Try setting zoom to 100%.
Try running it in incognito mode — maybe an extension is affecting it.
Or it might just be a small browser glitch — try clearing cache or refreshing with Ctrl + Shift + R.
If the broker is unavailable, the producer will "retry" forever, but it's not really retrying. That's why the retries setting in the config does not come into play.
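As a minimal illustration (assuming the Python confluent-kafka client; the broker address and topic are placeholders), the retry-related settings only apply to send attempts that actually reach a broker and fail:
import json  # not needed here; see comments below
from confluent_kafka import Producer

# 'retries' governs resends after a failed delivery attempt; while the
# broker is unreachable, messages simply sit in the client's local queue
# until message.timeout.ms expires.
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "retries": 3,                 # per-message resend attempts
    "message.timeout.ms": 30000,  # total time before the send fails
})

producer.produce("my-topic", b"hello")
producer.flush(10)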
You can easily achieve this with https://protectweb.site — it's a free tool that lets you create dynamic text-based watermarks and embed them into any webpage with a simple script tag. No coding or design required, just configure your watermark and copy-paste. Super handy if you're looking for a quick and customizable solution!
You can add a boolean field isFavourite to your model class and set its default value to false. Then, when anyone clicks the like button, change the value of isFavourite and save it to the database. That will do what you want.
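For example, a minimal sketch assuming a Django-style model (the model and field names here are illustrative, not from your code):
from django.db import models

class Post(models.Model):
    # Defaults to false; flipped when the like button is clicked.
    is_favourite = models.BooleanField(default=False)

def toggle_favourite(post: Post) -> None:
    # Flip the flag and persist it to the database.
    post.is_favourite = not post.is_favourite
    post.save()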
IBM Business Process Management (IBM BPM) is a comprehensive platform designed to model, execute, monitor, and optimize business processes. It’s part of IBM’s broader automation suite and is used by enterprises to improve efficiency, reduce bottlenecks, and streamline operations across departments.
IBM BPM combines workflow automation, business rules, and real-time analytics, allowing organizations to:
Design and visualize end-to-end business processes
Automate repetitive tasks
Improve collaboration across teams
Make data-driven decisions
Hi ninthbit,
Thank you very much for your feedback — it was really helpful to me.
I still need some clarification about creating multiple feature reports under different IDs.
I didn’t quite understand what you meant by: “each report to be written on top of the feature_report file separately.”
Could you please share a snapshot of your implementation with us?
Thank you for your time.
#include <cstdio>
#include <string>
using std::string;

string foo(); // assumed defined elsewhere

int main() {
    string val = foo(); // copy, no reference
    printf("%s\n", val.c_str());
    return 0;
}
The simplest way is using listRowInsets(_:) to set zero insets for your item:
VStack {
    // ...
}
.listRowInsets(EdgeInsets())
I was able to achieve this behavior:
Using this code:
List {
    // First item with custom insets
    VStack {
        Text("Edge to Edge Content")
            .frame(maxWidth: .infinity, alignment: .leading)
            .background(Color.gray.opacity(0.2))
    }
    .listRowInsets(EdgeInsets())
    .listRowSeparator(.hidden)

    // Standard list items
    ForEach(1..<10) { i in
        Text("Item \(i)")
    }
}
.listStyle(.plain)
.listStyle(.plain)
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>file-chatgpt</BucketName>
<RequestId>J4BDZRNCM8NCCVYT</RequestId>
<HostId>HBNDY0WI44aGVlYRlO2xP8GPLhu+W60DnZladM4njAQwkh/zyQcdpRpG9doaGLgBV5kpbVzTWqzGMluw0KzLJKB+rtFnpB506877oq3GE4Y=</HostId>
</Error>
I am unaware of any command line options, though I'm sure they exist.
Here is a website that appears to have worked for the sample .mol file I had to hand.
I think you need to include the subsidiary in your block so that you're able to source the currency correctly. Also, make sure the currency ID is available on the customer record.
This answer assumes that you are somewhat familiar with GNU Gettext but have problems integrating it with PHP.
https://www.php.net/manual/en/function.gettext.php
These rules saved me 2 days of troubleshooting:
Set LC_ALL to the very same language directory: LC_ALL=it_IT.utf8 if you have /var/www/myproject/locale/it_IT.utf8/LC_MESSAGES/com.myproject.mo
Set textdomain() to the very same .mo basename: if the .mo is named com.myproject.mo, the domain is com.myproject
Verify that it_IT.utf8 is available on your system. Use locale -a to check it.
Unset LANGUAGE, since it may take precedence over LC_ALL.
Use strace to understand what .mo files PHP is opening.
Adopt this filesystem structure:
/var/www/myproject/locale/com.myproject.pot
/var/www/myproject/locale/it_IT.utf8
/var/www/myproject/locale/it_IT.utf8/LC_MESSAGES
/var/www/myproject/locale/it_IT.utf8/LC_MESSAGES/com.myproject.mo
/var/www/myproject/locale/it_IT.utf8/LC_MESSAGES/com.myproject.po
The important part is the location of the .mo binary files.
Glossary:
/var/www/myproject/locale: the GNU Gettext locale path (LOCALE_PATH below)
com.myproject: the GNU Gettext domain (LOCALE_DOMAIN below)
it_IT.utf8: one of your languages
Note for newcomers: if you don't know how to generate the .mo or .po files, read the official documentation of GNU Gettext. Look for questions like "how to generate a .po file" and "how to generate a .mo file".
The .po file
The .po file should contain Language: it_IT (without utf8) and the GNU Gettext domain. Minimal example:
msgid ""
msgstr ""
"Project-Id-Version: com.myproject\n"
"Language: it_IT\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: template/header.php:20
msgid "Italy"
msgstr "Italia"
...
Important: do not create the .po file manually. Read "how to generate a .po file with GNU Gettext" (msgmerge).
Important: if you change a .po file, always re-generate the related .mo files (msgfmt).
Important: if your .mo files change, you may need to restart your webserver, since GNU Gettext caches aggressively in PHP. This is good for performance but not good for testing. There are workarounds for this (e.g. a special 'nocache' symlink) but it's a bit off-topic.
Use locale -a to check whether the list of your available locales contains the expected ones (like it_IT.utf8). Example output:
C
C.utf8
en_US.utf8
it_IT.utf8
POSIX
If one of your languages is not in this list, you must install it on your system first, for example in this way on Debian or Ubuntu:
sudo dpkg-reconfigure locales
Look for related questions about "How to reconfigure locales in Debian" or whatever you are using, if the above command does not work for you.
Create a simple function that activates a language:
<?php
// GNU Gettext domain.
define('LOCALE_DOMAIN', 'com.myproject');
// GNU Gettext locale pathname, containing languages.
define('LOCALE_PATH', '/var/www/myproject/locale');
/**
* Apply the desired language.
*
* @param $lang string Language, like 'it_IT.utf8' or 'C' for native texts.
*/
function apply_language(string $lang): void
{
    // Unset this env since this may override other env variables.
    putenv("LANGUAGE=");

    // Set the current language.
    // In some systems, setlocale() is not enough.
    // https://www.php.net/manual/en/function.gettext.php
    putenv("LC_ALL=$lang");
    setlocale(LC_ALL, $lang);

    // The 'C' language is quite special and means "source code".
    // You can stop here to save some resources.
    if ($lang === 'C') {
        return;
    }

    // Set the location of GNU Gettext '.mo' files.
    // This directory should contain something like:
    //   /locale/it_IT.utf8/LC_MESSAGES/$domain.mo
    bindtextdomain(LOCALE_DOMAIN, LOCALE_PATH);

    // Set the default GNU Gettext project domain and charset.
    bind_textdomain_codeset(LOCALE_DOMAIN, 'UTF-8');

    // Set the GNU Gettext domain.
    textdomain(LOCALE_DOMAIN);
}
Then try everything:
...
// Set desired language.
apply_language('it_IT.utf8');
// Try language.
echo gettext('Italy');
Expected output:
Italia
Note: as already said, .mo files are aggressively cached. Consider adding the 'nocache' trick from the other answer on this page; it is a very nice way to refresh the cache.
Note: the function _('Italy') is a short alias for gettext('Italy').
If you still do not see anything translated, prepare a minimal example like example.php with minimal GNU Gettext tests and run it like this from your command line:
strace -y -e trace=open,openat,close,read,write,connect,accept php example.php
In this way you can see the .mo files that are opened by PHP.
Example output line:
openat(AT_FDCWD</home/user/projects/exampleproject>, "/home/user/projects/exampleproject/locale/it_IT.utf8/LC_MESSAGES/org.refund4freedom.mo", O_RDONLY) ...
As you can see, the strace command is very powerful for detecting what's happening, so you can tell whether a configuration is wrong.
If this answer has not fixed your issue, please share more details about your structure and your code, plus the strace output from a minimal PHP file.
It is because multiple board definitions have the same USB VID and PID. It is a shortcoming of the esp32 Arduino platform.
I agree with Wernfried that you should not store your dates/timestamps as strings, but with your source being a Hive table, maybe somebody else made this decision; at least try to store it as DATE (which actually is a timestamp) in Oracle.
You can either achieve this in SQL as described above or use DataStage as the T of your ETL, which I would guess is what your customer purchased it for.
In the Transformer stage
StringToTimestamp(YourInputColumn,'%mmm %dd %yyyy %H:%nn%aa')
Output of this in a Peek stage
STRING_COL:Jan 18 2019 1:54PM TS_COL:2019-01-18 13:54:00
The StringToTimestamp function (and the date and time formats it uses) is well documented in DataStage.
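For comparison, the same conversion can be sketched in Python (illustrative only, not DataStage):
from datetime import datetime

s = "Jan 18 2019 1:54PM"
# %b = abbreviated month, %I = 12-hour clock, %p = AM/PM
ts = datetime.strptime(s, "%b %d %Y %I:%M%p")
print(ts)  # 2019-01-18 13:54:00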
In CubeMX (file Makefile): missing source files in
Core/Src/syscalls.c
Core/Src/sysmem.c
=> Add one line to the Makefile (after C_SOURCES in the file):
C_SOURCES += Core/Src/syscalls.c\
Core/Src/sysmem.c
Solved after restarting my mobile device.
In my case the problem was the enabled proxy. It was solved by adding the destination address to the proxy exceptions.
I found it out: if you do SPC h b b, it lists all commands. It shows that commenting a line is s-/, but it was unclear to me what s- is supposed to mean, as other commands also start with s-. After trying lots of options, I found that s- means Ctrl/Cmd; therefore, to comment a line, press Ctrl/Cmd + /.
Try torch.einsum. It is vectorized and uses Einstein notation: https://en.wikipedia.org/wiki/Einstein_notation
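For example, a batched matrix multiply written with einsum (shapes are illustrative):
import torch

a = torch.randn(10, 3, 4)
b = torch.randn(10, 4, 5)

# 'bij,bjk->bik': sum over the shared index j for every batch b.
c = torch.einsum("bij,bjk->bik", a, b)
print(c.shape)  # torch.Size([10, 3, 5])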
If the error originates from the Heroku logs, it is most likely that you hard-coded a path to the directory in your code (because Heroku does not have a directory like /C:/Windows/TEMP in deployment).
Otherwise please share the code so we can help trace where the error originates :)
This is right, thanks @Swechchha.
I have the same issue, but when there is a value in the JSON it comes back null. Any answers?
This normally happens when the framework hasn't been booted. In most cases, you're extending the wrong TestCase class: if you extend the PHPUnit base TestCase class, the framework isn't booted.
Adding parent::setUp(); in public function setUp() worked for me:
abstract class TestCase extends BaseTestCase
{
    public function setUp(): void
    {
        parent::setUp();

        // Do your extra thing here
    }
}
Using Live Activities in a Kotlin Multiplatform (KMP) iOS project requires bridging native Swift/Objective-C APIs with KMP. Since Live Activities are an iOS-specific feature introduced in iOS 16.1 via ActivityKit, they are not directly available in Kotlin. However, you can interact with them via expect/actual declarations or platform-specific Swift code.
Here’s how to approach this:
1. Configure the iOS Targets in the Shared Module
kotlin {
    iosX64()
    iosArm64()
    iosSimulatorArm64()

    sourceSets {
        val iosMain by getting {
            dependencies {
                // your iOS-specific dependencies
            }
        }
    }
}
2. Create Swift Code for Live Activities
Create a Swift file in the iosApp module (or your iOS target module):
import ActivityKit
struct LiveActivityManager {
    static func startActivity(content: String) {
        if ActivityAuthorizationInfo().areActivitiesEnabled {
            let attributes = MyActivityAttributes(name: content)
            let contentState = MyActivityAttributes.ContentState(value: 0)
            do {
                let _ = try Activity<MyActivityAttributes>.request(attributes: attributes, contentState: contentState)
            } catch {
                print("Error starting activity: \(error)")
            }
        }
    }

    static func updateActivity(activityID: String, value: Int) {
        Task {
            let updatedState = MyActivityAttributes.ContentState(value: value)
            for activity in Activity<MyActivityAttributes>.activities {
                if activity.id == activityID {
                    await activity.update(using: updatedState)
                }
            }
        }
    }
}
Define your attributes:
struct MyActivityAttributes: ActivityAttributes {
    public struct ContentState: Codable, Hashable {
        var value: Int
    }

    var name: String
}
3. Expose Swift Code to Kotlin
Use an Objective-C bridging header or expose Swift to Kotlin via a wrapper:
a. Mark Swift functions with @objc and use a class:
@objc class LiveActivityWrapper: NSObject {
    @objc static func startActivity(withContent content: String) {
        LiveActivityManager.startActivity(content: content)
    }
}
b. Add to Bridging Header (if needed):
#import "YourAppName-Swift.h"
4. Call iOS-specific Code from Kotlin
Use Platform-specific implementation in Kotlin:
commonMain
expect fun startLiveActivity(content: String)
iosMain
actual fun startLiveActivity(content: String) {
    LiveActivityWrapper.startActivity(withContent = content)
}
5. Use It in Shared Code
Now you can call startLiveActivity("MyContent") from shared code and have it trigger Live Activities on iOS.
Notes:
Ensure your iOS app has appropriate entitlements (com.apple.developer.activitykit) and runs on iOS 16.1+.
Live Activities only work on real devices, not on the iOS Simulator.
You might need to configure Info.plist for widget support if using Dynamic Island or Lock Screen widgets.
Let me know if you want a working example or GitHub template!
Determining the right number of executors for reading a Delta table is essential for optimizing performance and resource usage in distributed environments like Spark. It's all about finding the balance between parallelism and efficiency: too few executors slow things down, while too many can overwhelm your cluster. Tuning this properly speeds up data processing and reduces cost.
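As a minimal sketch (assuming PySpark on a cluster with Delta Lake available; the values and path are illustrative, not recommendations), the executor count can be fixed explicitly or left to dynamic allocation:
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-read")
    # Fixed sizing: 8 executors, each with 4 cores and 8 GiB of memory.
    .config("spark.executor.instances", "8")
    .config("spark.executor.cores", "4")
    .config("spark.executor.memory", "8g")
    # Alternatively, let Spark scale executors with the workload:
    # .config("spark.dynamicAllocation.enabled", "true")
    .getOrCreate()
)

df = spark.read.format("delta").load("/path/to/delta-table")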
I'm getting the same problem on the CI/CD pipeline, and it does not occur locally. Have you found the solution to this problem? If yes, please post the cause.
The below formula worked for me, along with setting the KeepTogether property to "False":
(Body Width + Left margin + Right margin) <= (Page width)
This is about sealed vs. unsealed array shapes.
Unsealed array shapes allow extra keys. Right now PHPStan’s array shapes mostly act as unsealed, but sometimes it’s inconsistent.
There’s an open issue about it: https://github.com/phpstan/phpstan/issues/8438 (one of the most upvoted issues so it will get solved sooner or later)
Before Angular 14, if we opened the dialog with 100% or 100vw, the dialog would be fullscreen:
{
  width: '100%',
  height: '100%',
  maxWidth: '100vw',
  hasBackdrop: false,
  panelClass: 'wizard-dialog',
}
Currently I am using Angular 17 with the same configuration, but the dialog contains padding even when opened with 100%.
Add this CSS:
.cdk-overlay-container .mat-mdc-dialog-container{ padding: 0px !important;}
Some possible causes:
Auto-generation by Klikpajak: Some systems auto-generate or adjust fields like invoice numbers, sequence numbers, or internal identifiers on upload — even if the draft looks fine pre-upload.
Field Mapping Issue: If NSFP is mapped incorrectly during the upload or transformation step, it might shift the value.
Concurrency Issue: If multiple uploads happen close together, Klikpajak might assign numbers sequentially based on real-time system state rather than uploaded draft values.
It’s very likely that Klikpajak has system-side logic that modifies NSFP values upon final posting, regardless of the uploaded draft state. Checking their documentation or consulting their technical support should help clarify whether you should manage NSFP yourself or let their system handle it.
Question: Why am I getting the warning WARNING: no privileges were granted for "My-Database"?
Even though you can connect to My-Database using the Entra ID administrator (DbAdmins), that account does not automatically have the required privileges to run GRANT statements in that database, because:
By default, Entra administrators only have privileges in the postgres database.
They do not get database-level privileges in any user-created databases (like "My-Database") unless explicitly granted by the PostgreSQL admin.
Question: How do I grant permission to newly added Microsoft Entra user to my database?
Step 1. If your Entra admin group doesn't already have access to "My-Database", you need to connect using the original PostgreSQL admin and run:
GRANT ALL ON DATABASE "My-Database" TO "DbAdmins" WITH GRANT OPTION;
Step 2. Then connect as DbAdmins to My-Database and run:
SELECT * FROM pgaadauth_create_principal('[email protected]', false, false);
GRANT CONNECT ON DATABASE "My-Database" TO "[email protected]";
Question: Is there a role, which would allow it without running this grant command for every database?
No, there is no built-in role in Azure Database for PostgreSQL Flexible Server that automatically grants Microsoft Entra administrators access to all databases.
You have to manually run GRANT statements for each user-created database if you want them to have privileges. This behavior is by design, to maintain explicit access control and security boundaries between databases.
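If you have many databases, you can script the per-database grant. A minimal sketch assuming psycopg2 (the connection details are placeholders; DbAdmins is the Entra admin group from above):
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect(
    host="myserver.postgres.database.azure.com",
    dbname="postgres", user="pgadmin", password="...",
)
conn.autocommit = True

with conn.cursor() as cur:
    # List every user-created database.
    cur.execute(
        "SELECT datname FROM pg_database "
        "WHERE NOT datistemplate AND datname <> 'postgres'"
    )
    for (dbname,) in cur.fetchall():
        # Quote the database name safely as an identifier.
        cur.execute(sql.SQL(
            'GRANT ALL ON DATABASE {} TO "DbAdmins" WITH GRANT OPTION'
        ).format(sql.Identifier(dbname)))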
Kindly go through the attached Microsoft document for more reference: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-manage-azure-ad-users
Yeah, I am achieving this by uploading the font's .woff file to blob storage, importing it into my CSS with @font-face, and using it there; it works.
You can try changing the Gradle JDK from jbr-21 to jbr-17 in Android Studio's settings (Build, Execution, Deployment -> Build Tools -> Gradle)
I gave up on Charts and moved to DGCharts 5.1.0 which no longer has the dependency on SwiftAlgorithms. The upgrade to DGCharts was pretty painless and only required resolution of a couple of compile errors to get the new api up and running.
You can create a markup extension to resolve ViewModel from a DI container.
Ioc.cs
public class Ioc : MarkupExtension {
    public static Func<Type, object> Resolver { get; set; }
    public Type Type { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider) => Resolver?.Invoke(Type)!;
}
App.cs:
public partial class App : Application {
    //...
    protected override void OnStartup(StartupEventArgs e) {
        // Configure the resolver so that Ioc knows how to create ViewModels.
        Ioc.Resolver = (type) => type != null ? host.Services.GetRequiredService(type) : null!;
    }
}
UserControl.xaml:
<UserControl
xmlns:in="clr-namespace:YourNamespaceHere"
xmlns:vm="clr-namespace:YourNamespaceHere"
DataContext="{in:Ioc Type={x:Type vm:MyViewModel}}" >
With this technique, you can easily resolve any ViewModel by its type in any UserControl.
I actually managed to get it to work using (C#)
driver.ExecuteScript("$('#Category').val(10000).change();");
Thanks for the feedback anyhow.
It might also be because there is no linter installed for Lua:
brew install luacheck
None of these solutions work for me. xcode-select --install fails due to a server error. Another comment on this thread mentions it's not available from the server. I am on macOS 14.6.1, so I downloaded Xcode 15.3 manually from developer.apple.com. It installed correctly, but I still get the same errors:
xcode-select: error: invalid developer directory '/Library/Developer/CommandLineTools'
Failed during: /usr/bin/sudo /usr/bin/xcode-select --switch /Library/Developer/CommandLineTools
I simply want to install Homebrew so I can install Wine. Any advice would be great. Thanks.
Edit: I also added the folder CommandLineTools to the developer folder (Library/Developer/CommandLineTools) and had no success installing Homebrew.
mylist = ['clean the keyboard', 'meet tom', 'throw the trash']

for index, item in enumerate(mylist):
    row = f"{index + 1}. {item.title()}"  # Apply title() to the item here
    print(row)
    print(len(row))
Check out wrkflw. You can validate and execute workflows locally with it. Also, there is no dependency on Docker just to validate a workflow!
I followed the above method to remove the dependency and got this error; the issue still persists.
What I did:
1. Remove dependency manually
2. Remove firebase configuration from AppDelegate.swift
import Flutter
import UIKit
//import Firebase
@main
@objc class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        // FirebaseApp.configure()
        GeneratedPluginRegistrant.register(with: self)
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
        // Apptics.initialize(withVerbose: true)
    }
}
Now the issue is this:
flutter run
Launching lib/main.dart on iPhone 15 Pro in debug mode...
Running pod install... 3.6s
Running Xcode build...
Xcode build done. 13.5s
Failed to build iOS app
Package Loading (Xcode): Missing package product 'FirebaseAppCheck'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Package Loading (Xcode): Missing package product 'FirebaseCore'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Package Loading (Xcode): Missing package product 'FirebaseMessaging'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Package Loading (Xcode): Missing package product 'FirebaseAnalytics'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Package Loading (Xcode): Missing package product 'FirebaseAnalyticsWithoutAdIdSupport'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Package Loading (Xcode): Missing package product 'FirebaseInAppMessaging-Beta'
/Users/rapteemac/Projects-Raptee/apptics_push_notofication/ios/Runner.xcodeproj
Could not build the application for the simulator.
Error launching application on iPhone 15 Pro.
I am also facing the same problem.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
After adding the Thymeleaf dependency, it is able to find the views in the templates folder.
If you want to create only the solution file for your existing project in VS Code, then use the command:
dotnet new sln --name yourProjectName
You'll want to visit https://dashboard.stripe.com/test/logs and see if there are any errors related to SetupIntent confirmation. Another thing that you'll want to confirm is that your Stripe account is created in a supported country, and your flutter project has included the financial-connection dependency in both Android and iOS.
Thanks to Boris's advice, I managed to find a bug in the tests. An async fixture with session scope was declared, which created another event_loop and led to the problem.
Here is the code with the error:
@pytest_asyncio.fixture(scope="session")
async def engine(app_database_url, migrations) -> AsyncEngine:
    """Creates a SQLAlchemy engine to interact with the database."""
    engine = create_async_engine(app_database_url)
    yield engine
    await engine.dispose()
To fix it, just remove scope="session" from the decorator. Then it becomes function-scoped like all the other fixtures and will not create its own separate event_loop.
I had the same issue, resolved by setting the StoreKit Configuration to None in the scheme. It seems that the sandbox item was overridden by the local .storekit file.
Product -> Scheme -> Edit Scheme -> Run -> Options -> StoreKit Configuration.
I think:
You are using threads that are not managed by Flink.
The record never gets flushed to the downstream buffer because the thread is not managed by Flink.
The downstream operator appears “stuck,” which is exactly what you described.
If you are using multiple projects in your solution, try building them separately, one by one (right-click on each project and click Build).
Simplest solution
echo array_slice(explode(".",$_SERVER['HTTP_HOST']),-2,1)[0];
This will show the domain name as "imgur". To show the full domain as "imgur.com", implode the last two parts instead: echo implode(".", array_slice(explode(".", $_SERVER['HTTP_HOST']), -2));
After playing around with bursting for a while, I realize that I can download the bursting report by clicking on Report Job History --> click on the bursting job name that ran successfully --> click on the output name under the tag Output & Delivery.
Then the output will be downloaded and I can check it out. And the path needs to start with C:/ for it to actually run and send the report.
What's needed for RocksDB is the rocksdbjni jar. To fix the issue, I did the below:
RUN /opt/infinispan/bin/cli.sh install org.rocksdb:rocksdbjni:9.0.1
RUN cp /opt/infinispan/server/lib/rocksdbjni-9.0.1.jar /opt/infinispan/lib/
Note: I did try the below with no luck, so I had to copy the jars manually. Not an ideal solution, but it works as expected:
ENV SERVER_LIBS="org.rocksdb:rocksdbjni:9.0.1"
This is because get_ticks is not a direct method of pygame, but you can access this method via pygame.time by writing:
import pygame as p
start_ticks = p.time.get_ticks()
public function getDuration($full_video_path)
{
    $getID3 = new \getID3;
    $file = $getID3->analyze($full_video_path);
    $playtime_seconds = $file['playtime_seconds'];
    $duration = date('H:i:s.v', $playtime_seconds);

    return $duration;
}
This is a very slow process. When you upload a video, it will take more time than a regular upload.
This post may be very old, but can you tell me how you added the user_friends permission to the SDK? Currently the user_friends permission in the app has the status "Ready for Testing", but when calling the SDK, the access token still does not have the user_friends permission. I don't know if I missed any steps. Looking forward to your feedback.
You can use macros to securely erase memory, in particular pam_overwrite_string().
I am not seeing the "Recent update jobs" button.
CRA has been deprecated for a while. As @MenTalist mentioned, this likely caused an incompatible build while trying to install the latest react@19.
I would definitely recommend spending a bit of time reading the official documentation here: https://react.dev/learn/creating-a-react-app
And only use the tools listed!
This issue happens when the payload you are running json-eval on is malformed JSON. Verify that it is a valid JSON payload.
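A quick way to check validity outside the mediation flow, sketched in Python (the payload here is illustrative):
import json

payload = '{"id": 1, "name": "test",}'  # the trailing comma makes this invalid

try:
    json.loads(payload)
    print("valid JSON")
except json.JSONDecodeError as e:
    print(f"malformed JSON: {e}")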
There is std::from_chars: https://en.cppreference.com/w/cpp/utility/from_chars
It would have been un-C++-like to call it from_string ;)
DynamoDB can split partitions down to the size of a single item, so while you might have a hot partition initially because the two items shared the same partition key and started in the same partition, the database would adapt quickly.
Make sure you change your version after each change you make in Project Settings --> Player.
The default version is 0; you can just change it to 0.1 and so on.
Using a third-party payment processor with Stripe Billing is a private-preview feature. You can sign up for early access on this page
I'm a geologist and I use Jensen, Pyke diagrams to understand what is going on with volcanoes. I'd like to make a fixed background/template with such a diagram:
Now, in nightly, you could do
if let Some(6) = a && let Some(7) = b {
    /* Do something */
}
by using the let_chains feature.
Thank you to everyone for the answers!
I was expecting to get a different output from the code, and I needed to write both functions makeCacheMatrix() and cacheSolve() to make it work. In addition, cacheSolve() cannot use atomic vectors so you need to use a function as an argument, which is very confusing:
> cacheSolve(makeCacheMatrix(yourmatrix))
I was able to make it work after sleeping, some more googling and GitHub.
The output I was getting from makeCacheMatrix() is just letting me know that the functions and values are being stored in the environment for their use later by cacheSolve().
As SamR commented, I think it is a very confusing exercise.
I had a similar situation, and I found the accepted answer in the link shared by chb to work like a charm:
How to parse MySQL built-code from within Python?
[Column(TypeName = "bigint")]
public TimeSpan? LimitTime { get; set; }
This happens because get_ticks is not a direct method of pygame, but you can access this method via pygame.time by writing:
import pygame as p
start_ticks = p.time.get_ticks()
You have to modify the existing tag.
' Step 1
' Note that the CSS property is important

So there were a couple of things wrong with the setup, after digging a little bit more:
https://websiteURL.com and https://www.websiteURL.com are treated differently. Ensure you enable both.
CORS cookies are restricted very heavily in Safari. There are a couple of ways to bypass this, but I ended up changing my configuration to not use cookies and instead pass a bearer token in each API call, which has seemingly worked thus far. Another way around it I saw was to put your website and backend on the same domain to avoid the CORS issue altogether. Proxies were the last thing I saw that could be a solution.
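As an illustration of the bearer-token approach (a Python requests sketch; the URL and token are placeholders):
import requests

token = "..."  # obtained from your auth flow

# Send the token explicitly on every call instead of relying on cookies.
response = requests.get(
    "https://api.example.com/v1/me",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print(response.json())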
A warning: ChatGPT and other chatbots can be a good starting point and do a lot of meaningful tasks, but for some more nuanced issues like this, they can't problem-solve. I spent quite a few hours going back and forth with them to no avail. Relying on them too heavily can cause more harm than good for issues like this that are not so straightforward. Forums, YouTube videos, and the documentation of whatever services you are using are all sources that should also be leveraged. A big lesson for me going forward.
Follow this guideline and it should fix your issue https://docs.flutter.dev/release/breaking-changes/flutter-gradle-plugin-apply
All the examples I've seen don't clean the MS attributes.
OpenSSL 3.5.0 8 Apr 2025
Extract a clean private key (no MS Bag Attributes)
openssl pkcs12 -in your.pfx -nocerts -nodes | openssl rsa -out private.key
Extract the public certificate (no MS Bag Attributes)
openssl pkcs12 -in your.pfx -clcerts -nokeys | openssl x509 -out pub.crt
Extract public pem (no MS Bag Attributes)
openssl pkcs12 -in your.pfx -clcerts -nokeys | openssl x509 -out pub.pem
Extract the CA chain (if present) (no MS Bag Attributes)
openssl pkcs12 -in your.pfx -cacerts -nokeys | openssl x509 -out somechain.crt
Extract Public Key from clean private key
openssl rsa -in private.key -pubout -out public.key
If you ever need to password-protect the private key during export
openssl rsa -in private.key -aes256 -out private-secure.key
Check that the *.crt and *.pem work with your *.key (visually match the output):
openssl rsa -noout -modulus -in private.key
openssl x509 -noout -modulus -in pub.crt
openssl x509 -noout -modulus -in pub.pem
In my case, a parameter passed null for an int value. Make sure that the required parameters are passed correctly.
I suspect that the issue is due to the fact that the shap_values array has slight differences in its output format depending on the model used (e.g., XGBoost vs. RandomForestClassifier). You can successfully generate SHAP analysis plots simply by adjusting the dimensions of the shap_values array.
Since I don't have your data, I generated a sample dataset as an example for your reference:
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Generate sample data
np.random.seed(42)
features = pd.DataFrame({
    "feature_1": np.random.randint(18, 70, size=100),
    "feature_2": np.random.randint(30000, 100000, size=100),
    "feature_3": np.random.randint(1, 4, size=100),
    "feature_4": np.random.randint(300, 850, size=100),
    "feature_5": np.random.randint(1000, 50000, size=100)
})
target = np.random.randint(0, 2, size=100)
features_names = features.columns.tolist()
# The following code is just like your example.
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
rf_model.fit(X_train, y_train)
y_pred = rf_model.predict(X_test)
explainer = shap.TreeExplainer(rf_model)
shap_values = explainer.shap_values(X_test)
# Adjust the dimensions of the shap_values object.
shap.summary_plot(shap_values[:,:,0], X_test, feature_names=features_names)
shap.summary_plot(shap_values[:,:,0], X_test, feature_names=features_names, plot_type="bar")
With the above, you can successfully run the SHAP analysis by simply adjusting shap_values to shap_values[:,:,0].
As for what the third dimension of shap_values represents when using RandomForestClassifier, you can explore it further on your own.
You can do it via code editors. If you connect your code editor to your GitHub repo, you can commit your code immediately without typing `git add`; you will just need to write the name of your commit and that's all.
It's the - in line 19; it shows up as — when I paste it into a file, causing an issue.
Change line 19 to:
Write-Host "Skipping '$($_.Name)' already exists in $baseFolder"
Once I did that with your script, it ran fine.
Since Angular 19
You can clean unused imports easily with a new schematic:
$ ng generate @angular/core:cleanup-unused-imports
package main
import (
    "fmt"
    "time"
)

func main() {
    input := "2025-04-21T09:18:00Z" // ISO 8601 string

    t, err := time.Parse(time.RFC3339, input)
    if err != nil {
        panic(err)
    }

    fmt.Println("Parsed Time:", t)
    fmt.Println("Local Time:", t.Local())
}
go get -u ./...
go install github.com/oligot/go-mod-upgrade@latest
go-mod-upgrade
If you want an XOR for booleans:
func xor(a, b bool) bool {
    return a != b
}

fmt.Println(xor(true, false)) // true
fmt.Println(xor(true, true))  // false
Thanks to the suggestion from Jmb about using ndarray::ArrayView2, I was able to create the attribute with the desired DATASPACE definition:
use anyhow::{Ok, Result};
use hdf5::types::FixedAscii;
use hdf5::{Dataspace, File};
use std::path::PathBuf;
use std::str::Bytes;
use ndarray::ArrayView2;
fn main() -> Result<()> {
    let hdf5_path = PathBuf::from(
        "SVM15_npp_d20181005_t2022358_e2024003_b35959_c20181008035331889474_cspp_bck.h5",
    );
    let file = File::open_rw(hdf5_path)?;

    let gmtco_name =
        "GMTCO_npp_d20181005_t2022358_e2024003_b35959_c20181008035331889474_cspp_bck.h5";
    let attr_name = "FixedAscii_2D_array";

    let ascii_array: hdf5::types::FixedAscii<79> =
        hdf5::types::FixedAscii::from_ascii(&gmtco_name)?;
    let ascii_array = [ascii_array];
    let data = ArrayView2::from_shape((1, 1), &ascii_array)?;

    file.new_attr::<hdf5::types::FixedAscii<79>>()
        .shape([1, 1])
        .create(attr_name)?
        .write(&data)?;

    Ok(())
}
which results in the attribute (as shown via h5dump):
$> h5dump -a FixedAscii_2D_array SVM15_npp_d20181005_t2022358_e2024003_b35959_c20181008035331889474_cspp_bck.h5
HDF5 "SVM15_npp_d20181005_t2022358_e2024003_b35959_c20181008035331889474_cspp_bck.h5" {
ATTRIBUTE "FixedAscii_2D_array" {
   DATATYPE H5T_STRING {
      STRSIZE 79;
      STRPAD H5T_STR_NULLPAD;
      CSET H5T_CSET_ASCII;
      CTYPE H5T_C_S1;
   }
   DATASPACE SIMPLE { ( 1, 1 ) / ( 1, 1 ) }
   DATA {
   (0,0): "GMTCO_npp_d20181005_t2022358_e2024003_b35959_c20181008035331889474_cspp_bck.h5\000"
   }
}
}
As far as I can tell, there is as yet no high-level interface to set STRPAD to H5T_STR_NULLTERM rather than H5T_STR_NULLPAD; however, I believe this can be done using the hdf5-sys crate in an unsafe block.
I have created a git repo containing many examples of reading and writing HDF5 (and NetCDF4) attributes and datasets (including several variants of the above solution) at https://codeberg.org/gpcureton/hdf5_netcdf_test.rs , in the hope that it may be useful for people trying to use the Rust interface to HDF5 and NetCDF4.
This solution works including iOS 18.
@objc @discardableResult private func openURL(_ url: URL) -> Bool {
    var responder: UIResponder? = self
    while responder != nil {
        if let application = responder as? UIApplication {
            if #available(iOS 18.0, *) {
                application.open(url, options: [:], completionHandler: nil)
                return true
            } else {
                return application.perform(#selector(openURL(_:)), with: url) != nil
            }
        }
        responder = responder?.next
    }
    return false
}
Solution from https://stackoverflow.com/a/78975759/1915700
First, ensure that react-toastify was installed correctly. Run the following command in your terminal to double-check:
npm list react-toastify
npm install --save react-toastify
Once you install it, make sure the package exists in your node_modules directory. You can navigate to node_modules/react-toastify to confirm its presence.
OK, I figured it out. I went to the spreadsheet and checked each sheet. Some had 71 rows of data but a little over 1000 total rows, and this was pretty consistent across all the sheets. So I went to each one and deleted the majority of the empty rows. They are filled in with an importrange command, so as more data is added they will fill in more rows, but for now the execution time went from 30+ seconds and timing out to 11.1 seconds.
Make sure you're importing it correctly:
import { ToastContainer, toast } from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';
Also, don't forget to include the <ToastContainer /> somewhere in your app, like in App.js or _app.js.
It sounds like you are facing a challenging intermittent issue with your Java application querying the DB2 database. Here are some potential areas to investigate that might help resolve the problem:
**Resource Limits**: Look for resource limits on your DB2 instance, such as connection limits or memory usage, that may affect query execution.
**SQL Execution Context**: Verify if there are any environmental differences between running your query through the application versus the DB2 client. For example, check user permissions or roles associated with the connection used by the Java application.
**Debug Logging**: Add debug logging in your application to capture query execution, parameters, and connection details to isolate when the behavior changes.
**DB2 Configuration**: Review DB2 configuration settings, such as optimization or locking options, that might impact query behavior intermittently.
If the issue persists after investigating these areas, consider enabling detailed DB2 tracing to gather more information about query execution and performance. This may provide additional insights to help pinpoint the underlying cause.
Question 1 (Simplified English Version - WordPress 403 Error)
Title: Python WordPress.com API: 403 Error (User cannot publish posts) when publishing
Body:
Hi everyone,
I'm trying to automatically publish posts to my WordPress.com blog using Python (the requests library) and an Application Password.
However, when I send a POST request to the /posts/new endpoint using code similar to the example below, I consistently get a 403 Forbidden - User cannot publish posts error.
# Example code used for publishing (some values changed)
import requests, json, base64
BLOG_URL = "https://aidentist.wordpress.com"
WP_USER = "sonpp" # My WP.com username
WP_PASSWORD = "k" # App password used (tried regenerating)
api_url = f"https://public-api.wordpress.com/rest/v1.1/sites/{BLOG_URL.split('//')[1]}/posts/new"
post_data = {"title": "Test Post", "content": "Test content", "status": "publish"}
credentials = f"{WP_USER}:{WP_PASSWORD}"
token = base64.b64encode(credentials.encode()).decode()
headers = {"Authorization": f"Basic {token}", "Content-Type": "application/json"}
try:
    response = requests.post(api_url, headers=headers, json=post_data)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")
# Error output includes:
# HTTP Status Code: 403
# Response Body: {"error": "unauthorized", "message": "User cannot publish posts"}
What I've checked:
I can publish posts manually from the WordPress admin dashboard with the same account.
I've regenerated the Application Password multiple times.
My WordPress.com site is publicly launched (not private or "coming soon").
Trying to save as 'draft' instead of 'publish' also resulted in the same 403 error.
(Note: Basic GET requests using the same authentication method, like fetching site info, seemed to work fine).
Does anyone know other common reasons for this specific 403 - User cannot publish posts error when trying to publish via the API? Are there other things I should check, like hidden scopes for Application Passwords or potential Free plan limitations related to API publishing?
Thanks for any insights!
Uwe's snippet worked for me. Was driving me crazy.
It means that affinity is not implemented in libgomp. The message is printed here: https://github.com/gcc-mirror/gcc/blob/a9fc1b9dec92842b3a978183388c1833918776fd/libgomp/affinity.c#L51
The whole file looks like a dummy implementation.
An alternative is to use a different OpenMP implementation. The affinity implementation in LLVM seems functional: https://github.com/llvm/llvm-project/blob/f87109f018faad5f3f1bf8a4668754c24e84e886/openmp/runtime/src/z_Windows_NT_util.cpp#L598
It's not possible, but you can make your profile private so people can't browse your images.
Just to sum up the comments above:
Adjusting the original example to Flux.from(req.receive().aggregate().asString().delayElement(Duration.ofNanos(1)).filter(str -> (str.length() > 0))).next(); resulted in being able to exceed the 500 connections per second.
Upon studying delayElement, the documentation states that the parallel scheduler is then used.
Thus the constraint appears to be the scheduler. Once that theory was established, it was tested by replacing delayElement with .subscribeOn(Schedulers.boundedElastic()) and confirmed to have the same result (being able to exceed 500).
The issue was the worker threads were actually disabled (via preprocessor defines not shown in code). You must continue to use worker threads even with the new blk_mq.
This extension actually works.
DevTools - Sources downloader
Delighted to have the answer from https://stackoverflow.com/users/985454/qwerty! It helped me here!
Object.assign(window, { myObj }) is a bit cleaner and allows for snippet automation
If you are trying to execute a while or for loop, then you are not ending the inner statement with a ;:
while [...]; do command; done;