Here's what I learned while solving this problem.
1. Various manufacturers increase the memory capacity of NOR flash by stacking several 64 MB memory dies.
2. To work with all of the memory dies, a mechanism for switching between them is needed.
3. Switching between memory dies can be done in software or in hardware.
4. In my case, the flash has a special command for switching between dies (C2h). The generic spi-nor driver does not take this feature into account, and the source code for this flash does not implement such a mechanism either. I do not know how to write Linux drivers, so the problem had to be solved another way.
Solution.
There is a similar pin-to-pin compatible NOR flash from Micron. That chip has a hardware mechanism for switching between dies.
P.S. Maybe one day there will be a solution for the Winbond flash as well.
Use 'BottomSheetView' to wrap 'Text' like this:
import { BottomSheetView } from '@gorhom/bottom-sheet';
//YOUR CODE
<BottomSheetView>
<Text>BottomSheet</Text>
</BottomSheetView>
pkill -f "firebase emulators:start"
This should kill all running emulators; then you can restart them.
Using splice() (Modifies Original Array)
const array = [1, 2, 3, 4, 5];
const index = array.indexOf(3); // Find index of item to remove
if (index > -1) { // Only splice if item exists
array.splice(index, 1); // Remove 1 item at found index
}
console.log(array); // [1, 2, 4, 5]
git remote show -n origin | sed -E 's/^ +//' | grep -E "^$(git branch --show-current)$" | sed 's/^/origin\//'
In your case, use this instead of a raw dollar sign:
@GET('/([\$])export')
Maybe you uploaded the current bundle for testing instead of publication. Try uploading the bundle intended for publication, and use the next version number. For example, if the version of the current bundle is 1.1, use 1.2 for the new one.
'$' means it is waiting for input. It is already displayed where you are about to input, right?
Just type the rest of it.
export QT_XCB_GL_INTEGRATION=none
How to make this permanent? Each time I restart the shell I need to enter it again in order to successfully start the navigator.
The Play Console will now show you non-fatals as well. You would have received a notification regarding this in your Console inbox, so check whether the non-fatals in Firebase include the missing crashes reported by the Play Console.
https://play.google.com/console/about/whats-new/#new-non-fatal-memory-errors-in-crashes-and-anrs
Avoid blocking async code in constructors using .Wait() or .Result
-- Edit -- statusBarTranslucent: {true} simply makes the native status bar translucent. It does not prevent the view from being pushed by the keyboard.
The other solutions (KeyboardAwareScrollView and avoidKeyboard: {false} ) did not work for me, but this fixed it for my situation:
import { Keyboard } from 'react-native';
// this code is from https://docs.expo.dev/guides/keyboard-handling/
export const ReactiveModal ...
const [isKeyboardVisible, setIsKeyboardVisible] = useState(false);
useEffect(() => {
const showSubscription = Keyboard.addListener('keyboardDidShow', handleKeyboardShow);
const hideSubscription = Keyboard.addListener('keyboardDidHide', handleKeyboardHide);
return () => {
showSubscription.remove();
hideSubscription.remove();
};
}, []);
const handleKeyboardShow = event => {
setIsKeyboardVisible(true);
};
const handleKeyboardHide = event => {
setIsKeyboardVisible(false);
};
// end of code from expo docs
return (
<Modal
isVisible={isVisible}
swipeDirection={['down']}
style= {isKeyboardVisible ? styles.modalEditing : styles.modal} // this is important
propagateSwipe
statusBarTranslucent={true}
>
//more code...
</Modal>
)
const styles = StyleSheet.create({
modal: {
justifyContent: 'flex-end',
margin: 0,
},
modalEditing: {
justifyContent: 'flex-start',
margin: 0,
},
//more styling and export ReactiveModal
This solution adds a dynamic layout to the modal depending on whether the keyboard is open. In my situation, modal does not take up the entire page but rather about 75% of the bottom of the screen when it's open.
flex-end forces the modal to be at the bottom of the view and flex-start forces it to be at the top of the view.
This is the best solution I could find as the keyboard kept pushing the content up in the modal despite setting softwareKeyboardLayoutMode: "pan"
KeiKai OSE is not supported with ZK 10 (it works with ZK 9 or earlier). Please change to KeiKai EE.
Breeze, Sapphire, Silvertail, and Atlantic are also no longer supported since ZK 10.
If you're currently using Breeze, Sapphire, or Silvertail, we suggest migrating to the iceblue_c.
If you're using Atlantic, we suggest migrating to the iceblue.
Please see
refer to this FAQ page to resolve this issue: https://docs.mem0.ai/faqs#how-do-i-configure-mem0-for-aws-lambda
You can start by learning JavaScript, as it's the foundation. Once you're comfortable with it, move on to Node.js and Express.js, which are widely used for backend development.
Ah. It's pretty easy to access the attribute's custom option.
If the type is declared like this:
class QueryParamType < ActiveModel::Type::Value
def type = :query_param
def initialize(**args)
@accepts = args[:accepts]
end
end
then a caller can access the attribute's option value (for the Comment class above) like this:
Query::Comments.attribute_types["created_at"].accepts
Note that the hash keys of attribute_types are strings, not symbols.
If you go to the documentation of flutter_webrtc on pub.dev, under the Functionality section it says that MediaRecorder is currently available on the web only.
You can rollback to the previous state.
docker service rollback <service>
This works on both successful and failed deployment.
Then you can update your service again.
In my case, I need to set PYTHONPATH='' when using a virtualenv to avoid this error.
Make it required by using Required together with an indexed access type: b: Required<Alpha>['a'];. I tried it in the TypeScript playground (typescriptlang.org) and you can play around with it there.
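A minimal sketch of what that indexed-access type does, assuming a hypothetical Alpha interface with optional members a and b (the real types in the question may differ):

interface Alpha {
  a?: number;
  b?: string;
}

// Required<Alpha> removes the optional modifier from every property, and the
// indexed access Required<Alpha>['a'] extracts the now-required type of `a`.
type Beta = {
  a: Required<Alpha>['a']; // number (no longer number | undefined)
  b: Required<Alpha>['b']; // string
};

const ok: Beta = { a: 1, b: "x" };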
I am from China, thank you very much
OK; I got it across the line.
My issues were:
1. I was using `Authorization: Bearer <token>`; the correct value should have been `Authorization: DPoP <token>`. Thank you @Dan-Ratner.
2. When making requests to the PDS (/xrpc/com.atproto.repo.createRecord), I was using the entryway (bsky.social) instead of the PDS endpoint. The correct endpoint can be extracted from the JWT's "aud" claim. Thank you yamarten over at GitHub[1].
3. The final error, "message":"DPoP nonce mismatch", which I was getting when making PDS requests, was due to the DPoP nonce changing/expiring; I hadn't handled the new nonce returned in the reply, which resulted in 401 errors.
[1] https://github.com/bluesky-social/atproto/issues/3212#issuecomment-2764380250
My code now needs a complete refactor to clean up the implementation
Add this config to your .m2/settings.xml file:
<mirrors>
    <mirror>
        <id>maven-default-http-blocker</id>
        <mirrorOf>releases.java.net</mirrorOf>
        <name>releases.java.net</name>
        <url>YOUR REPO</url>
        <blocked>false</blocked>
    </mirror>
</mirrors>
I am in the same boat: for my data science capstone project, I am looking for product reviews but I am not able to find a public API. Please let me know which companies like Target, Walmart, Costco, Amazon, or eBay provide a public API for developers.
As drewtato mentioned, adding the config below should solve the problem.
[[bench]]
name = "my_benchmark" # the name of the benchmark file's name
harness = false
Android Studio is a cool development environment, but like everything else it has bugs.
Why did they make the Toast so short that few people have time to read it? This is a clear mistake!
Click it to let the developers see it! Stop worrying and coming up with workarounds.
In general, use Dialogs! They're clearer, nicer-looking, and they don't run away! :)
A bit late but maybe it'll help someone else. Use [embedFullWidthRows]="true" when defining ag-grid. Refer here.
I would make a few updates to your code:
1. Wrap setOptimisticIsFavorited in startTransition to properly handle the optimistic update.
2. Add error handling and reset the optimistic state on failure.
3. Disable the button during transitions to prevent multiple clicks.
4. Add proper error boundaries around the async operation.
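A rough sketch of points 1-3, assuming React 19's useOptimistic/useTransition and a toggleFavorite server action; all names here are placeholders rather than the asker's real code:

import { useOptimistic, useTransition } from 'react';

export function FavoriteButton({ isFavorited, toggleFavorite }:
  { isFavorited: boolean; toggleFavorite: () => Promise<void> }) {
  // Optimistic copy of the prop; the reducer simply replaces the value.
  const [optimisticIsFavorited, setOptimisticIsFavorited] =
    useOptimistic(isFavorited, (_current: boolean, next: boolean) => next);
  const [isPending, startTransition] = useTransition();

  const onClick = () =>
    startTransition(async () => {
      setOptimisticIsFavorited(!optimisticIsFavorited); // 1. optimistic flip inside the transition
      try {
        await toggleFavorite();
      } catch {
        setOptimisticIsFavorited(isFavorited); // 2. reset on failure (React also reverts once the transition settles)
      }
    });

  // 3. disabled while the transition is pending, so repeated clicks are ignored
  return (
    <button onClick={onClick} disabled={isPending}>
      {optimisticIsFavorited ? '★' : '☆'}
    </button>
  );
}

Error boundaries (point 4) wrap the component tree rather than this handler, so they are not shown here.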
1. Verify and Reinstall the NDK
The error suggests that the source.properties file is missing, which indicates an issue with the NDK installation. Follow these steps to reinstall the NDK: Open Android Studio. Go to File > Settings > Appearance & Behavior > System Settings > Android SDK (on macOS, it's under Preferences). Select the SDK Tools tab. Check the box for NDK (Side by side) and click Apply or OK to install the latest version of the NDK.
Once installed, verify that the NDK directory (e.g., ~/Android/Sdk/ndk/) contains the source.properties file.
2. Clean and Rebuild the Project
After reinstalling the NDK, clean and rebuild your project to ensure the changes take effect:
cd android
./gradlew clean
cd ..
npx react-native run-android
3. Specify the Correct NDK Version
Sometimes, the project may require a specific version of the NDK. You can specify the version in your build.gradle file: open the android/build.gradle file and add or update the ndkVersion property under the android block:
android {
    ndkVersion "27.1.12297006" // Replace with the correct version
}
Sync the project and rebuild.
4. Delete and Reinstall the NDK Folder
If the issue persists, manually delete the NDK folder and reinstall it: Navigate to the NDK directory (e.g., ~/Android/Sdk/ndk/). Delete the problematic NDK folder (e.g., 27.1.12297006). Reinstall the NDK using Android Studio as described in Step 1.
5. Update Gradle and React Native
Ensure that you are using the latest versions of Gradle and React Native, as older versions may have compatibility issues with newer NDK versions. Update the Gradle wrapper by modifying the gradle-wrapper.properties file:
distributionUrl=https://services.gradle.org/distributions/gradle-8.0-all.zip
Update React Native to the latest version:
npm install react-native@latest
6. Verify Environment Variables
Ensure that your ANDROID_HOME and PATH environment variables are correctly set. Add the following lines to your ~/.bashrc or ~/.zshrc file:
export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$ANDROID_HOME/emulator:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH
Reload the terminal:
source ~/.bashrc
7. Delete Gradle Cache
For anyone trying to do the same as the original poster, see this repository by james77777778:
https://github.com/james77777778/darknet-onnx
{
"name": "Permissions Extension",
...
"permissions": [
"activeTab",
"contextMenus",
"storage"
],
"optional_permissions": [
"topSites",
],
"host_permissions": [
"https://www.developer.chrome.com/\*"
],
"optional_host_permissions":[
"https://\*/\*",
"http://\*/\*"
],
...
"manifest_version": 3
}
As suggested by @camickr I managed to get a solution, but I did things a little differently.
Container mainPanel = this.getParent();
CardLayout card = (CardLayout) mainPanel.getLayout();
card.show(mainPanel, "login panel");
Try adding the DLL name to the Source:
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="/dllName;component/Themes/DarkTheme.xaml"/>
</ResourceDictionary.MergedDictionaries>
I solved this problem by going into the container settings. There’s a configuration for enabling end-to-end HTTP 2.0. When I disabled it, the protocol error stopped appearing.
You could use pandas for CSV processing; in that case pandas will skip the header and give you more possibilities.
But something like this can also help you:
if list(row.values()) == FIELDNAMES:
pass
Can someone make one to disconnect from GlobalProtect?
ffmpeg_kit_flutter:
  git:
    url: https://github.com/Sahad2701/ffmpeg-kit.git
    path: flutter/flutter
    ref: flutter_fix_retired_v6.0.3
This approach is feasible, thank you;
but how do I use ffmpeg_kit_flutter_full_gpl?
If I use ffmpeg_kit_flutter_full_gpl, it reports an error.
Please help.
I've always felt like "internal" should be the default. Unless you're writing a public library, there's no need or reason to expose anything at all to the world at large. Many people here have said "You only need to use internal when you want to hide something from the outside world", but I'd turn that on its head: you only want to use public for stuff that you expect to be called from the outside world. Unless you're writing a public library, that usually means nothing at all.
That said, it does make some things easier where other kinds of tooling, such as serialization or unit tests, require explicit access to your code; but even there, there's almost always a workaround, though sometimes it's a bit more difficult.
I really regret that most code just mindlessly uses public for all sorts of stuff that nobody is particularly anxious to publish to the world. Sometimes I just throw my hands up and acquiesce, because so much is geared towards making things public, but I think this is a sad, almost accidental historical mistake rather than a well-thought-out strategy.
I have a pretty good solution, which has been working since 2008 without problems; we are storing close to 500,000 files of different types using 2 separate tables.
Performance is amazing, and memory usage is very low, because one table (METADATA) only stores metadata describing the uploaded file, including one field (id) pointing to the second table (CONTENT), which contains a BLOB field (the actual file) and an ID field to link back to its metadata.
All searching is done on the metadata table, and when we decide to download a file, the ID field allows us to fetch the content of only that specific record from the second table.
We insert new files into the CONTENT table, and the new ID is then used to insert another record into the METADATA table, registering the descriptive info of the file, like name, size, type, user, etc.
METADATA is like a directory of the files. Small table.
CONTENT is the repository of the files with their key (ID). Huge table.
In the second table we store files as big as 256 MB in MySQL.
To show file upload progress in NestJS, don't use multer, because it waits until the file is fully uploaded. Instead, use busboy and show progress using JavaScript's onprogress event.
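A rough server-side sketch of that idea, assuming a plain NestJS controller, the busboy package, and Express-style requests (package names and import style are assumptions, not a drop-in implementation):

import { Controller, Post, Req } from '@nestjs/common';
import type { Request } from 'express';
import busboy from 'busboy';

@Controller('upload')
export class UploadController {
  @Post()
  upload(@Req() req: Request): Promise<string> {
    return new Promise((resolve, reject) => {
      const total = Number(req.headers['content-length'] ?? 0);
      let received = 0;

      // busboy >= 1.x event names assumed ('close' fires when parsing is done)
      const bb = busboy({ headers: req.headers });
      bb.on('file', (_name, file) => {
        file.on('data', (chunk: Buffer) => {
          received += chunk.length;
          // Progress is known here while the upload is still in flight; push it to the
          // client via SSE/WebSocket or a per-upload-id status endpoint.
          console.log(`upload progress: ${((received / total) * 100).toFixed(1)}%`);
          // A real handler would also write the chunk to disk or object storage here.
        });
      });
      bb.on('close', () => resolve('upload complete'));
      bb.on('error', reject);
      req.pipe(bb);
    });
  }
}

On the browser side, XMLHttpRequest's upload.onprogress event reports how many bytes have been sent, which is usually what drives the progress bar.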
In the image path parameter, is there a way to use the image content instead of the path?
It's actually quite simple:
template <typename T>
void printrgb(int r, int g, int b, T output) {
std::cout<<"\033[38;2;"<<r<<";"<<g<<";"<<b<<"m"<<output<<"\033[0m";
}
The output will be printed based on r, g and b; honestly, I don't fully understand how it works, but these are ANSI 24-bit (truecolor) escape sequences: \033[38;2;r;g;bm sets the foreground colour and \033[0m resets it.
Run the command without $ in your terminal
git clone https://www.github.com/sky9262/phishEye.git
I had the same issue. There is a response on this page from the user Sachin Dev Tomar and that is what worked for my situation. So once I got the Azure development tool installed in my Visual Studio, it started to work as expected.
In VS Code I just cleared the android folder and reran the Expo command to run on Android, and for some reason it worked very well : )
Craigslist no longer allows HTML tags (such as <a>) in most categories, to prevent spam and scams. Instead, post plain URLs like https://example.com; Craigslist auto-links them in supported sections.
You have a lot of alternatives:
Disable the button and wait some time before enabling it again
Disable the button, wait for the response from the server, show a success dialog, wait for the user to click close, then enable the button again (see the sketch below)
You can check if the same data was inserted recently (within a defined amount of time) and cancel the operation
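A small sketch of the second option in browser-side TypeScript (the endpoint and messages are made up):

async function onSaveClick(button: HTMLButtonElement): Promise<void> {
  button.disabled = true; // block further clicks immediately
  try {
    const res = await fetch('/api/save', { method: 'POST' });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    alert('Saved successfully'); // stand-in for a success dialog; blocks until dismissed
  } finally {
    button.disabled = false; // re-enable only after the round trip finishes
  }
}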
I had this issue and for me, it was because my bin folder was included in my project in Visual Studio. I removed all references to <Content Include="bin\..."/> in my .csproj file and the publish started working after that.
Spotify stores encrypted, compressed audio files in its cache, using minimal storage. For your project: compress the audio, encrypt it, store it locally, and decrypt it for playback using native audio tools or libraries.
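A minimal Node-flavoured sketch of that compress, encrypt, store, decrypt flow (AES-256-GCM and gzip are my assumptions; a mobile app would use the platform keystore and codecs rather than Node APIs):

import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';
import { gzipSync, gunzipSync } from 'node:zlib';
import { readFileSync, writeFileSync } from 'node:fs';

const key = randomBytes(32); // in practice, load this from secure storage, not from code

function cacheTrack(path: string, audio: Buffer): void {
  const compressed = gzipSync(audio);
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(compressed), cipher.final()]);
  // Store iv + auth tag + ciphertext together so decryption is self-contained.
  writeFileSync(path, Buffer.concat([iv, cipher.getAuthTag(), encrypted]));
}

function readTrack(path: string): Buffer {
  const blob = readFileSync(path);
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  const compressed = Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
  return gunzipSync(compressed); // raw audio, ready to hand to the player
}

Note that already-compressed formats (MP3, Ogg, AAC) won't shrink much further, so the gzip step mostly matters for raw PCM.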
Thank You
The method is creating a thread, and multiple threads may be trying to open the same registry key. The call may not be thread safe.
You can remove the code from the onBootstrap method. In laminas the session is started via the AbstractContainer::__construct method. See the link below for the code relevant to that. In laminas Session is autowired via its factories. You can find which ones are being called in the /vendor/laminas/laminas-session/src/ConfigProvider::getDependencyConfig() method. For laminas components that expose support for both Laminas MVC and Mezzio the Module class just proxies to the ConfigProvider class.
You can find the migration guide here:
https://docs.laminas.dev/migration/
You can find a presentation by Zend here:
https://www.zend.com/webinars/migrating-zend-framework-laminas
Location where the session is started
Since you are trying to upgrade you may want to take a look at the documentation for usage in a laminas-mvc application. It covers all the new options.
https://docs.laminas.dev/laminas-session/application-integration/usage-in-a-laminas-mvc-application/
This should help fix these kinds of missing-plugin notices in WordPress when the issue doesn't resolve itself: https://github.com/wpallstars/wp-fix-plugin-does-not-exist-notices
I downgraded my Xcode to 16.2 and the app built successfully.
OK, so I suggest you remove the print function at the start, and replace the , with a +.
For example:
name = input("What's your name")
print("Hello " + name)
You can format your data using a community visualization named Templated Record that supports HTML. Here is how it works and an example of the results.
I just tried this but I got the same error:
python minipython1151.py
Traceback (most recent call last):
File "/Users/kate/Pictures/RiverofJobs.com/code2/minipython1151.py", line 1, in <module>
from flask import Flask
ModuleNotFoundError: No module named 'flask'
Sometimes it happens when you use a feature that's only valid for one day, and after that, it won't let you do anything, and you'll have to start another chat. But if you have the paid version, it's very rare for that to happen.
Best regards!
OK, this was not straightforward to diagnose or fix, and I really had to get some pointers from this SonarSource Community topic (credits to @ganncamp).
There are multiple factors that led here.
Factors that are SonarQube-specific:
The more recent SonarQube versions such as 9.9 and 2025.1 have no way to update the email of an external user. This is advertised as a "feature", but I think it is rather a design failure. Although it would be easy to pick the email address from the LDAP query response and update it on logon, SonarQube deliberately chose not to do that. External users get their email field populated on first logon and then stick with it for the rest of their life. Well, unless you dare to touch SonarQube's database directly.
SonarQube users must have unique email addresses. If on logon, an LDAP query returns a user not yet in SonarQube's own users table (looked up using username), but the email returned by the LDAP server is already present in the same users table, the login fails and the new user is not inserted into the users table.
(I don't have the faintest idea about the reasoning behind this. It's not hard to imagine use cases where multiple users have the same email address. Consider several technical users, which are all set up with [email protected] ...)
You can set up multiple LDAP servers in sonar.properties as external identity providers. The important detail is, that this sort of setup is not meant to work as a failover cluster even though it works similar to a failover cluster:
SonarQube Server's LDAP support is not designed to connect multiple servers in a failover mode.
(...)
Authentication will be tried on each server, in the order they are listed in the configurations until one succeeds.
What's it designed for then? They probably meant to provide access using heterogeneous LDAP servers. Consider multiple firms or branches, each with their own LDAP directory, using the same SonarQube instance.
To address this use case in a multi-server LDAP setup, the SonarQube users table contains an external_login and an external_identity_provider field, which together must be unique in the whole table. In a single-server LDAP setup, external_identity_provider is always 'sonarqube'. In a multi-server LDAP setup, the field reflects the LDAP server the user was authenticated against the first time they logged in. For example: "LDAP_foobar". (See linked documentation above.) Now our two John Does can be told apart:
| login | external_login | external_identity_provider |
|---|---|---|
| john_doe | john_doe | LDAP_foobar |
| john_doe1234 | john_doe | LDAP_yeehaw |
Also, since the SonarQube users table had an original "login" field (which is unique of course), they had to work around that unique constraint by adding a random number sequence to the username. Since the login field is probably not used for external users anymore, this is just for backwards compatibility, I guess.
ldap.url=ldaps://foo.bar.local:636
ldap.foo.bindDn=CN=foo,OU=orgunit,DC=bar,DC=local
...then the certificate SAN should contain a bar.local DNS field, otherwise the query fails and produces a (debug-level) message in web.log:
2025.04.11 18:43:18 DEBUG web[b7a70ba3-0e9a-4685-a1ad-c2a30e919e64][o.s.a.l.LdapSearch] More result might be forthcoming if the referral is followed
javax.naming.PartialResultException: null
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreImpl(AbstractLdapNamingEnumeration.java:237)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMore(AbstractLdapNamingEnumeration.java:189)
at org.sonar.auth.ldap.LdapSearch.hasMore(LdapSearch.java:156)
at org.sonar.auth.ldap.LdapSearch.findUnique(LdapSearch.java:146)
at org.sonar.auth.ldap.DefaultLdapUsersProvider.getUserDetails(DefaultLdapUsersProvider.java:78)
at org.sonar.auth.ldap.DefaultLdapUsersProvider.doGetUserDetails(DefaultLdapUsersProvider.java:58)
at org.sonar.server.authentication.LdapCredentialsAuthentication.doAuthenticate(LdapCredentialsAuthentication.java:92)
at org.sonar.server.authentication.LdapCredentialsAuthentication.authenticate(LdapCredentialsAuthentication.java:74)
at org.sonar.server.authentication.CredentialsAuthentication.lambda$authenticate$0(CredentialsAuthentication.java:71)
at java.base/java.util.Optional.or(Optional.java:313)
at org.sonar.server.authentication.CredentialsAuthentication.authenticate(CredentialsAuthentication.java:71)
at org.sonar.server.authentication.CredentialsAuthentication.authenticate(CredentialsAuthentication.java:57)
at org.sonar.server.authentication.ws.LoginAction.authenticate(LoginAction.java:116)
at org.sonar.server.authentication.ws.LoginAction.doFilter(LoginAction.java:95)
...
Caused by: javax.naming.CommunicationException: simple bind failed: bar.local:636
at java.naming/com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:96)
at java.naming/com.sun.jndi.ldap.LdapReferralException.getReferralContext(LdapReferralException.java:151)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreReferrals(AbstractLdapNamingEnumeration.java:326)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreImpl(AbstractLdapNamingEnumeration.java:227)
... 68 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching bar.local found.
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:383)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:458)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:206)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1510)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1425)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:455)
at java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:925)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1295)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:418)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:391)
at java.naming/com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:359)
at java.naming/com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2896)
at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:348)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:229)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:189)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:152)
at java.naming/com.sun.jndi.url.ldap.ldapURLContextFactory.getObjectInstance(ldapURLContextFactory.java:52)
at java.naming/javax.naming.spi.NamingManager.getURLObject(NamingManager.java:625)
at java.naming/javax.naming.spi.NamingManager.processURL(NamingManager.java:402)
at java.naming/javax.naming.spi.NamingManager.processURLAddrs(NamingManager.java:382)
at java.naming/javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:354)
at java.naming/com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:119)
... 71 common frames omitted
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching bar.local found.
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:212)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:471)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:418)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:238)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:638)
... 100 common frames omitted
The tricky part is: the same rigorous SAN-checking does not happen on server startup, when SonarQube checks connectivity to all configured LDAP servers. Even if the TLS certificate is imperfect, it will log:
2025.04.14 21:42:54 INFO web[][o.s.a.l.LdapContextFactory] Test LDAP connection on ldaps://foo.bar.local:636: OK
Factors and events related to our specific setup and situation:
We had a 5-server LDAP setup. Unfortunately, we meant to use it as a failover cluster, so these LDAP directories were really just replicas of each other.
At a point, several users in the LDAP directory had their email addresses changed.
Somewhat later, we had downtimes for the first few LDAP servers listed in sonar.properties (such as LDAP_foobar). It lasted a few days, then we fixed it.
Meanwhile, we messed up the TLS certificates of our LDAP servers except one down the list (LDAP_valid).
Not totally sure about how it all played out, but the results were as follows:
| login | email | external_login | external_identity_provider |
|---|---|---|---|
| john_doe | [email protected] | john_doe | LDAP_foobar |
| john_doe1234 | [email protected] | john_doe | LDAP_yeehaw |
Since the first few LDAP servers listed in sonar.properties (such as LDAP_foobar and LDAP_yeehaw) had a TLS certificate problem, the login process always failed over to LDAP_valid.
The LDAP_valid authentication was successful, but the email address in the LDAP response was already present in the users table, so SonarQube threw an "Email '[email protected]' is already used" error.
How we managed to fix the situation:
SonarQube service stop. Backup.
We changed the LDAP configuration back to a single LDAP-server setup.
We had to update all the users.external_identity_provider database fields to 'sonarqube' to reflect the switch to single LDAP-server setup:
UPDATE users SET external_identity_provider = 'sonarqube' WHERE external_identity_provider LIKE 'LDAP_%';
We removed all the john_doe1234 duplicate users entries. (One record at a time delete statements.)
We updated all the old users.email fields to their new values.
SonarQube service start.
The problem was: instead of
https://graph.facebook.com/v22.0...
It should have been:
https://graph.instagram.com/v22.0...
Your routes may have been cached, so you should execute :
php artisan route:clear
This should delete the previously optimized routes.
Yes, it's a good idea and it works for me: get the server load using sys_getloadavg() and combine it with sleep() to reduce the CPU load.
Hi, I found an error with the location name; the names are different for the server farm. Like this, I successfully created my function app using the CLI:
C:\Users\jesus\Documents\AirCont\AircontB> az login
C:\Users\jesus\Documents\AirCont\AircontB>az storage account create --name aircontstorage43210 --resource-group aircontfullstack --location "East US 2" --sku Standard_LRS --kind StorageV2 --access-tier Cool --https-only true
C:\Users\jesus\Documents\AirCont\AircontB> az functionapp list-consumption-locations
PS C:\Users\jesus\Documents\AirCont\AircontB> az functionapp create --name AircontBackFn --storage-account aircontstorage43210 --resource-group aircontfullstack --consumption-plan-location "eastus2" --runtime dotnet --functions-version 4 --os-type Windows
As @ChayimFriedman said:
You can use an empty instruction string.
I'm not well-versed on DPDK/testpmd, but you seem to be constraining the number of CPUs and queues it will use compared to what iperf3 will likely use.
Assuming your iperf3 is using TCP (guessing since the command line is not provided), it will be taking advantage of any stateless offloads offered by your NIC(s).
Assuming it isn't simply a matter of luck of timing, seeing higher throughput with more streams implies that the TCP window size settings at the sender, and the receiver, are not sufficient to enable achieving the full bandwidth-delay product with the smaller number (eg single) of streams.
There are likely plenty of references for TCP tuning for high bandwidth delay product networks. One which touches upon the topic, which is near and dear to my heart for some reason :) is at https://services.google.com/fh/files/misc/considerations_when_benchmarking_tcp_bulk_flows.pdf
It appears to be an issue with Node.js version 22.14 and the CB SDK. I tried reinstalling it twice (no joy). It worked on my old machine running 22.11. I just installed the non-LTS 23.11 and it all works now.
Worked on Ubuntu 24.04:
Get download url from https://downloads.mysql.com/archives/community/
wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.44-linux-glibc2.12-x86_64.tar
tar xvf mysql-5.7.44-linux-glibc2.12-x86_64.tar
tar zxvf mysql-5.7.44-linux-glibc2.12-x86_64.tar.gz
Configure --with-mysql-config for mysql2:
bundle config build.mysql2 "--with-mysql-config=PATH_TO/mysql-5.7.44-linux-glibc2.12-x86_64/bin/mysql_config"
bundle install
# or bundle pristine mysql2
Check if libmysqlclient.so.20 is linked to
mysql-5.7.44-linux-glibc2.12-x86_64/lib/libmysqlclient.so.20
for example:
ldd /home/ubuntu/.rvm/gems/ruby-2.3.8/gems/mysql2-0.3.21/lib/mysql2/mysql2.so
linux-vdso.so.1 (0x00007ff026bda000)
libruby.so.2.3 => /home/ubuntu/.rvm/rubies/ruby-2.3.8/lib/libruby.so.2.3 (0x00007ff026800000)
libmysqlclient.so.20 => /home/ubuntu/mysql-5.7.44-linux-glibc2.12-x86_64/lib/libmysqlclient.so.20 (0x00007ff025e00000)
...
The only way I fixed this was to use an Update or Insert SQL statement
Try using an interactive bash shell:
import subprocess
subprocess.run([
"ssh", "me@servername",
"bash -i -c 'source ~/admin_environment && exec bash'"
])
I have been struggling a bit with finding a way to draw lines outside the plot area but found a creative solution in this previous thread: How to draw a line outside of an axis in matplotlib (in figure coordinates). Thanks to the author for the solution once again!
My proposed solution for the problem is the following (see the explanation of distinct parts in the code):
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
dict = {'BatteryStorage': {('des-PDef3', 'Central Africa'): 0.0,
('des-PDef3', 'Eastern Africa'): 2475.9,
('des-PDef3', 'North Africa'): 98.0,
('des-PDef3', 'Southern Africa'): 124.0,
('des-PDef3', 'West Africa'): 1500.24,
('pes-PDef3', 'Central Africa'): 0.0,
('pes-PDef3', 'Eastern Africa'): 58.03,
('pes-PDef3', 'North Africa'): 98.0,
('pes-PDef3', 'Southern Africa'): 124.0,
('pes-PDef3', 'West Africa'): 0.0,
('tes-PDef3', 'Central Africa'): 0.0,
('tes-PDef3', 'Eastern Africa'): 1175.86,
('tes-PDef3', 'North Africa'): 98.0,
('tes-PDef3', 'Southern Africa'): 124.0,
('tes-PDef3', 'West Africa'): 0.0},
'Biomass PP': {('des-PDef3', 'Central Africa'): 44.24,
('des-PDef3', 'Eastern Africa'): 1362.4,
('des-PDef3', 'North Africa'): 178.29,
('des-PDef3', 'Southern Africa'): 210.01999999999998,
('des-PDef3', 'West Africa'): 277.4,
('pes-PDef3', 'Central Africa'): 44.24,
('pes-PDef3', 'Eastern Africa'): 985.36,
('pes-PDef3', 'North Africa'): 90.93,
('pes-PDef3', 'Southern Africa'): 144.99,
('pes-PDef3', 'West Africa'): 130.33,
('tes-PDef3', 'Central Africa'): 44.24,
('tes-PDef3', 'Eastern Africa'): 1362.4,
('tes-PDef3', 'North Africa'): 178.29,
('tes-PDef3', 'Southern Africa'): 210.01999999999998,
('tes-PDef3', 'West Africa'): 277.4}}
df = pd.DataFrame.from_dict(dict)
df.plot(kind = "bar",stacked = True)
region_labels = [idx[1] for idx in df.index] #deriving the part needed for the x-labels from dict
plt.tight_layout() #necessary for an appropriate display
plt.legend(loc='center left', fontsize=8, frameon=False, bbox_to_anchor=(1, 0.5)) #placing legend outside the plot area as in the Excel example
ax = plt.gca()
ax.set_xticklabels(region_labels, rotation=90)
#coloring labels for easier interpretation
for i, label in enumerate(ax.get_xticklabels()):
    #print(i)
    if i <= 4:
        label.set_color('red') #set favoured colors here
    if 9 >= i > 4:
        label.set_color('green')
    if i > 9:
        label.set_color('blue')
plt.text(1/6, -0.5, 'des', fontweight='bold', transform=ax.transAxes, ha='center', color='red') #adding labels outside the plot area, representing the 'region group code'
plt.text(3/6, -0.5, 'pes', fontweight='bold', transform=ax.transAxes, ha='center', color='green') #keep coloring respective to labels
plt.text(5/6, -0.5, 'tes', fontweight='bold', transform=ax.transAxes, ha='center', color='blue')
plt.text(5/6, -0.6, 'b', color='white', transform=ax.transAxes, ha='center') #phantom text to trick `tight_layout` thus making space for the texts above
ax2 = plt.axes([0,0,1,1], facecolor=(1,1,1,0)) #for adding lines (i.e., brackets) outside the plot area, we create new axes
#creating the first bracket
x_start = 0 + 0.015
x_end = 1/3 - 0.015
y = -0.42
bracket1 = [
Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket1:
    ax2.add_line(line)
#second bracket
x_start = 1/3 + 0.015
x_end = 2/3 - 0.015
bracket2 = [
Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket2:
    ax2.add_line(line)
#third bracket
x_start = 2/3 + 0.015
x_end = 1 - 0.015
bracket3 = [
Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket3:
    ax2.add_line(line)
ax2.axis("off") #turn off axes for the new axes
plt.tight_layout()
plt.show()
Resulting in the following plot:
# Create a string with 5,000 peach emojis
peach_spam = "🍑" * 5000
# Save it to a text file
with open("5000_peaches.txt", "w", encoding="utf-8") as file:
    file.write(peach_spam)
print("File created: 5000_peaches.txt")
CA works on the ASG/node-group principle - and on bare-metal we don't have ASGs/node-groups. I tried designing a virtual node-group, and a virtual "cloud client" for bare metal, but there were so many issues with this design, that I gave up.
I ended up creating my own cluster-bare-autoscaler (https://github.com/docent-net/cluster-bare-autoscaler). It is not production-ready as of now (2025-04), but should be soon. It already does what it is supposed to do, but has some limitations.
Awaiting your input, folks!
To get the user's profile picture, send this GET request:
https://graph.instagram.com/me?fields=id,username,name,profile_picture_url&access_token=${ACCESS_TOKEN}
Replace ACCESS_TOKEN with the user's access token.
For more info on what fields you can return, check the developer docs.
After some Googling, I found that someone resolved the issue by installing the Microsoft Visual C++ Redistributable package. You can download the latest supported version from the official Microsoft site: https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170
In case it helps others, this solution was also mentioned in the following discussions:
On Edge 134 or newer:
Type the following in the Edge address bar: edge://settings/content/insecureContent
Click the Add button next to the Allow section,
type in the URL you are trying to add (in my case it was the Nessus server link on my domain),
restart the browser - done.
This had me going in circles till I figured this out.
This also affects AlpineJS. Make sure you define the map variable outside your reactive component!
This works:
<div id="my_map" x-data="(() => { let map = null; return { markers: [], ...
This doesn't:
<div id="my_map" x-data="(() => { return { map: null, markers:[], ...
Glad I found this post. @Javier's response saved me.
What is the difference between a variable and a parameter when calling az pipelines run?
How do I pass multiple variables or parameters? Is this doable?
az pipelines run --name "{pipeline.name}" --variables Param1={Value1} Param2={Value2}
az pipelines run --name "{pipeline.name}" --parameters Param1={Value1} Param2={Value2}
I got the same error on webpack version 5.97.1, it appeared after I started using the "undici" package, and after deleting it, the error disappeared.
There's now https://github.com/urob/numpy-mkl which automatically builds current numpy and scipy wheels linked against mkl whenever there's a new release.
Wheels are available for Windows and Linux and can be installed with pip:
pip install numpy scipy --extra-index-url https://urob.github.io/numpy-mkl
Great question! I was wondering the same thing about attributes and methods. The top answer explains it really well. I also found a website that provides formal definitions and examples: https://www.almabetter.com/bytes/tutorials/python/methods-and-attributes-in-python
rem units always inherit from the document root (the <html> element), not the Shadow DOM. To prevent font-size inconsistencies across sites, the best approach is to either:
Use transform: scale() to normalize font scaling inside the Shadow DOM based on the page's root font-size, or
Use an <iframe> for full isolation and root control (especially useful for overlays or popups).
Overriding UI library styles from rem to em or px is possible but impractical unless you're customizing the entire theme.
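A rough sketch of the transform: scale() approach, assuming a 16px design baseline and a wrapper element inside the shadow root (both are assumptions):

const DESIGN_ROOT_PX = 16;

function normalizeShadowScale(shadowRoot: ShadowRoot): void {
  // The host page controls the root font-size, e.g. 20px on a site that bumps it up.
  const rootPx = parseFloat(getComputedStyle(document.documentElement).fontSize);
  const scale = DESIGN_ROOT_PX / rootPx; // 16 / 20 = 0.8 counter-scales the overlay
  const wrapper = shadowRoot.querySelector<HTMLElement>('.overlay-wrapper');
  if (wrapper) {
    wrapper.style.transform = `scale(${scale})`;
    wrapper.style.transformOrigin = 'top left';
  }
}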
You're on the right track scanning IPs, but pairing with Android TV (Cast devices) over port 8009 requires encrypted TLS sockets and Google Cast V2 protocol using protobuf messages. Dart doesn't natively support this, so you'd need to implement it in native Android (via platform channels) using the Cast SDK. Raw sockets alone won't work for pairing.
The solution I found was that I had FilterRegistrationBean in my config BUT had not registered a filter.
If you are not using a filter, just remove FilterRegistrationBean
OK - this took some digging to figure out what was going wrong.
The problem was that the "part" in the multipart request wasn't being recognized by Jersey (via any FormDataParam annotations, OR if I were to directly access the request parts via the current HttpServletRequest) because the request coming from the browser wasn't indicating the "boundary" for the first/primary Content-Type header (which the part and its boundary name is dynamically being added by the browser later in the request for the file "part", i.e., some random boundary name like "------WebKitFormBoundarymUBOEP84Y3yt6c4A").
The reason that the browser wasn't indicating the correct (or any) boundary for the primary/first Content-Type header in the request was because Angular (via the $http provider) will automatically set the Content-Type to "application/json;charset=utf-8" during a POST/PUT request if the "data" provided to the $http function is an Object that doesn't resolve to the string representation of "[object File]", and no other Content-Type header has been explicitly included/added in the $http call.
In my case, the data object's string representation is "[object FormData]", which causes the primary/first Content-Type header to be set to "application/json" by Angular (which leads to Jersey to not parse the request for any "parts" that may have been sent), instead of being set correctly to Content-Type: multipart/form-data; boundary=----WebKitFormBoundarymUBOEP84Y3yt6c4A by the browser (by not including ANY Content-Type header in my JS code during the $http POST call). If I were to explicitly set the Content-Type header to "multipart/form-data", then it will still fail because it's missing the "boundary" attribute, and I don't know the value of the boundary because it's being dynamically generated by the browser.
To fix the issue: I needed to remove the default headers that Angular was automatically applying to all POST/PUT requests by deleting the associated default properties from the JS config object:
delete $httpProvider.defaults.headers.post["Content-Type"];
delete $httpProvider.defaults.headers.put["Content-Type"];
Now, I didn't want to set another, default Content-Type header for ALL POST/PUT requests because I don't want some other incorrect content type to end up being sent in other, non-file-upload cases - so I just deleted the existing, hard-coded defaults (with the "delete" statements above), and then I ended up setting my own defaults for content type handling during my POST/PUT calls to $http based upon similar, but different, logic from what Angular was doing.
I also had to replace the default Angular request transformer during the Angular "config" hook with one that will properly handle FormData objects during POST/PUT requests, somewhat following Angular's original logic for parameterizing JSON objects to be added as form parameters to POST/PUT requests:
$httpProvider.defaults.transformRequest = function (data) {
if (angular.isObject(data)) {
let strData = String(data);
if (strData == "[object File]" || strData == "[object FormData]") {
return data;
}
return $httpParamSerializerProvider.$get()(data);
}
return data;
};
With the Content-Type header being set correctly with a boundary in the POST file upload requests from the browser, Jersey is now parsing the requests correctly, and I can use the @FormDataParam annotations without Jersey automatically sending back a 400 response when it thinks that the request is not a multipart request.
My understanding of Flask is that it is a Python microframework used for building web applications. It could be used for communication, but it would need to be paired with an additional component (such as Azure Service Bus).
There could be a way to create a shared library across the devices so that they could use the same variables.
<Celll
  v-for="(col, colIndex) in columns"
  :key="colIndex"
  :col="col"  <!-- add this -->
  :row="row"
/>
<script>
import { h } from 'vue';
export default {
props: {
col: {
type: Object,
required: true,
},
row: {
type: Object,
required: true,
},
},
render() {
return h('td', null, this.col.children.default(this.row));
},
};
</script>
If you assign a guide to each plot and give them unique titles, you get just two legends
p3 <- p1 +
guides(
color = guide_legend( title = "condition 1" )
)+
ggnewscale::new_scale_color() +
geom_point(
data = mydata
, aes(
x = x
, y = y
, group = 1
, col = new_sample_name
)
) +
guides(color = guide_legend(title = "new name"))
from moviepy.editor import *
from PIL import Image

# Load image
image_path = "/mnt/data/A_photograph_in_a_promotional_advertisement_showca.png"
image_clip = ImageClip(image_path).set_duration(10).resize(height=1080).set_position("center")

# Add promotional text
text_lines = [
    "اكتشف نعومة الطبيعة مع صابون espase",
    "مصنوع من رماد، سكر، ملح، زيت زيتون، زيت، بيكربونات، وملون 88",
    "تركيبة فريدة تمنح بشرتك النقاء والانتعاش",
    "espase... العناية تبدأ من هنا"
]

# Add each text line with a fade-in
text_clips = []
start_time = 0
for line in text_lines:
    txt_clip = (TextClip(line, fontsize=60, font="Arial-Bold", color="white", bg_color="black", size=(1080, None))
                .set_position(("center", "bottom"))
                .set_start(start_time)
                .set_duration(2.5)
                .crossfadein(0.5))
    text_clips.append(txt_clip)
    start_time += 2.5

# Final video composition
final_clip = CompositeVideoClip([image_clip] + text_clips, size=(1080, 1080))
output_path = "/mnt/data/espase_promo_video.mp4"
final_clip.write_videofile(output_path, fps=24)
Below is one standard solution using jq’s built‐in grouping and transformation functions:
jq 'group_by(.a)[] | { a: .[0].a, count: map(.b | length) | add }'
Result (the filter emits one object per unique a, with the total count of b entries):
{
  "a": "bar",
  "count": 0
}
{
  "a": "foo",
  "count": 3
}
Grouping by a:
The command starts with:
group_by(.a)[]
This groups all objects in the array that share the same a value into subarrays. Each subarray contains all objects with that same a.
Extracting the Unique Key:
For each group (which is an array), the expression:
.[0].a
extracts the common a value from the first item. Since all objects in the group have the same a, this is safe.
Counting Entries in b:
The expression:
map(.b | length) | add
takes the current group (an array of objects), maps each object to the length of its .b array, and then sums them with add. This sum represents the total count of all entries in b for that particular a.
Building the Output Object:
The { a: .[0].a, count: ... } syntax creates an object with two fields: the a value and the computed count.
In the future if you'd like to use jq in any JetBrains IDE, please check out my plugin: https://plugins.jetbrains.com/plugin/23360-jqexpress
The answer is to use
"General"" *""
This problem seems not to have a solution for now. I have also been experiencing the same problem.
Ensure a single ssh-agent instance runs at a time.
You may use this:
=SORT(CHOOSECOLS(A2:E5,1,4,5),2,1,3,1)
Sample Output:
| Todd | 8/22 | 11:55 PM |
| Ned | 8/23 | 6:50 AM |
| Rod | 8/23 | 1:37 PM |
| Maude |
I've recently faced the same problem and I think I found a solution. It relies on clang's __builtin_assume builtin, which will warn if the passed expression has side effects. GCC does not have it, so the solution is not portable.
The resulting macro is
#define assert(e) do { if (!(e)) { __builtin_assume(!(e)); __assert_fn(); } } while (0);
It should not cause any code-generation issues, since the assume is placed on a dead branch that should kill the program (if __assert_fn is declared noreturn, then the compiler may assume e anyway).
See gist for example and godbolt link https://gist.github.com/pskrgag/39c8640c0b383ed1f1c0dd6c8f5a832e
I was able to find a post from 7 years ago that gave me some direction, and came up with this. Thanks for looking. Brent
---solution
SELECT acct_id,sa_type_cd, end_dt
FROM
(SELECT acct_id,sa_type_cd,end_dt,
rank() over (partition by acct_id order by end_dt desc) rnk
FROM test
WHERE sa_type_cd IN ( 'E-RES', 'E-GS' ) and end_dt is not null)
WHERE rnk = 1 and acct_id = '299715';
There should not be any reason that you cannot run separate instances on separate hosts all streaming 1 portion of the overall data set to the same cache. The limiting factor in this proposed architecture will most likely be the network interface of the database that you are retrieving data from. Hope that helps.
It is only an alpha version; it is not available in Expo Go.
if (process.env.NODE_ENV === 'production') {
// Enable service worker only in production for caching
navigator.serviceWorker.ready.then(() => {
console.log('Service Worker is ready');
});
}
It's possible that I didn't explain the issue correctly, but none of the provided answers accurately split the string or would be able to handle the large amount of data I will eventually be working with without timing out. Here's what did work for my case:
var str = 'value1,"value2,with,commas",value3';
var parts = str.split(/,(?=(?:[^"]*"[^"]*")*[^"]*$)/);
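For anyone reading this later, here is what the lookahead does: a comma is split on only when it is followed by an even number of double quotes, i.e. when it sits outside any quoted field. A quick check of the output:

const str = 'value1,"value2,with,commas",value3';
const parts = str.split(/,(?=(?:[^"]*"[^"]*")*[^"]*$)/);
console.log(parts); // [ 'value1', '"value2,with,commas"', 'value3' ]

Note that this keeps the surrounding quotes in the quoted field and assumes quotes are always balanced; a real CSV parser would also handle escaped quotes.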