Normally, the main sitemap of TYPO3 is reachable only via the ?type argument, here: https://www.lumedis.de/?type=1533906435
This sitemap contains two sub-sitemaps, one for each chunk of 1000 pages on your site. Your URL is one of them directly.
If you have other extensions installed (like news), there are also sitemaps for these, so it's never a good idea to check only one sitemap type.
import io
import zipfile

import requests

# keep downloading until the response is a valid zip archive
r = requests.get(url, stream=True)  # url is defined earlier in the question
check = zipfile.is_zipfile(io.BytesIO(r.content))
while not check:
    r = requests.get(url, stream=True)
    check = zipfile.is_zipfile(io.BytesIO(r.content))
else:
    # the else branch of while/else runs once the loop condition turns false
    z = zipfile.ZipFile(io.BytesIO(r.content))
    z.extractall()
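Note that this loop retries forever if the server never returns a valid zip. A capped-retry variant (a sketch; the five-attempt limit is my own addition, and url is assumed to be defined as above):
import io
import zipfile

import requests

for attempt in range(5):  # give up after five tries instead of looping forever
    r = requests.get(url, stream=True)
    if zipfile.is_zipfile(io.BytesIO(r.content)):
        zipfile.ZipFile(io.BytesIO(r.content)).extractall()
        break
else:
    raise RuntimeError("never received a valid zip file")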
Got a sample that takes a photo OK.
I want to take a video, but there are no StartRecordingAsync() and StopRecordingAsync() methods on a CameraView instance.
Any ideas?
Did you find a way to fix this?
Did you change anything with the default typoscript of the sitemap?
Normally https://www.lumedis.de/sitemap.xml would be an index of sitemaps, the limit of 1000 is "per page" so you would have 2 pages of sitemaps in your index.
But when I visit your sitemap, it doesn't show the index but directly all the URLs.
Take a look at the sitemap of typo3.org, it shows 3 pages of news: https://typo3.org/sitemap.xml
Each with a limit of 1000.
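If you want to check quickly whether a given URL serves a sitemap index or a flat URL list, here is a small sketch (assuming the requests package; element names follow the sitemaps.org protocol):
import requests
import xml.etree.ElementTree as ET

root = ET.fromstring(requests.get("https://www.lumedis.de/sitemap.xml").content)
# an index has <sitemap> children, a flat sitemap has <url> children
print(root.tag.split('}')[-1], len(root))  # 'sitemapindex' vs. 'urlset', plus child count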
Cheers for this! I have a Stream Channel where members can knock to be dragged in; this saves that, and I get an alert now. Thank you!! I'm very new to creating bots; my last attempt was back when Discord's first bots were made :P
I wanted something from the Python standard library like shelve, but also the parameters from cachier. So I built upon the answers of @nehem and @thegreendroid.
import datetime
import os
import shelve
import pickle
from functools import wraps

def shelve_it(cache_dir='/tmp/cache', stale_after=None):
    '''
    A decorator to cache the results of a function.

    Args:
    - cache_dir (str): The directory where the cache will be stored.
    - stale_after (timedelta): The duration after which the cache is considered stale.
    '''
    cache_file = os.path.join(cache_dir, 'cache.shelve')
    if not os.path.exists(cache_dir):
        os.makedirs(cache_dir)

    def decorator(func):
        @wraps(func)
        def new_func(*args, **kwargs):
            cache_key = pickle.dumps((args, kwargs))
            cache_key_str = cache_key.hex()
            with shelve.open(cache_file) as d:
                if cache_key_str in d:
                    if stale_after and 'timestamp' in d[cache_key_str]:
                        timestamp = d[cache_key_str]['timestamp']
                        if datetime.datetime.now() - timestamp > stale_after:
                            del d[cache_key_str]
                            print(f'Cache for {cache_key_str} is stale, recalculating...')
                        else:
                            return d[cache_key_str]['result']
                    else:
                        return d[cache_key_str]
                result = func(*args, **kwargs)
                if stale_after:
                    d[cache_key_str] = {'result': result, 'timestamp': datetime.datetime.now()}
                else:
                    d[cache_key_str] = result
                return result
        return new_func
    return decorator
Usage:
@shelve_it(cache_dir='/tmp/my_cache', stale_after=datetime.timedelta(days=2))
def expensive_function(param, multiplier=2):
    import time
    time.sleep(2)
    return param * multiplier

print(expensive_function('test'))  # This will take 2 seconds
print(expensive_function('test'))  # This will be instant, using the cache
I tried your solution of connecting to the AIDL service through an Intent, but I still got this error:
ActivityManager: Unable to start service Intent { ... } U=0: not found
May I ask if your code works this way, or does it only work with the getService approach?
I am also facing the same issue; please let me know if you find a solution.
The problem here is that you are trying to use an NGINX Plus package, for which you need to purchase a subscription.
Or you could get a GetPageSpeed repository subscription, but meh: https://www.getpagespeed.com/repo-subscribe
There is also ngx_brotli, maintained by Google's team, which is free: https://github.com/google/ngx_brotli
I just used plain old gzip compression, which will do the job 99.9% of the time.
I am using CentOS 9 combined with an nginx web server.
It is not 100% accurate. Even with exclusive access it does not do a full scan by default, so the row count is close but not guaranteed exact. You can rely on it when you need approximate row counts.
If you want fast and accurate counts, maintain the row count using a trigger, as sketched below.
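A minimal sketch of the trigger approach, using sqlite3 as a stand-in for whatever RDBMS you run (table and trigger names are illustrative):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE row_counts (table_name TEXT PRIMARY KEY, n INTEGER NOT NULL);
INSERT INTO row_counts VALUES ('items', 0);
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE table_name = 'items';
END;
CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE table_name = 'items';
END;
""")
conn.execute("INSERT INTO items (name) VALUES ('a'), ('b'), ('c')")
conn.execute("DELETE FROM items WHERE name = 'a'")
print(conn.execute("SELECT n FROM row_counts WHERE table_name = 'items'").fetchone()[0])  # exact count: 2
Reading the count is then a single-row lookup instead of a table scan.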
Hibernate does not try to cache the result of every query executed. Only specific operations like findById benefit from the first-level cache. If you want broader caching, you can either use these methods or implement second-level caching
I had this issue as well on MariaDB version 10.6; after upgrading to 10.7.2, the issue resolved itself. I have found no other way to remedy it.
PlayerView(context).apply {
    player = exoPlayer
    setControllerShowTimeoutMs(0)
    setControllerHideOnTouch(false)
    controllerAutoShow = true
    showController()
}
This is how you can do it with compose.
This issue was closed by the following pull request: https://github.com/primefaces/primevue/pull/6454
How can I solve this issue in Neo4j Desktop 1.6.1 with Neo4j 5.24?
I am getting the same error. I have tried the 4111 1111 1111 1111 test card credentials, but it's not working; it shows me this error: "Your payment could not be completed as this business accepts domestic (Indian) card payments only. Try another payment method"
I had this same issue after a Windows update restarted my machine with Eclipse still running. When I restarted Eclipse, it was complaining about not being able to find java.sql.Connection.
No amount of Maven dancing (Alt-F5, Select All, Refresh), starting Eclipse with -clean, or other jiggery-pokery did anything, until I went into Window > Preferences > Java > Compiler, selected a Compliance Level other than the one listed (and supported for my JDK), and clicked Apply and Close.
This warns about a rebuild, but it was the rebuild that fixed the issue.
I posted this question and I have the answer now. I am happy to have found the solution, and I'm posting it here.
To pass the value dynamically to the only-dir flag, we can make use of the parameters of the DAG (which are supplied while triggering the DAG).
So while triggering the DAG, I included the needed directory path in the config and made use of it via Airflow's Jinja templating.
I am sharing a sample code snippet for doing the same.
import uuid

from airflow.providers.google.cloud.operators.kubernetes_engine import GKEStartJobOperator
from kubernetes.client import models as k8s
from kubernetes.client import V1Volume, V1VolumeMount

# Generate a unique identifier
unique_id = str(uuid.uuid4())

# Add the unique identifier to the labels
labels = {
    "composer-env": cluster,
    "project-name": gcs_project_name,
    "version": "1.0",
    "stream": "e2e",
    "unique-id": unique_id  # unique identifier
}

class CustomGKEStartJobOperator1(GKEStartJobOperator):
    # also template the volumes so Jinja expressions inside them get rendered
    template_fields = GKEStartJobOperator.template_fields + ("volumes",)

    def execute(self, context):
        # Re-render volume_attributes before pod creation
        for vol in self.volumes or []:
            if hasattr(vol, "csi") and vol.csi and vol.csi.volume_attributes:
                for key, val in vol.csi.volume_attributes.items():
                    if isinstance(val, str) and "{{" in val:
                        path_name = self.render_template(val.split("dir=")[1], context)
                        vol.csi.volume_attributes[key] = val.split("dir=")[0] + "dir=" + path_name
        return super().execute(context)

gcs_volume_mount = V1VolumeMount(
    name="gcs-fuse-csi",
    mount_path="/data",
)

gcs_volume = V1Volume(
    name="gcs-fuse-csi",
    csi=k8s.V1CSIVolumeSource(
        driver="gcsfuse.csi.storage.gke.io",
        volume_attributes={
            "type": "gcs",
            "bucketName": datalake_bucket,
            "mountOptions": "implicit-dirs,file-mode=0777,dir-mode=0777,only-dir={{ params.gcs_path }}"
        }
    )
)

check_mnt = CustomGKEStartJobOperator1(
    task_id='check_mnt_task',
    name='check_mnt-pod',
    image=adtf_image,
    volumes=[gcs_volume],
    volume_mounts=[gcs_volume_mount],
    cmds=["bash", "-c"],
    arguments=["cp /data/ab.json /data/abc.json && sleep 300"])
After multiple attempts, I discovered that I also need to place the logic that actually updates the state inside the startTransition callback for the code to execute as expected. Below is my code:
const toggleIsFavorited = async () => {
  setError('');
  const nextIsFavorited = !isFavorited;
  // Wrap optimistic update in startTransition
  startTransition(async () => {
    setOptimisticIsFavorited(nextIsFavorited);
    try {
      const result = await toggleFavorite();
      if (result === 'success') {
        setIsFavorited(nextIsFavorited);
      } else {
        setError('Failed to add to favorites!');
      }
    } catch (err) {
      setError('An error occurred');
    }
  });
};
I still don't know why :(
I changed my file's name from doctr.py to doctr_ocr.py and it worked. I guess we are not supposed to name it as doctr.py
You're signing the wrong nonce. The real string they're asking you to sign is f"You are sign in Meme {nonce}".
To access the data, you need to be logged in to Moodle. When accessing the CSV file using read.csv(URL), even if you're logged in via your browser, R fetches the data from the URL directly itself, so you need to provide your login details with R's request to access the URL.
That would be more complicated for a student than just learning how to download the file, save it to a working directory, and read it into R from there.
In other words, using read.csv(URL) makes it (much!) more complicated.
Make sure Tailwind CSS is installed at exactly 3.3.2 and not ^3.3.2.
It is working for me.
The data-target attribute specifies the element the action will apply to. Typically a CSS selector (such as an ID) is used; beyond that, it's up to you what you want to target.
Here's what I learned while solving this problem.
1. Various manufacturers increase the memory capacity of NOR flash by stacking several 64 MB memory dies.
2. To use all the memory dies, a mechanism for switching between them is needed.
3. Switching between memory dies can be done in software or hardware.
4. In my case, the flash has a special command for switching between dies (C2h). The generic spi-nor driver does not take this feature into account, and the vendor source code for this flash does not implement such a mechanism either. I don't know how to write drivers for Linux, so the problem had to be solved another way.
Solution.
There is a similar pin-to-pin compatible NOR flash from Micron. This chip has a hardware mechanism for switching between dies.
P.S. Maybe one day there will be a solution for the Winbond flash as well.
Use 'BottomSheetView' to wrap 'Text' like this:
import { BottomSheetView } from '@gorhom/bottom-sheet';
//YOUR CODE
<BottomSheetView>
  <Text>BottomSheet</Text>
</BottomSheetView>
pkill -f "firebase emulators:start"
This should kill all running emulators. Then you can restart.
Using splice() (Modifies Original Array)
const array = [1, 2, 3, 4, 5];
const index = array.indexOf(3); // Find index of item to remove
if (index > -1) { // Only splice if item exists
array.splice(index, 1); // Remove 1 item at found index
}
console.log(array); // [1, 2, 4, 5]
git remote show -n origin | sed -E 's/^ +//' | grep -E "^$(git branch --show-current)$" | sed 's/^/origin\//'
In your case, use this instead of a raw dollar sign:
@GET('/([\$])export')
Maybe you uploaded the current bundle for testing instead of publication. Try uploading the bundle intended for publication, and use the next version number. For example, if the current bundle's version is 1.1, use 1.2 for the new one.
'$' means it is waiting for input. It is already displayed where you are about to input, right?
Just type the rest of it.
export QT_XCB_GL_INTEGRATION=none
How to make this permanent? Each time I restart the shell I need to enter it again in order to successfully start the navigator.
The Play Console now shows non-fatals as well. You should have received a notification about this in your Console inbox. So check whether the non-fatals in Firebase include the missing crashes reported by the Play Console.
https://play.google.com/console/about/whats-new/#new-non-fatal-memory-errors-in-crashes-and-anrs
Avoid blocking on async code in constructors using .Wait() or .Result.
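That advice is about .NET, but the pattern ports to any language: do the async work in an async factory instead of the constructor. A sketch of the idea in Python (the class and names are illustrative):
import asyncio

class Client:
    def __init__(self, conn):
        self.conn = conn  # the constructor stays synchronous and cheap

    @classmethod
    async def create(cls):
        # await the slow work here instead of blocking inside __init__
        conn = await asyncio.sleep(0.1, result="connected")  # stand-in for real I/O
        return cls(conn)

async def main():
    client = await Client.create()
    print(client.conn)

asyncio.run(main())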
-- Edit -- statusBarTranslucent={true} simply makes the native status bar translucent. It does not prevent the view from being pushed up by the keyboard.
The other solutions (KeyboardAwareScrollView and avoidKeyboard={false}) did not work for me, but this fixed it for my situation:
import { Keyboard } from 'react-native';
import { useEffect, useState } from 'react';

// this code is from https://docs.expo.dev/guides/keyboard-handling/
export const ReactiveModal ...

const [isKeyboardVisible, setIsKeyboardVisible] = useState(false);

useEffect(() => {
  const showSubscription = Keyboard.addListener('keyboardDidShow', handleKeyboardShow);
  const hideSubscription = Keyboard.addListener('keyboardDidHide', handleKeyboardHide);
  return () => {
    showSubscription.remove();
    hideSubscription.remove(); // also clean up the hide listener
  };
}, []);

const handleKeyboardShow = event => {
  setIsKeyboardVisible(true);
};

const handleKeyboardHide = event => {
  setIsKeyboardVisible(false);
};
// end of code from expo docs

return (
  <Modal
    isVisible={isVisible}
    swipeDirection={['down']}
    style={isKeyboardVisible ? styles.modalEditing : styles.modal} // this is important
    propagateSwipe
    statusBarTranslucent={true}
  >
    {/* more code... */}
  </Modal>
)

const styles = StyleSheet.create({
  modal: {
    justifyContent: 'flex-end',
    margin: 0,
  },
  modalEditing: {
    justifyContent: 'flex-start',
    margin: 0,
  },
  // more styling and export ReactiveModal
This solution gives the modal a dynamic layout depending on whether the keyboard is open. In my situation, the modal does not take up the entire page but rather about 75% of the bottom of the screen when it's open.
flex-end forces the modal to the bottom of the view and flex-start forces it to the top of the view.
This is the best solution I could find, as the keyboard kept pushing the content up in the modal despite setting softwareKeyboardLayoutMode: "pan".
KeiKai OSE does not support ZK 10 (it works with 9 or earlier). Please change to KeiKai EE.
Breeze, Sapphire, Silvertail, and Atlantic are also no longer supported since ZK 10.
If you're currently using Breeze, Sapphire, or Silvertail, we suggest migrating to the iceblue_c.
If you're using Atlantic, we suggest migrating to the iceblue.
Please see
Refer to this FAQ page to resolve this issue: https://docs.mem0.ai/faqs#how-do-i-configure-mem0-for-aws-lambda
You can start by learning JavaScript, as it's the foundation. Once you're comfortable with it, move on to Node.js and Express.js, which are widely used for backend development.
Ah. It's pretty easy to access the attribute's custom option.
If the type is declared like this:
class QueryParamType < ActiveModel::Type::Value
  attr_reader :accepts # needed so callers can read the option back

  def type = :query_param

  def initialize(**args)
    @accepts = args[:accepts]
  end
end
then a caller can access the attribute's option value (for the Comment class above) like this:
Query::Comments.attribute_types["created_at"].accepts
Note that the hash keys of attribute_types are strings, not symbols.
If you go to the documentation of flutter_webrtc on pub.dev, under the Functionality section it says that MediaRecorder is currently available on the web only.
You can roll back to the previous state:
docker service rollback <service>
This works on both successful and failed deployments.
Then you can update your service again.
In my case, I needed to set PYTHONPATH='' when using a virtualenv to avoid this error.
Make both required: b: Required<Alpha>['a']. I tried it, and you can play around with it on the TypeScript playground (typescriptlang).
I am from China, thank you very much
OK, I got it across the line.
My issues were:
1. I was using `Authorization: Bearer <token>`; the correct value should have been `Authorization: DPoP <token>`. Thank you @Dan-Ratner.
2. When making requests to the PDS (/xrpc/com.atproto.repo.createRecord), I was using the entryway (bsky.social) instead of the PDS endpoint. The correct endpoint can be extracted from the JWT's "aud" claim. Thank you yamarten over at GitHub[1].
3. The final error, "message":"DPoP nonce mismatch", which I was getting when making PDS requests, was due to the DPoP nonce changing/expiring; I hadn't handled the changed nonce from the reply, so requests resulted in 401 errors.
[1] https://github.com/bluesky-social/atproto/issues/3212#issuecomment-2764380250
My code now needs a complete refactor to clean up the implementation.
Add this config to your ~/.m2/settings.xml file:
<mirrors>
  <mirror>
    <id>maven-default-http-blocker</id>
    <mirrorOf>releases.java.net</mirrorOf>
    <name>releases.java.net</name>
    <url>YOUR REPO</url>
    <blocked>false</blocked>
  </mirror>
</mirrors>
I am in the same boat. For my data science capstone project I am looking for product reviews, but I am not able to find a public API. Please let me know which companies (like Target, Walmart, Costco, Amazon, or eBay) provide a public API for developers.
As drewtato mentioned, adding the config below should solve the problem.
[[bench]]
name = "my_benchmark" # must match the benchmark file's name
harness = false
Android Studio is a cool development environment, but it has bugs like everything else.
Why did they make the Toast so short that few people have time to read it? This is a clear mistake!
Click it to let the developers see it! Stop worrying and coming up with crutches.
In general, use Dialogs! They're clearer, more beautiful, and they don't run away! :)
A bit late but maybe it'll help someone else. Use [embedFullWidthRows]="true" when defining ag-grid. Refer here.
I would make a few updates to your code:
1. Wrap setOptimisticIsFavorited in startTransition to properly handle the optimistic update.
2. Add error handling and reset the optimistic state on failure.
3. Disable the button during transitions to prevent multiple clicks.
4. Add proper error boundaries around the async operation.
1. Verify and Reinstall the NDK
The error suggests that the source.properties file is missing, which indicates an issue with the NDK installation. Follow these steps to reinstall the NDK:
Open Android Studio.
Go to File > Settings > Appearance & Behavior > System Settings > Android SDK (on macOS, it's under Preferences).
Select the SDK Tools tab.
Check the box for NDK (Side by side) and click Apply or OK to install the latest version of the NDK.
Once installed, verify that the NDK directory (e.g., ~/Android/Sdk/ndk/) contains the source.properties file.
2. Clean and Rebuild the Project
After reinstalling the NDK, clean and rebuild your project to ensure the changes take effect:
cd android
./gradlew clean
cd ..
npx react-native run-android
3. Specify the Correct NDK Version
Sometimes the project may require a specific version of the NDK. You can specify the version in your build.gradle file: open the android/build.gradle file and add or update the ndkVersion property under the android block:
android {
    ndkVersion "27.1.12297006" // Replace with the correct version
}
Sync the project and rebuild.
4. Delete and Reinstall the NDK Folder
If the issue persists, manually delete the NDK folder and reinstall it: Navigate to the NDK directory (e.g., ~/Android/Sdk/ndk/). Delete the problematic NDK folder (e.g., 27.1.12297006). Reinstall the NDK using Android Studio as described in Step 1.
5. Update Gradle and React Native
Ensure that you are using the latest versions of Gradle and React Native, as older versions may have compatibility issues with newer NDK versions. Update the Gradle wrapper by modifying the gradle-wrapper.properties file:
distributionUrl=https\://services.gradle.org/distributions/gradle-8.0-all.zip
Update React Native to the latest version:
npm install react-native@latest
6. Verify Environment Variables
Ensure that your ANDROID_HOME and PATH environment variables are correctly set. Add the following lines to your ~/.bashrc or ~/.zshrc file:
export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$ANDROID_HOME/emulator:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH
Reload the terminal:
source ~/.bashrc
7. Delete Gradle Cache
For anyone trying to do the same as the original poster, see this repository by james77777778:
https://github.com/james77777778/darknet-onnx
{
  "name": "Permissions Extension",
  ...
  "permissions": [
    "activeTab",
    "contextMenus",
    "storage"
  ],
  "optional_permissions": [
    "topSites"
  ],
  "host_permissions": [
    "https://www.developer.chrome.com/*"
  ],
  "optional_host_permissions": [
    "https://*/*",
    "http://*/*"
  ],
  ...
  "manifest_version": 3
}
As suggested by @camickr I managed to get a solution, but I did things a little differently.
Container mainPanel = this.getParent();
CardLayout card = (CardLayout) mainPanel.getLayout();
card.show(mainPanel, "login panel");
Try adding the DLL name to the source:
<ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="/dllName;component/Themes/DarkTheme.xaml"/>
</ResourceDictionary.MergedDictionaries>
I solved this problem by going into the container settings. There’s a configuration for enabling end-to-end HTTP 2.0. When I disabled it, the protocol error stopped appearing.
You could use pandas for CSV processing. In this case pandas will skip the header and give you more possibilities.
But something like this can also help you:
if list(row.values()) == FIELDNAMES:
    pass
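A minimal pandas sketch of the same idea (assuming a data.csv whose first line is the header; FIELDNAMES is the list from the question):
import pandas as pd

df = pd.read_csv("data.csv")  # the header line is consumed automatically
# drop any stray rows that merely repeat the header values
df = df[~(df == list(df.columns)).all(axis=1)]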
Can someone make one to disconnect from GlobalProtect?
ffmpeg_kit_flutter:
git:
url: https://github.com/Sahad2701/ffmpeg-kit.git
path: flutter/flutter
ref: flutter_fix_retired_v6.0.3
This plan is feasible, thank you; but how do I use ffmpeg_kit_flutter_full_gpl? If I use ffmpeg_kit_flutter_full_gpl, it reports an error. Please advise.
I've always felt like "internal" should be the default. Unless you're writing a public library, there's no need or reason to expose anything at all to the world at large. Many people here have said "You only need to use internal when you want to hide something from the outside world", but I'd turn that on its head: you only want to use public for stuff that you expect to be called from the outside world. Unless you're writing a public library, that usually means nothing at all.
That said, it does make some things easier where other kinds of tooling, such as serialization or unit tests, require explicit access to your stuff, but even there there's almost always a workaround, though sometimes it's a bit more difficult.
I really regret that most code just mindlessly uses public for all sorts of stuff that nobody is particularly anxious to publish to the world. Sometimes I just throw my hands up and acquiesce because so much tooling is geared toward making stuff public, but I think this is a sad, almost accidental historical mistake rather than a well-thought-out strategy.
I have a pretty good solution, which has been working since 2008 without problems; we are storing close to 500,000 files of different types using 2 separate tables.
Performance is amazing and memory usage very low, because one table (METADATA) only stores metadata describing the uploaded file, including one field (id) pointing to the second table (CONTENT), which contains a BLOB field (the actual file) and an ID field to link its metadata.
All searching is done on the metadata table, and when we decide to download a file, the ID field allows us to download the content of only that specific record from the second table.
We insert new files in the CONTENT table, and the new ID field is used to insert another record in the METADATA table and register the descriptive info of the file, like name, size, type, user, etc.
METADATA is like a directory of the files. Small table.
CONTENT is the repository of the files with their key (ID). Huge table.
In the second table we store files as big as 256 MB in MySQL. A schema sketch is below.
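A schema sketch of that two-table layout (column names are illustrative, not the author's exact schema; sqlite3 stands in for MySQL):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE content (
    id INTEGER PRIMARY KEY,
    data BLOB NOT NULL
);
CREATE TABLE metadata (
    id INTEGER PRIMARY KEY,
    content_id INTEGER NOT NULL REFERENCES content(id),
    name TEXT, size INTEGER, type TEXT, user TEXT
);
""")
# insert the blob first, then register its metadata
cur = conn.execute("INSERT INTO content (data) VALUES (?)", (b"...file bytes...",))
conn.execute(
    "INSERT INTO metadata (content_id, name, size, type, user) VALUES (?, ?, ?, ?, ?)",
    (cur.lastrowid, "report.pdf", 15, "application/pdf", "alice"),
)
# all searching happens on metadata; the blob is fetched only on download
blob = conn.execute(
    "SELECT c.data FROM metadata m JOIN content c ON c.id = m.content_id WHERE m.name = ?",
    ("report.pdf",),
).fetchone()[0]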
To show file upload progress in NestJS, don't use multer because it waits until the file is fully uploaded. Instead, use busboy and show progress using Javascript's onprogress event.
In the image path parameter, is there a way to use the image content instead of the path?
It's actually quite simple:
#include <iostream>

template <typename T>
void printrgb(int r, int g, int b, T output) {
    std::cout << "\033[38;2;" << r << ";" << g << ";" << b << "m" << output << "\033[0m";
}
The output is printed in the color given by r, g and b: "\033[38;2;<r>;<g>;<b>m" is the ANSI 24-bit (truecolor) escape sequence that sets the foreground color, and "\033[0m" resets it afterwards.
Run the command without the $ in your terminal:
git clone https://www.github.com/sky9262/phishEye.git
I had the same issue. There is a response on this page from the user Sachin Dev Tomar, and that is what worked for my situation. Once I got the Azure development tools installed in Visual Studio, it started to work as expected.
In VS Code I just cleaned out the android folder and re-ran the Expo command to run on Android, and for some reason it works very well : )
Craigslist no longer allows HTML tags in most categories, to prevent spam and scams. Instead, post plain URLs like https://example.com; Craigslist auto-links them in supported sections.
You have a lot of alternatives:
Disable the button and wait some time before enabling it again.
Disable the button, wait for the response from the server, show a success dialog, wait for the user to click close, then enable the button again.
You can check whether the same data was inserted within a defined amount of time and cancel the operation.
I had this issue and for me, it was because my bin folder was included in my project in Visual Studio. I removed all references to <Content Include="bin\..."/> in my .csproj file and the publish started working after that.
Spotify stores encrypted, compressed audio files in its cache, using minimal storage. For your project: compress the audio, encrypt it, store it locally, and decrypt it for playback using native audio tools or libraries, as sketched below.
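A sketch of that compress-encrypt-store-decrypt flow (assuming the cryptography package; file names are illustrative, and already-compressed codecs like MP3 won't shrink much further):
import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # persist this key securely in a real app
f = Fernet(key)

with open("track.mp3", "rb") as src:
    audio = src.read()

blob = f.encrypt(zlib.compress(audio))  # compress, then encrypt
with open("track.cache", "wb") as dst:
    dst.write(blob)

# ...later, for playback:
with open("track.cache", "rb") as src:
    restored = zlib.decompress(f.decrypt(src.read()))
assert restored == audio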
Thank You
The method is creating a thread, and multiple threads may be trying to open the same registry key. The call may not be thread safe.
You can remove the code from the onBootstrap method. In laminas the session is started via the AbstractContainer::__construct method. See the link below for the code relevant to that. In laminas Session is autowired via its factories. You can find which ones are being called in the /vendor/laminas/laminas-session/src/ConfigProvider::getDependencyConfig() method. For laminas components that expose support for both Laminas MVC and Mezzio the Module class just proxies to the ConfigProvider class.
You can find the migration guide here:
https://docs.laminas.dev/migration/
You can find a presentation by Zend here:
https://www.zend.com/webinars/migrating-zend-framework-laminas
Location where the session is started
Since you are trying to upgrade you may want to take a look at the documentation for usage in a laminas-mvc application. It covers all the new options.
https://docs.laminas.dev/laminas-session/application-integration/usage-in-a-laminas-mvc-application/
This should help fix these kinds of missing-plugin issues with WordPress when it doesn't resolve itself: https://github.com/wpallstars/wp-fix-plugin-does-not-exist-notices
I downgraded my Xcode to 16.2 and the app built successfully.
OK, so I suggest you remove the print function at the start and replace the , with a +. For example:
name = input("What's your name? ")
print("Hello " + name)
You can format your data using a community visualization named Templated Record that supports HTML. Here is how it works and an example of the results.
I just tried this but I got the same error:
python minipython1151.py
Traceback (most recent call last):
File "/Users/kate/Pictures/RiverofJobs.com/code2/minipython1151.py", line 1, in <module>
from flask import Flask
ModuleNotFoundError: No module named 'flask'
Sometimes it happens when you use a feature that's only valid for one day, and after that, it won't let you do anything, and you'll have to start another chat. But if you have the paid version, it's very rare for that to happen.
Best regards!
OK, this was not straightforward to diagnose or fix, and I really had to get some pointers from this SonarSource Community topic (credits to @ganncamp).
There are multiple factors that led here.
Factors that are SonarQube-specific:
The more recent SonarQube versions such as 9.9 and 2025.1 have no way to update the email of an external user. This is advertised as a "feature", but I think it is rather a design failure. Although it would be easy to pick the email address from the LDAP query response and update it on logon, SonarQube deliberately chose not to do that. External users get their email field populated on first logon and then stick with it for the rest of their life. Well, unless you dare to touch SonarQube's database directly.
SonarQube users must have unique email addresses. If, on logon, an LDAP query returns a user not yet in SonarQube's own users table (looked up by username), but the email returned by the LDAP server is already present in the same users table, the login fails and the new user is not inserted into the users table.
(I don't have the faintest idea about the reasoning behind this. It's not hard to imagine use cases where multiple users have the same email address. Consider several technical users, which are all set up with [email protected] ...)
You can set up multiple LDAP servers in sonar.properties as external identity providers. The important detail is, that this sort of setup is not meant to work as a failover cluster even though it works similar to a failover cluster:
SonarQube Server's LDAP support is not designed to connect multiple servers in a failover mode.
(...)
Authentication will be tried on each server, in the order they are listed in the configurations until one succeeds.
What is it designed for, then? They probably meant to provide access using heterogeneous LDAP servers. Consider multiple firms or branches, each with their own LDAP directory, using the same SonarQube instance.
To address this use case in a multi-server LDAP setup, the SonarQube users table contains an external_login and an external_identity_provider field, which together must be unique in the whole table. In a single-server LDAP setup, external_identity_provider is always 'sonarqube'. In a multi-server LDAP setup, the field reflects the LDAP server the user was authenticated against the first time they logged in. For example: "LDAP_foobar". (See linked documentation above.) Now our two John Does can be told apart:
login | external_login | external_identity_provider
---|---|---
john_doe | john_doe | LDAP_foobar
john_doe1234 | john_doe | LDAP_yeehaw
Also, since the SonarQube users table had an original "login" field (which is unique of course), they had to work around that unique constraint by adding a random number sequence to the username. Since the login field is probably not used for external users anymore, this is just for backwards compatibility, I guess.
If the configuration looks like this:
ldap.url=ldaps://foo.bar.local:636
ldap.foo.bindDn=CN=foo,OU=orgunit,DC=bar,DC=local
...then the certificate SAN should contain a bar.local DNS field, otherwise the query fails and produces a (debug-level) message in web.log:
2025.04.11 18:43:18 DEBUG web[b7a70ba3-0e9a-4685-a1ad-c2a30e919e64][o.s.a.l.LdapSearch] More result might be forthcoming if the referral is followed
javax.naming.PartialResultException: null
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreImpl(AbstractLdapNamingEnumeration.java:237)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMore(AbstractLdapNamingEnumeration.java:189)
at org.sonar.auth.ldap.LdapSearch.hasMore(LdapSearch.java:156)
at org.sonar.auth.ldap.LdapSearch.findUnique(LdapSearch.java:146)
at org.sonar.auth.ldap.DefaultLdapUsersProvider.getUserDetails(DefaultLdapUsersProvider.java:78)
at org.sonar.auth.ldap.DefaultLdapUsersProvider.doGetUserDetails(DefaultLdapUsersProvider.java:58)
at org.sonar.server.authentication.LdapCredentialsAuthentication.doAuthenticate(LdapCredentialsAuthentication.java:92)
at org.sonar.server.authentication.LdapCredentialsAuthentication.authenticate(LdapCredentialsAuthentication.java:74)
at org.sonar.server.authentication.CredentialsAuthentication.lambda$authenticate$0(CredentialsAuthentication.java:71)
at java.base/java.util.Optional.or(Optional.java:313)
at org.sonar.server.authentication.CredentialsAuthentication.authenticate(CredentialsAuthentication.java:71)
at org.sonar.server.authentication.CredentialsAuthentication.authenticate(CredentialsAuthentication.java:57)
at org.sonar.server.authentication.ws.LoginAction.authenticate(LoginAction.java:116)
at org.sonar.server.authentication.ws.LoginAction.doFilter(LoginAction.java:95)
...
Caused by: javax.naming.CommunicationException: simple bind failed: bar.local:636
at java.naming/com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:96)
at java.naming/com.sun.jndi.ldap.LdapReferralException.getReferralContext(LdapReferralException.java:151)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreReferrals(AbstractLdapNamingEnumeration.java:326)
at java.naming/com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreImpl(AbstractLdapNamingEnumeration.java:227)
... 68 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching bar.local found.
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:383)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:458)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:206)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1510)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1425)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:455)
at java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:925)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1295)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:418)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:391)
at java.naming/com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:359)
at java.naming/com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2896)
at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:348)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:229)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:189)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:152)
at java.naming/com.sun.jndi.url.ldap.ldapURLContextFactory.getObjectInstance(ldapURLContextFactory.java:52)
at java.naming/javax.naming.spi.NamingManager.getURLObject(NamingManager.java:625)
at java.naming/javax.naming.spi.NamingManager.processURL(NamingManager.java:402)
at java.naming/javax.naming.spi.NamingManager.processURLAddrs(NamingManager.java:382)
at java.naming/javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:354)
at java.naming/com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:119)
... 71 common frames omitted
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching bar.local found.
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:212)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:471)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:418)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:238)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:638)
... 100 common frames omitted
The tricky part is: the same rigorous SAN-checking does not happen on server startup, when SonarQube checks connectivity to all configured LDAP servers. Even if the TLS certificate is imperfect, it will log:
2025.04.14 21:42:54 INFO web[][o.s.a.l.LdapContextFactory] Test LDAP connection on ldaps://foo.bar.local:636: OK
Factors and events related to our specific setup and situation:
We had a 5-server LDAP setup. Unfortunately, we meant to use it as a failover cluster, so these LDAP directories were really just replicas of each other.
At a point, several users in the LDAP directory had their email addresses changed.
Somewhat later, we had downtimes for the first few LDAP servers listed in sonar.properties (such as LDAP_foobar). It lasted a few days, then we fixed it.
Meanwhile, we messed up the TLS certificates of our LDAP servers except one down the list (LDAP_valid).
I'm not totally sure how it all played out, but the results were as follows:
login | email | external_login | external_identity_provider
---|---|---|---
john_doe | [email protected] | john_doe | LDAP_foobar
john_doe1234 | [email protected] | john_doe | LDAP_yeehaw
Since the first few LDAP servers listed in sonar.properties (such as LDAP_foobar and LDAP_yeehaw) had a TLS certificate problem, the login process always failed over to LDAP_valid.
The LDAP_valid authentication was successful, but the email address in the LDAP response was already present in the users table, so SonarQube threw an "Email '[email protected]' is already used" error.
How we managed to fix the situation:
SonarQube service stop. Backup.
We changed the LDAP configuration back to a single LDAP-server setup.
We had to update all the users.external_identity_provider database fields to 'sonarqube' to reflect the switch to single LDAP-server setup:
UPDATE users SET external_identity_provider = 'sonarqube' WHERE external_identity_provider LIKE 'LDAP_%';
We removed all the john_doe1234 duplicate user entries (one DELETE statement at a time).
We updated all the old users.email fields to their new values.
SonarQube service start.
The problem was: instead of
https://graph.facebook.com/v22.0...
It should have been:
https://graph.instagram.com/v22.0...
Your routes may have been cached, so you should execute:
php artisan route:clear
This should delete the previously optimized routes.
Yes, it's a good idea; it works for me. Get the server load using sys_getloadavg() and combine it with sleep() to reduce CPU load.
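The same throttling idea in Python, for comparison (os.getloadavg() is Unix-only; the thresholds are illustrative):
import os
import time

def throttle(max_load=2.0, pause=1.0):
    # back off while the 1-minute load average is above the threshold
    while os.getloadavg()[0] > max_load:
        time.sleep(pause)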
Hi, I found an error with the location name; the names are different for the server farm. Like this, I successfully created my function app using the CLI:
C:\Users\jesus\Documents\AirCont\AircontB> az login
C:\Users\jesus\Documents\AirCont\AircontB>az storage account create --name aircontstorage43210 --resource-group aircontfullstack --location "East US 2" --sku Standard_LRS --kind StorageV2 --access-tier Cool --https-only true
C:\Users\jesus\Documents\AirCont\AircontB> az functionapp list-consumption-locations
PS C:\Users\jesus\Documents\AirCont\AircontB> az functionapp create --name AircontBackFn --storage-account aircontstorage43210 --resource-group aircontfullstack --consumption-plan-location "eastus2" --runtime dotnet --functions-version 4 --os-type Windows
As @ChayimFriedman said:
You can use an empty instruction string.
I'm not well-versed on DPDK/testpmd, but you seem to be constraining the number of CPUs and queues it will use compared to what iperf3 will likely use.
Assuming your iperf3 is using TCP (guessing since the command line is not provided), it will be taking advantage of any stateless offloads offered by your NIC(s).
Assuming it isn't simply a matter of luck of timing, seeing higher throughput with more streams implies that the TCP window size settings at the sender, and the receiver, are not sufficient to enable achieving the full bandwidth-delay product with the smaller number (eg single) of streams.
There are likely plenty of references for TCP tuning for high bandwidth delay product networks. One which touches upon the topic, which is near and dear to my heart for some reason :) is at https://services.google.com/fh/files/misc/considerations_when_benchmarking_tcp_bulk_flows.pdf
It appears to be an issue with Node.js version 22.14 and the CB SDK. I tried reinstalling it twice (no joy). It worked on my old machine running 22.11. I just installed the non-LTS 23.11 and it all works now.
Worked on Ubuntu 24.04:
Get download url from https://downloads.mysql.com/archives/community/
wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.44-linux-glibc2.12-x86_64.tar
tar xvf mysql-5.7.44-linux-glibc2.12-x86_64.tar
tar zxvf mysql-5.7.44-linux-glibc2.12-x86_64.tar.gz
Configure --with-mysql-config for mysql2:
bundle config build.mysql2 "--with-mysql-config=PATH_TO/mysql-5.7.44-linux-glibc2.12-x86_64/bin/mysql_config"
bundle install
# or bundle pristine mysql2
Check that libmysqlclient.so.20 is linked to mysql-5.7.44-linux-glibc2.12-x86_64/lib/libmysqlclient.so.20, for example:
ldd /home/ubuntu/.rvm/gems/ruby-2.3.8/gems/mysql2-0.3.21/lib/mysql2/mysql2.so
linux-vdso.so.1 (0x00007ff026bda000)
libruby.so.2.3 => /home/ubuntu/.rvm/rubies/ruby-2.3.8/lib/libruby.so.2.3 (0x00007ff026800000)
libmysqlclient.so.20 => /home/ubuntu/mysql-5.7.44-linux-glibc2.12-x86_64/lib/libmysqlclient.so.20 (0x00007ff025e00000)
...
The only way I fixed this was to use an update-or-insert (upsert) SQL statement.
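For example, a single upsert statement (a sqlite3 sketch; the table and column names are illustrative):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
# insert the row, or update it in place if the key already exists
conn.execute(
    "INSERT INTO settings (key, value) VALUES (?, ?) "
    "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
    ("theme", "dark"),
)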
Try using an interactive bash shell:
import subprocess
subprocess.run([
"ssh", "me@servername",
"bash -i -c 'source ~/admin_environment && exec bash'"
])
I have been struggling a bit with finding a way to draw lines outside the plot area but found a creative solution in this previous thread: How to draw a line outside of an axis in matplotlib (in figure coordinates). Thanks to the author for the solution once again!
My proposed solution for the problem is the following (see the explanation of distinct parts in the code):
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
data = {'BatteryStorage': {('des-PDef3', 'Central Africa'): 0.0,
('des-PDef3', 'Eastern Africa'): 2475.9,
('des-PDef3', 'North Africa'): 98.0,
('des-PDef3', 'Southern Africa'): 124.0,
('des-PDef3', 'West Africa'): 1500.24,
('pes-PDef3', 'Central Africa'): 0.0,
('pes-PDef3', 'Eastern Africa'): 58.03,
('pes-PDef3', 'North Africa'): 98.0,
('pes-PDef3', 'Southern Africa'): 124.0,
('pes-PDef3', 'West Africa'): 0.0,
('tes-PDef3', 'Central Africa'): 0.0,
('tes-PDef3', 'Eastern Africa'): 1175.86,
('tes-PDef3', 'North Africa'): 98.0,
('tes-PDef3', 'Southern Africa'): 124.0,
('tes-PDef3', 'West Africa'): 0.0},
'Biomass PP': {('des-PDef3', 'Central Africa'): 44.24,
('des-PDef3', 'Eastern Africa'): 1362.4,
('des-PDef3', 'North Africa'): 178.29,
('des-PDef3', 'Southern Africa'): 210.01999999999998,
('des-PDef3', 'West Africa'): 277.4,
('pes-PDef3', 'Central Africa'): 44.24,
('pes-PDef3', 'Eastern Africa'): 985.36,
('pes-PDef3', 'North Africa'): 90.93,
('pes-PDef3', 'Southern Africa'): 144.99,
('pes-PDef3', 'West Africa'): 130.33,
('tes-PDef3', 'Central Africa'): 44.24,
('tes-PDef3', 'Eastern Africa'): 1362.4,
('tes-PDef3', 'North Africa'): 178.29,
('tes-PDef3', 'Southern Africa'): 210.01999999999998,
('tes-PDef3', 'West Africa'): 277.4}}
df = pd.DataFrame.from_dict(data)
df.plot(kind="bar", stacked=True)

region_labels = [idx[1] for idx in df.index] #deriving the part needed for the x-labels from the data dict
plt.tight_layout() #necessary for an appropriate display
plt.legend(loc='center left', fontsize=8, frameon=False, bbox_to_anchor=(1, 0.5)) #placing legend outside the plot area as in the Excel example
ax = plt.gca()
ax.set_xticklabels(region_labels, rotation=90)

#coloring labels for easier interpretation
for i, label in enumerate(ax.get_xticklabels()):
    if i <= 4:
        label.set_color('red') #set favoured colors here
    if 9 >= i > 4:
        label.set_color('green')
    if i > 9:
        label.set_color('blue')

plt.text(1/6, -0.5, 'des', fontweight='bold', transform=ax.transAxes, ha='center', color='red') #adding labels outside the plot area, representing the 'region group code'
plt.text(3/6, -0.5, 'pes', fontweight='bold', transform=ax.transAxes, ha='center', color='green') #keep coloring respective to labels
plt.text(5/6, -0.5, 'tes', fontweight='bold', transform=ax.transAxes, ha='center', color='blue')
plt.text(5/6, -0.6, 'b', color='white', transform=ax.transAxes, ha='center') #phantom text to trick `tight_layout` thus making space for the texts above
ax2 = plt.axes([0, 0, 1, 1], facecolor=(1, 1, 1, 0)) #for adding lines (i.e., brackets) outside the plot area, we create new axes

#creating the first bracket
x_start = 0 + 0.015
x_end = 1/3 - 0.015
y = -0.42
bracket1 = [
    Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket1:
    ax2.add_line(line)

#second bracket
x_start = 1/3 + 0.015
x_end = 2/3 - 0.015
bracket2 = [
    Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket2:
    ax2.add_line(line)

#third bracket
x_start = 2/3 + 0.015
x_end = 1 - 0.015
bracket3 = [
    Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5),
    Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5),
]
for line in bracket3:
    ax2.add_line(line)

ax2.axis("off") #turn off axes for the new axes
plt.tight_layout()
plt.show()
Resulting in the following plot:
# Create a string with 5,000 peach emojis
peach_spam = "🍑" * 5000

# Save it to a text file
with open("5000_peaches.txt", "w", encoding="utf-8") as file:
    file.write(peach_spam)

print("File created: 5000_peaches.txt")
CA works on the ASG/node-group principle, and on bare metal we don't have ASGs/node groups. I tried designing a virtual node group and a virtual "cloud client" for bare metal, but there were so many issues with this design that I gave up.
I ended up creating my own cluster-bare-autoscaler (https://github.com/docent-net/cluster-bare-autoscaler). Not production-ready as of now (2025-04), but it should be soon. It already does what it's supposed to do, but has some limitations.
Awaiting your input, folks!