I prefer SonarLint; it highlights possible NullPointerException risks.
Thanks for all the useful remarks,
Seems like std::bit_cast is the right way to do this. It requires C++20 but I think I'm ok with that.
Apparently I was missing the @app.function_name decorator for every function, and I had to fix the imports from
from . import RSSNewsletter
to
import RSSNewsletter
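For reference, here is a rough sketch of what the v2 Python programming model entry point (function_app.py) could look like after those two fixes. The timer trigger, schedule, and the RSSNewsletter.run() helper are illustrative assumptions, not the original code:

import azure.functions as func
import RSSNewsletter  # absolute import instead of "from . import RSSNewsletter"

app = func.FunctionApp()

# Every function needs its own @app.function_name plus a trigger decorator.
@app.function_name(name="RSSNewsletter")
@app.schedule(schedule="0 0 8 * * *", arg_name="timer", run_on_startup=False)
def rss_newsletter(timer: func.TimerRequest) -> None:
    RSSNewsletter.run()  # hypothetical helper in RSSNewsletter.py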
One of the best tools I use in my apps is the Talker package. It provides a logs screen to track every log and error in your app. Check the docs here.
My first suspicion here would be a memory-related error. These will show up in the kernel log:
$ sudo dmesg -T
as OOM events. You could also use strace on the application to look for memory allocations that fail (the brk/mmap system calls behind malloc()), but you do have to make sure you run strace on the underlying binary, not any wrapper script that may be invoked by the rule.
If the application has just enough memory available when run outside of Snakemake then it may be the overhead of Snakemake pushing it over the edge. Also, with Snakemake are you running one job at a time or allowing it to run multiple jobs in parallel? Are you using multiple threads within the rule?
How to do it:
Method 01:
You can simply create an AWS DataSync task for this.
First, create a DataSync task with the source and destination locations. Since you are copying data from one S3 bucket to another, this task won't need a DataSync agent.
Then run the task and the data will be migrated to the destination bucket (the time taken depends on the total size of the data you are migrating).
Method 02:
You can enable Cross-Region Replication (CRR) on S3.
These articles will guide you through replicating an existing S3 bucket. I think this method is the more cost-effective option for your task.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/
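If you prefer to script it, a rough boto3 sketch of enabling replication on an existing bucket is shown below. The bucket names and IAM role ARN are placeholders, versioning must already be enabled on both buckets, and note that replication only copies new objects unless you also run S3 Batch Replication for the existing ones:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="my-source-bucket",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }
        ],
    },
)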
I prefer using the SonarQube for IDE plugin. It shows more potential problems and describes the whats and whys.
Now it only supports .exe and .MSI format packages.
I ran into the same problem and solved it. In my case, there is a space in the font name: (Spleen 32x64). And instead of entering:
Spleen 32x64
In the "Font Family", I simply add quote marks, like:
"Spleen 32x64"
And it works.
The surprising result you're seeing, where an O(n log n) algorithm performs faster than an O(n) algorithm, is due to several practical factors:
Constant Factors and Lower-Level Operations: Even though the theoretical time complexity of sorting is O(n log n), the constants involved in sorting (like in Timsort, which is the algorithm used by Python's sort()) can sometimes outperform O(n) solutions, especially when the input size is small or when the implementation of the O(n) solution involves costly operations.
Efficient Sorting Algorithms: The Timsort algorithm used in Python is highly optimized for practical use cases. It is particularly fast on real-world data, especially if there are ordered or partially ordered sequences in the input. Even though the sorting step theoretically has higher time complexity, in practice, it can run faster because of optimizations that reduce the constant factors.
Set Operations Overhead: In your O(n) solution, you're relying heavily on set operations, specifically in and add. While these operations are average O(1), they can sometimes take more time than expected because of factors like hash collisions, dynamic resizing, or poor cache locality when iterating over the set. These operations might not be as fast as they theoretically should be, especially when you're performing a lot of lookups or insertions.
Repeated Operations in the First Algorithm: In your first algorithm, you're doing the following:
while (num + 1) in s:
    num += 1
    current_streak += 1
This loop could lead to repeated set lookups for numbers that are consecutive. Since you're iterating over nums and performing a lookup operation for every number in the set, this could end up causing a lot of redundant work. Specifically, for each number, you're incrementing num and repeatedly checking num + 1. If there are a lot of consecutive numbers, this can quickly become inefficient.
The time complexity here might still be O(n) in theory, but due to the redundant operations, you're hitting a performance bottleneck, leading to TLE.
Efficiency of the Second Algorithm: In the second algorithm, you've made a few optimizations:
next_num = num + 1
while next_num in nums:
    next_num += 1
Here, the check for next_num in nums is still O(1) on average, and the update to next_num skips over consecutive numbers directly without performing additional redundant lookups. This change reduces the number of unnecessary checks, improving the algorithm’s performance and avoiding redundant work.
Even though the theoretical time complexity is the same in both cases (O(n)), the second version is faster because it avoids unnecessary operations and works more efficiently with the set lookups.
Impact of Set Operations: In the first solution, you may have faced inefficiencies due to the use of the current_streak variable and updating num during iteration. Additionally, by modifying num in the loop, you're creating potential confusion and inefficient memory access patterns (e.g., reusing the same variable and performing multiple lookups for numbers that are already part of the streak).
The second solution benefits from using next_num as a separate variable, which simplifies the logic and makes the code more efficient by focusing on skipping over consecutive numbers directly without redundant checks.
O(n log n) solutions can sometimes perform faster than O(n) in practice due to constant factors, the specific nature of the data, and the efficiency of underlying algorithms like Timsort.
Your first O(n) solution caused TLE due to redundant operations and inefficiencies in how consecutive numbers were processed.
Your second O(n) solution passed because it streamlined the logic, minimized redundant operations, and worked more efficiently with the set data structure.
Optimizing algorithms often involves reducing redundant operations and ensuring that you don't perform the same work multiple times. Even with the same time complexity, how you structure the code and the operations you choose can significantly affect performance.
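To make the point concrete, here is a common way to structure the set-based approach so that each element is visited only a constant number of times: only start counting at streak beginnings (numbers whose predecessor is not in the set). This is a generic sketch of the pattern, not your original code:

def longest_consecutive(nums):
    num_set = set(nums)
    best = 0
    for num in num_set:
        # Only start a streak at its first element, so the inner
        # while loop runs at most n times across the whole function.
        if num - 1 not in num_set:
            length = 1
            while num + length in num_set:
                length += 1
            best = max(best, length)
    return best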
It seems to have been fixed in latest release (65.6.0).
val_counts = df["x"].value_counts()
filtered_df = df[df["x"].map(val_counts) <= ceiling]
By default the tooltip aggregates the data from one xAxis, but you can override it with tooltip.formatter; see the link to the API: https://api.highcharts.com/highcharts/tooltip.formatter
The starting point can be like this:
tooltip: {
    shared: true,
    formatter: function () {
        let tooltipText = '<b>' + this.x + '</b>';
        this.points.forEach(point => {
            tooltipText += '<br/>' + point.series.name + ': ' + point.y;
        });
        return tooltipText;
    }
}
Please see a simplified config, where you can get the shared tooltip for multiple axes, I trust you will be able to adjust it for your project: https://jsfiddle.net/BlackLabel/pvr1zg26/
ISO certification itself doesn’t guarantee anything about the language (like English, Spanish, etc.) being used.
Instead, ISO standards focus on processes, quality, consistency, and compliance, regardless of the language.
For example:
ISO 9001 (Quality Management) ensures an organization follows consistent quality processes.
ISO 27001 (Information Security) ensures data is protected based on defined standards.
These standards can be documented and implemented in any language as long as:
The processes are clearly understood.
The implementation matches the intent of the ISO standard.
The audit documentation is available in a language the auditor understands.
I have the following challenge. I'm using Dapper to access 2 databases in the same codebase.
Database 1: Uses UTC dates (I could change this but would rather not)
Database 2: Uses local dates (not something I can change)
These type handlers are static, which means they are not repository/connection-string specific:
SqlMapper.AddTypeHandler(new DateTimeUtcHelper());
Any ideas how to solve this problem?
(I could implement DateTimeOffset in Database 1 so the data type is different.)
When dealing with localized strings in Swift, especially for UI elements, choosing the right approach is crucial. Here’s a breakdown of the options:
LocalizedStringKey (Best for SwiftUI) - Use when: You are directly using a string in a SwiftUI view (e.g., Text("hello_world")).
Why? SwiftUI automatically localizes LocalizedStringKey, making it the best choice for UI text.
Example:
Text("hello_world") // Automatically looks for "hello_world" in Localizable.strings
Pros:
✅ No need to manually use NSLocalizedString
✅ Cleaner SwiftUI code
✅ Supports string interpolation
Cons:
❌ Can’t be used outside SwiftUI (e.g., in business logic)
LocalizedStringResource (Best for Performance) - Use when: You need efficient string translation with better memory handling.
Introduced in: iOS 16
Why? It is more optimized than LocalizedStringKey, but still works well with SwiftUI.
Example:
Text(LocalizedStringResource("hello_world"))
Pros:
✅ More optimized for localization
✅ Reduces memory overhead
Cons:
❌ Requires iOS 16+
String with NSLocalizedString (Best for Non-SwiftUI Code) - Use when: You are not using SwiftUI, but need translations in ViewModels, controllers, or business logic.
Why? NSLocalizedString fetches translations from Localizable.strings.
Example:
let greeting = NSLocalizedString("hello_world", comment: "Greeting message")
print(greeting)
Pros:
✅ Works anywhere (UIKit, business logic, networking)
✅ Supports dynamic strings
Cons:
❌ Not automatically localized in SwiftUI
❌ More verbose
In the "Plots" panel you have the "zoom" option, which detachs the plot window and allows you to visualize it full-screen. Usually, the resolution doesn't drop in the process. If you want to inspect the plot in the IDE, that's a good solution.
Additionally, if you want to quickly export the file, you can just take a screenshot of the full-screen plot.
Same issue here. Fresh setup for Eclipse 2025-03.
Windows -> Preferences -> Version Control -> selecting the SVN node will produce:
I didn't find the bug (the code seems OK), but I wouldn't disable gravity at runtime. Instead, I would toggle the isKinematic flag on/off; this way (when isKinematic is on) you know that no forces are affecting your player. And for the slopes I would just apply a bigger force.
Not having the exact same issues as you, but definitely having issues in this update. Preview is super slow and buggy. As soon as I use a TextField anywhere, even in a basic test, I am getting the error "this application, or a library it uses, has passed an invalid numeric value (NaN, or not-a-number) to CoreGraphics API and this value is being ignored. Please fix this problem." in the console. Build times definitely seem soooooo much slower; it's making the process annoying when it doesn't need to be.
I've cleaned the derived data, tried killing every Xcode process going, restarted a billion times lol. Great update this time around.
qpdf input.pdf --overlay stamp.pdf --repeat=z -- result.pdf
This dropdown behavior is likely managed by a TabControl or a custom tabbed document manager within your application. Here are some areas to investigate:
1-Check TabControl Properties:
If you're using a TabControl, check if SizeMode is set to Fixed or FillToRight, as this can affect how the dropdown appears.
Look for TabControl properties like DrawMode, Padding, and Multiline that might be affecting the display.
2-Event Handling for Window Resizing:
If resizing triggers the dropdown to appear, the control might not be refreshing correctly. Look for Resize or Layout event handlers where the tab control is refreshed (Invalidate(), Refresh()).
3-ScintillaNET or Custom UI Code:
Since you’re using ScintillaNET, there might be a custom tab manager handling open documents. Check for any Scintilla or related UI event handlers that modify the tab behavior.
4-Force a Refresh When a Tab is Added:
If new tabs are being added dynamically, make sure the control is properly updated. Try manually forcing a redraw when a new tab is added:
tabControl.Invalidate();
tabControl.Update();
5-Debugging Strategy:
Set breakpoints in places where tabs are created, removed, or refreshed.
Try manually calling tabControl.Refresh() after adding tabs to see if it immediately triggers the dropdown.
URI link = URI.create("http://example.com");
URI.create(link.toString() + "?name=John");
Downgrading to Python 3.11 is one of the solutions for this issue, but instead of reverting to Python 3.11, I tried upgrading to Python 3.12.7 and it started working properly.
bool _shouldCollapse = true;
onTreeReady: (controller) {
if (_shouldCollapse && rootNode.children.isNotEmpty) {
WidgetsBinding.instance.addPostFrameCallback((_) {
if (mounted) {
controller.collapseNode(rootNode.children.first as IndexedTreeNode<NodePayload>);
setState(() => _shouldCollapse = false);
}
});
}
},
It seems to have changed. I used the following scopes: 'w_member_social profile openid email r_organization_social'. Try it out.
#GroupMembers ul { list-style: disc; }
Your tag's id is different from the id in your CSS selector.
Try disconnecting and deleting the existing runtime, then create a new runtime. This solved the issue for me.
It turns out this was caused by Firebase wanting a newer NDK version than what was in my Flutter SDK defaults.
Make the changes below in your test.sh and test2.sh. We have to pass the variables as arguments to the script.
test.sh (updated version):
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh ${TESTVARIABLE}
test2.sh (updated version):
#!/bin/bash
echo ${1}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>3D Wheel Menu with JSON Events</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
display: flex;
justify-content: flex-end;
align-items: center;
min-height: 100vh;
background: #f5f5f5;
font-family: Arial, sans-serif;
}
.wheel-container {
perspective: 1000px;
width: 250px;
height: 400px;
position: relative;
border: 0px solid #000;
padding: 35px;
display: flex;
justify-content: center;
align-items: center;
position: fixed;
right: 0;
}
.wheel {
width: 200px;
height: 350px;
position: relative;
margin: 0 auto;
transform-style: preserve-3d;
transition: transform 0.1s linear;
transform: rotateX(0deg);
}
.wheel__segment {
position: absolute;
width: 100%;
height: 40px;
top: 50%;
display: flex;
justify-content: center;
align-items: center;
background: #ddd;
border: 1px solid #aaa;
transform-origin: 50% 0;
color: #333;
font-size: 14px;
font-weight: bold;
transition: box-shadow 0.3s ease;
}
.wheel__segment span {
transform: translateZ(120px);
}
.wheel__segment:hover {
box-shadow: 0 10px 20px rgba(0, 0, 0, 0.3);
}
/* Styling for contentview */
#contentview {
position: absolute;
left: 30px;
top: 50px;
width: 400px;
padding: 20px;
background-color: #fff;
border-radius: 8px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
font-size: 16px;
}
</style>
</head>
<body>
<div class="wheel-container">
<div class="wheel"></div>
</div>
<!-- Contentview div where item info will be shown -->
<div id="contentview">
<h2>Practical application of the source code and ideas of this article.</h2><br>
<p id="item-info">Click on an item to see its details here.</p>
</div>
<script>
(function($) {
const spinSound = new Audio('https://cdn.pixabay.com/download/audio/2025/01/19/audio_fca0fdbc60.mp3?filename=wind-swoosh-short-289744.mp3');
const $wheel = $('.wheel');
const segmentCount = 20;
const segmentAngle = 360 / segmentCount;
const wheelHeight = $wheel.height();
const radius = wheelHeight / 2;
const segmentHeight = (2 * Math.PI * radius) / segmentCount;
// Data for items on the wheel
const items = [
{
id: 1,
title: 'Item 1',
Action: 'click',
Event: () => displayContent('Item 1', 'Details of Item 1')
},
{
id: 2,
title: 'Item 2',
Action: 'dblclick',
Event: () => displayContent('Item 2', 'Details of Item 2')
},
{
id: 3,
title: 'Item 3',
Action: 'click',
Event: () => displayContent('Item 3', 'Details of Item 3')
},
{
id: 4,
title: 'Item 4',
Action: 'dblclick',
Event: () => displayContent('Item 4', 'Details of Item 4')
},
{
id: 5,
title: 'Item 5',
Action: 'click',
Event: () => displayContent('Item 5', 'Details of Item 5')
},
{
id: 6,
title: 'Item 6',
Action: 'dblclick',
Event: () => displayContent('Item 6', 'Details of Item 6')
}
];
// Extend items array to match segment count
const extendedItems = [];
for (let i = 0; i < segmentCount; i++) {
extendedItems.push(items[i % items.length]);
}
// Function to create segments on the wheel
for (let i = 0; i < segmentCount; i++) {
const angle = segmentAngle * i;
const item = extendedItems[i];
const $segment = $('<div>', {
class: 'wheel__segment',
'data-index': i
}).css({
'transform': `rotateX(${angle}deg) translateZ(${radius}px)`,
'height': segmentHeight
}).html(`<span>${item.title}</span>`).appendTo($wheel);
// Attach event handlers
$segment.on(item.Action, function() {
item.Event(); // Trigger event from JSON data
});
}
// Function to update contentview div with item details
function displayContent(title, details) {
$('#item-info').html(`<strong>${title}</strong><br>${details}`);
}
// Function to handle the size of the wheel dynamically
function changeWheelSize(width, height) {
const $container = $('.wheel-container');
const $wheel = $('.wheel');
$container.css({
width: width + 'px',
height: height + 'px'
});
$wheel.css({
width: (width - 70) + 'px',
height: (height - 70) + 'px'
});
const newWheelHeight = $wheel.height();
const newRadius = newWheelHeight / 2;
const newSegmentHeight = (2 * Math.PI * newRadius) / segmentCount;
$('.wheel__segment').each(function(i) {
const angle = segmentAngle * i;
$(this).css({
'transform': `rotateX(${angle}deg) translateZ(${newRadius}px)`,
'height': newSegmentHeight
});
});
}
// Call function to adjust wheel size
changeWheelSize(250, 500);
let currentRotation = 0;
let isDragging = false;
let startY = 0;
let lastY = 0;
let lastTime = 0;
let velocity = 0;
let animationId = null;
// Function to play sound when wheel rotates
function playSpinSound() {
spinSound.currentTime = 0;
spinSound.play();
}
// Function to update wheel rotation
function updateWheel() {
$wheel.css('transform', `rotateX(${currentRotation}deg)`);
playSpinSound();
}
// Mouse and touch event handlers for dragging
$wheel.on('mousedown touchstart', function(e) {
e.preventDefault();
isDragging = true;
startY = getEventY(e);
lastY = startY;
lastTime = performance.now();
cancelAnimationFrame(animationId);
velocity = 0;
});
$(document).on('mousemove touchmove', function(e) {
if (!isDragging) return;
e.preventDefault();
const currentY = getEventY(e);
const deltaY = currentY - lastY;
currentRotation -= deltaY * 0.5;
velocity = -deltaY / (performance.now() - lastTime) * 15;
lastY = currentY;
lastTime = performance.now();
updateWheel();
});
$(document).on('mouseup touchend', function() {
if (!isDragging) return;
isDragging = false;
if (Math.abs(velocity) > 0.5) {
applyMomentum();
}
});
function getEventY(e) {
return e.type.includes('touch') ? e.touches[0].pageY : e.pageY;
}
function applyMomentum() {
const friction = 0.96;
velocity *= friction;
if (Math.abs(velocity) > 0.5) {
currentRotation += velocity;
updateWheel();
animationId = requestAnimationFrame(applyMomentum);
}
}
})(jQuery);
</script>
</body>
</html>
As commented by @Tzane, I ended up using enums. It was a pain to define an enumerator for each specific test, but this way I was able to require all test reporting results to be of type enum, which forces the strings to be static each time.
To be honest, my main takeaway is that the main test results should definitely not be reported as a string, because that requires more work to parse and analyze later.
It's supported in Safari 16.4
https://webkit.org/blog/13966/webkit-features-in-safari-16-4/
I think all browsers now support this.
This is the solution I came up with, which is completely dynamic, so df_criterias can have as many condition columns as it needs.
from pyspark.sql.functions import col

df_criterias = spark.createDataFrame(
[
("IN ('ABC')", "IN ('XYZ')", "<2021", "", "Top"),
("IN ('ABC')", "NOT IN ('JKL','MNO')", "IN ('2021')", "", "Bottom"),
],
["CriteriaA", "CriteriaB", "CriteriaC", "CriteriaD", "Result"]
)
dict = {
"CriteriaA" : "ColumnA",
"CriteriaB" : "ColumnB",
"CriteriaC" : "ColumnC",
"CriteriaD" : "ColumnD"
}
# Rename rule columns and retrieve only columns defined in dictionary above
df_criterias_renamed = df_criterias.select([col(x).alias(dict.get(x, x)) for x in dict.keys()])
# Set up all combinations of rules
rows_criterias = df_criterias_renamed.distinct().collect()
# Cycle through rules
for row in rows_criterias:
filters = row.asDict()
# Ignore if filter is blank
where_clause_list = [f"{k} {v}" for k,v in filters.items() if v != "" and k!= "Result"]
# Combine all clauses together
where_clause = " AND ".join(where_clause_list)
print(where_clause)
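To actually apply each generated clause to the data, one option is to filter with expr and attach the rule's Result value. This sketch assumes a hypothetical df_data DataFrame that contains ColumnA through ColumnD:

from pyspark.sql.functions import expr, lit

for row in rows_criterias:
    filters = row.asDict()
    where_clause_list = [f"{k} {v}" for k, v in filters.items() if v != "" and k != "Result"]
    where_clause = " AND ".join(where_clause_list)
    # Rows matching the rule get tagged with its Result value
    matched = df_data.filter(expr(where_clause)).withColumn("Result", lit(filters["Result"]))
    matched.show()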
I had the same problem. Try to insert a SliverPersistentHeader (with the same height as your AppBar widget) as the first sliver in your CustomScrollView.
The option to Move to Position is not available for users at the Stakeholder access level in this Azure DevOps organization. You may evaluate whether it is necessary to assign a more privileged license to that user, such as Basic (5 free Basic licenses), and ask the user to check again once it is granted.
Refer to the documents below for further information.
If using custom containers, you must somehow pass environment variables from the main Azure App Service process (Kudu) to your container. For example, in your ENTRYPOINT or CMD you could run printenv > .env, which creates a .env file with all the environment variables within App Service that Kudu knows about.
Try using !important for your font-family:
.bodytxt {
font-family: 'Resistance', sans-serif !important;
...
}
I used sqlite:///:localhost: and it solved the problem. Thanks to @rasjani for the suggestion!
If you are still struggling to find a solution, make sure your Python version is supported by TensorFlow; some versions are not, and in that case your only option is to uninstall it and install a supported version.
For future readers, Postgres has a built-in solution, REPLACE().
Here's it added to my CONCAT() line to accomplish the desired result:
REPLACE(CONCAT({concat_columns}), '"', '')
Configure your Keycloak client as “bearer-only” and use OWIN’s (or ASP.NET Core’s) JWT middleware to validate tokens. Set your issuer, audience, and signing key (ideally retrieved from Keycloak’s OIDC discovery endpoint) to match Keycloak’s settings. This lets your .NET MVC app validate the bearer tokens issued by Keycloak.
Hang on.... wasn't Composer supposed to mean we had one-click installation?
So how come when I copy the link out of the Drupal module page e.g.
composer require 'drupal/blog:^3.1'
I get the error message
"Root composer.json requires drupal/blog, it could not be found in any version, there may be a typo in the package name."
And the proposed solutions are that I have to write and run code?
Blog is not the only package I'm having this problem with. Yesterday, I solved it by installing a previous version (2.x instead of the current 3.x) for a different package. And it's on both D10 and D11 with the latest version of Composer (which D11 reports isn't the right one) and approved PHP.
This is silly.
PrimeFaces CSP doesn't work with f:ajax!
Use
p:ajax!
{ "name": "Fun City Lag 11", "displayName": "Fun City Lag 11" } react-native run-android --variant=release
Yo, so the main reason it's not working is because the stages keyword in your .gitlab-ci.yml file overwrites the stages. Since .pre is a special stage that isn't listed in .gitlab-ci.yml, it gets ignored when you define stages:.
Just add - .pre to stages.
include:
  - local: "prestage.yml"

stages:
  - .pre # Add here
  - build
  - test

job1:
  stage: build
  script:
    - echo "This job runs in the build stage."

job2:
  stage: test
  script:
    - echo "This job runs in the test stage."
Leave the other settings as they were, and it should work.
=SUMPRODUCT((A2:A7=G1)*(B1:E1=G2)*B2:E7)
SUMPRODUCT processes arrays efficiently without needing multiple lookup functions.
Works well if the dataset is structured properly.
Just by looking at your code (I haven't tested it): your custom exception (STOP) logic currently contains a return statement directly before the raise, so the exception is never raised.
How to fix it? Remove the aforementioned return statement.
On Windows 11, as of today, right-click + "Run with PowerShell" does not work.
The policy error still appears, even with RemoteSigned set at all scopes.
If the fixed navbar is pushing up on only one of your pages, it’s likely due to some CSS or layout issue specific to that page. Here are a few steps you can take:
Check for Padding/Margin Issues: Ensure that your page content has the right padding-top to account for the navbar height. You can add a margin-top equal to the height of the fixed navbar to prevent the content from being pushed up.
Inspect Custom CSS: If you’ve customized the navbar or page styles, make sure there are no conflicting styles that could affect the layout.
Viewport Differences: Sometimes, different pages may have slight variations in viewport height or other elements that cause layout shifts. Ensure consistency across all pages.
For the best long-term results, you might want to consider using a solid structure and layout framework. Consistency and careful attention to the details make a big difference.
Thanks for reporting this. Unfortunately, there is no such option to dump the whole configuration at runtime, but you can always debug the nginx process to dump its configuration, as mentioned in the official docs: https://docs.nginx.com/nginx/admin-guide/monitoring/debugging/#dumping-nginx-configuration-from-a-running-process
If anyone runs into this issue while defining a BuildConfig field of type String in your Gradle file, the trick to make it work is to add an extra backslash before \n. Like this:
buildConfigField("String", "YourStringVariable", "\"First Line.\\nSecond Line.\"")
https://github.com/steveio/CircularBuffer in case anyone is interested, and https://github.com/steveio/ArrayStats for running trend analysis.
This is indeed a bug. The cell content can't be broken anywhere and exceeds the space horizontally. Basically you can:
allocate more space for this column.
do not fix the width for this column.
pre-calculate minimum required column width using the content and the font.
The fix in iText Core won't help you much I believe as you will see something like this as the result
I believe this is specifically a Windows 10 issue when running via the terminal. I found out that you can copy-paste this line into your terminal, then run your command again. It should work this time.
reg add HKCU\Console /v VirtualTerminalLevel /t REG_DWORD /d 1 /f
Based on the answer of @Poat, I found that the optional features can be queried using this management class:
var managementClass = new ManagementClass("Win32_OptionalFeature");
var featureObjects = managementClass.GetInstances();
Then, go through each object and check its name. The install state can be queried to get whether the optional feature is available or not:
foreach (var featureObject in featureObjects)
{
if ((string)featureObject.Properties["Name"].Value == "TelnetClient"
&& (uint)featureObject.Properties["InstallState"].Value == 1)
{
// Feature installed
}
}
Be careful: the name value is different from the description you see in Windows. You could also query the Description property to check the name as you see it. For more information, see the Microsoft documentation: https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-optionalfeature
Please provide more details.
I think sidebarContent is a ref from useRef. A ref shouldn't be used in the deps of useEffect; it will always be the same reference, so React won't re-render. Please try to use useState.
Are you using Business Process Tracking?
We had the same problem, also around 13 minutes per loop iteration. For us it turned out that our Data Explorer instance was stopped and the workflow could not send its metadata to Data Explorer. Each loop took 13 minutes (probably because of the retries) before it gave up and the flow continued.
Following the example code and the comment given by the OP, events A, B, C, D are referred to.
A is P14273
B is AX14273
C is H14273:O14273
D is Q14273:AW14273
E is AY14273:CH14273 (I add this for clearer example)
[ Condition ]
(in sequence of priority..)
IF any value in C, D, E is later than or the same day as B, then it will not be evaluated.
IF a C, D, E date is before B, then it will be evaluated.
IF abs(A-B) < abs(B-C) for all C, then Cond1 is TRUE.
IF abs(A-B) < abs(B-D) for all D, then Cond2 is TRUE.
IF abs(A-B) < abs(B-E) for all E, then Cond3 is TRUE.
IF and(Cond1,Cond2,Cond3) is TRUE, then Output is "Yes".
Else, Output is "No".
IF no C, D, E date is before B, then Output is "" (nothing) <-- (I add this for a clearer example)
[ Execution ]
Example for considering only A,B,C :
=IF(MIN(H14273:O14273)>=AX14273,"",
IF(ABS(P14273 - AX14273)
< MAX(
ABS( AX14273 - IF(H14273:O14273>=AX14273,AX14273,H14273:O14273) )
),
"Yes" , "No")
)
considering all A-E :
=IF(MIN(H14273:O14273)>=AX14273,"",
IF(ABS(P14273 - AX14273)
< MAX(
ABS( AX14273 - IF(H14273:O14273>=AX14273,AX14273,H14273:O14273) ),
ABS( AX14273 - IF(Q14273:AW14273>=AX14273,AX14273,Q14273:AW14273) ),
ABS( AX14273 - IF(AY14273:CH14273>=AX14273,AX14273,AY14273:CH14273) )
),
"Yes" , "No")
)
[ Notes ]
=IF(H14273:O14273>=AX14273,AX14273,H14273:O14273)
is an array formula, so if it is typed into a cell.. it will spill to the right.
ABS(AX14273-IF(H14273:O14273>=AX14273,AX14273,H14273:O14273))
will '0' the dates later than AX14273, then find the maximum date difference in the range.
Please share if it works, doesn't work, or isn't understandable. (^_^)/
I deleted the .vs directory in the solution directory and restarted VS 2019. Now the form is loading again.
Solution is here:
To prevent line overlapping and ensure media queries work effectively for large screens, consider the following strategies:
# Preventing Line Overlapping
1. *Line Height*: Adjust the line height of your text elements to create sufficient spacing between lines. A good starting point is a line height of 1.5 to 2 times the font size.
2. *Margin and Padding*: Ensure adequate margin and padding between text elements to prevent them from running into each other.
3. *Font Size and Family*: Choose a font size and family that is clear and readable, even at larger screen sizes.
4. *Text Wrap*: Use the `word-wrap` or `overflow-wrap` properties to prevent long words from overflowing their containers.
# Media Queries for Large Screens
1. *Define Breakpoints*: Establish clear breakpoints for large screens (e.g., 1200px, 1800px, 2400px) to target specific screen sizes.
2. *Use Min-Width*: Instead of using `max-width`, use `min-width` to target screens that are at least a certain width.
3. *Test and Refine*: Test your media queries on various devices and screen sizes to ensure they're working as intended.
4. *Prioritize Content*: Use media queries to prioritize content on larger screens, such as displaying more columns or showcasing high-priority information.
# Example Media Query
/* Target screens with a minimum width of 1800px */
@media only screen and (min-width: 1800px) {
  /* Adjust font sizes, margins, and padding as needed */
  body {
    font-size: 18px;
  }
  .container {
    margin: 40px auto;
    padding: 20px;
  }
}
By implementing these strategies, you can prevent line overlapping and create effective media queries that cater to large screens.
I ended up using the MediaStore API, and that works now.
To return IDs that have at least one "in" status, you may find this query suitable:
SELECT DISTINCT id
FROM status
WHERE status = 'in';
As of CMake 4.0.0 there is a generic way to specify this target property. DEBUGGER_WORKING_DIRECTORY was added, which can be set as a target property for executables. This value can be picked up by tooling (IDEs, for example) via the CMake file API.
The easiest way is to download the repo and import it as a library.
You have to create a simulator build in your eas.json file, like:
"development-simulator": {
"developmentClient": true,
"distribution": "internal",
"ios": {
"simulator": true // The important part!!
}
},
Then you need to run your build. When it's finished, Expo will ask "Install and run the iOS build on a simulator?"; select Y and the app will be installed on your simulator. Cheers :)
This was fixed in
<PackageReference Include="Swashbuckle.AspNetCore" Version="8.1.0" />
Then I cleared my browser cache --> Ctrl+Shift+R
@msd
After adding the cjs file it still didn't work.
Has anyone solved this? I have also tried the Docker approach (DOCKER SAMPLE); it is not working either, same error: "Browser was not found at the configured executablePath"
Thanks for your help, now it works. I was just confused about the "request-json" stuff ...
The primary reason why airlines like KLM or Air France might resist enabling seamless flight booking through comparison websites is due to their reliance on affiliate marketing. These comparison platforms, such as Skyscanner, often depend on commissions earned from referrals. This means customers are redirected to the airline's official website for booking, ensuring the airline retains control over upselling additional services (e.g., seat selection, baggage, etc.) and avoids paying higher commission fees for direct bookings.
Why Airlines Prefer the Current Model:
1. Customer Control: Airlines gain full control over the booking process, ensuring a tailored user experience.
2. Lower Commission Costs: By directing users to their site, they minimize the fees paid to third-party platforms.
3. Brand Identity: Booking on the airline’s site strengthens brand loyalty and offers personalized services.
Applied Example for WingReserve
To address this challenge on WingReserve (https://www.wingreserve.com/) you could:
- Create premium analytics and insights for airlines to encourage collaboration.
- Offer airlines customized promotion opportunities within your platform, highlighting their unique services to maintain brand differentiation.
- Design a model where airlines can benefit from seamless bookings while limiting costs.
This approach could position WingReserve as a leader in facilitating innovative travel solutions for both customers and airlines.
Any luck with this? I'm facing the same issue.
Most likely it is because you are currently accessing from an IP that is within the Trusted IP Ranges (Setup -> Administer -> Security Controls -> Network Access) and Salesforce hides that option in that case.
That might explain why it is not displayed, but you can still access through the link. Just put /_ui/system/security/ResetApiTokenEdit after https://......force.com
Or... (while the redirect still works if you don't have My Domain enabled, and so on)
https://login.salesforce.com/_ui/system/security/ResetApiTokenEdit for production / developer orgs
https://test.salesforce.com/_ui/system/security/ResetApiTokenEdit for sandboxes
You will need the source revision override.
Input transformer:
{"commitId": "$.detail.commitId"}
{
  "sourceRevisions": {
    "actionName": "Source",
    "revisionType": "COMMIT_ID",
    "revisionValue": "<commitId>"
  }
}
For details please check the following document.
You can point your domain at your CloudFront distribution then you have 2 origins - one for your S3 app and one pointing to your Amplify app. Make the S3 origin your default behaviour, then add another behaviour for /users/* (or whatever) and point that at your Amplify origin.
In my case it worked with the code below:
traceparent = context.trace_context.trace_parent
operation_id = f"{traceparent}".split('-')[1]
Installing this precompiled version worked for me without a C++ compiler:
pip install webrtcvad-wheels
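A quick smoke test to confirm the wheel imports and runs; the 16 kHz, 16-bit mono frame of silence here is just made-up sample data:

import webrtcvad  # provided by the webrtcvad-wheels package

vad = webrtcvad.Vad(2)            # aggressiveness mode 0-3
sample_rate = 16000
frame = b"\x00\x00" * 160         # 10 ms of 16-bit mono silence at 16 kHz
print(vad.is_speech(frame, sample_rate))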
You should try the query below.
SELECT dbms_metadata.get_ddl('PASSWORD_VERIFY_FUNCTION','ORA_COMPLEXITY_CHECK','SYS') from dual;
Check firewalls. Sometimes firewalls from antivirus software may block connections.
(I am putting this as an answer because I don't have enough reputation to post a comment.)
I'm unable to clean up the zombie process, which is created by the command below:
log_file = "ui_console{}.log".format(index)
cmd = "npm run test:chrome:{} &> {} & echo $!".format(index, log_file)
print(f'run{index} :{cmd}')
# This command is run multiple times, once per KVM instance, in the background; it has a background PID and stores stdout/stderr in log_file
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, executable='/bin/bash')
# Read the BG PID from the command's output
eaton@BLRTSL02562:/workspace/edge-linux-test-edgex-pytest$ ps -ef | grep 386819
eaton 386819 1 0 05:04 pts/2 00:00:01 [npm run test:ch] <defunct>
eaton 392598 21681 0 06:30 pts/3 00:00:00 grep --color=auto 386819
eaton@BLRTSL02562:/workspace/edge-linux-test-edgex-pytest$
How do I clean up the zombie PID?
I tried the steps below, but no luck:
Find the Parent Process ID (PPID):
ps -o ppid= -p <zombie_pid>
Send a Signal to the Parent Process
Send the SIGCHLD signal to the parent process to notify it to clean up the zombie process:
sudo kill -SIGCHLD <parent_pid>
Replace <parent_pid> with the PPID obtained from the previous command.
Please suggest if there is any other approach.
I'm new to GE. I was working on a scenario where I need to connect to an Oracle database, fetch the data based on a query, and then execute the expectations present in a suite.
Instead of passing the SQL query as a parameter when defining the data asset, I want to pass the SQL query as a parameter at run time during validation.run(), so that I can pass the query dynamically and it can be used on any database table and columns for that particular DQ check (completeness/range...).
Can you please suggest how to achieve this? Any sample code would also help a lot.
Thanks in advance.
Also, in the first block I actually have the prints and the rest all below each other, before anyone wonders.
Fatal Exception: kotlinx.serialization.json.internal.JsonDecodingException
android {
defaultConfig {
applicationId = "com.example.myapp"
minSdk = 15
targetSdk = 24
versionCode = 1
versionName = "1.0"
}
...
}
On looking into this, it seems like you are running into a test case that specifically checks for this on submission. Leetcode has some basic test cases before the actual ones they run to consider a solution submitted. @Marce has already posted the code.
I would like to summarize: in your code you need to check values.isEmpty() before you call peek. With respect to the problem, it basically fails if the stack is empty when you process a closing bracket.
Pro tip: consider changing your return to return values.isEmpty();
For me, I uninstalled Laragon, deleted the Laragon folder on the main drive, then installed it again; that worked for me.
I have been struggling with the same situation for a while. I'm starting to think that there is a lack of feature support for this situation. Any ideas, anyone?
res = $"{char.ConvertFromUtf32(serialPort.ReadChar())}{serialPort.ReadExisting()}";
Helix Toolkit doesn't support normals on point rendering. A potential workaround is to create a small circle mesh and render it with instancing (use the point locations and normals to generate an array of instancing matrices).
I want to share another solution/measure that works:
Share% =
VAR denom = CALCULATE(
SUM(Table1[Value]),
ALL('Table1'[Financial KPI]),
'Table1'[Financial KPI] = MAX(Table1[Denominator])
)
VAR num = CALCULATE(
SUM(Table1[Value]),
'Table1'[Financial KPI] = MAX(Table1[Financial KPI])
)
RETURN
DIVIDE(
num,
denom,
0
)
Install the next version: 4.0.20, worked for me.
Source: https://github.com/expo/expo/issues/35834#issuecomment-2771427050
How .NET Core Handles Different Service Lifetimes
.NET Core's built-in Dependency Injection (DI) system manages service lifetimes as follows.
Singleton: The service is created once and shared throughout the application.
Scoped: A new instance is created for each request (in web apps).
Transient: A new instance is created every time it is requested.
Internally, .NET Core stores services in a service collection and resolves dependencies from a service provider when needed.
Key Differences Between .NET Core's Built-in DI and Third-Party DI Containers
Simplicity: The built-in DI is lightweight and easy to use, while third-party DI containers offer more features.
Flexibility: External DI containers (like Autofac) provide advanced features and better support for complex scenarios.
Performance: The built-in DI is optimized for .NET Core, making it faster for most standard use cases.
You can check the official Microsoft documentation here:
https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection
If you want to know more about service lifetimes read my article.
The uv team has been super fast to answer my question and since https://github.com/astral-sh/uv/issues/12038 it's part of the default mechanism:
local sources are now always reinstalled
Visual Studio Developer Command Prompt and PowerShell require authentication when accessing or using secured resources like Azure or Git. Use AZ logins for Azure authentication or Git credentials for storage. Ensure the correct permissions and use Personal Access Tokens (PATs) or OAuth for secure access. Enable multi-factor authentication for added security.
I don't have enough reputation to comment yet, so posting this as an answer in hopes it helps the next person.
I spent around 3 days researching and trying to solve this issue. I found most of the StackOverflow answers as well as guides from other forums. All of those kept saying: set your JAVA_HOME to some Java 21 installation and check using 'mvn -v' to make sure you see a 21.x.x somewhere. This seems to have solved it for everyone else, but not for me.
My JAVA_HOME variable was pointing at Java 21, however for some reason it was installed only as a JRE and not as a JDK. Thus, there was no compiler present.
Make sure your JAVA_HOME variable is not only pointed to some Java 21 installation, but that that installation is a Java JDK, not just a Java JRE!
Encountered the problem on a Linux VM.
My fix was to downgrade the Copilot plugin:
Uninstall the GitHub Copilot plugin (Ctrl+Alt+S > Plugins)
Download a previous version locally from the site
Install the plugin from disk
I was not using 'use client' at the top of my function that was calling the database. I thought that Next.js automatically loads pages as server components so I assumed it didn't need the clarification. Kind of a stupid but common mistake so I hope anyone who has the same issue checks that every function they use their environment variables in should be marked 'use client'.
Every other answer in this thread makes ugly tooltips; here is one that doesn't:
export type Overwrite<T, R> = { [K in keyof T]: K extends keyof R ? R[K] : T[K] };
I was able to find a temporary workaround from a member of their Discord and it seemed to help me.
Open node_modules/expo-router/build/getLinkingConfig.js, go to line 62 of the file at getPathFromState, and change this.config to this?.config.
Because I already needed this information twice and had trouble researching it every time, I'm posting it here.
Let's pretend we have this method,
public bool TryGetNext(out Bestellung? bestellung);
What makes it special is that it has an out parameter. Usually, if you don't care,
It.IsAny<Bestellung>()
would be the right way to declare that this setup acts on any input, as long as it is of type Bestellung.
In the case of an out parameter you need to use It.Ref<Bestellung>.IsAny.
How do we get the setup to give us different results every time we call it?
In my case the original method is a dequeue, so I rebuilt this.
.Returns((out Bestellung? bestellung) => { ... })
Here we get the variable bestellung, which we just assign, and with return we set the return value of the method itself, as usual.
Queue<Bestellung> bestellungenQueue = new Queue<Bestellung>(
[
new BestellungBuilder().WithAuftragsNr(123456).Build(),
new BestellungBuilder().WithAuftragsNr(789456).Build()
]);
Mock<IBestellungRepository> mockRepository = new Mock<IBestellungRepository>();
mockRepository.Setup(m => m.TryGetNext(out It.Ref<Bestellung>.IsAny!))
.Returns((out Bestellung? bestellung) =>
{
if (bestellungenQueue.Count > 0)
{
bestellung = bestellungenQueue.Dequeue();
return bestellung != null; // Return true if not null
}
bestellung = null;
return false; // No more items
});