You can try rclone, which can sync AWS S3 directly with LocalStack S3.
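A minimal sketch, assuming two rclone remotes named "aws" and "localstack" are configured and LocalStack listens on its default endpoint (adjust names, bucket, and endpoint to your setup):

# ~/.config/rclone/rclone.conf
[aws]
type = s3
provider = AWS
env_auth = true

[localstack]
type = s3
provider = Other
endpoint = http://localhost:4566

# one-way copy from real S3 into LocalStack
rclone sync aws:my-bucket localstack:my-bucket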
I think it's NOT a real answer, but a workaround. MS should work on this.
This article helped me. Basically, I had to delete the "My Exported Templates" folder; after that, Visual Studio recreated the folder and the template.
Please restart the database; it should work after that.
"message": "could not execute statement; SQL [n/a]; nested exception is org.hibernate.PessimisticLockException: could not execute statement",
-- Check the current lock wait timeout (in seconds)
SELECT @@innodb_lock_wait_timeout;
-- Increase it if needed
SET innodb_lock_wait_timeout = 100;
"HttpClient": {
"DefaultProxy": {
"Enabled": true,
"Address": "http://your-proxy-server:port",
"BypassOnLocal": false,
"UseDefaultCredentials": false
}
}
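In case the framework doesn't pick this section up automatically, here is a minimal sketch of binding it by hand; the section path matches the JSON above, and configuration is assumed to be your IConfiguration instance:

using System.Net;
using System.Net.Http;
using Microsoft.Extensions.Configuration;

var proxySection = configuration.GetSection("HttpClient:DefaultProxy");
var handler = new HttpClientHandler
{
    UseProxy = proxySection.GetValue<bool>("Enabled"),
    Proxy = new WebProxy(proxySection["Address"])
    {
        BypassProxyOnLocal = proxySection.GetValue<bool>("BypassOnLocal"),
        UseDefaultCredentials = proxySection.GetValue<bool>("UseDefaultCredentials")
    }
};
var client = new HttpClient(handler);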
Hi, here are a couple of hints.
First, are you using a form to send the data? Sometimes it's as simple as that.
Second, is your token set correctly in the form or HTML page?
And third, make sure that your model is bound correctly in your project.
Good luck
Using this CSS finally solved it
code {
font-family: "verdana";
font-size: 18px;
color: black;
font-weight: bold !important;
line-height: 1.5 !important;
}
@media (max-width: 480px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 481px) and (max-width: 767px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 768px) and (max-width: 1024px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 1025px) and (max-width: 1280px) {
code {
font: 18px "verdana" !important;
}
}
@media (min-width: 1281px) {
code {
font: 18px "verdana" !important;
}
}
To prevent Flutter from uninstalling and reinstalling the app every time you run flutter run, try this:
Connect your device to the machine where Flutter is installed, then open a command prompt and run flutter logs. After that, launch the app you want to debug.
This issue can occur if your machine's clock is out of sync.
To fix this issue with SQLAlchemy, you need to use the correct driver syntax. For MySQL connections with public key retrieval, use mysql+pymysql:// instead of mysql:// and add the parameters to the query string:
engine = create_engine('mysql+pymysql://poopoo:peepee@localhost:32913/test?allowPublicKeyRetrieval=true&useSSL=false')
If you're still getting errors, you can also try passing these parameters via connect_args with the correct format:
engine = create_engine(
'mysql+pymysql://poopoo:peepee@localhost:32913/test',
connect_args={
"ssl": {"ssl_mode": "DISABLED"},
"allow_public_key_retrieval": True
}
)
Make sure you have the pymysql driver installed: pip install pymysql.
Install a version of @nestjs/platform-express that is compatible with your project; try a lower major version of @nestjs/platform-express.
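For example (the major version here is just an assumption; match it to your @nestjs/core version):

npm install @nestjs/platform-express@10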
I'll attach my issue as well, which is similar. I think one solution is to override the _receive method of gremlin-python's connection.py module, simply attempting a reconnection before the 'finally' block that puts the connection back into the pool.
The way I go about this is to install ipykernel in each virtual environment I need to use with Jupyter notebooks, so that I can switch to the appropriate environment when using the notebook.
All I need to do then is switch kernels to pick the environment the notebook should use; registering a kernel is shown below.
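For reference, registering a virtual environment as a kernel looks roughly like this (run inside the activated environment; the name and display name are up to you):

pip install ipykernel
python -m ipykernel install --user --name my-venv --display-name "Python (my-venv)"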
P.S. a new utility is in town called Puppy. You might want to give this a read!
Okay, I think there was just something wrong with my build, and it was throwing me off because the debugger stopped working (I tried adding breakpoints and got errors that they wouldn't be hit / couldn't load symbols). I rebuilt everything and it seems to be working. Appreciate all the help!
7 years later, but here to comment that we are happy with the layout [here](https://epiforecasts.io/EpiNow2/stan/), in case anyone is still interested.
The AWS SDK in general has a good way of looking up configuration with minimal intervention from the developer, as long as you make sure the necessary configuration is in place: either the required IAM policy is granted, or temporary credentials are placed in a config file.
Please have a deep dive into
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html
which basically says you don't need to specify anything if you assign the right policy to the resource your code runs on.
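As a sketch with the AWS SDK for Java v2: if you build a client without passing credentials, it walks the default provider chain (environment variables, system properties, ~/.aws/credentials, container/instance role, ...). The region below is an assumption.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class DefaultCredentialsExample {
    public static void main(String[] args) {
        // no explicit credentials: the default credential provider chain is used
        S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1)
                .build();
        s3.listBuckets().buckets().forEach(b -> System.out.println(b.name()));
    }
}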
The QuadraticSum constructor takes in a heterogeneous list of numbers, linear expressions, and quadratic expressions, and adds them up. So objective_2 is just x+y in your code.
Generally, you do not need to invoke that constructor or use that type directly; just use operator overloads, QuadraticExpression(), and mathopt.fast_sum().
https://github.com/google/or-tools/blob/stable/ortools/math_opt/python/expressions.py#L27
Here's my query, but it does not show anything on the map.
json_build_object('type', 'Polygon','geometry', ST_AsGeoJSON(ST_Transform(geom, 4326))::json)::text as geojson
Any idea?
Please refer to Joel Geraci's reply on this blog, where the save is registered with a callback; from there you can use the updated data.
Same question here.
The stream splits tool_calls and returns data: {"choices":[{"delta":{"content":null,"tool_calls":[{"function":{"arguments":"{\"city\": \""},"
so the complete parameters cannot be obtained, but I don't know how to solve it.
Map<String, Object> arguments = ModelOptionsUtils.jsonToMap(functionInput);
Inspired by @Oliver Matthews' answer, I created a repository based on markdown-include: https://github.com/atlasean/markdown-include .
A few points for consideration:
You can achieve this by using synchronization or thread-safe collections.
You are using synchronization in your code, which is correct. But you are trying to use the same Scanner object across multiple threads, and Scanner is not thread-safe. Each thread should ideally use its own Scanner, or you should synchronize access to it.
You are calling addingForFirstList for all threads, which means you are adding values to the wrong lists. You should call the appropriate method for each list.
You are using separate locks for each list, which is correct; it ensures each list is accessed independently.
You may also consider the Executor framework, which provides a high-level API for managing threads, and it's crucial to manage shared resources like the Scanner; see the sketch below.
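A minimal sketch of those points (class and variable names are mine, not from your code): the Scanner is read on one thread only, work is handed to an ExecutorService, and a thread-safe list removes the need for manual locks.

import java.util.List;
import java.util.Scanner;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ListFiller {
    public static void main(String[] args) {
        List<Integer> firstList = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        try (Scanner scanner = new Scanner(System.in)) {
            while (scanner.hasNextInt()) {
                int value = scanner.nextInt();           // Scanner touched only by this thread
                pool.submit(() -> firstList.add(value)); // workers only touch the thread-safe list
            }
        }
        pool.shutdown();
    }
}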
import matplotlib.pyplot as plt
# Data for the diagram
categories = [
    "Water management problem",
    "Energy problem",
    "Raw materials problem",
    "Global food problem",
    "Protection of the World Ocean",
    "Space exploration"
]
importance = [9, 8, 7, 8, 6, 5] # Importance rating from 1 to 10 for each problem
# Create the diagram
plt.figure(figsize=(10,6))
plt.barh(categories, importance, color='skyblue')
plt.xlabel("Problem importance (rating from 1 to 10)")
plt.title("Assessment of the importance of global problems in the society-nature system in Ukraine")
plt.gca().invert_yaxis() # Invert the order on the Y-axis
plt.tight_layout()
plt.show()
It does indeed work with "react-native-google-places-autocomplete": "^2.5.6", although in my case (and I imagine for everyone else) there is no need to install 'react-native-get-random-values', since this version apparently does not need it. In fact, you can save yourself a lot of errors by uninstalling 'react-native-get-random-values' if you had already installed it.
You can use Reactotron. It's easy to set up.
link here https://docs.infinite.red/reactotron/
I decided to drop this idea, because from what I found, nginx and Apache are not able to use a certificate in the middle of the chain as a CA to authenticate clients. In case anyone wonders, I ended up using the same self-signed CA and the same client certificate; I only pinned its fingerprint for the site where I had planned to use the admin-ca.cer certificate.
The value is in milliseconds; divide it by 1000 to get the correct value in seconds.
The url field contains data:application/pdf;base64,..., which might not be properly handled by some clients. Try sending the Base64 content separately in a downloadable format instead of embedding it directly in a url.
Force handling errors globally:
Configure Flask and Flask-RESTful to propagate JWT exceptions correctly by adding the following code to __init__.py:
app.config['PROPAGATE_EXCEPTIONS'] = True  # Propagate exceptions to the client
api.handle_errors = False  # Disable Flask-RESTful's own error handling
This provided the results I was looking for and I successfully tested the JWT lifecycle:
Login: Issued JWT tokens via /api/login.
Valid Token: Accessed protected resource successfully.
Expired Token: Received expected 401 error ("Token has expired").
Token Refresh: Successfully refreshed JWT token via /api/refresh.
New Token: Validated new token with protected endpoint access.
In 2025, using base-select allows styling the <select>; see the details here:
https://developer.chrome.com/blog/a-customizable-select and https://codepen.io/web-dot-dev/pen/zxYaXzZ
At the time of writing, only Chrome and Edge support this: https://caniuse.com/mdn-css_properties_appearance_base-select
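Based on the linked Chrome article, the opt-in looks like this (you can then style the select and its picker like regular elements):

select,
::picker(select) {
  appearance: base-select;
}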
Try this small util I wrote; I found your question while searching for the same topic:
https://github.com/denistiano/bertsonify
I was solving the same thing. If trained with quality data, it seems to give okay results. Obviously, the more complex the object, the harder it is to get quality output.
Welcome to Windows app development! Here's a breakdown of compatibility for different Windows versions and some guidance on how to approach development.
Microsoft has had different development frameworks for its platforms, and compatibility depends on which one you are using:
⚠️ Windows Phone 7.8 – Not fully compatible. While some APIs may work, Windows Phone 8 introduced new capabilities (e.g., native code support, Direct3D) that are not backward compatible. You would need to target Windows Phone 7.x separately.
❌ Windows RT – Not compatible. Windows RT (for ARM-based tablets) runs apps built for Windows Store (Metro/Modern UI apps), not Windows Phone.
❌ Windows 8 / 8.1 – Not directly compatible. Windows 8 and 8.1 use WinRT (Windows Runtime), which is different from Windows Phone 8 SDK. However, you can share code if you create a Universal App for both platforms.
✅ Windows RT apps work on Windows 8/8.1 but not on Windows Phone without modification.
❌ Windows Phone apps won’t run on Windows 8/8.1 or RT without adaptation.
If you want your app to run across multiple platforms, consider these approaches:
Use Windows Phone 7.x SDK (if targeting WP7.8)
If your app must support Windows Phone 7.8, use the Windows Phone SDK 7.1 (not 8.0).
However, WP7.8 is very outdated, and it’s better to focus on newer versions.
Develop a Universal Windows App (for Windows 8.1 and Windows Phone 8.1)
If you want to support both Windows Phone 8.1 and Windows 8.1, use the Universal Windows App framework.
This lets you share a common codebase while keeping platform-specific optimizations.
Target UWP (Universal Windows Platform) for Future-Proofing
Using the outline utility, you can change the focus border color in Tailwind CSS.
For example:
<input
className="focus:outline-gray-400"
/>
Yes, you are correct: the Play Store and App Store do not allow you to publish an app for only certain cities. But you could handle it on your end by fetching the user's current location. You can easily get the user's latitude and longitude, from which you can get their state or city, and then proceed with your logic only if the user is in your desired location.
macOS Sequoia 15.3.2, M3 chip.
brew install cmake didn't work for me, so I tried the following:
brew install pkg-config
brew install cmake libgit2
bundle install
I read the article linked in the first comment and solved it!
#include <bits/stdc++.h>
Instead of adding a catch-all header file like the one above, I added only the necessary headers, as shown below.
#include <string>
#include <array>
#include <bitset>
#include <utility>
#include <iostream>
#include <iomanip>
#include <future>
And adding -stdlib=libc++ to the compile command solved it!
g++ -std=c++17 -stdlib=libc++ -I/opt/homebrew/opt/cryptopp/include/cryptopp -L/opt/homebrew/opt/cryptopp/lib -lcryptopp DES_Encryption.cpp -o DES_Encryption && ./DES_Encryption
Thank you to everyone who responded.
Starting at line 85 you can see the issue. That shouldn't be there.
}
[root@server ~]# sudo vi /etc/nginx/nginx.conf
1. Create a batch file "run.bat" and write the set of commands you need in that file.
2. Create a task in Task Scheduler and give it a proper name; in the Triggers section choose "Weekly" and set it to run every Monday at your desired time.
3. In the Actions section choose "Start a program" and select the batch file created above.
4. Save. A sketch of both pieces follows below.
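A rough sketch (paths, log file, and time are just placeholders):

:: C:\scripts\run.bat - the commands to run every Monday
@echo off
echo Weekly job started at %date% %time% >> C:\logs\weekly.log

Alternatively, the same weekly trigger can be created from an elevated command prompt instead of the GUI:

schtasks /create /tn "WeeklyRun" /tr "C:\scripts\run.bat" /sc weekly /d MON /st 09:00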
You are mixing two different packages, which are both shadcn ports for flutter.
The example is from shadcn_flutter but the tabs are from shadcn_ui. Try to use one of the two.
var userName = result.ClaimsPrincipal?.Claims
    .FirstOrDefault(c => c.Type.Equals("name", StringComparison.OrdinalIgnoreCase))?.Value;
As per the docs https://cloud.google.com/compute/docs/create-windows-server-vm-instance, Windows is not covered under the free trial.
A possible next step is to activate a full billing account: https://cloud.google.com/free/docs/free-cloud-features#how-to-upgrade
Use the dom-speech-recognition type definitions:
https://www.npmjs.com/package/@types/dom-speech-recognition
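Installation is the usual @types step (if your tsconfig.json sets a "types" array, remember to add "dom-speech-recognition" to it):

npm install --save-dev @types/dom-speech-recognition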
I'm experiencing the same issue. When I download it, it pulls empty data and doesn't show anything. This problem probably started today. Unfortunately, we weren't able to send reports to our clients.
I created a repository based on markdown-include: https://github.com/atlazean/markdown-include
The Chart Visualizer in Megaladata does not directly support aggregation operations. It is designed to display the relationship between fields.
To aggregate data before visualization:
1. Use the Grouping component to aggregate your data (e.g., sum, average, count).
2. Use the aggregated data as input for the Chart Visualizer.
If the program terminates abruptly due to an unhandled exception, destructors for global objects might not be called, because the operating system may not give the process time to clean up resources properly.
Try using smart pointers (RAII) for resource management to avoid such issues; a sketch follows.
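A minimal RAII sketch (the file name is just an example): the resource is tied to an object's lifetime instead of a global destructor, so it is released as soon as the owner goes out of scope, even if an exception unwinds the stack.

#include <cstdio>
#include <memory>

int main() {
    // unique_ptr with a custom deleter: fclose runs automatically when 'file' goes out of scope
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(std::fopen("data.txt", "r"), &std::fclose);
    if (!file) {
        return 1;
    }
    // ... use file.get() ...
    return 0;
}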
Is there any update on how this can be done? I currently need similar functionality in a table.
Thanks to @JulianKoster, I realized that asset-mapper wasn't installed, which caused this issue. Here is the fix:
composer require symfony/asset-mapper symfony/asset symfony/twig-pack
Read: https://symfony.com/doc/current/frontend/asset_mapper.html
For this level of detail you'll need to write a custom reporter plug-in. This has the ability to introspect the whole workflow, and you can also examine the input/output files themselves (needed for 3), which a generic plugin is not going to do.
You could then export all the info as JSON, or you could have your plugin update your database directly.
See:
https://github.com/snakemake/snakemake-interface-report-plugins/tree/main
And here is an example of a plugin:
Note - when making a test plugin I found that using poetry, as suggested here, was more of a hindrance than a help, but YMMV.
You could use COALESCE to change the NULL values.
select
coalesce(path, '') as path
,comment
from read_csv('${path}')
I prefer SonarLint; it highlights possible NullPointerException risks.
Thanks for all the useful remarks.
It seems like std::bit_cast is the right way to do this. It requires C++20, but I think I'm OK with that.
Apparently I was missing @app.function_name for every function, and I had to change the imports from
from . import RSSNewsletter
to
import RSSNewsletter
One of the best tools I use in my apps is the Talker package. It provides a logs screen to track every log and error in your app. Check the docs here.
My first suspicion here would be a memory-related error. These will show up in the kernel log:
$ sudo dmesg -T
as OOM events. You could also use strace on the application to look for failing memory-allocation calls, but you do have to make sure you run strace on the underlying binary, not any wrapper script that might be invoked in the rule.
If the application has just enough memory available when run outside of Snakemake then it may be the overhead of Snakemake pushing it over the edge. Also, with Snakemake are you running one job at a time or allowing it to run multiple jobs in parallel? Are you using multiple threads within the rule?
How to do it
Method 01:
You can simply create an AWS DataSync task for this particular job.
First, create a DataSync task with the source and destination locations. Since you are copying data from one AWS S3 bucket to another S3 bucket, this task won't need an AWS agent.
Then run the task and the data will be migrated to the destination S3 bucket (the time will depend on the total size of the data you are migrating).
Price:
Method 02:
You can enable Cross-Region Replication (CRR) on S3.
These articles will guide you through replicating the existing S3 bucket. I think this method will be the more cost-effective way to do your task.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/
I prefer using SonarQube for IDE as the plugin. It shows more potential problems and describes the whats and whys.
Currently it only supports .exe and .msi format packages.
I ran into the same problem and solved it. In my case, there is a space in the font name (Spleen 32x64). So instead of entering:
Spleen 32x64
in the "Font Family" field, I simply add quotation marks, like:
"Spleen 32x64"
And it works.
The surprising result you're seeing, where an O(n log n) algorithm performs faster than an O(n) algorithm, is due to several practical factors:
Constant Factors and Lower-Level Operations: Even though the theoretical time complexity of sorting is O(n log n), the constants involved in sorting (like in Timsort, which is the algorithm used by Python's sort()) can sometimes outperform O(n) solutions, especially when the input size is small or when the implementation of the O(n) solution involves costly operations.
Efficient Sorting Algorithms: The Timsort algorithm used in Python is highly optimized for practical use cases. It is particularly fast on real-world data, especially if there are ordered or partially ordered sequences in the input. Even though the sorting step theoretically has higher time complexity, in practice, it can run faster because of optimizations that reduce the constant factors.
Set Operations Overhead: In your O(n) solution, you're relying heavily on set operations, specifically in and add. While these operations are average O(1), they can sometimes take more time than expected because of factors like hash collisions, dynamic resizing, or poor cache locality when iterating over the set. These operations might not be as fast as they theoretically should be, especially when you're performing a lot of lookups or insertions.
Repeated Operations in the First Algorithm: In your first algorithm, you're doing the following:
while (num + 1) in s:
num += 1
current_streak += 1
This loop could lead to repeated set lookups for numbers that are consecutive. Since you're iterating over nums and performing a lookup operation for every number in the set, this could end up causing a lot of redundant work. Specifically, for each number, you're incrementing num and repeatedly checking num + 1. If there are a lot of consecutive numbers, this can quickly become inefficient.
The time complexity here might still be O(n) in theory, but due to the redundant operations, you're hitting a performance bottleneck, leading to TLE.
Efficiency of the Second Algorithm: In the second algorithm, you've made a few optimizations:
next_num = num + 1
while next_num in nums:
next_num += 1
Here, the check for next_num in nums is still O(1) on average, and the update to next_num skips over consecutive numbers directly without performing additional redundant lookups. This change reduces the number of unnecessary checks, improving the algorithm’s performance and avoiding redundant work.
Even though the theoretical time complexity is the same in both cases (O(n)), the second version is faster because it avoids unnecessary operations and works more efficiently with the set lookups.
Impact of Set Operations: In the first solution, you may have faced inefficiencies due to the use of the current_streak variable and updating num during iteration. Additionally, by modifying num in the loop, you're creating potential confusion and inefficient memory access patterns (e.g., reusing the same variable and performing multiple lookups for numbers that are already part of the streak).
The second solution benefits from using next_num as a separate variable, which simplifies the logic and makes the code more efficient by focusing on skipping over consecutive numbers directly without redundant checks.
O(n log n) solutions can sometimes perform faster than O(n) in practice due to constant factors, the specific nature of the data, and the efficiency of underlying algorithms like Timsort.
Your first O(n) solution caused TLE due to redundant operations and inefficiencies in how consecutive numbers were processed.
Your second O(n) solution passed because it streamlined the logic, minimized redundant operations, and worked more efficiently with the set data structure.
Optimizing algorithms often involves reducing redundant operations and ensuring that you don't perform the same work multiple times. Even with the same time complexity, how you structure the code and the operations you choose can significantly affect performance.
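For reference, a minimal sketch of the streamlined O(n) approach described above (function and variable names are mine, not from the original submission):

def longest_consecutive(values):
    nums = set(values)
    best = 0
    for num in nums:
        if num - 1 in nums:          # only start counting at the beginning of a streak
            continue
        next_num = num + 1
        while next_num in nums:      # O(1) average set lookups walk the streak once
            next_num += 1
        best = max(best, next_num - num)
    return best

print(longest_consecutive([100, 4, 200, 1, 3, 2]))  # 4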
It seems to have been fixed in latest release (65.6.0).
# count how many times each value occurs, then keep only rows whose value
# appears at most ceiling times
val_counts = df["x"].value_counts()
filtered_df = df[df["x"].map(val_counts) <= ceiling]
By default tooltip aggregates the data from one xAxis, but you can override it with a tooltip.formatter, see the link to the API: https://api.highcharts.com/highcharts/tooltip.formatter
The starting point can be like this:
tooltip: {
shared: true,
formatter: function () {
let tooltipText = '<b>' + this.x + '</b>';
this.points.forEach(point => {
tooltipText += '<br/>' + point.series.name + ': ' + point.y;
});
return tooltipText;
}
}
Please see a simplified config, where you can get the shared tooltip for multiple axes, I trust you will be able to adjust it for your project: https://jsfiddle.net/BlackLabel/pvr1zg26/
ISO certification itself doesn’t guarantee anything about the language (like English, Spanish, etc.) being used.
Instead, ISO standards focus on processes, quality, consistency, and compliance, regardless of the language.
For example:
ISO 9001 (Quality Management) ensures an organization follows consistent quality processes.
ISO 27001 (Information Security) ensures data is protected based on defined standards.
These standards can be documented and implemented in any language as long as:
The processes are clearly understood.
The implementation matches the intent of the ISO standard.
The audit documentation is available in a language the auditor understands.
I have the following challenge: I'm using Dapper to access two databases in the same codebase.
Database 1 uses UTC dates (I could change this, but would rather not).
Database 2 uses local dates (not something I can change).
These type handlers are static, which means they are not repository/connection-string specific:
SqlMapper.AddTypeHandler(new DateTimeUtcHelper());
Any ideas how to solve this problem?
(I could implement datetimeoffset in Database 1 so the data type is different.)
When dealing with localized strings in Swift, especially for UI elements, choosing the right approach is crucial. Here’s a breakdown of the options:
LocalizedStringKey (Best for SwiftUI). Use when: You are directly using a string in a SwiftUI view (e.g., Text("hello_world")).
Why? SwiftUI automatically localizes LocalizedStringKey, making it the best choice for UI text.
Example:
Text("hello_world") // Automatically looks for "hello_world" in Localizable.strings
Pros:
✅ No need to manually use NSLocalizedString
✅ Cleaner SwiftUI code
✅ Supports string interpolation
Cons:
❌ Can’t be used outside SwiftUI (e.g., in business logic)
LocalizedStringResource (Best for Performance). Use when: You need efficient string translation with better memory handling.
Introduced in: iOS 16
Why? It is more optimized than LocalizedStringKey, but still works well with SwiftUI.
Example:
Text(LocalizedStringResource("hello_world"))
Pros:
✅ More optimized for localization
✅ Reduces memory overhead
Cons:
❌ Requires iOS 16+
String with NSLocalizedString (Best for Non-SwiftUI Code). Use when: You are not using SwiftUI, but need translations in ViewModels, controllers, or business logic.
Why? NSLocalizedString fetches translations from Localizable.strings.
Example:
let greeting = NSLocalizedString("hello_world", comment: "Greeting message")
print(greeting)
Pros:
✅ Works anywhere (UIKit, business logic, networking)
✅ Supports dynamic strings
Cons:
❌ Not automatically localized in SwiftUI
❌ More verbose
In the "Plots" panel you have the "Zoom" option, which detaches the plot window and allows you to visualize it full-screen. Usually, the resolution doesn't drop in the process. If you want to inspect the plot in the IDE, that's a good solution.
Additionally, if you want to quickly export the file, you can just take a screenshot of the full-screen plot.
Same issue here. Fresh setup for Eclipse 2025-03.
Windows -> Preferences -> Version Control -> selecting the SVN node will produce:
I didn't find the bug (the code seems OK), but I wouldn't disable gravity at runtime. Instead I would toggle the isKinematic flag on/off; this way (when isKinematic is on) you know that no forces are affecting your player. And for the slopes I would just apply a bigger force.
Not having the exact same issues as you, but definitely having issues with this update. Preview is super slow and buggy. As soon as I use a TextField anywhere, even in a basic test, I get the error "this application, or a library it uses, has passed an invalid numeric value (NaN, or not-a-number) to CoreGraphics API and this value is being ignored. Please fix this problem." in the console. Build times definitely seem so much slower; it's making the process annoying when it doesn't need to be.
I've cleaned the derived data, tried killing every Xcode process going, restarted a billion times lol. Great update this time around.
qpdf input.pdf --overlay stamp.pdf --repeat=z -- result.pdf
This dropdown behavior is likely managed by a TabControl or a custom tabbed document manager within your application. Here are some areas to investigate:
1-Check TabControl Properties:
If you're using a TabControl, check if SizeMode is set to Fixed or FillToRight, as this can affect how the dropdown appears.
Look for TabControl properties like DrawMode, Padding, and Multiline that might be affecting the display.
2-Event Handling for Window Resizing:
If resizing triggers the dropdown to appear, the control might not be refreshing correctly. Look for Resize or Layout event handlers where the tab control is refreshed (Invalidate(), Refresh()).
3-ScintillaNET or Custom UI Code:
Since you’re using ScintillaNET, there might be a custom tab manager handling open documents. Check for any Scintilla or related UI event handlers that modify the tab behavior.
4-Force a Refresh When a Tab is Added:
If new tabs are being added dynamically, make sure the control is properly updated. Try manually forcing a redraw when a new tab is added:
tabControl.Invalidate();
tabControl.Update();
5-Debugging Strategy:
Set breakpoints in places where tabs are created, removed, or refreshed.
Try manually calling tabControl.Refresh() after adding tabs to see if it immediately triggers the dropdown.
URI link = URI.create("http://example.com");
URI withQuery = URI.create(link.toString() + "?name=John");
Downgrading to Python 3.11 is one solution for this issue, but instead of reverting to Python 3.11, I upgraded to Python 3.12.7 and it started working properly.
bool _shouldCollapse = true;
onTreeReady: (controller) {
if (_shouldCollapse && rootNode.children.isNotEmpty) {
WidgetsBinding.instance.addPostFrameCallback((_) {
if (mounted) {
controller.collapseNode(rootNode.children.first as IndexedTreeNode<NodePayload>);
setState(() => _shouldCollapse = false);
}
});
}
},
It seems to have changed. I used the following scopes: 'w_member_social profile openid email r_organization_social'. Try it out.
#GroupMembers ul { list-style: disc; }
The id in your tag is different from the id in your CSS selector.
Try disconnecting and deleting the existing runtime, then creating a new runtime. This solved the issue for me.
It turns out this was caused by Firebase wanting a newer NDK version than what was in my Flutter SDK defaults.
Make the changes below in your test.sh and test2.sh. You have to pass the variable as an argument to the second script.
test.sh (updated version):
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh ${TESTVARIABLE}
test2.sh (updated version):
#!/bin/bash
echo ${1}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>3D Wheel Menu with JSON Events</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
display: flex;
justify-content: flex-end;
align-items: center;
min-height: 100vh;
background: #f5f5f5;
font-family: Arial, sans-serif;
}
.wheel-container {
perspective: 1000px;
width: 250px;
height: 400px;
position: relative;
border: 0px solid #000;
padding: 35px;
display: flex;
justify-content: center;
align-items: center;
position: fixed;
right: 0;
}
.wheel {
width: 200px;
height: 350px;
position: relative;
margin: 0 auto;
transform-style: preserve-3d;
transition: transform 0.1s linear;
transform: rotateX(0deg);
}
.wheel__segment {
position: absolute;
width: 100%;
height: 40px;
top: 50%;
display: flex;
justify-content: center;
align-items: center;
background: #ddd;
border: 1px solid #aaa;
transform-origin: 50% 0;
color: #333;
font-size: 14px;
font-weight: bold;
transition: box-shadow 0.3s ease;
}
.wheel__segment span {
transform: translateZ(120px);
}
.wheel__segment:hover {
box-shadow: 0 10px 20px rgba(0, 0, 0, 0.3);
}
/* Styling for contentview */
#contentview {
position: absolute;
left: 30px;
top: 50px;
width: 400px;
padding: 20px;
background-color: #fff;
border-radius: 8px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
font-size: 16px;
}
</style>
</head>
<body>
<div class="wheel-container">
<div class="wheel"></div>
</div>
<!-- Contentview div where item info will be shown -->
<div id="contentview">
<h2>Practical application of the source code and ideas of this article.</h2><br>
<p id="item-info">Click on an item to see its details here.</p>
</div>
<script>
(function($) {
const spinSound = new Audio('https://cdn.pixabay.com/download/audio/2025/01/19/audio_fca0fdbc60.mp3?filename=wind-swoosh-short-289744.mp3');
const $wheel = $('.wheel');
const segmentCount = 20;
const segmentAngle = 360 / segmentCount;
const wheelHeight = $wheel.height();
const radius = wheelHeight / 2;
const segmentHeight = (2 * Math.PI * radius) / segmentCount;
// Data for items on the wheel
const items = [
{
id: 1,
title: 'Item 1',
Action: 'click',
Event: () => displayContent('Item 1', 'Details of Item 1')
},
{
id: 2,
title: 'Item 2',
Action: 'dblclick',
Event: () => displayContent('Item 2', 'Details of Item 2')
},
{
id: 3,
title: 'Item 3',
Action: 'click',
Event: () => displayContent('Item 3', 'Details of Item 3')
},
{
id: 4,
title: 'Item 4',
Action: 'dblclick',
Event: () => displayContent('Item 4', 'Details of Item 4')
},
{
id: 5,
title: 'Item 5',
Action: 'click',
Event: () => displayContent('Item 5', 'Details of Item 5')
},
{
id: 6,
title: 'Item 6',
Action: 'dblclick',
Event: () => displayContent('Item 6', 'Details of Item 6')
}
];
// Extend items array to match segment count
const extendedItems = [];
for (let i = 0; i < segmentCount; i++) {
extendedItems.push(items[i % items.length]);
}
// Function to create segments on the wheel
for (let i = 0; i < segmentCount; i++) {
const angle = segmentAngle * i;
const item = extendedItems[i];
const $segment = $('<div>', {
class: 'wheel__segment',
'data-index': i
}).css({
'transform': `rotateX(${angle}deg) translateZ(${radius}px)`,
'height': segmentHeight
}).html(`<span>${item.title}</span>`).appendTo($wheel);
// Attach event handlers
$segment.on(item.Action, function() {
item.Event(); // Trigger event from JSON data
});
}
// Function to update contentview div with item details
function displayContent(title, details) {
$('#item-info').html(`<strong>${title}</strong><br>${details}`);
}
// Function to handle the size of the wheel dynamically
function changeWheelSize(width, height) {
const $container = $('.wheel-container');
const $wheel = $('.wheel');
$container.css({
width: width + 'px',
height: height + 'px'
});
$wheel.css({
width: (width - 70) + 'px',
height: (height - 70) + 'px'
});
const newWheelHeight = $wheel.height();
const newRadius = newWheelHeight / 2;
const newSegmentHeight = (2 * Math.PI * newRadius) / segmentCount;
$('.wheel__segment').each(function(i) {
const angle = segmentAngle * i;
$(this).css({
'transform': `rotateX(${angle}deg) translateZ(${newRadius}px)`,
'height': newSegmentHeight
});
});
}
// Call function to adjust wheel size
changeWheelSize(250, 500);
let currentRotation = 0;
let isDragging = false;
let startY = 0;
let lastY = 0;
let lastTime = 0;
let velocity = 0;
let animationId = null;
// Function to play sound when wheel rotates
function playSpinSound() {
spinSound.currentTime = 0;
spinSound.play();
}
// Function to update wheel rotation
function updateWheel() {
$wheel.css('transform', `rotateX(${currentRotation}deg)`);
playSpinSound();
}
// Mouse and touch event handlers for dragging
$wheel.on('mousedown touchstart', function(e) {
e.preventDefault();
isDragging = true;
startY = getEventY(e);
lastY = startY;
lastTime = performance.now();
cancelAnimationFrame(animationId);
velocity = 0;
});
$(document).on('mousemove touchmove', function(e) {
if (!isDragging) return;
e.preventDefault();
const currentY = getEventY(e);
const deltaY = currentY - lastY;
currentRotation -= deltaY * 0.5;
velocity = -deltaY / (performance.now() - lastTime) * 15;
lastY = currentY;
lastTime = performance.now();
updateWheel();
});
$(document).on('mouseup touchend', function() {
if (!isDragging) return;
isDragging = false;
if (Math.abs(velocity) > 0.5) {
applyMomentum();
}
});
function getEventY(e) {
return e.type.includes('touch') ? e.touches[0].pageY : e.pageY;
}
function applyMomentum() {
const friction = 0.96;
velocity *= friction;
if (Math.abs(velocity) > 0.5) {
currentRotation += velocity;
updateWheel();
animationId = requestAnimationFrame(applyMomentum);
}
}
})(jQuery);
</script>
</body>
</html>
As commented by @Tzane, I ended up using enums. It was a pain to define an enumerator for each specific test, but this way I was able to require all test-reporting results to be of an enum type, which forces the strings to be static each time.
To be honest, my main takeaway is that the main test results should definitely not be reported as strings, because that requires more work to parse and analyze later.
It's supported in Safari 16.4
https://webkit.org/blog/13966/webkit-features-in-safari-16-4/
I think all browsers now support this.
This is the solution I came up with, which is completely dynamic, so df_criterias can have as many condition columns as it wants:
from pyspark.sql.functions import col

df_criterias = spark.createDataFrame(
    [
        ("IN ('ABC')", "IN ('XYZ')", "<2021", "", "Top"),
        ("IN ('ABC')", "NOT IN ('JKL','MNO')", "IN ('2021')", "", "Bottom"),
    ],
    ["CriteriaA", "CriteriaB", "CriteriaC", "CriteriaD", "Result"]
)

# Map each criteria column to the data column it applies to
column_map = {
    "CriteriaA": "ColumnA",
    "CriteriaB": "ColumnB",
    "CriteriaC": "ColumnC",
    "CriteriaD": "ColumnD"
}

# Rename rule columns and retrieve only columns defined in the dictionary above
df_criterias_renamed = df_criterias.select([col(x).alias(column_map.get(x, x)) for x in column_map.keys()])

# Set up all combinations of rules
rows_criterias = df_criterias_renamed.distinct().collect()

# Cycle through rules
for row in rows_criterias:
    filters = row.asDict()
    # Ignore blank filters; the Result column is not part of the WHERE clause
    where_clause_list = [f"{k} {v}" for k, v in filters.items() if v != "" and k != "Result"]
    # Combine all clauses together
    where_clause = " AND ".join(where_clause_list)
    print(where_clause)
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
I had the same problem. Try inserting a SliverPersistentHeader (with the same height as your AppBar widget) as the first sliver in your CustomScrollView.
The option to Move to Position is not available to users at the Stakeholder access level in this Azure DevOps organization. You may evaluate whether it is necessary to assign a more privileged license to that user, such as Basic (5 free Basic licenses), and ask the user to check again once it is granted.
Refer to the documents below for further information.
If you are using custom containers, you must somehow pass environment variables from the main Azure App Service process (Kudu) to your container. For example, in your ENTRYPOINT or CMD you could run printenv > .env, which creates a .env file with all the environment variables within App Service that Kudu knows about.
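A sketch of that entrypoint idea in a Dockerfile ("node server.js" is just a placeholder for your real start command):

ENTRYPOINT ["/bin/sh", "-c", "printenv > /app/.env && exec node server.js"]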
Try using !important for your font-family:
.bodytxt {
font-family: 'Resistance', sans-serif !important;
...
}
I used sqlite:///:localhost: and that solved it. Thanks to @rasjani for the suggestion!
If you are still struggling to find a solution, note that some Python versions don't support TensorFlow; in that case your only option is to uninstall that Python version and install a supported one.
For future readers, Postgres has a built in solution, REPLACE().
Here's it added to my CONCAT() line to accomplish the desired result:
REPLACE(CONCAT({concat_columns}), '"', '')
Configure your Keycloak client as “bearer-only” and use OWIN’s (or ASP.NET Core’s) JWT middleware to validate tokens. Set your issuer, audience, and signing key (ideally retrieved from Keycloak’s OIDC discovery endpoint) to match Keycloak’s settings. This lets your .NET MVC app validate the bearer tokens issued by Keycloak.
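For the ASP.NET Core variant, here is a minimal sketch (the realm URL and audience are assumptions; use your Keycloak values). Setting Authority lets the middleware fetch the signing keys from Keycloak's OIDC discovery endpoint:

using Microsoft.AspNetCore.Authentication.JwtBearer;

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://keycloak.example.com/realms/my-realm";
        options.Audience = "my-mvc-client";
    });

app.UseAuthentication();
app.UseAuthorization();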
Hang on.... wasn't Composer supposed to mean we had one-click installation?
So how come when I copy the link out of the Drupal module page e.g.
composer require 'drupal/blog:^3.1'
I get the error message
"Root composer.json requires drupal/blog, it could not be found in any version, there may be a typo in the package name."
And the proposed solutions are that I have to write and run code?
Blog is not the only package I'm having this problem with. Yesterday, I solved it by installing a previous version (2.x instead of the current 3.x) of a different package. And it happens on both D10 and D11, with the latest version of Composer (which D11 reports isn't the right one) and an approved PHP version.
This is silly.
PrimeFaces CSP doesn't work with f:ajax!
Use p:ajax instead!
{ "name": "Fun City Lag 11", "displayName": "Fun City Lag 11" }
react-native run-android --variant=release
Yo, so the main reason it's not working is that the stages keyword in your .gitlab-ci.yml file overwrites the stages. Since .pre is a special stage that isn't listed in .gitlab-ci.yml, it gets ignored when you define stages:.
Just add - .pre to stages:
include:
  - local: "prestage.yml"

stages:
  - .pre   # Add here
  - build
  - test

job1:
  stage: build
  script:
    - echo "This job runs in the build stage."

job2:
  stage: test
  script:
    - echo "This job runs in the test stage."
Leave the other settings as they were, and it should work.
=SUMPRODUCT((A2:A7=G1)*(B1:E1=G2)*B2:E7)
SUMPRODUCT processes arrays efficiently without needing multiple lookup functions.
Works well if the dataset is structured properly.
Just by looking at your code (I haven't tested it): your custom exception (STOP) logic currently contains a return statement directly before the exception, so the exception is never raised.
How to fix it? Remove the aforementioned return statement; a sketch of the pattern follows.
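A tiny sketch of the pattern being described (the names here are made up, not from your code):

class StopProcessing(Exception):
    pass

def validate(value):
    if value is None:
        return                                   # BUG: exits here, so the raise below never runs
        raise StopProcessing("missing value")

def validate_fixed(value):
    if value is None:
        raise StopProcessing("missing value")    # reached once the stray return is removed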