# Recommended Versions (Old Architecture)
- **react-native-reanimated**: Use **version 3.x**, such as `3.17.0` — this is the latest stable version that supports the old architecture.
- **react-native-worklets**: **Do not install this package** when using Reanimated 3.x. It is only required for Reanimated 4.x and above, which depend on the New Architecture.
# Incompatible Combinations
- **Reanimated 4.x + Worklets 0.6.x**: Requires New Architecture — will trigger `assertWorkletsVersionTask` errors.
- **Reanimated 4.0.2 + Worklets 0.4.1**: Also fails due to `assertNewArchitectureEnabledTask`.
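For the old architecture, the dependency section of `package.json` would look roughly like this (a sketch based on the versions above; pin to whichever 3.x release you have verified):

```json
{
  "dependencies": {
    "react-native-reanimated": "3.17.0"
  }
}
```

Note there is deliberately no `react-native-worklets` entry: installing it alongside Reanimated 3.x is what triggers the version-assertion errors listed above.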
This information cannot be read directly from the Bluetooth interface. Instead, the generic input event interface in Linux is used, which makes all ‘Human Interface Device (HID)’ devices available to all applications.
This is also the case in Windows and is done, of course, to ensure that the application with input focus always receives the corresponding key events.
I found the root cause: this error occurs when we have both application.yml and application-prod.yml files. However, it works fine with other names like application-muti.yml. I believe that starting from Quarkus 3.25.0, prod is treated as a special profile, particularly on Windows environments.
# Program for showing the use of one-way ANOVA test on existing dataset
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

# Visual display of the different departments
plt.figure(1, figsize=(12,8))
sns.violinplot(x='department', y='post_trg', data=training).set_title('Training Score of different departments')
# Applying ANOVA on the value of training score according to department
mktg = training[training['department']=='Marketing']['post_trg']
finance = training[training['department']=='Finance']['post_trg']
hr = training[training['department']=='Human Resource']['post_trg']
op = training[training['department']=='Operations']['post_trg']
print("ANOVA test:", stats.f_oneway(mktg, finance, hr, op))
This answer may be very late, but I've just recently built a trading application that uses https://metasocket.io/ to programmatically send commands to, and receive replies (single and data stream) from, your MetaTrader 5.
They have a Demo license that you can use to test your application before production.
Did you manage to find a solution?
I'm struggling with the same problem, unfortunately.
You can try https://formatjsononline.com.
It lets you create and edit JSON data in the browser and share it via a permanent link. Unlike GitHub Gist raw URLs (which change when you edit), the link stays constant, and you can update or regenerate the JSON whenever needed.
Did you find any way to solve this, or is manual reconnect always needed?
I have the same issue when creating a connection with the Power Automate Management connector; the user needs to reconnect or switch accounts through the UI.
You can try this out: go to Control Panel > Programs > Programs and Features,
uninstall Microsoft Visual C++ Redistributable (x86) and install Microsoft Visual C++ Redistributable (x64).
<a href="javascript:void(0);" download>download</a>
It works fine in development when you refresh, but after building and running it, refreshing shows a blank page. This happens because of the CRA (Create React App) structure. In your package.json file, change "homepage": "." to "homepage": "/". This should fix the issue.
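In package.json, the change amounts to this one field (sketch):

```json
{
  "homepage": "/"
}
```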
@Gurankas, have you found any solution to this problem? I am working with ID card detection. I also tried the solutions you have tried, but failed. Now I am trying Mask R-CNN to extract the ID from the image.
I would not load this TCA field outside a site, since it does not work anyway.
You could hide it with a condition in PageTSConfig, e.g. checking the rootlevel or pagetype (if it is never needed on a sysfolder). But I would go with site sets (if you are already on TYPO3 v13) and require this only in sites. Then it's not loaded outside a site, and you can even control loading it per site.
Your cert chain doesn't match the YubiKey's private key. Export the matching cert from YubiKey and retry.
Were you able to solve this? I am using 2.5 Pro, and due to this error the entire translation pipeline is disrupted. Retrying makes sense, but with retries the user would have to wait a long time.
As @Lubomyr mentioned, the solution depends on what you want to do.
If you want to exclude a specific user, and you want to get them dynamically without knowing their user ID beforehand, look into discord.utils.get with ctx.guild.members.
Example:
member = discord.utils.get(ctx.guild.members, name='Foo')
# member.id -> member's ID if found
To obtain the command author's ID -> ctx.author.id
To obtain the member's ID -> member.id
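For reference, discord.utils.get behaves much like this plain-Python pattern (the Member class below is a stand-in for illustration, not the real discord.py type):

```python
from dataclasses import dataclass

@dataclass
class Member:
    # Stand-in for a discord.py member object (illustration only)
    name: str
    id: int

def get_by_name(members, name):
    # Mirrors discord.utils.get(members, name=...): first match, or None
    return next((m for m in members if m.name == name), None)

members = [Member("Foo", 111), Member("Bar", 222)]
member = get_by_name(members, "Foo")
# member.id is 111 when found; get_by_name returns None when there is no match
```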
Is this coming from the variable? I'm not entirely sure.
Sometimes it occurs because the module is not imported from the proper path. Try checking these one by one:
from django.urls import path
# from sys import path
To create a hyperlink to switch between reports in Power BI Desktop:
Create a Button or Shape: Go to Home > Insert > Buttons or Shapes.
Add an Action: Select the button/shape, go to Format > Action, set Type to Page Navigation, and choose the target report page from the Destination dropdown.
Save and Test: Save the report and test the button/shape to ensure it navigates to the desired report page.
Ensure both report pages are in the same .pbix file.
Try Flexa Design Visual, Flexa Design helps you quickly build stylish, professional Power BI reports with dynamic buttons, modern layouts, and no-code styling tools. https://flexaintel.com/flexa-design
The problem was using the substr and html_entity_decode PHP functions to build the description.
These functions can break the UTF-8 encoding of the Arabic/Farsi text (substr, in particular, can cut a multibyte character in half), so it cannot be inserted and SQL returns "Incorrect string value".
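The same failure mode is easy to reproduce in Python (illustrative only; the original answer concerns PHP): slicing a UTF-8 byte string in the middle of a multibyte character produces invalid UTF-8, which is exactly what the database then rejects.

```python
text = "سلام"                 # Farsi word, 4 characters
raw = text.encode("utf-8")    # 8 bytes: each character here is 2 bytes
truncated = raw[:5]           # cuts the third character in half

try:
    truncated.decode("utf-8")
except UnicodeDecodeError:
    print("invalid UTF-8 after byte-level truncation")

# Character-aware slicing (like PHP's mb_substr) stays valid:
assert text[:2].encode("utf-8").decode("utf-8") == "سل"
```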
You can do this:
bool_x = input.bool(true, "On", active = false)
I ran across a page describing someone with a similar problem, and he created a WordPress plugin to solve it. Perhaps his code will be useful.
https://www.noio.nl/2008/10/adding-favicons-to-links/
See https://github.com/daurnimator/lua-http/blob/master/examples/simple_request.lua#L13-L50
local http_request = require "http.request"
local req = http_request.new_from_uri("https://example.org")
req.headers:upsert(":method", "POST")
req:set_body("body text")
local headers, stream = assert(req:go())
local body = assert(stream:get_body_as_string())
if headers:get ":status" ~= "200" then
error(body)
end
Just wrapping your toggle in a TRY...CATCH block is the only way to do it without a lot of code, e.g.
BEGIN TRY SET IDENTITY_INSERT MyTable ON END TRY BEGIN CATCH PRINT 'it''s already on, dummy' END CATCH
You can add an index column in Power Query.
listener 1883
protocol mqtt
listener 9001
protocol websockets
allow_anonymous true
I am trying to run a Flutter app on an iOS Simulator, but I'm getting the following error:
Runner's architectures (Intel 64-bit) include none that iPhone Air can execute (arm64).
Although the main Architectures setting in Xcode is set to arm64, the build fails because the simulator requires the arm64 architecture, and the app's build settings are somehow excluding it.
This issue is caused by a misconfiguration in Xcode's Excluded Architectures build setting for the iOS Simulator. Although the project is correctly configured to build for arm64, the Excluded Architectures setting explicitly tells Xcode to ignore the arm64 architecture for the simulator, creating a direct conflict that prevents the app from running.
To fix this, you must clear the incorrect architectures from the Excluded Architectures setting.
Open your project in Xcode.
Navigate to the Runner target by clicking on the project file in the left-hand navigator, and then selecting the Runner target.
Go to the Build Settings tab.
Use the search bar to find Excluded Architectures.
Expand the Excluded Architectures section.
Locate the row for Any iOS Simulator SDK and double-click the value to edit it.
A pop-up window will appear. Select and delete any listed architectures (e.g., i386 and arm64). The list should be completely empty.
After completing these steps, go to Product > Clean Build Folder from the Xcode menu, and then try to build and run your application on the simulator. This should resolve the architecture mismatch error.
* If you have a line like this in your Podfile, please remove it:
config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64"
You can set the start_url to "." which will always make it the current page.
Data leakage is never good when testing models; data samples should not be present in training when they can be observed in validation/testing. I examined the dataset on Kaggle, and I can assume that different individuals produce distinct signal frequencies even when performing the same gesture. Could z-score normalization be applied per-channel, per-subject, to remove the subject variance? This could remove the subject bias and prevent your models from learning subject-specific patterns instead of gesture-specific patterns. Additionally, verify that you have an even class distribution within your training data.
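A per-subject, per-channel z-score can be sketched with the standard library alone; the 'subject'/'channel'/'value' field names here are hypothetical placeholders for your dataset's columns:

```python
from statistics import mean, stdev
from collections import defaultdict

def zscore_per_subject(samples):
    """samples: list of dicts with 'subject', 'channel', 'value' keys (hypothetical schema)."""
    groups = defaultdict(list)
    for s in samples:
        groups[(s["subject"], s["channel"])].append(s["value"])
    stats = {k: (mean(v), stdev(v)) for k, v in groups.items()}
    return [
        {**s, "value": (s["value"] - stats[(s["subject"], s["channel"])][0])
                        / stats[(s["subject"], s["channel"])][1]}
        for s in samples
    ]

data = [
    {"subject": "A", "channel": 1, "value": 10.0},
    {"subject": "A", "channel": 1, "value": 14.0},
    {"subject": "B", "channel": 1, "value": 100.0},
    {"subject": "B", "channel": 1, "value": 104.0},
]
normed = zscore_per_subject(data)
# After normalization each subject has mean 0 per channel,
# so the large between-subject offset is gone.
```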
They are probably set in a system-wide zsh file, like /etc/zsh/profile, /etc/zshenv, /etc/zprofile or /etc/zshrc.
I have found, through various research papers, that the agreed-upon optimizer to use is SGD (with or without momentum).
In the official tRPC documentation, the server actions section advises using Unkey to set up rate limiting on your tRPC server.
You can find code examples of this implementation here: https://trpc.io/blog/trpc-actions#rate-limiting
So, I fixed the issue by following the "WinUI 101" guide (https://learn.microsoft.com/en-us/training/modules/winui-101/) that led me to adding the package that references everything I was missing. However, based on the content of the main guide page (https://learn.microsoft.com/en-us/windows/apps/get-started/start-here?tabs=vs-2022-17-10), something like that shouldn't really have to be done manually before the first run of the app.
For everyone having the same issue, what you've got to do besides the mentioned setup steps is:
In Visual Studio, with your project loaded, select "Tools" > "NuGet Package Manager" > "Manage NuGet Packages for Solution...". In the opened window, in the "Browse" tab, search for Microsoft.WindowsAppSDK, and install the latest version.
In Ubuntu 22.04, this problem can be avoided by enabling NOCACHE in /etc/manpath.config (uncomment the line so it takes effect):
# NOCACHE keeps man from creating cat pages.
NOCACHE
OK, the problem may be with a DLL that helps load that module. Check in windows/system32 whether you have msvcr90. If not, you need a Windows update that may not be available, but you can find this file in a DLL bank on the net: copy it there manually, then restart the computer and everything will be OK. If after that you have a problem with something like _socket, you are going to need to modify a native Python file, changing some path.
Good luck
In my case I just deleted the Gemfile.lock and retried the bundle install.
This works for me as of Ionic 8
ion-input input {
@apply px-4 !important;
}
Firstly, we need to diagnose the root cause: Dask does not automatically spill to disk on joins. Dask can handle larger-than-memory datasets, but it still relies on having enough memory for computations and intermediate results.
It is also possible that the dataframe exceeded available memory before it could be written to disk.
Optimizing the existing Dask code: repartitioning (df.repartition(npartitions=10)) can help, but the number of partitions should be chosen carefully; too many partitions can increase overhead.
Early filtering can also help: filter the dataframes before the merges to reduce the overall size of the data. For example, if you need data for a specific date range, filter on that date range before merging.
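The payoff of early filtering can be illustrated with a toy merge in plain Python (hypothetical data; the same principle applies to Dask's merge): filtering one side down to the date range of interest before joining shrinks every intermediate result.

```python
from datetime import date

# Hypothetical inputs standing in for two large dataframes
left = [{"id": i, "d": date(2024, 1 + i % 12, 1)} for i in range(1000)]
right = [{"id": i, "v": i * 2} for i in range(1000)]

# Filter BEFORE the merge: only January rows survive the join
jan = [r for r in left if r["d"].month == 1]
rv = {r["id"]: r["v"] for r in right}
merged = [{**r, "v": rv[r["id"]]} for r in jan if r["id"] in rv]

print(len(left), "rows reduced to", len(merged), "joined rows")
```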
I remember having this problem, but in a vanilla JS project, not React.
The source of the problem was the fact that Swiper must take a unique DOM element to be initialized.
Look at the initial part:
https://swiperjs.com/get-started#initialize-swiper
So imagine on a page, you have two or three Swipers.
If you use the ".swiper" class for all the Swipers on the page, you would face these kinds of problems:
One Swiper won't work (like your problem),
Maybe multiple Swipers work simultaneously.
They are not independent, because they are all initialized by the same DOM element (.swiper).
Summary:
I know the thing I said is not React-based, but the system of Swiper works like this: Each Swiper, a unique DOM element.
One easy solution is to use this API
Did you find a solution to this? Could you please share it?
Changing the dart_style version fixed it for me.
The conflict appeared when I copied some files from one project to the other.
There's hardly any SOCKS5 client which supports UDP ASSOCIATE. Browsers don't support it, cURL doesn't support it. I don't know any software which supports it; even messengers which support proxies don't use it for calls. When I needed to test it, I had to write my own client.
The ESP32 has 520 KB of SRAM, which is mainly used to store variables and handle real-time tasks. This memory is volatile.
The FLASH memory, on the other hand, is 4 MB and non-volatile, used to store code.
When you run:
static char* buffer = new char[8192];
you are forcing this variable to be stored in FLASH memory instead of SRAM, which would be the usual place.
I switched from the Kotlin build to the Groovy build, and that seems to have fixed the issue.
Silly as it may sound, I found CSVs written by Excel to have this trouble, and VSCode can't sort it out! IntelliJ clearly shows the issue.
12 years later we have a solution for this issue with
text-box: cap alphabetic;
It is not yet supported by all major browsers, but hopefully should be in the future.
More information on https://developer.mozilla.org/en-US/docs/Web/CSS/text-box
To scale Baileys for 1K–5K concurrent sessions, prioritize horizontal scaling on EC2 Auto Scaling Groups (ASGs) over pure vertical scaling or ECS Fargate, as Baileys' stateful WebSocket nature (in-memory auth and event handling) benefits from sticky routing and fast shared-state access. Use EC2 for better control over long-running processes and reconnections. Combine Redis (sharded for scale) with DynamoDB for persistence. Implement health checks and periodic restarts to prevent 48-hour drops. For auto-scaling, use a central session registry (e.g., in DynamoDB) to assign new sessions to nodes dynamically.
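A central session registry can be sketched in plain Python (in-memory here; in the real setup the table would live in DynamoDB, as suggested above): new sessions go to the least-loaded node.

```python
class SessionRegistry:
    """Minimal sketch: maps session ids to nodes, least-loaded node first."""
    def __init__(self, nodes):
        self.assignments = {}              # session_id -> node
        self.load = {n: 0 for n in nodes}  # node -> active session count

    def assign(self, session_id):
        node = min(self.load, key=self.load.get)
        self.assignments[session_id] = node
        self.load[node] += 1
        return node

    def release(self, session_id):
        node = self.assignments.pop(session_id)
        self.load[node] -= 1

reg = SessionRegistry(["node-a", "node-b"])
for i in range(4):
    reg.assign(f"wa-session-{i}")
# Sessions end up balanced 2/2 across the two nodes
```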
Well, for starters, you get two different types of objects back. There may be situations where this won't bother you later, because the ecosystem is permeable. However, that may not always be the case.
This is a late answer but you should have a look at this post:
https://datakuity.com/2022/10/19/optimized-median-measure-in-dax/
Try changing your server port to another one: if it's localhost:5000, change it to localhost:8000. It might work.
I had the same error, my latest version of that file 'C:\masm32\include\winextra.inc' contained square brackets and ml.exe version 14 requires parentheses. Found the answer here MASM 14: constant expected in winextra.inc - hope this helps
Since JPEG doesn't support transparency, the transparent areas have to be flattened onto a solid background before saving. I would suggest manually compositing a black background and converting to RGB (JPEG cannot store an alpha channel) before saving:
black_bg = Image.new("RGBA", img.size, "black")
final_img = Image.alpha_composite(black_bg, img).convert("RGB")
I am also facing the same issue, and according to the version compatibility matrix (https://docs.swmansion.com/react-native-reanimated/docs/guides/compatibility/) it should not happen.
I think the issue you're experiencing with deep-email-validator on AWS is likely due to outbound port restrictions on SMTP ports (typically 25, 465, or 587) used for mailbox verification. AWS EC2 instances block port 25 by default to prevent spam, and ports 465/587 may require explicit security group rules or EC2 high-throughput quota requests for unblocking. This prevents the library's SMTP probing step, causing all validations to fail after basic syntax/MX checks. Similar issues occur on other cloud platforms like GCP or Azure with firewall rules.
// (replace deep-email-validator usage):
const validator = require('validator');
const dns = require('dns').promises;
async function validateEmail(email) {
// Syntax check
if (!validator.isEmail(email)) {
return { valid: false, reason: 'Invalid syntax' };
}
try {
// MX record check (ensures domain can receive email)
const domain = email.split('@')[1];
const mxRecords = await dns.resolveMx(domain);
if (mxRecords.length === 0) {
return { valid: false, reason: 'No MX records (invalid domain)' };
}
return { valid: true, reason: 'Syntax and MX valid' };
} catch (error) {
return { valid: false, reason: `DNS error: ${error.message}` };
}
}
// Usage
validateEmail('[email protected]').then(result => console.log(result));
You could use a join,
right = df.select(pl.row_index("index")+1, pl.col("ref").alias("ref[index]"))
df.join(right, left_on="idx", right_on="index")
When comparing two values by using <, >, ==, !=, <= or >= (sorry if I missed one), you don't need to use:
num1 : < num2
You can just use:
num1 < num2
This is true for at least C, C++, Python and JavaScript; I haven't used other languages.
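In other words, a comparison expression already evaluates to a boolean on its own (Python shown; the same holds in the other languages mentioned):

```python
num1, num2 = 3, 5
result = num1 < num2   # no extra ':' or keyword needed
print(result)          # True

if num1 < num2:        # usable directly as a condition
    print("num1 is smaller")
```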
Please have a look at this post for a much simplified version. It has some key takeaways which can help solve slicing questions without even writing it down.
Leave a comment if you find this post helpful.
ggsurvplot(
fit,
data = data,
fun = "event",
axes.offset = FALSE)
I removed the translucent prop from StatusBar, and it works fine.
Have you ever looked at the developer tools' Network tab when the error happens?
pop open Network tab
redo whatever breaks it (reload / trigger request)
find the failed one (should be azscore.co.it), click it
check the Response Headers; you'll probably see something like:
HTTP/1.1 403 Forbidden
Cross-Origin-Embedder-Policy: require-corp
sometimes there’s also X-Blocked-By: Scraping Protection or just some salty error text in the response body
I think what you need is at minute 3:43.
All credit and thanks go to Chandeep.
Try updating your Node.js using nvm and then try building it. That solved it in my case.
Does anybody know what could be the reasons that I am actually NOT getting this type error in my local vscode setup?
I am using the latest typescript version 5.9.2 and I made sure that my vscode actually uses that version and the tsconfig from my local workspace.
strict mode is set to true and yet I am not getting that type error...
What other tsconfig settings could have an influence on this behaviour?
Simply put: check your Java version. If it is too high, downgrade it and it will work.
Basically there are 2 main differences.
:root has higher specificity than html (find more about specificity here).
:root can also be used when styling other document languages (such as SVG), since CSS is not limited to HTML.
You want to deserialize a structure that does not correspond to your data.
You write:
Dictionary<string, T> results = JsonConvert.DeserializeObject<Dictionary<string,T>>(jsonString);
This line says that you want to deserialize a JSON like this (consider T is int):
{
"a": 1,
"b": 2,
"c": 3,
"d": 4,
}
This will work for the line described: https://dotnetfiddle.net/6l3J9Q
But in your case, you have an interface that can't be resolved without a little help.
You can see in this sample what is different: https://dotnetfiddle.net/XbmKeO
When you deserialize an object with an interface property, you need an indication of which type the converter should deserialize to.
Please read this article, which explains it very well: Using Json.NET converters to deserialize properties
Sidenote:
for those who prefer C++, this sort of thing will also work. I tried it:
#include <iostream>
#define RED "\x1b[31m"
#define RESET "\x1b[0m"
int main() {
std::cout << RED << "a bunch of text" << RESET ;
return 0;
}
Actually the best way at the moment (Sep 2025) is to use the active_admin_assets gem.
It seems you haven't defined the $JAVA variable.
Add this near the top of the script:
JAVA="${JAVA:-java}"
or explicitly set it
JAVA="/usr/bin/java"
It seems adding
"compilerOptions": {
"esModuleInterop": true,
}
in my tsconfig.json resolved the issue.
This seems to be a code-analysis issue on PyCharm's side, so there is no need to fix it if everything works fine when run.
If it really bothers you, you could disable the inspection in PyCharm: Preferences -> Editor -> Inspections
If you cannot see Unicode characters correctly in the console (when you run), do this:
Settings -> Editor -> General -> Console, and set the default encoding to UTF-8
When you use @FeignClient(configuration = FeignEmptyConfig.class), Spring doesn't automatically recognize the beans from the parent class (FeignLogConfig), because Spring's component scanning doesn't work with class inheritance in this specific context.
Your edit points to the right solution - using @Import annotation to handle this scenario:
@Import(FeignLogConfig.class)
public class FeignEmptyConfig {
}
Alternatively, you could define your Feign client with both configurations:
@FeignClient(
value = "emptyClient",
url = "${service.url}",
configuration = {FeignEmptyConfig.class, FeignLogConfig.class}
)
public interface YourClient {
// methods
}
In modern browsers, I found that using container queries was the best way forward.
First, we need to identify the element that is going to be the outermost element spanning from screen edge to screen edge. In 99.9% of cases, this will be the body tag. More accurately, we are looking for the page's scroll container.
body {
container-type: inline-size;
container-name: viewport; /* yes, we creatively named it 'viewport' */
}
@container viewport (width > 0) {
.w-screen {
width: 100cqw;
}
}
Then, we can easily use the w-screen class to make a container use the width of the body (our 'viewport' container).
---
For those who use Tailwind, there is already a w-screen utility class which suffers from the same problem, so add this to your global CSS:
body {
@apply @container/viewport;
}
@layer utilities {
.w-screen {
@container viewport (width > 0) {
width: 100cqw;
}
}
}
I'm using this answer for inspiration
100vw causing horizontal overflow, but only if more than one?
I'm facing an error: my Vite and Tailwind CSS are not syncing properly even though all the setup is correct. When I put the same code into playcode.io it gives the expected output, but in VS Code it renders weirdly and not as expected. HMR is loading properly, but I've been stuck on this for a week now and can't solve it; none of the OpenAI models have helped properly either.
<div className="bg-gradient-to-tr from-blue-400 to-pink-400 h-screen w-screen flex flex-col items-center">
<div className="bg-white p-10 rounded-xl my-auto hover:shadow-2xl w-84">
<h1 className="text-blue-400 font-sans text-3xl font-medium text-center mb-16">
Todo List
</h1>
<div className="flex flex-row mb-6">
<input
type="text"
placeholder="Enter Your Task...."
className="border border-gray-300 p-2 rounded-l-xl placeholder:text-gray-400 flex-grow placeholder:px-1 focus:outline-none"
/>
<button className="bg-blue-400 text-white p-2 hover:bg-blue-300 rounded-r-xl font-medium">
Add
</button>
</div>
<ul className="bg-gray-200 ">
<li>
<div className="flex justify-between items-center">
<input type="checkbox" className=""></input>
<p>Sample Task</p>
<button className="bg-red-500 py-2 px-4 rounded-lg">Delete</button>
</div>
</li>
</ul>
</div>
</div>
You're unable to fake static methods using FakeItEasy (extension methods too, because they are also static). If you need logic like this, you need to think about the proxy pattern or use Typemock Isolator.
I had to uninstall cocoapods from gems and HomeBrew:
sudo gem uninstall cocoapods
brew uninstall cocoapods
Then, use brew to install:
brew install cocoapods
After this, restart your IDE and/or Terminal.
I think the issue occurs because Mendeley Desktop did not close properly, leaving a background process still running.
I’ve encountered the same situation myself.
As far as I know, the only solution is to manually kill the Mendeley process running in the background.
I think the answer is to run the query this way:
SELECT TABNAME FROM SYSIBMADM.ADMINTABINFO WHERE TABSCHEMA = 'LIBRAT' AND REORG_PENDING <> 'N';
Because the value of that column can be either 'Y' (reorg pending) or 'C' (check pending), both of which mean an operation is pending (SQLSTATE 57007).
As you can see, I've already set up the business phone number, but I'm still not receiving the test message on my WhatsApp number from the prod number.
You can get the ApiVersion in the endpoint
To do this, use httpContext.GetRequestedApiVersion();
https://github.com/dotnet/aspnet-api-versioning/wiki/Accessing-Version-Information
Example:
app.MapPost("/create", ... (HttpContext httpContext ...) =>
{
ApiVersion apiVersion = httpContext.GetRequestedApiVersion();
...
});
I'm Percy. It's nice to meet you.
I was told a quest isn't a quest until you've said so? Which is weird considering you're a Halloween decoration.
Oh, geez. You seem busy. I'll come back.
Whoa. Come on, really?
You shall go west and face the god who has turned. And you shall find what was stolen and see it safely returned.
The Oracle has confirmed what we expected, that this quest will proceed toward the Underworld, where you will confront the god who has rebelled against his brothers.
Hades.
If you are willing to write a tiny bit of code, this library will allow you to simulate anything you want from a slave device: https://github.com/SiemensEnergy/c-modbus-slave
A change has been committed, please see https://github.com/ITfoxtec/ITfoxtec.Identity.Saml2/issues/256
you can use this:
numpy==1.24.4
opencv-python==4.5.5.64
I took some time to make it work (or at least address some major issues) in an online compiler. Here are my findings:
This was an easy problem to fix. As I already mentioned in my comment, these formulas only work in the International System of Units (SI units), and when you scale down meters for your simulation (to avoid getting huge numbers in your rendering logic, I assume, or to make them easier to read), you would also have to scale down everything else.
That is because many formulas are not linear (for example, the gravity experienced by an object depends on the square of the distance [1]), so if you halve the simulation distance and halve your simulation mass, the result doesn't match anymore.
Therefore, I'd strongly recommend against scaling at all, at least for the physics.
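This can be checked numerically (plain Python; rough Sun/Pluto figures used only for illustration): halving both masses and the distance leaves the gravitational force unchanged rather than halved, so a uniformly scaled-down system does not behave like a scaled-down version of the real one.

```python
G = 6.674e-11  # real gravitational constant, SI units

def gravity(m1, m2, r):
    # Newtonian gravity: F = G * m1 * m2 / r^2
    return G * m1 * m2 / r**2

f_full = gravity(1.989e30, 1.309e22, 4.4e12)               # rough Sun/Pluto figures
f_half = gravity(1.989e30 / 2, 1.309e22 / 2, 4.4e12 / 2)   # everything scaled by 1/2

# Naive expectation under "scale everything by 1/2": f_full / 2.
# Actual: the masses contribute a factor 1/4, the r^2 contributes x4,
# so the two forces are equal and the scaling is inconsistent.
assert abs(f_half - f_full) <= 1e-12 * f_full
```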
In my project, I have separated rendering and physics. You can define a factor (like 1 AU or ~10^-12, depending on your needs). For quick testing, I defined a 1 AU factor and applied it to your data (and my made-up data):
const astronomicalUnit = 1.496 * (10**11);
const bodies = [
{
position: [0, 0.8 * astronomicalUnit],
velocity: [0, 0],
force: [0, 0],
mass: 1.989 * 10 ** 30,
radius: 3.5,
trailPositions: [],
colour: [1, 1, 0.8, 1],
parentIndex: -1,
name: "Sun",
},
{
position: [29.7 * astronomicalUnit, 0], // Approximate distance at lowest point in orbit
velocity: [0, 6.1 * 10 ** 3], // Orbital speed in m/s (approximate data for lowest point aswell)
force: [0, 0],
mass: 1.309 * 10 ** 22, // kg
radius: 0.0064, // Scaled radius for visualization; actual radius ~1.1883 × 10^6 m
trailPositions: [],
colour: [0.6, 0.7, 0.8, 1], // Pale bluish-grey
parentIndex: 0, // Assuming Sun is index 0
name: "Pluto", // I picked pluto because it has well known orbital data and an eccentricity of 0.25,
// which should make the second focus visually distinct
}
];
And after all calculations are done, you can divide out the same factor to get a more readable and renderable result. It also enables you to use even more real-world constants:
const gravity = 6.674 * (10**(-11)); // real world gravitational constant
findSecondFocus(1);
// Within findSecondFocus:
console.log("Semi-major axis:", (a / astronomicalUnit)); // Instead of printing a directly, for example
This already fixes the calculation of the semi-major axis!
To summarize: use realistic values if you want realistic results (alternatively, experiment to find consistent values for an alternate universe, but that will take time and prevent you from just looking up data). Most relevant for your project: meters, kilograms and seconds.
Here:
// The eccentricity vector formula is: e = (v × h)/μ - r/|r|
const rvDot = relativeSpatiumVector[0] * relativeVelocityVector[0] +
relativeSpatiumVector[1] * relativeVelocityVector[1];
You write cross product in your comment, but use the dot product.
You also use the dot product to calculate h, the angular momentum vector.
Unfortunately, it takes quite a bit of effort to fix this one.
The cross product of two vectors produces a vector perpendicular to both input vectors [2]. Where does it go for 2D vectors? Outside of your plane of simulation.
That's quite unfortunate, but we can cheese our way around it.
First, I made some helpers for both 2D and 3D cross products:
// Separate definition of a cross-product helper, so the code is easier to read
function cross2D(a, b) {
return a[0] * b[1] - a[1] * b[0];
}
function cross3D(a, b) {
return [
a[1] * b[2] - a[2] * b[1],
a[2] * b[0] - a[0] * b[2],
a[0] * b[1] - a[1] * b[0]
];
}
Then, I replaced the code for the eccentricity vector calculation; I'll explain afterwards:
// The eccentricity vector formula is: e = (v × h)/μ - r/|r|
const rUnit = [
relativeSpatiumVector[0] / r,
relativeSpatiumVector[1] / r
];
const angular_z = cross2D(relativeSpatiumVector, relativeVelocityVector);
const angularMomentumVector = [0,0,angular_z]; // This is the "h"
const liftedVelocityVector = [relativeVelocityVector[0], relativeVelocityVector[1], 0];
const vxh = cross3D(liftedVelocityVector, angularMomentumVector);
const eccentricityVector = [
vxh[0] / mu_sim - rUnit[0],
vxh[1] / mu_sim - rUnit[1],
]; // (v × h)/μ - r/|r|
Your rUnit looked fine, so I reused it. I created an angular momentum 3D vector angularMomentumVector by assuming everything on the 2D plane to be zero, which I can do because it has to be perpendicular to two vectors on this plane.
Then, we need to get the velocity into 3D (liftedVelocityVector) as well. That's easy, because it just doesn't move in the z direction.
Then, we get the cross product in vxh, and can finally apply the formula you already had in your comment.
We can ignore the z component (vxh[2]), because the cross product must be perpendicular to angularMomentumVector, which only has a z component.
Everything else in your code was perfectly fine, so well done!
With the data from earlier in the answer and these updated console logs:
console.log("Second Focus coordinates:", secondFocus[0] / astronomicalUnit, ", ", secondFocus[1] / astronomicalUnit);
console.log("Eccentricity:", eccentricityScalar);
console.log("Semi-major axis:", (a / astronomicalUnit));
I get these results:
Second Focus coordinates: -19.369704292780035 , -1.321742876573199
Eccentricity: 0.2472841556295451
Semi-major axis: 39.39913738651615
Compared to the Wikipedia data, that's ~0.0015 off in eccentricity and ~0.083 AU off in the semi-major axis. I blame the inaccuracy on my rounded input data and the fact that we clipped off its entire inclination.
I could not find a reference value for the second focus, but it seems plausible.
Thanks for the fun challenge, and good luck with your project!
Academic integrity means being honest and responsible in your studies. It includes respecting the work of others, avoiding cheating, and giving credit to sources when you use their ideas. Students with academic integrity show fairness, trust, and responsibility. Plagiarism, copying, or using unfair methods harms both the student and the learning process. Integrity also means completing assignments with your own effort, being truthful in exams, and respecting the rules of your school or university. It helps build strong character and prepares students for future careers. Academic integrity creates trust between teachers and students, and it encourages real learning. When students practice integrity, they not only succeed academically but also develop values that last for life.
Maybe you started the server before writing the Timeentries model in your app. If you have written the Timeentries model definition, can you please share it? Thanks.
You could try to use dbus-monitor (notifications are sent via D-Bus, so you can capture them in some Python/C/Rust/anything wrapper).
So the key command is:
dbus-monitor --session "destination='org.freedesktop.Notifications'"
See also some notification encoding:
https://specifications.freedesktop.org/notification-spec/1.3/protocol.html
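As a minimal sketch of such a wrapper (the line-parsing regex and function names are my own, and dbus-monitor's exact output format can vary slightly between versions), you could run the command as a subprocess and pull out the string arguments of each captured call:

```python
import re
import subprocess

# Matches the `string "..."` argument lines that dbus-monitor prints.
STRING_ARG = re.compile(r'^\s*string "(.*)"$')

def extract_string(line):
    """Return the payload of a dbus-monitor string-argument line, else None."""
    m = STRING_ARG.match(line)
    return m.group(1) if m else None

def watch_notifications():
    # Same capture rule as the dbus-monitor command shown above.
    proc = subprocess.Popen(
        ["dbus-monitor", "--session", "destination='org.freedesktop.Notifications'"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        payload = extract_string(line)
        if payload is not None:
            print("notification field:", payload)
```

Note that a Notify call carries the app name, summary, and body as consecutive string arguments, so you'd still need to group them per call according to the spec linked above.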
Best regards
I recommend using this library:
https://github.com/ShawnLin013/NumberPicker
Obviously, using ValueTask<T> is more memory-efficient than Task<T> at large scale, because it is a struct. But it also has restrictions: for example, you must not consume a ValueTask<T> returned from a method more than once. It is mainly beneficial when an operation in an async context often completes synchronously, for example applying an atomic database transaction, which is a synchronous operation that may nevertheless sit in an async call path.
import streamlit as st
import time
import uuid
from datetime import datetime
import json

# Page configuration
st.set_page_config(
    page_title="AI ChatBot Assistant",
    page_icon="🤖",
    layout="wide",
    initial_sidebar_state="expanded"
)

# Custom CSS for ChatGPT-like styling
st.markdown("""
<style>
.main-container { max-width: 1200px; margin: 0 auto; }
.chat-message { padding: 1rem; border-radius: 10px; margin-bottom: 1rem; word-wrap: break-word; }
.user-message { background-color: #f0f0f0; margin-left: 20%; border: 1px solid #ddd; }
.assistant-message { background-color: #e3f2fd; margin-right: 20%; border: 1px solid #bbdefb; }
.chat-header { text-align: center; padding: 1rem 0; border-bottom: 2px solid #e0e0e0; margin-bottom: 2rem; }
.sidebar-content { padding: 1rem 0; }
.input-container { position: sticky; bottom: 0; background-color: white; padding: 1rem 0; border-top: 1px solid #e0e0e0; }
.action-button { background-color: #1976d2; color: white; border: none; padding: 0.5rem 1rem; border-radius: 5px; cursor: pointer; margin: 0.25rem; }
.action-button:hover { background-color: #1565c0; }
.speech-button { background-color: #4caf50; color: white; border: none; padding: 0.75rem; border-radius: 50%; cursor: pointer; font-size: 1.2rem; margin-left: 0.5rem; }
.speech-button:hover { background-color: #45a049; }
.speech-button.listening { background-color: #f44336; animation: pulse 1s infinite; }
@keyframes pulse { 0% { opacity: 1; } 50% { opacity: 0.5; } 100% { opacity: 1; } }
.status-indicator { padding: 0.5rem; border-radius: 5px; margin: 0.5rem 0; text-align: center; }
.status-listening { background-color: #ffebee; color: #c62828; }
.status-processing { background-color: #fff3e0; color: #ef6c00; }
.status-ready { background-color: #e8f5e8; color: #2e7d32; }
.chat-stats { background-color: #f5f5f5; padding: 1rem; border-radius: 10px; margin: 1rem 0; }
.export-button { background-color: #ff9800; color: white; border: none; padding: 0.5rem 1rem; border-radius: 5px; cursor: pointer; width: 100%; margin: 0.5rem 0; }
.export-button:hover { background-color: #f57c00; }
</style>
""", unsafe_allow_html=True)

# --- Unified Voice + Text Input ---
def speech_to_text_component():
    speech_html = """
    <div id="speech-container">
        <div style="display: flex; align-items: center; gap: 10px; margin-bottom: 20px;">
            <input type="text" id="speechResult" placeholder="Speak or type your message..."
                   style="flex: 1; padding: 12px; border: 2px solid #ddd; border-radius: 8px; font-size: 16px;">
            <button id="speechButton" onclick="toggleSpeechRecognition()"
                    style="padding: 12px; background-color: #4caf50; color: white; border: none;
                           border-radius: 50%; cursor: pointer; font-size: 18px; width: 50px; height: 50px;">
                🎤
            </button>
        </div>
        <div id="speechStatus" style="padding: 8px; border-radius: 5px; text-align: center;
                                      background-color: #e8f5e8; color: #2e7d32; margin-bottom: 10px;">
            Ready to listen - Click the microphone to start
        </div>
        <button onclick="submitSpeechText()" id="submitButton"
                style="padding: 12px 24px; background-color: #1976d2; color: white; border: none;
                       border-radius: 8px; cursor: pointer; font-size: 16px; width: 100%;">
            Send Message
        </button>
    </div>
    <script>
        let recognition;
        let isListening = false;

        if ('webkitSpeechRecognition' in window || 'SpeechRecognition' in window) {
            const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
            recognition = new SpeechRecognition();
            recognition.continuous = false;
            recognition.interimResults = true;
            recognition.lang = 'en-US';

            recognition.onstart = function() {
                isListening = true;
                document.getElementById('speechButton').innerHTML = '🔴';
                document.getElementById('speechButton').style.backgroundColor = '#f44336';
                document.getElementById('speechStatus').innerHTML = 'Listening... Speak now!';
                document.getElementById('speechStatus').className = 'status-listening';
                document.getElementById('speechStatus').style.backgroundColor = '#ffebee';
                document.getElementById('speechStatus').style.color = '#c62828';
            };

            recognition.onresult = function(event) {
                let transcript = '';
                for (let i = 0; i < event.results.length; i++) {
                    transcript += event.results[i][0].transcript;
                }
                document.getElementById('speechResult').value = transcript;
                if (event.results[event.results.length - 1].isFinal) {
                    document.getElementById('speechStatus').innerHTML = 'Speech captured! Click Send or Enter.';
                    document.getElementById('speechStatus').className = 'status-ready status-indicator';
                }
            };

            recognition.onerror = function(event) {
                document.getElementById('speechStatus').innerHTML = 'Error: ' + event.error;
                document.getElementById('speechStatus').className = 'status-listening status-indicator';
                resetSpeechButton();
            };

            recognition.onend = function() {
                resetSpeechButton();
            };
        } else {
            document.getElementById('speechStatus').innerHTML = 'Speech recognition not supported in this browser';
            document.getElementById('speechButton').disabled = true;
        }

        function resetSpeechButton() {
            isListening = false;
            document.getElementById('speechButton').innerHTML = '🎤';
            document.getElementById('speechButton').style.backgroundColor = '#4caf50';
            if (document.getElementById('speechResult').value.trim() === '') {
                document.getElementById('speechStatus').innerHTML = 'Ready to listen - Click the microphone to start';
                document.getElementById('speechStatus').className = 'status-indicator status-ready';
            }
        }

        function toggleSpeechRecognition() {
            if (recognition) {
                if (isListening) {
                    recognition.stop();
                } else {
                    recognition.start();
                }
            }
        }

        function submitSpeechText() {
            const text = document.getElementById('speechResult').value.trim();
            if (text) {
                window.parent.postMessage({
                    type: 'streamlit:setComponentValue',
                    value: text
                }, '*');
                document.getElementById('speechResult').value = '';
                document.getElementById('speechStatus').innerHTML = 'Message sent! Ready for next input.';
                document.getElementById('speechStatus').className = 'status-indicator status-ready';
                resetSpeechButton();
            } else {
                document.getElementById('speechStatus').innerHTML = 'Please speak or type a message first.';
                document.getElementById('speechStatus').className = 'status-listening status-indicator';
            }
        }

        document.getElementById('speechResult').addEventListener('keypress', function(e) {
            if (e.key === 'Enter') {
                submitSpeechText();
            }
        });
    </script>
    """
    return st.components.v1.html(speech_html, height=200)

def initialize_session_state():
    if "messages" not in st.session_state:
        st.session_state.messages = [
            {"role": "assistant", "content": "👋 Hello! I'm your AI assistant. How can I help you today?", "timestamp": datetime.now()}
        ]
    if "session_id" not in st.session_state:
        st.session_state.session_id = str(uuid.uuid4())
    if "user_name" not in st.session_state:
        st.session_state.user_name = "User"
    if "chat_count" not in st.session_state:
        st.session_state.chat_count = 0

def generate_ai_response(user_input):
    time.sleep(1)
    responses = {
        "hello": "Hello! Great to meet you! How can I assist you today?",
        "help": "I'm here to help! You can ask me questions, have a conversation, or use voice input by clicking the microphone button.",
        "how are you": "I'm doing great, thank you for asking! I'm ready to help with whatever you need.",
        "voice": "Yes! I support voice input. Just click the microphone button and speak your message.",
        "features": "I support text and voice input, conversation history, message export, and more. What would you like to explore?",
    }
    if isinstance(user_input, str):
        user_lower = user_input.lower()
        for key, response in responses.items():
            if key in user_lower:
                return response
        return f"Thanks for your message: '{user_input}'. This is a demo response. In a real application, connect to an AI service here."
    else:
        return "Sorry, I didn't understand that input."

def export_chat_history():
    export_data = {
        "session_id": st.session_state.session_id,
        "user_name": st.session_state.user_name,
        "export_time": datetime.now().isoformat(),
        "message_count": len(st.session_state.messages),
        "messages": [
            {
                "role": msg["role"],
                "content": msg["content"],
                "timestamp": msg["timestamp"].isoformat() if "timestamp" in msg else None
            }
            for msg in st.session_state.messages
        ]
    }
    return json.dumps(export_data, indent=2)

def main():
    initialize_session_state()

    # Header
    st.markdown('<div class="chat-header">', unsafe_allow_html=True)
    st.title("🤖 AI ChatBot Assistant")
    st.markdown("*Advanced chat interface with voice input capabilities*")
    st.markdown('</div>', unsafe_allow_html=True)

    # Sidebar
    with st.sidebar:
        st.markdown('<div class="sidebar-content">', unsafe_allow_html=True)
        st.header("⚙️ Chat Settings")
        user_name = st.text_input("Your Name:", value=st.session_state.user_name)
        if user_name != st.session_state.user_name:
            st.session_state.user_name = user_name
        st.divider()
        st.subheader("📊 Chat Statistics")
        st.markdown(f"""
        <div class="chat-stats">
            <p><strong>Messages:</strong> {len(st.session_state.messages)}</p>
            <p><strong>Session ID:</strong> {st.session_state.session_id[:8]}...</p>
            <p><strong>Started:</strong> Just now</p>
        </div>
        """, unsafe_allow_html=True)
        st.subheader("🔧 Chat Controls")
        if st.button("🗑️ Clear Chat History", type="secondary", use_container_width=True):
            st.session_state.messages = [
                {"role": "assistant", "content": "👋 Hello! I'm your AI assistant. How can I help you today?", "timestamp": datetime.now()}
            ]
            st.rerun()
        if st.button("📤 Export Chat", type="secondary", use_container_width=True):
            exported_data = export_chat_history()
            st.download_button(
                label="💾 Download Chat History",
                data=exported_data,
                file_name=f"chat_history_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
                mime="application/json",
                use_container_width=True
            )
        st.divider()
        st.subheader("ℹ️ How to Use")
        st.markdown("""
        **Text Input:** Type your message and press Enter or click Send

        **Voice Input:** Click the 🎤 microphone button and speak

        **Features:**
        - Real-time speech recognition
        - Chat history preservation
        - Message export functionality
        - Responsive design
        """)
        st.markdown('</div>', unsafe_allow_html=True)

    # Main chat area
    col1, col2, col3 = st.columns([1, 6, 1])
    with col2:
        st.markdown('<div class="main-container">', unsafe_allow_html=True)
        chat_container = st.container()
        with chat_container:
            for i, message in enumerate(st.session_state.messages):
                with st.chat_message(message["role"]):
                    st.markdown(message["content"])
                    if "timestamp" in message:
                        st.caption(f"*{message['timestamp'].strftime('%H:%M:%S')}*")
        st.markdown('</div>', unsafe_allow_html=True)

        # ---- SINGLE Input Box for both text and voice ----
        st.markdown('<div class="input-container">', unsafe_allow_html=True)
        st.subheader("🎤 Voice & Text Input")
        user_input = speech_to_text_component()  # This is now the ONLY input
        if user_input and isinstance(user_input, str) and user_input.strip():
            user_input = user_input.strip()
            st.session_state.messages.append({
                "role": "user",
                "content": user_input,
                "timestamp": datetime.now()
            })
            with st.spinner("🤔 Thinking..."):
                ai_response = generate_ai_response(user_input)
            st.session_state.messages.append({
                "role": "assistant",
                "content": ai_response,
                "timestamp": datetime.now()
            })
            st.session_state.chat_count += 1
            st.rerun()
        st.markdown('</div>', unsafe_allow_html=True)

if __name__ == "__main__":
    main()
TL;DR: If you change the algorithm in the future, you might still want to be able to decrypt old data. If you hide which algorithm was used, you won't know which one to use.
I've spent some time learning and creating my own stuff and I can share what I've learned.
In a lot of cases you will store encrypted data, like an email address, in a database, and most databases are SQL, which means fixed columns.
Encrypted data is often stored with metadata, which can differ per algorithm. Since SQL schemas are rigid, if you decide to change the algorithm in the future you would have to create a new table, or decrypt and re-encrypt everything, and neither is a good idea. The better choice is to store the encrypted data as a single string concatenated with its metadata, like:
$AES$version$encryptedData
So if you were to hide which algorithm was used, you wouldn't know which one to use to decrypt the data.
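As an illustration of that $algorithm$version$data layout (pack and unpack are my own hypothetical helpers, not from any library), it could look like:

```python
import base64

def pack(algorithm: str, version: int, ciphertext: bytes) -> str:
    """Prefix the ciphertext with the metadata needed to decrypt it later."""
    return f"${algorithm}${version}${base64.b64encode(ciphertext).decode()}"

def unpack(stored: str):
    """Split a $algorithm$version$data string back into its parts."""
    _, algorithm, version, data = stored.split("$", 3)
    return algorithm, int(version), base64.b64decode(data)
```

When you later rotate from, say, AES version 1 to version 2, unpack() tells you which decryption routine to run on each stored row.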
Here are the 3 runs with your code (with the same model, i.e. gemini-2.5-flash) and different prompts:
1st run: your prompt (What's my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
What's my name?
================================== Ai Message ==================================
I'm sorry, I don't have memory of past conversations. Could you please tell me your name again?
2nd run: prompt (Do you know my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
Do you know my name?
================================== Ai Message ==================================
Yes, your name is Bob.
3rd run: prompt (Do you remember my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Yes, I do, Bob!
As you can see, it does have the chat history/memory.
So why does “What’s my name?” fail while “Do you know/remember my name?” works?
Gemini (and most LLMs) does not have “structured” memory unless we feed it back.
When you ask “What’s my name?”, the model interprets it literally as a knowledge recall task. Since it doesn’t have an internal persistent memory store, it defaults to “I don’t know your name.”
When you ask “Do you know my name?” or “Do you remember my name?”, the model interprets this more conversationally and looks at the immediate chat history in the same request, so it correctly extracts “Bob”.
So this is not LangGraph memory failing; it's model behavior specific to Gemini.
The example in the official documentation, https://python.langchain.com/docs/tutorials/agents/, uses anthropic:claude-3-5-sonnet-latest, which behaves differently from Gemini models.
Here's another example with the exact same code but a different model, llama3.2:latest from Ollama.
import os
from langchain_tavily import TavilySearch
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
from dotenv import load_dotenv

load_dotenv()
os.environ.get('TAVILY_API_KEY')

search = TavilySearch(max_results=2)
tools = [search]

model = ChatOllama(model="llama3.2:latest", temperature=0)
memory = MemorySaver()
agent_executor = create_react_agent(model, tools, checkpointer=memory)

# Same thread_id for continuity
config = {"configurable": {"thread_id": "agent003"}}

# First turn
for step in agent_executor.stream(
    {"messages": [HumanMessage("Hi! I am Bob!")]}, config, stream_mode="values"
):
    step["messages"][-1].pretty_print()

# Second turn – no need to fetch history yourself
for step in agent_executor.stream(
    {"messages": [HumanMessage("what's my name?")]}, config, stream_mode="values"
):
    step["messages"][-1].pretty_print()
output:
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Tool Calls:
....
================================= Tool Message =================================
Name: tavily_search
....
================================== Ai Message ==================================
Your name is Bob! I've found multiple individuals with the name Bob, including Bob Marley, B.o.B, and Bob Iger. Is there a specific Bob you're interested in learning more about?
In my case it was not incorrect nesting of HTML tags; it was caused by some browser extensions, as I learned from a Reddit thread. I just disabled them and the error/warning disappeared.
You can also run the application in incognito mode to check.
https://www.reddit.com/r/nextjs/comments/1ims6u7/im_getting_infinite_hydration_error_in_nextjs_and/