There's hardly any SOCKS5 client that supports UDP ASSOCIATE. Browsers don't support it, and cURL doesn't support it. I don't know of any software that supports it — even messengers that support proxies don't use it for calls. When I needed to test it, I had to write my own client.
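For reference, a minimal sketch of the client-side datagram encapsulation in Python (the helper name is mine; the header layout follows RFC 1928, section 7 — a real client must first open a TCP control connection, authenticate, and send the UDP ASSOCIATE request to learn the relay address before sending these datagrams):

```python
import socket
import struct

def socks5_udp_wrap(payload: bytes, host: str, port: int) -> bytes:
    """Prepend the SOCKS5 UDP request header:
    RSV(2) FRAG(1) ATYP(1) DST.ADDR DST.PORT, followed by the data."""
    try:
        addr = socket.inet_aton(host)           # IPv4 literal (ATYP=1)
        header = struct.pack("!HBB", 0, 0, 1) + addr
    except OSError:
        raw = host.encode()                     # domain name (ATYP=3)
        header = struct.pack("!HBB", 0, 0, 3) + bytes([len(raw)]) + raw
    return header + struct.pack("!H", port) + payload

# Example: wrap a payload destined for 1.2.3.4:53
datagram = socks5_udp_wrap(b"hello", "1.2.3.4", 53)
```

The resulting bytes are what you send to the UDP relay address returned in the UDP ASSOCIATE reply; replies from the relay carry the same header, which you strip off.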
The ESP32 has 520 KB of SRAM, which is mainly used to store variables and handle real-time tasks. This memory is volatile.
As for the flash memory, it has 4 MB; it is non-volatile and is used to store code.
When you run:
static char* buffer = new char[8192];
you are forcing this variable to be stored in flash memory instead of SRAM, which would be the usual place.
I switched from the Kotlin build to the Groovy build, and that seems to have fixed the issue.
Silly as it may sound, I found that Excel-written CSVs have this trouble and VS Code can't sort it out! IntelliJ clearly shows the issue.
12 years later, we have a solution for this issue:
text-box: cap alphabetic;
It is not yet supported by all major browsers, but hopefully it will be in the future.
More information: https://developer.mozilla.org/en-US/docs/Web/CSS/text-box
Hope you're well. To scale Baileys for 1K–5K concurrent sessions, prioritize horizontal scaling on EC2 Auto Scaling Groups (ASGs) over pure vertical scaling or ECS Fargate, since Baileys' stateful WebSocket nature (in-memory auth and event handling) benefits from sticky routing and fast shared-state access. Use EC2 for better control over long-running processes and reconnections. Combine Redis (sharded for scale) with DynamoDB for persistence. Implement health checks and periodic restarts to prevent 48-hour drops. For auto-scaling, use a central session registry (e.g., in DynamoDB) to assign new sessions to nodes dynamically.
Well, for starters, you get two different types of objects back. There may be situations where this won't bother you later, because the ecosystem is permeable. This may, however, not always necessarily be the case.
This is a late answer but you should have a look at this post:
https://datakuity.com/2022/10/19/optimized-median-measure-in-dax/
Try changing your server port to some other port: if it's localhost:5000, change it to localhost:8000. It might work.
I had the same error: my latest version of the file 'C:\masm32\include\winextra.inc' contained square brackets, and ml.exe version 14 requires parentheses. Found the answer here: MASM 14: constant expected in winextra.inc. Hope this helps.
Since JPEG doesn't support transparency, Pillow fills those transparent areas with white by default. I would suggest manually compositing the image onto a black background (and dropping the alpha channel) before saving to JPEG:
black_bg = Image.new("RGBA", img.size, "black")
final_img = Image.alpha_composite(black_bg, img).convert("RGB")
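Putting it together, a small end-to-end sketch (the helper name is mine; assumes Pillow is installed — the `convert("RGB")` step is required because JPEG cannot store an alpha channel):

```python
from io import BytesIO
from PIL import Image

def flatten_to_jpeg(img: Image.Image, background="black") -> bytes:
    """Composite an RGBA image onto an opaque background and encode as JPEG."""
    bg = Image.new("RGBA", img.size, background)
    flat = Image.alpha_composite(bg, img.convert("RGBA")).convert("RGB")
    buf = BytesIO()
    flat.save(buf, format="JPEG", quality=90)
    return buf.getvalue()
```

Fully transparent pixels come out as the chosen background color instead of Pillow's default white.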
I am also facing the same issue, and according to the version compatibility matrix (https://docs.swmansion.com/react-native-reanimated/docs/guides/compatibility/) it should not happen.
I think the issue you're experiencing with deep-email-validator on AWS is likely due to outbound restrictions on the SMTP ports (typically 25, 465, or 587) used for mailbox verification. AWS EC2 instances block port 25 by default to prevent spam, and ports 465/587 may require explicit security group rules or a quota request to unblock. This prevents the library's SMTP probing step, causing all validations to fail after the basic syntax/MX checks. Similar issues occur on other cloud platforms like GCP or Azure with firewall rules.
// (replace deep-email-validator usage):
const validator = require('validator');
const dns = require('dns').promises;

async function validateEmail(email) {
  // Syntax check
  if (!validator.isEmail(email)) {
    return { valid: false, reason: 'Invalid syntax' };
  }
  try {
    // MX record check (ensures domain can receive email)
    const domain = email.split('@')[1];
    const mxRecords = await dns.resolveMx(domain);
    if (mxRecords.length === 0) {
      return { valid: false, reason: 'No MX records (invalid domain)' };
    }
    return { valid: true, reason: 'Syntax and MX valid' };
  } catch (error) {
    return { valid: false, reason: `DNS error: ${error.message}` };
  }
}

// Usage
validateEmail('[email protected]').then(result => console.log(result));
You could use a join:
right = df.select(pl.row_index("index")+1, pl.col("ref").alias("ref[index]"))
df.join(right, left_on="idx", right_on="index")
When comparing two values using <, >, ==, !=, <= or >= (sorry if I missed one), you don't need to use:
num1 : < num2
You can just use:
num1 < num2
This is true for at least C, C++, Python, and JavaScript; I haven't used other languages.
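For example, in Python the comparison is itself a complete expression that evaluates to a boolean, so it can be stored or used directly in a condition:

```python
num1, num2 = 3, 7

# The comparison is already a full expression; no extra punctuation needed.
is_smaller = num1 < num2
print(is_smaller)  # → True

# It can also be used directly as a condition:
if num1 < num2:
    print("num1 is smaller")
```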
Please have a look at this post for a much simpler version. It has some key takeaways that can help solve slicing questions without even writing anything down.
Leave a comment if you find this post helpful.
ggsurvplot(
  fit,
  data = data,
  fun = "event",
  axes.offset = FALSE
)
I removed the translucent prop from StatusBar and it works fine now.
Have you ever looked at the developer tools Network tab when the error happens?
pop open Network tab
redo whatever breaks it (reload / trigger request)
find the failed one (should be azscore.co.it), click it
check Response Headers — you’ll prob see something like:
HTTP/1.1 403 Forbidden
Cross-Origin-Embedder-Policy: require-corp
sometimes there’s also X-Blocked-By: Scraping Protection or just some salty error text in the response body
I think what you need is at minute 3:43.
All credit and thanks go to Chandeep.
Try updating your Node.js using nvm and then try building again. That solved it in my case.
Does anybody know what could be the reasons that I am actually NOT getting this type error in my local vscode setup?
I am using the latest typescript version 5.9.2 and I made sure that my vscode actually uses that version and the tsconfig from my local workspace.
strict mode is set to true and yet I am not getting that type error...
What other tsconfig settings could have an influence on this behaviour?
Simply check your Java version: it is too high. Downgrade the Java version and it will work.
Basically, there are 2 main differences.
:root has higher specificity than html. (Find more about specificity here.)
:root also works when CSS is used to style other document languages (such as SVG or XML), where the root element is not html.
You are trying to deserialize a structure that does not correspond to your data.
You write:
Dictionary<string, T> results = JsonConvert.DeserializeObject<Dictionary<string, T>>(jsonString);
This line says that you want to deserialize JSON like this (consider T as int):
{
  "a": 1,
  "b": 2,
  "c": 3,
  "d": 4
}
This will work for the JSON described: https://dotnetfiddle.net/6l3J9Q
But in your case, you have an interface that can't be resolved without a little help.
You can see in this sample what is different: https://dotnetfiddle.net/XbmKeO
When you deserialize an object with an interface property, you need to indicate which type the converter should deserialize to.
Please read this article, which explains it very well: Using Json.NET converters to deserialize properties
Sidenote:
For those who prefer C++, this sort of thing will also work. I tried it:
#include <iostream>
#define RED "\x1b[31m"
#define RESET "\x1b[0m"
int main() {
    std::cout << RED << "a bunch of text" << RESET;
    return 0;
}
Actually, the best way at the moment (Sep 2025) is to use the active_admin_assets gem:
It seems you haven't defined the $JAVA variable.
Add this near the top of the script:
JAVA="${JAVA:-java}"
or explicitly set it:
JAVA="/usr/bin/java"
It seems adding
"compilerOptions": {
"esModuleInterop": true,
}
in my tsconfig.json resolved the issue.
Seems to be a code-analysis issue on PyCharm's side, so no need to fix this if everything works fine when run.
If this really bothers you, you could disable it in PyCharm: Preferences -> Editor -> Inspections
If you cannot see Unicode characters correctly in the console (when you run), do this:
Settings -> Editor -> General -> Console, and set the default encoding to UTF-8.
When you use @FeignClient(configuration = FeignEmptyConfig.class), Spring doesn't automatically recognize the beans from the parent class (FeignLogConfig), because Spring's component scanning doesn't work with class inheritance in this specific context.
Your edit points to the right solution: using the @Import annotation to handle this scenario:
@Import(FeignLogConfig.class)
public class FeignEmptyConfig {
}
Alternatively, you could define your Feign client with both configurations:
@FeignClient(
value = "emptyClient",
url = "${service.url}",
configuration = {FeignEmptyConfig.class, FeignLogConfig.class}
)
public interface YourClient {
// methods
}
In modern browsers, I found that using container queries was the best way forward.
First, we need to identify the element that is going to be the outermost element spanning from screen edge to screen edge. In 99.9% of cases, this will be the body tag. More accurately, we are looking for the page's scroll container.
body {
container-type: inline-size;
container-name: viewport; /* yes, we creatively named it 'viewport' */
}
@container viewport (width > 0) {
.w-screen {
width: 100cqw;
}
}
Then, we can easily use the w-screen class to make an element use the width of the body.
---
For those who use Tailwind, there is already a w-screen utility class which suffers from the same problem, so add this to your global stylesheet:
body {
@apply @container/viewport;
}
@layer utilities {
.w-screen {
@container viewport (width > 0) {
width: 100cqw;
}
}
}
I'm using this answer for inspiration
100vw causing horizontal overflow, but only if more than one?
I'm facing an error on this one: my Vite and Tailwind CSS are not syncing properly, even though all of the setup is correct. I put the same code in playcode.io and it gives the expected output, but in my VS Code it renders weird and not as expected. Why does this happen? HMR is loading properly, but I've been facing this problem for a week now and still can't solve it; none of the OpenAI models have been able to help.
<div className="bg-gradient-to-tr from-blue-400 to-pink-400 h-screen w-screen flex flex-col items-center">
  <div className="bg-white p-10 rounded-xl my-auto hover:shadow-2xl <w-84></w-84> ">
    <h1 className="text-blue-400 font-sans text-3xl font-medium text-center mb-16">
      Todo List
    </h1>
    <div className="flex flex-row mb-6">
      <input
        type="text"
        placeholder="Enter Your Task...."
        className="border border-gray-300 p-2 rounded-l-xl placeholder:text-gray-400 flex-grow placeholder:px-1 focus:outline-none"
      />
      <button className="bg-blue-400 text-white p-2 hover:bg-blue-300 rounded-r-xl font-medium">
        Add
      </button>
    </div>
    <ul className="bg-gray-200 ">
      <li>
        <div className="flex justify-between items-center">
          <input type="checkbox" className=""></input>
          <p>Sample Task</p>
          <button className="bg-red-500 py-2 px-4 rounded-lg">Delete</button>
        </div>
      </li>
    </ul>
  </div>
</div>
You're unable to fake static methods using FakeItEasy (extension methods too, because they are also static). If you need logic like this, you need to think about the proxy pattern or use Typemock Isolator.
I had to uninstall cocoapods from gems and HomeBrew:
sudo gem uninstall cocoapods
brew uninstall cocoapods
Then, use brew to install:
brew install cocoapods
After this, restart your IDE and/or terminal.
I think the issue occurs because Mendeley Desktop did not close properly, leaving a background process still running.
I’ve encountered the same situation myself.
As far as I know, the only solution is to manually kill the Mendeley process running in the background.
I think the answer is to run the query this way:
SELECT TABNAME FROM SYSIBMADM.ADMINTABINFO WHERE TABSCHEMA = 'LIBRAT' AND REORG_PENDING <> 'N';
Because the value of that column can be either 'Y' (for reorg pending) or 'C' (for check pending), both of which mean an operation is pending (state = 57007).
As you can see, I've already set up the business phone number, but the test message from the prod number is still not being received/delivered to my WhatsApp number.
You can get the ApiVersion in the endpoint
To do this, use httpContext.GetRequestedApiVersion();
https://github.com/dotnet/aspnet-api-versioning/wiki/Accessing-Version-Information
Example:
app.MapPost("/create", ... (HttpContext httpContext ...) =>
{
ApiVersion apiVersion = httpContext.GetRequestedApiVersion();
...
});
I’m Percy. It's
nice to meet you.
I was told a quest isn’t a
quest until you’ve said so?
Which is weird considering
you're a Halloween decoration.
Oh, geez.
You seem busy. I’ll come back.
Whoa.
Come on, really?
You shall go west and
face the god who has turned.
And you shall find what was
stolen and see it safely returned.
The Oracle has confirmed
what we expected,
that this quest will proceed
toward the Underworld,
where you will confront the god
who has rebelled against his brothers.
Hades.
If you are willing to write a tiny bit of code, this library will allow you to simulate anything you want from a slave device: https://github.com/SiemensEnergy/c-modbus-slave
A change has been committed, please see https://github.com/ITfoxtec/ITfoxtec.Identity.Saml2/issues/256
You can use this:
numpy==1.24.4
opencv-python==4.5.5.64
I took some time to make it work (or at least address some major issues) in an online compiler. Here are my findings:
This was an easy problem to fix. As I already mentioned in my comment, these formulas only work in the International System of Units (SI units), and when you scale down meters for your simulation (to avoid getting huge numbers in your rendering logic, I assume, or to make them easier to read), you would also have to scale down everything else.
That's because many formulas are not linear (for example, the gravity experienced by an object depends on the square of the distance [1]), so if you halve the simulation distance and halve your simulation mass, the result doesn't match anymore.
Therefore, I'd strongly recommend against scaling at all, at least for the physics.
In my project, I have separated rendering and physics. You can define a factor (like 1 AU, or ~10^-12, depending on your needs). For quick testing, I defined a 1 AU factor and applied it to your data (and my made-up data):
const astronomicalUnit = 1.496 * (10**11);
const bodies = [
{
position: [0, 0.8 * astronomicalUnit],
velocity: [0, 0],
force: [0, 0],
mass: 1.989 * 10 ** 30,
radius: 3.5,
trailPositions: [],
colour: [1, 1, 0.8, 1],
parentIndex: -1,
name: "Sun",
},
{
position: [29.7 * astronomicalUnit, 0], // Approximate distance at lowest point in orbit
velocity: [0, 6.1 * 10 ** 3], // Orbital speed in m/s (approximate data for lowest point as well)
force: [0, 0],
mass: 1.309 * 10 ** 22, // kg
radius: 0.0064, // Scaled radius for visualization; actual radius ~1.1883 × 10^6 m
trailPositions: [],
colour: [0.6, 0.7, 0.8, 1], // Pale bluish-grey
parentIndex: 0, // Assuming Sun is index 0
name: "Pluto", // I picked pluto because it has well known orbital data and an eccentricity of 0.25,
// which should make the second focus visually distinct
}
];
And after all calculations are done, you can divide out the same factor to get a more readable and renderable result. It also enables you to use even more real-world constants:
const gravity = 6.674 * (10**(-11)); // real world gravitational constant
findSecondFocus(1);
// Within findSecondFocus:
console.log("Semi-major axis:", (a / astronomicalUnit)); // Instead of printing a directly, for example
This already fixes the calculation of the semi-major axis!
To summarize: use realistic values if you want realistic results (alternatively, experiment to find consistent values for an alternate universe, but that will take time and prevent you from just looking up data). Most relevant for your project: meters, kilograms and seconds.
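The physics/rendering split described above can be sketched in a few lines (shown here in Python for brevity; the constants are the real SI values and the Pluto figures are the approximate ones from this answer):

```python
G = 6.674e-11    # gravitational constant, SI units
AU = 1.496e11    # metres per astronomical unit

m_sun = 1.989e30      # kg
m_pluto = 1.309e22    # kg
r = 29.7 * AU         # metres, approximate perihelion distance

# All physics happens in SI units...
force = G * m_sun * m_pluto / r ** 2   # newtons

# ...and only the values handed to the renderer are scaled:
render_distance = r / AU   # 29.7 "scene units"
```

Because the force law is quadratic in r, this division is safe after the fact, while scaling r before the physics step would silently change the result.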
Here:
// The eccentricity vector formula is: e = (v × h)/μ - r/|r|
const rvDot = relativeSpatiumVector[0] * relativeVelocityVector[0] +
relativeSpatiumVector[1] * relativeVelocityVector[1];
You write cross product in your comment, but use the dot product.
You also use the dot product to calculate h, the angular momentum vector.
Unfortunately, it takes quite a bit of effort to fix this one.
The cross product of two vectors produces a vector perpendicular to both input vectors [2]. Where does it go for 2D vectors? Outside of your plane of simulation.
That's quite unfortunate, but we can cheese our way around it.
First, I made some helpers for both 2D and 3D cross products:
// Separate definition of a cross-product helper, so the code is easier to read
function cross2D(a, b) {
return a[0] * b[1] - a[1] * b[0];
}
function cross3D(a, b) {
return [
a[1] * b[2] - a[2] * b[1],
a[2] * b[0] - a[0] * b[2],
a[0] * b[1] - a[1] * b[0]
];
}
Then, I replaced the code for the eccentricity vector calculation; I'll explain afterwards:
// The eccentricity vector formula is: e = (v × h)/μ - r/|r|
const rUnit = [
relativeSpatiumVector[0] / r,
relativeSpatiumVector[1] / r
];
const angular_z = cross2D(relativeSpatiumVector, relativeVelocityVector);
const angularMomentumVector = [0,0,angular_z]; // This is the "h"
const liftedVelocityVector = [relativeVelocityVector[0], relativeVelocityVector[1], 0];
const vxh = cross3D(liftedVelocityVector, angularMomentumVector);
const eccentricityVector = [
vxh[0] / mu_sim - rUnit[0],
vxh[1] / mu_sim - rUnit[1],
]; // (v × h)/μ - r/|r|
Your rUnit looked fine, so I reused it. I created a 3D angular momentum vector angularMomentumVector by assuming its in-plane components to be zero, which I can do because it has to be perpendicular to two vectors on this plane.
Then, we need to get the velocity into 3D (liftedVelocityVector) as well. That's easy, because it just doesn't move in the z direction.
Then, we get the cross product in vxh, and can finally apply the formula you already had in your comment.
We can ignore the z component (vxh[2]), because the cross product must be perpendicular to the angularMomentumVector, which only has a z component.
Everything else in your code was perfectly fine, so well done!
With the data from earlier in the answer and these updated console logs:
console.log("Second Focus coordinates:", secondFocus[0] / astronomicalUnit, ", ", secondFocus[1] / astronomicalUnit);
console.log("Eccentricity:", eccentricityScalar);
console.log("Semi-major axis:", (a / astronomicalUnit));
I get these results:
Second Focus coordinates: -19.369704292780035 , -1.321742876573199
Eccentricity: 0.2472841556295451
Semi-major axis: 39.39913738651615
Compared to Wikipedia data, that's ~0.0015 off in eccentricity, and ~0.083 AU off in the semi-major axis. I blame the inaccuracy on my rounded input data and the fact that we clipped off its entire inclination.
I could not find a reference value for the second focus, but it seems plausible.
Thanks for the fun challenge and good luck with your project!
Academic integrity means being honest and responsible in your studies. It includes respecting the work of others, avoiding cheating, and giving credit to sources when you use their ideas. Students with academic integrity show fairness, trust, and responsibility. Plagiarism, copying, or using unfair methods harms both the student and the learning process. Integrity also means completing assignments with your own effort, being truthful in exams, and respecting the rules of your school or university. It helps build strong character and prepares students for future careers. Academic integrity creates trust between teachers and students, and it encourages real learning. When students practice integrity, they not only succeed academically but also develop values that last for life.
Maybe you started the server before writing the Timeentries model in your app. If you have written the Timeentries model definition, can you please share it? Thanks.
You could try using dbus-monitor (notifications are sent via D-Bus, and you could capture them in a Python/C/Rust/whatever wrapper).
So the key command is:
dbus-monitor --session "destination='org.freedesktop.Notifications'"
See also some notification encoding:
https://specifications.freedesktop.org/notification-spec/1.3/protocol.html
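As an illustrative sketch (Python, with a deliberately crude parse: I'm assuming dbus-monitor's textual output, where string arguments appear as indented `string "..."` lines), a small wrapper could spawn the monitor and pull out the string arguments of each notification:

```python
import re
import subprocess

STRING_RE = re.compile(r'^\s*string "(.*)"$')

def iter_notification_strings(cmd=("dbus-monitor", "--session",
                                   "destination='org.freedesktop.Notifications'")):
    """Yield every string argument seen in dbus-monitor's output (summary, body, ...)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        m = STRING_RE.match(line)
        if m:
            yield m.group(1)

# The same regex can be exercised on captured output:
sample = '''\
method call ... member=Notify
   string "my-app"
   uint32 0
   string "icon"
   string "Hello"
   string "Notification body"
'''
found = [m.group(1) for m in map(STRING_RE.match, sample.splitlines()) if m]
```

For anything serious you'd use a proper D-Bus binding instead of scraping text, but this is enough to see notifications flow.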
Best regards
I strongly recommend using this library:
https://github.com/ShawnLin013/NumberPicker
Obviously, using ValueTask<T> is more memory-efficient than Task<T> at large scale, because it is a struct. But it also has some restrictions: for example, you must not consume a ValueTask<T> returned from a method more than once. On the other hand, it is beneficial when you have a synchronous operation in an async context, for example applying an atomic database transaction, which is a synchronous operation that may nevertheless live in an async context.
import streamlit as st
import time
import uuid
from datetime import datetime
import json
# Page configuration
st.set_page_config(
page_title="AI ChatBot Assistant",
page_icon="🤖",
layout="wide",
initial_sidebar_state="expanded"
)
# Custom CSS for ChatGPT-like styling
st.markdown("""
<style>
.main-container { max-width: 1200px; margin: 0 auto; }
.chat-message { padding: 1rem; border-radius: 10px; margin-bottom: 1rem; word-wrap: break-word; }
.user-message { background-color: #f0f0f0; margin-left: 20%; border: 1px solid #ddd; }
.assistant-message { background-color: #e3f2fd; margin-right: 20%; border: 1px solid #bbdefb; }
.chat-header { text-align: center; padding: 1rem 0; border-bottom: 2px solid #e0e0e0; margin-bottom: 2rem; }
.sidebar-content { padding: 1rem 0; }
.input-container { position: sticky; bottom: 0; background-color: white; padding: 1rem 0; border-top: 1px solid #e0e0e0; }
.action-button { background-color: #1976d2; color: white; border: none; padding: 0.5rem 1rem; border-radius: 5px; cursor: pointer; margin: 0.25rem; }
.action-button:hover { background-color: #1565c0; }
.speech-button { background-color: #4caf50; color: white; border: none; padding: 0.75rem; border-radius: 50%; cursor: pointer; font-size: 1.2rem; margin-left: 0.5rem; }
.speech-button:hover { background-color: #45a049; }
.speech-button.listening { background-color: #f44336; animation: pulse 1s infinite; }
@keyframes pulse { 0% { opacity: 1; } 50% { opacity: 0.5; } 100% { opacity: 1; } }
.status-indicator { padding: 0.5rem; border-radius: 5px; margin: 0.5rem 0; text-align: center; }
.status-listening { background-color: #ffebee; color: #c62828; }
.status-processing { background-color: #fff3e0; color: #ef6c00; }
.status-ready { background-color: #e8f5e8; color: #2e7d32; }
.chat-stats { background-color: #f5f5f5; padding: 1rem; border-radius: 10px; margin: 1rem 0; }
.export-button { background-color: #ff9800; color: white; border: none; padding: 0.5rem 1rem; border-radius: 5px; cursor: pointer; width: 100%; margin: 0.5rem 0; }
.export-button:hover { background-color: #f57c00; }
</style>
""", unsafe_allow_html=True)
# --- Unified Voice + Text Input ---
def speech_to_text_component():
    speech_html = """
    <div id="speech-container">
        <div style="display: flex; align-items: center; gap: 10px; margin-bottom: 20px;">
            <input type="text" id="speechResult" placeholder="Speak or type your message..."
                   style="flex: 1; padding: 12px; border: 2px solid #ddd; border-radius: 8px; font-size: 16px;">
            <button id="speechButton" onclick="toggleSpeechRecognition()"
                    style="padding: 12px; background-color: #4caf50; color: white; border: none;
                           border-radius: 50%; cursor: pointer; font-size: 18px; width: 50px; height: 50px;">
                🎤
            </button>
        </div>
        <div id="speechStatus" style="padding: 8px; border-radius: 5px; text-align: center;
                                      background-color: #e8f5e8; color: #2e7d32; margin-bottom: 10px;">
            Ready to listen - Click the microphone to start
        </div>
        <button onclick="submitSpeechText()" id="submitButton"
                style="padding: 12px 24px; background-color: #1976d2; color: white; border: none;
                       border-radius: 8px; cursor: pointer; font-size: 16px; width: 100%;">
            Send Message
        </button>
    </div>
    <script>
    let recognition;
    let isListening = false;

    if ('webkitSpeechRecognition' in window || 'SpeechRecognition' in window) {
        const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
        recognition = new SpeechRecognition();
        recognition.continuous = false;
        recognition.interimResults = true;
        recognition.lang = 'en-US';

        recognition.onstart = function() {
            isListening = true;
            document.getElementById('speechButton').innerHTML = '🔴';
            document.getElementById('speechButton').style.backgroundColor = '#f44336';
            document.getElementById('speechStatus').innerHTML = 'Listening... Speak now!';
            document.getElementById('speechStatus').className = 'status-listening';
            document.getElementById('speechStatus').style.backgroundColor = '#ffebee';
            document.getElementById('speechStatus').style.color = '#c62828';
        };

        recognition.onresult = function(event) {
            let transcript = '';
            for (let i = 0; i < event.results.length; i++) {
                transcript += event.results[i][0].transcript;
            }
            document.getElementById('speechResult').value = transcript;
            if (event.results[event.results.length - 1].isFinal) {
                document.getElementById('speechStatus').innerHTML = 'Speech captured! Click Send or Enter.';
                document.getElementById('speechStatus').className = 'status-ready status-indicator';
            }
        };

        recognition.onerror = function(event) {
            document.getElementById('speechStatus').innerHTML = 'Error: ' + event.error;
            document.getElementById('speechStatus').className = 'status-listening status-indicator';
            resetSpeechButton();
        };

        recognition.onend = function() {
            resetSpeechButton();
        };
    } else {
        document.getElementById('speechStatus').innerHTML = 'Speech recognition not supported in this browser';
        document.getElementById('speechButton').disabled = true;
    }

    function resetSpeechButton() {
        isListening = false;
        document.getElementById('speechButton').innerHTML = '🎤';
        document.getElementById('speechButton').style.backgroundColor = '#4caf50';
        if (document.getElementById('speechResult').value.trim() === '') {
            document.getElementById('speechStatus').innerHTML = 'Ready to listen - Click the microphone to start';
            document.getElementById('speechStatus').className = 'status-indicator status-ready';
        }
    }

    function toggleSpeechRecognition() {
        if (recognition) {
            if (isListening) {
                recognition.stop();
            } else {
                recognition.start();
            }
        }
    }

    function submitSpeechText() {
        const text = document.getElementById('speechResult').value.trim();
        if (text) {
            window.parent.postMessage({
                type: 'streamlit:setComponentValue',
                value: text
            }, '*');
            document.getElementById('speechResult').value = '';
            document.getElementById('speechStatus').innerHTML = 'Message sent! Ready for next input.';
            document.getElementById('speechStatus').className = 'status-indicator status-ready';
            resetSpeechButton();
        } else {
            document.getElementById('speechStatus').innerHTML = 'Please speak or type a message first.';
            document.getElementById('speechStatus').className = 'status-listening status-indicator';
        }
    }

    document.getElementById('speechResult').addEventListener('keypress', function(e) {
        if (e.key === 'Enter') {
            submitSpeechText();
        }
    });
    </script>
    """
    return st.components.v1.html(speech_html, height=200)
def initialize_session_state():
    if "messages" not in st.session_state:
        st.session_state.messages = [
            {"role": "assistant", "content": "👋 Hello! I'm your AI assistant. How can I help you today?", "timestamp": datetime.now()}
        ]
    if "session_id" not in st.session_state:
        st.session_state.session_id = str(uuid.uuid4())
    if "user_name" not in st.session_state:
        st.session_state.user_name = "User"
    if "chat_count" not in st.session_state:
        st.session_state.chat_count = 0
def generate_ai_response(user_input):
    time.sleep(1)
    responses = {
        "hello": "Hello! Great to meet you! How can I assist you today?",
        "help": "I'm here to help! You can ask me questions, have a conversation, or use voice input by clicking the microphone button.",
        "how are you": "I'm doing great, thank you for asking! I'm ready to help with whatever you need.",
        "voice": "Yes! I support voice input. Just click the microphone button and speak your message.",
        "features": "I support text and voice input, conversation history, message export, and more. What would you like to explore?",
    }
    if isinstance(user_input, str):
        user_lower = user_input.lower()
        for key, response in responses.items():
            if key in user_lower:
                return response
        return f"Thanks for your message: '{user_input}'. This is a demo response. In a real application, connect to an AI service here."
    else:
        return "Sorry, I didn't understand that input."
def export_chat_history():
    export_data = {
        "session_id": st.session_state.session_id,
        "user_name": st.session_state.user_name,
        "export_time": datetime.now().isoformat(),
        "message_count": len(st.session_state.messages),
        "messages": [
            {
                "role": msg["role"],
                "content": msg["content"],
                "timestamp": msg["timestamp"].isoformat() if "timestamp" in msg else None
            }
            for msg in st.session_state.messages
        ]
    }
    return json.dumps(export_data, indent=2)
def main():
    initialize_session_state()

    # Header
    st.markdown('<div class="chat-header">', unsafe_allow_html=True)
    st.title("🤖 AI ChatBot Assistant")
    st.markdown("*Advanced chat interface with voice input capabilities*")
    st.markdown('</div>', unsafe_allow_html=True)

    # Sidebar
    with st.sidebar:
        st.markdown('<div class="sidebar-content">', unsafe_allow_html=True)
        st.header("⚙️ Chat Settings")
        user_name = st.text_input("Your Name:", value=st.session_state.user_name)
        if user_name != st.session_state.user_name:
            st.session_state.user_name = user_name
        st.divider()
        st.subheader("📊 Chat Statistics")
        st.markdown(f"""
        <div class="chat-stats">
        <p><strong>Messages:</strong> {len(st.session_state.messages)}</p>
        <p><strong>Session ID:</strong> {st.session_state.session_id[:8]}...</p>
        <p><strong>Started:</strong> Just now</p>
        </div>
        """, unsafe_allow_html=True)
        st.subheader("🔧 Chat Controls")
        if st.button("🗑️ Clear Chat History", type="secondary", use_container_width=True):
            st.session_state.messages = [
                {"role": "assistant", "content": "👋 Hello! I'm your AI assistant. How can I help you today?", "timestamp": datetime.now()}
            ]
            st.rerun()
        if st.button("📤 Export Chat", type="secondary", use_container_width=True):
            exported_data = export_chat_history()
            st.download_button(
                label="💾 Download Chat History",
                data=exported_data,
                file_name=f"chat_history_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
                mime="application/json",
                use_container_width=True
            )
        st.divider()
        st.subheader("ℹ️ How to Use")
        st.markdown("""
        **Text Input:** Type your message and press Enter or click Send

        **Voice Input:** Click the 🎤 microphone button and speak

        **Features:**
        - Real-time speech recognition
        - Chat history preservation
        - Message export functionality
        - Responsive design
        """)
        st.markdown('</div>', unsafe_allow_html=True)

    # Main chat area
    col1, col2, col3 = st.columns([1, 6, 1])
    with col2:
        st.markdown('<div class="main-container">', unsafe_allow_html=True)
        chat_container = st.container()
        with chat_container:
            for i, message in enumerate(st.session_state.messages):
                with st.chat_message(message["role"]):
                    st.markdown(message["content"])
                    if "timestamp" in message:
                        st.caption(f"*{message['timestamp'].strftime('%H:%M:%S')}*")
        st.markdown('</div>', unsafe_allow_html=True)

        # ---- SINGLE Input Box for both text and voice ----
        st.markdown('<div class="input-container">', unsafe_allow_html=True)
        st.subheader("🎤 Voice & Text Input")
        user_input = speech_to_text_component()  # This is now the ONLY input
        if user_input and isinstance(user_input, str) and user_input.strip():
            user_input = user_input.strip()
            st.session_state.messages.append({
                "role": "user",
                "content": user_input,
                "timestamp": datetime.now()
            })
            with st.spinner("🤔 Thinking..."):
                ai_response = generate_ai_response(user_input)
            st.session_state.messages.append({
                "role": "assistant",
                "content": ai_response,
                "timestamp": datetime.now()
            })
            st.session_state.chat_count += 1
            st.rerun()
        st.markdown('</div>', unsafe_allow_html=True)

if __name__ == "__main__":
    main()
TL;DR If you change the algorithm in the future, you might still want to be able to decrypt old data. If you hide the algorithm, you won't know which one was used.
I've spent some time learning and building my own stuff, and I can share what I've learned.
In a lot of cases you will store encrypted data, like an email address, in a database, and most databases are SQL, which means they have fixed columns.
Encrypted data is often stored with metadata, which can differ for each algorithm. Since SQL schemas are rigid, if you decide to change the algorithm in the future you would have to create a new table, or decrypt and re-encrypt everything, and neither is a good idea. A better choice is to store the encrypted data as a concatenated string together with its metadata, like:
$AES$version$encryptedData
So if you hid which algorithm was used, you wouldn't know which one to use to decrypt the data later.
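A minimal sketch of that pattern (the pack/unpack helpers and the `$`-delimited layout are just illustrative assumptions, not a standard format):

```python
# Minimal sketch of storing ciphertext with self-describing metadata.
# The algorithm name, version tag, and delimiter are illustrative choices.

def pack(algorithm: str, version: str, ciphertext_b64: str) -> str:
    """Concatenate metadata and ciphertext into one column-friendly string."""
    return f"${algorithm}${version}${ciphertext_b64}"

def unpack(stored: str) -> tuple:
    """Split the stored string back into (algorithm, version, ciphertext)."""
    _, algorithm, version, ciphertext_b64 = stored.split("$", 3)
    return algorithm, version, ciphertext_b64

stored = pack("AES", "v1", "bXkgc2VjcmV0")
assert unpack(stored) == ("AES", "v1", "bXkgc2VjcmV0")
```

When you later migrate to a new algorithm, new rows simply carry a different prefix, and old rows remain decryptable.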
Here are the 3 runs with your code (with the same model, i.e. gemini-2.5-flash) and different prompts:
1st run: your prompt (What's my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
What's my name?
================================== Ai Message ==================================
I'm sorry, I don't have memory of past conversations. Could you please tell me your name again?
2nd run: prompt (Do you know my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
Do you know my name?
================================== Ai Message ==================================
Yes, your name is Bob.
3rd run: prompt (Do you remember my name?)
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Hello Bob! How can I help you today?
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Yes, I do, Bob!
As you can see, it does have the chat history/memory.
So why does "What's my name?" fail while "Do you know/remember my name?" works?
Gemini (and most LLMs) does not have “structured” memory unless we feed it back.
When you ask “What’s my name?”, the model interprets it literally as a knowledge recall task. Since it doesn’t have an internal persistent memory store, it defaults to “I don’t know your name.”
When you ask “Do you know my name?” or “Do you remember my name?”, the model interprets this more conversationally and looks at the immediate chat history in the same request, so it correctly extracts “Bob”.
So this is not LangGraph memory failing; it's model behavior in Gemini.
The example shown in the official documentation (https://python.langchain.com/docs/tutorials/agents/) uses anthropic:claude-3-5-sonnet-latest, which behaves differently from Gemini models.
Here's another example with the exact same code but a different model, llama3.2:latest from Ollama.
import os

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama
from langchain_tavily import TavilySearch
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

load_dotenv()
os.environ.get('TAVILY_API_KEY')

search = TavilySearch(max_results=2)  # note: the parameter is max_results, not max_result
tools = [search]
model = ChatOllama(model="llama3.2:latest", temperature=0)
memory = MemorySaver()
agent_executor = create_react_agent(model, tools, checkpointer=memory)

# Same thread_id for continuity
config = {"configurable": {"thread_id": "agent003"}}

# First turn
for step in agent_executor.stream(
    {"messages": [HumanMessage("Hi! I am Bob!")]}, config, stream_mode="values"
):
    step["messages"][-1].pretty_print()

# Second turn – no need to fetch history yourself
for step in agent_executor.stream(
    {"messages": [HumanMessage("what's my name?")]}, config, stream_mode="values"
):
    step["messages"][-1].pretty_print()
output:
================================ Human Message =================================
Hi! I am Bob!
================================== Ai Message ==================================
Tool Calls:
....
================================= Tool Message =================================
Name: tavily_search
....
================================== Ai Message ==================================
Your name is Bob! I've found multiple individuals with the name Bob, including Bob Marley, B.o.B, and Bob Iger. Is there a specific Bob you're interested in learning more about?
In my case it was not incorrect nesting of HTML tags; it was caused by some browser extensions, which I learned from a Reddit thread. I just disabled them and the error/warning disappeared.
You can also run the application in incognito mode and check.
https://www.reddit.com/r/nextjs/comments/1ims6u7/im_getting_infinite_hydration_error_in_nextjs_and/
// Current week number
echo "Current week number: " . date("W") . "<br>";
// Example with a specific date
$date = "2025-09-27";
echo "Week number of $date: " . date("W", strtotime($date));
If you only need to change a single field in the database but you are fetching the whole row, update just the field that changed and then save it back; this will optimize your code to some extent. If you can post your code here, that would help.
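As a hedged sketch of that idea (the `users` table and column names are invented, and sqlite3 stands in for whatever database is actually in use):

```python
import sqlite3

# Illustrative setup: a small table with one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Bob', 'old@example.com')")

# Instead of SELECT * and writing every column back,
# touch only the field that actually changed:
conn.execute("UPDATE users SET email = ? WHERE id = ?", ("new@example.com", 1))
conn.commit()

print(conn.execute("SELECT email FROM users WHERE id = 1").fetchone()[0])
# new@example.com
```

A single-column UPDATE also avoids accidentally overwriting concurrent changes to the other columns.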
There is a program called XnConvert which works very well. It can do this for multiple images at once: "Clean metadata - EXIF".
To retrieve the value of the secret type for an Environment variable, simply provide the Name (not the Display name) of the environment variable in the Perform an unbound action step. There's no need to select it from the dynamic content.
I found out that there was a second overlay network that was using 10.0.0.0/24. The solution was as simple as adding that ip range to the wireguard config of both nodes.
[Interface]
PrivateKey = <private-key>
Address = 10.238.0.1/24
ListenPort = 51820
[Peer]
PublicKey = <public-key>
Endpoint = <public-ip>:51820
PersistentKeepalive = 25
AllowedIPs = 10.238.0.2/32, 10.0.1.0/24, 10.0.0.0/24
First, use the "editor.colorDecorators": false to disable the feature for all languages. Then, use language-specific settings to re-enable it only for CSS.
In settings.json:
{
"editor.colorDecorators": false,
"[css]": {
"editor.colorDecorators": true
}
}
I fought and fought (Kali Linux), and the only thing that helped was rolling back to version 7.0.0. Conclusion: v7.2 is unfinished.
This website really can autoplay an mp3 automatically in Firefox and other browsers, unlike other code I've tried, which failed in Firefox.
The mp3 is not autoplayed on the landing page, but when I click the button to go to the main page, the sound plays automatically there (it's a one-page website).
https://share.linkundangan.com/inv-preview/wedding-premium050?to=Tamu+Undangan
I've tried to read the code, but I'm not a coder. Would anybody give me the audio code of this site, please?
You can use sonar-badge-proxy to configure badges at group level in GitLab and access them without a SonarQube token.
I tried your provided embed snippet and others from the Imgur website on CodePen, and they seem to be working just fine. If you encounter this issue, check your browser console; there might be a CORS error or something else blocking the rendering.
The following code solves the problem posed by the question, i.e., retrieve schedules that have at least one job (any component in the pipeline that runs on the schedule) successfully finished.
The issue, however, is that when a schedule has multiple runs, only the first run is considered.
A more interesting problem is to retrieve schedules that have at least one run completed, or the last run completed, or the last run failed. I will address that question in a separate post (please answer the question if you have a better solution).
# -------------------------------------------------
# Connect to AML and set tracking URI in mlflow
# -------------------------------------------------
import mlflow
import pandas as pd
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

# Connect to AML
client = MLClient(
    credential=InteractiveBrowserCredential(),
    subscription_id="my-subscription-id",
    resource_group_name="my-resource-group",
    workspace_name="my-workspace",
)

# Set tracking URI if run locally
mlflow_tracking_uri = client.workspaces.get(client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(mlflow_tracking_uri)

# -------------------------------------------------
# Retrieve and filter schedules
# -------------------------------------------------
schedules = client.schedules.list()

# Optional: keep only schedules whose name contains a substring:
selected_schedules = [
    schedule
    for schedule in schedules
    if "inference_pipelin" in schedule.name
]

# -------------------------------------------------
# Get schedules that have *at least* one job (not one run) completed
# -------------------------------------------------
experiment_names = [schedule.create_job.experiment_name for schedule in selected_schedules]
filter_string = " or ".join([f"(name = '{x}')" for x in experiment_names])  # string values must be quoted
experiments = mlflow.search_experiments(filter_string=filter_string)
experiments_df = pd.DataFrame(
    {
        "experiment_id": [exp.experiment_id for exp in experiments],
        "experiment_name": [exp.name for exp in experiments],
        "schedule": selected_schedules,
    }
)
all_runs = mlflow.search_runs(
    experiment_names=experiment_names,
    filter_string="tags.mlflow.user='Jaume Amores'",
)
selected_experiments = all_runs.groupby("experiment_id")["status"].apply(lambda x: (x == "FINISHED").any())
selected_schedules = experiments_df[
    experiments_df["experiment_id"].isin(selected_experiments[selected_experiments].index)
]["schedule"].tolist()
Never change the key! For a very simple system this is possible, but it is not just bad practice, it is a disaster. Add a priority column instead; if the table is over 10K rows, put an index on the priority column.
In case you usually have to insert rows in between, there are options like:
- priority as a real number
- 2 columns, priority and subpriority, and reorder from time to time
- priority as an integer but with a gap in between, e.g. every priority multiplied by 100
There is no very simple way to solve this :( .
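A small sketch of the gap approach, assuming a SQL store (the `tasks` schema and the factor of 100 are illustrative):

```python
import sqlite3

# Priorities spaced by 100 so a row can be inserted between neighbors
# without renumbering everything (schema invented for the example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, priority INTEGER)")
conn.execute("CREATE INDEX idx_tasks_priority ON tasks (priority)")
conn.executemany(
    "INSERT INTO tasks (title, priority) VALUES (?, ?)",
    [("first", 100), ("second", 200), ("third", 300)],
)

# Insert between "first" and "second" by picking the midpoint of the gap:
conn.execute("INSERT INTO tasks (title, priority) VALUES (?, ?)", ("between", 150))

rows = [r[0] for r in conn.execute("SELECT title FROM tasks ORDER BY priority")]
print(rows)  # ['first', 'between', 'second', 'third']
```

When a gap is exhausted you renumber just that neighborhood, which is far cheaper than touching the primary key.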
To remove these files I simply use CCleaner. It works for me on my Windows 7 system.
Check out this package; it's still in development, but it covers many of the points needed:
https://pub.dev/packages/extended_image is a better, more up-to-date package.
It has all the features of CachedNetworkImage and even more.
fd.set(
  "session",
  JSON.stringify({
    type: "realtime",
    model: "gpt-realtime",
    audio: {
      output: {
        voice: "marin",
      },
    },
  })
)
Documentation was wrong - this fixes it
Improving app performance usually comes down to identifying bottlenecks and fixing them systematically. Some common approaches are:
Optimize resources: Compress images, minify scripts, and remove unused assets.
Reduce network overhead: Cache data where possible, batch API calls, and use efficient data formats (like JSON instead of XML).
Efficient code: Avoid unnecessary loops, memory leaks, and expensive operations on the main thread.
Lazy loading: Load only what’s needed on startup and fetch the rest as required.
Monitor performance: Use profiling tools to measure CPU, memory, and network usage across different devices.
In practice, the right fix depends on what your app is struggling with—UI rendering, API response time, or device-specific issues. Start with profiling, then apply optimizations where they’ll have the biggest impact.
DOSBox-X, the fork of the original DOSBox, has the debugger already compiled in and reachable through the context menu on the window. Give it a try: https://dosbox-x.com/
Send audio output to two devices with AudioGraph in a UWP app
I realized there is no way to switch the output device without recreating the AudioGraph.
So obviously recreating the AudioGraph is the right thing to do.
Xcode can't be installed on Linux. Xcode supports only macOS because it needs an Apple ID. But you can use a virtual Mac on your PC.
No, you can use Direct File System Access.
In your Android build.gradle file, add this code:
gradle.projectsEvaluated {
    project(':app') { p ->
        p.tasks.matching { it.name.startsWith('minify') && it.name.endsWith('WithR8') }.all { r8Task ->
            r8Task.dependsOn(p.tasks.named('extractProguardFiles'))
        }
    }
}
I arrived at the answer while formulating this question late at night. The way to acquire the property of a tuple is to access the element via the tuple.get() method.
Review the syntax for DLookup. Particularly the criteria. You do not want to include the word "Where" nor repeat the field name that you are returning.
I ended up just using Swiperjs.
https://github.com/nolimits4web/swiper
It supports vanilla js, React, etc. Though not built for MUI.
Please, I will keep to your community guidelines. Please just help me with releasing my Facebook account. I have tried my best but there is no way they will release it to me. Please help me so I can recover it. 😭😭😭😭🙏
Can someone give me a solution? I did try to change the version but I still get the same migration deployment failure error.
Deblobbing is the attempt to remove some of the binary blobs shipped with linux distro source code libraries.
"blobs are binary firmware,"
Uhhh... no. If a distro has firmware, something is already way wrong. Consider the HLDS GHA2N HH SATA DVD+/-RW player. A new firmware for it was uploaded in 2013, version A103, A01. It does the following:
1. To improve of write quality ( Write strategy )
a) DVD+R DL Verbatim 8x under high temp
b) DVD-R DL MKM 8x under low temp
2. To Improve of CD-ROM readability during reliability test under high temp by adjusting tilt
That's firmware. It doesn't even run on your own cpu or OS.
All "install and run" linux distros are loaded with these binary blobs. If your system does what you want, you'd know. But, if it does something you didn't want, how do you know? What if it's exfiltrating your junk? There's almost no way to know unless you review the source code, which you cannot do unless your system was deblobbed ... in which case it probably won't run.
An example of what is removed by deblob-6.8 is the enumeration of BPF preload sources/headers: kernel/bpf/preload/iterators/iterators.bpf.c, kernel/bpf/preload/iterators/iterators.lskel-little-endian.h, and kernel/bpf/preload/iterators/iterators.lskel-big-endian.h. The sed script removes embedded eBPF programs (“light skeleton”) pinned in bpffs for debugging/introspection. It's kernel infrastructure, not device drivers. Not firmware, either. Lot's of drivers are binary blobs. There are some deblobbing scripts. They are likely to make compilation fail, but the goal is to make it not fail.
The blobs aren't firmware though.
That's a great question. The tag saying "iPhone" usually just reflects the app version, not the specific device. I saw a helpful breakdown about this on https://mnpappsgames.com/ and it confirmed that the same encoder string often appears for iPads too.
I found a solution - it's because I had to play the 2nd animation after the 1st animation is done. Luckily, I already have a custom RLCallbackAction and RLWaitAction handy for this purpose. Something like this:
struct RLAggregationActionImpl: RLActionImpl {
    enum Aggregation {
        case group
        case sequence
    }

    let actions: [any RLActionImpl]
    let aggregation: Aggregation

    func createAnimation(entity: RLEntity) -> RLAnimation {
        switch aggregation {
        case .group:
            let animations = actions.map { $0.createAnimation(entity: entity) }
            return try! .group(with: animations)
        case .sequence:
            let animations = actions
                .enumerated()
                .flatMap { (i, impl) -> [RLAnimation] in
                    // We can't directly pass the action to .sequence,
                    // because 2 transform actions don't "add up", likely because the animation is "inflated" when we call `entity.playAnimation`.
                    // As a workaround, we have to wrap it under a callback action, which calls `runAction` when it actually needs to run the action in the sequence.
                    // See: https://stackoverflow.com/questions/79716776/in-realitykit-a-sequence-of-2-fromtobyaction-does-not-add-up-transforms
                    let callback = RLCallbackActionImpl(duration: 0, callback: { $0.runActionImpl(impl) })
                        .createAnimation(entity: entity)
                    // The callback happens immediately, but runAction takes `duration` to complete, so we need to insert a "gap" (wait action) between callbacks.
                    if i + 1 < actions.count {
                        let wait = RLWaitActionImpl(duration: impl.duration)
                            .createAnimation(entity: entity)
                        return [callback, wait]
                    } else {
                        return [callback]
                    }
                }
            return try! .sequence(with: animations)
        }
    }
}
Ah, I think I found a solution. Not sure why it was a problem, but this got it working.
First, remove all three packages entirely from your system: Scrapy, Beautiful Soup, and bs4. Scrapy was installed by Brew, and the others by pip3.
Then create a venv, activate it, and use pip3 to install all three modules.
This got it working. So it was something about how the Brew-installed Scrapy wasn't finding the Python modules installed in the pip3 environment.
I don't understand Python well enough to explain the compatibility issue with the Brew-installed Python and/or Python modules.
All I can tell you is that once I removed everything and then used pip3 to install Scrapy and the additional modules I wanted, that's what got it working.
If anyone can explain what was going on, that would be helpful.
I know this is late, but you need to call collectionView.layoutIfNeeded(), and this will prevent future unneeded animations.
The potential answer should be based on historical data about accepted candidates.
One can frame this as a ranking or recommendation problem, which are common approaches: education (categorical), experience (numeric), resume keywords (TF-IDF/embeddings).
Feature engineering: encode education as categories, use experience as a numeric value, and turn resume keywords into numbers using TF-IDF or possibly embeddings.
Model training: train a supervised model such as a neural network or XGBoost on your historical accepted vs. non-accepted data.
Ranking: rank candidates by their predicted probability score to get your top 10.
Scalability: use similarity search to quickly compare candidates.
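A toy standard-library sketch of the scoring-and-ranking step (the weights, keyword set, and candidate fields are invented; a real system would use TF-IDF/embeddings and a trained model such as XGBoost):

```python
# Hand-rolled scoring stand-in for a trained model, for illustration only.
EDUCATION_SCORE = {"highschool": 0, "bachelor": 1, "master": 2, "phd": 3}
TARGET_KEYWORDS = {"python", "sql", "ml"}  # invented job-relevant keywords

def score(candidate: dict) -> float:
    """Combine categorical, numeric, and keyword features into one score."""
    overlap = len(set(candidate["resume_keywords"]) & TARGET_KEYWORDS) / len(TARGET_KEYWORDS)
    return (
        0.3 * EDUCATION_SCORE[candidate["education"]] / 3      # categorical, normalized
        + 0.3 * min(candidate["years_experience"], 10) / 10    # numeric, capped
        + 0.4 * overlap                                        # keyword match fraction
    )

candidates = [
    {"name": "a", "education": "phd", "years_experience": 2, "resume_keywords": ["python"]},
    {"name": "b", "education": "bachelor", "years_experience": 8, "resume_keywords": ["python", "sql", "ml"]},
]
top = sorted(candidates, key=score, reverse=True)[:10]  # top-10 ranking
```

In production, `score` would be replaced by the model's predicted acceptance probability, and the sort by an approximate nearest-neighbor index for scale.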
Do you want to copy the .git history to your new repo, or only the code files?
If you want to copy the .git history too (tags, branches, commits), use the --bare flag while cloning to get just the .git history, and then use the --mirror flag while pushing to the new repo, which also regenerates the code files:
git clone --bare https://github.com/owner/repo1.git
cd repo1.git
git push --mirror https://github.com/you/repo2.git
Oh, I solved this.
It was probably permission denied because the env is in C://ai; when I used administrator mode, it succeeded.

So after almost 7 hours non-stop, I found the permanent solution.
1. Open your generated Unity-iPhone.xcworkspace in Xcode.
2. In the left sidebar, click on the UnityFramework target.
3. At the top, select the Build Settings tab.
4. In the search bar, type: Other Linker Flags
and delete these 2 from all configurations (Debug, Release, etc.):
* -ld_classic
* -weak-lSystem (because it's invalid)
Now clean the project, and then:
1. Select the UnityFramework target.
2. Go to Build Settings.
3. In the search bar at the top, type Framework Search Paths.
Add these 4:
$(inherited)
$(PROJECT_DIR)/Frameworks/com.ptc.vuforia.engine/Vuforia/Plugins/iOS
$(PROJECT_DIR)/Pods/Google-Mobile-Ads-SDK/Frameworks/**
$(PROJECT_DIR)/Pods/Firebase/Frameworks/**
If you found this useful, please download my app and give me a full rating:
https://apps.apple.com/pk/app/stickar-ar-stickers-gifs/id6497066147
Using net/http's CrossOriginProtection (https://pkg.go.dev/net/http#CrossOriginProtection), available from Go 1.25, I was able to properly set up a deterrent against CSRF in a manner that solves my problem.
Only trusted origins are allowed to make requests to my API, and since all modern browsers send the Origin, Referer, and Sec-Fetch-Site headers, the threat is mitigated.
The only concession is that older browsers are not supported, but in truth my use case does not need to support pre-2010 browsers: upgrade your browsers!
You can try downgrading and see if that helps.
This is an unresolved issue in ASP.NET, first reported in 2019.
There is a workaround which allows '___' (triple underscore) to be used instead of '.' (link).
To persist Google login across sessions in Flutter using InAppWebView, you need to manually manage cookies. Use CookieManager().getCookies() after login to store relevant cookies, then restore them with CookieManager().setCookie() on the next app launch before loading the Google login page. Also, make sure thirdPartyCookiesEnabled is set to true. This helps avoid the dreaded CookieMismatch issue.
I put
<i class="fab fa-whatsapp"></i>
and it worked OK.
Yes, generating a random number and then encoding it would seem like a great idea at first. A base62-encoded string of length 6 gives about 56.8 billion combinations (62^6), but when we generate random numbers and encode them (say at 1000 rps), the collision rate is around 880k strings, which can be an issue and requires the service to double-check the availability of each shortened string/URL.
So instead, a counter avoids re-checking the db for availability but has some security issues. Finally, the bijective function does a one-to-one mapping using the id of the URL in the db, then base-encodes it; when retrieving, the inverse function recovers the long URL. In both cases it saves us time by not checking the db for collisions.
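A minimal sketch of such a bijective base62 mapping (the alphabet order is a free design choice):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n: int) -> str:
    """Bijective mapping from a database row id to a short base62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def decode(s: str) -> int:
    """Inverse function: short string back to the row id."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

assert decode(encode(125)) == 125
```

Because the db id is unique, the short code is unique by construction, so no availability check is needed on insert.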