You cannot "set" this; it is a metric provided by Telegram themselves.
I had the same issue with a unit test, testing a method that returns a dict containing floats.
assertAlmostEqual
did almost what I needed, and I made it work by comparing the values of my returned dict with the values of the expected dict using a generator expression:
import unittest

class TestAlmostEqual(unittest.TestCase):
    def method_to_be_tested(self):
        return {"A": 1.035, "B": 3.074, "C": 5.777}

    def test_almost_equal(self):
        result = self.method_to_be_tested().values()
        expected = {"A": 1.030, "B": 3.073, "C": 5.779}.values()
        generator = (value for value in expected)
        for val in result:
            self.assertAlmostEqual(val, next(generator), places=2)
This should work from Python 3.7+ (in 3.6 it was only an implementation detail), since dictionaries preserve insertion order.
I'm relatively new to Python, so please tell me if I'm mistaken or messing something up.
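For comparison, here is a sketch of my own variant (not from the original answer) that compares by key instead of relying on insertion order, using subTest so every mismatching key is reported:

```python
import unittest

class TestAlmostEqualByKey(unittest.TestCase):
    def method_to_be_tested(self):
        return {"A": 1.035, "B": 3.074, "C": 5.777}

    def test_almost_equal(self):
        expected = {"A": 1.030, "B": 3.073, "C": 5.779}
        result = self.method_to_be_tested()
        self.assertEqual(result.keys(), expected.keys())
        for key, exp in expected.items():
            # subTest reports every failing key instead of stopping at the first
            with self.subTest(key=key):
                self.assertAlmostEqual(result[key], exp, places=2)
```

This also works on Python versions where dict order is not guaranteed, since the lookup is by key.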
Open the file in write ('w') or read-write ('r+') mode.
Modify the content in Python.
Write the changes back using .write().
The changes are saved automatically when the file is closed.
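A minimal sketch of those steps ("notes.txt" is a placeholder filename for illustration):

```python
# Create a file to work with (placeholder content)
with open("notes.txt", "w") as f:
    f.write("hello world\n")

# Open in read-write mode, modify the content, write it back
with open("notes.txt", "r+") as f:
    content = f.read()
    content = content.replace("world", "python")
    f.seek(0)        # rewind before overwriting
    f.write(content)
    f.truncate()     # drop any leftover tail if the new content is shorter
# changes are flushed to disk when the with-block closes the file

with open("notes.txt") as f:
    print(f.read())  # → hello python
```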
You can also try ANTHROPIC_AUTH_TOKEN via the claude setup-token command, or another method if it exists. See: https://docs.anthropic.com/en/docs/claude-code/settings#environment-variables
What I did was
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml
and then curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml
Edit the custom-resources.yaml file to use the Cluster CIDR and set the encapsulation method to VXLAN
then kubectl create -f custom-resources.yaml.
Don't use curl -LO or it won't work, and for the first step you must use kubectl create -f, not apply -f. You can also use curl, then copy the contents of custom-resources.yaml into vim, edit it, and then create it.
That is because of the Flutter configuration file called settings, whose path is
<user_home>/.config/flutter/settings
Its contents look like the following; the jdk-dir variable needs to point to your JDK path.
{
  "android-sdk": "/somewhere0/Android/Sdk",
  "jdk-dir": "/somewhere1/jvms/temurin-jdk"
}
This is not very well explained... I tried doing it exactly as described, putting the given values in catalina.properties, and it does not work at all.
First things first: Please create a minimal working example first.
In this case, create a new, empty mailbox, and then try to connect with your code to it. If that works, add a first mail, and try to retrieve it. Do it step by step, so you can verify your previous step is working.
In your openssl response, you get a response from MS Exchange Server, so both inbound and outbound networking is configured correctly.
You say it completes "successfully outside of AWS". How long does it take to complete? Is it close to the timeouts you've set? Could it be that your EC2 instance has fewer resources available than your "outside of AWS" machine and takes longer?
Try increasing the timeouts to, let's say, a minute. Does the behavior change?
My session has expired, please fix it 🙏 please fix it. Regards, King ID.
Did you ever figure this out? I'm having the same issue.
Upgrading to bcryptjs v3.0.2 seems to have fixed the problem for me on node v18.6.0
Regarding the timeout after 300 seconds aka 5 minutes:
The default timeout for Lambdas is 3 seconds, so I guess you already adjusted this.
Without any knowledge about the container image you're building/using, I strongly suspect that the container image hits some kind of cold-start situation, which exceeds the 5-minute timeout.
Found answers/explanations for similar cases here and here, although they don't match exactly.
So what's happening on AWS side when you update the image can be described like this:
Previous code/image is invalidated
Scheduling of the Lambda happens on an arbitrary server in the Lambda-hosting platform in AWS.
The new server has no knowledge about the previous image, so it needs to download it in full from ECR (no layer caching).
When the image is downloaded, it's executed. Does your image/application contain a lot of startup tasks? Like download dependencies, JVM starting, ...? All of this happens now.
Then, finally, the Lambda is ready to serve the event that triggered it in the first place.
This process takes time and is generally described as "cold start". See this for a more detailed description of the situations in which cold starts can be especially annoying. TL;DR: all invocations until the first Lambda instance is running will be delayed by cold-start behavior.
AWS docs around this topic can be found here. It even describes your exact error messages.
There are different ways to approach this. You can increase timeout, reduce image size, reduce image startup dependencies, change language, and more. Probably all of them are worth a separate question...
But hopefully, I was able to explain what you are seeing and get you on the right track.
I'm not allowed to comment due to rep but for those asking for locationID, it's attached to each business, not user:
You can find contactId as described in the other answer
The InitializationSystemGroup is not part of the Update phase of the player loop. If you have the default settings for the Update Mode in your input settings, then WasPressedThisFrame() will never trigger to a system in the InitializationSystemGroup.
You can either move your input reading into the Update phase (probably within the SimulationSystemGroup), or change your input settings update method. The option to process events manually has some caveats that you should be aware of if you take this route.
To me, putting the input reading at the start of the SimulationSystemGroup makes the most sense and should capture all input before it is needed.
Your problem is that decryption fails because the IV (initialization vector) used for encryption and for decryption is different. Besides that, you are using mcrypt, which is deprecated.
Use openssl_encrypt() and openssl_decrypt() with AES-256-CBC. Store the IV together with the encrypted data and send everything in the link.
I was having the same issue after upgrading to expo@53.
The solution was quite simple for me.
In your app.json
or app.config.js
add:
expo: {
  android: {
    edgeToEdgeEnabled: true // this line
  }
}
Check my extension based on previous answers for downloading files in a folder:
https://github.com/HaoranZhuExplorer/Download_Large_FOLDER_From_Google_Drive
It looks amazing. I'm not an expert on this bug, but best of luck.
Fixed it by adding this to settings.py:
MFA_ADAPTER = "myproject.mfaAdapter.MFAAdapter"
and in myproject/mfaAdapter.py:
from typing import Dict

from allauth.mfa.adapter import DefaultMFAAdapter

class MFAAdapter(DefaultMFAAdapter):
    def get_public_key_credential_rp_entity(self) -> Dict[str, str]:
        return {
            "id": "example.com",
            "name": "example.com",
        }
After some tweaking, here is what I came up with:
$(function() {
    $('a.page-numbers').on('keydown', function(e) {
        if (e.which === 32) {
            e.preventDefault();
            $('a.page-numbers')[0].click();
        }
    });
});
Works like a charm, hope this helps anyone else!
Navigate to the folder /Users/<username>/.aspnet and execute sudo chown -R $(id -u):$(id -g) ./
This folder contains the dev-certs folder, which holds the certificates. Once the local user has access to this folder, the application can be hosted on https.
When you call .focus() on an element (#focusable), the browser tries to ensure that the focused element is visible in the viewport. This may trigger:
1. A scroll adjustment, or
2. A layout reflow if the focus causes any changes in geometry or styling.
You can fix or avoid this behavior by:
1. Avoiding negative margins in tight layouts, especially when working with focusable elements.
2. Disabling scroll anchoring if needed:
html {
  overflow-anchor: none;
}
3. Ensuring sufficient space between the elements:
<div id="spacer" style="height: 5px;"></div>
<?php
$loginUrl = $instagram->getLoginUrl();
echo "<a class='button' href='$loginUrl'>Sign in with Instagram</a>";
?>
For people downloading in a container or from a VPN:
try setting HF_HUB_ENABLE_HF_TRANSFER=0
to use the default downloader. Don't waste time waiting.
For possible solutions see:
TAChart how to make different width and/or color only for a specific grid line
You can use @unset your_var_name to delete it.
Found it!
Add this to the server project's Program.cs:
builder.Services.AddRazorComponents()
.AddInteractiveWebAssemblyComponents()
.AddAuthenticationStateSerialization(
options => options.SerializeAllClaims = true);
Add this to the client project's Program.cs:
builder.Services.AddAuthorizationCore();
builder.Services.AddCascadingAuthenticationState();
builder.Services.AddAuthenticationStateDeserialization();
Wrap the Router component (in Routes.razor) in a CascadingAuthenticationState component. Looks like this:
@using Microsoft.AspNetCore.Components.Authorization
<CascadingAuthenticationState>
    <Router AppAssembly="typeof(Program).Assembly">
        <Found Context="routeData">
            <RouteView RouteData="routeData" DefaultLayout="typeof(Layout.MainLayout)"/>
            <FocusOnNavigate RouteData="routeData" Selector="h1"/>
        </Found>
    </Router>
</CascadingAuthenticationState>
Test page:
@page "/test"
@using Microsoft.AspNetCore.Components.Authorization
@inject AuthenticationStateProvider AuthenticationStateProvider
<h3>User Claims</h3>
@if (userName is null)
{
<p>Loading...</p>
}
else
{
<p>Hello, @userName!</p>
<ul>
@foreach (var claim in userClaims)
{
<li>@claim.Type: @claim.Value</li>
}
</ul>
}
@code {
private string? userName;
private IEnumerable<System.Security.Claims.Claim> userClaims = Enumerable.Empty<System.Security.Claims.Claim>();
protected override async Task OnInitializedAsync()
{
// Get the current authentication state
var authState = await AuthenticationStateProvider.GetAuthenticationStateAsync();
var user = authState.User;
if (user.Identity is not null && user.Identity.IsAuthenticated)
{
userName = user.Identity.Name;
userClaims = user.Claims;
}
else
{
userName = null;
userClaims = Enumerable.Empty<System.Security.Claims.Claim>();
}
}
}
I had the same issue with a Microsoft 365 mail account.
A customer sent me a message that an attempt to reset their password threw an error at them.
The "Test Connection" button (in Keycloak > Realm Settings > Email) returned the same unintelligible error.
I think this was due to the mail account being blocked after multiple login attempts from a malicious source.
Avoid negative margins inside overflow: hidden
containers unless you're managing layout precisely.
If it's for scroll anchoring, use proper scroll handling APIs.
For focusable elements, ensure they're visibly in bounds and not accidentally clipped.
We're facing an issue today with our Next.js project (version 12.3.1) where the next export command is suddenly failing, even though everything was working fine before. We haven't made any recent changes to our code or added new blog posts, and all the blogs were exporting properly earlier. Now, it's not just blocking new content — even the older blog pages are not exporting correctly. We are using dynamic routing with [slug].js, and the data is fetched using getStaticPaths and getStaticProps with fallback: false. The project still runs fine in development mode (next dev), but the problem only happens during export. We're not sure what's causing it, and would appreciate any help or suggestions to fix it.
The universal solution for ALL browsers, including Internet Explorer, is:
<input type="CHECKBOX" onclick="this.checked = this.defaultChecked;">
Removing authEndpoint actually works for me as well (facing the same issue: working perfectly fine on Android but causing problems elsewhere).
/* Remove the outline from any focused editable element */
[contenteditable="true"]:focus {
outline: none;
}
Or, if you are using a custom class on your editor:
.my-editor-class[contenteditable="true"]:focus {
outline: none;
}
Short resolution times: If most tickets are resolved within a few hours, showing "0.2 days" is less intuitive than "4.8 hours".
Granular tracking needed: For support teams aiming for SLAs like “resolve within 1 hour,” using days may obscure important insights.
Visual comparison: Bar or line charts showing values like "0.25 vs 0.5 days" look almost identical, but "6 vs 12 hours" shows more visual difference.
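A small sketch of that display choice (format_duration and the 24-hour threshold are my own illustrative names, not from the original):

```python
def format_duration(hours: float) -> str:
    # Show sub-day durations in hours, longer ones in days
    if hours < 24:
        return f"{hours:.1f} hours"
    return f"{hours / 24:.1f} days"

print(format_duration(4.8))   # → 4.8 hours
print(format_duration(60.0))  # → 2.5 days
```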
Missing or incompatible dependencies
Plugin not compatible with your QGIS version
Corrupted plugin installation
Python path issues or environment conflicts
Open QGIS
Go to Plugins → Python Console → Show Traceback or check the Log Messages Panel (View → Panels → Log Messages) for details.
Please share the full error message here if you'd like help interpreting it.
Go to Plugins → Manage and Install Plugins
Find Animation Workbench
Check if it says "This plugin is not compatible with your version of QGIS."
You may need to:
Update QGIS (use the latest LTR version)
Or install an older plugin version compatible with your QGIS
Sometimes a clean reinstall fixes weird bugs.
To remove:
~/.local/share/QGIS/QGIS3/profiles/default/python/plugins/animation_workbench
Delete the plugin folder, then reinstall it from the Plugin Manager.
If the error mentions modules like matplotlib, numpy, etc.:
On Windows (OSGeo4W Shell):
python3 -m pip install matplotlib numpy
On Linux/macOS:
Make sure to use the same Python environment QGIS uses.
In QGIS Python Console:
import sys
print(sys.version)
Then confirm that the plugin supports that version of Python.
Please paste the full Python error message here. It often starts like:
Traceback (most recent call last):
File ".../animation_workbench.py", line XX, in ...
Try =IF(N2="","",IF(OR(N2<TIME(8,0,0),N2>=TIME(19,0,0)),"Out of Hours","In Hours"))
This conditional checks whether the time is before 08:00 (N2<TIME(8,0,0)) or at or after 19:00 (N2>=TIME(19,0,0)). If either is true, then it's Out of Hours. Otherwise, it's In Hours.
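The same boundary logic, sketched in Python for anyone checking the edge cases outside the spreadsheet:

```python
from datetime import time

def classify(t: time) -> str:
    # Before 08:00 or at/after 19:00 is "Out of Hours"; otherwise "In Hours"
    if t < time(8, 0) or t >= time(19, 0):
        return "Out of Hours"
    return "In Hours"

print(classify(time(7, 59)))  # → Out of Hours
print(classify(time(8, 0)))   # → In Hours
print(classify(time(19, 0)))  # → Out of Hours
```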
I haven't tried this option but it's indicated on Google: https://cloud.google.com/logging/docs/view/streaming-live-tailing
I recently found a somewhat hacky solution for that by passing the following pref to Chromedriver:
"chromeOptions" : {
"prefs" : {
"browser.theme.color_scheme2": 2
}
}
Two-step verification is one piece of the authentication flow: proving that the person asking is who they say they are.
SSO is where you as the user get a secret, manually or automatically; your applications/network/systems are configured to challenge that secret, the challenge succeeds, and you get authorized (separately, your permissions are hydrated from the access controls tied to who you are). Many applications meant to run in environments where SSO can be expected already have ready-to-go functionality, settings, and configurations for communicating with SSO. Sometimes you have to borrow or make your own way to subvert the default logins.
It's worth checking for open files somewhere in your editor. In my case, the file/folder that Composer was trying to delete was open in my code editor. Running Composer after closing all code editors solved the problem for me.
Have you been able to solve this problem?
Don't forget to add the mandatory comments before your SQL in the migration .sql file. Otherwise you will get an ORA-00922 error.
--liquibase formatted sql
--changeset id:0 - create some sql table
Bundled Maven is IDE-level and the Maven wrapper is project-level. Think of it like a swimming pool versus a bathtub: the bundled one is for the IDE and all its projects to use, while the wrapper is customized for each project, so it maintains consistency.
To build a website like Strikeout.im or VIPBox.lc, you'll need a frontend (React, Vue.js), a backend (Node.js, Django), and a database (PostgreSQL, MongoDB). If embedding streams, use legal sources (YouTube, official broadcasters) or APIs (Sportradar, ESPN) for scores. For illegal streams, beware of legal risks (DMCA takedowns, lawsuits). Host on AWS/Cloudflare for scalability, use FFmpeg/HLS for streaming, and monetize via ads (AdSense) or subscriptions. However, self-hosting illegal streams is risky; consider a legal alternative like sports news or live-score tracking instead. Always consult a lawyer before proceeding.
itemClick(int index) {
  setState(() {
    selectedIdx = index;
    tabController!.index = selectedIdx; // this will fix the issue
  });
}
The fix is not just updating the selectedIdx state but also setting the index on the TabController:
tabController!.index = selectedIdx
#include <stdio.h>

int main(void)
{
    int i;
    int j;

    for (i = 1; i < 5; i++) {
        for (j = 1; j < 5; j++) {
            if (i == j)
                printf("%d\t", j);
            else
                printf("%d\t", 0);
        }
        printf("\n");
    }
    return 0;
}
This error usually happens when the module you're trying to import is not returning the expected class or object.
Make sure that your `redis-test.ts` is exporting a **valid object or function**, not `undefined`.
Also, if you're using CommonJS modules (`require`) and trying to import them using ES Modules (`import`), there can be a mismatch.
Try changing your `redis-test.ts` file like this:
```ts
import * as redis from 'redis';
const client = redis.createClient();
client.connect();
export default client;
```
OK, but what if I generate data from the location.get_clearsky method?
Using latitude and longitude, I can calculate solar irradiance. If I'm not mistaken, this function doesn't account for cloud cover. How can I implement a reduction in solar irradiance based on this parameter? This value is easily obtained from meteorology websites and ranges from 100 (completely cloudy) to 0 (clear sky).
Dawid
Adding to SpaceTrucker's answer: dependency:collect
has also parameters <excludeArtifactIds>
, <excludeGroupIds>
and more.
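For illustration, a hedged sketch of how those parameters might appear in the plugin configuration (the artifact and group IDs are made up for the example):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <configuration>
    <!-- skip these artifacts/groups when collecting dependencies -->
    <excludeArtifactIds>junit,hamcrest-core</excludeArtifactIds>
    <excludeGroupIds>org.slf4j</excludeGroupIds>
  </configuration>
</plugin>
```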
You can set directory recurse = true for the Application/ApplicationSet.
Refer - https://argo-cd.readthedocs.io/en/stable/user-guide/directory/#enabling-recursive-resource-detection
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    directory:
      recurse: true
Use the collect function instead of the show function; show also creates multiple jobs. Try running the same thing with the collect method. You will see only one job then.
I'd say into a git repo hosted in your private network, separated from the main project, which can be open-sourced at that point.
I had the same question and ended up creating a support ticket with AWS.
This was their response:
---------------
When creating materialized views from Zero-ETL tables across databases, users need both:
SELECT permission on the materialized view in the target database
SELECT permission on the source Zero-ETL table
This differs from regular cross-database scenarios because Zero-ETL maintains a live connection to the source RDS MySQL database. The additional permission requirement ensures proper security controls are maintained across the integration.
---------------
This means that the documentation you are looking at for permissions is not valid for a Zero-ETL source.
I know I'm almost 3 years late to answer, but I am working on an assignment and one of the tasks is to read/write JSON data to a .json file.
Anywho.
I tried writing --> [] <-- in the JSON file, which previously wasn't there. Since JSON data is stored in array format (forgive my wording as it might be wrong), once I added the brackets it worked, because the file is no longer empty, just an empty JSON array (with no data in it yet).
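A quick sketch of what that fix looks like (using a temp file so the example is self-contained):

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.json")

# A completely empty file is not valid JSON: json.load raises JSONDecodeError
open(path, "w").close()
try:
    with open(path) as f:
        json.load(f)
except json.JSONDecodeError:
    print("empty file: invalid JSON")

# Seeding the file with an empty array makes it parseable
with open(path, "w") as f:
    f.write("[]")
with open(path) as f:
    print(json.load(f))  # → []
```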
You might take a look at "ReportViewer Core":
This project is a port of Microsoft Reporting Services (Report Viewer) to .NET 6+. It is feature-complete and ready for production use, but keep in mind it is not officially supported by Microsoft.
def double_even(numbers):
    return [num * 2 for num in numbers if num % 2 == 0]

nums = [1, 2, 3, 4, 5]
result = double_even(nums)
print("Result:", result)
Yes, the behavior difference makes sense in JAX. When var was a static class member (in the pytree), JAX treated it as a compile-time constant: it gets baked into any JIT-compiled code and isn't part of the differentiable computation graph.
But when you move it to a function argument, JAX sees it as a dynamic value that can change between calls. It becomes part of the computation graph; gradients can flow through it; it is subject to transformations (like vmap).
You can use class members for true constants that never change, and function arguments for values you might differentiate through or batch over.
When using CameraRoll on Android, especially with photos taken directly by the camera, it often fails to return accurate width and height values. This caused issues in my image editor — the crop functionality would break because of these incorrect dimensions.
So, I built a small native module to solve this problem. If you’re facing the same issue and ended up here looking for a fix, feel free to try this package:
add_filter('body_class', function( $classes ) {
    $classes[] = 'custom-body-class';
    return $classes;
});
$context = Timber::context();
$context['body_class'] = implode(' ', get_body_class());
Timber::render('index.twig', $context);
I had the same issue. A macro that I created for sorting, formatting, etc., and one that I have been using for quite some time, suddenly was not running; the Run button was greyed out. In Excel Options > Trust Center > Macro Settings, I just had to enable "Trust access to the VBA project object model". It is working now.
Thanks a lot for your suggestions!
I ran a manual test by replacing the Markdown content in my script with this string
du texte avec des accents : é, è, à, ç, ê, î.
✅ The result was displayed correctly in Notion — all accented characters showed up as expected.
This confirms the issue is not related to UTF-8 encoding, the Notion API, or the way blocks are built in my script. The problem might come from a specific file or a font rendering issue in some cases.
I’ll dig deeper into the original resume.md
that caused the issue and report back if I find something unusual.
Thanks again for your help!
I tested the .md:
`wilonweb@MSI MINGW64 ~/Documents/VisualStudioCode/YT-GPT-Notion/YoutubeTranscription/mes-transcriptions/ya_juste_6_concepts_pour_tout_comprendre_au_devops (master)
$ file -i resume.md
resume.md: text/plain; charset=utf-8`
And this is my script that builds the markdown:
const fs = require('fs');
const path = require('path');
const axios = require('axios');
const { Command } = require('commander');
const chalk = require('chalk');
const cliProgress = require('cli-progress');
// 🌱 Charge le .env depuis la racine du projet
require('dotenv').config({ path: path.join(__dirname, '..', '.env') });
const program = new Command();
program
.option('-m, --model <model>', 'Modèle OpenAI', 'gpt-3.5-turbo')
.option('-t, --temp <temperature>', 'Température', parseFloat, 0.5)
.option('--delay <ms>', 'Délai entre appels API (ms)', parseInt, 2000)
.option('-i, --input <path>', 'Chemin du dossier input', './input');
program.parse(process.argv);
const options = program.opts();
const inputFolder = path.resolve(options.input); // ⬅️ résout vers chemin absolu
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
console.error(chalk.red('❌ Clé API manquante dans .env'));
process.exit(1);
}
console.log(chalk.blue(`📥 Traitement du dossier : ${inputFolder}`));
const wait = ms => new Promise(res => setTimeout(res, ms));
// 🔎 Liste les fichiers de chapitres valides
function getChapterFiles() {
const files = fs.readdirSync(inputFolder)
.filter(name =>
/^\d{2}/.test(name) &&
name.endsWith('.txt') &&
!name.toLowerCase().includes('original') &&
!name.toLowerCase().includes('info')
)
.sort();
if (files.length === 0) {
console.error(chalk.red('❌ Aucun fichier de chapitre trouvé (ex: 01_intro.txt)'));
process.exit(1);
}
return files.map(filename => ({
filename,
filepath: path.join(inputFolder, filename),
title: filename.replace(/^\d+[-_]?/, '').replace(/\.txt$/, '').replace(/[_\-]/g, ' ').trim()
}));
}
// 🔗 Lecture des infos du fichier info.txt
function readInfoTxt() {
const infoPath = path.join(inputFolder, "info.txt");
if (!fs.existsSync(infoPath)) {
console.warn(chalk.yellow('⚠️ Aucun info.txt trouvé dans le dossier de transcription.'));
return {};
}
const content = fs.readFileSync(infoPath, "utf8");
const getLineValue = (label) => {
const regex = new RegExp(`^${label} ?: (.+)$`, "m");
const match = content.match(regex);
return match ? match[1].trim() : null;
};
return {
videoUrl: getLineValue("🎬 URL de la vidéo"),
channelName: getLineValue("📺 Chaîne"),
channelLink: getLineValue("🔗 Lien"),
description: content.split("## Description")[1]?.trim() || "",
raw: content
};
}
// 🔧 Récupère le nom du dossier à partir du fichier original_*.txt
function getSlugFromOriginalFile() {
const file = fs.readdirSync(inputFolder).find(f => f.startsWith("original_") && f.endsWith(".txt"));
if (!file) return "no-title-found";
return file.replace(/^original_/, "").replace(/\.txt$/, "");
}
// 🧠 Résume un texte avec OpenAI
async function summarize(text, promptTitle) {
const prompt = `${promptTitle}\n\n${text}`;
for (let attempt = 1; attempt <= 3; attempt++) {
try {
const res = await axios.post('https://api.openai.com/v1/chat/completions', {
model: options.model,
messages: [{ role: 'user', content: prompt }],
temperature: options.temp
}, {
headers: {
Authorization: `Bearer ${apiKey}`,
'Content-Type': 'application/json'
}
});
return res.data.choices[0].message.content.trim();
} catch (err) {
if (err.response?.status === 429) {
console.warn(chalk.yellow(`⚠️ ${attempt} - Limite atteinte, pause...`));
await wait(3000);
} else {
console.error(chalk.red(`❌ Erreur : ${err.message}`));
return '❌ Erreur de résumé';
}
}
}
return '❌ Résumé impossible après 3 tentatives.';
}
// MAIN
(async () => {
const chapters = getChapterFiles();
const info = readInfoTxt();
const slug = getSlugFromOriginalFile();
const title = slug.replace(/[_\-]/g, ' ').trim();
//const outputDir = path.join('output', slug);
const outputDir = path.join(__dirname, '..', 'YoutubeTranscription', 'mes-transcriptions', slug); // ✅ bon dossier
const outputFile = path.join(outputDir, 'resume.md');
fs.mkdirSync(outputDir, { recursive: true });
console.log(chalk.yellow(`📚 ${chapters.length} chapitres détectés`));
const bar = new cliProgress.SingleBar({}, cliProgress.Presets.shades_classic);
bar.start(chapters.length, 0);
const chapterSummaries = [];
for (const chapter of chapters) {
const text = fs.readFileSync(chapter.filepath, 'utf8').trim();
const summary = await summarize(text, `Tu es un professeur francophone. Résume en **langue française uniquement**, avec un ton structuré et pédagogique, le chapitre suivant intitulé : "${chapter.title}".`);
chapterSummaries.push({ ...chapter, summary });
bar.increment();
await wait(options.delay);
}
bar.stop();
console.log(chalk.blue('🧠 Génération du résumé global...'));
const fullText = chapterSummaries.map(c => c.summary).join('\n\n');
const globalSummary = await summarize(fullText, "Tu es un professeur francophone. Fusionne exclusivement en **langue française**, de façon concise, structurée et pédagogique, les résumés suivants :");
// 📝 Création du fichier résumé markdown
const header = `# ${title}\n\n` +
(info.videoUrl ? `🎬 [Vidéo YouTube](${info.videoUrl})\n` : '') +
(info.channelName && info.channelLink ? `📺 ${info.channelName} – [Chaîne](${info.channelLink})\n` : '') +
(info.description ? `\n## Description\n\n${info.description}\n` : '') +
`\n## Résumé global\n\n${globalSummary}\n\n` +
`## Table des matières\n` +
chapterSummaries.map((c, i) => `### Chapitre ${i + 1}: ${c.title}`).join('\n') +
'\n\n' +
chapterSummaries.map((c, i) => `### Chapitre ${i + 1}: ${c.title}\n${c.summary}\n`).join('\n');
fs.writeFileSync(outputFile, header, 'utf8');
console.log(chalk.green(`✅ Résumé structuré enregistré dans : ${outputFile}`));
// 🗂️ Copie du fichier info.txt vers le dossier output
const infoSourcePath = path.join(inputFolder, 'info.txt');
const infoDestPath = path.join(outputDir, 'info.txt');
console.log(`📦 Copie de info.txt depuis : ${infoSourcePath}`);
if (fs.existsSync(infoSourcePath)) {
fs.copyFileSync(infoSourcePath, infoDestPath);
console.log(chalk.green(`📄 info.txt copié dans : ${infoDestPath}`));
} else {
console.warn(chalk.yellow('⚠️ Aucun fichier info.txt trouvé à copier.'));
}
})();
cat printf.sh
#!/bin/bash
echo "$BASH_VERSION"
printf '\u002f'   # prints a literal "/" (\uHHHH requires bash >= 4.2)
./printf.sh
You can install PyGObject-stubs to help your tooling: https://pypi.org/project/PyGObject-stubs/
I know this is a bit of an old topic, but the challenge still persists. I wanted something simple to search my (minimal) 10 MB (stringified) array of objects with unique IDs. The standard "loop through it" approach takes minutes, which was just horrible. I've built something that is lightning fast and I've decided to share it; take a look:
https://github.com/GeoArchive/json-javascript-simple-fast-search
You should update Metalama to a version that supports 9.0.3xx SDK (or more specifically Roslyn 4.14). In this case, the first version that supports it is 2025.1.7.
Just to clarify: is your problem that the links themselves don't work, or that the ref to /entry/:id isn't working?
What works best for me is to launch the URL and that allows the user to download the photo without having to set up any permissions or extra packages.
I followed these instructions but Facebook seems to ignore it.
arr = [1, 2, 2, 3, 3, 3, 4]
freq_dict = {}
for num in arr:
    if num in freq_dict:
        freq_dict[num] += 1
    else:
        freq_dict[num] = 1
max_freq = max(freq_dict.values())
print(max_freq)
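For reference, collections.Counter does the same counting in one step:

```python
from collections import Counter

arr = [1, 2, 2, 3, 3, 3, 4]
counts = Counter(arr)
max_freq = max(counts.values())
print(max_freq)               # → 3
print(counts.most_common(1))  # → [(3, 3)]
```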
Let's agree that BulmaCSS has one of the worst docs ever, along with an overly ugly webpage and a useless search function. I stumbled across hundreds of posts asking about the typography, but BulmaCSS's own webpage search doesn't even have any information on font families.
I consider BulmaCSS a dead project because of the team behind it.
<?php
require_once __DIR__ . "/vendor/autoload.php";
use PhpOffice\PhpSpreadsheet\Spreadsheet;
use PhpOffice\PhpSpreadsheet\IOFactory;
use PhpOffice\PhpSpreadsheet\Style\Alignment; // Import the Alignment class
$spreadsheet = new Spreadsheet();
$sheet = $spreadsheet->getActiveSheet();
$sheet->getColumnDimension('A')->setWidth(12);
$sheet->getColumnDimension('B')->setWidth(12);
$sheet->getColumnDimension('C')->setWidth(12);
// Set value and alignment for numeric cell
$sheet->setCellValue("A1", 1234567);
$sheet->getStyle('A1')->getAlignment()->setHorizontal(Alignment::HORIZONTAL_LEFT); // Force left alignment
// Set value for string cells (default is usually left-aligned)
$sheet->setCellValue("B1", 'ABCDEFG');
$sheet->setCellValue("C1", 'QWERTY');
header('Content-Type: application/pdf');
header('Content-Disposition: attachment;filename="test.pdf"');
header('Cache-Control: max-age=0');
$objwriter = IOFactory::createWriter($spreadsheet, 'Mpdf');
$objwriter->save("php://output");
It is working now thanks to @sidhartthhhhh's idea. The corrected code is below:
import numpy as np
from scipy.signal import correlate2d

def fast_corr2d_pearson(a, b):
    """
    Fast 2D Pearson cross-correlation of two arrays using convolution.
    Output shape is (2*rows - 1, 2*cols - 1) like correlate2d with mode=full.
    """
    assert a.shape == b.shape
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    rows, cols = a.shape

    ones = np.ones_like(a)        # used to count the overlapping elements at each lag
    n = correlate2d(ones, ones)   # number of overlapping bins for each offset

    sum_a = correlate2d(a, ones)  # sum of a in overlapping region
    sum_b = correlate2d(ones, b)  # sum of b in overlapping region
    sum_ab = correlate2d(a, b)
    sum_a2 = correlate2d(a**2, ones)
    sum_b2 = correlate2d(ones, b**2)

    numerator = sum_ab - sum_a * sum_b / n
    s_a = sum_a2 - sum_a**2 / n
    s_b = sum_b2 - sum_b**2 / n
    denominator = np.sqrt(s_a * s_b)

    with np.errstate(invalid='ignore', divide='ignore'):
        corr = numerator / denominator
    corr[np.isnan(corr)] = 0
    return corr
Each step can be verified to be correct:
lag_row, lag_col = 3, 7  # any test case
row_idx, col_idx = rows - 1 + lag_row, cols - 1 + lag_col  # 2D index into the resulting corr matrix
a_lagged = a[lag_row:, lag_col:]  # only valid for lag_row > 0, lag_col > 0
sum_a[row_idx, col_idx] == np.sum(a_lagged)
Slow code for comparison:
from scipy.stats import pearsonr
from tqdm import tqdm

rows, cols = data.shape
row_lags = range(-rows + 1, rows)
col_lags = range(-cols + 1, cols)
autocorr = np.empty((len(row_lags), len(col_lags)))
for lag_row in tqdm(row_lags):
    for lag_col in col_lags:
        # Create a lagged version of the data
        # todo: implement logic for lag=0
        if lag_row >= 0 and lag_col >= 0:
            data_0 = data[lag_row:, lag_col:]
            data_1 = data[:-lag_row, :-lag_col]
        elif lag_row < 0 and lag_col < 0:
            data_0 = data[:lag_row, :lag_col]
            data_1 = data[-lag_row:, -lag_col:]
        elif lag_row >= 0:
            data_0 = data[lag_row:, :lag_col]
            data_1 = data[:-lag_row, -lag_col:]
        else:
            data_0 = data[:lag_row, lag_col:]
            data_1 = data[-lag_row:, :-lag_col]
        try:
            corr = pearsonr(data_0.flatten(), data_1.flatten()).statistic
        except ValueError:
            corr = np.nan
        autocorr[lag_row + rows - 1, lag_col + cols - 1] = corr
This example is for autocorrelation but works the same.
Related discussion https://stackoverflow.com/a/51168178
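A quick sanity check of the fast version, assuming numpy and scipy are installed: at zero lag (full overlap) the output must equal the plain Pearson correlation of the flattened arrays. The function is repeated here so the snippet runs standalone; the random test arrays are made up for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

def fast_corr2d_pearson(a, b):
    # Same definition as in the answer above, repeated so this runs standalone.
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    ones = np.ones_like(a)
    n = correlate2d(ones, ones)          # overlap size at each lag
    sum_a = correlate2d(a, ones)
    sum_b = correlate2d(ones, b)
    sum_ab = correlate2d(a, b)
    sum_a2 = correlate2d(a**2, ones)
    sum_b2 = correlate2d(ones, b**2)
    numerator = sum_ab - sum_a * sum_b / n
    denominator = np.sqrt((sum_a2 - sum_a**2 / n) * (sum_b2 - sum_b**2 / n))
    with np.errstate(invalid='ignore', divide='ignore'):
        corr = numerator / denominator
    corr[np.isnan(corr)] = 0
    return corr

rng = np.random.default_rng(0)
a = rng.random((8, 9))
b = rng.random((8, 9))
rows, cols = a.shape
corr = fast_corr2d_pearson(a, b)

# Zero lag (full overlap) sits at index (rows-1, cols-1) of the 'full' output
# and must match the plain Pearson correlation of the flattened arrays.
zero_lag = corr[rows - 1, cols - 1]
reference = np.corrcoef(a.ravel(), b.ravel())[0, 1]
print(np.isclose(zero_lag, reference))   # True
```

The same index arithmetic from the "Each step can be verified" snippet extends this check to any other lag.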
VB.NET has a Strings class with a Right method that returns the last 1 (or more) characters.
Dim myString as String = "123456"
Dim sLast as String = Strings.Right(myString, 1)
https://learn.microsoft.com/en-us/dotnet/api/microsoft.visualbasic.strings?view=netframework-4.8.1
There are multiple reasons why that could happen, such as a failing webhook. However, if the more trivial reasons don't apply, here is something I have observed that might also affect you:
I had a similar situation where I observed the following: my invoice was supposed to be drafted on July 24, 7:22AM. However, I decided to advance with test clock up to July 25, 9:42AM so I would have expected that the invoice was already finalized.
Instead I saw the invoice as draft and Stripe was saying: "Subscription invoice will be finalized and charged 7/25/25, 10:42 AM", so 1 hour after my test clock.
I then tried again with a different subscription by advancing to the exact moment in which the invoice would be drafted (rather than later on) and, in that case, I still saw the message saying that it would be finalized 1 hour after.
In both cases, the invoice was actually finalized by advancing with test clock by an additional hour.
So I think it might be a bug on Stripe's side where the invoice is not finalized at the correct moment; instead it depends on the way you advance the test clock. Can you check if it's the same for you?
This approach is correct and follows the basic pattern. The only thing worth adding is that if machineState.State is invalid (for example, missing from states), it can lead to a panic.
Something like this? (If I understood your question!)
summary_data <- data %>%
  group_by(x1) %>%
  summarize(
    mean_y = mean(y),
    sd_y = sd(y)
  )

ggplot(summary_data, aes(x = x1, y = mean_y)) +
  geom_bar(stat = "identity", fill = "#336699") +
  geom_label(aes(label = round(mean_y, digits = 2)), position = position_nudge(x = -0.2, y = -0.5)) +
  geom_errorbar(aes(ymin = mean_y - sd_y, ymax = mean_y + sd_y),
                width = 0.2,
                size = 0.8,
                color = "darkred") +
  xlab("x1") +
  ylab("y") +
  theme_classic(base_size = 12) +
  coord_cartesian(ylim = c(0, max(summary_data$mean_y + summary_data$sd_y) * 1.1))
As of 16 June 2025, v1.2.0 of python-docx has support for comments! Very long overdue!
from docx import Document

document = Document()
paragraph = document.add_paragraph("Hello, world!")
comment = document.add_comment(
    runs=paragraph.runs,
    text="I have this to say about that",
    author="Steve Canny",
    initials="SC",
)
https://python-docx.readthedocs.io/en/latest/user/comments.html
There's a readline() in C; just install it using your package manager (see the readline() documentation for more). Otherwise you could use a combination of read() and realloc() until you reach EOF.
The answer that zteffi gave is almost right. This is the documented function signature from https://docs.opencv.org/4.x/da/d54/group__imgproc__transform.html:
cv.warpPerspective(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) -> dst
Notice that "flags" is only included if the "dst" argument is included? You need to include a dst argument for the flag cv2.WARP_INVERSE_MAP to be at the right position in the call.
Now, I don't see anything particularly interesting in that dst value after a call (it does not appear to hold the requested result; the return value of the function does). You can simply pass None for dst:
transformed_points = cv2.warpPerspective(p_array, matrix, (2,1), None, cv2.WARP_INVERSE_MAP)
EF Core scaffolding does not delete old files. It only creates/updates based on what's in the database.
If you delete a table, you need to manually delete the corresponding entity class file in your project folder.
Thank you for the suggestion. This is a new project that already uses null safety. My pubspec.yaml has the environment sdk: '>=3.22.0 <4.0.0'. The errors I am seeing seem to be at a deeper level, where the compiler cannot find core Flutter types like Offset or Color, even after a clean reinstall of all tools.
Add this rule:
body:has(.select2.select2-container--open) {
    overflow-x: hidden !important;
}
To stop VS Code from auto-building all Java projects in a monorepo, disable Java auto-build, Gradle, and Maven auto-import in your workspace settings. Turn off automatic build configuration updates to prevent background project scanning. Use a .javaProjects file to limit Java language support to specific folders. Temporarily disable the Java extension when not needed to avoid unnecessary resource usage. Also, block extension suggestions for Java in .vscode/extensions.json to keep your workspace clean. These steps help you control Java behavior in large codebases and work efficiently without triggering builds for unrelated projects.
The simplest fix, to me, is just to turn off Apache on WSL2:
sudo service apache2 stop
or disable it entirely:
sudo systemctl disable apache2
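To confirm the port is actually free afterwards, you can probe it from any language; here is a minimal sketch in Python (localhost and port 80 are assumptions for the usual Apache default):

```python
import socket

# Probe localhost:80; connect_ex returns 0 when something is still listening.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(1)
    result = s.connect_ex(("127.0.0.1", 80))

print("port 80 still in use" if result == 0 else "port 80 is free")
```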
What a clever use of the comma operator and operator overloading! Let’s start with these neat little classes.
First comes the log class. It overloads the comma operator, accepts arguments of a specific type, turns them into log messages, and flushes them in its destructor.
So, on line 82:
constexpr log_unified $log{};
you'll see a global object named $log; judging from the name, it's meant to mimic some command-line logging syntax.
Next are two wrapper classes. return_wrapper stores the incoming argument via move semantics in its constructor. Inside the overloaded comma operator it concatenates the new argument with the stored one, constructs a fresh return_wrapper, and returns it.
After that tiny tour we turn to testVoid4() and its single line:
return $log, "testVoid4";
Here's the fun part: testVoid4() is declared void, yet it sports a return (something).
void testVoid4() {
return $log, "testVoid4";
}
This isn't standard: the function's fate is handed to whatever (something) evaluates to. If (something) were just a harmless expression, we'd be safe; but if it's an object, we've effectively written return obj; and the compiler complains.
So what exactly is (something)? Look closer: ($log, "testVoid4"). Evaluated left-to-right, the first sub-expression is the global variable $log. Therefore the comma becomes $log's overloaded operator,, i.e. $log.operator,("testVoid4"), which expands to
return void_wrapper<std::decay_t<T>>(std::forward<T>(arg));
or concretely
void_wrapper<const char*>(std::forward<const char*>("testVoid4"));
a temporary object, in other words! Alas, our little testVoid4() ends up trying to return a tiny void_wrapper, so the build fails.
I want to build an eng or userdebug kernel for a Pixel 6a. I read https://source.android.com/docs/setup/build/building-pixel-kernels#supported-kernel-branches but it does not mention how to change the variant. I do not want to build the whole AOSP if possible; I just need the boot.img and DLKM stuff.
Since android-15, AOSP has pivoted to using Bazel as the build system. I can build the regular kernel just fine.
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
exec tools/bazel run \
--config=stamp \
--config=bluejay \
//private/devices/google/bluejay:gs101_bluejay_dist
Maybe it's worth considering using another DBMS like PostgreSQL.
The solution from vscode-dotnet-runtime#2325 worked for me: manually adding "...\your path to aspnetcore\\dotnet.exe" into settings.json and restarting VS Code made C#/Unity IntelliSense work fine without internet.
None of the answers worked. I configured and wrote my own transformer to make this work.
(0) Very important notes
Always clear the jest cache, or work without a cache, when making changes in jest.config.js or the transformer. Otherwise your changes will have no effect.
Clear cache: add the option --clearCache to your test run.
Run without cache: add the option --no-cache to your test run.
(1) Create a test case to verify the output and see what's going on:
file: /src/mytest.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import MySvg from '@src/mysvg.svg';
test('asdf', () => {
    render(<MySvg/>);
    screen.getByText("aaa"); // this WILL FAIL, so you'll see the output
});
(2) Configure jest
file: /jest.config.js
...
const exportConfig = {
    transform: {
        ...
        // ↓ configure: "for svg files, use `svgTransform.js`"
        '^.+\\.(svg)$': '<rootDir>/.jest/svgTransform.js',
    },
    ...
}
(3) Create the transformer
file: /.jest/svgTransform.js
const path = require('path');
module.exports = {
process(src, filename) {
const absPath = path.resolve(filename);
const code = `
const React = require('react');
module.exports = {
__esModule: true,
default: () => React.createElement('img', {
src: "${absPath}"
}),
};
`;
return { code };
},
};
(4) Lessons learned
Nothing works as expected. No docs, no AI. Every change is an adventure.
Most AI and internet answers say module.exports = $(unknown) or the like. The problem is that the generated code will be </assets/your_file.svg/>, which is not a valid tag and therefore causes an error.
document.createElement instead of React.createElement does not work.
default: () => <img ... /> does not work either (presumably because the transformer output is not run through JSX compilation).
Same issue, same setup. It was working well until about 16 hours ago.
Youtube streams can expire, you need to handle those if you aren't downloading the file directly. You can try reconnect or retry options in youtube_dl or ffmpeg, or you can create a function/method that will handle it. Some examples from old answers, may be outdated: stackoverflow.com/questions/66610012/…
– Benjin, commented Jun 24 at 8:18
yes this works
I just ran into this. I'm using VS Code and GoogleTest. The initial error message gave no context as to where the error was happening. But checking the "All Exceptions" checkbox in the Breakpoints section of the Run and Debug tab caused VS Code to correctly break at the place the error was happening.
In GridDB, TimeSeries containers do not support updating individual columns directly. To update a specific field, you must read the entire row for the given timestamp, modify the desired column, and write the full row back using put(). Partial updates are not allowed. This ensures the entire row is replaced atomically. If needed, you can use a transaction to make the read-modify-write operation safe from concurrency issues.
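The read-modify-write cycle above can be sketched like this. Note that FakeTimeSeries and its get/put signatures are stand-ins for illustration only, not the real GridDB client API:

```python
class FakeTimeSeries:
    """Stand-in for a GridDB TimeSeries container (illustration only)."""
    def __init__(self):
        self._rows = {}  # timestamp -> full row (list of column values)

    def get(self, timestamp):
        return list(self._rows[timestamp])   # read a copy of the whole row

    def put(self, timestamp, row):
        self._rows[timestamp] = list(row)    # replace the entire row atomically

def update_column(container, timestamp, col_index, new_value):
    # Read the full row, modify only the desired field, write the row back.
    row = container.get(timestamp)
    row[col_index] = new_value
    container.put(timestamp, row)

ts = FakeTimeSeries()
ts.put("2024-01-01T00:00:00Z", [20.5, 55.0, "ok"])
update_column(ts, "2024-01-01T00:00:00Z", 1, 61.2)   # update only column 1
print(ts.get("2024-01-01T00:00:00Z"))                # [20.5, 61.2, 'ok']
```

With the real client, the same pattern applies inside a transaction so the read and the put happen atomically.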
First of all, maybe try using all the data, not just the last row of every 10 samples; this will increase your dataset size.
Secondly, in training you used the 4th row of every 10 samples, while in testing you used the 10th. You should treat both datasets the same way instead of selecting rows inconsistently.
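Both points can be sketched in a few lines; the block size of 10 and the dummy data are assumptions for illustration, not from the original dataset:

```python
BLOCK = 10  # assumed grouping of the raw samples

def every_kth_row(rows, k, block=BLOCK):
    """Take the k-th row (1-based) of each complete block of `block` rows."""
    return [rows[i + k - 1] for i in range(0, len(rows) - block + 1, block)]

samples = list(range(100))            # 100 dummy rows = 10 blocks of 10

# Inconsistent (what the question did): 4th row for train, 10th for test.
train_rows = every_kth_row(samples, 4)
test_rows = every_kth_row(samples, 10)
print(train_rows[:3], test_rows[:3])  # [3, 13, 23] [9, 19, 29]

# Consistent: pick ONE rule and apply it to both splits...
train_rows = every_kth_row(samples, 10)
test_rows = every_kth_row(samples, 10)

# ...or better, keep every row and grow the dataset tenfold.
all_rows = list(samples)
print(len(train_rows), len(all_rows))  # 10 100
```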
Instead of using git merge, use git rebase to reapply your commits from the diverged branch (feature) onto the updated main branch.
Switch to the diverging branch and follow the commands below:
git checkout feature                               # Switch to the diverging branch
git merge-base main feature                        # Find the common ancestor
git rebase --onto main <common-ancestor> feature   # Rebase onto the updated main
git rebase --continue                              # Continue after resolving any conflicts
If the branch isn't public/shared, then try force-aligning history using the command below:
git rebase -i main
Note: this approach rewrites history, so only do it on branches that are not shared, i.e. on feature branches, not protected ones.
After rebasing, if you had already pushed the feature branch before, you'll need to force-push:
git push --force-with-lease
Finally, use the commands below as a clean and effective way to resolve diverging branches after git lfs migrate:
git checkout feature
git rebase main
Use le_uint() instead of uint().
{
    "name": "System Name",
    "short_name": "System",
    "icons": [
        { ... }
    ],
    "start_url": "/bwc/",
    "display": "standalone",
    "theme_color": "#0066FF",
    "background_color": "#FFFFFF",
    "orientation": "portrait"
}
Try downgrading your Flutter version, and pick an algolia version that is compatible with it. Don't always jump to the latest Flutter, because some packages need time to catch up with the latest release.