To build a website like Strikeout.im or VIPBox.lc, you’ll need a frontend (React, Vue.js), a backend (Node.js, Django), and a database (PostgreSQL, MongoDB). If embedding streams, use legal sources (YouTube, official broadcasters) or APIs (Sportradar, ESPN) for scores. For illegal streams, beware of legal risks (DMCA takedowns, lawsuits). Host on AWS/Cloudflare for scalability, use FFmpeg/HLS for streaming, and monetize via ads (AdSense) or subscriptions. However, self-hosting illegal streams is risky; consider a legal alternative like sports news or live-score tracking instead. Always consult a lawyer before proceeding.
itemClick(int index) {
  setState(() {
    selectedIdx = index;
    tabController!.index = selectedIdx; // this will fix the issue
  });
}

Don't just update the selectedIdx state; also set the index on the TabController:

tabController!.index = selectedIdx;
#include <stdio.h>

int main(void)
{
    int i;
    int j;
    for (i = 1; i < 5; i++) {
        for (j = 1; j < 5; j++) {
            if (i == j)
                printf("%d\t", j);
            else
                printf("%d\t", 0);
        }
        printf("\n");
    }
    return 0;
}
This error usually happens when the module you're trying to import is not returning the expected class or object.
Make sure that your `redis-test.ts` is exporting a **valid object or function**, not `undefined`.
Also, if you're using CommonJS modules (`require`) and trying to import them using ES Modules (`import`), there can be a mismatch.
Try changing your `redis-test.ts` file like this:
```ts
import * as redis from 'redis';
const client = redis.createClient();
client.connect();
export default client;
```
OK, but what if I generate data from the location.get_clearsky method?
Using latitude and longitude, I can calculate solar irradiance. If I'm not mistaken, this function doesn't account for cloud cover. How can I implement a reduction in solar irradiance based on this parameter? The value is easily obtained from meteorology websites and ranges from 100 (completely cloudy) to 0 (clear sky).
Dawid
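A minimal sketch of one common approach, assuming pvlib's Location.get_clearsky and an empirical Kasten-Czeplak style attenuation; the coordinates and the cloud_cover value (the 0-100 percentage described above) are made up for illustration:

```python
import pandas as pd
from pvlib.location import Location

# Hypothetical site; the Haurwitz model keeps the example dependency-light
loc = Location(latitude=52.2, longitude=21.0, tz="Europe/Warsaw")
times = pd.date_range("2024-06-01", periods=24, freq="h", tz=loc.tz)
clearsky = loc.get_clearsky(times, model="haurwitz")  # DataFrame with a 'ghi' column

cloud_cover = 75  # percent: 0 = clear sky, 100 = completely cloudy

# Empirical scaling of global irradiance by cloud cover (after Kasten & Czeplak, 1980)
ghi_cloudy = clearsky["ghi"] * (1 - 0.75 * (cloud_cover / 100) ** 3.4)
print(ghi_cloudy.head())
```

Note that the cloud_cover / 100 term assumes exactly the 0-100 scale described above.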
Adding to SpaceTrucker's answer: dependency:collect also has the parameters <excludeArtifactIds>, <excludeGroupIds>, and more.
You can set directory recurse = true for an Application/ApplicationSet.
Refer to https://argo-cd.readthedocs.io/en/stable/user-guide/directory/#enabling-recursive-resource-detection
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    directory:
      recurse: true
Use the collect function instead of the show function, as show can also create multiple jobs. Try running the same thing with the collect method; you will see only one job then.
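A quick way to see the difference; a minimal PySpark sketch (the exact number of jobs show() schedules depends on how many partitions it has to scan):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.range(1_000_000)

df.show(5)           # may schedule more than one job while fetching a few rows
rows = df.collect()  # schedules a single job returning every row to the driver
print(len(rows))
```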
I'd say into a git repo hosted on your private network, separated from the main project, which can then be open-sourced.
I had the same question and ended up creating a support ticket with AWS.
This was their response:
---------------
When creating materialized views from Zero-ETL tables across databases, users need both:
SELECT permission on the materialized view in the target database
SELECT permission on the source Zero-ETL table
This differs from regular cross-database scenarios because Zero-ETL maintains a live connection to the source RDS MySQL database. The additional permission requirement ensures proper security controls are maintained across the integration.
---------------
This means that the documentation you are looking at for permissions is not valid for a Zero-ETL source.
I know I'm almost 3 years late to answer, but I am working on an assignment and one of the tasks is to read/write JSON data to a .json file.
Anywho: I tried writing [] into the JSON file, which was previously empty. Since the JSON data is stored in array format (forgive my wording, as it might be wrong), adding the brackets made it work, because the file is then not empty; it just holds an empty JSON array with no data in it yet.
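As an illustration of that trick, here is a minimal Python sketch (data.json is a hypothetical file name) that seeds an empty file with [] before reading and appending:

```python
import json
import os

path = "data.json"  # hypothetical file name

# Seed the file with an empty JSON array so the parser never sees empty input
if not os.path.exists(path) or os.path.getsize(path) == 0:
    with open(path, "w") as f:
        json.dump([], f)

with open(path) as f:
    records = json.load(f)  # succeeds: "[]" is valid JSON

records.append({"id": 1, "name": "example"})
with open(path, "w") as f:
    json.dump(records, f, indent=2)
```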
You might take a look at "ReportViewer Core":
This project is a port of Microsoft Reporting Services (Report Viewer) to .NET 6+. It is feature-complete and ready for production use, but keep in mind it is not officially supported by Microsoft.
def double_even(numbers):
    return [num * 2 for num in numbers if num % 2 == 0]
nums = [1, 2, 3, 4, 5]
result = double_even(nums)
print("Result:", result)
Yes, the behavior difference makes sense in JAX. When var was a static class member (in the pytree), JAX treats it as a compile-time constant: it gets baked into any JIT-compiled code and isn't part of the differentiable computation graph.
But when you move it to a function argument, JAX sees it as a dynamic value that can change between calls. It becomes part of the computation graph, gradients can flow through it, and it is subject to transformations (like vmap).
Use class members for true constants that never change, and use function arguments for values you might differentiate through or batch over.
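A minimal sketch of that difference (my own illustration, not the original poster's code):

```python
import jax

VAR = 2.0  # plays the role of the static class member

@jax.jit
def f_static(x):
    # VAR is closed over as a Python constant: it gets baked into the
    # compiled code and is invisible to grad/vmap.
    return VAR * x

@jax.jit
def f_dynamic(x, var):
    # var is a traced argument: part of the computation graph.
    return var * x

print(f_static(3.0))                             # 6.0
print(f_dynamic(3.0, 2.0))                       # 6.0
print(jax.grad(f_dynamic, argnums=1)(3.0, 2.0))  # d(var * x)/dvar = x = 3.0
```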
When using CameraRoll on Android, especially with photos taken directly by the camera, it often fails to return accurate width and height values. This caused issues in my image editor — the crop functionality would break because of these incorrect dimensions.
So, I built a small native module to solve this problem. If you’re facing the same issue and ended up here looking for a fix, feel free to try this package:
add_filter('body_class', function( $classes ) {
    $classes[] = 'custom-body-class';
    return $classes;
});

$context = Timber::context();
$context['body_class'] = implode(' ', get_body_class());
Timber::render('index.twig', $context);
I had the same issue. A macro that I created for sorting, formatting, etc., and had been using for quite some time, suddenly was not running; the Run button was greyed out. In Excel Options > Trust Center > Macro Settings, I just had to enable "Trust access to the VBA project object model". It is working now.
Thanks a lot for your suggestions!
I ran a manual test by replacing the Markdown content in my script with this string:
du texte avec des accents : é, è, à, ç, ê, î.
✅ The result was displayed correctly in Notion — all accented characters showed up as expected.
This confirms the issue is not related to UTF-8 encoding, the Notion API, or the way blocks are built in my script. The problem might come from a specific file or a font rendering issue in some cases.
I’ll dig deeper into the original resume.md that caused the issue and report back if I find something unusual.
Thanks again for your help!
I tested the .md:
wilonweb@MSI MINGW64 ~/Documents/VisualStudioCode/YT-GPT-Notion/YoutubeTranscription/mes-transcriptions/ya_juste_6_concepts_pour_tout_comprendre_au_devops (master)
$ file -i resume.md
resume.md: text/plain; charset=utf-8
And this is my script that builds the markdown:
const fs = require('fs');
const path = require('path');
const axios = require('axios');
const { Command } = require('commander');
const chalk = require('chalk');
const cliProgress = require('cli-progress');

// 🌱 Load the .env from the project root
require('dotenv').config({ path: path.join(__dirname, '..', '.env') });

const program = new Command();
program
  .option('-m, --model <model>', 'Modèle OpenAI', 'gpt-3.5-turbo')
  .option('-t, --temp <temperature>', 'Température', parseFloat, 0.5)
  .option('--delay <ms>', 'Délai entre appels API (ms)', parseInt, 2000)
  .option('-i, --input <path>', 'Chemin du dossier input', './input');
program.parse(process.argv);
const options = program.opts();

const inputFolder = path.resolve(options.input); // ⬅️ resolve to an absolute path
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  console.error(chalk.red('❌ Clé API manquante dans .env'));
  process.exit(1);
}

console.log(chalk.blue(`📥 Traitement du dossier : ${inputFolder}`));

const wait = ms => new Promise(res => setTimeout(res, ms));
// 🔎 List the valid chapter files
function getChapterFiles() {
  const files = fs.readdirSync(inputFolder)
    .filter(name =>
      /^\d{2}/.test(name) &&
      name.endsWith('.txt') &&
      !name.toLowerCase().includes('original') &&
      !name.toLowerCase().includes('info')
    )
    .sort();
  if (files.length === 0) {
    console.error(chalk.red('❌ Aucun fichier de chapitre trouvé (ex: 01_intro.txt)'));
    process.exit(1);
  }
  return files.map(filename => ({
    filename,
    filepath: path.join(inputFolder, filename),
    title: filename.replace(/^\d+[-_]?/, '').replace(/\.txt$/, '').replace(/[_\-]/g, ' ').trim()
  }));
}
// 🔗 Read the metadata from info.txt
function readInfoTxt() {
  const infoPath = path.join(inputFolder, "info.txt");
  if (!fs.existsSync(infoPath)) {
    console.warn(chalk.yellow('⚠️ Aucun info.txt trouvé dans le dossier de transcription.'));
    return {};
  }
  const content = fs.readFileSync(infoPath, "utf8");
  const getLineValue = (label) => {
    const regex = new RegExp(`^${label} ?: (.+)$`, "m");
    const match = content.match(regex);
    return match ? match[1].trim() : null;
  };
  return {
    videoUrl: getLineValue("🎬 URL de la vidéo"),
    channelName: getLineValue("📺 Chaîne"),
    channelLink: getLineValue("🔗 Lien"),
    description: content.split("## Description")[1]?.trim() || "",
    raw: content
  };
}
// 🔧 Derive the folder name from the original_*.txt file
function getSlugFromOriginalFile() {
  const file = fs.readdirSync(inputFolder).find(f => f.startsWith("original_") && f.endsWith(".txt"));
  if (!file) return "no-title-found";
  return file.replace(/^original_/, "").replace(/\.txt$/, "");
}
// 🧠 Summarize a text with OpenAI
async function summarize(text, promptTitle) {
  const prompt = `${promptTitle}\n\n${text}`;
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const res = await axios.post('https://api.openai.com/v1/chat/completions', {
        model: options.model,
        messages: [{ role: 'user', content: prompt }],
        temperature: options.temp
      }, {
        headers: {
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        }
      });
      return res.data.choices[0].message.content.trim();
    } catch (err) {
      if (err.response?.status === 429) {
        console.warn(chalk.yellow(`⚠️ ${attempt} - Limite atteinte, pause...`));
        await wait(3000);
      } else {
        console.error(chalk.red(`❌ Erreur : ${err.message}`));
        return '❌ Erreur de résumé';
      }
    }
  }
  return '❌ Résumé impossible après 3 tentatives.';
}
// MAIN
(async () => {
  const chapters = getChapterFiles();
  const info = readInfoTxt();
  const slug = getSlugFromOriginalFile();
  const title = slug.replace(/[_\-]/g, ' ').trim();
  //const outputDir = path.join('output', slug);
  const outputDir = path.join(__dirname, '..', 'YoutubeTranscription', 'mes-transcriptions', slug); // ✅ the right folder
  const outputFile = path.join(outputDir, 'resume.md');
  fs.mkdirSync(outputDir, { recursive: true });

  console.log(chalk.yellow(`📚 ${chapters.length} chapitres détectés`));
  const bar = new cliProgress.SingleBar({}, cliProgress.Presets.shades_classic);
  bar.start(chapters.length, 0);

  const chapterSummaries = [];
  for (const chapter of chapters) {
    const text = fs.readFileSync(chapter.filepath, 'utf8').trim();
    const summary = await summarize(text, `Tu es un professeur francophone. Résume en **langue française uniquement**, avec un ton structuré et pédagogique, le chapitre suivant intitulé : "${chapter.title}".`);
    chapterSummaries.push({ ...chapter, summary });
    bar.increment();
    await wait(options.delay);
  }
  bar.stop();

  console.log(chalk.blue('🧠 Génération du résumé global...'));
  const fullText = chapterSummaries.map(c => c.summary).join('\n\n');
  const globalSummary = await summarize(fullText, "Tu es un professeur francophone. Fusionne exclusivement en **langue française**, de façon concise, structurée et pédagogique, les résumés suivants :");

  // 📝 Build the markdown summary file
  const header = `# ${title}\n\n` +
    (info.videoUrl ? `🎬 [Vidéo YouTube](${info.videoUrl})\n` : '') +
    (info.channelName && info.channelLink ? `📺 ${info.channelName} – [Chaîne](${info.channelLink})\n` : '') +
    (info.description ? `\n## Description\n\n${info.description}\n` : '') +
    `\n## Résumé global\n\n${globalSummary}\n\n` +
    `## Table des matières\n` +
    chapterSummaries.map((c, i) => `### Chapitre ${i + 1}: ${c.title}`).join('\n') +
    '\n\n' +
    chapterSummaries.map((c, i) => `### Chapitre ${i + 1}: ${c.title}\n${c.summary}\n`).join('\n');

  fs.writeFileSync(outputFile, header, 'utf8');
  console.log(chalk.green(`✅ Résumé structuré enregistré dans : ${outputFile}`));

  // 🗂️ Copy info.txt to the output folder
  const infoSourcePath = path.join(inputFolder, 'info.txt');
  const infoDestPath = path.join(outputDir, 'info.txt');
  console.log(`📦 Copie de info.txt depuis : ${infoSourcePath}`);
  if (fs.existsSync(infoSourcePath)) {
    fs.copyFileSync(infoSourcePath, infoDestPath);
    console.log(chalk.green(`📄 info.txt copié dans : ${infoDestPath}`));
  } else {
    console.warn(chalk.yellow('⚠️ Aucun fichier info.txt trouvé à copier.'));
  }
})();
cat printf.sh
#! /bin/bash
echo "$BASH_VERSION"
printf '\u002f'
./printf.sh
You can install PyGObject-stubs to help your tools: https://pypi.org/project/PyGObject-stubs/
I know this is a bit of an old topic, but the challenge still persists. I wanted something simple to search my (minimal) 10 MB (stringified) array of objects with unique IDs. The standard loop-through-it approach takes minutes, which was just horrible. I've built something that is lightning fast and decided to share it; take a look:
https://github.com/GeoArchive/json-javascript-simple-fast-search
You should update Metalama to a version that supports 9.0.3xx SDK (or more specifically Roslyn 4.14). In this case, the first version that supports it is 2025.1.7.
Just to clarify: is your problem that the links themselves don't work, or that the ref to /entry/:id isn't working?
What works best for me is to launch the URL; that allows the user to download the photo without having to set up any permissions or extra packages.
I followed these instructions, but Facebook seems to ignore them.
arr = [1, 2, 2, 3, 3, 3, 4]
freq_dict = {}
for num in arr:
    if num in freq_dict:
        freq_dict[num] += 1
    else:
        freq_dict[num] = 1
max_freq = max(freq_dict.values())
print(max_freq)
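For reference, the standard library's collections.Counter does the same bookkeeping; this is equivalent, just shorter:

```python
from collections import Counter

arr = [1, 2, 2, 3, 3, 3, 4]
max_freq = max(Counter(arr).values())
print(max_freq)  # 3
```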
Let's agree that Bulma has some of the worst docs ever, along with an ugly webpage and a useless search function. I stumbled across hundreds of posts asking about the typography, but Bulma's own site search doesn't even surface any information on font families.
I consider Bulma a dead project because of the team behind it.
<?php
require_once __DIR__ . "/vendor/autoload.php";
use PhpOffice\PhpSpreadsheet\Spreadsheet;
use PhpOffice\PhpSpreadsheet\IOFactory;
use PhpOffice\PhpSpreadsheet\Style\Alignment; // Import the Alignment class
$spreadsheet = new Spreadsheet();
$sheet = $spreadsheet->getActiveSheet();
$sheet->getColumnDimension('A')->setWidth(12);
$sheet->getColumnDimension('B')->setWidth(12);
$sheet->getColumnDimension('C')->setWidth(12);
// Set value and alignment for numeric cell
$sheet->setCellValue("A1", 1234567);
$sheet->getStyle('A1')->getAlignment()->setHorizontal(Alignment::HORIZONTAL_LEFT); // Force left alignment
// Set value for string cells (default is usually left-aligned)
$sheet->setCellValue("B1", 'ABCDEFG');
$sheet->setCellValue("C1", 'QWERTY');
header('Content-Type: application/pdf');
header('Content-Disposition: attachment;filename="test.pdf"');
header('Cache-Control: max-age=0');
$objwriter = IOFactory::createWriter($spreadsheet, 'Mpdf');
$objwriter->save("php://output");
It is working now thanks to @sidhartthhhhh's idea. See the corrected code below:
import numpy as np
from scipy.signal import correlate2d

def fast_corr2d_pearson(a, b):
    """
    Fast 2D Pearson cross-correlation of two arrays using convolution.
    Output shape is (2*rows - 1, 2*cols - 1) like correlate2d with mode='full'.
    """
    assert a.shape == b.shape
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    rows, cols = a.shape
    ones = np.ones_like(a)  # used to count the number of overlapping elements at each lag
    n = correlate2d(ones, ones)    # number of overlapping bins for each offset
    sum_a = correlate2d(a, ones)   # sum of a in the overlapping region
    sum_b = correlate2d(ones, b)   # sum of b in the overlapping region
    sum_ab = correlate2d(a, b)
    sum_a2 = correlate2d(a**2, ones)
    sum_b2 = correlate2d(ones, b**2)
    numerator = sum_ab - sum_a * sum_b / n
    s_a = sum_a2 - sum_a**2 / n
    s_b = sum_b2 - sum_b**2 / n
    denominator = np.sqrt(s_a * s_b)
    with np.errstate(invalid='ignore', divide='ignore'):
        corr = numerator / denominator
    corr[np.isnan(corr)] = 0
    return corr
Each step can be verified to be correct:
lag_row, lag_col = 3, 7  # any test case
row_idx, col_idx = rows - 1 + lag_row, cols - 1 + lag_col  # 2D index into the resulting corr matrix
a_lagged = a[lag_row:, lag_col:]  # only works for lag_row > 0, lag_col > 0
sum_a[row_idx, col_idx] == np.sum(a_lagged)
Slow code for comparison:
from scipy.stats import pearsonr
from tqdm import tqdm
import numpy as np

rows, cols = data.shape  # data: the 2D input array
row_lags = range(-rows + 1, rows)
col_lags = range(-cols + 1, cols)
autocorr = np.empty((len(row_lags), len(col_lags)))
for lag_row in tqdm(row_lags):
    for lag_col in col_lags:
        # Create a lagged version of the data
        # todo: implement logic for lag=0
        if lag_row >= 0 and lag_col >= 0:
            data_0 = data[lag_row:, lag_col:]
            data_1 = data[:-lag_row, :-lag_col]
        elif lag_row < 0 and lag_col < 0:
            data_0 = data[:lag_row, :lag_col]
            data_1 = data[-lag_row:, -lag_col:]
        elif lag_row >= 0:
            data_0 = data[lag_row:, :lag_col]
            data_1 = data[:-lag_row, -lag_col:]
        else:
            data_0 = data[:lag_row, lag_col:]
            data_1 = data[-lag_row:, :-lag_col]
        try:
            corr = pearsonr(data_0.flatten(), data_1.flatten()).statistic
        except Exception:
            corr = np.nan
        autocorr[lag_row + rows - 1, lag_col + cols - 1] = corr
This example is for autocorrelation, but cross-correlation works the same way.
Related discussion https://stackoverflow.com/a/51168178
VB.NET has a Strings class with a Right method that will return the last 1 (or more) characters.
Dim myString as String = "123456"
Dim sLast as String = Strings.Right(myString, 1)
https://learn.microsoft.com/en-us/dotnet/api/microsoft.visualbasic.strings?view=netframework-4.8.1
There are multiple reasons why that could happen, such as a webhook failing. However, if the more trivial reasons don't apply, here is something I have observed that might also affect you:
I had a similar situation where I observed the following: my invoice was supposed to be drafted on July 24, 7:22AM. However, I decided to advance with test clock up to July 25, 9:42AM so I would have expected that the invoice was already finalized.
Instead I saw the invoice as draft and Stripe was saying: "Subscription invoice will be finalized and charged 7/25/25, 10:42 AM", so 1 hour after my test clock.
I then tried again with a different subscription by advancing to the exact moment in which the invoice would be drafted (rather than later on) and, in that case, I still saw the message saying that it would be finalized 1 hour after.
In both cases, the invoice was actually finalized by advancing with test clock by an additional hour.
So I think it might be a kind of bug on Stripe side for which the invoice is not finalized at the correct moment but it instead depends on the way you advance with test clock. Can you check if it's the same for you?
This approach is correct and follows the basic pattern. The only thing worth adding is that if machineState.State is invalid (for example, missing from states), it can lead to a panic.
Something like this? (If I understood your question!)
summary_data <- data %>%
  group_by(x1) %>%
  summarize(
    mean_y = mean(y),
    sd_y = sd(y)
  )

ggplot(summary_data, aes(x = x1, y = mean_y)) +
  geom_bar(stat = "identity", fill = "#336699") +
  geom_label(aes(label = round(mean_y, digits = 2)), position = position_nudge(x = -0.2, y = -0.5)) +
  geom_errorbar(aes(ymin = mean_y - sd_y, ymax = mean_y + sd_y),
                width = 0.2,
                size = 0.8,
                color = "darkred") +
  xlab("x1") +
  ylab("y") +
  theme_classic(base_size = 12) +
  coord_cartesian(ylim = c(0, max(summary_data$mean_y + summary_data$sd_y) * 1.1))
As of 16th June 2025, v1.2.0 of python-docx has support for comments! It's very long overdue!

from docx import Document

document = Document()
paragraph = document.add_paragraph("Hello, world!")
comment = document.add_comment(
    runs=paragraph.runs,
    text="I have this to say about that",
    author="Steve Canny",
    initials="SC",
)
https://python-docx.readthedocs.io/en/latest/user/comments.html
There's a readline() in C; just install it using your package manager (see the readline() docs for more). Otherwise you could use a combination of read() and realloc() until you reach EOF.
The answer that zteffi gave is almost right. This is the documented function signature from https://docs.opencv.org/4.x/da/d54/group__imgproc__transform.html:
cv.warpPerspective(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) -> dst
Notice that "flags" is only included if the "dst" argument is included? You need to include a dst argument for the cv2.WARP_INVERSE_MAP flag to land in the right position in the call.
Now, I don't see anything particularly interesting in that dst value after a call (it does not appear to hold the requested result; the return value of the function does). You can simply pass None for dst:
transformed_points = cv2.warpPerspective(p_array, matrix, (2,1), None, cv2.WARP_INVERSE_MAP)
EF Core scaffolding does not delete old files. It only creates/updates based on what's in the database.
If you delete a table, you need to manually delete the corresponding entity class file in your project folder.
Thank you for the suggestion. This is a new project that already uses null safety. My pubspec.yaml has the environment sdk: '>=3.22.0 <4.0.0'. The errors I am seeing seem to be at a deeper level, where the compiler cannot find core Flutter types like Offset or Color, even after a clean reinstall of all tools.
Add this CSS rule:

body:has(.select2.select2-container--open) {
    overflow-x: hidden !important;
}
To stop VS Code from auto-building all Java projects in a monorepo, disable Java auto-build, Gradle, and Maven auto-import in your workspace settings. Turn off automatic build configuration updates to prevent background project scanning. Use a .javaProjects file to limit Java language support to specific folders. Temporarily disable the Java extension when not needed to avoid unnecessary resource usage. Also, block extension suggestions for Java in .vscode/extensions.json to keep your workspace clean. These steps help you control Java behavior in large codebases and work efficiently without triggering builds for unrelated projects.
The simplest fix, to me, is to just turn off Apache on WSL2:
sudo service apache2 stop
or disable it entirely:
sudo systemctl disable apache2
What a clever use of the comma operator and operator overloading! Let’s start with these neat little classes.
First comes the log class. It overloads the comma operator, accepts arguments of a specific type, turns them into log messages, and flushes them in its destructor.
So, on line 82:
constexpr log_unified $log{};
you’ll see a global object named $log; judging from the name, it’s meant to mimic some command-line logging syntax.
Next are two wrapper classes. return_wrapper stores the incoming argument via move semantics in its constructor. Inside the overloaded comma operator it concatenates the new argument with the stored one, constructs a fresh return_wrapper, and returns it.
After that tiny tour we turn to testVoid4() and its single line:
return $log, "testVoid4";
Here’s the fun part: testVoid4() is declared void, yet it sports a return (something).

void testVoid4() {
    return $log, "testVoid4";
}
This isn’t standard: the function’s fate is handed to whatever (something) evaluates to. If (something) were just a harmless expression, we’d be safe; but if it’s an object, we’ve effectively written return obj; and the compiler complains.
So what exactly is (something)? Look closer: ($log, "testVoid4"). Evaluated left to right, the first sub-expression is the global variable $log. Therefore the comma becomes $log’s overloaded operator,, i.e. $log.operator,("testVoid4"), which expands to

return void_wrapper<std::decay_t<T>>(std::forward<T>(arg));

or concretely

void_wrapper<const char*>(std::forward<const char*>("testVoid4"));

A temporary object! Alas, our little testVoid4() ends up trying to return a tiny void_wrapper, so the build fails.
I want to build an eng or userdebug kernel for a Pixel 6a. I read https://source.android.com/docs/setup/build/building-pixel-kernels#supported-kernel-branches, but it does not mention how to change the variant. I do not want to build the whole AOSP if possible; I just need the boot.img and the DLKM stuff.
Since Android 15, AOSP has pivoted to using Bazel as the kernel build system. I can build the regular kernel just fine.
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
exec tools/bazel run \
--config=stamp \
--config=bluejay \
//private/devices/google/bluejay:gs101_bluejay_dist
Maybe it's worth considering using another DBMS, like PostgreSQL.
The solution from vscode-dotnet-runtime issue 2325 worked for me: manually adding "...\your path to aspnetcore\\dotnet.exe" into settings.json and restarting VS Code made C#/Unity IntelliSense work fine without internet.
None of the answers worked. I configured and wrote my own transformer to make this work.
(0) Very important notes
Always clear the Jest cache, or work without a cache, when making changes in jest.config.js or the transformer. Otherwise your changes will have no effect.
To clear the cache: add the option --clearCache to your test run.
To run without a cache: add the option --no-cache to your test run.
(1) Create a test case to verify the output and see what's going on:
file: /src/mytest.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import MySvg from '@src/mysvg.svg';
test('asdf', () => {
  render(<MySvg/>);
  screen.getByText("aaa"); // this WILL FAIL, so you'll see the output
});
(2) Configure jest
file: /jest.config.js
...
const exportConfig = {
  transform: {
    ...
    // ↓ configure: "for svg files, use `svgTransform.js`"
    '^.+\\.(svg)$': '<rootDir>/.jest/svgTransform.js',
  },
  ...
}
(3) Create the transformer
file: /.jest/svgTransform.js
const path = require('path');

module.exports = {
  process(src, filename) {
    const absPath = path.resolve(filename);
    const code = `
      const React = require('react');
      module.exports = {
        __esModule: true,
        default: () => React.createElement('img', {
          src: "${absPath}"
        }),
      };
    `;
    return { code };
  },
};
(4) Lessons learned
Nothing works as expected. No docs, no AI. Every change is an adventure.
Most AI and internet answers say module.exports = ${filename} or the like. The problem is that the generated code will then be </assets/your_file.svg/>, which is not a valid tag and therefore causes an error.
document.createElement instead of React.createElement does not work.
default: () => <img ... /> does not work either.
Same issue, same setup. It was working well until ~16 hours ago.
YouTube streams can expire; you need to handle that if you aren't downloading the file directly. You can try the reconnect or retry options in youtube_dl or ffmpeg, or you can create a function/method that will handle it. Some examples from old answers, which may be outdated: stackoverflow.com/questions/66610012/…
Yes, this works.
I just ran into this. I'm using VS Code and GoogleTest. The initial error message gave no context as to where the error was happening. But checking the "All Exceptions" checkbox in the Breakpoints section of the Run and Debug tab caused VS Code to correctly break at the place the error was happening.
In GridDB, TimeSeries containers do not support updating individual columns directly. To update a specific field, you must read the entire row for the given timestamp, modify the desired column, and write the full row back using put(). Partial updates are not allowed. This ensures the entire row is replaced atomically. If needed, you can use a transaction to make the read-modify-write operation safe from concurrency issues.
First of all, maybe try using all the data, not just the last row of every 10 samples; this way you will increase your dataset size.
Secondly, in training you used the 4th row of every 10 samples, while in testing you used the 10th. You should treat both datasets the same way instead of selecting rows differently.
Instead of using git merge, use git rebase to reapply your commits from the diverged branch (feature) onto the updated main branch.
Switch to the diverging branch and follow the commands below:
git checkout feature                              # switch to the diverging branch
git merge-base main feature                       # find the common ancestor
git rebase --onto main <common-ancestor> feature  # rebase onto the updated main
git rebase --continue                             # continue after resolving conflicts
If the branch isn't public/shared, then try force-aligning the history using:
git rebase -i main
Note: this approach rewrites history, so only do this on branches that are not shared, i.e., feature branches, not protected ones.
After rebasing, if you had already pushed the feature branch before, you'll need to force-push:
git push --force-with-lease
Finally, use the commands below as a clean and effective way to resolve diverging branches after git lfs migrate:
git checkout feature
git rebase main
Use le_uint() instead of uint().
{
  "name": "System Name",
  "short_name": "System",
  "icons": [
    { ... }
  ],
  "start_url": "/bwc/",
  "display": "standalone",
  "theme_color": "#0066FF",
  "background_color": "#FFFFFF",
  "orientation": "portrait"
}
Try downgrading your Flutter version, and also pick an algolia version that suits it; don't always jump to the latest Flutter, because some packages need time to catch up with the latest release.
To improve on @steven-matison's answer: you need to lower the Node version. The simplest way:
# make sure node.js & nvm are installed
nvm install 10   # install Node.js 10
node --version   # note the version, e.g. v10.24.1
npm --version    # note the version, e.g. 6.14.1
Now use these versions in ambari-admin/pom.xml.
Although you put revalidatePath inside a server action, it won't work if that particular action is called from a client component. The method call should also be made in a server component, which should then pass the data down as props to any client components.
Quagga2 is the newer version of QuaggaJS. It's decent when you're looking for basic scanning needs or for test projects. Found this tutorial that basically describes the integration step by step: https://scanbot.io/techblog/quagga-js-tutorial/
Go to the .m2 folder, rename the repository folder to a backup name, and open Eclipse again.
After running git lfs migrate, you've essentially created a new, parallel universe for your branch's history. The old commits and the new ones are no longer related, which is why Git sees your branches as "diverged."
Here's how to fix this in a simple, human-friendly way. The goal is to safely move your new features from the "migrated" history into the main "develop" history.
The Clean & Easy Fix (Recommended)
This is the safest method. Think of it as surgically grafting your new features onto the main branch.
* Get on the right track: First, reset your develop branch to perfectly match the remote origin/develop branch. This gets you on a clean, stable foundation.
git checkout develop
git fetch origin
git reset --hard origin/develop
* Copy your features: Find the new commits on your migrated branch that contain the features you want to keep. Then, use git cherry-pick to copy those commits, one by one, onto your clean develop branch.
# View the commits on your migrated branch (let's call it 'A')
git log --oneline A
# Then, copy the commits you need
git cherry-pick <commit-hash-1>
git cherry-pick <commit-hash-2>
* Push the fix: Your develop branch now has the correct history and your new features. It's ready to be pushed to the remote.
git push origin develop
The Quick & Dirty Fix (Alternative)
This method can be faster if you have many commits but may lead to a messier history with merge conflicts.
* Save your work: Create a temporary branch to save your current state, just in case.
git branch temp-A
* Get on the right track: Just like before, reset your develop branch to match the remote's history.
git checkout develop
git fetch origin
git reset --hard origin/develop
* Force the merge: Merge your temporary branch into develop using a special flag that tells Git, "I know they're not related; just merge them anyway!"
git merge temp-A --allow-unrelated-histories
* Clean up and push: You'll likely encounter merge conflicts. Fix them, commit the merge, and then push your develop branch to the remote.
git add .
git commit -m "Merged the LFS changes"
git push origin develop
I faced a similar issue recently after updating Visual Studio 2022 (17.14.9) — specifically the “WinRT information: No COM Servers are registered for this app” error when trying to deploy my .NET MAUI (WinUI) app.
After trying the usual steps (clean/rebuild, reinstall SDKs, repairing/downgrading Visual Studio), what finally resolved it for me was re-registering the app’s COM components manually. Here’s what worked:
Ensure Windows App SDK is installed
Go to Apps & Features and check if “Microsoft Windows App Runtime” is present.
If not, download and install it from here:
https://learn.microsoft.com/en-us/windows/apps/windows-app-sdk/downloads
Re-register the AppX manifest
Clear old deployment/cache
Delete any existing folders under:
%LOCALAPPDATA%\Packages\<your_app>
%LOCALAPPDATA%\Microsoft\WindowsApps\<your_app>
Rebuild and deploy the app again from Visual Studio.
Let me know if you're still facing the problem after following these suggestions.
Thanks.
plot_implicit() allows for compound logic expressions, so you can plot something like:
plot_implicit(And(J, K), ...)
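For instance, a self-contained sketch (the two conditions here are made up for illustration):

```python
from sympy import symbols, And, plot_implicit

x, y = symbols("x y")
# Plot the region where BOTH conditions hold: inside a circle AND right of the y-axis
plot_implicit(And(x**2 + y**2 < 4, x > 0), (x, -3, 3), (y, -3, 3))
```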
Memory leaks in asyncio applications can be subtle, especially when tasks or callbacks are retained unintentionally. Here’s how you can detect, debug, and prevent memory leaks when using asyncio.
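As a starting point, here is a minimal sketch of one classic leak (fire-and-forget tasks accumulating in a set that is never pruned), together with the usual fix and two inspection tools, asyncio.all_tasks() and tracemalloc:

```python
import asyncio
import tracemalloc

background_tasks = set()  # strong references keep finished tasks alive if never pruned

async def worker():
    await asyncio.sleep(0.01)

def spawn():
    task = asyncio.create_task(worker())
    background_tasks.add(task)
    # The fix: drop the reference as soon as the task completes
    task.add_done_callback(background_tasks.discard)

async def main():
    tracemalloc.start()
    for _ in range(1000):
        spawn()
    await asyncio.sleep(0.1)
    # Debugging aids: count live tasks and measure traced allocations
    pending = [t for t in asyncio.all_tasks() if not t.done()]
    print("pending tasks:", len(pending))
    current, peak = tracemalloc.get_traced_memory()
    print(f"traced memory: current={current} B, peak={peak} B")

asyncio.run(main())
```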
Generally one should standardize data before applying PCA, because PCA assumes the data to be an N-dimensional Gaussian cloud of points with the same variance for all observed features. PCA then finds new hidden orthogonal directions ("factors") along which the variance is maximized.
Without standardizing, some feature can dominate just because of its scale. It's not so obvious for Iris dataset, because all 4 features are measured in cm and have similar scale.
Please see https://scikit-learn.org/stable/modules/decomposition.html#factor-analysis for some theory behind PCA and Factor Analysis, especially assumptions on noise variance.
Also, https://scikit-learn.org/stable/auto_examples/decomposition/plot_varimax_fa.html seems to be exactly what you are looking for. Note that all decomposition in this example is performed on standardized data.
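A minimal sketch of the standardize-then-decompose pipeline on Iris with scikit-learn (swap PCA for FactorAnalysis to match the linked varimax example):

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = load_iris().data

# Zero mean, unit variance per feature, so no feature dominates by scale alone
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2).fit(X_std)
print(pca.explained_variance_ratio_)  # variance share along each new direction
```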
People using the uv package manager should install it using uv pip rather than uv add. Not sure why this happens.
uv pip install ipykernel
I cannot comment as my rep is not high enough 😒 But I had a similar issue with Intelephense and resolved it HERE.
Hopefully it helps
Hi, I'm experiencing a **similar SMTP error: `535 5.7.8 Error: authentication failed`** when trying to send emails through **Titan Mail** (Hostinger), but I'm using **Python** instead of PHP.
I'm sure that the username and password are correct — I even reset them to double-check. I've tried using both ports `587` (TLS) and `465` (SSL), but I always get the same authentication error.
Below is my implementation in Python:
```python
from abc import ABC, abstractmethod
import os
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

class TitanEmailInput:
    def __init__(self, to: list[str], cc: list[str] = None, bcc: list[str] = None, subject: str = "", body: str = ""):
        self.to = to
        assert isinstance(self.to, list) and all(isinstance(email, str) for email in self.to), "To must be a list of strings"
        assert len(self.to) > 0, "At least one recipient email is required"
        self.cc = cc if cc is not None else []
        if self.cc:
            assert isinstance(self.cc, list) and all(isinstance(email, str) for email in self.cc), "CC must be a list of strings"
            assert len(self.cc) > 0, "CC must be a list of strings"
        self.bcc = bcc if bcc is not None else []
        if self.bcc:
            assert isinstance(self.bcc, list) and all(isinstance(email, str) for email in self.bcc), "BCC must be a list of strings"
            assert len(self.bcc) > 0, "BCC must be a list of strings"
        self.subject = subject
        assert isinstance(self.subject, str), "Subject must be a string"
        assert len(self.subject) > 0, "Subject cannot be empty"
        self.body = body
        assert isinstance(self.body, str), "Body must be a string"

class ITitanEmailSender(ABC):
    @abstractmethod
    def send_email(self, email_input: TitanEmailInput) -> None:
        pass

class TitanEmailSender(ITitanEmailSender):
    def __init__(self):
        self.email = os.getenv("TITAN_EMAIL")
        assert self.email, "TITAN_EMAIL environment variable is not set"
        self.password = os.getenv("TITAN_EMAIL_PASSWORD")
        assert self.password, "TITAN_EMAIL_PASSWORD environment variable is not set"

    def send_email(self, email_input: TitanEmailInput) -> None:
        msg = MIMEMultipart()
        msg["From"] = self.email
        msg["To"] = ", ".join(email_input.to)
        if email_input.cc:
            msg["Cc"] = ", ".join(email_input.cc)
        if email_input.bcc:
            bcc_list = email_input.bcc
        else:
            bcc_list = []
        msg["Subject"] = email_input.subject
        msg.attach(MIMEText(email_input.body, "plain"))
        recipients = email_input.to + email_input.cc + bcc_list
        try:
            with smtplib.SMTP_SSL("smtp.titan.email", 465) as server:
                server.login(self.email, self.password)
                server.sendmail(self.email, recipients, msg.as_string())
        except Exception as e:
            raise RuntimeError(f"Failed to send email: {e}")
```
Any ideas on what might be causing this, or if there's something specific to Titan Mail I should be aware of when using SMTP libraries?
Thanks in advance!
import logging
import sys
import io
# Wrap sys.stdout with UTF-8
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger(__name__)
logger.info("🧱 Querying databricks")
Try removing hostfxr.dll if it exists in your folder. Details here: https://github.com/dotnet/runtime/issues/98335
To rewrite /privacy-policy to /privacy-policy.html, you can modify the URL map for your static website as follows:
Go to Load balancing
Select your load balancer, click Edit > Host and Path Rules.
Switch to Advanced mode.
Add rule: Path = /privacy-policy, Backend = your bucket, Path prefix rewrite = /privacy-policy.html.
Click Save, then Update.
I recommend checking how to set up a classic Application Load Balancer with Cloud Storage buckets, as it’s important to have it properly configured before modifying the URL map.
The issue occurs because you're calling focusRequester.requestFocus() before the AnimatedVisibility composable has completed its composition and placed the TextField in the composition tree. This results in a java.lang.IllegalStateException, since the focusRequester hasn't been properly attached yet.
My initial guess would be that your generator runs out of data and you need to manually "refill" it.
While running a custom example (generated by Gemini) on Colab, I got the following warning:
UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches. You may need to use the `.repeat()` function when building your dataset.
I suppose this could be the root cause of your problems.
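A minimal sketch of the fix that warning suggests, with synthetic data; the point is the .repeat() call so the dataset never runs dry mid-epoch:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100, 1)).astype("float32")

# Without .repeat(), this pipeline is exhausted after ceil(100 / 32) = 4 batches,
# which is exactly what triggers the "ran out of data" warning.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32).repeat()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(ds, steps_per_epoch=4, epochs=5)
```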
How do I get the decimal places of a floating point number?
It seems so simple, right? A fractional number like 123.456 consists of three integer digits 123, then a decimal point ., and then three more fractional digits 456. How hard can it be to extract the fractional digits 456, or even simpler, just find out how many of them there are?
And as the answers here show, the answer to "How hard can it be?" is "Pretty hard". There are lots of techniques presented here, but they're all complicated, and some of them involve unnecessary-seeming conversions back to strings. Some of these techniques are error-prone, or don't work properly on all inputs.
And it turns out that the reason the problem is hard is that the premise is flawed. Internally to a computer, the decimal fraction 123.456 is not represented as three integer digits 123, then a decimal point ., and then three fractional digits 456. Nothing like that.
As you probably know, computers use binary, or base 2, for just about everything. They don't use decimal (base 10). So how do computers represent fractions, if not in the obvious way?
Let's look first at a slightly different decimal fraction, 123.625. That's equal to 123⅝, which is going to make it easier to see in base two. 123.625 is represented internally as the binary fraction 1.111011101, times a scaling factor of 2⁶, or 64. Let's check that: 1.111011101 is 1.931640625, and 1.931640625 × 64 = 123.625. Check.
But what about 123.456? Since the fractional part .456 is not representable as a binary fraction made up of halves and quarters and eighths and sixteenths, it's not going to work so well.
In the IEEE 754 double-precision floating point format used by most JavaScript implementations, the number 123.456 is represented by the binary number 1.1110110111010010111100011010100111111011111001110111, again multiplied by 2⁶. But if you do the math, this works out to about 123.45600000000000307. It is not exactly equal to 123.456.
This is a big surprise the first time you encounter it. Binary floating-point representations such as are used by JavaScript (and in fact most computers and programming languages) can not represent decimal fractions like 123.456 exactly.
And since the internal, binary representation of 123.456 does not involve the digits 123 or 456, it is not so easy to extract them in that form, after all.
And, not only is it not so easy to directly extract the fractional digits 456, it's problematic to even ask how many of them there are. As I said, once you've read a number like 123.456 into a JavaScript program, it's represented internally by the binary fraction 1.1110110111010010111100011010100111111011111001110111 × 2⁶, which works out to about 123.45600000000000307. So should we say that this number has 17 digits past the decimal? No, and it's even worse than that: I said that the internal binary representation works out to the equivalent of "about 123.45600000000000307", but it works out to exactly 123.4560000000000030695446184836328029632568359375, which has 46 digits past the decimal.
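You can see that exact value for yourself. Here is a one-liner (Python for brevity; its doubles are the same IEEE 754 format JavaScript uses):

```python
from decimal import Decimal

# Decimal(float) converts the binary double exactly, with no extra rounding
print(Decimal(123.456))
# 123.4560000000000030695446184836328029632568359375
```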
The answers presented here use various tricks, such as rounding or string conversion, to get around this problem. The answers presented here tend to give the answer 3 for the alleged number of places past the decimal for the input number 123.456. But since that input number 123.456 isn't really 123.456, these answers are, to some extent, cheating. And, indeed, if you take a function that can "correctly" give the answer 3 for the precision of the input 123.456, it will also give the answer 3 for input values of 123.45600000000000307 or 123.4560000000000030695446184836328029632568359375. (Try it!)
So how do you get around this problem? How do you find the true precision of the input number 123.456, if by the time you're working with it it's really a number like 123.45600000000000307? There are ways, but before choosing one, you have to ask: why do you need the precision, actually?
If it's an academic exercise, with little or no practical value, the techniques presented in the answers here may be acceptable, as long as you understand the underlying meaninglessness of the exercise.
But if you're trying to perform operations on the user's input in some way, operations which should reflect the user's actual input precision, you may want to initially take the user's input as a string, and count the number of places past the decimal in that representation, before converting the input string to a number and performing the rest of your operations. In that way, not only will you trivially get the right answer for an input of "123.456", but you will also be able to correctly determine that user input of, say, "123.456000" should be interpreted as having six places past the decimal.
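A sketch of that string-first approach (again Python for consistency with the snippet above; the same split works in JavaScript):

```python
def decimal_places(user_input: str) -> int:
    # Count the digits the user actually typed after the decimal point,
    # before the string is ever converted to a binary float.
    if "." not in user_input:
        return 0
    return len(user_input.split(".", 1)[1])

print(decimal_places("123.456"))     # 3
print(decimal_places("123.456000"))  # 6; trailing zeros are preserved
```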
I tried to change the location of the vhdx from C: to D:, with Docker Desktop updated to v4.43.1. I got "Error 500: Unhandled exception: Source and destination directory owners mismatch."
Editing settings.json doesn't work on its own; I changed the d:\my\docker\dir permissions to allow Users full access, and this solved the issue.
You can use XPath:
//button[contains(@id,'sauce-labs-backpack')]
Define the WebElement:
@Getter
@FindBy(xpath="//button[contains(@id,'sauce-labs-backpack')]")
private WebElement sauceLabsBackpackButton;
and you can call it from test:
getSauceLabsBackpackButton().click();
For anyone who stumbles upon this question in 2025: from Symfony 7.3 on, there is now a built-in solution for that:
$this->security->isGrantedForUser($user, 'ROLE_SALES_EXECUTIVE')
or as a twig function:
is_granted_for_user(another_user, 'ROLE_SALES_EXECUTIVE')
In my case, I had a file named “PodFile” (with an uppercase “F”). Renaming it to use a lowercase “f” with:
mv PodFile Podfile
fixed the issue.
docker push chatapp/monorepo:version1.1
(remove the trailing dot from the command)
Same problem here... I'm trying to upgrade from 2211.38 to 2211.42, and my only problem in the process is the upgrade from Solr 9.7 to 9.8.
Like in your case, when I try to index Solr in the backoffice I get the following error:
Unable to create core [master_xxx_Product_default] Caused by: solr.StempelPolishStemFilterFactory
As the documentation says, I have removed the following tags from my solrcustom/server/solr/configsets/default/conf/solrconfig.xml:
<lib dir="${solr.install.dir:../../../..}/modules/analysis-extras/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/modules/hybris/lib" regex=".*\.jar" />
And I have this tag in solrcustom/server/solr/solr.xml:
<str name="modules">${solr.modules:analysis-extras,hybris}</str>
But the problem is not fixed. You mentioned that you had to change /core-customize/_SOLR_/server/solr/configsets/default/conf/solr.xml, but in my case I only have schema.xml and solrconfig.xml in solrcustom/server/solr/configsets/default/conf.
Today I met the same problem. It seems that in BW25 the method to import the biosphere has changed.
You should use
bi.remote.get_projects()
to check the available biosphere and LCIA-method projects, and
bi.remote.install_project('<project_tag>', '<my_desired_project_name>')
to import one.
For more details you can check https://learn.brightway.dev/en/latest/content/chapters/BW25/BW25_introduction.html
I'm using an ESP32 and a PN532; does nfc_emulator understand this setup, and does it support Dart version 3? If not, please suggest a new package that supports Android as well as iOS.
Within the Intelephense plugin there is a setting to exclude files from language server features:
@ext:bmewburn.vscode-intelephense-client exc
Within these exclusions I found the **/vendor/** folder. This plugin and its settings were shared with me by the previous developer, and as such I was unaware of this.
The basic premise is that Intelephense was unaware of all of the Symfony classes, as it was unable to index that location.
While looking to resolve this I have seen multiple people having similar issues with other PHP frameworks that do not have an answer, so I will mention this post anywhere else I see that problem in the hope it will fix those too (CakePHP and Laravel being the ones I have seen).
If I just want to replace all references to IRQHandler globally, I can use defsym, i.e. compile with
gcc main.c lib.c lib2.c -Wl,--defsym=IRQHandler=__wrap_IRQHandler -o prog
and then both IRQHandler and g_IRQHANDLER point to __wrap_IRQHandler.
Confirmed by the Firebase team, it was a bug. My submitted report has been accepted.
For the error:
g++.exe: error: a.cpp: No such file or directory
g++.exe: fatal error: no input files
Compilation terminated.
compile with:
g++ ..\a.cpp -o h
and run with:
.\h
You have to pass the prefill object:
prefill: {
name: 'John Doe',
email: '[email protected]',
contact: '+919876543210', // Phone number
}
Make sure the FLUTTER_ROOT path is the same for Debug and Release. It must be a path on your Mac, not another developer's Mac.
A longer response to @Lewis-munene's comment:
After a group is finished and the next group starts, the first one is stopped (but not all the way; see my code comment on the stop() invocation). If you want the first group to continue listening for new messages, add a 'watchdog' using Spring events.
It's in Kotlin, but works in Java as well:
@Component
class SubstationConsumerWatchdog(val applicationContext: ConfigurableApplicationContext) :
    ApplicationListener<ContainerStoppedEvent> {

    private val logger = KotlinLogging.logger {}

    override fun onApplicationEvent(event: ContainerStoppedEvent) {
        val firstGroup = applicationContext.getBean("g1.group", ContainerGroup::class.java)
        if (firstGroup.allStopped()) {
            logger.debug { "Restarting ContainerGroup 'g1'" }
            firstGroup.stop() // to force the ContainerGroupSequencer.running boolean to false
            firstGroup.start()
        }
    }
}
I think imagesFolder/slime.png is the folder name followed by the file name. If the image is in the same folder, I think specifying just the file name, slime.png, is enough.
My version of Python is 2.7.18.
Can you try using the AutoFit options on this table of yours, with the content control's "cannot delete" property set to true? It doesn't seem to work when I do the same.
Simplest way in my opinion:
import numpy as np

RSS = np.random.randint(0, 99, size=(2, 3))
ij_min = np.unravel_index(RSS.argmin(), RSS.shape)

# check result
RSS
# array([[90, 97, 35],
#        [75, 25, 32]])
ij_min
# (1, 1)
I am also looking into the possibility of adding Copilot to Bitbucket for reviewing PRs.
Are there any free tools that can do that without security issues, or paid tools at a lower price for integrating with an organization's repo?
Which DLL do you use behind the node, and which CANoe version, to use this function:
ILNodeControlStop("BECM");
Use pytubefix (https://pytubefix.readthedocs.io/en/latest/index.html) instead. Pytube is no longer maintained; pytubefix works the same way.
Using the \mathregular line worked. Thanks to import-random for directing me to an answer containing this expression.
Example shown:
import matplotlib.pyplot as plt
import matplotlib as mpl

mpl.rcParams['font.family'] = "serif"
mpl.rcParams['font.serif'] = "cmr10"
mpl.rcParams['axes.formatter.use_mathtext'] = True

fig, ax = plt.subplots()
ax.set_title('Results', fontsize=18)
ax.set_ylabel(r'Heat Flux ($\mathregular{W/cm^{2}}$)', fontsize=18)  # raw string avoids escape warnings
ax.set_xlabel('Wall Superheat (K)', fontsize=18)
This generates the '2' in the correct font.